MnO2-Coated Dual Core–Shell Spindle-Like Nanorods for Improved Capacity Retention of Lithium–Sulfur Batteries
The emerging need for high-performance lithium–sulfur batteries has motivated many researchers to investigate different designs. However, the polysulfide shuttle effect, which results from the dissolution of intermediate polysulfides in the electrolyte, remains unsolved. In this study, we designed a sulfur-filled dual core–shell spindle-like nanorod structure coated with manganese oxide (S@HCNR@MnO2) to achieve a high-performance cathode for lithium–sulfur batteries. The cathode showed an initial discharge capacity of 1661 mA h g−1 with 80% capacity retention over 70 cycles at a 0.2C rate. Furthermore, compared with the uncoated nanorods (S@HCNR), the MnO2-coated material displayed superior rate capability, cycling stability, and Coulombic efficiency. The synergistic effects of the nitrogen-doped hollow carbon host and the MnO2 second shell are responsible for the improved electrochemical performance of this nanostructure.
Introduction
The depletion of fossil fuels and the environmental issues arising from CO2 emissions will force humankind to find alternative, clean means of satisfying its energy needs [1]. Rechargeable batteries are a vital component of the solution, and research in this field has been increasing throughout the world [2,3]. Rechargeable Li-ion batteries have dominated portable electronic devices and hybrid electric vehicles (HEVs) since first being introduced in 1991. However, their high cost and low energy density have inhibited the mass-scale production of electric vehicles (EVs). The main barrier to shifting from HEVs to EVs on the roads is that state-of-the-art technology cannot satisfy the requirements of long-mileage driving because of its energy density limitations [4].
Lithium-sulfur (Li-S) batteries are among the most promising electrochemical energy storage devices of the near future [5]. The low cost and natural abundance of sulfur, as well as its high theoretical specific capacity (1675 mA h g−1), make it an attractive cathode material [6]. In addition, the Li-S chemistry offers a high theoretical energy density (2600 W h kg−1), and sulfur is environmentally friendly for energy storage applications. However, the commercialization of Li-S batteries is hindered primarily by the insulating nature of sulfur (5 × 10−30 S cm−1), the volume expansion of the active material (sulfur) during discharge, and the polysulfide shuttle (PSS) effect. The PSS leads to loss of active material from the cathode and causes irreversible reactions between the polysulfide intermediates and the lithium metal anode, which results in low Coulombic efficiency and short cycle life [7-9]. Upon discharge, the sulfur is reduced to high-order polysulfide intermediates Li2Sn (3 ≤ n ≤ 8) with an approximately 80% volume expansion that promotes the loss of active material [10,11]. The large volumetric expansion arising from the density difference between sulfur (2.03 g cm−3) and Li2S (1.66 g cm−3) pulverizes the active material and accelerates capacity decay [6,12,13]. Furthermore, volume expansion can lead to mechanical cracking and deposition of the active material outside the electrode, which also results in loss of capacity [11,14].
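For readers who want to verify the theoretical figure, the capacity follows from Faraday's law for the two-electron reduction S + 2Li+ + 2e− → Li2S. The minimal Python check below reproduces it; the commonly quoted 1675 mA h g−1 corresponds to taking M(S) = 32 g mol−1.

```python
# Back-of-the-envelope check of sulfur's theoretical specific capacity,
# assuming the full two-electron reduction S + 2Li+ + 2e- -> Li2S.
F = 96485.0  # Faraday constant, C mol^-1
M_S = 32.0   # molar mass of sulfur, g mol^-1 (32.06 gives ~1672)
n = 2        # electrons transferred per sulfur atom

# 1 mA h = 3.6 C, so dividing by 3.6 converts C g^-1 to mA h g^-1
q = n * F / (3.6 * M_S)
print(f"theoretical capacity = {q:.0f} mA h g^-1")  # ~1675 mA h g^-1
```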
To overcome the insulating nature of sulfur, an effective approach has focused on using carbon-based materials as the sulfur host (e.g., porous carbon [15-18], carbon nanotubes [19-22], graphene and graphene oxides [23-27]), since they address many of the challenges associated with Li-S technology. Carbon-based host materials have garnered the most interest because carbon provides higher conductivity and a physical barrier against the PSS, and accommodates the volume change that occurs during expansion, hence enhancing the utilization of the sulfur [16,28]. However, significant migration of the lithium polysulfides (LiPSs) is observed with carbon-based hosts due to the weak interaction between the polar LiPSs and the nonpolar carbon, resulting in capacity decay upon long-term cycling [29]. Recent literature has shown that polar materials such as sulfides, hydroxides, metal oxides, and polymers can be employed as host materials to trap LiPSs and significantly improve long-term cycling stability by offering more efficient LiPS chemisorption [30-33]. However, these materials generally have low electrical conductivity, which can lead to low Coulombic efficiency [34,35]. Therefore, it is essential to design a host structure that combines polar materials with carbon, which may offer good conductivity while suppressing the migration of LiPSs [36-38]. Manganese oxide has attracted significant attention because of its easy preparation and high efficiency in trapping polysulfides [39-43]. For instance, Nazar and coworkers demonstrated that MnO2 is a remarkable chemical inhibitor of LiPSs, mediating polysulfides through the conversion of thiosulfate to polythionate species [44,45].
In response to the abovementioned issues associated with Li-S batteries, we prepared MnO2-coated dual core-shell spindle-like nanorods, denoted as S@HCNR@MnO2. The conductive nitrogen-doped carbon shell enhances the electrical conductivity of the cathode, while the outer polar MnO2 layer suppresses the LiPSs. The double coating layers help to physically and chemically constrain the LiPSs. Furthermore, the volumetric expansion of sulfur upon lithiation is contained inside the nanorods. This novel design leads to higher capacity and rate retention compared to pristine sulfur cathodes. To our knowledge, no other research group has attempted to prepare such composite nanorods for this purpose.
Preparation of Nitrogen-Doped Hollow Porous Carbon Nanorods (N-HCNRs)
In a typical synthesis process [46], β-FeOOH nanorods were synthesized by dissolving 2.25 g of FeCl3·6H2O and 2.4 g of urea in 50 mL of deionized (DI) water. The solution was refluxed at 90-95 °C for 8 h, then centrifuged and washed with deionized water multiple times to ensure removal of chloride ions from the surface of the product. The product was dried at 60 °C overnight. Then, 0.63 g of β-FeOOH was mixed with 0.42 g of dopamine in Tris-buffer (700 mL, 10 mM; pH 8.5) and stirred at 50 °C for 24 h. The resultant product was collected by centrifugation and washed with DI water and ethanol, followed by drying at 60 °C overnight. Calcination was carried out in a tubular furnace under Ar flow at 400 °C for 2 h with a heating rate of 1 °C min−1, followed by further treatment at 500 °C for 2 h with a heating rate of 5 °C min−1. The obtained powder was labeled as Fe3O4@N-C nanorods. The Fe3O4 core was etched with 2 M HCl aqueous solution and the precipitated layer was separated by centrifugation, followed by washing with DI water until the pH of the solution stabilized at about 7. The final product (N-HCNR) was collected and dried in a vacuum oven at 60 °C overnight.
Preparation of S@HCNR
The sulfur/carbon composite was prepared by the melt-diffusion method. Elemental sulfur was ground with N-HCNR at a weight ratio of 7:3, transferred into a Teflon®-lined autoclave, and sealed under Ar gas. The autoclave was heated at 155 °C for 12 h to obtain S@HCNR.
Preparation of S@HCNR@MnO2
The as-synthesized S@HCNR was dispersed in an aqueous solution containing 40 mg of PVP by ultrasonication for 2 h, and the mixture was then stirred at room temperature for 1 h. Next, 48 mg of KMnO4 was added to the solution, the mixture was sonicated, and the solid product was air-dried at 60 °C overnight (S@HCNR@MnO2).
Characterization
Information about the surface morphology and elemental composition of the samples was obtained with a Phenom™ ProX scanning electron microscope (SEM). The nitrogen adsorption tests were carried out on a two-channel Quantachrome® Nova 2200e. All samples were degassed at 200 °C for 12 h under vacuum prior to testing. The sulfur content of each specimen was determined by heating about 4 g of the material to 400 °C for 1 h at a heating rate of 10 °C min−1 under a nitrogen atmosphere. The remaining sample was collected carefully and weighed on a 4-digit balance to determine the weight change.
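A minimal sketch of the sulfur-content arithmetic implied by this procedure, assuming the mass loss below 400 °C is entirely due to sulfur evaporation; the masses used here are illustrative placeholders, not measured values.

```python
# Sulfur content from the thermal mass loss: sulfur evaporates below 400 C
# under N2, so the weight change is attributed to sulfur.
m_initial = 4.000  # g, composite weighed before heating (hypothetical)
m_residue = 1.210  # g, carbon/MnO2 residue after the 400 C hold (hypothetical)

sulfur_wt_pct = 100.0 * (m_initial - m_residue) / m_initial
print(f"sulfur content = {sulfur_wt_pct:.1f} wt.%")
```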
Electrochemical Measurements
The composite cathodes of S@HCNR and S@HCNR@MnO2 were fabricated from 80 wt.% active material, 10 wt.% Super P carbon black, and 10 wt.% polyvinylidene fluoride (PVDF), with an appropriate amount of N-methyl pyrrolidone (NMP) added to form a viscous slurry. The slurry was cast onto a carbon-coated Al foil current collector and dried at 60 °C overnight. The loading of active material was maintained at ~1 mg cm−2. We estimate that the active material occupies about 50% of the electrode volume, with the remaining 50% occupied by porosity, binder, conductive diluents, etc. Considering the proportions of sulfur and C/MnO2 in the active material, we can expect 40 vol.% sulfur and 10 vol.% C/MnO2 in the electrode; the porosity is estimated at about 30 vol.%. The electrochemical performance of the sulfur cathodes was tested in CR2032-type coin cells with Li metal as both reference and counter electrode. A Celgard® porous membrane was used as the separator. The electrolyte solution was composed of 1.0 M lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) in 1,2-dimethoxyethane (DME) and 1,3-dioxolane (DOL) (v/v = 1:1), with 2 wt.% LiNO3. A quantity of 27 µL of electrolyte was added to each cell. The cells were assembled inside an Ar-filled glove box, where both water and oxygen levels were below 1 ppm. Galvanostatic charge-discharge testing was carried out with Neware® battery testers within a potential range of 1.7-2.8 V vs. Li+/Li. The reported capacities were normalized to the sulfur content of the samples.
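As a rough consistency check on these numbers, the sketch below converts the stated loading and composition into an expected areal capacity; the loading and weight fractions come from the text, and the theoretical capacity of sulfur is assumed.

```python
# Expected areal capacity from the stated electrode recipe.
loading_active = 1.0    # mg cm^-2 of active material (S@HCNR or S@HCNR@MnO2)
sulfur_fraction = 0.70  # S@HCNR is ~70 wt.% sulfur; use 0.60 for S@HCNR@MnO2
q_s = 1675.0            # mA h per g of sulfur, i.e. 1.675 mA h per mg

areal_capacity = loading_active * sulfur_fraction * q_s / 1000.0
print(f"expected areal capacity = {areal_capacity:.2f} mA h cm^-2")  # ~1.17
```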
Results and Discussion
Figure 1 shows a schematic illustration of the steps involved in the synthesis of the S@HCNR@MnO2 nanorods. The first step was the preparation of well-defined β-FeOOH nanorods by a hydrothermal method, which acted as a hard template (Figure 2a). Then, a dopamine layer was formed around the nanorods through a polymerization reaction at 50 °C, followed by carbonization to generate a nitrogen-doped carbon shell on the surface of the iron oxide core (Fe3O4@N-C). Polydopamine (PDA) was selected as the polymeric carbon precursor because of the many amine groups in its structure. This high nitrogen content leaves a nitrogen-doped carbon layer, which substantially improves the electrical conductivity of the coated material [47-49]. The hard template was etched with 2 M HCl aqueous solution and the remaining N-HCNR was infiltrated with molten sulfur (S@HCNR). In the final step, a thin uniform layer of MnO2 was formed around the nanorods (S@HCNR@MnO2).

During the polymerization of dopamine and the subsequent carbonization, the β-FeOOH was converted to an Fe3O4 core with a nitrogen-doped carbon shell (Fe3O4@N-C). The strong binding affinity of polydopamine to the iron oxide core promoted the formation of a uniform polydopamine-derived carbon layer after heat treatment at 500 °C (Figure 2b) [50]. Next, the Fe3O4 template was completely dissolved by HCl aqueous solution, leaving hollow nitrogen-doped carbon nanorods (N-HCNRs) (Figures 2c and 3a). Both ends of the carbon nanorods were slightly narrower than the middle. The N-HCNRs maintained the spindle morphology, with lengths of 1-1.5 µm and diameters ranging from 150 to 200 nm. This unique structure makes them excellent hosts for sulfur. The sublimed sulfur was melted at 155 °C and diffused into the host inside the autoclave to obtain S@HCNRs (Figure 2d), as confirmed by the presence of strong elemental sulfur peaks in the EDX spectrum (Figure 3b).
It is important to note that the EDX spectra for each specimen were collected at different spots in order to generate more reliable data. Figure 3c clearly indicates the presence of Mn in the final structure, S@HCNR@MnO2. In the final step, δ-MnO2 was formed on the surface of S@HCNR through the reduction of KMnO4 by both sulfur and carbon (Equations (1) and (2)). The MnO2 layer wrapped the sulfur-filled nanorods conformally without causing any change in their size or morphology (Figure 2e). The elemental manganese peaks originate from the MnO2 shell (Figure 3c).

6KMnO4 + 3S + H2O = 6MnO2 + K2SO4 + K3H(SO4)2 + KOH (1)

4KMnO4 + 3C + H2O = 4MnO2 + 2KHCO3 + K2CO3 (2)

The porous structure of N-HCNR was evaluated by the nitrogen adsorption-desorption technique. Both adsorption and desorption isotherms are plotted in Figure 4. These isotherms were identified as type IV isotherms with type III hysteresis. We consider this the most important structure for BET analysis, because the information obtained (surface area, pore diameter, and pore volume) is directly related to sulfur loading. The calculated Brunauer-Emmett-Teller (BET) surface area at 77 K was 509 m2 g−1, while a pore volume of 0.251 cm3 g−1 and a pore diameter of 3 nm were computed by Barrett-Joyner-Halenda (BJH) analysis. The relatively high surface area and pore volume of the N-HCNR make it a strong host candidate for sulfur loading. We did not perform BET analysis on the subsequent structures, S@N-HCNR and S@N-HCNR@MnO2, because the results would not be informative: all voids are filled after the sulfur loading and MnO2 coating.

The electrochemical performance of both S@HCNR and S@HCNR@MnO2 nanorods was investigated in half-cells with a Li chip as both counter and reference electrode. The reported capacities were normalized to the sulfur content of each sample. S@HCNR comprised about 70 wt.% sulfur, while the S@HCNR@MnO2 nanorods contained 60 wt.% sulfur and 10 wt.% MnO2. The charge-discharge behavior of the electrode material was evaluated at a 0.2C rate (1C = 1675 mA h g−1) in the voltage window of 1.7-2.8 V vs. Li+/Li (Figure 5a). The S@HCNR@MnO2 nanorods delivered an excellent initial discharge capacity of 1661 mA h g−1; after 70 cycles, the capacity decayed to 1342 mA h g−1 with a Coulombic efficiency of 99%. This translates to ~80% capacity retention. After the first cycle, the discharge capacity decreased to 1500 mA h g−1, and the cell then stabilized with a slow decay rate.
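The retention figure can be reproduced directly from the quoted capacities:

```python
# Retention arithmetic behind the "~80%" figure quoted above.
q_initial = 1661.0  # mA h g^-1, initial discharge capacity at 0.2C
q_cycle70 = 1342.0  # mA h g^-1, discharge capacity after 70 cycles

retention = 100.0 * q_cycle70 / q_initial
print(f"capacity retention over 70 cycles = {retention:.1f}%")  # ~80.8%
```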
Typical galvanostatic discharge-charge profiles of S@HCNR@MnO2 electrodes for different cycles at 0.2C are shown in Figure 5b. It is worth noting that, during the first charge, the voltage reached 2.3 V and then dropped to 2.2 V. This hump is due to the MnO2 coating layer, which increases the charge resistance [45,51]. The height of this hump decreased over successive cycles. Furthermore, no plateau related to the reaction of lithium with the MnO2 shell was detected in the voltage window of 1.7-2.8 V [52,53]. The cycling performance of the S@HCNR and S@HCNR@MnO2 nanorods with a stepwise increase in current density every 10 cycles is shown in Figure 5c. The rate was increased from 0.2C to 2C, followed by a recovery at 0.2C. The S@HCNR@MnO2 electrode delivered an initial specific capacity of 1641 mA h g−1 at 0.2C without any noticeable overpotential, which is ~98% of the theoretical specific capacity of sulfur. As the C-rate increased to 0.5C, 1C, and 2C, the specific discharge capacity was gradually reduced to 1300 mA h g−1, 400 mA h g−1, and 320 mA h g−1, respectively. When the current rate was switched back to 0.2C, the discharge capacity recovered to ~1350 mA h g−1, which is close to the capacity recorded at 0.2C in the first cycle. In comparison, the discharge capacity of the S@HCNR nanorods decreased more significantly with increasing charging/discharging rates: the initial specific capacity at 0.2C was 1300 mA h g−1, but it declined to 220 mA h g−1 at 2C. This demonstrates the excellent rate capability of S@HCNR@MnO2. To further understand the kinetics of the redox reactions, the voltage profiles of S@HCNR@MnO2 cathodes between 1.7 and 2.8 V (vs. Li+/Li) at different current rates are presented in Figure 5d. Two voltage plateaus, at 2.3 and 2.0 V, are associated with the formation of long- and short-chain LiPSs, respectively [54]. No peaks or shoulders related to the intercalation of Li+ ions into MnO2 were observed. The good electrochemical properties of the S@HCNR@MnO2 hybrid cathode are related to the engineered design of the spindle-like nanorods. The inner carbon layer is in close contact with the sulfur and helps to improve the electrical conductivity. The outer MnO2 layer serves as a protective layer against the polysulfide shuttle effect and partially increases the overall conductivity of the nanorod structure. Although the electrochemical performance of the nanorods is good, the capacity drop from 0.5C to 1C was severe. In fact, the capacity drop at higher rates is a common problem in Li-S battery technology, and several other groups have also reported this large capacity drop at high-rate charge/discharge [41,43,53]. It is mainly related to the slow lithiation/delithiation reaction kinetics and poor Li+ ion diffusion through the cathode active materials at higher rates. The large volume change of sulfur during the initial cycles can cause cracks in the carbon host structure, which in subsequent cycles can promote the leakage of LiPSs. Additionally, the presence of some un-infiltrated sulfur on the surface of the S@HCNR@MnO2 electrode can negatively affect the electrode performance, since these free sulfur particles can undergo redox reactions differently from the encapsulated ones. Even with the large capacity decay at higher current densities, the S@HCNR@MnO2 electrode showed superior discharge capacities compared to the S@HCNR electrode.
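For context, a C-rate translates into an applied current in proportion to the sulfur mass on the electrode; the short sketch below illustrates the conversion for a hypothetical 0.7 mg sulfur loading (1C = 1675 mA per gram of sulfur).

```python
# Converting a C-rate into an applied current for a given sulfur mass.
def applied_current_mA(c_rate: float, sulfur_mass_mg: float) -> float:
    """Current in mA for a sulfur cathode, with 1C = 1675 mA per gram of sulfur."""
    return c_rate * 1675.0 * sulfur_mass_mg / 1000.0

# Example: ~0.7 mg of sulfur on a 1 cm^2 electrode at 1 mg cm^-2 loading
for rate in (0.2, 0.5, 1.0, 2.0):
    print(f"{rate}C -> {applied_current_mA(rate, 0.7):.2f} mA")
```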
Finally, the delivered discharge capacity and rate performance of the S@HCNR@MnO2 were superior to those of MnO2-coated sulfur-filled hollow carbon nanospheres (S@HCN@MnO2) [55]. This is due to better utilization of sulfur during the charge/discharge process in the hollow nanorod structure compared to the hollow nanospheres. Moreover, the physical and chemical confinement of sulfur/LiPSs is better achieved in the hollow nanorod design. Furthermore, the sulfur is in better contact with the carbon layer in the nanorods, which translates to higher sulfur utilization [39,40,43,52,54].
Conclusions
In summary, we have synthesized dual core-shell-structured S@HCNR@MnO2 spindle-like nanorods as a promising cathode material for lithium-sulfur batteries. The nitrogen-doped hollow carbon nanorods, with a diameter of less than 200 nm and a length of 1-2 µm, act as a host for the sulfur. The cathode delivered an excellent initial discharge capacity of 1661 mA h g−1 and capacity retention above 80% after 70 cycles at a 0.2C rate. The enhanced performance of S@HCNR@MnO2 is attributed to several factors. First, the N-doped hollow carbon nanorods not only enhance the electrical conductivity of the cathode, but also facilitate chemical binding with polysulfide intermediates.
Second, the hollow structure can accommodate the volumetric expansion of sulfur upon lithiation and provides physical encapsulation of polysulfides within the cathode structure. Third, the one-dimensional (1D) structure (a linear electron conduction path) facilitates fast ion and electron transport. Finally, the polar MnO2 shell, with its ability to form strong chemical bonds with polysulfides, minimizes the polysulfide shuttle effect in the cell. Although the results obtained in this study are promising, further optimization is required to achieve a robust sulfur nanocomposite with better rate capability and higher delivered capacities. The carbon framework could be prepared from other carbon sources, such as polyacrylonitrile (PAN), with different pore size distributions. The MnO2 layer thickness needs to be adjusted to achieve maximum protection against sulfur/polysulfide leakage. It is also worth testing this nanocomposite with separators coated with polysulfide barrier materials, such as In2O3 [33] and AlF3 [56].
"year": 2020,
"sha1": "1e45c321ca458211fa08c2c18b7b81168144be16",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2305-7084/4/2/42/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0a7010567245d709818febee5d16481ed2a04042",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Combined Metabolite and Transcriptome Profiling Reveals the Norisoprenoid Responses in Grape Berries to Abscisic Acid and Synthetic Auxin
The abscisic acid (ABA) increase and auxin decline are both indicators of ripening initiation in grape berry, and norisoprenoid accumulation also starts at around the onset of ripening. However, the relationship between ABA, auxin, and norisoprenoids remains largely unknown, especially at the transcriptome level. To investigate the transcriptional and posttranscriptional regulation of norisoprenoid production by ABA and the synthetic auxin 1-naphthaleneacetic acid (NAA), we performed time-series GC-MS and RNA-seq analyses on Vitis vinifera L. cv. Cabernet Sauvignon grape berries from pre-veraison to ripening. Higher levels of free norisoprenoids were found in ABA-treated mature berries in two consecutive seasons, and both free and total norisoprenoids were significantly increased by NAA in one season. The expression pattern of known norisoprenoid-associated genes in all samples and the up-regulation of specific alternative splicing isoforms of VviDXS and VviCRTISO in NAA-treated berries were predicted to contribute to the norisoprenoid accumulation in ABA- and NAA-treated berries. Combined weighted gene co-expression network analysis (WGCNA) and DNA affinity purification sequencing (DAP-seq) analysis suggested that VviGATA26 and the previously identified switch genes myb RADIALIS (VIT_207s0005g02730) and MAD-box (VIT_213s0158g00100) could be potential regulators of norisoprenoid accumulation. The positive effects of ABA on free norisoprenoids and of NAA on total norisoprenoid accumulation were revealed in commercially ripening berries. Since endogenous ABA and auxin are sensitive to environmental factors, this finding provides new insights for developing viticultural practices to manage norisoprenoids in vineyards in response to changing climates.
Introduction
Norisoprenoids are among the most important grape-derived flavor compounds in wine, especially for non-Muscat cultivars. With very low olfactory perception thresholds and powerful aroma properties, they contribute to the floral and fruity attributes of grapes and wines [1,2]. Due to their important sensory contribution, extensive research has been conducted on the response of these compounds to treatments such as synthetic auxin application [3], sunlight exposure [4], and partial rootzone drying [5]. Norisoprenoids are carbonyl compounds with 9, 10, 11, or 13 carbon atoms that derive from the oxidative degradation of carotenoids, a diverse group of C40 pigments in plants [6]. Previous studies have identified three carotenoid cleavage dioxygenase (CCD) enzyme family members (CCD1, CCD4a, and CCD4b) that can catalyze the cleavage of carotenoid substrates to produce norisoprenoids [6,7]. The expression of these three genes has been found to increase significantly in grape berries at the onset of ripening (veraison), compared to pre-veraison berries [7]. Chen et al. observed that VviCCD1 increased from the green stage and peaked at around veraison, while the transcript abundance of VviCCD4b and VviCCD4a started to increase from veraison and after veraison, respectively [8]. They suggested that the higher norisoprenoid content in response to a distinct climate was related to up-regulated VviCCD4b. However, other regulatory mechanisms of norisoprenoid accumulation, such as alternative splicing, transcriptional regulation by transcription factors (TFs), and regulatory networks with other genes, are poorly understood. Alternative splicing (AS), an important posttranscriptional regulatory mechanism that can affect mRNA stability and increase protein diversity, has recently gained attention in grapevine research. Vitulo et al. found that 30% (8668) of the predicted grapevine genes undergo AS, with 64% of these alternatively spliced genes possessing more than two isoforms and producing 32,395 different isoforms in grape berry [9]. They also suggested that AS can affect miRNA target sites, indicating its contribution to transcriptional complexity and regulation. Furthermore, AS appears to be conserved among different varieties [10]. Using combined transcriptomic and proteomic analysis, Jiang and colleagues demonstrated that AS plays an important posttranscriptional regulatory role in the response of grape leaves to high-temperature stimuli [11]. With respect to transcriptional regulation, we recently identified a MADS family transcription factor (TF), VviMADS4, that directly down-regulates VviCCD4b expression [12], and a VviWRKY40 transcription factor responsible for monoterpenoid glycosylation [13]. However, to date, there is no report uncovering potential TFs that positively regulate the biosynthesis of norisoprenoids and other volatile organic compounds in grape berry.
Norisoprenoids markedly increase at around veraison [8,14], the crucial shift point at which grape berries change from green/immature to ripe/mature, encompassing physiological and metabolic changes. These changes include berry softening and sugar, anthocyanin, and flavor accumulation, and they parallel an increase in abscisic acid (ABA) level and a decrease in auxin level [15-17]. Hormones play a major role in controlling several ripening-associated processes, such as fruit coloration and aroma development [18,19]. Among the hormones accumulated in grape berry, ABA and auxin are considered critical for the regulation of ripening progression. Several studies have confirmed that the ABA increase and auxin decline are tightly associated with the initiation of ripening [17,18,20]. However, understanding of the role of the two hormones in regulating norisoprenoid production is limited. One report mentioned that the exogenous application of auxin-like compounds can delay ripening and simultaneously affect the concentration of norisoprenoids in wines [3]. Moreover, our previous study preliminarily investigated the effects of ABA and the synthetic auxin 1-naphthaleneacetic acid (NAA) on anthocyanins and volatile compounds in grape berry at the beginning of ripening [21]. In the present study, we further evaluated the roles of ABA and NAA in regulating the biosynthesis of norisoprenoids at both the metabolite and transcriptional levels. The purpose of this work was to elucidate the regulatory mechanism underlying the effects of ABA and NAA application on the accumulation of norisoprenoids, a class of important carotenoid degradation products in grape berry, and to find the potential regulatory genes. The results provide new insight into controlling the metabolism from carotenoids to norisoprenoids. Given the important contribution of norisoprenoid compounds to the varietal aroma of wines from neutral varieties like Cabernet Sauvignon, the research outcome can also guide viticulturists in viticulture management decisions that lead to a good varietal aroma.
Evolution of Sugar and Acidity
Two ripening parameters of grape berry, total soluble solids (TSS) and titratable acidity (TA), were compared between the treatments and the control at five phenological stages. There was no significant difference in TSS and TA at the E-L 33 stage, before the treatments were applied. After ABA spraying in 2015 and 2016, the initiation of ripening (E-L 35 stage) of grape berries was advanced by about one week, and the subsequent ripening process was also accelerated by both the ABA800 and ABA1000 treatments (Figure 1). Both ABA1000- and ABA800-treated berries accumulated TSS faster than the control. As a result, these berries achieved technological maturity (E-L 38 stage) about 17 days ahead of the control in 2015 and 25 days earlier than the control in 2016 (Figure 1). In contrast, NAA-treated berries reached technological maturity about 25 days later than the control group, owing to the slower TSS accumulation. At the same time, the ABA application markedly promoted the TA decrease, whereas NAA suppressed it. Interestingly, the influence of NAA lay mainly in delaying the onset of grape berry ripening (from E-L 33 to E-L 35) and extending the berry coloration process (from E-L 35 to E-L 36).

Figure 1. Total soluble solids and titratable acidity in two consecutive seasons. The different shapes represent the phenological stages: square (E-L 33), big circle (E-L 34), triangle (E-L 35), rhombus (E-L 36), small circle (E-L 38). The lines of green, light purple, purple, and light green represent Control, 800 mg/L abscisic acid (ABA) (ABA800), 1000 mg/L ABA (ABA1000), and 100 mg/L synthetic auxin 1-naphthaleneacetic acid (NAA) (NAA100), respectively. Bars represent ± standard deviation.
Changes of Endogenous ABA and Auxin Biosynthesis and Signaling Pathway
We then investigated the responses of endogenous ABA and auxin biosynthesis and signaling to ABA and NAA spraying. The levels of endogenous ABA and indole-3-acetic acid (IAA) in grape berries were measured at the E-L 33, E-L 34, E-L 35, and E-L 36 stages (Figure 2A). There was no significant difference in ABA and IAA concentration among all samples at E-L 33, before treatment application. The ABA concentration peaked at the E-L 34 stage and then decreased until ripening in the control group. In contrast, the concentration of ABA increased sharply at E-L 34 after ABA1000 and ABA800 application, while ABA accumulation was inhibited by the NAA treatment. Both the ABA-treated and control samples had their ABA peak at E-L 34, whereas the peak was delayed to the E-L 35 stage in NAA-treated samples. Transcriptomic data of ABA1000, NAA100, and the control in 2015 showed that three VviNCEDs, which are involved in both norisoprenoid and ABA biosynthesis, were up-regulated by the ABA treatments at three different developmental stages (Figure 2B). On the contrary, the VviNCEDs were down-regulated by the NAA treatment, especially at the E-L 34 stage. This can explain the higher concentrations of ABA in ABA-treated berries and the lower ABA levels in NAA-treated berries. Additionally, unlike ABA, the concentration of IAA was very low in all samples. There was a decline in the IAA concentration at E-L 34, followed by a small increase at the following stages. Generally, no differences were observed among treatments and the control; however, a lower amount of IAA was found in ABA-treated samples at E-L 36. We observed that ABA application suppressed the expression of the genes encoding tryptophan aminotransferase related 1 (TAR1, VIT_200s0225g00230) and YUC flavin monooxygenase 10 (YUC10, VIT_207s0104g01260) at the E-L 35 and E-L 38 stages, respectively (Figure 2C). VviTAR1, VviTAR3, and VviYUC10 also showed low levels in NAA-treated berries at the E-L 34, E-L 36, and E-L 38 stages, respectively. VviYUC6 (VIT_204s0023g01480) was markedly up-regulated by the NAA treatment at the E-L 36 stage.
Strong Transcriptional Changes of Ripening Switch Genes
A previous study identified 190 grapevine berry switch genes, which trigger the onset of the ripening process [16]. All of those genes were observed to be expressed at low levels during the immature phase and to show a significant increase at veraison. They are mainly involved in transcription activation, cell wall metabolism, and developmental processes. In the present study, the expression pattern of the switch genes was investigated to detect the roles of these genes in the altered ripening progression induced by the ABA and NAA treatments. After removing the switch genes with low expression (RPKM < 1) among our samples, the remaining 184 switch genes were subjected to K-means clustering analysis, and two clusters were generated (Figure S1). The expression of the 107 genes in cluster 1 kept increasing from the E-L 34 to the E-L 38 stage in all samples, and these genes were expressed at a higher level in NAA-treated berries at the E-L 36 stage. In contrast, the transcript abundance of the 77 genes in cluster 2 increased at the early stage and then decreased, and these genes showed a lower expression level under NAA treatment. Among the switch genes of clusters 1 and 2, the 147 significantly differentially expressed switch genes (DESGs) between treatments and control are shown in Figure 4, and the biological process annotation of these DESGs is listed in Table S1. According to their distinct responses to the treatments, the DESGs in the above-mentioned cluster 1 and cluster 2 were roughly divided into three (a-c) and two (d-e) groups, respectively. Interestingly, we found that most genes in groups a and d were up-regulated by ABA at the E-L 34 or E-L 35 stage, while down-regulated by NAA at the early stages. These genes encompassed the TFs VviMYBA2 (VIT_202s0033g00390) and zinc finger family genes (VIT_206s0061g00760 and VIT_212s0028g03860) in group a, as well as VviMYBA1 (VIT_202s0033g00410) and VviMYBA3 (VIT_202s0033g00450) in group d. Additionally, a total of 50 genes in group b were up-regulated only in NAA-treated berries, particularly at the E-L 36 stage. The most overrepresented biological process in this group is "Secondary Metabolic Process", including genes of the cytochrome P450 family, glutathione S-transferase, and carotenoid cleavage dioxygenase 4b (CCD4b). The genes in groups c and e showed lower levels of expression at specific stages under NAA treatment, and they are mainly related to transcription factor activity and cell wall metabolism. The TFs VviWRKY75 (VIT_217s0000g01280), VviWRKY23 (VIT_207s0005g01710), VviNAC33 (VIT_219s0027g00230), VviNAC60 (VIT_208s0007g07670), three zinc finger proteins (VIT_205s0020g04730, VIT_208s0040g01950, and VIT_218s0001g01060), and two lateral organ boundaries proteins (VIT_206s0004g07790 and VIT_203s0091g00670) were found among these genes. They also included several genes encoding cellulase (VIT_201s0137g00430), endo-1,4-beta-glucanase (VIT_200s2526g00010, VIT_200s0340g00050, and VIT_200s0340g00060), and xyloglucan endotransglucosylase/hydrolase (VIT_206s0061g00550 and VIT_205s0062g00610) that are involved in fruit softening [25,26]. Moreover, among these NAA-inhibited genes, three are involved in carbohydrate metabolism (glycolysis and sucrose biosynthesis): phosphopyruvate hydratase (VIT_216s0022g01770), sucrose synthase (VIT_207s0005g00750), and sucrose-phosphate synthase (VIT_218s0075g00350).
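A minimal sketch of this filtering-and-clustering step, assuming an RPKM genes-by-samples table; `rpkm` is a hypothetical pandas DataFrame, and whether the RPKM < 1 filter is applied per sample or across all samples is an assumption here.

```python
import pandas as pd
from sklearn.cluster import KMeans

def cluster_switch_genes(rpkm: pd.DataFrame, k: int = 2) -> pd.Series:
    """Cluster switch genes (rows) by expression profile across samples (columns)."""
    expressed = rpkm[(rpkm >= 1).any(axis=1)]  # drop genes with RPKM < 1 everywhere
    centered = expressed.sub(expressed.mean(axis=1), axis=0)  # center each gene
    scaled = centered.div(expressed.std(axis=1).replace(0, 1), axis=0)  # scale each gene
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scaled.values)
    return pd.Series(labels, index=expressed.index, name="cluster")
```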
Effects on Norisoprenoid Production and Related Gene Expression
The concentrations of free-form and total norisoprenoids varied among the berries of the treatments and the control (Figure 5). The ABA1000 and ABA800 applications markedly increased the concentration of free-form norisoprenoids in 2015 at the E-L 34 and E-L 38 stages. Although a similar pattern was observed in 2016, the difference between the control and ABA1000-treated berries was significant only at the E-L 34 stage. Lower levels of total norisoprenoids were observed in ABA-treated berries than in control berries at the E-L 34 and E-L 35 stages in 2015, while higher levels were observed in ABA800-treated berries than in control berries at the E-L 35 stage in 2016. No effects of the ABA treatments on the total norisoprenoid concentration were observed at harvest (E-L 38). NAA100-treated berries showed high levels of both free-form and total norisoprenoids at E-L 38 in 2015. The total concentration of each norisoprenoid is presented in Table S2. When grape berries reached maturity (E-L 38 stage), the identified norisoprenoids responded differently to the ABA and NAA treatments. Higher concentrations of vitispirane A, vitispirane B, and (E)-1-(2,3,6-trimethylphenyl)buta-1,3-diene (TPB) were observed in NAA-treated berries at all sampling stages. Most norisoprenoid compounds, such as β-damascenone and β-ionone, appeared to be significantly increased by NAA treatment in harvested berries in 2015, except for geranylacetone and 6-methyl-5-hepten-2-one (MHO). ABA800 and ABA1000 had consistent negative effects on the accumulation of vitispirane A, vitispirane B, and MHO in harvested berries in both seasons. Additionally, the geranylacetone concentration was not influenced by the ABA treatments at the E-L 38 stage, and the responses of the other norisoprenoids to ABA were not consistent between the two seasons. The transcriptome analysis was conducted on the ABA1000, NAA100, and control berries collected in 2015. Concerning the genes involved in norisoprenoid biosynthesis (Figure 2B), we found that ABA treatment markedly suppressed the expression of VviLECY, VviCCD4a, and VviABA-Hase at the early stage, but elevated the expression of VviCCD4b and key ABA biosynthesis-related genes including VviNCED1, VviNCED2, and VviNCED3. As expected, NAA treatment up-regulated norisoprenoid biosynthesis-related genes such as VviPSY1, VviPSY2, VviPSY3, VviLECY, VviLBCY, VviZEP, VviCCD4a, and VviCCD4b, which corresponded to the increase in the total concentration of norisoprenoids. Meanwhile, this treatment also suppressed the expression of VviPSY (VIT_203s0038g00450) at the E-L 36 stage, three VviNCEDs at the E-L 34 stage, and two VviABA-Hases. The transcription of other norisoprenoid genes, such as PDS, ZISO, CRTISO, LUT5/BCH, LUT1, AAO, and CCD1, was insensitive to both the ABA and NAA treatments.
An Integrated Gene Co-Expression and Regulatory Network Regulating Norisoprenoid Biosynthesis
Our results indicate that ABA and NAA treatments can significantly influence the expression of norisoprenoid-related genes, previously identified switch genes, and the genes involved in ABA and auxin biosynthesis and signaling at nearly the same time. Therefore, we hypothesized that there is crosstalk between the norisoprenoid-related genes and the other genes. The potential link between these genes is supported by the finding that the norisoprenoid-related gene VviCCD4b (VIT_202s0087g00930) is a berry switch gene [16]. Furthermore, ABA and norisoprenoids share the common substrate of carotenoids, and hence are under the regulation of the same upstream genes. Therefore, we integrated the results of WGCNA and DAP-seq analysis to build a gene co-expression and regulatory network, with the objective of improving our understanding of the regulation of norisoprenoid accumulation.
In the norisoprenoid-related WGCNA modules of turquoise, cyan, and black, we observed that some switch genes and hormone-related genes exhibited expression patterns similar to those of the genes involved in norisoprenoid biosynthesis (Table S5). A high edge weight threshold of 0.4 was chosen to select the candidate genes for constructing the network within these modules. After removing the edges according to this threshold, the gene interactions observed in the network all came from the turquoise module. The norisoprenoid-related genes VviPSY1, VviZDS (VIT_214s0030g01740), and VviABA-Hase (VIT_204s0079g00680), which were the target genes of VviGATA26 identified by DAP-seq, were also included in the network. It was found that VviCCD4a co-expressed with VviGATA26 and two switch genes encoding eukaryotic peptide chain release factor subunit 1-3 (eRF1-3, VIT_218s0072g01010) and CBL-interacting protein kinase 25 (CIPK25, VIT_204s0008g05770) (Figure 7A). In addition to these three genes, VviPSY3 interacted with three other switch genes encoding myb RADIALIS (VIT_207s0005g02730), phosphatidylserine synthase 2 (PTDSS2, VIT_201s0011g04370), and MAD-box (VIT_213s0158g00100). Additionally, the heatmap clearly showed that VviGATA26 expression was up-regulated at the E-L 36 stage in NAA-treated grape berries. A relationship between VviZEP and VviGATA26 was also observed in the network. The expression pattern of these genes in all samples suggested that VviGATA26 could negatively regulate the expression of the DAP-seq-identified genes VviPSY1, VviZDS, and VviABA-Hase in cluster 1, while positive correlations were found between pairs of genes in cluster 2 (Figure 7B).
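A minimal sketch of the edge-weight filtering described above, assuming the WGCNA edge list has been exported as (gene, gene, weight) tuples; the weights shown are illustrative, not measured values.

```python
import networkx as nx

def build_network(edges, threshold=0.4):
    """Keep only co-expression edges at or above the chosen weight threshold."""
    g = nx.Graph()
    for a, b, w in edges:
        if w >= threshold:
            g.add_edge(a, b, weight=w)
    return g

# Illustrative (not measured) weights between genes named in the text
g = build_network([
    ("VviCCD4a", "VviGATA26", 0.52),
    ("VviPSY3", "VIT_207s0005g02730", 0.44),  # myb RADIALIS switch gene
    ("VviPSY1", "VviZDS", 0.31),              # dropped: below the 0.4 cut
])
print(sorted(g.edges(data="weight")))
```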
Potential Contribution of Differentially Alternative Splicing of VviDXS and VviCRTISO
The effects of the ABA and NAA treatments on alternative splicing (AS) events of those genes were also investigated. As a posttranscriptional mode of gene regulation, AS has been shown to be affected by salt stress and high temperature in grape berry [9,11]. Splice variants are mainly generated by intron retention (IR), exon skipping (ES), mutually exclusive exons (MXE), alternative 3′ splice sites (A3SS), and alternative 5′ splice sites (A5SS). The differential alternative splicing between two RNA-Seq samples was detected with rMATS [27]. In the present study, besides the norisoprenoid-associated genes mentioned above, the upstream genes involved in the plastidial 2-methyl-D-erythritol-4-phosphate (MEP) and cytoplasmic mevalonic acid (MVA) pathways were also considered [28]. Among these genes, 32 differential ES or IR splicing events were found (Tables S6 and S7). However, only the occurrence of alternative splicing in the genes encoding 1-deoxy-D-xylulose-5-phosphate synthase (DXS; VIT_204s0008g04970) and prolycopene isomerase (CRTISO; VIT_208s0032g00800) was further validated by reverse transcription PCR (Figure 8). The rMATS paired model identified that two transcripts of VviDXS with different skipped exons were up-regulated by NAA at the E-L 34 stage (Table S6), and a transcript of VviCRTISO with a retained intron was also expressed at relatively high levels in NAA-treated berries compared to the control at the E-L 36 stage (Table S7).
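For orientation, rMATS calls differential ES/IR events from differences in the length-normalized inclusion level (PSI) between conditions; the sketch below is a simplified illustration of that statistic with made-up read counts, not the tool's exact implementation.

```python
# Simplified percent-spliced-in (PSI) calculation: inclusion reads are
# normalized by the effective length of the inclusion form, skipping reads
# by that of the skipping form, and treatments are compared via delta-PSI.
def psi(inclusion_reads: int, skipping_reads: int,
        inclusion_len: int, skipping_len: int) -> float:
    i = inclusion_reads / inclusion_len  # length-normalized inclusion
    s = skipping_reads / skipping_len    # length-normalized skipping
    return i / (i + s)

# Illustrative counts for a VviDXS exon-skipping event (not measured data)
psi_control = psi(120, 30, 200, 100)
psi_naa = psi(60, 80, 200, 100)
print(f"delta PSI (NAA - control) = {psi_naa - psi_control:+.2f}")
```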
Figure 8. Qualitative RT-PCR analysis of the expression of VviDXS and VviCRTISO splice variants in grape berries at specific stages under the NAA100 and ABA1000 treatments. The forward and reverse primers were designed from the exons upstream and downstream of the skipped exon or retained intron, respectively.

Response of ABA and IAA Biosynthesis and Signaling
This study addresses the responses of endogenous ABA and auxin biosynthesis and signaling, and of previously identified switch genes, to exogenous ABA and NAA treatments (Figures 2-4). These data provide a framework for understanding numerous aspects of ABA- and NAA-regulated ripening. We found that the level of endogenous ABA significantly increased after the ABA treatments in 2015 (Figure 2A), which is consistent with a previous study [20]. The elevated ABA could result both from induced endogenous ABA biosynthesis and from absorption of the exogenously sprayed ABA. However, the uptake of exogenous ABA into the grape is predicted to be an inefficient process because of the berry's waxy cuticle [20,29]. Given that the increased ABA mainly came from ABA biosynthesis, a small amount of external ABA entering the berry may be enough to drive the flux of carotenoids into endogenous ABA production. This speculation is supported by the up-regulation of VviNCEDs in ABA-treated berries (Figure 2B), as well as the high correlation between ABA concentration and VviNCED1 expression (r = 0.79, p = 0.011) or VviNCED2 expression (r = 0.70, p = 0.036). In contrast, the auxin analogue NAA had both positive and negative effects on the expression of auxin biosynthetic genes (Figure 2C), which explains why there was no significant difference in IAA concentration between the NAA treatment and the control. From the perspective of interactive crosstalk, the ABA treatment inhibited endogenous auxin production by suppressing the expression of VviTAR1 and VviYUC10, and the NAA application suppressed endogenous ABA biosynthesis by synchronously down-regulating three VviNCEDs at the E-L 34 stage (Figure 2B). The lower expression of ABA biosynthetic genes in NAA-treated berries has also been observed in a previous study [30].
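The correlation statistics quoted here are standard Pearson tests; below is a sketch of the computation with placeholder per-sample values, not the actual measurements.

```python
# Pearson correlation between hormone concentration and gene expression,
# as quoted above (e.g., r = 0.79, p = 0.011 for ABA vs. VviNCED1).
from scipy.stats import pearsonr

aba_conc = [0.8, 1.1, 2.9, 2.4, 1.6, 3.1, 2.7, 1.2, 0.9]        # hypothetical
nced1_rpkm = [5.0, 7.2, 21.0, 15.5, 9.8, 24.1, 18.9, 6.4, 5.9]  # hypothetical

r, p = pearsonr(aba_conc, nced1_rpkm)
print(f"r = {r:.2f}, p = {p:.3f}")
```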
The ABA and NAA treatments also caused changes in the transcription of ABA and auxin signaling genes. On the whole, ABA spraying activated ABA signal transduction and suppressed auxin signal transduction, whereas the effects of the NAA treatment on the two signaling pathways appeared to be complex, owing to the up-regulation or down-regulation of various genes (Figure 3). A complicated regulatory network between the ABA and auxin signaling pathways has already been suggested by Nemhauser et al. [31] and Fiorenza et al. [30]. As expected, some ripening switch genes previously identified in grape berry showed opposite expression behavior in response to the ABA and NAA treatments, such as the DESGs in groups a and d (Figure 4b). Indeed, VviMYBA2, VviMYBA1, VviMYBA3, and the zinc finger family genes in groups a and d have been demonstrated to be involved in berry development and ripening [32,33]. We consider that these TFs were up-regulated by ABA while down-regulated by the NAA treatment at the E-L 34 or E-L 35 stage, in turn modulating grape ripening. Moreover, the delayed ripening of NAA-treated berries could also be attributed to the switch genes in groups c and e, which were significantly down-regulated by NAA while remaining almost unchanged in ABA-treated berries. In these two groups, the potential ripening regulators VviWRKY75, VviNAC33, VviNAC60, and two lateral organ boundaries proteins were found to be associated with ripening regulation (Table S1). In addition to TFs, we found that the expression of genes concerning fruit softening and carbohydrate metabolism in groups c and e was inhibited by NAA spraying.
Response of Norisoprenoid Biosynthesis
Our study is the first to reveal the effect of exogenous ABA on both free and total norisoprenoids in grape berries at both the transcriptional and post-transcriptional levels. As is known, the biosynthetic pathways of ABA and norisoprenoids share part of the same carotenoid substrate pool. The present study indicated that, although ABA biosynthesis was enhanced by the ABA treatment, there was no visible decrease of total norisoprenoids in ABA-treated harvested berries. Total norisoprenoid levels were low in both ABA1000- and ABA800-treated berries only at the E-L 34 stage (Figure 5A,B), when the concentration of ABA had increased substantially (Figure 2A). Previous researchers have confirmed an increase of carotenoids after ABA treatment in many fruits, such as grape [34] and tomato [35], and in seeds of bean, tobacco, beet, and corn [36]. Hence, we speculate that the ABA treatment elevated the concentration of carotenoids (the ABA precursors) during berry ripening, thus supporting the biosynthesis of both norisoprenoids and ABA. Alternatively, since vitispirane A, vitispirane B, and MHO exhibited lower levels in ABA-treated berries in two consecutive seasons (Table S2), the ABA treatments or the increased endogenous ABA may negatively regulate the accumulation of these compounds by down-regulating VviCCD4a (Figure 2B). The pre-veraison NAA treatment was found to increase total norisoprenoids at the E-L 35 and E-L 36 stages in grape berries [21] and the β-damascenone level in wine [3]. In contrast to the ABA treatment, the effect of NAA on norisoprenoid biosynthesis, relative to its effect on ABA biosynthesis, appeared to be more pronounced. NAA up-regulated a series of norisoprenoid-associated genes, including VviPSY1, VviPSY2, VviPSY3, VviLECY, VviLBCY, VviZEPs, VviCCD4a, and VviCCD4b, at certain stages (Figure 2B), ultimately elevating the levels of most norisoprenoid components at harvest, except for geranylacetone and 6-methyl-5-hepten-2-one. In tomato fruit, silencing of PSY1 can significantly reduce carotenoid accumulation, whereas silencing of PSY2 or PSY3 was less efficient in controlling carotenoid biosynthesis [37], suggesting that the three PSY genes play different roles in the carotenoid biosynthetic pathway. Similarly, since carotenoids are synthesized in grape berry mostly from fruit formation until veraison [1], the higher expression of VviPSY1 at the E-L 34 and E-L 35 stages (Figure 7B) indicated its important role in carotenoid metabolism. In contrast, elevated expression levels of VviPSY2 and VviPSY3 were observed at the E-L 36 and E-L 38 stages, which could be responsible for carotenoid biosynthesis after veraison. This may explain the different responses of the VviPSYs to the NAA treatments (Figure 2B). WGCNA further showed that VviCCD4a expression was highly correlated with (E)-β-damascenone, the norisoprenoid component with the highest concentration, as well as with (Z)-β-damascenone, riesling acetal, TPB, and the total norisoprenoid concentration (Figure 6), whereas VviCCD4b was correlated only with the concentration of 6-methyl-5-hepten-2-one. This result differs from our previous study, in which total norisoprenoids and β-damascenone were positively correlated with the expression of VviCCD4b, rather than VviCCD4a, during Cabernet Sauvignon grape berry development [12]. This difference is inferred to relate mainly to the retarded ripening process and the transcriptional alterations induced by NAA.
In the present study, the expression of VviCCD4a was markedly up-regulated by the NAA treatment at the E-L 34, E-L 35, and E-L 36 stages, but was down-regulated by ABA spraying at the E-L 35 and E-L 36 stages (Figure 2B). Moreover, VviCCD4a was co-expressed with the upstream VviPSYs and VviZEPs of the norisoprenoid biosynthesis pathway (Table S5). Both our present data and a previous finding [12] indicate that VviCCD4a in non-treated grape berries is expressed at a very low level before the E-L 36 stage, and that its expression increases sharply when berries approach technical maturity (Figure S2). This study also observed that the expression level of VviCCD4a was much higher than that of VviCCD4b in NAA-treated berries at the E-L 34, E-L 36, and E-L 38 stages. Taken together, these results indicate that VviCCD4a encodes a key enzyme affecting norisoprenoid accumulation in response to NAA treatment.
Potential Regulation Relating to NAA-Induced Norisoprenoid Accumulation
GATA transcription factors are a family of zinc finger proteins that bind the consensus DNA sequence (T/A)GATA(A/G) [38]. They are widely present in plants and are involved in light response regulation, chlorophyll synthesis, and carbon/nitrogen metabolism. Interestingly, evidence has demonstrated that light is the most important environmental factor affecting plant carotenoid metabolism; light significantly promotes the expression of carotenoid biosynthetic genes, particularly PSY, and the activity of the related enzymes [39]. In the present study, integrated WGCNA and DAP-seq analyses indicated that VviGATA26 played a critical role in the regulatory network relating to NAA-induced norisoprenoid biosynthesis, based on the following data. Firstly, VviGATA26 was shown to be able to bind the promoter sequences of VviPSY1 (VIT_204s0079g00680) and VviZDS (VIT_214s0030g01740), and VviGATA26 exhibited an expression pattern opposite to that of the two target genes at the transcriptional level (Figure 7B). Both VviPSY and VviZDS are crucial enzymes in the biosynthesis of the norisoprenoid precursor carotenoids [40]. As the rate-limiting step of carotenogenesis, VviPSY1 has garnered much attention, and numerous strategies targeting PSY1 have been applied to increase carotenoid concentration in tomato [41]. Over-expression of AtZDS in tomato resulted in an increase of the carotenoid all-trans-lycopene, and reduced carotenoid content was found in ZDS-repressed fruit [42]. Secondly, the expression of these two target genes, especially VviPSY1, was inversely paralleled by the accumulation of (E)-β-damascenone, (Z)-β-damascenone, and total norisoprenoids (Table S8). Thirdly, higher expression of VviGATA26 was observed in NAA-treated berries at the E-L 36 stage compared to the control (Figure 7B). Combining these three points, we hypothesize that VviGATA26 could positively respond to the NAA treatment and be triggered to down-regulate the two upstream genes, resulting in the increased accumulation of norisoprenoids. In addition, VviGATA26 expression was positively correlated with VviCCD4a, VviPSY2, VviPSY3, and VviZEPs (Figure 7B), but VviGATA26 cannot bind to the promoter regions of these genes according to DAP-seq. A possible explanation is that VviGATA26 indirectly affects the expression of these genes via regulatory cascades. Two TFs, VviMYB RADIALIS (VIT_207s0005g02730) and VviMADS-box (VIT_213s0158g00100), were also predicted to be involved in norisoprenoid accumulation based on their co-expression patterns with VviPSY3. Both the MYB and MADS-box transcription factor families have been demonstrated to be associated with modulation of the ripening process [43,44]. Our previous study elucidated that MADS4 (VIT_201s0010g03900) can participate in the regulation of norisoprenoid accumulation by negatively regulating VviCCD4b [12]. In citrus, a MADS transcription factor, CsMADS6, was reported to be able to bind to the promoter of PSY and up-regulate its expression [45]. However, the present data are far from sufficient to support our hypothesis regarding the regulatory function of VviMYB RADIALIS and VviMADS-box in norisoprenoid biosynthesis. In future work, genome-editing techniques or transgenic grapevines will be used to verify this point.
AS, an important form of post-transcriptional gene regulation, has been reported to occur in 8668 of the 29,150 v2-predicted genes in grape berry [9]. As observed in grape and Arabidopsis [11,46], AS contributes significantly to transcriptional complexity and should be taken into consideration when performing genome-wide transcriptomic studies. The expression of the isoforms of VviDXS and VviCRTISO was significantly altered in the comparison of NAA and control, which may affect the protein properties and ultimately contribute to higher norisoprenoid levels. It would be interesting and useful to further investigate whether the AS-produced transcripts are biologically functional, although how to distinguish aberrant from functional splicing remains an unresolved question [47]. The integration of transcriptomics and metabolomics with proteomics could serve as a starting point to identify functional AS events in grape and other plants.
ABA and NAA Treatments and Sampling
Clusters of Vitis vinifera L. cv. Cabernet Sauvignon grapevines, cultivated at the Shanxi Academy of Agricultural Sciences Pomology Institute, Shanxi, China, in 2015 and 2016, were used for the study. Solutions of 1000 mg/L ABA, 800 mg/L ABA, and 100 mg/L NAA, each containing 0.05% Tween 20, were applied at seven weeks after flowering (berries still hard and green, E-L 33 stage), with 0.05% Tween solution as the control. The solutions were sprayed at sunset to avoid their rapid evaporation. A second application was conducted ten hours after the first. A randomized block design was used for this study, and each treatment or control was replicated in three plots of 50 vines. Each replicate included 500 berries collected from 50 vines. Berry sampling was performed at four E-L stages (E-L 34, E-L 35, E-L 36, and E-L 38) according to the modified E-L system [48]. Leaves at the E-L 34 stage were collected for DNA extraction. All samples were kept on dry ice and immediately transferred to the laboratory. Before extraction, the samples were washed with distilled water to remove unabsorbed ABA and NAA residues from the berry surface. Approximately 30 berries were used for total soluble solids (TSS) and titratable acidity analysis, and the others were frozen in liquid nitrogen and stored at −80 °C for further analysis.
Measurements of Total Soluble Solids and Titratable Acid
Total soluble solids were measured using a digital handheld pocket Brix refractometer (PAL-2, ATAGO, Tokyo, Japan). Titratable acidity (expressed as g tartaric acid equivalents per liter of juice) was determined by titrating the grape juice to pH 8.2 with NaOH.
RNA Extraction and Sequencing
The berry samples used for RNA sequencing included berries of the control, ABA1000-treated, and NAA-treated vines at the E-L 34, E-L 35, E-L 36, and E-L 38 stages in 2015. Total RNA was isolated using a plant RNA isolation kit following the manufacturer's protocol (Sigma RT-250, St. Louis, MO, USA), and the quality and quantity of the RNA were assessed with a Qubit 2.0 fluorometer RNA Assay Kit (Invitrogen Inc., CA, USA) and an Agilent 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA). In total, 36 RNA-seq libraries were constructed (three biological replicates per sample) and sequenced on an Illumina HiSeq X Ten (Illumina Inc., San Diego, CA, USA) to yield 150-bp paired-end reads.
Extraction and Determination of ABA and IAA
For each biological replicate, 10 g of deseeded grape berries was ground into powder under liquid nitrogen. The extraction and quantification of ABA and IAA were performed according to a published method [49]. Briefly, 100 mg of powder was weighed into a 1.5 mL centrifuge tube, 750 µL of water was added, and the tube was placed in an ultrasonic cleaner for 30 min. After centrifugation at 15,000 rpm for 10 min, the supernatant was transferred to a new tube, 750 µL of MeOH-ACN (1:1, v/v) was added, and the sonication and centrifugation were repeated. A 400 µL aliquot of the combined supernatant was dried in a vacuum concentrator, re-dissolved in 80 µL of MeOH-H2O (1:1, v/v), filtered through a 0.1 µm membrane, and transferred to vials for LC-MS analysis. Quantification was performed on a UPLC-HRMS system (UPLC, ACQUITY UPLC H-Class Bio, Waters; MS, Q-Exactive, Thermo Scientific, Bremen, Germany) equipped with a heated electrospray ionization (HESI) source. UPLC separation was performed on a BEH C18 column (2.1 × 100 mm, 1.7 µm) at a flow rate of 0.3 mL min−1. The mobile phases were composed of 0.1% FA in water (phase A) and 0.1% FA in ACN (phase B). The following gradient program was applied: 95% A at 0 min to 55% A at 7 min, then 5% A at 10 min, held for 4 min, before returning to the initial conditions. ABA and IAA were used as external standards to quantify the ABA and IAA levels in grape berries.
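For clarity, the gradient program above can be laid out as a small table; the R sketch below is simply an illustrative encoding of the time points stated in the text, not part of the published workflow.

```r
# The UPLC gradient from the text as a data frame: time (min) vs. % phase A;
# phase B is the complement. The 5% A step is held from 10 to 14 min.
gradient <- data.frame(
  time_min = c(0, 7, 10, 14),
  phase_A  = c(95, 55, 5, 5)
)
gradient$phase_B <- 100 - gradient$phase_A
print(gradient)
```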
Analysis of Norisoprenoids in Berries Using SPME-GC-MS
About 50 g of berries with the seeds removed was blended with 1 g of PVPP and ground into powder in liquid nitrogen. The extraction of free and total norisoprenoids was conducted following published research with some modifications [50]. For free norisoprenoids, 1 g of berry powder was weighed into a 20 mL autosampler vial, to which 5 mL of citrate buffer (0.2 M, pH 3.2, saturated with NaCl) and 10 µL of internal standard (1.008 mg/L 4-methyl-2-pentanol) were added. The vials were then tightly capped and equilibrated at 50 °C in a thermostatic bath for 15 min. To extract total norisoprenoids, 1 g of berry powder was placed into a 20 mL autosampler vial with 5 mL of citrate buffer (0.2 M, pH 2.5, saturated with NaCl) and 10 µL of internal standard (1.008 mg/L 4-methyl-2-pentanol) added. The vials were then tightly capped and equilibrated at 99 °C in a thermostatic bath for 1 h.
The volatile compounds were extracted by headspace solid-phase microextraction (HS-SPME) using a 2 cm DVB/CAR/PDMS 50/30 µm SPME fiber (Supelco, Bellefonte, PA, USA) at 40 °C for 30 min with stirring. An Agilent 6890 gas chromatograph coupled with an Agilent 5975C mass spectrometer was used to analyze the volatile compounds in the samples according to the method described by Wang et al. [51]. Compound separation was achieved with an HP-INNOWAX capillary column (60 m × 0.25 mm × 0.25 µm, J & W Scientific, Folsom, CA, USA). Volatile compounds with available reference standards were identified by comparing their mass spectra and retention times with those of the standards, whereas compounds without reference standards were tentatively identified by comparing their retention indices and mass spectra with the NIST11 database. Quantitation followed our published method [52]: volatiles with available standards were quantified against their reference standards, whereas volatiles without available standards were quantified using standards that had the same functional groups and/or similar numbers of carbon atoms.
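As an illustration of the external-standard quantification logic described above, the R sketch below fits a calibration curve and converts a sample peak-area ratio into a concentration; all values and variable names are hypothetical placeholders rather than data from this study.

```r
# Hypothetical five-point calibration: standard concentration (ug/L) vs.
# analyte/internal-standard peak-area ratio.
std_conc  <- c(0.5, 1, 5, 10, 50)
std_ratio <- c(0.02, 0.05, 0.24, 0.49, 2.45)
cal <- lm(std_ratio ~ std_conc)                 # linear calibration curve

# Convert a sample's area ratio to a concentration by inverting the line.
sample_ratio <- 0.80
(sample_ratio - coef(cal)[1]) / coef(cal)[2]    # estimated concentration, ug/L
```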
RNA Isolation, Cloning, and Expression of VviGATA26
Total RNA was extracted from grape berries using a plant RNA isolation kit (Sigma-Aldrich, St. Louis, MO, USA) according to the manufacturer's instructions. The quality and concentration of the RNA were assessed by agarose gel electrophoresis and with a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, MA, USA). First-strand cDNA was synthesized from 1 µg of total RNA in a 20 µL reverse transcription reaction mixture following the protocol of HiScript II Q RT SuperMix for qPCR (+gDNA wiper) (Vazyme, Nanjing, China). PCR cloning of full-length VviGATA26 was performed in a total volume of 25 µL containing 1 µL of cDNA template, 2 µL of RT-PCR primers, 12.5 µL of 2× Taq PCR MasterMix (KT201) (Tiangen Biotech, Beijing, China), and 9.5 µL of ddH2O. A pair of primers (forward: ATGGTACCTTCAAGGAAGAG, reverse: TCAGGGACGCAAAAGATGTG) was designed using Primer 5.0 based on the nucleotide sequence of VviGATA26 (XM_003635597.2) from NCBI. The PCR product was gel-purified and then ligated into the pMD18-T vector (Takara, Beijing, China) for DNA sequencing. The coding sequence of VviGATA26 was cloned into a pFN19K HaloTag T7 SP6 Flexi expression vector. The TNT SP6 Coupled Wheat Germ Extract System (Promega, Madison, WI, USA) was used for Halo-VviGATA26 fusion protein expression following the manufacturer's specifications, in a 50 µL reaction with a 2 h incubation at 37 °C. Expressed proteins were directly captured using Magne HaloTag Beads (Promega, Madison, WI, USA).
DAP Affinity Purification Sequencing
Genomic DNA was extracted from leaf tissues of Cabernet Sauvignon grapevines using a one-step plant DNA extraction reagent (Bio Teke, Beijing, China). The DNA was dissolved in 50 µL of Tris-EDTA buffer. DAP-seq binding assays were performed as described in a previous study [53]. Sequencing was performed on an Illumina NovaSeq. Reads were mapped to the grape reference genome with Bowtie2 and annotated against the V2.1 version (http://genomes.cribi.unipd.it/grape/). Peak calling was conducted using MACS2. DAP-seq peaks located within 2 kb upstream or downstream of transcription start sites were assigned to genes using HOMER, based on the General Feature Format (GFF) files. Gene functions were annotated by searches against the NT, NR, Swiss-Prot, and Pfam databases. FASTA sequences for motif analysis were obtained using BEDTools, and motif discovery was performed with the MEME-ChIP suite (http://meme-suite.org/tools/meme-chip).
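To make the peak-to-gene assignment step concrete, the R/Bioconductor sketch below shows one way to select peaks lying within 2 kb of a transcription start site; the coordinates and the second gene ID are invented for illustration, and the published analysis used HOMER rather than this code.

```r
library(GenomicRanges)

# Invented example coordinates; the real analysis used HOMER on MACS2 peaks.
peaks <- GRanges("chr4", IRanges(start = c(1200, 56000), width = 300))
genes <- GRanges("chr4", IRanges(start = c(2500, 90000), width = 4000),
                 strand = c("+", "-"),
                 gene_id = c("VIT_204s0079g00680", "VIT_204s0000g00001"))

# Strand-aware resize() places the TSS at the 5' end of each gene model.
tss <- resize(genes, width = 1, fix = "start")

# Keep peaks whose nearest TSS lies within 2 kb.
hits <- distanceToNearest(peaks, tss)
assigned <- hits[mcols(hits)$distance <= 2000]
data.frame(peak     = queryHits(assigned),
           gene     = mcols(tss)$gene_id[subjectHits(assigned)],
           distance = mcols(assigned)$distance)
```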
RT-PCR Analysis of Alternative Splicing
Reverse transcription polymerase chain reaction (RT-PCR) splicing analysis of VviDXS and VviCRTISO was performed in a 10 µL reaction volume. Each reaction contained 1 µL of cDNA template, 0.5 µL of forward primer, 0.5 µL of reverse primer, 3 µL of ddH2O, and 5 µL of 2× Premix Ex-Taq polymerase (Takara). The cycling conditions were 98 °C for 30 s, followed by 40 cycles of 95 °C for 10 s, 60 °C for 30 s, and a 60 s extension at 72 °C, with a final 10 min extension at 72 °C. The PCR amplicons were analyzed by 2% agarose gel electrophoresis. The primers were designed from the upstream and downstream exons of the skipped exon or retained intron (Table S9).
Data Analysis
Data are expressed as the mean ± standard deviation of triplicate tests. One-way analysis of variance (ANOVA) with Duncan's multiple range test (DMRT) at a significance level of 0.05 was performed to compare means, using the R package "agricolae". The average number of clean reads generated by RNA sequencing was 68.04 million. Clean reads were mapped to the grape reference genome (http://genomes.cribi.unipd.it/grape/) using TopHat. The read mapping rates exceeded 70% for all RNA-seq libraries (Table S10), indicating that the sequencing quality was sufficient for further data mining. The reads were assembled into 25,280 genes. Normalized gene expression was calculated as Reads Per Kilobase per Million mapped reads (RPKM). The transcriptomic data are available in the NCBI Gene Expression Omnibus repository (http://www.ncbi.nlm.nih.gov/geo/) under accession number GSE150343. Differentially expressed genes (DEGs) were identified with the R package "DESeq2", with significance judged by a false discovery rate ≤ 0.01 and an absolute log2Ratio ≥ 1. K-means analysis was performed using the R packages "factoextra" and "stats", and WGCNA was conducted with the R package "WGCNA". Hierarchical clustering analysis of metabolites was performed using the R package "ComplexHeatmap". All data in the present study were analyzed in the open-source R statistical computing environment (3.6.2).
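As a hedged sketch of two of the analysis steps named above, the R code below shows (i) ANOVA followed by Duncan's test with "agricolae" and (ii) filtering DESeq2 results by the stated thresholds (FDR ≤ 0.01, |log2Ratio| ≥ 1); the objects `df`, `counts`, and `samples` are placeholders, not objects from this study.

```r
library(agricolae)
library(DESeq2)

# (i) One-way ANOVA with Duncan's multiple range test at alpha = 0.05,
#     assuming a data frame `df` with columns `value` and `treatment`.
model <- aov(value ~ treatment, data = df)
duncan.test(model, "treatment", alpha = 0.05, console = TRUE)

# (ii) DEG calling with DESeq2 under the stated thresholds, assuming a raw
#      count matrix `counts` and a sample table `samples`.
dds <- DESeqDataSetFromMatrix(countData = counts, colData = samples,
                              design = ~ treatment)
dds <- DESeq(dds)
res <- as.data.frame(results(dds, contrast = c("treatment", "NAA", "control")))
degs <- subset(res, padj <= 0.01 & abs(log2FoldChange) >= 1)
```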
Conclusions
In the present study, we characterized the roles of endogenous ABA and auxin biosynthesis and signaling, along with previously identified switch genes, in the ABA- and NAA-induced ripening changes. The responses of free and total norisoprenoids in Cabernet Sauvignon grape berries to ABA and the synthetic auxin were revealed and interpreted at both the transcriptional and post-transcriptional levels. GATA26 (VIT_200s2393g00010), MYB RADIALIS (VIT_207s0005g02730), and MADS-box (VIT_213s0158g00100) were identified as potential regulators of norisoprenoid accumulation by WGCNA and DAP-seq. VviGATA26 was inferred to down-regulate the expression of its targets VviPSY1 and VviZDS, and to positively regulate the expression of VviCCD4a, VviPSY2, VviPSY3, and two VviZEPs, thereby benefiting norisoprenoid accumulation. Future studies will focus on the molecular mechanism by which GATA26 regulates the metabolic flux from carotenoids to norisoprenoids and on the inducing role of auxin in this mechanism. From the perspective of viticulturists and winemakers, the present findings may also suggest ways to improve the concentration of norisoprenoids in Cabernet Sauvignon grape berries.
Supplementary Materials: Supplementary Materials can be found at https://www.mdpi.com/1422-0067/22/3/1420/s1, Figure S1. K-means analysis clustering of switch genes; Figure S2. Expression of VviCCD4a and VviCCD4b during berry development and ripening; Table S1. Functional annotation of differentially expressed switch genes; Table S2. Total concentrations (µg/kg) of individual norisoprenoids in 2015 and 2016; Table S3. Hub genes identified in the modules related to norisoprenoids; Table S4. Potential target genes of transcription factor VviGATA26 in plant signal transduction; Table S5. Genes in the WGCNA-identified modules 'turquoise', 'cyan' and 'black'; Table S6. Differential splicing ES events between treatments and control; Table S7. Differential splicing IR events between treatments and control; Table S8. Pearson correlation coefficients r. Values in bold show significant correlations (t-test, α = 0.05); Table S9. Primers for RT-PCR splicing analysis; Table S10. Summary of RNA-seq mapping statistics. | 2021-02-08T05:40:10.581Z | 2021-01-31T00:00:00.000 | {
"year": 2021,
"sha1": "50b852c03e973a9136f9c34e368810ba588929d7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/ijms22031420",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50b852c03e973a9136f9c34e368810ba588929d7",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3608196 | pes2o/s2orc | v3-fos-license | Kinetics of Microbial Translocation Markers in Patients on Efavirenz or Lopinavir/r Based Antiretroviral Therapy
Objectives We investigated whether different antiretroviral therapy (ART) regimens differ in their effects on microbial translocation (MT) and enterocyte damage after 1.5 years, and whether antibiotic use has an impact on MT. In a randomized clinical trial (NCT01445223) on first-line ART, patients started either lopinavir/r (LPV/r) (n = 34) or efavirenz (EFV)-containing ART (n = 37). Lipopolysaccharide (LPS), sCD14, anti-flagellin antibodies, and intestinal fatty acid binding protein (I-FABP) levels were determined in plasma at baseline (BL) and week 72 (w72). Results The levels of LPS and sCD14 were reduced from BL to w72 (157.5 pg/ml vs. 140.0 pg/ml, p = 0.0003; 3.13 µg/ml vs. 2.85 µg/ml, p = 0.005, respectively). The levels of anti-flagellin antibodies had decreased at w72 (0.35 vs. 0.31 [OD]; p<0.0004), although significantly only in the LPV/r arm. I-FABP levels increased at w72 (2.26 ng/ml vs. 3.13 ng/ml; p<0.0001), although significantly in EFV-treated patients only. Patients given antibiotics at BL had lower sCD14 levels at w72, as revealed by ANCOVA, compared with those who did not receive antibiotics (Δ = −0.47 µg/ml; p = 0.015). Conclusions Markers of MT and enterocyte damage are elevated in untreated HIV-1 infected patients. Long-term ART reduces these levels, except for I-FABP, whose role as a marker of MT is questionable in ART-experienced patients. Why the enterocyte damage seems to persist remains to be established. Antibiotic usage may also influence the kinetics of the markers of MT. Trial Registration ClinicalTrials.gov NCT01445223
Introduction
A sustained control of human immunodeficiency virus type 1 (HIV-1) replication is achieved by antiretroviral therapy (ART) in the majority of patients, reducing plasma HIV-1 load to undetectable levels. However, HIV-1 persists in reservoirs such as latently infected CD4+ T cells [1,2] and in body compartments with limited penetration of antiretroviral drugs. It is hypothesized that these reservoirs are refilled by the low-grade viral replication seen in patients on suppressive ART who otherwise have undetectable viremia by routine assays [3,4]. Translocation of bacterial products across a damaged gut-blood barrier has been proposed as one important mechanism for the persistence of chronic immune activation, which is found also in well-treated patients [5,6]. There is growing evidence that this immune activation may contribute to the low-grade viremia and to, e.g., cardiovascular and CNS complications [7,8,9].
Several markers are used to assess microbial translocation (MT) in patients with HIV or inflammatory bowel disease [10], such as microbial products [lipopolysaccharide (LPS), plasma bacterial 16S rDNA, anti-flagellin antibodies] [11], markers of the systemic response to bacterial products (sCD14, LPS-binding protein), and markers of enterocyte damage [intestinal fatty acid binding protein (I-FABP)] [12,13]. During successful ART, MT and systemic immune activation are usually reduced, but not normalized, suggesting that the damage to the gut-blood barrier is only partly restored [5]. The reasons why this improvement varies between patients are not known. The origin of low-level viremia in patients on suppressive ART has been disputed. Residual virus replication in anatomical compartments like the gut could be one of the explanations. Firstly, levels of HIV-1 DNA and RNA were substantially elevated in the gut compared with peripheral blood in patients on ART with <40 HIV-1 RNA copies/ml, indicating that the gut may serve as a potential source of viremia during suppressive ART [14]. Additionally, in a set of patients on long-term ART, levels of HIV DNA in the sigmoid colon were positively correlated with plasma LPS levels [15], suggesting a connection between residual viremia and MT. In a cross-sectional study of patients with persistently undetectable HIV-1 RNA, a higher proportion of participants treated with nevirapine and efavirenz achieved <2.5 HIV-1 RNA copies/ml compared to lopinavir/r-based ART [16]. Given the link between MT and low-level viremia, we assumed that the choice of ART could differentially affect the kinetics of MT markers.
In the present study, we analyzed the levels of LPS, sCD14, I-FABP, and anti-flagellin antibodies at baseline (BL) and after 72 weeks (w72) of ART in a controlled randomized clinical trial in which the patients received either lopinavir/r + 2 nucleoside analogues (NRTI) or efavirenz + 2 NRTI. Additionally, we studied whether ongoing antibiotic treatment had an impact on the explored parameters.
Subjects
During 2004-2007, 239 HIV-1 infected subjects received the allocated intervention after written consent in a Scandinavian randomized clinical phase IV efficacy trial (RCT) (ClinicalTrials.gov identifier: NCT01445223). The protocol for this trial and the supporting CONSORT checklist are available as supporting information; see Checklist S1 and Protocol S1. The study protocol was approved by the Regional Ethics Committee (Gothenburg Ö 739-03). The study design and participants have been described elsewhere [17,18]. In our substudy, the patients were randomized to receive either efavirenz (EFV) + 2 NRTI once daily (n = 37) or ritonavir-boosted lopinavir (LPV/r) + 2 NRTI twice daily (n = 34) (Figure 1). In total, 59 patients were excluded from our analysis because of insufficient remaining plasma volume after the main study analyses. CD4+ T-cell count, viral load (VL), rate of hepatitis B/C co-infection, and age at BL were similar in the excluded patients compared with the substudy group.
Data on antibiotic therapy were available for 63 patients, of whom 29 were given antibiotics at baseline (BL) (n = 27) and/or week 72 (w72) (n = 10), while 34 had not received antibiotics at either of the two time points (Table 1). At BL, the patients received cotrimoxazole (TMP-SMX) as prophylaxis against Pneumocystis jiroveci pneumonia (PCP) (n = 24) or for treatment of pneumonia (n = 2), or clindamycin (n = 1). In addition, two were on Mycobacterium tuberculosis treatment, and one was on fluconazole. At w72, TMP-SMX was given as PCP prophylaxis (n = 9), and one patient received nitrofurantoin; 8 of these 10 had had TMP-SMX at baseline.
Microbial Translocation Markers
Plasma samples obtained on the sampling day were frozen at −80 °C and later thawed. The analyses were performed blinded to clinical data and treatment outcome. Levels of LPS were measured by limulus amebocyte lysate assay (LAL, Lonza, Maryland, USA) as previously described [5]. sCD14 and I-FABP levels were determined by enzyme-linked immunosorbent assays (R&D Systems, USA, and DS Pharma Biomedical Co, Japan, respectively), according to the manufacturers' instructions [19]. Samples from BL and w72 from the same patient were assayed on the same plate.
Antibody titers to flagellin and total IgG levels were assessed by an in-house anti-flagellin-specific IgG ELISA [20] using purified flagellin monomers from S. typhimurium (InvivoGen, USA). It is known that human sera have a similar recognition pattern for flagellin monomers whether isolated from flagellated E. coli or S. typhimurium [21]. Briefly, microwell plates (MWP) were coated overnight with purified flagellin from S. typhimurium (25 ng/well). The following day, plasma samples from HIV-1 patients were diluted 1:1000 and applied to the MWP. After incubation and washing, the MWPs were incubated with HRP-conjugated anti-human IgG. For the total IgG ELISA, the manufacturer's procedure was followed (MABTECH, Nacka, Sweden).
Statistical Analysis
Data were analyzed using GraphPad Prism v. 5.02 and R 2.13.1. Independent groups were compared using the Mann-Whitney U-test, and paired data were analyzed with the Wilcoxon signed-rank test. Correlations were analyzed by Spearman's rank test. Differences in LPS, sCD14, I-FABP, and anti-flagellin IgG levels between patients with or without antibiotics were analyzed with ANCOVA using the covariates age, sex, log viral load, and CD4+ T-cell count at BL or w72, and residual plots were inspected. Model selection was done by backward elimination, with removal of variables if P > 0.1.
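A minimal R sketch of this model-selection procedure is given below, assuming a data frame `d` with hypothetical column names; because base R's automated selection (step()) is AIC-based, the p-value-driven elimination is shown as an explicit inspect-and-drop step.

```r
# Full ANCOVA: week-72 sCD14 by antibiotic exposure, adjusted for the
# covariates named above; `d` and its column names are hypothetical.
full <- lm(sCD14_w72 ~ antibiotics + age + sex + log_vl + cd4, data = d)

# Backward elimination by p-value: inspect F-tests, drop the covariate with
# the largest p > 0.1, refit, and repeat until all remaining terms pass.
drop1(full, test = "F")
reduced <- update(full, . ~ . - sex)   # example refit after one elimination
summary(reduced)

# Group comparisons and correlations used elsewhere in the paper:
wilcox.test(d$lps_bl, d$lps_w72, paired = TRUE)       # Wilcoxon signed rank
cor.test(d$lps_w72, d$flag_w72, method = "spearman")  # Spearman rank test
```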
HIV RNA Load and Recovery of CD4+ T-cells
Following initiation of ART, all patients achieved plasma HIV-RNA <50 copies/mL within 24 weeks, except for five patients (all <250 c/mL). Four of these reached undetectable HIV-RNA at w72. Two patients had a relapse with detectable HIV-RNA at w72 (190 and 640 c/mL, respectively). No statistically significant differences were found at BL or follow-up between the two arms. At w72, the CD4+ T-cell count tended to be higher in the LPV/r group compared with the EFV group (data not shown). There was no correlation between the change in CD4+ T-cells and the change of any of the markers between BL and w72.
Decrease of LPS and sCD14 after 72 Weeks of ART
The overall plasma levels of LPS were reduced at w72 compared to BL (157.5 pg/ml vs. 140.0 pg/ml; p = 0.0003), and the sCD14 levels showed a similar decline (3.13 µg/ml vs. 2.85 µg/ml; p = 0.005). As expected, the total IgG concentrations were also reduced at w72.
Antibiotic Treatment and Markers of Microbial Translocation
We explored the impact of antibiotics on markers of MT with three comparisons using ANCOVA, adjusting for significant covariates.
Discussion
The microbial translocation (MT) markers assessed in our study (LPS, sCD14, I-FABP, anti-flagellin antibodies) were all increased in plasma before treatment, and ART influenced their levels during the follow-up period of 72 weeks. The reliability of our results is strengthened by the fact that the patients were included in a randomized clinical trial and were followed closely before, during, and after the study. Based on the randomization, we found that the effects on the selected MT parameters could partly differ depending on the type of ART and on whether antibiotics were given or not. We therefore suggest that factors other than HIV treatment itself may be relevant for the kinetics of these biomarkers, which should be considered when interpreting studies on MT in HIV-1 infected patients.
Data from our RCT cohort showed that the plasma levels of LPS were significantly reduced after 72 weeks of ART, with a similar pattern in patients treated with NNRTI- or PI/r-containing therapy. Similar observations on the decrease of plasma LPS after ART have been reported in some [5,6,22], but not all, studies [23,24]. Although estimating MT by measuring plasma LPS levels is common, both LAL assay variability and patient cohort heterogeneity (e.g., diverse rates of opportunistic conditions or hepatitis co-infection) may lie behind the differing results. For example, in a recent work, the kinetics of MT markers (measured as LPS and sCD14 levels) differed according to the severity of the pre-ART CD4+ T-cell count [25].
The effect on sCD14 has also been disputed, with studies showing a decline [23,26] or an increase [22,24] after initiation of ART. In our study, the sCD14 levels were reduced during ART with a uniform pattern in the two treatment arms, which is in line with a decreased MT during 72 weeks of ART. It has also been suggested that sCD14 independently predicts mortality in HIV-1 infected patients when adjusting for other key markers [13]. We found a strong inverse correlation between CD4+ T-cells and sCD14, as well as a direct correlation between sCD14 and VL [27]. These findings support the hypothesis that higher viral replication is associated with a more profound inflammatory response (reflected by sCD14) and leads to lower CD4+ T-cell counts, which also reflect poorer surveillance of gut microbes. A vicious circle of viral replication and inflammation is created and maintained by the "leaky gut", as suggested by Douek et al. [28]. One should acknowledge that sCD14 levels are also elevated in other infections (RSV, Dengue virus, mycobacteria) and inflammatory conditions [29,30]. Thus, although analysis of sCD14 is important for the evaluation of MT in HIV-1 infected patients, it cannot be considered a specific marker for MT alone.
Our untreated patients exhibited elevated plasma levels of I-FABP compared with historical healthy controls [19], which is in line with the suggestion that I-FABP can be applied as a marker of enterocyte damage, and possibly of MT, in HIV-1 infected patients [19]. In fact, the levels were similar to those reported in inflammatory bowel disease (IBD) [31], supporting the presence of significant damage to the enterocytes in HIV-1 patients, at least in those with relatively advanced immunodeficiency. To our knowledge, no data on the effect of ART on I-FABP levels have previously been published. Surprisingly, we found that the I-FABP levels increased in the whole cohort despite 72 weeks of efficient ART. A subgroup analysis revealed that the I-FABP increase occurred in the patients treated with EFV, but not in those on LPV/r. This difference between the two treatment arms could not be explained by any coexisting liver failure, gastrointestinal disease (inflammatory bowel disease, gastroenteritis), or reported side effects such as diarrhea (data not shown). The finding was unexpected, particularly in view of the reports claiming a lower level of residual viremia in EFV-treated patients compared with those on a PI/r regimen [16,32,33]. Although I-FABP has been firmly described as a marker of enterocyte damage in other diseases [34,35], we cannot exclude that the increased I-FABP concentrations in our patients might be related to other events such as drug toxicity or metabolic lipid changes. However, the available laboratory blood and lipid-profile data (baseline and follow-up) from our patients did not support these relationships (data not shown). Another hypothesis is that EFV may induce oxidative stress-related cell apoptosis [36,37]; conversely, anti-apoptotic effects have been attributed to PIs [38,39]. The CD4+ T-cell recovery did not differ significantly between the treatment arms, but tended to be higher in the LPV/r group. The possibility of immune reconstitution in the gut as a cause of the increasing I-FABP levels in the EFV group thus seems less likely. Further studies are required to determine whether the difference reported by us is related to variation in apoptosis or to an enhanced turnover of intestinal epithelial cells with consequent shedding of I-FABP. In the light of our report, the use of I-FABP for evaluating MT in patients introduced to ART seems questionable, as its kinetics differed from those of the other markers of MT. Most likely, the systemic I-FABP levels in patients on ART do not reflect MT alone. Further studies, including intestinal biopsies, should address these questions.
Bacterial flagellin is known as a microbial compound with strong immunomodulatory properties and an essential role in conditions with intestinal damage, like IBD. Hence, flagellin is regarded as a dominant immune antigen in Crohn's disease (CD), where antibodies to bacterial flagellin (anti-CBir1) are detected in about half of the patients [40]. Recently we found elevated levels of flagellin-specific IgG in three cohorts of HIV-1 infected patients [11]. Additionally, two years of ART reduced the levels of anti-flagellin IgG, although they were not normalized [20]. In the present study, we confirm and expand our previous observations. Thus, the baseline anti-flagellin levels were increased in all patients and decreased significantly after 72 weeks of ART. However, stratifying the cohort by treatment arm showed that the significant decrease occurred in the LPV/r-containing arm only. This effect was not due to general polyclonal B-cell activation, as normalization of the specific anti-flagellin antibodies to total IgG levels yielded similar results. Our hypothesis is therefore that the decline of the anti-flagellin antibodies was due to decreased exposure to flagellin itself, as a consequence of restoration of the gut-blood barrier. We also argue that the positive correlation between anti-flagellin antibodies and LPS supports the credibility of anti-flagellin antibodies as a marker of MT, theoretically more specific than, e.g., sCD14, which is a polyclonal marker by nature [41]. The relevance of the observed difference between the treatment arms is not clear. It is thus possible that, since the levels of anti-flagellin antibodies tended to decrease in both arms, differences in time kinetics between the arms could result in a faster decline in the LPV/r group.
Antibiotic treatment leads to changes in the gut microbiota [42,43]. Also, plasma LPS levels are reduced in SIV-infected macaques treated with "gut sterilizing" antibiotics [6]. However, so far the impact of antibiotics has not been considered in studies on MT in HIV-1 infected patients. In our study, antibiotic treatment was found to influence the sCD14 levels. ANCOVA analysis revealed that former use of antibiotics at baseline, as well as ongoing antibiotic treatment, lowered sCD14 levels at week 72 compared with non-antibiotic-treated individuals. Additionally, the group using antibiotics at baseline had a larger decrease in sCD14 levels up to week 72. A greater reduction of sCD14 levels in antibiotic-treated patients could be due to decreased LPS signaling as a consequence of an altered gram-negative intestinal flora. The use of TMP-SMX prophylaxis in ART-naïve patients has been associated with a lower annual increase in HIV-1 RNA [44]. Antibiotics like minocycline decrease monocyte activation and can dampen general inflammation, as shown in an SIV model [45]. Moreover, rifaximin, a broad-spectrum antibiotic with minimal systemic uptake, has recently been shown to affect endotoxinemia in patients with decompensated cirrhosis: plasma LPS levels were significantly reduced after an 8-week course of this antibiotic [46]. Together, these findings suggest that antibiotic treatment may influence the levels of MT markers, potentially introducing a bias in studies that do not compensate for its use.
Collectively, we found that markers of MT were reduced after 72 weeks of ART. EFV- and LPV/r-based ART, which had similar virological outcomes, showed somewhat different effects on the kinetics of the MT markers. Thus, although the profiles of the established markers LPS and sCD14 were concordant between arms after 72 weeks of treatment, in the patients on EFV-containing therapy the levels of I-FABP increased and the anti-flagellin antibodies were not significantly reduced. Our data underscore the importance of unraveling the multifaceted correlations between microbial translocation, immune activation, antibiotic usage, HIV-1 replication, and the chosen ART. Further longitudinal studies should address this complex issue.
Supporting Information
Checklist S1 CONSORT Checklist. | 2016-05-12T22:15:10.714Z | 2013-01-28T00:00:00.000 | {
"year": 2013,
"sha1": "03fef5c0f66f1a27892f01ee54feaf90d51f9720",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0055038&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "03fef5c0f66f1a27892f01ee54feaf90d51f9720",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234182580 | pes2o/s2orc | v3-fos-license | Impact of No-Till technology and winter wheat precursors on soil fertility in arid conditions of Stavropol territory
At present, agricultural production in the region faces one main task: to ensure further growth and greater stability of winter wheat production. The new No-till technology for the arid zone of Stavropol Territory is an optimal option within the system of resource-saving technologies. Its application on the peasant farm of Vodopyanov S.S., under the conditions of dark chestnut soil, became possible with the introduction of a scientifically grounded farming system and sufficient availability of equipment, fertilizers, and pesticides at the enterprise. The study of the effect of the No-till technology and precursors on the agrophysical and agrochemical properties of dark chestnut soil in the arid zone showed that winter wheat crops have the largest productive moisture reserve in the upper layer (0.0–0.20 m) during the booting stage after sunflower and winter rape. The largest amount of agronomically valuable aggregates is noted in winter wheat crops at the booting stage after sunflower, and by the blooming stage it increases for all precursors. The soil density under winter wheat increases down the layers. The number of water-stable aggregates increases up to the firm ripe stage. As nitrogen consumption by plants increases, the amount of nitrate in the soil decreases and reaches its minimum by the firm ripe stage. The maximum concentration of labile phosphorus in the soil is observed during the initial sampling period. Regardless of the precursor, there is a tendency toward pH decrease. Humus and mobile sulfur contents decrease down the soil layers. In general, the No-till technology and precursors lead to a reduction of nitrogen, phosphorus, and potassium in the soil.
Introduction
Successful development of agricultural production is possible not only through the use of climate-adapted cropping patterns, but also through wide introduction of energy-saving and soil-protective technologies for the cultivation of winter wheat. In this regard, the technology of cultivation without soil treatment is of great interest [1]. The main principle of the No-till technology is to use the natural processes that occur in the soil [2]. When No-till is used, the soil is not tilled before sowing or during crop management (ploughing, disking, and cultivation operations are completely absent). Plant residues remain on the soil surface and favor better accumulation and preservation of moisture for winter wheat [3][4][5][6]. The mineralization process in the soil is significantly reduced, which contributes to an increase in soil fertility [7]. Failures in the introduction of low-cost technologies at agribusiness enterprises of the arid zone of Stavropol Territory are largely caused by the lack of a systematic approach to the development of the No-till technology and winter wheat precursors [8,9]. Weather-climatic, logistical, and other factors reduce crop yield and quality [10,11]. The advantages of the No-till technology compared to traditional and minimal technologies are the elimination of water and wind erosion, accumulation of a nutrient medium for soil biota, reduction of mineral fertilizers and
Climatic conditions
The territory of the farm belongs to the arid agroclimatic area of the region. The hydrothermal coefficient equals 0.7–0.9. The main negative factor for increasing crop yields is lack of moisture. The average temperature in January is −3 to −5 °C. There are often prolonged thaws with temperatures up to +2 °C. In winter, eastern winds dominate. Snow height does not exceed 10 cm in lowland areas and 20 cm in elevated areas.
March is the beginning of a rapid increase in daily temperatures, which reach +3–8 °C, with a maximum of +26 °C. Frosts occur every spring. In late March and early April, the average daily temperature is +5 °C. The average monthly temperature in July is 23.7 °C, and the coldest month is January at −4.1 °C. The sum of active air temperatures above +10 °C is 3200–3500 °C (Tables 1, 2). Long-term mean monthly precipitation (mm) is distributed as follows:

Month: VIII IX X XI XII I II III IV V VI VII | Year
Precipitation, mm: 52 33 40 38 32 25 22 23 47 63 74 57 | 506

The duration of the frost-free period is 186 days. During the warm period the average rainfall amounts to 314 mm. Eastern winds prevail, causing dust storms. Dry winds occur on average 68 days per year. The precipitation amount is 40.0–70.0 mm.
Soil-agrochemical characteristic
The territory of the farm is a wide hilly plain with valley-ravine relief, characterized by an alternation of uniform watersheds and valley downsides. The soil cover is mainly represented by dark chestnut soils.
In terms of their agronomic properties, the soils of the farming enterprise are among the best, but differ from chernozem in their lower humus content and profile thickness. The profile (cutting) depth is 96 cm. The soils of the farm are characterized by the following contents of mobile forms of nutrients: humus – low, labile phosphorus – medium, exchange potassium – high. The soil solution reaction is weakly alkaline (Table 3). Loess-like light and medium loam, yellow-brown skeletal light loam, and sandy clay with a thickness of 60–100 m serve as parent rocks. In mechanical composition the soil is light and medium loam. The soil-forming rocks are highly carbonate, enriched with carbonate neoformations in the form of "white-eye" (soft carbonate concretions) and carbonate mold.
Water-physical properties of soil
The study revealed that the largest productive moisture reserve at the booting stage of winter wheat is observed after winter rape as a precursor: in the 0.0–0.20 m layer it amounts to 16.3 mm, which is 2.2 mm more than after sunflower and 4.3 mm more than after grain maize. In the meter layer the productive moisture reserve is also highest after rape, at 98.5 mm, versus 93.9 and 93.5 mm after sunflower and grain maize, respectively (Table 4). By the blooming phase, the productive reserve in both the upper and meter layers is reduced, and by the complete ripeness phase, in the upper 0.0–0.2 m layer, it is 6.3 mm after winter rape, which is 6.0 and 5.8 mm more than after sunflower and grain maize, respectively. In the meter layer the same tendency is observed: the productive moisture reserve decreases toward the complete ripeness phase, amounting to 42.7 mm after rape and 43.9 and 42.2 mm after sunflower and grain maize, respectively.
The structural-aggregate composition of the soil plays a significant role in crop formation [11]. The largest amount of agronomically valuable structure is noted in winter wheat crops grown after grain maize: at the booting stage it makes up 49.1%, which is 0.9% more than after winter rape and 6.2% more than after sunflower. The largest amount of agronomically valuable aggregates is observed at the blooming phase of winter wheat: 53.7% after grain maize, versus 50.7% and 50.3% after winter rape and sunflower, respectively (Table 5). By the complete ripeness phase, the cloddy fraction increases; its largest share is observed after grain maize (46.9%), with 46.3% and 46.7% after winter rape and sunflower, respectively. The structure coefficients are almost unchanged across all precursors. The dust fraction shows slightly higher values (5.3–6.4%) after winter rape as a precursor.
Fertile soil should contain a significant amount of water-stable aggregates. In winter wheat crops, the amount of water-stable aggregates increases up to the complete ripeness phase of the crop. The largest quantity of water-stable aggregates is observed after winter rape as a precursor at the complete ripeness phase, at 70.2%, which corresponds to excellent water stability and is 5.1% more than after sunflower and 1.9% more than after grain maize. The quantity of water-stable aggregates after sunflower and grain maize at the complete ripeness phase ranges within 65.1–68.2%, which corresponds to good water stability (Fig. 1). In the blooming and complete ripeness phases, the soil density likewise increases down the layers and in the 0.20–0.30 m layer reaches 1.27–1.29 g/cm³. The same dependence is observed for sunflower and grain maize as precursors: the soil density increases down the layers in all phases of winter wheat growth and development, and in the complete ripeness phase it reaches 1.28–1.29 g/cm³ in the 0.20–0.30 m layer (Table 6).
Cultivation technology of Bagrat winter wheat
Winter wheat is grown after the following precursors: winter rape, sunflower, and grain maize. After harvesting the precursor, the field carried up to 3 weeds per 1 m² (mainly annual cereals and dicotyledons), no more than 3.0 cm in height. The soil was therefore treated with the herbicide Tornado 500 at a rate of 1.5 l/ha 1–5 days before sowing, using an Amazone UG 3000 sprayer in an aggregate with an MTZ 1221 tractor. Sowing took place on September 25–27 with large, calibrated, dressed seeds at a rate of 4.5 million germinating seeds per 1 ha. The seed dressing used was Dividend Extreme, KS, at a rate of 1.5–2.0 l/t. Seeding was carried out with a BERTINI 8000 DCF in an aggregate with a K-700 tractor to a depth of 5.0 cm. In autumn, at the tillering phase, carbamide-ammonia mixture was applied at a dose of 100 l/ha with a DUPORT liquilazer in an aggregate with an MTZ-1221 tractor.
Upon resumption of spring vegetation, the wheat crops were fertilized with ammonium nitrate at a dose of 100.0 kg/ha using an MTZ 1221 tractor with a РУМ-8 spreader; the second fertilizing, in the booting phase, was with carbamide-ammonia mixture (KAS) at a dose of 150.0 l/ha using a DUPORT liquilazer in an aggregate with an MTZ 1221 tractor.
For protection of winter wheat against weeds, in early spring the crops were treated with the herbicide Pallas 45 MD at a rate of 0.5 l/ha. Spraying was carried out with an Amazone UG 3000 in an aggregate with an MTZ 1221 tractor.
In the leaf formation phase, the insecticide Borey at a rate of 0.1 l/ha and the fungicide Title Duo, RCC, at a rate of 0.32 l/ha were applied; spraying was carried out with an Amazone UG 3000 sprayer in an aggregate with an MTZ 1221. At the complete ripeness phase, when the grain moisture reached 14%, harvesting was carried out with ACROS 530 harvesters.
Agrochemical indicators of soil and nutrient content in plants
Nitrate nitrogen. The study found that the dynamics of the nitrate nitrogen content in winter wheat crops, regardless of precursor, soil layer, and soil cover, had a uniform direction: as nitrogen consumption by plants increased, the amount of nitrate in the soil decreased from the booting phase and reached its minimum by the complete ripeness phase (Table 7).

Ammonium nitrogen. The winter wheat cultivation technology under study had a certain impact on the ammonium nitrogen content of the soil. As Table 8 shows, the highest content of ammonium nitrogen in the soil, compared with the other sampling periods, was observed during the booting phase of winter wheat after sunflower (23.7 mg/kg in the 0.0–10.0 cm soil layer) and grain maize (18.7 mg/kg). Subsequently, during winter wheat vegetation, there was a steady decrease in its content.

Labile phosphorus. The analysis showed that the maximum concentration of the element was recorded at the initial sampling, regardless of the soil layer and crop under study (Table 9). Later, there was a steady decrease in available phosphorus up to the complete ripeness phase of winter wheat (from 22.7 to 16.0 mg/kg in the 0.0–10.0 cm soil layer). In all variants of the experiment, there was a tendency for the studied nutrient to decrease down the profile of the dark chestnut soils.

Exchange potassium. The agrochemical analysis of soil samples revealed peculiarities in the dynamics of exchange potassium accumulation depending on the technology under study and the soil layer. In winter wheat crops after all precursors, a decrease in the nutrient from the booting phase to the blooming phase was recorded, followed by an increase by complete ripeness in the 0.0–10.0 cm soil layer (Table 10).

Table 10. Impact of the No-till technology on the dynamics of exchange potassium content in soil, mg/kg of soil.

The 0.0–10.0 cm soil layer in all variants was characterized by a higher content of exchange potassium throughout the study period relative to the other observed soil layers.
Soil solution reaction. In dark chestnut soils, the application of the No-till technology to winter wheat, regardless of the precursor, contributed to alkalization of the soil horizon in the 21.0–30.0 cm layer relative to the other layers (Table 11). An increase of the pH reaction in the 0.0–10.0 cm soil layer at the complete ripeness phase of wheat relative to the initial value at the booting phase is observed irrespective of the precursor (winter rape – 7.85, sunflower – 7.97, grain maize – 7.85).
Humus. Data on the effect of the No-till technology on the humus content in the soil under the conditions of the studied farm are given in Table 12.
It can be noted that, regardless of the precursors and farm conditions, there was a tendency for the humus content to decrease down the soil layers. During the study, there was a steady increase in humus content in the 0.0–10.0 cm soil layer (winter rape – 2.80%, sunflower – 2.70%, grain maize – 2.80%). Data on the mobile sulfur content are given in Table 13.
Similarly to humus, there was a tendency for the content of mobile sulfur to decrease down the soil layers over the analyzed period. The No-till technology contributed to a higher content of mobile sulfur in the 0.0–10.0 cm soil layer compared with the other layers.
Results
The study of the influence of the No-till technology and precursors on the agrophysical and agrochemical indicators of dark chestnut soils in the arid zone of the territory showed the following:
1) winter wheat has the largest productive moisture reserve in the upper 0.0–0.20 m layer at the booting phase (after sunflower and winter rape); for all precursors the productive moisture reserve decreases toward the complete ripeness phase of winter wheat;
2) the largest amount of agronomically valuable aggregates in winter wheat crops at the booting phase is noted after sunflower; by the blooming phase it increases across all precursors; by the complete ripeness phase the share of agronomically valuable aggregates falls to 42.0–48.5%;
3) the soil density under winter wheat increases down the layers;
4) in winter wheat crops the amount of water-stable aggregates increases up to the complete ripeness phase (especially after winter rape as a precursor);
5) the dynamics of the nitrate nitrogen content in winter wheat crops, irrespective of the precursor and soil layer, has a uniform direction: as nitrogen consumption by plants increases, the amount of nitrate in the soil decreases and reaches its minimum by the complete ripeness phase;
6) the maximum concentration of labile phosphorus in the soil is recorded within the initial sampling period, irrespective of the soil layer;
7) regardless of the precursor, there is a tendency for pH to decrease from the booting phase to the blooming phase, followed by an increase toward the complete ripeness phase; alkalization of the soil horizon in the 21.0–30.0 cm layer is observed compared with the other studied layers;
8) the humus and mobile sulfur contents decrease down the soil layers.
Thus, the study of the influence of the No-till technology and precursors revealed a decrease in the nitrogen, phosphorus, and potassium content in the soil.
"year": 2021,
"sha1": "9d3b538fc692e2f418aba3e88941d00da9dfc7e5",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/624/1/012200",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cbb252d0c61cbcec4251432c39264ec54dab15e6",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
A future with no MVC patients? Impact of autonomous vehicles on orthopaedic trauma may be slow and steady
Abstract Introduction: Orthopaedic trauma results in significant patient morbidity. Autonomous vehicle (AV) companies have invested over $100 billion in product development. Successful AVs are projected to reduce motor vehicle collision (MVC)-related injuries by 94%. The purpose of this study was to estimate the timing and magnitude of AV impact on orthopaedic trauma volume. Methods: ICD-9 codes consistent with acetabulum (OTA 62), pelvis (OTA 61), hip (OTA 31), femur (OTA 32–33), tibia (OTA 41–43), ankle (OTA 44), and calcaneus (OTA 82) fractures and the proportion of cases caused by MVC were taken from the National Trauma Databank (NTDB) 2009–2016. Regression was performed on estimates of market penetration for autonomous vehicles taken from the literature. Results: For NTDB years 2009 to 2016, 300,233 of 987,610 fractures of interest were the result of MVC (30.4%). However, the percentage of MVC mechanism of injury ranged from 9% to 53% depending on fracture type. Regression of estimates of AV market penetration predicted an increase of 2.2% market share per year. In the next 15 years we project 22% market penetration resulting in a 6% reduction in orthopaedic lower extremity trauma volume. Conclusion: Adoption of AVs is projected to produce a 6% reduction in lower extremity orthopaedic trauma volume over a 15-year period. Although this represents a significant reduction in morbidity, the advent of AVs will not eliminate the need for robust orthopaedic trauma programs. The gradual rate of injury reduction will allow hospitals to adapt and reallocate resources accordingly.
Introduction
When IBM's "Deep Blue" chess algorithm beat the world's top chess player in 1997, predictions of smarter-than-human computers were rampant. However, it took nearly 15 years before IBM's "Watson" was able to win Jeopardy! in 2011. [1] Although Watson's natural language processing technology is now taken for granted on our smartphones, Alexa, Siri, and Google Assistant are far from replacing human-to-human interaction, activity performance, and decision-making. However, specific domains once thought untouchable are mastered by artificial intelligence (AI) every year. AI has become superhuman in facial recognition, strategic gaming, and photorealistic style transformation. Now AI companies are focusing on autonomous vehicles (AVs).
Waymo, Tesla, Uber, Ford's Argo AI, Chevy's Cruise Automation, Amazon's Aurora Innovations, Apple's Project Titan, Intel, and Mobileye, in partnership with Chrysler, BMW, Nissan, and VW, are all developing autonomous vehicles. [2] Together they have invested over $100 billion with the intention that driving will be one of the next domains in which computers can consistently outperform humans. [3] Many expect that the computerized mastery of driving will lead to a dramatic reduction in motor vehicle collisions, citing a National Highway Traffic Safety Administration (NHTSA) report that 94% of MVCs are the result of human error. [4] This estimate has yet to be supported with any real-world data or closely scrutinized as an accurate representation of the proportion of injuries that would actually be avoided by autonomous vehicles.
If the projected reduction in MVCs as a result of AVs comes to fruition, it would have a tremendous positive impact on society. Among those impacts would be a reduction in complex orthopaedic trauma. The purpose of this study was to estimate the timing and magnitude of AV impact on lower extremity orthopaedic trauma volume.
Methods
Estimates of autonomous vehicle arrival, market penetration, and reduction in MVCs were taken from literature, periodicals, industry websites, and manufacturers' statements. The proportion of cases caused by MVC was taken from the 2009 to 2016 NTDB. Injuries caused by MVC, or by a pedestrian or bicyclist struck by a motor vehicle, were considered MVC related. MVC-related injuries were considered avoidable by AVs. Motorcycle, ATV, and bicycle collisions were not considered avoidable by AVs even when a motor vehicle was involved in the incident. Independent samples t tests were used for continuous and ordinal variables, with P values less than .001 considered to represent a statistically significant difference. A Pearson chi-square test P value less than .001 was considered to represent a significant difference in categorical variables. Linear regression was used to project the adoption of autonomous vehicles. Binary logistic regressions were used to calculate odds ratios. Multivariate binary logistic regression was attempted for all significant variables. All analysis was performed in SPSS version 25.
Estimate of AV arrival and market penetration
Statements from 5 automotive manufacturers with projected year of release of autonomous vehicles were included in the regression and these points were taken as 1% market penetration in the year predicted. Articles from 10 sources printed between 2015 and 2019 were found with predictions for AV market penetration at various time points. The mean year of predicted arrival was 2023 ± 3.6 years. The mean prediction for advanced market penetration was 88% ± 13.3% by the year 2051 ± 11.7 years (Table 1).
Linear regression of all estimates of market penetration by year revealed an R squared of 0.66 for the equation y = 0.0223x − 45.158, where y is the percent market penetration and x is the year. This correlates with a 2.2% increase in market share per year starting from the year 2025, and yields a theoretical date of 100% market penetration occurring in 2070 (Fig. 1). These projections were carried through the case proportions. Literature comparing rates of MVC from real-world crash databases in cars with advanced driver assistance (ADAS) features showed up to 27% MVC reduction and 20% injury reduction for cars equipped with forward collision warning (FCW), [5] up to 38% reduction in injuries for cars equipped with automatic emergency braking (AEB), [6] and up to a 41% reduction in MVCs for cars equipped with both FCW and AEB. [7] Analysis of large crash databases has also shown reductions in crashes of 14% for blind spot monitoring (BSM), [8] 18% for lane departure warning (LDW), [9] and 30% for LDW with lane keeping assist (LKA). [10] In contrast, literature reviewing crash data from autonomous vehicles on the road has shown a marked increase in the rate of MVC compared to traditional vehicles, without any evidence of improvement. [11] Evaluating the types of MVCs that involve AVs reveals they are predominantly low-speed crashes that largely go unreported in traditional vehicles [12] and most occur in intersections or involve being rear-ended. [13] Despite the data on current immature AV systems, predictions of reductions in MVCs in AVs are consistently above 90% [4] (Table 2).
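Read literally, the fitted line gives a simple projection rule. Below is a minimal Python sketch of that rule (the paper's analysis used SPSS); clamping the projection to the 0–100% range outside the fitted span is our assumption, not something stated in the paper.

```python
# A minimal sketch of the paper's fitted adoption line, y = 0.0223x - 45.158,
# where y is the AV market-penetration fraction and x is the calendar year.
# Clamping to [0, 1] outside the fitted span is our assumption, not the paper's.

def av_market_penetration(year: int) -> float:
    """Projected fraction of vehicles on the road that are autonomous."""
    y = 0.0223 * year - 45.158
    return min(max(y, 0.0), 1.0)

for year in (2025, 2040, 2070):
    print(f"{year}: {av_market_penetration(year):.1%}")
# 2025: ~0%, 2040: ~33%, 2070: ~100% -- matching the figures quoted above
```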
National Trauma Databank
International Classification of Diseases, 9th Edition (ICD-9) codes corresponding with major lower extremity trauma including pelvic, acetabular, femur, tibia, and calcaneal fractures were extracted from the 2009 to 2016 NTDB, resulting in 988,248 records with injuries of interest, 987,610 of which had complete records. MVC (23.5%) combined with bicyclist struck by motor vehicle (0.2%) and pedestrian struck by motor vehicle (6.6%) were grouped as MVC-related injuries and comprised 30.4% of all injuries. However, fall (41.5%) was the most common mechanism of injury. Motorcycle crash (8.8%), high-energy fall (8.7%), and pedestrian struck by vehicle (7.5%) were also common (Table 3). Patients injured in an MVC were more likely to be male (58.5% vs 50.9%, P < 0.001), have open fractures (13.5% vs 10.0%, P < 0.001), blood EtOH above the legal limit at the time of injury (13.0% vs 4.9%, P < 0.001), and illegal drug use confirmed by test at the time of injury (13.0% vs 4.8%, P < 0.001). Patients injured in an MVC were more likely to be treated at university-affiliated teaching hospitals (57.2% vs 44.2%, P < 0.001). Patients injured in an MVC were more likely to have fractures of the acetabulum (15.3% vs 5.9%, P < 0.001) and pelvis (26.2% vs 16.2%, P < 0.001). In the NTDB, MVC was the mechanism for 53.1% of acetabulum fractures, 41.4% of pelvis fractures, 9.2% of hip fractures, 33.8% of femur fractures, 36.5% of tibia fractures, 20.7% of bi- or trimalleolar ankle fractures, and 39.0% of calcaneus fractures (Table 3). Projected reduction in MVCs with 33% market penetration of AVs by 2040 would result in a 16% reduction in acetabulum, 13% reduction in pelvis, 3% reduction in hip, 10% reduction in femur, 11% reduction in tibia, 6% reduction in bi- or trimalleolar ankle, and 12% reduction in calcaneus fracture surgeries (Fig. 2).
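The per-fracture projections can be reproduced from the quantities quoted above: each reduction is, to rounding, the product of the market penetration, the NTDB MVC share for that fracture type, and the 94% MVC-avoidance factor. A minimal sketch follows; the multiplicative formula is our reading of the reported numbers rather than a formula quoted from the paper.

```python
# A sketch reproducing the projected case-volume reductions quoted above:
# reduction = market penetration x NTDB MVC share x 94% assumed MVC avoidance.

MVC_SHARE = {            # fraction of each fracture type caused by MVC (NTDB 2009-2016)
    "acetabulum": 0.531, "pelvis": 0.414, "hip": 0.092, "femur": 0.338,
    "tibia": 0.365, "ankle": 0.207, "calcaneus": 0.390,
}
AV_AVOIDANCE = 0.94      # NHTSA-derived share of MVCs assumed avoidable by AVs

def projected_reduction(penetration: float) -> dict[str, float]:
    """Fractional reduction in case volume at a given AV market penetration."""
    return {fx: penetration * share * AV_AVOIDANCE for fx, share in MVC_SHARE.items()}

for fx, red in projected_reduction(0.33).items():   # 33% penetration by 2040
    print(f"{fx:>10}: {red:.0%}")
# acetabulum 16%, pelvis 13%, hip 3%, femur 10%, tibia 11%, ankle 6%, calcaneus 12%
```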
Discussion
Predictions of AI completely changing industries are common. [14] These estimates usually focus on industries such as manufacturing and trucking. [15] Orthopaedic trauma stands to benefit from reductions in motor vehicle crashes secondary to the improved safety of autonomous vehicles. Despite a paucity of data, it is important to start the discussion on the scale and timing of the impact of autonomous vehicles on orthopaedic trauma patients. Our regression of estimates of AV arrival and market penetration taken from the literature shows that estimates are largely conservative: on average predicting 33% of cars on the road being AVs by 2040. This relatively slow progression suggests that orthopaedic trauma programs will have time to adapt and adjust. Changes in injury patterns will be slow and steady.
Previous automotive safety technologies have led to changes in fracture patterns. The introduction of seatbelts led to increased MVC survival rates and therefore more need for fracture treatment. [16] Initial research suggested seatbelts led to increased injury to the lumbar spine [17] and thorax. [18] Airbags further reduced central injuries while paradoxically increasing distal upper [19] and lower extremity injuries. [20] It is likely that current changes in complex case volume [21] are related to increased market penetration of safety equipment such as standard air bags, crumple zones, antilock brakes and traction control, lane departure warning, and blind spot monitoring. [22] It can be extrapolated that similar changes in case volume may occur with increased market penetration of AVs in the upcoming years.
No rigorous estimates of the percentage of MVCs that could be avoided by AVs were found in the literature, nor was any analysis of the types of MVCs that will be affected by AVs available. To date, no AV company has demonstrated reduced injuries as a result of decreased collisions in autonomous vehicles. Only Tesla claims a 10-fold reduction in collision rate, having reported that its cars on Autopilot travel on average 4.7 million miles between MVCs while traditional vehicles travel 479,000 miles between accidents. [23] Waymo and other AV companies tout safety improvements while pointing out the correlation of reported "disengagements" with the difficulty of the driving environment. [24] This is important, as injury patterns from highway crashes are not the same as those from city streets. Nearly all safety estimates analyzed were derived from a National Highway Traffic Safety Administration statistic that 94% of accidents are caused by avoidable human errors such as texting and driving. [25] In reality this may be much less, and although data from studies of driver assistance features have shown significant decreases in morbidity and mortality, studies of current AV performance are limited (Table 2). There have been no previous estimates of the impact of AVs on trauma injury patterns or surgical case volumes.
Analysis of the NTDB revealed that less than one-third of major pelvic and lower extremity cases are caused by MVCs. The fractures most affected by AVs would be the ones caused most often by MVCs: namely, pelvic and acetabular fractures are projected to decrease, while hip and ankle fractures would largely be unaffected. In addition, the aging of the US population associated with the baby boomers is expected to lead to a doubling of hip fractures by 2050. [26] The reduction in pelvic and acetabular trauma projected by our model, combined with this increase in hip fractures, means that hip fractures could make up one-third of all trauma cases by 2050.
There are several limitations of this study as it attempts to project currently unproven technology into the future. It fails to model the above-mentioned increases in periprosthetic, hip, and other fragility fractures due to the aging population. Furthermore, the NTDB did not allow classification of fracture severity; therefore, we are unable to determine the changes in more complex fracture patterns. The study uses a linear model for the timeline of adoption because it best fit the estimates from the literature; however, technologies are often adopted in an exponential fashion. Furthermore, all data in this study refer to European countries or the United States; therefore, this analysis likely does not generalize to all countries. These weaknesses mean that the magnitude and timing of the impact described in this paper will be inaccurate. However, the authors believe it is a starting point for a conversation about the impact that AVs may have on our training and practices.
Conclusion
If changes in pelvic and lower extremity case volumes and distribution due to adoption of autonomous vehicles do materialize, they will likely be slow and steady. Furthermore, 70% of pelvic and lower extremity trauma cases are not caused by MVC and will therefore remain unaffected. Our analysis projects that 6% of cases will be affected in the next 15 years and only 24% of cases are likely to be eliminated over 50 years.
"year": 2021,
"sha1": "aac89ab73be1d4b03db81430b1552e85efc9324c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/oi9.0000000000000136",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "626cc658332de0784f2f66251df299dea1bfb85f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Status and Prospects of Agricultural Growth Domestic Product in the Kingdom of Saudi Arabia
The Kingdom of Saudi Arabia (KSA) has set Vision 2030 to reduce the country's total dependency on the oil sector, diversify the economy and achieve sustainable food security. This necessitated conducting this study, which aimed at estimating and analyzing the association and impact of selected agricultural subsectors (dates, honey, fish, chicken, and cattle) on the Agricultural Growth Domestic Product (AGDP) of KSA, and identifying the leading subsector in the economy that might substantially affect AGDP and other subsectors. Unit root tests, Johansen co-integration, a vector error correction model (VECM), multiple regression techniques, and impulse response tests were used in analyzing the secondary data that covered the period from 1985 to 2017. Results revealed the presence of long-run co-integration between the designated variables. Only the coefficient of the adjustment parameter for dates (as dependent variable) is negative (−5.42) and significant (critical t value = −2.52 with p = .02), meaning that the model was able to correct its past-time disequilibrium. Furthermore, short-run causality was noticed between a few variables. The regression analysis results indicated the existence of positive and significant relationships between the dependent variable (AGDP) and each of the independent variables: cattle (0.83; p = .00), honey (50.05; p = .06), and chicken (0.07; p = .00). In addition, results of the impulse tests showed that the cattle subsector is leading in the economy. Accordingly, the cattle, honey, and chicken subsectors should be given high priority in the government investment policy.
Introduction
The Kingdom of Saudi Arabia (KSA) is a country that consists mostly of desert acreage, with inadequate naturally occurring ground water (Baig & Straquadine, 2014). It is subjected to the high temperatures typical of an arid climate, and is hardly ideal for agricultural development. With such low agriculture-favoring conditions, the KSA has nevertheless achieved self-sufficiency in fresh milk, wheat, eggs, some vegetables, and dates (Baig & Straquadine, 2014). Although the portion of agriculture in the national gross domestic product (GDP) is quite low and fluctuates (Figure 1), the fertile regions in the KSA support an adequate agriculture sector. The agriculture sector is the third major contributing sector in GDP earnings and has furthermore contributed to the improvement of the livelihoods of the rural population (Baig & Straquadine, 2014). A study conducted by Kabir (2018) proved the existence of a long-run relationship between Agricultural Growth Domestic Product (AGDP) and GDP, with bidirectional causality moving from AGDP to GDP and vice versa.
The GDP measures the overall value of end products and services manufactured by a country (Özpençe, 2017). It incorporates a list of macroeconomic concepts as it represents the full employment condition of an economy. The GDP is a part of the national financial records of a country that provides an essential set of indicators enabling policy-makers to decide whether the economy is in a state of contraction or advancement (Al-Bakr & Al Salman, 2016). The Eighth Development Plan for the agricultural sector in the KSA focuses on increasing the role of agriculture so that it becomes a base of the economy instead of the existing base. The plan aimed at enhancing the investment capacities of the agricultural sector and emphasized the importance of the livestock subsector (FARUK, 2014). Recently, the KSA has been in transition from dependence on the oil sector to a more diversified economy. The agricultural sector and its value chains have been given much attention in Vision 2030 as unexploited potential for growth. In fact, the sector is expected to play a leading role in the country's economic growth.
Livestock is a crucial subsector that contributes significantly to the national GDP of KSA. The subsector serves as a store of value in the absence of formal economic institutions and other missing markets (Negassa et al., 2012).
The domestic production of poultry meat satisfies only 41% of the consumption demand of the nation. The per capita consumption of poultry meat exceeds 47 kg per person per year (Hester, 2016). Regarding date production, Saudi Arabia has more than 400 date palm varieties, of which only 40 have economic value (Al-Sheikh, 2009). These are found across seven provinces in the country. The growth and fruiting regime of those date palms is adapted to the local climates of those provinces (Rahman et al., 2014).
Aquaculture production increased gradually from 6,000 tons in 2000 to around 26,000 tons in 2009 and 2010. It was followed by a fall in production to less than 16,000 tons in 2011 caused by white spot disease, known to be an epidemic in marine shrimp, the most farmed variety in the KSA. After a recovery to 21,000 tons in 2012, total production rose to 30,000 tons in 2015, up 26% from 2014 (Food and Agriculture Organization [FAO], 2017).
Despite the importance and huge potential of the agricultural subsectors to the kingdom's economy, very scant work has been conducted to identify the impact of those subsectors on the country's AGDP. Such knowledge will help policy-makers to design sound policies and plans for improving the Saudi economy and achieving the kingdom's vision of diversification and food security.
Literature Review
Many studies have analyzed the impact of the agricultural sector on GDP. For instance, Rehman et al. (2017) studied the relation between GDP and livestock production. They used the Johansen co-integration test and ordinary least squares (OLS) methods as analytical tools. Their results revealed that the output of fat, milk, bones, mutton, and eggs had positive and significant associations with AGDP in Pakistan, whereas the output of wool, beef, hair, hides, poultry meat and skins had negative and insignificant ones.
Another study was conducted by Rahman et al. (2014) to examine the livestock sector in Bangladesh. Their results revealed that livestock resources were relatively well distributed and stable in the country. Furthermore, the contribution of the selected sectors to the country's GDP ranged between 2.1% and 3.6%.
Another study was conducted in Romania by BĂLAN et al. (2014) with the aim of examining the relationship between agricultural output and its determinant variables of capital and labor. They used the Cobb-Douglas production function, analysis of variance (ANOVA), the F test, and other analytical techniques to achieve their objectives. They argued that GDP increases due to the increment of the selected agricultural determinants. The study recommended government investment in improved technical innovations and employees' capacity building. In the same vein, Okezie and Ihebuzoaju (2017) used multiple regression techniques to analyze the effects of selected sectors (agriculture, petroleum, education, health, and telecommunication) on Nigerian GDP. Their results indicated that the various sectors contributed to the development of GDP and the Nigerian economy at various rates. Further contributions are expected in upcoming years, leading to speedy growth of the economy, provided an effective and sustainable policy is put in place by the government. Sertoglu et al. (2017) studied the relationship between economic progress and the agricultural sector of Nigeria. Their findings revealed the presence of long-run stable relationships between the studied variables: agricultural output, real GDP, and oil rents. However, the speed of adjustment of the variables toward long-run balance is low. They stressed that the government should make special financial arrangements for the agricultural sector.
On the other hand, Muhammed and Alhiyali (2018) used multivariate co-integration and an autoregressive distributed lag (ARDL) model in measuring the impact of some economic variables on AGDP. They also employed a causality test to determine the direction of the relationship between the economic variables. Their research findings proved a long-term relationship between the AGDP index and the economic variables under consideration. Reddy and Dutta (2018) examined the effect of different variables on AGDP in India during the period from 1980/1981 to 2015/2016. Their results showed that the independent variables (high-yielding variety [HYV] seeds, electricity, rainfall, and pesticides) are statistically significant in explaining the variation in AGDP. Rehman et al. (2015) studied the impact of selected field crops on AGDP in Pakistan. They used Johansen's co-integration test and OLS tools for analysis. Their results revealed that the outputs of cotton, wheat, and rice have positive and significant relationships with AGDP, whereas the output of sugarcane has a negative and nonsignificant one. They recommended the introduction of innovative financial programs to enhance agricultural sector development.
Another study tested the role of the dairy industry in Pakistani economic growth for the period 1975 to 2015. The augmented Dickey-Fuller (ADF) unit root test, the P-P unit root test, a co-integration test, Granger causality, and the OLS method were applied in the study. The results showed the presence of a long-run relationship among the variables. Furthermore, the coefficient of dairy industry production showed a positive and highly significant association with AGDP. It also showed bidirectional causality associations among the variables. Accordingly, the study recommended the introduction of new credit schemes.
Another study examined the sector-wise share in agricultural GDP in Pakistan, using secondary data from 1998 to 2015 (Chandio et al., 2016). An econometric method was used to analyze the data. The results showed that agricultural subsectors affect agricultural GDP positively and significantly. The results pointed out the following recommendation: the government should supply innovative agricultural technologies so as to increase the subsectors' share in agricultural GDP.
Other researchers, who studied the relationship between aquaculture and capture fisheries production and economic growth in Pakistan, argued for the presence of a positive effect of aquaculture and capture fisheries production on economic growth. However, it is worth noting that they used ARDL in their analysis (Rehman, Deyuan, Hena, & Chandio, 2019).
It is very clear from the reviewed literature that different analytical tools were used in the analysis of time series data related to the topic of this study. Some examples of those models are the Johansen co-integration test, multivariate analysis, VECM, ARDL, ANOVA, and regression analysis. However, it is worth noting that there are specific requirements for the selection of a specific model in each study. For instance, the Johansen test is the best-fit model when the data are nonstationary at level but become stationary when first differenced. VECM is usually used when co-integration exists between the studied variables (Shrestha & Bhatta, 2018), as is the case in this study. It is also noticed that none of the mentioned studies has covered all of the variables used in this study; likewise, none of them was conducted in the KSA context.
This article aimed at examining the short-run relationships and long-run speed of adjustment between AGDP and selected agricultural subsector production (dates, honey, fish, chicken, and cattle) in KSA. It also aimed at identifying the leading subsector that might significantly contribute to the development of AGDP and other subsectors in the country.
This article is organized into four sections. The first section covers the introduction and literature review. The second section is devoted to data collection and methodology, whereas the third one presents the results and discussion. Finally, the last section is devoted to the conclusion.
Data Description and Study Area
The research was based primarily on secondary data, from 1985 to 2017, obtained from the FAO, and aimed at estimating the contribution of selected agricultural products to AGDP (in 1,000 Riyal) in the KSA. In particular, the selected products include the production of date palm (D), honey (H), fish (F), chicken (Ch), and cattle (Cat) (in metric tons).
Methods of Analysis
Unit root tests, Johansen tests for co-integration, an error correction model, and multiple regression models were used to discover and assess the relations between variables using the EVIEWS 9 statistical package.
Descriptive Statistics Test
To test the normality of the series, a descriptive statistics test is run using the Jarque-Bera statistic. If the probability of the Jarque-Bera statistic is less than the .05 level, H0 (normal distribution) is rejected, meaning that the series shows a non-normal distribution. Accordingly, the series is then supposed to be transformed to logarithmic form.
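As an illustration, this screening step can be reproduced outside EVIEWS. The sketch below uses SciPy's Jarque-Bera test on a synthetic placeholder series, since the study's FAO series are not reproduced here.

```python
# An illustrative re-run of the normality screening with SciPy's Jarque-Bera
# test (the study used EVIEWS 9). The series is a synthetic placeholder.
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(0)
series = rng.normal(loc=100.0, scale=10.0, size=33)   # 33 annual observations

stat, p = jarque_bera(series)
if p < 0.05:
    print(f"JB = {stat:.2f}, p = {p:.3f}: reject H0 -> transform to logarithms")
else:
    print(f"JB = {stat:.2f}, p = {p:.3f}: treat as normal, use data directly")
```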
The Unit Root Test
The ADF test is conducted to analyze whether we have a nonstationary time series (Dickey & Fuller, 1979). The change in a variable is regressed on lagged values of the variable, as given by the following equation:

$$\Delta X_t = \alpha + \beta t + \gamma X_{t-1} + \sum_{i=1}^{p} \delta_i \Delta X_{t-i} + \varepsilon_t \qquad (1)$$

testing H0: X has a unit root (nonstationary, γ = 0) against H1: X is stationary (γ < 0). The t statistic of the ADF coefficient is compared with the critical t values of the test. H1 is accepted, and the series is stationary, if the absolute value of the ADF statistic is bigger than the critical t value (Emam et al., 2018). The EVIEWS 9 program (which was used in this study) normally adopts the 1% and 5% levels of significance.
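For readers working outside EVIEWS, the same test is available in statsmodels. The sketch below is illustrative only and runs on a synthetic random walk rather than the study's data.

```python
# An illustrative ADF test with statsmodels (the study used EVIEWS 9).
# A synthetic random walk should show a unit root at level and
# stationarity after first differencing.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=33))               # placeholder I(1) series

for label, x in (("level", series), ("first difference", np.diff(series))):
    stat, p, _, _, crit, _ = adfuller(x, autolag="AIC")
    verdict = "stationary" if stat < crit["5%"] else "unit root"
    print(f"{label}: ADF = {stat:.2f}, 5% critical = {crit['5%']:.2f} -> {verdict}")
```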
Specification of Lag
A maximum lag number is identified and considered in the co-integration test. Different criteria can be used in choosing the lag order: the Akaike information criterion (AIC) is the most widely used (Emam et al., 2018). AIC is the best criterion for lag selection when dealing with small samples (fewer than 60 observations; Liew, 2004).
Johansen Tests for Co-Integration
The Johansen tests, namely the maximum eigenvalue and trace tests, are used for testing co-integration. For both tests, the null hypothesis of no co-integration is examined against co-integration. However, the two tests differ in the alternative hypothesis. The maximum eigenvalue test examines the largest eigenvalue relative to the next largest value, which is zero. The test statistic is specified by the following equation (Emam et al., 2018):

$$LR_{max}(r_0, r_0 + 1) = -T \ln(1 - \hat{\lambda}_{r_0+1})$$

where $LR_{max}(r_0, r_0+1)$ is the likelihood ratio statistic to test whether rank(Π) = r0 against the alternative hypothesis that rank(Π) = r0 + 1.
The trace test examines whether the rank of matrix Π is equal to r0; in particular, it tests the null hypothesis rank(Π) = r0 against the alternative hypothesis r0 < rank(Π) ≤ n, where n represents the maximum number of co-integrating vectors (Baig & Straquadine, 2014). The following equation is used to calculate the likelihood ratio (Emam et al., 2018):

$$LR_{tr}(r_0) = -T \sum_{i=r_0+1}^{n} \ln(1 - \hat{\lambda}_i)$$
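Both Johansen statistics can also be computed with statsmodels. The sketch below runs the trace test on placeholder I(1) series standing in for the six study variables; det_order=0 and k_ar_diff=1 mirror the lag-1 specification used in the paper, but the data are synthetic.

```python
# An illustrative Johansen trace test with statsmodels (the study used
# EVIEWS 9). The six columns are synthetic placeholders for AGDP, D, H, F,
# Ch, and Cat.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(2)
data = np.cumsum(rng.normal(size=(33, 6)), axis=0)    # placeholder I(1) series

res = coint_johansen(data, det_order=0, k_ar_diff=1)
for r in range(6):
    trace, crit_5 = res.lr1[r], res.cvt[r, 1]         # cvt column 1 = 5% level
    verdict = "reject" if trace > crit_5 else "accept"
    print(f"H0: rank <= {r}: trace = {trace:.2f}, 5% crit = {crit_5:.2f} ({verdict})")
```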
VECM
The ECM explores the short-run dynamics around equilibrium and the implications of short-run behavior: when one or more variables change, the system adjusts to restore stability, so short-run deviations from equilibrium must be corrected to preserve the long-run relations. For each variable, an equation of the following form was estimated (Dhungel, 2014):

$$\Delta Y_{jt} = c_j + \sum_{s=1}^{k} \sum_{m=1}^{6} b_{jms} \, \Delta X_{m,t-s} + \alpha_{jt} \, ECT_{t-1} + e_{jt}, \qquad j = 1, \ldots, 6$$

where Y_j and X_m run over AGDP (Agricultural Gross Domestic Product) and the production, in metric tons, of the following: D = dates; H = honey; F = fish; Ch = chicken; Cat = cattle; and ECT is the error correction term. For short-run causality from the independent variables to the dependent variable, the coefficients b should be individually statistically significant. α1t . . . α6t represent the speed of adjustment parameters, which indicate how fast the previous period's deviation moves back toward equilibrium and indicate long-run causality from the independent variables to the dependent variable (α must be negative and significant). e1t . . . e6t are the stationary random processes that capture other information not contained in the model.
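A minimal sketch of this estimation step in statsmodels follows; the column names match the study's variables, but the series are synthetic placeholders, and coint_rank=2 follows the trace-test result reported later in the paper.

```python
# An illustrative VECM fit with statsmodels (the study used EVIEWS 9). A
# negative, significant entry of alpha for an equation means that equation
# corrects the previous period's disequilibrium. Data are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
cols = ["AGDP", "D", "H", "F", "Ch", "Cat"]
data = pd.DataFrame(np.cumsum(rng.normal(size=(33, 6)), axis=0), columns=cols)

res = VECM(data, k_ar_diff=1, coint_rank=2, deterministic="ci").fit()
for name, a in zip(cols, res.alpha[:, 0]):            # first co-integrating relation
    print(f"speed of adjustment, {name} equation: {a:+.3f}")
```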
Regression Analysis
Multiple regression analysis was applied to estimate the influence of the agricultural products on AGDP using the following specification, where all variables were identified previously:

$$AGDP_t = C + \beta_1 D_t + \beta_2 H_t + \beta_3 F_t + \beta_4 Ch_t + \beta_5 Cat_t + \varepsilon_t \qquad (11)$$

where β1 … β5 and C represent the coefficients to be estimated and the constant, respectively.
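Equation 11 is an ordinary least squares problem and can be sketched as follows; the data frame is again a synthetic placeholder for the 1985–2017 series.

```python
# An illustrative OLS estimate of Equation 11 with statsmodels (the study
# used EVIEWS 9). `df` is a synthetic placeholder for the 1985-2017 data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(33, 6)),
                  columns=["AGDP", "D", "H", "F", "Ch", "Cat"])

X = sm.add_constant(df[["D", "H", "F", "Ch", "Cat"]])  # adds the constant C
fit = sm.OLS(df["AGDP"], X).fit()
print(fit.params)                                      # C and beta_1 ... beta_5
print(f"R-squared: {fit.rsquared:.2f}")
```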
Descriptive Statistics
The results of the descriptive statistics show that the probabilities of the Jarque-Bera statistics are greater than the .05 level for all variables, indicating that the variables follow a normal distribution (Table 1). The data were therefore used directly without transformation.
The Unit Root Tests Results
After first differencing, the ADF statistics for all series are significant at the 1% level, indicating that AGDP, date, honey, fish, chicken, and cattle are stationary (Table 2).
Identification of Lag Number
Lag 1 was chosen by the VAR model as shown in Table 3.
Results of Co-Integration Test
The Johansen multivariate co-integration results are presented in Tables 4 and 5. The trace and max eigenvalue statistics indicated two co-integrating equations at the .05 level and no co-integrating equations, respectively. Therefore, there is a long-run connection between AGDP, date, honey, fish, chicken, and cattle.
Results of VECM Test
The VECM is able to correct its past-time disequilibrium when the coefficient of the adjustment parameter is negative and significant. The coefficient of the adjustment parameter for D (as dependent variable) is negative (−5.42) and significant (critical t value = −2.52 with prob. = .02), meaning that the model is able to correct its past-time disequilibrium. The results also convey that the coefficients of the adjustment parameter for the other variables (when acting as dependent variables) were statistically insignificant, suggesting that the model may need more than 1 year to correct its past-time disequilibrium (Table 6).
For the short-run equilibrium analysis, the relationship between AGDP (dependent) and Cat in one lag period (independent variable) conveys the existence of a positive and statistically significant coefficient (coefficient = .15; t value = 3.03 with prob. = .006). However, the short-run equilibrium between D (dependent) and Ch in one lag period (independent variable) reveals a negative and statistically significant one (coefficient = −1.388; t value = −3.57 with prob. = .002). Such results indicate the presence of short-run equilibrium between the studied variables.
Also, the short-run coefficients (−23.16 and 0.54, respectively) were statistically significant (t value = −4.57 with prob. = .000 and 3.47 with prob. = .002, respectively) between Ch (dependent) and AGDP and Ch (independent variables in one lag period), indicating short-run equilibrium. The short-run coefficient (−0.07) was statistically significant (t value = −3.29 with prob. = .003) between Cat (dependent) and Ch (independent variable in one lag period), meaning that there is short-run equilibrium. To check the adequacy of the VECM, serial correlation of the residuals was tested by the LM test: LM statistic (lag 1) = 42.54 with prob. = .210. According to the result of the LM test, the null hypothesis of no serial correlation of residuals is accepted.
Regression Analysis Results
The results indicated no autocorrelation between successive values of the disturbance term, which has a constant variance (homoscedastic; Table 7).
A CUSUM test was conducted to check the stability of the cumulative sum of the recursive residuals (Figure 2). The test finds a parameter to be stable if the cumulative sum lies in the area between the two critical lines, confirming the stability of the model (Zhai et al., 2013). Accordingly, the linear regression model was chosen. The results suggest that the estimated model is significant and can be used to estimate the contributions of the independent variables (date production, honey, fish, chicken, and cattle) to AGDP. The estimated equation can be written as follows:
$$AGDP = C + \beta_1 D + \beta_2 H + \beta_3 F + \beta_4 Ch + \beta_5 Cat$$

with the estimated coefficients reported below (β1 = −0.02, β2 = 50.05, β4 = 0.07, β5 = 0.83).
R2 is a statistical measure that was used to evaluate the goodness of fit of the regression model (Behnassi et al., 2019). The R2 value indicates that 94% of the variation in AGDP is explained by the selected agricultural products. The value of the F statistic indicates a highly significant level for the model. It is concluded that the independent variables are jointly essential in explaining the variation of AGDP during the study period.
The coefficients of H, Ch, and Cat are positive and significant in elucidating the variability of the AGDP. These findings are in line with another study, which argued that beef production recorded a positive significant effect on Pakistani AGDP. The results also indicate that any 1-ton increase in the production of H, Ch, and Cat will increase AGDP by 50.05, 0.07, and 0.83 (1,000 Riyal), respectively. In addition, any 1-ton increase in D will decrease AGDP by 0.02 (1,000 Riyal). The H coefficient was positive and significant at the 10% level of significance. These positive results coincide with the findings of Rehman et al. (2017).
The results also revealed that the coefficient of D is negative and highly significant in explaining the variability of the AGDP; this result may be due to the fact that the KSA donates large quantities of dates throughout the world, especially during Ramadan. On the other hand, the F coefficient has a positive sign; however, it is insignificant.
Impulse response analysis is considered an essential step in econometric analyses that is used to describe the response of a model's variables to a shock in one or more variables (Mohr, 2020). The impulse model was employed in this study to identify the leading subsector that might significantly contribute to the development of the other agricultural subsectors in the long run (Figure 3). The figure reflects the response of AGDP, date, honey, fish, chicken, and cattle to a Cholesky one-standard-deviation innovation impulse of each of AGDP, date, honey, fish, chicken, or cattle. The results revealed that AGDP showed a positive response in the long run to the cattle and fish subsectors. The date subsector showed a positive response to AGDP and the honey and chicken subsectors; the honey subsector showed a positive long-run response to a one-standard-deviation impulse of each of the AGDP, date, and fish subsectors; the fish subsector showed a positive long-run response to a one-standard-deviation impulse of the chicken subsector; and the chicken subsector showed a positive long-run response to a one-standard-deviation impulse of the date subsector.
Finally, the cattle subsector showed a positive long-run response to a one-standard-deviation impulse of each of the AGDP, cattle, date, and chicken subsectors. Each sector appeared to have a positive long-run response to some other sector; among them, the cattle sector recorded a positive long-run response to a one-standard-deviation impulse of a majority of sectors (AGDP, cattle, date, and chicken). These results suggest that the cattle sector may be considered the leading sector.
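The mechanics of this impulse analysis can be illustrated with a small sketch. Note that the sketch fits a VAR in statsmodels purely to demonstrate Cholesky impulse responses; it is not the study's estimated model, and the data are synthetic placeholders.

```python
# An illustrative impulse-response run with a lag-1 VAR in statsmodels.
# Each panel of irf.plot() shows the response of one variable to a Cholesky
# one-standard-deviation shock in another, which is how the leading (cattle)
# subsector was identified in the paper.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
data = pd.DataFrame(rng.normal(size=(33, 6)),
                    columns=["AGDP", "D", "H", "F", "Ch", "Cat"])

var_res = VAR(data).fit(1)        # lag 1, as selected by AIC in the paper
irf = var_res.irf(10)             # responses traced over 10 periods
irf.plot(orth=True)               # orthogonalized (Cholesky) impulses
```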
Conclusion
The results of the study reveal the presence of long-run co-integration between the selected variables. The coefficient of the adjustment parameter for dates (as dependent variable) is negative (−5.42) and significant (critical t value = −2.52 with prob. = .019), meaning the model is able to correct its past-time disequilibrium. In addition, short-run causality was noticed between a few variables in the study. The multiple regression analysis indicates the existence of significant and positive relationships between the dependent variable (AGDP) and the honey, chicken, and cattle independent variables, except for date. The coefficient of date is negative and highly significant in explaining the variability in the AGDP. The negative relationship might be attributed to the fact that KSA donates huge quantities of dates all over the world, especially during Ramadan. The impulse examinations indicate that the cattle sector has a positive long-run response to a one-standard-deviation impulse of a majority of sectors (AGDP, cattle, dates, and chicken). The study concludes that the cattle sector might be considered as the leading subsector in the economy. Hence, it is very important to encourage investment in the cattle, honey, and chicken subsectors to enhance the contribution of the agricultural sector to the KSA economy, thus achieving Saudi Vision 2030 of diversification. It is also recommended to conduct further studies on the impact of other economic subsectors on GDP.
"year": 2021,
"sha1": "49167fd3c63aeb879909cdcf464cfd94b0fa8a4a",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/21582440211005451",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "49167fd3c63aeb879909cdcf464cfd94b0fa8a4a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Process of Air Ingress during a Depressurization Accident of GTHTR300
A depressurization accident is one of the design-basis accidents of the gas turbine high temperature reactor GTHTR300, which is JAEA's design and one of the Very-High-Temperature Reactors (VHTRs). When a primary pipe rupture accident occurs, air is expected to enter the reactor core from the breach and oxidize in-core graphite structures. Therefore, it is important to know the mixing process of different kinds of gases in a stable or unstable density-stratified fluid layer. In order to predict or analyze the air ingress phenomena during the depressurization accident, we have conducted experiments to obtain the mixing process of two-component gases and the characteristics of natural circulation. The first experimental apparatus consists of a storage tank and a reverse U-shaped vertical rectangular passage. One side wall of the high temperature side vertical passage is heated and the other side wall is cooled. The other experimental apparatus consists of a cylindrical double coaxial vessel and a horizontal double coaxial pipe. The outside of the double coaxial vessel is cooled and the inside is heated. The results obtained in this study are as follows. When the primary pipe is connected at the bottom of the reactor pressure vessel, the onset time of natural circulation of air is affected by not only molecular diffusion but also localized natural convection. When the wall temperature difference is large, the onset time of natural circulation of air is strongly affected by natural convection rather than molecular diffusion. When the primary pipe is connected at the side of the reactor pressure vessel, air will immediately enter the bottom space in the reactor pressure vessel by counter-current flow at the break in the coaxial double pipe. Afterward, air will enter the reactor core by localized natural convection and molecular diffusion.
Introduction
One of the next-generation nuclear plants is the Very-High-Temperature Reactor (VHTR). The VHTR attracts strong development interest worldwide. In addition to broad economic appeal resulting from its unique high-temperature capability, the reactor provides inherent and passive safety. The Japan Atomic Energy Agency (JAEA) has successfully built and operated the 30 MWt High Temperature Engineering Test Reactor (HTTR) and is now pursuing the design of commercial systems such as the 300 MWe Gas Turbine High Temperature Reactor 300 (GTHTR300). Also, in order to deploy a commercial Gas Turbine High Temperature Reactor 300 for Cogeneration (GTHTR300C) around 2030, JAEA is now carrying out a design study [1,2].
When a double coaxial pipe connecting a reactor and a Gas Turbine Generator (GTG) module breaks, air is expected to enter the reactor core from the breach. The depressurization accident is one of the design-basis accidents of the GTHTR300. When the depressurization accident occurs, air is expected to enter the reactor core from the breach and oxidize in-core graphite structures. The air ingress process in the VHTR is known to follow two sequential phases. In the first stage of the accident, the density of the gas mixture in the reactor gradually increases as air enters by molecular diffusion and natural convection of the gas mixture. The second stage of the accident starts when natural circulation of air occurs suddenly throughout the entire reactor. The natural circulation of air suddenly occurs once sufficient buoyancy is established [3]. On the other hand, under specific boundary conditions, the onset time for producing natural circulation is short [4,5].
A related study investigated the air ingress process and the development of a passively safe technology to prevent air ingress. In order to clarify the safety characteristics of the GTHTR300C in a pipe rupture accident, a preliminary analysis of air ingress was performed [6]. A previous paper described the influence of local natural circulation in parallel channels on the air ingress process during a primary pipe rupture accident in the VHTR [7]. The duration of the first stage of the air ingress process was also discussed with analytical results for the reverse U-shaped passage with parallel channels.
During a depressurization accident in the VHTR, localized natural convection will occur in the space between the reactor pressure vessel and the permanent reflector. The Grashof number of the natural circulation flow is based on the density difference between the hot and cold legs. The natural circulation flow rate is determined by the point where the buoyancy and the pressure loss of the flow path are balanced. For example, the range of the Rayleigh number based on the space width is about 0 < Ra_d < 5.0 × 10^9 in the HTTR. Therefore, the amount of transported oxygen depends not only on molecular diffusion but also on natural convection, and it is important to know the mixing process of different kinds of gases in a stable or unstable stratified fluid layer. In particular, it is also important to examine the influence of localized natural convection and molecular diffusion on the mixing process from a safety viewpoint.
Previous studies focused mostly on molecular diffusion and natural circulation of the two-component gas mixture in a reverse U-shaped tube and in a simple test model of the HTTR [8]. In order to investigate the basic features of the flow behavior of multicomponent gas mixtures consisting of helium (He), nitrogen (N2), oxygen (O2), carbon dioxide (CO2), carbon monoxide (CO), etc., experimental and numerical studies were performed on the combined phenomena of molecular diffusion and natural circulation of the multicomponent gas mixtures along with the graphite oxidation reaction in a reverse U-shaped tube [9]. The numerical results were in good agreement with the experimental results in regard to the density change of the gas mixture, the molar fraction change of the gas species, and the onset time of natural circulation of air. Furthermore, the objectives of these studies were to investigate the air ingress process and to develop a safe passive technology for the prevention of air ingress [3]. Recently, a density-gradient driven air ingress stratified flow was analyzed using a CFD code for the Next-Generation Nuclear Plant (NGNP), which is a US-designed VHTR [10] (Oh, et al., 2010). The authors have reported on the mixing process of two-component gases through natural convection and molecular diffusion in a stable stratified fluid layer [11]. According to that report, the mixing process through molecular diffusion in the vertical stratified fluid layer was significantly affected by localized natural convection induced by a slight temperature difference between the two vertical walls. The report described the process of air ingress during the first stage of a primary pipe rupture accident, and provided experimental results regarding the influence of localized natural circulation in the parallel channels on the air ingress process. Localized natural convection may affect the onset time of natural circulation [12-14]. In order to predict or analyze the air ingress phenomena during the depressurization accident in the VHTR, it is important to examine the influence of localized natural convection and molecular diffusion on the mixing process.
In general, the mixing process of two-component gases in a vertical stable stratified fluid layer is governed by molecular diffusion. When a stable stratification is formed in a vertical slot with two-component gases which have different densities, the rate of transportation will differ, as determined by the mutual diffusion coefficient. On the other hand, natural convection is expected to occur in a vertical slot when one sidewall is heated and the other sidewall is cooled. When a stable stratification is formed with the two-component gases and the two vertical parallel walls of the slot are kept at different temperatures, the transport process of the gases becomes more complex. In this case, the heavy gas diffuses into the light gas; in addition, both gases will also be transported by natural convection. Both phenomena may occur at the same time during the air ingress process of the depressurization accident. Molecular diffusion and natural convection will occur simultaneously in the annular passage between the inner barrel and the water-cooled jacket [3]. The ranges of the Rayleigh number based on the width of the annular passage are about 0 < Ra_d < 3.26 × 10^5 and Ra_d < 1.56 × 10^6, respectively. The Rayleigh number based on the width of the annular passage of the HTTR or the GTHTR300C will be about two orders of magnitude larger than the Ra number based on the width of the simulated apparatus. Therefore, a scaling analysis was carried out to find out which phenomenon is dominant in the mixing process of two-component gases in a vertical stable stratified fluid layer. In this study, we have carried out experiments and obtained the mixing process of two-component gases and the flow characteristics of localized natural convection. We also investigated the air ingress process when the horizontal double coaxial pipe ruptures.
Figure 1 shows a schematic drawing of the experimental apparatus with the vertical rectangular channel. The experimental apparatus consists of a reverse U-shaped vertical slot and a storage tank. One side slot consists of a heated and a cooled wall. The other side slot consists of two cooled walls. The dimensions of the vertical slots are 598 mm in height, 208 mm in depth, and 70 mm in width. The two vertical slots were connected to form a reverse U-shaped passage.
The dimensions of the connecting passage were 16 mm in height, 106 mm in depth, and 210 mm in length. The storage tank was connected to the lower part of the reverse U-shaped passage. The dimensions of the storage tank were 248 mm in height, 398 mm in depth, and 548 mm in width. The reverse U-shaped passage and the storage tank were separated by a partition plate.
Figure 2 shows the high temperature side slot of the reverse U-shaped passage. A stainless sheath heater and a water cooling pipe made of copper were attached to the heated wall and the cooled wall, respectively. These walls were covered by an insulator 30 mm in thickness. The dimensions of the heated wall were 500 mm in height and 200 mm in width. The dimensions of the low temperature side slot were the same as those of the high temperature side slot. The distance between the heated wall and the cooled wall was set to 20 mm. The wall and gas temperatures were measured by K-type thermocouples. Considering the errors induced by the thermocouples, the scanner junction, and the DVM accuracy, the overall accuracy of the temperature measurement was within ±0.5 K. The temperature measurement positions are provided in Table 1 and Figure 3.
Experimental Method.
The experimental procedure is as follows. The partition plate between the reverse U-shaped passage and the storage tank is closed. The reverse U-shaped passage is filled with a lighter gas, for example helium, and the storage tank is filled with a heavier gas, for example nitrogen, air, or argon. One copper plate in the high temperature side slot is heated and the other plate is cooled by water. While the copper plate was heated to the set temperature, the gas pressure in the slots and the storage tank was kept at atmospheric pressure. After confirming that the temperatures at the various points in the apparatus had reached the steady state condition, the partition plate was opened. As shown in Figure 4, the gas temperature fluctuates during the early stage of the experiment (within 50 min). It is probable that localized natural convection develops in the high temperature side slot [16]. The gas temperature at points (8) and (9) in the high temperature side slot suddenly decreased at about 50 minutes after the start of the experiment (opening of the partition plate). On the other hand, the gas temperature at point (23) in the low temperature side slot suddenly increased at the same time. Such gas temperature changes can be explained as follows. The density difference between the high temperature side slot and the low temperature side slot becomes small just after the start of the experiment. The heavy gas diffuses into both slots with elapsing time. Meanwhile, as shown in Figure 6, localized natural convection develops in the high temperature side slot. Therefore, the density difference between the high temperature side slot and the low temperature side slot will increase with elapsing time. Finally, as the buoyancy becomes large enough, natural circulation through the reverse U-shaped passage develops suddenly. Thus, the gas temperature at the lower part of the high temperature side slot increases and the gas temperature at the upper part of the low temperature side slot decreases just after the onset of natural circulation.
The Rayleigh number, Grashof number, and Prandtl number are defined by the following equations:

$$Gr_d = \frac{g \, \beta \, (T_h - T_c) \, d^3}{\nu^2}, \qquad Pr = \frac{\nu}{\alpha}, \qquad Ra_d = Gr_d \, Pr$$

where g is the gravitational acceleration, β the volumetric expansion coefficient, T_h and T_c the heated and cooled wall temperatures, d the slot width, ν the kinematic viscosity, and α the thermal diffusivity. Table 2 shows the onset time of natural circulation under various experimental conditions. Table 3 shows the Rayleigh number based on the width of the high temperature side channel. As shown in Table 3, it is possible that the flow regime of natural convection changes from the conduction regime to the transition or boundary layer regime in the He/Air and He/N2 experiments [17]. Therefore, not only molecular diffusion but also localized natural convection will affect the onset time of natural circulation. The mutual diffusion coefficient is an index characterizing molecular diffusion. D_AB is the mutual diffusion coefficient of gas A in gas B, where gas A is the heavy gas and gas B is the light gas. D_AB is obtained from the literature [18]. Table 4 shows the mutual diffusion coefficient under the various experimental conditions. Numbers in parentheses are the gas temperature in the high temperature side slot. It can be seen that the mutual diffusion coefficient of the two-component gases depends on the gas temperature in the high temperature side slot.
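To make the flow-regime argument concrete, the sketch below evaluates Ra_d for the 20 mm slot at a 70 K wall temperature difference. The gas property values are rough room-temperature figures from standard tables, not values taken from the paper, so the results only indicate orders of magnitude.

```python
# A rough evaluation of Ra_d for the 20 mm slot at a 70 K wall temperature
# difference. Property values are approximate near-300 K figures from standard
# tables (our assumption); they serve only to contrast helium and air.
G = 9.81                                   # gravitational acceleration, m/s^2
PROPS = {                                  # (nu, alpha) in m^2/s, near 300 K
    "helium": (1.2e-4, 1.8e-4),
    "air":    (1.6e-5, 2.2e-5),
}

def rayleigh(gas: str, d: float, dT: float, T_film: float = 300.0) -> float:
    """Ra_d = g*beta*(Th - Tc)*d^3 / (nu*alpha), with beta = 1/T for an ideal gas."""
    nu, alpha = PROPS[gas]
    return G * (1.0 / T_film) * dT * d**3 / (nu * alpha)

for gas in PROPS:
    print(f"{gas}: Ra_d = {rayleigh(gas, d=0.02, dT=70.0):.1e}")
# helium ~8e2 (conduction regime); air ~5e4 (transition / boundary layer regime)
```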
As shown in Table 2, the onset time of natural circulation decreased with an increasing temperature difference for each combination of gases. The range of the Rayleigh number based on the width of the high temperature side slot is shown in Table 3. The Rayleigh number increased with an increasing temperature difference. The influence of natural convection increased with an increase in the temperature difference between the heated and cooled walls; thus, the onset time of natural circulation decreased. When the temperature difference was 30 and 50 K, the onset time of natural circulation decreased with an increase in the diffusion coefficient. However, the onset time of natural circulation for He/N2 was shorter than that for He/Air regardless of the diffusion coefficient when the temperature difference was 70 K. The Rayleigh number for He/N2 is larger than that for He/Air, as shown in Table 3. Therefore, the onset time of natural circulation depended more on molecular diffusion than on the strength of localized natural convection when the temperature difference was small. On the other hand, the onset time of natural circulation depended not only on molecular diffusion but also on localized natural convection when the temperature difference was large. Figures 7, 8, and 9 show the gas temperature change with time in the high temperature side slot under the conditions of temperature differences of 30 K, 50 K, and 70 K, respectively. The onset time of natural circulation becomes shorter as the mutual diffusion coefficient becomes larger in the He/Air and He/N2 experiments under the conditions of temperature differences of 30 K and 50 K. However, under the condition of a temperature difference of 70 K, the onset time of natural circulation is shorter for the He/N2 experiment than for the He/Air experiment. The Rayleigh number of the He/N2 experiment is larger than that of the He/Air experiment. This result shows that the influence of localized natural convection is greater than that of molecular diffusion when the temperature difference is large.
Experimental Apparatus.
The other experiment, for a horizontal pipe break case, is planned using the apparatus shown in Figure 10. The air ingress scenario in the case of the horizontal pipe break of the GTHTR300C is as follows.
After the pipe ruptures, air will flow into the bottom part of the reactor pressure vessel by counter-current flow, and a density-stratified fluid layer will be formed. Buoyancy will develop between the hot and cold legs. As the buoyancy will be small, natural circulation flow will not develop under this density distribution. Thus, air will be transported to the reactor core mainly by molecular diffusion. However, from the results obtained in these experiments, air will also be transported to the reactor core by localized natural convection. In the configuration of the HTTR, a vertical channel existed between the reactor pressure vessel and the pipe rupture part, so much time was needed for the onset of natural circulation of air. In the configuration of the GTHTR300C, a vertical path does not exist between the reactor pressure vessel and the pipe rupture part. Therefore, air may be transported to the reactor core earlier. The onset time of the natural circulation of air and the amount of infiltrating air during the accident will be greatly affected by the location and strength of the localized natural convection in the pressure vessel. If localized natural convection occurs inside the channel, it is difficult to estimate not only the density change of the gas mixture but also the onset time of natural circulation through the reactor. In any case, after some time elapses, natural circulation may occur suddenly. In order to research the mixing process of two-component gases when the horizontal primary pipe of the GTHTR300C is ruptured, an experiment and analysis are planned.
Figure 11 shows the schematic drawing of the experimental apparatus of the double coaxial cylinder. The experimental apparatus consists of a double coaxial cylinder and a horizontal double coaxial pipe. A ball valve was installed in both the inner pipe and the outer pipe of the horizontal double coaxial pipe. Four cartridge heaters were installed in the inner cylinder for heating. Each cartridge heater is 300 mm in length and 12.8 mm in outer diameter, with a 170 mm effective heating portion, a rated voltage of 100 V, and a rated capacity of 100 W. A water cooling jacket was inserted into the outer cylinder for cooling. To prevent heat loss by radiation, a heat-insulating material (round furnace, ceramic type) with a height of 200 mm and a thickness of 23.5 mm was installed on the inner cylinder. The top and bottom of the experimental apparatus were covered with lids to close the apparatus. One port with an outer diameter of 6.35 mm was used as a gas supply port.
K-type thermocouples were used for temperature measurement. The thermocouple installation positions are shown in Figures 12-14 and Table 5. The temperature measurement accuracy of these thermocouples is ±1.5 K. Each thermocouple was inserted through a 1.2 mm diameter hole in a compression fitting provided on the outer cylinder. Six thermocouples were installed 5 mm from the outer cylinder (thermocouple numbers 1, 2, 7, 8, 13, and 14), six were installed 16.5 mm from the outer cylinder (thermocouple numbers 3, 4, 9, 10, 15, and 16), and six were installed 28 mm from the outer cylinder (thermocouple numbers 5, 6, 11, 12, 17, and 18). With these 18 thermocouples, gas temperature changes in the outer channel were measured. To measure the gas temperature at the bottom of the experimental apparatus, 6 thermocouples were installed 25 mm from the bottom of the apparatus (thermocouple numbers 21-26), and to measure the gas temperature at the top, 8 thermocouples were installed 410 mm from the bottom (thermocouple numbers 31-38). Two thermocouples for cooling water temperature measurement were installed at the water cooling jacket entrance, and the heater temperature was measured with a built-in thermocouple.
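For reference, the layout just described can be captured in a small lookup table. The sketch below simply transcribes the positions and thermocouple numbers from the paragraph above, assuming the quoted ranges (21-26 and 31-38) are consecutive.

```python
# Sketch: thermocouple layout transcribed from the text. Ranges 21-26
# and 31-38 are assumed to be consecutive thermocouple numbers.

tc_layout = {
    # radial positions in the outer channel [mm from outer cylinder]
    "outer channel, 5 mm":    [1, 2, 7, 8, 13, 14],
    "outer channel, 16.5 mm": [3, 4, 9, 10, 15, 16],
    "outer channel, 28 mm":   [5, 6, 11, 12, 17, 18],
    # vertical positions [mm from the bottom of the apparatus]
    "bottom, 25 mm":          list(range(21, 27)),
    "top, 410 mm":            list(range(31, 39)),
}

total = sum(len(nums) for nums in tc_layout.values())
print(f"gas-temperature thermocouples: {total}")  # 18 + 6 + 8 = 32
```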
For measurement of the gas concentration, an ultrasonic gas concentration meter was used. The gas concentration measurement positions are shown in Figures 12-14 and Table 6. Gas was sampled with a microtube pump from the sampling ports provided in the bottom part, the upper part, and the horizontal double coaxial pipe of the experimental apparatus, and the concentration was measured. A constant-temperature thermal anemometer was used to measure the gas flow velocity. Figure 12 and Table 7 show the measurement positions of the gas flow velocity. The flow velocity was measured by attaching the anemometer to the valves installed on the horizontal double coaxial pipe.
Experimental Method.
The experimental procedure is as follows. The experimental apparatus is filled with helium gas. The inner cylinder is heated, and the outer cylinder is cooled by the water cooling jacket. After the heater temperature reaches steady state, the valves attached to the horizontal double coaxial pipe are opened at the same time. During the experiment, the temperature and mole fraction of the two component gases are measured, along with the flow velocity at the inlet and outlet of the horizontal double coaxial pipe. The heat input is 36 W (1.22 kW/m²), 144 W (4.89 kW/m²), and 324 W (11.0 kW/m²). Figure 15 shows the measured gas temperatures; symbols show the gas temperature at specific points (distance from the outer cylinder and thermocouple number). As shown in Figure 15, the gas temperature fluctuation near the inner cylinder is ±2.0 K during the experiment, whereas the fluctuation near the outer cylinder is ±0.5 K. Therefore, the temperature fluctuation increases from the outer cylinder toward the inner cylinder. In addition, the gas temperature difference between the vicinity of the inner cylinder and the vicinity of the outer cylinder is about 15 to 30 K. From these results, localized natural convection will occur between the inner and outer cylinders; such gas temperature fluctuation near the wall is seen when natural convection is produced along vertical walls at different temperatures. After starting the experiment, the gas temperature near the outer cylinder decreased, and a temperature difference was generated between the gas temperature in the vicinity of the inner cylinder and that in the vicinity of the outer cylinder. As shown in Figures 16 and 17, however, the temperature fluctuation and the temperature difference decrease from the upper part to the lower part of the apparatus. Therefore, the strength of the localized natural convection generated between the inner and outer cylinders differs between the upper and lower parts of the apparatus. Consider the change of the flow regime of the convection in the apparatus. Table 8 shows the Rayleigh numbers of the side and top spaces in the apparatus. The equivalent diameter of the side space is the width between the inner diameter and the outer diameter of the coaxial double cylinder, and the equivalent diameter of the top space is the distance from the upper surface of the inner cylinder to the upper lid. The Rayleigh number when the apparatus is filled with helium takes the minimum value, and when filled with air it takes the maximum value, because the density of air is about 7 times that of helium. When the apparatus is filled with helium, the Rayleigh number will be lower than 10³; therefore, the localized natural convection falls in the conduction regime [17]. On the other hand, when the apparatus is filled with air, the Rayleigh number will exceed 10⁴; therefore, the localized natural convection falls in the transition or boundary-layer regime.
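Two quick sanity checks can be run on the numbers quoted above. The first back-calculates the heated reference area implied by each heat input and heat flux pair; the second encodes the regime thresholds cited from [17]. Both are illustrative sketches built only from the values in the text; the regime label between 10³ and 10⁴, which the text does not name, is an assumption.

```python
# Sketch: sanity checks on the quoted heat inputs and regime limits.

# (1) Each (power, heat flux) pair implies a reference area A = P / q''.
for P, q in [(36.0, 1.22e3), (144.0, 4.89e3), (324.0, 11.0e3)]:
    print(f"P = {P:5.1f} W -> implied heated area = {P / q:.4f} m^2")
# All three pairs give ~0.0295 m^2, so the fluxes were computed
# against one consistent heated surface area.

# (2) Flow-regime thresholds cited from [17]. The label for the band
#     between 1e3 and 1e4 is an assumption; the text does not name it.
def convection_regime(Ra):
    if Ra < 1e3:
        return "conduction regime"
    if Ra < 1e4:
        return "transition regime (assumed label)"
    return "transition or boundary-layer regime"

print(convection_regime(5e2))  # helium-filled: Ra below 1e3
print(convection_regime(5e4))  # air-filled: Ra above 1e4
```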
Figure 18 shows the mole fraction change of air under a heat input of 324 W; the horizontal axis indicates the elapsed time after the valve opening. The measurement range of the ultrasonic concentration meter for helium gas is from 0 to 50%; therefore, the solid lines show the mole fraction of air. Figure 19 shows the velocity at the outlet of the outer passage of the double coaxial pipe. From the results obtained in the experiment, the air ingress process can be explained as follows. After the valve was opened, the mole fraction of air in the inner pipe of the horizontal double coaxial pipe increased sharply: air flows into the cylinder from the inner pipe of the horizontal double coaxial pipe, while air also flows into the cylinder from the outer pipe by counter-current flow. The helium that filled the bottom part of the cylinder is released to the outside of the cylinder, and this condition terminates quickly. The mole fraction of air in the outer pipe of the horizontal double coaxial pipe also increased from 0.5 to 0.6; during this time air diffuses into the cylinder. Afterward, the mole fraction of air decreased and then increased again. As shown in Figure 18, the mole fraction of air at measurement point (15) decreased at about 20 minutes; thus, natural circulation will be generated through the cylinder. The mole fraction of air at the other measurement points increased from 0.5 to 0.6 during the period from 20 to 30 minutes after the valve was opened. When we measured the flow velocity at the outlet of the outer pipe of the double coaxial pipe, we obtained a velocity signal.
Conclusions
We carried out experiments to investigate the mixing processes of two-component gases. The conclusions are as follows.
The working fluids were five kinds of gases: helium (He), nitrogen (N2), argon (Ar), neon (Ne), and air. The combinations of two-component gases were set to He/Air, He/N2, Ne/Ar, and N2/Ar, with density ratios of 1.38/10, 1.43/10, 5/10, and 7/10. The temperature difference between the heated wall and the cooled wall was set to 30, 50, and 70 K.
Figure 4: Gas temperature change with time in the high temperature side slot (He/N2; wall temperature difference 70 K).
Figure 5: Gas temperature change with time in the low temperature side slot (He/N2; wall temperature difference 70 K).
The Rayleigh number was evaluated as Ra = gβ(Tw − Te)D³/ν² × Pr, where Pr = μCp/λ. Here, g is the gravitational acceleration [m/s²], β is the thermal expansion coefficient [1/K], Tw is the average temperature of the heated flat plate in the left side slot [K], Te is the average gas temperature in the left side slot [K], and D is the equivalent diameter [m]; the width of the vertical slot was used as the equivalent diameter. ν is the kinematic viscosity [m²/s], μ is the viscosity coefficient [kg/m·s], Cp is the specific heat at constant pressure [J/kg·K], and λ is the thermal conductivity [W/m·K].
Figure 6: Illustration of change of flow pattern.
Figure 7: Gas temperature changes in high temperature side slot (T/C Numbers 8 and 9) when wall temperature difference is set to 30 K.
Figure 8: Gas temperature changes in high temperature side slot (T/C Numbers 8 and 9) when wall temperature difference is set to 50 K.
Figure 9: Gas temperature changes in high temperature side slot (T/C Numbers 8 and 9) when wall temperature difference is set to 70 K.
Figure 10: Experimental apparatus for the horizontal pipe break experiment.
Figure 11: Experimental apparatus of double coaxial cylinder.
Figure 12: Measurement positions of temperature, gas concentration, and flow velocity.
Figure 13: Measurement positions of temperature, gas concentration, and flow velocity (bottom view).
Table 1: Position of temperature measurement.
Table 2: Onset time of natural circulation.
Table 3: Rayleigh number based on the width of the high temperature side slot.
Table 4: Mutual diffusion coefficient of two-component gases.
Table 5: Positions of temperature measurement.
Table 6: Sampling positions of gas mixture.
Table 7: Positions of velocity measurement (gas flow velocity in the experimental apparatus of double coaxial cylinder); position 1: gas inlet and outlet of the annular flow channel of the horizontal double coaxial pipe; position 2: gas inlet and outlet of the inner pipe flow channel.
Table 8: Rayleigh number (×10⁴) of the side and top spaces in the apparatus. | 2018-12-23T07:28:09.504Z | 2018-09-02T00:00:00.000 | {
"year": 2018,
"sha1": "ef279952c0668ea4e765ff77f0873233146bffd7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2018/6378504",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ef279952c0668ea4e765ff77f0873233146bffd7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
229452114 | pes2o/s2orc | v3-fos-license | CHALLENGE STRESSORS AS A MOTIVATIONAL TOOL: FEMALE EMPLOYEES IN THE MONGOLIAN BANKING SECTOR
This study examines whether challenge stressors can be one of the root causes of burnout among Mongolian female employees in the banking sector. The service industry contributes more than 50 percent of GDP in Mongolia, and most employees in the financial sector are female. The adoption of smart technologies in this sector is thriving. Therefore, the first objective was to reveal whether there is a significant relationship between challenge stressors and burnout. The second objective was to test whether challenge stressors are inversely related to burnout and its three dimensions. A quantitative design was applied, and 101 validated questionnaires were analyzed. Overall, approximately 4% of the variance in burnout can be predicted by challenge stressors. HR professionals in charge of retention programs in the banking sector are advised to be well aware of how to manage challenge stressors.
INTRODUCTION
Background of the study
In some countries, the female population is slightly larger than the male population. Mongolia is a typical example, with a gender ratio of 49.1% men and 50.8% women in 2020 (National Statistical Office of Mongolia [NSOM], 2020). The female labor participation (FLP) rate in Mongolia has consistently been above 50% and above the world average since 1990 (Figure 1). About 80% of employees in Mongolia's financial sector are female (NSOM, 2019). Women's leadership (Millier & Bellamy, 2014) and FLP in Asian countries have been emphasized by many studies around the world; in Mongolia's case, however, very few studies have investigated workplace-related stress among females. From a practical point of view, handling continuous chores at home while performing well on the job can be a source of stressors that drives greater stress in women at work. Some studies have reported that stress is somewhat higher among females than males (Cohen & Janicki-Deverts, 2012; Kessler et al., 1985).
Also, Jex (1988) reported that stressors are root causes of strains including depression, exhaustion, and burnout. If stress triggers burnout, it is worth investigating the relationship between challenge stress and burnout, since challenge and hindrance stressors can have distinct influences on employees' outcomes (Podsakoff, 2007). Positive job stress has been found to be positively related to satisfaction with work and organizational commitment (Bhagat et al., 1985; Scheck et al., 1995; Cavanaugh et al., 2000; Boswell et al., 2004; Bingham et al., 2005; Podsakoff et al., 2007), as well as to motivation and performance (LePine et al., 2005). Work-related stressors were differentiated into two categories, challenge and hindrance stressors, in the studies of Cavanaugh et al. (2000), Boswell et al. (2004), and Bingham et al. (2005), which examined the effects of stressors in workplaces; even in the early literature, not all workplace stressors were observed to have negative effects on employees' attitudes or their performance outcomes (Bhagat et al., 1985; Sarason & Johnson, 1979; Scheck et al., 1995, 1997). According to LePine et al. (2005), challenge stressors, defined as positive work demands such as time pressure, workload, job responsibility, and job complexity, have a positive association with employee strains, motivation, and performance. Most importantly, studies in the category of positive job stressors found that challenge-related job demands are negatively related to job search (Cavanaugh et al., 2000; Boswell et al., 2004), intentions to leave, and turnover.
It has become more and more difficult for service providers to deliver an unforgettable and unique experience in order to keep attracting customers and turn them into loyal clients (Chathoth et al., 2013), which may leave employees stressed and burned out in the service industry (Behrman & Perreault, 1984; Yagil, 2008). In other words, constant interaction with customers (Behrman & Perreault, 1984) and task repetition (Taylor & Bain, 1999) may promote feelings of burnout at the workplace. It is common practice for service-oriented organizations to require employees to obey behavior standards and regulations in order to achieve organizational purposes (Diefendorff et al., 2006). This is so-called emotional labor, and its association with burnout has been confirmed by many studies in the literature, such as Kruml and Geddes (2000), Brotheridge and Grandey (2002), Zapf (2002), and Kim (2008). Therefore, it may be a common phenomenon for employees in the service industry to reach burnout because of the jobs they perform.
Mongolia's service industry plays a significant role in its economy, contributing 50.7% of GDP (NSOM, 2019).
The technological revolution has been gradually changing the course of the financial industry; it is not only driving Fintech but also influencing the basic forms of many financing services in the banking sector through big data, artificial intelligence, value chains, and blockchain. Scholars generally recommend that companies keep pace with major technological changes in order to stay competitive in the market (Wang et al., 2015). The impact of technological revolutions on the financial industry, as on all other industries, requires employees to acquire certain technical and nontechnical skills (Janis & Alias, 2017; Veresné Valentinyi, 2015). Therefore, reinventing and adapting oneself to a rapidly changing workplace creates another root cause of workplace stress that may leave employees emotionally exhausted.
Problem statement
Surveys conducted over the last four decades indicate that half to three-quarters of today's workforce describe their work as very stressful (Greenberg & Canzeroni, 1996). Unfortunately, this trend is expected to continue due to rapidly developing technological changes, economic crises, and environmental problems around the world. Female employees who work in the service industry may experience more stress than those employed in other sectors because of emotional labor as well as their household responsibilities at home. Also, human beings are in the transition stage of the technological revolution, and nobody can predict what future workplaces will look like or how many of us will be replaced by robots and machines by 2050. We can be sure the change is happening, and we are witnessing it in every walk of life. However, according to the literature, some stressors are acknowledged to be positive, promoting employees to advance their careers and be satisfied with work (Sarason & Johnson, 1979; Scheck et al., 1995; Cavanaugh et al., 2000; Boswell et al., 2004; Bingham et al., 2005), to demonstrate commitment to their organizations (Bhagat et al., 1985; Podsakoff et al., 2007), and to stay motivated and deliver better performance at work (LePine et al., 2005). Therefore, keeping this overall picture in mind, the researchers set out to find out how female employees in Mongolia's banking sector perceive challenge stressors at work.
Purpose of the study
Generally, this study aims to examine the relationship between challenge stressors and burnout among female employees in the banking sector in Mongolia.
Research Questions
The following research questions were developed in order to achieve the purpose of this study.
1. Are challenge stressors negatively associated with burnout among Mongolian female bank workers?
2. Are challenge stressors a significant predictor of burnout among female employees in the banking sector in Mongolia?
Background of Mongolia
Mongolia is a country of nearly 3.1 million inhabitants and is estimated to be one of the 3G (Global Growth Generators) nations, predicted to achieve high growth and yield profitable investment opportunities over the next 30 years due to its growing young generation and natural resources (Buiter & Rahbar, 2011). More specifically, about 1.5 million males and 1.6 million females were officially counted in the demographic statistics report of February 2019.
The financial industry and banking sector in Mongolia
At the end of 2018, 14 commercial banks were operating in Mongolia with 15,000 employees across 1,516 branches. About 938,000 bank borrowers and 9.1 million customers throughout the country received financial services through offline and online channels. From 2009 to 2014, the banking sector in Mongolia consistently accounted for around 95% of total financial market assets (Davaasuren, 2015).
Despite the traditional financial system, the advancement of information and communication technology is reshaping the financial industry around the world by offering various types of online payments, transactions, savings, loans, and insurance services; this is so-called Fintech. In Mongolia, several Fintech companies such as LendMN, Ard Credit, and Hipay offer different financial services and have already extended their markets abroad (Jargalsaikhan, 2019). Many new initiatives and changes arising from the revolution in information and communication technology are being generated in Mongolia's financial sector, but very few studies have been carried out on HR issues in the industry.
Challenge stressors
Today's workplace is becoming more competitive and stressful than ever before due to the effects of the revolution in information and communication technology, among many other factors. It is generally agreed that machine learning and robotics will change every line of work (Harari, 2018). According to Frey and Osborne (2013), approximately 47% of jobs in the USA and 54% in Europe will disappear due to automation. Therefore, it is no longer arguable that tomorrow's workplaces will demand more technology-related competencies of employees (Golightly et al., 2016; Neugebauer et al., 2016; Yu et al., 2015). As for employers, about 80% of them expect their employees to adapt and gain new skills in order to fit their jobs in the future (World Economic Forum, 2018). Besides, companies in the service industry, such as the banking sector, urge their staff to engage in emotional labor. As a result, many employees need to control their emotions and act according to a company's standard requirements, which may push them toward burnout, because being in a constant state of emotional labor can cause emotional exhaustion (Hochschild, 1983).
In the case of female employees, they may be more stressed due to maternal health, fertility, childcare and other family-oriented policies, labor-saving consumer durables, social norms and culture, and structural changes in the economy (Our World in Data, 2017). For instance, Marshall and Tracy (2009) and Marshall and Barnett (1993) highlighted findings that work-family conflict is pervasive among female workers who have an infant. In response to these circumstances, female employees may become more stressed and depressed in the workplace. However, certain kinds of stressors have been found to be job demands that promote employees' personal growth and development at work (Cavanaugh et al., 2000).
Job burnout
Job burnout is defined as a syndrome composed of three dimensions: emotional exhaustion, depersonalization, and personal accomplishment (Maslach & Jackson, 1981; Maslach et al., 2001).
Emotional exhaustion is a depleted state of mental and emotional energy resulting from the execution of daily work activities, and it continues to affect workers throughout the day.
Depersonalization refers to the negative or pessimistic attitudes that workers demonstrate toward clients in any industry. It is particularly harmful in the service industry, where proactive and respectful interactions are always prioritized.
Reduced personal accomplishment represents the self-evaluation dimension of burnout. People feel incompetent, unsuccessful, and demotivated when their performance is revealed to be poorer than that of their colleagues at work, which causes dysfunctional attitudes, low performance, and personal ineffectiveness (Maslach & Jackson, 1981). Burnout is thus manifested when employees perceive difficulty in meeting organizational expectations (Hobfoll & Shirom, 2000).
A review article by Purvanova and Muros (2010), based on 183 studies of the three dimensions of burnout, showed that female employees were more likely to be emotionally exhausted than male employees.
The relationship between Challenge stressors and burnout
The service industry sells various products and services through intensive and proactive communication between employees and customers. Almost all employees in the banking sector must interact with customers in particular circumstances, and this intensive, constant interaction is likely to produce work-related stress (Behrman & Perreault, 1984). More broadly, this kind of stress happens when employees consume too many emotional resources selling a product by interacting with different types of customers on a daily basis (Cordes & Dougherty, 1993). According to Taylor and Bain (1999), task repetition and monotonous activities can cause stress during employee-customer interaction in the service industry, which might increase feelings of burnout. Zapf et al. (1999) assert that there is an association between emotional labor and emotional exhaustion.
Stress has become a very popular topic among scholars. Recently, however, researchers have found that challenge stressors have negative associations with job search (Cavanaugh et al., 2000), turnover (Boswell et al., 2004), and intentions to leave an organization. Moreover, according to LePine et al. (2005), managers may be able to motivate their staff and improve their performance by increasing challenge stressors, because challenge stressors have a positive relationship with satisfaction, motivation, organizational commitment, and employees' performance (Bingham et al., 2005; Scheck et al., 1995). Therefore, the following hypotheses on the relationship between challenge stressors and burnout were examined in this study.
H1: There is a relationship between challenge stressors and burnout.
H2: Challenge stressors are negatively related to burnout.
H2a: Challenge stressors are negatively related to emotional exhaustion.
H2b: Challenge stressors are negatively related to depersonalization.
H2c: Challenge stressors are negatively related to personal accomplishment.
METHODOLOGY
A quantitative research design was employed. A self-administered online questionnaire was created using the Google survey platform to collect data from 101 female bank workers from January to April 2019. Thirty-five previously validated items were chosen: challenge stressors (6 items) were measured using the scale created by Rodell and Judge (2009), based on prior validated scales (Cavanaugh et al., 2000; Ivancevich & Matteson, 1980), and burnout (22 items) was measured using the scale originated by Maslach et al. (1996). Six-point and 5-point Likert rating scales were used. A translation validation process was conducted. Snowball and convenience sampling methods were applied, and SPSS 20.0 was used to analyze the compiled data.
Descriptive Analysis
Approximately 93% of the participants belong to two different age categories between 18 -30 and 31-40. Among them, 65% are married. The majority of them (58%) have two children. Regarding the salary level, half of them have indicated that their monthly salary ranges between $192 and $384. Most of them (45%) have 1 to 5 years of working experience, whereas 37% have 3 to 5 years of career seniority. For the educational level, most surveyed participants (80%) hold a bachelor's degree while 17% have a master's degree.
Validity and Reliability
The reliability analysis generated Cronbach's alpha values for the three sub-dimensions of burnout and for challenge stressors that were all higher than .70, ranging from .70 (burnout) to .84 (challenge stressors).
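For readers reproducing the reliability analysis, Cronbach's alpha is conventionally computed from the item variances and the total-score variance, as in the sketch below. The item matrix here is toy data, not the study's responses.

```python
# Sketch: Cronbach's alpha as typically computed for a Likert scale.
# `items` is a respondents-by-items matrix of scored (already
# reverse-coded where needed) item responses.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents x 4 items on a 5-point scale.
items = pd.DataFrame([[4, 4, 5, 4],
                      [2, 3, 2, 2],
                      [5, 5, 4, 5],
                      [3, 3, 3, 4],
                      [1, 2, 2, 1]])
print(round(cronbach_alpha(items), 2))
```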
Correlation Analysis
According to the correlation analysis results shown in Table 1, there is a statistically significant positive relationship between challenge stressors and burnout (r = .208, p < .05). The relationships between challenge stressors and the three sub-dimensions of burnout were also examined: emotional exhaustion and depersonalization have statistically significant associations with challenge stressors (r = .339, p < .01, and r = .268, p < .01, respectively), and, interestingly, these correlations were positive. Personal accomplishment is the only sub-dimension with a negative and significant correlation with challenge stressors (r = -.233, p < .01); thus, Hypothesis 2c was accepted. Notably, the correlation analysis also demonstrates that the number of children is negatively correlated with burnout (r = -.207, p < .05), and that emotional exhaustion is negatively related to age (r = -.212, p < .05), marital status (r = -.233, p < .05), number of children (r = -.281, p < .05), and career seniority (r = -.215, p < .05).
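The correlations above are standard Pearson tests. The sketch below shows the typical computation on hypothetical stand-in data and also makes explicit how the reported r of .208 corresponds to the roughly 4% of variance in burnout noted in the abstract.

```python
# Sketch: Pearson correlation test on hypothetical stand-in data,
# plus the r-to-variance-explained arithmetic behind the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
challenge = rng.normal(size=101)                  # n = 101, as in the study
burnout = 0.2 * challenge + rng.normal(size=101)  # weak positive association

r, p = stats.pearsonr(challenge, burnout)
print(f"r = {r:.3f}, p = {p:.3f}")

# With the paper's r = .208, a single-predictor regression explains
# r^2 = .208**2, i.e., roughly 4% of the variance in burnout.
print(f"variance explained for r = .208: {0.208 ** 2:.3f}")
```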
CONCLUSION AND RECOMMENDATIONS
The main objective of this study was to examine the relationship between challenge stressors and burnout. Based on the results of the correlation and simple linear regression analyses, it can be concluded that challenge stressors are a statistically significant predictor of burnout among female employees in the banking sector in Mongolia. This means that challenge stressors are perceived as a kind of negative stress that is likely to lead participants to feel emotionally exhausted and to demonstrate negative reactions toward customers. This is consistent with the findings of the review paper by Purvanova and Muros (2010) and the study by Behrman and Perreault (1984), who proposed that employed females are more emotionally exhausted than employed males. The main result of this study is also in line with Zapf et al. (1999), who suggested that emotional labor makes employees more psychologically exhausted.
However, this overall finding is not consistent with the results of other key studies on challenge stressors conducted by Cavanaugh et al. (2000), Boswell et al. (2004), Podsakoff and LePine (2005), and Podsakoff et al. (2007). According to them, employees perceive challenge stressors as positive stress that makes them satisfied, motivated, and committed to their jobs and pushes them to perform better.
Most interestingly, apart from the main finding, among the three dimensions of burnout, personal accomplishment was found to have a significant negative association with challenge stressors, in line with other key studies on challenge stressors such as Bhagat et al. (1985) and Cavanaugh et al. (2000). This means that if female workers in the banking sector feel more self-accomplished after becoming familiar with challenge stressors in the workplace, they may be motivated and driven by the effects of challenge stressors instead of feeling burned out or emotionally tired. This may be explained in relation to Mongolian culture, which is characterized as high in individualism (Rarick et al., 2014) in terms of Hofstede's cultural assessment model. Individuals in individualistic cultures can maintain a stable, independent self (Triandis, 1993), which indicates that achieving personal goals in society is more important to them. Hence, since the impact of challenge stressors on burnout was found to be two-sided among the participants of this study, HR managers in Mongolia's banking sector need to be fully aware of how to manage challenge stressors among female employees, using them as a motivational tool that fosters a sense of self-accomplishment without driving burnout in the 21st-century workplace. A limitation of this study is the sample size: data collected from 101 female employees in Mongolia may be insufficient, so collecting more data is advisable to strengthen further statistical results.
"year": 2020,
"sha1": "e4d7655b364e71df6d73e744db2fe645f52f8396",
"oa_license": "CCBY",
"oa_url": "https://www.ieeca.org/journal/index.php/JEECAR/article/download/451/284",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1cb421532fda46233d8086130e3b99f42bab5d0b",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
307503 | pes2o/s2orc | v3-fos-license | Assessing Politicized Sexual Orientation Identity: Validating the Queer Consciousness Scale
ABSTRACT Building on psychological theories of motivation for collective action, we introduce a new individual difference measure of queer consciousness, defined as a politicized collective identity around sexual orientation. The Queer Consciousness Scale (QCS) consists of 12 items measuring five aspects of a politicized queer identity: sense of common fate, power discontent, system blame, collective orientation, and cognitive centrality. In four samples of adult women and men of varied sexual orientations, the QCS showed good test-retest and Cronbach's alpha reliability and excellent known-groups and predictive validity. Specifically, the QCS was positively correlated with identification as a member of the LGBTQ community, political liberalism, personal political salience, and LGBTQ activism, and negatively correlated with right-wing authoritarianism and social dominance orientation. The QCS mediated relationships between several individual difference variables and gay rights activism and can be used with both LGBTQ people and allies.
Little research has been conducted on the development of a politicized sexual orientation identity, though studies have focused on the connection between sexual and gender minority identification and political participation. For example, Renn and Ozaki (2010) found that college student leaders of LGBTQ organizations had strongly connected their leadership identities as activists to their queer identities. They found that for many LGBTQ students, "to be LGBT is to be [an] activist" (Renn & Ozaki, 2010, p. 22). However, Stürmer and Simon (2004) found that a strong identification with the gay rights movement (rather than a strong gay identity) predicted LGBTQ activism. Similarly, Van Dyke and Cress (2006) found that gays and lesbians were more likely to share a collective homosexual identity when they felt threatened by the anti-gay movement, in this case, around the HIV/AIDS epidemic. In heterosexuals, Wilkinson and Sagarin (2010) found that a strong identification with the gay community could lead to the development of a LGBTQ activist identity, providing evidence that identification as a sexual minority is not always a determinant of LGBTQ activism.
Politicized group identities
In mainstream social and personality psychology, researchers have found that a politicized group identity is key to understanding motivation for activism; simply being a member of a stigmatized group is not enough to spur individual action (Duncan, 1999, 2012; Duncan & Stewart, 2007; Stürmer & Simon, 2004; van Zomeren, Postmes, & Spears, 2008). The social identity model of collective action (SIMCA; van Zomeren et al., 2008) identified identity, injustice, and efficacy as key elements comprising a politicized group identification. Duncan (2012) integrated individual difference variables into the SIMCA model to capture within-group differences in group consciousness and activism (see Figure 1). She and her colleagues (Duncan, 1999, 2012; Duncan & Stewart, 2007) adapted popular American National Election Studies items (Miller, Kinder, Rosenstone, & the National Election Studies, 1999) to create a measure of feminist consciousness (or politicized gender identity) that was then used to mediate between individual differences in personality and life experiences and women's rights activism. This approach is useful in integrating individual difference approaches to motivation for activism with social identity approaches, as it helps to clarify which members of a particular social group will tend to become politically active and which will not. This measure of politicized group identity (or group consciousness) assesses five key elements of stratum consciousness as described by Gurin and colleagues (Gurin & Markus, 1989; Gurin, Miller, & Gurin, 1980) and reflects the identity, injustice, and efficacy components of SIMCA (van Zomeren et al., 2008). Group consciousness consists of (1) identification with the group, or a sense that what affects some members of one's group will have an impact on all members of the group (also called common fate); (2) power discontent, or the sense that one's group does not have access to an appropriate level of power and resources in society; (3) rejection of legitimacy, or system blame, a sense that the lack of power experienced by one's group is unfair and related to systemic rather than individual factors; (4) collective orientation, or the sense that the best way to address unfair power differentials is by organizing together and working as a group; and (5) cognitive centrality, or the tendency to use one's group membership as a cognitive schema when processing information. According to this model, politicized group identification relies on the development of a shared belief that the low-status group experiences social injustice, and that the group should work as a collective to change its status in society. This model has been applied to understand class consciousness, race consciousness, age consciousness (Gurin et al., 1980), and, most commonly, feminist consciousness (Duncan, 2010; Gurin, 1985; Henderson-King & Stewart, 1999). People scoring high on these measures tend to be more politicized and more politically active on behalf of their group than people scoring low. Using a composite measure comprising all five elements described above, Duncan (1999, 2012) and Duncan and Stewart (2007) focused on understanding the ways in which group consciousness mediates the relationships between individual difference variables and collective action. The current studies extend this research to queer consciousness and related activism.
A few studies have examined politicized identities around LGBTQ movements; however, no individual difference measure of queer consciousness currently exists that encompasses all five of these elements. To be sure, previous studies have examined the role of identification, sense of belongingness, or common fate (Cimino, 2007; Harris, Battle, Pastrana, & Daniels, 2015). For example, Stürmer and Simon (2004) measured politicized gay identity by asking participants to rate their identification with a gay social movement organization, and Simon, Pantaleo, and Mummendey (1995) asked participants to rate how similar to and different they were from in-groups and out-groups.
In the current article, we present an individual difference measure of queer consciousness based on Gurin et al.'s (1980) model of stratum consciousness that not only includes all five aspects of a politicized collective identity but can also measure the queer consciousness of allies to the LGBTQ movement. Researchers attempting to examine the politicization of sexual orientation identities need a reliable and valid measure of group consciousness that can be used with members of the LGBTQ community and allies alike. This is important because not all people politicize their sexual orientation, and it is the politicization of this identity that is most closely related to activism for LGBTQ rights and that mediates the relationships between individual difference variables and activism.
This new measure is especially relevant in today's quickly changing political climate. For example, by 2005, almost 2000 U.S. schools sponsored Gay-Straight Alliances (GSAs) to provide a forum and a safe place for those who identified as sexual minorities and their allies (Savin-Williams, 2005), and in 2015 the U.S. Supreme Court validated the right of gay people to marry. There are many questions about the LGBTQ community and allies that cannot be answered without a reliable and valid measure of group consciousness, such as the effects of GSAs and pro-gay media on attitudes and behaviors. Our measure can be used to learn more about the LGBTQ community, LGBTQ activism, pro-gay heterosexuals, anti-gay homosexuals and heterosexuals, and the political climate around LGBTQ rights.
Additionally, we chose to make the measure inclusive for those who identify as straight with the knowledge that sexuality can be fluid (Diamond, 2008) and that many young people are rejecting categorical labels based on sexual orientation (Savin-Williams, 2005). We also theorize that it is possible for people to be heterosexual but to have a political queer identity, evidenced by the existence of allies participating in GSAs.
Queer consciousness
For our measure, we chose the term queer consciousness to describe a politicized identity around sexual orientation, and we chose to use the acronym LGBTQ to describe the low-status sexual minority group. Categorizing this community has been the subject of much debate in both psychology and sociology, and we have used theories and research from both fields in our research. Stryker (2008) referred to the constantly changing acronym as "alphabet soup" (p. 21), and Savin-Williams (2005) documented that, in the history of activist groups, the acronym used to group sexual (and gender) minorities changed from LGB (lesbian-gay-bisexual) to LGBTQ (lesbian-gay-bisexual-trans-queer-questioning). Within our measure, we used LGBTQ to categorize sexual minorities because it includes both sexual orientation and gender identification (for more information on the inclusion of trans in the LGBTQ rights movement, see Broad, 2002). We also specifically included the term queer because of its academic use (in queer theory) and its mainstream use as an umbrella term.
Though the word queer was once used pejoratively to describe sexual minorities, it has since been reappropriated by the community as a term that describes someone who opposes the gender binary and heteronormative culture (Renn, 2007). As society progresses and views on homosexuality change, sexual orientation has become a more fluid concept; rather than using definitive categories, people are beginning to accept that sexual orientation exists along a spectrum (Barker, Richards, & Bowes-Catton, 2009). For example, Savin-Williams (2005) noted a recent trend of teenagers who have turned away from binding terminology such as gay and lesbian, instead preferring not to label their sexuality. In some ways, this has simplified the process of identity development by emphasizing the fluidity of sexuality (Horowitz & Newcomb, 2001). At the same time, it has complicated traditional views of sexual orientation, making it more difficult to create categories on the basis of LGBTQ identification.
We labeled the politicized identity around the LGBTQ movement as queer consciousness because it provides theoretical consistency as well as some practical benefits. The term used to describe someone who is invested in the women's rights movement is feminist; however, there is no truly homologous term for the LGBTQ movement. Furthermore, in the same way that men can be feminists, allies can make important contributions to the movement. Ji, Du Bois, and Finnessy (2009) found that it is possible to increase ally identification through education, providing evidence for the idea that an identity connected to the LGBTQ movement can be developed and become politicized for heterosexuals. Also, we felt that using queer in the measure's name reflected the politicized nature of the identity we wanted to assess.
The term queer has distinctly political connotations. Not only is it a broad term with which to describe the community, encompassing any behavior that is not traditionally heterosexual (if there is such a thing), but queer is also one of the most modern and political terms in use today. Johnson (2007) argued that queering could be important for modern coalition politics because it is issue-based and focused on breaking down boundaries. Though the term queer is controversial and the queer movement itself can have different goals from the lesbian, gay, bisexual, trans, and numerous other identity politics movements, it is both the broadest umbrella term and the one carrying the heaviest political connotation. Furthermore, we wanted to follow the current trend of "queering" so that the title of our measure remained salient to modern and future sexual identity politics. (For more information on the historical use of the term queer, see Johnson, 2007; Wahlert, 2012.)
In developing the Queer Consciousness Scale (QCS), reliability and validity data were collected from four samples: two samples of college women, an ideologically varied general sample of both men and women recruited from Internet blogs, and a large general sample of women and men collected via Mechanical Turk. Because there are no existing measures of politicized sexual orientation identity, we concentrate our efforts on establishing Cronbach's alpha and test-retest reliability and known-groups and predictive validity. In the first study, the measure is introduced and validated with a sample of young women, and test-retest reliability in a second sample is described. In the third sample, the results of the first study are replicated and extended to a sample of men. In the fourth sample, the results are further extended to a large general sample of women and men, additional correlates are presented, and the ability of the QCS to mediate the relationship between some individual difference variables and gay rights activism is tested.
Past research has shown that members of oppressed groups are more likely than members of nonoppressed groups to develop group consciousness and to become politically active (Duncan & Stewart, 2007; Gurin et al., 1980; Montgomery & Stewart, 2012). Therefore, we hypothesize that women and people identifying as gay, lesbian, bisexual, and queer will score higher on the QCS than men and people identifying as straight. In addition, LGBTQ people who have "come out" to their families, friends, and at work should score higher on queer consciousness than those who have not (Swank & Fahs, 2013a, 2013b). Previous research has established that participating in women-only groups and explicitly teaching women to question existing gender relations are related to increased feminist consciousness in women (Bargad & Hyde, 1991; Henderson-King & Stewart, 1999). We expect analogous relationships for LGBTQ people. Therefore, when compared to nonparticipants, QCS scores should be higher for people who have participated in LGBTQ organizations, attended Gay Pride parades, and taken college-level queer studies classes.
In terms of predictive validity, the QCS should be correlated with attitudes and behaviors related to open acceptance of the rights of gay people. One of the most obvious measures to test is self-identification along the Kinsey sexual orientation scale (Kinsey, Pomeroy, & Martin, 1948). Identifying along the non-straight sexual orientation spectrum should be related to higher scores on the QCS. Liberals tend to endorse progressive political and social views at higher rates than do conservatives (Jones, 2015); therefore, liberal political orientation should be related to higher scores on the QCS. Right-wing authoritarianism (RWA) and social dominance orientation (SDO) are associated with conservative beliefs about a variety of social issues, and authoritarians are known to be hostile to gay people (Peterson, Doty, & Winter, 1993). Therefore, QCS scores should be negatively correlated with RWA and SDO scores.
Personal political salience (PPS), or the tendency to attach personal meaning to political events, is related to feminist and race consciousness and activism (Duncan, 1999, 2005; Duncan & Stewart, 2007). We expect that PPS should also be related to queer consciousness. In terms of behavior, feminist consciousness is related to participation in women's rights activism (Duncan, 1999; Duncan & Stewart, 2007). Similarly, the QCS should correlate positively with LGBTQ-related political activism.
Participants and procedures
One hundred and twenty-three female students enrolled at a small women's college located in the northeastern United States were asked to complete an online survey using Surveymonkey.com for course credit or the chance to win a $50 Amazon.com gift card. Sixty-two percent of participants identified as White, 23% as Asian American, 10% as African American, 6% as Latina, and 1% as Native American. To evaluate test-retest reliability, a separate sample of 153 women recruited from three different psychology classes was administered the QCS in class on two occasions, approximately 1 month apart.
Measures
Queer consciousness scale
As mentioned earlier, the QCS was created by adapting ANES items used to assess the four elements of Gurin's (1985) stratum consciousness, along with cognitive centrality (Gurin & Markus, 1989). The items were adapted with the following objectives in mind: (1) people of varied sexual orientations should be able to answer the questions; (2) the scale should include a balance of negatively worded as well as positively worded items; (3) each aspect of queer consciousness should be represented by at least two items, each worded in opposite directions; (4) the scale should be relatively short and quick to complete; and (5) the scale should include an item that explicitly asked participants to indicate their level of identification with the queer community.
Participants were asked to read 12 statements and rate how strongly they agreed or disagreed, using a 5-point scale where 1 = strongly disagree, 2 = somewhat disagree, 3 = neither agree nor disagree, 4 = somewhat agree, and 5 = strongly agree. The complete scale is included in Appendix A. Items 3, 4, 6, 7, and 10 were reverse scored. The mean of all items completed constituted the scale score.
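A minimal scoring sketch, following the description above exactly: responses on the 1-5 scale, items 3, 4, 6, 7, and 10 reverse-scored (6 minus the response), and the scale score taken as the mean of completed items. The example response vector is hypothetical.

```python
# Sketch: QCS scoring as described in the text. Missing responses
# (np.nan) are simply skipped when taking the item mean.
import numpy as np

REVERSED = {3, 4, 6, 7, 10}  # 1-based item numbers to reverse-score

def score_qcs(responses):
    """responses: 12 values in 1..5, or np.nan for skipped items."""
    scored = [
        (6 - r if i in REVERSED else r)  # 5-point reverse: 6 - x
        for i, r in enumerate(responses, start=1)
    ]
    return np.nanmean(scored)

example = [5, 4, 2, 1, 5, 2, 1, 4, 5, 2, 4, np.nan]
print(round(score_qcs(example), 2))  # mean of the 11 completed items
```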
Known groups validity
Participants were asked a series of questions about their membership in groups that might reasonably be expected to be populated with people who would score higher on queer consciousness than nonmembers. Participants were asked to describe their sexual orientation in an open-ended answer; answers were coded as straight (63%) or queer (37%, including lesbian, gay, bisexual, and queer). In addition, queer participants were asked if they had come out to their families (63%), friends (89%), and at work (43%). Participants were asked whether they had been a member of an LGBTQ organization (35%), attended a Gay Pride parade (48%), or taken a college-level queer studies class (25%). We expected that people who had participated in queer-related activities and who had come out would score higher on queer consciousness than participants who had not.
Sexual orientation
We asked the participants to place themselves on the Kinsey Scale (Kinsey et al., 1948), a 7-point Likert scale ranging from exclusively heterosexual to exclusively homosexual. Answers were recoded so that higher numbers represented a straight sexual orientation. See Table 2 for alpha reliabilities for scales and means and standard deviations for all variables.
Liberalism-conservatism
Participants were asked to place themselves on a scale from 1 to 7, with 1 indicating strong liberalism, 7 indicating strong conservatism, and 4 indicating a more moderate political attitude (Miller et al., 1999). The scale was reverse scored so that high scores indicated stronger liberalism.
Right-wing authoritarianism
Altemeyer's (2006) Right-Wing Authoritarianism scale consisted of 22 items, 10 of which were reverse scored. Participants were asked to indicate their feelings on an 8-point scale that progressed from very strongly disagree to very strongly agree. A sample item is "The established authorities generally turn out to be right about things, while the radicals and protestors are usually just 'loud mouths' showing off their ignorance."
Social dominance orientation
Social dominance orientation (SDO; Pratto, Sidanius, & Levin, 2006) was measured by asking participants to indicate whether they agreed or disagreed with 16 statements (half of which were reverse scored) on a 7-point Likert scale ranging from "Right now I feel strong agreement" to "Right now I feel strong disagreement." Items include statements such as "Some groups of people are simply inferior to other groups" and "We should do what we can to equalize conditions for different groups" (reverse scored).
Personal political salience (PPS)
The PPS measure consisted of 31 political and social events rated on a 3-point scale (1 = not at all personally meaningful, 2 = a little personally meaningful, 3 = very personally meaningful; Duncan, 1999, 2005; Duncan & Stewart, 2007). Participants were instructed: "Please rate each of the following events for how personally meaningful it is (or was) to you (i.e., how much it affected your life or reflects your values and concerns)." The measure included contemporary as well as historical events (e.g., the Obama presidency, the Vietnam War). Overall PPS scores were computed by summing scores on all rated events and taking an item mean. In addition, we created an LGBTQ-events-focused PPS score consisting of mean ratings of the following events: Don't Ask, Don't Tell repeal; Stonewall; Matthew Shepard's murder; Defense of Marriage Act (DOMA) repeal; same-sex marriage; lesbian-gay rights; transgender rights; and HIV/AIDS.
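A brief sketch of the PPS scoring just described: an item mean over all rated events, plus the LGBTQ-events subscale mean over the listed events. The event ratings here are hypothetical.

```python
# Sketch: PPS scoring - overall item mean plus the LGBTQ-events
# subscale mean. The ratings below are hypothetical example data.
import numpy as np

LGBTQ_EVENTS = [
    "Don't Ask, Don't Tell repeal", "Stonewall",
    "Matthew Shepard's murder", "DOMA repeal", "same-sex marriage",
    "lesbian-gay rights", "transgender rights", "HIV/AIDS",
]

def pps_scores(ratings):
    """ratings: dict of event name -> rating on the 1-3 scale."""
    overall = np.mean(list(ratings.values()))
    lgbtq = np.mean([ratings[e] for e in LGBTQ_EVENTS if e in ratings])
    return overall, lgbtq

example = {event: 3 for event in LGBTQ_EVENTS}
example.update({"Obama presidency": 2, "Vietnam War": 1})
print(pps_scores(example))  # (overall mean, LGBTQ-subscale mean)
```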
Activism
LGBTQ-related political activism was assessed by asking participants to indicate the type and level of their participation in three causes: LGBTQ rights (50%), repeal of DOMA (26%), and HIV/AIDS activism (31%). Respondents marked whether or not they had participated in up to six specific actions in support of each cause. Actions included signing petitions; attending meetings; writing, calling, or visiting a public official; contributing money; participating in organizations; and participating in rallies or demonstrations (Duncan, 1999; Duncan & Stewart, 2007). Scores could range from 0 to 6 actions for each of the activism variables.
Reliability analyses
The 12-item QCS showed adequate reliability. Cronbach's alpha for the first sample was .76. Alpha for the test-retest samples was .74 at time 1 and .79 at time 2. Over the course of approximately 1 month, the QCS showed strong test-retest reliability (r = .87, p < .001).
Validity analyses
Known groups validity
Participants who identified as queer scored significantly higher on the QCS than participants who identified as straight (see Table 1). Queer participants who were out to their friends scored significantly higher on the QCS than those who were not; however, there were no differences in QCS scores based on whether participants were out to their families or out at work. Participants who had been a member of an LGBTQ-related organization, attended a Gay Pride parade, or taken a college-level queer studies class scored significantly higher on the QCS than participants who had not.
Predictive validity
The QCS correlated strongly negatively with a Kinsey straight sexual orientation, right-wing authoritarianism, and social dominance orientation (see Table 2). The QCS correlated strongly positively with liberalism, overall PPS, and PPS focused on LGBTQ events. In terms of activist behaviors, the QCS correlated positively with LGBTQ rights activism and repeal of the Defense of Marriage Act activism but was unrelated to HIV/AIDS activism.
Discussion
In this first study, the QCS showed good Cronbach's alpha and test-retest reliability and predictive validity in samples of college women. In terms of known-groups validity, five out of seven groups that were expected to score higher on the QCS (self-identified queer people; queer participants who had come out to their friends, though not those who had come out to their families or at work; members of LGBTQ organizations; those who had attended a Gay Pride parade; and participants who had taken queer studies classes) scored higher on the QCS than nonmembers of these groups. In addition, the QCS was strongly related to liberalism and personal political salience and strongly negatively related to a straight sexual orientation, authoritarianism, and social dominance orientation. Finally, the QCS was correlated with two out of three types of LGBTQ-related political activism. HIV/AIDS activism was unrelated to scores on the QCS, possibly because HIV/AIDS is no longer seen as a primarily gay-male-focused disease, and this was a college student sample. We conducted two additional studies with mixed-gender samples. In the first of these studies (Study 2), we simply replicated the analyses conducted for Study 1.
Participants and procedures
To oversample LGBTQ people and to validate the QCS on a sample of women and men with a wider range of ideological views and a wider age range, we collected data in two ways. First, we used snowball sampling by posting a link to and an explanation of the survey on the second and third authors' Facebook walls and asking others to do the same. To ensure a balanced number of participants with different political views, we also posted the survey on a total of 15 online conservative and liberal blogs; the blogs were found using the Google search engine, and a link to the survey with a short description was posted. Participants were offered the chance to win a $50 gift certificate to Amazon.com in a raffle. Participants who did not answer at least two thirds of the QCS items were deleted from the sample, leaving a total of 182 participants in the final sample. Seventy-six percent of participants identified as female, and 18% identified as male. Ages ranged from 16 to 60 (M = 27.78, SD = 11.20). For sexual orientation, 38% of the online participants self-identified as lesbian, gay, bisexual, or queer, and 50% identified as straight. In terms of ethnicity, 88% of participants identified as White, and 22% identified as people of color (Native American, Asian American, Black, or Latino). Thirty-one percent reported an annual family income under $40,000, 31% an income between $40,001 and $100,000, 28% an income between $100,001 and $200,000, and 11% an income over $200,000. Mean educational level was a 2-year college degree. Thirty percent had been or were currently married or had made a life commitment to a partner, and 9% had been divorced.
Measures
Measures for Study 2 were identical to those used in Study 1. See Table 2 for alpha reliabilities for scales and means and standard deviations for all variables.
Results
The QCS showed good reliability with a Cronbach's alpha of .82. Consistent with the results of Study 1, participants who identified as queer scored significantly higher on the QCS than participants who identified as straight.
As hypothesized, women scored higher than men on the QCS (see Table 1).
Queer participants who were out to their families, friends, and at work scored significantly higher on the QCS than participants who were not. As in Study 1, participants who had been a member of an LGBTQ-related organization, attended a Gay Pride parade, or taken a college-level queer studies class scored significantly higher on the QCS than participants who had not. Consistent with the results of Study 1, the QCS correlated strongly negatively with a Kinsey straight sexual orientation, right-wing authoritarianism, and social dominance orientation (see Table 2). The QCS correlated strongly positively with liberalism, overall PPS, and PPS focused on LGBTQ events. In terms of activist behaviors, the QCS correlated positively with LGBTQ rights activism, repeal of the Defense of Marriage Act activism, and HIV/AIDS activism.
Discussion
Consistent with the results of the first study, the QCS showed good Cronbach's alpha reliability in a broader demographic sample. In terms of known-group validity of the QCS, all seven of the groups that were expected to score higher on the QCS (self-identified queer people; women; queer participants who had come out to their families, friends, and at work; members of LGBTQ organizations; those who had attended a Gay Pride parade; and participants who had taken queer studies classes) scored higher on the QCS than nonmembers of these groups. In addition, the QCS was strongly related to liberalism and personal political salience and strongly negatively related to a straight sexual orientation, authoritarianism, and social dominance orientation. Finally, the QCS was correlated with all three types of LGBTQ-related political activism.
We replicated these results in one additional, larger sample of men and women. We added two measures of political knowledge to expand our base of dependent variables. Although we expected no relationship between QCS scores and general political knowledge, we hypothesized positive correlations between the QCS and knowledge about LGBTQ political history, and that the QCS would mediate relationships between these variables and gay rights activism.
Participants and procedures
Participants consisted of 607 American adults recruited from the Web site Mechanical Turk, where people can complete short surveys for small payments (see Buhrmester, Kwang, & Gosling, 2011). Participants responded to a listing advertising a research study on attitudes about gender and sexual orientation. Participants were required to be at least 18 years old and American citizens. Sixty-three percent of participants identified as women, 37% as men, and .5% declined to identify their gender. Eighty percent of participants identified as White/Caucasian, 12% as Black/African American, 6% as Latino/Hispanic, 4% as Asian American, 3% as Native American, and 2% as "other ethnicity." Percentages sum to more than 100% because participants could check more than one ethnicity. Age ranged from 18 to 75, with a mean of 35.40 (SD = 12.56). Mean social class of origin was lower-middle to middle class, mean personal income was between $20,000 and $50,000, and mean education level was a 2-year college degree.
Measures
Measures for Study 3 were identical to those used in Studies 1 and 2, although due to space constraints the SDO measure and several questions relating to known-group validity were eliminated. In addition, we did not ask participants to identify their sexual orientation categorically, although they did identify themselves along the Kinsey sexual orientation scale. Participants used sliders (assigned values of 0-9) instead of fixed numbers for continuously scaled measures.
In addition, two measures of political knowledge (one assessing general political knowledge and one assessing knowledge about LGBTQ political history) were created for this study as additional predictive validity measures. General political knowledge (Delli Carpini & Keeter, 1996) was assessed using six items taken from the 2012 American National Election Studies pre- and post-election questionnaires. Items covered basics about the U.S. political system (e.g., how many times someone can be elected U.S. president, how long a U.S. senator's term is) and policies (e.g., what Medicare is), as well as knowledge about current political leaders (e.g., who the current Secretary-General of the United Nations is, which party holds a majority in the U.S. House, who the current U.S. Secretary of State is). See Appendix B for the exact wording of the items. Knowledge of LGBTQ political history was assessed with a 6-item quiz created for this study. Items covered famous historical events in the history of LGBTQ activism. See Appendix C for the exact wording of the items. Alpha reliability, means, and standard deviations for both scales are presented in Table 3. On average, participants answered about one half of the items correctly on both scales.
Finally, we tested the notion that the QCS would mediate relationships between individual difference variables and gay rights activism. Baron and Kenny (1986) specified three criteria for assessing mediating relationships: (1) the independent variable is significantly related to the dependent variable (individual difference variables are related to gay rights activism; Path C in Figure 1); (2) the mediator is significantly related to the independent and dependent variables (QCS is related to the individual difference variables and gay rights activism; Paths A and B); and (3) when the mediator is controlled for, the magnitude of the relationship between the independent and dependent variable is reduced (i.e., controlling for QCS scores, the relationship between the individual difference variables and gay rights activism-Path C-is reduced). In cases where Baron and Kenny's (1986) criteria for mediation were met, we tested whether the mediated effect was significantly different from 0 using a bootstrapping procedure (Shrout & Bolger, 2002).
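To make the testing procedure concrete, the following minimal Python sketch reproduces the logic of the bootstrap test for an indirect effect on simulated data. It is an illustration only: the variable names and the toy data-generating process are hypothetical, and the original analyses were run with the SPSS PROCESS macro rather than with this code.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def bootstrap_indirect_effect(x, m, y, n_boot=5000):
    """Percentile-bootstrap estimate of the indirect effect a*b
    of x -> m -> y (Shrout & Bolger, 2002)."""
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        a = ols_slope(x[idx], m[idx])            # Path A: predictor -> mediator
        X = np.column_stack([np.ones(n), x[idx], m[idx]])
        beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        est[i] = a * beta[2]                     # a * (Path B: mediator -> outcome | predictor)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return est.mean(), (lo, hi)

# Hypothetical toy data: liberalism -> queer consciousness (QCS) -> activism
n = 600
liberalism = rng.normal(size=n)
qcs = 0.5 * liberalism + rng.normal(size=n)
activism = 0.4 * qcs + 0.1 * liberalism + rng.normal(size=n)

ab, ci = bootstrap_indirect_effect(liberalism, qcs, activism)
print(f"indirect effect = {ab:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

If the 95% percentile interval excludes 0, the indirect (mediated) effect is judged significant at p < .05, mirroring the criterion used in the mediation analyses reported below.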
Reliability and predictive validity
Similar to Studies 1 and 2, the QCS showed good reliability (Cronbach's alpha = .84). Women (M = 5.79, SD = 1.46) scored significantly higher on the QCS than did men (M = 5.28, SD = 1.48), t(602) = 4.15, p < .001. Consistent with the results of Studies 1 and 2, the QCS correlated negatively with a Kinsey straight sexual orientation and right-wing authoritarianism (see Table 3). The QCS correlated strongly positively with liberalism, overall PPS, and PPS focused on LGBTQ events. The QCS was unrelated to scores on general political knowledge; however, it was positively related to scores on the LGBTQ political history quiz. In terms of activist behaviors, the QCS correlated positively with LGBTQ rights and repeal of the Defense of Marriage Act activism. Table 4 shows the intercorrelations of the independent variables included in the mediation analyses. The strongest (and moderate) relationship was between PPS and liberalism. The overall regression was significant, F(7, 595) = 52.74, p < .001, R² = .38. Table 5 shows the results of the mediation analyses. After controlling for gender and education along all three paths, the relationships between three out of four of the individual difference variables and gay rights activism were mediated by queer consciousness. That is, the direct relationships between liberalism, sexual orientation, and personal political salience and gay rights activism (Path C) were significantly reduced when queer consciousness was added to the regression equations, and the indirect effects were significantly different from 0.
Discussion
Consistent with the results of the previous two studies, the QCS showed good Cronbach's alpha reliability in a large sample of the general population. The QCS was strongly related to liberalism and personal political salience and strongly negatively related to a straight sexual orientation and authoritarianism. Finally, the QCS was positively correlated with knowledge about LGBTQ political history and with both types of LGBTQ-related political activism. Similar to other measures of politicized group identities, queer consciousness mediated the relationships between several individual difference variables and political activism. This shows that the QCS operates psychologically in much the same way as other types of politicized identities, providing motivation for participation in collective action.
General discussion
The purpose of this study was to introduce and validate a measure of queer consciousness, or politicized sexual orientation identity, by establishing reliability and validity in several samples of adults. Overall, the QCS showed excellent test-retest reliability over the course of 1 month and good Cronbach's alpha reliability. This new measure is short, easy to use, and tied to the existing and extensive psychological literature on politicized collective identities. It can be used with people of varied sexual orientations. Because there are no existing measures of queer consciousness based on the psychological literature on politicized collective identities, we concentrated our validation efforts on establishing known-groups and predictive validity. All groups that were hypothesized to score higher on queer consciousness (those with a queer sexual orientation; women; gay people out to their families, friends, and work; members of queer organizations; those who had attended a Gay Pride parade; and those who had taken queer studies courses) scored higher on queer consciousness than nonmembers of those groups. In addition, QCS scores were positively related to identifying along the non-straight spectrum of the Kinsey scale, political liberalism, finding personal meaning in political events generally (PPS) and in LGBTQ-related events specifically, and participating in LGBTQ-related activism. QCS scores were negatively related to right-wing authoritarianism and social dominance orientation. In the third study, the QCS was also positively related to LGBTQ political knowledge and unrelated to general political knowledge, as hypothesized. [Table 5 note: numbers in the first three columns are standardized β coefficients controlling for gender and education; unstandardized regression coefficients are available from the first author; unstandardized indirect effects (bootstrap SE) are reported in the last column. Mediation was determined with the SPSS PROCESS macro created by Andrew Hayes (http://processmacro.org/index.html); if Paths A, B, and C were significant and the 95% confidence interval for the indirect effect excluded 0, mediation was significant at p < .05. The relationship between the QCS and gay rights activism was β = .26, p < .001, N = 602.]
Thus the QCS has shown good reliability and validity and should be used to understand the correlates of politicizing identity around sexual orientation, including political activism. One of its strengths is that it can be used with sexual minorities as well as allies. We believe that this new measure of queer consciousness can contribute to our understanding of LGBTQ psychology in a number of different ways. We would like to mention two specifically. First of all, this measure can be used to examine intragroup dynamics within the LGBTQ community. For those who identify as a sexual or gender minority, what personality characteristics and life experiences differentiate those who develop high queer consciousness from those who do not? We already know that, similar to women who develop higher levels of feminist consciousness, experiences with sexual orientation discrimination are related to higher levels of gay rights activism (Swank & Fahs, 2013a, 2013b). What factors are related to politicizing identities around sexual orientation, and what factors differentiate those who become politically active for LGBTQ causes from those who do not? Because we were able to show that the QCS mediates relationships between some of these variables and gay rights activism, researchers should be able to use this new measure in conjunction with other personality and life experience measures to develop a deeper understanding of individual factors motivating gay rights activism. Another burgeoning area of research is understanding LGBTQ ally identification and activism. Over the past few years, multiple studies have been published that seek to explain ally participation in the LGBTQ movement (Duhigg, Rostosky, Gray, & Wimsatt, 2010; Fingerhut, 2011; Goldstein & Davis, 2010; Russell, 2011; Swank & Fahs, 2011). Because our measure was specifically designed to include allies, it can be used in future studies to look at the variables that are related to higher levels of queer consciousness and activism in allies. Experimental studies could be conducted as well, in which independent variables are manipulated and their effects on levels of queer consciousness examined.
The QCS fits squarely in the tradition of both the political science and social psychological investigations of politicized collective identities. Because the QCS items were developed from long-used items adapted from the American National Election Studies (ANES; Miller et al., 1999), they should seem familiar to social science researchers. In addition, the five elements of group consciousness assessed in the QCS overlap completely with the dominant social psychological models of politicized collective identities (Duncan, 2012;van Zomeren et al., 2008). Because this measure is short, easy to use, and consistent with the dominant theoretical models of group consciousness and collective action, it should allow researchers to incorporate an individual differences approach into their existing analyses of factors contributing to motivation for collective action in both non-straight and straight people. Integrating personality and social psychological approaches in research on this motivation will allow us to develop a more nuanced and complete understanding of queer activism. | 2018-04-03T03:07:42.677Z | 2017-07-03T00:00:00.000 | {
"year": 2017,
"sha1": "8ec1f00af5c354bfd03570c80777e8fe9e98b801",
"oa_license": "CCBY",
"oa_url": "https://scholarworks.smith.edu/cgi/viewcontent.cgi?article=1006&context=psy_facpubs",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "dbefa0999fa3538987319fcbbcd9d1fdae8ce5d3",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
244130226 | pes2o/s2orc | v3-fos-license | Hybrid Reflection Modulation
Reconfigurable intelligent surface (RIS)-empowered communication has emerged as a novel concept for customizing future wireless environments in a cost- and energy-efficient way. However, due to double path loss, existing fully passive RIS systems that purely reflect the incident signals into preferred directions attain only an unsatisfactory performance improvement over traditional wireless networks under certain conditions. To overcome this bottleneck, we propose a novel transmission scheme, named hybrid reflection modulation (HRM), exploiting both active and passive reflecting elements at the RIS and their combinations, which enables information to be conveyed without using any radio frequency (RF) chains. In the HRM scheme, the active reflecting elements, using additional power amplifiers, are able to amplify and reflect the incoming signal, while the remaining passive elements simply reflect the signals with appropriate phase shifts. Based on this novel transmission model, we obtain an upper bound for the average bit error probability (ABEP), and derive the achievable rate of the system using an information theoretic approach. Moreover, comprehensive computer simulations are performed to prove the superiority of the proposed HRM scheme over existing fully passive, fully active and reflection modulation (RM) systems.
I. INTRODUCTION
Reconfigurable intelligent surface (RIS)-empowered communication technology, which configures electromagnetic waves over the air to improve the received signal quality, appears to be a promising solution for future wireless transmission networks [1]. Particularly, RISs are planar metasurfaces that enable the modification of propagation environments via integrated smart programmable elements. By adjusting impinging signals, these elements are able to perform unique functions such as controlled reflection, amplification, and absorption to boost the signal strength, alleviate inter-channel interference and thus enhance channel capacity gains [2].
The existing literature on RIS-aided systems is extensive and focuses particularly on RISs with fully passive reflecting elements that merely reflect the incident signal into desired directions by employing low-power electronic components [3].
In early RIS-aided transmission schemes, multi-user systems that optimize the transmit power [4]-[6], error performance [7]-[9], and achievable rate [10]-[13] have been developed in order to achieve major performance gains. Further, RISs have been deployed to improve the physical layer security of target communication systems [14]-[16], while in [17]-[19], efficient deep learning-based solutions have been developed for channel estimation and reflection design. Recently, leveraging RISs, realistic sub-6 GHz [20] and millimeter wave (mmWave) [21], [22] channel models have been designed and implemented. Finally, unlike the aforementioned studies, which rely on computer simulations, low-cost RIS prototypes have been constructed in [23]-[25] to obtain more accurate results about the actual performance of RIS-aided systems through experimental measurements.
Over the past decade, substantial research efforts have been devoted to the index modulation (IM) technique, one of the revolutionary transmission paradigms, which conveys extra information bits employing the building blocks of typical wireless communication systems, such as antennas, relays, antenna patterns, and time slots [26]. On the other hand, the proliferation of literature on RIS technology has heightened the need for increasing data rates using IM techniques. Therefore, the combination of RISs with traditional IM systems has been explored in [20], [27], [28], where the information is transmitted via the indices of transmit/receive antennas, and an RIS is adopted to further enhance transmission performance. Moreover, the performance of RIS-aided IM schemes has been investigated in [29]-[31], and novel closed-form expressions have been obtained in [32]. In addition, in recent studies, a novel IM technique, reflection modulation (RM), has been developed to utilize the reflecting elements themselves as information-transmitting units [33]. In recent RM systems, using the ON/OFF keying mechanism of the passive reflecting elements, an RIS has been deployed to carry information [34]-[37].
Despite this extensive research, since RIS-aided systems suffer from multiplicative path attenuation [38], it is practically very challenging for fully passive RISs to obtain a remarkable performance gain over a conventional wireless scenario in the presence of a strong direct link, which is a major drawback to overcome.
On the other hand, more recent attention has focused on facilitating active reflecting elements at RISs to attain significant performance gains, which lays the groundwork for further research in RIS-aided transmission schemes [39]-[44]. In [39], the achievable channel capacity of a single-input single-output (SISO) system assisted by an RIS, whose reflecting elements are equipped with additional controllable power amplifiers to simultaneously amplify and reflect signals, has been elaborately analyzed through experimental measurements. Subsequently, in follow-up studies, the channel capacity and energy efficiency of fully active RIS [41] and partially active RIS-aided systems [42] have been compared to the earlier benchmarks of conventional specular reflection and fully passive RIS-aided systems. Reported results indicate a significant performance gain for RIS-aided systems with active reflecting elements compared to prior studies.
Against this background, this paper presents a novel IM scheme called hybrid reflection modulation (HRM) that utilizes a hybrid RIS consisting of both active and passive elements to support the transmission of a SISO system. In other words, the main motivation of this study is to combine the attractive advantages of IM and active RIS systems in a clever scheme in which the RIS operates as a part of the transmitter and directly transmits information. This makes the proposed scheme fundamentally different from the recent hybrid RIS-aided designs that consider classical SISO signaling over a hybrid RIS architecture employing a certain number of active reflecting elements [40], [43]. In the proposed HRM scheme, we assume that the RIS elements are equipped with electronically controllable phase shifters and reflection-type amplifiers [39], which enable reflection and amplification functions to be performed simultaneously. While the integrated phase shifters are dynamically adjusted to supply convenient phase shifts, the available power amplifiers can be turned ON and OFF according to incoming information bits to avoid excessive power consumption. Therefore, in the HRM scheme, in accordance with the incoming information bits, an RIS element can plainly reflect the incident signal without any amplification as a passive reflecting element, or further amplify the reflected signal at the expense of increased power consumption as an active reflecting element. On the other hand, by adapting the IM principle, the RIS is split into sub-groups, and the information is transmitted through different channel realizations created by various combinations of active and passive reflecting elements in these groups. Moreover, we perform a detailed theoretical analysis to obtain achievable rate expressions using an information theoretic approach, and derive an upper bound for the average bit error probability (ABEP) of the system. Furthermore, we carry out a comprehensive numerical analysis under spatially correlated and uncorrelated channel conditions to illustrate the performance improvement of the HRM scheme over the prior RIS-aided benchmark schemes considering fully active [39] and fully passive [1], [35] RISs.
The remainder of the paper proceeds as follows. The system model of the proposed HRM scheme is given in Section II. Section III provides theoretical performance analyses of the HRM scheme, including ABEP, achievable rate and energy efficiency. In Section IV, computer simulation results are presented, and conclusions are given in Section V.
Notations: Throughout this paper, vectors and matrices are denoted by bold lower and bold upper case letters, respectively. The absolute value of a scalar is denoted by |·|, while ||·|| is used for the Euclidean/Frobenius norm. (·)^H and (·)^T stand for the Hermitian and transposition operators, respectively. CN(µ, σ²) denotes the distribution of a complex Gaussian random variable with mean µ and variance σ². diag(a_1, a_2, · · · , a_N) represents a diagonal matrix with diagonal elements a_1, a_2, · · · , a_N, and C^{a×b} denotes the set of a × b dimensional complex matrices. Furthermore, P_r(·), Q(·) and E{·} represent the probability of an event, the Q-function and the expectation operator, respectively.
II. HYBRID REFLECTION MODULATION
In this section, after reviewing the classical passive and active RIS architectures, we present the system model and the detection algorithm of the proposed HRM scheme.
A. Passive and Active RIS
Most of the current literature on RIS-aided systems pays particular attention to RISs with passive reflecting elements in various research fields [1]-[13]. By smartly inducing convenient phase shifts without any transmit power consumption [4], the passive RIS elements do not directly modify the magnitude of the incident signal. On the other hand, the active reflecting elements are capable of generating reflection gains greater than unity at the cost of additional power consumption [39], [42]. This amplification functionality of a reflecting element can be achieved by integrating additional power amplifier circuitry such as a tunnel diode [42] or a low-noise amplifier (LNA) [39]. Therefore, unlike a passive reflecting element, each active element introduces a non-negligible thermal noise. For instance, let ξ_p = |ξ_p| e^{jφ_p} and ξ_a = |ξ_a| e^{jφ_a} respectively represent the reflection gains of a passive and an active element, whose magnitudes satisfy |ξ_p| ≤ 1 and |ξ_a| > 1, while the phases φ_p and φ_a can take any value in [−π, π]. Nevertheless, although the active reflecting elements exploit power supplies in order to amplify the reflected signal, their hardware construction is completely different from that of amplify-and-forward (AF) relays, which utilize high-cost signal processing units.
B. HRM Scheme
Adopting the IM principle, in the HRM scheme, we aim to modulate a single-tone carrier signal through an RIS with reflective and power-controllable elements. As illustrated in Fig. 1, in the proposed HRM scheme, we consider a SISO system that employs an RIS with N reflecting elements to boost the communication link between the transmitter (T) and the receiver (R) in an outdoor environment.¹ In practical conditions, since it is unlikely to maintain a constant direct T-R link due to severe signal blockage in an outdoor environment, we assume that the direct link is blocked by obstacles. Moreover, an RIS controller is incorporated with the RIS to dynamically adjust the phase shifts and the amplification gains of each reflecting element considering the information provided by the transmitter via a wireless control channel. For the sake of simplicity, we assume that perfect channel state information (P-CSI) of all nodes is available at the transmitter [4], conveyed to the RIS controller via the control link and to the receiver through pilot-based transmission [45]. However, since the reflection amplitudes and phases are controlled separately, compared to a fully passive RIS [37], the RIS controller requires additional variable resistor loads [46]. Subsequently, unlike the conventional passive elements that only utilize low-cost PIN diodes or varactors [3] to simply reflect signals without any amplification, the HRM scheme additionally includes a reflection-type power amplifier per RIS element to amplify the reflected signals in order to attain further channel capacity gains [39]. In the HRM scheme, a phase shifter per RIS element is employed to generate the optimum phase shift for maximizing the signal-to-noise ratio (SNR), while the reflection-type power amplifiers are dynamically turned ON/OFF according to the transmitted information bits. Therefore, similar to the conventional RIS architecture, when the power amplification option is disabled, an RIS element merely reflects the incident signal without any amplification, and when enabled, it further amplifies the signal with a convenient phase shift. Notably, the RIS element corresponds to a conventional passive reflecting element in the former case, while it is converted to an active reflecting element in the latter case.

¹ Since the MIMO extension of the proposed HRM scheme requires the development of a computationally intensive algorithm for the optimization of the reflection coefficients of the RIS elements, this paper considers a SISO transmission to avoid an additional computational burden at the RIS.
In the proposed HRM scheme, the RIS with N reflecting elements is divided into G sub-groups, each having S = N/G RIS elements. Then, applying the IM concept, the HRM scheme transmitting a single-tone carrier signal uses log₂(G) information bits to employ the elements of l_A out of G groups as active reflecting elements, while the remaining ones are used as passive reflecting elements, where l_A ∈ {0, 1, . . . , G − 1}. Therefore, the numbers of active and passive reflecting elements become N_A = l_A × S and N_P = N − N_A, respectively. In particular, for l_A = 0, since the power amplifiers of all reflecting elements are disabled, the RIS elements simply reflect signals without any amplification; in that case, the RIS serves as a conventional fully passive RIS.
Indeed, since active RIS elements further amplify the incident signal compared to the conventional passive reflecting elements, exploiting a different number of active RIS elements in each time instant generates multi-level HRM symbols H_{l_A}, like a virtual amplitude shift keying (ASK) modulator. Moreover, unlike the classical ASK modulator that utilizes a fully digital RF chain with high hardware complexity and implementation cost, the proposed HRM scheme employs an unmodulated cosine carrier at the transmitter and uses different combinations of active and passive reflecting elements to create a virtual ASK constellation. This also enables the HRM receiver to differentiate the received signal levels with high accuracy.
To better illustrate, the HRM transmission scheme is explained with the following example. In order to achieve a spectral efficiency of m = 2 bits per second per Hertz (bits/s/Hz), we assume that the proposed HRM transmission scheme employs an RIS with N = 16 elements divided into G = 2^m = 4 sub-groups, each consisting of S = 4 RIS elements. While the active/passive RIS element combinations for the corresponding HRM symbols H_{l_A} are presented in Fig. 2, the considered bit mapping is listed in Table I, where l_A ∈ {0, 1, 2, 3}. As clearly seen from Fig. 2, unlike the conventional fully passive [1] and fully active RISs [39], whose elements continuously operate in the same manner, in the proposed HRM scheme, by adjusting the power amplifiers, different RIS configurations consisting of active and passive elements are formed in each time instant, which creates distinct variations in the amplitude of the over-the-air HRM symbols. Accordingly, for incoming {00} bits, since the number of active RIS sub-groups is l_A = 0, the H_0 symbol is created by a fully passive RIS, while for the other incoming bit streams of {01}, {10} and {11}, the HRM symbols are generated from hybrid RIS configurations consisting of both active and passive reflecting elements. Clearly, the larger the number of active sub-groups, the more the RIS amplifies the incident signal.
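The mapping from information bits to an RIS configuration can be sketched in a few lines of Python. The function name and structure below are illustrative (they are not from the paper); the code simply implements the rule that the binary value of the incoming log₂(G) bits selects the number l_A of sub-groups whose amplifiers are switched ON.

```python
import numpy as np

def hrm_config(bits, N=16, G=4):
    """Map log2(G) information bits to the active/passive pattern of an
    N-element RIS split into G equal sub-groups (HRM principle)."""
    S = N // G                                   # elements per sub-group
    l_A = int("".join(map(str, bits)), 2)        # bits -> number of active groups
    active = np.zeros(N, dtype=bool)
    active[: l_A * S] = True                     # first l_A groups amplify
    return l_A, active

for bits in ([0, 0], [0, 1], [1, 0], [1, 1]):
    l_A, active = hrm_config(bits)
    print(bits, "-> l_A =", l_A, ", N_A =", int(active.sum()))
```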
Let ξ_i = |ξ_i| e^{jφ_i} be the reflection coefficient of the i-th reflecting element of the HRM scheme with magnitude |ξ_i| and phase shift φ_i ∈ [−π, π], where i ∈ {1, 2, · · · , N}. Then, for active and passive elements, the reflection coefficient becomes ξ_i = p_i e^{jφ_i} and ξ_i = e^{jφ_i}, respectively. It is worth noting that for the passive reflecting elements, the reflection gain is assumed to be |ξ_i| = 1 [1], while for the active reflecting elements it is |ξ_i| = p_i > 1 [43]. For simplicity, we assume that all active reflecting elements have the same reflection gain, i.e., p_i = p for ∀i ∈ {1, 2, . . . , N_A}. Accordingly, the reflection matrices including the phases of the active and passive elements can be respectively given as Φ = diag(e^{jφ_1}, . . . , e^{jφ_{N_A}}) and Ψ = diag(e^{jφ_{N_A+1}}, . . . , e^{jφ_N}). Let h ∈ C^{1×N} = √(L_t) h̃ and g ∈ C^{1×N} = √(L_r) g̃ be the channel vectors of the T-RIS and RIS-R links, respectively, where L_t and L_r are the path attenuations of the corresponding links. Here, the path loss terms are obtained for the T-RIS distance d_t and the RIS-R distance d_r as L_t = β_0 d_t^{−α_t} and L_r = β_0 d_r^{−α_r}, where β_0 is the path loss at the reference distance of 1 meter (m), and α_t and α_r are the path loss exponents of the T-RIS and RIS-R links, respectively. Please note that T and R are located sufficiently far apart and operate independently; thus, the T-RIS and RIS-R links are statistically independent, where h̃ and g̃ are modeled as independent Rician fading channels and generated as

h̃ = √(K_t/(K_t + 1)) h̃_LOS + √(1/(K_t + 1)) h̃_NLOS,   (4)

g̃ = √(K_r/(K_r + 1)) g̃_LOS + √(1/(K_r + 1)) g̃_NLOS,   (5)

where h̃_LOS and g̃_LOS are the line-of-sight (LOS) components and h̃_NLOS and g̃_NLOS are the non-LOS (NLOS) components of the corresponding channel vectors, while K_t and K_r are the Rician fading coefficients of the T-RIS and RIS-R links, respectively. Here, both the LOS and NLOS components are assumed to consist of complex Gaussian random variables, whose entries are independent and identically distributed (i.i.d.) and follow the CN(0, 1) distribution.
For a better illustration, the channel vectors of the T-RIS and RIS-R links can be partitioned as h = [h_a, h_p] and g = [g_a, g_p], respectively, where h_a ∈ C^{1×N_A} and g_a ∈ C^{1×N_A} are the channel vectors corresponding to the active elements, while h_p ∈ C^{1×N_P} and g_p ∈ C^{1×N_P} correspond to the passive reflecting elements at the RIS. Therefore, for P_t being the total transmit power and s the transmitted signal, the overall received complex baseband signal at the receiver becomes [39]

y = √(P_t) (p g_a Φ h_a^T + g_p Ψ h_p^T) s + p g_a Φ v + n_s,   (6)

where v ∈ C^{N_A×1} is the additional noise vector comprising the thermal noise terms generated by the power amplifiers of the active elements, which, unlike for the passive elements, cannot be neglected, and n_s is the static noise at the receiver.
In the HRM scheme, the phase shifts of all reflecting elements and the amplification gain p of the active RIS elements can be optimized in order to achieve the maximum SNR. Then, for P_A being the maximum amplification power at the RIS, which corresponds to the power budget of the active reflecting elements [40], the maximum instantaneous received SNR can be formulated as

γ = max_{p, Φ, Ψ} ( P_t |p g_a Φ h_a^T + g_p Ψ h_p^T|² ) / ( p² ||g_a||² σ²_dy + σ²_st )   (7)

subject to the reflection power constraint

p² ( P_t ||h_a||² + N_A σ²_dy ) ≤ P_A.   (8)

Then, applying the triangle and Cauchy-Schwarz inequalities [47], since p ||g_a|| ||h_a|| + ||g_p|| ||h_p|| ≥ |p g_a Φ h_a^T + g_p Ψ h_p^T|, the optimum phase shift of the i-th reflecting element, φ_i, which completely eliminates the phases of the corresponding channel coefficients, and the reflection gain p are simply obtained as

φ_i = −(ϕ_i + χ_i),   (9)        p = √( P_A / ( P_t ||h_a||² + N_A σ²_dy ) ),   (10)

where h_i = |h_i| e^{jϕ_i} and g_i = |g_i| e^{jχ_i} are respectively the i-th components of the channel vectors h and g. Therefore, for the optimum phase shifts in (9) and an arbitrary amplification gain p, the received signal (6) can be rewritten as

y = √(P_t) ( p Σ_{i=1}^{N_A} |h_i||g_i| + Σ_{i=N_A+1}^{N} |h_i||g_i| ) s + p Σ_{i=1}^{N_A} |g_i| ṽ_i + n_s,   (11)

where ṽ_i = v_i e^{−jϕ_i}, and v_i is the i-th complex element of the dynamic noise vector v. Therefore, for

H_{l_A} = p Σ_{i=1}^{N_A} |h_i||g_i| + Σ_{i=N_A+1}^{N} |h_i||g_i|   (12)

being the HRM symbol for the corresponding l_A, the received signal can be rewritten as y = √(P_t) H_{l_A} s + n, where n = p Σ_{i=1}^{N_A} |g_i| ṽ_i + n_s is the overall noise term. Since the carrier s is unmodulated with unit energy, we set s = 1 in what follows without loss of generality. It is worth noting that, applying the central limit theorem (CLT) for increasing N_A, n is approximated by a complex Gaussian random variable with CN(0, N_0) distribution, where N_0 = p² N_A L_r σ²_dy + σ²_st.²
² For X and Y being independent random variables, the variance of the product Z = XY is calculated as Var(Z) = μ_X² σ_Y² + μ_Y² σ_X² + σ_X² σ_Y². In addition, the mean and the variance of the sum of independent random variables are given by the sum of the individual means and the sum of the individual variances, respectively.
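The following Python sketch illustrates how the HRM constellation of (12) is formed once the optimal phases of (9) are applied: all cascaded channel coefficients add coherently, so each level reduces to a weighted sum of channel magnitudes. The function names, chosen path losses and seed are illustrative, and the channel model follows the complex Gaussian LOS/NLOS assumption stated above.

```python
import numpy as np

def hrm_symbol(h, g, N_A, p):
    """HRM constellation point H_{l_A} of (12): with the SNR-optimal phase
    phi_i = -(angle(h_i) + angle(g_i)) of (9), every cascaded coefficient
    adds coherently, so only the magnitudes |h_i||g_i| remain."""
    w = np.abs(h) * np.abs(g)
    return p * w[:N_A].sum() + w[N_A:].sum()

def rician(n, K, L, rng):
    """i.i.d. Rician-fading coefficients with path attenuation L."""
    los = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    nlos = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.sqrt(L) * (np.sqrt(K / (K + 1)) * los + np.sqrt(1 / (K + 1)) * nlos)

rng = np.random.default_rng(1)
N, G, S, p = 16, 4, 4, 10.0
h = rician(N, K=0, L=1e-5, rng=rng)
g = rician(N, K=0, L=1e-6, rng=rng)
symbols = [hrm_symbol(h, g, l_A * S, p) for l_A in range(G)]
print([f"{s:.3e}" for s in symbols])   # monotonically increasing virtual ASK levels
```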
C. HRM Receiver
In the HRM scheme, since exploiting different numbers of active reflecting elements creates virtual amplitude variations in the received signal, the different signal levels of the HRM symbols can be easily distinguished at the receiver. Moreover, the HRM receiver with perfect knowledge of the overall channel considers the maximum likelihood (ML) detection algorithm to choose the most likely estimate of l_A as follows:

l̂_A = arg max_{l_A} p(y | H_{l_A}),   (13)

where p(y | H_{l_A}) is the conditional probability density function (pdf) of the received signal y given H_{l_A}, which can be given as

p(y | H_{l_A}) = (1/(π N_0)) exp( −|y − √(P_t) H_{l_A}|² / N_0 ).   (14)

Here, the overall noise power N_0, obtained as N_0 = p² N_A L_r σ²_dy + σ²_st in the previous subsection, and the HRM symbol H_{l_A} vary with the number of active sub-groups of the RIS (l_A) and the total number of active reflecting elements N_A. However, since the thermal noise of each active element experiences the path attenuation of the RIS-R link (L_r), while the RIS-R distance d_r is sufficiently large, the varying N_A hardly affects the decision of the minimum metrics in (14). Therefore, the HRM receiver can simply detect l_A as follows:

l̂_A = arg min_{l_A} |y − √(P_t) H_{l_A}|²,   (15)

which gives almost the same estimate as the ML algorithm given in (13). Here, we consider all combinations of active and passive elements and simply select the closest virtual constellation point with respect to the received signal.
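A minimal sketch of the low-complexity detector in (15) follows; the symbol levels and noise power used in the demonstration are hypothetical placeholders, chosen only so that the example runs.

```python
import numpy as np

def hrm_detect(y, symbols, Pt):
    """Minimum-distance rule of (15): choose the HRM level whose scaled
    constellation point sqrt(Pt)*H_{l_A} lies closest to y."""
    d = np.abs(y - np.sqrt(Pt) * np.asarray(symbols)) ** 2
    return int(np.argmin(d))

rng = np.random.default_rng(2)
Pt, N0 = 1.0, 1e-11
symbols = [1.0e-4, 1.9e-4, 2.8e-4, 3.7e-4]      # example H_{l_A} levels
sent = 2
noise = np.sqrt(N0 / 2) * (rng.normal() + 1j * rng.normal())
y = np.sqrt(Pt) * symbols[sent] + noise
print("sent l_A:", sent, "-> detected l_A:", hrm_detect(y, symbols, Pt))
```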
D. Fully Hybrid Reflection Modulation (F-HRM)
In this subsection, a special case of the proposed HRM scheme, fully hybrid reflection modulation (F-HRM), is introduced. The F-HRM scheme retains the RIS, transmitter and receiver hardware architectures of the HRM scheme. However, unlike HRM, in the F-HRM scheme all RIS elements, without grouping, manipulate the incident signal in the same manner. Specifically, in the F-HRM scheme, 1-bit information (m = 1 bits/s/Hz) is transmitted over the RIS by controlling the amplification gains of the RIS elements. By properly adjusting the power amplifier of each reflecting element, for the incoming bit {0}, all reflecting elements perform plain passive reflection with the optimum phase shifts of (9), while for the incoming bit {1}, all elements function as active reflecting elements that amplify and reflect the incident signal while introducing additional thermal noise. Please note that, in the F-HRM scheme, since the RIS elements operate in the same manner as a whole, the overall number of active reflecting elements is N_A = l_A × N for l_A ∈ {0, 1}. Accordingly, in the F-HRM scheme, for the corresponding l_A and N_A values, the received signal, the optimum estimate of l_A at the receiver and the maximum received SNR can be obtained from (11), (15) and (7)-(10), respectively.
III. PERFORMANCE ANALYSES
In this section, we investigate the performance of the proposed HRM in terms of average bit error probability (ABEP), achievable rate and energy efficiency.
A. ABEP Analysis
In this subsection, the ABEP of the proposed HRM scheme is analyzed. Since the simple HRM detection algorithm in (15) gives exactly the same error performance as the true ML detector in (13), we build our theoretical analysis based on it in the following way.
After the pairwise error probability (PEP) of the HRM scheme is obtained, we derive the ABEP of the system using a moment generating function (MGF)-based approach [48]. For this purpose, first of all, in order to determine the conditional PEP (CPEP) of the HRM scheme, we assume that the number of active sub-groups l_A and its corresponding total number of active elements N_A = l_A × S are erroneously detected as l̂_A and N̂_A = l̂_A × S, respectively. Therefore, considering the detection rule in (15), the CPEP of the HRM scheme can be given as

P_r( l_A → l̂_A | h, g ) = P_r( |y − √(P_t) H_{l̂_A}|² < |y − √(P_t) H_{l_A}|² ),   (16)

where H_{l̂_A} = p Σ_{j=1}^{N̂_A} |h_j||g_j| + Σ_{j=N̂_A+1}^{N} |h_j||g_j| is the HRM symbol for the corresponding l̂_A. Therefore, substituting y = √(P_t) H_{l_A} + n, the CPEP in (16) can be simplified to

P_r( l_A → l̂_A | h, g ) = P_r( 2 √(P_t) (H_{l_A} − H_{l̂_A}) Re{n} < −P_t (H_{l_A} − H_{l̂_A})² ).   (17)

After some mathematical manipulations, the CPEP expression in (17) can be rewritten as P_r( D < −P_t (H_{l_A} − H_{l̂_A})² ), where D = 2 √(P_t) (H_{l_A} − H_{l̂_A}) Re{n} is a Gaussian random variable, whose mean and variance are calculated as μ_D = 0 and σ_D² = 2 P_t N_0 (H_{l_A} − H_{l̂_A})². After deriving the statistical distributions, the CPEP expression can be given, using the Q-function, as

P_r( l_A → l̂_A | h, g ) = Q( √( P_t (H_{l_A} − H_{l̂_A})² / (2 N_0) ) ).

In the HRM scheme, the channel magnitudes |h_i| and |g_i| are independent Rician distributed random variables with means μ_{|h_i|} = (1/2) √( L_t π / (K_t + 1) ) L_{1/2}(−K_t) and μ_{|g_i|} = (1/2) √( L_r π / (K_r + 1) ) L_{1/2}(−K_r), and variances σ²_{|h_i|} = L_t − μ²_{|h_i|} and σ²_{|g_i|} = L_r − μ²_{|g_i|}, where L_{1/2}(·) is the Laguerre polynomial [49]. Then, defining Σ = H_{l_A} − H_{l̂_A}, which by the CLT is another Gaussian random variable with the following statistics for δ = (N_A − N̂_A):

μ_Σ = δ (p − 1) μ_{|h_i|} μ_{|g_i|},        σ²_Σ = |δ| (p − 1)² ( L_t L_r − μ²_{|h_i|} μ²_{|g_i|} ),

the average error probability of the system is calculated in the following way. Considering the alternative representation of the Q-function, Q(x) = (1/π) ∫_0^{π/2} exp( −x²/(2 sin² θ) ) dθ, and using the MGF of Π = |Σ|², which follows a non-central chi-square distribution, the average PEP can be calculated as

P_r( l_A → l̂_A ) = (1/π) ∫_0^{π/2} M_Π( −P_t / (4 N_0 sin² θ) ) dθ.   (25)

Here, the MGF of the non-central chi-square distribution is given as [50]

M_Π(s) = (1 − 2 s σ²_Σ)^{−1/2} exp( μ²_Σ s / (1 − 2 s σ²_Σ) ).   (24)

Therefore, substituting the MGF expression (24) into (25), the PEP is obtained as

P_r( l_A → l̂_A ) = (1/π) ∫_0^{π/2} ( 1 + P_t σ²_Σ / (2 N_0 sin² θ) )^{−1/2} exp( −μ²_Σ P_t / (4 N_0 sin² θ + 2 P_t σ²_Σ) ) dθ.   (26)

To gain further insight, since the function z(θ) = 1/sin²(θ) has a single minimum at θ = π/2, where z(π/2) = 1, by letting θ = π/2, (25) can be upper bounded as [48]

P_r( l_A → l̂_A ) ≤ (1/2) ( 1 + P_t σ²_Σ / (2 N_0) )^{−1/2} exp( −μ²_Σ P_t / (4 N_0 + 2 P_t σ²_Σ) ).

Moreover, for the high P_t/N_0 regime, an asymptotic PEP expression can be approximated as

P_r( l_A → l̂_A ) ≈ (1/2) √( 2 N_0 / (P_t σ²_Σ) ) exp( −μ²_Σ / (2 σ²_Σ) ).   (27)

It is worth noting that for G = 2, the PEP yields the ABEP of the HRM scheme, while for G ≥ 2 the following well-known union bound is considered [48]:

ABEP ≤ ( 1 / (G log₂(G)) ) Σ_{l_A} Σ_{l̂_A ≠ l_A} e(l_A, l̂_A) P_r( l_A → l̂_A ),   (28)

where e(l_A, l̂_A) is the number of bit errors in each PEP event.
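The union bound (28) can be evaluated numerically from the statistics of Σ derived above. The sketch below assumes a natural binary labeling of l_A (so that e(l_A, l̂_A) is a Hamming distance) and Rayleigh fading (K_t = K_r = 0), for which the Rician mean reduces to μ = √(πL)/2; both choices are illustrative rather than taken from the paper.

```python
import numpy as np
from itertools import product

def pep_bound(mu, var, Pt, N0):
    """Chernoff-type PEP bound (theta = pi/2 in the MGF integral):
    0.5 * M_Pi(-Pt/(4 N0)) for Pi = Sigma^2, Sigma ~ N(mu, var)."""
    s = -Pt / (4 * N0)
    denom = 1 - 2 * s * var
    return 0.5 * np.exp(mu ** 2 * s / denom) / np.sqrt(denom)

def abep_union_bound(G, S, p, mu_h, mu_g, Lt, Lr, Pt, N0):
    """Union bound (28) over all PEP events; e(l, lh) taken as the Hamming
    distance between the natural binary labels of l_A and its estimate."""
    m = int(np.log2(G))
    var_w = Lt * Lr - (mu_h * mu_g) ** 2     # Var(|h_i||g_i|), cf. footnote 2
    total = 0.0
    for l, lh in product(range(G), repeat=2):
        if l == lh:
            continue
        delta = (l - lh) * S
        mu = delta * (p - 1) * mu_h * mu_g   # mean of Sigma = H_l - H_lh
        var = abs(delta) * (p - 1) ** 2 * var_w
        e = bin(l ^ lh).count("1")           # bit errors for this event
        total += e * pep_bound(mu, var, Pt, N0)
    return total / (G * m)

Lt, Lr = 1e-5, 1e-6
mu_h = 0.5 * np.sqrt(np.pi * Lt)             # Rayleigh mean (K = 0)
mu_g = 0.5 * np.sqrt(np.pi * Lr)
print(abep_union_bound(G=4, S=64, p=10, mu_h=mu_h, mu_g=mu_g,
                       Lt=Lt, Lr=Lr, Pt=1.0, N0=1e-9))
```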
B. Achievable Rate Analysis
In this subsection, adopting an information theoretic approach, we perform the achievable rate analysis of the HRM scheme by deriving the mutual information (MI) between its transmitted and received signals.
In the HRM scheme, since an unmodulated carrier signal is transmitted and the information bits are mapped to the spatial constellation symbol H_{l_A}, the MI of the HRM scheme corresponds to the information conveyed between the received signal space Y and the spatial constellation space H. Therefore, the achievable rate of the proposed HRM scheme becomes [50]

I(H; Y) = Σ_{l_A=0}^{G−1} ∫ p(y | H_{l_A}) p(H_{l_A}) log₂( p(y | H_{l_A}) / p(y) ) dy.   (29)

Here, since each HRM symbol H_{l_A} is equiprobable, i.e., p(H_{l_A}) = 1/G, substituting the conditional pdf p(y | H_{l_A}) given in (14) into (29), the achievable rate of the HRM scheme is rewritten as

I(H; Y) = log₂(G) − (1/G) Σ_{l_A=0}^{G−1} E_n{ log₂ Σ_{l̂_A=0}^{G−1} exp( ( |n|² − |√(P_t)(H_{l_A} − H_{l̂_A}) + n|² ) / N_0 ) }.   (30)

Therefore, after some algebraic manipulations, (30) can be simplified to [51]

I(H; Y) = log₂(G) − log₂(e) − (1/G) Σ_{l_A=0}^{G−1} E_n{ log₂ Σ_{l̂_A=0}^{G−1} exp( −|√(P_t)(H_{l_A} − H_{l̂_A}) + n|² / N_0 ) }.   (31)
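Since the expectation in (31) has no closed form, it is naturally estimated by Monte Carlo averaging over the noise. The following sketch implements (31) directly for a given set of HRM levels; the example symbol values and seed are hypothetical.

```python
import numpy as np

def hrm_rate(symbols, Pt, N0, n_mc=200_000, rng=None):
    """Monte Carlo estimate of (31): I(H;Y) = log2(G) - log2(e)
    - (1/G) sum_l E_n[ log2 sum_k exp(-|sqrt(Pt)(H_l - H_k) + n|^2 / N0) ]."""
    if rng is None:
        rng = np.random.default_rng(3)
    H = np.asarray(symbols, dtype=float)
    G = len(H)
    n = np.sqrt(N0 / 2) * (rng.normal(size=n_mc) + 1j * rng.normal(size=n_mc))
    rate = np.log2(G) - np.log2(np.e)
    for Hl in H:
        d = np.sqrt(Pt) * (Hl - H)[None, :] + n[:, None]   # shape (n_mc, G)
        a = -np.abs(d) ** 2 / N0
        amax = a.max(axis=1, keepdims=True)                # log-sum-exp trick
        lse = amax[:, 0] + np.log(np.exp(a - amax).sum(axis=1))
        rate -= np.mean(lse) / (G * np.log(2))
    return rate

symbols = [1.0e-4, 1.9e-4, 2.8e-4, 3.7e-4]
print(hrm_rate(symbols, Pt=1.0, N0=1e-9))   # approaches log2(4) = 2 bits/s/Hz
```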
C. Energy Efficiency
In this subsection, the power consumption model and energy efficiency of the proposed HRM scheme are evaluated. In the HRM scheme, for τ_t being the transmit power amplifier efficiency, the average consumed power can be calculated as

P_total = τ_t^{−1} P_t + P_c + P_RIS,   (32)

where P_c represents the overall power dissipated in the transmitter and receiver circuit blocks, while P_RIS denotes the total power consumption of the RIS, which can be given, for N̄_A and N̄_P respectively denoting the average numbers of active and passive reflecting elements, as

P_RIS = P_A/τ_a + N̄_A P_dy + P_st + N̄_P P_p.   (33)

In (33), P_dy and P_st correspond to the dynamic and static power consumption of the active reflecting elements, respectively, while P_p is the required power per passive reflecting element [40], and τ_a is the amplifier efficiency of the active reflecting elements, with τ_a, τ_t ∈ (0, 1] [40]. On the other hand, when a conventional RIS with N passive reflecting elements is considered, P_RIS corresponds to the power consumed by the adaptive phase shifters, i.e., P_RIS = N P_p [6], while for a fully active RIS, whose elements include both reflection and amplification circuitry, the overall power consumed by the RIS becomes P_RIS = P_A/τ_a + N P_dy + P_st [40]. Comparing the power consumption of the proposed hybrid, fully active and fully passive RIS configurations, and noting that P_p ≪ P_dy, it is obvious that fully passive RIS architectures with only reflection capabilities are the most power-efficient constructions. On the other hand, in the proposed HRM scheme, the hybrid RIS architectures save a significant amount of power compared to fully active RIS designs. Further, the energy efficiency in bits per Joule (bits/J) of the HRM system, in terms of the instantaneous received SNR γ given in (7), can be obtained as

EE = B_W log₂(1 + γ) / P_total,   (34)

where B_W represents the system bandwidth.
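A small calculator for (32)-(34) is sketched below, using the parameter values quoted in the simulation setup of Section IV; note that the exact split of P_dy, P_st and P_p in (33) follows the reconstruction above and should be treated as an assumption.

```python
import numpy as np

def dbm_to_watt(x):
    return 10 ** (x / 10) / 1e3

def power_consumption(Pt_dbm, N_active, N_passive, PA_dbm=10.0,
                      Pc_dbm=75.0, Pdy_dbm=30.0, Pst_dbm=35.0,
                      Pp=5e-3, tau_a=0.5, tau_t=0.5):
    """Total consumed power of (32)-(33) for a hybrid RIS, with the
    default parameter values taken from the Section IV setup."""
    P_ris = (dbm_to_watt(PA_dbm) / tau_a
             + N_active * dbm_to_watt(Pdy_dbm) + dbm_to_watt(Pst_dbm)
             + N_passive * Pp)
    return dbm_to_watt(Pt_dbm) / tau_t + dbm_to_watt(Pc_dbm) + P_ris

def energy_efficiency(rate_bits_s_hz, Pt_dbm, N_active, N_passive,
                      Bw=10e6, **kw):
    """Energy efficiency (34) in bits per Joule."""
    return Bw * rate_bits_s_hz / power_consumption(Pt_dbm, N_active,
                                                   N_passive, **kw)

# F-HRM with N = 512: on average half the elements are active
print(energy_efficiency(1.0, Pt_dbm=30, N_active=256, N_passive=256))
```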
IV. NUMERICAL RESULTS
In this section, the BER, achievable rate and energy efficiency performances of the proposed HRM scheme are investigated through extensive computer simulations. For different numbers of RIS sub-groups and reflecting elements, the superior performance of the proposed HRM scheme over the existing fully active [39], fully passive [1] and RM [35] schemes is demonstrated. Unless otherwise indicated, in all simulations the following system parameters are assumed: the distances d_t = 20 m and d_r = 50 m, the scale parameters Ω_t = Ω_r = 1, the path loss exponents α_t = 2.2 and α_r = 2.8, the Rician shape parameters K_t = K_r = 0, the noise variances σ²_dy = σ²_st = −90 dBm, and the reference path loss value β_0 = −30 dB.
A. BER Performance in Ideal Channel Conditions
In this subsection, the BER performance of the proposed HRM scheme under ideal channel conditions is evaluated.
In Fig. 3, the analytical and numerical BER results of the HRM scheme with p = 10, which achieves a spectral efficiency of m = 1 bits/s/Hz for G = 2, are demonstrated for Rician scale factors K_t = K_r ∈ {0, 10}. As can be clearly seen from Fig. 3, the analytical results applying the CLT for N ∈ {32, 64, 128, 256, 512} perfectly match the computer simulations. Moreover, it is observed that for each K_t = K_r, doubling the number of reflecting elements provides approximately 7.5 dB improvement in the required transmit power P_t at the BER value of 10⁻⁶.
In Fig. 4, the BER performance of the HRM scheme is given for N = 256 reflecting elements divided into G = 2, 4, 8, 16 and 32 sub-groups. It is observed that, like the ordinary multi-level digital modulation techniques, as the number of sub-groups increases, the signal levels of the HRM symbols get closer, which deteriorates the BER performance of the HRM scheme. In particular, it is apparent that the HRM scheme achieving m = 5 bits/s/Hz (m = log₂(G)) with an RIS of G = 32 sub-groups exhibits remarkably worse error performance than the lower-G cases, which reveals an interesting trade-off between the error performance and the spectral efficiency. Moreover, as in the classical ASK modulation, considering the reflection power constraint in (8), since the same p and N values are considered in Fig. 4, the systems with larger G necessitate higher power consumption. Therefore, it can be concluded that, besides their superior error performance, HRM systems with a smaller G save more energy than systems with a larger G.
In Fig. 5, the BER performance comparison of the HRM, RM [35] and conventional fully passive RIS-aided systems is investigated for N = 256. In the reference RM scheme [35], similar to HRM, an RIS with fully passive reflecting elements is split into G sub-groups whose indices are used to convey additional information bits. However, contrary to HRM and conventional fully passive RIS-aided systems, by adjusting the ON/OFF keying states of each group, not all RIS elements are utilized in the reference RM transmission scheme [35]. Moreover, in the reference RM scheme [35], an RF source is used to transmit an optimized M-ary phase shift keying (M-PSK) constellation for each RIS configuration to achieve a spectral efficiency of m = log₂(G) + log₂(M). Then, in Fig. 5, to attain m = 4 bits/s/Hz, considering the optimum phase shifts in (9), the HRM scheme with G = 16 sub-groups and an amplification gain of p = 10 is compared to the reference fully passive RIS-aided system with 16-PSK, and to the RM scheme with G = 4 sub-groups employing rotated quadrature PSK (QPSK) [35]. The results show that although the HRM scheme with G = 16 enlarges the HRM signal constellation considerably, it still achieves a significant performance gain over the RM [35] and conventional fully passive RIS-aided systems. Furthermore, in Fig. 5, at m = 4 bits/s/Hz, as an extension of the HRM scheme, the BER performance of HRM that jointly encodes information in the transmit signal and the RIS sub-groups is also evaluated. In this case, while preserving the RIS and receiver architecture of the proposed HRM, a QPSK modulated signal is employed at the transmitter instead of an unmodulated one. For the HRM scheme with M-PSK modulation, the spectral efficiency becomes m = log₂(M) + log₂(G) bits/s/Hz. Therefore, in order to achieve a spectral efficiency of m = 4 bits/s/Hz, for p = 10 and QPSK signaling, the RIS is clustered into G = 4 sub-groups. The results exhibit that the HRM scheme with G = 4 and QPSK signal transmission achieves a 16 dB P_t gain at the BER value of 10⁻⁵ over the reference RM scheme [35]. It is clear from Fig. 5 that using an additional RF chain at the transmitter alleviates the burden of RIS transmission by reducing the required number of RIS sub-groups. In that case, since the benefits of a lower HRM signal level are retained, the BER performance improves, but at an additional hardware cost.
B. BER Performance in Non-Ideal Channel Conditions
In this subsection, the BER performance under ideal and non-ideal channel conditions is compared for different RIS configurations.
Further, for more realistic settings, we investigate the performance of the HRM scheme with spatially correlated RIS elements, whose impact on the BER performance is given in Fig. 6. To this aim, we consider a square RIS and assume that the channel vectors h and g, representing the T-RIS and RIS-R links, respectively, are modeled as spatially correlated Rayleigh fading channels, i.e., K_t = K_r = 0 in (4) and (5), and generated as

h = √(L_t) h̃ R^{1/2},        g = √(L_r) g̃ R^{1/2},

where R ∈ C^{N×N} is the correlation matrix of the spatially correlated RIS elements, whose (k, l)-th component is [R]_{k,l} = sinc( 2 ||w_k − w_l|| / λ ) [52], [53], for k, l ∈ {1, 2, · · · , N} and λ being the wavelength at the 2.4 GHz operating frequency. Here, the horizontal width and the vertical height of a single reflecting element are represented by d_h and d_v, respectively, and for i ∈ {k, l}, the vector w_i = [ mod(i − 1, N_h) d_h, ⌊(i − 1)/N_h⌋ d_v ]^T denotes the position of the i-th element on the RIS plane, where N_h is the number of reflecting elements in each row or column of the square RIS, i.e., N = N_h × N_h. In Fig. 6, the BER performance of the HRM scheme under spatially correlated and spatially independent channel conditions is given for square RIS elements with different dimensions d_h = d_v ∈ {λ/2, λ/4, λ/8} at the spectral efficiency of m = 1 bits/s/Hz. The results show that the configuration of the RIS has a great impact on the degree of correlation. Therefore, as the horizontal and vertical sizes of the RIS elements grow, the HRM system becomes more robust to bit errors. Moreover, it can also be deduced that increasing the number of reflecting elements significantly alleviates the BER performance degradation caused by channel correlation. As can be clearly seen from Fig. 6, the HRM scheme with N = 256 exhibits almost the same BER performance in both the spatially correlated RIS case with d_h = d_v = λ/2 and the spatially independent RIS case. However, for lower N values, i.e., N = 16 and N = 64, the spatial correlation causes a considerable deterioration in the BER performance.
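The sinc correlation model and the generation of correlated channels can be reproduced with the short Python sketch below; the function names are illustrative, and the matrix square root is taken via an eigendecomposition, which is one of several equivalent choices.

```python
import numpy as np

def ris_correlation(N_h, d_h, d_v, lam):
    """Spatial correlation of a square N_h x N_h RIS:
    [R]_{k,l} = sinc(2 ||w_k - w_l|| / lam), with element positions on a
    rectangular grid of pitch (d_h, d_v)."""
    idx = np.arange(N_h * N_h)
    pos = np.stack([(idx % N_h) * d_h, (idx // N_h) * d_v], axis=1)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    return np.sinc(2 * dist / lam)     # np.sinc(x) = sin(pi x)/(pi x)

def correlated_rayleigh(R, L, rng):
    """Correlated Rayleigh channel h = sqrt(L) * R^(1/2) h_iid."""
    w, V = np.linalg.eigh(R)           # eigen square root keeps R^(1/2) PSD
    Rsqrt = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    h_iid = (rng.normal(size=len(R)) + 1j * rng.normal(size=len(R))) / np.sqrt(2)
    return np.sqrt(L) * (Rsqrt @ h_iid)

rng = np.random.default_rng(4)
lam = 3e8 / 2.4e9                      # wavelength at 2.4 GHz
R = ris_correlation(N_h=8, d_h=lam / 4, d_v=lam / 4, lam=lam)
h = correlated_rayleigh(R, L=1e-5, rng=rng)
print(R.shape, round(float(R[0, 1]), 3))
```

For half-wavelength spacing, the off-diagonal entries of R vanish between elements aligned along a row or column, which is consistent with the observation in Fig. 6 that d_h = d_v = λ/2 behaves almost like the spatially independent case.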
C. Achievable Rate and Energy Efficiency Performance
In this subsection, the achievable rate and energy efficiency performances of the proposed HRM scheme and the fully passive and fully active RIS-aided systems are compared through extensive computer simulations. Fig. 7 provides the achievable rate of the HRM scheme with the amplification gain of the active elements set to p = 10. In this figure, RISs with N = 64, 256 and 512 reflecting elements are divided into G = 2, 4 and 8 sub-groups, which achieve spectral efficiency values of m = 1, 2 and 3 bits/s/Hz, respectively. These information theoretic results illustrate that increasing the number of reflecting elements, N, enables more rapid convergence to the target data rate.
Furthermore, in Fig. 8, we investigate the energy efficiency and power consumption of the F-HRM, fully active [39] and fully passive RIS-aided schemes [1] at a spectral efficiency of 1 bits/s/Hz. At the transmitter, an unmodulated carrier signal is considered in the F-HRM scheme, while binary PSK (BPSK) modulation is employed in the fully passive and fully active RIS-aided systems. Notably, in the F-HRM scheme, the average numbers of active and passive elements are equal, i.e., N̄_A = N̄_P = N/2. Therefore, to evaluate the total power consumption in (33), we set P_c = 75 dBm, P_p = 5 mW, P_st = 35 dBm, P_dy = 30 dBm, and τ_a = τ_t = 0.5 [40], and assume B_W = 10 MHz [6] to determine the energy efficiency in (34) over 10⁶ iterations.
In Fig. 8(a), the energy efficiency of the F-HRM and active RIS-aided transmission schemes, all employing N = 512 reflecting elements at the RIS, is measured as a function of P_t. The results indicate a considerable energy efficiency improvement for the F-HRM scheme over the active RIS-aided system for P_A = 10, 20 and 30 dBm. These results can be explained by the fact that although a fully active RIS-aided system achieves substantial capacity gains [39], [41], it requires a larger amount of power compared to the more environment-friendly F-HRM scheme.
In addition, in Fig. 8(b), the energy efficiency of the F-HRM and the active and passive RIS-aided systems is further investigated for varying N values and amplification powers of P_A = 0 and 10 dBm, with the transmit power P_t = 30 dBm. Consistent with the results in Fig. 8(a), the F-HRM scheme achieves a noticeable improvement in energy efficiency compared to the fully active RIS-aided system, and exceeds the conventional passive RIS-aided system by a substantial margin. To support these results, in Fig. 8(c), the power consumption of the proposed F-HRM scheme and the reference RIS-aided systems is depicted as a function of N for P_A = 10 dBm and P_t = 30 dBm. Obviously, increasing N hardly changes the power consumption of the passive RIS-aided systems, while it further widens the power consumption gap between the F-HRM and active RIS-aided systems.
The results presented in Fig. 8 are in accordance with the earlier studies [39]-[41] in that an interesting trade-off exists between the achievable rate and the power consumption. Therefore, RIS-aided systems with partially or fully active reflecting elements are capable of achieving ultimate capacity gains compared to conventional reflection-based transmission schemes such as fully passive RIS-aided systems. On the other hand, although the HRM and fully active RIS-aided systems have the same hardware capabilities, i.e., all reflecting elements are integrated with additional power amplifiers, constantly driving active RIS elements requires tremendous power consumption; more energy-efficient communication systems with high data rates can therefore be constructed using HRM transmission concepts that limit the overall power consumption. In summary, it can be deduced from the results that the HRM scheme offers an intermediate solution between fully passive and fully active RIS-aided transmission schemes, and achieves noticeable performance gains with a high data rate in a more energy-efficient manner.
V. CONCLUSION
In this paper, we have introduced the novel scheme of HRM which offers a promising solution for the RIS-aided transmission systems that experience high path attenuation. In the proposed HRM scheme, the target RIS has been split into sub-groups through which the conventional IM technique has been applied to transmit information. While the active/passive combinations of the reflecting elements in those sub-groups have been determined according to incoming information bits, the phases have been optimally adjusted for achieving maximum SNR gains. Therefore, the RIS has been configured to perform amplification and reflection functions at the same time. Besides, the analytical BER performance and the achievable rate of the HRM scheme have been derived. Furthermore, comprehensive computer simulations have been conducted to illustrate the performance achievement of the HRM scheme over the existing fully active, fully passive and RM systems. Moreover, the effect of hardware impairments and channel estimation errors on the BER performance of the proposed scheme, the generalization of HRM for nonuniform power distributions, new I/Q modulator designs and the MIMO/multi-user extension of the proposed scheme to increase its data rate, which requires the development of a sub-optimal detector to optimize the reflection coefficients of RIS elements, are interesting directions for future research. | 2021-11-17T02:15:39.665Z | 2021-11-16T00:00:00.000 | {
"year": 2021,
"sha1": "514cabd65c4a50dbb29e815b9835bf96bbd5c54a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b4dba0e93c6e36a61eaa2e811963ecdca5a87993",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
]
} |
251040507 | pes2o/s2orc | v3-fos-license | Capacity of Entanglement for Non-local Hamiltonian
The notion of capacity of entanglement is the quantum information theoretic counterpart of the heat capacity, defined as the second cumulant of the entanglement spectrum. Given any bipartite pure state, we can define the capacity of entanglement as the variance of the modular Hamiltonian in the reduced state of either subsystem. Here, we study the dynamics of this quantity under a non-local Hamiltonian. Specifically, we address the question: given an arbitrary non-local Hamiltonian, what is the capacity of entanglement that the system can possess? As a useful application, we show that the quantum speed limit for creating entanglement is not only governed by the fluctuation in the non-local Hamiltonian, but also depends inversely on the time average of the square root of the capacity of entanglement. Furthermore, we discuss this quantity for a general self-inverse Hamiltonian and provide a bound on the rate of the capacity of entanglement. Towards the end, we generalise the capacity of entanglement for bipartite mixed states based on the relative entropy of entanglement and show that this definition reduces to the capacity of entanglement for pure bipartite states. Our results can have several applications in diverse areas of physics.
I. INTRODUCTION
Entanglement has potential applications in quantum information science ranging from quantum computing and quantum communication to a host of other areas such as condensed matter physics, high energy physics and even string theory [1,2]. It is considered a very useful resource in information processing tasks. Over several years, how to create and quantify entanglement has been a subject of major exploration [3,4]. Thanks to technological progress, we can now create entanglement between two or more particles in quantum optical systems [5], ion traps [6], superconducting systems [7,8], and NMR setups [9]. How to create entanglement between ever larger numbers of particles and distribute it over long distances still continues to be quite challenging [10]. Quantum entanglement between two particles can of course be created depending on the choice of the initial state and a suitable non-local interaction between them. However, the design of a suitable interacting Hamiltonian is not always easy. This makes the production of entanglement a non-trivial task. Therefore, it is natural to ask, for a given non-local Hamiltonian, what is the best way of exploiting this Hamiltonian to create entanglement. This was addressed in Ref. [11]. Entanglement entropy is quite a useful diagnostic tool which measures the degree of quantum entanglement between subsystems in many-body quantum systems [12]. A different quantity, called the capacity of entanglement, has been proposed to characterize topologically ordered states in the context of the Kitaev model [13]. Given a pure bipartite entangled state ρ_AB, the capacity of entanglement is defined as the second cumulant of the entanglement spectrum. Thus, associated to a reduced density matrix, we can define the capacity of entanglement as the variance of the modular Hamiltonian in the mixed state. If {λ_i} are the eigenvalues of the reduced density matrix of one of the subsystems, then the entanglement entropy is defined as S_EE = S(ρ_A) = −tr(ρ_A log ρ_A) = −Σ_i λ_i log λ_i. Now, the capacity of entanglement C_E is defined as the second cumulant of this entanglement spectrum [14], i.e., the variance in the entanglement entropy operator. It is similar to the heat capacity of thermal systems and is given by [14][15][16] C_E = Σ_i λ_i (log λ_i)² − (Σ_i λ_i log λ_i)². The above quantity can be thought of as the variance of the distribution of −log λ_i with probability λ_i, and thus it contains information about the width of the eigenvalue distribution of the reduced density matrix. We can gain insight into the whole spectrum by studying the first two cumulants, i.e., the entanglement entropy and the capacity of entanglement. Defining a modular Hamiltonian as K_A = −log ρ_A, they are the expectation value and the variance of K_A, respectively. The capacity of entanglement has found useful applications in conformal and non-conformal quantum field theories [17,18], as well as in models related to gravitational phase transitions [18][19][20][21][22].
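As a quick numeric illustration of the two cumulants just defined, the sketch below computes S_EE and C_E directly from an entanglement spectrum {λ_i}; the specific spectrum is made up for the example.

```python
# First two cumulants of an (assumed) entanglement spectrum.
import numpy as np

lam = np.array([0.6, 0.3, 0.1])            # assumed Schmidt spectrum, sums to 1
log_lam = np.log(lam)

S_EE = -np.sum(lam * log_lam)              # entanglement entropy = <K_A>
C_E = np.sum(lam * log_lam**2) - S_EE**2   # capacity = <K_A^2> - <K_A>^2

print(f"S_EE = {S_EE:.4f}, C_E = {C_E:.4f}")
```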
In this paper, we address the capacity of entanglement for non-local Hamiltonians. To be specific, we answer the following question: given a non-local Hamiltonian, what is the capacity of entanglement for bipartite systems? We show that the entanglement rate is bounded by the fluctuation in the non-local Hamiltonian and the capacity of entanglement. In addition, the quantum speed limit for creating entanglement depends inversely on the fluctuation in the non-local Hamiltonian as well as on the time average of the square root of the capacity of entanglement. Thus, the larger the capacity of entanglement, the shorter the time the system may take to produce the desired amount of entanglement. We illustrate the quantum speed limit for a general two-qubit non-local Hamiltonian and find that our bound is indeed tight. Furthermore, we discuss the capacity of entanglement for self-inverse Hamiltonians and provide a bound on the rate of the capacity of entanglement. Finally, we generalise the capacity of entanglement to bipartite mixed states based on the relative entropy of entanglement measure. This definition reduces to the capacity of entanglement for pure bipartite states, and will open up its exploration for mixed states in the future. We believe that our results can find applications in diverse areas of physics ranging from condensed matter systems to conformal field theories and alike.
The present paper is organised as follows. In Section II, we provide basic definitions and useful relations for the capacity of entanglement for pure bipartite states. In Section III, we discuss the capacity of entanglement for non-local Hamiltonians. In Section IV, we prove that the entanglement rate is bounded by the capacity of entanglement and the speed of quantum evolution under the non-local Hamiltonian. We also provide a quantum speed limit for entanglement production or degradation and discuss how the capacity of entanglement helps in deciding the speed limit. In Section V, we discuss the capacity of entanglement for self-inverse Hamiltonians and provide a bound on the rate of the capacity of entanglement. In Section VI, we generalise the definition of the capacity of entanglement for bipartite mixed states based on the notion of relative entropy of entanglement. Finally, in Section VII, we summarise our findings.
II. DEFINITIONS AND RELATIONS
Let H represent a separable Hilbert space and dim(H) its dimension. Let us consider a bipartite quantum system described by a state vector |Ψ⟩_AB ∈ H_AB = H_A ⊗ H_B with unit norm. It is possible to express the state vector |Ψ⟩_AB as |Ψ⟩_AB = Σ_n √λ_n |ψ_n⟩_A |φ_n⟩_B, (1) where {|ψ_n⟩}_A and {|φ_n⟩}_B are the Schmidt bases in H_A and H_B, respectively, and {λ_n} are non-negative real numbers with Σ_n λ_n = 1. Eq. (1) is called the Schmidt decomposition of |Ψ⟩_AB, and the λ_n are known as the Schmidt coefficients. If the Schmidt decomposition of |Ψ⟩_AB has more than one non-zero Schmidt coefficient, then we say that systems A and B are "entangled". If there is only one non-zero Schmidt coefficient, then the state is not entangled.
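The Schmidt coefficients can be obtained numerically from the singular value decomposition of the coefficient matrix of the state, as in this short sketch (the random state and its dimensions are assumed for illustration):

```python
# Schmidt decomposition via SVD of the coefficient matrix of |Psi>_AB.
import numpy as np

dA, dB = 2, 3
rng = np.random.default_rng(1)
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)               # normalized |Psi>_AB as a dA x dB matrix

# Singular values of the coefficient matrix are sqrt(lambda_n)
s = np.linalg.svd(psi, compute_uv=False)
lam = s**2                               # Schmidt coefficients, sum to 1
print("Schmidt coefficients:", lam, " entangled:", np.sum(lam > 1e-12) > 1)
```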
Let B(H_AB) denote the algebra of linear operators acting on a finite-dimensional Hilbert space H_AB of dimension dim(H_AB), and let D(H_AB) denote the set of density operators for the bipartite system. The density operators are positive operators of unit trace acting on H_AB. For any state ρ_AB ∈ D(H_AB), if we can express ρ_AB as ρ_AB = Σ_i p_i ρ_A^i ⊗ ρ_B^i, then it is a separable state; otherwise the mixed state is entangled. Given a density operator ρ_AB associated with a bipartite quantum system AB, the reduced density matrix for subsystem A (or B) is obtained by taking the partial trace over subsystem B (or A), i.e., ρ_A = tr_B(ρ_AB). A physical quantity of system A represented by a self-adjoint operator O_A on H_A is identified with the self-adjoint operator O_A ⊗ I_B on H_AB, where I_B is the identity operator on H_B. The expectation value of O_A ⊗ I_B in the state ρ_AB is given by tr(ρ_A O_A), where ρ_A is the reduced density operator of system A.
Let us consider a composite system AB in a pure state |Ψ⟩_AB. The amount of entanglement between subsystems A and B can be quantified via the entanglement entropy, which is defined as the von Neumann entropy of the reduced density operator ρ_A = Σ_n λ_n |ψ_n⟩_A⟨ψ_n| (or ρ_B), i.e., S_EE = S(ρ_A) = −tr(ρ_A log ρ_A) = −Σ_n λ_n log λ_n, which is invariant under local unitary transformations on ρ_A. The von Neumann entropy vanishes when the density operator ρ_A is a pure state. For a completely mixed density operator, the von Neumann entropy attains its maximum value of log d_A, where d_A = dim(H_A).
For any density operator ρ_A associated with quantum system A, we can define a formal "Hamiltonian" K_A, called the modular Hamiltonian, with respect to which the density operator ρ_A is a Gibbs-like state (with β = 1), ρ_A = e^{−K_A}/Z, where Z = tr(e^{−K_A}). Note that any density matrix can be written in this form for some choice of Hermitian operator K_A. With slight adjustments in the above equation, the modular Hamiltonian K_A can be written as K_A = −log ρ_A. In this case, the entanglement entropy of the system is equivalent to the thermodynamic entropy of a system described by the Hamiltonian K_A (with β = 1). Written in terms of the modular Hamiltonian K_A = −log ρ_A, the entanglement entropy becomes the expectation value of the modular Hamiltonian, S_EE = ⟨K_A⟩ = tr(ρ_A K_A). The capacity of entanglement is another information-theoretic quantity that has gained some interest recently [13,24]. It is defined as the variance of the modular Hamiltonian K_A [13] in the state |Ψ⟩_AB and can be expressed as C_E = ⟨K_A²⟩ − ⟨K_A⟩² = tr(ρ_A K_A²) − (tr(ρ_A K_A))². The capacity of entanglement can also be defined in terms of the variance of the relative surprisal between two density matrices, V(ρ||σ) [25]: if one of the density matrices becomes maximally mixed (i.e., either ρ or σ becomes I/d), then the variance of the relative surprisal becomes the capacity of entanglement. As shown in Ref. [26], the uncertainty of any observable is a convex function. Given two or more Hermitian operators such as O_1 and O_2, the standard deviations (uncertainties) of the observables satisfy ∆(O_1 + O_2) ≤ ∆O_1 + ∆O_2, i.e., the uncertainty of a sum never exceeds the sum of the uncertainties. If we define the standard deviation of the modular Hamiltonian as the uncertainty in the entanglement operator, then for any two modular Hamiltonians K_1 and K_2 we will have ∆(K_1 + K_2) ≤ ∆K_1 + ∆K_2, where K_i = −ln ρ_i. This property has an interesting implication when the modular Hamiltonian undergoes some variation. Suppose we allow a variation in the modular Hamiltonian as K → K' = K + xV, where V is the additional term in the modular Hamiltonian and x is a real parameter. Then the following relation holds: ∆K' ≤ ∆K + x∆V. For the sake of completeness, we mention the following properties, which apply to C_E on account of its having the same form as the variance of the relative surprisal between two density matrices: 1. Additivity under tensor product: C_E(ρ ⊗ ρ') = C_E(ρ) + C_E(ρ'). 3. Uniform continuity, for ξ some constant and l_1 the trace norm between states D(ρ, ρ'). 5. Corrections to subadditivity: for any bipartite state ρ with marginal states ρ_1, ρ_2 and mutual information I_ρ, with constant χ and f(x) = max(x^{1/4}, x²).
6. For fixed dimension d ≥ 2, the state ρ_d with maximal variance has the spectrum given in Ref. [25], with r being the unique solution of the associated transcendental equation; in the limit of large d, r ≈ 1/2.
For further details and proofs regarding the above properties, readers are advised to go through Ref. [25].
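A minimal numeric check of the additivity property (property 1 above), using nothing beyond the definition C_E = Var(K) with K = −log ρ; the two diagonal test states are made-up examples:

```python
# Check that C_E is additive under tensor products.
import numpy as np
from scipy.linalg import logm

def capacity(rho):
    K = -logm(rho)                    # modular Hamiltonian K = -log(rho)
    mean = np.trace(rho @ K).real     # <K> (the entanglement entropy)
    return np.trace(rho @ K @ K).real - mean**2

rho1 = np.diag([0.7, 0.3])
rho2 = np.diag([0.5, 0.4, 0.1])
print(np.isclose(capacity(np.kron(rho1, rho2)),
                 capacity(rho1) + capacity(rho2)))   # True: additivity holds
```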
III. CAPACITY OF ENTANGLEMENT FOR NON-LOCAL HAMILTONIANS
The dynamics of entanglement under a two-qubit non-local Hamiltonian has been addressed in Ref. [11]. In this section, we address the following question: what is the capacity of entanglement for an arbitrary two-qubit non-local Hamiltonian? Further, we also discuss the rate of the capacity of entanglement for the non-local Hamiltonian. For any two-qubit system, the non-local Hamiltonian can be expressed (up to trivial constants) as H = α·σ ⊗ I_B + I_A ⊗ β·σ + Σ_{ij} γ_{ij} σ_i ⊗ σ_j, (9) where α, β are real vectors, γ is a real matrix, and I_A and I_B are the identity operators acting on H_A and H_B. The above Hamiltonian can be rewritten in one of two standard forms under the action of local unitaries acting on each qubit [11,27]: H_± = ±Σ_{j=1}^{3} μ_j σ_j ⊗ σ_j, (10) where μ_1 ≥ μ_2 ≥ μ_3 ≥ 0 are the singular values of the matrix γ [11]. Using the Schmidt decomposition, any two-qubit pure state can be written as |Ψ⟩ = √p |φ⟩|χ⟩ + √(1−p) |φ⊥⟩|χ⊥⟩. (11) We can utilize the form of the Hamiltonian in Eq. (10), choosing H_+ (i.e., assuming det(γ) ≥ 0), to evolve the state in Eq. (11) without losing any generality [11]. To showcase a specific example, let us choose |φ⟩ = |0⟩ and |χ⟩ = |0⟩. Thus, the state at time t = 0 takes the form |Ψ(0)⟩ = √p |00⟩ + √(1−p) |11⟩. (12) Under the action of the non-local Hamiltonian, the joint state at time t can be written as |Ψ(t)⟩ = e^{−iH_+ t}|Ψ(0)⟩ (with ℏ = 1). (13) To evaluate the capacity of entanglement, we require the reduced density matrix of the evolved two-qubit state, ρ_A(t) = tr_B(ρ_AB(t)). The capacity of entanglement at a later time t can then be calculated from the variance of the modular Hamiltonian K_A. In order to quantify the entanglement production, we define the entanglement rate Γ as in Ref. [11], i.e., Γ(t) = dS_EE(t)/dt. This quantity depends upon the entanglement S_EE, which depends upon a parameter p, and on the rate of change of the Schmidt coefficient. The conditions for obtaining a maximal entanglement rate are of interest, for which two things are significant. First, for a given value of S_EE of the two-qubit system, we find the state |Ψ_E⟩ for which the interaction produces the maximum rate Γ_E; second, the maximal achievable entanglement rate is Γ_max = max_E Γ_E, with corresponding state |Ψ_max⟩. Let us evaluate the objects defined above for an arbitrary Hamiltonian H. We use the Schmidt decomposition of the state, |Ψ(t)⟩ = √p |φ⟩|χ⟩ + √(1−p) |φ⊥⟩|χ⊥⟩, (17) where ⟨φ|φ⊥⟩ = 0 = ⟨χ|χ⊥⟩ and p ≤ 1/2. The entanglement measure S_EE must depend only on the Schmidt coefficient p, given that it must be invariant under local unitary operations. If we choose the entropy of entanglement as S_EE, the entropy of the reduced density operator of one of the qubits is S_EE(p) = −p log p − (1−p) log(1−p). Operationally, S_EE quantifies the amount of EPR entanglement contained asymptotically in a pure state |Ψ⟩_AB; thus S_EE gives the ratio to a maximally entangled EPR state. Considering the infinitesimal time evolution of the Schmidt coefficient of the two-qubit state, the time evolution of the reduced state of subsystem A is given by dρ_A/dt = −i tr_B([H, ρ_AB]). (19) Starting from ρ_A|φ⟩ = p|φ⟩ and using the Schrödinger equation, we find that dp/dt is proportional to √(p(1−p)) h, where h is a real function of |φ⟩, |χ⟩ and H through the matrix element ⟨φχ|H|φ⊥χ⊥⟩. As Γ is to be maximized, we can choose the relative phases such that h = |h|. Note that fixing S_EE means fixing p, and so the maximum rate corresponds to a state with some fixed |φ⟩ and |χ⟩. For any value S_EE of entanglement, the states |φ⟩ and |χ⟩ for which the maximum entanglement rate Γ_E is obtained do not depend on S_EE, but only on the form of the Hamiltonian H.
Let h_max be the maximum value of |h| over all choices of |φ⟩ and |χ⟩, i.e., h_max = max_{φ,χ} |h|. Now, we need to drive the two-qubit state with local operators so that at all times the corresponding state is the one with maximum rate; we would then know how the capacity of entanglement evolves with time.
Evaluating the capacity of entanglement for general pure bipartite states in the Schmidt-decomposed form of Eq. (17) and using the modular Hamiltonian, we can express it as C_E(p) = p(1−p)[ln(p/(1−p))]². We can then define the rate of the capacity of entanglement. Let Γ_C denote this rate, i.e., Γ_C := dC_E(t)/dt. From the earlier result, using the transformed Hamiltonian, we have Γ_C = (dC_E/dp)(dp/dt) = f(p) h, with f(p) = √(p(1−p)) dC_E/dp. Thus, it does not diverge in this form for p = 0 or 1. It should be clear that the local terms corresponding to α, β in Eq. (9) give no contribution to h_max for the given Schmidt-decomposed form of the bipartite state. Determining h_max in terms of μ_{1,2,3}, the maximum is reached when |χ⟩ = |φ⊥⟩. Further utilizing the completeness condition |φ⟩⟨φ| + |φ⊥⟩⟨φ⊥| = I, it can be inferred from μ_1 ≥ μ_2 ≥ μ_3 that the maximum value is reached when |φ⟩ = |0⟩ or |1⟩, which gives h_max = μ_1 + μ_2. Thus, the state that provides the maximum rate of the capacity of entanglement is |Ψ_max⟩, with the corresponding rate Γ_C,max = f(p_0) h_max. The maximum rate Γ_C,max is obtained here for p_0 ≈ 0.0045, which maximizes f(p) to f(p_0) ≈ 1.2108 for the corresponding |Ψ_max⟩. The capacity of entanglement at this maximum rate is C_E(p_0) ≈ 0.1306.
It has been shown that allowing local operations which can entangle each qubit with a local ancilla can increase Γ_max for certain kinds of Hamiltonians [11]. We begin by generalizing the formulas to multilevel systems which contain the ancillas and the qubits. Consider a state |Ψ⟩_AB with the Schmidt decomposition |Ψ⟩_AB = Σ_{n=1}^{N} √λ_n |φ_n⟩|χ_n⟩. Again, the capacity of entanglement depends only on the Schmidt coefficients λ_n ≥ 0. Using the definition of the rate of the capacity of entanglement in Eq. (16) and the Schrödinger equation, we obtain the generalized rate Γ̃_C. Now, let us consider an example where adding ancillas allows one to increase the capacity of entanglement more efficiently. Consider the case in which the ancillas are also qubits. Letting λ_1 = p and λ_2 = λ_3 = λ_4 = (1−p)/3, Eq. (29) simplifies to Γ̃_C = f̃(p) h̃. We have the freedom to choose the phases of the states |φ_n⟩ such that all terms add with the same sign, thus allowing us to replace the imaginary parts of the above terms by their absolute values, i.e., f̃(p) by |f̃(p)|. We find that p_0 ≈ 0.6036, corresponding to the capacity of entanglement C_E(p_0) ≈ 0.5523, maximizes f̃(p) to f̃(p_0) ≈ 1.4459.
Further, proceeding to maximize h̃, we obtain the maximum value h̃_max = μ_1 + μ_2 + μ_3, which occurs when |φ_n⟩ and |χ_n⟩ are both orthogonal maximally entangled states between the qubit and the ancilla. Upon comparing the cases in which ancillas are used to those in which they are not, we have either f̃(p_0) ≥ f(p_0) or h̃_max ≥ h_max. For the case μ_3 = 0, we can use ancillas to increase the maximum rate of the capacity of entanglement Γ_max, as well as Γ for a given capacity of entanglement of the state |Ψ⟩.
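A numeric sketch of the dynamics discussed in this section: evolve |00⟩ under the canonical Hamiltonian H_+ and track the Schmidt coefficient p(t), the entanglement entropy S_EE(t), and the capacity C_E(t). The μ values are assumed for illustration only.

```python
# Evolution of p(t), S_EE(t) and C_E(t) under the canonical two-qubit H_+.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
mu = (1.0, 0.5, 0.2)                        # assumed mu1 >= mu2 >= mu3 >= 0
H = mu[0]*np.kron(sx, sx) + mu[1]*np.kron(sy, sy) + mu[2]*np.kron(sz, sz)

psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0          # |00>

for t in (0.0, 0.5, 1.0):
    psi = expm(-1j * H * t) @ psi0
    rho_A = psi.reshape(2, 2) @ psi.reshape(2, 2).conj().T  # trace out B
    p = np.clip(np.max(np.linalg.eigvalsh(rho_A)), 1e-12, 1 - 1e-12)
    S = -p*np.log(p) - (1-p)*np.log(1-p)
    C = p*(1-p)*np.log(p/(1-p))**2          # C_E for the two-level spectrum {p, 1-p}
    print(f"t={t:.1f}  p={p:.4f}  S_EE={S:.4f}  C_E={C:.4f}")
```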
IV. BOUND ON RATE OF ENTANGLEMENT
In this section, we show that the capacity of entanglement plays an important role in providing an upper bound on the entanglement rate for a non-local Hamiltonian. Specifically, we show that the entanglement rate is upper bounded by the speed of transportation of the bipartite state and the square root of the capacity of entanglement. This also sets a quantum speed limit on entanglement production and degradation for pure bipartite states, governed by the time average of the square root of the capacity of entanglement. Thus, the capacity of entanglement has a physical meaning in deciding how much time a bipartite state takes to produce a certain amount of entanglement.
Let us consider a bipartite system initially in a pure state |Ψ(0)⟩_AB, with dynamics generated by a non-local Hamiltonian H_AB. The time-evolved state at a later time t is given by |Ψ(t)⟩_AB = U_AB(t)|Ψ(0)⟩_AB, where U_AB(t) = e^{−iH_AB t} (with ℏ = 1). Now, we apply the Heisenberg-Robertson uncertainty relation [28] to the two non-commuting operators K_A and H_AB. This leads to ∆K_A ∆H_AB ≥ (1/2)|⟨[K_A, H_AB]⟩|. (34) Recall that the evolution of the average of any self-adjoint operator O is given by d⟨O⟩/dt = −i⟨[O, H_AB]⟩. (35) Using Eq. (35) (for O = K_A) in Eq. (34), we then obtain |d⟨K_A⟩/dt| ≤ 2 ∆K_A ∆H_AB. Let Γ(t) denote the rate of entanglement. Recall that the average of the modular Hamiltonian is the entanglement entropy S_EE. In terms of the entanglement rate Γ(t), the above equation can be written as |Γ(t)| ≤ 2 ∆K_A ∆H_AB. Since the square of the standard deviation of the modular Hamiltonian is the capacity of entanglement, in terms of the capacity of entanglement we can write the above bound as |Γ(t)| ≤ 2 √(C_E(t)) ∆H_AB. (38) To interpret the above equation, first note that 2∆H_AB is nothing but the speed of transportation of the bipartite pure entangled state on the projective Hilbert space of the composite system. If we use the Fubini-Study metric for two nearby states [29][30][31], then the infinitesimal distance between two nearby states is dS² = 4(⟨dΨ|dΨ⟩ − |⟨Ψ|dΨ⟩|²). Therefore, the speed of transportation as measured by the Fubini-Study metric is given by V = dS/dt = 2∆H_AB. Thus, the entanglement rate is upper bounded by the speed of quantum evolution [32] and the square root of the capacity of entanglement, i.e., |Γ(t)| ≤ √(C_E(t)) V.
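The bound in Eq. (38) can be checked numerically for the |00⟩ example of the previous section (with the same assumed μ values); for this family of states it turns out to be saturated, consistent with the tightness claim made later in this section.

```python
# Numeric check of |Gamma(t)| <= 2 * DeltaH * sqrt(C_E(t)).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
H = 1.0*np.kron(sx, sx) + 0.5*np.kron(sy, sy) + 0.2*np.kron(sz, sz)  # assumed mu's
psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0                      # |00>

def p_of_t(t):
    psi = expm(-1j*H*t) @ psi0
    rA = psi.reshape(2, 2) @ psi.reshape(2, 2).conj().T
    return float(np.clip(np.max(np.linalg.eigvalsh(rA)), 1e-9, 1 - 1e-9))

S = lambda p: -p*np.log(p) - (1-p)*np.log(1-p)
C_E = lambda p: p*(1-p)*np.log(p/(1-p))**2

# DeltaH is conserved under the evolution, so evaluate it in the initial state.
dH = np.sqrt((psi0.conj() @ H @ H @ psi0).real
             - (psi0.conj() @ H @ psi0).real**2)

t, dt = 0.3, 1e-5
gamma = (S(p_of_t(t + dt)) - S(p_of_t(t - dt))) / (2*dt)   # entanglement rate
print(abs(gamma), 2*dH*np.sqrt(C_E(p_of_t(t))))            # both ~0.558: saturated
```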
It was shown in Ref. [33] that for the ancilla-unassisted case, the entanglement rate is upper bounded by c‖H‖ log d, where d = min(dim H_A, dim H_B), c is a constant between 0 and 1, and ‖H‖ is the operator norm of the Hamiltonian, which corresponds to p = ∞ of the Schatten p-norm ‖H‖_p = (tr|H|^p)^{1/p}. Now, using the fact that the maximum value of the capacity of entanglement is proportional to S_max(ρ_A)² [13], where S_max(ρ_A) is the maximum value of the von Neumann entropy of the subsystem, which is upper bounded by log d_A with d_A the dimension of the Hilbert space of subsystem A, and that ∆H ≤ ‖H‖, a similar bound on the entanglement rate can be obtained from Eq. (38). Thus, the bound on the entanglement rate given in Eq. (38) is stronger than the previously known bounds.
The bound on the entanglement rate can be used to provide a quantum speed limit for the creation or degradation of entanglement. The notion of the quantum speed limit (QSL) decides how fast a quantum state evolves in time from an initial state to a final state [34]. Even though it was discovered by Mandelstam and Tamm [35], over the last decade there have been active explorations in generalising the notion of the quantum speed limit to mixed states [36,37] and to resources that a quantum system might possess [38]. Recently, the notion of a generalised quantum speed limit was defined in Ref. [39]. In addition, the quantum speed limit for observables has been defined, and it was shown that the QSL for state evolution is a special case of the QSL for observables [40]. For a quantum system evolving under a given dynamics, there exist fundamental limitations on the speed of entropy S(ρ), maximal information I(ρ), and quantum coherence C(ρ) [41], as well as on other quantum correlations like entanglement, quantum mutual information and the Bell-CHSH correlation [42]. Below, we provide a speed limit bound for the entanglement entropy, based on the capacity of entanglement, which can be applied in scenarios where entanglement is generated or degraded. Our bound highlights the non-trivial role played by the capacity of entanglement in deciding the QSL.
The speed limit for the entanglement entropy can be calculated from Eq. (38) by taking the absolute value on both sides and integrating over time. Thus, we have |S_EE(T) − S_EE(0)| ≤ 2 ∫₀^T ∆H_AB √(C_E(t)) dt. (40) For a time-independent Hamiltonian, we obtain the following bound for the quantum speed limit for entanglement: T ≥ T_QSL^E = |S_EE(T) − S_EE(0)| / (2 ∆H_AB ⟨√C_E⟩), (41) where ⟨√C_E⟩ = (1/T)∫₀^T √(C_E(t)) dt is the time average of the square root of the capacity of entanglement. In the case of a time-dependent Hamiltonian H(t), we can apply the Cauchy-Schwarz inequality in Eq. (40) to obtain inequality (42). From this inequality, we get the bound for the speed limit for the entanglement entropy change, where ⟨∆H⟩ = [(1/T)∫₀^T ∆H(t)² dt]^{1/2} is the time-averaged fluctuation in the Hamiltonian. In both these bounds (for time-dependent and time-independent Hamiltonians) it is clear that the evolution speed for entanglement generation (or degradation) is a function of the capacity of entanglement C_E. Thus, we can say that C_E controls how much time a system may take to produce a certain amount of entanglement. Now, one may ask how tight the QSL bound for entanglement generation or degradation is. Here, we illustrate with a specific example that the quantum speed limit for the creation of entanglement is actually tight. Consider the initial state at t = 0 as given in Eq. (12); its time evolution is given by Eq. (13). Estimating the speed limit bound on the entanglement entropy in Eq. (41) for the considered state requires the quantities p(t), ∆H and C_E(t), which involve η(t) = (1 − 2p) cos(2θt).
The plot in Fig. 2 shows T_QSL^E vs T ∈ [0, 0.45] under unitary dynamics generated by the general two-qubit non-local Hamiltonian H_AB^+, beginning with the initial state of the system |Ψ(0)⟩ = |0⟩|0⟩ (taking p = 1 in Eq. (13)). Our example shows that for θ = μ_1 − μ_2 = 0.5 and 1.0, the QSL for entanglement creation is indeed tight and attainable.
V. CAPACITY OF ENTANGLEMENT FOR SELF-INVERSE HAMILTONIANS
In this section, we explore the dynamics of the capacity of entanglement for self-inverse Hamiltonians. Such Hamiltonians are simpler to handle and provide many interesting insights, and the rate of the capacity of entanglement can be addressed for them. It was found in Ref. [11] that the inclusion of an ancilla system leads to an enhancement of the entanglement capability, but for the Ising Hamiltonian H_Ising = σ_z ⊗ σ_z it was shown that the entanglement capability is ancilla-independent [43]. This ancilla-independence of the entanglement capability turns out to be a consequence of the self-inverse property of the Hamiltonian, H_Ising = H_Ising^{−1}. This result was generalized to all Hamiltonian evolutions of the kind H = X_A ⊗ X_B [44], such that X_i = X_i^{−1} ∈ H_i for i ∈ {A, B}. As a consequence of the self-inverse property of the Hamiltonian, the time evolution operator (with ℏ = 1) is U(t) = e^{−iHt} = cos(t) I − i sin(t) H. Let |Ψ(0)⟩_AB be the initial state of the bipartite system AB, which can be expressed in the Schmidt decomposition as |Ψ(0)⟩_AB = Σ_n √λ_n |φ_n⟩|χ_n⟩. Let ρ_AB(t) denote the density operator at time t. The time evolution of ρ_AB(t) is governed by the Liouville-von Neumann equation, dρ_AB/dt = −i[H_AB, ρ_AB], where H_AB is the non-local Hamiltonian of the composite system. The dynamics of the reduced density operator ρ_B (or ρ_A) is obtained from the above equation by tracing out A (or B): dρ_B/dt = −i tr_A([H_AB, ρ_AB]). Now, we first calculate an upper bound on the rate of the capacity of entanglement for unitary evolution, and then address the case of a self-inverse Hamiltonian. To calculate this upper bound, we first differentiate both sides of Eq. (6) with respect to time, which leads to dC_E/dt = d⟨K_A²⟩/dt − 2 S_EE Γ(t), (52) where Γ(t) is the rate of entanglement. Now, we use the integral representation of the logarithm of an operator, log ρ = ∫₀^∞ ds [(1+s)^{−1} I − (ρ + sI)^{−1}], where I is the identity operator, to compute the first term on the right-hand side of Eq. (52). The second term on the right-hand side of the resulting expression is the rate of the entropy [45], so we rewrite Eq. (52) accordingly. Now, we consider the case where ρ is full rank; the first term of the above equation can then be simplified in terms of k_max, the maximum eigenvalue of the modular Hamiltonian. We then obtain an upper bound on the rate of the capacity of entanglement, and using Eq. (38) this bound can be expressed in terms of V = 2∆H_AB, the speed of the bipartite quantum state.
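Before proceeding, a quick numeric check of the closed form quoted above: whenever H² = I, the evolution operator e^{−iHt} equals cos(t)I − i sin(t)H exactly, as the sketch below verifies for the Ising-type Hamiltonian.

```python
# Closed form of the evolution operator for a self-inverse Hamiltonian.
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0 + 0j, -1.0])
H = np.kron(sz, sz)                     # Ising-type Hamiltonian, H @ H = I

t = 0.7
U = expm(-1j * H * t)
U_closed = np.cos(t) * np.eye(4) - 1j * np.sin(t) * H   # valid whenever H^2 = I
print(np.allclose(U, U_closed))         # True
```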
For the ancilla-unassisted case, the entanglement rate Γ(t) is upper bounded by c‖H‖ log d (see Ref. [33]); a corresponding upper bound on the rate of the capacity of entanglement Γ_C follows from this. Now we find the upper bound on Γ_C for self-inverse Hamiltonians. The maximum entanglement rate Γ(t) for the self-inverse Hamiltonian H = X_A ⊗ X_B was calculated in Ref. [44], and the bound on Γ_C can be expressed in terms of this maximum rate. The resulting bound is independent of the details of the initial state and uses only the self-inverse nature of the non-local Hamiltonian.
VI. CAPACITY OF ENTANGLEMENT FOR MIXED STATES
In the previous sections, we used the definition of C_E for pure states. Here, we generalise the definition to mixed states in such a way that it reduces to the previous definition for pure states. For this, we use the relative entropy of entanglement, since it reduces to the entanglement entropy for pure states. The relative entropy of entanglement was defined in Ref. [46] and further extended to arbitrary dimensions in Ref. [47]. It is given by E_R(ρ) = min_{σ∈SEP} S(ρ||σ), where SEP is the set of all separable or positive partial transpose (PPT) states and S(ρ||σ) = tr(ρ log ρ − ρ log σ). Operationally, the relative entropy of entanglement quantifies the extent to which a given mixed entangled state can be distinguished from the closest state which is either separable or has a positive partial transpose (PPT). It is also an entanglement monotone and is asymptotically continuous.
In the following, we shall denote the state in SEP for which the minimum is attained for a given ρ_AB as ρ*_AB. Then, we can write E_R(ρ_AB) as E_R(ρ_AB) = min_{σ_AB∈SEP} S(ρ_AB||σ_AB) = S(ρ_AB||ρ*_AB). (63)
Now, we claim that the capacity of entanglement for mixed states is given by C_E(ρ_AB) = tr[ρ_AB (log ρ_AB − log ρ*_AB)²] − (S(ρ_AB||ρ*_AB))². (64) We will now show that this agrees with the definition of the capacity of entanglement for pure states. For a pure state, the density operator is ρ_AB = |Ψ⟩_AB⟨Ψ|. (66) The expression for ρ*_AB corresponding to such a ρ_AB is known [48]: in the Schmidt basis of Eq. (1), ρ*_AB = Σ_n λ_n |ψ_n⟩⟨ψ_n| ⊗ |φ_n⟩⟨φ_n|. Defining A_Ψ = |Ψ⟩_AB⟨Ψ| − I, one finds that the only surviving term in Eq. (68) is ⟨Ψ|(log ρ*_AB)²|Ψ⟩_AB. Now, we have ⟨Ψ|(log ρ*_AB)²|Ψ⟩_AB = Σ_n λ_n (log λ_n)². The second term of Eq. (68) is equal to E_R(ρ_AB)² for pure states. Thus, for ρ_AB = |Ψ⟩⟨Ψ|_AB, we have C_E(ρ_AB) = Σ_n λ_n (log λ_n)² − (Σ_n λ_n log λ_n)², which agrees with the expression for the capacity of entanglement for pure bipartite states.
It may be worth noting that the capacity of entanglement for mixed states can also be expressed as the variance of a shifted modular Hamiltonian for the joint system. Upon defining the modular Hamiltonians for the composite states ρ_AB and ρ*_AB as K_AB = −log ρ_AB and K*_AB = −log ρ*_AB, we have C_E(ρ_AB) = ⟨K̃_AB²⟩ − ⟨K̃_AB⟩², where K̃_AB = K_AB − K*_AB is the shifted modular Hamiltonian for the composite system. This provides another meaning for the capacity of entanglement for mixed states. Now, we illustrate the capacity of entanglement for mixed states using the above definition. For general mixed entangled states, it is not always easy to find the closest separable state. However, for those cases where we know the closest separable state, we can compute the capacity of entanglement. Let us first consider a mixed entangled state built from the Bell state |φ+⟩ = (1/√2)(|00⟩ + |11⟩), one of the four Bell states. The corresponding closest separable state, which minimizes the quantum relative entropy with ρ_AB, is known [48], and the expression for the relative entropy of entanglement for this example follows accordingly. Consider another example of a mixed state, ρ_AB = λ|φ+⟩⟨φ+| + (1−λ)|00⟩⟨00|.
The closest separable state minimizing the relative entropy in this case is of the form given in Ref. [48]. The relative entropy of entanglement can be found analytically as E_R(λ) = s₊ ln(s₊) + s₋ ln(s₋), where s₊ and s₋ are functions of λ. The detailed expressions for the capacity of entanglement for the states ρ_AB in Eq. (72) and Eq. (75) are very complicated; for the purpose of illustration we have provided numerical plots instead. From the behaviour of the plots in Fig. 3 and Fig. 4, it can be inferred that for λ ∈ {0, 1}, the cases where all non-zero eigenvalues of the state are the same and the state thus becomes either pure or maximally mixed, the capacity of entanglement vanishes for such flat states. We leave the detailed investigation of the mixed-state case for future work.
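The mixed-state definition in Eq. (64) is straightforward to evaluate numerically whenever a closest separable state is available. The sketch below does this for a Werner state; the closest separable state used here (the F = 1/2 Werner state) is an assumption taken from the Vedral-Plenio analysis of Werner states, not a formula quoted from this paper, and the mixing parameter is made up.

```python
# Mixed-state capacity C_E = V(rho||rho*) for a Werner state (assumed example).
import numpy as np
from scipy.linalg import logm

phi = np.zeros(4); phi[0] = phi[3] = 1/np.sqrt(2)     # |phi+>
P = np.outer(phi, phi)

lam = 0.8                                             # assumed mixing parameter
rho = lam*P + (1 - lam)*(np.eye(4) - P)/3             # Werner state with F = lam
sigma = 0.5*P + 0.5*(np.eye(4) - P)/3                 # assumed closest separable

D = logm(rho) - logm(sigma)                           # log rho - log rho*
E_R = np.trace(rho @ D).real                          # S(rho||rho*)
C_E = np.trace(rho @ D @ D).real - E_R**2             # variance of relative surprisal
print(f"E_R = {E_R:.4f}, C_E = {C_E:.4f}")
```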
VII. CONCLUSIONS
Undoubtedly, the study of quantum entanglement for bipartite and multipartite states has been one of the prime areas of research over the last several decades. Even though the dynamics of entanglement for non-local Hamiltonians has been addressed earlier, the question of the dynamics of the capacity of entanglement had not been studied before. The notion of the capacity of entanglement is a very useful quantity, and it can be regarded as the quantum information theoretic counterpart of the heat capacity. For any bipartite pure state, the capacity of entanglement is the variance of the modular Hamiltonian in the reduced state of either subsystem. In this paper, we have studied the dynamics of the capacity of entanglement under non-local Hamiltonians. Our results answer a very pertinent question about the capacity of entanglement that a system can possess when it evolves in time under a non-local Hamiltonian. The capacity of entanglement also has a role in deciding the upper bound on the entanglement rate. We have shown that the quantum speed limit for creating entanglement is not only governed by the fluctuation in the non-local Hamiltonian, i.e., the speed of transportation of the bipartite state, but also depends inversely on the time average of the square root of the capacity of entanglement. In addition, we have discussed the capacity of entanglement for self-inverse Hamiltonians and found an upper bound on the rate of the capacity of entanglement for this case. We have also generalised this quantity to bipartite mixed states based on the relative entropy of entanglement; the generalisation reduces to the known form in the pure-state case. In future, it will be worth exploring this notion further, as it may have useful applications in other areas of physics. | 2022-07-26T01:16:23.079Z | 2022-07-23T00:00:00.000 | {
"year": 2022,
"sha1": "4ec9add8e0fa945747053d29196350feb4d17a32",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4ec9add8e0fa945747053d29196350feb4d17a32",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
15690535 | pes2o/s2orc | v3-fos-license | Association between HLA Variations and Chronic Hepatitis B Virus Infection in Saudi Arabian Patients
Hepatitis B virus (HBV) infection is a leading cause of liver diseases including cirrhosis and hepatocellular carcinoma. Human leukocyte antigens (HLAs) play an important role in the regulation of the immune response against infectious organisms, including HBV. Recently, several genome-wide association studies (GWAS) have shown that genetic variations in HLA genes influence disease progression in HBV infection. The aim of this study was to investigate HLA genetic polymorphisms and their possible role in HBV infection in Saudi Arabian patients. Variations in HLA genes were screened in 1672 subjects who were divided according to their clinical status into six categories as follows: clearance group, inactive carriers, active carriers, cirrhosis patients, hepatocellular carcinoma (HCC) patients and uninfected healthy controls. Three single nucleotide polymorphisms (SNPs) belonging to the HLA-DQ region (rs2856718, rs7453920 and rs9275572) and two SNPs belonging to the HLA-DP region (rs3077 and rs9277535) were studied. The SNPs were genotyped by PCR-based DNA sequencing (rs2856718) and allele-specific TaqMan genotyping assays (rs3077, rs7453920, rs9277535 and rs9275572). The results showed that rs2856718, rs3077, rs9277535 and rs9275572 were associated with HBV infection (p = 0.0003, OR = 1.351, CI = 1.147–1.591; p = 0.041, OR = 1.20, CI = 1.007–1.43; p = 0.045, OR = 1.198, CI = 1.004–1.43 and p = 0.0018, OR = 0.776, CI = 0.662–0.910, respectively). However, the allele frequencies of rs2856718, rs7453920 and rs9275572 differed significantly between chronically infected patients and the clearance group (p = 0.0001, OR = 1.462, CI = 1.204–1.776; p = 0.0178, OR = 1.267, CI = 1.042–1.540 and p = 0.010, OR = 0.776, CI = 0.639–0.942, respectively). No association was found when polymorphisms in HLA genes were compared between active carriers and cirrhosis/HCC patients. In conclusion, these results suggest that variations in HLA genes could affect susceptibility to, and clearance of, HBV infection in Saudi Arabian patients.
Introduction
Hepatitis B infection is an inflammatory illness of the liver caused by the hepatitis B virus (HBV). It is a potentially severe disease, with over 400 million chronic HBV patients and nearly 1.2 million deaths every year [1]. Even though 2-10% of HBV-infected individuals develop chronic complications, the clinical outcomes vary, with 15-40% of these chronic HBV patients at a higher risk of developing liver cirrhosis and hepatocellular carcinoma (HCC) during their lifetime [2]. Although the exact mechanism is not fully understood, this difference in response to the HBV virus is believed to be attributable to a complex web of inter-related factors, such as host genetic, viral, and environmental factors [3].
Since the outcome of any infection depends mainly on the host immune response, a number of studies have investigated and reported that several variations in the human leukocyte antigen (HLA) class I and class II genes are involved in HBV persistence or clearance [4,5,6]. HLAs belong to the major histocompatibility complex (MHC) genes that are located on chromosome 6p21. MHC class II molecules play an important role in the defense against infections and are involved in presenting antigen to CD4+ T cells, thereby augmenting antibody production and cytotoxic T cell activation. Such molecules are encoded by three different loci, namely HLA-DR, -DQ, and -DP. These genes are highly polymorphic, thus enabling them to present a wide range of antigens [7,8,9].
A recent genome-wide association (GWAS) study [13] identified 11 single nucleotide polymorphisms (SNPs) belonging to the class II HLA-DP region as associated with chronic hepatitis B infection among Japanese subjects. However, upon validation in two independent Japanese cohorts and a Thai cohort, it was revealed that only two SNPs (rs3077 and rs9277535) remained significant. A second GWAS study conducted by the same group among Japanese subjects revealed two SNPs (rs2856718 and rs7453920) within the HLA-DQ locus to be significantly associated with hepatitis B persistence [14]. In addition, a study conducted on a Chinese population revealed that the non-risk alleles of the HLA-DP SNPs rs3077 and rs9277535 showed protective effects for the clearance of the virus [15]. Similarly, several other studies conducted on different Chinese sub-populations have investigated the role of HLA-DP variants in the development of persistent chronic HBV infection or its clearance [16,17,18,19]. Thus, this study aims to determine whether similar observations can be made when these SNPs are examined in HBV-infected or HBV-cleared individuals of Saudi Arabian origin. Five SNPs were analyzed: two belonging to the HLA-DP region (rs3077 and rs9277535) and three belonging to the HLA-DQ region (rs2856718, rs7453920 and rs9275572).
Patients
The study protocol conformed to the 1975 Declaration of Helsinki and was approved by the institutional review boards of King Faisal Specialist Hospital and Research Center, Armed Forces Hospital, and King Khalid University Hospital, Riyadh, Saudi Arabia. A total of 1672 Saudi nationals were included in the study, recruited during a three-year period from August 2007 to August 2010. They represented the complete spectrum of HBV-infected individuals: 488 inactive asymptomatic HBV carriers (Group I), 208 active symptomatic HBV carriers (Group II), 85 HBV-infected patients diagnosed with liver cirrhosis or cirrhosis+HCC (Group III) and 304 HBV-cleared subjects (Group IV). The study also included 587 healthy control subjects who were blood donors and were HBs antigen (HBsAg) and HBe antigen (HBeAg) negative. All patients signed an informed consent prior to enrolling in the study, and their basic demographic data were recorded. Subjects who were found to be positive for HBsAg and negative for HBeAg, with persistently normal serum ALT levels and an HBV DNA level <2000 IU/mL, were characterized as inactive carriers, while subjects who were found to have repeated detection of HBsAg over a period of six months, with elevated serum ALT levels and an HBV DNA level ≥2000 IU/mL, were diagnosed as active HBV carriers. The clearance group was identified as individuals who were anti-HBcore antibody positive and HBsAg and HBeAg negative. Liver cirrhosis among HBV-infected patients was confirmed by liver biopsy or by clinical, biochemical or radiological evidence of cirrhosis. Diagnosis of HCC was made by computed tomography and/or magnetic resonance imaging of the liver, according to the guidelines published for the diagnosis and management of HCC [20]. Baseline characteristics including age, gender, and clinical data such as biochemical tests and viral load are shown in Table 1.
Genotyping of HLA SNPs
Genomic DNA from peripheral blood mononuclear cells was extracted using the Gentra Pure Gene kit according to the manufacturer's protocol (Qiagen, Hilden, Germany). Blood samples from patients and controls were genotyped for the five HLA SNPs using either a) a PCR-based genotyping assay or b) a TaqMan assay. a) PCR-based sequencing assay: the reaction contained sequencing buffer and 0.2 µM of each primer (either the forward or reverse primer specific for the target sequence). The reaction was cycled at 96°C for 1 minute, followed by 25 cycles of 96°C for 10 seconds, 55°C for 5 seconds and 60°C for 4 minutes. Sequencing products were purified using DyeEx spin columns and eluted in 25 µl ddH2O. Each sample was then vacuum-dried and resuspended in 15 µl of Hi-Di formamide. The samples were analyzed on an ABI 3700 DNA Analyzer (Applied Biosystems, USA). b) TaqMan genotyping assay: four HLA SNPs (rs3077, rs9277535, rs9275572 and rs7453920) were genotyped using the TaqMan allelic discrimination assay on the 7900 HT Fast Real Time PCR System (Applied Biosystems, Foster City, CA, USA). The amplifying primers and probes were ordered for TaqMan (Applied Biosystems, Foster City, CA, USA). One of the allelic probes was labeled with FAM dye and the other with the fluorescent VIC dye. PCR was run in TaqMan universal master mix (Applied Biosystems) at a probe concentration of 20×. The reaction was performed in a 96-well format using 20 ng of genomic DNA in a total reaction volume of 25 µl. The reaction plates were heated for 2 min at 50°C and for 10 min at 95°C, followed by 40 cycles of 95°C for 15 s and 60°C for 1.5 min. The fluorescence intensity of each well in the TaqMan assay plate was read, and fluorescence data files from each plate were analyzed by automated software (SDS 2.4).
Statistical analysis
Statistical analysis was performed using SPSS version 17.0 (SPSS Inc., Chicago, IL, USA). The genotypic and allelic distributions of the HLA SNPs among the patient groups, controls and clearance group were assessed by means of Pearson's χ² test, and the associations between the SNPs and disease status were calculated under additive, dominant and recessive genetic models and expressed in terms of odds ratios (OR) and their 95% confidence intervals (CI). A p ≤ 0.05 was considered statistically significant. The SNPs were tested for Hardy-Weinberg equilibrium (HWE) using the DeFinetti program (http://ihg.gsf.de/cgi-bin/hw/hwa1.pl). A cut-off p-value of 0.01 was set for HWE.
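For readers who want to reproduce this kind of analysis outside SPSS/DeFinetti, the following hedged Python sketch implements the two core statistics used here: a one-degree-of-freedom Hardy-Weinberg chi-square test from genotype counts, and an allelic odds ratio with a 95% Wald confidence interval. The counts in the example calls are invented, not data from this study.

```python
# HWE chi-square test and allelic odds ratio with 95% CI (illustrative).
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2*n_AA + n_Aa) / (2*n)                        # frequency of allele A
    expected = np.array([p*p, 2*p*(1-p), (1-p)**2]) * n
    observed = np.array([n_AA, n_Aa, n_aa])
    stat = np.sum((observed - expected)**2 / expected)
    return stat, chi2.sf(stat, df=1)                   # 1 degree of freedom

def allelic_odds_ratio(case_risk, case_other, ctrl_risk, ctrl_other):
    """OR for risk-vs-other allele counts, with a 95% Wald CI."""
    or_ = (case_risk * ctrl_other) / (case_other * ctrl_risk)
    se = np.sqrt(1/case_risk + 1/case_other + 1/ctrl_risk + 1/ctrl_other)
    ci = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
    return or_, ci

print(hwe_chi2(180, 240, 80))                 # (stat, p-value)
print(allelic_odds_ratio(600, 520, 500, 580)) # (OR, [lower, upper])
```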
Results
In the present case-control study, two SNPs belonging to HLA-DP (rs3077 and rs9277535) and three SNPs in HLA-DQ (rs2856718, rs7453920 and rs9275572) were analyzed. The study included 488 inactive HBV carriers, 208 active HBV carriers, 85 HBV-infected patients suffering from cirrhosis or cirrhosis+HCC, 304 HBV-cleared individuals and 587 healthy uninfected controls. Hardy-Weinberg equilibrium (HWE) was assessed for the five SNPs using the chi-square test with one degree of freedom; only rs2856718 deviated from HWE when analyzed for the whole population. This SNP was found to be in HWE in healthy control subjects, and therefore none of the SNPs were excluded from the analysis. The genotypic distributions of the five SNPs among patients and controls are shown in Table 2. The SNPs rs2856718 (OR = 1.351; 95% C.I. 1.147-1.591; p = 0.0003), rs3077 (OR = 1.200; 95% C.I. 1.007-1.431; p = 0.041) and rs9277535 (OR = 1.198; 95% C.I. 1.004-1.430; p = 0.045) were found to be significantly associated with hepatitis B infection when the HBV-infected patient group was compared against the control group (Table 2). In addition, these three SNPs were found to be recessively associated with susceptibility to HBV infection, with odds ratios of 0.513, 1.581 and 1.871, respectively. For the HLA-DQ SNPs, no significant association was observed for rs7453920 with regard to HBV susceptibility; however, the minor allele A of HLA-DQ rs9275572 was found to be recessively associated with HBV infection in the opposite direction, with an OR of 0.776 (95% C.I. 0.427-0.821) and a p-value of 0.001, suggesting a plausible protective role of the homozygous AA genotype against HBV infection.
When these SNPs were analyzed to determine whether they play a role in clearing the HBV virus, SNPs rs2856718 (OR = 1.462; 95% C.I. 1.204-1.776; p = 0.0001), rs7453920 (OR = 1.267; 95% C.I. 1.042-1.540; p = 0.017), and rs9275572 (OR = 0.776; 95% C.I. 0.639-0.942; p = 0.0104) were found to have significant associations (Table 3). The frequency of the rs2856718-G allele among HBV-cleared individuals (freq. = 0.55) was higher than that among HBV-infected patients (freq. = 0.45), and the G allele was found to be dominantly associated with an OR of 3.640 (95% C.I. 2.523-5.265) and p < 0.0001, suggesting that the G allele may have an important role in clearing the HBV virus. Similarly, rs7453920-G was found to be dominantly associated with HBV clearance, with an OR of 1.812 (95% C.I. 1.194-2.761) and p = 0.0030. Also, the frequency of rs9275572-A was found to be higher in HBV-infected patients than in those who cleared the virus. On comparing groups II, III and IV with inactive HBV carriers (group I), none of the SNPs were found to have a significant association with HBV persistence (Table 4). However, the rs3077-G allele was found to be dominantly associated with HBV persistence, but in the negative direction (OR = 0.675, 95% C.I. 0.502-0.908; p = 0.0092).
Similarly, no significant association was observed with regard to the development of cirrhosis+HCC among HBV-infected patients when groups III+IV were compared to group II (active HBV carriers) (Table 5).
Haplotype analysis was performed between HBV patients and the control group, and two blocks were produced. The first block (block 1) consists of the two HLA-DQ SNPs rs2856718 and rs9275572, while the other block (block 2) consists of the HLA-DP SNPs rs3077 and rs9277535 (Figure 1a and b). Three of the four haplotypes in block 1 were found to be significant (Table 6). The haplotype AG, which includes the risk allele for rs2856718, was significant (p = 0.0366, freq. = 0.416); the haplotype GA, which includes the protective alleles, was significant with p < 0.0001 and a frequency of 0.259 within the population. In addition, the haplotype AA, which includes the risk allele for rs2856718 and the protective allele for rs9275572, was also significant (p = 0.0284, freq. = 0.098). For block 2, only one haplotype, GG, was found to be significant (p = 0.0121, freq. = 0.147).
Similarly, haplotype analysis was performed between HBV patients and HBV-cleared individuals for the two HLA-DQ SNPs rs2856718 and rs9275572. Three of the four haplotypes were found to be significant (Table 7). The haplotype AG, which includes the risk allele for rs2856718, was observed more frequently (freq. = 0.414) and was significant with p = 0.0139, while the haplotype GA, which includes the risk allele for rs9275572, was also highly significant (p < 0.0001, freq. = 0.245). The haplotype that includes the risk alleles of both SNPs was also significant (p = 0.0297), but its frequency was only 1%.
Discussion
In this study, four SNPs (rs2856718, rs3077, rs9277535 and rs9275572) belonging to the HLA-DP and HLA-DQ regions were found to be significantly associated with HBV susceptibility. The SNP rs2856718, located in the intergenic region between HLA-DQA2 and HLA-DQB1, was found to be dominantly associated with HBV infection, consistent with the findings of a previous study [14]. For rs2856718, the risk genotype AA was more predominant among HBV-infected patients (freq. = 0.40) than among HBV-cleared subjects (freq. = 0.15). Under the dominant model, this study revealed that the non-risk allele G was strongly associated with HBV clearance when compared to AA genotype carriers. This suggests that inheriting a single rs2856718-G allele would reduce the risk of an individual progressing to chronic HBV infection.
Similar observations were made for the non-risk allele G of rs7453920, which was also dominantly associated with HBV clearance. In agreement with the observations of Hu et al. [18], no significant association was observed with liver cirrhosis/HCC risk. The SNP rs7453920, which belongs to intron 1 of the HLA-DQB2 region, failed to show the association with susceptibility to HBV infection that was observed in the study of Mbarek et al. [14].
The SNPs rs3077 and rs9277535 have been analyzed in several studies [13,14,15,16,17,18,19,21,22]. In the present study, the rs3077-G allele (freq. = 0.23) was found to be the minor allele compared to the A allele (freq. = 0.77) among healthy uninfected subjects. This finding is consistent with observations from cohorts of different ancestries, such as Caucasian (G = 0.16), Mexican in California (G = 0.21), Tuscan in Italy (G = 0.22) and Maasai in Kenya (G = 0.30), who were included in the HapMap project, and from the pilot studies of the 1000 Genomes project conducted among Caucasian populations (G = 0.13) (http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=3077). A similar observation was made by Vermehren et al. [23], with a G allele frequency of 0.18 among healthy Caucasians. That study further reported that rs3077-G was recessively associated with the risk of HBV infection. This was in agreement with our study, where the SNP rs3077 was found to be significantly associated with HBV infection, predominantly among HBV patients who carried the homozygous GG genotype. Surprisingly, our observations differed from other Asian studies in terms of allelic distributions. Similar work described the same findings, with the major G allele being a risk for HBV infection [19]. Other studies conducted on Chinese [15,16,17,22], Japanese [13,24], Thai [13] and Korean [24] subjects reported that the minor A allele was protective against HBV infection. A phylogenetic analysis using four VNTRs and one STR revealed that Japanese, Chinese (Han, Hui and Uygur populations) and Kazakhs formed one cluster, while two European populations (Greeks and Italians) formed another cluster with the Saudi Arabian population sample, suggesting that the Saudi Arabian population might be more closely related to Caucasians [25]. That might be one reason for this difference in genotype distribution. In addition, no significant association was observed for rs3077-G in relation to HBV viral clearance, but a significant association was estimated in the case of persistent HBV infection under the dominant model, although in the opposite direction, suggesting that the AA genotype may play a role in the progression of HBV infection to chronicity while the G allele might be protective against this progression. This is supported by the findings of Tseng et al. [26], who reported that the GA and GG haplotypes of rs3077 and rs9277535 were associated with a higher HBeAg seroconversion rate among chronic HBV patients undergoing PEG-IFN therapy. However, no association was observed with progression to liver cirrhosis or HCC in this study, which is in agreement with An et al. [16] but contradicts the conclusions of Hu et al. [18].
The SNP rs9277535 was also found to be significantly associated with HBV susceptibility. The GG genotype was more prevalent among HBV patients (GG = 0.09) than among controls (GG = 0.05) and was thus found to be recessively associated with HBV risk. This was consistent with studies conducted on different Chinese populations that reported the G allele to confer susceptibility to chronic HBV infection [17,19] and the A allele to have a protective effect against HBV [16,17,21], while others reported a significant association with HBV clearance [15,21,22]. Other studies have reported no significant association with HBV infection or its recovery among Caucasians [23] or among African- and European-Americans [27]. Certain discrepancies regarding the distribution of alleles, as observed for rs3077 [15,16,17,18,19,21,22], remain, but the observations of this study (G = 24%) were comparable to the data published for Yoruban cohorts in the HapMap project and in the 1000 Genomes project, with the minor G allele frequency varying between 10.3-12%. Furthermore, we found no association for rs9277535 with HCC risk, which is consistent with the results of Hu et al. [18].
In addition to the above four SNPs, we analyzed a novel HLA-DQ SNP, rs9275572; its A allele appeared to show a protective effect against HBV infection and also showed a significant association with HBV clearance. This SNP has been reported to have a significant association with HCV-induced HCC among Japanese subjects [28]. However, in this study no significant association was observed with HBV-related liver cirrhosis or HCC; a similar result was observed in the study conducted by Li et al. [29].
The haplotype that included the risk alleles of the two SNPs rs3077 and rs9277535 was found to be significantly associated with HBV susceptibility. This can be substantiated by evidence from a study that reported these SNPs to be strongly associated with the regulation of mRNA expression of HLA-DPA1 and -DPB1, their expression being lowered with increasing risk of chronic HBV [30].
In summary, this study demonstrated that genetic variations in the HLA-DP and -DQ genes are strongly associated with HBV susceptibility in the Saudi Arabian population. Furthermore, a major finding of this study is that SNPs belonging to HLA-DQ variants are linked to HBV viral clearance. | 2016-05-12T22:15:10.714Z | 2014-01-22T00:00:00.000 | {
"year": 2014,
"sha1": "12e2e959b0211e0bdf9123ec38fcdf2d05989c8f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0080445",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "12e2e959b0211e0bdf9123ec38fcdf2d05989c8f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212696568 | pes2o/s2orc | v3-fos-license | Screening for non-communicable diseases
India is experiencing an expeditious health transition, with a rising burden of non-communicable diseases (NCDs) exceeding the burden of communicable diseases such as water-borne or vector-borne diseases, HIV, TB and others. Non-communicable diseases are the major cause of global mortality, contributing to more than 63% of all-cause mortality in the population. Non-communicable diseases also cause considerable loss in potentially productive years of life, and losses due to premature deaths related to non-communicable diseases are projected to increase over the years.
common NCDs, utilizing the services of the frontline workers and health workers under the existing healthcare system, starting from the primary health centers. 4 Approval for conducting this retrospective analytical study was given by the Institutional Ethics Committee. Consent was not obtained from individual study participants as data was accessed and analyzed from medical records only, maintaining complete patient confidentiality.
The objective of this study was to enumerate the results of screening for non-communicable diseases in the NCD clinic of a tertiary health centre over a period of one year.
METHODS
This retrospective analytical study was conducted using data collected over a period of one year, from January 2018 to December 2018. The results of screening tests conducted in the NCD clinic of the Government tertiary care Hospital for Women, Chennai, for detecting hypertension, diabetes mellitus, breast cancer and cervical cancer were recorded. The flowchart and screening methods followed were those recommended by the NHM-NPCDCS. The data thus obtained were analyzed using standard statistical methods. All women attending the NCD clinic during the study period formed the study population. All women attending the NCD clinic for the first time during the study period were included in the study and formed the study group; all women who came for follow-up to the NCD clinic were excluded. Women attending the NCD clinic were informed about all the screening procedures, and willing women were screened.
Screening for hypertension
The procedure followed was as given in standard medical textbooks. Blood pressure was measured over the brachial artery for women attending the NCD clinic, in a sitting posture, with the BP cuff covering two-thirds of the upper arm length and the entire girth of the upper arm, at the level of the heart. The criterion for raised blood pressure was a recorded blood pressure of ≥140/90 mmHg for women aged >18 years, and >150/90 mmHg for women aged 60 years and above.
Screening for diabetes mellitus
Women aged above 30 years attending the NCD clinic were screened for diabetes mellitus using the random capillary blood glucose value obtained by the finger-prick method. A random capillary blood glucose value ≥140 mg/dl was considered a positive screening test for diabetes mellitus.
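The two threshold rules above can be summarised in a short sketch; the function names and the simple rule structure are illustrative assumptions, not part of the NPCDCS protocol text.

```python
# Hedged sketch of the hypertension and diabetes screening criteria above.
def hypertension_screen_positive(systolic, diastolic, age):
    if age >= 60:
        return systolic > 150 or diastolic > 90   # >150/90 mmHg at 60+
    return systolic >= 140 or diastolic >= 90     # >=140/90 mmHg for age >18

def diabetes_screen_positive(random_capillary_glucose_mg_dl, age):
    # screened only above 30 years of age; >=140 mg/dl is screen-positive
    return age > 30 and random_capillary_glucose_mg_dl >= 140

print(hypertension_screen_positive(142, 88, age=45))   # True
print(diabetes_screen_positive(150, age=52))           # True
```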
Screening for breast cancer
Both breasts were inspected to compare size, shape and colour. Any swelling or distortion of the breasts, and any dimpling, puckering or bulging of the skin, was looked for. The nipples were inspected for position, redness, rash or discharge. The above changes were looked for once again with both arms raised. Next, with the woman in the lying-down position, both breasts were palpated with the flat of the palm, and this was repeated with the first few finger pads of the hand using a circular motion covering an area about the size of an old one-rupee coin. The entire breast, both right and left, was examined from top to bottom and then left to right to detect the presence of any lumps.
Screening for cervical cancer
The woman was examined in the dorsal lithotomy position. The cervix was visualized under a magnavision lens. Any abnormal appearance of, or discharge from, the cervix was noted. After application of 5% acetic acid to the surface of the cervix, the appearance of any white change and the area covered were noted. A cervix with abnormal appearance/discharge, and/or acetowhite changes after application of acetic acid, was considered test positive.
Women with positive test results were encouraged to initiate lifestyle changes (avoiding tobacco use and high salt intake, increasing fruit and vegetable intake, and undertaking moderate physical activity) and were referred to the respective hypertension, diabetes or breast clinic at RGGGH for further management. Women with positive tests on cervical cancer screening were referred to the colposcopy screening clinic in the same tertiary centre for further management.
RESULTS
Of the 42,519 women screened for common non-communicable diseases (hypertension, diabetes mellitus, breast cancer and cervical cancer), 5.55% (n=2,359) had positive results on the screening test for at least one of the diseases screened. A total of 13,971 women were screened for hypertension in the study period, of whom 1,216 were found to have raised blood pressure. Of the 11,708 women screened for diabetes mellitus in 2018, 856 had positive screening results. A total of 7,568 women were screened for cervical cancer in the study year, and 175 had positive test results.
A total of 9,272 women were screened for breast cancer; of these, 112 had positive test results. As per the NCD guidelines, any woman who tested positive on the screening tests for hypertension, diabetes mellitus or breast cancer was referred to the respective hypertension, diabetes or breast clinic at the Government General Hospital (RGGGH) for further management. The 175 women with positive tests on cervical cancer screening were referred to the colposcopy cervical cancer screening clinic in the same tertiary centre for further management. Expressed as proportions of those screened for each disease, 8.7% of women had raised blood pressure (1,216/13,971) and 7.31% had raised blood sugar levels (856/11,708), while 1.21% had positive screening test results for breast cancer (112/9,272) and 2.31% for cervical cancer (175/7,568) (Figure 1).
DISCUSSION
The screening results at the NCD clinic revealed that 8.7% of women had raised blood pressure and 7.31% had raised blood sugar levels, while 1.21% had positive screening test results for breast cancer and 2.31% for cervical cancer. On comparing the numbers of women who had undergone screening with the numbers of women who had positive test results, the authors did not observe any significant variation in the proportion of positive results among those screened.
More attendees were observed in the first six months of the calendar year, from January to June. The authors observed a decline in numbers from the month of July; thereafter, in the second half of the year, the number of women screened each month remained roughly the same. This pattern was comparable for the screening of women for diabetes mellitus and hypertension, in both the monthly and yearly numbers of women screened and the numbers found positive on the screening tests. More studies should be done to identify the causes of this month-wise variation in the numbers of women screened. For these two diseases, attendance was higher from January to June, declined from July up to September, and increased again from October to December (Figure 4).
In the screening for both breast cancer and cervical cancer, the authors observed a decrease in the numbers screened in July, which continued until the month of September and was then followed by a steady increase in the numbers of women screened. For these two cancers, more attendees were observed in the first three months of the calendar year, from January to March; the numbers declined from the month of April, continuing up to September, and then increased from October to December (Figure 5).
Numerous studies conducted across the globe have provided proof that screening policies for detecting diseases such as hypertension and diabetes mellitus have a preventive impact on the morbidity and mortality associated with these diseases. This is why evidence-based international recommendations have been issued for implementing screening policies for detecting diseases such as hypertension and diabetes mellitus.5,6 Early detection of disease and its management play a key role in reducing the associated morbidity and mortality.9,10 The WHO has recommended population-based screening for cervical and breast cancer. Educational model-based interventions promote self-care and create a foundation for improving the acceptance of cancer screening.11-14 Screening policies should take into consideration socio-economic factors, which influence disease causation.15 The introduction of innovative methods for screening large populations should be emphasized, and public awareness of early detection by screening should be encouraged.16,17
CONCLUSION
Of the total of 42,519 women screened for the common non-communicable diseases (hypertension, diabetes mellitus, breast cancer and cervical cancer), 5.55% had positive results on the screening test for at least one of the diseases screened. The screening results at the NCD clinic in the study period revealed that, of the women screened for hypertension, 8.7% had raised blood pressure, and of the women screened for diabetes mellitus, 7.31% had raised blood sugar levels. Of those screened for the respective cancers, 1.21% of women had positive screening test results for breast cancer and 2.31% for cervical cancer. As per the NCD guidelines, women who tested positive on the screening tests for hypertension, diabetes mellitus or breast cancer were referred to the respective hypertension, diabetes or breast clinic at the Government General Hospital (RGGGH), and women with positive tests on cervical cancer screening were referred to the colposcopy cervical cancer screening clinic in the same tertiary centre for further management. | 2020-03-05T11:10:42.945Z | 2020-02-27T00:00:00.000 | {
"year": 2020,
"sha1": "1a2777b48786bd72783e44a9017966bd3fb339d0",
"oa_license": null,
"oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/7902/5326",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "80e83054d9a03ebe7d24c6a636f483d396eb3c9c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3082884 | pes2o/s2orc | v3-fos-license | Holocene sea ice variability driven by wind and polynya efficiency in the Ross Sea
The causes of the recent increase in Antarctic sea ice extent, characterised by large regional contrasts and decadal variations, remain unclear. In the Ross Sea, where such a sea ice increase is reported, 50% of the sea ice is produced within wind-sustained latent-heat polynyas. Combining information from marine diatom records and sea salt sodium and water isotope ice core records, we here document contrasting patterns in sea ice variations between coastal and open sea areas in Western Ross Sea over the current interglacial period. Since about 3600 years before present, an increase in the efficiency of regional latent-heat polynyas resulted in more coastal sea ice, while sea ice extent decreased overall. These past changes coincide with remarkable optima or minima in the abundances of penguins, silverfish and seal remains, confirming the high sensitivity of marine ecosystems to environmental and especially coastal sea ice conditions.
I appreciate the chance to take a third look at the revised version of the manuscript, "Holocene wind and sea ice variability in Western Ross Sea (Antarctica)." The authors have addressed the concerns that I detailed previously, most critically through their discussion of the potential problems with the marine sediment core chronologies, with the addition of both text material and figure 7. I understand the position the authors are in, that of course, more could be done to potentially improve/re-assess the chronologies with advanced technologies such as ramped pyrolysis -but that at some point we all need to move forward with what we have. The authors make a solid case and demonstrate that their interpretations are sound even with uncertainties in dating. The figures are improved with regard to clarity, and the scope of the paper, with a focus on the Ross Sea, is appropriate. In terms of broad scale interest, this study is an excellent example of working with marine and terrestrially-based paleoclimate records to understand regional history, as well as of interest in terms of its application to the larger ecosystem, through the macrofaunal records. I have no additional suggestions for changes except for a few very minor mistakes (spelling and grammatical) that are present, which will be caught by a copy editor.
Reviewer #2 (Remarks to the Author): Mezgec and co-authors compare sediment core records (proxies for sea ice), faunal remains (dates of colonization by seals and penguins), and ice core records (climate proxies) to ascertain the role of past sea ice changes on regional ecology in the Ross Sea, Antarctica. Linking these various records is important because, as the authors state, sea ice presence and polynya efficiency carry importance for global climate feedbacks. Implementation of sea ice factories efficiency in ocean-atmosphere models is a main recommendation of the paper. I found the use of these three types of records in tandem very interesting, however it is very difficult to link them together. The primary issue is the chronology of the sediment cores. The authors point out that no further analyses are possible (indeed they have done a lot of work), but that there are better dating methods now available. They discuss their dates in the response to reviewers, and ascertain that the cores are reliable to about 650 y precision on the dates (slightly higher than what is currently listed in the methods section of the manuscript, 200-500y). But central in their conclusion about the ages is an assumption that Dead Carbon (DC, carbon that is pre-aged, or even free of 14C) is constant downcore. If there is any region where this may be true, it is the Ross Sea with very high productivity and larger distances to the continent. However, previous work from the Antarctic Peninsula shows that, even when lithology and basic geochemistry (C:N, d13C, etc) are constant, the DC contribution can be quite variable. This could still impart unaccounted-for uncertainty to the sediment core ages, making them larger. The authors and editors have to wonder - does this story change if the ages of the sediment cores are more uncertain than depicted in the manuscript? If uncertainty in the ages increases, at what point do the authors' conclusions become questionable? Does it happen before or after a threshold of likely uncertainty is crossed? This may be the last sort of exercise or calculation that the authors could do in lieu of more age measurements. A more minor critique of this manuscript is the use of Lagrangian modeling. It is, on the surface, an approach that seems to be on the cutting edge. However, it is a model. It may be useful, but shouldn't be deemed correct. The authors use this model to suggest the main pathways of air traveling over the sites given modern (since 1979) sea ice and satellite observations. A more interesting use of this model would be to feed it different hypothetical sea ice geometries as suggested in lines 169-172 to see if there is a significant feedback to wind patterns. In doing so, the authors would constrain whether their approach is good or not, and they would address one of their most important points (that sea-ice factories efficiency should be fed into coupled climate change models). In doing so, though, there would be a risk that they were correct in using modern wind trajectories to ascertain past patterns during sea-ice shifts, but they would have to argue against the importance of improving parameterization of such sea ice efficiency feedbacks into models. I appreciated this manuscript despite the flaws in the marine sediment core chronology and the use of the air trajectory model. I think more effort should go into linearly explaining each of the three types of records and the use of the model to make the main message more poignant.
Otherwise, the average reader (as well as the expert) needs to skip around between the various sections in the currently-formatted manuscript to make sense of the authors' conclusions and discussion. This is not desirable. A case in point is the well-summarized response to reviewers. This should be part of the main manuscript. It makes some pertinent arguments, but also spells out the issues with the chronology. Without this in the published article, those arguments are lost from the readership and the paper becomes less important (or even controversial). In sum, I don't think that compound specific or Ramped PyrOx will solve all of the authors' issues, but it certainly may clear some issues up. Even just another bulk age from the same core depth as the mollusc shell would have helped test their core-top assumptions. It is a fascinating study, but Nature Communications may not be the right venue to fully discuss all of the intricacies in linking these records.
The reviewer comments are in red, while our answers are in black.
Reviewer #1 (Remarks to the Author):
I appreciate the chance to take a third look at the revised version of the manuscript, "Holocene wind and sea ice variability in Western Ross Sea (Antarctica)." The authors have addressed the concerns that I detailed previously, most critically through their discussion of the potential problems with the marine sediment core chronologies, with the addition of both text material and figure 7. I understand the position the authors are in, that of course, more could be done to potentially improve/re-assess the chronologies with advanced technologies such as ramped pyrolysis -but that at some point we all need to move forward with what we have. The authors make a solid case and demonstrate that their interpretations are sound even with uncertainties in dating.
We thank the reviewer for the time spent on our paper, which was greatly improved by his/her comments. We expanded the chronology section in the Methods and explain in Supplementary Information Note 1 all the caveats of the 14C chronology in the Ross Sea, together with a sensitivity test.
The figures are improved with regard to clarity, and the scope of the paper, with a focus on the Ross Sea, is appropriate. In terms of broad scale interest, this study is an excellent example of working with marine and terrestrially-based paleoclimate records to understand regional history, as well as of interest in terms of its application to the larger ecosystem, through the macrofaunal records. I have no additional suggestions for changes except for a few very minor mistakes (spelling and grammatical) that are present, which will be caught by a copy editor.
We thank reviewer 1 for his/her appreciation. We nonetheless further improved the clarity of the figures and rearranged them according to the new structure of the paper requested by reviewer 2.
Reviewer #2 (Remarks to the Author):
Mezgec and co-authors compare sediment core records (proxies for sea ice), faunal remains (dates of colonization by seals and penguins), and ice core records (climate proxies) to ascertain the role of past sea ice changes on regional ecology in the Ross Sea, Antarctica. Linking these various records is important because, as the authors state, sea ice presence and polynya efficiency carry importance for global climate feedbacks. Implementation of sea ice factories efficiency in ocean-atmosphere models is a main recommendation of the paper. I found the use of these three types of records in tandem very interesting, however it is very difficult to link them together. The primary issue is the chronology of the sediment cores. The authors point out that no further analyses are possible (indeed they have done a lot of work), but that there are better dating methods now available.
Although we are aware that better methods are now available, we cannot perform further analyses. Note, though, that even these new dating techniques have their own flaws. In the Supplementary Information, we added a section detailing how robust the marine core chronologies are, along with some arguments on the benefits and flaws of new dating techniques. Indeed, it has been observed that 14C of fatty acids can also be altered by old carbon, as dates as old as 22 ka BP have been measured in the laminated sections above the diamicton at site U1357 off Adélie Land (Ohkouchi, personal communication).
They discuss their dates in the response to reviewers, and ascertain that the cores are reliable to about 650 y precision on the dates (slightly higher than what is currently listed in the methods section of the manuscript, 200-500y).
We better explained all the caveats related to 14C dating and its uncertainties in the Ross Sea, moving what was reported in the previous response letter to the Chronology section in the Methods and to Note 1 of the Supplementary Information.

But central in their conclusion about the ages is an assumption that Dead Carbon (DC, carbon that is pre-aged, or even free of 14C) is constant downcore. If there is any region where this may be true, it is the Ross Sea with very high productivity and larger distances to the continent. However, previous work from the Antarctic Peninsula shows that, even when lithology and basic geochemistry (C:N, d13C, etc) are constant, the DC contribution can be quite variable. This could still impart unaccounted-for uncertainty to the sediment core ages, making them larger.
As reviewer 2 stated, the very high productivity in the Ross Sea may reduce DC variations. Although limited in number, previous studies suggest a rather constant offset between AIOM and CaCO3 ages (~400 years) in the Ross Sea (Andrews et al., 1999) and in the similarly productive Adélie Land region (Costa et al., 2007; Dunbar, personal communication). The Antarctic Peninsula is a very different environment, with larger ice-free areas (Lee et al., 2017) and much more contorted coasts that indeed increase the possibility of variable DC input to the marine environment in response to deglaciation. Although the Ross Sea deglaciation may have impacted early Holocene ages at the JB site, the ice sheet was well to the south at 7 ka BP, when OSIZ and CSIZ had already diverged. Additionally, even a couple of hundred years of DC change will not alter our interpretations, as shown below by the sensitivity tests (see next paragraph).
The authors and editors have to wonder - does this story change if the ages of the sediment cores are more uncertain than depicted in the manuscript? If uncertainty in the ages increases, at what point do the authors' conclusions become questionable? Does it happen before or after a threshold of likely uncertainty is crossed? This may be the last sort of exercise or calculation that the authors could do in lieu of more age measurements.
We added to the Supplementary Information the figure reported in the previous response letter, showing our sensitivity test in which ±600 years are added to the marine records and ±100 years to the ice core records; it supports our main results even if the cores are moved in opposite temporal directions. A lag correlation analysis, in which the F. curta records are moved forward or backward by 200, 400 and 600 years compared to the TY and TD ssNa records, further demonstrates that our interpretations remain valid at this level of "inaccuracy".
A more minor critique of this manuscript is the use of Lagrangian modeling. It is, on the surface, an approach that seems to be on the cutting edge. However, it is a model. It may be useful, but shouldn't be deemed correct. The authors use this model to suggest the main pathways of air traveling over the sites given modern (since 1979) sea ice and satellite observations. A more interesting use of this model would be to feed it different hypothetical sea ice geometries as suggested in lines 169-172 to see if there is a significant feedback to wind patterns. In doing so, the authors would constrain whether their approach is good or not, and they would address one of their most important points (that sea-ice factories efficiency should be fed into coupled climate change models). In doing so, though, there would be a risk that they were correct in using modern wind trajectories to ascertain past patterns during sea-ice shifts, but they would have to argue against the importance of improving parameterization of such sea ice efficiency feedbacks into models.
The text in the Methods section has been improved to constrain the quality of the approach, whereas it is not possible to implement the parameterisation proposed by the reviewer in the "model" employed for our analysis. The Lagrangian modeling used in the manuscript is the HYSPLIT 4 model initialised with the ERA-Interim meteorological gridded dataset. HYSPLIT is a complete system for computing simple air parcel trajectories, as well as complex transport, dispersion, chemical transformation, and deposition simulations, and is one of the most extensively used atmospheric transport and dispersion models in the atmospheric sciences community. ERA-Interim is a climate reanalysis that gives a numerical description of the recent climate (for Antarctica, since 1979), produced by combining models with observations using a data assimilation system. It contains estimates of atmospheric parameters (temperature, air pressure, wind, etc.) and sea ice observations coming from satellites (e.g., microwave). The sea ice extent is not parameterised but is provided (as is the concentration) from satellite image analysis. Therefore, our analysis of back-trajectories using ERA-Interim does not permit parameterisation of the sea ice extent/concentration and the related feedbacks with meteorological conditions (e.g., wind), because these values come from the assimilation of the satellite data into the model. To answer the reviewer's request concerning the possibility of changing the polynya geometries in the model, coupled climate models (e.g., LOVECLIM, IPSL-CMIP5, etc.) would have to be used; however, these models have a spatial resolution two or three times coarser (3°x3° in lat/lon, i.e., thousands of km) than the ERA-Interim reanalysis (1°x1°). Current CGCMs are not able to spatially resolve the polynya areas (hundreds of km), nor the glacier valleys (tens of km) that channelize the katabatic wind. Lastly, several authors have already pointed out the close correlation between katabatic wind and polynya "efficiency" in sea ice and HSSW production (Zwally et al. 1985; Pease 1987; Adolphs and Wendler 1995; Markus and Burns 1995; Massom et al. 1998; Comiso et al. 2011; Drucker et al. 2011). Moreover, Gallée (1997), using a model, pointed out a strong positive feedback between the katabatic wind system and the latent heat flux polynya, which reinforces the katabatic wind coming off the ice sheet. The improvement of these CGCMs to resolve sea ice production/dynamics is beyond the scope of the manuscript.

I appreciated this manuscript despite the flaws in the marine sediment core chronology and the use of the air trajectory model. I think more effort should go into linearly explaining each of the three types of records and the use of the model to make the main message more poignant. Otherwise, the average reader (as well as the expert) needs to skip around between the various sections in the currently-formatted manuscript to make sense of the authors' conclusions and discussion. This is not desirable.
We significantly reorganized the Results and Discussion sections to make the flow of the reading more linear.
A case in point is the well-summarized response to reviewers. This should be part of the main manuscript. It makes some pertinent arguments, but also spells out the issues with the chronology. Without this in the published article, those arguments are lost from the readership and the paper becomes less important (or even controversial).
We moved parts of the previous response letter to the main text, to the Chronology section, as well as to the Supplementary Information. We now believe that the readership will have all the necessary information in hand.
In sum, I don't think that compound specific or Ramped PyrOx will solve all of the authors' issues, but it certainly may clear some issues up. Even just another bulk age from the same core depth as the mollusc shell would have helped test their core-top assumptions. It is a fascinating study, but Nature Communications may not be the right venue to fully discuss all of the intricacies in linking these records.
We did our best to restructure the manuscript in the Nature Communications style, and we believe that the reading of this new revised version is now more linear and poignant, making the manuscript a fascinating study. | 2018-04-03T02:49:23.484Z | 2017-11-06T00:00:00.000 | {
"year": 2017,
"sha1": "29bad0b04a73d2ae2cf8c96c60249de2fef676a1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-017-01455-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "440ae2af61a333ec53bbb8fc2b112072f3dbd72e",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Medicine",
"Geology"
]
} |
222384947 | pes2o/s2orc | v3-fos-license | Federated Ensemble Regression Using Classification
Ensemble learning has been shown to significantly improve predictive accuracy in a variety of machine learning problems. For a given predictive task, the goal of ensemble learning is to improve predictive accuracy by combining the predictive power of multiple models. In this paper, we present an ensemble learning algorithm for regression problems which leverages the distribution of the samples in a learning set to achieve improved performance. We apply the proposed algorithm to a problem in precision medicine where the goal is to predict drug perturbation effects on genes in cancer cell lines. The proposed approach significantly outperforms the base case.
Introduction
In a standard regression setting, one builds a model on pre-existing learning data with the goal of making predictions on future unseen samples. In this case, a single model is built using a preferred learning algorithm. However, it has been demonstrated that one can improve predictive accuracy even further by aggregating the predictive power of multiple models built using the same learning data [14]. These models can be built in a variety of ways, from varying the attributes used in building the models to using multiple learning algorithms. This is done to ensure heterogeneity in the models, such that given a set of new samples, they are all wrong in different ways and their aggregation leads to improved predictions [5]. In the typical use of a single model or an ensemble, the distribution of the continuous response one is interested in predicting is often not given much thought. For example, suppose the response for the samples in a dataset follows a normal distribution. It then follows that any model naively built using this data is going to be very good at predicting samples that are near the centre of the distribution, but not those at the tails. A close analogy to this phenomenon is the class imbalance problem in a classification setting, where, given a dataset in which one class is over-represented, models built on this dataset using machine learning algorithms typically perform poorly when presented with a sample from the under-represented class [2,15]. Therefore, we hypothesised that predictive performance in a regression setting can be improved by accounting for the distribution of the response.
We take an ensemble learning approach to solving this problem. First, we split the learning data into a pre-specified number of bins using a known discretization technique [9]. We then build a regressor for each bin using only the samples that belong to that bin, each of which generalises on only a restricted portion of the distribution. We then build a classifier for each bin, treating the samples which belong to said bin as the positive samples, and the samples in the other bins as the negative samples. Therefore, there is a classifier-regressor pair for each bin. Given an unseen sample, real-valued predictions are made using the regressor for each bin. The corresponding classifier for each regressor is then used to predict the probability that the unseen sample is similar to the samples used in building the regressor. The predictions are then aggregated by weighting the probabilities and applying them to the predictions. This process is described diagrammatically in Fig. 1. This approach is valuable to problems in precision medicine, where tail case prediction is of vital importance. An example of such a problem is the prediction of drug perturbation effects on genes in cancer cell lines, which, with improved predictive accuracy, has the potential to dramatically improve the rate at which new cancer drugs are developed. In our evaluation, we used data from the library of integrated network-based cellular signatures (LINCS) [16], which curates the drug perturbation effects on human genes. Our evaluation shows a significant improvement in performance over the base case. Our contributions are as follows:
1. An ensemble learning approach which considers response distribution for regression problems.
2. An application to a real-world dataset in precision medicine.
Related Work
Ensemble learning takes a variety of forms, from bootstrap aggregating (bagging), which is central to popular and robust learning algorithms like random forests [4], to methods like stacking [3]. The proposed approach shares some similarities with both of these methods. Stacking is most commonly used when one intends to aggregate the predictions made by multiple learning algorithms or, if a single learning algorithm is used, multiple models built using subsets of the feature space [18]. There are three main processes in a stacking procedure: meta-feature generation, pruning, and aggregation [17]. Assume one has a learning and a test set. In the meta-feature generation phase, meta-features are generated for both the learning and test sets, their number equalling the number of models whose predictive power one wants to aggregate. Pruning is then used to select the best meta-features. Finally, aggregation is done by learning weights using the learning set meta-features and then applying these weights to the test meta-features to form the final prediction.
In contrast to the typical stacking approach we have described, we do not generate meta-features in our approach. The utility of the meta-features is that they provide a mechanism through which aggregating weights can be learned using a meta-level learning algorithm. Instead, we opt for a scheme where, given a new sample, individual classifiers predict how much we can trust the predictions of their corresponding regressor, as described in the introduction; this is more closely aligned with the concept of local classifiers in the hierarchical classification literature [20]. This implies that we also do not perform a pruning step. It is worth noting that while aggregation in stacking can be performed using weights learned with a meta-learner, it is also possible to simply average the predictions; we explore this in our evaluation. Other similarities exist. For example, one can argue that our weighting and aggregating procedure is a form of dynamic weighting, where new samples are weighted based on their similarities to samples used in building a model [19]. However, rather than being a separate step, dynamic weighting is implicit in the proposed learning procedure.
Central to the proposed method is the discretization of the continuous response one is interested in learning how to predict. Several methods to perform this task have been proposed, and they have been classed into supervised and unsupervised methods [7,9]. We considered only unsupervised methods in our evaluation. However, the use of supervised methods will be explored in future work. Methods which use classification as a means to perform regression in an ensemble setting have also been proposed. Ahmad et al. proposed the use of extreme randomized discretization to perform regression via classification [12].
In contrast to what we propose, the authors do not use a classifier-regressor pair to estimate the prediction for a new sample. Rather, they do this using the minimum or maximum of the training data points and the bin boundary [1,12]. Also closely related to what we propose is work by Gonzalez et al. for problems that involve multi-variate spatio-temporal data [11]. The main differences in our approaches are two-fold. Firstly, they are interested in classifying bands of attributes before performing regression. Secondly, aggregation is done by first selecting the best models using leave-one-out cross-validation, and the median of the values predicted by these models is treated as the final prediction for a new sample.
Algorithm
The proposed approach can be split into a training and a prediction phase. An informal description follows; a more formal representation is given in Algorithm 1. In the training phase, given a training set with input vectors, a response, and a pre-specified number of bins c:
1. Discretize the response into c bins, forming c datasets.
2. For each bin c, build a regressor Rc and a classifier Cc. The regressor is built using the training samples for the particular bin, whereas the classifier is built by treating the samples in the current bin as the positive class and all other samples in the training set as the negative class.
In the prediction phase, given a new sample:
1. With all the Rc regressors, predict values for the new sample.
2. With all the Cc classifiers, predict the probability that the sample belongs in that bin c.
3. Generate weights from the c probabilities such that they sum to 1. This is done by summing the c probabilities and then dividing each probability by this sum.
4. Get the final prediction by summing the values obtained by applying each bin's weight to the prediction made by its regressor in step (1).
Considerations
When tackling a machine learning problem, the choice of learning algorithm is vital, as it plays a crucial role in predictive performance. However, it is clear from the description of the proposed approach outlined above that it is learner agnostic. That is, one can choose to build the classifiers and regressors using the algorithm of their choice. This property is particularly useful, as one can choose to optimise for different properties using approaches from multiple kernel learning [22], or even stack multiple learning algorithms. The choice of discretization technique is also open-ended: one can use known supervised or unsupervised discretization techniques, or a custom technique tailored to a particular problem. When the number of bins is greater than two, it will generally be the case that there is some form of class imbalance. This may be in favour of the positive or the negative class, and can be quite severe, depending on the distribution of the response variable under consideration, the choice of discretization technique, and the number of specified bins. Therefore, it is important that this be taken into consideration, as it is known that class imbalance can have significant effects on predictive accuracy [2]. To combat this, methods which balance an imbalanced dataset, such as oversampling methods like the synthetic minority oversampling technique (SMOTE) [6], should be considered; a sketch of this step is given below. We explore the effects of discretization technique and class imbalance in our evaluation.
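For illustration, the oversampling variant of the per-bin class balancing might be written as follows in R, using smotefamily (the package used in our evaluation, with K = 5). The helper name balance_bin and its inputs are our own, and the exact components of the object returned by SMOTE should be checked against the package documentation.

library(smotefamily)

# Oversample the minority class of one bin's classification dataset before
# training its classifier; X holds the input vectors and in_bin flags the
# positive (current-bin) samples.
balance_bin <- function(X, in_bin) {
  out <- SMOTE(X = as.data.frame(X),
               target = ifelse(in_bin, "pos", "neg"),
               K = 5)
  # out$data holds the original rows plus the synthetic minority rows,
  # with the class labels in its final "class" column.
  list(x = out$data[, names(out$data) != "class", drop = FALSE],
       y = factor(out$data$class))
}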
Algorithm 1. Federated Ensemble Learning using Classification
Input: Training set matrix L ∈ IR^(m×b), response vector y, c bins, and test set matrix T ∈ IR^(n×b)
Output: Test set predictions
Training:
1: Split y into c bins using a discretization technique of choice, producing L = (L1, ..., Lc) and Y = (y1, ..., yc)
2: for each bin c in L and Y do
3:   Build a regressor Rc using Lc and yc
4:   Build a classifier Cc using Lc as the positive samples and L − Lc as the negative samples. Note: class balancing may be required
5: end for
Prediction:
6: for each regressor-classifier pair Rc and Cc do
7:   Predict the response for T using Rc
8:   Predict the probability that the samples in T belong in bin c using Cc
9: end for
The process above generates the predicted response and probability matrices R, P ∈ IR^(n×c)
10: v_n = Σ_{j=1..c} p_{n,j}
11: Create the weight matrix W ∈ IR^(n×c) by dividing all elements in each row of P by the value at the corresponding row index of v
12: Create the weighted response matrix R^w ∈ IR^(n×c) by the element-wise multiplication of R and W
13: The final prediction for sample n is Σ_{j=1..c} r^w_{n,j}
14: return the vector of final test set predictions
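To make the procedure concrete, a minimal R sketch of Algorithm 1 follows. It is illustrative rather than the released evaluation code: it uses ranger (as in our evaluation) for both the regressors and the probability classifiers, hard-codes the naive even-split discretizer, and omits the class balancing noted in step 4; the names fit_feruc, predict_feruc, train_x, train_y, test_x and n_bins are our own.

library(ranger)

fit_feruc <- function(train_x, train_y, n_bins) {
  # Even-split discretization: rank the responses, then cut the ranks
  # into n_bins equally sized groups (step 1).
  bin_id <- cut(rank(train_y, ties.method = "first"),
                breaks = n_bins, labels = FALSE)
  lapply(seq_len(n_bins), function(b) {
    in_bin <- bin_id == b
    reg_df <- data.frame(train_x[in_bin, , drop = FALSE], y = train_y[in_bin])
    clf_df <- data.frame(train_x, pos = factor(in_bin))
    list(
      # Regressor built only on the samples in bin b (step 3)
      regressor  = ranger(y ~ ., data = reg_df, num.trees = 1000),
      # Classifier: bin b samples positive, the rest negative (step 4);
      # class balancing (e.g., SMOTE) is omitted here for brevity
      classifier = ranger(pos ~ ., data = clf_df, num.trees = 1000,
                          probability = TRUE)
    )
  })
}

predict_feruc <- function(pairs, test_x) {
  test_df <- data.frame(test_x)
  # n x c matrices of per-bin predictions R and bin probabilities P (steps 6-9)
  R <- sapply(pairs, function(p) predict(p$regressor, data = test_df)$predictions)
  P <- sapply(pairs, function(p)
    predict(p$classifier, data = test_df)$predictions[, "TRUE"])
  W <- P / rowSums(P)  # row-normalise probabilities into weights (steps 10-11)
  rowSums(R * W)       # weighted sum gives the final predictions (steps 12-13)
}

Calling predict_feruc(fit_feruc(train_x, train_y, 5), test_x) then yields the weighted ensemble predictions for the test set.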
Evaluation Setup
We used data from the general LINCS Phase II dataset with accession code GSE70138. We had 7,000 training samples and 3,000 test samples. The predictive task is to predict the expression levels of 20 cancer-related genes [8,10] using perturbation conditions as input. We evaluated four bin sizes: 2, 3, 4, and 5. We also considered four discretization methods. The first involves randomly assigning samples to bins, the second involves splitting samples evenly into bins after sorting, and the third and fourth are equal frequency intervals and k-means clustering. It is worth noting that even splitting and equal frequency intervals are alike in that they discretize a vector of continuous values evenly given a specified size; they differ in that equal frequency does not achieve perfectly equally sized groups if there are duplicates, whereas naive even splitting does. For aggregation methods, we considered simple averaging, a case in which no classifiers are used in aggregation. For the cases in which classifiers are involved, we considered one in which class imbalance is ignored, which we refer to simply as imbalanced for the rest of the manuscript. The other classifier approaches used are undersampling, in which the number of samples in one class is reduced when it outweighs the other, and oversampling, which is the reverse. Undersampling was performed by randomly selecting samples from the over-represented class equal in number to those of the under-represented class. Oversampling was performed using SMOTE with the smotefamily package [21], where k = 5. We used random forests as our learning algorithm. All models were built using 1000 trees and default settings with the ranger [24] library in R [13]. The reported performance metric for regression is the coefficient of determination (R²), as we are interested in the amount of the observed variance explained by the ensemble. We also report the performance of the classifying aggregators; for these we report accuracy, precision, recall and the F1 score. The dataset used in our experiment is available at http://dx.doi.org/10.17632/8mgyb6dyxv.2 (it is named base fp), and the code is available at https://www.github.com/oghenejokpeme/FERUC.
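Returning to the four discretization methods described above, each can be sketched in a few lines of R; the function and argument names are our own, and note that with duplicated response values the quantile breaks used for equal frequency intervals may coincide, mirroring the caveat stated earlier.

# Assign each response value to one of n_bins bins; returns integer bin ids.
discretize <- function(y, n_bins, method) {
  switch(method,
    # Random assignment: recycle bin ids over the samples, then shuffle
    random = sample(rep_len(seq_len(n_bins), length(y))),
    # Even split: sort via ranks, then cut into equally sized groups
    even   = cut(rank(y, ties.method = "first"),
                 breaks = n_bins, labels = FALSE),
    # Equal frequency intervals via quantile breaks (duplicates in y can
    # produce coinciding breaks, so groups may not be perfectly equal)
    freq   = cut(y,
                 breaks = unique(quantile(y, seq(0, 1, length.out = n_bins + 1))),
                 labels = FALSE, include.lowest = TRUE),
    # k-means clustering on the one-dimensional response
    kmeans = kmeans(y, centers = n_bins)$cluster
  )
}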
Overall Performance
We observed that, on average, multiple combinations of the considered discretizer-aggregator pairs generally outperformed the base case (see Table 1). Certain discretizer-aggregator pairs tended to consistently perform well or poorly. Even split and frequency interval combined with oversampling outperformed all other combination pairs, whereas k-means combined with averaging or undersampling generally underperformed compared with the others (see Table 2). When paired with oversampling, the even split and frequency interval discretizers both achieved an average percentage performance increase of approximately 100% over the base case. Combined, both of these methods performed best when the number of bins is set to 5 (Table 3). Under the null hypothesis that there is no difference in performance between these two combinations and the base case, paired t-tests suggest that the null hypothesis can be rejected at a significance level of 0.01, with p-values of 2.6×10⁻⁸ and 9.7×10⁻⁹, respectively. Note that the average percentage performance difference between two competing approaches is calculated by estimating the percentage difference in performance for each gene pair and then taking the mean.
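As a brief illustration of these comparison statistics, under one plausible reading of the percentage difference, and given per-gene R² vectors r2_method and r2_base (both illustrative names), the calculation is:

# Per-gene percentage performance difference relative to the base case,
# then its mean, and a paired t-test on the per-gene R^2 values.
pct_diff <- 100 * (r2_method - r2_base) / r2_base
mean(pct_diff)                                  # average percentage difference
t.test(r2_method, r2_base, paired = TRUE)$p.value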
Discretizer Effects
Discretization is the first step in the proposed learning algorithm, and the method by which we stratify the distribution of the response one might be interested in predicting into narrow bins (Algorithm 1). It is clear from Fig. 2 that the choice of discretizer plays a crucial role in predictive performance. When averaging is used as the aggregator, random sampling outperforms all other discretizers, with even split and frequency interval performing equally well. This is interesting, as it shows that without the aggregating classifiers, the regressors built using methods like frequency interval perform worse than those built using random sampling. The reason is that when the response is put into bins using random sampling, the values in each of these bins will generally follow the same distribution as the overall response. Therefore, aggregating the predictions made by regressors built using these bins by averaging will generally yield good results. This is in contrast to when methods like even split or frequency interval are used, as each bin comprises a narrow, generally non-intersecting band of the overall distribution. It is worth noting, though, that k-means performs remarkably poorly, producing negative R² values, suggesting that it fits worse than the horizontal line. One might be quick to note that this is one of the disadvantages of using R² as a performance metric in a regression problem when there is the potential for non-linearity. However, we would argue that for this particular application, it is vital that we have a clear representation of how much of the observed variance is explained by the proposed ensemble.
When class imbalance is ignored, as is the case with the imbalanced aggregators, we observed that even split and frequency interval have near-identical performance, with k-means and random sampling coming third and fourth depending on bin size. When undersampling is used to balance the dataset before building the classifying aggregators, we observed that as the number of bins increases, random sampling tended to outperform the even split and frequency interval discretizers. This is because as bin size increases, the number of samples in each bin decreases, and by undersampling, the classifying aggregators are built using fewer and fewer samples, making them less powerful. The performance of the random sampling discretizer does not suffer as much from this because its regressors are built using bins which generally represent the overall distribution of the response. We discuss this further when we discuss aggregator effects in the next section.
The performance of the discretizers when oversampling is used to handle class imbalance both supports and contrasts with their performance when undersampling is used. We observed that the even split and frequency interval discretizers generally perform vastly better than they do when undersampling is used to deal with class imbalance. In contrast to undersampling, the classifying aggregators are built using datasets in which the positive class has been oversampled, improving the models which classify new samples into bins. Given that the overall distribution of the response is represented in each bin when random sampling is used, building accurate bin-delineating classifiers becomes more difficult, as the samples in the positive and negative classes are very much alike. However, the expectation is that these classifiers will essentially predict that a new sample belongs in their bin, and produce a probability based on how closely related it is to the positive samples used in their construction. Therefore, for the random sampling discretizer, one would expect better performance when undersampling is used, which is what we observed (see Fig. 2).
Fig. 2. Average discretizer performance (R²) for the considered bin sizes across the considered aggregation approaches. Frequency interval is excluded from the averaging and undersampling aggregation results because it consistently produced negative R² values.
Aggregator Effects
In the previous section, we discussed the effects the choice of discretizer can have on predictive performance. Although the discretizers were our main focus, it is clear that there is a synergistic effect between the choice of discretizer and aggregator. Figure 2 also shows that the choice of aggregator has a clear effect on predictive performance, with averaging performing worse overall, oversampling outperforming all the others, and undersampling generally performing worse than imbalanced. Here, our primary focus is to discuss why this is the case, especially as it relates to the classifying aggregators. Table 4 shows the average predictive performance (accuracy, precision, recall, and F1 score) for all discretizer-aggregator pairs and for all bin sizes we considered. These results explain the observed predictive performance discussed in the previous two sections. Although the accuracy of the classifying models is also reported, our discussion will be mostly centered around the precision and recall metrics, given that we are dealing with input datasets which may be class imbalanced.

Table 4. Average predictive performance of the aggregating classifiers built using datasets whose class representations are imbalanced (RG), undersampled (US), and oversampled (OS) for the considered discretizers. The reported performance metrics are accuracy (Acc), precision (Prec), recall (Rec), and F1 score.

When random sampling is the discretizer, we observed that across all bin sizes, the recall of all the classifying aggregators is exactly 1. This is consistent with the previously discussed results: it shows that the classifiers are classifying all the test samples as being similar to those used in their building. This is unsurprising, since the samples used in building each bin's classifier follow the same distribution as the original response vector. The precision of the aggregating methods is more nuanced. In the case in which class imbalance is ignored, although the recall maintains its value of 1 as bin size increases, the precision steadily decreases. This makes sense, as the expectation is that the models will consistently become worse at identifying false positives. The results for undersampling and oversampling are contrasting. While recall is also consistently 1 as bin size increases, the precision for undersampling stays at approximately 50%, while, like the class-imbalanced case, the precision for oversampling steadily declines. This is also consistent with expectation. In the case of undersampling, we are building binary bin classifiers using a perfect 50-50 split in class representation, but with fewer samples as bin size increases; it is no surprise that accuracy is also approximately 50%. For oversampling, accuracy and precision both hold at 50% when the bin size is 2, but steadily decline as it increases. Here, we argue that oversampling the samples in the minority class, which is usually the positive class, makes the classifiers even worse at predicting false positives. This is to be expected, given the properties of the random sampling discretizer.
For the even split and equal frequency discretizers, all three aggregators have an average value of 58% for accuracy, precision, recall, and F1 when the bin size is 2. This suggests that the models are capable of classifying positive and negative samples equally well. However, this changes as bin size increases. When the input dataset is imbalanced, we observed that both precision and recall steadily decrease, with precision getting remarkably worse off than recall. The explanation for this is that the class imbalance is exacerbated by the increasing bin size, with fewer samples in each bin, making it harder for the models to identify false positives. When undersampling is used, precision generally remains the same as bin size increases, but recall decreases. This shows that while the classifiers' false positive prediction rate does not get significantly worse, the number of false negative predictions increases. This phenomenon can be easily explained by the fact that, as bin size increases, fewer samples in general are used in building the classifiers. For oversampling, what we observe for recall and precision contrasts with undersampling. Though they both decrease as bin size increases, recall is better than precision. Compared with the imbalanced case, although the recall values are similar, the precision in the oversampling case is generally better, especially as bin size increases. This explains why oversampling outperforms the imbalanced and undersampling cases. The difference in performance between even split and frequency interval, as seen in Table 2, can be explained by a slight increase in precision and a slight decrease in recall for even split compared with frequency interval (see Table 4). Therefore, it is worth noting that for the proposed approach, seemingly duplicate values should not be excluded during discretization.
For k-means, the precision and recall values are generally similar to those of even split and frequency interval for the considered bin sizes, with the exception of when the bin size is 2. However, even with this similarity, we still observed that the undersampling aggregator performed remarkably poorly when paired with k-means (see Tables 1 and 2). Our analysis of the results showed that this is because of a known limitation of k-means discretization, which is that it is very sensitive to outliers [7], which we expect to lie mainly at the tails of the response distribution. Individual investigation of classifier performance for each gene showed that the models built using samples at the tails have either very low precision and very high recall, or the opposite. This is in contrast to the other discretizers, for which the precision-recall ratio is better balanced. This is evident from the difference in F1 scores between the different discretizers across the considered bin sizes (see Table 4). Figure 3 shows how aggregator predictive performance changes as bin size increases for the considered discretizers. For the random discretizer, most aggregators tend to steadily improve as bin size increases; the exception is undersampling, which peaks at a bin size of 4. For even split and frequency interval, the four aggregators behave similarly, as expected. The imbalanced and oversampling aggregators get better as bin size increases, with the imbalanced aggregator doing so at a slower rate. Averaging gets worse as bin size increases, as discussed in previous sections. Lastly, the undersampled aggregators reach peak performance at a bin size of 3 and then begin to decline. When the k-means discretizer is paired with averaging and undersampling, we see a performance decrease from bin size 2 to 3, then a steady increase from 3 to 5. However, as noted in the previous two sections, the performance is still remarkably poor. For oversampling, predictive performance sees a slow decline as bin size increases, whereas the imbalanced aggregator tends to hold its performance. From these results, it is clear that bin size also plays a crucial role in the performance of the proposed ensemble regression approach. However, to what extent this is the case is beyond the scope of this work and will be the subject of future work.
Discussion
An important task in the machine learning model building process is the selection of the right parameters. Our results show that the choice of bin size, discretizer, and aggregator all play an important role in predictive performance. Although we do not directly evaluate it here, we argue that these parameters can be easily optimised using the standard model selection approach with cross-validation. Assuming a near-optimal bin size has been selected, the proposed ensemble learning algorithm is limited by the fact that it can only do as well as its classifier-regressor pairs. Although we used only random forests in our evaluation, which is capable of building both classifiers and regressors, one can choose to use one learning algorithm for the classifiers and another for the regressors. In fact, it is possible to extend what we have proposed using traditional stacking, where multiple learning algorithms are used as classifiers and regressors. Of course, this will come with an increased cost in computational time. Another obvious extension is to multi-target regression problems. For example, one can imagine using this as the core predictor in an ensemble of regressor chains [23]. All of this, along with evaluations on other datasets, will be the subject of future work.
Conclusion
We have presented an ensemble learning algorithm for regression using classification which leverages the underlying distribution of the response one is interested in predicting. We evaluated this approach on an important problem in precision medicine, which is the in silico estimation of drug perturbation effects on genes in cancer cell lines. We found that this approach significantly outperforms the base case, with several directions for extension which we conjecture will further improve its predictive capabilities. | 2020-10-16T05:04:42.121Z | 2020-09-19T00:00:00.000 | {
"year": 2020,
"sha1": "02d8bed10b2b53d09cf89290221945b61df0b619",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-61527-7_22.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "02d8bed10b2b53d09cf89290221945b61df0b619",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
267411086 | pes2o/s2orc | v3-fos-license | A detailed dissection of the expression, localization, structure, and diagnostic potential of cyst wall proteins of the eye pathogen Acanthamoeba
The cyst wall of the eye pathogen Acanthamoeba castellanii contains cellulose and chitin and has ectocyst and endocyst layers connected by conical ostioles. Previously, we used mass spectrometry of purified walls to identify an abundant laccase and three families of lectins (Jonah, Luke, and Leo). Here we show that frameshifts in the protein prediction in AmoebaDB, which incorrectly add 12 transmembrane helices, cause Jonah to mislocalize to a ring around ostioles rather than to the ectocyst layer. RT-PCR, double labels with GFP and RFP or mCherry, and promoter swaps show that ectocyst localization does not just correlate with but is caused by earlier expression, while localization in the endocyst layer and ostioles is caused by later expression. A chitin-binding domain from an Entamoeba chitinase shows chitin forms thick fibrils in the ectocyst layer and a honeycomb in the endocyst layer. AlphaFold shows Ac wall proteins originate from bacteria by horizontal gene transfer (β-helical folds of Jonah and three cupredoxin-like domains of the laccase), share common ancestry with wall proteins of slime molds (β-jelly-roll folds of Luke), or are unique to Acanthamoeba (four disulfide knots of Leo). Ala mutations show linear arrays of aromatic amino acids in β-jelly-roll folds of Luke and disulfide knots of Leo are necessary for binding cellulose and proper localization of proteins in the cyst wall. Finally, rabbit antibodies to recombinant Jonah, Luke, Leo, and laccase efficiently detect calcofluor white-labeled cysts of 10 of 11 Acanthamoeba isolates tested, suggesting all four proteins are excellent diagnostic targets. IMPORTANCE Acanthamoebae are free-living amoebae in soil and water that cause Acanthamoeba keratitis in under-resourced countries, where water for washing hands may be scarce. Acanthamoeba is an emerging pathogen in the United States, because of its association with contact lens use. Here we show early expression during encystation causes a Jonah lectin and a laccase to localize to the outer layer of the cyst wall, while later expression causes Luke and Leo lectins to localize to the inner layer and the conical ostioles that connect the layers. We used structural predictions to identify the aromatic amino acids of Luke and Leo necessary for binding cellulose in the wall and to identify domains of Jonah and laccase useful for making recombinant proteins to immunize rabbits. Rabbit antibodies to Jonah, Luke, Leo, and laccase all efficiently detected cysts of ten Acanthamoeba isolates, including five T4 genotypes that cause most keratitis cases.
Introduction
Acanthamoeba keratitis (AK), which leads to scarring and blindness if not successfully treated, is caused by free-living amoebae, named for acanthopods (spikes) on the surface of trophozoites (1)(2)(3). In immunocompromised patients, Acanthamoebae may cause encephalitis, which is frequently fatal (4). AK is associated with corneal trauma in the Middle East, South Asia, and Africa, where water for handwashing is often scarce (5)(6)(7)(8)(9). AK is associated with contact lens use in the US, Europe, and Australia, and so Acanthamoeba is on the NIAID list of Emerging Infectious Diseases/Pathogens (3)(10)(11)(12). Regardless of the place or the cause of infection, 18S RNA sequences show that the T4 genotype most often causes AK, while other genotypes less frequently or never cause AK (13)(14)(15)(16). The whole genome of the Neff strain of Acanthamoeba castellanii (Ac), a T4 genotype that is broadly studied, was sequenced, and proteins were predicted and deposited in AmoebaDB (17,18). Recently, the Neff genome has been re-sequenced with long reads to produce 33 chromosome-size assemblies, while protein prediction has been improved by transcriptomes of trophozoites and organisms encysting for eight hours (19,20).
Cyst walls, which form when trophozoites are starved of nutrients, protect free-living Acanthamoebae from osmotic shock in freshwater or drying in the air (21,22). The cyst wall also makes Acanthamoebae resistant to disinfectants used to clean surfaces, alcohol-based hand sanitizers, sterilizing agents in contact lens solutions, and/or antibiotics applied to the eye (23)(24)(25)(26)(27). While trophozoites cause damage to corneal epithelial cells, AK is most often diagnosed by microscopic identification of cysts in corneal scrapings (1,2,28). Acanthamoeba cysts are detected using calcofluor white (CFW) or wheat germ agglutinin (WGA), which bind to cellulose and chitin, respectively, in their walls (29,30). Because CFW and WGA also react with walls of plants, fungi, and other protists, they are suboptimal reagents that may lead to incorrect diagnosis of AK (31)(32)(33). The translational goal of the present study, therefore, is to identify abundant cyst wall proteins, which might be targets for diagnostic antibodies. Antibodies to the Acanthamoeba wall might also be useful to detect cysts in concentrated water samples, monitor contamination in contact lens solutions, test reagents for cleaning reusable contact lenses, and/or test drugs that inhibit encystation.
Nearly 50 years ago, cellulose (β-1,4-linked glucose) was identified in Acanthamoeba cyst walls, which have an outer ectocyst layer and an inner endocyst layer connected by conical ostioles (34,35). Because Acanthamoeba has a chitin synthase and WGA binds to cyst walls, it is likely that chitin (β-1,4-linked GlcNAc) is also present (36). No cyst wall proteins were known until we purified cyst walls from the Neff strain of Acanthamoeba castellanii (Ac), which is a T4 genotype, and identified three families of cellulose-binding lectins that we named Jonah, Luke, and Leo (29). When a representative protein from each family was tagged with green fluorescent protein (GFP) and expressed under its own promoter in transfected Ac, Jonah localized to the ectocyst layer that is made early, while Luke and Leo localized to the endocyst layer and ostioles that are made later (37,38). Anti-GFP antibodies showed Jonah-GFP is more accessible than Luke-GFP and Leo-GFP, suggesting Jonah and other ectocyst proteins might be better targets for anti-cyst antibodies to diagnose AK by fluorescence microscopy of eye scrapings (29).
Here we performed basic science experiments to understand better the localization, origin, and structure of Jonah, Luke, and Leo lectins, as well as an abundant laccase, and translational science experiments to test their value as targets for antibodies to diagnose cysts in patients with AK. First, we identified three additional Ac proteins in the ectocyst layer and used RT-PCR, double labels with GFP and RFP or mCherry, promoter swaps, and a GFP-tagged probe for chitin to begin to understand why proteins are located in the ectocyst layer versus the endocyst layer and ostioles. Second, we used AlphaFold and Foldseek to better understand the structure and origin of wall proteins (39)(40)(41), and we made alanine mutations to show that linear arrays of aromatic amino acids of Luke and Leo bind cellulose (42)(43)(44)(45)(46)(47). Third, we tested rabbit antibodies (rAbs) to recombinant Ac cyst wall proteins against cysts of 10 other Acanthamoeba strains and species, including five T4 genotypes that cause most cases of AK (13)(14)(15)(16).
Localization of Jonah-3, Leo-S, and laccase-1 to the ectocyst layer of mature walls of Ac
The goal here was to identify additional abundant Ac cyst wall proteins that localize to the ectocyst layer, which is likely more accessible to diagnostic antibodies than the endocyst layer. Jonah family lectins are highly abundant in cyst walls and have an N-terminal signal peptide followed by one or three choice-of-anchor A (CAA) domains, which form β-helical folds (BHFs) (Fig. 1A). Jonah-1 (ACA1_164810), which has a long, unstructured Thr-rich domain in addition to a single BHF, is expressed early during encystation and localizes to the ectocyst layer when tagged with GFP and expressed under its own promoter in transformed Ac of the Neff strain (ATCC 30010) (Fig. 1A, 1B, and 1E) (Supporting Information S1 and S2) (29,(37)(38)(39)(40)(41). The ectocyst layer is labeled with WGA, which binds to chitin, while the endocyst layer and ostioles are labeled with CFW, which binds to cellulose and/or chitin. Here we tested the localization of Jonah-3 (ACA1_157320), which contains three BHFs and was very abundant in purified cyst walls (17,18,29). While other cyst wall proteins contain a signal peptide but no TMHs, Jonah-3-AmoebaDB contains 12 transmembrane helices (TMHs) (Fig. 1A) (48,49). GFP-tagged Jonah-3-AmoebaDB, which was expressed under its own promoter, localizes to punctate rings around each ostiole (Fig. 1B and 1C). To test whether this ring-like localization of Jonah-3-AmoebaDB might be an artifact of incorrect protein prediction, we used an unfinished transcriptome of encysting Ac to make Jonah-3(c) (corrected), which contains three BHFs separated by unstructured, Ser-rich spacers (Fig. 1A and 1F and Supporting Information S1). An alignment of Jonah-3(c) and Jonah-3-AmoebaDB shows that the 12 TMHs in the latter derive from a series of frameshifts rather than deletions or insertions. Further, Jonah-3(c)-GFP localizes to the ectocyst layer with an appearance identical to that of Jonah-1-GFP (Fig. 1D).
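Because the spurious TMHs were the first clue that the Jonah-3 gene model was wrong, a simple hydropathy scan illustrates how such predictions arise. Below is a minimal sketch assuming the classic Kyte-Doolittle scale with a 19-residue window and a threshold of 1.6 (standard heuristics, not parameters used by the paper or by DeepTMHMM).

```python
# Minimal Kyte-Doolittle hydropathy scan (1982 scale) to flag candidate
# transmembrane helices in a predicted protein. Window (19 residues) and
# threshold (1.6) are the classic heuristics, not parameters from the paper.

KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def tmh_candidates(seq, window=19, threshold=1.6):
    """Return (start_index, mean_hydropathy) for windows above threshold."""
    hits = []
    for i in range(len(seq) - window + 1):
        mean = sum(KD.get(aa, 0.0) for aa in seq[i:i + window]) / window
        if mean >= threshold:
            hits.append((i, round(mean, 2)))
    return hits

# Running this on the AmoebaDB model versus the transcriptome-corrected
# Jonah-3(c) sequence would reveal hydrophobic stretches created by the
# frameshifts rather than by genuine membrane-spanning segments.
```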
Luke-2 (ACA1_377670) and Luke-3 (ACA1_245650), which have two or three β-jelly-roll folds (BJRFs), respectively, that are separated by unstructured, Ser-rich spacer(s), are expressed later during encystation and localize to the endocyst layer and ostioles when tagged with GFP and expressed under their own promoters (Fig. 2A, 2B, 2D, and 2E). Leo-A (ACA1_074730), which contains two adjacent sets of four disulfide knots (4DKs), is also expressed later in encystation and localizes to the endocyst layer and ostioles when tagged with GFP (Fig. 3A, 3B, and 3G). Here we tested the localization of the Leo-S lectin (ACA1_188350), which contains two sets of 4DKs separated by a long, unstructured, Thr-rich spacer (Fig. 3A and 3I). We used the unfinished Ac transcriptome a second time to make Leo-S(c), which corrects an eight-amino-acid deletion in the N-terminal 4DK that was identified by an alignment with other Leo-S lectins (Supporting Information S1). Unlike Leo-A-GFP, which localizes to the endocyst layer and forms a flat ring around the ostioles, Leo-S(c)-GFP localizes to the ectocyst layer of mature cyst walls in a pattern like those of Jonah-1 and Jonah-3(c) (Fig. 3F). Time course studies of Leo-S(c)-GFP during encystation showed that after 12 hours encystation, Leo-S(c)-GFP is present in a dense set of secretory vesicles (SV in Fig. 3D) that fill the cytosol of cells, which are rounded but lack a wall (as shown by a failure to label with WGA or CFW). However, after 24 hours encystation, Leo-S(c)-GFP forms a patchy distribution on a single-layered wall, which is labeled with both WGA and CFW (Fig. 3E). After 48 hours, Leo-S(c)-GFP has a homogeneous distribution in the ectocyst layer (data not shown), which is the same as Leo-S(c)-GFP in mature cysts at 72 hours (Fig. 3F).
The last protein tested was laccase-1 (ACA1_068450), which is a multicopper oxidase with three cupredoxin-like domains (CuRO-1, CuRO-2, and CuRO-3) that is abundant in purified cyst walls of Ac (Fig. 4A and 4E) (29,50,51). Laccase-2(c) (ACA1_006180), the sequence of which we corrected using the unfinished transcriptome, is much less abundant and so was not localized. GFP-tagged laccase-1 under its own promoter localizes to the ectocyst layer of mature Ac walls with an appearance like those of GFP-tagged Jonah-1, Jonah-3(c), and Leo-S(c) (Fig. 4D). Like Leo-S(c)-GFP, laccase-1-GFP fills secretory vesicles of Ac encysting for 18 hours, when there is no wall labeled with WGA or CFW (Fig. 4B). N.B. that the precise timing of secretory vesicles and wall formation may vary by up to 6 hours, depending on the exact condition of trophozoites prior to placement in encystation medium. Unlike Leo-S(c)-GFP, after 24 hours encystation laccase-1-GFP is homogeneously distributed in the single-layered wall of Ac, which becomes the ectocyst layer of mature cysts at 72 hours (Fig. 4C and 4D).
In summary, we localized three new proteins to the ectocyst layer of the Ac cyst wall, one of which was correctly predicted by AmoebaDB (laccase-1) and two of which needed to be corrected using an unfinished transcriptome (Jonah-3(c) and Leo-S(c)). We also showed that Leo-S(c) and laccase-1 are each made early during encystation in a dense set of secretory vesicles, which are released onto the single-layered wall in a fashion like that of Jonah-1 (29).
RT-PCR and promoter swaps show timing of expression does not just correlate with but causes protein localization in the ectocyst layer (earlier) or the endocyst layer and ostioles (later)
Four experiments here address limitations in our present understanding of how proteins are targeted to the two layers of the Ac cyst wall and ostioles. First, we use qRT-PCR to track mRNAs for Jonah-1, Luke-2, Leo-A, and laccase-1 to determine the extent to which the results correlate with confocal micrographs of GFP-tagged proteins, each expressed under its own promoter (~500-bp upstream of the start ATG) (Fig. 4B to 4D) (29).
Second, to visualize the development of the two layers of the wall in the same organism, we encysted Ac expressing Luke-2-GFP and either Jonah-1-RFP or Jonah-1-mCherry, each under its own promoter on a single plasmid, and examined them with confocal microscopy plus CFW. Third, we use promoter swaps between pairs of proteins (Jonah-1 and Luke-2 or laccase-1 and Leo-A) to test the hypothesis that early expression does not just correlate with but causes Ac cyst wall proteins to localize to the ectocyst layer, while later expression causes cyst wall proteins to localize to the endocyst layer and ostioles (29). We also tested here whether proteins expressed under the same promoter have similar or different localizations in the Ac cyst wall, which would suggest similar or different binding specificity, respectively, for cellulose and/or chitin, the two glycopolymers that form fibers in the wall (35,36). Fourth, we use earlier and later promoters (Jonah-1 and Luke-2, respectively) to express a heterologous probe for chitin fibrils, which is composed of a chitin-binding domain of an Entamoeba chitinase called CBM55 fused to GFP (52).
Across nine time points, we isolated RNA from nontransfected trophozoites and from the Neff strain encysting for up to 96 hours and performed qRT-PCR with cDNA (Fig. 5A). qRT-PCR products were normalized to calreticulin, which is part of the N-glycan-dependent quality control of protein folding in the ER and is highly expressed in trophozoites and encysting Ac (53). Transcription of Jonah-1 begins at 6 hrs encystation and peaks at 18 hrs, while transcription of laccase-1 begins at 12 hrs and peaks at 24 hrs. In contrast, transcription of Luke-2 begins at 24 hrs and peaks at 48 hrs, while transcription of Leo-A is only detected at 72 hrs. The qRT-PCR results agree with those of the unfinished transcriptome, which also suggests the low level of Leo-A is likely caused by a suboptimal choice of primers for RT-PCR. Double labels on the same plasmid show Jonah-1-RFP is made at 12 hrs of encystation (Fig. 5B), when there is no evidence of Luke-2-GFP, while CFW labels glycopolymers (most likely cellulose) that are present in small vesicles. After 24 hrs encystation (Fig. 5C), Jonah-1-RFP begins to accumulate on the single-layered wall, which is lightly labeled with Luke-2-GFP that is predominantly in small secretory vesicles that are not well resolved from each other. After 48 hrs encystation (Fig. 5D), Jonah-1-mCherry localizes to the ectocyst layer with a slightly punctate appearance, as shown in Fig. 1B and 5E, while Luke-2-GFP forms rings around ostioles and lightly labels the endocyst layer, which is heavily labeled with CFW. This appearance of Jonah-1-mCherry, Luke-2-GFP, and CFW is indistinguishable from that of mature cysts at 72 or 96 hrs encystation (data not shown).
For the promoter swap experiments, we used GFP-tagged proteins that are either made earlier and localize to the ectocyst layer (Jonah-1 and laccase-1) or made later and localize to the endocyst layer and ostioles (Luke-2 and Leo-A) (Fig. 1B, 2B, 3D to 3F, and 4B to 4D) (29,37,38). In the first promoter swap, Jonah-1-GFP expressed under the later Luke-2 promoter moves from the ectocyst layer to the ostioles, which are densely coated and increase in number from ~9 to as many as 16, while the endocyst layer is weakly labeled (Fig. 5E and 5F). The localization of Jonah-1-GFP under the Luke-2 promoter is distinct from those of GFP-tagged Luke-2 and Leo-A under their own promoters, which show much greater localization to the endocyst layer, suggesting Jonah-1 binds to different glycopolymers (Fig. 1B and 2B). Conversely, Luke-2-GFP expressed under the earlier Jonah-1 promoter moves from the endocyst layer and ostioles to lightly decorate the ectocyst layer and form flat, narrow rings around the ostioles (Fig. 5G and 5H). These narrow rings, which match those of Luke-2-GFP under its own promoter, suggest that the glycopolymer in the ostiole to which Luke-2 binds is available both earlier and later during encystation. The localization of Luke-2-GFP in the ectocyst layer is distinct from those of GFP-tagged Jonah-1, Jonah-3(c), Leo-S(c), or laccase-1, which, under their own promoters, coat the ectocyst layer and form dimples where ostioles are located (Fig. 1B, 1D, 3F, and 4D). This result suggests that the glycopolymers bound by Luke-2 are likely not the same as those bound by proteins that normally localize to the ectocyst layer.
In the second promoter swap experiment, we expressed laccase-1-GFP (earlier) and Leo-A-GFP (later) each under its own promoter or the other promoter and determined their localizations in mature cyst walls.
Laccase-1-GFP expressed under the later Leo-A promoter moves from the ectocyst layer to the ostioles, which are densely coated (Fig. 5I and 5J). Leo-A-GFP expressed under the earlier laccase-1 promoter moves from the endocyst layer and ostioles to lightly decorate the ectocyst layer and form flat rings around the ostioles (Fig. 5K and 5L).
When expressed under the earlier Jonah-1 promoter, the chitin-binding domain of the Entamoeba chitinase (CBM55) tagged with GFP localizes to distinct fibrils in the ectocyst layer, some of which have linear arrays of small spheres (Fig. 5M). A 2D cross-section of the same cell shows that the chitin layer is narrow. In contrast, CBM55-GFP expressed under the later Luke-2 promoter has a honeycomb appearance in the endocyst layer, which is thicker on 2D cross-section, and there is no localization to ostioles (Fig. 5N).
In summary, qRT-PCR and double labels support our previous conclusions that Jonah-1 and laccase-1 in the ectocyst layer are made earlier, while Luke-2 and Leo-A in the endocyst layer and ostioles are made later (29). Promoter swaps support our hypothesis that earlier expression does not just correlate with but causes proteins to localize to the ectocyst layer, while later expression causes proteins to localize to the endocyst layer and ostioles. The similar localizations of Jonah-1 and laccase-1 under earlier and later promoters, which are distinct from the localizations of Luke-2 and Leo-A under the same promoters, suggest that pairs of wall proteins bind to distinct glycans. Finally, CBM55-GFP suggests chitin forms fibrils in the ectocyst layer and honeycombs in the endocyst layer, the mechanism of which is unknown but fascinating.
Structures of abundant wall proteins reveal the complex ancestry of Ac cyst wall proteins and identify linear arrays of three aromatic amino acids in Luke-2 and Leo-A, which bind cellulose and direct proteins to the Ac cyst wall
Previously, we used sequence-based searches to show that Jonah lectins contain one or three choice-of-anchor A (CAA) domains like a spore coat protein of Bacillus anthracis, while Luke lectins contain two or three domains similar to those of cellulose-binding proteins of Dictyostelium discoideum and to carbohydrate-binding modules of bacterial (CBM2) and plant (CBM49) endocellulases (29,32,(42)(43)(44)(45)(54)(55)(56)(57). Further, the Ac laccase-1 has three cupredoxin-like domains like those of bacterial and fungal enzymes, while the 8-Cys domains of Leo appear to be unique to Ac (50,51). Here we used AlphaFold and Foldseek to 1) predict structures of abundant cyst wall proteins and compare them to proteins with solved structures, 2) identify cellulose-binding sites of Luke and Leo lectins, and 3) select domains in Jonah-1 and laccase-1 for immunizing rabbits to produce antibodies that detect cysts by immunofluorescence microscopy (39)(40)(41).
Luke-2 and Luke-3 have two and three β-jelly-roll folds (BJRFs), respectively, each of which has a disulfide bond linking its beginning to its end. The BJRFs of Luke lectins are separated by short (33- to 47-aa), unstructured domains enriched in Ser (yellow) or Thr (blue) (Fig. 2D and 2E). Slime molds including Dictyostelium, Tieghemostelium, Heterostelium, Cavenderia, and Polysphondylium also have cell wall proteins with one to five similar BJRFs, consistent with their shared ancestry with Luke lectins (32,56). Further, the BJRFs of Luke-2 and a predicted Dictyostelium cellulose-binding protein (Q86KB6_DICDI) each contain a linear array of three aromatic amino acids that is also present in CBM2 of an endocellulase of the cellulolytic soil bacterium Cellulomonas fimi, the structure of which has been solved (PDB 1EXH) (pink arrows in Fig. 2F and 2G) (58,59). Ala mutations of these three aromatic amino acids in CBM2 and in CBM49 of a plant endocellulase, which is closely related, show they are the binding sites for cellulose (42,47). Here we mutated to Ala two sets of three aromatics (W35, W73, and W88 and W187, W228, and F244) of Luke-2, which was fused to maltose-binding protein (MBP) and expressed in the periplasm of E. coli where, like the endoplasmic reticulum of eukaryotic cells, disulfide bonds are formed (29,60,61). We purified MBP-Luke-2 +/- Ala mutations on amylose resins and tested their binding to Avicel (crystalline) cellulose. Western blots showed WT Luke-2 binds well to Avicel cellulose, while MBP alone and Luke-2 plus Ala mutations fail to bind to cellulose (Fig. 2J). Further, Luke-2-GFP plus Ala mutations, which was expressed under the same promoter as WT Luke-2-GFP, no longer localizes to the endocyst layer and ostioles (Fig. 2B and 2C). The binding of MBP-WT Luke-2 to Avicel cellulose argues for its proper folding, so we used it to immunize rabbits.
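As a companion to the mutagenesis above, a sequence-level scan can enumerate candidate aromatic triads before structures are consulted. The sketch below is illustrative only: the 60-residue spacing window is an assumption, and whether three aromatics form a linear array on one face of a fold is a 3D property that was assessed here on AlphaFold models, not something sequence alone can prove.

```python
# Illustrative enumeration of candidate aromatic triads (W/Y/F) like
# W35/W73/W88 of Luke-2. The 60-residue spacing window is an assumption
# for this sketch, not a criterion stated in the paper.

from itertools import combinations

def aromatic_triads(seq, max_span=60):
    """List 1-based position triples of W/Y/F within max_span residues."""
    positions = [i + 1 for i, aa in enumerate(seq) if aa in "WYF"]
    return [trio for trio in combinations(positions, 3)
            if trio[2] - trio[0] <= max_span]

# With the assumed window, the reported Luke-2 triad (W35, W73, W88;
# span 53) would be retained as a candidate for structural inspection.
```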
Leo lectins have two sets of four disulfide knots (4DKs), which are adjacent (Leo-A) or separated by a long, unstructured, Thr-rich domain (Leo-S(c)) (Fig. 3G and 3I). Although there are numerous carbohydrate-binding modules composed of sets of 4DKs (e.g., CBM18 of WGA, CBM19 of a Saccharomyces cerevisiae GH18 chitinase, and CBM55 of an Entamoeba histolytica GH18 chitinase), we were unable with Foldseek to identify any shared structures with the set of 4DKs of Leo (43,46,52,(62)(63)(64). Remarkably, the 4DKs of Leo-A and Leo-S(c) contain linear arrays of three aromatic residues, even though these 4DKs have no relationship to the BJRFs of Luke or to CBM2 and CBM49 of bacterial and plant endocellulases (Fig. 3H). Ala mutations to two sets of three aromatics (Y46, Y63, and Y77 and Y134, Y151, and Y165) caused an MBP-Leo-A fusion to no longer bind to Avicel cellulose (Fig. 2J). Similarly, the Ala mutant of Leo-A-GFP, which was expressed under the same promoter as WT Leo-A-GFP, no longer localizes to the endocyst layer and ostioles (Fig. 3B and 3C). As above, the binding of MBP-WT Leo-A to Avicel cellulose argues for its proper folding, so we used it to immunize rabbits.
Jonah-1 has a single three-sided BHF, a pair of α-helices of unknown function, and a long, unstructured, Thr-rich domain, while Jonah-3(c) has three BHFs and three long, unstructured, Ser-rich domains (Fig. 1E and 1F). The BHF of Jonah-1 closely resembles the three-sided BHF of an antifreeze protein of the Antarctic sea ice bacterium Colwellia sp., the structure of which has been solved (PDB 3WP9) (Fig. 1E to 1H) (65). Although Jonah-1 and laccase-1 (next paragraph) contain numerous aromatic amino acids on their surfaces, none is in linear arrays, so we did not perform Ala mutations to test cellulose binding. Finally, we chose the BHF of Jonah-1 to make an MBP-fusion for immunizing rabbits, because we wanted to avoid unstructured, Thr-rich regions, which might interfere with proper folding of the protein and/or contain O-linked glycans that would obscure antigenic sites.
The laccase-1 of Ac has three cupredoxin-like domains (CuRO-1, CuRO-2, and CuRO-3) but lacks unstructured, Ser- or Thr-rich domains, which distinguishes it from other ectocyst layer proteins (Jonah-1, Jonah-3(c), and Leo-S(c)) (Fig. 4E). Instead, laccase-1 has a positively charged loop between CuRO-2 and CuRO-3, which is also present in spore coat proteins of bacteria. Ac laccase-1 shares 44% identity with spore coat protein A of Caldicoprobacter faecalis, which is present in sewage sludge. Ac laccase-1 shows slightly lower positional identities with laccases of archaea (e.g., 40% with Halalkalicoccus paucihalophilus), fungi (e.g., 38% with Penicillium nalgiovense), and plants (e.g., 36% with Diphasiastrum complanatum). Pelomyxa schiedti (36% identity) is the only other amoebozoan with a laccase, which is absent from humans and most metazoa but is present in a few invertebrates (e.g., 28% identity with Rotaria sp.). While the high positional identity with bacterial laccases suggests the Ac laccase-1 may be active, we did not determine what substrates are oxidized, and we did not knock it out to determine the resulting phenotype. The CuRO-1 of laccase-1, which we chose to make an MBP-fusion for immunizing rabbits, closely resembles that of the spore coat protein A of Bacillus subtilis, the structure of which has been solved (PDB 4YVN) (Fig. 4F) (66). A possible concern here is the presence of a 10-aa sequence (NVYAGLAGFY) near the C-terminus, which is also present in laccases of some bacteria, archaea, fungi, and plants and so might lead to cross-reacting antibodies.
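For readers who want to reproduce identity figures like those above, here is a minimal sketch of one common convention for percent identity over a pre-computed pairwise alignment (gaps as '-'). Tools differ in whether gapped columns enter the denominator, so the exact percentage depends on the aligner and convention, and the example sequences are made up.

```python
# Percent identity over a pre-computed pairwise alignment: identities
# divided by aligned (non-double-gap) columns. This is one convention;
# others exclude all gapped columns from the denominator.

def percent_identity(aln1, aln2):
    assert len(aln1) == len(aln2), "sequences must be aligned to equal length"
    ident = cols = 0
    for a, b in zip(aln1, aln2):
        if a == "-" and b == "-":   # skip columns that are gaps in both
            continue
        cols += 1
        if a == b and a != "-":
            ident += 1
    return 100.0 * ident / cols

print(percent_identity("MK-LLW", "MKALLY"))  # ~66.7 under this convention
```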
In summary, BJRFs of Luke lectins share recent common ancestry with wall proteins of slime molds and distant ancestry with CBM2 and CBM49 of bacterial and plant endocellulases, while 4DKs of Leo are unique.
The BHFs of Jonah lectins and the three CuRO domains of laccase closely resemble those of bacterial proteins and so likely derive by horizontal gene transfer, which was not proven here. Although the structures of the BJRFs of Luke-2 and the set of 4DKs of Leo-A show no resemblance, the linear arrays of three aromatic amino acids, which are involved in binding cellulose and localizing proteins in the endocyst layer and ostioles, are the same, consistent with convergent evolution. Finally, while the BHF of Jonah-1, BJRFs of Luke-2, and 4DKs of Leo-A have no well-conserved sequences that might lead to cross-reacting antibodies with bacteria or fungi in eye scrapings, CuRO-1 of laccase-1 has a well-conserved 10-aa sequence that may be problematic.
Rabbit antibodies to Jonah-1, Luke-2, Leo-A, and laccase-1 all efficiently detect cysts of 10 of 11 Acanthamoeba isolates, including five T4 genotypes that cause most cases of AK.
The goals here were to 1) use rAbs to recombinant wall proteins to visualize native proteins by western blots and confocal microscopy and 2) test how well each antibody detected CFW-labeled cysts of 11 Acanthamoeba species/strains. Rabbits were immunized with MBP fused to WT Luke-2, WT Leo-A, the BHF of Jonah-1, and CuRO-1 of laccase-1 in complete Freund's adjuvant and boosted three times with incomplete Freund's adjuvant at Cocalico Biologics, Inc., and rabbit IgGs were purified using Protein-A Sepharose (60,61). Western blots and confocal microscopy showed that pre-bleeds from rabbits do not react with trophozoites or cysts of Ac (Fig. 6A). While rAbs to MBP-fusions with Jonah-1, Luke-2, Leo-A, and laccase-1 do not bind to trophozoite proteins, each rAb binds well to cyst wall proteins of the Neff strain of Ac. Anti-Jonah-1 rAbs bind to a protein of the expected size of ~55-kDa, as well as to two lower mol wt bands. The latter may result from proteolytic cleavage of Jonah-1 before or during isolation of cyst proteins and/or from cross-reaction with smaller Jonah-1 proteins, which share multiple conserved domains. Anti-Luke-2 rAbs bind to a thick ~50-kDa band, which is greater than the 27-kDa expected size of Luke-2. We suspect that the increased size of Luke-2 is caused by extensive glycosylation of its five N-glycan sites and ~20 O-glycan sites in the low-complexity, Ser-rich spacer (67). Anti-Leo-A rAbs bind to a thick ~13-kDa band, which is slightly smaller than the expected size of 17-kDa, perhaps due to proteolytic cleavage.
Finally, anti-laccase-1 rAbs bind to a 75-kDa band, which is slightly bigger than the expected size of 64-kDa, as well as to a less abundant 55-kDa band. Again, addition of N-glycans or O-glycans may explain the increase in size, while proteolytic cleavage may explain the lower mol wt band.
High-power confocal microscopy showed rAbs to the BHF of Jonah-1 bound in a somewhat patchy distribution to the ectocyst layer (surface) of cysts of the Neff strain and 10 other Acanthamoeba species and/or strains (Fig. 6A and 7A). This patchy distribution, which is in contrast to the homogeneous distribution of Jonah-1-GFP (Fig. 1B), may result from masking of the BHF of Jonah-1 by its large, unstructured Thr-rich domain, which likely contains O-linked glycans, and/or masking by other ectocyst wall proteins (Fig. 1E). In contrast, rAbs to CuRO-1 of laccase-1 bound in a homogeneous pattern to the Neff strain and to other Acanthamoeba species/strains, which is similar to that of laccase-1-GFP, suggesting there is less masking of laccase-1 (Fig. 4D, 6E, and 7D). While rAbs to Luke-2 and Leo-A densely labeled the ostioles and weakly labeled the endocyst layer of many cysts (Fig. 6C and 6D), other cysts were labeled on the endocyst layer and/or ectocyst layer (Fig. 7B and 7C). Of note, rAbs to Luke-2 and Leo-A often bound in similar patterns to a particular Acanthamoeba isolate, which is reminiscent of the similar appearances of Luke-2-GFP and Leo-A-GFP in the promoter swap experiments (Fig. 5). There are two explanations, which are not mutually exclusive, for the variable distribution in the cyst walls of rAbs to Luke-2 and Leo-A. First, these proteins may be made at different times by different Acanthamoeba species/strains, so that Luke-2 and Leo-A localize to different places. Second, these rAbs may bind to relatively small amounts of Luke-2 and Leo-A in the ectocyst layer, which were not visualized by confocal microscopy because of the large amounts of GFP-tagged Luke-2 and Leo-A in the endocyst layer and ostioles (Fig. 2B and 3B).
To determine which wall proteins might be the best targets for diagnosis of cysts in AK, we counted in randomly selected low-power confocal fields the number of CFW-labeled cysts detected by rAbs to Jonah-1, Luke-2, Leo-A, and laccase-1 (Fig. 6F to 6I). In two separate experiments, >100 cysts were counted for each rAb and each Acanthamoeba species/strain, and the averages +/- SEM were plotted in Fig. 6J. We found that rAbs to Luke-2, Leo-A, and laccase-1 each detected >95% of CFW-labeled cysts of 10 of 11 Acanthamoeba species and strains tested. In contrast, anti-Jonah-1 rAbs showed >95% detection for 7 of 11 Acanthamoeba species but detected somewhat fewer cysts of A. byersi (PRA-115) (T11) (91%), Acanthamoeba sp. 13 (ATCC 50655) (T18) (89%), and A. polyphaga (ATCC 30872) (T4) (85%). Why this is the case is not clear, but the result, which suggests that the BHF of Jonah-1 might not be quite as good a target as the BJRFs of Luke-2, the 4DKs of Leo-A, or CuRO-1 of laccase-1, was reproduced in separate experiments. In addition, all four rAbs struggled to detect A. mauritaniensis (T4) (ATCC 50676), suggesting that there is a problem with encysting these trophozoites under the conditions used here. While counts were performed for two sets of cysts labeled with Protein-A Sepharose-purified antibodies, we obtained similar results with rabbit sera diluted 1:300. Most important, we confirmed that rAbs to all four proteins were also visible with a conventional Zeiss fluorescence microscope, which is more similar to those present in clinical labs or in ophthalmologists' offices. While the success with rAbs to Jonah-1 and laccase-1 was expected, because these proteins are on the ectocyst layer (surface) of cysts, the success of rAbs to Luke-2 and Leo-A was unexpected, because we previously had trouble locating Luke-2-GFP or Leo-A-GFP using anti-GFP antibodies (29). Finally, regardless of whether the rAbs are binding to the ectocyst layer or endocyst layer, the presence of ~10 ostioles makes Acanthamoeba cysts look distinct from fungal walls, which may at most have a few bud scars (29,31,34).
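As a small numerical aid, the sketch below shows one way to compute the detection rate and SEM described above for a single rAb-isolate pair across two experiments; the counts in the example are hypothetical placeholders, not the paper's data.

```python
# Detection-rate statistic: for one rAb and one isolate, the percentage of
# CFW-labeled cysts also labeled by the rAb, averaged over experiments
# with SEM. Counts here are hypothetical placeholders.

from math import sqrt
from statistics import mean, stdev

def detection_stats(counts):
    """counts: list of (rAb_positive, cfw_total) tuples, one per experiment."""
    rates = [100.0 * pos / total for pos, total in counts]
    sem = stdev(rates) / sqrt(len(rates)) if len(rates) > 1 else 0.0
    return mean(rates), sem

avg, sem = detection_stats([(98, 103), (112, 115)])  # hypothetical counts
print(f"{avg:.1f}% +/- {sem:.1f}% (SEM)")
```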
Incorrect protein prediction of Jonah-3 in AmoebaDB led to its artifactual but interesting localization in rings around ostioles
While the Ac protein sequences available in AmoebaDB made it possible for us to perform mass spectrometry of cyst walls and for others to discover dozens of proteins of interest, its predictions are based upon an incomplete genome of the Neff strain of Ac and incomplete transcriptomes (17,18,29). We used an unfinished transcriptome to show that the TMHs between three BHFs, which incorrectly localized GFP-tagged Jonah-3-AmoebaDB to a ring around ostioles, were caused by frameshifts. Instead, Jonah-3(c) has low-complexity, Ser-rich spacers like those of Luke-2 and localizes to the ectocyst layer in a pattern matching that of Jonah-1. The unfinished transcriptome also allowed us to correct an eight-amino-acid deletion in Leo-S-AmoebaDB that would have disabled its N-terminal 4DK, which is necessary for binding cellulose and localizing the protein to the ectocyst layer. Because AmoebaDB predicts 14,000 proteins, the errors described here in Jonah-3, Leo-S, and laccase-2 are mere anecdotes but suggest a need for a reannotated proteome based upon a well-constructed transcriptome. Finally, despite being an artifact, the rings around the ostioles formed by Jonah-3-AmoebaDB may be useful for studying ostiole formation during encystation and ostiole breakdown during excystation. Similarly, the heavy labeling of ostioles by Jonah-1 and laccase-1 under the later Luke-2 and Leo-A promoters, respectively, may also facilitate studies of ostiole formation and degradation.
Timing of expression and properties of abundant wall proteins determine their localization in the ectocyst layer, endocyst layer, and/or ostioles
Because whole cyst walls were purified by Percoll gradients and examined by mass spectrometry, we expressed selected proteins with a GFP-tag under their own promoters to precisely localize them in the two-layered wall connected by ostioles (29,37,38). Here we increased the number of ectocyst layer proteins from one (Jonah-1) to three (Jonah-3(c), Leo-S(c), and laccase-1). Laccase-1 and Leo-S(c) were also localized to a massive set of secretory vesicles early in encystation, as was previously shown for Jonah-1. For the first time, we used double labels to colocalize Jonah-1 and Luke-2 in developing walls and show unequivocally that the ectocyst layer is made first and the endocyst layer and ostioles are made second. Rabbit antibodies confirmed the presence of Jonah-1 and laccase-1 in the ectocyst layer of untransformed cysts of the model Neff strain of Ac, as well as cysts of 10 other Acanthamoeba species/strains. Promoter swaps showed Jonah-1 and laccase-1 switch to the endocyst layer and ostioles when expressed under the later promoters of Luke-2 and Leo-A, respectively.
Conversely, Luke-2 and Leo-A switch from the endocyst layer and ostioles to the ectocyst layer when expressed under the early promoters of Jonah-1 and laccase-1, respectively. Because the localizations of Jonah-1 and laccase-1 resemble each other under early and later promoters, it is likely that they are binding to the same glycopolymer(s), which we did not determine here. Similar localizations of Luke-2 and Leo-A under both promoters suggest these lectins are also binding to the same glycopolymer(s). This result is not surprising, as Luke-2 and Leo-A contain similar arrays of aromatic amino acids that bind cellulose. Finally, we took advantage of earlier and later promoters and a heterologous probe from an Entamoeba chitinase to show two distinct appearances for chitin in the ectocyst layer (thick fibrils) and endocyst layer (honeycomb), the latter of which we had never seen before with the exogenous probe for chitin (WGA) or GFP-tagged wall proteins. Double labels are a promising tool for studying encystation by live protists, as well as studying how cyst walls are broken down by excysting Ac. Our conclusion that early expression causes proteins to localize in the ectocyst layer, and later expression causes proteins to localize in the endocyst layer and ostioles, leaves many questions unanswered. As above, we do not know how ostioles are formed, and we do not know what causes chitin to have such distinct appearances in the two layers. In addition, we have not yet precisely localized cellulose in cyst walls or determined whether its appearance in the two layers is different. Finally, we have not identified sequences in regions 5' to the start ATG that bind transcription factors specific for earlier or later encystation-specific expression, nor have we figured out why some encystation-specific promoters are stronger than others and so produce greater quantities of transcripts for wall proteins.
While its 4DKs are unique, Leo lectins share many properties with wall proteins of Ac and other organisms, arguing for convergent evolution
Although the four families of Ac wall proteins show no relationship to each other, three have been previously identified and characterized in walls of protists, fungi, plants, and/or bacteria. Luke lectins have two or three BJRFs, which share recent ancestry with wall proteins of Dictyostelium and distant ancestry with CBM2 and CBM49 of bacterial and plant endocellulases (32,42,47,56,59). Laccases, which are most similar to those of bacteria, are widely distributed in walls of plants, fungi, and oomycetes (50,51,66). Although their function in Ac, which proliferates in temperate climates, is not clear, BHFs similar to those in Jonah lectins protect Arctic bacteria from freezing (57,65). While its cellulose-binding 4DKs are unique, Leo lectins share properties with wall proteins of Ac and other organisms, suggesting the importance of convergent evolution (69). Chitin-binding domains of Saccharomyces chitinases (CBM19), Entamoeba chitinases (CBM55), and WGA (CBM18) also contain unique 4DKs, while chitin-binding domains of Jacob lectins contain unique 3DKs (43,46,52,63,64).
Even though the 4DKs of Leo and the BJRFs of Luke have no structural similarity, each contains linear arrays of three aromatic amino acids, which Ala mutations showed bind cellulose. Similar linear arrays of aromatics that bind cellulose are present in endocellulases of bacteria (CBM2) and plants (CBM49), as well as in CBM63s of expansins (proteins that unfold cellulose) of plants, bacteria, and Ac (42,44,47). Reinvention by convergent evolution of these linear arrays of aromatics strongly suggests they are the best means for binding cellulose, which forms flat ribbons with alternating glucose residues facing opposite surfaces.
Like Leo-A with two adjacent 4DKs and Leo-S with 4DKs separated by a large Thr-rich spacer, Entamoeba Jacob-1 has two adjacent 3DKs, while Jacob-2 has a third 3DK separated by a long, unstructured, Ser-rich spacer (70). Unstructured Ser- or Thr-rich domains, which are also present in Jonah and Luke lectins, are likely modified by O-linked glycans, which protect glycoproteins from being degraded by bacterial proteases, as shown for wall proteins of Entamoeba and Cryptosporidium (71,72). Due to constraints on resources and time, we did not identify O-linked glycans on Ac cyst wall proteins, nor did we confirm the AlphaFold structures by crystallizing any of the four wall proteins studied here. This seems less of an issue for the BHFs of Jonah-1 and -3, the BJRFs of Luke-2 and Luke-3, and the CuRO-1 domain of laccase-1, the structures of which match those of crystallized proteins (59,65,66). A goal of future studies will be to solve the crystal structure of the unique 4DK of Leo +/- cellulose.
Abundant wall proteins do not have to be in the ectocyst layer to be good targets for diagnostic antibodies
While it is rare for us (and other parasitologists) to pair basic and translational science experiments, it worked well here for the following reasons. The BHF of Jonah-1, the BJRFs of Luke-2, the 4DKs of Leo-A, and the CuRO-1 domain of laccase-1 each expressed well as MBP-fusions in the periplasm of bacteria, where disulfide bonds are formed (60,61). In contrast to anti-peptide rAbs to Jonah-1 and Leo-A, which bound to Western blots but not to fixed cysts (29), rAbs to MBP-fusions of all four Ac wall proteins bound well to Western blots and to cysts of the Neff strain of Ac, as well as to cysts of 9 of 10 other isolates of Acanthamoeba. While we expected rAbs to Jonah-1 and laccase-1, which are both in the accessible ectocyst layer, to efficiently detect CFW-labeled cysts, anti-laccase-1 performed slightly better than anti-Jonah-1. Even though we doubted whether rAbs to Luke-2 and Leo-A, which are in the relatively inaccessible endocyst layer and ostioles, would detect CFW-labeled cysts efficiently, anti-Luke-2 and anti-Leo-A rAbs both performed very well. Although the labeling of the ectocyst layer by rAbs to Luke-2 and Leo-A is not easy to explain, efficient detection is what matters for a diagnostic reagent.
Finally, the signal from the secondary goat antibody binding to rAbs to Jonah-1, Luke-2, Leo-A, and laccase-1 is strong enough that cysts were easily detected with a conventional fluorescence microscope, not just with an expensive confocal microscope used to get three-dimensional, high-resolution images of the ectocyst layer, endocyst layer, and ostioles.
Because these rAbs cross-react with MBP, which was not removed prior to immunization, they are not ready for testing on corneal scrapings that likely contain numerous bacteria. Although we are confident that the Luke-2, Leo-A, and Jonah-1 antigens are each unique to Ac, a 10-aa sequence in laccase-1 may lead to antibodies that cross-react with walls of bacteria, fungi, or plants. Cysts examined here were made by starving cultured Acanthamoebae and so might not be the same as those made in the corneal epithelium, soil, or water. Diagnostic anti-cyst antibodies will complement monoclonal antibodies to the mannose-binding proteins on trophozoites (68,73), as well as antibodies to transporters and secreted proteins of trophozoites and cysts (74)(75)(76). Anti-cyst antibodies may also supplement loop-mediated isothermal amplification (LAMP) assays for diagnosing AK (77,78).
Ethics statement
Culture and manipulation of Acanthamoebae under BSL-2 protocols were approved by the Boston University Institutional Biosafety Committee. Production of custom rabbit antibodies was approved by the Institutional Animal Care and Use Committee of Cocalico Biologics, Inc., Denver, PA.
Summary of new methods to study Ac wall proteins.
Many methods are the same as those used in our mass spectrometric characterization of proteins in purified cyst walls of Ac, which were described in detail (29). New here is the use of an unfinished transcriptome to correct the protein predictions in AmoebaDB for Jonah-3(c) and Leo-S(c). Confocal microscopy replaced structured illumination microscopy (SIM), because it is quicker and allowed us to examine many more parasites. RT-PCR and double labels with GFP and either RFP or mCherry were used to compare expression and localization of ectocyst and endocyst layer proteins, while an exogenous probe from an Entamoeba chitinase (CBM55) was used to localize chitin in the ectocyst and endocyst layers. Two pairs of promoter swaps were used to show that localization in the two layers of the wall is not just correlated with timing of expression but is caused by timing of expression.
AlphaFold was used to predict structures for each cyst wall protein and to test, with Ala mutations, the aromatic amino acids involved in binding cellulose. Foldseek was used to suggest the origin of domains in cyst wall proteins that could not be identified using sequence-based searches. Recombinant proteins, rather than peptides, were used to make rAbs to abundant wall proteins, which supported the localizations of four GFP-tagged constructs under their own promoters. Binding of the rAbs to cysts of numerous Acanthamoeba genotypes, which were confirmed by PCR of 18S RNA genes, supported their use as targets for diagnostic antibodies.
Acanthamoeba species and strains, culture, and cyst preparation
A. castellanii Neff strain (ATCC 30010) trophozoites were obtained from the American Type Culture Collection (ATCC). Trophozoites of other strains of Acanthamoeba, originally derived from human corneal infections and granulomatous encephalitis infections, were acquired from Dr. Monica Crary, Alcon Research, LLC, Fort Worth, TX, United States, or from Noorjahan Panjwani of Tufts University Medical School (24,68). All experiments were performed using the A. castellanii Neff strain unless otherwise mentioned.
Ac trophozoites were grown and maintained in axenic culture at 30ºC in T-75 tissue culture flasks in 10 mL ATCC medium 712 (PYG plus additives) with antibiotics (Pen-Strep) (Sigma-Aldrich Corporation, St. Louis, MO) as described previously (21,29,79). Adherent log-phase trophozoites from stationary culture were detached with the help of a cell scraper and pelleted by centrifugation at 500x g for 5 min, followed by two washes with 1x phosphate-buffered saline (PBS). Cysts were prepared from trophozoites by incubating them with encystation medium (EM; 20 mM Tris-HCl [pH 8.8], 100 mM KCl, 8 mM MgSO4, 0.4 mM CaCl2, and 1 mM NaHCO3) (29,79). In brief, ~10^7 trophozoites obtained from a confluent flask were washed with 1x PBS and subsequently incubated with EM in a T-75 tissue culture flask for 120 hours. The mature cysts were released with the help of a cell scraper, harvested, and washed with 1x PBS by centrifugation at 1,500x g for 10 min. The harvested cysts were either used immediately or stored at 4ºC or -20ºC, depending upon the needs of the experiment.
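For convenience, here is a hedged helper that converts the EM recipe above into masses per liter. The molar masses assume anhydrous salts and Tris base; these are standard reference values, not figures from the paper, and must be adjusted if hydrated salts are used.

```python
# Hedged helper converting the EM recipe above into masses for 1 L.
# Molar masses assume anhydrous salts and Tris base (standard values, not
# from the paper); adjust if hydrated salts (e.g., MgSO4.7H2O) are used,
# and titrate the Tris to pH 8.8 with HCl as the recipe specifies.

EM_RECIPE = {                      # component: (mM, g/mol)
    "Tris (base)": (20.0, 121.14),
    "KCl":         (100.0, 74.55),
    "MgSO4":       (8.0, 120.37),
    "CaCl2":       (0.4, 110.98),
    "NaHCO3":      (1.0, 84.01),
}

for salt, (mm, mw) in EM_RECIPE.items():
    grams = mm / 1000.0 * mw       # mol per liter times g/mol
    print(f"{salt}: {grams:.3f} g per liter")   # e.g., Tris: 2.423 g
```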
In silico sequence analysis of candidate cyst wall lectins
The full-length coding sequences of different cyst wall proteins of A. castellanii, namely Jonah-1 (ACA1_164810), Jonah-3 (ACA1_157320), Luke-2 (ACA1_377670), laccase-1 (ACA1_068450), Leo-A (ACA1_074730), and Leo-S (ACA1_188350), were obtained from AmoebaDB, a functional genomic database useful for genetic studies of the Neff strain and ten other Acanthamoeba strains (https://amoebadb.org/amoeba/app) (80). The AmoebaDB database was also used to predict introns and to identify paralogous proteins and upstream promoters (nucleotide sequences) of different cyst wall proteins. The full-length amino acid sequences of candidate wall proteins were further analyzed, reannotated, and corrected based on homologous proteins in other species using the BLAST server (54). Other in silico tools used to functionally characterize the cyst wall proteins include the CDD database (conserved domain identification) (81), SignalP 4.1 (49) and DeepTMHMM (82) (signal peptides and transmembrane helices), the CAZy and InterPro databases (carbohydrate-binding modules) (43,46,55,48), and big-PI (glycosylphosphatidylinositol anchors) (83). Finally, we used an unfinished transcriptome of trophozoites and encysting Ac, which will be described elsewhere when completed, to check and correct protein predictions.
Expression and localization of candidate cyst wall lectins during encystation in A. castellanii
RT-PCR studies were performed to check the expression of cyst wall lectins in encysting protists. In brief, adherent log-phase trophozoites of Ac were washed twice with PBS and subsequently stimulated to encyst by incubation at 30ºC with encystation medium. The amoeba cells were collected at various time points, including 0, 6, 12, 18, 24, 36, 48, 72, and 96 hours post incubation. The cells were pelleted by centrifugation, washed with PBS, and stored in Trizol reagent at -80ºC for further use. Once all the time points were collected, the Trizol cell pellets were thawed on ice, and the cells were lysed by bead beating (BioSpec mini bead beater) operated in a cold room at 4ºC with three cycles of 1 min of beating followed by 5 min of cooling. Total RNA was extracted using the Direct-zol RNA miniprep kit (Zymo Research), and the concentration was checked using a Nanodrop 2000/2000c (Thermo Fisher Scientific). cDNA was synthesized using AMV reverse transcriptase (New England Biolabs, Ipswich, MA) as per the manufacturer's instructions. The expression of abundant cyst wall proteins during encystation was analyzed by real-time PCR (Bio-Rad) using Power SYBR Green PCR master mix (Applied Biosystems). Calreticulin was used as a housekeeping gene to normalize the expression profiles of the candidate cyst wall lectins. The primers used for real-time PCR analysis are listed in Supporting Information S1.
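The paper does not state its quantification model, but one standard possibility for the normalization just described is the Livak 2^-ddCt method; the sketch below, with hypothetical Ct values, shows how a fold change relative to trophozoites (0 h) would be computed against the calreticulin housekeeping gene.

```python
# Livak 2^-ddCt relative expression, one standard option for the
# calreticulin normalization described above; Ct values are hypothetical
# placeholders, not the paper's measurements.

def rel_expression(ct_target, ct_calr, ct_target_t0, ct_calr_t0):
    """Fold change of a target gene vs. the 0 h (trophozoite) time point,
    normalized to the calreticulin housekeeping gene."""
    ddct = (ct_target - ct_calr) - (ct_target_t0 - ct_calr_t0)
    return 2.0 ** -ddct

# e.g., a wall-protein transcript at its peak vs. trophozoites:
print(rel_expression(ct_target=22.0, ct_calr=20.0,
                     ct_target_t0=28.0, ct_calr_t0=20.0))  # 64-fold
```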
To verify the expression and subcellular localization of cyst wall proteins during encystation, we expressed individual cyst wall lectins from an episomal plasmid in the Neff strain of Ac, each under its own promoter.
Primers for making constructs are listed in Supporting Information S1, while sequences of promoters and proteins are listed in Supporting Information S2. For all constructs, the pGAPDH plasmid was used, which harbors a neomycin resistance gene (for G418 drug selection) and a glyceraldehyde 3-phosphate dehydrogenase (GAPDH) promoter for constitutive expression of a C-terminal GFP fusion chimera (37). We also used the pGAPDH vector to express proteins tagged with either RFP or mCherry, both codon-optimized for Ac. In our in silico structural studies, we observed comparable binding-site topography for the Luke-2 and Leo-A cellulose-binding modules (CBMs), each consisting of a linear array of aromatic amino acids, as previously observed for other CBMs. To confirm the involvement of these aromatics in the Luke-2 and Leo-A CBMs, the aromatic residues were mutated to alanine. Tryptophans (W35, W73, W88, W187, and W228) and a phenylalanine (F244) of Luke-2 were replaced with alanine, while tyrosines (Y46, Y63, Y77, Y134, Y151, and Y165) of Leo-A were replaced with alanine. The constructs used for Jonah-1, Leo-A (WT), and Luke-2 (WT) were taken from a previously published study (29). The full-length coding sequences of the Leo-A mutant, Luke-2 mutant, laccase-1, and Leo-S genes were codon-optimized and custom-synthesized by Twist Biosciences, along with their respective promoter sequences. For the promoter swap experiments, the Luke-2 promoter was replaced with the Jonah-1 promoter, and the laccase-1 promoter was replaced with the Leo-A promoter, and vice versa. For the chitin-binding domain (CBM55) construct, the Entamoeba histolytica CBM55 sequence was obtained from AmoebaDB, codon-optimized for Ac, and expressed under the Jonah-1 or Luke-2 promoter. For visualizing two proteins in the same cell, a single pGAPDH vector was engineered to express Jonah-1 tagged with RFP or mCherry and Luke-2-GFP, each expressed under its own promoter. RFP and mCherry were codon-optimized for Ac and synthesized at Twist Biosciences. The 5' upstream sequence (400-600 bp) of each gene (Supporting Information S2) was PCR-amplified from Ac genomic DNA and used as the promoter to drive expression when swapping the respective gene promoter in the pGAPDH plasmid. For construct preparation, we used a restriction-free cloning strategy with NEBuilder HiFi DNA Assembly Master Mix from New England Biolabs, Ipswich, MA.
The identities of the final constructs were verified by Oxford Nanopore sequencing (Plasmidsaurus). The primers used for cloning the different promoters and cyst wall lectin genes are listed in Supporting Information S1.
Transfection
Transfection of A. castellanii was performed routinely in the lab using Lipofectamine 3000 Transfection Reagent (Thermo Fisher Scientific), as per the manufacturer's instructions. Briefly, for each transfection, ~5 x 10^5 log-phase trophozoites were seeded in a T-25 flask in ATCC medium 712 and incubated for 30 min at 30ºC. After the incubation, the adherent trophozoites were washed, and the medium was replaced with 500 µL of encystation medium (EM). Prior to transfection, the Lipofectamine 3000 reagent was diluted in EM (7.5 µL Lipofectamine 3000 in 125 µL EM). The DNA master mix was prepared by diluting 8 µg of plasmid DNA with EM (to a final volume of 117 µL) and subsequently adding 8 µL of P3000 reagent. The diluted Lipofectamine 3000 and the DNA master mix were combined at a 1:1 ratio (125 + 125 µL) and incubated at room temperature for an additional 15 min. Following incubation, the DNA-Lipofectamine complex (250 µL) was added directly onto the adherent trophozoites in 1 mL EM (final volume 1,250 µL). The trophozoites were incubated for an additional 4 hours at 30ºC. After this, the EM and transfection mixture was removed by pipetting, and 10 mL of fresh medium was added into each well. After 24 h of transfection, the used medium was replaced with fresh medium containing G418 (12.5 µg/mL). After 48 h of transfection, cells were transferred from 6-well plates to T-75 flasks with ATCC medium 712 plus G418 (25 µg/mL). The used medium was routinely replaced with fresh medium containing G418 (25 µg/mL) every fourth day. After 2 to 4 weeks, the transfectants began growing robustly in the presence of G418.
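To make the final amounts in the protocol above explicit, here is a small worked-arithmetic sketch; the volumes come from the protocol, while the derived concentrations are computed here rather than stated in the text.

```python
# Worked arithmetic for the transfection mix above: volumes are from the
# protocol, while the derived final concentrations are computed here and
# are not stated in the text.

dna_ug = 8.0                      # plasmid DNA per transfection
lipo_ul = 7.5                     # Lipofectamine 3000 per transfection
mix_ul = 125.0 + 125.0            # diluted reagent + DNA master mix
well_ul = 1000.0 + mix_ul         # complex added to 1 mL EM -> 1,250 uL

print(f"DNA in well: {dna_ug / well_ul * 1000:.1f} ug/mL")      # ~6.4 ug/mL
print(f"Lipofectamine: {lipo_ul / well_ul * 100:.2f}% (v/v)")   # ~0.60%
```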
Confocal microscopy
To check the expression of the abundant cyst wall proteins during encystation, transgenic Ac trophozoites (which express GFP-tagged chimeric cyst wall proteins) were induced to encyst with EM. The encysting amoebae were collected at various time points (0, 6, 12, 18, 24, 36, 48, 72, and 96 hours). Samples were illuminated using 380 nm (CFW) and 488 nm (GFP) laser excitation; alternatively, 647 nm laser excitation was used for RFP or mCherry chimeras. The fluorescence images were captured using a CFI Plan Apochromat VC 60XC WI 60x objective on a Nikon Ni2 AX inverted confocal microscope equipped with an NIR imaging system. We deconvolved 0.1 µm optical sections using NIS-Elements (version AR5.41.02) imaging software. All confocal images shown were 3D reconstructions using dozens of z-stacks. Size bars were based upon 2D cross-sections.
Overexpression and purification of candidate cyst wall proteins as MBP-fusions
The MBP-fusion constructs of Jonah-1, laccase-1, Luke-2 (WT and mutant), and Leo-A (WT and mutant) were prepared by cloning codon-optimized (for E. coli expression) synthetic gene fragments into the pMAL-p2X vector (New England Biolabs) for periplasmic expression in BL21-CodonPlus(DE3)-RIPL (Agilent Technologies, Lexington, MA) (61). The primers used for cloning the synthetic gene fragments and the lengths of the different domains of cyst wall proteins (devoid of signal sequence) are listed in Supporting Information S1 and S2. The expression of each MBP-fusion protein was induced by the addition of 0.1 mM IPTG for 16 h at 16°C. The bacterial cell pellet was lysed, and the supernatant was collected as per the manufacturer's specifications (New England Biolabs).
The supernatant was then passed through an amylose column at a flow rate of 0.5 mL/min to allow the MBP-tagged protein to bind. The column was washed with 20 column volumes of wash buffer, and the purified protein was eluted with elution buffer (20 mM Tris-HCl [pH 8], 200 mM NaCl, 1 mM EDTA, 10 mM maltose). The identity and purity of the purified recombinant MBP-fusion proteins were confirmed by SDS-PAGE and Western immunoblotting analyses. The wild-type recombinant Jonah-1, Luke-2, Leo-A, and laccase-1 MBP-fusions were used to raise custom polyclonal antibodies (Cocalico Biologicals). Total IgG was purified from plasma samples of pre-immune and post-immunized rabbits via Protein A affinity chromatography (Pierce™ Protein A Agarose, Thermo Fisher Scientific, USA) as per the manufacturer's instructions. In brief, for Protein A purification, plasma was diluted 2-fold with binding buffer (1x Tris-buffered saline, pH 7.4) and loaded onto a column containing Protein A Agarose beads. The diluted plasma was passed through the column twice, and the beads were washed with 1x TBS (20 column volumes). The bound IgG was eluted with 0.1 M glycine-HCl (pH 2.7) into neutralizing buffer (1 M Tris-HCl, pH 9.0), concentrated, and buffer-exchanged into PBS using a 30 kDa Amicon Ultra centrifugal filter (Millipore, USA).
Cellulose binding assay using WT and mutant Luke-2 and Leo-A
To check the cellulose-binding activity of WT and mutant Luke-2 and Leo-A, we performed an in vitro cellulose binding assay. In this assay, recombinant purified MBP-fusion Luke-2 (WT and Ala mutant) and Leo-A (WT and Ala mutant) proteins were used. A total of 1 µg of MBP-fusion protein (in 100 µL of 1% NP40) was incubated with 0.5 µg of Avicel microcrystalline cellulose (Sigma-Aldrich) for 3 hours at 4ºC with rocking. Following binding, the microcrystalline cellulose fibers were pelleted by centrifugation, while the supernatant was collected in a separate tube (unbound fraction). The microcrystalline cellulose fibers (bound fraction) were washed three times with 1% NP40. The input material (total), unbound (U), and bound (B) fractions were boiled in SDS sample buffer. Soluble proteins were separated by SDS-PAGE (4-15%), blotted to nitrocellulose membranes, blocked in 5% BSA, and detected using anti-MBP antibodies (New England Biolabs). As a negative control, microcrystalline cellulose beads were incubated with MBP alone.
Detection of candidate cyst wall lectins in A. castellanii trophozoites and cysts
To detect the cyst wall proteins in Ac, log-phase trophozoites and mature cysts harvested 120 h post-encystation were lysed in SDS sample buffer. The lysates were separated on SDS-PAGE gels (4-15%), transferred to nitrocellulose membranes, and blocked in 5% BSA in PBS. The blots were probed with primary rabbit polyclonal antibodies (1:5000) or purified rabbit IgG (1:1000) raised against the different abundant cyst wall proteins. HRP-conjugated anti-rabbit IgG (Thermo Fisher Scientific) was used as the secondary antibody. Rabbit pre-immune serum or anti-rabbit IgG was used as a control. SuperSignal West Pico PLUS substrate (Thermo Fisher Scientific) was used for chemiluminescent detection. Blots were imaged using a GE ImageQuant LAS 4000 gel imager.
Immuno-staining of the mature cysts with purified rabbit polyclonal antibodies
To check the applicability of the custom-raised polyclonal rabbit antibodies for detection of Jonah-1, Luke-2, Leo-A, and laccase paralogs, we performed IFA imaging using mature cysts from other Acanthamoeba species/strains. For IFA imaging, ~0.5 to 1.0 x 10^7 mature cysts (120 h post-encystation) from different strains of Acanthamoeba were washed in PBS and fixed in 4% paraformaldehyde for 15 minutes at room temperature.
Following fixation, cysts were washed three times with PBS and blocked with 1% BSA for 1 h at room temperature. The cysts were then incubated with the rabbit polyclonal antibody (1:200 dilution) for 1 h at room temperature. Subsequently, the cysts were washed three times with PBS and incubated with secondary anti-rabbit IgG conjugated with Alexa Fluor 488 (1:300) and calcofluor white (CFW; 1 mg/mL, Sigma-Aldrich) diluted 1:20 for 30 min at room temperature. The cysts were washed three times with 1x PBS and mounted in VECTASHIELD® Antifade Mounting Medium (Vector Laboratories, Newark, CA). Samples were illuminated using 380 nm (CFW) and 488 nm (Alexa Fluor 488) laser excitation. The stained cysts were imaged using a confocal microscope as detailed above. For each rAb and each Acanthamoeba isolate, we counted at least 100 cysts in random fields to determine what percentage of CFW-labeled cysts were detected with the rAb. This experiment was repeated twice, and counts for the two experiments were averaged. Finally, we examined the same set of slides using a Plan-Apochromat 100x oil immersion lens on a Zeiss Axio Observer Z1 microscope equipped with an AxioCam ERc5s to show that a confocal microscope was not needed to detect rAb-labeled cysts.
was eluted with elution buffer (20 mM Tris-HCl (pH 8), 200 mM NaCl, 1 mM EDTA, 10 mM maltose).The identity and purity of recombinant purified MBP-fusion proteins were confirmed by SDS-PAGE and Western immunoblotting analyses.The wild-type recombinant Jonah-1, Luke-2, Leo-A, and laccase-1 MBP-fusions were used to raise custom polyclonal antibodies (Cocalico Biologicals).Total IgG was purified from plasma samples of pre-immune and post-immunized rabbits via Protein A affinity chromatography (Pierce™ Protein A Agarose, Thermo Fisher Scientific, USA) as per manufacturer instructions.In brief, for Protein A Sepharose purification, plasma was diluted 2-fold with binding buffer (1x Tris-Buffered Saline, pH 7.4) and loaded onto a column containing Protein A Agarose beads.The diluted plasma was passed through the column twice, and beads were washed with 1x TBS (20-fold column volume).The bound IgG was eluted with 0.1 M glycine-HCl (pH 2.7), into the neutralizing buffer (1 M Tris-HCl, pH 9.0) and concentrated, and the buffer exchanged into PBS using a 30
Alternatively, 647 nm laser excitation was used for RFP or mCherry chimeras. The fluorescence images were captured using a CFI Plan Apochromat VC 60XC WI (Plan APO 60x oil) objective of a Nikon Ni2 AX inverted confocal microscope equipped with an NIR imaging system. We deconvolved 0.1 μm optical sections using NIS Elements (version AR5.41.02) imaging software. All confocal images shown were 3D reconstructions using dozens of z-stacks. Size bars were based upon 2D cross-sections. | 2024-02-06T14:11:28.246Z | 2024-02-10T00:00:00.000 | {
"year": 2024,
"sha1": "4ca934a157f6a574ab72dec45955b1a0bbd7b8ac",
"oa_license": "CCBY",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2024/02/02/2024.02.02.578540.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "4ca934a157f6a574ab72dec45955b1a0bbd7b8ac",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
249051925 | pes2o/s2orc | v3-fos-license | A huge preperitoneal collection following acute necrotizing pancreatitis: A case report and the management approach
Introduction and importance Fluid collection is a critical complication of acute necrotizing pancreatitis. It is usually formed near the pancreas, but unusual collection sites have also been reported. Anterior extraperitoneal or preperitoneal collections following acute pancreatitis are rare and must be differentiated from pancreatic ascites, which is a collection of fluid in the peritoneal cavity. Case presentation A 68-year-old man with a suspected pancreatic mass presented to the emergency department, complaining of abdominal pain and gradual abdominal distention. He had experienced epigastric pain, nausea, vomiting, progressive abdominal distention, and icterus for two weeks prior to admission. An abdominopelvic CT scan revealed extensive necrotizing pancreatitis with a prominent extraperitoneal collection. The collection had extended from the retroperitoneal space to the anterior extraperitoneal or preperitoneal space and had pushed the abdominal viscera backward. We managed the patient with the “Step-up” approach, and the patient was discharged after four weeks. Clinical discussion & conclusion Preperitoneal fluid collection can rarely occur following acute necrotizing pancreatitis. Here, we suggest two possible routes for fluid migration from the retroperitoneum to the preperitoneal space. Using minimally invasive techniques such as percutaneous drainage of peripancreatic collections could reduce morbidity and mortality in critically ill patients diagnosed with necrotizing pancreatitis.
Introduction
Pancreatitis is a condition characterized by inflammation of the pancreas gland. One of the complications of pancreatitis is fluid collection, categorized by the Atlanta classification into four groups based on the time, the initial cause, and the state of encapsulation. Rarely, it also causes pancreatic ascites or pleural effusion [1]. Diagnosis is based on the patient's history, radiological findings, and, in some cases, fine-needle aspiration. The therapeutic approach to a pancreatic collection depends on the patient's symptoms, the collection location, and the presence of infection [2]. In this study, we present a patient with necrotizing pancreatitis who was first misdiagnosed with pancreatic malignancy with ascites due to the severity of symptoms and the uncommon location of the collection. Eventually, the diagnosis of acute necrotizing pancreatitis (ANP) with an extraperitoneal collection, a rare site of collection, was confirmed in the patient. This case report has been reported in line with the SCARE 2020 criteria [3].
Case presentation
A 68-year-old man presented to the emergency department of our tertiary referral center complaining of epigastric pain with gradual abdominal distention. The patient had experienced acute epigastric pain radiating to the interscapular region, together with nausea, vomiting, progressive abdominal distention, and icterus, for two weeks before hospital admission. Additionally, he had elevated blood sugar in his recent laboratory tests, without a known history of diabetes mellitus. An abdominopelvic CT scan had been performed earlier on an outpatient basis, the result of which had been mistakenly reported as a solid-cystic mass in the head of the pancreas with a necrotizing pattern and ill-defined margins, together with moderate ascites. The previous laboratory and radiological findings were in favor of pancreatic malignancy. He was a teacher with no significant past medical, surgical, or family history. He neither smoked nor took any drugs. On physical examination, his abdomen was distended, but there was no tenderness or guarding. Moreover, his lower extremities showed 3+ pitting edema that was bilateral and symmetrical.
Hydration with intravenous fluids was initiated. Abdominal ultrasonography showed a massive, highly viscous fluid collection with debris and septations in the abdominal cavity, which was aspirated. The aspirated fluid was thick, green pus mixed with debris. Due to a suspected diagnosis of secondary peritonitis, an abdominopelvic CT scan was performed. An expert radiologist in our hospital reported extensive necrotizing pancreatitis with a massive extraperitoneal collection and regional compression effects on the peritoneal organs (Fig. 1).
The extraperitoneal collection was drained with a minimally invasive approach (Fig. 2), and the specimen was submitted for routine bacterial smear and culture. The results were positive for E. coli, which was sensitive to imipenem. Consequently, the appropriate antibiotic regimen was implemented.
Ten days later, the draining fluid turned bloody, and the patient became unstable, with a heart rate of 140 bpm and a systolic blood pressure of 60 mmHg. Thus, he was emergently transferred to the operating room. After an upper midline laparotomy in the supine position, performed by the attending surgeon, the lesser sac was opened and the infected necrotic tissue was surgically debrided. At the end of the necrosectomy, two large-bore drains (Nelaton and Jackson-Pratt (JP)) were placed into the cavity for continuous irrigation and drainage (Fig. 3).
Six days after the operation, the patient became icteric and febrile. The laboratory results showed leukocytosis, hyperbilirubinemia, and a gradual elevation of alkaline phosphatase (up to 1200 IU/L; reference range: 70-306 IU/L) in comparison to the previous results. Abdominal ultrasonography and magnetic resonance cholangiopancreatography (MRCP) were performed, which revealed cholecystitis. Thus, percutaneous cholecystostomy was performed. He was discharged with the cholecystostomy and the JP drain.
The last consequence of necrotizing pancreatitis in our patient was a pancreatic fistula through the JP drain orifice, through which drainage occurred in a low-pressure system. The fistula resolved after endoscopic retrograde cholangiopancreatography (ERCP) and sphincterotomy: the endoscopic sphincterotomy converted the high-pressure pancreatic duct drainage into a low-pressure one, facilitating drainage through the major papilla. Moreover, the patient experienced glucose intolerance and steatorrhea postoperatively. Oral pancreatic enzymes (Creon) were used to treat the exocrine pancreatic insufficiency and control the patient's steatorrhea. Outpatient follow-ups continued for ten months, and the glucose intolerance and steatorrhea eventually resolved.
Discussion
Acute necrotizing pancreatitis (ANP) occurs in 10% of acute pancreatitis patients, contributing to a significant increase in mortality and morbidity [4]. Critical complications of ANP include fluid collections and pancreatic ascites. Fluid collections usually form near the pancreas, in the retroperitoneal space. Moreover, extrapancreatic fluid collections might develop in the lesser sac, the anterior or posterior pararenal space, the spleen, and the left hepatic lobe [5]. In addition, unusual collection sites following pancreatitis have been reported, including the mediastinum [5,6], small bowel mesentery, splenic or hepatic subcapsular region, omentum, anterior abdominal wall, pelvis, and inguinoscrotal region [7]. Anterior extraperitoneal or preperitoneal collections following acute pancreatitis are rare and must be differentiated from pancreatic ascites, which is a collection of fluid in the peritoneal cavity.
The retroperitoneal space is bounded anteriorly and posteriorly by the posterior parietal peritoneum and the transversalis fascia, respectively. The retroperitoneal space, which extends from the diaphragm to the pelvic cavity, consists of three spaces at the kidney level: (1) the anterior pararenal space (APS), which contains the duodenum, the pancreas, and the ascending and descending colon; (2) the perirenal space (PRS), which lies around the kidneys and is an inverted cone-shaped space due to the kidneys' ascent from the pelvis; and (3) the posterior pararenal space (PPS), which mostly contains fat tissue. Although closed at the superior ends, the retroperitoneal spaces communicate inferiorly with the pelvic extraperitoneal spaces, including the perivesical space and the PPS (Fig. 4).
The PRS is usually, but not always, cut off inferiorly by the merging of the Gerota and Zuckerkandl fasciae. Although the perirenal space is usually cut off inferiorly and does not extend into the pelvis, it is possible for disease processes to spread along the combined interfascial space. The small PPS is bounded posteriorly and laterally by the transversalis fascia and the lateroconal fascia, respectively. The PPS, which contains fat tissue, is situated anterior and posterolateral to the quadratus lumborum muscle. The Grey Turner sign in acute pancreatitis is caused by the spread of disease from the anterior pararenal space to the area between the leaves of the posterior renal fascia and, subsequently, to the lateral edge of the quadratus lumborum muscle [8].
According to previous publications, it is assumed that there exists an anterior route of communication along the umbilical prevesical fascia to a space called the preperitoneal space. This space, also called the properitoneal space, is located between the peritoneum and the transversalis fascia. We therefore hypothesize that the fluid might have migrated from the retroperitoneal space, along the parietal peritoneum, down to the bladder dome and then spread upward through the preperitoneal space. In addition, another probable pathway can be suggested owing to the lateral connections of the PPS with the lateroconal fascia and the transversalis fascia. These two pathways thus seem to explain the unusual location and the route of preperitoneal collection formation in our patient [9-11] (Fig. 5).
The treatment approach to the collections following ANP depends on the patient's symptoms and the presence of infection. The management of infected pancreatic necrosis has changed over the past decade. In infected pancreatic fluid collections, minimally invasive drainage methods are preferred over early surgical interventions due to the high risk of morbidity and mortality from surgery. A step-up approach with minimally invasive intervention is currently the preferred technique [12]. The step-up approach includes percutaneous drainage, endoscopic transgastric drainage, and minimally invasive retroperitoneal necrosectomy. This approach might reduce the rate of complications and death by minimizing the surgical trauma (i.e., tissue damage and a systemic pro-inflammatory response) in patients who are already critically ill [13]. The step-up approach might have been limited in our case due to the patient's instability, which required emergent necrosectomy.
Conclusion
The step-up approach is the best method for managing patients with necrotizing pancreatitis, even in cases with huge collections and necrosis.
Provenance and peer review
Not commissioned, externally peer-reviewed. | 2022-05-26T15:09:38.284Z | 2022-05-23T00:00:00.000 | {
"year": 2022,
"sha1": "35b809138ceac31abd895adbfb1b977c5474b869",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.amsu.2022.103843",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18047e498c5ce5fe4f7e10858c3d96ca5e4cf044",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
56484485 | pes2o/s2orc | v3-fos-license | High Dose Prednisolone Lowers Plasma Glycated Albumin Levels Compared to Actual Glycemic Control: A Retrospective Observational Study
Introduction Glycated hemoglobin (A1c) and glycated albumin (GA) are often used as indicators of glycemic control. In this study, we determined whether prednisolone (PSL) administration lowers plasma GA. Methods We investigated the factors affecting GA using multivariate analysis in 48 subjects with connective tissue diseases (CTDs). Results Multiple regression analysis of GA showed that the dose of PSL [β = − 1.36; 95% confidence interval (CI) − 2.59 to − 0.14; p = 0.03], age (β = 0.06; 95% CI 0.03–0.09; p < 0.001), body mass index (BMI) (β = − 0.14; 95% CI − 0.28 to − 0.01; p = 0.042), and A1c (β = 1.4; 95% CI 0.38–2.42; p = 0.008) significantly correlated with GA (adjusted R2 = 0.518). Moreover, GA levels adjusted for age, sex, BMI, plasma albumin (Alb) and creatinine (Cre), and A1c were significantly lower in the subjects taking ≥ 5 mg PSL than in those taking < 5 mg PSL. Finally, the dose of PSL (as a continuous variable) was negatively correlated with GA adjusted for age, sex, BMI, Alb, Cre, and A1c. Conclusion High-dose (≥ 5 mg) PSL lowers GA concentration relative to actual glycemic control.
INTRODUCTION
Glucocorticoids are widely used for the treatment of diseases such as autoimmune disease and chronic kidney disease [1,2]. However, such treatment predisposes to diabetes, with odds ratios for new-onset diabetes mellitus (DM) in patients treated with glucocorticoids having been shown to be 1.5-2.5 [1,2]. Steroid-induced DM is associated with marked postprandial hyperglycemia due to peripheral insulin resistance and islet cell dysfunction [1]. Insulin sensitizers, such as thiazolidinediones and metformin, are often used as first-line therapies. However, insulin secretory capacity is also reduced; therefore, insulin therapy and oral hypoglycemic drugs may also be used [3]. Following these steps, if glycemic control remains insufficient, intensive insulin therapy may also be required after steroid therapy [3]. Early detection of steroid-induced DM is needed to prevent insulin therapy [3].
Several indices are now used to evaluate glycemic control, but glycated hemoglobin A1c (A1c) is the most frequently used. However, in specific conditions, such as hemolytic anemia and hemoglobinopathies, there are discrepancies between actual glycemic control and A1c concentration [4,5]. In such instances, measurement of glycated albumin (GA) is recommended [4]. Because albumin (Alb) has a shorter half-life than hemoglobin, GA reflects the efficacy of glycemic control over a shorter period of time than A1c [6], and it more accurately reflects postprandial than fasting hyperglycemia [7]. Because early-stage steroid-induced DM exhibits postprandial hyperglycemia with normal fasting glucose levels [1], GA can be an appropriate indicator of glycemic status in patients with steroid-induced DM. However, in some previous studies, glucocorticoid administration has been shown to lower plasma GA relative to actual glycemia [8]. Nevertheless, it remains unclear whether glucocorticoid treatment reduces plasma GA concentration.
In this study, we determined whether high dose prednisolone (PSL) affects plasma GA concentration in patients with connective tissue diseases (CTDs) using multivariate analysis. The identification of factors affecting GA concentration may define contraindications for the use of GA in subjects undergoing steroid treatment.
Study Subjects
We retrospectively reviewed the medical records of outpatients with CTDs who had attended our department at Gifu University Hospital between April 2009 and June 2016 (Table 1). Forty-eight subjects were identified and placed into two groups: those who had been taking < 5 mg PSL-equivalent per day, and those who had been taking ≥ 5 mg PSL-equivalent per day. Subjects eligible for this study met the following criteria: (1) age 20-80 years; (2) dose of steroid unchanged within a 3-month period; and (3) HbA1c concentration changed by ≤ 0.2% over a 2-month period. The exclusion criteria were as follows: (1) type 1 DM; or (2) presence of infectious disease, thyroid disease, malignant tumor, liver cirrhosis, unstable DM, anemia, or chronic renal disease (estimated glomerular filtration rate < 60 ml/min/1.73 m², proteinuria, or hematuria). The study protocol was approved by the ethics committee of our university (Approval No. 28-139, approval date 2016/8/3) and was designed in accordance with the Declaration of Helsinki. This study did not involve an intervention, and was retrospective and observational in nature, and therefore did not require a trial registration ID.
Measurement of A1c and GA
Plasma A1c was measured using an automated high-performance liquid chromatography analyzer (HLC-723-G9; Tosoh Corporation, Tokyo, Japan) and is presented as a National Glycohemoglobin Standardization Program value. GA concentration was measured using an enzymatic method involving albumin-specific proteinase, ketoamine oxidase, and albumin assay reagent (Lucica GA-L; Asahi Kasei Pharma Corp., Tokyo, Japan).
Statistical Methods
Using a Monte Carlo simulation, we determined that 48 patients would be sufficient to detect a mean reduction of 0.2 in GA per unit of PSL, with an SD of 2.4 for GA and an SD of 5 for PSL, at 82% power and an α level of 0.05. Patients with CTDs were divided into two treatment groups: those who had been taking < 5 mg and those who had been taking ≥ 5 mg PSL per day. Continuous variables are presented as median and 25th and 75th percentiles. Categorical variables are presented as numbers and percentages. Normally distributed clinical characteristics [age, hemoglobin (Hb), plasma Alb and creatinine (Cre), and body mass index (BMI)] were compared using Student's t test. A multivariable linear regression model was used to evaluate the relationship between the dose of PSL (< 5 mg PSL or ≥ 5 mg PSL) and GA, with adjustment for age, sex, BMI, Alb, Cre, and A1c. These factors were chosen a priori on the basis of their clinical significance and possible effect on GA. To evaluate multicollinearity, the variance inflation factor (VIF) was calculated for each variable. VIF > 5 was considered to indicate collinearity. The reliability of the model was internally validated using the bootstrap method. The optimism parameter in a calibration plot was calculated as the degree of overfitting, which was estimated using 150 sets of bootstrap sampling. We also conducted an analysis to assess the non-linear association between the dose of PSL as a continuous variable and plasma GA, in which PSL dose was modeled with the use of restricted cubic splines to allow for non-linear association. All statistical analyses were performed using R statistical software (version 3.5.1; available at http://www.r-project.org) and the "rms" package.
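The authors do not publish their simulation code, and their analysis was done in R with the "rms" package; the sketch below is a minimal Python reconstruction of the kind of Monte Carlo power calculation described, assuming normally distributed PSL doses and GA residuals with the stated SDs. The assumed mean dose (5 mg) and intercept (15%) are illustrative and not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(n=48, slope=-0.2, sd_psl=5.0, sd_ga=2.4,
                    alpha=0.05, n_sim=10_000):
    """Fraction of simulated studies in which the PSL slope on GA is
    detected at the given alpha level (simple linear model)."""
    hits = 0
    for _ in range(n_sim):
        psl = rng.normal(5.0, sd_psl, n)               # assumed dose distribution
        ga = 15.0 + slope * psl + rng.normal(0.0, sd_ga, n)
        if stats.linregress(psl, ga).pvalue < alpha:
            hits += 1
    return hits / n_sim

print(f"estimated power: {simulated_power():.0%}")     # close to the stated 82%
```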
Subject Characteristics
A total of 48 patients (19 men and 29 women) were included in the final analysis (Table 2). Twenty-one patients had been taking < 5 mg PSL per day and 27 had been taking ≥ 5 mg PSL per day. The characteristics of the patients are shown in Table 1. Age, gender, BMI, Hb, and plasma A1c and Cre were similar between the two groups (Table 2), but the group taking ≥ 5 mg PSL tended to have lower plasma Alb (Table 2).
High Dose PSL Lowers Plasma GA Adjusted for Age, Sex, BMI, Alb, and Cre

To determine whether prednisolone use influenced GA concentration, we performed multiple linear regression analysis on plasma GA (Table 3). The relationship between the dose of PSL and GA was significant (p = 0.03) after adjustment for age, sex, BMI, Alb, Cre, and A1c. The ≥ 5 mg PSL group had a lower GA [β = -1.36; 95% confidence interval (CI) -2.59 to -0.14; Table 3, Fig. 1]. There was an approximate 0.06 increase in GA per 1-year increase in age (β = 0.06; 95% CI 0.03-0.09; p < 0.001; Table 3). There was also a trend towards GA being 0.22 lower in men (β = -0.22; 95% CI -1.80 to 1.37; p = 0.784; Table 3, Fig. 2). An increase of 1 unit of BMI was associated with an approximate 0.14 reduction in GA (β = -0.14; 95% CI -0.28 to -0.01; p = 0.042; Table 3). In addition, GA tended to be about 0.4 higher per unit of Alb (β = 0.4; 95% CI -2.1 to 2.9; p = 0.749; Table 3) and about 1.59 higher per unit of Cre (β = 1.59; 95% CI -3.74 to 6.91; p = 0.55; Table 3). An increase of 1 unit of A1c was associated with an approximate 1.4 increase in GA (β = 1.4; 95% CI 0.38-2.42; p = 0.008; Table 3). Box plots of the predicted GA value, adjusted for the covariates, are shown in Fig. 1. The adjusted R² was 0.518, suggesting that the regression model explained differences in GA well. The VIFs suggested that collinearity was not present among the variables (the VIFs were 1.35 for age, 2.30 for sex, 1.09 for BMI, 1.33 for Alb, 2.26 for Cre, and 1.49 for A1c). The optimism parameter for the regression model was 0.09, meaning that the degree of overfitting was 9.2%; thus, there was no evidence of overfitting. Finally, the dose of PSL as a continuous variable was negatively correlated (p = 0.003) and weakly non-linearly associated (p = 0.46) with GA adjusted for age, sex, BMI, Alb, Cre, and A1c (Fig. 2), indicating that plasma GA decreases as the dose of PSL increases.
DISCUSSION
In this study, we investigated whether steroid administration affects GA. Multiple linear regression analysis showed that PSL dose, age, BMI, and A1c significantly correlated with plasma GA. After adjustment of GA for age, sex, BMI, Cre, and A1c, it was lower in patients taking ≥ 5 mg PSL than in those taking < 5 mg PSL. Therefore, in subjects undergoing high-dose (≥ 5 mg) PSL therapy, it is possible that the use of GA may be associated with an underestimation of the severity of hyperglycemia.
This study is the first to reveal that high-dose PSL lowers GA, although it was a small retrospective study. A previous study has reported related findings [9], and a patient treated with 20 mg prednisolone has also been described who exhibited GA-to-A1c ratios that were persistently lower (1.49-1.80) than the ratio normally present in type 2 DM patients (2.5) [8]. It is known that alterations in albumin concentration can affect GA, and glucocorticoids upregulate both protein synthesis and degradation, thereby increasing protein turnover [10]. In fact, plasma albumin in patients taking ≥ 5 mg PSL was significantly lower than that in those taking < 5 mg PSL. When it is also considered that plasma GA in patients with nephrotic syndrome and hyperthyroidism is lower than would be expected according to the degree of glycemic control [11,12], it is likely that high-dose PSL causes a reduction in GA as a result of altered albumin turnover. The use of glucocorticoids can be associated with the development of osteoporosis, osteonecrosis, cataracts, hyperglycemia, coronary heart disease, and cognitive impairment, but the minimum dose required to induce these adverse effects has been shown to be about 5 mg PSL per day [13,14]. Therefore, high-dose PSL is often defined as ≥ 5 mg PSL [13,14]. Moreover, the incidence of DM in patients taking ≥ 5 mg PSL was five times higher than in patients taking < 5 mg PSL [15]. Therefore, we divided the subjects in this study into those taking < 5 mg PSL and those taking ≥ 5 mg PSL. Our data show that the use of GA as an indicator of glycemic status in patients taking high doses of steroids may be associated with an underestimation of the severity of hyperglycemia.

Fig. 2 Correlation between plasma glycated albumin and prednisolone dose. A multivariable linear regression model was used to evaluate the association between the dose of prednisolone (PSL) (as a continuous variable) and glycated albumin (GA) adjusted for age, sex, body mass index (BMI), plasma albumin (Alb), plasma creatinine (Cre), and glycated hemoglobin A1c (A1c). The solid line indicates the predicted GA after adjustment for age 57.5, sex = male or female, BMI 23.38, Alb 4.2, Cre 0.625, A1c 5.9. The gray band indicates the 95% confidence interval for the regression line.
It has previously been reported that A1c, age, and BMI significantly affect plasma GA levels [16-19]. Because A1c also indicates the severity of recent glycemia, it is closely correlated with GA [17,18], and the relationship has been described by HbA1c = 0.216 × GA + 2.978 (R² = 0.5882, p < 0.001) [18]. GA increases with age from infancy to adulthood, and therefore age-adjusted GA can more accurately reflect glycemic status [19,20]. Most obese individuals are insulin resistant, which can be associated with postprandial hyperglycemia, which tends to increase GA rather than A1c [7,21,22]; for example, the GA-to-A1c ratio is often higher in patients with type 1 DM [7,21]. Moreover, the risk of steroid-induced DM is greater in individuals with a higher BMI [23]. However, our data show that GA is negatively correlated with BMI, which is consistent with many previous reports [24-26]. Finally, other authors have speculated that greater protein turnover and inflammation may contribute to the observed GA levels in obese subjects [26]. Although the mechanism was not elucidated in this study, A1c, age, and BMI have been shown to affect GA concentration.
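As a small worked example of the published relationship cited above, the sketch below converts between GA and A1c using the regression from reference [18]. This is a population-level approximation (R² ≈ 0.59), not a clinical conversion tool, and the sample value is illustrative.

```python
def a1c_from_ga(ga_percent: float) -> float:
    """Estimate HbA1c (%) from glycated albumin (%) using the published
    regression HbA1c = 0.216 * GA + 2.978 (R^2 = 0.5882) [18]."""
    return 0.216 * ga_percent + 2.978

def ga_from_a1c(a1c_percent: float) -> float:
    """Invert the same regression to estimate GA (%) from HbA1c (%)."""
    return (a1c_percent - 2.978) / 0.216

print(f"GA 16.0% -> A1c ~{a1c_from_ga(16.0):.1f}%")  # ~6.4%
```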
This study had a number of limitations. First, it was not randomized, it was small, and it was conducted in only one hospital; therefore, we are mindful that there is a risk of selection bias. To further investigate the effects of PSL dose on GA, a prospective study (such as a cohort study) is required. Second, the range of A1c concentrations was limited to 5.0-7.5%; therefore, conclusions regarding GA can only be drawn for patients with A1c values within this range. Third, we used A1c as a marker of glycemic control, rather than multiple or continuous peripheral blood glucose measurements, which can provide a more accurate picture of glycemic control but would not be covered by health insurance in Japan. Finally, we only studied subjects with connective tissue diseases and not healthy individuals, in whom we would not have had the opportunity to measure A1c and GA; therefore, we could not compare data obtained from diseased subjects with healthy subjects.
CONCLUSION
We have demonstrated that high doses of PSL, as well as age, high BMI, and high A1c, are associated with relatively low GA concentrations compared with the severity of hyperglycemia. Thus, when high-dose (≥ 5 mg) PSL is administered to patients with CTDs, the use of GA might be associated with an underestimation of the degree of glycemic control.
Compliance with Ethics Guidelines. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. This research used non-identifiable data obtained by the treating physicians, and therefore on the basis of the decision of the local ethics committee of our university (Approval No. 28-139, approval date 2016/8/3), informed consent was not required. This study was not interventional but retrospective and observational, and therefore did not have a trial registration ID.
Data Availability. The datasets collected and/or analyzed during the current study are available from the corresponding author on reasonable request.
Open Access. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2018-12-15T14:02:37.548Z | 2018-12-13T00:00:00.000 | {
"year": 2018,
"sha1": "0ec15aca0a93c59d404453c13a90430f11c992f2",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13300-018-0552-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ec15aca0a93c59d404453c13a90430f11c992f2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266288402 | pes2o/s2orc | v3-fos-license | COMMUNITY PERCEPTION OF PADANG VILLAGE, MANGGENG SUB-DISTRICT, SOUTHWEST ACEH DISTRICT ON THE IMPLEMENTATION OF SUSTAINABLE FOOD HOME AREA (KRPL)
The purpose of this study was to describe the perceptions of the Padang Village community, Manggeng District, Aceh Barat Daya (Southwest Aceh) District, towards the Sustainable Food Home Area (KRPL) program. This research was conducted in Padang Village, Manggeng District, Southwest Aceh District, from January 2023 until completion. The sampling model in this study used simple random sampling, namely sampling regardless of strata within one population. The study population was 310 heads of households; the sample was drawn from the population using the Slovin formula with a margin of error of 15% (e = 15%), giving a sample of 39 heads of households. The data sources used in this study consist of primary data and secondary data. The data analysis method used a Likert scale, with the perception index calculated as the ratio of the total achieved score to the ideal score for each criterion. The results on community perceptions of the implementation of Sustainable Food Home Areas (KRPL) show that each indicator can be categorized as Very Suitable: the Benefit indicator with a score index of 87.30%, the Ease of Implementation indicator with a score index of 89.74%, the Conformity in Application indicator with a score index of 91.03%, and the KRPL Sustainability indicator with a score index of 91.03%.
RESEARCH METHOD
This research was conducted in Padang Village, Manggeng District, Southwest Aceh District, from January 2023 until completion. The sampling model in this study used simple random sampling, namely sampling without regard to strata in one population (Sugiyono, 2015). The study population consisted of 310 heads of households, and the sample was drawn from the population using the Slovin formula:

n = N / (1 + N·e²)

where n is the sample size to be taken, N is the population size, and e is the margin of error (here, e = 15%). The data sources used in this study consist of primary data and secondary data. Primary data are obtained from a prepared questionnaire, administered through direct interviews with the heads of families selected as the sample. Secondary data are obtained from sources issued by related parties, such as village offices and food security agencies with official websites, as well as literature related to the research.
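To make the sample-size step concrete, here is a minimal Python sketch of Slovin's formula as used above. The function name is ours, and rounding up to a whole respondent is an assumption about the authors' procedure.

```python
import math

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole
    respondent (rounding up is assumed, not stated in the paper)."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

print(slovin_sample_size(310, 0.15))  # -> 39, matching the study's sample
```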
The data collection techniques used in this study are: 1. Observation. Observation is an activity carried out to obtain respondent data directly in the field.
2. Interview
Interviews were conducted using a questionnaire that had been designed and prepared in advance, by putting the questions to the respondents directly. The variables in this study include: a. Respondent identity, consisting of: 1) age, the respondent's age calculated in years; 2) education, the level of education completed by the respondent, counting from SD, SLTP, SMA, DII, and SI; and 3) occupation, the respondent's profession or permanent job. b. Community perceptions of Sustainable Food Home Areas (KRPL), measured by four indicators: (1) benefits, (2) ease of application of the technology, (3) compatibility of the technology with the yard, and (4) KRPL sustainability (Mira 2017), each in the form of statements related to the research theme.

The statements, grouped by indicator (Table 01), are: Benefits - (1) saving kitchen expenses, (2) increasing family income, (3) maintaining family food availability, (4) variations in daily consumption; Ease of application - (5) the ease of implementing plant cultivation using polybags, (6) ease of application of plant cultivation with a verticulture system; Appropriateness in application - (7) suitability in the application of plant cultivation using polybags, (8) compatibility in the application of plant cultivation with the verticulture system; KRPL sustainability - (9) availability of KRPL supporting facilities and infrastructure at the agriculture depot, (10) support of the environment in its application.

This research is qualitative descriptive research, with the aim of interpreting natural or human-made phenomena or data in tabulated form without the purpose of generalization (Sukmadinata, 2017). This qualitative descriptive study uses a Likert scale to measure community perceptions of KRPL across the four indicators (benefits, ease of implementation, compatibility with the yard, and sustainability of KRPL). The alternative answers for respondents' perceptions on the Likert scale can be seen in the accompanying table.

From Table 02 it can be seen that most of the people of Padang village make a living as farmers (148 people) or entrepreneurs (88 people); housewives (IRT) and civil servants (ASN) together account for 54 people; and the rest work as TNI/Polri (5 people), traders (10), fisherman (1), laborer (1), welders (3), carpenters (6), tailors (2), or are retirees (11). The respondents in this study were still of productive age, with 87.18% of respondents in the productive-age range. Tjiptoherijanto (2001) states that, overall, the respondents belong to the productive age category, which ranges from 15 to 65 years of age; these respondents are thus residents who are still productive and able to work and produce something (Young & Arfan, 2016).

Perception is the experience of objects, events, or relationships obtained by inferring information and interpreting messages; perception gives meaning to sensory stimuli (Rakhmat 2005; Arifin et al. 2017). Perception in this case is the view of the people of Padang village, Manggeng sub-district, Southwest Aceh district, of the Sustainable Food Home Area (KRPL) program. Perceptions of KRPL are broken down into four indicators: perceptions of benefits, ease of application, suitability in application, and sustainability of KRPL. The following is a description of each indicator:
Community Perception of the Benefits of KRPL
The level of perception of the Padang village community towards the benefits of the KRPL program is categorized as Very Suitable, with an average percentage of 87.31%. The people of Padang village, Manggeng sub-district, Southwest Aceh district, have a very good perception of the benefits of the KRPL program, as can be seen from each statement under the benefit indicator (Table 04). Based on the research data, the highest score among the KRPL benefit statements is shown by the first statement (saving kitchen expenses), with a score of 181 and a percentage of 92.82%, in the Very Suitable category. The second statement (increasing family income) scored 149, a percentage of 76.41%, also in the Very Suitable category; the third statement (maintaining family food availability) reached 90.77%; and the fourth (variation of daily consumption) 89.32%. The KRPL program is an alternative that uses environmentally friendly yard utilization to meet food needs and family nutrition and to increase income, which in the end can improve welfare through community empowerment (Oka et al. 2016). In the perception of the Padang village community, the KRPL program can help minimize kitchen expenses, which are a necessity that must be met in every household and of course differ between households; utilization of the house yard is able to ease the kitchen expenses of each household. The habit of consuming food determines the consumption pattern, and food ingredients are closely related to food that is healthy, safe, and halal while at the same time fulfilling balanced nutritional requirements (Oka et al. 2016). Many respondents mentioned that the variation in consumption obtained from KRPL also depended on which species were cultivated. Variation in consumption from the implementation of KRPL is also not fully met, so people still have to buy certain needs in the market.
Community Perceptions of the Ease of Implementing KRPL
In implementing KRPL, innovation is of course needed. Innovation basically plays a good role in the development of KRPL; Prasentianti et al. (2012) mention that this is especially so in plant cultivation activities. Besides being able to improve and guarantee the quality of cultivation products, the right technology can reduce cultivation costs. This innovation will certainly affect the ease of application, including the ease of implementing plant cultivation using polybags and the ease of implementing cultivation with a verticulture system. The study of the perceptions of the Padang village community regarding the ease of implementing KRPL covers the level of ease of application and the level of suitability of the technology to environmental conditions and yards. Community perceptions of the innovations implemented in KRPL can be seen in the following table:

Table 05. Community perception of the ease of implementing KRPL
No. 1 - The ease of implementing plant cultivation using polybags: score 188 of 195 (96.41%)
No. 2 - Ease of application of plant cultivation with a verticulture system: score 162 of 195 (83.08%)
Total index: 350 of 390 (89.74%)
Data source: primary data, processed 2022

Based on the data presented in the table above, in general the perception of the Padang village community towards the innovations implemented in KRPL is Very Suitable, at 89.74%. This is influenced by each of the statements: the public's perception of the ease of application in polybags reached 96.41% and of the ease of application in verticulture 83.08%. These results mean that the innovations applied to ease the implementation of KRPL are easy to apply, even though each innovation is not equally suited to the conditions of every community member's yard. From the data it can also be concluded that the community's perception of ease is higher for polybag cultivation, because cultivating plants in polybags is very easy: besides allowing cultivation in narrow yards, the polybags can also be moved. Perception of verticulture cultivation is slightly lower. Verticulture is an agricultural cultivation system that is carried out vertically and in stages; it is a farming pattern that uses vertical planting containers to overcome land limitations (Supriyadi et al. 2013).
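To make the index arithmetic explicit, the Python sketch below reproduces the Total Index row of Table 05: a score of 350 against an ideal score of 5 points x 2 statements x 39 respondents = 390. The category cutoffs are our assumption, since the paper reports only the resulting labels.

```python
def perception_index(score: int, ideal_score: int) -> float:
    """Perception index (%) = achieved score / ideal score * 100."""
    return 100.0 * score / ideal_score

def category(index: float) -> str:
    # Assumed Likert interval cutoffs; the paper states only the labels.
    if index >= 75.0:
        return "Very Suitable"
    if index >= 50.0:
        return "Suitable"
    return "Less Suitable"

ideal = 5 * 2 * 39            # max rating x statements x respondents
idx = perception_index(350, ideal)
print(f"{idx:.2f}% -> {category(idx)}")  # 89.74% -> Very Suitable
```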
Indicator of conformity in implementation
The narrowing of cultivation land and the continuing increase in food demand require a way to carry out cultivation that can fulfill needs while relying only on narrow land. KRPL is present as a solution for food continuity in the midst of society. Padang village is one of the densely populated villages, and the carrying capacity of its yards gives the application of KRPL the potential to be developed into limited-land farming in the center of an already densely populated area. The perception of the people of Padang village, Manggeng sub-district, Southwest Aceh district, towards the suitability of implementing plant cultivation using polybags and with the verticulture system can be seen in the following table. Based on the data, in general the perception of the Padang village community regarding suitability in implementing KRPL is Very Suitable, at 91.03%. This is influenced by each statement, namely suitability in the application of plant cultivation using polybags (88.21%) and suitability in the application of plant cultivation with the verticulture system (93.85%). These results mean that the innovations implemented in KRPL are easy to apply. Interviews with the general public indicated that verticulture or polybag cultivation is very suitable to apply given the lack of land and narrow yards; the verticulture system, which basically uses terraced plots, is very suitable for narrow land, and plants cultivated in verticulture can simply use recycled materials. Nirwana et al. (2013) state that plants cultivated vertically should have high economic value, be short-lived, and have a root system that is not too extensive.
Community Perceptions of KRPL Sustainability
A development program can be said to be successful if the program can continue for an unspecified time (Meranti 2017).The sustainability of KRPL will continue by paying attention to aspects such as the availability of supporting facilities and infrastructure for KRPL at agricultural depots and supporting the environment in implementing KRPL.The table above shows that the perception of the Padang village community towards the sustainability of KRPL is included in the Very appropriate category with a percentage gain of 91.03%.This is because the amount of support that encourages the sustainability of KRPL is reflected in the indicator Availability of supporting facilities and infrastructure for KRPL at Agricultural depots with a percentage of 95.38% or categorized as very suitable and the indicator Supporting the environment in its application with a percentage of 91.03% is also included in the very suitable category.Indicators Availability of KRPL supporting facilities and infrastructure at agricultural depots with a percentage of 95.38%The score percentage was obtained because in general respondents considered that facilities and infrastructure to support KRPL sustainability were very easy to obtain at agricultural depots such as seeds, fertilizers, polybags, pesticides and other facilities.and other supporting infrastructure.While supporting the environment in its application with a percentage of 91.03% that the community considers the environment in implementing KRPL very safe, especially security in cultivation where the community is not worried about pests, especially attacks from livestock especially like goats because the village has given warnings not to release animals the livestock
CONCLUSION
From the results of this research on community perceptions towards the application of Sustainable Food Home Areas (KRPL), each indicator can be categorized as Very Suitable: the Benefit indicator with a score index of 87.30 percent, the Ease of Application indicator with a score index of 89.74 percent, the Conformity in Application indicator with a score index of 91.03 percent, and the KRPL Sustainability indicator with a score index of 91.03 percent. This means that, in general, the community agrees with or understands the application of KRPL, in which KRPL can save kitchen expenses, increase family income, maintain family food availability, and vary daily consumption; is easy and appropriate in its application, both for polybag and verticulture cultivation; and is supported by the availability of KRPL facilities and infrastructure and by the environment in its application in society.
In general, perception can be defined as the process of giving meaning to, and interpreting, the stimuli and sensations received by individuals, and it is strongly influenced by internal and external factors for each individual (Arifin et al. 2017). Restiyanti et al. (2005) revealed that the factors that influence perception can be grouped into two main groups: 1) internal factors, including a) experience, b) needs, c) assessment, and d) expectations; and 2) external factors, including a) external appearance, b) stimulus characteristics, and c) the environmental situation.
Table 01. Statements based on indicators
3. RESULTS AND DISCUSSION
3.1 General description of the research location
Padang Village lies in Manggeng District, Southwest Aceh Regency, with an area of approximately 340 hectares. Administratively and geographically, Gampong Padang is bordered to the west by Gampong Paya, to the east by Gampong Sungai Krueng Manggeng, to the north by Gampong Keudee, and to the south by Gampong Teungah. It is divided into three hamlets, namely Salak Hamlet, Jambu Hamlet, and Bate Intan Hamlet. Data on the residents of Padang village, Manggeng sub-district, Southwest Aceh district in 2022 can be seen in the table below:
Table 01. Total population of Padang village by sex (data source: primary data, processed 2023)
The total population of Padang village is 1,120, of whom 51.70% (579) are male and 48.30% (541) are female. Their livelihoods can be seen in the table below.
Table 02. Distribution of the livelihoods of the Padang village community
Table 03. Age distribution of respondents (data source: primary data, processed 2023)
Table 04. Community perception of KRPL benefits
Table 06. Community perception of suitability in application
Table 07. Community perception of KRPL sustainability | 2023-12-16T16:51:37.480Z | 2023-09-06T00:00:00.000 | {
"year": 2023,
"sha1": "b990e172f0ab0959b4d8f98cb04e04a5a72d5cee",
"oa_license": "CCBYSA",
"oa_url": "http://radjapublika.com/index.php/IJEBAS/article/download/1192/1088",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "2d66a3e0ef12cb79f138e18204960806dc43c02a",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": []
} |
16544645 | pes2o/s2orc | v3-fos-license | Transient receptor potential ankyrin 1 (TRPA1) is functionally expressed in primary human osteoarthritic chondrocytes
Background Transient receptor potential ankyrin 1 (TRPA1) is a membrane-associated cation channel, widely expressed in neuronal cells and involved in nociception and neurogenic inflammation. We showed recently that TRPA1 mediates cartilage degradation and joint pain in the MIA-model of osteoarthritis (OA) suggesting a hitherto unknown role for TRPA1 in OA. Therefore, we aimed to investigate whether TRPA1 is expressed and functional in human OA chondrocytes. Methods Expression of TRPA1 in primary human OA chondrocytes was assessed by qRT-PCR and Western blot. The functionality of the TRPA1 channel was assessed by Ca2+-influx measurements. Production of MMP-1, MMP-3, MMP-13, IL-6, and PGE2 subsequent to TRPA1 activation was measured by immunoassay. Results We show here for the first time that TRPA1 is expressed in primary human OA chondrocytes and its expression is increased following stimulation with inflammatory factors IL-1β, IL-17, LPS, and resistin. Further, the TRPA1 channel was found to be functional, as stimulation with the TRPA1 agonist AITC caused an increase in Ca2+ influx, which was attenuated by the TRPA1 antagonist HC-030031. Genetic depletion and pharmacological inhibition of TRPA1 downregulated the production of MMP-1, MMP-3, MMP-13, IL-6, and PGE2 in osteoarthritic chondrocytes and murine cartilage, respectively. Conclusions The TRPA1 cation channel was found to be functionally expressed in primary human OA chondrocytes, which is an original finding. The presence and inflammatory and catabolic effects of TRPA1 in human OA chondrocytes propose a highly intriguing role for TRPA1 as a pathogenic factor and drug target in OA. Electronic supplementary material The online version of this article (doi:10.1186/s13075-016-1080-4) contains supplementary material, which is available to authorized users.
Background
Transient receptor potential ankyrin 1 (TRPA1) is a membrane-associated cation channel which mediates pain and hyperalgesia [1,2] and functions as a chemosensor of noxious compounds [3-5]. TRPA1 was first discovered in 1999 [6] and has since been found to be widely expressed in afferent sensory neurons, especially in Aδ and C fibers of nociceptors [7,8]. In addition to pain, TRPA1 also has a role in mediating neurogenic inflammation [9,10]. More recently, TRPA1 has been found to be expressed also in some nonneuronal cells such as keratinocytes [11] and synoviocytes [12], but the functional roles of this nonneuronal expression remain to be studied. TRPA1 is activated by numerous exogenous pungent compounds such as allyl isothiocyanate (AITC) from mustard oil [5], acrolein from exhaust fumes and tobacco smoke [9], and allicin from garlic [3]. Interestingly, TRPA1 is also activated and sensitized by agents formed endogenously in inflammatory reactions, such as nitric oxide [13], hydrogen peroxide [14], and nitro-oleic acid [15]. The activation of TRPA1 causes an influx of cations, particularly Ca2+, into the activated cells [16], and this elevation of intracellular Ca2+ has been shown to trigger an action potential in neuronal cells [16,17]. Interestingly, among the many regulatory effects of alterations in intracellular Ca2+ concentration, its increase has also been shown to affect the gene expression of inflammatory mediators [18-20].
Recent evidence suggests TRPA1 to have a role in inflammation through exogenous activation by TRPA1 agonists and also through endogenous mechanisms. TRPA1 has been shown to mediate carrageenan-induced inflammatory edema [21], tumor necrosis factor (TNF)-triggered hyperalgesia [22], and airway hyperreactivity and inflammation [23,24], and to relate to acute gouty arthritis [25,26]. Very recently, we found that TRPA1 has a role in mediating acute inflammation, cartilage destruction, and joint pain in monosodium iodoacetate (MIA)-induced inflammation and osteoarthritis in the mouse [27].
Osteoarthritis (OA) is the most common cause of musculoskeletal disability and pain worldwide, and its prevalence is constantly increasing as the population ages. OA is a degenerative disease of the joints, which is characterized by inflammation and hypoxia within the joint, leading to cartilage degradation, joint deformity, disability, and pain [28,29]. OA-related cartilage degradation is caused by a growing imbalance between the production of catabolic, anabolic, and inflammatory mediators within the joint, driven by the increased expression of matrix-degrading metalloproteinases and proinflammatory mediators such as interleukin (IL)-6 and prostaglandin E2 (PGE2) [28].
TRPA1 has not previously been investigated in chondrocytes. However, factors involved in hypoxia and inflammation, such as hydrogen peroxide (H2O2), nitric oxide (NO), and IL-6, have been shown to upregulate the expression or activation of TRPA1 in some other cells [12-14]. Furthermore, the activation of TRPA1 has been reported to enhance the production of inflammatory factors [12,21,26,30]. Since there is a hypoxic and inflammatory state in OA joints [28,31], and TRPA1 has been shown to be involved in the mediation of acute inflammation and cartilage degradation in MIA-induced osteoarthritis [27], we hypothesized that TRPA1 is expressed in the chondrocytes of osteoarthritic joints, where its activation could play a vital part in the inflammation and pathogenesis of OA. In the present study, we tested that hypothesis by measuring the expression and function of TRPA1 in primary human OA chondrocytes.
Cell culture
Primary chondrocyte cultures were carried out as previously described [32]. Leftover pieces of OA cartilage from knee joint replacement surgery were used under full patient consent. The patients in this study fulfilled the American College of Rheumatology classification criteria for OA [33] and the study was approved by the Ethics Committee of Tampere University Hospital, Tampere, Finland (reference number R09116), and carried out in accordance with the Declaration of Helsinki. The procedures to isolate and culture the primary chondrocytes are described in the supplementary data (Additional file 1). During experiments the cells were treated with IL-1β (R&D Systems Europe Ltd, Abingdon, UK), IL-17 (R&D Systems Europe Ltd.), lipopolysaccharide (LPS) (Millipore Sigma, St. Louis, MO, USA), resistin (BioVision Inc., Milpitas, CA, USA), the TRPA1 antagonist HC-030031 (Millipore Sigma) or with combinations of these compounds as indicated.
HEK 293 human embryonic kidney cells (American Type Culture Collection, Manassas, VA, USA) were cultured as described in the supplementary data (Additional file 1). The cells were transfected using 0.42 mg/cm² of human TRPA1 plasmid DNA (pCMV6-XL4 by Origene, Rockville, MD, USA) with Lipofectamine 2000 (Invitrogen, Life Technologies, Carlsbad, CA, USA) according to the manufacturer's instructions.
Animals
Wild-type (WT) and TRPA1 knockout (KO) male B6;129P-Trpa1(tm1Kykw)/J mice (Charles River Laboratories, Sulzfeld, Germany) aged 19-22 days were used in mouse cartilage culture experiments. Mice were housed under standard conditions (12-12 h light-dark cycle, 22 ± 1°C) with food and water provided ad libitum. Animal experiments were carried out in accordance with the legislation for the protection of animals used for scientific purposes (Directive 2010/63/EU) and the experiments were approved by The National Animal Experiment Board (reference number UTA 845/712-86). Animals were sacrificed by carbon monoxide followed by cranial dislocation.
Mouse cartilage culture
After the mice were euthanized, full-thickness articular cartilage from the femoral heads was removed and cultured as described in the supplementary data (Additional file 1). The cartilage pieces were exposed to IL-1β (R&D Systems Europe Ltd.) or its vehicle for 42 h, and thereafter the culture media were collected and matrix metalloproteinase (MMP)-3, IL-6, and PGE2 concentrations were measured by immunoassay.
Western blot measurements
After the cell culture experiments, total protein was extracted, and TRPA1 was immunoprecipitated and analyzed with Western blot as described in the supplementary data (Additional file 1). TRPA1 antibody NB110-40763 (Novus Biologicals, LCC, Littleton, CO, USA) was used as the primary antibody and goat antirabbit HRP-conjugate (sc-2004, Santa Cruz Biotechnology, Inc., Dallas, TX, USA) as the secondary antibody in the Western blot analysis.
RNA extraction and quantitative RT-PCR
At the indicated time points, total RNA was extracted and analyzed by quantitative reverse transcription polymerase chain reaction (qRT-PCR) for the expression of TRPA1 mRNA as described in the supplementary data (Additional file 1).
Statistical analysis
Data were analyzed using GraphPad InStat version 3.00 software (GraphPad Software, San Diego, CA, USA).
The results are presented as mean + standard error of the mean (SEM) unless otherwise indicated. Unpaired t test, paired t test, one-way analysis of variance (ANOVA) or repeated-measures ANOVA, followed by Dunnett's test were used in the statistical analysis. Differences were considered significant at p < 0.05, p < 0.01, and p < 0.001.
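The statistical workflow above used GraphPad InStat; as an analogous illustration only, the Python sketch below runs a one-way ANOVA followed by Dunnett's many-to-one comparison against a control group. The measurement values are invented, and scipy.stats.dunnett requires SciPy >= 1.11.

```python
import numpy as np
from scipy import stats

# Invented example values (pg/ml): vehicle control, IL-1beta alone,
# and IL-1beta plus the TRPA1 antagonist HC-030031.
control = np.array([100.0, 110.0, 95.0, 105.0])
il1b = np.array([400.0, 420.0, 390.0, 410.0])
il1b_hc = np.array([250.0, 270.0, 240.0, 260.0])

f_stat, p_value = stats.f_oneway(control, il1b, il1b_hc)
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_value:.2g}")

# Dunnett's test of each treatment group against the control
res = stats.dunnett(il1b, il1b_hc, control=control)
print("Dunnett p-values vs control:", res.pvalue)
```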
Results
TRPA1 is expressed in primary human OA chondrocytes and in the immortalized human T/C28a2 chondrocyte cell line

Primary human OA chondrocytes and the immortalized human T/C28a2 chondrocyte cell line expressed TRPA1. The expression was measured by quantitative RT-PCR on isolated total mRNA using a specific TaqMan assay. The proinflammatory cytokine IL-1β was found to increase TRPA1 expression in a time-dependent manner: in primary chondrocytes the expression of TRPA1 increased up to 48 hours and declined thereafter (Fig. 1a), whereas in the human T/C28a2 chondrocytes the expression maximum was at 6 hours (Fig. 1b). In addition, TRPA1 expression was also enhanced by the inflammatory factors IL-17, LPS, and resistin (Fig. 2).
To verify the translation of TRPA1 mRNA into protein, we extracted total protein from primary human OA chondrocytes and human T/C28a2 chondrocytes and performed Western blot analysis. HEK293 cells transiently transfected with TRPA1 plasmid were used as positive control and the protein was detected with a specific human TRPA1 antibody. Remarkably, both cell types were found to express TRPA1 protein as seen in Fig. 3.
Human chondrocytes express a functional TRPA1 channel
To confirm that the TRPA1 mRNA and the subsequent protein expressed by human chondrocytes produce a functional channel, Ca2+-influx measurements were carried out. Primary human chondrocytes and T/C28a2 chondrocytes were cultured with IL-1β, which was found to stimulate TRPA1 expression, or with its vehicle for 24 h, and thereafter TRPA1 was activated with the TRPA1 agonist AITC. IL-1β stimulation resulted in an increased responsiveness to AITC, seen as an enhanced Ca2+ influx, and the selective TRPA1 antagonist HC-030031 was shown to prevent this effect (Fig. 4).
MMP, IL-6 and PGE 2 production is downregulated by genetic depletion and pharmacological inhibition of TRPA1
After finding that functional TRPA1 was indeed expressed in chondrocytes, we aimed to further examine the possible arthritogenic role of the TRPA1 channel. We investigated the effect of genetic depletion of TRPA1 on the production of OA-related factors MMP-3, IL-6, and PGE 2 by using articular cartilage samples from TRPA1-deficient (knockout, KO) and corresponding wild-type (WT) mice. IL-1β treatment increased MMP-3, IL-6, and PGE 2 production in cartilage as expected. Remarkably, this response was significantly attenuated in the cartilage from the TRPA1 KO mice as compared to the corresponding WT mice (Fig. 5). Further, we treated primary human chondrocytes with IL-1β alone and together with the selective TRPA1 antagonist HC-030031 for 24 h. Interestingly, the selective TRPA1 antagonist HC-030031 downregulated IL-1β-enhanced MMP-1, MMP-3, MMP-13, IL-6, and PGE 2 production by 25-45 % (Fig. 6), suggesting that TRPA1 plays a role in the upregulation of these catabolic and inflammatory factors in OA cartilage.
Discussion
The findings of the present study suggest a hitherto unknown role for TRPA1 in the pathogenesis of OA. We have shown for the first time the expression of the TRPA1 channel in primary human OA chondrocytes and in the human T/C28a2 chondrocyte cell line. We showed the expression of TRPA1 mRNA and protein by qRT-PCR and Western blot, respectively. We were also able to show that the expressed TRPA1 was functional, as evidenced by Ca 2+ -influx measurements. Further, we found TRPA1 to have a role in mediating the production of OA-related factors MMP-1, MMP-3, MMP-13, IL-6, and PGE 2 as evidenced by pharmacological inhibition and genetic depletion of TRPA1.
TRPA1 was first discovered in 1999 in fetal lung fibroblasts [6]. Since then it has been mainly studied in different afferent sensory neurons such as Aδ and C fibers of nociceptors [7,8]. More recently, however, TRPA1 has also been found to be expressed in some nonneuronal cells such as keratinocytes [11,37,38], synoviocytes [12,39], and airway epithelial and smooth muscle cells [30]. It is noteworthy that not all of these studies have shown functionality of the TRPA1 ion channel, and some have only reported the expression of TRPA1 at the mRNA level. In the present study, we have comprehensively shown the expression and activation of TRPA1 in human chondrocytes, to support the criteria set by Fernandes et al. [40]. We were able to show for the first time the expression of both TRPA1 mRNA and protein and the functionality of the TRPA1 channel in primary human OA chondrocytes and in the human T/C28a2 chondrocyte cell line. This finding is particularly interesting as OA joints are characterized by a hypoxic [31] and inflammatory [28,41] state, and related factors H 2 O 2 , NO, and IL-6 have previously been shown to upregulate the expression and activation of TRPA1 [12][13][14]. According to Hatano et al. [12], the human TRPA1 promoter has at least six putative nuclear factor kappa B (NF-κB) binding sites and ten core hypoxia response elements (HREs), which are binding sites for hypoxia-inducible factor (HIF) transcription factors. HIFs are known to mediate adaptive responses to hypoxia as well as to be activated by inflammation [42,43], and the binding of HIFs to consensus HREs on their target genes regulates gene transcription.

After discovering TRPA1 expression in chondrocytes, we aimed to investigate whether inflammatory factors/mechanisms related to the pathogenesis of OA [28,29] regulate the expression of TRPA1, which would indicate a role for TRPA1 as a mediator in OA. IL-1β is considered a major player in the cartilage destruction associated with OA. IL-1β is elevated in OA joints: it suppresses type II collagen and aggrecan expression, stimulates the release of MMP-1, MMP-3, and MMP-13, and induces the production of IL-6 and some other cytokines as well as PGE 2 [28]. IL-17 in part feeds forward these mechanisms, as it further induces IL-1β, TNF, and IL-6 production, upregulates NO and MMPs, and downregulates proteoglycan levels related to the pathogenesis of OA [28]. Based on our results, IL-1β and IL-17 both also induce TRPA1 expression and, intriguingly, some of the IL-1β-induced inflammatory and catabolic effects are partly mediated by TRPA1.

In OA, the innate immune system, and in particular toll-like receptors (TLRs) activated by cartilage matrix degradation products, also plays a significant part in disease progression. Chondrocytes express TLRs, which trigger major inflammatory pathways and are activated by bacterial lipopolysaccharide (LPS) and damage-associated molecular patterns [29]; the adipocytokine resistin, known to be expressed in OA joints [44], has also been shown to transduce its effects through toll-like receptor 4 [45]. In the present study, we found that both LPS and resistin increased the expression of TRPA1 in human chondrocytes, suggesting a TLR-mediated mechanism to enhance TRPA1 expression in OA cartilage. In support of the present results, Hatano et al. showed that TRPA1 gene expression was enhanced in synoviocytes by the inflammatory factors TNF-α and IL-1 [12], and the present study together with that of Hatano et al. [12] suggests a previously unrecognized mechanism that links TRPA1 as an inducible factor to joint inflammation.
Activation of TRPA1 results in a substantial influx of Ca 2+ into the stimulated cells [46]. Here we verified the functionality and activation of the TRPA1 channel in human chondrocytes by measuring Ca 2+ influx using the TRPA1 agonist AITC as well as the TRPA1 antagonist HC-030031. As shown previously, an elevated intracellular Ca 2+ concentration may affect the expression of inflammatory genes either directly or indirectly [20]. In the present study, we found that TRPA1 regulated the production of inflammatory and catabolic factors, namely MMP enzymes, IL-6, and PGE 2 , in chondrocytes. IL-1-induced MMP-3, IL-6, and PGE 2 production in the cartilage from TRPA1-deficient mice was less than half of that found in the cartilage from wild-type mice. Accordingly, the selective TRPA1 antagonist HC-030031 reduced IL-1-induced MMP-1, MMP-3, MMP-13, IL-6, and PGE 2 production by 25-45 % in primary human OA chondrocytes. In the latter experiment, the cells were incubated in the presence of IL-1 and HC-030031 for 24 h; therefore the result may be an underestimate of the effect of total inhibition of TRPA1 in OA chondrocytes, because HC-030031 is a reversible TRPA1 antagonist with a relatively short half-life [47]. These findings are supported by previous studies indicating that TRPA1 activation regulates the production of IL-1 in keratinocytes [38], IL-6 and IL-8 in synoviocytes [12], and PGE 2 along with leukotriene B 4 in fibroblasts and keratinocytes [48]. We have recently found that TRPA1 also regulates the expression of cyclooxygenase-2 (COX-2) [21,27] and the production of monocyte chemotactic protein-1 (MCP-1), IL-6, IL-1β, myeloperoxidase (MPO), MIP-1α, and MIP-2 in inflammatory conditions [26]. The detailed molecular mechanisms of this regulation, however, remain to be studied.
TRPA1 is shown to be involved in pain, hyperalgesia, and neurogenic inflammation [10,16,49,50]. In OA-related pain, the role of TRPA1 has been investigated in studies by Moilanen et al. [27], McGaraughty et al. [51], and Okun et al. [52] using the MIA model of OA. The two first-mentioned studies [27,51] concluded that TRPA1 contributes to joint pain in experimental OA. In addition, Moilanen et al. [27] reported that TRPA1-deficient mice developed less severe cartilage changes following MIA injections. Accordingly, we showed here that TRPA1 is functionally expressed in chondrocytes. We also examined the possible functions of the channel by treating primary chondrocyte cultures with IL-1β and the selective antagonist HC-030031 [2,53,54]. Our results suggest an inflammatory and catabolic role for TRPA1 in human chondrocytes, as we found inhibition of TRPA1 to suppress the production of the OA-related factors MMP-1, MMP-3, MMP-13, IL-6, and PGE 2 . These results were supported by experiments with cartilage from WT and TRPA1-deficient mice: following stimulation with IL-1β, MMP-3, IL-6, and PGE 2 production was lower in the cartilage from TRPA1-deficient mice than from WT animals. These results together suggest that TRPA1-activating factors are present in OA joints, and that TRPA1 mediates, at least partly, OA-related pain, inflammation, and cartilage destruction in neuronal and nonneuronal cells in the joint.

[Figure 5 caption: IL-1β-induced production of MMP-3 (a), IL-6 (b), and PGE 2 (c) in the cartilage is attenuated by genetic depletion of TRPA1. Cartilage samples were obtained from TRPA1-deficient (knockout, KO) and corresponding wild-type (WT) mice. The samples were cultured with and without IL-1β (100 pg/ml) for 42 h, and thereafter the culture medium was collected and analyzed for concentrations of MMP-3, IL-6, and PGE 2 by immunoassay. The results are expressed as mean + SEM, n = 6-9. Unpaired t test was used in the statistical analysis; * p < 0.05, ** p < 0.01, and *** p < 0.001 compared to the WT mice. IL interleukin, MMP matrix metalloproteinase, PGE 2 prostaglandin E 2 , TRPA1 transient receptor potential ankyrin 1]
Conclusions
In conclusion, we found the TRPA1 cation channel to be functionally expressed in primary human OA chondrocytes and in part to mediate inflammatory and catabolic effects, which are both original findings. The inflammatory and hypoxic environment in the OA joint is conducive to enhanced expression and activation of TRPA1. The presence and effects of TRPA1 in human OA cartilage as found in the present study, together with the previous findings on TRPA1 in experimentally induced OA [27,51], propose an intriguing role for TRPA1 as a mediator and drug target in OA.

[Figure 6 caption: IL-1β-induced production of MMP-1 (a), MMP-3 (b), MMP-13 (c), IL-6 (d), and PGE 2 (e) in primary human OA chondrocytes is attenuated by pharmacological inhibition of TRPA1. Primary human OA chondrocytes were stimulated with IL-1β (100 pg/ml) in the presence and absence of the selective TRPA1 antagonist HC-030031 (100 μM) for 24 h. MMP-1, MMP-3, MMP-13, IL-6, and PGE 2 concentrations in the culture media were measured by immunoassay and the results are expressed as mean + SEM. Samples were obtained from eight patients and the experiments were carried out in duplicate. Paired t test was used in the statistical analysis; * p < 0.05, ** p < 0.01, and *** p < 0.001 compared to the IL-1β-treated samples. IL interleukin, MMP matrix metalloproteinase, OA osteoarthritis, PGE 2 prostaglandin E 2 , TRPA1 transient receptor potential ankyrin 1] | 2018-04-03T06:18:04.152Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "919cae1056990283a37e5a2dc4a78004be431172",
"oa_license": "CCBY",
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/s13075-016-1080-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "919cae1056990283a37e5a2dc4a78004be431172",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry",
"Psychology"
]
} |
246319253 | pes2o/s2orc | v3-fos-license | Sonification of weather data as a non-human-centric artistic approach [version 1; peer review: awaiting peer review]
Background – In the mid-20th century, the emergence of sound studies demonstrated a shift of research interest among sonic practitioners. The field gained prevalence by expanding the boundaries of prevailing conceptions and proposing alternative creative approaches in sound art practices.

Methods – Two methods were presented, listening and sounding, to promote creative sound making. The first method, listening, involves soundwalking and recording sound from external environments. These recordings were then re-evaluated and post-processed in audio editing software. The second method, sounding, involves the creation of a weather data sonification system in the Pure Data environment, in which the perceptual experience from the first method was taken into consideration.

Results – The first method enables the genesis of creative idiosyncrasies, such as preferences and ideas, through the sonic perception of environmental events. In this process, noise and weather were identified as environmental components that share similar sensible qualities. Thus, noise is a prevalent medium that inspires the creation of the sound generators in the sonification system presented in this paper. The sonic output of data sonification reveals an analogical connection between weather data and sonic parameters, in which changes in data values result in changes in acoustic properties. These outputs deliver different sensibilities based on their data parameters; sonification of temperature data, for example, might suggest an alarming effect to the listener.

Conclusions – The proposed methods were intricately linked, suggesting that environmental events can be perceived and realized through a non-scientific perspective. By highlighting the aesthetic possibilities of environmental components, this paper presents an alternative to the human-centric worldview through the creation of sonic works.
Introduction
In the 21st century, sound studies research 1 prevailed as an academic field to investigate and reflect on the continuous establishment of strategies, culture, and aesthetics, amid the emergence of new sonic practices. The research was categorized into different tropes which focused on sonic perception, sonic sites and soundscapes, sonic reproduction, artists and collectives, and sonic aesthetics. Therefore, the strategies and creative possibilities of sonic methods under the category of soundscape were investigated. In other literature, the term soundscape was defined as 'events heard, not objects seen', whereas in this research it was defined as a discipline that examines the effects of the acoustic environment on the creatures and entities living within it. This then contributes to the establishment of various noise abatement and musical practices as methods of advocating and appreciating the beauty of environmental soundscape. 2 Although soundscape research started as a musical endeavour for appreciation of environmental sound, which proclaimed the world as a 'macrocosmic musical composition', it only focused on collecting and examining 'non-polluted' sound through field recordings. Thus, this contradiction offers the possibilities of sonic practitioners to extend and practicalize its original concepts into other creative practices.
Despite this, soundscape research encouraged sonic practitioners and non-practitioners alike to engage with the environment through listening. Instead of focusing on sound from the natural environment, works from avant-garde practitioners explore the creative possibilities of sound indiscriminately across different environments. In Sonic Meditations, 3 the composer Oliveros encouraged a series of experimental sonic activities such as listening to and recording environmental sounds, journaling listening experiences, and producing sound works as part of deep listening practice. By contrast, Feld's acoustemological approach derived from a non-human perspective: acknowledging the relations shared with numerous human and non-human actants, and ultimately the construction of the world through 'sonic ways of knowing'. 4,5 On a similar notion, this paper engages with and discusses human and non-human relations through sonic means. We propose two methods: listening and sounding. First, listening and soundwalking were conducted in order to observe and examine the overlap of sonic components emitted from human and non-human influences in the physical environment. In the hope of better understanding other, non-sonic events of the environment, weather data was collected through a virtual open-source platform. Second, sounding involves analysing, post-processing, and re-evaluating the sound samples recorded in the first method. The findings of the analysis enable relations to be drawn between sonic properties and qualities, which then guide the process of data sonification.
Method 1: Listening and data collection
Listening in the context of this paper draws closely on the methods promoted in previous deep listening practices. 6 This involved observing and recording sonic events simultaneously while navigating the sonic landscapes of different environments. In short, this method of perceiving defines the practice of soundwalking.
The process of soundwalking first involved observing and listening attentively to the different sounds and sonic events that took place in the surrounding environment. This process was facilitated by a digital audio recorder (Zoom H1n), which enables sounds with different properties to be heard and detected as audio signal input. In other words, this facilitation intensified the listening experience: sounds with lower amplitudes and higher frequencies that were less audible, such as forest ambience and bird songs, were amplified and heard through the device's integrated stereo microphones. Based on the properties of the sound, the gain of the signal inputs was monitored and adjusted accordingly.
Soundwalking then proceeded with identifying and recording environmental sound with desirable sensible qualities, specifically the surrounding ambience that is often perceived as 'noise' or 'insignificant sound'. Based on the understanding that sound possesses distinctive qualities in different spatiotemporal settings, soundwalking was conducted in both natural and urban environments. Decisions such as moving towards or away from a sound source were made on-site based on the changing qualities of the environment. Each recording lasted 5-7 minutes; durationally, these recordings can therefore be perceived as sonic events that document the multiplicities of sonic components emitted from non-human and human producers.
Soundwalking enables environmental events to be perceived primarily through sonic means; however, these events can also be perceived through other technological means. We identified weather as a class of events that shares similar sensible qualities with the ambience recordings, that of being 'unpredictable' and 'indeterminate'. In order to examine the changing process of weather events on a specific timeline, weather data for the soundwalking site was collected through a virtual open-source platform known as Open Weather Map (OWM), which integrates several data sources such as numerical weather prediction (NWP), weather stations, and satellite data. 7 The data was retrieved in numerical format through Application Programming Interface (API) calls and stored externally in comma-separated values (CSV) files, as shown in Figure 1. Hence, weather data collection can be seen as an extension of soundwalking: a method of environmental engagement.
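As an illustration of this retrieval step, the following is a minimal sketch of fetching one observation through the OWM current-weather endpoint and appending it to a CSV file. The API key, coordinates, and field selection are placeholder assumptions, and the endpoint and response schema should be checked against the current OWM documentation.

```python
# Minimal sketch: retrieve one weather observation from OpenWeatherMap
# and append it to a CSV file. Key and coordinates are placeholders.
import csv
import requests

API_KEY = "YOUR_OWM_API_KEY"          # hypothetical key
LAT, LON = 3.139, 101.687             # hypothetical soundwalking site

url = "https://api.openweathermap.org/data/2.5/weather"
resp = requests.get(url, params={"lat": LAT, "lon": LON,
                                 "appid": API_KEY, "units": "metric"},
                    timeout=10)
resp.raise_for_status()
data = resp.json()

row = {
    "timestamp": data["dt"],                      # Unix time of observation
    "temperature": data["main"]["temp"],          # degrees Celsius
    "clouds": data["clouds"]["all"],              # cloud coverage, percent
    "rain_1h": data.get("rain", {}).get("1h", 0)  # rain volume, mm (if any)
}

with open("weather.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=row.keys())
    if f.tell() == 0:                 # write header only for a new file
        writer.writeheader()
    writer.writerow(row)
```

Repeating such a call on a timer during or after a soundwalk yields the time-stamped CSV rows used later in the sonification.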
Method 2: Sounding, data analysis and sonification
Recordings of the sonic events were categorized based on the details of recording sites, sonic components and sources. For example, sound recordings of forests were categorized as the natural soundscape that consists of bird song, wind noise and hums, whereas sound recordings of cityscape were categorized as the urban soundscape that consists of noises of transportation and traffic as some of the major sonic components. These details were documented to ensure future navigation.
In order to narrow down the variables of the recordings, this paper focuses only on sound that shares similar sensible qualities with the environment. Sound emitted from other sources, such as birds, people, and transportation, was not sampled or used for analysis. Hence, the sound of the surrounding ambience, such as the noise of wind and other geophysical influences, was excerpted from the recordings into shorter samples. These sampled sounds were most often perceived as 'background noise' or rumbles, in which the sources of emission were arbitrary or unrecognizable. To examine the sensible qualities of the background ambience, the acoustic properties of different samples were compared through audio analysis tools, namely a frequency and an amplitude follower.
We selected samples from both urban and natural environments; the comparison is better visualized in the form of spectrograms. The values of amplitude (dB) and frequency (Hz) were annotated on each spectrogram. The analysis was done with Sonic Visualizer. The figures below depict the result of acoustic analysis across a specific duration: Figure 2 shows a higher fluctuation of both amplitude and frequency as compared to Figure 3. The comparison of acoustic properties was summarized in Table 1.
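For readers who wish to reproduce this comparison outside Sonic Visualizer, the sketch below derives an amplitude trace (in dB) and a crude dominant-frequency trace from a spectrogram and summarizes their fluctuation. It is an illustrative stand-in for the analysis above; the file names are placeholders, and mono 16-bit WAV excerpts are assumed.

```python
# Minimal sketch of the spectrogram-based comparison described above.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def analyze(path):
    sr, x = wavfile.read(path)
    x = x.astype(np.float64) / 32768.0          # normalize 16-bit PCM
    f, t, sxx = spectrogram(x, fs=sr, nperseg=2048)
    power_db = 10 * np.log10(sxx + 1e-12)       # amplitude follower, in dB
    dominant = f[np.argmax(sxx, axis=0)]        # crude frequency follower
    return power_db.max(axis=0), dominant

amp_city, freq_city = analyze("city_ambience.wav")      # placeholder file
amp_forest, freq_forest = analyze("forest_ambience.wav")  # placeholder file
print("city:   amp std (dB) =", amp_city.std(),
      "| freq std (Hz) =", freq_city.std())
print("forest: amp std (dB) =", amp_forest.std(),
      "| freq std (Hz) =", freq_forest.std())
```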
The difference in acoustic properties results in a difference in sensible qualities: samples of the cityscape could be perceived as loud, chaotic, and noisy, whereas samples of the forest could be perceived as soothing, calming, and unobtrusive. These identified qualities enable the genesis of idiosyncratic preferences, knowledge, and sensitivity regarding the sensibilities of noise. They thereby indirectly inspire and influence the decision-making process of creating sound generators in the sonification process.
Sound generators were created in Pure Data (PD), an open-source visual programming environment, to sonify the different parameters of the weather data collected previously. The sonification process translates numerical data into sonic outputs, in which the sonic parameters of the generators are modified by the incoming data parameters. 8 Each data point maps to frequencies and amplitudes as shown in Table 2. Consequently, this mapping allows the changes in data points to be revealed and perceived across time.
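As a minimal sketch of the kind of mapping implied by Table 2, the function below linearly rescales an incoming data value into a frequency and an amplitude. The input and output ranges are illustrative assumptions, not the values used in the actual PD patch.

```python
# Illustrative linear mapping from a data value to sonic parameters.
def linmap(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    value = max(in_lo, min(in_hi, value))
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def map_temperature(temp_c):
    # warmer -> higher pitch and slightly louder (illustrative choice)
    freq = linmap(temp_c, 20.0, 40.0, 220.0, 880.0)
    amp = linmap(temp_c, 20.0, 40.0, 0.4, 0.8)
    return freq, amp

print(map_temperature(27.5))   # -> (467.5, 0.55)
```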
To distinguish the weather data parameters from one another, the parameters were assigned to three different sound generators, each synthesized to deliver different sensible qualities. In order to represent the distinctive changes of the temperature data, a sawtooth wave oscillator with an amplitude envelope of short release time was used to deliver a sharper sonic quality. In contrast, to represent the fluid and irregular physical quality of clouds, a sine wave oscillator and a white noise generator were used, enhanced with slight reverberation to ensure a smoother sounding quality. Lastly, rain volume was represented with white noise generators shaped by structures of high-pass and low-pass filters to mimic a rainy ambience. Figure 4 shows examples of sound generators in the sonification system. To afford control over the process of sonification, the system was designed with a few interactive parameters. The front-end of the system was organized into two parts, as shown in Figure 5: the top part displays control parameters for the input weather data, consisting of two options of data sets and a controllable data reading interval (milliseconds), whereas the bottom part displays the volume control (dB) for the sonic output. In consequence, the sonic output varies each time the control parameters are modified.
For a desirable result, the control parameters were held constant. Daily data was chosen from the dataset and its reading interval was set to 350 milliseconds, meaning that each data point was read and translated into sound at the given time interval. The weather data parameters, namely temperature, cloud coverage, and rain volume, were sonified and exported separately into three (.wav) files with a duration of 3 minutes. Each sonification output was later visualized as a spectrogram to draw a relation between the weather data and the sonic parameters.
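The following is a minimal offline sketch of this rendering step: each CSV row drives 350 ms of audio from a sawtooth generator with a short-release envelope (the temperature generator described above; the cloud and rain generators would be built analogously). It is a NumPy stand-in for the PD patch, and the column name and mapping ranges are placeholder assumptions.

```python
# Offline render: one CSV row -> 350 ms of sawtooth audio, written to WAV.
import csv
import numpy as np
from scipy.io import wavfile

SR = 44100
STEP = int(0.350 * SR)                         # 350 ms per data point

def sawtooth_burst(freq, amp, n, sr=SR):
    t = np.arange(n) / sr
    wave = 2.0 * ((t * freq) % 1.0) - 1.0      # naive sawtooth in [-1, 1]
    env = np.minimum(1.0, np.linspace(1.0, 0.0, n) * 4.0)  # short release
    return amp * wave * env

chunks = []
with open("weather.csv") as f:                 # placeholder file name
    for row in csv.DictReader(f):
        temp = min(max(float(row["temperature"]), 20.0), 40.0)
        freq = 220.0 + (temp - 20.0) / 20.0 * 660.0   # placeholder mapping
        amp = 0.4 + (temp - 20.0) / 20.0 * 0.4
        chunks.append(sawtooth_burst(freq, amp, STEP))

audio = np.concatenate(chunks)
wavfile.write("temperature_sonification.wav", SR,
              (audio * 32767).astype(np.int16))
```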
Results
Practicalizing sonic methods to perceive and re-enact sensibilities of environmental events

In this paper, we propose sonic methods as practical ways of perceiving and re-enacting the sensibilities of environmental events. This proposition presupposes that events derived from everyday reality are possible catalysts for artistic creation. Through practicalizing these methods, a relation was drawn between environmental components and the intricacies that were perceived, both through sonic and non-sonic means. In the first method, noise, ambience, and weather were identified as specific components that share similar sensible qualities, that of being unpredictable and random in their materialities. That being the case, noise was highlighted as a prevalent sonic medium that inspired the creation of the sound generators which were later used to represent the sensibilities of weather data through sonification.
Environmental perception enables the formation of idiosyncratic experiences, ideas, and preferences, and affords ways of realizing and re-enacting sensibilities. In the second method, sonification was seen as a method of realizing and re-enacting sonic experiences. This method considers how weather events were experienced: for example, how temperature changes seemingly deliver an alarming effect, or how moving clouds might suggest a sense of wonder. Thus, each data parameter was mapped onto a different sound generator to deliver distinctive sensibilities. The sonic output of the temperature data was heard as distinctive changes in frequencies; the cloud data was heard as subtle changes in frequencies and random changes in the amplitude of noise; and the rain data was heard as ambience based on its occurrences. The outputs suggest an analogical connection between the weather data parameters and the sonic parameters; this connection is visualized in the spectrograms shown in Figures 6, 7 and 8.
The spectrograms suggest the most distinctive changes in the acoustic properties of the temperature sonification as compared to the others. This method of outputting the sonification process enables weather data to be represented as distinctive parameters. However, the output is altered if all processes are outputted at once: the combination of different sonic qualities generates a new sonic event, abstracting the changes in data values and thus blurring the relation between the parameters.

[Figure 6 caption: Comparison of changes in temperature data and changes in frequencies.]
Ultimately, both methods were found to be intricately linked to one another in the processes of creating sonic works. The transition from one method to another is found to be fluid and interchangeable as these methods consist of other undiscussed variables. These uncertainties can be further explored as creative avenues.
Discussion/conclusions
This paper suggested that environmental events can be perceived from a non-scientific perspective in creative practice. These methods enable knowledge, experience and sensibilities of the environment to be delivered through the creation of sonic works.
In the process of creation, noise and weather events were highlighted as results of non-human geophysical influences. As such, noise was used as an aesthetic component to represent the irregularities of weather events. By tapping into the emancipatory aesthetic possibilities of noise, this paper offers an alternative way of environmental learning, in contrast with the romantic environmentalist vision prioritized in previous soundscape research.
In other respects, sonification was highlighted as a technical process that blurs the boundaries between scientific and artistic representation, in which relationships can be drawn in between. Sonification was proposed as a method to realize the sensibilities of weather data; it also possesses the possibility of generating new sensibilities and understandings each time the process is executed. In contemporary sound art practice, sonification could be a useful method to reflect on and apprehend the uncertainties of environmental events and crises.
Future works
This paper discusses only a limited scope of non-human perspectives. With the flexibility of the proposed methods, we foresee the discussion and exploration being extended to other types of quantifiable environmental data. | 2022-01-28T16:23:06.148Z | 2022-01-26T00:00:00.000 | {
"year": 2022,
"sha1": "9895bee8d71333189802eb82cf4b8fb1113db910",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/11-96/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4f0fefdb8ea73e782e3243a88071dad63d2dca05",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": []
} |
269701581 | pes2o/s2orc | v3-fos-license | Evolution of black shale sedimentary environment and its impact on organic matter content and mineral composition: a case study from Wufeng-Longmaxi Formation in Southern and Eastern Sichuan Basin
Due to global geological events and differences in regional sedimentary environments, marine shale reservoirs of the Wufeng-Longmaxi Formation in the Eastern and Southern Sichuan Basin exhibit significant heterogeneity in organic matter content and mineral composition. In order to reveal the influence of paleoenvironment evolution on reservoir heterogeneity, key elemental geochemical indicators were used to reconstruct the sedimentary environment of marine shale in the Eastern and Southern Sichuan Basin. The mechanism by which the paleoenvironment influences organic matter content and mineral components was also explored. The results indicate that the Wufeng-Longmaxi Formation in the Southern and Eastern Sichuan Basin can be divided into two third-order sequences (Sq 1 and Sq 2). Each third-order sequence is divided into a transgressive system tract (TST) and a highstand system tract (HST). The average TOC content in the Eastern Sichuan Basin is highest during the TST1 period, reaching 4.2%, while that in the Southern Sichuan Basin reached its maximum of 3.9% during the TST2 period. Due to the influence of high paleo-productivity, the organic matter accumulation and quartz content in the eastern Sichuan region were higher than those in the southern Sichuan region from the TST1 to the middle TST2 period. However, the organic matter accumulation and quartz content in the late TST2 period were lower than those in the southern Sichuan region due to the dilution of terrestrial debris.
Introduction
The Ordovician-Silurian black shale in the Sichuan Basin is a key contributor to China's breakthrough in shale gas production (Chen et al., 2017a; Jiang et al., 2022). At present, a gas field with an annual production capacity of 20 billion cubic meters has been built in the Ordovician-Silurian black shale with a burial depth of less than 3500 m in the Changning, Weiyuan, and Zhaotong areas in the Southern Sichuan Basin (Chen et al., 2017b; Cai et al., 2023; Fan et al., 2024). However, the Ordovician-Silurian black shale with a burial depth of less than 3500 m only accounts for 16% of the shale gas geological resources (Fu et al., 2019; Dong D. et al., 2022; Li, 2023a), and deep (burial depth > 3500 m) marine shale gas is clearly the main driver of future production growth (Fu et al., 2021a; Fu et al., 2021b; Li, 2023b). Due to the influence of the regional tectonic environment, there are significant differences in the sedimentary thickness and paleoenvironment of the Ordovician-Silurian shale in the Eastern and Southern Sichuan Basin (Guo et al., 2019; Jin et al., 2020; Huang et al., 2023). The degree of organic matter enrichment and the mineral composition of the shale also vary accordingly. Therefore, restoring the paleosedimentary evolution of the shale and clarifying its impact on the distribution of organic-rich intervals and mineral components is of great significance for promoting efficient shale gas exploration in the Eastern and Southern Sichuan Basin (Jiang et al., 2018; Wei et al., 2021).
The Ordovician-Silurian black shale was formed during a turbulent geological transition period (Katian to Aeronian), influenced by global and regional geological events such as volcanic eruptions, orogeny, glaciation, biotic extinction, and global sea-level changes (Wu et al., 2018; Dong T. et al., 2022; Li et al., 2022). In order to analyze the evolutionary characteristics of the black shale sedimentary environment, logging curve cycles and the distribution of biogenic graptolite zones were used to establish a sequence stratigraphic framework (Wang et al., 2019), which is helpful for the comparative study of subsequent paleoenvironmental evolution. The sedimentary paleoenvironment mainly affects the enrichment of organic matter by controlling the accumulation process of organic matter (primary productivity, redox conditions, terrestrial input, sedimentation rate), and has produced three commonly used models: the "preservation model", the "productivity model", and the most common "multi-factor control model" (Armstrong et al., 2009; Berry, 2010; Zhao et al., 2019). In addition, the indirect impact of global or regional geological events on shale sedimentation and organic matter enrichment should not be ignored (Song et al., 2023; Xie et al., 2023). The control of the sedimentary paleoenvironment on shale mineral composition can be summarized in three aspects: first, it affects the development of biogenic siliceous quartz or biogenic calcareous carbonate (Cai et al., 2022; Chen et al., 2023); second, it directly affects the type and content of mineral components in sedimentary areas through input from provenance and terrestrial sources (Wu et al., 2022); third, it controls the enrichment of minerals such as authigenic carbonates and pyrite (Liu et al., 2019; Chen et al., 2023).
Building on the abundant achievements of previous studies, sedimentary and elemental geochemistry methods are applied in this study, and multiple paleo-environmental factors, such as paleo-redox conditions, paleo-productivity, terrestrial debris input, and paleo-climate, are discussed. Under an isochronous stratigraphic framework, the evolutionary characteristics of the paleoenvironment in the Eastern and Southern Sichuan Basin are identified and the shale sedimentation process is restored. Meanwhile, the impact of sedimentation on the distribution and mineral composition of organic-rich shale is discussed. The research results provide a reference basis for predicting and optimizing the "sweet spot areas" of shale gas in the Sichuan Basin.
Geological setting
The Ordovician-Silurian black shale was formed during the extinction of the South China Basin and the formation of the South China Orogenic Belt. After the Middle Ordovician, the Huaxia block collided with the Yangtze block. As the main part of the Upper Yangtze Plate, the Sichuan Basin and its periphery entered the stage of foreland basin evolution (Zhang et al., 2020; Wang et al., 2022). During the Late Ordovician to Early Silurian, with the strengthening of compression from southeast to northwest, the paleo-uplift in the Central Sichuan Basin and the Central Guizhou-Xuefeng paleo-uplift outside the basin were alternately uplifted (Jiang et al., 2019; Jiang et al., 2020), resulting in a foreland depression belt distributed as "three uplifts sandwiching one depression" (Zhang et al., 2022), with the sedimentary facies and provenance regions controlled by the peripheral uplifts. Between the paleo-uplifts, there are semi-closed stagnant basins, forming two depression centers in Southern and Eastern Sichuan (Figure 1). The northern part of southern Sichuan is located at the southern edge of the Leshan-Longnvsi paleo-uplift, while the southern part is blocked by the Central Guizhou paleo-uplift and the Xuefeng paleo-land, forming a semi-enclosed basin with strong water restriction. Due to its proximity to the Central Guizhou paleo-uplift, terrigenous debris material may mainly come from the foreland uplift zone, including the Central Guizhou paleo-uplift as well as the Xuefeng paleo-uplift, with limited supply from the Central Sichuan paleo-uplift. The western part of the Eastern Sichuan Basin is closer to the paleo-uplift in Central Sichuan, and its northern edge is closer to the Qinling Ocean, which is connected to external water bodies. The restriction of its water bodies is weak, and its provenance may mainly come from the paleo-uplift in the Central Sichuan Basin.

[Figure 1 caption: Geological background and location of the studied wells during the sedimentary period of the Wufeng Formation-Longmaxi Formation in the Sichuan Basin and its surrounding areas.]
The black shale of the Ordovician Wufeng Formation-Silurian Longmaxi Formation is widely developed in southern China and belongs to the Katian Stage-Aeronian Stage in terms of geological age (Brenchley et al., 2003). Except for the top 1-2 m of the Ordovician Wufeng Formation, which is composed of shell limestone or carbonate-bearing mudstone rich in Hirnantian fauna fossils, all other intervals are shale rich in graptolite fossils.
The shale intervals studied in this article belong to the Katian, Hirnantian, and Rhuddanian Stages, which correspond to the WF1-WF3, WF4-LM1, and LM2-LM5 graptolite biozones, respectively (Chen X. et al., 2017; Huang et al., 2023). By combining lithology, logging response, and graptolite biozones, and referring to previous classification schemes (Figure 2), the shale interval can be divided into two third-order sequences (Sq 1 and Sq 2) and four system tracts (TST1, HST1, TST2, and HST2). The Wufeng Formation and the GYQ Member form the first third-order sequence (Sq 1), with black siliceous shale and argillaceous-rich siliceous shale (TST1) from the Wufeng Formation at the bottom and argillaceous limestone (HST1) from the GYQ Member above. The lower part of the Longmaxi Formation is the second third-order sequence (Sq 2), with the bottom boundary in transgressive contact with the overlying siliceous shale. The black siliceous shale in the lower part of the Longmaxi Formation is assigned to TST2, and a second maximum flooding surface (MFS) has been identified, representing the transition from deep water to shallow water and the beginning of HST2.
Sample collection and measurement
A total of 34 shale samples were collected from Well FT-1 in the Eastern Sichuan Basin. When selecting samples, non-matrix parts such as calcite veins were avoided. All samples were measured for TOC value, mineral composition, and bulk element content. Previous studies have obtained detailed elemental geochemical data for marine shale intervals in the southern Sichuan region (Huang et al., 2023), laying the foundation for comparing the paleoclimate, terrestrial debris input, paleo-productivity, and bottom water redox conditions with those of the Eastern Sichuan Basin. In addition, to clarify the provenance and tectonic background of the Southern Sichuan Basin, this study cites 59 sets of data obtained from the Changning and Xingwen outcrops of the Southern Sichuan Basin (Li, 2017). The XRD experiments were completed at the China Petroleum Exploration and Development Research Institute using a Japanese Nikkei X-ray diffractometer. The determination of element content was completed at Kehui Testing (Tianjin) Technology Co., Ltd. Trace and rare earth elements were determined using an ICP-MS (Jena PQ MS) high-resolution plasma spectrometer, and the major elements were determined using an XRF-1800 wavelength-dispersive X-ray fluorescence spectrometer. The TOC tests were conducted using the CS744-MHPC carbon and sulfur analyzer at the unconventional experimental center of CNOOC Energy Development.
Indicator calculation and calibration
The enrichment factor (X EF ) can eliminate the influence of terrestrial debris and distinguish the degree of element enrichment in sediments.The calculation formula is: X EF =(X/Al) sample /(X/Al) standard.
X and Al represent the element content and Al content in the sediments, respectively, and "standard" refers to the standardized background values. PAAS (Taylor and McLennan, 1985) and UCC (McLennan, 2001) are frequently used for normalization in previous studies, and we selected PAAS in this study. Al is not easily affected by weathering or post-depositional alteration, so the ratio of element content to Al is used to remove the influence of terrestrial debris.
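As a minimal sketch of this normalization, the function below computes X EF from a sample analysis and a PAAS reference. The numerical reference values are placeholders to illustrate the arithmetic; actual values should be taken from Taylor and McLennan (1985).

```python
# Minimal sketch of the enrichment-factor calculation:
# X_EF = (X/Al)_sample / (X/Al)_PAAS
PAAS = {"Mo": 1.0, "U": 3.1, "Al": 10.0}   # ppm, ppm, wt% (placeholder values)

def enrichment_factor(element, sample, paas=PAAS):
    """sample: dict of element concentrations plus 'Al', same units as paas."""
    return (sample[element] / sample["Al"]) / (paas[element] / paas["Al"])

sample = {"Mo": 20.0, "U": 12.0, "Al": 6.5}  # hypothetical shale analysis
print("Mo_EF =", round(enrichment_factor("Mo", sample), 1))
print("U_EF  =", round(enrichment_factor("U", sample), 1))
```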
The calculation for the trace element content in sediments of non-detrital origin (biogenic or authigenic enrichment) is as follows: E org (or E bio ) = E sample − Al sample × (E/Al) detr. E bio represents the organic or biogenic portion of an element that exceeds a specific terrestrial input standard. The portion exceeding the average shale is calculated by subtracting the estimated detrital contribution from the total element content in the sample. E sample and Al sample represent the abundance of a certain element and of the Al element in the shale sample, respectively, and (E/Al) detr is the ratio of the average abundances of E and Al under a specific standard (McLennan, 2001). In practical research, PAAS is often used to estimate terrestrial inputs, with commonly used (Si/Al) detr and (Ba/Al) detr values of 3.11 and 0.0075, respectively (Taylor, 1964; Wedepohl, 1971; Dymond et al., 1992). The Chemical Index of Alteration (CIA) is commonly used to determine the degree of paleoweathering. In order to exclude the influence of oxides from non-terrestrial debris sources, it is necessary to calibrate the CaO content to obtain the CaO content of the silicate fraction (CaO*).
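For reference, a minimal statement of the standard CIA formulation (molar proportions, after Nesbitt and Young, 1982), which we assume is the one applied here:

```latex
\mathrm{CIA} = \frac{\mathrm{Al_2O_3}}{\mathrm{Al_2O_3} + \mathrm{CaO^{*}} + \mathrm{Na_2O} + \mathrm{K_2O}} \times 100
```

where CaO* is the measured CaO corrected for carbonate and phosphate; when the corrected molar CaO still exceeds molar Na 2 O, CaO* is commonly set equal to Na 2 O (the McLennan correction).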
Shale lithofacies and mineral compositions
Using the ternary diagram of mineral composition, the shale lithofacies types were classified (Figure 3A). The results indicate that in the Southern Sichuan Basin, the main types are argillaceous-rich siliceous shale, mixed siliceous shale, and siliceous-argillaceous mixed shale. The Eastern Sichuan Basin is mainly composed of siliceous shale, argillaceous-rich siliceous shale, and mixed siliceous shale. The mineral components of the different system tracts in the Eastern and Southern Sichuan Basin are shown in Table 1. The carbonate mineral content of marine shale in Southern Sichuan is generally higher than that in Eastern Sichuan, while the feldspar content is generally lower than that in Eastern Sichuan. The content of clay minerals and pyrite in marine shale in the Southern Sichuan Basin is higher in TST1, and lower in the other periods, than in the Eastern Sichuan Basin. The quartz content, however, is higher only in the HST2 period, and lower in the other periods, than in the Eastern Sichuan Basin.
In terms of the variation across the different system tracts, from TST1 to HST2 the clay content in the Eastern Sichuan Basin continued to increase while the quartz content continued to decrease. However, the quartz content in the Southern Sichuan Basin shows a trend of first increasing and then decreasing, while the change in clay content is the opposite. In addition, the TST1 and TST2 sedimentary samples in Eastern Sichuan are more dispersed, indicating that the mineral composition during this period was complex and variable. It is possible that the study area was close to the provenance region and the water was relatively shallow, making it more susceptible to the influence of sea-level changes and terrestrial inputs.
The TOC content of marine shale in the eastern Sichuan region is generally higher than that in the southern Sichuan region (Figure 3B), with the highest average TOC value during the TST1 period, reaching 4.2%. The average TOC value of marine shale in the southern Sichuan region reached its maximum during the TST2 period, at 3.9%. From the TST1 to the HST2 period, the average TOC in Southern Sichuan showed a trend of first increasing and then decreasing, while the average TOC in Eastern Sichuan continued to decrease. The maximum variation in TOC during the TST1 period in Eastern and Southern Sichuan may be related to rapid changes in the paleoenvironment or frequent geological events during this period.
Paleo-redox condition
The redox level is a key factor in the preservation of organic matter. The redox-sensitive elements Mo, U, V, Cr, Ni, and Co are commonly used for the reconstruction of bottom water redox environments (Tribovillard et al., 2006). These elements tend to precipitate in sediment under reducing conditions, while they tend to dissolve in water under oxidizing conditions (Tribovillard et al., 2012). In order to weaken the dilution effect of terrestrial debris, the ratio of element content to Al content was used as an indicator in this study. Figure 4 shows that the reducing character of the bottom water in both Eastern and Southern Sichuan increased rapidly during the TST1 period, and reached its maximum in the early TST2 period before slowly decreasing.
In order to quantify the redox levels in the different system tracts and areas, cross-plots of Th/U-Ni/Co, V/Cr-V/(V + Ni), and Mo EF -U EF were applied (Figures 5A-C). In the Southern Sichuan Basin, the data points of TST1 reflect a gradual transition from suboxic to anoxic conditions; the sample distribution indicates stronger reduction of the anoxic bottom water in the early TST2, with some intervals even reaching euxinic conditions, followed by a gradual decrease to weakly anoxic or suboxic conditions during the late TST2 and HST2 periods. The TST1 period in the Eastern Sichuan Basin was mainly an anoxic environment, and the bottom water reduction in the TST2 period also reached its maximum, manifested by some data points plotting close to the euxinic zone. In addition, a small number of data points of the late TST2 period near the suboxic zone may be related to a decrease in the degree of reduction at that time. The reducing conditions in HST2 continued to slowly decrease after a slight increase, and the environment remained mainly weakly anoxic in the early period.
Mo/TOC can be used to determine the degree of restriction of water bodies (Algeo and Lyons, 2006; Algeo and Tribovillard, 2009). The data points of TST1 in the Southern Sichuan Basin are relatively scattered (Figure 5D), suggesting an unstable degree of water restriction. Considering the continuous uplift of the surrounding paleo-uplifts and the anoxic bottom water environment, the Mo/TOC ratios are mostly less than 4.5, suggesting that the water was strongly restricted in the middle and later periods of TST1. The Mo/TOC distribution area in the TST2 period is wide, but mainly indicates a moderately restricted environment. Its pattern of change also indicates that as the degree of transgression increased, the degree of restriction gradually strengthened. During the HST2 period, the reducing character of the water decreased, but the degree of restriction was significantly reduced in the early suboxic-anoxic environment. The TST1 data points in Eastern Sichuan are relatively scattered, with Mo/TOC values mostly greater than 4.5, indicating moderate restriction. The restriction during the TST2 period changed significantly, with some data points having Mo/TOC ratios of less than 4.5, indicating a significant increase in restriction during certain intervals, although the environment remained moderately restricted overall. The degree of restriction in the HST2 period decreased with the fall in sea level.
Paleo-productivity condition
The contents of Ba, Cu, Ni, and Zn are closely related to the life activities of marine organisms, so Ba bio and (Cu + Ni + Zn)/Al can serve as reliable indicators for evaluating productivity. The Ba bio curve in Figure 6 reflects that the paleo-productivity of Eastern Sichuan showed an overall increasing trend during the TST1-HST2 period, while that of Southern Sichuan showed an increasing trend during the TST1-TST2 period and a decreasing trend in HST2. The (Cu + Ni + Zn)/Al ratio indicates that the paleo-productivity of these two regions showed a unimodal distribution, increasing during the TST1 period and reaching its peak in the early TST2 period before slowly decreasing.
The Si excess in the black shale of the Wufeng-Longmaxi Formation is related to a biogenic origin, so this study also used Si excess to evaluate paleo-productivity (Cai et al., 2022). There is a negative correlation between Si excess and Al (Figure 5E), ruling out the possibility that terrestrial input or clay mineral transformation formed the Si excess . The positive correlation between K 2 O and Rb (Figure 5F) indicates that the source of Si excess is not related to magmatic activity or provenance (Floyd and Leveridge, 1987; Huang et al., 2023). The Al-Fe-Mn data points (Figure 5G) all fall into the non-hydrothermal zone, indicating that the formation of Si excess is independent of hydrothermal activity (Xie et al., 2021). The positive correlation between Si excess and TOC (Figure 5H) confirms its biogenic origin. The Si excess in Eastern Sichuan generally maintains a trend of first increasing and then decreasing from TST1 to HST2, with the highest paleo-productivity values distributed in the early period of TST2. The peak of the total Si excess in Southern Sichuan also appeared in the early period of TST2 and gradually decreased thereafter.
Ba bio is not suitable for oxygen-deficient environments, whereas the mass fractions of Cu, Zn, and Ni are closely related to the amount of organic matter deposited and remain applicable in reducing environments. As a result, these indicators are not fully consistent, but the reasons for the discrepancy can be explained. As for Ba bio , sulfate reduction reactions are common on the surface and/or bottom of sediments in anoxic and euxinic environments during TST1 and HST2; barium sulfate is a latent source of sulfate and partially dissolves when the sulfate supply is insufficient, resulting in lower barium contents and underestimated productivity. Because Eastern and Southern Sichuan were mainly in a relatively reducing environment, (Cu + Ni + Zn)/Al is basically consistent with the trend of Si excess , which also verifies the reliability of our results. In conclusion, the paleo-productivity during the deposition of TST1 and TST2 in Eastern and Southern Sichuan was stronger than that of HST2. In addition, Si excess and (Cu + Ni + Zn)/Al in Eastern Sichuan are generally higher than those in Southern Sichuan, reflecting the stronger paleo-productivity of the former.
Terrestrial input condition
Al and Zr are not easily mobilized during transportation and can be used as effective indicators to characterize the intensity of terrestrial input. Al mainly comes from fine-grained aluminosilicate clay minerals, while Zr exists in both clay minerals and coarse-grained minerals (quartz, zircon, etc.). Therefore, the Zr/Al ratio is considered an effective indicator for quantifying the content of coarse-grained debris in terrestrial inputs. Figure 7 indicates that the changes in the Zr and Al curves in Eastern and Southern Sichuan are basically consistent, showing a gradually decreasing trend during the TST1 and HST1 periods, a fluctuating trend during the TST2 period, and a gradually increasing trend during the HST2 period. The variation in Al content suggests that the total terrestrial input of fine-grained clay minerals is similar in these two areas. However, the overall Zr content in the Eastern Sichuan Basin is lower than that in the Southern Sichuan Basin, especially during the TST1 period, when it drops rapidly. This indicates that the difference in terrestrial input during the TST1-HST2 sedimentary period between Eastern and Southern Sichuan is relatively small, but the input of coarse-grained debris was lower in Eastern Sichuan during the TST1-HST2 period.
Zr/Al in Eastern Sichuan showed a slowly decreasing trend during TST1-TST2, suggesting a significant slowdown in the input rate of coarse-grained debris, while a slow increase during HST2 indicates a faster input of coarse-grained debris. The input of coarse-grained debris in the Southern Sichuan Basin, however, remained relatively stable during the TST1-TST2 period, while it also increased during the HST2 period. In addition, the Zr/Al values in Southern Sichuan were higher than those in Eastern Sichuan in all periods. This indicates that the input rate of coarse-grained debris in the sediment composition of the Eastern Sichuan Basin was generally lower than that of Southern Sichuan, especially during the HST2 depositional period. The development of more mixed shale in Southern Sichuan during this period confirms this. In the late period of TST2, the Al content gradually increased, but the Zr/Al ratio and Zr content showed a slowly decreasing, fluctuating trend. It is speculated that the rapid increase of fine-grained clay minerals diluted the coarse-grained minerals, resulting in the opposite trends of Al versus Zr and Zr/Al.
Paleoclimate and weathering intensity
The Chemical Index of Alteration (CIA) is often used to indicate the climate and the physical and chemical weathering intensity of the provenance region. CIA values of 50-65, 65-80, and 80-100 indicate weak, moderate, and strong weathering degrees, respectively, and correspond to cold-dry, warm-humid, and hot-humid conditions, respectively. In practical application, in order to exclude the influence of CaO bound to non-silicates, the corrected CIA (CIA*) was calculated in this study (Figure 5I; Figure 7). The lowest values of CIA* in both Eastern and Southern Sichuan appear in HST1, indicating a global temperature drop during the Hirnantian glaciation. The CIA* value in Southern Sichuan ranges from 60 to 75. During the TST1-HST2 period, the CIA* gradually decreases, recording an evolution from a hot-warm climate to a cold-dry climate. The cooling event caused by the Hirnantian glaciation still accompanied the early deposition of TST2. The CIA* value in Eastern Sichuan ranges from 55 to 70 during the TST1 to early TST2 period, with values below 65 indicating a cold-dry environment. In the late period of TST2, the CIA* was greater than 65, and the climate gradually became mild. However, the early period of HST2 experienced a short-term cold-dry climate, and the late period eventually transformed into a warm-humid climate. Comparison with the terrestrial debris input indicators shows that the low degree of chemical weathering in the cold climate during the TST tracts is an important reason for the low input of terrestrial debris.
Provenance condition and tectonic setting
Differences in provenance not only affect the mineral composition but also control the types of terrestrial nutrient inputs. The Al 2 O 3 /TiO 2 ratio of fine-grained sediments can effectively identify the type of source rocks. Due to the low solubility of Al and Ti oxides at low temperatures, their proportion in sedimentary rocks is very close to that of the source rock. Al 2 O 3 /TiO 2 ratios of <8, 8-21, and >21 indicate basic, neutral, and acidic igneous rocks, respectively. The provenance in the southern Sichuan region is mainly a mixture of neutral and acidic igneous rocks (Figure 8A). The provenance in Eastern Sichuan is a mixture of neutral and acidic igneous rocks in TST1, while in TST2 and HST2 it is neutral igneous rocks. The weathering trend in the A-CN-K diagram (Figure 5I) further confirms that the provenance in Southern Sichuan during TST1-TST2 tends towards neutral igneous rocks and during HST2 towards acidic igneous rocks, while the provenance in Eastern Sichuan during TST1-HST2 tends towards neutral granodiorite sources.
Due to the higher content of the compatible element Sc and the lower content of the incompatible elements Zr and Th in basic rocks compared to acidic rocks, and the relatively constant abundances of Sc, Zr, and Th during weathering, the Zr/Sc-Th/Sc ratios can be used for source rock type analysis. Figure 8B shows that the data points for the Southern Sichuan plot farther toward the upper right than those for the Eastern Sichuan, also suggesting that there are more acidic igneous components in its provenance. In addition, within the provenance composition of the Eastern Sichuan, the TST1 period is more acidic and the HST2 period more intermediate. Roser and Korsch (1986) proposed that the K2O/Na2O-SiO2 cross-plot can be effectively used to distinguish the tectonic environment of fine-grained sedimentary rocks. The results indicate that the Eastern Sichuan was in an active continental margin environment during the early Wufeng period. The data points of the Longmaxi period in the Southern Sichuan are close to the active continental margin, but the data points of the Wufeng period are closer to the passive continental margin environment (Figure 8C). Stable rare earth and trace element combinations (such as La, Th, Sc, and Zr) can also be used to determine tectonic backgrounds. The Th-Sc-Zr/10 and Th-Co-Zr/10 diagrams established by Bhatia and Crook (1986) were applied for background analysis (Figures 8D, E). The analysis reveals that the majority of data points fall within or proximate to the active continental margin zone, while a minimal proportion is dispersed in the continental island arc zone. Compared to the Eastern Sichuan, the Southern Sichuan exhibits a stronger affinity with the active continental margin field. The sedimentary tectonic setting in both regions remained steady from the Late Ordovician to the Early Silurian, characterized predominantly by an active continental margin.
Impact of paleo-environmental condition on OM accumulation and mineral composition
The formation of black shale from the Ordovician Wufeng Formation to the Silurian Longmaxi Formation in the Sichuan Basin is the result of geological events and various paleo-environmental conditions. Under this influence, there are significant differences in TOC and mineral composition between different system tracts in the Eastern and Southern Sichuan (Figure 9).
TST 1 period
Under the influence of the Caledonian Movement, the Xuefeng paleo-uplift and the Central Sichuan underwater uplift rose rapidly. In the Katian stage, relative sea-level rise led to a transformation of the sedimentary environment from an early open platform to a restricted basin, initiating the deposition of the black shale of the Wufeng Formation.
In the early TST1 period, intense tectonic uplift enhanced the intensity of detrital input. The separating effect of the paleo-uplifts, together with relative sea-level rise, led to increasingly reducing and restricted bottom water, accompanied, however, by intermittent, relatively oxic intervals. Active tectonism also promoted volcanic eruptions, and the resulting volcanic ash was transported and deposited in the ocean, providing rich nutrients for the surface seawater. Under this setting, plankton such as graptolites proliferated on a large scale, and surface-water productivity continuously increased. After death, these plankton could be preserved as organic matter under anoxic conditions. In the late TST1 period, tectonic activity tended to stabilize, but the earlier large-scale volcanic activity had led to gradual global cooling and changes in graptolite biodiversity, triggering the Late Ordovician extinction. In the cold climate, weathering intensity was weak, and the amount of terrestrial input was significantly reduced. The mass death of aquatic organisms supplied organic matter and, through its degradation, enhanced the reducing conditions of the bottom water. During this period, organic-rich siliceous shale was mainly formed.
During the TST1 period, the trends of TOC variation in the Eastern and Southern Sichuan are consistent, but the average value of the former is significantly higher than that of the latter. It is speculated that this phenomenon is related to the high paleo-productivity in the Eastern Sichuan. The high excess Si and the abundance of siliceous organisms confirm the enrichment of organic matter, manifested by a higher quartz content in the Eastern Sichuan than in the Southern Sichuan (Table 1).
The detrital input in the Eastern and Southern Sichuan is basically the same, and both provenances tend to be acidic magmatic rocks. The high content of feldspar in the Eastern Sichuan is related to the relatively cold-arid climate background, under which feldspar is weakly affected by weathering during transportation. Being closer to the Central Guizhou-Xuefeng paleo-uplift, the Southern Sichuan Basin received more input of coarse-grained carbonate debris and is characterized by high carbonate mineral content, especially in the late TST1 period. This has also been confirmed by the Zr/Al ratio.
HST 1 period
In the mid to late Hirnantian stage, the formation of glaciers on the Gondwana continent led to a significant decrease in global sea-level (Bertrand et al., 1996), resulting in abrupt shallowing of the water mass and unfavorable conditions for organic matter preservation under the oxic bottom water. A thin layer of carbonate-rich argillaceous limestone was deposited in this shallow water environment.
TST 2 period
The Hirnantian glaciation ended, and global temperatures rose rapidly. Glacial melting caused widespread transgression, while volcanic activity and tectonic movements stabilized. In the early TST2 period, the separating effect of the paleo-uplifts and sea-level rise kept the bottom water in a restricted and anoxic condition. Climate warming led to an increase in weathering intensity and enhanced detrital influx. Glacial melting delivered a large amount of cold fresh water into the ocean, which not only brought nutrients but also caused stratification of fresh and saline water, exacerbating the anoxic conditions. Planktonic organisms, including radiolarians and graptolites, once again flourished on a large scale, increasing productivity levels. The continuously sinking biological remains kept consuming dissolved oxygen in the bottom water, maintaining anoxic, even euxinic, conditions. During the early TST2 period, the reduced detrital influx helped to weaken the dilution effect and form organic-rich siliceous shale. During the middle TST2 period, the sea-level continued to rise, increasing the circulation between the restricted basin and the open ocean and reducing the restriction of the bottom water. However, under moderate to strong paleo-productivity, oxygen consumption by the remains of organisms still kept the bottom water in a suboxic-anoxic state. In the late TST2 period, the fluctuating sea-level drop limited surface water productivity. The increase in dissolved oxygen in the bottom water was not conducive to organic matter preservation. The humid climate increased the intensity of paleo-weathering, while the increase in terrestrial input disrupted the enrichment of organic matter. Compared to the middle TST2 period, the organic carbon content is significantly reduced, and the shale lithology is argillaceous-rich siliceous shale or mixed siliceous shale.
During the early-middle TST2 period, the high productivity level in the Eastern Sichuan led to a higher average TOC content than in the Southern Sichuan. In the late TST2 period, the paleo-productivity in the Eastern Sichuan was still at a high level, but the rapidly increasing weathering intensity led to a significantly higher detrital influx than in the Southern Sichuan. The dilution effect of terrestrial debris reduced the accumulation of organic matter in the Eastern Sichuan. The mineral composition indicates that the quartz content in the Eastern Sichuan was higher in the early TST2 period (Figure 9), which is related to the deposition of a large amount of biogenic quartz, while in the late TST2 period it was lower than that in the Southern Sichuan (Table 1). The clay content in the Eastern Sichuan is higher than that in the Southern Sichuan, while the carbonate content is lower; this is related to the large amount of terrestrial debris input and its high proportion of clay minerals. From the perspective of provenance differences, the Eastern Sichuan Basin is mainly supplied by intermediate igneous rocks, while the Southern Sichuan Basin is mainly supplied by acidic igneous rocks. The former has a higher proportion of plagioclase in its sediment supply, while the latter has a higher proportion of potassium feldspar. However, potassium feldspar is prone to kaolinization during transportation, so the mineral composition in the Eastern Sichuan has a significantly higher feldspar content, mainly plagioclase.
HST 2 period
The climate in the late Rhuddanian stage continued to warm, with a slow decrease in sea-level interrupted by short-term transgressions. Tectonic uplift intensified the weathering process in the provenance region, and the amount of terrestrial input continued to increase. The productivity level underwent significant changes (significantly higher in the Eastern Sichuan than in the Southern Sichuan), but the bottom water conditions were not conducive to the preservation of organic matter. Diluted by terrestrial debris, organic matter became dispersed in the sediments. Terrestrial quartz and clay minerals increasingly replaced biogenic quartz as the main mineral components, and the shale lithofacies gradually transitioned to argillaceous-siliceous mixed shale.
The low productivity, poor preservation conditions, and high terrestrial input in the Southern Sichuan Basin are reflected in a rapid decrease in TOC. The TOC in the Eastern Sichuan increased in the early stages and gradually decreased in the later stages, which is speculated to be related to higher paleo-productivity and a high proportion of clay in the terrestrial debris. The high productivity in the Eastern Sichuan not only brought organic matter input but also consumed oxygen in the bottom water during deposition, which to some extent increased the reducing conditions of the bottom water. In addition, the high clay content is conducive to the adsorption and rapid settling of organic matter, reducing consumption during the sinking process and facilitating the aggregation and preservation of organic matter.
FIGURE 9 The paleo-environment, TOC, and mineral composition evolution in the Southern and Eastern areas of the Sichuan Basin during the Late Ordovician-Early Silurian.
The difference in the detrital influx between the Eastern and Southern Sichuan is small, but the quartz content in the Southern Sichuan is higher than that in the Eastern Sichuan (Table 1), while the clay content is lower. This is related to the slightly stronger weathering intensity and the coarse-grained debris input (silt-size quartz) in the Southern Sichuan. The provenance sources in the Eastern Sichuan tend to be intermediate magmatic rocks, while those in the Southern Sichuan tend to be acidic magmatic rocks. Moreover, the climate in the Eastern Sichuan Basin was cold-dry, with low weathering intensity. Therefore, the plagioclase content in the Eastern Sichuan Basin is relatively high, while the contents of quartz and potassium feldspar in the Southern Sichuan are higher (Table 1). The carbonate mineral content in the Southern Sichuan is higher than that in the Eastern Sichuan, especially calcite, which is related to the high input of coarse-grained carbonate debris. The average pyrite content in the Eastern Sichuan is 37.5% higher than that in the Southern Sichuan, which indirectly confirms that the oxygen content in the bottom water of the Eastern Sichuan was lower, a condition conducive to the formation of pyrite.
Conclusion
1) The lower Longmaxi Formation-Wufeng Formation interval in the Southern and Eastern Sichuan Basin can be divided into two third-order sequences (Sq1 and Sq2). Sq1 is composed of the Wufeng Formation and the GYQ Member, while Sq2 is composed of the lower part of the Longmaxi Formation. Each third-order sequence consists of a transgressive systems tract (TST) and a highstand systems tract (HST).
2) The TOC content in the Eastern Sichuan is higher than that in the Southern Sichuan. During the TST1 period, the average TOC content in the Eastern Sichuan was the highest, reaching 4.2%. The average TOC content in the Southern Sichuan reached its maximum of 3.9% during the TST2 period. From TST1 to HST2, the clay content in the Eastern Sichuan continued to increase while the quartz content continued to decrease. In contrast, the quartz content in the Southern Sichuan first increased and then decreased, with the clay content showing the opposite trend.
3) During the TST1 period, the TOC range in the Eastern Sichuan was significantly higher than that in the Southern Sichuan, which is related to its high paleo-productivity. In the early to middle TST2 period, the high paleo-productivity in the Eastern Sichuan resulted in a higher TOC content than in the Southern Sichuan. In the late TST2 period, the accumulation of organic matter in the Eastern Sichuan was lower than that in the Southern Sichuan due to the dilution effect of terrestrial debris. During the HST2 period, the TOC in the Southern Sichuan showed a rapidly decreasing trend, whereas the TOC in the Eastern Sichuan slightly increased in the early HST2 period, which is related to higher paleo-productivity and the adsorption and preservation effects of clay.
4) During the TST1 period, the quartz content in the Eastern Sichuan was higher than that in the Southern Sichuan, which is related to the abundant accumulation of biogenic silica. During the TST2 period, due to the high input of terrestrial debris and the high proportion of clay minerals, the quartz content in the Eastern Sichuan was higher than that in the Southern Sichuan in the early period, but the opposite was true in the late period.
The clay mineral content in the Eastern Sichuan has consistently been higher than that in the Southern Sichuan. During the HST2 period, compared with the Eastern Sichuan, the stronger weathering intensity and coarse-grained debris input (mainly silt-size quartz) in the Southern Sichuan resulted in a higher quartz content and a lower clay mineral content.
FIGURE 3 Mineral composition in different system tracts (A) and box plots of total organic carbon (TOC) content range (B) in the Eastern and Southern Sichuan Basin.
FIGURE 4 Comparison of common paleo-redox indicators in different system tracts in the Eastern and Southern Sichuan Basin.
FIGURE 6 Comparison of common paleo-productivity indicators in different system tracts in the Eastern and Southern Sichuan Basin.
FIGURE 7 Comparison of common terrigenous clastic input and paleo-weathering indicators in different system tracts in the Eastern and Southern Sichuan Basin.
TABLE 1 Mineral composition characteristics and TOC content of shales in different areas and system tracts (Well N16, W1, and W4-H in southern Sichuan from Huang et al., 2023). | 2024-05-11T16:20:08.202Z | 2024-05-06T00:00:00.000 | {
"year": 2024,
"sha1": "21a2fc5368e03860e2124a500d3480e68c93b8b9",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenvs.2024.1391445/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1c1da4b3e346807ffc2ec860e9c685aa9b6d17f3",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": []
} |
53597905 | pes2o/s2orc | v3-fos-license | Precise position control of a helical magnetic robot in pulsatile flow using the rotating frequency of the external magnetic field
We propose a position control method for a helical magnetic robot (HMR) that uses the rotating frequency of the external rotating magnetic field (ERMF) to minimize the position fluctuation of the HMR caused by pulsatile flow in human blood vessels. We prototyped the HMR and conducted several experiments in pseudo blood vessel environments with a peristaltic pump. We experimentally obtained the relation between the flow rate and the rotating frequency of the ERMF required to make the HMR stationary in a given pulsatile flow. Then we approximated the pulsatile flow by Fourier series and applied the required ERMF rotating frequency to the HMR in real time. Our proposed position control method drastically reduced the position fluctuation of the HMR under pulsatile flow.
I. INTRODUCTION
Coronary artery disease has become a severe health problem for modern people due to meat-based eating habits, lack of exercise, and aging. Coronary artery disease is mainly caused by arteriosclerosis, angina pectoris, and vascular occlusion in the coronary arteries. Catheterization is one of the most popular medical operations to treat coronary artery disease. A flexible and biocompatible tube called a catheter is inserted into a blood vessel to create a passage for other wired medical devices. However, medical doctors cannot ensure that the catheter reaches the target region in complicated and narrowed coronary arteries because they are restricted in their ability to steer the catheter. The success of this operation thus considerably depends on the experience of individual medical doctors.
To overcome the limitations of the conventional catheterization process, researchers have been investigating helical magnetic robots (HMRs) that are wirelessly manipulated using an external rotating magnetic field (ERMF) generated from a magnetic navigation system (MNS). HMRs have a simple structure and great steering and mobility.1 Ishiyama et al. proposed an HMR with a spiral structure and investigated its navigating performance according to the frequency of the ERMF.2 Choi et al. developed an HMR and a manipulation method that can generate three-dimensional locomotion and a drilling motion.3 Jeon et al. proposed a saddle structure for the MNS to generate navigation, mechanical drilling, and drug delivery motions in the HMR.4,5 Lee et al. proposed a dual-body HMR whose navigation, mechanical drilling, and cargo delivery motions were controlled via the ERMF.6 However, none of the prior research considered the disturbance caused by the pulsatile blood flow in real coronary arteries. The HMR should perform not only navigation but also various medical actions such as drug and stent delivery under pulsatile flow. When the HMR performs a medical action, it should maintain its current posture stationary at the target location to perform the next medical action. If a stent expands in the wrong place, a major surgery may be required to remove the stent.7 Therefore, we should minimize the position fluctuation due to the pulsatile blood flow in order to precisely perform the medical actions.
We propose a position control method for an HMR that uses the rotating frequency of the ERMF to minimize the position fluctuation of the HMR caused by the pulsatile flow in human blood vessels. First, we experimentally investigated and obtained the relation between the flow rate and the rotating frequency of the ERMF required to make the HMR stationary at a given flow rate. Then we performed several experiments in pseudo blood vessels under pulsatile flow to validate the effectiveness of our proposed position control method.
II. CONTROL METHOD FOR AN HMR FOR STABLE MOTION IN PULSATILE FLOW
An HMR is composed of a helical body and a diametrically magnetized cylindrical magnet, as shown in Fig. 1(a). The HMR generates a propulsive force when the helical body rotates along the axis of the HMR in a fluidic environment. To generate the rotating motion along the axis, we use the magnetic torque generated by the magnet in the HMR under the ERMF, which can be expressed as follows:

τ = m × B, (1)

where m and B are the magnetic moment of the magnet and the magnetic flux density of the ERMF, respectively. The external magnetic field that interacts with the magnet to generate the magnetic torque and rotating motion along the ERMF, as shown in Fig. 1(b), can be expressed as follows:

B(t) = B0 [U cos(2πft) + (N × U) sin(2πft)], (2)

where B0, f, N, and U are the magnitude and frequency of the ERMF, the unit vector of the rotating axis, and the unit vector normal to N, respectively. Because the HMR generates a helical motion using the rotating motion of the magnet under the ERMF, as shown in Fig. 1(c), we can manipulate the HMR by controlling the ERMF. According to previous empirical studies,2,8 the velocity of the HMR is proportional to the frequency of the ERMF. To maintain the HMR in a stationary position under pulsatile flow, the propulsive velocity or propulsive force of the HMR should be equal to the flow velocity or resistive force of the pulsatile flow. Because the propulsive force can be controlled by the rotating frequency of the ERMF, we need to find the frequency that will compensate the resistive force for any given flow condition. The flow velocity can be obtained by dividing the measured flow rate by the cross-sectional area. Fig. 2 represents the proposed control algorithm. An HMR is in the middle of a water-filled glass tube serially connected with a peristaltic pump and a flowmeter. The peristaltic pump generates pulsatile flow in the tube, and the flowmeter measures the flow rate. Because the flowmeter measures the flow rate discretely, we interpolated the measured flow rate using a Fourier series to provide continuous flow-rate information to the power supply as follows:

Q_flow(t) = Σ_{m=0}^{M} a_m cos(2πm f_b t + φ_m), (3)

where a_m and φ_m are the Fourier constant and phase of the m-th term, respectively, and f_b is the fundamental (beat) frequency of the pulsatile flow. For the HMR to maintain a stationary position, the propulsive force has to be controlled according to the varying flow rate. After we define the proportional constant f_s between the flow rate Q_flow and the rotating frequency f(t), the ERMF frequency required to maintain the HMR in a stationary position can be expressed as follows:

f(t) = f_s Q_flow(t). (4)

Using the required frequency of the ERMF from Eq. (4), the voltage information for each coil of the MNS can be calculated and transferred to the power supply to effectively overcome the pulsating flow and precisely maintain the HMR's stationary position.
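A minimal sketch of the control law in Eqs. (3)-(4): the flowmeter record is represented by a truncated Fourier series and mapped to an ERMF frequency command in real time. The Fourier coefficients below are illustrative placeholders (the paper does not list them); the scale factor and beat rate are taken from the experiments described in Sec. III.

```python
import numpy as np

# Sketch of the flow-compensation law f(t) = f_s * Q_flow(t) (Eq. 4),
# with Q_flow reconstructed from a truncated Fourier series (Eq. 3).
F_S = 0.2042          # Hz per (mL/min), scale factor reported in Sec. III
F_BEAT = 99.0 / 60.0  # pump beat frequency, 99 beat/min

# Illustrative Fourier coefficients (a_m, phi_m); the real values come from
# fitting the flowmeter record and are not listed in the paper.
coeffs = [(100.0, 0.0), (25.0, 0.3), (8.0, 1.1)]  # m = 0, 1, 2

def q_flow(t):
    """Interpolated flow rate in mL/min at time t (s)."""
    return sum(a * np.cos(2 * np.pi * m * F_BEAT * t + phi)
               for m, (a, phi) in enumerate(coeffs))

def ermf_frequency(t):
    """ERMF rotating frequency (Hz) that holds the HMR stationary."""
    return F_S * q_flow(t)

for t in np.linspace(0.0, 0.6, 4):
    print(f"t={t:.2f} s  Q={q_flow(t):7.2f} mL/min  f={ermf_frequency(t):5.2f} Hz")
```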
III. RESULTS AND DISCUSSION
We developed an MNS, as shown in Fig. 3(a), to generate a three-dimensional ERMF. The MNS consists of three pairs of electromagnetic coils perpendicular to one another (an x-directional Helmholtz coil and y- and z-directional uniform saddle coils). The major specifications of the MNS are given in Table I. We prototyped the HMR with 3D printing technology in ultraviolet-curable plastic, as shown in Fig. 3(b). A diametrically magnetized cylindrical magnet with a length of 10 mm and a diameter of 1 mm was inserted in the HMR. First, we experimentally measured the rotating frequency of the HMR required to maintain a stationary position with respect to the flow rate, interpolating the measured data by the least-squares method, as shown in Fig. 4(a). We found that the required rotating frequency is linearly proportional to the flow rate with a scale factor of 0.2042. Next, we set the peristaltic pump to generate a pulsatile flow of 99 beat/min and 100 mL/min, as shown in Fig. 4(b), which is similar to the fluidic environment of a coronary artery. The measured flow rate was interpolated by the Fourier series to determine the Fourier coefficients and phases. By using Eq. (4) and the scale factor, we could specify the rotating frequency of the HMR required by the flow rate, as shown in Fig. 4(c).
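The calibration step can be reproduced schematically as a linear least-squares fit of the required frequency against flow rate; the data points below are invented for illustration, chosen to give a slope near the reported 0.2042.

```python
import numpy as np

# Sketch of the calibration: fit the rotating frequency required to hold the
# HMR stationary against flow rate. Data points are hypothetical; the paper
# reports only the fitted slope (~0.2042 Hz per mL/min).
flow = np.array([25.0, 50.0, 75.0, 100.0, 125.0])  # mL/min
freq = np.array([5.2, 10.1, 15.4, 20.5, 25.4])     # Hz

slope, intercept = np.polyfit(flow, freq, 1)
print(f"f_s ~ {slope:.4f} Hz/(mL/min), intercept ~ {intercept:.3f} Hz")
```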
IV. CONCLUSIONS
We proposed a position control method for an HMR that uses the rotating frequency of the ERMF to minimize HMR position fluctuations caused by the pulsatile flow in human blood vessels. We experimentally obtained the relation between the flow rate and the rotating frequency of the ERMF required to keep the HMR stationary in a given pulsatile flow. Then we approximated the pulsatile flow by a Fourier series and applied the required ERMF rotating frequency to the HMR in real time. With the application of the proposed control method, the position fluctuation of the HMR was drastically reduced, by 80.2%. This research could be extended to the precise and effective navigation of an HMR for various mechanical and medical operations.
FIG. 1. (a) Structure of the HMR. (b) ERMF to generate helical motion of the HMR. (c) Helical motion of the HMR and its navigating direction. | 2018-11-05T16:29:39.200Z | 2017-01-25T00:00:00.000 | {
"year": 2017,
"sha1": "80a957aa447b097eca7126c822d2e082fcc5ad38",
"oa_license": "CCBY",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.4975127",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "80a957aa447b097eca7126c822d2e082fcc5ad38",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Physics"
]
} |
15499081 | pes2o/s2orc | v3-fos-license | Expression of a heat-stable NADPH-dependent alcohol dehydrogenase from Thermoanaerobacter pseudethanolicus 39E in Clostridium thermocellum 1313 results in increased hydroxymethylfurfural resistance
Background Resistance to deconstruction is a major limitation to the use of lignocellulosic biomass as a substrate for the production of fuels and chemicals. Consolidated bioprocessing (CBP), the use of microbes for the simultaneous hydrolysis of lignocellulose into soluble sugars and fermentation of the resulting sugars to products of interest, is a potential solution to this obstacle. The pretreatment of plant biomass, however, releases compounds that are inhibitory to the growth of microbes used for CBP. Results Heterologous expression of the Thermoanaerobacter pseudethanolicus 39E bdhA gene, that encodes an alcohol dehydrogenase, in Clostridium thermocellum significantly increased resistance to furan derivatives at concentrations found in acid-pretreated biomass. The mechanism of detoxification of hydroxymethylfurfural was shown to be primarily reduction using NADPH as the cofactor. In addition, we report the construction of new expression vectors for homologous and heterologous expression in C. thermocellum. These vectors use regulatory signals from both C. bescii (the S-layer promoter) and C. thermocellum (the enolase promoter) shown to efficiently drive expression of the BdhA enzyme. Conclusions Toxic compounds present in lignocellulose hydrolysates that inhibit cell growth and product formation are obstacles to the commercialization of fuels and chemicals from biomass. Expression of genes that reduce the effect of these inhibitors, such as furan derivatives, will serve to enable commercial processes using plant biomass for the production of fuels and chemicals. Electronic supplementary material The online version of this article (doi:10.1186/s13068-017-0750-z) contains supplementary material, which is available to authorized users.
Caldicellulosiruptor bescii [9] can convert furfural and HMF to the less toxic alcohols, furfuryl alcohol and furan dimethanol, respectively. Overexpression of oxidoreductases, such as alcohol dehydrogenases (ADH1, ADH6, and ADH7) [7,10,11], a propanediol oxidoreductase (FucO) [8], and a butanol dehydrogenase (BdhA) [9], has been shown to increase specific furfural and HMF conversion rates. Among them, Teth39_1597, encoding the BdhA enzyme from Thermoanaerobacter pseudethanolicus 39E, was shown to reduce both furfural and HMF at 60 °C using NADPH as the cofactor [12]. We recently demonstrated that heterologous expression of this heat-stable BdhA enzyme increased resistance of engineered C. bescii strains to both furfural and HMF [9]. C. bescii is a hyperthermophilic, Gram-positive, anaerobic bacterium that has the unusual ability to grow on a variety of lignocellulosic biomass substrates without conventional pretreatment [13,14]. We recently engineered C. bescii to produce ethanol directly from switchgrass, making it a strong candidate for CBP [15]. Pretreatment, however, increases rates of hydrolysis but releases furans that are toxic to growing cells. C. thermocellum relies primarily on pretreated biomass, producing ethanol at high yield (72% of theoretical maximum) as a single fermentation product [16,17], making it perhaps the strongest candidate so far studied for CBP. To test whether BdhA from T. pseudethanolicus might also improve resistance to these compounds in C. thermocellum, we designed new expression vectors for C. thermocellum using three different promoters: the C. bescii S-layer promoter and the C. thermocellum Clo1313_1809 and enolase promoters. The vectors were based on the C. bescii replicon pBAS2 [18,19]. Expression of BdhA in C. thermocellum resulted not only in increased resistance to HMF but also in increased growth on cellulosic substrates and improved ethanol production. These data suggest that redox homeostasis in C. thermocellum plays an important role in its growth on cellulosic substrates.
Results and discussion
Heterologous expression of the bdhA gene from T. pseudethanolicus in C. thermocellum
Expression vectors for bdhA were based on plasmid pDCW89 [18] constructed from the native C. bescii plasmid pBAS2 [19] for use as an E. coli/Caldicellulosiruptor shuttle vector. This replicon is maintained stably in C. thermocellum at its optimal growth temperature of 60 °C [18]. Previous studies showed that the C. bescii S-layer [15,20] and the C. thermocellum enolase [21] promoters were useful for expression of target genes in both C. bescii and C. thermocellum. For this study, the Clo1313_1809 promoter was also tested based on the fact that the steady-state levels of RNA determined by transcriptional profiling (http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE54082) for genes under the control of this promoter were high. While steady-state levels of RNA reflect both promoter strength and RNA stability, we selected this promoter as a possible candidate. The bdhA gene from T. pseudethanolicus 39E (Teth39_1597) was amplified by PCR and cloned under the transcriptional control of the C. bescii S-layer, Clo1313_1809, and C. thermocellum enolase (Cthe_0143) promoters. The P_S-layer-bdhA expression cassette containing a C-terminal 6X His-tag and a Rho-independent transcription terminator was cloned using plasmid pDCW89 as template to construct plasmid pSKW01 (Fig. 1a). Plasmids pSKW02 and pSKW04 are identical to pSKW01 except for the promoter region, containing the Clo1313_1809 and C. thermocellum enolase promoters, respectively (Fig. 1b, c).
Plasmid DNA was transformed into a pyrF deletion mutant of C. thermocellum [22] and transformants were selected for uracil prototrophy. The presence of the plasmid in transformants was confirmed by PCR analysis (Additional file 1: Figure S1A). Primers (SK04 and DC228) were used to amplify the portion of the plasmid containing the open reading frame of bdhA, annealing to regions of the plasmid outside the bdhA gene. The expected PCR product was detected for pSKW01 and pSKW02 transformants but not for pSKW04 (Additional file 1: Figure S1A), suggesting that pSKW04 might have integrated into the C. thermocellum chromosome. To test whether the plasmids were replicating autonomously, total DNA isolated from C. thermocellum transformants containing pSKW01 (JWCT06), pSKW02 (JWCT07) or pSKW04 (JWCT08) was used to back-transform E. coli. Transformants were obtained for DNA from JWCT06 and JWCT07 but not JWCT08, again suggesting that while the plasmid was present it was not autonomously replicating. Two different digestions by restriction endonucleases performed on plasmid DNA purified from two independent E. coli back-transformant colonies resulted in identical digestion patterns relative to the original plasmids (Additional file 1: Figure S2), indicating that the plasmids (pSKW01 and pSKW02) were autonomously replicating in C. thermocellum and were structurally stable during transformation and replication in C. thermocellum and back-transformation into E. coli. To test whether plasmid pSKW04 had integrated into the chromosome, PCR amplification using various primers inside and outside plasmid sequences was used. The enolase promoter sequence is the only region of homology between the plasmid and the chromosome and the only potential site for homologous recombination. PCR amplification using a primer set specific for the 0.1 kb upstream region of the enolase promoter in the chromosome (SK038) and for plasmid pSKW04 (DC461) (Additional file 1: Figure S1B) confirmed pSKW04 plasmid integration into the C. thermocellum chromosome at the enolase promoter region via a single crossover event.
To investigate expression of T. pseudethanolicus 39E BdhA in C. thermocellum, the JWCT06, JWCT07, and JWCT08 strains were grown in CTFUD-NY medium without uracil. Although the BdhA protein (44 kD) was difficult to visualize using Coomassie blue staining (Fig. 2a), it was clearly visible by Western hybridization analysis using monoclonal anti-His antibodies (Fig. 2b). Of the three different expression systems, the constructs containing the C. bescii S-layer and C. thermocellum enolase promoters resulted in the best protein expression. We emphasize that detection of a protein product is an assay that combines transcription efficiency, RNA stability and protein stability and is not a direct assay of the promoters themselves. While expression levels of the P_S-layer-bdhA cassette were similar throughout the mid-log and stationary phases, those of the P_enolase-bdhA cassette decreased slightly as cells entered stationary phase (Fig. 2b).
Effects of BdhA expression on the growth and tolerance of C. thermocellum to furan derivatives
Interestingly, strains expressing the bdhA gene grew significantly better than the control strain. Maximum optical densities of strains expressing BdhA were 26% (JWCT06, P value = 0.022) and 28% (JWCT08, P value = 0.036) higher than the control strain in standard CTFUD-NY medium without furan aldehydes (Fig. 3a). In addition, volumetric ethanol production of the JWCT06 and JWCT08 strains was 8% (P value = 0.030) and 13% (P value = 0.058) higher than the control strain, with no effect on cellobiose consumption, lactate, or acetate production (Fig. 3b). Previous studies reported that a complete loss of NADH-dependent activity by directed evolution of the AdhE enzyme, a bifunctional acetaldehyde-CoA/alcohol dehydrogenase, with concomitant acquisition of an NADPH-dependent activity, conferred increased tolerance to ethanol in C. thermocellum, which likely affected the maintenance of NADP/NADPH pools linked to membrane changes [23]. The BdhA enzyme expressed in this study is also an NADPH-dependent alcohol dehydrogenase that does not use NADH as a cofactor [12]. This study and earlier studies suggest that redox homeostasis plays an important role in the growth of C. thermocellum.
To investigate the effects of BdhA expression on the tolerance of C. thermocellum to furan derivatives, we also performed fermentation experiments in the presence of 10 mM furfural or HMF. These compounds are present at approximately these concentrations in dilute acid pretreatment hydrolysates [5,24]. As shown in Fig. 3c, conversion of furfural was rapid in all strains of C. thermocellum, but the strains expressing BdhA grew to significantly higher cell densities than the control strain. Maximum optical densities of strains expressing BdhA were 35% (JWCT06, P value = 0.011) and 30% (JWCT08, P value = 0.005) higher than the control strain in the presence of 10 mM furfural, without affecting the amount of cellobiose consumed and end product concentrations (Fig. 4a). While both strains expressing BdhA were significantly more efficient at conversion of HMF (Fig. 3d), for strain JWCT08 growth was significantly better than for either the control strain or strain JWCT06. Maximum optical densities of strains expressing BdhA were 54% (JWCT06, P value = 0.0033) and 84% (JWCT08, P value = 0.018) higher than the control strain in the presence of 10 mM HMF (Fig. 3d). Interestingly, conversion of HMF in strain JWCT06 was increased earlier and throughout the growth phase compared to either the control strain or strain JWCT08. The JWCT06 strain consumed 18% (P value = 0.030) more cellobiose and produced 29% (P value = 0.025) more ethanol than the control JWCT02 strain with HMF present (Fig. 4b). The JWCT08 strain consumed 24% (P value = 0.013) more cellobiose and produced 40% (P value = 0.016) more ethanol than the control strain (Fig. 4b). These results show that BdhA expression increases resistance to HMF relative to the control strain. Addition of HMF decreased ethanol production and increased acetate production in the control strain compared to growth in the medium without inhibitors (Figs. 3b, 4b). Expression of BdhA led to reduced inhibition of ethanol production, and we speculate that acetate production might lead to an additional ATP per acetate, partially relieving ATP depletion caused by furan derivatives [25]. As shown in Fig. 4a, b, the ethanol yield of the control JWCT02 strain in the presence of HMF was 21% (P value = 0.027) lower than that in the presence of furfural. In contrast to previous studies showing that the toxic effects of furfural are greater than those of HMF in other microorganisms [6,8,9,26], in this study HMF was more inhibitory than furfural to the growth of C. thermocellum.
Fig. 4 Comparison of fermentation products and in vitro reduction activity of furan derivatives by JWCT02, JWCT06, and JWCT08 strains. a, b Cellobiose consumed and fermentation products of C. thermocellum. JWCT02, JWCT06, and JWCT08 strains were grown in defined medium with 5 g/L cellobiose containing 10 mM furfural (a) or HMF (b). Results are the mean of duplicate experiments and error bars indicate s.d. c, d In vitro assays of reduction activity of furfural (c) or HMF (d). Crude protein extracts of JWCT02, JWCT06, and JWCT08 strains were assayed for reduction activity using NAD(P)H as cofactor. JWCT02, the parent control strain; JWCT06, containing P_S-layer-bdhA; JWCT08, containing P_enolase-bdhA. Results are the mean of triplicate experiments and error bars indicate s.d.
In vitro NADH- and NADPH-dependent conversion activity of strains expressing BdhA
Aldehydes are toxic to microbial cell growth, and cells convert these compounds to less toxic compounds such as alcohols and carboxylic acids. Previous studies reported that reduction activity was much higher than oxidation activity for detoxification of the furan derivatives [6,8,27]. To investigate the mechanism of conversion of furfural and HMF by strains expressing BdhA, we examined NADH- and NADPH-dependent activities of furfural and HMF reduction. Crude extracts from the BdhA-expressing strains, JWCT06 and JWCT08, and the control strain (JWCT02) were prepared, and in vitro reduction of furfural and HMF was measured. While the specific activities of crude extracts of all strains toward furfural were similar, BdhA expression increased the reduction of HMF by 59-69% relative to the control (Fig. 4c, d). This result is likely due to the specificity of the BdhA enzyme, as this enzyme is known to have a twofold higher activity on HMF than on furfural [12]. We concluded from these data that the improved detoxification rates of HMF upon BdhA expression (Fig. 3d) result from enhanced reduction of HMF.
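For readers unfamiliar with NAD(P)H-linked assays, a generic specific-activity calculation is sketched below. The assay parameters (absorbance slope, path length, protein loading) are hypothetical; only the NAD(P)H extinction coefficient at 340 nm (6.22 mM-1 cm-1) is a standard constant, and the exact assay conditions used in this study are not specified in the text.

```python
# Sketch of a standard NAD(P)H-linked reductase activity calculation.
EPSILON_340 = 6.22  # mM^-1 cm^-1, extinction coefficient of NAD(P)H at 340 nm

def specific_activity(dA340_per_min, path_cm, protein_mg_per_ml):
    """Return U/mg: umol NAD(P)H oxidized per minute per mg protein."""
    # Rate of NAD(P)H consumption in mM/min (equivalently umol/mL/min):
    rate_mM_per_min = dA340_per_min / (EPSILON_340 * path_cm)
    return rate_mM_per_min / protein_mg_per_ml

# Hypothetical assay: dA340 = 0.15/min, 1 cm cuvette, 0.5 mg/mL crude extract.
print(f"{specific_activity(0.15, 1.0, 0.5):.3f} U/mg")
```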
Bacterial strains, media, and culture conditions
Clostridium thermocellum and E. coli strains used in this study are listed in Table 1. All C. thermocellum strains were grown anaerobically in modified CTFUD medium [28], pH 7.0, with cellobiose (0.5% w/v) as the sole carbon source for routine growth and transformation experiments. C. thermocellum cells were grown at 60 °C under an atmosphere of 85% nitrogen, 10% CO2, and 5% hydrogen. For uracil auxotrophs, 360 µM uracil was supplemented. E. coli BL21 (Invitrogen, Grand Island, NY, USA) grown in LB medium with 50 μg/mL apramycin was used for plasmid constructions.
Construction and transformation of bdhA expression vectors
Plasmid DNA was isolated using a Qiagen Miniprep Kit (Qiagen, Valencia, CA, USA). Chromosomal DNA from C. thermocellum strains was extracted using the Quick-gDNA MiniPrep (Zymo Research, Irvine, CA, USA) according to the manufacturer's instructions. Plasmids used in this study were constructed using Q5 High-Fidelity DNA polymerase (New England BioLabs, Ipswich, MA, USA) for PCR reactions and restriction enzymes (New England BioLabs). Plasmid pSKW01 (Fig. 1a) was constructed in two cloning steps. First, the 2.8 kb Cthe0423 expression cassette containing the regulatory region of Cbes2303 (S-layer protein), a C-terminal 6X Histidine-tag, and a Rho-independent transcription terminator was amplified by PCR with primers DC460 (with an added PvuI site) and DC461 (with an added NotI site) using pDCW144 as template. A 7.7 kb DNA fragment containing the pSC101 replication origin for E. coli, a putative C. thermocellum replication origin, an apramycin resistance gene cassette (AprR) and a C. bescii pyrF cassette was amplified with primers DC481 (with an added PvuI site) and DC482 (with an added NotI site) using pDCW89 as template. These two linear DNA fragments were digested with PvuI and NotI, and ligated to construct a 10.5 kb intermediate vector, pDCW148. In the second step, a 7.9 kb DNA fragment was amplified with primers DC576 (with an added PstI site) and DC466 (with an added SphI site, a 6X Histidine-tag, and a stop codon) using pDCW148 as a template. A 1.2 kb DNA fragment containing the coding sequence of bdhA (Teth39_1597) was amplified with DC577 (with an added PstI site) and DC578 (with an added SphI site) using pDCW171 as template. These two linear DNA fragments were digested with PstI and SphI, and ligated to construct a 9.1 kb plasmid, pSKW01. Plasmids pSKW02 and pSKW04 are identical to pSKW01 except for the promoter regions (Fig. 1). To make this change, a 0.3 kb DNA fragment containing the regulatory region of Clo1313_1809 was amplified with primers SK07 (with an added PstI site) and SK36 (with an added AvrII site) using C. thermocellum LL1005 genomic DNA (gDNA) as template. The 9.0 kb DNA fragment of pSKW01 without the regulatory region of Cbes2303 was amplified with primers SK04 (with an added PstI site) and SK28 (with an added AvrII site). These two linear DNA fragments were digested with PstI and AvrII, and ligated to construct a 9.3 kb plasmid, pSKW02 (Fig. 1b).
In the case of plasmid pSKW04 (Fig. 1c), a 0.2 kb DNA fragment containing the enolase promoter region was amplified with primers SK19 (with an added PstI site) and SK26 (with an added AvrII site) using C. thermocellum LL1005 gDNA as template and used analogously to construct pSKW04. E. coli BL21 cells were transformed by electroporation in a 1-mm-gap cuvette at 1.8 kV, and transformants were selected for apramycin resistance. The sequences of all plasmids were verified by automated sequencing (Genewiz, South Plainfield, NJ, USA). Electrotransformation of C. thermocellum cells was performed as previously described [29]. Cultures, electro-pulsed with plasmid DNA (~0.5 μg), were recovered in CTFUD+C medium [29] at 60 °C. Recovery cultures were transferred to liquid CTFUD-NY medium [29] without uracil to allow selection of uracil prototrophs. Cultures were plated on solid CTFUD-NY media to obtain isolated colonies, and DNA was isolated from transformants. Taq polymerase (Sigma, St. Louis, MO, USA) was used for PCR reactions to confirm the presence of the plasmid. PCR amplification with primers (SK04 and DC228) outside the gene cassette on the plasmid was used to confirm the presence of the plasmid with the bdhA gene. In the case of the JWCT08 (ΔpyrF + pSKW04) strain, integration of plasmid pSKW04 after a single crossover in the enolase promoter region was verified by PCR amplification with primers SK038 (specific for the 0.1 kb upstream region of the enolase promoter in JWCT08 gDNA) and DC461 (specific for plasmid pSKW04). Primers used for plasmid constructions and confirmation are listed in Additional file 1: Table S1.
Preparation of cell lysates and western blotting
Clostridium thermocellum strains (JWCT02, JWCT06, JWCT07, and JWCT08) were grown to mid-log or stationary phase at 60 °C in 20 mL CTFUD-NY medium without uracil. Cells were harvested by centrifugation at 6000×g at 4 °C for 15 min, and cell pellets were washed using 50 mM Tris-Cl buffer (pH 8.0) and resuspended in Tris-Cl buffer to an OD600 of 20. Cells were lysed by boiling in the presence of SDS [30]. Cell-free extracts were electrophoresed in 4-15% gradient Mini-Protean TGX gels, which were either stained using Coomassie blue or transferred to PVDF membranes (Immobilon-P; EMD Millipore, Billerica, MA, USA) using a Bio-Rad Mini-Protean 3 electrophoretic apparatus and then probed with a His-tag (6xHis) monoclonal antibody (1:5000 dilution; Invitrogen, Grand Island, NY, USA) using the ECL Western Blotting Substrate Kit (Thermo Scientific, Waltham, MA, USA) as specified by the manufacturer.
Fermentations
To test tolerance to various compounds, cultures of the JWCT02, JWCT06, or JWCT08 strains were serially passaged every 24 h in 20 mL CTFUD-NY medium without uracil. After the second transfer, the final cultures were inoculated to an initial optical density (OD600) of 0.01. Batch fermentations were performed at 60 °C without agitation in 10 mL CTFUD-NY medium without uracil supplemented with either furfural or HMF at 0 or 10 mM concentrations. Optical cell density was monitored using a Jenway Genova spectrophotometer, measuring absorbance at 600 nm. | 2018-04-03T04:50:27.941Z | 2017-03-15T00:00:00.000 | {
"year": 2017,
"sha1": "018b36b13c05641726a95ee8b31ce036b5aa75eb",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-017-0750-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "747927335d2d7385746f71436b411f68f19cff16",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
119160223 | pes2o/s2orc | v3-fos-license | Generalized Jackiw-Rebbi Model and Topological Classification of Free Fermion Insulators
We present a new perspective to the classification of topological phases in free fermion insulators by generalizing the Jackiw-Rebbi model to arbitrary dimensions. We show that a generalized Jackiw-Rebbi model where the Dirac mass ($m$) satisfies $m(x)=-m(-x)$ is invariant under a parity transformation ($P$) that relates the $x>0$ half to the $x<0$ half. Determining the form of $P$ gives rise to a Clifford algebra that has been shown to give a complete topological classification of free fermion insulators. Gapless edge states are a natural consequence of our construction and their topological nature can be understood from the fact that all gapless edge states at a given interface transform similarly under $P$ (all odd or all even). A naive non-topological model for states confined to the interface will allow both even and odd states.
In the Landau paradigm, phase transitions are characterized by spontaneously broken symmetries. However, the discovery of integer quantum Hall states [1] showed that there are states of matter which are clearly distinct from each other but do not break any symmetry and, therefore, require a topological classification [2]. More recently, the theoretical prediction [3][4][5][6][7] and subsequent experimental discovery [8,9] of time reversal invariant topological insulators have added a new class of free fermion insulators where gapless edge states are protected by some symmetry of the system. Such topological insulators are smoothly connected to trivial insulators once the symmetry is broken and they come under a broad family called Symmetry Protected Topological (SPT) insulators.
SPT insulators are a very active topic of research, with a rapidly growing list of predicted SPT insulators [10][11][12][13][14]. The need to bring order to this zoo of SPT insulators has renewed interest in the topological classification of phases. Classification of SPT phases depends crucially on the symmetries and dimensionality of the system. In the case of effectively free fermion systems, two independent schemes have been developed recently: the K-theory approach by Kitaev [15] and the nonlinear σ model for disordered fermions by Schnyder et al. [16]. Both approaches provide a comprehensive classification of all topological insulators and superconductors.
In this paper, we present a new perspective on the classification of free fermion SPT insulators by reconnecting them to generalizations of the Jackiw-Rebbi model [17] to arbitrary dimensions. Historically, the Jackiw-Rebbi model is significant as one of the earliest theoretical descriptions of a topological edge state. There have been hints of a broader connection between the Jackiw-Rebbi model and topological insulators [18][19][20], but a systematic link has not been established yet. Here, we show a rigorous mapping of all classes of free fermion SPT insulators to a suitable Jackiw-Rebbi model. Its simplicity lends insight into the problem, and it is by construction well suited for determining the nature of topologically protected edge states.
The main results presented in this paper are: 1) A generalized Jackiw-Rebbi model where the Dirac mass (m) satisfies m(x) = −m(−x) is invariant under a parity transformation P which relates the x > 0 half to the x < 0 half. Determining the form of P yields a Clifford algebra identical to Kitaev's approach. So, for d ≥ 1 the classification of P leads to a complete topological classification of free fermion insulators. 2) The m > 0 and m < 0 halves of the generalized Jackiw-Rebbi model correspond to different topological phases in the same symmetry class. In other words, the interface between any two topologically different insulators in the same symmetry class can be represented by a suitable Jackiw-Rebbi model. 3) All symmetry protected gapless edge states at the interface transform similarly under P (all odd or all even). This explains their topological nature because only half of the possible states from a naive non-topological model for the interface are allowed as edge states. Once the form of P is fixed, the structure of edge states (e.g. spin-momentum locking) is completely determined.
Generalized Jackiw-Rebbi Model: The original model proposed by Jackiw and Rebbi [17] describes a one dimensional system where Dirac fermions are coupled to a soliton field. They showed that a kink in the soliton field,
Here we generalize the Jackiw-Rebbi model to arbitrary dimensions d, where m changes sign at the x = 0 interface as shown in the schematic diagram in Fig. 1. m is independent of the remaining (d − 1) physical dimensions. We require that m(x) = −m(−x) and choose m(x) > 0 for x > 0. The Dirac Hamiltonian describing such a system has the form where the γ matrices are in general complex Hermitian matrices and satisfy In order to describe insulators with time reversal symmetry and superconductors, the Majorana representation becomes important where the matrices are real and the Dirac Hamiltonian can be written as γ 0 is a real anti-symmetric matrix while the rest are real symmetric matrices. In the rest of the paper, {γ µ } will indicate the real case while {γ µ } will mean the complex case. Note that the real Majorana representation and the complex Dirac fermion representation are related by a basis transformation.
We focus on Dirac Hamiltonians because all gapped free fermion systems close to a topological phase transition can be described by the appropriate Dirac Hamiltonian in the continuum limit. The structure of the Dirac spinor is related to degrees of freedom like spin and band index relevant for the system.
Parity Transformation: We now show that we can always define a parity transformation P : x → −x under which the Dirac Hamiltonians in Eq. 1 and Eq. 3 are invariant. The interface at x = 0 maps onto itself under this operator. For the complex case in Eq. 1, consider the form P = iOγ^1 X, where X : x → −x acts on the coordinate space and Oγ^1 acts on the spinor space. For P to be a good symmetry of the model, we require that PHP = H and P^2 = 1, which yields the following constraints on O:

O = O†, O^2 = 1, (5)

Oγ^μ = −γ^μ O, μ = 1, ..., d. (6)

So, without loss of generality, we can choose O = γ^0, and the parity transformation which leaves the system invariant is given by

P = iγ^0 γ^1 X. (7)

For the real case, a similar analysis shows that the parity operator has the form

P̃ = γ̃^0 γ̃^1 X. (8)

Topological Classes: In Kitaev's K-theory approach to the topological classification of free fermion insulators, topological insulators with no additional symmetries are described by the complex Dirac equation in Eq. 1. The matrices γ^1, ..., γ^d in Eq. 2 form a representation of the complex Clifford algebra Cl_d(C), and the goal is to find an additional Clifford generator γ^0 which acts as the mass term and opens a gap. The space for γ^0 is C_{d mod 2} (see Table 2 of Ref. [15]) and the topological classification is given by the zeroth order homotopy group π_0(C_{d mod 2}), which is Z for even d and 0 for odd d. As shown above in Eq. 7, finding γ^0 for a given set of γ^μ (μ = 1, ..., d) is equivalent to finding the parity operator P. So, for the complex case, there is a one-to-one mapping between the topological classes and a generalized Jackiw-Rebbi model.
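For completeness, the invariance PHP^{-1} = H for the choice O = γ^0 can be checked explicitly; the short calculation below uses only the Clifford relations of Eq. (2), the oddness of m(x), and the fact that (iγ^0γ^1)^2 = 1 so that P^{-1} = P.

```latex
% Explicit check of P H P^{-1} = H for P = i\gamma^0\gamma^1 X:
\begin{aligned}
P H P^{-1}
&= -\,\gamma^0\gamma^1\,\big(XHX\big)\,\gamma^0\gamma^1\\
&= -\,\gamma^0\gamma^1\Big[{+}i\gamma^1\partial_x
   - i\sum_{\mu\ge 2}\gamma^\mu\partial_\mu - m(x)\gamma^0\Big]
   \gamma^0\gamma^1
   \qquad (X\ \text{flips}\ \partial_x\ \text{and}\ m)\\
&= -\,i\gamma^1\partial_x - i\sum_{\mu\ge 2}\gamma^\mu\partial_\mu
   + m(x)\gamma^0 \;=\; H .
\end{aligned}
```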
We now turn to the real Majorana representation in Eq. 3, which is useful for classifying insulators with additional symmetries like time-reversal (T) and/or charge conjugation (C). In general, let A_i, i = 1, ..., p + q, denote the symmetry operators of the system such that A_i^2 = 1 for i = 1, ..., p and A_{p+i}^2 = −1 for i = 1, ..., q. The matrix representations of the symmetry operators can be chosen so that they mutually anticommute and anticommute with the kinetic γ̃ matrices,

A_i A_j = −A_j A_i (i ≠ j), A_i γ̃^μ = −γ̃^μ A_i (μ = 1, ..., d).

Combining these requirements with the properties of the γ̃ matrices in Eq. 4, it is clear that {A_i}, i = 1, ..., p + q, and {γ̃^μ}, μ = 1, ..., d, form a representation of the real Clifford algebra Cl^q_{p+d}(R) with p + d positive generators and q negative generators. The space of insulators with symmetries {A_i} in d dimensions is then described by the space of the mass term γ̃^0, which is an additional negative Clifford generator. Table 3 of Ref. [15] shows that the space of γ̃^0 is R^q_{p+d} ≃ R_{(q−p−d+2) mod 8}, and the topological classification is given by π_0(R_{(q−p−d+2) mod 8}). Just as in the complex case, the task of finding γ̃^0 given A_1, ..., A_{p+q}, γ̃^1, ..., γ̃^d is equivalent to finding the parity operator defined in Eq. 8. So, the topological classification of symmetry protected free fermion insulators is equivalent to the classification of the appropriately represented P operator.
By construction, the Jackiw-Rebbi model is defined only for d ≥ 1, which leaves out the d = 0 case. While the d = 0 case is interesting, it does not give rise to any edge states. The focus of our paper is to develop a simple and yet powerful theory for the interface between different topological states.
Edge States: The structure of the Jackiw-Rebbi model is ideal for determining the properties of the edge states. First, we show that our construction guarantees the existence of gapless edge states. Then, we show that the edge states are topological in nature, which implies that the m > 0 and m < 0 halves represent different topological states in the same symmetry class.
For brevity, we solve for the edge states in the complex case only. The steps are identical in the real Majorana representation. Consider the following ansatz for the edge states:

ψ_{k⊥}(x, r⊥) = exp(i k⊥ · r⊥) exp(−∫_0^x m(x') dx') φ_{k⊥}, (9)

where k⊥ and r⊥ are the momentum and position vectors perpendicular to x. φ_{k⊥} is a spinor whose dimension is determined by the band and/or spin index. Plugging this ansatz into the Dirac equation in Eq. 1, we get

[ i m(x) γ^1 + Σ_{μ=2}^{d} k_μ γ^μ + m(x) γ^0 ] φ_{k⊥} = E φ_{k⊥}. (10)

For the zero energy mode, we set E = 0 and k⊥ = 0. After multiplying Eq. 10 on the left by γ^0 and using the definition of P in Eq. 7, we get

m(x) (P + 1) ψ_0 = 0. (11)

It is clear that ψ_0 must be an eigenstate of P with eigenvalue −1. Since P is a good symmetry of the Hamiltonian, all other edge states which are smoothly connected to ψ_0 as a function of k⊥ must also be odd eigenstates of P. The even eigenstates of P are not allowed as gapless edge states. Note that if we had chosen P = −iγ^0γ^1 X, which differs from the definition of P in Eq. 7 by a (−) sign, only the even eigenstates of P would be permissible. Let us compare the gapless edge states obtained from the generalized Jackiw-Rebbi model to the states of a massless Dirac Hamiltonian. If we set x = 0 and m(0) = 0 in Eq. 10, the massless Dirac Hamiltonian H_Edge = Σ_{μ=2}^{d} k_μ γ^μ in (d − 1) dimensions is the naive Hamiltonian for the edge states.
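Spelling out the step from Eq. (10) to Eq. (11): the spatial factor of the ansatz in Eq. (9) is even under X for odd m(x), so only the spinor part of P acts, and

```latex
% Setting E = 0, k_\perp = 0 in Eq. (10) and multiplying on the left by \gamma^0:
\gamma^0\big[\,i\,m(x)\,\gamma^1 + m(x)\,\gamma^0\,\big]\psi_0
  = m(x)\big(\,i\gamma^0\gamma^1 + 1\,\big)\psi_0
  = m(x)\,(P+1)\,\psi_0 = 0 .
```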
Using the properties of the γ matrices in Eq. 2 and the definition of P in Eq. 7, it is easy to show that [P, H_Edge] = 0, so we can find common eigenstates of P and H_Edge.
The requirement that only odd eigenstates of P are allowed as edge states projects out half of the eigenstates of H Edge . This illustrates the topological nature of the gapless edge modes. The m > 0 half is topologically different from the m < 0 half. In other words, the interface between any two topologically inequivalent free fermion insulators belonging to the same symmetry class can be represented by a suitable generalized Jackiw-Rebbi model. This general result provides us with a convenient method for determining the nature of topological edge states as demonstrated below with some simple examples.
The fact that half of the eigenstates of H_Edge are not allowed as edge states does not violate any fermion doubling theorem [21]. One can imagine periodic boundary conditions along x with periodicity 2L. Then, another kink in the mass term occurs at x = L, opposite in sign to the kink at x = 0, and the other half of the eigenstates of the interface Hamiltonian H_Edge are localized at the x = L interface.
Examples: We now demonstrate the usefulness of our approach by using two well known examples: 1) Integer Quantum Hall Effect (IQHE) and 2) Quantum Spin Hall Effect (QSHE). We construct the lowest dimensional matrix representations of each and indicate how it can be generalized to higher dimensional representations.
IQHE occurs in d = 2 and there is no time-reversal symmetry, so it belongs to the complex representation. The only symmetry is charge conservation or U(1) symmetry, which is trivially satisfied by the Dirac equation. In the lowest-dimensional representation, we can choose γ¹ = σ_x and γ² = σ_y; for this choice of γ¹ and γ², the only allowed form for the mass term is γ⁰ = σ_z. Here the σ's are the Pauli matrices. Setting v_f = 1, the Hamiltonian for the gapless edge states and the parity operator are then H_Edge = kσ_y and P = iγ⁰γ¹X = −σ_y X. Restricting the allowed edge states to the odd eigenstates of P selectively picks out the positive eigenstates of σ_y, i.e., the right-moving eigenstates of H_Edge. These are the chiral edge states of the integer quantum Hall states, which emerge naturally from our construction (see Fig. 2(a)). Generalizations to higher-dimensional representations can be achieved by constructing block-diagonal matrices. Keeping in mind that K-theory allows supplementing the Hamiltonian by a trivial piece, we can choose γ¹ = I_{n×n} ⊗ σ_x, γ² = I_{n×n} ⊗ σ_y, and γ⁰ = diag(I_{l×l}, −I_{m×m}) ⊗ σ_z with n = l + m, where I_{n×n} is the n × n identity matrix. This gives rise to l right-moving and m left-moving edge states. Note that the Pauli matrices together with the identity matrix exhaust the space of 2 × 2 Hermitian matrices, so the most general unitary transformation that leaves both γ¹ and γ² in Eq. 14 invariant has the form U ⊗ I_{2×2}, where U is any n × n unitary matrix. γ⁰ is not invariant under this transformation and becomes (U diag(I_{l×l}, −I_{m×m}) U†) ⊗ σ_z. Here, the unitary transformation only affects the first factor in γ⁰ and does not change its eigenvalues. Therefore, the difference in the number of right- and left-moving edge states (l − m ∈ ℤ) is a topologically invariant quantity.
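A few lines of Python make the chirality statement explicit. As before, this is an illustrative sketch rather than the paper's own material; the only input is the representation γ¹ = σ_x, γ² = σ_y, γ⁰ = σ_z quoted above, together with the reconstructed edge quantities H_Edge = kσ_y and spinor parity iγ⁰γ¹ = −σ_y (the X factor acts trivially at the x = 0 edge).

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

P = 1j * sz @ sx                        # spinor part of i gamma^0 gamma^1 X
assert np.allclose(P, -sy)              # equals -sigma_y
assert np.allclose(P @ P, np.eye(2))    # P^2 = 1

k = 0.7
H_edge = k * sy
assert np.allclose(P @ H_edge, H_edge @ P)   # common eigenstates exist

vals, vecs = np.linalg.eigh(sy)
for lam, v in zip(vals, vecs.T):
    p_eig = (v.conj() @ P @ v).real     # P eigenvalue of this state
    print(f"sigma_y = {lam:+.0f}: P = {p_eig:+.0f}, dE/dk = {lam:+.0f}")
# Only the P = -1 state survives as an edge state; it has dE/dk = +1,
# i.e. it is the right-moving chiral IQHE edge mode.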
Moving on to the next example, QSHE requires time-reversal symmetry in addition to the U(1) symmetry. We consider the BHZ Hamiltonian for HgTe/CdTe [4] but retain terms only up to linear order in k. Replacing k_μ by −i∂_μ gives the required Dirac equation with γ¹ = σ_z ⊗ σ_x, γ² = −I_{2×2} ⊗ σ_y, and γ⁰ = I_{2×2} ⊗ σ_z. The top 2 × 2 block corresponds to the conduction- and valence-band spin-up states, while the bottom 2 × 2 block describes the corresponding spin-down states; the two blocks are related by time-reversal symmetry. As in the case of the IQHE, we set v_f = 1 without loss of generality. The Dirac Hamiltonian for the edge states and the parity operator then take the form H_Edge = −k(I_{2×2} ⊗ σ_y) and P = iγ⁰γ¹X = −(σ_z ⊗ σ_y)X. Since both the edge Hamiltonian and the parity operator are block diagonal, we can look at each spin species separately. For the spin-down sector, or the lower 2 × 2 block, the odd eigenstate of P means the negative eigenstate of σ_y or, equivalently, the right-moving eigenstate of H_Edge. Similarly, for the spin-up sector, the odd eigenstate of P picks out the left-moving eigenstate of H_Edge. So, we have gapless edge modes consisting of right-moving spin-down states and left-moving spin-up states, as shown schematically in Fig. 2(b). Such helical edge states have been predicted for the QSHE [11], and they appear in a transparent way in our analysis. Higher-dimensional representations can be constructed in a block-diagonal fashion, very similar to the case of the IQHE.
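The same kind of check, under the same caveats, reproduces the helical structure. The 4 × 4 matrices below are exactly those quoted in the text; the edge Hamiltonian H_Edge = −k(I ⊗ σ_y) and spinor parity iγ⁰γ¹ = −σ_z ⊗ σ_y are reconstructions consistent with the stated conclusions.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

g1, g0 = np.kron(sz, sx), np.kron(I2, sz)
P = 1j * g0 @ g1
assert np.allclose(P, -np.kron(sz, sy))

k = 1.0
H_edge = -k * np.kron(I2, sy)
Sz = np.kron(sz, I2)                 # spin operator; commutes with H_edge, P

# Lift the twofold spin degeneracy with a tiny Sz term so that eigh returns
# simultaneous eigenstates of H_edge, Sz, and P.
vals, vecs = np.linalg.eigh(H_edge + 1e-6 * Sz)
for v in vecs.T:
    E = (v.conj() @ H_edge @ v).real
    spin = (v.conj() @ Sz @ v).real
    if (v.conj() @ P @ v).real < 0:              # keep odd P-eigenstates
        print(f"spin {spin:+.0f}: {'right' if E > 0 else 'left'} mover")
# Output: a right-moving spin-down mode and a left-moving spin-up mode,
# i.e. the helical QSHE edge pair described in the text.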
If we focus on one spin sector, it consists of l negative- and m positive-helicity states. While the edge states are always helical, the sign of the helicity is not a robust quantity. Both γ¹ and γ² in Eq. 17 are invariant under a unitary transformation of the form U ⊗ exp(iθ σ_y ⊗ σ_y), where U is an n × n unitary matrix. For θ = π/2, γ⁰ is transformed in a way that exchanges the number of positive- and negative-helicity edge states. The topologically invariant quantity in this case is whether the number of edge states is odd or even ((l + m) mod 2 ∈ ℤ₂).
In the current analysis of the QSHE, we have used the complex representation. It can equivalently be done in the real basis by choosing the Majorana operators to be η₁σ(k) = c_σ(k) + c†_σ(k) and η₂σ(k) = −i(c_σ(k) − c†_σ(k)). The simplest representation of the real γ̃'s consists of 8 × 8 matrices. While the real representation is important for classification, it is not crucial for determining the edge states. In fact, the lower-dimensional complex representation is much simpler to deal with, and the physical interpretation of the edge states is easier.
In conclusion, we have established that there is a deep connection between the symmetry protected free fermion topological insulators and generalized versions of the Jackiw-Rebbi model. Every pair of topologically different insulators belonging to the same symmetry class can be mapped to a Jackiw-Rebbi model where m(x) = −m(−x). We have defined a parity operator (P ) which maps the x < 0 half to the x > 0 half and is a good symmetry of the model. The topological classification of symmetry protected free fermion insulators in d ≥ 1 is equivalent to the classification of the P operator. The simplicity of the model provides insights into the topological nature of the edge states. Our analysis yields a general scheme for determining the structure of gapless edge states. One simply needs to find the common eigenstates of the massless Dirac Hamiltonian for the edge and P , and keep only the odd eigenstates of P . While P plays a crucial role in our construction, any smooth deformation which breaks P -symmetry without closing the bulk gap does not change the topology of the states. In this paper, we have illustrated the usefulness of our approach by using only two of the well known examples (IQHE and QSHE). We hope to extend the analysis to a larger class of symmetry protected topological insulators in the future. | 2014-06-02T20:00:01.000Z | 2014-06-02T00:00:00.000 | {
"year": 2014,
"sha1": "7b8c8e034fec2465d89fd49c6452727b6a0561c2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7b8c8e034fec2465d89fd49c6452727b6a0561c2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
252170421 | pes2o/s2orc | v3-fos-license | Whole-Genome Profile of Greek Patients with Teratozoospermia: Identification of Candidate Variants and Genes
Male infertility is a global health problem that affects a large number of couples worldwide. It can be categorized into specific subtypes, including teratozoospermia. The present study aimed to identify new variants associated with teratozoospermia in the Greek population and to explore the role of the genes on which these were identified. For this reason, whole-genome sequencing (WGS) was performed on normozoospermic and teratozoospermic individuals, and after selecting only the variants found in teratozoospermic men, these were further prioritized using a wide range of tools, functional and predictive algorithms, etc. An average of 600,000 variants were identified, and of them, 61 were characterized as high impact and 153 as moderate impact. Many of these map to genes previously associated with male infertility, yet others are related for the first time to teratozoospermia. Furthermore, pathway enrichment analysis and Gene Ontology (GO) analyses revealed the important role of the extracellular matrix in teratozoospermia. Therefore, the present study confirms the contribution of genes studied in the past to male infertility and sheds light on new molecular mechanisms by providing a list of variants and candidate genes associated with teratozoospermia in the Greek population.
Introduction
Infertility is defined by the World Health Organization (WHO) as the failure to conceive after at least 12 months of regular and unprotected sexual intercourse [1]. It is considered a major health problem that affects the couple's psychology and social life [2,3], while at the same time causing a significant economic burden on the health care system and on patients [4]. Moreover, it is estimated that more than 186 million people are affected worldwide [5], and in half of these cases, after a thorough examination, a male cause appears to be present alone or in conjunction with female causes [6]. Based on semen analysis and on defects associated with sperm quality or quantity, several subtypes of male infertility can be defined, such as asthenozoospermia, oligozoospermia, or teratozoospermia [7]. Specifically, when less than 4% of spermatozoa in semen have normal morphology, the sample is, according to the WHO, characterized as teratozoospermic.
Male infertility is considered a multifactorial disorder [8], and it is estimated that genetic factors are involved in 15% of cases or more [9]. Although extensive research has resulted in significant advances in the field, the identification of specific genes and mutations is a great challenge, as more than 2000 genes are required for spermatogenesis alone, a crucial process for fertility [9,10]. Especially for particular subtypes of male infertility, such as teratozoospermia, detection of causal mutations that lead to specific defects in sperm parameters remains limited [8,10]. Furthermore, the increasingly frequent use of assisted reproductive technologies (ART) reflects the progress achieved in the area of male infertility, but ART outcomes have been observed to differ between ethnic groups, probably because different genetic factors contribute to infertility among populations [11,12]. However, the impact of ethnic differences on male infertility is not adequately addressed in research [12]. Today, whole-genome sequencing (WGS) enables the accurate and cost-effective identification of common and even rare variants observed in specific geographical populations, variants that are often not captured by Genome-Wide Association Studies (GWAS) and SNP arrays [13] and that have an impact on phenotypic variation for a wide range of diseases [14,15].
Thus, this study aimed to perform WGS in teratozoospermic and normozoospermic individuals of the Greek population (a) to identify and further characterize variants, including rare variants, that can contribute to the pathogenic phenotype, and (b) to highlight genes potentially linked with teratozoospermia, a specific subtype of male infertility, and to explore their role. The ultimate goal of our study was to provide a valuable reference for future male infertility research, particularly regarding teratozoospermia. In this way, the detection of mutations contributing to the infertile phenotype in ethnic minorities may be of utmost importance for unraveling the genetic basis of male infertility by identifying new or rare variants and studying the genes on which they are found, and for improving the diagnosis of teratozoospermia, as well as the chances of successful ART.
More specifically, with our study, which is focused on the Greek population, as there are no other studies of the genetic background of teratozoospermia in the Balkans, we attempt to (a) provide a roadmap for future studies investigating different ART outcomes due to different genetic backgrounds among populations, and perhaps improve the chances of successful ART for Balkan populations, and (b) provide preliminary information about genetic causes of male infertility specifically found in Balkan populations, enabling comparison with other populations, e.g., Chinese, African, etc., in order to investigate the mechanisms causing teratozoospermia.
Selection of Patients and Biological Material
For this study, human blood and sperm samples were collected from volunteers in cooperation with the "Embryolab IVF Unit" (55134 Thessaloniki, Greece) for the Spermogene research program. Ethical approval was obtained from the Ethics Committee, University of Thessaly (38221 Volos, Greece), and all individuals gave written informed consent.
All the volunteers recruited underwent an andrological examination, and semen analysis was performed on samples derived from all of them. It should be noted that sperm samples were collected via masturbation after at least two to three days of abstinence from sexual intercourse. For semen analysis (seminogram), cell vision counting slides (Tek-Event) were used for cell counting, and observation was conducted on Nikon Eclipse TS100, Nikon Eclipse E200, and Nikon Eclipse Ts2 microscopes. Semen analysis was performed according to WHO guidelines (fifth edition, 2010, https://apps.who.int/iris/handle/10665/44261 (accessed on 26 June 2022)) that include information on assessing semen volume, sperm count, motility, morphology, etc. The reference values proposed in this edition were used to define the normozoospermic and teratozoospermic phenotypes.
Moreover, the inclusion criterion for this study was Greek ethnicity, as research shows that infertility exhibits racial differences [16,17] and ART outcomes differ between populations [11,12], while there is limited information regarding specific variants and polymorphisms associated with male infertility for ethnic groups of the Balkans, especially for the Greek population and teratozoospermia. Therefore, place of birth and relevant data were collected from the volunteers through a questionnaire that they were required to fill in along with the consent form. Demographic information on the individuals enrolled in this study is presented in Table 1.
Sample Preparation and Whole-Genome Sequencing
Genomic DNA was extracted from blood samples of teratozoospermic and normozoospermic individuals using the PureLink Genomic DNA Mini Kit (Invitrogen, Waltham, MA, USA-Catalog number: K182002) according to the manufacturer's instructions. DNA quality and quantity were assessed by agarose gel electrophoresis and by a Qubit 2.0 fluorometer using the Qubit dsDNA BR Assay Kit (Invitrogen, Waltham, MA, USA-Catalog number: Q32850), respectively. After that, three sequencing pools were created. More specifically, DNA extracted from ten normozoospermic individuals was used for two pools (five individuals per pool), and similarly, the third pool contained pooled DNA from five teratozoospermic individuals. The DNAs were mixed in equimolar amounts for each pool at a final concentration of 100 ng/µL and a final quantity of 2 µg.
Once their preparation was completed, the DNA samples were shipped to Novogene (Cambridge, UK), where 100-bp paired-end libraries were constructed and sequenced on an Illumina HiSeq 3000 at a mean sequencing coverage of 30×. For the analysis of the resulting FASTQ files, the quality of the reads was first evaluated using FastQC (available online at: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/, accessed on 26 June 2022), and then low-quality reads (minimum PHRED score: 30), as well as adapter sequences, were discarded using Trimmomatic [18]. After quality control, the reads were aligned to the human reference genome (GRCh37/hg19) retrieved from the Ensembl database [19] using the Burrows-Wheeler Aligner (BWA) [20]. Duplicate reads produced by polymerase chain reaction (PCR) were marked and removed with Picard tools before further analysis. SAM files of the alignment were then converted to BAM files using SAMtools [21], and the individual BAM files for the two normozoospermic pools were merged, also with SAMtools, to create one file representing the normozoospermic individuals. Variant calling was performed using freeBayes [22], and the results were stored in variant call format (VCF). BCFtools [21] was then used to compare the VCF files from normozoospermic and teratozoospermic individuals to detect unique variants present in only one of the two groups, i.e., not shared between normozoospermic and teratozoospermic individuals. Further analysis was performed for variants found only in men diagnosed with teratozoospermia, as described earlier, because the aim was to identify variants and/or polymorphisms in teratozoospermic men that have the potential to contribute to the pathogenic phenotype and may cause teratozoospermia in the Greek population. Moreover, as they exist only in patients, they could be used for the development of teratozoospermia biomarkers in the Greek population for effective diagnosis as well as for ART outcome improvement.
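A minimal sketch of this pipeline, written as Python subprocess calls, is given below. It is illustrative only: file names are placeholders, the reference is assumed to be BWA-indexed, and options not reported in the text (exact Trimmomatic trimming parameters, thread counts, Picard arguments) are assumptions.

import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

ref = "GRCh37.fa"   # Ensembl GRCh37/hg19 reference, pre-indexed with bwa index

# 1. Read quality control and trimming (minimum PHRED score 30).
run("trimmomatic PE reads_1.fastq.gz reads_2.fastq.gz "
    "trim_1.fq.gz unpaired_1.fq.gz trim_2.fq.gz unpaired_2.fq.gz "
    "SLIDINGWINDOW:4:30 MINLEN:36")

# 2. Alignment with BWA-MEM, piped into coordinate sorting with SAMtools.
run(f"bwa mem -t 8 {ref} trim_1.fq.gz trim_2.fq.gz"
    " | samtools sort -o pool.sorted.bam -")

# 3. Marking and removing PCR duplicates with Picard.
run("picard MarkDuplicates I=pool.sorted.bam O=pool.dedup.bam "
    "M=dup_metrics.txt REMOVE_DUPLICATES=true")
run("samtools index pool.dedup.bam")

# 4. Variant calling with freeBayes; compress and index the VCF.
run(f"freebayes -f {ref} pool.dedup.bam > pool.vcf")
run("bgzip pool.vcf && bcftools index pool.vcf.gz")

# 5. Variants private to the teratozoospermic pool: bcftools isec -C keeps
#    records present in the first file but absent from the second.
run("bcftools isec -C -p private_terato terato.vcf.gz normo.vcf.gz")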
Following the detection of unique variants for teratozoospermic individuals, annotation was performed using the VEP tool (available at https://www.ensembl.org/Tools/VEP, accessed on 26 June 2022), provided by the Ensembl database. Furthermore, a list of databases, software tools, and additional prediction algorithms were used to retrieve more biological information and prevent bias in the filtering and prioritization of variants that were performed afterward, since it is common for potentially disease-relevant mutations to be ignored due to inadequate annotations [23]. Among them, the Single Nucleotide Polymorphism Database (dbSNP) [24], 1000 Genomes Project [25], and Genome Aggregation Database (gnomAD) [26] were used to obtain information about allele frequencies and the identification of rare/novel variants. Further analysis was carried out to predict the effect of variants on protein functionality or their potential pathogenic effect using Polymorphism Phenotyping v2 (PolyPhen2) [27], Sorting Intolerant From Tolerant (SIFT) [28], Combined Annotation Dependent Depletion (CADD) [29], and MutationAssessor [30].
Variant Prioritization
In the present study, the variants found only in teratozoospermic individuals were filtered to prioritize those that are more likely to be involved in the occurrence of this specific subtype of male infertility in the Greek population and contribute to the unique genomic profile of Greek patients with teratozoospermia. The corresponding genes on which the variants are mapped were also studied to explore their role. The prioritization was performed as follows (Figure 1). (a): High impact variants [31]. At first, protein-truncating variants (PTVs), which lead to a truncated protein or its complete absence and can therefore have a severe consequence on protein function [32], were selected, as they have been associated with the causation of several diseases according to studies [33]. The PTVs that were prioritized in this study included nonsense single-nucleotide variants (SNVs), frameshift insertions or deletions (indels), splice-disrupting variants, and start-loss SNVs. Common variants with an allele frequency > 0.05 according to data retrieved from the 1000 Genomes Project [25] for the European population, as well as from the gnomAD database [26] for Non-Finnish Europeans, were excluded. This filter was applied because the aim was to identify rare variants: according to studies, rare variants can help shed light on common diseases, as they are not found frequently in the population and are therefore more likely to be associated with pathogenic phenotypes [13]. It should also be noted that information about allele frequencies was obtained only for the geographical regions mentioned above, as this study is strictly focused on the Greek population. In addition, as an extra filter, a CADD score > 10 was used to further prioritize the above variants; any variant with a CADD score > 10 is considered to be in the top 10% of the human genome's likely functional and harmful variants.
(b): Moderate impact variants [31]. Moderate impact variants, including inframe indels, missense, and protein-altering variants, though non-disruptive, have the potential to affect a protein's effectiveness [31]. Of these, we selected missense variants for study, as they have been more often associated with complex diseases [34]. In addition, missense variants that can affect protein functionality and that have a low frequency in a population may represent the "missing link" in explaining inheritable diseases [35]. Therefore, missense variants were chosen and, to assess their effect on protein function, structure, and conservation, bioinformatics prediction tools were used. More specifically, only variants with a SIFT score [28] ≤ 0.05, a PolyPhen2 score [27] ≥ 0.8, and a medium or high functional impact based on evolutionary conservation as assessed by MutationAssessor [30] were included in the analysis. The criteria regarding CADD score and allele frequency were applied as for the high impact variants.
In all the above cases, novel variants that are not listed in the existing databases (Ensembl, 1000 Genomes, and gnomAD) were also included in this study. A sketch of these prioritization filters, expressed as code, is given below.
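As an illustration of how these criteria translate into code (a sketch, not the study's actual script), the function below classifies one VEP-annotated record. The column names in the assumed tab-separated export (Consequence, CADD_PHRED, EUR_AF, SIFT, PolyPhen, MutationAssessor, SYMBOL) are hypothetical; the thresholds are those stated above, and records with no allele-frequency entry are treated as novel and retained.

import csv

HIGH_IMPACT = {"stop_gained", "frameshift_variant", "start_lost",
               "splice_acceptor_variant", "splice_donor_variant"}

def prioritize(row):
    """Return 'high', 'moderate', or None for one annotated variant."""
    af = float(row["EUR_AF"]) if row["EUR_AF"] not in ("", ".") else 0.0
    if af > 0.05:
        return None                         # exclude common variants
    if float(row["CADD_PHRED"]) <= 10:
        return None                         # keep likely deleterious ones
    if row["Consequence"] in HIGH_IMPACT:
        return "high"
    if (row["Consequence"] == "missense_variant"
            and float(row["SIFT"]) <= 0.05
            and float(row["PolyPhen"]) >= 0.8
            and row["MutationAssessor"] in ("medium", "high")):
        return "moderate"
    return None

with open("terato_private_variants.tsv") as fh:
    for row in csv.DictReader(fh, delimiter="\t"):
        label = prioritize(row)
        if label:
            print(label, row["Consequence"], row.get("SYMBOL", ""))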
Furthermore, Gene Ontology (http://www.geneontology.org/GO, accessed on 26 June 2022), KEGG (http://www.genome.jp/kegg/, accessed on 26 June 2022), and GeneCards (https://www.genecards.org/, accessed on 26 June 2022) were used to investigate the role of the genes in which the prioritized variants were found and to identify pathways involved in teratozoospermia. STRING (https://string-db.org/, accessed on 26 June 2022) was also used to study interactions and correlations between proteins encoded by these genes. Genes with more than two prioritized variants were identified, as they may be more likely to be involved in the pathogenesis process and teratozoospermia.
Finally, Genotype-Tissue Expression (GTEx) database was used in an attempt to explore how the prioritized variants affect gene expression. GTEx Program is a database including information on the relationship between genetic variants and gene expression in multiple human tissues enabling among others the identification of expression quantitative trait loci (eQTLs) [36]. eQTLs are genomic loci that explain at least a fraction of the genetic variance of a gene expression phenotype and thus, can provide valuable information about the role of variants and their effect on phenotype [25].
Results
In brief, in this study, blood samples from normozoospermic and teratozoospermic individuals were used for DNA extraction. After sample preparation, whole-genome sequencing and data analysis were performed in order to identify unique variants that are present only in teratozoospermic individuals and thus can contribute to the pathogenic phenotype or have the potential to be used as biomarkers. Variant prioritization was then performed by applying specific filters and selecting high- and moderate-impact variants that have the greatest possibility of affecting protein function and playing a role in teratozoospermia according to bioinformatics tools. Therefore, the identified high and moderate impact variants are presented in this section, along with the results of the analyses performed to investigate the role of the genes on which these were found, including GO annotation, KEGG enriched pathways, protein-protein interactions, etc.
Variant Calling and Annotation of WGS Data
After whole-genome sequencing, data analysis was performed. More specifically, the comparison between normozoospermic and teratozoospermic individuals to detect unique variants found in only one of the two groups revealed 617,722 variants specifically observed in teratozoospermic men, while 2,342,243 variants were present only in normozoospermic men. The variants were mapped in 34,603 and 22,022 genes and characterized noncoding regions (miRNA genes, lncRNA genes, etc.) in normozoospermic and teratozoospermic males, respectively.
In the present study, only the variants identified in teratozoospermic individuals were selected for further analysis and prioritization, as the objective was to detect and investigate variants that contribute to the genomic profile of Greek patients with teratozoospermia. Of the 617,722 variants found in teratozoospermic men, 17.29% were mapped in intergenic regions, while the largest proportion (74.15%) was in intronic regions. The remaining variants were mapped in protein-coding regions and, more specifically, 0.62% and 1.07% of all variants were found in the 3' UTR and 5' UTR regions, respectively. Moreover, 0.40% of the unique teratozoospermic variants were also synonymous.
High Impact Variants Identification
The high impact category included start-loss, nonsense, frameshift, and splice-disrupting variants. Thus, 175 variants were first characterized as high impact, and further filters were then applied to prioritize them.
More specifically, among the 175 high-impact variants detected, 36 were nonsense mutations. Variants with a CADD score of less than 10, as well as variants with an allele frequency in the European population greater than 5%, were removed; thus, 20 variants were identified (Table S1) and selected for further pathway enrichment analysis in the next steps. Regarding start-loss mutations, 3 variants specific to teratozoospermic individuals were detected in the WGS analysis, but after the filters referred to above were applied, none of them was prioritized. However, 41 frameshift variants were found to be unique to teratozoospermic men, and after prioritization, 22 frameshift mutations (Table S2) were identified. Finally, the largest number of high impact variants were splice-disrupting mutations, since 48 were identified in teratozoospermic men. After filtering, 19 splice-disrupting variants were used for further pathway enrichment analysis (Table S3).
The prioritized variants are presented at the whole-genome level in Figure 2.
Moderate Impact Variant Identification
Inframe indels, missense, and protein-altering variants can be characterized as moderate impact because they potentially affect protein function [31]. In the present study, among the variants found only in teratozoospermic individuals, 2141 moderate impact SNVs and indels were identified.
Then, to perform further prioritization, after applying the criteria described in the previous section (CADD Score > 10, Allele frequency (Europe) < 5%, SIFT Score ≤ 0.05, Polyphen2 Score ≥ 0.8, and medium/high impact according to MutationAssessor), 153 missense mutations were identified as the most likely to be associated with teratozoospermia in the Greek population (presented in Table S4).
Gene Investigation and Enrichment Analysis
After selecting all prioritized variants (total number: 214) as explained above, the next step was to evaluate the role of the genes on which these are found and to explore their potential contribution to male infertility and, particularly, teratozoospermia. More specifically, the full list of genes, as well as their description according to GeneCards [38], is presented in Table S5.
Based on these results, top genes with more than two prioritized variants from different categories (nonsense, frameshift, splice-disrupting, missense) can also be identified, as the accumulation of mutations may lead to a greater impact on protein function. These genes are presented in Table 2. In addition, KEGG enrichment analysis was performed for all the genes identified above. The extracellular matrix (ECM)-receptor interaction pathway was found to be the most gene-enriched and significant pathway, as presented in Table 3. GO was also used to group genes by functional categories based on GO terms for biological processes, cellular components, and molecular functions. As our gene set was rather small, we performed gene ontology annotation instead of gene ontology enrichment in order to obtain reliable results and as much biological information as possible. The most important groups for all three categories are presented in Figure 3A-C. As observed, GO Cellular Component analysis revealed many molecules located in the extracellular region, whereas important categories identified among GO Biological Processes were morphogenesis of anatomical structures and reproduction. Regarding molecular function, the most important GO terms can be considered "molecular transducer", "signaling receptor", and transferase activity.
The STRING database was then used to study the network of the respective proteins of the genes identified above, as protein interaction networks can provide useful information on the molecular basis of complex diseases [39], such as male infertility (Figure 4). The network contained 202 nodes and 120 edges, while coiled-coil proteins were found to be enriched according to Uniprot [40] (Table 4).
Figure 3. Grouping of genes on which the prioritized variants were found according to GO terms for Molecular Function (A), GO terms for Cellular Component (B), and GO terms for Biological Process (C). The horizontal axis represents the GO terms and the vertical axis the number of genes annotated to each GO term.

Table 4. Results of the functional enrichment analysis of the network according to UniProt. The count in network indicates how many proteins in our network are annotated with the term "coiled coil" out of the total number of proteins assigned to the same term. Strength is calculated as log10(observed/expected) relative to the number of proteins expected to carry this annotation in a random network, and FDR (false discovery rate) is calculated after multiple testing correction according to Benjamini and Hochberg.

After that, we also used MCODE [41], a plug-in optimized to detect clusters in a network (a simplified code sketch of this step is given below). More specifically, three clusters were calculated according to k-core = 2. Cluster 1 (Figure 5) contained 6 nodes and 12 edges and had the highest score among the identified clusters; the other clusters found in the network are presented in Table 5.

Finally, to investigate how the prioritized variants can affect gene expression, GTEx was used. It was revealed that some of them alter gene expression, particularly in testicular tissue. The results are presented in Table 6. In brief, the main results of the present study are presented in Figure 6.
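MCODE itself combines vertex weighting with seeded cluster growth; as a simplified, hedged stand-in for the k-core = 2 setting mentioned above, the sketch below extracts the 2-core of a protein-interaction graph with networkx and reports its connected components as candidate clusters. The edge list is a small hypothetical excerpt, not the actual STRING network of this study.

import networkx as nx

# Illustrative edges among genes discussed in this paper; the interactions
# themselves are placeholders, not STRING data.
edges = [
    ("LAMA3", "LAMB1"), ("LAMB1", "CCDC9B"), ("CCDC9B", "LAMA3"),
    ("DNAH1", "DNAH10"), ("DNAH10", "DNAL4"), ("DNAL4", "DNAH1"),
    ("BRCA2", "HSF2BP"),
]
G = nx.Graph(edges)

core2 = nx.k_core(G, k=2)     # every retained node has degree >= 2 in the core
for i, nodes in enumerate(nx.connected_components(core2), start=1):
    sub = core2.subgraph(nodes)
    print(f"cluster {i}: {sorted(nodes)} "
          f"({sub.number_of_nodes()} nodes, {sub.number_of_edges()} edges)")
# BRCA2-HSF2BP drops out of the 2-core (both have degree 1), while the two
# triangles survive as candidate clusters.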
Figure 6. The main results of the present study. As presented, after WGS, 617,722 variants were identified only in teratozoospermic individuals. These were further prioritized to identify those with the greatest possibility to contribute to the pathogenic phenotype. Thus, 20 nonsense, 22 frameshift, and 19 splice-disrupting variants were characterized as high impact variants, being the most likely to affect protein function. A total of 153 missense variants were also identified. Further investigation of the genes on which these were found revealed that two genes carry more than one mutation in teratozoospermic individuals, and most genes play a role in the extracellular matrix-receptor interaction. In addition, many protein interactions were identified, and some of the variants can affect gene expression.
Discussion
In the present study, a WGS approach was used to identify and examine variants associated with clinical phenotypes of teratozoospermia. These may be involved in the molecular basis of teratozoospermia and have the potential to be used to improve the chances of successful ART or the diagnosis of male infertility in specific populations, since significant differences are observed and different genetic factors seem to contribute to infertility among ethnic groups [11,12]. Thus, to fill the gap regarding knowledge on polymorphisms associated with male infertility for Balkan populations, this study focused specifically on the Greek population and teratozoospermia.
Since next-generation sequencing (NGS) technologies provide massive amounts of data, a thoughtful approach should be followed to filter and select functional variants that are more likely to contribute to the pathogenic phenotype [42]. For this reason, this study focused only on protein-coding variants, and after selecting variants found specifically in teratozoospermic individuals, a pipeline was developed that integrated the methodology of Juhari et al. (2021) [43] to prioritize high- and moderate-impact variants involved in teratozoospermia in the Greek population.
Genes Associated with Male Infertility
In the current study, we discovered high-impact variants specifically found in teratozoospermic men that were mapped in approximately 60 genes (Table S2). These should be of primary interest and focus in future disease-related studies as they can directly affect protein function [44]. Furthermore, the number of moderate impact or missense variants detected was higher, and these are also of great importance and could provide a list of candidate genes for future studies since they can affect protein structure and functionality leading to diseases [45].
Interestingly, many of our candidate genes have been associated in the past with abnormal spermatogenesis, sperm defects, or particular types of male infertility. Among the proteins required for successful fertilization are zonadhesin, involved in the species-specific adhesion of sperm to the egg zona pellucida [46], and DCXR, also called "sperm surface protein P34H" [47]. Furthermore, CREBP has been suggested to play a role in azoospermia [48], and FYCO1 was recently observed to be involved, through autophagy, in the regulation of the chromatoid body, which is crucial for spermatogenesis [49].
Genes with Potential Role in Male Infertility and Teratozoospermia
For many of the highlighted genes with prioritized variants, there are indications of their involvement in male infertility, but further research is needed to investigate their exact role. Some of them are centrosomal protein 170 (CEP170) [50]; gametogenetin (GGN), a testis-enriched gene [51]; ghrelin and obestatin, identified in human semen and the male reproductive system in general [52,53]; Janus kinase 1 (JAK1), found in the midpiece of spermatozoa and activated during capacitation [54,55]; the lipopolysaccharide-binding protein, which is found in sperm and in the tail of spermatozoa [56]; and tyrosine kinase 2 (TYK2), which is active in human sperm [57] and plays a role in crucial signaling pathways. TET2 is also expressed in the cytoplasm of late pachytene spermatocytes of Stage V, and there are indications that its expression levels are associated with fertility status and sperm parameters [58].
In addition, several of the highlighted genes have been associated with spermatogenesis process or male fertility in other species. These genes should be of primary interest for future studies, as genes involved in spermatogenesis are highly conserved between species [59], and maybe similar molecular patterns are also involved in the presence of male infertility in humans.
This study is also of particular interest because it draws attention to genes that have not been directly associated in the past with teratozoospermia or male infertility but are part of gene families that have members involved in the spermatogenesis process and reproduction in men. More specifically, laminins are important components of testicular basement membranes, and many laminin genes have been characterized as essential for normal testicular function [79]; variants in laminin subunits (LAMA3, LAMB1) have also been identified in the present study. Another protein family with a role in a variety of pathological mechanisms is the Coiled-Coil Domain-Containing (CCDC) proteins. Studies suggest that many members of this family, including CCDC42, CCDC9, and CCDC87, are required for the fertilization capacity of males and have been associated with abnormal formation of the sperm flagella [80][81][82]. Coiled-coil proteins were also found to be enriched in the network produced using STRING; thus, CCDC9B is a very promising candidate gene for teratozoospermia. ANK2 may also be a gene of interest, as ankyrins are a family of proteins linking membrane and submembranous cytoskeletal proteins that play an important role in many cellular functions [83], and a related gene seems to be involved in reproduction, as Ankrd31−/− mice are infertile, possibly due to deregulation of the blood-epididymal barrier [84]. Other such genes are CMTM3, a member of the chemokine-like factor (CKLF)-like MARVEL transmembrane domain-containing family (CMTM) [85]; AKAP8, a member of the A-kinase anchoring proteins (AKAPs) [86]; and HSD17B14, as the hydroxysteroid (17-beta) dehydrogenase (HSD17B) gene family has a role in steroid hormone biosynthesis, and deficiencies in such genes can even lead to sex development disorders [87]. Furthermore, GATA6 is a candidate gene of importance, since GATA transcription factors play a crucial role in mammalian reproduction [88], whereas KLK13 is part of a protein family involved in semen liquefaction [89] that has the potential to affect sperm quality [90].
Other candidate genes may also be of particular importance for future studies as they interact with proteins involved in reproduction. For example, PPP1R15B has been found to interact with Protein Phosphatase 1 which plays a key role in the spermatogenesis process and has been shown to affect sperm motility [91].
The Special Case of BRCA2
It should also be noted that two different variants (missense and nonsense) found only in teratozoospermic individuals were identified in BRCA2. This gene has been associated with idiopathic cases of infertility characterized mainly by azoospermia or severe oligozoospermia [92], but in general, there are several studies linking polymorphisms in DNA repair genes with idiopathic infertility in males [93,94]. More interestingly, BRCA2 exhibits a highly evolutionarily conserved interaction with HSF2BP, a testis-specific protein that is essential for mouse spermatogenesis [95]. In the present study, a missense variant characterized as moderate impact was also identified on HSF2BP in teratozoospermic individuals suggesting a role of this interaction in teratozoospermia that requires further investigation.
The Role of Genes Associated with Cilia and Flagellum Malformation
Furthermore, the results of the present study are important as they support previous findings associating male infertility with dynein deficiencies [96][97][98][99].
In the present study, several mutations in some of the above genes, specifically found in teratozoospermic men, were identified. More specifically, a missense variant was identified in DNAH1; mutations in DNAH1 can cause morphological abnormalities of the sperm flagella and lead to male infertility [103,105,107]. Mutations were also identified in teratozoospermic men in DNAH10, which exhibits testis-specific expression and whose mutations cause asthenoteratozoospermia in humans and mice [96]. Moreover, although Dynein Axonemal Light Chain 4 (DNAL4), in which a frameshift variant was found in teratozoospermic men, has not been reported to be involved in teratozoospermia in humans, researchers observed that a mutation in this gene in boar affected sperm motility and caused midpiece abnormalities [108]. Furthermore, variants in teratozoospermic individuals were also identified in genes associated with primary ciliary dyskinesia (PCD), a disease characterized by many symptoms, including male infertility due to abnormal motile cilia, or with celiac disease [103]. These genes affect cilia biogenesis and structure; more specifically, in the present study, variants in FBF1, KIAA0586, ODAD1, GHRL, BACH2, and IFT74 were identified. Interestingly, besides its association with primary ciliary dyskinesia, intraflagellar transport protein 74 has also been associated with male infertility, as studies have long highlighted that it is an essential component of the spermatogenesis process in mice [109].
These findings suggest that teratozoospermia may be characterized by a set of mutations associated with cilia and flagellum malformation that act cumulatively, resulting in sperm deficiencies. Interestingly, however, the mutations identified here alter sperm morphology but do not have as strong an impact on sperm motility, as the sperm samples of the present study were characterized only as teratozoospermic and no combinations of male infertility subcategories (e.g., asthenoteratozoospermia) were included. Thus, the identification of these mutations could be useful for distinguishing teratozoospermia from other subcategories of male infertility and has implications for successful diagnosis. Furthermore, the study of teratozoospermic men harboring mutations that lead to sperm tail abnormalities, as mutations in dyneins usually do, is important because studies show that good sperm nuclear quality in combination with mutations affecting the sperm tail is usually an indicator of good embryonic development after intracytoplasmic sperm injection (ICSI) [97]. In particular, mutations in DNAH1 have been associated in the past with a good pregnancy rate [110], suggesting that further study is required to identify more variants in dynein genes that might contribute to teratozoospermia and have the potential to improve ART outcome or prognosis.
The Role of the Extracellular Matrix in Teratozoospermia
Finally, in the present study, pathway enrichment analysis showed that ECM-receptor interaction was the most significant pathway according to the genes on which prioritized variants were found specifically in Greek teratozoospermic individuals. GO analyses also identified variants in teratozoospermic men on a large number of proteins found in the extracellular region or proteins that are part of the extracellular matrix (ECM), suggesting a crucial role of the ECM in teratozoospermia.
ECM consists mainly of glycoproteins and polysaccharides, but, most importantly, its components interact with a wide range of molecules, including proteases, protease inhibitors, cytokines, etc. [111,112]. Thus, this network of ECM proteins and their partners plays a crucial role in the regulation of junction dynamics in the testis [111,113]. Junction restructuring is required during germ cell movement in the seminiferous epithelium [112], but as this is a very complex process mediated by several mechanisms [113], it is proposed that protein deficiencies in this network can affect the spermatogenesis process and result in male infertility, as the results of this study indicate.
Although many studies highlight the important role of the ECM in the spermatogenesis process, there are no studies directly associating the extracellular matrix with teratozoospermia. However, scientists have previously reported abnormal basement membrane structures in infertile men with aspermatogenesis [114]. It has also been observed that disruption of pathways involved in ECM and junction dynamics can result in male infertility because germ cells are depleted from the epithelium [112]. Taking these findings and the results presented here into consideration, it can be suggested that the ECM could play a crucial, previously underrated role in teratozoospermia, as abnormal translocation of germ cells across the seminiferous epithelium can also affect sperm morphology. Therefore, more research is required to identify the molecular mechanism that links the ECM with teratozoospermia.
Prioritized Variants' Effect on Gene Expression
Investigation of the effect of the prioritized variants on gene expression based on the GTEx database for various tissues revealed that some of them affect the mRNA levels of genes in testis tissue (Table 6). More specifically, among them, NT5C1B codes for a protein called autoimmune infertility-related protein that is highly expressed in the testis [115]. FBF1 is another interesting gene whose expression seems to be affected by rs113062332; this gene is associated with primary ciliary dyskinesia, and, in addition, studies in Drosophila prove that it is essential for male fertility, as RNAi-knockdown flies have impaired sperm flagella and are infertile [116]. GGN is a testis-enriched gene that has also been implicated in male infertility [51].
Directions for Future Studies
Future studies aiming to validate these findings can take several forms. More specifically, GWAS can be used to investigate whether the variants found here are associated with teratozoospermia in large case-control samples. RNA-sequencing experiments can also add valuable information about how the variants affect gene expression and which of the genes identified here are deregulated in teratozoospermia. Finally, functional experiments could validate the effect of the variants on protein function and provide knowledge about their specific impact on the observed phenotype, teratozoospermia.
Finally, although the variants were filtered and selected to have a low allele frequency (lower than 5%) and are thus unlikely to occur in homozygosity, it should be investigated experimentally, e.g., by PCR, whether the variants identified in this paper are found in a homozygous or heterozygous state in a large sample of teratozoospermic individuals. More experiments are required to explore whether these mutations are dominant or recessive and how they affect protein function, potentially using animal models. This information will be extremely valuable, as for dominant mutations that cause male infertility there is a high probability that the use of assisted reproductive technologies will lead to the transfer of the allele to the next generation [117].
Conclusions
This study is the first comprehensive investigation of the genomic profile of teratozoospermic patients in the Greek population using WGS. It is important because it provides a roadmap for future studies, listing candidate genes and variants that are associated with teratozoospermia for the first time here and confirming the role of genes previously studied in male infertility. In particular, the stop-codon variants presented in this study, which cause premature termination of protein production, should be further explored, as they may shed light on molecular mechanisms and pathways that were previously underrated. The missense variants detected in teratozoospermic individuals should also be examined. The identification of the important role of the extracellular matrix and of the process of cilia and flagellum formation, as well as their direct association with teratozoospermia, is also very promising for future research.
However, the small sample size of the patients recruited is a limitation of the present study. Therefore, validation of these findings in a larger sample could provide more definitive evidence for the role of the variants and the candidate genes in teratozoospermia in the Greek population. Furthermore, assisted reproductive technology (ART) has expanded the opportunities for infertile couples, but previous studies have revealed different outcomes of intracytoplasmic sperm injection (ICSI) for individuals harboring different mutations [11,110]. Thus, further experiments are required to assess and explore the impact of the variants identified in the present study on ICSI outcomes. In this way, by analyzing the genetic profile of a man with teratozoospermia, ICSI would be recommended or not based on the identification of specific mutations, as it seems that those affecting only the flagellum structure may offer better chances for successful ART outcomes [97]. Finally, it should be noted that synonymous variants and variants in non-coding regions were excluded from this study, but they can also provide useful information and should be investigated for their contribution to teratozoospermia in the future, as they can help to fully assess the whole-genome profile of patients.
In conclusion, the present study does not provide conclusive evidence on specific mutations, but the analysis of whole-genome data contributes to our understanding of teratozoospermia by highlighting important pathways and provides the foundation for the improvement of ART and for successful diagnosis or prognosis, as a wide spectrum of variants was identified, together with genes and pathways that had not been explored in the past and that act as promising candidates for future research.
Supplementary Materials: The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/genes13091606/s1, Table S1: Prioritized nonsense variants found in teratozoospermic individuals. Variants not found in the 1000 Genomes Project are highlighted because they were also searched in the gnomAD database. Ref; Reference allele, Obs; Observed allele, Table S2: Prioritized frameshift variants found in teratozoospermic individuals. Variants not found in the 1000 Genomes Project are highlighted because they were also searched in the gnomAD database. Ref; Reference allele, Obs; Observed allele, Table S3: Prioritized splice-disrupting variants found in teratozoospermic individuals. Variants not found in the 1000 Genomes Project are highlighted because they were also searched in the gnomAD database. Ref; Reference allele, Obs; Observed allele, Table S4: Prioritized moderate impact variants found in teratozoospermic individuals; Table S5: Genes on which the prioritized variants (high and moderate impact) were found in teratozoospermic individuals and their description according to GeneCards. The results are presented for every group of variants studied (nonsense, frameshift, splice disrupting, and missense). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. | 2022-09-10T15:06:48.938Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "11156dfd377b970fc1baf3cec1c0b55d8c2ff04c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/13/9/1606/pdf?version=1662633233",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "79c25ea7867513a30fb411a1d0b054bd9e5c5b85",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
20026971 | pes2o/s2orc | v3-fos-license | The Sialic Acid Binding SabA Adhesin of Helicobacter pylori Is Essential for Nonopsonic Activation of Human Neutrophils*
Infiltration of neutrophils and monocytes into the gastric mucosa is a hallmark of chronic gastritis caused by Helicobacter pylori. Certain nonopsonized H. pylori strains stimulate neutrophils to produce reactive oxygen species, causing oxidative damage of the gastric epithelium. Here, the contribution of some H. pylori virulence factors (the blood group antigen-binding adhesin BabA, the sialic acid-binding adhesin SabA, the neutrophil-activating protein HP-NAP, and the vacuolating cytotoxin VacA) to the activation of human neutrophils in terms of adherence, phagocytosis, and oxidative burst was investigated. Neutrophils were challenged with wild type bacteria and isogenic mutants lacking BabA, SabA, HP-NAP, or VacA. Mutant and wild type strains lacking SabA had no neutrophil-activating capacity, demonstrating that binding of H. pylori to sialylated neutrophil receptors plays a pivotal initial role in the adherence and phagocytosis of the bacteria and the induction of the oxidative burst. The link between receptor binding and oxidative burst involves a G-protein-linked signaling pathway and downstream activation of phosphatidylinositol 3-kinase, as shown by experiments using signal transduction inhibitors. Collectively, our data suggest that the sialic acid-binding SabA adhesin is a prerequisite for the nonopsonic activation of human neutrophils and, thus, is a virulence factor important for the pathogenesis of H. pylori infection.
Colonization of the human stomach with Helicobacter pylori is accompanied by chronic active gastritis, which may lead to peptic ulcer disease, atrophic gastritis, and gastric adenocarcinoma (1). To date a number of H. pylori virulence factors have been identified. Among these are the urease, the blood group antigen-binding adhesin (BabA), 1 the cag pathogenicity island, the vacuolating cytotoxin (VacA), and the H. pylori neutrophil-activating protein (HP-NAP). Binding of the bacterium to fucosylated host cell receptors is mediated by the BabA adhesin, an outer membrane protein of H. pylori (2). The cag pathogenicity island encodes a type IV secretion system that enables translocation of the CagA protein into host cells, where the protein becomes tyrosine-phosphorylated and subsequently activates a eukaryotic phosphatase leading to dephosphorylation of host cell proteins and morphological changes (3). The VacA toxin induces formation of large cytoplasmic vacuoles in eukaryotic cells and causes alterations of tight junctions (4). VacA also forms anion-selective channels, which may be blocked by chloride channel inhibitors (5,6). HP-NAP promotes the adhesion of neutrophils to endothelial cells and the production of reactive oxygen radicals (4,7).
A prominent feature of the H. pylori-induced gastritis is an infiltration of neutrophils into the gastric epithelium (8). Neutrophils play a major role in epithelium injury, because these cells have direct toxic effects on the epithelial cells by releasing reactive oxygen and nitrogen species and proteases (9,10). An additional virulence factor of H. pylori bacterial cells is thus the neutrophil-activating capacity, i.e. the ability of certain H. pylori strains to activate human neutrophils in the absence of opsonins (8). Strains with neutrophil-activating capacity are significantly more often isolated from patients with peptic ulcer disease (8,11,12). The factor(s) of H. pylori responsible for the activation of neutrophils are heat-labile and dependent on whole nondisintegrated organisms (8). Recently, preincubation with sialylated oligosaccharides demonstrated that the nonopsonic H. pylori-induced activation of human neutrophils occurs by lectinophagocytosis, i.e. recognition of sialylated glycoconjugates on the neutrophil cell surface by a bacterial adhesin leads to the phagocytosis and the oxidative burst reactions (13).
In the present study, the role of some H. pylori virulence factors in the nonopsonic H. pylori-induced activation of human neutrophils was investigated. Human neutrophils were challenged with wild type H. pylori strains and isogenic deletion mutant strains lacking HP-NAP, BabA, SabA, VacA, or the 37-kDa fragment of VacA, followed by chemiluminescence measurement of the superoxide anions produced by the neutrophils. The nonopsonic adherence to and phagocytosis by neutrophils of wild type and mutant bacterial cells were examined at various time intervals, both as the appearance of visible macroscopic aggregation/agglutination of neutrophils and by microscopy of acridine orange-stained smears. In addition, the effects of signal transduction inhibitors on H. pylori-induced neutrophil activation were studied to identify the intracellular signaling pathways required for the H. pylori-induced neutrophil oxidative burst.
MATERIALS AND METHODS
H. pylori Strains, Culture Conditions, and Labeling-Characteristics of the H. pylori strains are presented in Table I. Strain NCTC 11637 was obtained from the National Collection of Type Cultures, London, UK, strain C-7050 from Professor T. Kosunen, Helsinki, Finland, and strain CCUG 17874 from the Culture Collection University of Göteborg. Strain J99 and the construction of the J99/SabA− mutant (sabA(JHP662)::cam) were described by Mahdavi et al. (14). The J99/BabA− mutant (babA::cam) and the J99/BabA−SabA− mutant (babA::cam sabA::kan) were constructed as previously described (2,14).
For construction of the HP-NAP knock-out mutant, designated J99/NAP− (napA::kan), the napA gene was amplified by PCR using the napA1F (forward) and napA1R (reverse) primers. The PCR fragment was cloned into the EcoRV site of pBluescript SK± (Stratagene, La Jolla, CA). The resulting plasmid was linearized by PCR with the primers napA2F (forward) and napA2R (reverse) and then ligated with the kanamycin resistance (KanR) cassette from pILL600 (22). The plasmid carrying the deleted napA was used for transformation of the J99 strain. For transformation, the bacteria were grown for 24 h on agar plates before addition of 2 μg of plasmid DNA. After transformation, the bacteria were grown on nonselective plates for 48 h to allow for the expression of antibiotic resistance and then transferred onto kanamycin-containing plates. The transformants were analyzed by PCR using primers napA3F and napA4R, which verified that the KanR cassette was inserted into napA. Western blot analysis of napA mutants using anti-HP-NAP antibodies showed that the mutant strain was devoid of HP-NAP expression.
For chromatogram binding experiments, the bacteria were grown in a microaerophilic atmosphere at 37°C for 48 h on Brucella medium (Difco Laboratories, Irvine, CA) containing 10% fetal calf serum (Harlan Sera-Lab, Loughborough, UK) inactivated at 56°C and 1% BBL™ IsoVitaleX enrichment (BD France S.A., Le Pont de Claix, France). The mutant strains J99/SabA− and J99/BabA− were cultured on the same medium supplemented with chloramphenicol (20 μg/ml). For the mutant strain J99/BabA−SabA−, supplementation with chloramphenicol (20 μg/ml) and kanamycin (25 μg/ml) was used, whereas the 17874/VacA− and 17874/p37− strains were cultured on the above-described medium supplemented with kanamycin (20 μg/ml). Bacteria were radiolabeled by the addition of 50 μCi of [35S]methionine (Amersham Biosciences) diluted in 0.5 ml of phosphate-buffered saline (PBS), pH 7.3, to the culture plates. After incubation for 12-72 h at 37°C under microaerophilic conditions, the bacteria were harvested, centrifuged three times, and thereafter suspended to 1 × 10^8 colony-forming units/ml in PBS. The specific activities of the suspensions were ≈1 cpm per 100 bacterial cells.
Alternatively, the strains were grown in a microaerophilic atmosphere at 37°C for 48 h on GC agar plates (GC II agar base, BBL, Cockeysville, MD) supplemented with 1% bovine hemoglobin (BBL), 10% horse serum, and 1% IsoVitaleX enrichment (BBL), without antibiotics for the wild type strains and with antibiotics as above for the mutant strains J99/SabA−, J99/BabA−, J99/BabA−SabA−, 17874/VacA−, and 17874/p37−. The H. pylori organisms were collected in PBS and used in chemiluminescence and phagocytosis experiments as described below.
Chromatogram Binding Assay-Glycosphingolipids were isolated and characterized by mass spectrometry, ¹H NMR, and degradation studies, as described (24). De-sialylation was done by incubating the glycosphingolipids in 1% acetic acid (by volume) at 100°C for 1 h.
Extraction of Membrane Proteins from Human Neutrophil Granulocytes-Membranes from fresh neutrophils were isolated as described previously (27). The outer membrane fragment fraction was dissolved in 25 mM Tris-HCl containing 2.5% SDS and 1 mM EDTA, pH 8.0, heated to 95°C for 10 min, and centrifuged at 10,000 × g for 10 min.
Electrophoresis and Binding of H. pylori-SDS-PAGE and staining were carried out with NuPAGE™ gels (Novex, San Diego, CA). Briefly, neutrophil membrane protein samples in SDS sample buffer, with 50 mM dithiothreitol added, were heated to 95°C for 5 min and applied on a homogeneous 10% polyacrylamide gel. After electrophoresis, the gels were either stained with Coomassie Blue or electroblotted to polyvinylidene difluoride (0.2-μm) membranes.
The polyvinylidene difluoride membrane was incubated in blocking solution (3% bovine serum albumin, 50 mM Tris-HCl, 200 mM NaCl, 0.1% NaN3, pH 8.0) for 1.5 h. The membrane was then incubated with 35S-labeled H. pylori strain CCUG 17874 diluted in PBS for 2 h at room temperature and thereafter washed in a solution of 50 mM Tris-HCl, 200 mM NaCl, and 0.05% Tween 20, pH 8.0. After drying at room temperature, the membrane was exposed to XAR-5 x-ray film overnight. Reference bovine fetuin and bovine asialofetuin were purchased from Sigma.
Human Neutrophil Granulocytes-Heparinized blood from healthy blood donors was used to prepare neutrophils by Ficoll-Paque (Amersham Biosciences) centrifugation in accordance with the method of Böyum (28), slightly modified as described (8). For each series of experiments on a particular day, neutrophils were prepared and pooled from three blood donors of the same blood group (A Rh+ or O Rh+). Neutrophils were thus obtained from different blood donors for each experiment. They were suspended in PBS supplemented with MgCl2, CaCl2, glucose, and gelatin as previously described (8). The purity and viability of the neutrophils exceeded 95%.
For the signal transduction inhibition experiments, neutrophils (5 × 10^6/ml) were treated with 800 ng of pertussis toxin for 60 or 120 min at 37°C, with wortmannin (5, 10, or 20 nM) for 5 min at 37°C, or with diphenyleneiodonium chloride (DPI, 10 μM) for 5 min at 37°C, centrifuged, and resuspended in PBS supplemented with MgCl2 and CaCl2 before they were challenged with H. pylori cells of strain NCTC 11637 as described above. Pertussis toxin, wortmannin, and DPI were purchased from Calbiochem. For oligosaccharide inhibition experiments, 50 μl of H. pylori (5 × 10^8/ml) were mixed with 50 μl of 3′-sialyllactose (IsoSep, Tullinge, Sweden) to give final concentrations of 0.1-1.0 mM in the CL assay, incubated for 15 min at 37°C, and 100 μl of the mixture was thereafter transferred to the test tube for CL measurement.
The oxidative bursts of the neutrophils were measured as luminol-enhanced chemiluminescence with a luminometer (LKB Wallac 1251, Turku, Finland), and the measurements were always started within 1 min after the bacterial suspension had been added. The assays were performed at 37°C, CL from each sample was measured at 60- to 90-s intervals during a period of 30-60 min, and data were stored in a computer for computerized calculations. This technique thus measures both the external and internal oxidative bursts of the nonopsonic phagocytosis by neutrophils, which was previously checked by quenching the external burst in the presence of catalase (2000 units/ml) and the internal one in the presence of azide (1 mM) and horseradish peroxidase (4 units/ml), as described by Lock and Dahlgren (29). The H. pylori strains NCTC 11637 (giving a strong and rapid CL response) and C-7050 (inducing no CL response) (8) were included in each series of experiments as positive and negative controls, respectively.
Adherence, Phagocytosis, and Neutrophil Agglutination Assays-To each test tube were added 350 μl of PBS supplemented with MgCl2 and CaCl2, 100 μl of neutrophils (5 × 10^6/ml), and 50 μl of nonopsonized H. pylori organisms (5 × 10^8/ml). Adherence to and phagocytosis by neutrophils of wild type and mutant H. pylori strains were examined by microscopy (see below) at time intervals of <2-5, 20-30, and 60-90 min, and for the appearance of visible, i.e. macroscopic, agglutination (aggregation) of neutrophils by ocular inspection of the tubes at the same time intervals. For microscopic examination of the adherence/phagocytosis assays, and of the formation of neutrophil agglutinates/aggregates by H. pylori cells, 10 μl of the H. pylori/neutrophil mixture was smeared on a glass slide within an area of ≈2.5-3 mm², air-dried, fixed in cold methanol for 5 min, washed in distilled water, and then stained with acridine orange as described by the manufacturer. The slides were inspected with a Zeiss fluorescence microscope for incident light with appropriate filter combinations at a magnification of 400× to look for bacteria that had adhered to neutrophils and/or were phagocytosed. The results obtained by this technique corresponded well to previously reported findings by electron microscopy (30). However, even though adherence of acridine orange-stained H. pylori bacteria to the neutrophil cell membrane looks different from bacteria that are obviously inside the cell, phagocytosis of an individual bacterial cell cannot be definitely separated from adherence with this technique. Adherence and phagocytosis have therefore been taken together and were graded as negative (−) when neutrophils were evenly dispersed with only occasionally adhered H. pylori cells; minor (+) or moderate (++) adherence/phagocytosis with 5-10 or 10-20 bacterial cells, respectively, per neutrophil in representative fields of view; and heavy (+++) adherence/phagocytosis with >20 bacterial cells per neutrophil.
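For readers who want to apply the same semiquantitative scale to their own counts, a minimal sketch of the grading rule is given below (Python). The exact handling of the boundary counts 5, 10, and 20 is an assumption:

```python
def grade_adherence(bacteria_per_neutrophil: float) -> str:
    """Map the mean number of adhered/phagocytosed bacteria per neutrophil
    (counted in representative fields of view) to the semiquantitative
    grades used in this study."""
    n = bacteria_per_neutrophil
    if n > 20:
        return "+++"  # heavy adherence/phagocytosis
    if n >= 10:
        return "++"   # moderate
    if n >= 5:
        return "+"    # minor
    return "-"        # negative: only occasionally adhered bacteria
```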
In the signal transduction inhibition experiments, neutrophils were treated with pertussis toxin, wortmannin, or DPI at the concentrations given above, and adherence to and phagocytosis by neutrophils of strain NCTC 11637, as well as visible macroscopic agglutination, were examined at the time intervals <2-5, 20-30, and 60-90 min and compared with untreated neutrophils challenged with the same strain. Adherence, phagocytosis, and neutrophil agglutination were graded as described above.
RESULTS

Binding of H. pylori to Human Neutrophil Gangliosides Is Lost after Deletion of the SabA Adhesin-The sialic acid binding capacities of the wild type and deletion mutant H. pylori strains utilized in this study were evaluated by binding of the bacteria to glycosphingolipids separated on thin-layer chromatograms. The results are exemplified in Fig. 1 and summarized in Table I. The criterion used for sialic acid recognition was binding to the acid glycosphingolipid fraction of human neutrophils (Fig. 1, lane 2) with no binding after de-sialylation (Fig. 1, lane 3). Thus, while both the parent strains and their mutants bound to the nonacid reference glycosphingolipid gangliotetraosylceramide (Fig. 1, lane 1), binding to human neutrophil gangliosides was observed for all strains except the J99/SabA− mutant (Fig. 1C), the J99/BabA−SabA− mutant (not shown), and the C-7050 strain (not shown).
H. pylori Also Binds to Human Neutrophil Membrane Proteins-Binding of the SabA-expressing H. pylori strain CCUG 17874 to human neutrophil membrane proteins is shown in Fig. 2B. As reported previously (21), the bacteria bound to several proteins with apparent relative molecular masses between 40 and 70 kDa.
Binding of H. pylori to Sialic Acid-carrying Neutrophil Receptors Is Necessary for Induction of the Oxidative Burst-The neutrophil-activating abilities of the wild type and deletion mutant H. pylori strains were investigated by luminol-enhanced chemiluminescence. Challenge of human neutrophils with the wild type H. pylori strains J99, CCUG 17874, and NCTC 11637 resulted in strong CL responses (Figs. 3-6), although there was some strain-to-strain variation in the ability to induce the oxidative burst, manifested by differences in peak values (millivolts) and time to reach peak (minutes). In most cases a biphasic response was observed, where the initial phase is due to activation of the plasma membrane (extracellular) NADPH oxidase, whereas the second phase represents activation of both the plasma membrane and the intracellular NADPH oxidases. The extracellular and the intracellular production of H2O2 are linked to two separate pools of NADPH oxidase localized to the plasma membrane and granule membranes, respectively (31). The reduction of the CL responses obtained after incubation of the wild type H. pylori strains with 3′-sialyllactose also showed some strain-to-strain variation. As an example, for strain NCTC 11637, which exhibited a very strong CL response, 0.1 mM 3′-sialyllactose gave a 62% reduction of the peak value and 1 mM gave a 72% reduction (Fig. 3), and for strain J99, with a weaker response, 1 mM 3′-sialyllactose totally abolished the peak (data not shown).
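As a small worked example of the peak-reduction arithmetic quoted above, the following snippet converts the reported percentages back into peak values; the 1000 mV control peak is hypothetical, and only the 62% and 72% figures come from the text:

```python
def percent_reduction(control_peak_mv: float, treated_peak_mv: float) -> float:
    """Percent reduction of the CL peak relative to the untreated control."""
    return 100.0 * (control_peak_mv - treated_peak_mv) / control_peak_mv

# With a hypothetical control peak of 1000 mV for NCTC 11637, the reported
# reductions after 0.1 mM and 1 mM 3'-sialyllactose correspond to:
for reduction in (62.0, 72.0):
    treated = 1000.0 * (1.0 - reduction / 100.0)
    print(f"{reduction:.0f}% reduction -> treated peak {treated:.0f} mV")
```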
As shown in Fig. 4, the absence of the neutrophil-activating protein HP-NAP (J99/NAP−; Fig. 4A) or the Le^b-binding BabA adhesin (J99/BabA−; Fig. 4B) had no effect on the neutrophil activation, i.e. the CL responses of the mutant strains were very similar to the response induced by the J99 parent strain. However, after deletion of the gene coding for the sialic acid-binding SabA adhesin (J99/SabA− and J99/BabA−SabA−; Fig. 4B), and when using the SabA-negative C-7050 wild type strain (Fig. 4A), no CL responses were obtained, demonstrating that binding of H. pylori to sialylated neutrophil receptors plays a pivotal initial role in the induction of the oxidative burst.
Absence of the VacA Cytotoxin Abrogates the Extracellular H2O2 Production-Deletion of the whole vacA gene in strain CCUG 17874 resulted in a delayed and reduced CL response (Fig. 5). The same effect was observed when the 37-kDa fragment of VacA was deleted. In both cases the initial external burst reaction was lacking, and thus no activation of the plasma membrane pool of NADPH oxidase occurred.
Adherence, Phagocytosis, and Neutrophil Agglutination Are Impaired upon Deletion of SabA-When neutrophils are challenged with nonopsonized type I H. pylori cells like NCTC 11637, adherence of bacterial cells to neutrophils starts within the first 5 min, with continued adherence and subsequent phagocytosis during the following 20-30 min. During this time frame, macroscopic aggregation/agglutination of neutrophils is visible, and, by microscopy of acridine orange-stained smears and by electron microscopy, large aggregates of neutrophils showing heavy adherence and phagocytosis of bacterial cells are obvious (8,30). This is also typical for other type I H. pylori strains challenging neutrophils. In contrast, bacterial cells of the control strain C-7050, a type II variant (VacA−/cagA+) that is devoid of SabA, remain evenly dispersed among the neutrophils for more than 120 min (8). All these events seen by microscopy correspond well to the oxidative burst responses detected by luminol-enhanced CL.
The results from microscopy of neutrophils challenged with wild type and mutant H. pylori strains are summarized in Table II. Thus, the mutant strains with deletions of BabA (J99/BabA−) or HP-NAP (J99/NAP−) adhered to neutrophils, agglutinated them, and were phagocytosed in a manner indistinguishable from the parental wild type strains J99, NCTC 11637, and CCUG 17874. In contrast, the sialic acid binding-defective mutants J99/SabA− and J99/BabA−SabA− caused, like C-7050, no neutrophil agglutination, and only occasional bacterial cells adhered to or were phagocytosed by the neutrophils, the majority of which were evenly distributed.
The bacterial cells of the 17874/VacA− mutant, with deletion of the entire vacA gene, adhered to neutrophils and were phagocytosed. However, these events were retarded as compared with the parent strain, and no macroscopic agglutination was visible, even though relatively large aggregates of adhered neutrophils were seen by microscopy. The picture was very similar for the 17874/p37− mutant.
Effects of Signal Transduction Inhibitors on H. pylori-induced Neutrophil Activation-To investigate the intracellular signaling mechanisms activated by binding of H. pylori to sialylated neutrophil cell surface glycoconjugates and involved in NADPH oxidase activation, the effects of the signal transduction inhibitors DPI, pertussis toxin, and wortmannin on the chemiluminescence responses were tested. Strain NCTC 11637 was selected for the challenge of neutrophils pretreated with the signal transduction inhibitors as described under "Materials and Methods." The expected H. pylori responses, both the initial phase and the late phase, were decreased severalfold by treatment of the neutrophils with DPI (Fig. 6A), an inhibitor of cellular flavoproteins (32), demonstrating that the induction of the neutrophil respiratory burst by H. pylori bacterial cells occurs through assembly of both the plasma membrane and the intracellular NADPH oxidases.
Next, the effect of pertussis toxin, a potent inhibitor of heterotrimeric G-proteins, was evaluated. When the neutrophils were pretreated with pertussis toxin, a complete abrogation of the H. pylori-induced oxidative burst was obtained (Fig. 6B), demonstrating that the neutrophil activation induced by binding of H. pylori to sialylated receptors is transduced by a member of the G-protein-coupled receptor family.
The role of phosphatidylinositol 3-kinase was investigated using wortmannin, an inhibitor of this kinase. As shown in Fig. 6C, the H. pylori-induced activation of neutrophils was inhibited by wortmannin in a dose-dependent manner.
Pre-treatment of neutrophils with DPI, pertussis toxin, or wortmannin did not prevent adherence of H. pylori, which, however, was delayed and retarded when compared with untreated neutrophils (Table III). There was no obvious phagocytosis, and those H. pylori cells that adhered to the neutrophil cell membrane seemed to contribute to the occurrence of minor neutrophil aggregates.

DISCUSSION

The human pathogens H. pylori and Neisseria gonorrhoeae are the two most studied of the relatively few bacterial species that, in the nonopsonized state, activate neutrophils to an oxidative burst with the production of potentially harmful reactive oxygen species (8,11,33). For gonococci, proteins belonging to a family of heat-modifiable outer membrane proteins termed opacity-associated proteins are responsible for the activation (34). For H. pylori, several soluble factors involved in the nonopsonic neutrophil activation have been described, including HP-NAP (7), the urease (35), a low molecular weight factor in water extracts from the bacteria (36), and the cecropin-like bactericidal peptide Hp(2-20) (37).
In this study we show that the sialic acid-binding SabA adhesin of particular H. pylori strains has a pivotal role in the nonopsonic activation of human neutrophils. This is further supported by the fact that treatment of neutrophils with sialidase abrogates the activation induced by H. pylori (data not reproduced). Thus, the nonopsonic neutrophil oxidative burst induced by H. pylori bacterial cells, adherence, and phagocytosis of the bacteria are all initiated by binding of H. pylori SabA to sialic acid-carrying neutrophil receptors. This is the first time a distinct functional role has been defined for an H. pylori adhesin. The following events, linking receptor binding to oxidative burst reaction, involve a G-protein-linked signaling pathway and downstream activation of phosphatidylinositol 3-kinase, as shown by the inhibition experiments.
The SabA adhesin is present in ≈40% of H. pylori strains and is subject to phase variation (14). Individual H. pylori strains differ in their ability to bind to human neutrophils and to induce production of reactive oxygen radicals (11), and the reduction of the CL responses obtained by preincubation with sialylated oligosaccharides varies to some extent between different H. pylori strains (13). This might be due to a variable expression of the SabA adhesin caused by phase variation. Indeed, Western blot analysis using anti-SabA antibodies showed that the reference strain C-7050, which is devoid of neutrophil-activating capacity, does not express SabA (to be reported separately).
Rautelin et al. (8) showed that nonopsonized H. pylori cells phagocytosed by neutrophils resist phagocytic killing, in sharp contrast to those opsonized by serum complement. These observations are similar to those described by Allen et al., who reported that type I H. pylori strains phagocytosed by bone marrow macrophages resist phagocytic killing, and after uptake the bacteria reside inside a novel form of vacuoles called megasomes (38). Furthermore, it was recently demonstrated that a novel phagocytic pathway regulated by atypical protein kinase C is involved in the uptake of H. pylori by bone marrow macrophages (39).
Interestingly, upon knock-out of the VacA cytotoxin, or the p37 fragment of VacA, in the sialic acid-binding strain CCUG 17874, the initial extracellular burst was abrogated, while the second phase of the reaction was not affected. Thus, presumably, the p37 fragment of VacA contributes to the activation of the plasma membrane NADPH oxidase. This p37 fragment, along with ≈110 amino acids of the p58 part, has been identified as the minimal intracellular vacuolating unit (40,41). The p58 fragment, on the other hand, carries the cell binding capacity, but is devoid of vacuolating activity and is not internalized (23). The abrogation of the external burst obtained with the mutant strain expressing only the p58 fragment (17874/p37−) might indicate that the activation of the plasma membrane pool of the NADPH oxidase requires the channel-forming capacity, or (more likely) internalization of the toxin, and that binding of the toxin is not enough. These matters, however, need to be determined, and it should be noted that the effect observed was obtained with whole bacterial cells deleted of VacA or the p37 fragment and not with purified proteins.
Our results seem to be in contrast with the findings of Makristathis et al. (35), who reported that knock-out of VacA has no effect on the neutrophil activation caused by H. pylori. However, the flow cytometry method used in their study mainly detects intracellular oxygen radicals, and thus the effect on the external burst was not noted.
Deletion of the Le^b-binding BabA adhesin did not affect the chemiluminescence response. Furthermore, no Le^b-carrying glycoconjugate has been identified in human neutrophils (42-45). Taken together, this suggests that the Le^b-binding capacity of H. pylori is not involved in neutrophil activation.
The CL response obtained with the mutant deleted of the neutrophil-activating protein HP-NAP was also very similar to the response of the parental strain. This finding is in agreement with that of Leakey et al. (36), who showed that nonopsonic activation of neutrophils, as measured with luminol-enhanced CL, was independent of HP-NAP. Correspondingly, no HP-NAP response was obtained with the present CL method when using pure protein (13). However, HP-NAP-induced production of reactive oxygen metabolites in human neutrophils may be measured with the homovanillic acid method (46) and the cytochrome c reduction method (47).
It is obvious from our findings and those of others that the demonstration of nonopsonic neutrophil activation by whole cells, sonicates, or soluble factors of H. pylori is to some extent dependent on the methods used. Very few studies have, for example, compared whole cells with sonicates or purified proteins (13), and more studies are therefore needed to elucidate these conditions. Luminol-enhanced CL, used in the present study, offers some advantages in this respect, because it allows the demonstration of not only the external and internal oxidative bursts but also the study of their kinetics. Our findings thus showed that an immediate and rapid extracellular burst like that obtained with wild type J99 and NCTC 11637 is associated with rapid adherence of bacterial cells to the neutrophils. These initial events obviously activate the plasma membrane pool of NADPH oxidase. They are followed by phagocytosis during the next 20-30 min and activation of the pool of NADPH oxidase linked to neutrophil granules that is responsible for the internal burst (31). The rapid initial adherence and subsequent phagocytosis are apparently responsible for the visible aggregation or agglutination of neutrophils by strains that express SabA. Other strains like CCUG 17874, which also expresses SabA and activates neutrophils, may have a somewhat different kinetic pattern, with a weaker external burst but a strong and prolonged internal burst. Of interest was the fact that the VacA of this strain seemed to play a role in the activation, because its VacA-negative mutant lacked the initial external burst while showing no obvious change in the internal one. The reason for this is not clear but might have clinical relevance, because Rautelin et al. (11) showed that VacA-producing strains with neutrophil-activating capacity were significantly more often found in patients with peptic ulcer disease than in those infected with strains lacking these phenotypic markers.
The signal transduction inhibitors DPI, pertussis toxin, and wortmannin inhibited or abrogated the nonopsonic neutrophil activation by whole H. pylori cells. However, the adherence of H. pylori to neutrophils was only moderately inhibited, indicating that SabA still attached to the neutrophil receptor, whereas the signals were turned off. The same inhibitors also inhibit the activation of human neutrophils induced by HP-NAP when measured by the homovanillic acid method (46). Thus, the signaling pathways employed by soluble HP-NAP and whole H. pylori bacterial cells are very similar. Still, although the bacterial cells induce a strong CL response, no effect was obtained with HP-NAP in this assay. This might be due to the fact that very limited amounts of HP-NAP are present on the bacterial surface (48). It was recently shown that HP-NAP activates the extracellular signal-regulated kinase and p38 mitogen-activated protein kinase in human neutrophils, and that these events are essential for the HP-NAP-induced neutrophil respiratory burst and for the chemotaxis and adhesion of the neutrophils (47). Whether this is also the case with whole bacterial cells of H. pylori needs to be determined.
Hansen et al. (49) have reported that pertussis toxin does not inhibit the activation of neutrophils by H. pylori. The reason for the discrepancy with our results presumably lies in the choice of measurement methods. Hansen et al. studied the effects of sonicates of H. pylori, whereas in this study whole bacterial cells were used. Furthermore, the concentration of luminol used by them was 70-fold higher than in our experimental system.
The term lectinophagocytosis was originally defined by studying the interactions between mannose-specific type-1 fimbriated bacteria and neutrophils (reviewed in Ref. 50). These bacteria also induce a neutrophil oxidative burst reaction, which may be inhibited by preincubation with mannose derivatives.
However, there are also nonmicrobial systems where protein-carbohydrate interactions lead to neutrophil activation. Thus, a neutrophil oxidative burst may be induced by plant lectins, e.g. the mannose-binding concanavalin A (51). Also, carbohydrate-binding proteins present in mammals, such as the β-galactose-binding galectin-1 and galectin-3, have the ability to activate the human neutrophil NADPH oxidase (52,53).
The loss of H. pylori-induced nonopsonic neutrophil activation upon deletion of the sialic acid-binding SabA adhesin shows that binding of the bacteria to sialic acid-carrying neutrophil receptors is required for induction of the oxidative burst and for phagocytosis. Interestingly, other sialic acid-binding ligands, such as influenza A virus and the lectin from the slug Limax flavus, elicit a respiratory burst response in human neutrophils with formation of hydrogen peroxide, but with minimal accompanying superoxide generation (54,55). The human influenza strain used in these studies, having an H3 hemagglutinin, preferentially recognizes glycoconjugates with terminal NeuAcα2-6 (56), whereas the L. flavus lectin binds to terminal NeuAcα irrespective of linkage position (57). That neutrophil activation was mediated via stimulation of phospholipase C and was not sensitive to pertussis toxin. This suggests that, by binding to different sialic acid-carrying neutrophil receptors, influenza A virus and H. pylori activate distinct signaling pathways leading to different types of neutrophil oxidative burst reactions.

[Table III legend: Adherence to and phagocytosis by neutrophils pretreated with signal transduction inhibitors as indicated in the table before they were challenged with the type I H. pylori strain NCTC 11637. After 20-30 and 60-90 min, respectively, acridine orange-stained smears were examined by fluorescence microscopy as described under "Materials and Methods." In addition, test tubes were inspected for visible macroscopic neutrophil agglutination. See Table II.]

In this study human neutrophil gangliosides were utilized to define sialic acid binding by H. pylori. However, some of the carbohydrate sequences of neutrophil gangliosides are also present on neutrophil glycoproteins (44), and sialic acid-dependent binding of certain H. pylori strains to glycoproteins of human neutrophils has also been demonstrated (Fig. 2) (21). Although the most likely receptor candidates are transmembrane glycoproteins, functional glycosphingolipid receptors for microbial proteins have also been described, e.g. the GM1 ganglioside receptor of cholera toxin (58,59). Thus, so far it is an open question whether the functional H. pylori neutrophil receptor is a glycosphingolipid or a glycoprotein. | 2018-04-03T02:42:57.678Z | 2005-04-15T00:00:00.000 | {
"year": 2005,
"sha1": "780f8a8cd95968579a4070aea1314bd3cceaf122",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/280/15/15390.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "49862f9a12d00180044d155feacb1e445020e3ae",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256681576 | pes2o/s2orc | v3-fos-license | NUMERICAL SIMULATION FOR HEAT TRANSFER OF SILICA-AEROGEL FILLED 3D STITCHED SPACER FABRIC COMPOSITES
Spacer fabrics and their composites have great advantages as excellent heat-insulation materials because of their cavity structure. However, little research has addressed the thermal insulation of spacer fabric composites with different spacer shapes and geometric parameters. In this work, stitched spacer fabric composites filled with silica aerogel, with rectangular, triangular and trapezoidal spacer shapes, and their corresponding models were designed. The results of experiments and simulations show that the temperature distribution of the spacer fabric composites has good consistency at high temperature. By analyzing the heat transfer results of composites with different geometric parameters, it was found that the length of the connecting layer between the top and bottom layers and the distance between two adjacent sutures in the top layer affect the minimum and maximum temperatures of the top surface.
Introduction
Spacer fabric is a 3-D fabric in which two surface layers are connected by yarns or a fabric layer, forming a space in the thickness direction of the fabric according to a certain rule [1,2]. According to the production process, spacer fabrics can be divided into three categories: woven spacer fabric, knitted spacer fabric, and stitched spacer fabric [3-5]. Spacer fabric and its composite materials have the characteristics of high strength, high elastic modulus, good pressure resistance, impact resistance, heat insulation and light weight, and have a wide range of applications in construction, thermal protection, automobiles, medicine and aerospace [6-10].
By using fabric as the intermediate connecting layer, the cross-sections of the spacers can be designed into various shapes, such as trapezoids, triangles and rectangles, as needed. Composite materials made from such spacer fabrics have excellent mechanical properties. Neje and Behera [11] designed spacer fabric composites with three different spacer shapes, rectangle, trapezoid and triangle, and studied the transverse compression properties of spacer fabric composites with different spacer shapes. In subsequent research, Neje and Behera [12] found that changes in the geometric parameters of the spacer shape affect the mechanical properties of the spacer fabric composites to a certain extent. Researchers have also made composites by filling the cavities of the spacer fabric with thermal-insulating material. Wang et al. [13] designed several spacer fabrics with different structures using glass fiber and found that a spacer fabric composite filled with silica had good thermal insulation ability. Many researchers have studied the mechanical properties of spacer fabric composites with different spacer shapes and different geometric parameters; however, few papers have addressed their influence on the thermal insulation of spacer fabric composites.
In this work, the main purpose is to study the influence of different spacer shapes and different geometric parameters on the thermal insulation of spacer fabric composites. Basalt fiber yarns were used to weave plain fabrics. In order to control the geometric parameters accurately, three kinds of spacer fabrics with the different spacer shapes of rectangle, triangle and trapezoid were prepared by stitching. Silica aerogel was filled into the cavities of the spacer fabrics to prepare composites with thermal insulation ability. In COMSOL 5.5, the corresponding models were established according to the geometric parameters of the spacer fabric composites, and the heat transfer processes of the models at 773.15 K were analyzed. Experimental equipment was designed, and the results of the numerical simulation were verified by experiments. Finally, the effects of different spacer shapes and geometric parameters on the thermal insulation of spacer fabric composites were analyzed by simulation and experiment. This provides some reference value for the design, optimization and application of spacer fabric composites in the field of thermal insulation.
Preparation of 3-D stitched spacer composites and geometrical models
Basalt fiber yarn was purchased from Sichuan Juyuan Basalt Fiber Technology Co., Ltd., China. The fiber diameter of the basalt yarn was 17 μm. The plain-woven fabric was woven from the basalt yarn on a small sample loom (SGA598, Tongyuan, China). The thickness was measured with a fabric thickness gauge (YG141D, Jigao, China). The specifications of the woven fabric are listed in tab. 1. The 3-D stitched spacer fabrics were sutured from two separate plain-woven fabric layers, placed top and bottom, and a plain-woven fabric layer connected in the middle. All the plain-woven fabric layers were woven from the basalt yarn. A basalt fiber yarn purchased from Sichuan Juyuan Basalt Fiber Technology Co., Ltd., China was used as the suture thread; its linear density and fiber diameter were 85 tex and 9 μm, respectively. The spacer shapes of the 3-D stitched spacer fabrics were designed as rectangle, triangle and trapezoid; in the following, the samples with these spacer shapes are abbreviated as REC, TRI, and TPZ, respectively. The cross-section schematic diagrams of the 3-D stitched spacer fabrics are shown in fig. 1.
In fig. 1, the top length side of the rectangle and the topline of the trapezoid were defined as a, the bottom length side of the rectangle, the base of the triangle and the baseline of the trapezoid were defined as b, and the base angle between the bottom layer and the connecting layer was defined as θ. A series of geometrical parameters of the 3-D stitched spacer fabrics with the three different spacer shapes were designed; the geometrical parameters are shown in tab. 2. The 3-D stitched spacer fabric is a combination of fiber and air. In order to simplify the model and improve computing efficiency, the fabric assembly was modeled as a whole in COMSOL 5.5 on the basis of the actual fabric geometric parameters and spacer shapes. Sketch maps of the 3-D stitched spacer fabrics and their corresponding models are shown in fig. 2.
The spacers of the 3-D stitched spacer fabrics were filled with pure silica aerogel purchased from Langfang Yvao Energy Saving Technology Co., LTD., China.The final model diagram of 3-D stitched spacer composite filled with silica aerogel is shown in fig. 3.
Numerical simulation method

Heat transfer equation
In COMSOL 5.5, the equation of steady-state heat transfer can be described as:

ρC_p u · ∇T − ∇ · (k∇T) = Q    (1)

where ρ, C_p, and k are the density, specific heat capacity, and thermal conductivity of the material, respectively, T is the temperature, Q is the heat source, and u is the velocity vector. According to eq. (1), the densities, specific heat capacities and thermal conductivities of the 3-D stitched spacer composite models need to be set.
Material thermophysical parameters
The 3-D stitched spacer composite model contains two parts: the fabric and the silica aerogel. In this work, the fabric and the silica aerogel are modeled as isotropic materials in order to simplify the numerical simulation. For any material, thermal conductivity depends on temperature; therefore, in order to accurately simulate the steady-state heat transfer of the 3-D stitched spacer composite at high temperature, this property of the material cannot be ignored. In the numerical simulation, the specific heat capacity and density of the fabric and the silica aerogel were also specified.
Boundary conditions
The direction of heat transfer in the 3-D stitched spacer composite was set along the fabric thickness. The initial temperature of the model was set to 293.15 K, the room temperature at the time. The lateral boundaries of the models were considered thermally insulated. A heating temperature load of 773.15 K was applied to the bottom surface of the models.
It is well known that the main modes of heat transfer are conduction, radiation, and convection. According to eq. (1), heat conduction is the main heat transfer mode in this simulation work. Since the thermal conductivity parameters used in this work account for the influence of temperature, and the effect of high temperature on thermal conductivity already includes thermal radiation, it was reasonable not to consider thermal radiation separately. However, thermal convection between the air and the boundaries of the 3-D stitched spacer composite still needs to be taken into account. In COMSOL 5.5, the convective heat flux, q0, is defined as:

q0 = h(T_ext − T)    (2)

where h is the convective heat transfer coefficient, T_ext is the external temperature, and T is the temperature of the top surface of the 3-D stitched spacer composite. The convective heat transfer coefficient of the top surface of the 3-D stitched spacer composite was set to 15 W/m²K [17].
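As an illustrative numerical sketch of how eqs. (1) and (2) act together in the through-thickness direction, the short script below solves one-dimensional steady-state conduction with a fixed 773.15 K bottom boundary and the convective condition of eq. (2) with h = 15 W/m²K at the top. The thickness and the effective conductivity are assumed values, and a constant k is used for brevity, whereas the paper's model uses temperature-dependent conductivities, so the numbers are not those of the paper:

```python
import numpy as np

# 1-D steady conduction d/dx(k dT/dx) = 0 across the composite thickness,
# with a Dirichlet (heated) bottom boundary and a convective top boundary.
L = 0.02           # composite thickness, m (assumed)
k = 0.05           # effective thermal conductivity, W/(m K) (assumed)
h = 15.0           # convective heat transfer coefficient, W/(m^2 K)
T_bot, T_ext = 773.15, 293.15  # heated bottom and ambient temperature, K
N = 51
dx = L / (N - 1)

A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0
b[0] = T_bot                      # Dirichlet: bottom held at 773.15 K
for i in range(1, N - 1):         # interior nodes: T[i-1] - 2T[i] + T[i+1] = 0
    A[i, i - 1] = A[i, i + 1] = 1.0
    A[i, i] = -2.0
# Top node: conducted flux balances convective loss, eq. (2):
# -k (T[N-1] - T[N-2]) / dx = h (T[N-1] - T_ext)
A[-1, -2] = -k / dx
A[-1, -1] = k / dx + h
b[-1] = h * T_ext

T = np.linalg.solve(A, b)
print(f"top-surface temperature: {T[-1]:.1f} K")
```

With these assumed values the solver returns a top-surface temperature of about 362 K; the point of the sketch is only the structure of the boundary conditions, not the temperatures reported in the paper.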
Meshing
In order to mesh the 3-D model accurately, regular tetrahedra were selected as the mesh elements. The mesh size affects the accuracy and run time of the numerical simulation. After multiple comparisons, the refinement option under the physics-controlled mesh in the software was selected to mesh the models. The element size of the mesh in the models ranges from 0.8-8 mm. Through a refinement study, it was found that the minimum element quality of the mesh is greater than 0.1, which showed that the overall mesh quality is acceptable.
Experimental verification
The diagram of the experimental device used for verification is shown in fig. 4. The heating device was supplied by Xuankang Electric Heating Appliance Co., Ltd., China. First, the temperature of the heating plate was set to 773.15 K by the heating device. Then, the 3-D stitched spacer composite was placed in the center of the heating plate once the temperature of the heating plate was steady. The infrared thermometer (869, Testo, Germany) was placed approximately 20 cm above the 3-D stitched spacer composite to observe the temperature distribution on the top surface of the composite. When the temperature distribution of the composite reached a stable state, the temperature distribution of the upper surface of the 3-D stitched spacer composite was recorded with the infrared thermometer.
The results of experimental verification
The temperature distributions on the top surfaces of the 3-D stitched spacer composites and the corresponding models under the heating temperature are shown in fig. 5. In order to better compare the thermal insulation effect of composites with different spacer shapes, and to compare the simulation and experimental results, a series of nodes along the x-axis were selected on the top surface of the composites. The method of selecting nodes, located at different positions of the top surface in regular order, is shown in fig. 6. Temperature curves were drawn according to the temperatures of the selected nodes; the top-surface temperature curves are shown in fig. 7. The simulation results were very close to the experimental results in terms of both temperature values and curve shapes, and the maximum relative error between the simulated and experimental values is about 4.8%, which shows that the heat transfer simulation of the 3-D stitched spacer composites in this work is feasible. The maximum temperature of the top surface of the 3-D stitched spacer composites was generally located on the sutures joining the connecting layer and the top layer. This is because the thermal conductivity of the fabric is much greater than that of the aerogel, so heat from the bottom layer is more easily transferred to the top layer along the connecting layer in the middle. This also shows that the filled silica aerogels have a good thermal insulating effect. By comparing the maximum temperatures of the top surfaces of the three kinds of 3-D stitched spacer composites with different spacer shapes, it can be found that the top-surface temperature of the composite with the triangle shape is the highest. This is because heat from the two paths from the bottom layer to the top layer of the composite gathers at the same suture on the top surface, making the maximum temperature of the triangle-shape composite higher than that of the other composites.
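Once the node temperatures have been extracted along the x-axis, the consistency check quoted above reduces to a few lines. The sketch below uses invented temperature values and assumes the relative error is computed pointwise on the absolute temperatures, which the text does not state explicitly:

```python
import numpy as np

def max_relative_error(simulated: np.ndarray, measured: np.ndarray) -> float:
    """Maximum pointwise relative error (%) between simulated and
    measured node temperatures, as used to compare the curves in fig. 7."""
    return float(np.max(np.abs(simulated - measured) / np.abs(measured)) * 100.0)

# Illustrative node temperatures (K) along the top-surface x-axis:
sim = np.array([352.0, 348.5, 355.1, 349.0])
meas = np.array([345.0, 351.0, 349.2, 352.3])
print(f"max relative error: {max_relative_error(sim, meas):.1f} %")
```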
Figure 7 shows that the minimum temperature of the top surface of the 3-D stitched spacer composites with the rectangle and triangle shapes is located in the middle of two adjacent sutures, whereas the minimum temperature of the top surface of the composite with the trapezoid shape occurred in the middle of the trapezoid's baseline. It can be seen from the data in fig. 7 that the minimum temperature of the top surface of the composite with the trapezoid shape is the lowest. To explore the influence of different geometric parameters on the thermal insulation of 3-D stitched spacer composites, the numerical simulation method was used to study the impact of changes in the length of the rectangle's top length side, the degree of the triangle's base angle, and the topline length and base angle of the trapezoid on the thermal insulation effect of the composites with the corresponding spacer shapes.
The length of the rectangle's length side
The simulation temperature curves of the selected nodes on the top surface of the composites with different lengths of the rectangle's top length side are shown in fig. 8(a). The maximum temperature of the top surface of the composites occurred on the sutures between the connecting layer and the top layer, and the maximum temperature values at each location of the top surface were nearly identical. The minimum temperature of the top surface of the composites was located in the middle of the rectangle's top length side. However, as can be seen from fig. 9, the values of two adjacent minimum temperatures were not the same: where the connecting layer was joined to the top layer, the minimum temperature was higher than that over the single top layer. This might be because, when heat transferred from the connecting layer to the suture on the top surface, it continued to transfer through the fabric on either side of the suture, and the top surface with a double fabric layer tended to transfer more heat than that with a single fabric layer.
It can be seen from fig. 8(b) that, as the length of the top length side increases, the minimum temperature of the top surface of the rectangle-shape composite decreases. This is because the increased length of the top length side lengthens the heat transfer path from the suture on the top surface to either side, thereby reducing the amount of heat reaching the middle of the top length side. It can also be found that when the length of the top length side was 5 mm, the maximum temperature of the top surface of the rectangle-shape composite was far greater than that of the other composites. This is because, when the top length side is short enough, heat from the sutures easily travels to either side and reaches the adjacent suture, causing the maximum temperature values to rise.
The degree of the triangle's base angle
The simulation temperature curves of the selected nodes on the top surface of the triangle-shape composites with different degrees of the triangle's base angle are shown in fig. 10(a). The maximum temperature of the top surface of the composite was located on the suture, and the minimum temperature was located in the middle of the triangle's baseline. As can be seen from fig. 10(b), as the degree of the base angle increases, the maximum and minimum temperatures increase together. This is because, with the increase of the base angle, the length of the intermediate connecting layer becomes shorter, which shortens the heat transfer path from the bottom surface to the top surface and causes more heat to reach the top surface. In addition, as the base angle increases, the length of the baseline on the top surface decreases, so that heat from the top-surface sutures more easily reaches the sutures on either side and the middle of the baseline; hence both the minimum temperature in the middle of the baseline and the maximum temperature on the sutures increase.
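The geometric argument above can be made quantitative with one line of trigonometry: for a fixed spacer height H, the connecting-layer length, i.e. the through-thickness heat path, is L = H/sin θ. A short sketch with an assumed height of 10 mm:

```python
import math

H = 10.0  # spacer height in mm (assumed, for illustration only)
for theta_deg in (30, 45, 60, 75):
    L = H / math.sin(math.radians(theta_deg))  # connecting-layer length
    print(f"theta = {theta_deg:2d} deg -> heat path length = {L:.1f} mm")
# 30 deg -> 20.0 mm, 45 deg -> 14.1 mm, 60 deg -> 11.5 mm, 75 deg -> 10.4 mm:
# a larger base angle shortens the through-thickness heat path, consistent
# with the higher top-surface temperatures seen in the simulations.
```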
The degree of the trapezoid's base angle
The simulation temperature curves of the selected nodes on the top surface of the trapezoid-shape composites with different degrees of the base angle are shown in fig. 11(a). As can be seen from the figure, the maximum temperature occurred on the sutures of the top surface of the composites, and the minimum temperature occurred in the middle of the trapezoid's baseline on the top surface.
It can be seen from fig. 11(b) that, with the increase of the base angle's degree, both the maximum and minimum temperatures of the top surface increase. This is because, as the base angle increases, the heat transfer path from the bottom surface to the top surface becomes shorter, resulting in more heat reaching the sutures on the top surface. The lengths of the trapezoid's topline on the upper surface of the composites are kept consistent, so increasing the base angle reduces the length of the trapezoid's baseline on the upper surface, making it easier for heat to reach the sutures and the middle of the baseline on the top surface, which increases the maximum and minimum temperatures of the top surface.

The length of the trapezoid's topline

The simulation temperature curves of the trapezoid-shape composites with different lengths of the toplines are shown in fig. 12(a), and the corresponding maximum and minimum temperatures of the top surface in fig. 12(b). When the lengths of the trapezoid's topline on the top surface were greater than or equal to 10 mm, the maximum temperature of the top surface of the composite did not change very much. This is because when the base angle is constant, the length of the heat transfer path from the bottom layer to the top layer is also constant, and the temperatures at the sutures at either end of the topline have little influence on each other; thus the maximum temperature on the suture is approximately steady. Since the minimum temperature occurs in the middle of the trapezoid's baseline on the top surface, and the fixed base angle makes the trapezoid's baseline long enough, the minimum temperature does not change much.
Conclusions
The thermal insulation performance of spacer fabric composites with different spacer shapes and different geometric parameters was studied in this paper. Through comparative analysis of the heat transfer of spacer fabrics with different spacer shapes and geometric parameters by numerical simulation and experiment, the conclusions are as follows.
- The temperature distributions on the top surface of the spacer fabric composites with three spacer shapes were close to the numerical simulation results at high temperature, and the maximum relative error between the simulated and experimental values was about 4.8%, which indicates that the heat transfer simulations in this paper have a certain predictive ability for the thermal insulation of spacer fabric composites.
- The heat transfer results of composites with triangular and trapezoidal spacers with different base angle degrees were simulated and analyzed. It was found that increasing the base angle shortens the heat transfer path in the vertical direction, resulting in an increase of the maximum temperature of the top surface.
- The heat transfer results of composites with rectangular and trapezoidal spacers with different top-side and topline lengths were simulated and analyzed. It was found that the distance between two adjacent sutures on the top surface affects the minimum temperature of the top surface, and as this distance increases, the effect becomes smaller and smaller. The maximum temperature is also affected when the distance is small enough, in which case the maximum temperature increases.
Figure 1. The cross-section schematic diagrams of the 3-D stitched spacer fabrics: (a) rectangle, (b) triangle, and (c) trapezoid
Figure 2. Sketch maps of the 3-D stitched spacer fabrics and their corresponding models: (a) REC20, (b) TRI45°, and (c) TPZ10 (TPZ45°)
Figure 3. Model diagram of the 3-D stitched spacer composite filled with silica aerogel
Figure 8. (a) Simulation temperature curves of composites with different lengths of the rectangle's top length side and (b) maximum and minimum temperature of the top surface of the rectangle-shape composites with different lengths of top length sides
Figure 9. The simulation result of the composite with the rectangle shape
Figure 11. (a) Simulation temperature curves of the trapezoid-shape composites with different degrees of the base angles and (b) maximum and minimum temperature of the top surface of the trapezoid-shape composites with different degrees of the base angles
Figure 12. (a) Simulation temperature curves of the trapezoid-shape composites with different lengths of the toplines and (b) maximum and minimum temperature of the top surface of the trapezoid-shape composites with different lengths of the toplines | 2023-02-09T16:19:17.118Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "f3b64ce5164152e6b2083c24d7c1ad6b2545d232",
"oa_license": null,
"oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-98362300027L",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6d5763669cc858125fd974f4effac418cc6bde0a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
256747731 | pes2o/s2orc | v3-fos-license | High-speed train cooperative control based on fractional-order sliding mode adaptive algorithm
Purpose – This study aims to propose an adaptive fractional-order sliding mode controller to solve the problem of train speed tracking control and position interval control under a disturbance environment in the moving block system, so as to improve tracking efficiency and collision avoidance performance.
Design/methodology/approach – The mathematical model of information interaction between trains is established based on algebraic graph theory, so that each train can obtain the state information of adjacent trains and then realize distributed cooperative control. In the controller design, sliding mode control and fractional calculus are combined to avoid the discontinuous switching phenomenon, so as to suppress the chattering of sliding mode control, and a parameter adaptive law is constructed to approximate the time-varying operating resistance coefficient.
Findings – The simulation results show that, compared with proportional integral derivative (PID) control and ordinary sliding mode control, the control accuracy of the proposed algorithm in terms of speed is improved by 25% and 75%, respectively. The error frequency and fluctuation range of the proposed algorithm are reduced in the position error control, the error value tends to 0, and the operation trend tends to be consistent. Therefore, the control method can improve the control accuracy of the system, which proves that it has strong immunity.
Originality/value – The algorithm can reduce the influence of external interference in the actual operating environment, realize efficient and stable tracking of trains, and ensure the safety of train control.
Introduction
With the rapid development of railway communication technology, automatic driving technology and train positioning systems, the train control mode has changed from quasi-moving block to moving block (Long, Meng, Wang, Luan, & Zhang, 2020). Trains are inevitably affected by external interference in the actual operating environment, so in such a high-speed, high-density tracking mode, a collaborative control algorithm is urgently needed to improve the control performance and immunity during train operation. Under the quasi-moving block mode, the target point of high-speed train tracking operation is the starting point of the block section occupied by the preceding train, which limits transportation efficiency to a certain extent; under the moving block mode, the train adopts the "hitting the soft wall" method to realize tracking operation, which further shortens the tracking distance and increases the flexibility and autonomy of trains. The rapid development of railway communication technology, automatic driving technology and train positioning systems has created the conditions for multi-train cooperative control. During the tracking operation of high-speed trains, real-time, high-quality information interaction is realized between trains through the Radio Block Center (RBC) and other ground equipment, and cooperative control between trains is realized through intelligent control algorithms based on automatic driving technology, so as to achieve smaller tracking intervals and accurate tracking of the expected speed curve, thus improving traffic efficiency, safety and comfort (Tian, 2020). However, today's control algorithms ignore the influence of interference on the control system, so improving the control algorithm and enhancing the immunity of the system is of far-reaching significance to the research of multi-train collaborative control.
At present, scholars at home and abroad mainly study the multi-train cooperation problem from the perspectives of operation scheduling and control algorithms. In terms of operational scheduling, the key is to shorten the departure interval, optimize the adjustment of the running diagram and improve the line transportation efficiency of cooperative operation between trains (Pan, Mei, & Zheng, 2015; Sun, 2019; Zeng, Zhang, & Chen, 2019). On the other hand, automatic train driving control algorithms based on fuzzy control and particle swarm optimization have been studied (Cao, Ma, & Zhang, 2018; Xu, Yang, Tu, & Wu, 2021; Zhang & Wu, 2021). Chen, Dang, and Hu (2014) established a multi-agent system (MAS) interaction mechanism between the train and the RBC based on multi-agent theory and realized real-time train-ground communication and safe distance control of multi-train tracking operation. Liu (2020) proposed a virtual coupling train group control strategy based on MAS and built a control model according to the control rules to achieve the goal of stable and coordinated operation of train groups. Therefore, in an ideal operating environment, status information can be received in real time through the train-ground communication mechanism, so as to ensure stable and safe tracking between trains.
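To make the information-interaction modeling concrete, the sketch below encodes a simple predecessor-following communication topology for a four-train platoon as an adjacency matrix and forms the graph Laplacian used in consensus-style analyses. The topology and the first-order consensus rule in the comment are illustrative assumptions, not the scheme of any cited paper:

```python
import numpy as np

# Predecessor-following topology: train i receives state information from
# train i-1 (e.g. relayed through the RBC). A[i, j] = 1 means train i
# listens to train j; train 0 is the leader and listens to no one.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))  # in-degree matrix
Lap = D - A                 # graph Laplacian
print(Lap)
# In the simplest first-order consensus sketch, each follower is driven
# toward its neighbor's state, e.g. x_dot = -Lap @ (x - desired_offsets).
```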
However, actual line operation is inevitably affected by external interference, resulting in system uncertainty. Sliding mode control is strongly robust to uncertain parameters and external interference, so introducing such a controller helps preserve model accuracy. Zhao et al. (2022) aimed to eliminate the influence of parameter uncertainty and external interference on the longitudinal cruise control of intelligent trains and proposed a longitudinal cruise control method based on adaptive dynamic sliding mode, which realizes stable and accurate tracking of vehicle speed. To achieve accurate end-trajectory control of a multi-robotic-arm system, Li, Xu, and Gui (2021) used time-delay estimation and an adaptive fuzzy sliding mode controller to eliminate interference and realize collaborative handling of target objects. Li, Jin, Yang, Tan, and Fu (2020) clarified the action mode of the coupler and draft gear between adjacent vehicles, established a strongly coupled model of the high-speed EMU, and designed a distributed neural network sliding mode control strategy for speed tracking control of the high-speed EMU. However, because of the inertia of the sliding mode control system, the system inevitably lags during switching, which easily causes chattering, and some scholars have used fractional order to weaken it. Yu, Zhang, and Jiang (2020) showed that, in the case of actuator failure of a quadrotor UAV, combining neural network techniques with fractional sliding mode control still allows the vehicle to follow the trajectory of the virtual leader and maintain the ideal relative position. Under parameter change and external disturbance, Wei, Wang, Ji, and Fang (2021) designed an adaptive fuzzy fractional sliding mode control method to improve the robustness of the system and ensure its tracking accuracy. For sensorless teleoperated robot systems with uncertainties such as time delay and external interference, variable structure control based on neural networks and an optimized fractional-order selection strategy have been proposed (Ma, Liu, Huang, & Kuang, 2022). Dong, Yang, and Basin (2022) focused on fast position tracking while reducing chattering under logarithmic sliding mode control signals in permanent magnet linear motor systems.
To obtain accurate parameters of the train dynamics model in a time-varying environment and further improve robustness and fault tolerance, adaptive mechanisms have been proposed. He, Yang, and Lv (2019) designed a controller based on tracking error according to sliding mode control theory for the precise stopping and parking problem of the automatic driving system of high-speed trains, and then introduced adaptive and fuzzy reasoning rules to further weaken the chattering phenomenon, so as to achieve accurate parking. In the vehicle-to-vehicle communication mode, Wang (2018) proposed a multi-train adaptive collaborative control algorithm based on sampling feedback and nonlinear gain, finally realizing steady-state tracking of trains. Wang, Wu, Feng, and Zhang (2016) applied the terminal sliding mode control principle to design a train stop control algorithm and introduced a parameter adaptive mechanism to further enhance the adaptability of the control system. Based on Lyapunov stability theory, Li, Meng, Xu, and Yin (2018) designed an adaptive robust sliding mode controller for automatic operation of high-speed trains and used adaptive control to approximate the uncertain system input coefficient in real time, thereby eliminating chattering. Train collaborative tracking is a multi-train coupled control system in which trains influence one another; in a complex nonlinear and disturbed environment, the study of multi-train speed tracking and interval control is of great significance for realizing the overall collaborative stability of the train queue.
The main contributions of this paper are: (1) During train operation, considering the influence of external interference on the tracking accuracy of the train queue, a sliding mode controller based on the state error is designed, which improves the robustness and response speed of the algorithm in a complex nonlinear environment to a certain extent (Liu, 2019).
(2) To address the chattering problem of traditional sliding mode control, a fractional derivative term is added to the sliding surface by introducing fractional calculus, so as to reduce the inherent chattering of the sliding mode controller and improve its accuracy.
(3) In the actual line operating environment, some system parameters change with time, so a parameter adaptive law is used to approximate the real train resistance coefficients in real time, avoiding the model error caused by using traditional empirical parameters.
This paper studies the collaborative control problem of trains based on multi-agent theory. First, the information interaction model between trains and ground equipment is constructed based on algebraic graph theory, the system state equation is determined according to the train operation mode, and the sliding surface function is established based on the train state error. Second, to suppress chattering of the sliding mode controller, a fractional derivative is added to the sliding surface, and to reduce the influence of time-varying parameters on model accuracy, a parameter adaptive law is introduced on top of the fractional sliding mode control algorithm to approximate the time-varying resistance coefficients. Finally, simulation in MATLAB verifies the effectiveness of the proposed algorithm.
Mathematical description of train operation process

2.1 Multi-train interaction topology model in adjacent communication mode
In this paper, the high-speed-rail multi-train system operates under the moving block system; the trains and the ground system in the automatic train control (ATC) system exchange information through the GSM-R wireless communication network, and a train-to-ground wireless structure is used to achieve this information exchange, as shown in Figure 1.
In the process of high-speed railway multi-train tracking operation, the following information interaction mechanism is defined: taking train i as an example, train i carries out bidirectional information interaction with its "topology adjacent" trains through the RBC; that is, trains are kept at the minimum safe interval from the preceding train according to their respective braking curves. At the same time, each train transmits its own state information to the ground equipment (RBC) through the GSM-R network, from which it is forwarded to other trains to realize inter-train information interaction. This train-ground-train communication mechanism is the basis of expected-state calculation and cooperative control. Algebraic graph theory is used to establish the high-speed-rail multi-train information interaction topology model as the information basis for train collaborative control. The research object of this paper is multi-train tracking on a single line, so the topology of the multi-train communication network remains constant during train operation; that is, it is a fixed topology.
The scenario of multi-train operation on a high-speed railway is regarded as the communication network topology of a multi-agent system, and each running train and the RBC are regarded as individual agents. The mathematical model established based on matrix and graph theory can be represented by a simple directed graph G = (V, E), where V = {a_0, a_1, ..., a_n} represents the set of trains in the graph, E ⊂ V × V represents the edge set of the directed graph, and (a_i, a_j) is defined as the edge from agent a_i to agent a_j (representing the communication relationship between the agents). If there is a communication relationship between two trains, then (a_i, a_j) exists, and agents a_i and a_j are called neighbor nodes. The set of neighbors that interact with a_i is denoted N_i. Considering that the train control system has high real-time requirements for the state information of each train, in order to simplify the complexity of the communication model, the information interaction model does not take the effect of time delay into account. To give the mathematical model of the multi-train communication relationship, formula (1) is established.
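For concreteness, the fixed chain topology described above can be encoded as an adjacency matrix and the corresponding graph Laplacian. The following Python sketch is illustrative only (the paper's simulations were run in MATLAB), and the function name and 5-train example are our own:

```python
import numpy as np

def chain_topology(n_trains: int):
    """Adjacency matrix A and Laplacian L = D - A for a single-line chain
    in which train i exchanges state with trains i-1 and i+1."""
    A = np.zeros((n_trains, n_trains))
    for i in range(n_trains - 1):
        A[i, i + 1] = 1.0  # bidirectional link via the RBC
        A[i + 1, i] = 1.0
    D = np.diag(A.sum(axis=1))  # degree matrix
    return A, D - A             # the Laplacian encodes the coupling

A, L = chain_topology(5)        # the 5-train scenario simulated later
print(L)
```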
2.2 Dynamic equation of train operation
Considering that multi-train collaboration is realized through the ATC of each single train, the train is modeled as a rigid mass point, and the single-particle dynamic model of the train is obtained as

x′ = v,
m(1 + γ)v′ = u − w,
w = a + bv + cv²,   (2)

where x is the displacement; v is the real-time speed of the train; u is the traction/braking force applied to the train; w is the basic resistance of the train; a, b and c are the rolling mechanical resistance coefficient, friction resistance coefficient and air resistance coefficient of the train, respectively; ξ is the acceleration coefficient of the train, for which 0.098 is the reference value defined in the technical regulations for high-speed trains (Yang & Zhou, 2018); and γ is the wheel rotating mass coefficient of the train. In actual train operation, the resistance of the train includes additional resistance and basic resistance. The basic resistance is affected by the train speed and mechanical wear, whereas the additional resistance appears only on fixed parts of the line. In this paper, the additional resistance and uncertain disturbances are combined for computational convenience (Lian, Liu, & Li, 2020).
According to the train model of equation (2), the train operation state-space equation is established as

x_i′(t) = v_i(t),
m_i(1 + γ)v_i′(t) = f_i(t) − w_i(t) − d(t),   (3)

where m_i is the mass of train i, v_i(t) is the real-time speed of train i, x_i(t) is the real-time position of train i, f_i(t) is the traction/braking force of train i, w_i(t) is the real-time basic resistance, and d(t) combines the additional resistance and external disturbance. Referring to the technical regulations of high-speed railways, the basic running resistance of the train is affected by environmental factors such as wind speed and rail surface conditions, and its coefficients are obtained through repeated test fitting in engineering practice. Therefore, error in the resistance formula used in the model is inevitably caused by external environmental factors during actual train running. The parameter structure of the basic running resistance is w_i = a_i + b_i v_i + c_i v_i². Considering the uncertainty and time-varying characteristics of the parameters under the complex environment of train operation, the coefficients a_i, b_i, c_i are given a variable-parameter structure with a reference constant term and an unknown time-varying term: a_i(t) = a_i* + Δa_i(t), b_i(t) = b_i* + Δb_i(t), c_i(t) = c_i* + Δc_i(t), where a_i*, b_i* and c_i* are standard constant terms and Δa_i(t), Δb_i(t) and Δc_i(t) represent the time-varying components caused by external disturbance and internal variation of the system. These three parameters are approximated in real time using the adaptive law so that they track their actual values in a complex external environment (Li & Hou, 2015). Comparing adaptive sliding mode control with traditional sliding mode control in the following text, the latter uses fixed parameters while the former fits the parameters with an adaptive law, so the improved algorithm better matches the actual characteristics of the train.
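As a numerical illustration of equations (2)-(3), the sketch below integrates a single train with a Davis-type basic resistance using forward Euler. All numerical values are placeholders of our own choosing, not the CRH380AL parameters from Table 1:

```python
m, gamma = 4.0e5, 0.06        # mass [kg] and rotating-mass coefficient (placeholders)
a, b, c = 8.0e3, 60.0, 8.0    # basic-resistance coefficients, w = a + b*v + c*v^2

def step(x, v, f, d, dt=0.1):
    """One Euler step of x' = v, m(1+gamma) v' = f - w(v) - d."""
    w = a + b * v + c * v ** 2
    dv = (f - w - d) / (m * (1.0 + gamma))
    return x + v * dt, v + dv * dt

x, v = 0.0, 0.0
for _ in range(600):                   # 60 s of constant traction, no disturbance
    x, v = step(x, v, f=3.0e5, d=0.0)
print(f"x = {x:.0f} m, v = {v:.1f} m/s")
```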
Controller design
The control object of this paper is multi-train operation on a single track. In the whole tracking process, the greatest difficulty for the controller lies in accurate tracking of the speed curve. In addition, the complexity of the resources and environment along the whole line makes the train's additional resistance highly nonlinear. Therefore, if the controller has good disturbance suppression ability, tracking accuracy during operation is guaranteed, which helps the trains coordinate their operating states. According to the operating characteristics of high-speed trains, the basic resistance parameters of the system are highly susceptible to time-varying factors such as wind speed and wear of locomotive components. The resulting uncertainty of the basic resistance coefficients affects the stable operation of the train, so the uncertainty of system model parameters must be considered in the controller design. A well-designed controller should be robust enough to overcome unknown external interference and the uncertainty of braking system parameters, thus achieving fast and stable online control and ensuring high-precision speed tracking, a stable and compact train spacing, and smooth control input during train tracking operation. The structural block diagram of train cooperative control is shown in Figure 2.
Error model extraction
The sliding mode controller responds quickly to the target requirements and can make the system state converge to the desired trajectory in finite time. It also has strong parameter adaptability and robustness, ensuring that no overshoot occurs when the system has parameter uncertainty and avoiding adverse impacts on the system (Zhang, 2019). Therefore, in multi-train collaborative control, sliding mode control can suppress interference from the external environment and provides a solution for model uncertainty caused by time-varying parameters.
The velocity and position error states are defined as

e_i = x_i − x_ir,
e_i′ = v_i − v_ir,

where e_i is the position error of the train, e_i′ is the speed error of the train, x_i is the actual position of the train, x_ir is the reference position and v_ir is the reference speed. Considering the relative position between trains, the position error is expressed according to the actual situation as

e_i = x_{i−1} − x_i − L_b − L_s − L_d,

where L_b is the braking distance of the following train, L_s is the safety envelope and L_d is the train length.
Fractional calculus
Due to the inevitable switching phenomenon of sliding mode control, fractional calculus is introduced to suppress its chattering. Based on the advantages of fractional calculus in softening discontinuous switching (Deng, 2014), the Caputo fractional calculus adopted in this paper is defined as

D^α f(t) = (1/Γ(m − α)) ∫₀ᵗ f^(m)(τ) / (t − τ)^(α−m+1) dτ,

where d^m/dt^m is the derivative in the traditional sense, m is the smallest integer not less than the fractional order α, t is the time, τ is the integral variable, and Γ(x) = ∫₀^∞ e^(−t) t^(x−1) dt is the gamma function; when α < 0 the operator acts as a fractional-order differential, when α > 0 it acts as a fractional-order integral, and e denotes the position tracking (spacing) error on which the operator later acts. Since the controlled object in this paper has a large operating range and is relatively flexible, it needs good dynamic handling, so the fractional calculus parameters must be selected carefully. The behaviour of the adjustment process differs under different fractional orders, and an appropriate fractional operator can be selected according to the actual operating conditions so that the system meets different dynamic and static performance requirements. In summary, fractional-order-based sliding mode control makes the error system converge faster, achieves higher control accuracy and yields a smoother control process (Fang, 2021).
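Numerically, such fractional derivatives are commonly approximated with Grünwald-Letnikov weights, which coincide with the Caputo derivative under zero initial conditions. The sketch below is a standard textbook discretization, not code from the paper:

```python
def gl_weights(alpha: float, n: int):
    """Grunwald-Letnikov coefficients w_k = (-1)^k * C(alpha, k), built recursively."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_derivative(history, alpha: float, dt: float) -> float:
    """Approximate D^alpha applied to the newest sample of `history`."""
    w = gl_weights(alpha, len(history))
    return sum(wk * s for wk, s in zip(w, reversed(history))) / dt ** alpha

e_hist = [0.0, 0.1, 0.3, 0.5, 0.6]     # sampled position error e_i(t)
print(frac_derivative(e_hist, alpha=0.5, dt=0.1))
```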
Fractional-order sliding mode controller design
In the process of train tracking operation, the reference position and the reference speed curve must be tracked accurately at the same time. By adding a fractional differentiation term on the sliding hyperplane (Zhou, 2021), that is, by introducing the train position error e_i and train speed error e_i′ into the sliding hyperplane, the inherent chattering of sliding mode control can be suppressed effectively while the errors converge rapidly and synchronously. The fractional-order sliding surface is designed as

S_i = e_i′ + λ D^α e_i,

where λ is the gain coefficient of the sliding surface, λ > 0.
The above formula considers only the speed and position errors of train i and not the role of neighboring trains in coordinated formation control, so it is difficult to accurately describe the stability of the multi-train queue. Therefore, in order to realize the cooperative control characteristics of multiple trains, a weighted error N_i is introduced to couple the state information of adjacent vehicles, where Δ_j and β_i characterize the regulatory parameters of the relationship between train i and train j.
Next, the sliding mode controller is designed to realize online tracking of the train reference speed and reference position curves, where ŵ_i is the estimation term of the train basic resistance, k_2 sgn(S_i)d_i is the nonlinear switching control term of the system, used to deal with external disturbances and uncertainties, and k_1, k_2 are the control gains, with k_1 > 0, k_2 > 0.
The uncertainty and time variation of the basic resistance depend on its resistance coefficients, so the control law is combined with the parameter adaptive laws (given as equations (10) and (11) below) to compensate for them. Lemma (tracking performance and closed-loop signal boundedness): For the multi-train dynamic system under the adjacent communication topology, if the above control laws and parameter adaptive laws are selected, the linear weighted error N_i and the parameter estimation error g_i eventually converge to the compact sets Ω_{N_i} and Ω_{g_i}, and the multi-train closed-loop signals are bounded and eventually consistent; the compact sets are specified at the end of the stability proof.
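Because the exact expressions of the control law and of the adaptive laws (10)-(11) were lost in extraction, the following sketch only assumes a common form: an equivalent control built from the estimated resistance ŵ_i = â_i + b̂_i v + ĉ_i v², a linear term in the surface S, a smoothed switching term, and sigma-modified gradient adaptation. All gains and the tanh boundary-layer width are illustrative, not the paper's:

```python
import math

k1, k2 = 0.75, 1.0                 # control gains (k1, k2 > 0)
mu_a = mu_b = mu_c = 0.01          # adaptation gains
zeta = 1e-4                        # leakage / compensation factor
a_hat, b_hat, c_hat = 8.0e3, 60.0, 8.0   # initial coefficient estimates

def control_force(S: float, v: float) -> float:
    """Equivalent control plus linear and (smoothed) switching terms."""
    w_hat = a_hat + b_hat * v + c_hat * v ** 2
    return w_hat - k1 * S - k2 * math.tanh(S / 0.05)  # tanh softens sgn()

def adapt(S: float, v: float, dt: float) -> None:
    """Sigma-modified gradient update; leakage keeps the estimates bounded."""
    global a_hat, b_hat, c_hat
    a_hat += dt * (mu_a * S          - zeta * a_hat)
    b_hat += dt * (mu_b * S * v      - zeta * b_hat)
    c_hat += dt * (mu_c * S * v ** 2 - zeta * c_hat)
```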
Proof of stability
To verify the stability and effectiveness of the controller designed in this paper, a Lyapunov function is used for the stability proof (Zhang & Wang, 2021). Combining the cascade error and the adaptive parameters, a Lyapunov function V_i is constructed and its derivative is evaluated along the closed-loop trajectories. According to Young's inequality, −q̃q̂ = −q̃(q̃ + q) ≤ −(1/2)q̃² + (1/2)q², q ∈ {a_i, b_i, c_i}, it can be obtained from the above lemma that V_i′ ≤ −μ_1 V_i + θ_i, where μ_1 and θ_i are positive constants determined by the control and adaptation gains. Hence V_i is ultimately bounded: as t tends to infinity, V_i(t) ≤ θ_i/μ_1 + (V_i(0) − θ_i/μ_1)e^(−μ_1 t), where V_i(0) is the initial value of V_i. By the definition of V_i, the linear cascade error N_i and the parameter estimation error g_i therefore converge to the compact sets Ω_{N_i} and Ω_{g_i}. The theoretical proof is complete. Through the Lyapunov function constructed above, the feasibility of the control algorithm is proved mathematically and the stability of the designed distributed control law is verified on a theoretical basis, so the controller can achieve the goal of multi-train cooperative control.
Simulation comparison and analysis
To verify the effectiveness of the proposed algorithm for cooperative formation control, the CRH380AL EMU is taken as the simulation object. Its specific parameters are shown in Table 1.
Under the moving block system, the coordinated tracking operation of five high-speed trains is considered. To fit the actual operating characteristics of high-speed trains, Figure 3 shows the expected operation curve of train 1. The entire run time is 4,400 s, the upper speed limit during operation is 97 m/s, the acceleration in the start-up stage is set to 0.16 m/s², the temporary speed limit is 75 m/s, and the braking deceleration is 0.12 m/s². To test the cooperative control effect of the designed algorithm, it is assumed that the trains depart synchronously at a given initial spacing. Figure 4 shows the external disturbance over the whole operation process.
The goal of collaborative control in this paper is for each train to eventually maintain a consistent running speed and a stable expected position interval in a complex operating environment. The initial speed of each train is set to 0 m/s, and the initial relative position interval of the five trains is set to 10,000 m. Meanwhile, a small initial position error is added in the simulation. The search range of the parameters is determined from prior knowledge, and the optimal values are then obtained by traversing the parameter set in a trial-and-error manner, giving k_1 = 0.75 and k_2 = 1. In the figures, "V-X" denotes the speed and position curves of the trains.
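The trial-and-error traversal mentioned above amounts to a grid search over a prior range of gains. In the sketch below, `simulate_cost` is a hypothetical stand-in for running the closed-loop model and scoring the tracking error; it is replaced here by a smooth placeholder so the snippet runs:

```python
from itertools import product

def simulate_cost(k1: float, k2: float) -> float:
    # Placeholder for a closed-loop simulation run; a bowl with its
    # minimum near (0.75, 1.0), purely for illustration.
    return (k1 - 0.75) ** 2 + (k2 - 1.0) ** 2

k1_grid = [0.25, 0.5, 0.75, 1.0]
k2_grid = [0.5, 1.0, 1.5]
best = min(product(k1_grid, k2_grid), key=lambda g: simulate_cost(*g))
print("selected gains (k1, k2) =", best)
```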
Through the simulation of the CRH380AL EMU tracking operation, it is assumed that the first train obtains the expected speed curve, the subsequent trains obtain the expected interval and expected speed in real time through the multi-agent interaction topology, and cooperative control is then realized through the distributed control law, with each train adjusted online by the proposed calculation method. Figure 5 shows the displacement-velocity tracking curves obtained with the traditional PID control algorithm. Although this algorithm keeps the operation trends of the trains basically consistent, in the cruising stage the speed error is only controlled within the range (−0.4, 0.4) m/s and fluctuates severely, which makes it difficult to meet the accuracy requirements for passenger comfort and smooth operation of high-speed trains. When the unimproved sliding mode controller is selected and the resistance coefficients are given fixed empirical values, the control effect obtained by simulation is shown in Figure 6: the subsequent trains can basically track the reference trajectory of train 1 and the train running states basically tend to be consistent, but the actual speed error fluctuates in the range (−0.3, 0.3) m/s in the cruising stage at the maximum speed; owing to the chattering inherent in traditional sliding mode control, the train speed fluctuates back and forth around 97 m/s without a decreasing trend. Figure 7 shows the speed curves obtained with the fractional-order adaptive sliding mode control algorithm. In the overall tracking operation, each train stably tracks the calculated reference curve and maintains a stable tracking interval from its neighbours. According to the local magnification, when the trains are in cruise mode the actual speed error is controlled within (−0.1, 0.1) m/s and gradually tends to 0 during cruising; compared with the previous two algorithms, the speed error range is further reduced, the multi-train collaborative control effect is improved, and the chattering amplitude is better suppressed. Therefore, the proposed algorithm meets the speed-tracking accuracy requirements of high-speed train tracking operation. Figures 8-10 show the train position error under PID control, ordinary sliding mode control and fractional-order adaptive sliding mode control, respectively. Comparing the three figures, all three algorithms meet the safety requirements for train tracking spacing: the error fluctuation range stays within the warning line and there is no collision risk. The PID control algorithm in Figure 8 responds quickly, but a position error of approximately 10 m is generated at the initial stage, and over the subsequent 4,400 s of operation the error amplitude is large and fluctuates frequently; the system is prone to overshoot, with the overall position error controlled within (−10, 10) m. The ordinary sliding mode control algorithm in Figure 9 is robust and has a certain inhibitory effect on interference, so the initial position error is reduced to less than 8 m and the number of error fluctuations decreases, with the position error controlled within (−0.5, 0.5) m; however, the inaccuracy caused by chattering remains.
Under the set initial position error, Figure 8 shows that the PID control algorithm responds quickly but produces a large error amplitude during operation, the fluctuation being due to the inherent overshoot of PID control. The ordinary sliding mode control algorithm is strongly robust and responds quickly, but Figure 9 shows excessive chattering, caused by the discontinuous switching inherent in sliding mode control. When the fractional-order adaptive sliding mode controller designed in this paper is adopted, the position-distance error of each train stays essentially at the zero line and the tracking error is basically 0, realizing tracking without steady-state error and with good tracking accuracy. Each train is basically kept at the desired tracking position, and the fluctuation trends are similar; the locally enlarged window in Figure 10 shows that under this algorithm the trains can realize safe and efficient cooperative operation. Some short fluctuations due to discontinuous switching appear at operating-condition transitions such as traction, cruising and braking; these are an inevitable feature of train operating-condition transitions and do not affect the stability of the control system.
Conclusion
(1) Based on the multi-agent algorithm, the network topology of train information transmission is established through algebraic graph and matrix theory, and the train operation control algorithm is designed with a sliding mode controller.
(2) Based on fractional-order adaptive sliding mode control, the cascaded error with integrated weights is used as the input variable of the control law, ensuring the predetermined cooperative performance.
(3) The distributed control law of train tracking operation and a strict mathematical stability proof of the corresponding closed-loop system are given to realize stable tracking among multiple high-speed trains. Compared with the simulation results of the traditional PID controller and the ordinary sliding mode control algorithm, the proposed control algorithm has higher robustness and control accuracy. When the model has parameter uncertainty or strong external disturbance, the proposed algorithm still ensures accurate tracking of the reference curve and realizes safe and efficient cooperative operation of the train queue.
(4) The coordinated operation scenario selected in this paper is tracking operation over a large area, with the trains starting and stopping synchronously. Future research should move closer to the actual scenario.

(5) To simplify the calculation, the single-particle train model used in this paper does not consider the coupling forces between carriages, nor does it study in detail the complexity of the external environment faced during operation; instead, the various external disturbances and additional resistances are combined in the simulation, which reduces model accuracy to a certain extent. More detailed modelling addressing these shortcomings can be pursued in the future to further improve control accuracy.
Figure 1. Multi-train communication mechanism in adjacent communication mode. Figure 2. Structure diagram of multi-train cooperative controller.
In equation (10), â_i, b̂_i and ĉ_i are the estimated values of the resistance coefficients, giving the estimated basic resistance ŵ_i = â_i + b̂_i v_i + ĉ_i v_i². To eliminate the influence of the time-varying and uncertain factors of the basic resistance, the parameter adaptive laws of equation (11) are designed, where μ_a, μ_b, μ_c, ζ_1, ζ_2, ζ_3 are positive constants and ζ_1, ζ_2, ζ_3 serve as compensation factors to correct the estimation errors.
Figure 3. Reference speed curve and displacement curve during train operation. Figure 4. External disturbance during train operation. Figure 5. V-X operation curve generated based on PID control algorithm. Figure 7. V-X operation curve generated based on fractional-order adaptive sliding mode control algorithm. Figure 10. Train position error based on fractional-order adaptive sliding mode control algorithm. | 2023-02-11T16:10:56.605Z | 2023-02-10T00:00:00.000 | {
"year": 2023,
"sha1": "ae65c3e3a60f42b48254d849dfd03adc7d28afbf",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/RS-05-2022-0022/full/pdf?title=high-speed-train-cooperative-control-based-on-fractional-order-sliding-mode-adaptive-algorithm",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b796e9bbb1dbb2a33dae995536345111f1ad96f9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
265040433 | pes2o/s2orc | v3-fos-license | Expression of PD-1/PD-L1 in peripheral blood and tumor tissues of patients with classical Hodgkin’s lymphoma
Significant biomarkers can predict and estimate the response to chemotherapy for different types of lymphoma. Classical Hodgkin's lymphoma (cHL) and peripheral T-cell lymphoma (PTCL) are different types of lymphoma with very different prognoses; programmed cell death receptor 1 (PD-1) and its ligand (PD-L1) have been studied in both diseases. However, few studies have addressed the difference in PD-1/PD-L1 levels between cHL and PTCL. To determine this difference and its clinical application value, we collected blood samples from 29 newly diagnosed cHL patients and 11 newly diagnosed PTCL patients. At the same time, paraffin sections of tumor tissue were collected from 13 cHL patients at initial diagnosis. Flow cytometry, enzyme-linked immunosorbent assay, and immunohistochemical staining were used to detect PD-1/PD-L1 levels in peripheral blood T cells, plasma, and tumor tissues, and the relationship between these results and the clinical data of the cHL patients was investigated. The levels of PD-1 on CD4+ T cells, PD-L1 on CD4+ T cells and PD-1 on CD8+ T cells in the peripheral blood of cHL and PTCL patients were higher than those of healthy controls; the level of PD-1 on CD4+ T cells in peripheral blood was higher in cHL patients with stage III-IV disease (P = .0178), B symptoms (P = .0398), higher lactate dehydrogenase (P = .0056), higher international prognostic index score (P = .0349), and subsequent relapse (P = .0306). The level of soluble PD-L1 (sPD-L1) in cHL (P < .001) and PTCL (P < .0001) patients was higher than in the healthy control group, and sPD-L1 levels were higher in patients with higher international prognostic index scores (P = .0016). Dynamic detection of sPD-L1 showed that after 2 courses of chemotherapy, the sPD-L1 level in cHL patients with complete remission declined, whereas the level in patients without complete remission did not change significantly (P > .05). In the tumor tissues of cHL patients, 77% were PD-1(+) and 69% were PD-L1(+); PD-1 and PD-L1 expression levels were high. Our results suggest that PD-1 levels on peripheral blood CD4+ T cells are helpful for staging disease in cHL patients, and that dynamic detection of sPD-L1 levels is helpful for evaluating cHL patients.
Introduction
Classical Hodgkin lymphoma (cHL) is a B cell-derived malignant lymphoma, accounting for about 20% of all lymphomas. The presence of Reed-Sternberg cells surrounded by abundant reactive cells in the tumor microenvironment is its characteristic clinicopathological feature, and most Reed-Sternberg cells express programmed cell death ligand 1 (PD-L1), which may be related to tumor cells escaping the immune response of the body. [1] PD-L1 is highly expressed in melanoma, [2] gastric cancer, [3] liver cancer, [4] ovarian cancer, [5] and other tumors. Previous studies have found that PD-L1 highly expressed on tumor cells interacts with programmed cell death receptor 1 (PD-1) on lymphocytes to inhibit the proliferation and activation of lymphocytes; in this way, tumor cells escape the body's immune system and survive. [6] However, few studies have reported the expression of PD-1 or PD-L1 in the peripheral blood of cHL patients. Therefore, we examined PD-1 and PD-L1 levels in 3 types of samples from cHL patients: T lymphocytes in peripheral blood, plasma, and tumor tissues. We hope to find dynamic monitoring and efficacy evaluation methods for Hodgkin lymphoma via the PD-1 signaling pathway (immune checkpoint).
Study population and treatment
This study included 29 cases of cHL (13 males, median age 25 years) and 11 cases of peripheral T-cell lymphoma (PTCL) diagnosed at West China Hospital of Sichuan University from September 2017 to September 2018. The diagnostic criteria were based on the 2016 WHO classification of hematopoietic and lymphoid tissue tumors. [7] At initial diagnosis, 4 mL EDTA-anticoagulated peripheral blood samples were collected from the patients, and paraffin samples of tumor tissue from cHL patients were also collected. Twenty-four patients (86%) received the ABVD chemotherapy regimen, and radiotherapy was considered after at least 2 courses of chemotherapy; the efficacy evaluation after 2 courses of chemotherapy was therefore not affected by radiotherapy. The therapeutic effect was determined according to the Evaluation Criteria for the Efficacy of Malignant Lymphoma, including complete remission (CR), partial remission, stable disease, and disease progression. [8] The results showed that 18 patients (62%) reached CR, and the remaining 11 patients (38%) did not reach CR (non-CR, including 10 partial remission and 1 disease progression). Antibodies were purchased from Tianjin Sanjian Biological Co., Ltd. Red cell lysate was purchased from Beijing Leigen Biotechnology Co., Ltd., and the multi-color flow cytometer was purchased from the American Beckman Company. The human peripheral blood lymphocyte isolation solution was purchased from Tianjin Haoyang Biological Co., Ltd. The soluble PD-L1 (sPD-L1) enzyme-linked immunosorbent assay (ELISA) kit was purchased from the American company Abcam, the mouse anti-human PD-1 antibody (primary antibody) was purchased from Beijing Zhongshan Jinqiao Company, the rabbit anti-human PD-L1 [28-8] antibody (primary antibody) (catalog: ab277712, Cambridge, UK) was purchased from the American company Abcam, and the anti-rabbit/mouse (secondary antibody) universal immunohistochemistry test kit (Envision™ Detection Kit, catalog: K5007, Copenhagen, Denmark) was purchased from the Danish company DAKO. Stained samples were acquired on the flow cytometer (Navios, Beckman Coulter, CA), and the data were analyzed with FlowJo X 10 software.
ELISA of soluble PD-L1 in plasma.
The plasma samples were used to quantify the plasma sPD-L1 concentration with the human PD-L1 [28-8] ELISA kit (catalog no. 28-8, Abcam, Cambridge, UK) according to the kit instructions.
The minimum detectable concentration of this kit is 2.91 pg/mL.
Immunohistochemical staining of tumor tissue paraffin sections
Paraffin sections (3 μm) of tumor tissue were used for immunohistochemical staining as previously described. [9] The primary antibody (PD-1 or PD-L1, dilution 1:75) and the universal secondary antibody (Envision™ Detection Kit, Code No. K5007) were added sequentially for antibody incubation. The positive rate of PD-1 and PD-L1 expression (number of positive cells/number of all cells) was analyzed with Image J software. A positive rate >30% was defined as PD-L1 positive, and a positive rate >5% as PD-1 positive. These positivity thresholds were based on a literature report. [10]
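The positivity rules above reduce to a simple threshold check. The small helper below (our own, hypothetical, not part of the study's analysis pipeline) mirrors them: a positive rate above 30% scores PD-L1(+), above 5% scores PD-1(+):

```python
def positive_rate(n_positive: int, n_total: int) -> float:
    """Fraction of positive cells among all counted cells."""
    return n_positive / n_total

THRESHOLDS = {"PD-L1": 0.30, "PD-1": 0.05}

def is_positive(marker: str, rate: float) -> bool:
    return rate > THRESHOLDS[marker]

rate = positive_rate(412, 1000)   # example counts as produced by Image J
print("PD-L1(+)" if is_positive("PD-L1", rate) else "PD-L1(-)")
```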
Statistical analysis
GraphPad Prism 8.0 software was used for statistical analysis. Results are expressed as median and interquartile range [median (interquartile range)]. The Kruskal-Wallis test was used to analyze differences between groups, the Mann-Whitney U test and Spearman rank correlation analysis were used to analyze the relationship between the test results and the clinical data of patients, and the Wilcoxon matched-pairs signed-rank test was used to compare the sPD-L1 levels of the same patient at diagnosis and at the end of 2 courses of treatment. P < .05 was considered statistically significant.
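The same battery of nonparametric tests can be reproduced with SciPy; the analysis in the paper was done in GraphPad Prism, so the snippet below is only an equivalent sketch using made-up numbers:

```python
from scipy import stats

chl  = [12.1, 15.3, 9.8, 14.2, 11.7]   # e.g., % PD-1+ of CD4+ T cells (made up)
ptcl = [13.5, 16.0, 12.2, 15.1]
ctrl = [6.2, 5.9, 7.1, 6.8, 5.5]

_, p_kw  = stats.kruskal(chl, ptcl, ctrl)            # between-group difference
_, p_mwu = stats.mannwhitneyu(chl, ctrl)             # pairwise comparison
rho, p_sp = stats.spearmanr(chl, [2.1, 1.5, 2.6, 1.7, 2.3])  # rank correlation
_, p_wx  = stats.wilcoxon([30.0, 42.0, 25.0], [18.0, 20.0, 19.0])  # paired pre/post
print(p_kw, p_mwu, p_sp, p_wx)
```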
Baseline clinical characteristics of patients
The baseline clinical characteristics of the cHL and PTCL patients are summarized in Table 1 (the clinical data were obtained from the patients' medical records and examination data at Sichuan University). Thirteen patients (45%) were in stage III-IV, and 8 patients (28%) had lactate dehydrogenase (LDH) > 250 U/L. All patients were considered for radiation therapy after receiving at least 2 courses of chemotherapy. After 2 courses of chemotherapy, 18 patients (62%) achieved CR. We also recruited 27 healthy volunteers (13 males and 14 females) as a control group, with a median age of 49 years (interquartile range, 34-55) (data not shown).
Expression of PD-1 or PD-L1 in peripheral blood T cells of lymphoma patients
The results of flow cytometry showed that the expression levels of PD-1 on CD4+ T cells, PD-L1 on CD4+ T cells, and PD-1 on CD8+ T cells in the peripheral blood of cHL and PTCL patients were higher than those of the healthy control group (Fig. 1A, Table 2). Analysis of the relationship between the flow cytometry results and clinical characteristics showed that the level of PD-1 on CD4+ T cells in peripheral blood was higher in cHL patients with stage III-IV disease (P = .0178), B symptoms (P = .0398), higher LDH (P = .0056), higher international prognostic index (IPI) score (P = .0349), and subsequent relapse (P = .0306) (Fig. 1B).
In addition, Spearman rank correlation analysis found that the level of PD-L1 on CD4+ T cells was negatively correlated with the absolute lymphocyte count (Spearman r = −0.5997, P = .0323, Fig. 1B).
Expression of sPD-L1 in plasma of lymphoma patients and healthy volunteers
ELISA results showed that the expression level of sPD-L1 in the plasma of cHL (P < .001) and PTCL (P < .0001) patients was higher than that of the healthy control group (Fig. 2A), and sPD-L1 levels were higher in patients with higher IPI scores (P = .0016) (Fig. 2B).
Efficacy evaluation of cHL patients
The dynamic detection and analysis results for sPD-L1 are shown in Figure 3. In the CR group, after 2 courses of chemotherapy, the level of sPD-L1 dropped to the level of the healthy control group. In the non-CR group, the sPD-L1 level after chemotherapy was not significantly different from that before chemotherapy and remained higher than that of the healthy control group.
Expression of PD-1/PD-L1 in tumor tissues of cHL patients
Immunohistochemical staining for PD-1 or PD-L1 was performed on paraffin sections of tumor tissue from 13 cHL patients. The distribution and expression of PD-1/PD-L1 in the tumor tissue of a typical patient are shown in Figure 4; the membranes of positive cells were stained brown. PD-L1(+) was defined as a proportion of positive cells >30%, and PD-1(+) as a proportion of positive cells >5%. Ten cases (77%) of cHL tumor tissues were PD-1(+), and 9 cases (69%) were PD-L1(+).
Relationship between PD-1 or PD-L1 level of peripheral blood T lymphocytes, sPD-L1 level and PD-1/PD-L1 level in tumor tissues of cHL patients
Spearman rank correlation analysis showed no statistically significant correlations among the levels of PD-1/PD-L1 on peripheral blood T lymphocytes, sPD-L1, and PD-1/PD-L1 in the tumor tissues of cHL patients (data not shown).
Discussion
Previous studies have confirmed that the PD-1/PD-L1 signaling pathway plays an important role in the regulation of the peripheral blood immune system in various tumor patients. We have confirmed that PD-1 levels are significantly increased on CD4+ T cells or CD8+ T cells in peripheral blood in solid tumors such as non-small cell lung cancer, [11] oral squamous cell carcinoma, [12] and ovarian cancer. [13] Similarly, we also found that the expression levels of PD-1 on CD4+ or CD8+ T cells and of PD-L1 on CD4+ T cells in the peripheral blood of cHL patients were higher than those in the healthy control group (Table 2. Expression of PD-1/PD-L1 on T lymphocytes in peripheral blood of lymphoma patients and healthy volunteers).
Other studies have shown that the level of PD-1 on peripheral blood T lymphocytes is positively correlated with the staging and clinical progression of cervical cancer [14] and gastric cancer. [15] Similarly, this study found that PD-1 levels on CD4+ T cells in peripheral blood were higher in cHL patients with stage III-IV disease, B symptoms, higher IPI score, and higher LDH concentration, implying that the PD-1 level on peripheral blood CD4+ T cells contributes to the staging of cHL patients. In addition, PD-L1 is mainly overexpressed on tumor cells, but this study found that PD-L1 expression on peripheral blood CD4+ T cells was up-regulated in cHL patients. It has been reported that PD-L1 is up-regulated on CD4+ regulatory T cells in Hodgkin lymphoma tissue and inhibits the function of PD-1(+) T cells. [16] Therefore, we speculated that a high level of PD-L1 on CD4+ T cells can inhibit the function of PD-1(+) T cells by binding to PD-1 on other T cells, thereby reducing the function of the peripheral blood immune system and promoting tumor immune escape. In addition, sPD-L1 levels in cHL patients decreased to healthy control levels after complete remission was achieved through chemotherapy, indicating that dynamic monitoring of sPD-L1 can be used for efficacy evaluation. Finally, the immunohistochemical staining results showed that the positive rates of PD-1 and PD-L1 were high in the tumor tissue of cHL patients, which may be one of the factors contributing to the better efficacy of PD-1 monoclonal antibodies in HL patients.
In summary, the results of this study show that the expression level of PD-1 on peripheral blood CD4+ T cells is helpful for disease staging in cHL patients, while dynamic detection of sPD-L1 helps evaluate treatment efficacy in cHL patients. However, the sample size of this study was limited, the study duration was relatively short, and sufficient follow-up of patients was not conducted; the sample size should be expanded and the follow-up time extended to verify these findings.
In addition, 27 healthy volunteers were recruited as healthy controls (13 males, median age 49 years). During the treatment of the cHL patients, peripheral blood samples were collected again after 2 chemotherapy courses. This study was approved by the Ethics Committee of West China Hospital of Sichuan University (approval number: No. 373), and informed consent was signed by all participants.
Table 1
Baseline and clinical characteristics of 40 patients. | 2023-11-08T05:05:44.227Z | 2023-11-03T00:00:00.000 | {
"year": 2023,
"sha1": "bb69a15cc2006133e90e27dc01789589eb22018e",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bb69a15cc2006133e90e27dc01789589eb22018e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3912956 | pes2o/s2orc | v3-fos-license | Keratin 18 attenuates estrogen receptor α-mediated signaling by sequestering LRP16 in cytoplasm
Background: Oncogenesis in breast cancer is often associated with excess estrogen receptor α (ERα) activation and overexpression of its coactivators. LRP16 is both an ERα target gene and an ERα coactivator, and plays a crucial role in ERα activation and proliferation of MCF-7 breast cancer cells. However, the regulation of the functional availability of this coactivator protein is not yet clear.
Results: Yeast two-hybrid screening, GST pulldown and coimmunoprecipitation (CoIP) identified the cytoplasmic intermediate filament protein keratin 18 (K18) as a novel LRP16-interacting protein. Fluorescence analysis revealed that GFP-tagged LRP16 was primarily localized in the nuclei of mock-transfected MCF-7 cells but was predominantly present in the cytoplasm of K18-transfected cells. Immunoblotting analysis demonstrated that the amount of cytoplasmic LRP16 was markedly increased in cells overexpressing K18 whereas nuclear levels were depressed. Conversely, knockdown of endogenous K18 expression in MCF-7 cells significantly decreased the cytoplasmic levels of LRP16 and increased levels in the nucleus. CoIP failed to detect any interaction between K18 and ERα, but ectopic expression of K18 in MCF-7 cells significantly blunted the association of LRP16 with ERα, attenuated ERα-activated reporter gene activity, and decreased estrogen-stimulated target gene expression by inhibiting ERα recruitment to DNA. Furthermore, BrdU incorporation assays revealed that K18 overexpression blunted the estrogen-stimulated increase of S-phase entry of MCF-7 cells. By contrast, knockdown of K18 in MCF-7 cells significantly increased ERα-mediated signaling and promoted cell cycle progression.
Conclusions: K18 can effectively associate with and sequester LRP16 in the cytoplasm, thus attenuating the final output of ERα-mediated signaling and estrogen-stimulated cell cycle progression of MCF-7 breast cancer cells. Loss of K18 increases the functional availability of LRP16 to ERα and promotes the proliferation of ERα-positive breast tumor cells. K18 plays an important functional role in regulating the ERα signaling pathway.
Background
Estrogen receptor α (ERα), a member of the nuclear receptor (NR) superfamily of transcription factors, plays a crucial role in the control of epithelial cell proliferation and mammary gland development [1,2] as well as in the development and progression of breast cancer [3,4]. Classically, ERα is activated by estrogen binding, and this leads to receptor phosphorylation, dimerization, and to recruitment of coactivators to the estrogen-bound receptor complex [5]. Oncogenesis in breast cancer frequently involves excessive activation of the ERα signaling due primarily to overexpression of ERα and/or its coactivators [6][7][8][9]. Factors that affect the balance of ERα and its cofactors in breast cancer cells can modulate ERα signaling and thereby alter the cell growth response to estrogen stimulation. Human MCF-7 breast cancer cells express functional ERα and display estrogen-dependent growth, and have been widely used as an in vitro model for studying the regulatory mechanisms of ERα action in estrogen-dependent breast cancer [10,11].
Most coactivator proteins contain different activation domains or enzyme activity modules that include classical histone acetylase, bromo, chromo, Su(var) 3-9, Enhancer of zeste, Trithorax and ATPase domains, by which coactivators facilitate the assembly of the transcription initiation complex through their chromatin remodeling activities [12,13]. LRP16 is a member of the macro domain superfamily with a simple structure compared to other members because it contains only a single standalone macro module in its C-terminal region [14,15]. LRP16 was previously identified as a target gene for both ERα and the androgen receptor (AR) [15,16]. The proximal region (nt -676 to -24) of the human LRP16 promoter contains a 1/2 ERE/Sp1 site and multiple GC-rich elements that confer estrogen responsiveness and is sufficient for estrogen action [17,18]. LRP16 protein interacts with both ERα and AR and enhances their transcriptional activities in a ligand-dependent manner, thus establishing a positive feedback regulatory loop between LRP16 and ERα/AR signal transduction [15,19]. In addition, LRP16 has also been reported to act as a potential coactivator that amplifies the transactivation of 4 other NRs [15]. Overexpression of LRP16 can stimulate the proliferation of MCF-7 breast cancer cells by enhancing estrogen-stimulated transcription mediated by ERα [16,19]. Inhibition of LRP16 gene expression significantly suppresses the proliferative activity and invasiveness of estrogen-responsive epithelial cancer cells [19,20]. Consistent with findings in cell culture, a positive correlation was found between LRP16 mRNA levels and the progression of primary breast cancers [21]. Although the mechanisms of estrogen regulation of LRP16 expression and the functional role of LRP16 in ERα-mediated transcriptional regulation are relatively well characterized, the regulation of the functional availability of this coactivator protein is unclear.
The cytoskeleton of epithelial cells is predominantly formed by intermediate filament protein keratins (KRTs) that are subclassified into type I (acidic, KRT9 through KRT20) and type II (neutral-basic, KRT1-KRT8) families [22]. K18 (KRT18) is expressed in single-layer epithelial cells of the human body and is localized in the cytoplasm and perinuclear region. In the normal mammary epithelium, K18 is expressed in the luminal cells that represent the differentiation compartment [23]. K18 has been recognized for many years as an epithelial marker in diagnostic histopathology [24]. The level of K18 expression has been inversely associated with the progression of breast cancer: 25% to 80% of all breast carcinomas exhibit loss of K18 expression and this is associated with significantly poorer prognosis [25][26][27][28][29][30]. Transfection of K18 into ERα-negative MDA-MB-231 breast cancer cells caused significant reduction of malignancy both in vitro and in vivo [31]. Results from cell-culture experiments and clinicopathological parameter analyses have also revealed a relationship between decreased amounts of K18 in the cytoplasm and increased proliferative activity of breast cancer cells [27,28]. These previous studies suggest that K18 plays an important role in tumor progression in breast cancer patients, but the molecular mechanisms are poorly understood.
In the present study we first used the yeast two-hybrid system to investigate proteins interacting with LRP16. This revealed that K18 physically interacts with LRP16 through its C-terminal region. Moreover, K18 binding sequesters LRP16 in the cytoplasm and prevents its enhancement of ERα-mediated transcription in MCF-7 cells. Using estrogen-responsive MCF-7 cells as a model we have demonstrated that K18 modulates both estrogen activation of ERα target genes and cell cycle progression. These results suggest that loss of K18 expression in ERα-positive breast cells, and failure of cytoplasmic sequestration of the ERα coactivator LRP16, may contribute to tumor proliferation by increasing ERα signaling in the nucleus.
K18 is a novel interactor of LRP16
The yeast two-hybrid system was used to screen for new polypeptides interacting with LRP16. Sequences from a MCF-7 breast cancer cell cDNA library were screened for binding to LRP16; this identified nine clones corresponding to 12 different potential LRP16-binding proteins. One such cDNA clone was found to contain a full-length coding sequence (amino acids 1-430) for the cytokeratin K18. The specificity of the interaction between LRP16 and K18 was demonstrated by chromogenic assay using X-Gal; no staining developed using either factor alone or in pairwise controls containing only the Gal4 activation domain (AD) or the Gal4 DNA binding domain (DBD) (Figure 1).
To confirm the specificity of the interaction between K18 and LRP16 we analyzed glutathione S-transferase (GST) fusion proteins and in vitro-translated proteins by pulldown assays. GST-LRP16 efficiently bound to in vitro-translated 35S-labeled full-length K18 (Figure 2). A series of K18 deletion constructs were then used in GST pulldown assays to identify the region within K18 that is required for LRP16 binding. GST-LRP16 failed to bind to either K18-N (amino acids 1-150) or K18-F (80-375) but bound strongly to both K18-C1 (301-430) and K18-C2 (390-430) (Figure 2). We then tested N- and C-terminal LRP16 deletion constructs for K18 binding. Full-length K18 polypeptide bound strongly to GST-LRP16-C (amino acids 161-324) but only weakly to GST-LRP16-N (1-160); K18 failed to bind to GST alone (Figure 3). Together these results indicate that the interaction between K18 and LRP16 is mediated primarily by the C-terminal region of K18 and the single macro domain of LRP16.
We then used co-immunoprecipitation (CoIP) to confirm that K18 interacts with LRP16 in mammalian cells. A pcDNA3.1 expression vector directing the expression of LRP16 (pcDNA3.1-LRP16) was transfected into MCF-7 cells; cell lysates were then immunoprecipitated with antibodies directed against either K18 or LRP16. Precipitates were resolved by gel electrophoresis and probed with antibody against LRP16. The empty pcDNA3.1 expression vector provided a negative control. An intense band corresponding to LRP16 was detected in anti-K18 antibody immunoprecipitates from LRP16-overexpressing MCF-7 cells (Figure 4A, lane 5). In addition, a weak band corresponding to endogenous LRP16 was detected in anti-K18 immunoprecipitates from vector-transfected MCF-7 cells (Figure 4A, lane 6). Nonspecific IgG antibody failed to immunoprecipitate LRP16 (lanes 3 and 4 in Figure 4A). To confirm the specificity of LRP16-K18 complex formation we transiently transfected Flag-tagged empty vector or Flag-K18-C1 (amino acids 301-430) into MCF-7 cells for CoIP assays. As shown in Figure 4B, the exogenous Flag-K18-C1 and the endogenous LRP16 could be reciprocally coimmunoprecipitated by use of anti-Flag and/or anti-LRP16 antibodies. These results confirm that K18 can bind to LRP16 in MCF-7 breast cancer cells.
K18 modulates the nucleo-cytoplasmic localization of LRP16 in MCF-7 cells
K18, a member of the family of intermediate filament keratins, is localized to the cytoplasm and is not generally found in the nucleus. By contrast, LRP16 acts as a common coactivator for the nuclear receptors ERα and AR, and this implies that LRP16 is present in the nucleus. The physical association between K18 and LRP16 therefore suggested the possibility that K18 might modulate the nucleo-cytoplasmic distribution of LRP16.
To address this possibility we examined whether increased K18 expression in MCF-7 cells might alter the subcellular distribution of a LRP16-GFP fusion protein. As expected for a nuclear protein, LRP16-GFP fluorescence was found primarily in the nucleus, and nuclear fluorescence was detected in 78% of GFP-positive cells; cytoplasmic fluorescence was only detected in 22% of GFP-positive cells cotransfected with empty vector (Figure 5A and 5C). However, the distribution was reversed when cells expressing LRP16-GFP were cotransfected with a construct directing the expression of K18. Here nuclear fluorescence was detected in only 32% of GFP-positive cells whereas 68% exhibited cytoplasmic localization (Figure 5B and 5C). These results suggest that ectopic expression of K18 can sequester LRP16 into the cytoplasm.
To further confirm this finding, transfected cells were physically separated into cytoplasmic and nuclear fractions and the distribution of LRP16 was analyzed by immunoblotting. MCF-7 cells were transfected with a K18 expression construct, Flag-K18, or with empty vector, and total, cytoplasmic and nuclear extracts were analyzed using antibody to LRP16. As shown in Figure 6A, total LRP16 protein levels were not altered by ectopic expression of K18 in MCF-7 cells; by contrast, K18 expression significantly increased LRP16 levels in the cytoplasm and reduced the proportion present in the nucleus, a finding consistent with K18 sequestration of LRP16 in the cytoplasm.

Figure 1. K18 interacts with LRP16 in yeast cells. Yeast AH109 cells were transformed with the indicated GAL4-DBD (DNA Binding Domain) and GAL4-AD (Activation Domain) chimeric constructs and β-galactosidase activity was measured by a liquid o-nitrophenyl-β-D-galactoside (ONPG) assay. The experiment was repeated 3 times, and 2 different yeast transformants were used for each measurement. The interaction of p53 with SV40 large T-antigen protein provided a positive control.

Figure 2. K18 interacts with LRP16 protein by its C-terminal region mediation.
Figure 3. LRP16 interacts with K18 by its macro domain mediation.
To address whether endogenous K18 polypeptide also sequesters LRP16 in the cytoplasm we studied the effects of inhibiting endogenous K18 expression on the distribution of LRP16. Three different small interfering RNA (siRNA) duplexes directed against human K18 mRNA, siRNA361, 609 and 908, were designed and transfected into MCF-7 cells; levels of LRP16 in the total, nuclear, and cytoplasmic fractions were measured by immunoblotting as before. Levels of K18 polypeptide were significantly reduced by transfection with all three K18 siRNAs as compared to cells transfected with a control siRNA; knockdown activity declined in the order siRNA361, 609, 908 (Figure 6B). None of the siRNAs affected the total levels of LRP16, but knockdown of endogenous K18 expression with the three siRNAs led to a significant and graded decrease in cytoplasmic LRP16 levels and a corresponding graded increase in nuclear levels (Figure 6B). Similar effects of K18 overexpression and knockdown on the subcellular distribution of endogenous LRP16 protein were also observed in human cervical cancer HeLa cells (data not shown). Together these data indicate that endogenous K18 sequesters LRP16 in the cytoplasm.
K18 binding to LRP16 modulates ER signaling
Our previous studies demonstrated that LRP16 is a coactivator of ERα in the nucleus and that knockdown of LRP16 in MCF-7 cells can significantly attenuate estradiol (E2)-stimulated ERα signaling [19]. Because K18 can sequester LRP16 from the nucleus into the cytoplasm, it is possible that K18 expression might modulate ERα signaling. To explore this possibility we assayed E2 activation of a construct in which expression of a luciferase gene (Luc) is under the control of three estrogen-response elements. Consistent with our previous report [19], the E2-activated reporter system was further augmented by LRP16 transfection (Figure 7A, lane 4), but this LRP16-enhanced reporter gene activity was markedly impaired by cotransfection with the K18 expression construct (Figure 7A, lane 5). Comparison of reporter gene activities in lanes 3 and 5 revealed that K18 suppression of E2-stimulated ERα transcriptional activity was efficiently antagonized by overexpression of LRP16. We next used RNA interference in the cotransfection system to explore K18 suppression of reporter gene expression in MCF-7 cells. siRNA directed against K18 was found to enhance ERα-mediated transactivation in the presence of E2. In the absence of E2, however, knockdown of endogenous K18 failed to increase reporter gene expression (Figure 7B). Furthermore, CoIP analysis revealed that ectopic K18 expression in MCF-7 cells markedly attenuated the association of ERα with LRP16; there was no evidence for any direct interaction between K18 and ERα (Figure 7C, left panel). Consistent with our previous observations [19], E2 stimulation enhanced the interaction between LRP16 and ERα but had no effect on the interaction between K18 and LRP16 (Figure 7C, right panel). Together these results indicate that K18 can suppress E2-stimulated ERα transactivation by blunting the binding of LRP16 to ERα.
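Relative luciferase activities of the kind plotted in Figure 7 are typically computed by normalizing the firefly reporter signal to the cotransfected Renilla (pRL-SV40) signal and expressing the result as fold over the untreated control. The sketch below assumes that convention; all readings are made up for illustration.

```python
import numpy as np

# Hypothetical triplicate readings for 5 conditions (rows ~ "lanes"):
# vehicle, E2, E2+K18, E2+LRP16, E2+LRP16+K18.
firefly = np.array([[1.0, 1.1, 0.9], [4.2, 4.0, 4.5], [2.1, 2.3, 2.0],
                    [7.9, 8.3, 8.1], [3.8, 4.1, 3.7]]) * 1e4
renilla = np.full_like(firefly, 2.0e3)

norm = firefly / renilla        # corrects for transfection efficiency
fold = norm / norm[0].mean()    # fold activity over the vehicle control
print(fold.mean(axis=1))                          # mean fold per condition
print(fold.std(axis=1, ddof=1) / np.sqrt(3))      # SEM per condition
```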
To address whether K18 affects E2 induction of ERα target genes in MCF-7 cells, we used quantitative PCR to measure mRNA expression levels of the pS2, cyclin D1, and c-Myc genes, whose expression is known to be E2-regulated in MCF-7 cells [19]. As shown in Figure 8A, E2 treatment produced a marked increase in the mRNA levels of pS2, cyclin D1, and c-Myc but not of the control gene HPRT. However, this induction was attenuated by overexpression of K18. Overexpression of LRP16 efficiently relieved K18 inhibition of E2-induced expression of these target genes. We next analyzed E2 induction of these target genes in MCF-7 cells transfected with K18 siRNAs. As shown in Figure 8B, knockdown of endogenous K18 expression greatly increased the level of E2-induced up-regulation of pS2, cyclin D1, and c-Myc mRNA.
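The paper normalizes target transcripts to HPRT but does not state the calculation. A common choice, assumed in this minimal sketch, is the comparative-Ct (2^-ΔΔCt) method; all Ct values below are placeholders.

```python
def rel_expression(ct_target, ct_hprt, ct_target_ref, ct_hprt_ref):
    # Comparative-Ct method: normalize to HPRT, then to the reference
    # (vehicle-treated) sample; assumes ~100% amplification efficiency.
    ddct = (ct_target - ct_hprt) - (ct_target_ref - ct_hprt_ref)
    return 2.0 ** (-ddct)

# e.g. pS2 after E2 vs. vehicle (hypothetical Ct values)
print(rel_expression(22.1, 18.0, 25.3, 18.1))  # ~9-fold induction
```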
K18 interacts with LRP16 protein in vivo
To confirm that the effects of K18 are mediated at the transcriptional level we used chromatin immunoprecipitation (ChIP) assays to analyze ERα recruitment at the pS2 promoter region. As shown in Figure 8C, ERα binding at the pS2 promoter was significantly increased in the presence of E2, but binding was substantially blunted by overexpression of K18.
We previously reported that knockdown of LRP16 can markedly inhibit E2-stimulated growth of MCF-7 cells [19]. To determine whether the K18-LRP16 association might modulate the E2-stimulated transition from the G1 to S phase of the cell cycle, MCF-7 cells were transfected with constructs directing the expression of K18 and/or LRP16 as well as with a GFP expression plasmid. The extent of DNA synthesis was assessed by incorporation of BrdU into GFP-positive cells. As shown in Figure 9A, S-phase entry was 13% greater in E2-treated cells (lane 1) than in control cells (lane 2), whereas cells transfected with a construct expressing K18 showed only a 4% increase in S-phase entry (lane 3). Furthermore, overexpression of LRP16 substantially increased E2-stimulated S-phase entry (lane 4); however, this increase was blocked by K18 overexpression (lane 5). We next performed BrdU incorporation assays on MCF-7 cells transfected with K18 siRNA. As shown in Figure 9B, transfection of K18-specific siRNAs greatly increased E2-promoted S-phase entry compared to controls. Together these data indicate that, by sequestering LRP16 in the cytoplasm, K18 restrains the E2-stimulated cell cycle progression of MCF-7 cells.
[Figure 5. K18 sequesters the LRP16-GFP fusion protein from the nucleus into the cytoplasm.]
Discussion
Regulation of transcription factor and cofactor activity by subcellular compartmentalization is well documented [32-34]. A common mechanism is sequestration of the factor into inactive compartments, and this typically takes place via direct or indirect association with the cytoskeleton [35-38]. LRP16 is a new type of ERα coactivator that augments the receptor's transcriptional activity in a ligand-dependent manner and can have a profound impact on the final output of cellular signaling [19]. LRP16 is thought to modulate ERα activity in the nucleus; in the present paper we have confirmed that a LRP16-GFP fusion protein localizes primarily to MCF-7 cell nuclei. We also report a new LRP16-binding partner, K18, identified by yeast two-hybrid screening. K18 is a member of the family of intermediate filament proteins that contribute to cytoskeletal architecture. In the present study we report that K18 binds to and sequesters LRP16 in the cytoplasm, thus preventing its nuclear action and attenuating both the E2-induction of ERα target genes and the E2-stimulated cell cycle progression of MCF-7 cells. These findings underscore the functional role of K18 in regulating the ERα signaling pathway.
LRP16, a member of the macro domain protein superfamily, contains a single stand-alone macro module in its C-terminal region [14,15]. We recently demonstrated that LRP16 is a non-redundant coactivator of both ERα and AR [15,19]. LRP16 was also able to interact with four additional nuclear receptors (NRs) in vitro, including estrogen receptor β (ERβ), the glucocorticoid receptor, and peroxisome proliferator-activated receptors α and γ, and can efficiently amplify the transactivation of these NRs in a ligand-dependent manner [15]. Our finding that K18 binds to and sequesters LRP16 in the cytoplasm suggests that differential tissue expression of K18 could constitute a new layer in the regulatory cascade of the signaling pathways in which LRP16 participates.
Keratins (KRTs) provide mechanical stability to tissues, as evidenced by the range of pathological phenotypes seen in patients bearing mutations in epidermal keratins [39]. The intermediate filament network in simple glandular epithelial cells predominantly consists of heterotypic complexes of the KRTs K8 and K18. Additional evidence for a more widespread role of KRTs comes from mouse gene knockout studies: double deletion of the genes encoding K18 and K19 results in complete loss of a functional cytokeratin skeleton and embryonic lethality [40]. The assembly of intermediate filaments involves several steps during which the α-helical rod domain of the cytokeratin molecules plays a central role [41-43]. The head and tail domains are not thought to be part of the filamentous backbone; instead, these protrude laterally and contribute to protofilament and intermediate filament packing and to intermediate filament interaction with other cellular components [44-46]. By associating with signal transduction factors, K18 may modulate both intracellular signaling and gene transcription. For example, K18 is known to bind specifically to the tumor necrosis factor (TNF) receptor type 1 (TNFR1)-associated death domain protein (TRADD) through its N-terminal region and prevent TRADD from binding to activated TNFR1, thus attenuating TNF-induced apoptosis in simple epithelial cells [44].
We report here that K18 binding to LRP16 is primarily mediated by the C-terminal region of K18 and the single macro domain of LRP16. We used two independent approaches, subcellular localization analysis of GFP-tagged LRP16 and cytoplasmic/nuclear LRP16 protein expression analyses, to demonstrate that ectopic K18 expression in MCF-7 cells sequesters LRP16 in the cytoplasm.
[Figure 6. Differential expression of K18 regulates the nucleo-cytoplasmic distribution of the endogenous LRP16 protein in MCF-7 cells. A, MCF-7 cells were transiently transfected with Flag-tagged K18. Total, nuclear, and cytoplasmic proteins were extracted 48 h after transfection and subjected to immunoblotting analysis with the indicated antibodies. B, MCF-7 cells were transiently transfected with K18-specific siRNAs or a control siRNA. Total, nuclear, and cytoplasmic proteins were extracted 48 h after transfection and subjected to immunoblotting analysis with the indicated antibodies. β-actin was used as a loading control for total and cytoplasmic extracts; the transcription factor Sp1, expressed constitutively in the nucleus, was used as a loading control for nuclear extracts.]
Accumulating evidence from clinicopathological observations has shown that the level of K18 gene expression correlates inversely with the progression of breast cancer [25-31,47]. Several reports have proposed that downregulation of K18 might increase the invasiveness of breast cancer cells [25-30,47]. It was previously demonstrated that overexpression of K18 in the ERα-negative and highly invasive MDA-MB-231 breast cancer cell line caused a marked reduction in the aggressiveness of the cells in vitro and in vivo but had no significant effect on cell growth rate. This change was accompanied by complete loss of the previously strong vimentin expression in the parent cell line and upregulation of adhesion proteins such as E-cadherin [31]. However, experimental studies and clinicopathological observations have also revealed a significant association between K18 expression and the proliferation rate of breast cancer cells. Analysis of the association between K18 expression and different clinicopathological risk factors revealed that K18 expression is significantly correlated with the size (pT1-3), differentiation grade, and mitotic index of the primary tumor [27]. These parameters are a function of the proliferation rate of the primary tumor, suggesting a relationship between downregulation of K18 expression and increased proliferative activity. In addition, expression of the proliferation-associated antigen Ki-67 is significantly associated with downregulation of K18 in a subset of primary breast carcinomas [27]. Moreover, cell culture experiments on bone-marrow micrometastases of breast cancer have indicated that most proliferating tumor cells lack detectable expression of K18 protein [28]. These previous data suggested that K18 might make an important contribution to tumor metastasis as well as to tumor cell growth. In the present study we have demonstrated that, by blunting estrogen-stimulated ERα signaling activity, K18 can significantly suppress the growth response of MCF-7 cells to estrogen. We propose that the regulatory mechanism of ERα transactivation by the K18-LRP16 association might explain in part the relationship between K18 downregulation and increased proliferative activity of breast cancers. However, K18 loss is also associated with the metastasis of ERα-negative breast cancers [47], and it therefore appears likely that K18 can modulate breast cancer progression by more than one mechanism.
[Figure 8. K18 modulates E2-stimulated expression of ERα target genes and the recruitment of ERα to its target DNA in MCF-7 cells. A, MCF-7 cells were grown in phenol-red-free media stripped of steroids for at least 3 days, then cotransfected with the indicated vectors and cultured for the indicated times. Before total RNA was extracted, the cells were treated with E2 (100 nM) or vehicle (DMSO) for 1 h. Transcript abundance was analyzed by quantitative RT-PCR (qPCR), with HPRT as the internal control. All experiments were repeated at least 3 times; results are expressed as means ± SEM. B, MCF-7 cells were grown in phenol-red-free media stripped of steroids for at least 3 days, then cotransfected with the indicated siRNAs. After 47 h, cells were treated with E2 (100 nM) or vehicle (DMSO) for 1 h and subjected to qPCR analysis, with HPRT as the internal control. All experiments were repeated at least 3 times; results are expressed as means ± SEM. C, MCF-7 cells, grown in phenol-red-free media stripped of steroids, were transiently transfected with K18 expression vector or empty vector. 40 h post-transfection, cells were treated with E2 (100 nM) for 1 h and subjected to ChIP analyses with the indicated antibodies.]
Oncogenesis in breast cancer commonly involves excess activation of ERα signaling. We previously reported that LRP16 mRNA is overexpressed in nearly 40% of all primary breast cancer samples [21]. LRP16 overexpression in breast cancer cells is tightly linked with cell proliferation and enhanced ERα activation [16,19,21]. As a functional suppressor of LRP16, K18 is frequently absent from different types of breast carcinoma [25-30]. Excess activation of ERα function in tumor cells is commonly mediated by overexpression of ERα and/or its coactivators, including LRP16 [6-9,21]. We now propose a further level of regulation that can modulate ERα function in breast cancer: loss of K18 from ERα-positive breast tumor cells releases the functional activity of LRP16 and is thus likely to promote tumor cell proliferation. Tests that evaluate the subcellular localization of LRP16 in ERα-positive breast tumor cells therefore have potential in the categorization of different clinicopathological stages.
Conclusions
In summary, these findings provide evidence that K18 binding to LRP16 leads to cytoplasmic sequestration of LRP16. By determining the nuclear availability of the receptor coactivator LRP16, K18 can not only modulate the transcriptional activity of ERα in response to estrogen but can also govern estrogen-stimulated cell cycle progression of MCF-7 cells. Loss of K18 from ERα-positive breast tumor cells releases the functional activity of LRP16, and such loss is thus likely to promote tumor proliferation. These findings underscore a functional role for K18 in regulating the ERα signaling pathway.
Generation of the cDNA library and yeast two-hybrid screening
Total RNA from MCF-7 cells was extracted using TRIzol reagent (Invitrogen, Carlsbad, CA, USA), and a cDNA library was generated using the BD SMART™ kit (Clontech, Palo Alto, CA, USA) according to the manufacturer's instructions. Yeast two-hybrid screening for the identification of LRP16-interacting proteins was performed with the MATCHMAKER Two-Hybrid System 3 kit (Clontech) according to the manufacturer's instructions.
GST pull-down assay
GST and GST fusion proteins were prepared as described previously [15]. 35S-labeled proteins were produced using a TNT-coupled in vitro transcription and translation system (Promega Corporation, Madison, WI, USA) with expression vectors encoding K18 and its derivatives in pcDNA3.1.
Quantitative analysis of LRP16-GFP subcellular localization
MCF-7 cells were grown in 35 mm culture dishes and cotransfected with LRP16-GFP and K18 or pcDNA3 empty vector. 24 h after transfection, cells were fixed with 3% formaldehyde (15 min) and nuclei were counterstained with 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI). Cells were visualized under an inverted fluorescence microscope (IX-71; Olympus) equipped with a digital camera. The proportion of cells displaying LRP16-GFP in the nucleus was determined by counting at least 500 cells from each plate. The means and SEM were calculated from 3 separate plates from 3 independent experiments.
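For the scoring procedure described above, the summary statistics reduce to a per-plate percentage and its SEM over three plates. The counts in this sketch are illustrative, not the paper's data.

```python
import numpy as np

# GFP-positive cells scored as nuclear per plate (>= 500 cells counted each)
nuclear = np.array([402, 388, 395])
total = np.array([515, 508, 512])

pct_nuclear = 100.0 * nuclear / total
mean = pct_nuclear.mean()
sem = pct_nuclear.std(ddof=1) / np.sqrt(pct_nuclear.size)
print(f"nuclear LRP16-GFP: {mean:.1f}% +/- {sem:.1f}% (n = 3 plates)")
```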
Luciferase assays
MCF-7 cells were cultured in phenol-red-free media stripped of steroids for at least 3 days and were then seeded into 35 mm culture dishes. Cells at 50% confluence were cotransfected by use of Superfect (Qiagen, Valencia, CA, USA). Cells were cotransfected with 0.5 μg of the reporter construct and 0.25 μg of ERα- and/or 0.5 μg of K18- or LRP16-expression vectors. Cotransfection with plasmid pRL-SV40 (1 ng per well) was used to control for transfection efficiency. Total DNA was adjusted to 2 μg per well with pcDNA3.1 empty vector. 36 h after transfection, cells were treated with or without E2 (100 nM) and cultured for a further 6 h; cell extracts were then prepared and relative luciferase activities were measured as described previously [19]. For knockdown experiments, 1
"year": 2009,
"sha1": "57150d42cb4e25ab3aadeb8d358f57812f56d77b",
"oa_license": "CCBY",
"oa_url": "https://bmccellbiol.biomedcentral.com/track/pdf/10.1186/1471-2121-10-96",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fe15da92c9609853063b2ffc06b6ea47c863f4e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Seeing and believing: recent advances in imaging cell-cell interactions
Advances in cell and developmental biology have often been closely linked to advances in our ability to visualize structure and function at many length and time scales. In this review, we discuss how new imaging technologies and new reagents have provided novel insights into the biology of cadherin-based cell-cell junctions. We focus on three developments: the application of super-resolution optical technologies to characterize the nanoscale organization of cadherins at cell-cell contacts, new approaches to interrogate the mechanical forces that act upon junctions, and advances in electron microscopy which have the potential to transform our understanding of cell-cell junctions.
Introduction
Cell biologists are often resolutely visual people: we believe most what we can see best. This is a heritage of the history of our discipline, which found its roots in work such as Palade's application of electron microscopy to characterize cellular and subcellular structure. Later, the introduction of antibody technologies allowed morphology to be complemented by molecular specificity. Advances in our understanding of cell biology thus have been driven by the combination of new technologies in microscopy and new reagents that allow us to probe cellular constitution and function.
In this article, we aim to review how this combination of new technologies and reagents has advanced our understanding of the biology of cadherin-based adherens junctions. We focus on three of these advances. First, we have come to appreciate that adherens junctions are not homogenous collections of cadherin receptors but rather have patterns of organization that are apparent at the nanoscale (smaller than a micron) and mesoscopic scale (tens of microns). Second, we now know that cadherin-based adhesions are active mechanical agents where cells generate force to test their environment and sense forces that are applied upon them. Third, although many of these insights have come from developments in light microscopy, the last 5 to 10 years have also seen the development of dramatic new tools in electron microscopy; these have yet to be widely applied to study cell-cell interactions, but their potential is enormous.
Organization and structure of adherens junctions
Optical microscopy has been revolutionized by techniques that have overcome the limits that the diffraction of light imposes on spatial resolution 1. These include approaches such as structured illumination (Figure 1) and fluorescence photoactivated localization microscopy/stochastic optical reconstruction microscopy (F-PALM/STORM), which are now being applied to the characterization of cell-cell junctions 2-4. Already, they have provided valuable insights into how cadherins are organized into clusters at the nanoscale.
The capacity for cadherins to organize into lateral clusters was observed nearly 20 years ago when it was identified as a mechanism that could strengthen cadherin-based adhesion 5,6, probably by increasing the avidity of adhesive binding between cadherins and their ligands 6. However, those experiments were performed by using reductionist models, such as fibroblasts engineered to express E-cadherin 5 or cells adherent to substrata coated with C-cadherin ligands (analogous to the two-dimensional substrata that students of integrin biology have long used to study focal adhesions and focal contacts) 6. It was more difficult to determine the extent to which lateral clustering might occur at the native cell-cell contacts formed between cells that express endogenous cadherins, such as simple polarized epithelia. High-resolution confocal imaging had identified clustering in Drosophila embryos 7 and cultured mammalian cells 8 but did not readily permit quantitative analysis of the extent or nature of this clustering. More commonly, cadherins appeared to distribute extensively at contacts between cells, as if junctions represented carpets of homoligated cadherin complexes.
Two recent articles applied PALM/STORM to characterize nanoscale E-cadherin distribution in Drosophila embryonic epithelia 3 and cultured mammalian cells 4. Both clearly demonstrated that E-cadherin was distributed in polydisperse clusters throughout the junctions of these epithelial systems. They confirm that lateral clustering is a fundamental feature of the supramolecular organization of cadherins at junctions. Furthermore, mammalian junctions displayed clusters with a preferred size of approximately 50-60 nm, which then could organize into larger-scale groups 4.
More detailed quantitative analysis also provided provocative insights into the cellular control of clustering. Earlier studies based on analysis of the crystal structure of cadherin ectodomains proposed a model in which trans-interactions between the ectodomains presented on the surfaces of neighboring cells, combined with cis-interactions between ectodomains on the same cell surface, could cause packing into clusters 9,10. However, the cytoplasmic tail also supports clustering in cells 4,11. Wu et al.4 (2015) found that the molecular density of cadherins could vary even within the same cluster. Some regions within clusters showed high packing density, comparable to that predicted from the crystal structures; this required the ability of cadherins to undergo both cis- and trans-interactions. However, even when the ability to make cis- and trans-interactions was ablated, cells could still make clusters with a size (50-60 nm) similar to those of wild-type cadherins. This implied that adhesive ligation might not be necessary for clustering to occur. Indeed, clusters were observed at the free surfaces of cells, where cadherins could not engage in adhesion, and even with cadherin mutants that lacked the whole adhesive ectodomain 4. Instead, clustering required an intact actin cytoskeleton, and detailed inspection suggested that cadherin clusters might be delimited by "corrals" of cortical actin. Consistent with this, Truong Quang et al.3 (2013) demonstrated that F-actin integrity was necessary to stabilize cadherin clusters. Overall, this implies that multiple mechanisms can influence clustering. In one model, cortical actin may define a minimal cadherin cluster, which does not require adhesive ligation; however, the packing of cadherin molecules within clusters is increased upon ligation.
Comparison of the two studies also highlights how the operational definition of "clusters" can fundamentally condition the detailed quantitative analysis and its interpretation. For example, although both groups used the same algorithm to analyze their data, they differed in their definition of clusters and hence in the metrics that they used to describe them. Truong Quang et al.3 used a kinetics-based model which defined "size" in stoichiometric terms, as the number of cadherin molecules present within clusters. In contrast, Wu et al. took a more empirical approach that focused on the spatial size of the clusters. What emerged with the first approach, as confirmed by Wu et al., was that the distribution of "sizes" followed a power law, implying that the mechanisms governing how many cadherin molecules accumulate in a cluster do not have a preferred number. However, a power-law relationship was not evident when "size" was defined spatially, as the diameter of the cluster, the data being better fit by a Gaussian distribution. Taken together, these findings suggest that there may be a preferred spatial dimension to a cadherin cluster (approximately 50 nm), but within this physical limit the number of cadherin molecules that can be accumulated varies over a wide range. This emphasizes that how the apparently straightforward notion of "size" is explicitly implemented in the computational analysis will deeply influence data interpretation with these approaches.
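The contrast between a power-law distribution of molecule counts and a Gaussian distribution of cluster diameters can be checked directly on segmented cluster data. The sketch below uses synthetic data in place of the published measurements, so the fitted numbers are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
molecules = rng.pareto(1.5, 2000) + 1.0    # stand-in for molecules/cluster
diameters = rng.normal(55.0, 10.0, 2000)   # stand-in for diameters (nm)

# Power-law check: the complementary CDF is a straight line on log-log axes.
n = np.sort(molecules)
ccdf = 1.0 - np.arange(1, n.size + 1) / n.size
slope, intercept, r, p, se = stats.linregress(np.log(n[:-1]), np.log(ccdf[:-1]))
print(f"CCDF log-log slope {slope:.2f}, r^2 {r**2:.3f}")

# Gaussian check: fit plus a goodness-of-fit test for the spatial sizes.
mu, sigma = stats.norm.fit(diameters)
ks = stats.kstest(diameters, "norm", args=(mu, sigma))
print(f"diameter fit: mu {mu:.1f} nm, sigma {sigma:.1f} nm, KS p {ks.pvalue:.2f}")
```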
More generally, these studies suggest that the notion of a "cluster" may need to be conceptually defined with greater precision than we have sometimes done in the past. The work of Wu et al. suggests that there may be elemental units that may reflect the spatial organization of the cortical actin cytoskeleton. However, these appear to be able to organize into larger-scale conglomerations and accumulate a variable number of cadherin molecules. It should be remembered that cadherins exist as macromolecular complexes with a range of associated proteins 10. So the clusters of cadherins will more likely represent nanoassemblies of many different proteins. What mechanisms define these larger-scale patterns of organization have yet to be established. However, insofar as the phenomenon of receptor clustering has been implicated in regulating cellular processes as fundamental as cell signaling 12,13 and receptor sensitivity 14, it will be important for us to clearly specify what aspect of "clustering" we are talking about when we come to further analyze the role that clustering plays in cadherin biology.
Probing the mechanical properties of cadherin junctions
A fundamental advance in our understanding of cadherin biology has come from the realization that cadherin adhesion serves to couple the contractile cortices of cells together 15,16. Indeed, cadherins may promote the biogenesis of the junctional contractile apparatus itself 8,17. An important part of this advance has come from the application of tools and theory from the physical sciences to biology, combined with the development of new reagents that allow us to measure molecular-scale tension.
For example, one of the most popular approaches to assessing tension is to cut regions (cortices, junctions, and whole cells) with a laser and measure the instantaneous velocity of recoil as an index of the tension that had been present beforehand 18. This has been used in embryonic tissues 19,20 as well as in cell culture models 8. Similar nanoablation techniques have been combined with physical theory to characterize patterns of cortical tension in Caenorhabditis elegans embryos 21. It should be noted that, though intuitively attractive, the velocity of recoil is not itself a direct measure of tension. Instead, recoil velocity reflects the ratio of tension over frictional forces. When used to infer tension, this assay assumes that the frictional elements (which would reflect the viscoelastic properties of the junctions) do not change between experimental maneuvers 18. Ultimately, precise interpretation of recoil velocity needs to be informed by measurements of junctional viscoelasticity 22. Other indirect assays have measured junctional movements to infer tension when combined with explicit mechanical models 23,24.
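In practice the initial recoil velocity is usually obtained by fitting the post-ablation displacement to a saturating exponential, whose parameters separate the tension-to-friction ratio from the relaxation time. The sketch below fits synthetic data under a simple Kelvin-Voigt assumption; it is an illustration, not a published analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def recoil(t, L, tau):
    # Kelvin-Voigt relaxation: displacement saturates at L with time tau;
    # the initial velocity L/tau scales as tension / friction coefficient.
    return L * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 30.0, 60)  # seconds after ablation
x = recoil(t, 2.0, 5.0) + np.random.default_rng(1).normal(0.0, 0.05, t.size)

(L, tau), _ = curve_fit(recoil, t, x, p0=(1.0, 1.0))
print(f"v0 = L/tau = {L / tau:.3f} um/s (tension/friction, not tension alone)")
```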
These essentially mesoscopic measurements can be productively complemented by the use of molecular-level tension-sensitive biosensors, such as the Förster resonance energy transfer (FRET)-based system developed by Grashoff et al.25. This sensor reports tension based on the displacement of FRET pairs that are separated by an elastic linker derived from spider silk. The tension sensor (TS) module has been inserted into a range of proteins, where it reported tension over both cadherins (E-cadherin and VE-cadherin 26,27) and vinculin at cell-cell junctions 28. Of note, the TS module was calibrated in vitro, where it displayed greatest sensitivity over a range of 1-6 pN 25. Therefore, its efficacy as a reporter will depend on whether the molecular-level forces that are present fall within its range of sensitivity. Nonetheless, the mesoscopic and molecular-scale approaches to measuring tension are complementary, and it is informative to compare both assays where possible. For example, in mature focal adhesions, which are thought to be sites where contractile force is exerted upon integrin complexes 29, vinculin itself can become uncoupled from tension 25, despite the integrity of the focal adhesion being unchanged. Thus, molecular-level tension may not always correlate with mesoscopic-level tension.
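A minimal readout of a TS-type sensor is a pixel-wise ratiometric FRET index. Converting that index to piconewtons requires the sensor's in vitro calibration (sensitive over roughly 1-6 pN), which is not reproduced here, so this sketch stops at the index; the intensity values are hypothetical.

```python
import numpy as np

def fret_index(i_donor, i_fret):
    # Ratiometric index in [0, 1]; for the TS module, higher tension
    # stretches the linker and lowers the FRET index.
    i_donor = np.asarray(i_donor, dtype=float)
    i_fret = np.asarray(i_fret, dtype=float)
    return i_fret / (i_donor + i_fret)

# Hypothetical junctional vs. cytoplasmic intensity pairs
print(fret_index([120, 140], [80, 150]))  # lower index -> higher tension
```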
An important issue for the future is to better characterize the material properties of cell-cell junctions. Until now, we have lacked the tools to measure those properties. But things have begun to change. He et al. 30 (2014) followed the patterns of flow of microbeads injected into Drosophila embryonic epithelia to assess the patterns of mechanical connectivity between cells. They concluded that lateral cell-cell junctions did not present substantive barriers to hydrodynamic flow between cells. Furthermore, Bambardekar et al. 22 (2015) demonstrated that it was possible to manipulate cell-cell junctions in Drosophila embryonic epithelium by using optical tweezers and thereby assess the mechanical properties of the junctions. Whether such approaches will be more broadly applicable in other cellular systems remains to be tested.
New directions in ultrastructural analysis of cell-cell interactions
The suite of light microscopic techniques available to researchers is impressive, but we are also witnessing a revolution in electron microscopy, from high-resolution structural analysis to ultrastructural analysis of whole tissues in three dimensions (3D). Many of these methods are becoming routine in laboratories throughout the world but have not been extensively applied to the study of cell-cell interactions. Here, we will briefly summarize the relevant techniques and their possible applications in this area.
Ultrastructural methods can potentially answer how molecular interactions and spatial interactions contribute to the formation and function of junctional assemblies. The ideal method would allow visualization of both the cytoskeleton and membranous elements which together generate the active junctional complex; it should also have the resolution to identify the location of individual protein components in the context of a 3D volume of the cell-cell contact sites. This should include actin and other cytoskeletal networks, cadherin, and actin-binding proteins and should be correlated with real-time observations of junctional dynamics. Although some elements can be recognized by morphology alone (cytoskeleton and junctions), new labeling methods are now facilitating visualization of otherwise undetectable components and can be combined with 3D methods.
Conventional electron microscopy, involving chemical fixation and embedding in resin, is still an excellent method for visualization of the membrane and cytoskeletal elements of cell-cell contacts (Figure 2). However, the complexity of the junctional cytoskeleton makes detailed analyses of its organization difficult. This can be resolved by electron tomography, which involves tilting a relatively thick (for example, 300 nm) section and obtaining images at different angles relative to the specimen. This provides not only a 3D view through the depth of the specimen but also far greater resolution, allowing identification and tracing of individual elements. This has been used to great effect in recent studies of junctional actin organization in cultured cells, revealing actin filaments running parallel to the adherens junction 31.
New methods are now providing far greater sample depths and, for the first time, the ability to examine entire cells, large tissue areas, and even entire organisms (albeit the smaller specimens of the animal kingdom). One such method, serial blockface scanning electron microscopy (SEM), relies on the imaging of an exposed blockface by SEM in back-scattered mode 32. Material is removed from the blockface, slice by slice, using either a knife or a focused ion beam within the electron microscope, and the exposed blockface is imaged after each slice is removed to generate literally thousands of serial images. Improvements in back-scattered electron detectors now mean that image quality approaches that of a conventional transmission electron microscope (and the image is contrast-inverted to give a similar appearance). This technique has the potential to provide large-scale information on the way that cells interact, not only in the culture dish but also in a tissue environment, with the capacity to contain numerous cells in a single 3D data set.
The above methods rely on an initial fixation step, usually using chemical fixatives. The latter can be slow and introduce artefacts, and so there has been a move to cryofixation, usually high-pressure freezing. These methods provide excellent preservation of cellular structures and are becoming routine in many laboratories. However, avoiding chemical fixation by cryofixation introduces another problem: how to go from a frozen sample in liquid nitrogen to an embedded specimen that can be sectioned (note that thin samples can avoid this problem, but this is unlikely to be the case for the study of most cell-cell junctions). Cryosectioning of frozen material provides the optimal method to preserve structure, avoiding both fixatives and any staining process, but it is technically demanding, and the retention of cytoplasmic material can actually hinder visualization of cytoskeletal elements. Freeze substitution, the removal of water at low temperature before embedding, offers a simple and now very rapid alternative for embedding in resin after freezing 33; the entire process from freezing of specimens to sectioning can be completed in one day. Of particular note for studies aiming to correlate real-time light microscopy with electron microscopy, methods now exist to maintain the fluorescence of green fluorescent protein and related proteins in resin-processed material 34-38. Thus, the behavior of proteins can be followed in real time, and the cells can then be fast-frozen to capture a rapid transient event and processed for embedding in resin. The same material can then be analyzed by light microscopy and by electron microscopy to allow precise correlation of the two sets of observations. Recent modifications of these methods have described fluorescent proteins that are resistant to harsh fixation conditions 39, opening the possibility of correlative microscopy that combines super-resolution imaging of fluorescent proteins with electron microscopy to better characterize their local cellular nano-environment.
Ultimately, researchers would like to see and recognize all the components involved in cell-cell interactions and understand their precise molecular arrangement. We can already see and recognize some of those components, such as F-actin and junctions, and, as described above, we can see them in 3D and increasingly even in the context of whole tissues.
[Figure 2. Microtubules are highlighted in green, and putative actin filaments in red. Bar = 500 nm.]
But what about the recognition of other components? Can we imagine visualizing individual cadherin molecules or the key regulators of the junctional actin network in a quantitative fashion? Immunogold labeling has long been used to label proteins on sections, and this method has been the gold standard for ultrastructural localization studies 40. However, immunogold labeling is relatively inefficient, and labeling is generally restricted to the surface of the section (and is therefore of little use for 3D methods such as electron tomography and serial blockface SEM). The most efficient method, using thawed frozen sections, provides excellent visualization of membranes 41 but is not routinely useful for visualizing cytoskeletal structures. New labeling methods, however, are offering possibilities for genetic tagging of proteins for electron microscopy. Of these, the most promising appears to be a peroxidase construct which can be fused to any protein of interest 42. The expressed fusion protein can be visualized by using a simple peroxidase reaction on the fixed material to deposit an electron-dense precipitate at the site of the fusion protein. This method may appear to lack the precision of a particulate marker, but the enzyme is directly fused to the protein of interest rather than being detected with antibodies. Importantly, the reaction product can also be detected within the depth of a thick section (for tomography) or in a whole-cell or tissue sample, facilitating detection of a protein of interest by serial blockface SEM. This has immense potential for 3D studies of protein localization.
Future directions
We are living in a Golden Age for biological imaging, where new microscopy techniques and reagents are allowing us to identify biological structures with unparalleled detail and to interrogate the chemical and physical properties of cells and tissues. Nor is it likely that we have exhausted the possibilities. Already light sheet microscopy in its developing forms provides the opportunity to analyze whole organisms in a comprehensive, dynamic manner 43. One consequence of these advances has been the generation of quantitative data, and this has entailed the need for mathematical and statistical tools to analyze often very large data sets. These large data sets carry challenges for how we present and "consume" such data. It seems likely that this will promote an even greater nexus between theory and experiment in biology. Just as seeing can be believing, so can our pre-existing ideas and beliefs influence what we see. The application of new physical theory provides the opportunity to develop predictive models, which are informed by the new dynamic and quantitative data that microscopy provides and which yield predictions for further experimentation. These new advances in microscopy and theory provide the chance for us to interrogate complex biological phenomena at cell-cell junctions across vastly different length and time scales, from molecular events to organismal development.
"year": 2015,
"sha1": "9ba4bc547584f9c9fc53bbb11708864a6013e102",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/4-273/v1/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09b2ce1510ec70d69476490640ee946ce127cf71",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Risk factors for intraoperative endplate injury during minimally-invasive lateral lumbar interbody fusion
During lateral lumbar interbody fusion (LLIF), unintended intraoperative endplate injury (IEPI) can occur and subsequently lead to cage subsidence. The aim of this study was to investigate the incidence of IEPI during LLIF and its predisposing factors. A retrospective review was conducted on consecutive patients (n = 186; mean age, 70.0 ± 7.6 years) who underwent LLIF at 372 levels. Patients' demographic and surgical data were compared between patients with and without IEPI. Also, the radiographic data of each level were compared between intact and IEPI segments. IEPI was identified at 76 levels (20.4%) in 65 patients. The incidences of IEPI at every 100 consecutive segments were not different. When the 372 segments were analyzed independently, the sagittal disc angle (DA) in the extended position (4.3° ± 3.6° at IEPI segments vs. 6.4° ± 4.0° at intact segments), the difference between the sagittal DA in the extended position and the cage angle (−2.2° ± 4.0° vs. 0.0° ± 3.9°), and the difference between preoperative disc height and cage height (−5.4 mm ± 2.4 mm vs. −4.7 mm ± 2.0 mm) differed significantly. Also, endplate sclerosis was more common at intact segments than at IEPI segments (33.2% vs. 17.3%). Multivariate analysis showed that male sex (odds ratio [OR] 0.160; 95% confidence interval [CI] 0.036–0.704), endplate sclerosis (OR 3.307; 95% CI 1.450–8.480), and sagittal DA in the extended position (OR 0.674; 95% CI 0.541–0.840) were factors significantly associated with IEPI. IEPI correlated not with the surgeon's experience but with patient factors such as sex, preoperative disc angle, and endplate sclerosis. Careful surgical procedures should be employed for patients with these predisposing factors.
Incidence of IEPI. The interobserver intraclass correlation coefficient (ICC) for the measurement of IEPI was 0.842.
The intraobserver ICCs were 0.933 and 0.886 for the two examiners. IEPI was identified at 76 levels (20.4%) in 65 patients. Unilateral endplate injury, defined as damage to either endplate of an intervertebral disc, was noted at 67 levels, and bilateral injury, defined as damage to both endplates of an intervertebral disc, was noted at nine levels. Injury to the endplate cranial and caudal to the disc was noted at 33 and 52 levels, respectively (Table 2). The incidence of IEPI at each level ranged from 19.4% (33/170) to 35% (7/20); however, no significant differences between the incidences at each segment were found (Table 2). The type of IEPI had no association with the disc level. The incidence of IEPI for every 100 segments is shown in Fig. 1. There was no significant difference between the incidences in each group.
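The level-wise comparison reduces to a chi-square test on an injured/intact contingency table. Only the 7/20 and 33/170 cells are given in the text; the two middle rows in this sketch are hypothetical placeholders chosen to sum to the reported totals (76 of 372).

```python
from scipy.stats import chi2_contingency

injured = [7, 18, 18, 33]      # L1-2 ... L4-5; middle rows are placeholders
totals = [20, 80, 102, 170]
intact = [t - i for t, i in zip(totals, injured)]

chi2, p, dof, expected = chi2_contingency([injured, intact])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```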
Characteristics of IEPI segments and risk factors.
Age, BMI, BMD of the lumbar spine, and whether single- or multiple-level LLIF was performed did not differ between patients with and without IEPI (Table 3). Sex distribution was significantly different between the two groups (Table 3). When the 372 LLIF segments were analyzed independently, the coronal DA and the sagittal DA in the neutral and flexed positions did not differ between segments with and without IEPI (Table 4). The degree of facet arthrosis and the disc height did not differ between the two groups. However, the sagittal DA in the extended position was significantly smaller at IEPI segments than at intact segments (4.3° ± 3.6° vs. 6.4° ± 4.0°, P = 0.001). The difference between the sagittal DA in the extended position and the cage angle was also significantly different (−2.2° ± 4.0° at IEPI segments vs. 0.0° ± 3.9°; P = 0.005). In the multivariate analysis, male sex and endplate sclerosis were negatively associated with the development of IEPI, and a smaller sagittal DA in the extended position was a risk factor for IEPI (OR 0.674; 95% CI 0.541–0.840; P < 0.001).
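Odds ratios of the kind reported above come from a multivariate logistic regression over the 372 segments. The sketch below shows the computation on synthetic data; the column names, effect sizes, and all values are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 372
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "sclerosis": rng.integers(0, 2, n),
    "da_ext": rng.normal(6.0, 4.0, n),   # sagittal DA in extension (deg)
})
# Synthetic outcome with an event rate near the reported 20.4%
logit = 2.0 - 1.8 * df["male"] - 1.0 * df["sclerosis"] - 0.4 * df["da_ext"]
df["iepi"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["male", "sclerosis", "da_ext"]])
res = sm.Logit(df["iepi"], X).fit(disp=0)
or_table = np.exp(pd.concat([res.params, res.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```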
Discussion
The incidence of IEPI was 20.4% (76/372), which was higher than in previous reports. This might have resulted from our definition of IEPI as at least 1 mm of cage settling, whereas the study by Satake et al.12 defined IEPI as at least 2 mm of cage settling. Vertebral endplate thickness has been reported to range from 0.35 to 1.03 mm 13-17. Therefore, the authors decided to use at least 1 mm as the criterion for IEPI in this study.
The vertebral endplate is a thin cortical bone located at the cranial and caudal surfaces of the vertebral bodies. In a histologic study, Hou et al.18 showed that the endplate is not genuine cortical bone but a porous structure involving trabeculae. The significance of the endplate has already been demonstrated in many reports: removal of the endplate can significantly decrease the structural properties of the lumbar vertebral bodies 18-21. Interbody cages in the lumbar spine are commonly used to increase mechanical stability and promote fusion; however, lumbar vertebrae with endplate damage have a higher risk of cage subsidence.
Factors associated with cage subsidence following intervertebral fusion have been reported to include BMD 22, cage geometry 23,24, cage material 25, cage location 26-29, and the use of osteobiologics 30. Cage subsidence is thought to result from biological remodeling at the cage-bone interface in a chronic fashion 31. In contrast, IEPI develops acutely during the surgical procedure, and its development may have different pathomechanisms and risk factors. There have been few studies of IEPI after MIS-LLIF. Two risk factors for IEPI after MIS-LLIF were reported in the study by Satake et al.12: reduced BMD and cage height.
Although our finding that female sex was a significant risk factor for IEPI could suggest a possible correlation between BMD and IEPI, the direct measurement of BMD in each vertebra was not correlated with IEPI, unlike in the previous report 12. BMD, which is usually assessed by DEXA, reflects trabecular bone quality. Hou et al.26 and Patel et al.32 conducted biomechanical tests on human cadaveric lumbar vertebrae, and their results indicated that reduced BMD was associated with a 20-50% lower failure load of the endplate. They concluded that lumbar vertebrae with reduced BMD have a higher risk of cage subsidence 26,32. However, IEPI actually represents cortical bone injury; thus, the authors concluded that IEPI might be affected by cortical bone strength. Some parameters of the cortical bone status of the endplate have been reported in previous studies: the endplate cranial to the intervertebral disc is thicker and has a higher density than the caudal one 13-17,33,34. Based on this biomechanical property, IEPI could be expected to occur at the weaker endplate. Our findings that endplate injury was more common at the caudal endplate and that endplate sclerosis was a protective factor support these previous data.
The characteristics of the intervertebral disc should also be considered as factors affecting the development of IEPI. Previous reports showed that over-distraction of the intervertebral height by a tall cage could damage the endplate either intraoperatively or postoperatively 12,34. However, there was no effect of segmental disc height or its difference from cage height on the development of IEPI. In the current study, the degree of facet arthrosis was not associated with the development of IEPI. However, a smaller sagittal disc angle in the extended position was positively correlated with IEPI. It is plausible that the extent of disc motion affects the development of IEPI: the more mobile the disc, the lower the risk of IEPI, and vice versa. We believe our results support this hypothesis.
There were some limitations in our study. First, this was a retrospective study. Therefore, the factor of surgeon's caution in osteoporotic patients could not be considered. Second, the radiographic measurement of the endplate injury was not verified because it could be missed on plain radiographs, especially in patients with scoliosis. Third, we did not evaluate the radiographic findings and clinical prognosis during the late period. IEPI was a radiographic finding and its clinical significance was not analyzed. However, some patients with IEPI experienced progressive cage subsidence and even vertebral body fracture. We are preparing the next article about clinical outcomes of IEPI. Fourth, the quantification of IEPI was not included in this study. The authors defined IEPI as > 1 mm of endplate damage. However, the surgical prognosis could be different according to the severity of the IEPI.
In conclusion, this study showed that the development of IEPI after MIS-LLIF was significantly correlated with patient-related factors, including sex, sagittal disc angle in the extended position, and endplate sclerosis, whereas the surgeon's experience did not affect its development. Therefore, patients with these risk factors are at elevated risk of IEPI after MIS-LLIF. Thorough preoperative evaluation is needed to avoid IEPI when considering MIS-LLIF surgery, and careful surgical procedures should be performed in patients at elevated risk.
Methods
Patients. This retrospective study was approved by The Catholic University of Korea Catholic Medical Center's Institutional Review Board before study initiation, and all methods were performed in accordance with the relevant guidelines (approval no. KC20RISI0169). The requirement for informed consent was waived by the Institutional Review Board because of the retrospective study design. All consecutive patients who underwent MIS-LLIF for degenerative lumbar disc disease (from L1-2 to L4-5) between May 2012 and December 2017 were reviewed, and the operative data in the medical records were investigated. To minimize bias, patients who underwent operations by surgeons other than the single senior author (KYH) were excluded. The patients included in this study were the first 186 patients who underwent MIS-LLIF performed by this surgeon. Clinical data, including age, sex, body mass index (BMI), and bone mineral density (BMD), were reviewed from the medical records. BMD at the L1-4 levels of the posteroanterior spine was measured using dual-energy X-ray absorptiometry (DEXA) bone densitometry (Lunar Prodigy Advance, GE Healthcare, Waukesha, WI, USA). T-scores of the lumbar spine (L1-4) were recorded, and BMD in each vertebra was recorded in g/cm2.
Surgical procedures. MIS-LLIF was performed in the XLIF manner, splitting the psoas muscle using tubular retractors and intraoperative neuromonitoring 5. All procedures were performed in the right lateral decubitus position with the hip and knee joints flexed. A skin incision was made after identification of the position of the target disc using a C-arm. A single 5-cm skin incision was made in patients who underwent LLIF at one or two segments, whereas two separate incisions were made in patients who underwent LLIF at three or more segments. The retroperitoneal space was reached via blunt dissection, and tubular retractors were placed onto the disc. The disc material was then removed and the endplates were prepared using a Cobb elevator and ring curette; no shavers were used at any step. The cage size was determined using trial cages, and the final cage was inserted with two containment blades. The dimensions of each cage were recorded in the medical charts. All procedures were performed under fluoroscopic guidance. Polyetheretherketone (PEEK) cages (Clydesdale, Medtronic Sofamor Danek, Memphis, TN, USA) were used in all patients. After anterior surgery, posterior fusion with pedicle screws was performed in a single or staged manner.
Radiographic measurements. IEPI was identified on the immediate postoperative lateral X-ray compared with the preoperative lateral X-ray and was defined as cage sinking of more than 1 mm from the bony endplate (Fig. 2). Two spine fellows, who had not participated in the surgery, independently measured the extent of endplate injury, and the measurements were repeated by both examiners. The average values were used in the definition of IEPI. IEPI was classified by involvement as unilateral or bilateral and as superior or inferior. The profiles of the inserted cage, such as its height and lordotic angle, were recorded. The following parameters of each intervertebral disc were measured: (1) segmental disc angle (DA) on the sagittal plane in the neutral, flexed, and extended positions, and (2) disc height (DH) at the anterior and posterior corners. We calculated the differences in height and angle between the disc and the cage. Because endplate sclerosis could affect endplate injury, the presence of sclerosis was investigated. Four grades of facet joint arthrosis were identified on computed tomography (CT) and magnetic resonance imaging (MRI) using the criteria proposed by Weishaupt et al.35.
Overall incidence of intraoperative endplate injury and its distribution according to experience. The incidence of IEPI at each segment (from L1-2 to L4-5) was calculated. Because the development of IEPI could be affected by surgical skill, the authors divided the cohort into four arbitrary groups, one for every 100 consecutive segments, and analyzed the IEPI incidence of each group to evaluate learning-curve effects.
"year": 2021,
"sha1": "5152739385adf868225dc07bced20b3c4bb1485a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-99751-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7219318cb5e4d83e7653c94761b793bba6d13bee",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Time Minimization in Hierarchical Federated Learning
Federated Learning is a modern decentralized machine learning technique in which user equipments (UEs) perform machine learning tasks locally and then upload the model parameters to a central server. In this paper, we consider a 3-layer hierarchical federated learning system which involves model parameter exchanges between the cloud and edge servers, and between the edge servers and user equipment. In a hierarchical federated learning model, delay in the communication and computation of model parameters has a great impact on achieving a predefined global model accuracy. Therefore, we formulate a joint learning and communication optimization problem to minimize the total model parameter communication and computation delay by optimizing the local and edge iteration counts. To solve the problem, an iterative algorithm is proposed. After that, a time-minimized UE-to-edge association algorithm is presented that reduces the maximum latency of the system. Simulation results show that the global model converges faster under optimal edge server and local iteration counts, and that the hierarchical federated learning latency is minimized with the proposed UE-to-edge association strategy.
I. INTRODUCTION
High-tech mobile devices and the Internet of Things (IoT) are generating large amounts of data [1]. These immense volumes of data have incentivized rapid development in big data technology and Artificial Intelligence. Conventional Machine Learning (ML) and Deep Learning (DL) methods require devices to upload their data to a central server to develop a global model. However, the threat of leakage of, and attacks on, privacy-sensitive data discourages users from uploading data from their user equipments (UEs) to a central server for computing. Fortunately, rapid development in computing technology has spurred the age of Mobile Edge Computing (MEC), in which the growing computing power of chips in mobile devices facilitates more computation-intensive tasks such as machine learning. Computing processes that were traditionally performed centrally at a server are shifting to mobile edge devices. Decentralized ML that takes privacy concerns into account has been termed Federated Learning (FL) [2], [3]; this model training technique has user equipments or user-edge devices perform ML tasks locally, after which the locally trained model parameters are uploaded to a central server for global model parameter aggregation. The globally aggregated model is then downloaded by the UEs, which concludes a single round of FL. This process is repeated until a stopping criterion is met. The FL process enables devices to build a shared model while preserving the data privacy of the users.
Despite its rapid development, Federated Learning faces challenges over wireless networks. FL parameter transmissions are commonly undertaken by numerous participating UEs over resource-limited networks, for example wireless networks where bandwidth or power is limited. The repeated FL model parameter transmission between UEs and servers can therefore cause a delay as large as, or larger than, the machine learning model training time itself, which impairs the performance of latency-sensitive applications. Several existing works acknowledged the physical properties of wireless communication and proposed solutions such as analog model aggregation over the air (over-the-air computation). These analog aggregation techniques aim to reduce communication latency by allowing devices to upload models simultaneously over a multi-access channel [4]-[7]. Nevertheless, analog over-the-air aggregation requires stringent synchronization conditions to be met. Another line of papers aims to minimize the overall FL convergence time (delay in both communication and local device computation) by allocating resources and solving optimization problems. Papers such as [8], [9] established optimization problems that optimize variables such as UE uplink bandwidth and device selection with the aim of minimizing FL convergence time. Other works proposed optimization problems aimed at reducing FL training loss. In [10], the authors proposed a joint resource allocation and device scheduling problem to minimize training loss. Similarly, the authors of [11] proposed to minimize training loss by varying the number of local update steps between global iterations and the number of global aggregation rounds. However, these works focused on energy consumption and machine learning model performance and did not consider delay minimization. [12] proposed a joint transmission and computation optimization problem aiming to minimize the total delay in FL, but it only considered the traditional 2-layer FL framework. In [13], the edge server aggregation and edge model transmission delays are taken into consideration, but they are not incorporated into the objective function.
Minimizing the total delay in a hierarchical federated learning framework raises its own challenges. Different from traditional 2-layer FL, in hierarchical FL the total time is determined not only by the end devices but also by the edge servers. To achieve a specific global accuracy, more local iterations provide a more accurate local model at the cost of more local computation time, but this cost can be mitigated by reducing the number of edge aggregations; conversely, more edge aggregations may reduce the demand for local computing but incur more communication delay. To tackle these problems, in this paper we formulate a joint communication and learning optimization problem in order to find the optimal number of local training iterations and edge aggregation iterations, and we analyze the properties of the proposed optimization problem.
Our work is novel and contributes to the existing literature by:
• Proposing a 3-layer hierarchical FL time minimization model, and setting up an optimization problem which aims to minimize the hierarchical FL time by optimizing the number of local UE computations and the number of local aggregations under a given global accuracy.
• Incorporating the edge server model aggregation and edge-to-cloud server transmission delay into our optimization objective function. To the best of our knowledge, no existing papers include the edge server model aggregation and edge-to-cloud server transmission delay in the optimization objective function.
• Presenting a UE-to-edge association strategy that aims to minimize the system's latency. The results show that the proposed UE-to-edge association strategy achieves the minimum latency compared with other methods.
The rest of the paper is structured as follows. Section II surveys related work. In Section III, we describe the system model and give a framework for our hierarchical federated learning system. The problem is then formulated in Section IV, followed by the analysis and the optimal solutions of the optimization problem. In Section V, the numerical results are shown and analyzed. Finally, we give a conclusion in Section VI.
II. RELATED WORK
There have been many efforts to improve and analyze the performance of federated learning.
Three-layer hierarchical federated learning. Many studies considered 3-layer federated learning. In [13], the authors introduced a hierarchical FL edge learning framework in contrast to the traditional 2-layer FL systems proposed by the other above-mentioned papers. We should note that there are other papers which propose hierarchical architectures for FL training with aims other than those mentioned above. In [14], the authors studied hierarchical federated learning with stochastic gradient descent and conducted a thorough analysis of its convergence behavior. The work in [15] considered a client-edge-cloud hierarchical federated learning system and proposed a novel HierFAVG algorithm which allows edge servers to perform partial model aggregation, enabling a better communication-computation trade-off while allowing the model to be trained more quickly. The authors in [16] utilized a Stackelberg differential game to model the optimal bandwidth allocation and reward allocation strategies in hierarchical federated learning. [17] utilized branch-and-bound-based and heuristic-based solutions to minimize the data distribution distance at the edge level. For heterogeneous IoT systems, [18] proposed an optimized user assignment and resource allocation solution over a hierarchical FL architecture. It can be seen that hierarchical FL is a promising approach that allows for adaptive and scalable implementation by making use of the resources available at the edge of the network.
Delay minimization in federated learning. The convergence time of FL was studied in many works. [8] jointly considered user selection and resource allocation in cellular networks to reduce the FL convergence time. FedTOE [19] executed a joint allocation of bandwidth and quantization bits to minimize quantization errors under a transmission delay constraint. [20] proposed to reduce the FL convergence time by reducing the volume of the model parameters exchanged among devices. [12] proposed a joint transmission and computation optimization problem aiming to minimize the total delay in FL. However, these papers mainly focus on resource allocation under delay or convergence time constraints; they do not consider how the UEs themselves can reduce computation and communication delay through the frequency of communication between them and the edge servers while maintaining the required machine learning accuracy. Besides, they only studied the traditional 2-layer federated learning framework. In contrast, [13] utilized a 3-layer hierarchical model (UE-edge-cloud) for an optimization problem which aimed to minimize a weighted combination of FL convergence time and UE energy consumption. While the authors of [13] did consider the edge server aggregation and edge-to-cloud transmission delay in their paper, they did not incorporate these factors into their proposed optimization objective function.
Novelty of our work. In this paper, we propose to minimize the transmission and computation delay between the cloud and edge servers, and between edge servers and UEs, in a hierarchical FL framework, by optimizing the number of local UE computations and the number of local aggregations. Although the above-mentioned works considered FL convergence time minimization, they did not take the number of local computations and the number of edge aggregations into account. In our work, under a given accuracy, the proposed method is able to find the optimal number of local computations, the optimal number of edge aggregations and the UE-to-edge association strategy, thus providing an optimal global setting for a 3-layer hierarchical FL system.
III. SYSTEM MODEL
We consider a hierarchical federated learning model consisting of a cloud server S, a set M of M edge servers and a set N of N user equipments (UEs), as shown in Fig. 1. Each UE n owns a local dataset 𝒟_n of size D_n.
A. Three-Layer Federated Learning Process
The hierarchical federated learning process between UEs, edge servers and cloud is as follows. The procedure contains five steps: local computation at the UE, local model transmission, edge aggregation, edge model transmission and cloud aggregation.
1) Local computation: Let f_n be the CPU frequency of UE n, and C_n the number of CPU cycles required for UE n to process one data sample. Since D_n is the size of the local dataset, the time required for one local computation iteration of UE n is

t_n^cmp = C_n D_n / f_n.

Let a be the number of local iterations each UE performs in a single round of communication with the corresponding edge server. In order to achieve a local accuracy θ ∈ (0, 1), the number of local iterations that each UE needs to run is

a = ζ ln(1/θ),

where ζ is a constant depending on the loss function [21].
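As a quick sanity check, the two quantities above can be evaluated numerically; all parameter values below are made-up examples, not settings from the paper.

```python
import math

def local_compute_time(C_n, D_n, f_n):
    # t_n^cmp = C_n * D_n / f_n  (seconds per local iteration)
    return C_n * D_n / f_n

def local_iterations(theta, zeta):
    # a = zeta * ln(1/theta); higher accuracy (smaller theta) -> more iterations
    return math.ceil(zeta * math.log(1.0 / theta))

t_cmp = local_compute_time(C_n=1e4, D_n=500, f_n=2e9)  # 2 GHz CPU
a = local_iterations(theta=0.1, zeta=3.0)
print(t_cmp, a)  # 0.0025 s per iteration, a = 7
```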
2) UE-to-edge model transmission: After a local iterations, each UE uploads its local federated learning model to an edge server. We introduce the indicator variable χ_{n,m}, which represents the association between UE n and edge server m: χ_{n,m} = 1 means that UE n uploads its local federated learning model to edge server m; otherwise, χ_{n,m} = 0. Each UE can be associated with only one edge server, so the user-server association rule can be described as Σ_{m∈M} χ_{n,m} = 1, ∀n ∈ N. Let N_m be the set of UEs that choose to transmit their local federated learning model to edge server m. Without loss of generality, the orthogonal frequency division multiple access (OFDMA) communication technique is adopted in this paper. According to Shannon's formula, the achievable transmission rate between UE n and edge server m can be formulated as

r_{n,m} = B_n log_2(1 + g_{n,m} p_n / N_0),

where B_n is the bandwidth allocated to UE n, g_{n,m} is the channel gain between UE n and edge server m, p_n is the transmission power of UE n, and N_0 is the noise power. In this paper, we assume the bandwidth is equally allocated to all the UEs associated with the edge server. Note that the total bandwidth each edge server m can allocate is B, so we have Σ_n χ_{n,m} B_n ≤ B, ∀m ∈ M. Let d_n denote the size of the local model of UE n; the UE-to-edge transmission delay is then t_{n→m}^com = d_n / r_{n,m}.
3) Edge aggregation: When edge server m receives the model parameters transmitted from its associated UEs N_m, it obtains the averaged parameters ω_m by

ω_m = ( Σ_{n∈N_m} D_n ω_n ) / D_{N_m},

where D_{N_m} := Σ_{n∈N_m} D_n is the total size of the data aggregated at edge server m. Let b be the number of iterations each edge server performs in a single round of communication with the cloud. For simplicity, we use "edge iterations" to denote the number of edge aggregations. To achieve an edge accuracy µ, for convex machine learning tasks, the number of edge iterations is given by [21]

b = γ ln(1/µ) / (1 − θ).
From (7), it can be observed that b is affected by both the edge accuracy µ and the local accuracy θ. Here γ is a constant related to the loss function; it can be given by γ = 2L^2/(β^2 δ) [21], where δ is a constant related to local training. When the model is required to be more accurate (µ and θ small), the edge needs to run more iterations. The delay at edge server m in each edge iteration is max_{n∈N_m} { a t_n^cmp + t_{n→m}^com }.
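A numerical sketch of the uplink rate and the per-iteration edge delay defined above; the channel gains, powers and model sizes are illustrative assumptions, not settings from the paper.

```python
import math

def shannon_rate(B_n, g_nm, p_n, N0):
    # r_{n,m} = B_n * log2(1 + g_{n,m} * p_n / N0)   [bits/s]
    return B_n * math.log2(1.0 + g_nm * p_n / N0)

def edge_delay(a, t_cmp, d_n, rates):
    # max_{n in N_m} { a * t_n^cmp + t_{n->m}^com },  with t^com = d_n / r_{n,m}
    return max(a * t_cmp[n] + d_n[n] / rates[n] for n in range(len(rates)))

rates = [shannon_rate(B_n=2e6, g_nm=1e-7, p_n=0.01, N0=1e-10) for _ in range(3)]
print(edge_delay(a=7, t_cmp=[2.5e-3] * 3, d_n=[1e6] * 3, rates=rates))
```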
4) Edge-to-cloud model transmission: After b rounds of edge aggregation, each edge server m ∈ M uploads its model parameters ω_m to the cloud and downloads the global model from the cloud again. The delay of one such round can be formulated as t_{m→c}^com = d_m / r_m, where d_m is the size of the model parameters at the edge server, and r_m is the transmission rate between edge server m and the cloud.
In order to achieve a global accuracy ε, let the number of communication rounds between the edge servers and the cloud be R(a, b, ε). Global accuracy ε means that after t iterations, F(ω(t)) − F(ω*) ≤ ε (F(ω(0)) − F(ω*)), where ω* is the actual optimal global model.
5) Cloud aggregation: Finally, the cloud aggregates the model parameters transmitted from the edge servers as

ω = ( Σ_{m∈M} D_{N_m} ω_m ) / D,

where D := Σ_{m∈M} D_{N_m} is the total data size.
B. FL Model
In this work, we consider supervised federated learning.
𝒟_n = {(x_i, y_i)} is the training set of UE n, where x_i ∈ R^d is the i-th input sample and y_i ∈ R is the corresponding label. The vector ω collects the parameters of the FL model. We introduce the loss function f(ω, x_i, y_i) for one data sample; for different learning tasks, the loss function may differ. The loss function F_n(ω) on each UE n is given by

F_n(ω) = (1/D_n) Σ_{i∈𝒟_n} f(ω, x_i, y_i).

The training process minimizes the global loss function F(ω), which can be formulated as

min_ω F(ω) = Σ_{n∈N} (D_n/D) F_n(ω).

We utilize the distributed approximate Newton algorithm (DANE) [22] to train the FL model. DANE is one of the most popular communication-efficient distributed training algorithms and is designed to solve general optimization problems. At each iteration, DANE takes an inexact Newton step appropriate to the geometry of the objective. Stochastic gradient descent (SGD) is widely utilized since its computational complexity is low, but it is suitable only when the accuracy requirement is not strict; besides, SGD requires more iterations than gradient descent (GD). In federated learning, the wireless communication resource is valuable, so we use GD in UE local training. At each iteration, each UE n updates its local model parameters. According to (2), a UE needs to run a iterations to achieve a local accuracy θ. After every a local iterations, each UE uploads its local model to the corresponding edge server, and the edge server aggregates these models. Then, after every b edge aggregations, each edge server uploads its model to the cloud, and the cloud performs global aggregation. The per-iteration steps of the training procedure (lines 4-20) are:
4: each UE computes ∇F_n(ω_n(i)) and sends it to the edge server.
5: each edge server computes ∇F(ω_m(i)) = (1/N) Σ_{n=1}^{N} ∇F_n(ω_n(i)) and broadcasts it to all the UEs.
6: for each UE n = 1, 2, ..., N in parallel do
7:   update ω_n(i).
8: end for
9: if i mod a == 0 then
10:   for each UE l = 1, 2, ..., N in parallel do
11:     communicate with its corresponding edge server; the edge server performs edge aggregation.
12:   end for
13: end if
14: if i mod ab == 0 then
15:   for each edge server k = 1, 2, ..., M in parallel do
16:     communicate with the cloud; the cloud performs aggregation.
17:   end for
18: end if
19: i ← i + 1
20: end while
Table I summarizes the notation.

IV. HIERARCHICAL FEDERATED LEARNING DELAY OPTIMIZATION

In this section, we give a formulation of the proposed problem followed by analysis and solutions.
The problem can be written as

min_{a, b, f, p, χ}  R(a, b, ε) · max_{m∈M} { b max_{n∈N_m} { a t_n^cmp + t_{n→m}^com } + t_{m→c}^com }   (13)
s.t. (13a)-(13f),

where a t_n^cmp + t_{n→m}^com is the time taken for UE n to perform a iterations of local computation plus one round of communication with edge server m. Thus, max_{n∈N_m} { a t_n^cmp + t_{n→m}^com } is the time taken for a single round of communication between all UEs and their corresponding edge servers, and max_{m∈M} { b max_{n∈N_m} { a t_n^cmp + t_{n→m}^com } + t_{m→c}^com } is the time taken for a single round of communication between all edge servers and the cloud. The objective function minimizes the total delay of the entire federated learning task. Constraints (13a) and (13b) bound the CPU frequency and transmission power of the UEs; constraints (13c) and (13d) are the UE-edge server association rules; constraint (13e) guarantees that the bandwidth of each edge server does not exceed its upper bound; constraint (13f) specifies that a and b are integers. This optimization problem falls into the category of integer programming since a and b are positive integers. While integer programming is in general NP-hard [23], we can obtain near-optimal solutions by relaxing the integer constraints and allowing a and b to be continuous variables, which are rounded back to integers afterwards. According to (2), θ can be expressed as θ = e^{−a/ζ}. According to (7), µ can be expressed as µ = e^{−(b/γ)(1−θ)}. The number of communication rounds between the edge servers and the cloud needed to reach the global accuracy ε is given by

R(a, b, ε) = C ln(1/ε) / (1 − µ),   (14)

where C is a constant. Substituting µ = e^{−(b/γ)(1−θ)} and θ = e^{−a/ζ}, we get

R(a, b, ε) = C ln(1/ε) / ( 1 − e^{−(b/γ)(1 − e^{−a/ζ})} ).   (15)
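Under the reconstruction of (13)-(15) above, the round count and the resulting total time can be evaluated as follows; C, ζ, γ and the per-round delays are placeholder values, not values from the paper.

```python
import math

def rounds_needed(a, b, eps, C=1.0, zeta=3.0, gamma=2.0):
    # R(a, b, eps) = C ln(1/eps) / (1 - exp(-(b/gamma) * (1 - exp(-a/zeta))))
    theta = math.exp(-a / zeta)
    mu = math.exp(-(b / gamma) * (1.0 - theta))
    return C * math.log(1.0 / eps) / (1.0 - mu)

def total_time(a, b, eps, t_cmp, t_ue_edge, t_edge_cloud):
    # One cloud round costs b * (a * t_cmp + t_ue_edge) + t_edge_cloud (worst UE / edge).
    T = b * (a * t_cmp + t_ue_edge) + t_edge_cloud
    return rounds_needed(a, b, eps) * T

print(total_time(a=10, b=4, eps=0.25, t_cmp=2.5e-3, t_ue_edge=0.15, t_edge_cloud=0.5))
```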
B. Analysis
In this section, we design an algorithm to solve the min-max problem (13). By introducing new slack variables T and τ, problem (13) is equivalent to the following optimization problem:

min_{a, b, f, p}  R(a, b, ε) · T   (16)
s.t.  b τ_m + t_{m→c}^com ≤ T, ∀m ∈ M,   (16a)
      a t_n^cmp + t_{n→m}^com ≤ τ_m, ∀m ∈ M, n ∈ N_m,   (16b)
      0 ≤ f_n ≤ f_n^max, ∀n ∈ N,   (16c)
      0 ≤ p_n ≤ p_n^max, ∀n ∈ N,   (16d)

where T defines the time interval of each round of communication between edge server m and the cloud, and τ_m defines the time interval of each round of communication between UE n and edge server m. We should note that constraints (16a) and (16b) confine the delay (aggregation and communication to the cloud) at each edge server and the delay (computation and communication to the edge server) at each UE, respectively. To solve the optimization problem, we decompose it into two sub-problems: sub-problem I solves for the local iteration count a, the edge iteration count b, the UE CPU frequencies f and the transmission powers p; sub-problem II obtains the optimal UE-to-edge association.
C. Solution of sub-problem I
We will show that problem (16) is a convex optimization problem under a given UE-to-edge association χ. To this end, we present the lemmas below.

Lemma 1. The reciprocal of a positive and concave function is convex.

Proof. Suppose f(x) is positive and concave and h(x) = 1/f(x) is twice differentiable. The second-order derivative of h(x) is given by

h''(x) = ( 2(f'(x))^2 − f(x) f''(x) ) / f(x)^3.

Since f(x) is positive and concave, f(x) > 0 and f''(x) ≤ 0, hence h''(x) ≥ 0 and h is convex.
We have Lemma 2 below, which together with Lemma 1 shows that R(a, b, ε) · T = CT ln(1/ε) / f(a, b) is convex.

Lemma 2. The function f(a, b) = 1 − e^{−(b/γ)(1 − e^{−a/ζ})} is positive and concave in (a, b).
Proof. Since a, b, ζ, γ are all positive numbers, we have 0 < e^{−a/ζ} < 1, so f(a, b) = 1 − e^{−(b/γ)(1 − e^{−a/ζ})} is positive. The second-order partial derivatives f_aa, f_bb, f_ab can be computed directly. The Hessian matrix of f(a, b) is

H = [ f_aa  f_ab
      f_ba  f_bb ].

Since f(a, b) is twice differentiable, the Hessian matrix is symmetric, that is, f_ab = f_ba. Next, we show that f_aa < 0 and f_bb < 0; from (21), (22) and (23) it can be obtained that the determinant f_aa f_bb − f_ab^2 is non-negative, so that H is negative semidefinite.
Next, we investigate the sign of the Hessian determinant. Let e^{−a/ζ} = t and b/γ = k; the determinant can then be expressed in terms of k and t, where k > 0 and t ∈ (0, 1), and it can be verified to be non-negative, which completes the proof of Lemma 2.

Proof (convexity of problem (16)). The constraints (16a), (16b), (16c) and (16d) are convex. Therefore, the convexity of problem (16) depends on its objective function. From Lemmas 1 and 2, we can conclude that the objective function of problem (16) is a convex function with respect to a and b.
The Lagrange function of problem (16) can be written as L(a, b, f, p, λ, µ), where λ_m and µ_n are the Lagrangian multipliers associated with constraints (16a) and (16b). The dual function of problem (16) is g(λ, µ) = min_{a,b,f,p} L(a, b, f, p, λ, µ). According to the Karush-Kuhn-Tucker (KKT) conditions, the optimal solution of problem (16) can be obtained by setting to zero the partial derivatives of the Lagrange function L(a, b, f, p, λ, µ) with respect to the variables a and b.
1) The optimal solution of local CPU frequency and transmission power: From constraints (16a) and (16c), it can be seen that it is always efficient to utilize the maximum CPU frequency f_n^max, ∀n ∈ N_m. Besides, from constraints (16b) and (16d), it can be seen that the minimum time is achieved if each UE uses the maximum transmission power p_n^max, ∀n ∈ N_m. So the optimal local CPU frequency and transmission power are given by f_n* = f_n^max, p_n* = p_n^max, ∀n ∈ N_m.
2) The optimal local and edge iteration counts (a*, b*) within one communication round: Let A = CT ln(1/ε) and Y = 1 − e^{−a/ζ}; solving the stationarity conditions in these variables yields (a*, b*) as in (30).
3) Solution of T and τ: Once the optimal a*, b* have been obtained under the given accuracy ε, the optimal τ and T follow from problem (13) as the corresponding maxima: τ_m* = max_{n∈N_m} { a* t_n^cmp + t_{n→m}^com } and T* = max_{m∈M} { b* τ_m* + t_{m→c}^com }.
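Since the closed forms for (a*, b*) depend on problem-specific constants, a simple numerical alternative is to scan integer pairs directly, using the total_time() sketch from above with f_n = f_n^max and p_n = p_n^max folded into the placeholder delay constants.

```python
# Brute-force search over integer (a, b); total_time() is the sketch defined earlier.
best_a, best_b = min(
    ((a, b) for a in range(1, 101) for b in range(1, 51)),
    key=lambda ab: total_time(ab[0], ab[1], eps=0.25,
                              t_cmp=2.5e-3, t_ue_edge=0.15, t_edge_cloud=0.5),
)
print("a* =", best_a, " b* =", best_b)
```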
4) Lagrange multipliers update:
The Lagrange dual variables λ, µ, β, ν can be obtained by solving the Lagrange dual problem of problem (16), which can be expressed as max_{λ,µ,β,ν} g(λ, µ, β, ν). The Lagrange dual problem is a convex problem, which can be solved by the subgradient projection method. Having obtained the subgradients of g(λ, µ, β, ν), the Lagrange multipliers can be updated iteratively as

λ_m(t + 1) = λ_m(t) − η∇λ_m(t),
µ_n(t + 1) = µ_n(t) − η∇µ_n(t),
β_n(t + 1) = β_n(t) − η∇β_n(t),
ν_n(t + 1) = ν_n(t) − η∇ν_n(t),   (37)

where η is the step size and t denotes the iteration index. The Lagrangian dual variables λ_m, µ_n, β_n and ν_n are updated according to the subgradient projection method in the iterative way of (37). After obtaining the Lagrangian dual variables, we substitute them into (30) to acquire the optimal values of a and b, and we then substitute the optimal a and b back into the expressions for τ and T in step 3).
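A schematic version of the dual update (37); the subgradient oracle below is a toy stand-in, since the true subgradients depend on constraint expressions omitted above.

```python
import numpy as np

def dual_update(subgrad, num_vars, eta=0.01, iters=500):
    """Iterate lam(t+1) = max(lam(t) - eta * g(t), 0), mirroring (37) with projection."""
    lam = np.ones(num_vars)
    for _ in range(iters):
        lam = np.maximum(lam - eta * subgrad(lam), 0.0)  # project onto lam >= 0
    return lam

# Stand-in oracle: gradient of a toy dual whose optimum lies at lam = 1.
lam_star = dual_update(lambda lam: 2.0 * (lam - 1.0), num_vars=4)
print(lam_star)  # converges toward the toy optimum at 1
```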
D. Solution of sub-problem II
In this part, we present the time-minimized UE-to-edge association scheme. With the optimal a, b, f and p, the UE-to-edge association problem is equivalent to problem (39). We should notice that when problem (39) reaches optimality, the maximum of the left-hand side of (39a) equals z; otherwise (i.e., if the maximum of the left-hand side of (39a) were less than z), we could decrease z, which would yield a smaller objective value for (39). Problem (39) is a mixed integer linear programming (MILP) problem, which can be solved by the branch-and-bound algorithm. However, the computational complexity of branch-and-bound is exponential in general, so it cannot be implemented in practice. To solve the UE-to-edge association problem practically, we propose a more efficient algorithm.
In the proposed algorithm, we first identify, for each edge server successively, the UEs with the largest SNR under the bandwidth constraint (39d): for each edge server i, we choose the N_m UEs with the largest g_{n,m} p_n / N_0, denote the set by N_{mi}, and set χ_{n,i} = 1, ∀n ∈ N_{mi}. Since each UE can be associated with only one edge server, a conflict must be resolved whenever two edge servers select the same UE, i.e., while there exist n, m_j with χ_{n,mi} == 1 and χ_{n,mj} == 1 (i > j). Specifically, let the sets of UEs chosen by edge servers m_1 and m_2 be N_{m1} and N_{m2}. If a UE n is in both N_{m1} and N_{m2}, the algorithm compares the uplink channel SNR over the pairs of edge servers m_1, m_2 and UEs not yet chosen, i.e., {(n', m') | n' ∈ N \ (N_{m1} ∪ N_{m2}), m' ∈ {m_1, m_2}}. The pair (n', m') with the largest uplink channel SNR g_{n',m'} p_{n'} / N_0 is then chosen. If m' = m_1, we remove UE n from m_1 and associate n' with m_1; otherwise, we remove UE n from m_2 and associate n' with m_2. This process proceeds until the last edge server finishes. The main procedures of the proposed algorithm are summarized in Algorithm 3. In each round, at most B/B_n comparisons are made; hence, the complexity of the proposed algorithm is O(MB/B_n) in the worst case.

V. NUMERICAL RESULTS

In this section, numerical experiments are conducted to verify the performance of our solutions. The advantages of a hierarchical federated learning system over edge-based and cloud-based federated learning systems were investigated in [15]. Hence, in this paper, we focus on the iteration counts of local UEs and edge servers under different conditions, as well as on UE-to-edge association.
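Before turning to the experiments, here is a sketch of the SNR-greedy association with conflict resolution described above. The channel gains and per-edge capacity limits are synthetic, and this is a simplified global-greedy variant of Algorithm 3 rather than its exact pseudocode.

```python
import numpy as np

def associate(snr, capacity):
    """snr[n, m]: uplink SNR of UE n at edge m; capacity[m]: max UEs per edge (B / B_n)."""
    N, M = snr.shape
    assoc = -np.ones(N, dtype=int)   # assoc[n] = chosen edge server, -1 if none yet
    load = np.zeros(M, dtype=int)
    # Hand out UE-edge pairs in order of decreasing SNR; each UE gets one edge and
    # each edge at most capacity[m] UEs, which resolves conflicts up front.
    order = np.dstack(np.unravel_index(np.argsort(-snr, axis=None), snr.shape))[0]
    for n, m in order:
        if assoc[n] == -1 and load[m] < capacity[m]:
            assoc[n] = m
            load[m] += 1
    return assoc

rng = np.random.default_rng(1)
print(associate(rng.random((10, 3)), capacity=np.array([4, 4, 4])))
```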
A. Experiment Settings
For the simulations, we consider a hierarchical federated learning system with multiple user equipments, edge servers and one cloud server. The user equipments are deployed in a square area of size 500 m × 500 m with the edge servers located in the center, and all the edge servers are deployed in an area with the cloud server located in the center. For the machine learning task, we consider classification on the standard MNIST dataset, and we use LeNet as the training model. The constants γ, ζ and δ are set to random integers between 1 and 10. For simplicity, we use the free-space path loss model in [24]: with λ the wavelength of the wireless signal and d the distance between UE n and edge server m, the channel gain is g_{n,m} = ( λ / (4π d) )^2. We set the carrier frequency to 28 GHz, so that λ = (3 × 10^8) / (28 × 10^9) = 3/280 m and g_{n,m} = ( (3/280) / (4π d) )^2. The maximum CPU frequency f_n^max is 2 GHz, and the maximum transmission power p_n^max is 10 dBm for each device. For the parameters in the optimization problem regarding Assumption 1 (L-smoothness and β-strong convexity), we follow the experiment setting in [7].
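The channel gain model above can be computed directly; the distance value is an arbitrary example.

```python
import math

def channel_gain(freq_hz, distance_m):
    # Free-space path loss: g = (lambda / (4 * pi * d))^2, with lambda = c / f.
    wavelength = 3e8 / freq_hz
    return (wavelength / (4 * math.pi * distance_m)) ** 2

print(channel_gain(28e9, 100.0))  # 28 GHz carrier, UE 100 m from the edge server
```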
B. The optimal number of local computations and edge aggregations
Firstly, we fix the number of UEs and edge servers: we deploy 1 cloud server and 5 edge servers, and each edge server is associated with 20 UEs. To achieve a given global accuracy ε within minimum time, the local iterations and edge iterations needed between two communication rounds are shown in Fig. 2. As ε decreases (i.e., a higher machine learning model accuracy is required), a decreases while b increases, and the value of a × b (the number of local iterations in one cloud round) increases. This means that in order to obtain a more accurate global model within minimum time, edge servers need to run more edge iterations while UEs run fewer local iterations within one communication round. In the simulation experiment, we train LeNet on the MNIST dataset with each edge server associated with 10 UEs. It can be observed from Fig. 4 that the optimal values of a and b differ under different required test accuracies. For example, in Fig. 4, if the required machine learning model accuracy is between 0.88 and 0.89, then a = 35, b = 5 is optimal; if the required accuracy is beyond 0.92, then a = 30, b = 7 is optimal.
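A qualitative way to explore such (a, b) trade-offs is to simulate the nested schedule of Section III directly; the quadratic local losses and all constants below are placeholder assumptions, not the LeNet/MNIST setup of the paper.

```python
import numpy as np

def hier_fl(ue_data, assoc, a, b, cloud_rounds, lr=0.05, d=5):
    """a local GD steps per edge aggregation; b edge aggregations per cloud aggregation."""
    w = np.zeros(d)
    for _ in range(cloud_rounds):
        edge_models, edge_sizes = [], []
        for ues in assoc:                          # one list of UE indices per edge server
            w_edge = w.copy()
            for _ in range(b):
                locals_, sizes = [], []
                for n in ues:
                    X, y = ue_data[n]
                    w_n = w_edge.copy()
                    for _ in range(a):             # gradient descent on the local loss F_n
                        w_n -= lr * X.T @ (X @ w_n - y) / len(y)
                    locals_.append(w_n); sizes.append(len(y))
                w_edge = sum(s * wn for s, wn in zip(sizes, locals_)) / sum(sizes)
            edge_models.append(w_edge)
            edge_sizes.append(sum(len(ue_data[n][1]) for n in ues))
        w = sum(s * wm for s, wm in zip(edge_sizes, edge_models)) / sum(edge_sizes)
    return w

rng = np.random.default_rng(0)
data = [(rng.normal(size=(40, 5)), rng.normal(size=40)) for _ in range(6)]
w = hier_fl(data, assoc=[[0, 1, 2], [3, 4, 5]], a=10, b=3, cloud_rounds=5)
```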
Next, we associate different numbers of UEs with each edge server, from 10 UEs to 100 UEs. To obtain a fixed global accuracy in minimum time, the local iterations and edge iterations needed are shown in Fig. 3. As the number of UEs associated with each edge server increases, the numbers of local iterations and edge iterations exhibit no visible trend. That is because, at the aggregation step, the weighted averaging scheme balances the variance among all the UEs. In Fig. 6, each edge server is associated with 20 UEs. It can be seen from Fig. 6 that the optimal values of a and b are different when different machine learning model accuracies are required. Similar to the case where each edge server is associated with 10 UEs, a = 35, b = 5 is optimal when the required model accuracy is between 0.88 and 0.89; if the required model accuracy is beyond 0.9, then a = 30, b = 5 is optimal. This also verifies the observation that the optimal local and edge iteration counts have no correlation with the number of UEs each edge server is associated with.
C. The optimal UE-to-edge association

In this part, we test three different UE-to-edge association strategies under the global accuracy requirement ε = 0.25: the proposed method, the greedy algorithm and the random UE-to-edge association strategy.
• Greedy algorithm: The greedy algorithm chooses the available UEs with maximum SNR under the bandwidth constraint for each edge server.
• Random UE-to-edge association: The random UE-to-edge association assigns UEs to the edge servers randomly, under the bandwidth constraint.
When the number of edge servers is small, UEs have few choices, and more UEs have to associate with edge servers whose SNR is high.
VI. CONCLUSION
In this paper, we have investigated the problem of latency minimization in a 3-layer hierarchical federated learning framework. In particular, we formulated a joint learning and communication problem in which we optimized the local iteration count and the edge server iteration count. To solve the problem, we studied its convexity and proposed an iterative algorithm to obtain the optimal local and edge iteration counts. Besides, we proposed a UE-to-edge association strategy which aims to minimize the maximum latency of the system. Simulation results show the performance of our solutions: the global model converges faster under the optimized numbers of local iterations and edge aggregations, and the overall FL time is minimized with the proposed UE-to-edge association strategy. | 2022-10-11T01:16:25.236Z | 2022-10-07T00:00:00.000 | {
"year": 2022,
"sha1": "bd0279b8c22c5943ea3dd042c60efb2cdff1e9f9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bd0279b8c22c5943ea3dd042c60efb2cdff1e9f9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
266785298 | pes2o/s2orc | v3-fos-license | Alternative and complementary medicine: A look at the general culture
This paper explores the world of traditional medicine and complementary and alternative medicine from a multicultural perspective. It begins by highlighting the importance of traditional medicine in various cultures and its vital link to the cultural identity of ethnic groups. It then differentiates between traditional medicine and complementary/alternative medicine, explaining that the former is part of a specific culture, while the latter is used in conjunction with or in place of conventional medicine. The paper highlights how traditional and complementary medicine often seek to balance the physical, spiritual and experiential aspects of health and how these practices are rooted in culture and nature. Numerous alternative and complementary therapies, such as herbal medicine, acupuncture, reflexology, yoga, and aromatherapy, are mentioned, and it is emphasized that these therapies are based on natural and noninvasive approaches. In addition, the relationship between traditional medicine and Western medicine is discussed, including how in some places they are being harmoniously combined to provide holistic health care. The example of intercultural medicine in Cuba is mentioned, where scientific medicine, traditional Chinese medicine and natural and traditional medicine are integrated. The importance of preserving and respecting the traditions and practices of the traditional medicine of indigenous cultures, such as Mapuche medicine in South America, is emphasized. It is mentioned that these traditions not only treat individual diseases but also seek to maintain balance with nature and culture. In conclusion, it is emphasized that traditional and complementary medicine offer a different perspective on health and wellness, and it is important to approach them critically and with proper medical guidance. These practices can offer holistic approaches to health care and are an integral part of cultural diversity in health care.
INTRODUCTION
Since the beginning, humans have had to fight diseases with "traditional medicine." With time, this type of medicine has been modified, and in some places the use of medicinal plants, minerals, and other natural elements has even been lost.
In our culture, it responds to the immediate need to solve health problems. It has been known since time immemorial and is a practice performed by doctors or healers.
An essential characteristic of traditional medicines is their vital link with the cultural being, both individual and social-cultural. (1,2) Tradition becomes the transmitter of knowledge accumulated and bequeathed from generation to generation, which maintains the identity and culture of the original groups of different world cultures, such as Mapuche ethnomedicine, which has ancestral origins, possesses its own traditional Mapuche medicine and is still practiced today. (3) Each society evolves with time and reaches its development following its own pattern, model, and path of evolution. The same has happened with traditional cultures and Western society (accounting for the variants of countries and regions). (4) As a result, nowadays, different alternative and complementary therapies have origins in different cultures.
In this way, we can question whether traditional and complementary medicines are the only elements that help to perpetuate culture and maintain the cohesion and identity of groups or whether they fulfill specific objectives and purposes.
According to the WHO, traditional medicine is the total of the knowledge, skills, and practices based on the theories, beliefs, and experiences of different cultures, explainable or not, with a long history, used to maintain health and to prevent, diagnose, improve, or treat physical and mental illnesses, while complementary medicine, also called alternative medicine, comprises health care practices that are not part of conventional medicine. Medicine is specific to a social group, to a culture, since therapeutic systems are constructed according to the cultural characteristics of the groups. If cultures vary, the ways of understanding health and disease and of approaching problems and providing solutions will also vary. Thus, from the ancient shamanic cultures in Asia and indigenous America, traditional medicines have been developed over time, following traditions. (5)
DEVELOPMENT
According to Peter Brown, conventional medicine is one more ethnomedicine in our Western society, just as there are others, such as Mapuche ethnomedicine and traditional Chinese ethnomedicine, each the result of the search for solutions to health problems within a culture, appropriate to the characteristics of each group.
In recent times, there has been a resurgence of these therapies in the health system, especially ancestral medicine, with strong growth and expansion of the use of natural products, sustained by beliefs, widespread knowledge, practices, and resources derived from this knowledge, in a socio-cultural context, exercised by the community of a people to solve specific health problems empirically despite the reach of scientific medicine. (6) Currently, there is a WHO strategy on traditional medicine, which helps to find solutions and to maintain a broad vision regarding the improvement of health. This was officially initiated with the Declaration of Alma-Ata in 1978, inviting Member States to seek and achieve the active participation of the population, taking advantage of their knowledge in this area of medicine and considering their needs, local resources, and social and cultural characteristics.
In Argentina, in the province of Misiones, the law recognizes traditional and complementary medicine, thus giving a regulatory framework to practices and therapies not framed within conventional medicine, allowing patients to opt for complementary treatments.
On the other hand, there is a difference when we talk about alternative medicine: it refers to non-conventional practices used instead of conventional medicine. When we talk about complementary medicine, by contrast, it refers to non-conventional practices used in conjunction with conventional medicine. (7) Traditional medicine generally looks beyond the body, trying to balance both the observable (affected) and the spiritual and experiential.
Traditional medicine, natural therapies, and Western medicine can be combined beneficially and harmoniously. (8) Medicinal and aromatic plants play an essential role in people's health care. Until the advent of modern medicine, humankind depended on them to treat illness. Human societies of all ages have accumulated much traditional knowledge about the use of medicinal plants. About 80% of the population in most developing countries still uses traditional medicine derived from plants to satisfy primary health needs. (9) As we know, medical practices have existed for centuries, with different types of treatments; but first, let us define:
• Medicine: We can define medicine as the sum of knowledge, techniques, and practices based on theories, beliefs, and experiences grounded in studies, whether X-rays or laboratory tests, applied to maintain physical and mental health, according to the WHO. (10)
• Disease: We define disease as the alteration or deviation of the physiological state in one or more parts of the body, generally of known cause, manifested by characteristic signs and symptoms, and whose evolution is more or less predictable. Taking these concepts into account, we now turn to the therapies themselves. (11)
Different alternative and complementary therapies: A wide variety of therapies are grouped under the topic of alternative and complementary medicine. These are:
• Phytotherapy: Using plants or herbs for medicinal or therapeutic purposes.
• Acupuncture: An ancestral Chinese technique that uses different types of needles inserted into the body through the skin to treat diseases.
• Moxibustion: A technique that treats bodily ailments by applying heat at a certain distance from the skin through the burning of the mugwort plant, dried and compressed, in pure form.
• Massage therapy: A technique performed by massaging the body to promote relaxation. A relaxing massage is designed to relieve muscle tension and promote general well-being; therapeutic massage, on the other hand, consists of various techniques used for certain medical conditions.
• Chiropractic: A practice that deals with the diagnosis, care, and prevention of disorders of the musculoskeletal system.
• Reflexology: An ancient practice in China, Egypt, and India that applies pressure on the feet and hands, on so-called "reflex zones", with the thumb, fingers, and hand. Its theory holds that the body's organs correspond to regions of the foot and that healing occurs through pressure applied to a particular region.
• Aromatherapy: The inhalation of essential oils to improve the individual's psychological and physical well-being.
• Yoga: A set of physical-mental disciplines of concentration, flexibility, strength, and vitality. This practice connects the body, breath, and mind.
• Reiki: A technique that seeks to heal physical and mental illnesses by using the hands (sender) to transmit universal vital energy to an individual (receiver).
• Pranic healing: An art/science in which the healer uses prana ("the breath of life"), projecting it onto the person to alleviate, heal, and prevent disease. Prana is the body's vital energy that keeps it healthy and alive.
• Equine therapy: A therapeutic discipline in which horses help rehabilitate people with physical and mental problems for a better quality of life.
• Hydrotherapy: A treatment that uses water, subjecting it to temperature changes, either hot or cold, to treat a disease or maintain health.
• Curanderismo: A holistic approach to healing the mind, body, and spirit, using natural elements to heal both the physical and the spiritual.
These approaches reduce the use of chemicals in many areas of health; they are an alternative to the drugs and antibiotics used to treat minor illnesses and physical ailments. (12)
Alternative medicine has three basic principles:
1. Natural medicine does not treat diseases but people; the individual is conceived as a whole.
2. This discipline aims to enhance the natural healing power of the human body. The physician must help the patient throughout the healing process and trust the body's ability to self-regulate.
3. The remedies and techniques used to treat patients must be natural and non-aggressive. The Hippocratic maxim should always be followed, i.e., do no harm to the patient. (13)
Conditions that traditional and complementary medicine treats: It treats various conditions, some acute, such as headaches, sore throats, colds, and flu, and also chronic diseases such as migraines, gastrointestinal problems, gynecological conditions, arthritis, physical injuries, and other trauma, using natural medicine, as long as the condition is not life-threatening. (14) Medicinal plants are also very useful in treating psychological problems such as stress, anger, anxiety, and nervousness.
In addition, any natural treatment must be accompanied by good lifestyle habits and a healthy diet according to the needs of each person.
What makes Western medicine different from the rest? No medicine is better or worse: a therapeutic system is valid if it solves or helps to solve health problems. Traditional medicines generally look beyond the body, trying to rebalance both the affected observable aspects and those of a spiritual, experiential, and emotional nature.
Traditional practitioners say that combining traditional therapeutic systems with Western technological-scientific ones is possible, but each practitioner explains their own reasons, since each system has its particularities. Conventional hospital doctors see and cure certain situations and problems (surgery, central respiratory infections, heart problems, etc.). Others are better treated by traditional doctors, mainly diseases that have to do expressly with culture. Some are trained in the healing tradition of their people and have a long history in the study of disease. Thus, they combine traditional medicine with the Western vision, using remedies from their tradition and patent medicines according to need. Medicine tends to become intercultural as cultures come into contact with each other. (15) In China, TCM (Traditional Chinese Medicine) not only retains the traditional characteristics of a therapeutic system related to its cultural context but also incorporates elements of academic refinement and others from Western science; increasingly, these elements are used together. In Latin America and Mexico, traditional doctors are trained, in addition to their own traditions, in schools, university courses, conferences, and other forms of permanent training. There is more and more demand from the population for systems such as traditional Chinese medicine, homeopathy, naturopathy, and the healing ways of each people's traditions, alongside Western medicine. (16) In Russia, there is an important tradition in phytotherapy and naturopathy, and the old shamanic culture is gradually being revived within the vital context of different groups. In almost all of the East, millenary traditional Chinese medicine and its intercultural variants treat millions of people yearly.
There is an official example of intercultural rapprochement in Chile with the Makewe Pelale Hospital and the herbal pharmacy in Temuco. In Mexico, traditional Mexican medicine is increasingly being developed along with Chinese medicine in its intercultural aspect; there are educational organizations that teach them, such as Tlahui, and universities such as Chapingo offer training courses for traditional doctors and for those trained in Western science. (17) In Cuba, there is an integral and integrated health system in which scientific medicine, Chinese medicine, and traditional and natural medicine are related together as intercultural medicine. The official Cuban therapeutic system, a mixed one, is an example of eco-cultural medicine. Cuba's health curricula include phytotherapy, acupuncture, and natural and traditional medicines, and research on natural products that meet the population's needs has been promoted. Cuban health professionals have a high level of training; the University of Holguin offers international courses to foreign professionals as a sample of the high degree reached in apitherapy, phytotherapy, diverse natural techniques, and intercultural Chinese medicine. (2,18) In China, when looking at the sky, the clouds, the wind, the sun, and the stars, everything is related in balance, without complications, and everything flows; therefore, there are no diseases. The Chinese apply the same principle to the body and its parts. The energies of nature are influential and responsible for the health of the environment and of individuals. At the popular level, all this is enriched by ancient traditions of supernatural elements intervening in nature, life, and the destiny of individuals. (19) In northern Argentina and Bolivia, traditional doctors have much knowledge of phytotherapy and know when and how to prepare remedies to treat kidney problems, blood pressure, and abdominal pain, among others. This also serves to counteract the symptoms of COVID-19: the preparation of herbs such as chips, matico, lemon, and honey relieved the symptoms of the disease. (20)
Traditional medicine as eco-cultural medicine
The ethnomedicines of native cultures, traditional Mexican medicine, and traditional Chinese medicine (different from Western medicine) are natural and cultural medicines. Tradition is the support for receiving adequate information and achieving good organization, procedure, and safe transmission. Traditions must be taken into account in order to accommodate indigenous groups to progress.
Nature and culture form a unity and a dynamic reality in most traditions of indigenous cultures.Natural resources serve for survival and are conceived as "brothers" with whom one lives together.When Nature gives, it is necessary to give back in return.
Ethnomedicine in this context adapts to physical reality and to what is specified in the tradition. Eco-cultural medicine is a (varied) system that conceives natural remedies not as a means to an end but as elements with which one interacts, having cultural characteristics that influence the behavior and life of these individuals. The intervention of this medicine is carried out not only on the patient but on the patient integrally (as an element of Nature), on the natural environment, the social environment, and the cultural environment; if there is balance across this broad reality, health is achieved and maintained. Any imbalance in the planes of the multi-reality (physical-symbolic) can cause disease in people.
Mapuche Ethnomedicine
Mapuche medicine is part of the "cultural entity" of one of the original peoples of South America. It is an ethnomedicine system established since ancient times, with its own characteristics and others shared in essence with almost all Amerindian ethnomedicines. If Nature gets sick, human beings get sick, and vice versa. The relationship with the earth can be altered and give rise to diseases (called by the Mapuche "Mapuche kutran"). Traditional health, from Patagonia to the mountains of the Sierra de Oaxaca, is based on equilibrium, understood as the balance of forces coming from Nature, the human being, culture, spiritual beings, and the cosmos (with the Higher Self). (21) Some authors affirm a deep kinship between the cultures of the original American groups. Could we trace this kinship back to the migrations of Siberian peoples who began to cross the Bering Strait some 35,000 years ago? When the mechanisms of transmission of traditions worked one hundred percent, not only myths were passed from generation to generation, but also many other elements and aspects of the culture and science of these groups, among them concepts and ideas of health, social organization, and the way of understanding the world and relationships with Nature. The Mapuche people resisted the Spanish conquerors' advances and maintained internal cohesion for a long time, until their conquest by the Chilean State. (22) Mapuche ethnomedicine has been preserved to the present day, being an object of study and interest for researchers and others. There are several references to it in documents of intercultural encounters and on different sites specializing in Mapuche information online.
The Mapuche distinguish between ailments and disharmonies arising from characteristics of the Mapuche idiosyncrasy and culture, and diseases and illnesses that can be treated by the specialist of Western science, winka kutran (infections, traumatological problems, problems that require surgical interventions, etc.). Mapuche medicine has a specific vision of disease and relates it to the group, its members, nature, the world of beliefs, and the cosmos. (23) Therefore, to re-harmonize an altered situation, botanical means are used, and rites and ceremonies gather the community around vital ancestral practices to ensure the cures and the very existence of the Mapuche people as such. The idea of joint and integral action to achieve the balance that health represents is also present in the thinking of cultures in the Mesoamerican, Siberian, Asian, and other traditions. (24) From a contemporary and multicultural perspective, the Mapuche traditional medical system constitutes an abundant, diverse, and well-preserved ethnomedicine, with elements related to other Amerindian and possibly Siberian ethnomedicines, a fact of interest to anthropology in dealing with the health problems of these people according to their tradition.
In Mapuche medicine, the Machi is the person in charge of carrying out the ancestral practice of the native peoples to maintain health, well-being, and balance with the environment. Through therapeutic rituals in which remedies are prepared with herbs or natural elements as the main ingredients, they carry out ceremonies accompanied by melodies, dances, and rites. In traditional cultures, talking about health goes beyond the simple well-being of the body. Ethnomedicine specialists not only work to rebalance the person who is sick or has problems but also think about the group and the correct relations of people and groups with the environment and resources, according to the norms of the culture. (25)
CONCLUSIONS
In conclusion, traditional medicine and alternative and complementary medicine provoke continuous interest in today's global society, as they aim to provide a different vision of wellness and health. These medicines are very varied, allowing many options; they offer quick, less invasive, and, in some cases, economical ways to cure or prevent certain ailments and specific diseases that do not require surgical interventions.
It is necessary to approach them critically and to accompany them with proper medical guidance. With the support of the research offered by multicultural medicine on traditional and alternative medicine, they can work together and offer us a holistic approach to health care. | 2024-01-06T16:12:12.152Z | 2023-12-31T00:00:00.000 | {
"year": 2023,
"sha1": "7e715ac8ddc7e5ec8d006a3e1661b13f12d900fd",
"oa_license": "CCBY",
"oa_url": "https://cid.saludcyt.ar/index.php/cid/article/download/119/159",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f0c01d5e2d85a428453e18c4934910a0c1daa0b4",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": []
} |
11900020 | pes2o/s2orc | v3-fos-license | Local formula for the index of a Fourier Integral Operator
We show that the index of an elliptic Fourier integral operator associated to a contact diffeomorphism $\phi$ of cosphere bundles of two Riemannian manifolds X and Y is given by $\int_{B^*X}\hat{A}(T^*X)\exp{\theta} - \int_{B^*Y}\hat{A}(T^*Y)\exp{\theta}$. Here $B^*$ stands for the unit coball bundle and $\theta$ is a certain characteristic class depending on the principal symbol of the Fourier integral operator. In the special case when X=Y we obtain a different proof of the theorem of Epstein and Melrose.
Introduction
Let X and Y be two smooth closed connected Riemannian manifolds of the same dimension such that there exists a contact diffeomorphism φ : S * X → S * Y between the two unit cotangent bundles which induces a homogeneous symplectomorphism, still denoted by φ, from T * X \ X onto T * Y \ Y .
We first recall the definition of the index of φ when dim X ≥ 3, following [10]. We will denote by Ω^{1/2} the half-density bundle over X or Y. Let C_φ be the graph of φ^{-1} in (T*Y \ Y) × (T*X \ X) and L_{C_φ} be the associated Maslov bundle. Let A : L^2(X, Ω^{1/2}) → L^2(Y, Ω^{1/2}) be an elliptic Fourier Integral Operator of order zero whose canonical relation is C_φ and whose principal symbol is an invertible section of the bundle Ω^{1/2} ⊗ L_{C_φ} → C_φ (see [11], [5] for details). Suppose that B : L^2(Y, Ω^{1/2}) → L^2(X, Ω^{1/2}) is an elliptic Fourier Integral Operator of order zero whose canonical relation is C_{φ^{-1}}. Then B ∘ A : L^2(X, Ω^{1/2}) → L^2(X, Ω^{1/2}) is an elliptic scalar pseudo-differential operator of order zero. Since dim X ≥ 3, there exists a smooth non-vanishing function x ∈ X → a(x) ∈ C* such that the principal symbol of B ∘ A is homotopic to (x, ξ) ∈ T*X → a(x) ∈ C*. In particular, the index of B ∘ A is zero. Thus Ind B = −Ind A for any Fourier Integral Operators A and B as above, and, as a corollary of this fact, Ind A does not depend on the choice of A. Since it only depends on the transformation φ, it is called the index of φ and denoted by Ind φ. A. Weinstein has proved (see [11]) that the integer Ind φ naturally appears if one wants to compare the spectrum (λ_k(X))_{k∈N} of the Laplace-Beltrami operator ∆_X of X with that of ∆_Y; for instance, if T*X \ X is simply connected, then the sequence (λ_k(X) − λ_{k−Ind φ}(Y))_{k∈N} is bounded.
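In display form, the definition just recalled reads:

\[
\operatorname{Ind}(B \circ A) = 0 \;\Longrightarrow\; \operatorname{Ind} B = -\operatorname{Ind} A,
\qquad
\operatorname{Ind}\varphi := \operatorname{Ind} A,
\]

valid for any elliptic Fourier Integral Operators $A$ and $B$ of order zero with canonical relations $C_\varphi$ and $C_{\varphi^{-1}}$, respectively.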
The goal of this paper is to provide a geometric formula for the index an elliptic Fourier Integral Operator Φ of order zero whose canonical relation is C φ ( we do not assume dim X ≥ 3).
Let us first fix some notation. Given a smooth manifold X, we will use T*X to denote the cotangent bundle of X and B*X to denote the projective compactification of T*X. We will use M to denote the smooth manifold obtained by glueing at infinity B*X and B*Y with the help of the map φ′ : (x, ξ) → φ(x, −ξ).
Let S^0(T*X) and S^0(T*Y) denote the algebras of asymptotic symbols of pseudodifferential operators of order at most zero on X and Y. Given an element a ∈ S^0(T*X), we denote by a_ℏ the symbol a scaled by ℏ in the cotangent direction and by Op(a) the pseudodifferential operator associated to a. Given a pseudodifferential operator A, we denote by σ(A) its full symbol (for the precise definition see the next section).
The general strategy is as follows. We interpret conjugation by Φ as an isomorphism of the algebras of pseudodifferential operators on X and Y. Translated into terms of formal deformations of the cotangent bundles, this allows us to construct a formal deformation A(M) of C^∞(M) which on T*X and T*Y represents the calculus of differential operators, while on the common cosphere at infinity it represents the calculus of pseudodifferential operators. While the symplectic structures on T*X and T*Y do not glue together (so there is in general no almost complex structure on B*X ∪_{φ′} B*Y), there is a (non-canonical) symplectic Lie algebroid structure (E, [·,·], ω) over M, and A(M) is a deformation associated to it in the sense of [8]. The usual traces on the algebras of smoothing operators on X and Y give rise to a trace τ_can on A(M) such that Ind Φ = τ_can(1). An application of the general algebraic index theorem from [8] gives the local formula for the index.
The content of the paper is given below.
1.
In the first section we recall the relation between the calculus of smoothing operators on X and a formal deformation of T*X, which is basically given by the full symbol of a pseudodifferential operator.
2.
B*X carries a structure of symplectic Lie algebroid (E_X, [,], ω) described in Section 2. The symbolic calculus of pseudodifferential operators gives rise to a formal deformation of the sphere at infinity of B*X which, together with the formal deformation of T*X given above, gives rise to a formal deformation A(X) of B*X associated to (E_X, [,], ω).
3.
Let us fix an almost unitary elliptic Fourier Integral Operator Φ whose canonical relation is given by the graph of φ^{-1}. In Section 4 we show how to glue together the deformations A(X) and A(Y) into a formal deformation A(M) of M associated to a symplectic Lie algebroid structure (E, [,], ω) on M. The construction is based on the following strengthening of the Egorov theorem (see Theorem 1).
1. The map which to any a ∈ S^0(T*X) associates the asymptotic expansion at ℏ = 0 of σ(Φ Op(a_ℏ) Φ*), rescaled by ℏ^{-1} in the cotangent direction, induces an algebra isomorphism Φ̃.
2. For each k ∈ N*, there exists an E_X-differential operator D_k on B*X such that, for any a ∈ S^0(T*X), the full symbol σ(Φ Op(a_ℏ) Φ*) admits an asymptotic expansion in powers of ℏ whose coefficient of ℏ^k is given by D_k applied to a. The Egorov theorem corresponds to the leading term of this expansion.
The real symplectic vector bundle E is isomorphic to T M (as a vector bundle over M) and hence T M is the realification of a complex vector bundle on M which will be denoted by E C .
4. In Section 5 we identify the space of traces on A_ℏ(M) and relate it to the traces on the algebras of smoothing and of pseudodifferential operators.
5. In Section 6 we identify the index of the Fourier Integral Operator with the trace of 1 in the formal deformation.
The local index formula for Ind Φ follows from the algebraic index theorem of [8], the class θ_0 being the coefficient of ℏ^0 in the characteristic class θ ([8], [4]) of the deformation.
The main result can be formulated as follows. 1. Let Φ be a Fourier Integral Operator and A_ℏ(M) the formal deformation of M associated to it as in Definition 2. Then the index formula sketched below holds, where θ_0 denotes the characteristic class of the deformation of the Lie algebroid (E, [·, ·], ω) given by A_ℏ(M). 2. Let ∇^X be a connection on the tangent bundle T(B*X) and Â(T*X) an associated representative form of the Â-class of ∇^X. The symplectomorphism φ induces a connection φ*(∇^X) on the tangent bundle of B*Y \ Y. Let ∇^Y denote its extension to a connection on T(B*Y) and Â(T*Y) an associated representative differential form of the Â-class of ∇^Y; the index is then computed by the glued Â-forms, as sketched below. The computation of the characteristic class of the deformation is given in the last section, where we simultaneously construct a deformation of M and the Fourier Integral Operator whose index is given by the trace of 1 in the deformed algebra. As the starting point we give a somewhat nonstandard definition of the characteristic class of a formal deformation which is more amenable to computations in the case of deformations associated to (twisted) differential or pseudodifferential operators.
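A plausible shape of the two index formulas, consistent with ind Φ = τ_can(1) and with the computation of Section 7.5 (a hedged reconstruction; normalizations in the source may differ):

```latex
% Part 1 (algebraic index formula, assumed form):
\operatorname{Ind}\Phi \;=\; \tau_{\mathrm{can}}(1) \;=\; \int_{M} \hat{A}\, e^{\theta_{0}} .
% Part 2: the same integral, with \hat{A} represented by the glued de Rham forms
\hat{A}\big|_{B^{*}X} = \hat{A}(T^{*}X) ,
\qquad
\hat{A}\big|_{B^{*}Y} = \hat{A}(T^{*}Y) .
```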
As a corollary we get the following result.
In the case when X = Y, a straightforward Mayer-Vietoris type argument with the mapping torus of φ shows that our results recover those of Epstein and Melrose.
Remark 1. The methods of this paper extend in a fairly straightforward manner to the case of a Fourier Integral Operator Φ between L² sections of vector bundles E and F of the same dimension on X and Y. In the case when both X and Y possess a metalinear structure, the corresponding index formula is given by the expression sketched below. Here L is the vector bundle over M obtained by glueing together the pull-backs (by the canonical projections π_X^* and π_Y^*) to the cotangent bundles of X (resp. Y) of the bundles Λ^{n/2}(X) ⊗ E and Λ^{n/2}(Y) ⊗ F with the help of the symbol of Φ.
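In the metalinear case the expression is plausibly the following (hedged; it matches Section 7.5, where θ_0 = c_1(L)):

```latex
% Hedged reconstruction of the index formula of Remark 1:
\operatorname{Ind}\Phi \;=\; \int_{M} \hat{A}(M)\, e^{\,c_{1}(L)} .
```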
Note that the existence of an isomorphism of π_X^*(Λ^{n/2}(X) ⊗ E) and π_Y^*(Λ^{n/2}(Y) ⊗ F) over φ′ is equivalent to the existence of an elliptic Fourier Integral Operator from L²(X, E) to L²(Y, F).
Remark 2. It is easy to see that our local formula implies the following fact: if φ extends as a symplectomorphism T*X → T*Y up to the zero section, then Ind Φ = 0.
Symbolic calculus for ΨDO's and formal deformations
2.1. Deformation of T * X. We will recall the pertinent facts from [9].
Let χ be a smooth, non-negative function on X × X satisfying the following conditions: (1) χ(x, y) = χ(y, x); (2) χ ≡ 1 on an open set containing the diagonal in X × X.
(3) For each x ∈ X, the set D_x = {y ∈ X : (x, y) ∈ supp χ} is geodesically convex. We denote by Exp_x^{-1} the unique smooth inverse to the exponential map Exp_x : T_xX → X defined on D_x and such that Exp_x^{-1}(x) = 0. Given x ∈ X, y ∈ D_x, let z denote the midpoint of the unique geodesic joining x and y within D_x, and let v ∈ T_zX be given by (1). Now, denote by S^m(T*X) the space of classical symbols of order m on X, i.e. smooth functions θ on T*X satisfying estimates of the form |∂_x^α ∂_ξ^β θ(x, ξ)| ≤ C_{α,β}(1 + ||ξ||)^{m−|β|}. S^m(T*X) is given the topology of a (Fréchet) topological vector space by the best constants C_{α,β}. We will denote by S^{+∞}(T*X) = ∪_{m∈R} S^m(T*X) the set of all classical symbols on T*X. With the above notation (1), the map θ → Op(θ) sketched below defines a pseudo-differential operator. Conversely, if P is a pseudodifferential operator on X we define its complete symbol σ(P) by the corresponding oscillatory integral, where z is the midpoint of the geodesic joining x and y and v satisfies (1). We observe that P − Op(σ(P)) is a smoothing operator whose Schwartz kernel vanishes to infinite order on the diagonal. Now, for a given θ ∈ C^∞(T*X), we set θ_ℏ(x, ξ) = θ(x, ℏξ).
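The definition (1) of v and the quantization map plausibly take the following standard midpoint (Weyl-type) form; this is a sketch under that assumption, and the precise normalization of [9] may differ:

```latex
% Plausible reconstruction (not verbatim from [9]). The vector v of (1) is the
% symmetric chord at the geodesic midpoint z of x and y:
\mathrm{Exp}_{z}\!\left(\tfrac{v}{2}\right) = y ,
\qquad
\mathrm{Exp}_{z}\!\left(-\tfrac{v}{2}\right) = x ;
% and the quantization map is the corresponding midpoint oscillatory integral:
\mathrm{Op}(\theta)u(x)
  \;=\; \frac{1}{(2\pi)^{n}} \iint
        e^{\sqrt{-1}\,\langle v,\xi\rangle}\,
        \chi(x,y)\,\theta(z,\xi)\,u(y)\, dy\, d\xi .
```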
Following [9], we endow the algebra A_ℏ(T*X) with a star product ⋆_X by defining, for any symbols θ_1, θ_2 ∈ S^m(T*X), θ_1 ⋆_X θ_2 to be the asymptotic expansion at ℏ = 0 of the unscaled symbol of the composition Op(θ_{1,ℏ}) Op(θ_{2,ℏ}); see the sketch below. One sees immediately that ⋆_X extends to A_ℏ(T*X). Recall that there exists, unique up to normalization, a canonical trace Tr^X_can on (A_ℏ(T*X), ⋆_X), also sketched below (Proposition 2.5(3) of [9]).
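The two displays plausibly read as follows (a hedged reconstruction; the trace is in any case fixed only up to normalization):

```latex
% The star product as the unscaled symbol of a composition of quantizations:
\theta_{1} \star_{X} \theta_{2}
  \;\sim\;
  \bigl(\sigma\bigl(\mathrm{Op}(\theta_{1,\hbar})\,\mathrm{Op}(\theta_{2,\hbar})\bigr)\bigr)_{\hbar^{-1}} ,
  \qquad \hbar \to 0 ;
% the canonical trace as a phase-space integral (up to normalization):
\mathrm{Tr}^{X}_{\mathrm{can}}(\theta)
  \;=\; \frac{1}{(2\pi\hbar)^{n}} \int_{T^{*}X} \theta\, \frac{\omega_{X}^{n}}{n!} .
```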
2.2. Lie algebroid structure and deformation quantization of the projective completion B*X. For any x ∈ X, let B*_xX denote the projective compactification of the fiber T*_xX. Then we consider the fiber bundle B*X over X defined by B*X = ∪_{x∈X} B*_xX. Therefore B*X is a compactification of T*X and a smooth compact manifold with boundary ∂B*X = B*X \ T*X. Similarly one defines the bundle B*Y over Y. We observe that the map from S*X into B*X given by ξ → 0 ⊕ ξ defines an isomorphism between S*X and B*X \ T*X. Clearly, φ induces a natural smooth isomorphism of manifolds with boundary. By glueing B*X and B*Y along the boundary B*X \ T*X with the help of φ′, we define the smooth compact manifold M = B*X ∪_{φ′} B*Y. Let Π_X : B*X → X be the projection map. We denote by Ξ_X the set of smooth vector fields of B*X which are tangent to the boundary submanifold B*X \ T*X. Let (x, ξ) be a local chart of T*X and (ρ, θ) be the polar coordinates ρ = ||ξ||, θ = ξ/||ξ||, where || · || denotes the Euclidean norm of T*X. Then a local chart of B*X near B*X \ T*X is given by (x, t, θ) with t = 1/ρ. In this local chart, Ξ_X is generated by the vector fields t ∂/∂x_j, t ∂/∂t, ∂/∂θ_l, where 1 ≤ j ≤ n, 1 ≤ l ≤ n − 1. We will use several times the following obvious lemma. Moreover we observe that the set of classical symbols of order zero on T*X is nothing else but C^∞(B*X).
Before we continue, let us recall the definition of a symplectic Lie algebroid (see for instance [6], [8]).
A symplectic Lie algebroid (E, [·, ·], ρ, ω) over M is given by the following data: [·, ·] is a Lie algebra structure on the sheaf of sections of E; ρ is a smooth map of vector bundles ρ : E → TM such that the induced map on sections is a Lie algebra homomorphism and, for any sections σ and τ of E and any smooth function f on M, the Leibniz identity [σ, fτ] = f[σ, τ] + (ρ(σ)f)τ holds. Lastly, ω is a closed E-two-form on M such that the associated linear map E → E* defines a symplectic structure on E.
2) The ring of E-differential operators is by definition the ring generated by smooth functions on M and smooth sections of E. We leave to the reader the easy proof of the following statement, where I_p is the set of smooth real-valued functions on B*X which vanish at p: there exists a vector bundle E_X over B*X such that the set of smooth sections over B*X of E_X is the same as Ξ_X, and it defines a symplectic Lie algebroid over B*X.
The star product ⋆_X on T*X extends to a star product on B*X. Then for any f, g ∈ C^∞(B*X) and (x, ξ) in the domain of this local chart we have a local expansion; using the local coordinates (5) and Lemma 1, one easily gets all the results of the proposition.
Proposition 1 allows us to formulate the following definition.
Regularized index formula for a Fourier Integral Operator
Let Φ be an elliptic Fourier integral operator of order zero whose canonical relation is C_φ and whose principal symbol a is a unitary section of the bundle Ω^{1/2} ⊗ L_{C_φ} → C_φ: this means that a is homogeneous of degree zero (i.e. constant on each ray) and that aā ≡ 1; see [11]. We can, and will, assume in the sequel that ΦΦ* − Id and Φ*Φ − Id are smoothing. As observed in [11], Φ is Fredholm, with index defined by ind Φ = dim ker Φ − dim coker Φ. In order to give a formula "via regularization" for ind Φ we introduce the following algebra A, which will have a "regularized" trace.
We leave to the reader the easy proof of the following:
Algebraization of a Fourier Integral Operator
We are going to use the following deformed quantized algebra, where the manifold Z is equal to X or Y: A_ℏ^0(B*Z), where C_0^∞(B*Z) denotes the set of smooth functions which vanish to infinite order at B*Z \ T*Z. We observe that ⋆_Z induces a star-product on it. Theorem 1.
1) The map which to any a ∈ S^0(T*X) associates the asymptotic expansion at ℏ = 0 of (σ(Φ Op(a_ℏ)Φ*))_{ℏ^{-1}} induces an algebra isomorphism Φ̃. 2) For each k ∈ N*, there exists an E-differential operator D_k on B*X such that, for any a ∈ C^{+∞}(B*X) which is identically zero in a neighborhood of the zero section, the identity of the expansion above holds. Before proving this theorem we state the next proposition, which is an easy consequence of Proposition 2 and Theorem 1, where the B^{(n)} are E_Y-bidifferential operators.
Proof. Let us first assume part 2). Then, using the results of Section 2.1 and the fact that ΦΦ* − Id and Φ*Φ − Id are smoothing, one proves easily that Φ̃ is an isomorphism whose inverse is the analogous map associated to Φ*. Now let us prove part 2). Following [5], page 26, we recall that the Schwartz kernel of Φ is the finite sum of a smooth function and of oscillatory integrals (supported in small coordinate charts) of the type (6), where b(y, η) ∈ S^0(T*Y) vanishes for ||η|| ≤ 1 and ϕ(y, η) is a homogeneous phase function parametrizing locally the graph C_φ of φ^{-1} and satisfying det ∂²ϕ/∂y∂η ≠ 0, so that locally we have a graph parametrization. Notice moreover that (y, η) → (y, ϕ′_y(y, η)) and (y, η) → (ϕ′_η(y, η), η) are local diffeomorphisms. With these notations, the Schwartz kernel of Φ* is the finite sum of a smooth function and of oscillatory integrals (supported in small coordinate charts) of the type (7). Let a ∈ C^{+∞}(B*X) be identically zero in a neighborhood of the zero section; in order to analyze Φ ∘ Op(a_ℏ) it is enough to study the operator K ∘ Op(a_ℏ), where K denotes the operator whose Schwartz kernel is given by (6). The Schwartz kernel T(y, z) of K ∘ Op(a_ℏ) is given by the corresponding oscillatory integral, in which we replace a(x, ξ) by its Taylor expansion. Using two standard identities and integrating by parts, we see that T(y, z) is the sum of a smooth function and of H(y, z) = ∫ e^{i(ϕ(y,η)−z·η)} b(y, η) (⋯) dη. Now for α ∈ N^n we set c_α(z, η) = ∂_x^α D_ξ^α a(z, η) and we consider the corresponding terms H_α(y, z). If we replace c_α(z, η) by its Taylor expansion then, using integration by parts as above, it follows easily that H_α(y, z) is the sum of a smooth function and of an oscillatory integral ∫_{R^n} e^{i(ϕ(y,η)−z·η)} (⋯) dη. We observe that if we apply the Leibniz rule to the term D_η^β(⋯) in the previous integral, then the differential operators (8) appear. It is clear from Lemma 1 that, expressed in the coordinates (ϕ′_η(y, η), η), these differential operators (8) are E-differential operators. Therefore we have just proved that T(y, z) is the sum of a smooth function and of a term of the indicated form, where the P_k are E-differential operators. Now we recall that the Schwartz kernel of Φ* is the finite sum of a smooth function and of terms of the type (7). So in order to analyze Φ ∘ Op(a_ℏ) ∘ Φ* it is enough to study the operator K ∘ Op(a_ℏ) ∘ K*, whose Schwartz kernel is the finite sum of a smooth function and of integrals of the type (9). Moreover we can write ϕ(y, η) − ϕ(y′, η) = (y − y′)·η̂(y, y′, η), where η̂(y, y, η) = ϕ′_y(y, η), and we can assume (at the expense of shrinking the local coordinate charts) that η → η̂(y, y′, η) is a local diffeomorphism whose inverse is denoted η̂ → η(y, y′, η̂). With these notations, we set A_k(y, y′, ℏ, η̂) = P_k(a)(ϕ′_η(y, η), η) b_1(y′, η). Then a change of variable formula shows that the oscillatory integral (9) equals an integral of the same type in the variable η̂. We observe that, expressed in the coordinates (ϕ′_η(y, η), η), the vector fields ∂_η(ϕ′_η)∂_y are E-differential operators. Therefore one proves easily the assertion of part 2) of the Theorem by replacing A_k(y, y′, ℏ, η̂) by its Taylor expansion Σ_{β∈N^n} (1/β!) ∂_{y′}^β A_k(y, y′, ℏ, η̂)|_{y′=y} (y′ − y)^β and using, as before, integration by parts.
The formal deformation and traces on B at the boundary of B*X
We are going to use the ⋆-products denoted ⋆_X, ⋆_Y on B*X and B*Y defined in Propositions 2 and 4, and we set A_ℏ(B*X), A_ℏ(B*Y) accordingly. Let A_ℏ(M) be the vector space of pairs (a, b) whose boundary jets, induced by a (resp. b), correspond under Φ̃. Theorem 1 shows that A_ℏ(M) is an algebra with respect to the diagonal product (⋆_X, ⋆_Y). In particular, pairs of the form (σ(Φ*Φ), σ(ΦΦ*)) belong to A_ℏ(M).
In the statement of the next proposition we will use the notations of Theorem 1.
Computation of τ.
Since the space of traces on A_ℏ(M) may be very big, we introduce the following algebra D_ℏ(M). Another way of describing D_ℏ(M) is given by glueing, where χ is as in the previous Proposition. It is easily seen that τ_can defines, by the same formula as (10), a trace on D_ℏ(M). Proof. For Z = X or Y we set A_ℏ^0(B*Z) accordingly, where C_0^∞(B*Z) denotes the set of smooth functions which vanish to infinite order at B*Z \ T*Z. Then we have the corresponding exact sequence, where T_ℏ(M) denotes the induced formal E-deformation of the sphere at infinity. A direct construction of this deformation may be described as follows. Let P_i denote the space of pseudodifferential operators on, say, X of order ≤ i modulo the smoothing operators. Consider the space of doubly infinite sequences (a_i)_{i∈Z} with a_i ∈ P_i, where the multiplication by ℏ acts as the right translation. If we endow it with the natural product, it is easily seen to be isomorphic to T_ℏ(M). Any trace τ on T_ℏ(M) is given by a sequence of C-linear, C[[ℏ]]-valued functionals τ_n on P_n such that the compatibility conditions hold. The ℏ-linearity of τ implies that τ_{i+1} = τ_i, and the trace condition on τ implies that each τ_i is a trace on the algebra of pseudodifferential operators modulo the smoothing operators. Recall that, on this latter algebra, the Wodzicki residue res is the unique trace up to a multiplicative constant. Thus τ is, up to a multiplicative constant, uniquely determined by τ_{−n} = res, and hence the space of traces on T_ℏ(M) is one-dimensional. We recall that A_ℏ^0(T*X) is H-unital (in the sense of Wodzicki, see [13]), so we have the corresponding long exact sequence in cyclic cohomology; the connecting map δ : HC^0(A_ℏ^0(T*X) ⊕ A_ℏ^0(T*Y)) → HC^1(T_ℏ(M)) is given by taking a trace on A_ℏ^0(T*X) ⊕ A_ℏ^0(T*Y), extending it to a linear functional on D_ℏ(M) and taking its Hochschild boundary. In particular, it is not zero (this is equivalent to the existence of a pseudodifferential operator with nonzero index!). This implies that HC^0(D_ℏ(M)) is either one or two dimensional. Since, with the notations of the Proposition, τ_can and τ_1 are two linearly independent elements of the vector space of traces on D_ℏ(M), the rest of the statement of the above Proposition follows.
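For the reader's convenience, we recall the standard formula for the Wodzicki residue invoked above (textbook material, stated up to a normalization constant; it is not a display from the source):

```latex
% Wodzicki residue of a pseudodifferential operator A on a closed n-manifold X,
% where \sigma_{-n}(A) is the degree -n homogeneous component of the symbol:
\operatorname{res}(A)
  \;=\; \frac{1}{(2\pi)^{n}} \int_{S^{*}X} \sigma_{-n}(A)(x,\xi)\, d\xi\, dx ;
% up to a scalar, it is the unique trace on pseudodifferential operators
% modulo smoothing operators.
```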
The algebraic index theorem for the Lie algebroid E
The following Theorem is proved in [8] and is an extension to the symplectic Lie algebroid (E, [·, ·], ω) of the Riemann-Roch theorem (on symplectic manifolds) for periodic cochains of [1], [7]. Theorem 2. The corresponding diagram is commutative, where σ is the specialization map at ℏ = 0, μ is the Hochschild-Kostant-Rosenberg map, μ_ℏ is the trace density map defined in [7], and θ is the characteristic class of the deformation of the symplectic Lie algebroid (E, [·, ·], ω) ([8]).
1) The natural injection Ω^• ⊂ EΩ^• induces a map in cohomology. 2) There exists a constant C such that the corresponding estimate holds. Moreover 1 • μ_ℏ(1, 1) = 0 and, for any (a, b) ∈ D_ℏ(M) such that a is zero in a neighborhood of the boundary of B*X, 1 • μ_ℏ(a, b) = 0.
Proof. 1) A standard Mayer-Vietoris sequence argument shows that EH^{2n}(M, C) is indeed two dimensional. The fact that (∫_reg, 1) defines a basis is left to the reader.
2) This is an easy consequence of part 1) and of the properties (see [7], [8]) of the trace density map μ_ℏ.
Local formula for the index of a Fourier Integral Operator
Let ∇^X be a connection on the tangent bundle T(B*X) and Â(T*X) an associated representative form of the Â-class of ∇^X. The symplectomorphism φ induces a connection φ*(∇^X) on the tangent bundle of B*Y \ Y; let ∇^Y denote its extension to a connection on T(B*Y) and Â(T*Y) an associated representative differential form of the Â-class of ∇^Y. Then the index formula of the main theorem holds. Proof. One obtains this formula by first applying Proposition 3, Theorem 2 and Proposition 7 and then by letting ℏ → 0+. As we will see below, the characteristic classes of vector bundles on M involved are in fact standard de Rham cohomology classes, and hence the regularized integral coincides with the orientation class of M.
The previous formula shows that if φ extends as a symplectomorphism T*X → T*Y up to the zero section, then ind Φ = 0. For a deformation associated with a Fourier Integral Operator (as in Proposition 5), the characteristic class θ of Theorem 2 is in fact of a special form in which θ_0 ∈ H^2(M, C) is a closed differential form (not only an E-differential form). In order to prove this and to identify the relevant characteristic class, we will give below a slightly nonstandard description of a formal deformation.
7.1. General construction of the characteristic class of a formal deformation. Let us start with some notation. Let A denote the Weyl algebra of the symplectic vector space R^{2n} with the standard symplectic structure, i.e. the algebra generated by ℏ and the vectors x̂_l, ξ̂_l (1 ≤ l ≤ n) satisfying the relations [ξ̂_k, x̂_l] = √−1 ℏ δ_{k,l}. The algebra A is completed in the topology associated to the ideal generated by {x̂_l, ξ̂_l, ℏ; 1 ≤ l ≤ n} and has the grading induced by deg x̂_l = deg ξ̂_l = 1, deg ℏ = 2.
The corresponding Lie algebra ℏ^{-1}A will be denoted by g̃. We set g = Der(A) = g̃/center and G = Aut(A) = exp(g_{≥0}). We set G̃ = {g ∈ ℏ^{-1}A | g ∈ sp(2n, R) mod g_{≥1}} and will endow it with the group structure coming from the exponential map. Note that G̃ is an extension of G associated to the (Lie algebra) central extension g̃ of g.
We endow the bundle R^{2n} × A with the obvious fiber-wise action of G and with the g̃-valued (Fedosov) connection ∇̃_0. Let us recall (see Section 2.2) that a local chart of B*R^n near B*R^n \ T*R^n is given by (11). By using the local coordinates (11) one checks easily that ∇̃_0 extends as an E_{R^n}-connection, still denoted ∇̃_0, of B*R^n × A.
The description given below of a formal deformation of a symplectic Lie algebroid structure on M is just the representation of the Fedosov construction in terms of the bundle of jets on M, with the fiber-wise product structure induced by the ⋆-product (which is isomorphic to the Weyl bundle).
Local description of the characteristic class θ of a formal deformation.
The deformation is described by a local (Darboux) cover {U_i}_{i∈I} of (M, ω), a collection of functions {g_{i,j} : U_i ∩ U_j → G̃} and a collection of g̃-valued E-connections ∇̃_i on U_i × A which, when expressed in terms of local Darboux coordinates (x_1, …, x_n, ξ_1, …, ξ_n) (resp. (11)) if U_i does not meet (resp. meets) the boundary at infinity, are equal to ∇̃_0 modulo g̃_{≥1}, and such that the following three conditions hold.
1) The cocycle condition g_{i,j} g_{j,k} = g_{i,k} on U_i ∩ U_j ∩ U_k. In particular {g_{i,j} : U_i ∩ U_j → G̃} defines a smooth bundle W of algebras over M with fiber isomorphic to A and structure group G̃.
2) The local connections ∇̃_i define a g̃-valued connection ∇̃ on the bundle W, i.e. g_{i,j} ∇̃_j = ∇̃_i g_{i,j}. 3) The induced g-valued connection ∇ on the bundle W is flat, i.e. θ = ∇̃² is a globally defined differential form on M with values in the center of g̃, necessarily of the form sketched below. The algebra of ∇-flat sections of W is a formal deformation of (M, ω) whose characteristic class is θ.
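The shape of θ is plausibly the following, matching the earlier remark that θ_0 is the coefficient of ℏ^0 in θ (hedged; the sign and normalization of the ω/ℏ term depend on conventions):

```latex
% Hedged sketch of the form of the characteristic class:
\theta \;=\; \widetilde{\nabla}^{2}
  \;=\; \frac{1}{\hbar}\,\omega \;+\; \theta_{0} \;+\; \sum_{k \ge 1} \hbar^{k}\,\theta_{k} ,
\qquad \theta_{0} \in H^{2}(M,\mathbb{C}) .
```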
7.2. Local canonical liftings. We endow R^{2n} with its canonical symplectic structure ω = Σ_{l=1}^n dξ_l ∧ dx_l. Given any smooth, C[[ℏ]]-valued function H on R^{2n}, we consider its Hamiltonian vector field {H, ·} and associate to H a g̃-lift D_H of the Lie derivative L_{{H,·}}. We can think of it as an element of the Lie algebra of the semidirect product of C^∞(R^{2n}, G̃) by the pseudogroup of local diffeomorphisms of R^{2n}. The D_H's form a Lie algebra and satisfy the natural commutation relations. We will also have occasion to use the operator (12), which commutes with ∇̃_0. 7.3. The cotangent bundle case. The deformation of T*X associated to the sheaf of differential operators on X can now be described as follows.
Locally, on a coordinate domain U ⊂ X, we use coordinates on U to give an explicit symplectomorphism of T*U onto an open subset of T*R^n = R^{2n} and use the Weyl deformation of R^{2n} to construct the deformation of T*U. This amounts to the choice of a (g̃-valued) connection given in our local coordinates (x_1, …, x_n) on U and the induced local coordinates on T*U. The infinitesimal change of coordinates on U is given by a vector field of the form Σ_{l=1}^n X_l ∂_{x_l}, and the associated infinitesimal symplectomorphism of T*U is given by the Hamiltonian vector field {Σ_l X_l ξ_l, ·}. It is immediate to see that the map Σ_l X_l ∂_{x_l} → D_{Σ_l X_l ξ_l} is a Lie algebra homomorphism.
The associated local diffeomorphisms (coordinate changes) exp(Σ_l X_l ∂_{x_l}) lift to local isomorphisms of the bundle T*U × A given by exp(D_{Σ_l X_l ξ_l}). Given a local coordinate cover {U_i}_{i∈I} of X, it is now immediate to construct the associated G̃-valued cocycle {g_ij} glueing the bundles together. Note that, since the D's do not commute with the connection ∇̃_0, the corresponding collection of connections ∇_i = ∇̃_0 in the i-th coordinate system on T*U_i do not glue together. But it is not difficult to check that they differ by the scalar term (1/2) d log det Dg_ij, where Dg_ij is the induced action of g_ij on the tangent bundle. By trivializing the cocycle (1/2) d log det Dg_ij in Č^1(X, Ω^1(T*X)) we get a globally defined connection ∇̃, and it is immediate that the characteristic class of the associated deformation is (1/2) π* c_1(T_C X), where π : T*X → X is the canonical projection. It is also immediate that the deformation constructed in this way coincides with the one associated to the calculus of differential operators on X, while its jet at ξ = ∞ gives the deformation associated to the calculus of pseudodifferential operators on X, the characteristic class being given by the jet at ξ = ∞ of (1/2) c_1(E_C) (recall that the real symplectic vector bundle E is the realification of a complex vector bundle E_C).
7.4. The Lie algebroid case. Recall now that the Lie algebroid (on M) (E, [·, ·], ω) is given by glueing (at infinity) the two cotangent bundles (T*X, ω_X) and (T*Y, ω_Y) by the symplectomorphism φ′. To construct the deformation in this case, we will use the following data, whose existence follows immediately from the compactness of the co-sphere bundles of X and Y. 1. A local coordinate cover {U_i}_{i∈I} of X and an open relatively compact neighborhood U_X of the zero section in T*X; 2. A local coordinate cover {V_i}_{i∈I} of Y and an open relatively compact neighborhood U_Y of the zero section in T*Y; 3. For each i ∈ I, a one-homogeneous real-valued function H_i on T*X \ X ≅ T*Y \ Y such that the restriction φ_i of the symplectomorphism φ to T*U_i \ U_X is given by integrating the (time dependent) Hamiltonian flow of H_i. Using the above data, we can construct cocycles g_ij and h_ij intertwining the flat connections ∇̃_0^i up to the term (1/2) d log det Dg_ij ((1/2) d log det Dh_ij respectively), as in the cotangent bundle case. We can also construct, using the notation of (12), a lifting Φ_i of φ_i. From now on we view the Φ_i as local isomorphisms of jets at infinity of the G̃-bundles on the compactified cotangent bundles of X and Y constructed from the cocycles g_{i,j} and h_{i,j}. While both the g_{i,j}'s and the h_{i,j}'s satisfy the cocycle conditions on T*(X) (T*(Y) respectively), nevertheless λ_ij = Ψ_j^{-1} h_ji Ψ_i g_ij ≠ 1, and hence we do not yet have the data necessary to construct the bundle W over M.
The following facts are easy corollaries of the construction. To begin with, note that both (1/2) d log det Dg_ij and (1/2) d log det Dh_ij, as cohomology classes on T*X \ X and T*Y \ Y, represent (under our symplectomorphism) the same cohomology class, to wit half of the first Chern class of the tangent bundle with the complex structure induced by the symplectic form. Since these vanish, we can find a zero-Čech cochain τ_i of the sheaf of functions with values in C \ {0} such that τ_i λ_ij τ_j^{-1} intertwines the (local) flat connections ∇̃_0^i and ∇̃_0^j. In particular, the τ_i λ_ij τ_j^{-1} are given by exponentials of jets of ∇̃_0^i-flat sections of the bundle T*U_i × A and, using a partition of unity, they can be written in the form τ_i λ_ij τ_j^{-1} = λ_i λ_j^{-1}, where λ_i is a jet of a flat section of the Weyl bundle supported on T*U_i \ U_X. We now define an operator Ψ, acting on the set of sections of the Weyl bundle W, by setting Ψ = Ψ_i M_{λ_i} for each i ∈ I. Here M_{λ_i} stands for the operator of multiplication by the flat section λ_i. It is easy to see that Ψ descends to an isomorphism of jets at the sphere at infinity of the deformations of cotangent bundles constructed above. 7.5. The characteristic class of A_ℏ(M). The characteristic class of the deformation constructed above can now be easily obtained as follows. The collection g_ij, h_ij, and the jets at infinity of the Ψ_i M_{λ_i} give a cocycle with values in G̃, and it commutes with the local flat connections up to the Čech cocycle given by the collection of differential forms (1/2) d log det Dg_ij, d log τ_i, (1/2) d log det Dh_ij.
As in the case of the cotangent bundle, we can correct the local connections by a scalar term. The characteristic class θ_0 of the deformation is given by (13) as a cochain in Č^1(M, Ω^1(T*M)). Moreover, in the case that both X and Y admit metalinear structures, the collection {τ_i}_{i∈I} can be thought of as glueing the pull-backs of the half-top-form bundles of X and Y along the graph of the symplectomorphism into a line bundle L over M and, in this case, θ_0 = c_1(L). 7.6. The Fourier Integral Operator. To get the Fourier integral operator we will work locally. We will dispense with the half-density bundles (trivial in any case) for the sake of simplicity of notation. We will begin by constructing, for each i, an operator on L²(R^n) as follows. Choosing local coordinates on U_i and V_i, we can assume that H_i (introduced at the end of Section 7.4) is actually a smooth function on T*R^n which is 1-homogeneous in the cotangent direction. The solution T_i(t) of the associated differential equation is a smooth family of bounded operators. Using the fact that {τ_i}_{i∈I} is a Čech zero-cochain of the sheaf of functions with values in the unit circle and proceeding as in [4], one checks that T_i(1) satisfies Ad(T_i(1) Op(λ_i)) Op(f_ℏ) ∼ Op(Ψ(f)_ℏ) mod ℏ^∞ as ℏ → 0 whenever supp f ⊂ T*U_i \ X (recall that λ_i was introduced in Section 7.4). In other words, the deformation constructed above is associated (in the sense of Proposition 5) to the almost unitary Fourier Integral Operator Φ = Σ_{i∈I} T_i(1) Op(λ_i) whose canonical relation is C_φ. Moreover, the index of this operator Φ_0 is given by ∫_M Â(M) e^{θ_0}. Remark 4. The result above depends on the choice of the τ_i's, which in turn determine the homotopy class of the symbol of the Fourier Integral Operator. Moreover, since the characteristic classes involved are given by differential forms associated to connections on a vector bundle over M, and Ω^• ⊂ EΩ^•, the E-classes involved in the index formulas are in fact identical with the corresponding standard characteristic classes.
Let us recall that the real vector bundle E ≃ TM is given by realification of a complex vector bundle E_C on M (the almost complex structure coming from the symplectic vector bundle structure on E). Moreover, it is easy to see that there exists a choice of the τ_i's such that the associated characteristic class of the deformation coincides with (1/2) c_1(E_C). This gives the following result (compare with [3] and [11]): Theorem 4. There exists an almost unitary Fourier integral operator Φ_0 (as in Section 3) whose canonical relation is C_φ and such that: | 2014-10-01T00:00:00.000Z | 2000-04-05T00:00:00.000 | {
"year": 2000,
"sha1": "192585b1179895cb01d4ac7069959619fa6b1f1f",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.4310/jdg/1090349429",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "192585b1179895cb01d4ac7069959619fa6b1f1f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
222166546 | pes2o/s2orc | v3-fos-license | Identifying optimal capsid duplication length for the stability of reporter flaviviruses
ABSTRACT Mosquito-transmitted flaviviruses cause widespread disease across the world. To provide better molecular tools for drug screens and pathogenesis studies, we report a new approach to produce stable NanoLuc-tagged flaviviruses, including dengue virus serotypes 1-4, Japanese encephalitis virus, yellow fever virus, West Nile virus, and Zika virus. Since the reporter gene is often engineered at the capsid gene region, the capsid sequence must be duplicated to flank the reporter gene; such capsid duplication is essential for viral replication. The conventional approach for stabilizing reporter flaviviruses has been to shorten or modify the duplicated capsid sequence to minimize homologous recombination. No study has examined the effects of capsid duplication length on reporter virus stability. Here we report an optimal length to stabilize reporter flaviviruses. These viruses were stable after ten rounds of cell culture passaging, and in the case of stable NanoLuc-tagged Zika virus (ZIKV C38), the virus replicated to 10^7 FFU/ml in cell culture and produced robust luciferase signal after inoculation in mosquitoes. Mechanistically, the optimal length of capsid duplication may contain all the cis-acting RNA elements required for viral RNA replication, thus reducing the selection pressure for recombination. Together, these data describe an improved method of constructing optimal reporter flaviviruses.
Introduction
Viruses from the arthropod-borne genus Flavivirus afflict people across the globe, causing febrile, neurologic, and hemorrhagic disease [1]. Notable among the flaviviruses are the four serotypes of dengue virus (DENV), which cause an estimated 96 million symptomatic infections yearly [2]; Japanese encephalitis virus (JEV), which causes the annual loss of 709,000 disability-adjusted life years [3]; the recently emerged Zika virus (ZIKV), which has become associated with congenital malformations [4]; encephalitic West Nile virus (WNV); and yellow fever virus (YFV), which periodically emerges from its sylvatic transmission cycle to start an urban transmission cycle [5]. Concerted efforts by scientists and clinicians have brought about vaccines for YFV, JEV, and DENV [6,7], though effective antiviral drugs have yet to be approved. Reporter flaviviruses, first published in 2003 [8], have been critical for high-throughput antiviral compound screens [9,10], host and virus pathogenesis studies [11], and serological diagnosis [12,13]. Despite these advances, reporter flaviviruses suffer from genetic instability during longer periods of growth or passaging, thought to be primarily mediated by recombination [14,15].
Reporter genes are routinely engineered at the beginning of the single open reading frame of the viral polyprotein, between the 5' UTR and the capsid gene, as first described using YFV [16]. RNA signals that aid in genome cyclization, which is essential for viral replication, and that facilitate translation are continuous from the 5' UTR into the beginning of the capsid. These signals must function together; therefore, it is necessary to duplicate a portion of the capsid gene and place it upstream of the inserted reporter gene. Until recently, efforts to stabilize these constructs centered on reducing homology between the duplicated capsid sequences by codon scrambling, in an effort to reduce homologous recombination [16,17]. This also had the benefit of expunging the cis-acting elements in the full capsid sequence, leaving only the upstream elements. Two additional methods for stabilizing reporter flaviviruses have been newly developed, both focusing on blocking recombined reporter viruses from continued infection. Volkova and colleagues report a single-nucleotide insertion in the duplicated capsid portion of a reporter ZIKV that minimally perturbs critical RNA elements but causes a +1 frameshift [18]. If recombination occurs between the duplicated capsid sequences that flank the reporter gene, this frameshift mutation is incorporated into the viral polyprotein and causes mistranslation, effectively removing recombined viruses from the population. We developed a related method for reporter ZIKV and YFV using recombination-dependent lethal mutations in the duplicated capsid [19]. These lethal mutations stop viral particle formation if recombination brings them into the viral polyprotein.
The length of the capsid duplication in different reported flavivirus reporter constructs varies from 25 to 33 or 34, 38, 50, or even the full capsid [10,16-18,20-24]. Shorter lengths are tolerated by some viruses and not by others. At the onset of this investigation, there was no published comparison of capsid duplication length and its effect on stability. It was believed that shorter capsid repeats were preferred because the shorter homologous sequence minimizes homologous recombination. We hypothesize that an optimal length of capsid duplication is required for efficient viral replication; a shortened capsid duplication imposes a selection pressure on viral replication, leading to undesired recombination and deletion of the engineered reporter gene. The goal of this study is to test this hypothesis by engineering different lengths of capsid duplication and investigating the length effect on the stability of the reporter gene in various flaviviruses. Indeed, we found an optimal capsid duplication length of 34 or 38 amino acids that can maintain reporter gene stability for at least ten rounds of cell culture passages. Taking this new approach, we have developed a panel of long-term stable NanoLuc-tagged flaviviruses, including the four serotypes of DENV, JEV, YFV, WNV, and ZIKV. In addition, we demonstrated the use of the reporter flaviviruses for rapid antibody neutralization testing and antiviral drug discovery. Taken together, our results establish a previously unrecognized approach to generate stable reporter flaviviruses that are useful for research and countermeasure development.
Immunofluorescence assay
Vero cells were seeded into chamber slides immediately post-electroporation. At the indicated time points, cells were washed with PBS and fixed with cold methanol at −30°C for >30 minutes. Slides were washed with PBS and blocked with PBS + 1% FBS overnight at 4°C. The flavivirus envelope-reactive antibody 4G2 (ATCC Cat# HB-112), diluted in blocking buffer, was used as the primary antibody and incubated for 1 hour at room temperature. A secondary goat anti-mouse IgG antibody conjugated with Alexa Fluor 488 (Thermo Fisher Scientific Cat# A-11001) was then used to probe for 4G2 for 1 hour. Slides were then washed 3X with PBS, stained with DAPI (Vector Laboratories, H-1200), and imaged on a Nikon Eclipse Ti2 microscope. ImageJ (NIH) was used to process the images.
Focus forming assay
All viruses were titered by focus-forming assay. Viruses were serially diluted ten-fold in DMEM supplemented with 2% FBS and used to infect Vero cells that had been seeded the previous day at 2 × 10^5 cells per well in a 24-well plate. After a 1-h infection with rocking every 15 minutes, the virus was removed and a methylcellulose/DMEM overlay was added. After four days of infection, the overlay was removed, and cells were fixed with a 1:1 solution of methanol/acetone for >15 minutes. Plates were then washed 3X with PBS, blocked with PBS + 3% FBS for 30 minutes, and then incubated with virus-specific mouse immune ascites fluid (MIAF, World Reference Center for Emerging Viruses and Arboviruses, UTMB). After a >1-h incubation with MIAF, plates were washed with PBS and incubated with a horseradish peroxidase-conjugated anti-mouse IgG antibody (SeraCare KPL Cat# 474-1806). After a 3X PBS wash, foci were developed in the dark using an AEC peroxidase substrate kit (Enzo 43825) according to the manufacturer's protocol and imaged using a BioRad ChemiDoc Imaging System.
Reporter virus passaging and stability
Virus recovered from electroporation was termed P0. 500 µL of this was added to a T75 flask with a confluent layer of Vero cells. The infection was discontinued once cell death was observed after which media was harvested. 500 µL of the new passage was then used to infect a new T75 flask for the next passage. This was carried out in parallel series for each virus. Stability was assessed by isolating viral RNA (Qiagen 52904 or TriZOL, Invitrogen 15596026) from each passage and using this as a template for an RT-PCR reaction (Invitrogen 12574) with primers that spanned the 5' UTR to the end of the capsid. The products were then run on a 0.6% agarose gel and imaged with a BioRad Gel-Doc EZ Imager.
Growth kinetics by focus forming assay
Six-well plates were seeded at 8 × 10^5 cells per well with Vero cells the day before infection. Cells were infected at a MOI of 0.01 in triplicate for 1 h with shaking every 15 minutes, followed by a 3X PBS wash and addition of media supplemented with 2% FBS. Supernatant samples were taken at 24, 48, 72, 96, and 120 h and titered by focus-forming assay.
Reporter neutralization assays
Reporter neutralization tests were done by two-fold serially diluting sera, starting at 1:50 in DMEM with 2% FBS. Sera samples positive for ZIKV and DENV1-4 were pooled from mice infected with the respective virus. YFV and JEV sera samples were from vaccinated mice. Sera dilutions were mixed 1:1 by volume with the respective reporter virus and incubated at 37°C for 1 h. The virus/sera mixture was then plated on Vero cells in a white, opaque 96-well plate that had been seeded at 1.5 × 10^4 cells per well. After a 4-h incubation at 37°C, the wells were washed 2X with PBS and 50 µL of NanoGlo substrate diluted 1:50 in NanoGlo Assay Buffer was added to the cells. Plates were read in a BioTek Cytation 5 plate reader after 3 minutes with a gain of 150.
Positive controls consisted of virus infection with no sera. Negative controls comprised virus plated in wells with no cells; this negative control allows for subtraction of background luciferase signal from the virus media. After subtraction of negative controls, values were converted to a percent of the positive control. The data were then analyzed by four-parameter nonlinear regression, with the top and bottom constrained to 100 and zero, respectively, and the NT50 reported.
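As an illustration of this analysis step, the following is a minimal sketch (not the authors' analysis code; the function names, the example dilution series, and the readings are hypothetical) of extracting an NT50 by constrained four-parameter logistic regression:

```python
# Minimal sketch (illustrative, not the authors' code): fit NT50 from
# background-subtracted luciferase readings expressed as percent of the
# no-serum positive control, with the logistic top/bottom fixed at 100 and 0.
import numpy as np
from scipy.optimize import curve_fit

def percent_of_control(dilution, nt50, hill):
    # Rises from ~0 (strong neutralization, concentrated serum) toward 100
    # (no neutralization, very dilute serum); equals 50 at dilution == nt50.
    return 100.0 / (1.0 + (nt50 / dilution) ** hill)

# Hypothetical two-fold dilution series starting at 1:50, with example readings.
dilutions = 50.0 * 2.0 ** np.arange(8)              # 1:50 ... 1:6400
readings = np.array([4.0, 9.0, 16.0, 31.0, 54.0, 77.0, 91.0, 97.0])

(nt50, hill), _ = curve_fit(percent_of_control, dilutions, readings,
                            p0=(400.0, 1.5), maxfev=10_000)
print(f"NT50 ~ 1:{nt50:.0f} (Hill slope {hill:.2f})")
```

The reported NT50 is simply the dilution at which the fitted curve crosses 50% of the positive control.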
Antiviral assays
The panflavivirus inhibitor NITD008 was diluted in 90% DMSO to 10 µM and then two-fold serially diluted 8 times. These dilutions were mixed with virus (MOI 0.1: DENV1, DENV2, DENV3; MOI 0.01: ZIKV; MOI 0.001: DENV4, YFV, JEV) and plated on Huh7 cells that had been seeded at 1.5 × 10^4 cells per well the previous day in media with 2% FBS. Cells were washed 48 h post infection three times with PBS, followed by addition of NanoGlo substrate diluted 1:100 in NanoGlo Assay Buffer. Plates were read by a BioTek Cytation 5 plate reader after 3 minutes.
Mosquito infections
For the micro-injection study, Rockefeller strain Aedes aegypti mosquitoes were injected (100 nL) intrathoracically with virus diluted in PBS so that each mosquito received 50 FFU. Mosquitoes were cultured at 28°C for 8, 12, and 18 days. For the blood-feeding study, sheep's blood was centrifuged at 1000 g, 4°C for 20 minutes to separate plasma and cells. The plasma was heat-inactivated at 56°C for 1 hour and the cells were washed twice with PBS, after which they were combined again. The blood was spiked with virus to a concentration of 2 × 10^6 FFU/mL. Engorged mosquitoes were further reared and harvested at 8, 12, and 18 days. At the indicated time points, mosquito samples from both experiments were thoroughly homogenized (Qiagen TissueLyser II) in 200 µL PBS and centrifuged to pellet the tissue. 50 µL of supernatant was used for both RT-qPCR and luciferase assay. RNA was harvested using a Qiagen RNeasy minikit and used in a TaqMan RT-qPCR reaction targeting a region in NS5. Aedes aegypti actin served as a control. For the luciferase assay, 50 µL of supernatant from each homogenized mosquito was added to a 96-well opaque white plate, followed by addition of NanoGlo substrate diluted 1:50 in NanoGlo Assay Buffer. Samples were read in a BioTek Cytation 5 plate reader. Background luciferase levels (no mosquitoes) and clean, uninfected mosquitoes were used as negative controls.
Statistical analysis
GraphPad Prism 8 was used for graphing and statistical analysis. Statistical tests used, as well as significance levels, are noted in the figure legends. All replicate values are shown on each graph.
Illustrations
Figures were created using BioRender and Adobe Illustrator.
Panel of reporter flaviviruses
Reporter flaviviruses were constructed as first described in [16], using capsid duplication lengths as found in the literature [10,16,20,23,24] (Figure 1(A,B)). All viruses except DENV3 and DENV4, which were assembled by in vitro ligation (Figure S1), were constructed using traditional, plasmid-based reverse genetics approaches [28]. DENV3 and DENV4 full genomes are difficult to clone in bacteria due to putative toxic elements [29,30]. The NanoLuc gene followed by the 2A sequence from Thosea asigna virus (T2A) was inserted between a duplicated portion of the capsid and the full-length capsid. To help prevent homologous recombination, the codons in the capsid sequence corresponding to the duplicated portion were scrambled to reduce homology. All full-length DNAs were used as templates in an in vitro transcription reaction to generate full-length RNAs, which were subsequently electroporated into Vero cells (for ZIKV, YFV, and WNV) or baby hamster kidney (BHK) cells (for DENV1-4 and JEV). Immunofluorescence assay (IFA) was used to indicate viral spread post electroporation (Figure 1(B) and Figure S2(A)). Focus-forming assays showed that all reporter viruses formed distinct foci (Figure 1(C)) but no plaques when stained with crystal violet (data not shown). The IFA and focus-forming results corroborate the viral replication kinetics among the different versions of reporter viruses (see later replication kinetic results in Figure 3). To assess stability, viruses were passaged ten times on Vero cells according to the scheme in Figure S3(A). RT-PCR products corresponding to each passage show consistent band size for all viruses out to P10, including West Nile virus (WNV), for which results were obtained after the initial review of this manuscript (Figure S2(B)). The exception to this positive outcome is ZIKV and YFV, which have the shortest capsid duplication, 25 amino acids (Figure 1(D)). These two viruses showed a decrease in band size during passaging. These results suggested that the length of the capsid duplication may have an impact on virus stability.
Extended capsid duplication
The results from passaging these different flaviviruses indicated that a longer capsid duplication could positively impact stability. This hypothesis was tested by creating ZIKV and YFV C38 NanoLuc viruses. C38 was chosen based on the robust results from DENV1-4 using this length. During this time, Volkova et al. published a report on reporter ZIKV and the effect of capsid duplication size on replication. Their conclusion was that C50 is the optimal length for viral growth, this being the shortest length that was not statistically attenuated compared to WT virus [18]. Based on this report, we also constructed a C50 ZIKV. IFA results post-electroporation suggested that the ZIKV C38 virus replicated more robustly than the C25 and C50 viruses, while YFV C38 showed little difference compared to YFV C25 (Figure 2(A), compare to Figure 1(B)). Focus size comparison (Figure 2(B)) between ZIKV C25 and ZIKV C50 showed little difference, but ZIKV C38 formed clear, larger foci similar to non-reporter wild-type ZIKV (Figure S3(B)). YFV C38 focus size was only slightly larger than YFV C25. The C38 ZIKV and YFV and the C50 ZIKV were continuously passaged and analyzed for reporter gene stability by RT-PCR (Figure 2(C)). Unexpectedly, ZIKV C50 showed early instability, so only passaging results from P0-P2 are shown. In contrast, ZIKV and YFV C38 were stable after ten rounds of continuous cell culture. These results suggest that an optimal length of duplicated capsid sequence (e.g. C38) is required for reporter virus stability. Under such conditions, frameshift or other mutations in the duplicated capsid region are not required for the stability of the reporter virus.
Effect of extended capsid duplication on viral growth
The effect of capsid duplication length on viral growth was assessed on Vero cells. Cells were infected with ZIKV C25, C38, and C50 at a MOI of 0.01 and assessed at 24, 48, 72, 96, and 120 h post infection by focus-forming assay (Figure 3(A)). ZIKV C38 replicated to significantly higher titers than ZIKV C25 and ZIKV C50, reaching 10^7 FFU/mL, at 24, 48, and 72 h post infection. ZIKV C25 and ZIKV C50 growth were similar across the same time period, though ZIKV C25 titers did continue to increase until 96 h. Conversely, growth comparison of YFV C25 and YFV C38 showed no significant difference at any time point, despite YFV C38's increased stability (Figure 3(B)). DENV1-4 and JEV growth kinetics on Vero cells (MOI 0.01) show that DENV1 replicated to significantly higher titers at 24-96 h post infection (Figure 3(C)). Together these data show that, among C25, C38, and C50, C38 is the optimal capsid duplication length for ZIKV replication. In contrast to these results, there seems to be no replication advantage for YFV C38 over YFV C25.
To directly examine the effect of reporter gene insertion on viral replication, we compared the replication kinetics between the parental wild-type ZIKV and the reporter ZIKV C38 on Vero cells (Figure S3). The results showed that replication between the two was similar at all time points except 72 h, where the WT virus was 10-fold lower, possibly due to death of the host cells.
Rapid neutralization tests and antiviral discovery
One of the aims of this study was to develop stable reporter flaviviruses for neutralization tests and antiviral compound assays. As we have previously done, the stable reporter viruses were used to test a panel of flavivirus-immune mouse sera (Figure 4(A)) in a four-hour neutralization test. NT50 results are indicative of the previous infection, with the homologous virus yielding the highest NT50, though some cross-neutralization by heterologous viruses was observed (Figure 4(B)). Using the flavivirus inhibitor NITD008 [31,32], each virus was used in an antiviral compound assay (Figure 4(C)). Increasing concentrations of NITD008 decreased luciferase expression compared to control (Figure 4(D)), resulting in potent EC50 values (Figure 4(E)). These results support our previous data showing that NanoLuc-tagged flaviviruses can be valuable tools in rapid sero-diagnostic assays and antiviral compound screens.
ZIKV C38 in mosquitoes
Reporter virus use in mosquito experiments is highly advantageous, since RNA extraction from individual mosquitoes is time-consuming. Intrathoracic injection of mosquitoes with ZIKV C25 showed no viral replication and no luciferase expression (data not shown), possibly due to reporter gene-induced replication attenuation. Because ZIKV C38 had more robust luciferase expression compared to C25 in C6/36 cells (data not shown), we characterized ZIKV C38 replication in whole Aedes aegypti mosquitoes by both microinjection, which bypasses the midgut barrier, and membrane blood feeding. Mosquitoes were microinjected in the thorax with 50 FFU ZIKV C38 and, at days 8, 12, and 18, whole mosquitoes were homogenized in PBS and assayed by both qPCR (Figure 5(A), left panel) and luciferase assay (right panel). Although viral RNA did not increase from day 8 to day 12, the luciferase assay shows a statistically significant peak at day 12. In a separate experiment, mosquitoes were allowed to feed on blood spiked with 2 × 10^6 FFU/mL ZIKV C38. These mosquitoes were also homogenized on days 8, 12, and 18 and evaluated by both qPCR and luciferase assay (Figure 5(B), left and right panels, respectively). By blood feeding, ZIKV C38 titers increased at each time point by qPCR, though the increase was not statistically significant. Corroboratively, the luciferase activities significantly increased from day 8 to 18. Uninfected mosquitoes were also assayed in Figure 5(B) (right panel) as a negative control. These data show that ZIKV C38 replicates in Aedes aegypti mosquitoes and that luciferase output can be used to assay viral replication.
Discussion
Flavivirus reporter constructs have been notoriously unstable since they were first reported [14,33]. Improvements in design [16] have increased the stability, but previous efforts have fallen short of the high standard of ten passages. Here we report a panel of NanoLuc-tagged flaviviruses with stability to at least ten passages in cell culture, which is double the passages routinely reported. It was found that C38 ZIKV and YFV reporter viruses were more stable than their C25 counterparts, and in the case of ZIKV, C38 had a distinct replication advantage. Previous hypotheses for constructing reporter flaviviruses assumed that shorter capsid duplication lengths would be more stable, due to a shorter stretch of homologous sequence. These results challenge that assumption, suggesting that C38 is optimal.
Figure 3. Effect of capsid duplication length on viral growth. Multi-step growth kinetics (MOI 0.01, n = 3) on Vero cells, using focus-forming assay to quantify growth for A. ZIKV C25, C38, and C50; B. YFV C25 and C38; C. DENV1-4 and JEV. 2-way repeated measures ANOVA with Tukey's multiple comparisons test was used to assess significance for A and C. 2-way repeated measures ANOVA with Sidak's multiple comparisons test was used for B (p > 0.05 = ns, p < 0.05 = *, p < 0.01 = **, p < 0.001 = ***, p < 0.0001 = ****).
The establishment of a stable reporter virus system will greatly facilitate the production of reporter virus in cell culture through viral infection and amplification rather than transfection of viral RNA transcribed from its infectious cDNA plasmid. The ease of stable reporter virus production enables potential high-throughput flavivirus neutralization testing and antiviral screening, as recently demonstrated for reporter SARS-CoV-2 [34,35]. Many different cis-acting elements present in the flavivirus capsid coding region have been mapped, including the 5'CS [36], the cHP [37], the 5' DAR [38], and the DCS-PK [39]. These elements work together to regulate RNA translation, genome cyclization, and viral replication. C25 includes all of those elements except the full DCS-PK, a pseudoknot that has been modeled in various flavivirus genomes, including ZIKV [40], and experimentally found to aid viral replication in DENV2 [41] and DENV4 [39]. Extending C25 to C38 includes the full DCS-PK, which may explain the increased replication capacity of ZIKV C38 compared to ZIKV C25. We hypothesize that the lack of the DCS-PK in ZIKV C25 caused increased selective pressure and helped drive recombination. Inclusion of the DCS-PK in ZIKV C38 lessened this selective pressure, extending its stability. In contrast, YFV C38 replicated very similarly to YFV C25, supporting a model in which YFV lacks, or has a shortened, DCS-PK [39]. Despite this, the lengthened capsid duplication still had a positive effect on YFV stability. It remains to be determined what RNA element in the C38 coding sequence facilitates YFV replication.
Previous work with reporter ZIKV identified C50 as an optimal length for capsid duplication in relation to replication, but its effect on stability was not independently tested [18]. In our hands, C38 performed remarkably better in both stability and viral replication when compared to C50. The discrepancy could be due to different ZIKV strains, different sequences flanking the reporter gene, and the absence of a frameshift mutation, which had been included in that construct to help block recombination. ZIKV C50 includes the DCS-PK, along with the other required replication elements, and as such would be expected to replicate similarly to ZIKV C38. The extra capsid amino acids C39-C50, which contain residues shown to be important in capsid dimerization [42], could allow the C50 capsid fragment to interfere with full-length capsid, thus explaining the attenuation of C50 compared to C38. This selective pressure, along with a larger region for possible recombination, could also be driving the poor stability seen during passaging.
Figure 5. ZIKV C38 Nano in mosquitoes. A. Aedes aegypti mosquitoes were micro-injected with 50 FFU ZIKV C38 Nano (n = 30 per day). On days 8, 12, and 18, whole mosquitoes were collected and individually homogenized in PBS. Samples were analyzed by both qPCR (left panel) and luciferase assay (right panel). B. Aedes aegypti mosquitoes (n = 50 per group) were inoculated by membrane blood-feeding on sheep's blood spiked with 2 × 10^6 FFU/mL ZIKV C38 Nano. Mosquitoes were then analyzed as in panel A. For the luciferase assay, clean (uninfected) mosquitoes were used as a control. Values for luciferase activity are reported relative to background (no mosquito) levels. ANOVA with Tukey's post-hoc test was used to assess significant differences in all panels (p > 0.05 = ns, p < 0.05 = *, p < 0.01 = **, p < 0.001 = ***, p < 0.0001 = ****).
We used sera from mice with known virus infections to demonstrate the utility of the reporter viruses for neutralization testing. The reporter virus neutralization assay has been optimized in a 96-well format for high-throughput testing. For clinical use of the reporter neutralization test, the assay must first be validated using patient sera with well-defined viral infections. The validation could be achieved by comparing the antibody neutralizing titers derived from the conventional plaque reduction neutralization test (PRNT) with those derived from the reporter virus assay. Efforts are ongoing to obtain well-defined patient sera to perform the clinical validation of the reporter virus neutralization assay.
Conclusion
Together, these results demonstrate that extending the portion of capsid duplicated to make ZIKV and YFV reporter viruses can increase their stability and in the case of ZIKV enhance its replication capabilities in mammalian cells and whole mosquitoes. These data help inform a new generation of stable flavivirus reporter constructs to be used for high-throughput drug screens, serological diagnosis, pathogenesis studies, and transgene delivery. | 2020-09-29T13:06:09.969Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "d06f4de7de849cb502bd07c997b96ba834e0b0ae",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/22221751.2020.1829994?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "95d113ca8b3a2ab4e42c5269cd8c21d1e8e76244",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
6861505 | pes2o/s2orc | v3-fos-license | Stationary uphill currents in locally perturbed Zero Range Processes
Uphill currents are observed when mass diffuses in the direction of the density gradient. We study this phenomenon in stationary conditions in the framework of locally perturbed 1D Zero Range Processes (ZRP). We show that the onset of currents flowing from the reservoir with smaller density to the one with larger density can be caused by a local asymmetry in the hopping rates on a single site at the center of the lattice. For fixed injection rates at the boundaries, we prove that a suitable tuning of the asymmetry in the bulk may induce uphill diffusion at arbitrarily large, finite volumes. We also deduce heuristically the hydrodynamic behavior of the model and connect the local asymmetry characterizing the ZRP dynamics to a matching condition relevant for the macroscopic problem.
I. INTRODUCTION
Fick's law of diffusion stands as one of the basic tenets of the theory of transport phenomena and Irreversible Thermodynamics, and predicts that mass diffuses against the density gradient [1,2]. Nonetheless, there is some increasing experimental and theoretical evidence, in the literature, of diffusive currents flowing from a reservoir with lower density towards one with larger density, that are hence said to go uphill [3][4][5][6]. Such "anomalous" currents have been observed and studied in different contexts. Consider, for instance, a system made of particles of a certain species A, whose diffusive motion obeys the standard Fick's law, namely the current of particles A includes a term proportional to minus the density gradient of A itself. Suppose that a second species B is then added, whose interaction with A affects the diffusive motion of the particles of the first species. Thus, a second contribution to the current of particles A arises, related to the density gradient of B, that may counterbalance the first contribution. As a result, at variance with the standard Fick's law prescription [7,8], the species A undergoes a process of uphill diffusion induced by the external potential generated by the species B: this is, essentially, the phenomenon highlighted in the seminal paper by Darken [9], reporting an experiment of transient diffusion of carbon atoms subjected to a repulsive * emilio.cirillo@uniroma1.it † matteo.colangeli1@univaq.it interaction with silicon particles in a welded specimen, where the silicon content is concentrated on the left of the weld (and negligible on the right). A second stationary mechanism which is known to produce uphill currents is related to the presence of a phase transition in non-equilibrium conditions [10,11]. This phenomenon has been observed in computer simulations in a model constituted by a single species undergoing a liquid-vapor phase transition. This system, with one boundary fixed at the density of the metastable vapor phase and the other at the density of the metastable liquid phase, exhibits a stationary state in which the current flows from the vapor boundary to the liquid one. In particular, in Ref. [12] the authors prove the existence of the uphill diffusion phenomenon for a stochastic cellular automaton, in which the particles are subjected to an exclusion rule (preventing the simultaneous presence of two particles with same velocity on a same site) and to a long-range Kac potential [13]. As distinct from the Darken experiment, the mechanism responsible, in this case, for the breaking of the standard diffusive behavior is the creation of a sharp interface located near one of the two boundaries -called bump therein -separating the vapor and the liquid phases. The density profile results essentially decreasing almost everywhere along the 1D spatial domain, except at the transition region: in fact, the stationary current proceeds downhill in most of the space, but it goes uphill right along the interface.
Notably, the occurrence of stationary uphill currents induced by a phase transition was also recently reported in [14], for a 2D Ising model in contact with two infinite reservoirs fixing the values of the density at the horizontal boundaries.
In this paper we study a different mechanism to produce uphill currents, based on a local perturbation of a stationary state. The model discussed below allows one to recover some of the important features of the physical examples of uphill diffusion mentioned above. In fact, despite being simple enough to permit an analytical solution, it gives rise to a stationary uphill diffusion which is not induced by a phase transition as in [11,12,14], but is triggered by a local asymmetry in the hopping rates that rule the microscopic dynamics in the bulk. The asymmetry at the center of the lattice stands as a caricature of the external potential exerted by the silicon particles on the carbon atoms, as described in the Darken experiment [9], cf. also the set-up discussed in Sec. III of [15].
The effect of local perturbations of stationary states is a fascinating problem which has attracted a lot of attention in the recent physics and mathematics literature, see e.g. the review [16]. A classical question in this field is the so-called blockage problem, posed in [17] for the totally asymmetric simple exclusion process on a ring. The question is whether slowing down a single bond on the lattice can ultimately affect the value of the stationary current in the infinite volume limit, see also [18][19][20] for related results for different models.
Unlike the blockage problem, the question addressed in this paper concerns the effect of a local asymmetry in a globally symmetric model. Consider the stationary state of a 1D system with symmetric dynamics and suppose that a nonvanishing current exists due to the coupling of the system with two particle reservoirs at the boundaries. What happens if the dynamics is perturbed and made asymmetric on a single site of the lattice? Is such a local asymmetry effective enough to reverse the natural direction of the current flow?
More precisely, the model we shall consider is a 1D channel with open boundaries at its extremities (hereafter called ZRP-OB), in contact with two reservoirs. The reservoirs are equipped with assigned particle densities, which also fix the injection rates at the boundaries. The dynamics in the channel is symmetric, therefore in the steady state a particle current exists which moves from the reservoir with larger density to the one with smaller density, as prescribed by Fick's law. Then, on a single site at the center of the lattice, the dynamics is modified in such a way that particles locally hop with higher rate towards the reservoir with larger density. More general inhomogeneous random ZRPs have been considered in the recent literature [21,22]. We prove that such a bias may give rise to stationary uphill currents in the channel. In particular, we prove that for any fixed difference between the two injection rates it is always possible to tune the local asymmetry in order to observe an uphill current for arbitrarily large finite volumes. The mechanism is the following: for sufficiently large volumes the density at the boundaries of the channel depends only on the injection rates and not on the local bias; moreover, if the bias is large enough the current changes sign so that the particles move uphill. The model we shall use is a 1D Zero Range Process. More detailed results will be derived by establishing an appropriate form for the intensity function, namely the rate at which a site is updated, and eventually this will be chosen proportional to the number of particles occupying the site. In this case, we shall also develop a heuristic argument to derive the hydrodynamic equations. These will be endowed with two matching conditions, one concerning the density function and another its first space derivative, at the center of the slab, stemming from the local asymmetry in the hopping rates characterizing the microscopic dynamics. We will then solve the problem via a Fourier series expansion and we shall finally compare, finding a perfect match, the solution of the hydrodynamic problem with the evolution of the original ZRP. We also mention that uphill currents are observed in queueing network models.
Moreover, we will introduce a periodic version of the inhomogeneous ZRP, in which the channel is coupled at its extremities with two slow sites, mimicking two finite particle reservoirs, which can also exchange particles between themselves: the whole system thus constitutes a closed circuit (hereafter called ZRP-CC). One of the open questions posed in [12], in the context of stochastic particle systems, was the conjectured existence of stationary states with nonvanishing self-sustained currents running in circuits, a phenomenon also called "time crystals" in the literature [23,24]. We shall not tackle rigorously the existence of those fascinating rotating states here; rather, we aim to give theoretical and numerical evidence that the local asymmetry introduced in the ZRP-CC may lead to a stationary state in which the densities of the finite reservoirs are different and the current flows, in the channel, from the reservoir with lower density to the one with larger density (as was also the case for the ZRP-OB). Steady states for ZRP with periodic boundary conditions and spatially varying hopping rates were also discussed in [25,26].
The paper is organized as follows. In Section II we introduce the two ZRP models, the ZRP-OB and the ZRP-CC, define the stationary current, and recall some useful properties. In Section III we prove the existence of uphill currents for the ZRP-OB model. Section IV is devoted to the study of uphill currents for the ZRP-CC. In Section V we discuss heuristically the hydrodynamic limit of the ZRP-OB and compare the solution of the hydrodynamic equation to the profile evolving according to the stochastic ZRP dynamics. Finally, Section VI is devoted to our brief conclusions.
II. THE MODEL
We define the two ZRP models to be studied in the following sections, see also [27][28][29] for a survey on ZRP models.
A. The ZRP-OB
We consider a positive integer R and define a ZRP on the finite lattice Λ = {1, . . . , 2R + 1} ⊂ Z. We consider the state or configuration space Ω_R = N^Λ. Given n = (n_1, . . . , n_{2R+1}) ∈ Ω_R, the non-negative integer n_x is called the number of particles at the site x ∈ Λ in the state or configuration n. We let u : N → R_+, a positive and non-decreasing function such that u(0) = 0, be the intensity. Given n ∈ Ω_R such that n_x > 0 for some x = 1, . . . , 2R + 1, we let n^{x,x±1} be the configuration obtained by moving a particle from the site x to the site x ± 1; in particular, we understand n^{1,0} and n^{2R+1,2R+2} to be the configurations obtained by removing a particle from the site 1 and 2R + 1, respectively. Similarly, we denote by n^{0,1} and n^{2R+2,2R+1} the configurations obtained by adding a particle to the site 1 and 2R + 1, respectively.
We then consider the ZRP-OB model, defined as the continuous time Markov jump process n(t) ∈ Ω_R, t ≥ 0, with rates

r(n, n^{0,1}) = α and r(n, n^{2R+2,2R+1}) = δ (1)

for particle injection at the boundaries, with rates

r(n, n^{x,x−1}) = q_x u(n_x) for x = 1, . . . , 2R + 1 (2)

for bulk leftward displacements, and with rates

r(n, n^{x,x+1}) = p_x u(n_x) for x = 1, . . . , 2R + 1 (3)

for bulk rightward displacements (see Figure 1). Note that equations (2) and (3) for x = 1 and x = 2R + 1, respectively, account for particle removal at the boundaries. The generator of the dynamics can be written as in equation (4), for any real function f on Ω_R. This means that particles hop almost everywhere on the lattice to the neighboring sites with rates qu(n_x) and pu(n_x). At the center of the lattice, instead, different rates are assumed, namely q̃u(n_x) and p̃u(n_x). The system is "open" in the sense that a particle hopping from the sites 1 or 2R + 1 can leave the channel via, respectively, a left or a right move, with rates γu(n_1) and βu(n_{2R+1}). Finally, particles are injected into the channel at the left and right boundaries with rates α and δ, respectively.
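The display referenced as equation (4) is not reproduced above; based on the rates (1)-(3), a plausible reconstruction of the generator is the following (writing the exit rates as q_1 = γ and p_{2R+1} = β, an identification suggested by the surrounding text rather than stated explicitly):

\[
(\mathcal{L}f)(n) = \alpha\big[f(n^{0,1})-f(n)\big] + \delta\big[f(n^{2R+2,2R+1})-f(n)\big] + \sum_{x=1}^{2R+1} u(n_x)\Big(p_x\big[f(n^{x,x+1})-f(n)\big] + q_x\big[f(n^{x,x-1})-f(n)\big]\Big),
\]

with \(p_x = p\) and \(q_x = q\) away from the special sites, \(p_{R+1} = \tilde p\), \(q_{R+1} = \tilde q\), \(q_1 = \gamma\), and \(p_{2R+1} = \beta\).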
No further characterization of the (infinite) reservoirs is required for the ZRP-OB, as the action of the reservoirs is suitably described in terms of the injection rates α and δ. Nevertheless, it may be useful to think of each injection rate as being proportional to the (fixed, for the ZRP-OB model) particle density of the corresponding reservoir, as proposed in [14] for a continuous-time dynamics, see also [11,12] in the case of a cellular automaton. Hence, a larger injection rate corresponds to a larger density of the reservoir.
B. The ZRP-CC
The definition of the model is similar to the ZRP-OB. We consider the positive integers R, N and define a ZRP on the finite torus Λ = {0, 1, . . . , 2R + 2} ⊂ Z. We consider the finite configuration space Ω_{R,N}. Given n = (n_0, . . . , n_{2R+2}) ∈ Ω_{R,N}, the non-negative integer n_x is called the number of particles at the site x ∈ Λ in the configuration n. We let u : N → R_+, a positive and non-decreasing function such that u(0) = 0, be the intensity. Given n ∈ Ω_{R,N} such that n_x > 0 for some x = 0, . . . , 2R + 2, we let n^{x,x±1} be the configuration obtained by moving a particle from the site x to the site x ± 1, where we denote by n^{0,−1} the configuration obtained by moving a particle from the site 0 to the site 2R + 2, and by n^{2R+2,2R+3} the configuration obtained by moving a particle from the site 2R + 2 to the site 0.
Given p, q, p̃, q̃, λ > 0 we set q_x = q for x = 1, . . . , R and x = R + 2, . . . , 2R + 1, q_{R+1} = q̃, p_x = p for x = 1, . . . , R and x = R + 2, . . . , 2R + 1, and p_{R+1} = p̃. We consider the periodic ZRP defined as the continuous time Markov jump process n(t) ∈ Ω_{R,N}, t ≥ 0, with rates

r(n, n^{0,±1}) = λu(n_0) (5)

r(n, n^{2R+2,2R+1}) = r(n, n^{2R+2,2R+3}) = λu(n_{2R+2}) (6)

for the moves at the slow boundary sites, with rates

r(n, n^{x,x−1}) = q_x u(n_x) for x = 1, . . . , 2R + 1 (7)

for bulk leftward displacements, and

r(n, n^{x,x+1}) = p_x u(n_x) for x = 1, . . . , 2R + 1 (8)

for bulk rightward displacements (see Figure 1). The generator of the dynamics can be written as in equation (9). The ZRP-CC model differs from the ZRP-OB in the boundary conditions: particles can neither exit nor enter the system. Furthermore, the sites 0 and 2R + 2 are updated with rates proportional to λ. The interesting case, from the modelling perspective, is that in which λ is much smaller than one: namely, the boundary sites are slowed down and mimic the action of large particle reservoirs.
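Analogously, the generator referenced as equation (9) can plausibly be reconstructed from the rates (5)-(8) as

\[
(\mathcal{L}f)(n) = \lambda u(n_0)\big[f(n^{0,1})-f(n)\big] + \lambda u(n_0)\big[f(n^{0,-1})-f(n)\big] + \lambda u(n_{2R+2})\big[f(n^{2R+2,2R+1})-f(n)\big] + \lambda u(n_{2R+2})\big[f(n^{2R+2,2R+3})-f(n)\big] + \sum_{x=1}^{2R+1} u(n_x)\Big(p_x\big[f(n^{x,x+1})-f(n)\big] + q_x\big[f(n^{x,x-1})-f(n)\big]\Big),
\]

with the same conventions for p_x and q_x as above (here no exit rates β, γ appear, since the system is closed).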
III. UPHILL CURRENTS IN THE ZRP-OB
In this section we shall prove that the ZRP-OB can exhibit stationary uphill currents. More precisely, we shall consider the process described by the generator given in (4), and show that, for a particular choice of the parameters, the steady state is characterized by a current flowing from the reservoir with smaller density to the one with larger density.
A. Stationary measure for the ZRP-OB
Consider the ZRP-OB defined in Section II A. A probability measure µ_R on Ω_R is stationary for the ZRP if the stationarity condition (10) holds for any function f. A sufficient condition is provided by the balance equation (11), valid for any n ∈ Ω_R. Consider the positive reals s_1, . . . , s_{2R+1}, called fugacities, and the product measure ν on the space Ω_R defined in terms of the fugacities and of suitable normalization constants Z_x. By exploiting equation (11), or by applying (10) to the functions f(n) = n_x for any x, it can be proven that ν is stationary for the ZRP-OB provided the reals s_x satisfy a suitable set of balance equations. After some simple algebra we get the equations for x = 1, . . . , R − 1 and x = R + 2, . . . , 2R, which reduce to [30, equation (13)] in the case (q̃, p̃) = (q, p). These equations admit a unique solution, to be discussed in detail in the sequel for a particular choice of the parameters p, q, p̃, q̃, α, β, γ, and δ.
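The displays defining the product measure are lost in the present copy; the standard ZRP product form, consistent with the single-site weights appearing later in Section IV, reads (here u(m)! := u(1)···u(m), with u(0)! := 1):

\[
\nu(n) = \prod_{x=1}^{2R+1} \nu_x(n_x), \qquad \nu_x(m) = \frac{1}{Z_x}\,\frac{s_x^{\,m}}{u(m)!}, \qquad Z_x = \sum_{m\ge 0} \frac{s_x^{\,m}}{u(m)!}.
\]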
B. Stationary current and density profile for the ZRP-OB

The main quantities of interest in our study are the stationary density, or occupation number, profiles (see [31] for the details) and the stationary current J_{R,x} through the bond (x, x + 1), for x = 1, . . . , 2R, where we omitted the last straightforward computation. The stationary current represents the difference between the average number of particles crossing a bond between two adjacent sites on the lattice from the left to the right and the corresponding number in the opposite direction. Equation (15) shows that the stationary current does not depend on the site x, therefore we shall simply write J_R ≡ J_{R,x}. Note that it was possible to express the current in terms of the fugacities without relying on any specific choice for the intensity function u. Yet, an explicit form for u is needed in the computation of the density profile. In general it can be proven, see [31] and equation (16), that, at each site, the stationary mean occupation number is an increasing function of the local fugacity. Particularly relevant cases are the so-called independent particle and the simple exclusion-like ZRP models, in which the intensity function is respectively given by u(k) = k and u(k) = 1 for k ≥ 1 (recall that u(0) = 0). In these two cases it is easy to compute the partition function and the density profile explicitly, see (19), for the independent particle and the simple exclusion-like models, respectively.
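For orientation, the quantities referenced as (15)-(19) admit the following standard expressions, which should agree with the authors' displays up to notation. Since E_ν[u(n_x)] = s_x under the product measure, the stationary current across the bond (x, x + 1) is

\[
J_{R,x} = p_x s_x - q_{x+1} s_{x+1},
\]

and in the two special cases

\[
Z^{\rm ip}_x = e^{s_x}, \quad \rho^{\rm ip}_x = s_x \quad (u(k)=k); \qquad Z^{\rm sel}_x = \frac{1}{1-s_x}, \quad \rho^{\rm sel}_x = \frac{s_x}{1-s_x} \quad (u(k)=1 \text{ for } k\ge 1,\ s_x<1).
\]

The linearity of ρ^ip in s explains why, for the independent particle model, the fugacity and density profiles coincide, while the non-linear relation ρ^sel = s/(1 − s) accounts for the curved profiles observed below.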
The discussion of the case α < δ is more delicate, since the sign of σ depends on the value of the difference δ − α. More precisely, the sign of σ is determined by the comparison of the bias ε with a critical bias ε_c. Thus, we have to distinguish three cases. For 0 ≤ ε < ε_c we have that σ > 0, 2α < (α + δ)(1 − 2ε) ≤ α + δ, and α + δ ≤ (α + δ)(1 + 2ε) < 2δ. For ε = ε_c we have that σ = 0, (α + δ)(1 − 2ε_c) = 2α, and (α + δ)(1 + 2ε_c) = 2δ. For ε_c < ε < 1/2 we have that σ < 0, 2α > (α + δ)(1 − 2ε) > 0, and 2(α + δ) > (α + δ)(1 + 2ε) > 2δ. The graphs in Figure 3 represent the fugacity profiles for R large, in the three cases. Remarkably, the presence of a critical value for the bias, marking the transition from a regime of standard (downhill) diffusion to another regime of uphill diffusion, was also reported in [15], cf. Figure 4 therein. In that work, much in the same spirit as the ZRP-OB model, the diffusion of particles in the channel results from the balance between the standard diffusive behavior induced by the reservoirs and the uphill motion triggered by the Kac potential in the bulk (whose effect is only visible in a neighborhood around the central site of the lattice). As already mentioned in Section III B, the stationary current can be computed from the knowledge of the fugacity profile without specifying the intensity function. Applying (17) and (20), we find the explicit expression of the stationary current. On the other hand, to compute the density profile it is necessary to consider a particular form for the intensity function, see (19) for the independent particle and the simple exclusion-like cases.
For the independent particle model, the fugacity profiles shown in Figures 2 and 3 correspond to the density profiles. In particular, by summing up the density profile ρ_x for x = 1, . . . , 2R + 1, we find the average total number of particles in the channel in the steady state, see (25). Moreover, in the case α > δ, σ < 0 implies that J_R > 0. The current goes downhill, i.e. it flows from the reservoir with larger density (characterized by the injection rate α) towards the reservoir with smaller density (with injection rate δ). When α = δ, σ < 0 implies that J_R > 0. The diffusion is now uphill: indeed, in spite of the equality of the injection rates, the current goes from the boundary site 1, with lower density, to the site 2R + 1, with higher density. The effect, though, is barely visible because s_{2R+1} − s_1 = 8εα/(R + 1) vanishes for R large. It is also interesting to note that, for R sufficiently large, the density profile corresponding to the case α = δ recovers, qualitatively, the plot portrayed in Figure 2 of [15], referring to a scenario similar to the one considered here for the ZRP-OB model, where the injection rates at the boundaries coincide.
In the case α < δ, finally, for ε < ε_c the diffusion is downhill, for ε = ε_c the current vanishes, and for ε_c < ε the diffusion is uphill. We cannot write a general formula for the density profile for any choice of the intensity function u. But, using (16), we have that ρ_{x+1} > ρ_x if and only if s_{x+1} > s_x. This implies that the results we deduced for the independent particle model are, indeed, completely general.
Let us now compare our exact results with the Monte Carlo simulations. The model has been simulated as follows: call n the configuration at time t; then (i) a number τ is picked at random with exponential distribution of parameter U = α + δ + Σ_{x=1}^{2R+1} u_x(n_x) and time is updated to t + τ; (ii) an integer y in 0, 1, . . . , 2R + 2 is chosen at random on the lattice with probability π_y = u_y(n_y)/U for y = 1, . . . , 2R + 1, π_0 = α/U, and π_{2R+2} = δ/U (note that, for simplicity, we skipped the time dependence in the notation); (iii) if y ≠ 0, 2R + 2 a particle is moved from the site y to the site y + 1 with probability p_y/(q_y + p_y) (in the case y = 2R + 1 the particle is removed) or to the site y − 1 with probability q_y/(q_y + p_y) (in the case y = 1 the particle is removed); if y = 0 a particle is added to the site 1; if y = 2R + 2 a particle is added to the site 2R + 1.
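The scheme (i)-(iii) can be turned into a short program. The following Python sketch implements it for the independent particle case u(k) = k, assuming normalized bulk rates p + q = 1 with the central bias parameterized as p̃ = 1/2 + ε, q̃ = 1/2 − ε (a convention consistent with the constraint ε < 1/2 used above, though not spelled out in the text); all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(1)
R, alpha, delta, eps = 25, 0.5, 1.0, 0.4      # illustrative parameters
L = 2 * R + 1                                 # channel sites 1, ..., 2R+1 (indices 0, ..., L-1)
p = np.full(L, 0.5); q = np.full(L, 0.5)
p[R], q[R] = 0.5 + eps, 0.5 - eps             # assumed bias at the central site R+1

n = np.zeros(L, dtype=int)                    # start from the empty channel
t, T, flux = 0.0, 2e4, 0
while t < T:
    U = alpha + delta + n.sum()               # total rate; u(k) = k and p_x + q_x = 1
    t += rng.exponential(1.0 / U)
    r = U * rng.uniform()
    if r < alpha:                             # y = 0: inject a particle at site 1
        n[0] += 1
    elif r < alpha + delta:                   # y = 2R+2: inject a particle at site 2R+1
        n[-1] += 1
    else:                                     # pick y with probability n_y / U
        y = rng.choice(L, p=n / n.sum())
        n[y] -= 1
        if rng.uniform() < p[y]:              # move right (removal if y = 2R+1)
            if y == R: flux += 1              # signed crossings of the central bond
            if y < L - 1: n[y + 1] += 1
        else:                                 # move left (removal if y = 1)
            if y == R + 1: flux -= 1
            if y > 0: n[y - 1] += 1
print("estimated current J =", flux / t)      # J > 0 with alpha < delta signals uphill flow

Time averages of the total number of particles and of flux/t can then be compared with the analytical predictions, as done in Figure 4.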
In Figure 4 we report the Monte Carlo measurements of the total number of particles in the channel and of the boundary currents as functions of time, for the independent particle and the simple exclusion-like models. The values of the parameters used in the simulations are indicated in the caption. Both panels of Figure 4 show that when time is large enough the time-averaged values of the total number of particles and of the currents tend to the analytical results (24) and (25) (dashed lines). Concerning the theoretical value of the total number of particles, note that the analytic expression (25) only applies to the independent particle case; in the simple exclusion-like model we summed up numerically, for x = 1, . . . , 2R + 1, the values of ρ_x given by (19), with s_x in (20).
The data in Figure 4 (left panel) also give an insight into the magnitude of the "thermalization" time, namely the time interval in which the time-averaged total number of particles converges to the corresponding theoretical stationary value. As visible in the inset of Figure 4 (left panel), the thermalization time in the independent particle case is considerably smaller than that observed in the simple exclusion-like model, which is of the order of 2 × 10^6 (for the given initial datum used in the simulations). It should also be noted that the steady state fluctuations of the instantaneous total number of particles around the time-averaged value are larger in the simple exclusion-like model. To numerically check the convergence of the current to its theoretical value, see Figure 4 (right panel), we thus skipped the initial transient dynamics and measured the current starting from the time 2 × 10^6.
The same procedure was also adopted for the measurement of the stationary density profiles, reported in Figure 5. The match between the Monte Carlo numerical measurements and the exact results is striking. As expected (see (19)), the density profile in the simple exclusion-like model is not linear. Note, also, that in the simple exclusion-like model no symmetry exists between the left and the right halves of the lattice. Moreover, though the values of the boundary rates α and δ used in the simulations are the same in the two considered models, completely different values of the density at the boundary sites are obtained. This suggests that the dynamics in the bulk significantly affects the value of the density at the boundaries.
IV. UPHILL CURRENTS IN THE ZRP-CC
In this section we shall prove that also the ZRP-CC can exhibit anomalous uphill currents. More precisely, we shall consider the process described by the generator given in (9) and discuss the effect produced, in the steady state, by the local asymmetry in the bulk and by the two slow boundary sites.
B. Stationary current and density profile for the ZRP-CC
We shall focus, again, on the stationary density profile and on the stationary current J_{R,N,x} through the bond (x, x + 1), for x = 1, . . . , 2R; the stationary measure has the product structure (31), with single-site weights of the form s_x^{n_x}/u(n_x)!. With the same arguments used to prove [29, equation (11)] we get, for x = 1, . . . , 2R, the expression of the current, where the last equalities follow from [29, equation (11)]. Equation (30) proves that the current does not depend on the site x, hence we shall simply write J_{R,N} ≡ J_{R,N,x}. In this periodic case it is not possible to push the discussion forward without adopting a specific form for the intensity function. From now on in this section, we shall therefore restrict our description to the independent particle case u(k) = k and add the superscript "ip" to the notation. We first compute the partition function (34), where we used the convention 0! = 1 and applied the multinomial theorem [32, equation (3.35)]. From (34) (and from the notational remark below it) we then get (35). Moreover, since u(n_x) = n_x, equation (33) can also be used to compute the density profile; in the last step of that computation one uses (33) and (35). Note that ρ^ip_0 and ρ^ip_{2R+2} correspond to the average total number of particles allocated, respectively, in the left and in the right reservoirs. Retaining the interpretation discussed at the end of Sec. II A, from the expression of the rates given in (5) and (6) we find that the average particle densities in the two reservoirs take the values λρ^ip_0 and λρ^ip_{2R+2}, respectively. Thus, in the ZRP-CC model, the two slow sites act as finite particle reservoirs, each constituted by λ^{−1} sites. Figure 6 (right panel) shows, hence, the density profile in the bulk, i.e. for x = 1, . . . , 2R + 1, and at the slow sites x = 0 and x = 2R + 2.
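For the independent particle case the computation hinges on the multinomial theorem cited above; the partition function and density profile then plausibly read (a reconstruction, up to the authors' notation):

\[
Z^{\rm ip}_{R,N} = \sum_{n\,:\,\sum_x n_x = N}\ \prod_{x=0}^{2R+2} \frac{s_x^{\,n_x}}{n_x!} = \frac{1}{N!}\Big(\sum_{x=0}^{2R+2} s_x\Big)^{\!N}, \qquad \rho^{\rm ip}_x = \langle n_x \rangle = N\,\frac{s_x}{\sum_{y} s_y}.
\]

In particular, the total number of particles N is partitioned among the sites proportionally to the local fugacities.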
In conclusion, ρ^ip_0 < ρ^ip_{2R+2} and J^ip_{R,N} > 0, which proves that the channel is crossed by an uphill current flowing from the reservoir with lower particle density (at x = 0) to the one with higher particle density (at x = 2R + 2).
We have numerically simulated the almost everywhere symmetric ZRP-CC model following a scheme similar to that outlined in Section III C. We find, also in this case, an excellent match between the exact density profiles obtained from (37) and (38) and the numerical data, see Figure 6.
V. THE HYDRODYNAMIC LIMIT
We discuss on heuristic grounds the hydrodynamic limit [28,33] of the almost everywhere symmetric ZRP-OB model introduced in Section III C, with the intensity function corresponding to the independent particle case, namely u(k) = k. For any i ∈ Λ set x_i = i/(2R + 1), so that x_i ∈ [1/(2R + 1), 1]. Denote by n_i(t) the time-dependent density profile at time t, i.e. n_i(t) is the average number of particles occupying the site i at time t. The change of the number of particles at a site in the bulk, i.e. i ≠ 1, R, R + 1, R + 2, 2R + 1, in a small interval ∆t, can be estimated by a discrete balance relation, which can be rewritten in terms of a discrete Laplacian. Thus, if time is rescaled as t/(2R + 1)^2 → t (diffusive scaling), in the limit R → ∞ the particle density profile n_i(t) will tend to a function u(x, t) solving the diffusion equation (42). In order to guess the boundary conditions at x = 0, 1/2, 1 we shall write the balance equation of the currents at the sites x_1, x_R, x_{R+1}, x_{R+2}, and x_{2R+1}. More precisely, we consider a small interval of time ∆t and first write the relations (43). Note that Eqs. (43) are obtained by assuming that the injection rates α and δ are independent of R; different boundary conditions may hold under different scalings of α and δ with R. The equation in the middle can be rewritten in a form which, divided by 1/(2R + 1), in the limit R → ∞ provides the condition (44). Combining the first and the third equation, on the other hand, and noting that in the limit R → ∞ the quantities [n_{R−1}(t) − n_R(t)]/2 and [n_{R+2}(t) − n_{R+3}(t)]/2 tend to zero, we obtain the condition (45). In conclusion, we find that the evolution of the model in the hydrodynamic limit is described by the differential equation (42) supplemented with the boundary conditions (43), (44), and (45). In particular, the stationary profile is the solution of the stationary problem (46), where the subscripts − and + denote, respectively, the left and the right limits.
The stationary problem (46) can be easily solved: one can write u(x) = Ax + B for x ∈ (0, 1/2) and u(x) = Cx + D for x ∈ (1/2, 1). The boundary conditions then yield the explicit expressions (47) for 0 ≤ x ≤ 1/2 and (48) for 1/2 ≤ x ≤ 1. The solution of the macroscopic stationary equation matches perfectly with the stationary density profile of the microscopic lattice model. Indeed, by performing the change of variable x/(2R + 1) → x in (20), one finds, for R large, the equations (47) and (48). It is also possible to solve the time dependent problem (42)-(45) and write the solution in terms of a Fourier series. We first introduce the functions Y_1 and Y_2 on [0, 1/2] and note that the conditions (43)-(45) imply suitable boundary conditions for them. Moreover, by (42) we have that both Y_1 and Y_2 solve the heat equation with diffusion coefficient 1/2. As a second step, we introduce the functions W and U, obtained as linear combinations of Y_1 and Y_2, and note that from the boundary conditions on Y_1 and Y_2 we get the corresponding conditions for W and U. Thus, we obtain two PDE problems, one for W and another for U, which are decoupled and can hence be solved by the standard method of separation of variables. Denoting by u_0(x) the initial condition for the original equation (42), we can define Y_{1,0}(x) = u_0(x) and Y_{2,0}(x) = u_0(1 − x) for x ∈ [0, 1/2], and set the corresponding initial data for W and U. Then, by a standard computation, we find the Fourier representation (49) of W, with α_n = (1 + 2n)π for n = 0, 1, . . . . For the function U we find

U(x, t) = Σ_n B_n e^{−β_n² t/2} sin(β_n x) (50)

with β_n = 2nπ for n = 0, 1, . . . . Solving the equations that define W and U with respect to Y_1 and Y_2, we finally obtain the solution (51) of the original problem. We now test numerically the solution (49)-(51). We consider the hydrodynamic problem in the case α = 0.5 and δ = 1 in Figure 7 and α = δ = 0.5 in Figure 8. In both figures ε = 0.4 and the initial datum is u_0(x) = 1 for x ∈ [0, 1/2] and u_0(x) = 2 for x ∈ [1/2, 1]. The density profile is plotted at different macroscopic times and is compared with the numerical estimate.
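In the independent particle case the comparison between (47)-(48) and the microscopic model can also be checked directly, since the stationary mean occupations solve a linear system. The following Python sketch builds and solves it, assuming as above the bias convention p̃ = 1/2 + ε, q̃ = 1/2 − ε, and taking the exit rates γ = β = 1/2 (an assumption; these parameters are not otherwise constrained here).

import numpy as np

R, alpha, delta, eps = 100, 0.5, 1.0, 0.4   # illustrative parameters
L = 2 * R + 1
p = np.full(L, 0.5); q = np.full(L, 0.5)
p[R], q[R] = 0.5 + eps, 0.5 - eps           # local bias at the central site

A = np.zeros((L, L)); b = np.zeros(L)
for x in range(L):
    A[x, x] = -(p[x] + q[x])                # total exit rate from site x
    if x > 0:
        A[x, x - 1] = p[x - 1]              # inflow from the left neighbour
    if x < L - 1:
        A[x, x + 1] = q[x + 1]              # inflow from the right neighbour
b[0], b[-1] = -alpha, -delta                # boundary injections act as source terms

s = np.linalg.solve(A, b)                   # stationary profile (fugacity = density here)
J = p[0] * s[0] - q[1] * s[1]               # bond current, constant along the channel
print("J =", J)                             # J > 0: uphill, toward the denser reservoir

Rescaling the site index by 2R + 1 reproduces, for large R, two linear branches joined at the center, in agreement with (47)-(48); lowering ε below the critical value restores the downhill (negative) sign of J.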
The numerical solution is constructed as follows: a set of 5 × 10^5 independent realizations of the stochastic process is constructed by running different Monte Carlo simulations started from the same initial datum (the one also used for the analytical solution) and by varying the seed of the random number generator routine. Then, the profile corresponding to a certain fixed macroscopic time is obtained by averaging over all the different realizations of the process. Finally, the numerical profile is plotted after rescaling the microscopic space variable as x/(2R + 1) → x, and the very good match illustrated in Figures 7 and 8 is found. It should be observed that, in both Figures 7 and 8, the Monte Carlo results display some small discrepancies with respect to the theoretical behavior indicated by the solid lines. These are fluctuations stemming from finite size effects.
Indeed, for a fixed initial datum, averaging over a (large enough) set of different realizations of the process corresponds to considering the expectation E_{µ^t_R}[n_x(t)] with respect to a probability measure µ^t_R associated with the stochastic process at time t. We recall, then, that the hydrodynamic behavior holds in the limit R → ∞. More precisely, one introduces the empirical density [34]

π^t_R(n) = (1/(2R + 1)) Σ_{x∈Λ} n_x(t) δ_x, (52)

where δ_x is the delta measure. From Eq. (52) one finds that, for any continuous function f, it holds

∫ f dπ^t_R(n) = (1/(2R + 1)) Σ_{x∈Λ} n_x(t) f(x).
One says, then, that a sequence of probability measures µ^t_R on Ω_R is associated with a density profile u(x, t) if, for any continuous function f and for any ε > 0, it holds

lim_{R→∞} E_{µ^t_R}[ 1{ |∫ f dπ^t_R − ∫ f(x) u(x, t) dx| > ε } ] = 0,

where 1 denotes the characteristic function.
In Figure 9 we show that the match between the solution of the hydrodynamic limit equations and the numerical simulation becomes better and better when the size of the lattice used in the simulations increases. The same situation as the one portrayed in Figure 7 at the macroscopic time 0.001 is considered and simulations are run for R = 25, 50, 75, 100. Note that the case with R = 50 (black triangles) is also the case shown in Figure 7.
VI. CONCLUSIONS
A variety of systems, e.g. two-species models, particle or spin models undergoing a phase transition, and queueing network models, are known to exhibit uphill currents. In this paper we prove that the phenomenon of uphill diffusion can also be observed in the simplest and, in a sense, paradigmatic transport model, namely the 1D Zero Range Process.
Indeed, such a model is proven to show uphill currents in the presence of a bias on a single defect site. For an open ZRP in contact with two particle reservoirs at different densities, for sufficiently large volumes the density at the boundaries of the channel depends only on the injection rates and not on the local bias. If the bias is large enough the current changes sign, so that particles typically move uphill, from the reservoir with lower density to the one with higher density. This result is demonstrated both analytically and numerically, with a striking match between the exact and the Monte Carlo results.
We have also investigated the hydrodynamic limit of the model: a heuristic argument yields the structure of the limit problem and provides the matching conditions mimicking the presence of the defect site in the microscopic lattice model. We managed to write the time dependent solution as a Fourier series and compared it with the evolution of the original ZRP process.
ACKNOWLEDGMENTS
We thank A. De Masi and E. Presutti for inspiring this work and for the many enlightening discussions. We also thank D. Andreucci and D. Gabrielli for useful discussions on problems related to the derivation of hydrodynamic limits in the presence of local discontinuities. | 2017-11-13T20:31:58.000Z | 2017-09-12T00:00:00.000 | {
"year": 2017,
"sha1": "863d3b904c9bf5b4c565b578d12dffd2597717b1",
"oa_license": null,
"oa_url": "https://iris.uniroma1.it/bitstream/11573/1046374/2/cc-pre_96_052137_2017-uphill.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "863d3b904c9bf5b4c565b578d12dffd2597717b1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics",
"Mathematics"
]
} |
255151232 | pes2o/s2orc | v3-fos-license | The relationship between smartphone addiction and aggression among Lebanese adolescents: the indirect effect of cognitive function
Background Although a large body of research has shown that smartphone addiction (SA) is associated with aggressive behaviors, only a few mediators have been previously examined in this relationship among early adolescent students. To our knowledge, no previous studies have explored the indirect role of cognitive function, despite its great importance during this life period. This study is intended to verify whether cognitive function has indirect effects on the relationship between SA and aggression among high-school students in the context of Lebanese culture. Methods This was a cross-sectional study, conducted between January and May 2022, enrolling 379 Lebanese adolescent students (aged 13–17 years). The Cognitive Functioning Self-Assessment Scale, the Buss–Perry Aggression Questionnaire-Short Form, and the Smartphone Addiction Scale-Short Form were used. Results The bivariate analysis results revealed that higher SA and worse cognitive function were significantly associated with more physical aggression, verbal aggression, anger and hostility. The mediation analyses found that cognitive function mediated the association between SA and physical aggression, verbal aggression, anger and hostility. Higher SA was significantly associated with worse cognitive function and more physical aggression, verbal aggression, anger and hostility. Finally, worse cognitive function was significantly associated with more physical aggression, verbal aggression, anger and hostility. Conclusion Our findings cautiously suggest that, to reduce adolescent students' aggression, interventions that promote cognitive performance may be effective. In particular, students who are addicted to smartphones and show aggressive tendencies require interventions designed to improve cognitive function.
Background
Over the last decades, the number of smartphone owners has been constantly increasing, reaching 83.72% of the world's population in 2022 (compared to 49.40% in 2016) [1], with the highest percentage of smartphone users being adolescent students (high school graduate or less) [1]. Smartphones are practical and provide easy, convenient access to many services, including unrestricted communication with others, access to academic materials, and leisure online activities. In particular, smartphones have offered adolescents the opportunity to develop their self-identity and personal autonomy, establish interpersonal relationships, be creative, and entertain themselves [2,3]. All these attractive attributes, together with use that is unrestricted by space and time, have led to the emergence of addictive smartphone behaviors, especially at a young age [4].
SA among adolescent students
Adolescence is a critical period of heightened biological vulnerability to addiction and of onset of addictive disorders [5,6]. Previous studies investigating smartphone addiction (SA) using the most widely used measure (the Smartphone Addiction Scale-Short Version, SAS-SV) revealed high prevalence rates of SA worldwide among early adolescent students (e.g., 16.9% in Switzerland [7], 22.8% in China [8], 26.61% in Korea [9], 36.9% in Turkey [10], 37.1% in Iran [11], 42.9% in Brazil [12], 55.8% in Morocco [13], and 62.6% in the Philippines [14]). Using the same scale, one study in Lebanon surveyed young adults of the general population (aged 18 to 29 years) and found that 46.9% of participants had SA [15]. However, as far as we are aware, no prior studies have evaluated SA in Lebanese adolescent students.
Increasing evidence supports the detrimental effects of SA, which has become a significant and growing social and public health problem [16]. SA has been shown to negatively impact students' mental health and to be linked to a variety of psychological problems including anxiety, depression, stress [17,18], sleep problems [19], poor academic performance [20], peer relationship problems, self-harm and even suicidal ideation and behaviors [21,22]. Another potential negative consequence that is gaining attention, due to its serious impact on adolescents' lives, is aggression [23]. Despite all these harmful effects, research related to this topic remains limited to date [24]. We join the view of Wilmer et al., who claimed that "it is crucial to understand how smartphone technology affects us so that we can take the steps necessary to mitigate the potential negative consequences" [25]; and we point to the necessity of deeply understanding how SA is related to poor socio-behavioral outcomes so that we can take the measures needed to overcome them.
SA and aggression
According to Buss and Perry [26], aggression is classified into four dimensions: physical and verbal aggression (i.e., the instrumental component), hostility (i.e., the cognitive component), and anger (i.e., the affective component). Extensive research has highlighted that aggressive behaviors, which refer to any observable act intended to inflict harm on others [27][28][29], are highly prevalent and represent an integral part of adolescents' daily lives [30][31][32]. For instance, a large study from eight countries and 14,967 in-school adolescents aged 10-19 years revealed that 53.7% of participants exhibited interpersonal violence; among them, 29.2% and 43.2% reported physical fighting and physical attacks, respectively [33]. Lebanese adolescents are more prone to engage in aggression given the environment saturated with violence in which they grow up and live [34]. A previous study among 568 Lebanese adolescents aged between 15 and 18 years revealed that 34.0% and 31.9% had moderate and high aggression, respectively [35]. Indeed, the political instability, deteriorating economy and ongoing conflicts that Lebanon has known in the past years have resulted in increased rates of violence in schools and streets, sometimes going as far as armed conflicts [34,36].
Empirical studies have identified various risk factors of aggression in adolescence [37], mainly gender (boys display more physical aggression than girls) [38,39], mental health disorders (most notably disruptive behavior disorders and attention-deficit/hyperactivity disorder (ADHD), alexithymia, anxiety and depression) [40,41], family characteristics including single-parent households and divorced parents [42], and peer factors involving parental divorce [43], peer rejection, bullying, and loneliness [44]. Moreover, during adolescence, developmentally normative changes in social relationships, including decreasing parental supervision, increasing influence of peers and engagement in new risky behaviors (e.g. alcohol drinking, drug use and smoking), may also elevate the risk of aggression [45,46]. In addition, the adolescent brain develops its capability to organize, regulate impulses, and weigh risks and rewards; however, these changes can make adolescents highly vulnerable to risk-taking behavior [47]. More particularly, studies showed that increased amygdala volume and decreased leftward asymmetry of the anterior cingulate cortex were associated with increased duration of aggressive behaviors during interpersonal interactions [48].
A large body of correlational research has shown that SA is significantly related to aggressive behaviors. For example, positive correlations have been found between problematic cellular phone use and a number of behavioral problems, including aggression, in Taiwanese adolescent students [49]. Similarly, problematic smartphone use has been shown to be associated with aggression and hostility among young adults in Switzerland [50]. A Korean study by Um et al. showed that smartphone dependency (as assessed using a scale by Lee et al. [51]) significantly correlated with aggression among middle school students, suggesting that a "careful use of smartphones is necessary" in this population [23]. Another Korean study by Wee and Kang found that many forms of addiction (i.e., alcohol, gambling and SA) are significantly related to aggression [52]. A study by Khoo and Yang conducted among Singaporean students found that SA is a potential risk factor for the hostility facet of aggression [53]. In sum, most of the evidence comes from Asian and Western countries and supports a positive association between SA and aggression. According to Zarei [54], problematic smartphone use is one of the major variables affecting the aggressive behavior of students.
Aggression among early adolescents represents a serious problem that can significantly impede their development and lead to major clinical and social concerns, including school violence between peers [55], school drop-out [56,57], substance abuse [57], physical violence and crime perpetration later in adulthood [56,58,59], as well as future economic difficulties and health problems [60]. This wide range of possible negative outcomes highlights that this topic deserves careful consideration at the scientific, clinical and policy levels.
Cognitive function as a Mediator between SA and Aggression
Another substantial factor that can drive aggression among adolescents is cognitive functioning. It is well established that cognitive skills and functions are determinant in regulating adolescents' thoughts and actions [61]. It is thus understandable that cognitive impairment poses a major risk of aggressive thoughts and behaviors. Heavy smartphone users appear to be highly prone to report cognitive failures during everyday life [62]. Some authors even suggested that the mere presence and/or a simple reminder of one's smartphone could strongly and adversely affect students' cognitive functioning and performance [63].
On the other hand, cognitive impairment has been demonstrated as one of the negative consequences of SA [25,64,65]. Although research concerning the cognitive effects of smartphone use is still quite limited and longitudinal evidence is scant, a literature review by Wilmer et al. [25] showed that smartphones can be detrimental to a variety of cognitive domains, including mnemonic functioning, attentional capacities, and tendency to delay gratification. A more recent review by Liebherr et al. [64] found that smartphone use impacts working memory, inhibition, attention, among other cognitive functions. Regarding the student population in particular, a study from Singapore found that smartphone overuse impaired students' cognitive abilities (i.e., executive functions) [66]. In Turkey, SA has been found to negatively affect students' cognitive flexibility [67].
Given that both SA and cognitive function are involved in aggression, we suggest that cognitive function could play an indirect role in fostering the relationship between SA and aggression. Investigating the cognitive function effects could provide valuable information about how SA can affect early adolescent students' brain and behaviors during a period of increased developmental plasticity. Only a few mediators have been previously examined in the relationship between SA and aggression among early adolescent students (e.g., peer attachment, ego-resilience, parenting behavior; [23]); however, to our knowledge no studies have explored the mediating role of cognitive function despite its great importance during this life period.
The present research
To date, there is little research focused on the relation between smartphone use and its subsequent socio-behavioral outcomes [53]. Khoo and Yang [53] recently suggested that, among the various aspects of smartphone use, SA in particular potentially impacts students' aggression risk, and thus requires more research and targeted interventions. We decided to perform this study for several reasons. First, although an increasing number of studies supported the notion that SA could predict adolescents' aggression, only a few studies have attempted to test the mediating effects of personal factors in the association between SA and aggressive behaviors, which has substantially restrained the development of interventions [68]. Second, prior research examining the relation between SA and aggression involved children, primary school students [69,70], or young-adult university students [71]; whereas few or no studies have been conducted among early-adolescent high school students, despite their being particularly vulnerable to developing both addictive and aggressive behaviors with long-lasting consequences [72,73]. Third, as previously said, the vast majority of studies on this topic emerged from Asia and the developed world, with no studies from the low-middle-income countries of the Middle East and North Africa region. Given that the findings related to both SA [71] and aggression [74] might vary cross-culturally, we believe that the present study has an original value and contributes to the literature by adding data from an unexplored country and region. Based on these gaps identified in the existing literature, our study is intended to verify whether cognitive function has indirect effects on the relationship between SA and aggression among high-school students in the context of Lebanese culture.
Study design and Procedure
This was a cross-sectional study, conducted between January and May 2022, enrolling 379 adolescent students currently residing in Lebanon (13 to 17 years old), from all Lebanese governorates (Beirut, Mount Lebanon, North, South, and Bekaa). Our sample was chosen using the snowball technique; a soft copy of the questionnaire was created using the Google Forms software, and an online approach was conceived to proceed with the data collection. The study's main aims and goals, in addition to instructions for filling out the questionnaire, were conveyed online to the participants prior to their participation. Later, initial participants approached by the research team were asked to recruit other participants they knew, preferably as diverse as possible with regard to place of residence within the Lebanese governorates and within the same age interval required to participate in the study. Internet protocol (IP) addresses were examined to ensure that no participant took the survey more than once. No credits were received for participation. Included were Lebanese adolescents aged between 13 and 17 years who owned a smartphone; excluded were those who did not fulfill one of these criteria.
Minimal sample size calculation
A minimal sample of 127 was deemed necessary using the formula suggested by Fritz and MacKinnon [75] to estimate the sample size: n = L/f^2 + k + 1, where f = 0.26 for a small to moderate effect size, L = 7.85 for an α error of 5% and a power of 80%, and k = 10 variables to be entered in the model.
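As a check, the computation behind the quoted minimum can be reproduced in a couple of lines (k is taken to be the 10 variables mentioned above):

L_crit, f, k = 7.85, 0.26, 10        # L for alpha = 5% and power = 80%; f = effect size
n = L_crit / f**2 + k + 1            # Fritz & MacKinnon formula: n = L/f^2 + k + 1
print(n)                             # 127.1..., i.e. a minimal sample of about 127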
Questionnaire
The first part of the questionnaire included an explanation of the study topic and objective, a statement ensuring the anonymity of respondents and an explanation for the student to get his/her parents' approval before participation. The student had to select the option stating "I got my parents' approval and consent to participate in this study" to be directed to the questionnaire.
The second part of the questionnaire contained sociodemographic information about the participants (age, gender, governorate, current self-reported weight and height). The Body Mass Index (BMI) was consequently calculated as per the World Health Organization [76]. The household crowding index, reflecting the socioeconomic status of the family [77], is the ratio of the number of persons living in the house over the number of rooms in it (excluding the kitchen and the bathrooms). The physical activity index is the product of the intensity, duration, and frequency of daily activity [78]. Regarding the financial burden, respondents were asked to answer the question "How much pressure do you feel with regard to your personal financial situation in general?" on a scale from 1 to 10, with 10 referring to overwhelming pressure.
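A minimal sketch of how the derived variables described above can be computed; the function names are ours, and the product form of the physical activity index is an interpretation of the "cross result" wording:

def bmi(weight_kg, height_m):
    # Body mass index per the WHO definition: weight / height^2
    return weight_kg / height_m ** 2

def crowding_index(persons, rooms):
    # Persons per room; rooms exclude the kitchen and the bathrooms
    return persons / rooms

def physical_activity_index(intensity, duration, frequency):
    # Assumed: the product of the three daily-activity components
    return intensity * duration * frequency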
The third part included the scales used in this study:
The Buss-Perry Aggression Questionnaire-Short Form (BPAQ-SF)
Validated in Lebanon [79], the Buss-Perry Aggression Questionnaire-Short Form (BPAQ-SF) [80], developed by Bryant and Smith (2001), is a short version of the BPAQ and consists of 12 Likert-type items rated on a 5-point ordinal scale and organized into four scales of three items each: Physical Aggression, Verbal Aggression, Anger, and Hostility.
Smartphone addiction scale-short version (SAS-SV)
The SAS, validated in Lebanon [81], is a ten-item scale used to evaluate SA among adolescents [82]. The total score was computed by summing the answers to these 10 items, with higher scores reflecting higher SA (Cronbach's alpha = 0.90).
Cognitive Functioning Self-Assessment Scale (CFSS)
The questionnaire included 18 statements; participants were required to estimate, on a five-point scale anchored "never-always", the frequency of each described situation in the past 12 months (e.g., difficulty in performing two tasks simultaneously; difficulty in performing mental calculation) [83] (Cronbach's alpha = 0.95). Higher scores indicate worse cognitive function.
Translation procedure
The forward and backward translation method was applied to the different scales. The English version was translated to Arabic by a Lebanese translator who was completely unrelated to the study. Afterwards, a Lebanese psychologist with full working proficiency in English translated the Arabic version back to English. The initial and translated English versions were compared to detect and eliminate any inconsistencies.
Statistical analysis
SPSS software version 23 was used to conduct the data analysis. Cronbach's alpha values were computed for each scale. We had no missing data since all questions were required in the Google form. All aggression subscale scores were normally distributed, with skewness and kurtosis values varying between −1 and +1 [84]. The Student t and ANOVA tests were used to compare two and three or more means, respectively, whereas the Pearson correlation test was used to compare two continuous variables. The PROCESS SPSS Macro version 3.4, model four [85], was used to calculate three pathways. Pathway A determined the regression coefficient for the effect of smartphone addiction on cognitive function; Pathway B examined the association between cognitive function and aggression; and Pathway C' estimated the direct effect of smartphone addiction on aggression. An indirect effect was deemed significant if the bootstrapped 95% confidence interval of the indirect pathway AB did not include zero. Variables that showed a p < 0.25 in the bivariate analysis were entered in the multivariable and mediation models. Significance was set at p < 0.05.
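The logic of the PROCESS model-4 analysis can be illustrated with a short bootstrap sketch. The following Python code is not the software used in the study (the analysis was run in SPSS); the variable names (sa, cf, agg, X) and the covariate handling are assumptions for illustration only.

import numpy as np
import statsmodels.api as sm

def indirect_effect(sa, cf, agg, X, n_boot=5000, seed=0):
    # Bootstrapped indirect effect A*B of sa on agg through cf, adjusting for X
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(n_boot):
        b = rng.choice(len(sa), size=len(sa), replace=True)
        exog_a = sm.add_constant(np.column_stack([sa[b], X[b]]))
        a = sm.OLS(cf[b], exog_a).fit().params[1]          # pathway A: SA -> cognition
        exog_b = sm.add_constant(np.column_stack([sa[b], cf[b], X[b]]))
        m = sm.OLS(agg[b], exog_b).fit().params[2]         # pathway B: cognition -> aggression
        est.append(a * m)
    lo, hi = np.percentile(est, [2.5, 97.5])
    return float(np.mean(est)), (lo, hi)                   # significant if the CI excludes zero

The direct effect (pathway C') corresponds to the coefficient of sa in the second regression fitted on the full sample.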
Bivariate analysis
The bivariate analysis results are shown in Tables 2 and 3. A higher mean physical aggression score was seen in males compared to females (7.03 vs. 6.36; p = 0.043), whereas a higher mean anger score was seen in females compared to males (8.45 vs. 7.53; p = 0.009). Higher SA and worse cognitive function were significantly associated with more physical aggression, verbal aggression, anger and hostility. Older age was significantly associated with more verbal aggression. Higher BMI was significantly associated with more physical aggression, whereas more financial burden was significantly associated with more hostility.
Indirect effect analysis
Cognitive function mediated the association between SA and physical aggression, verbal aggression, anger and hostility (Table 4; Figs. 1, 2, 3 and 4). Higher SA was significantly associated with worse cognitive function and more physical aggression, verbal aggression, anger and hostility. Finally, worse cognitive function was significantly associated with more physical aggression, verbal aggression, anger and hostility.
Discussion
Lebanon is a young society in which 44% of people are under the age of 24 [36,86]. Being at this developmental stage carries a risk of unhealthy and risky behaviors, such as SA and aggression. Indeed, previous studies revealed high rates of both SA and aggression in Lebanese youth [15,34], highlighting the need to investigate the relationship between these two entities in this specific population and context, to help improve the development and implementation of socially and culturally tailored prevention and intervention approaches. In this study, we tested the hypothesis that cognitive function mediates the relationship between SA and the four aggression dimensions among Lebanese high-school students. For this, we established path analysis models where SA was taken as an independent variable and each of the aggression dimensions as dependent variables. All models showed partial mediation, confirming our hypothesis. As for the direct effects of SA on aggression, our findings were in line with the existing literature. There is some evidence that SA significantly and positively contributes to aggressive tendencies in students [54,87,88]. Previous studies from different countries (e.g., Korea [23], Taiwan [89], Singapore [53], Switzerland [50]) have shown that students who excessively use a smartphone are prone to heightened aggression. Other studies also found that specific online activities are linked to more aggression among students, such as smartphone gaming [90] and online gambling [91]. Various theories have been advanced to explain the SA-aggression relationship. For example, it has been suggested that, because students are under high levels of stress, the overuse of smartphones may easily trigger chronic fatigue and mental health problems that, in turn, lead to a loss of self-control in challenging situations [92]. In the same line, a prospective study revealed that students' high lack of self-control predicted aggressive behaviors [93]. Another possible explanation is that students with SA are heavily exposed to violent and suggestive applications, which may result in a loss of social skills and coping abilities [94]. A recent cross-sectional study conducted in Singapore showed that students' addictive smartphone use predicted the cognitive component of aggression (i.e. hostility) [53]. The authors explained their results by the fact that SA might generate hostile cognitive beliefs, such as high levels of jealousy or suspiciousness [95]. In addition, SA is highly disruptive, leading to heightened negative affect, which in turn triggers aggression [53].
Although we found evidence supporting that SA is associated with aggression, we cannot establish the causality or directionality of the observed relationship. Some previous research rather supported the path leading from aggression to SA [68]. It has been suggested that adolescents with aggressive tendencies may turn to their smartphones to better express various urges and pressures in a space where internal aggressiveness can be easily and conveniently expressed [17]. Also, aggressive adolescents may excessively use their smartphones because they experience social difficulties, such as poor peer relationships [96]. These data along with our findings suggest a bidirectional relationship between SA and aggression, and call for further longitudinal research using different timeframes.
Regarding the indirect effects, we found that higher SA was significantly and inversely associated with cognitive function, and that worse cognitive function was associated with more aggression (all dimensions). These findings are in line with prior longitudinal evidence that has identified the role of cognitive deficits in the development of later adolescent aggression [97], as well as the role of smartphone use in decreasing cognitive abilities [98]. In addition, our expectations were confirmed, showing that cognitive function partially mediates the relationship between SA and all aggression components. Different theoretical explanations could be proposed for these findings. First, many previous studies demonstrated that SA negatively impacts cognitive functions (for review, see [25,64,65]). For instance, individuals with SA or those who use their smartphones in situations where it is dangerous or prohibited more often show low trait inhibitory control [99,100]. This lack of self-control has also been robustly associated with aggression [101]. Second, both smartphone addiction [102,103] and deficits in cognitive functions [104,105] are linked to higher levels of negative affect, which may in turn lead to more aggressive behaviors among adolescents [106,107]. Third, SA is associated with important structural and functional brain changes, including white matter changes in brain regions involved in emotional processing and executive functions [108]. At the same time, white matter abnormalities have been suggested to potentiate aggressive tendencies in nonclinical adolescents [109].
Study strengths & limitations
The present study has strengths that deserve to be mentioned. First, this topic has not received previous scrutiny in low-middle income countries with an Arab cultural background. In addition, this study is innovative in examining cognitive function as a mediator in the SA-aggression relationship; this has not yet been actively researched. Another strength lies in considering the multifaceted construct of aggression with four components (i.e., physical and verbal aggression, anger, hostility) [26,95], while most of the previous research considered aggression as a unidimensional construct [110], or only treated one aspect of aggression (e.g., anger [111]).

Fig. 4 (a) Relation between smartphone addiction and cognitive function (R² = 27.19%); (b) Relation between cognitive function and hostility (R² = 25.61%); (c) Total effect of the relation between smartphone addiction and hostility (R² = 11.23%); (c') Direct effect of the relation between smartphone addiction and hostility. Numbers are displayed as regression coefficients (standard error). ***p < 0.001; **p < 0.01; *p < 0.05
This study also has some limitations to be noted, which point to suggestions for future research. First, the cross-sectional design precludes any causal inferences. Further longitudinal research is needed to ascertain the directionality of the investigated relationships. Second, the use of self-reported measures might have led to recall bias or social desirability issues, and calls for the use of objective measures in future studies [112]. Third, we only examined SA, whereas smartphone-related behaviors are complex and multidimensional. Thus, examining the various activities, contents and patterns of smartphone use in additional studies would be useful [113].
Clinical, research and policy implications
Today's students have been exposed to smartphones from a very young age, are particularly vulnerable to SA because of age-related characteristics (including less-developed self-control [92, 114, 115]), and are not necessarily aware of smartphones' potentially harmful impacts on their development, mental health, and well-being. There is sufficient evidence to suggest that SA leads to aggression in adolescent students [54, 87, 88]. Our findings provide further support for these data and could help guide targeted prevention and intervention strategies for aggression in adolescent students. Although aggression occurs at staggering rates in Lebanon, there are so far no programs to monitor students' aggressive and violent behaviors. Therefore, in light of our findings and prior evidence, we highlight the urgent and basic need to implement school programs that target in-school adolescents, especially those who show addictive smartphone use, to combat aggression and violence in school settings. One efficient way to address aggression could be sensitizing students to the potential harms of SA and helping them monitor the duration and frequency of their smartphone use [54]. Furthermore, specific interventions designed to reduce reactive aggression, such as cognitive reappraisal, self-control training, cognitive control training, and mindfulness, could likewise be implemented in school settings [116]. In addition, four strategic priorities could be recommended: (1) establishing recreational services that encourage students to engage in leisure activities other than their smartphone; (2) developing and implementing educational programs that raise awareness of smartphone addiction among students; (3) developing policies and guidelines limiting smartphone use during lectures; and (4) establishing free and accessible sports facilities in all schools. Moreover, schools could implement a smartphone-based coaching program for addiction prevention among students [117]; this is an individually tailored intervention approach that has proven effective in increasing life skills and reducing risk behaviors in adolescents at particularly high risk of addictive behaviors [118].
Another important finding of this study that might offer a potential avenue for intervention relates to the indirect role of cognitive function in the SA-aggression relationship. This implies that interventions that promote cognitive performance may be effective in reducing adolescents' aggression. Such interventions include activities like problem-solving training, mnemonic training, and guided imagery [119]. In particular, students who are addicted to smartphones and show aggressive tendencies require interventions designed to improve cognitive function. In other words, we suggest that strengthening cognitive function may attenuate the effect of SA on aggression. Future longitudinal and experimental research is required to better understand the interactions between SA and aggression and to ascertain the indirect effects of cognitive function in this relationship. Future research also needs to consider the multidimensionality of smartphone use [53, 111], aggression [26, 95], and cognitive function [120].
Conclusion
This study provides empirical evidence for a mediation model exploring whether cognitive function underlies the relationship between SA and aggression. The findings can help educators, researchers, policy makers, and school counselors advance knowledge on this critical issue among students and contribute to the development of effective prevention and intervention strategies. The main practical implication is that students should be educated about the direct and indirect negative effects of SA, including the occupation of cognitive capacities and heightened aggression. Promoting healthy ways of using smartphones could be an effective strategy to prevent aggression in adolescent students, and taking measures to decrease smartphone addiction and improve cognitive function may help reduce students' aggressive behaviors. Future research is needed to confirm our findings and to help develop strategies for prevention and intervention. | 2022-12-28T05:06:04.197Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "a761ca33c5b9ce570419beb5f946ae4487b6d167",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a761ca33c5b9ce570419beb5f946ae4487b6d167",
"s2fieldsofstudy": [
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
41328503 | pes2o/s2orc | v3-fos-license | The Nijmegen Corpus of Casual Spanish
This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual Spanish (NCCSp). The corpus contains around 30 hours of recordings of 52 Madrid Spanish speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around ninety minutes of speech from every group of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Information about how to obtain a copy of the corpus can be found online at http://mirjamernestus.ruhosting.nl/Ernestus/NCCSp
Introduction
Spanish is one of the best documented languages in the world. However, to our knowledge, no large corpus of casual Spanish suitable for detailed phonetic analysis is available. The goal of this article is to introduce the Nijmegen Corpus of Casual Spanish (NCCSp from now on), a new corpus designed to fill this gap. The corpus was designed taking the Nijmegen Corpus of Casual French as a model (Torreira et al., 2010), which was also collected in our lab. The uniqueness of the NCCSp can be characterized as follows:
• It contains around 30 hours of casual conversations among groups of friends. This makes it possible to study a wide range of phenomena characteristic of casual speech.
• It contains speech from 52 native Madrid Spanish speakers sharing a similar educational background and age.
• It contains large amounts of data for every speaker (around 90 minutes of recorded speech for every group of three speakers). This allows researchers to study within-speaker variability.
• It contains audio as well as video data, which can be used for the study of facial and body gestures during verbal communication.
The following sections provide a detailed description of the creation and transcription of the NCCSp.
Participants
Corpus creation began in March 2008. A group of university students was hired as confederates. These confederates were instructed about their role and asked to find two friends willing to participate in recordings of natural conversations. These friends are referred to as speakers from now on. Every recording consisted of a conversation among these three participants: a confederate and two speakers. All participants complied with the following conditions:
• They knew the two other participants in the recording well.
• They were of the same sex as the two other participants in the recording.
• They were university students in Madrid.
• They had been raised in the Madrid region.
• They reported not suffering from any pathology related to speech or hearing.
The corpus consists of 20 recordings (10 groups of male participants and 10 groups of female participants). Speakers were invited to act as confederates in later recordings. For this reason, nine participants took part in more than one recording session (first as a speaker and later as a confederate). In total there were 52 participants (27 female and 25 male). All participants were university students aged between 19 and 25. More details about the participants' background will be available in the NCCSp corpus package.
Recording set-up
The recordings took place in a sound-attenuated booth at the Universidad Politécnica de Madrid. The booth measured approximately 4 x 2 m. The participants sat on chairs around a table. The confederate always sat on the south side of the table, while the speakers occupied the chairs on the north and west sides. Figure 1 shows the layout of the recording room. The speakers were recorded on an Edirol R-09 solid-state stereo recorder, with each speaker recorded on a separate channel. The confederate was recorded directly on a computer via a dedicated sound card. All participants wore a Samson QV head-mounted unidirectional microphone, placed at an average distance of 5 cm from the left corner of the speaker's lips. The sampling rate was 44.1 kHz, and quantization was set to 32 bits. The conversations were filmed using a Sony HDR-SR7 video camera. The camera was placed in a corner of the recording room in a position that allowed us to film the two speakers, but not the confederate. Figure 2 provides a sample snapshot from one of the films. In order to avoid inhibiting the speakers, we tried to make them believe that the camera was turned off during the recordings. As a first step, a small piece of duct tape was placed over each of its lights. Additionally, an unplugged cable was left hanging from the camera to reinforce the impression that it was turned off. Moreover, the camera was not mounted on the tripod that was also present in the room. Finally, we placed several unused objects near the camera, including old boxes and cables, a computer screen, several loudspeakers, and other audio equipment. As shown in Figure 3, the camera thus appeared as just one among the numerous powered-off devices in the recording room.
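Since each speaker occupies a separate channel of the stereo files, per-speaker analyses first require splitting the channels. The following is a minimal Python sketch of that step, assuming the recordings are stored as standard WAV files and that the third-party soundfile library is available; the file names are hypothetical:

```python
# Split a stereo recording into one mono WAV file per speaker.
# Assumes each speaker occupies one channel, as in the NCCSp set-up.
import soundfile as sf

def split_speakers(stereo_path, left_out, right_out):
    data, sr = sf.read(stereo_path)          # data: (n_samples, 2) array
    assert data.ndim == 2 and data.shape[1] == 2, "expected a stereo file"
    sf.write(left_out, data[:, 0], sr)       # channel 0 -> speaker A
    sf.write(right_out, data[:, 1], sr)      # channel 1 -> speaker B

# Hypothetical file names for illustration:
split_speakers("group01_speakers.wav", "group01_spkA.wav", "group01_spkB.wav")
```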
Recording procedure
The recording procedure was similar to that employed during the collection of the Nijmegen Corpus of Casual French and the Nijmegen Corpus of Casual Czech (http://mirjamernestus. ruhosting.nl/Ernestus/). Previous research has shown that this procedure is successful at eliciting casual spontaneous speech (Torreira et al., 2010). This subsection describes the recording session in more detail. Preparations: Confederates arrived at the Universidad Politécnica de Madrid for an interview with the first author (FT from now on) thirty minutes earlier than their friends. During this interview, FT informed the confederates that it was their responsibility to elicit natural speech from their friends, by raising appropriate topics whenever the conversation seemed to approach a dead end. In order to maximize the amount of recorded speech from the speakers, they were instructed not to monopolize the conversation. They were also informed that the conversation would be filmed, and where to sit so that only the other participants would appear in the film. Importantly, they were asked not to unveil any of these details to their friends until the end of the recording, and to pretend that they had never met FT. Finally, they were briefly instructed about the activity planned for the third part of the recording (see below for details). At the end of the interview, the confederates were asked to wait for the other participants in the entry hall. At the time of the appointment, FT met the three participants there and asked them to wait while he made an urgent phone call. He then returned to the recording room, started the video recording, turned off the lights and closed the door. Back at the entry hall, he invited the participants to follow him to the recording room, making sure that the confederate would be the first person to enter in order to prevent the other participants from taking her/his seat. Once in the room, the participants were asked to stay seated and not to touch their microphones or play with any other object (e.g. keys, watch) during the conversation.
Part 1: After adjusting the recording volume from outside the booth, FT entered the recording booth again and informed the participants that the confederate's microphone was not working properly. He then asked the confederate to come out of the room in order to try a new one. At this moment, the speakers remaining in the room did not know with certainty whether they were being recorded, and it was precisely then that the recording was started. This situation elicited very natural speech right from the beginning of the recording. Part 2: After a period of ten to thirty minutes (depending on the liveliness of the conversation), confederates were asked to go back into the room. The conversation then held by the three participants constituted the second part of the recordings.
No instructions were provided about the topics to be discussed during this part of the conversation. Among the topics addressed by the speakers during this part were exams, parties, and travel plans. Words characteristic of such topics are therefore well represented in this part of the recordings (e.g., 86 tokens of the word estudiar 'to study' and morphologically related words; 43 tokens of the word viaje 'travel'; 84 tokens of the word beber 'to drink' and morphologically related words). Part 3: After a period of thirty to forty minutes, FT entered the room and provided the participants with a sheet of paper describing the activity for the remaining part of the recording session. The participants were asked to choose at least five questions about political and social issues from a list and then negotiate a single answer for every question. An English translation of this list can be found in Appendix A. In order to encourage the participants to negotiate common stances rather than just discuss the chosen topics, we informed them that they would have to write down their answers at the end of the recording session. A characteristic of the speech elicited during this part is that its vocabulary reflects the chosen questions. For instance, the word fumar 'to smoke' is very frequent in this part of the recordings (217 tokens) because most groups of participants chose to discuss a question about a recent smoking ban in Spain. At the end of the recording, we revealed our procedures to the participants. We paid 30 euros to each of the speakers and 45 euros to the confederates as compensation for their time. We then handed them a consent form authorizing the use of the audio and video recordings for academic and scientific purposes. All of the participants signed the consent form without adding any restrictions.
Orthographic transcription
The corpus was orthographically transcribed in Barcelona by Verbio Speech Technologies S.L. using the TRANSCRIBER software (Barras et al., 2001). The transcription process consisted of three passes. In the first pass, the speech of every pair of speakers was orthographically transcribed in a two-tier annotation file (one tier per speaker) from the stereo-channel audio streams. Confederates, who had been recorded on a separate mono channel, were transcribed separately in a one-tier annotation file. The transcribed text is organized into chunks, each containing no more than 15 seconds of the speech signal. In the second pass, non-speech events (e.g., laughter, filled pauses) were added to the orthographic transcription, the location of chunk boundaries was readjusted, and the spelling of the transcription was checked against the Diccionario de la Real Academia Española (http://www.rae.es/rae.html). In the third pass, an automatic revision of the formatting of the transcription files was performed. Each pass was carried out by a different transcriber. The orthographic transcription of the corpus contains around 393 000 word tokens and 16 500 word types (distinct orthographic forms) distributed over 98 000 chunks. Part 1 contains around 83 000 word tokens, while Parts 2 and 3 each contain around 155 000 word tokens. A look at the most frequent lexical items reveals the casual and interactional nature of the corpus. For instance, speakers often used informal terms to address each other during the recordings (e.g., 2750 tokens of tío, 1789 tokens of tía). Swear words, which are not expected to occur in a formal setting, are also numerous (e.g., 822 tokens of joder, 245 tokens of puta). The interactional nature of the corpus is reflected, among other things, in the high frequency of discourse markers (e.g., 3445 tokens of sabes, 2457 tokens of pues, 1744 tokens of bueno).
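Token and type counts like those above can be recomputed from the annotation files. The sketch below is a rough illustration, assuming the transcriptions are distributed in TRANSCRIBER's XML (.trs) format with one Turn element per speaker stretch; the directory name is hypothetical, and the crude whitespace tokenization ignores Transcriber's event markup, so exact figures would differ:

```python
# Count word tokens and types in Transcriber (.trs) annotation files.
# Simplification: treats any whitespace-separated string as a word and
# ignores Transcriber event markup; a real count would filter non-speech tags.
import glob
import xml.etree.ElementTree as ET
from collections import Counter

counts = Counter()
for path in glob.glob("transcriptions/*.trs"):   # hypothetical location
    root = ET.parse(path).getroot()
    for turn in root.iter("Turn"):               # one Turn per speaker stretch
        text = " ".join(turn.itertext())
        counts.update(w.lower() for w in text.split())

print("tokens:", sum(counts.values()))
print("types: ", len(counts))
```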
Corpus availability
Information about how to obtain a copy of the corpus can be found online at http://mirjamernestus.ruhosting. nl/Ernestus/NCCSp. This webpage also provides audio and transcription examples, scripts for searching the corpus using Praat, and more information about each participant and conversation in the corpus. | 2015-07-14T19:54:51.000Z | 2010-05-01T00:00:00.000 | {
"year": 2010,
"sha1": "62d0f8b0b4c9b8d8573fd01ec1380d45af790bdd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "a3aab3c63e4a284abfd035354c65fc6b60e2beda",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
86695081 | pes2o/s2orc | v3-fos-license | Pseudo hypsarrhythmia: An early marker of angelman syndrome
Case Report
An 18-month-old girl presented with concerns of language delay and recurrent seizures from the age of 12 months. She was the second born to a nonconsanguineous couple and had an uneventful perinatal period. Her development was age-appropriate except in the language domain. She recognized words for common items like "cup," and listened when spoken to, but her expressive language was restricted to gurgling sounds and babbling. Her seizures were brief, generalized tonic, and were associated with a short period of postictal drowsiness. Grinding of teeth was also noted. The seizure frequency had gradually increased to 4-5 episodes per week over the last 6 months, and the parents had noticed the appearance of generalized tremulousness and lack of any new milestones. At 18 months of age, she was able to take a few steps by holding on to furniture, pick up small objects, and wave bye-bye. Her speech was unclear, she used gestures to communicate and was still babbling.
On examination, she was easily excitable and had frequent unprovoked episodes of laughter. Her mouth was constantly open, and she had a protruded tongue. She also exhibited abnormal, involuntary, brief, non-stereotypic, uncoordinated, jerky movements of the arms and legs. Her occipitofrontal head circumference was 42.5 cm, suggestive of microcephaly (−3.05 Z score by WHO growth charts). Her neurological examination revealed mild generalized hypotonia. Her magnetic resonance imaging of the brain was unremarkable [Figure 1]. Her electroencephalography (EEG) at 1 year of age showed runs of high-amplitude generalized irregular delta waves with multifocal epileptiform discharges suggestive of hypsarrhythmia [Figure 2]. A repeat EEG at 18 months of age showed rhythmic delta activity more pronounced in the frontal region with a superimposed notched appearance on the descending phase of the slow wave [Figure 3]. Her karyotype was normal (46, XX). Fluorescence in situ hybridization confirmed the diagnosis of Angelman syndrome (AS) by demonstrating a microdeletion of 15q11-q13 in all 100 cells examined, using the Vysis Prader-Willi/Angelman region DNA probe LSI SNRPN (small nuclear ribonucleoprotein polypeptide N). A neurorehabilitative program was initiated, and antiepileptic drug titration was done to control the epilepsy.
Discussion
AS is a rare neurogenetic disorder with a prevalence of 1:10000-1:40000. [1] The consistent clinical picture comprises severe intellectual impairment, an easily excitable behavioral phenotype with frequent laughter, and balance and speech impairment. [2] Fascination with water and prominent microbrachycephaly with a protruded tongue are clinical hints. One of the obstacles to early diagnosis is that the complete phenotypic expression becomes evident only after 3 to 4 years of age. Epilepsy in AS typically develops at 1-3 years of age, with onset ranging from 1 month to 20 years. The most frequent seizure types are atypical absence, generalized tonic-clonic, and myoclonic seizures. [3] Vendrame et al., in their prospective study of 115 children with AS, identified age as a robust predictor of seizure onset, with the odds of developing seizures increasing by a factor of 1.29 for every year of life. [4] The electroencephalographic patterns, although not pathognomonic, are sufficiently characteristic for the diagnosis. Excessive hypersynchronous electrical activity over the thalamocortical and hippocampal networks, secondary to dysregulation of the gamma-aminobutyric acid inhibitory system, is the proposed mechanism of epilepsy in AS. [5] The typical EEG patterns are, in general, detected early in the course of the syndrome, even before the onset of epilepsy. Another interesting facet is the lack of difference in EEG findings between patients with and without epileptic seizures.
Boyd et al. first described three unequivocal patterns of electrophysiological activity in AS: (a) prolonged runs of rhythmic 4-6 Hz theta activity, more evident in the centrotemporal region; (b) generalized rhythmic 2-3 Hz delta activity, more pronounced in the anterior cortical region; and (c) spikes mixed with rhythmic delta/theta activity in the posterior region, facilitated by passive eye closure. [6] Interindividual and intraindividual variations in the most common delta pattern were reported by Valente et al. in 47 EEGs of 23 patients and included a hypsarrhythmia-like variant, a notched variant, a triphasic-like variant, and a slow variant. [7] The association of a hypsarrhythmia-like pattern with AS, as in the index case, is poorly recognized and has been reported in only a handful of cases.
The first description of hypsarrhythmia in AS was reported by Mayo et al. in 1973, in a 32-month-old toddler with global delay. [8] Since then, reports of the hypsarrhythmia variant have been limited to fewer than 20 cases worldwide. Valente et al., in a review of serial EEGs of 26 patients, recognized a similar pattern in three EEGs from two children, recorded at 4, 14, and 15 months, respectively. Vendrame et al. conducted the largest prospective study to delineate the EEG features in a large cohort of 160 children followed longitudinally in the Angelman Natural History study; the authors reported five children with a hypsarrhythmia-like pattern on EEG. This age-specific hypsarrhythmia pattern is assumed to result from exaggerated cortical excitability reminiscent of the period of brain maturation from 3 months to 2 years.
In addition, recognition of this hypsarrhythmia pattern in AS aids appropriate management. Misdiagnosis as West syndrome and treatment with vigabatrin may worsen the symptoms, especially seizures. [9] The distinguishing features in AS are the absence of EEG changes from the awake to the sleeping state and a less chaotic background. [10] New-onset tremulousness and clinicoelectrographic discordance, with a hypsarrhythmia pattern in the absence of clinically evident epileptic spasms, are additional clues to AS in very young children. Furthermore, it is imperative to rule out AS in undiagnosed children with refractory epilepsy, a maladaptive behavioral phenotype, and severe intellectual disability.
Conclusion
A pseudo-hypsarrhythmia pattern is poorly recognized and yet a clue to the early diagnosis of AS in infants and toddlers. Serial electrographic recordings may show the complete constellation of EEG changes suggestive of AS over time. Early detection in very young children, before complete phenotypic expression, provides the opportunity for early intervention, genetic diagnosis, and prenatal counseling.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2019-03-28T13:33:07.507Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "b4124842968f9ddd38ad6cf3e0dbc6cbc82c416e",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/aian.aian_413_18",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "b4124842968f9ddd38ad6cf3e0dbc6cbc82c416e",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220872549 | pes2o/s2orc | v3-fos-license | Pharmacotherapy of Traumatic Childhood Aphasia: Beneficial Effects of Donepezil Alone and Combined With Intensive Naming Therapy
At present, language therapy is the only available treatment for childhood aphasia (CA). Studying new interventions to augment and hasten the benefits provided by language therapy in children is strongly needed. CA frequently emerges as a consequence of traumatic brain injury and, as in the case of adults, it may be associated with dysfunctional activity of neurotransmitter systems. The use of cognitive-enhancing drugs, alone or combined with aphasia therapy, promotes improvement of language deficits in aphasic adults. In this study we report the case of a 9-year-old right-handed girl, subject P, who had chronic anomic aphasia associated with traumatic lesions in the left temporal-parietal cortex. We performed a single-subject, open-label study encompassing administration of the cholinergic agent donepezil (DP) alone during 12 weeks, followed by a combination of DP and intensive naming therapy (INT) for 2 weeks and thereafter by a continued treatment of DP alone during 12 weeks, a 4-week washout period, and another 2 weeks of INT. Four comprehensive language and neuropsychological evaluations were performed at different timepoints along the study, and multiple naming evaluations were performed after each INT in order to assess performance in treated and untreated words. Structural magnetic resonance imaging (MRI) was performed at baseline. MRI revealed two focal lesions in the left hemisphere, one large involving the posterior inferior and middle temporal gyri and another comprising the angular gyrus. Overall, baseline evaluation disclosed marked impairment in naming with mild-to-moderate compromise of spontaneous speech, repetition, and auditory comprehension. Executive and attention functions were also affected, but memory, visuoconstructive, and visuoperceptive functions were preserved. Treatment with DP alone significantly improved spontaneous speech, auditory comprehension, repetition, and picture naming, in addition to processing speed, selective, and sustained attention. Combined DP-INT further improved naming. After washout of both interventions, most of these beneficial changes remained. Importantly, DP produced no side effects and subject P attained the necessary level of language competence to return to regular schooling. In conclusion, the use of DP alone and in combination with INT improved language function and related cognitive posttraumatic deficits in a child with acquired aphasia. Further studies in larger samples are warranted.
INTRODUCTION
Childhood aphasia (CA) is defined as a language impairment that affects previously acquired linguistic abilities and cannot be explained by other cognitive or physical disorders (Aram, 1998). Since the diagnosis of CA requires a minimum development of linguistic skills prior to the brain injury, the age of 2 years is the established cut-off to differentiate CA from developmental language disorders (Woods and Teuber, 1978; Aram et al., 1985; Van Hout, 1997; Van Hout, 2003; Avila et al., 2010).
CA exhibits some singularities that distinguish it from adult aphasia and raise the need for specific lines of research that take into account the characteristics of this population. Among these differences is the fact that brain damage during childhood may not only affect previously acquired language functions but also interfere with ongoing brain maturation and language development. A further relevant differential feature relates to etiology: while stroke is the leading cause of adult aphasia, the main cause of cognitive disability and aphasia in children and adolescents is traumatic brain injury (TBI) (Rothenberger, 1986; Jennett, 1996; Sergui-Gomez and MacKenzie, 2003; Babikian and Asarnow, 2009). Relevantly, one third of children who suffer a severe TBI, as measured by the Glasgow Coma Scale (Teasdale and Jennett, 1976), exhibit residual cognitive and language deficits (Anderson et al., 2001; Anderson et al., 2005; Anderson and Catroppa, 2006; Anderson et al., 2009) that may persist in the long term. Language disorders such as aphasia have a tremendous impact on cognitive, social, and emotional development in children and adolescents, often resulting in reduced social integration, poor academic achievement, and behavioral problems (Beitchman et al., 2001; Johnson et al., 2010), as well as an increased risk of developing anxiety and social isolation during adulthood (Brownlie et al., 2016).
TBI usually results in focal and diffuse brain damage causing a wide range of linguistic deficits that may be contingent on several variables, such as age at the time of injury, lesion size and location, severity of the injury, and premorbid language functioning (Sullivan and Riccio, 2010). Axonal injury derived from diffuse damage emerges as a result of sudden acceleration and deceleration forces together with the simultaneous rotation of the freely moving brain mass (Levin, 2003; Vik et al., 2006). Importantly, diffuse axonal injury frequently affects white matter bundles connecting frontal and temporal cortical areas (Levin, 2003; Vik et al., 2006) that support linguistic and executive functions, including attentional capacity and processing speed. Accordingly, linguistic deficits in CA are frequently associated with weakened executive functions, such as deficits in lexical access as observed in naming tasks (Coelho, 2007; Slomine and Locascio, 2009). In fact, word-finding difficulties (i.e., anomia) in spontaneous speech, naming, and fluency tasks (Laine and Martin, 2006) are common deficits in the medium and long term after TBI and may persist even when other domains have recovered (Van Hout et al., 1985; Narbona and Crespo-Eguilaz, 2012). However, despite their high frequency (Ewing-Cobbs and Barnes, 2002), reported cases of CA resulting from TBI are scarce, probably because in many cases linguistic deficits are hidden behind more general cognitive impairments (e.g., in attention). At the brain level, language and executive functions depend on the activity of distributed networks involving bilateral dorsal and ventral structures. Current models suggest that language functions are supported by two functionally and anatomically segregated processing streams: dorsal and ventral (Hickok and Poeppel, 2000). On the one hand, the dorsal stream is involved in verbal production and repetition (Saur et al., 2008), required, for instance, for phonological word learning (López-Barroso et al., 2013). This stream is supported by the arcuate fasciculus (AF) system connecting frontal, posterior temporal, and inferior parietal areas (Hickok and Poeppel, 2007). On the other hand, the ventral stream projects from the superior temporal gyrus to the middle and inferior temporal cortices to support semantic and comprehension processes (see Hickok and Poeppel, 2007). This ventral interaction occurs mainly through the inferior fronto-occipital fasciculus (IFOF), the inferior longitudinal fasciculus (ILF), and the uncinate fasciculus (UF) (Catani and Thiebaut de Schotten, 2008). Despite the above-mentioned functional division of labour, the language system is flexible enough to recruit additional areas during highly demanding language situations (Lopez-Barroso et al., 2011; Torres-Prioris et al., 2020) or during development, when some pathways are not fully mature (Brauer et al., 2011). For instance, studies of children with brain injury have shown that early damage to the AF may be successfully compensated for through the recruitment of ipsilateral and contralateral brain areas and tracts, resulting in average performance on multiple language tasks (Rauschecker et al., 2009; Asaridou et al., 2020), although some deficits may persist (Yeatman and Feldman, 2013). In this line, after early brain damage, functional and structural rightward lateralization of the dorsal pathway is associated with better language outcomes (Northam et al., 2018; Francois et al., 2019).
Despite this evidence, spontaneous readjustment of the language system after brain lesion seems to be limited as evidenced by the frequent persistence of language deficits (Tavano et al., 2009;Turkstra et al., 2015;Francois et al., 2016). Therefore, research aimed at developing effective interventions to potentiate language recovery in CA is highly needed.
Despite increasing efforts to advance the development of effective therapeutic strategies for the cognitive and language after-effects of childhood TBI, studies targeting modern treatment approaches (cognitive/language therapy, pharmacotherapy, noninvasive brain stimulation) in CA are still scarce (for a review of rehabilitation programs for children with acquired brain injury not focused on language, see Laatsch et al., 2007; Slomine and Locascio, 2009). The fact that there are so few studies on this topic is probably because interventions for CA are frequently tailored to individual cases and carried out in instructional settings (Bowen, 2005; Duff and Stuck, 2015) without sound methodological designs. The few existing intervention studies have mainly focused on exploring the efficacy of behavioral strategies, as well as on identifying compensatory behaviors (Sullivan and Riccio, 2010; Turkstra et al., 2015). Overall, these interventions have shown beneficial effects of intensive training (6 to 8 weeks) on different language skills (lexical retrieval, verbal comprehension, fluency, pragmatic communication) and cognitive functions (attention, executive functions) commonly affected in TBI (Thomas-Stonell et al., 1994; Wiseman-Hakes et al., 1998; Chapman et al., 2005). Yet, the results from these studies are variable and, despite the growing number of published reports on cognitive and behavioral deficits after childhood TBI, rehabilitation recommendations remain insufficient.
The well-established strategy of using cognitive-enhancing drugs, alone or in combination with speech-language therapy, in adults with post-stroke aphasia (see Berthier and Pulvermüller, 2011) has not been explored in CA. In adult post-stroke aphasia, several clinical trials have shown that a combined intervention with the cholinesterase inhibitor donepezil (DP) and speech-language therapy significantly improves language skills and communication (see Zhang et al., 2018). The rationale for using cholinergic compounds to treat aphasia arises from the fact that brain lesions disrupt cholinergic transmission from the basal forebrain and brainstem nuclei to the thalamus, basal ganglia, subcortical white matter, and cerebral cortex, including the left perisylvian language core (Simić et al., 1999; Mesulam, 2004; Mena-Segovia and Bolam, 2017; Markello et al., 2018). The resulting cholinergic depletion negatively influences learning, declarative memory, language, and attention by reducing experience-dependent neural plasticity to relevant stimuli during training (Kleim and Jones, 2008; Rokem and Silver, 2010; Gielow and Zaborsky, 2017). Although experimental TBI studies have shown that cholinergic neurotransmission is chronically depleted after TBI (Dixon et al., 1996; Dixon et al., 1997; Ciallella et al., 1998), the role of the cholinesterase inhibitor DP in adult TBI remains controversial (Walker et al., 2004; Warden et al., 2006; Shaw et al., 2013), except when it is used in combination with environmental enrichment therapies (see De la Tremblaye et al., 2019).
The main aim of the present study was to evaluate the effects of pharmacotherapy with DP, alone and combined with intensive naming therapy (INT), on CA recovery. Our secondary objective was to examine the effects of both interventions on naming, reading, and other linguistic and cognitive functions (executive functions, attention, memory), which were expected to change with the interventions. Finally, the impact of TBI on brain structure was explored at baseline to describe the possible brain-behavior relationship in light of current knowledge. To do so, we studied the case of a 9-year-old girl (subject P) with post-traumatic chronic anomic aphasia who was evaluated and treated following a single-subject, open-label design with DP alone and in combination with INT. DP was selected because it has repeatedly been shown to be effective in reducing aphasia severity, as well as in boosting performance on lexical retrieval tasks (picture naming), in post-stroke aphasic adults (Berthier et al., 2003; Berthier et al., 2006). In addition, it is well known that the effect of cholinergic stimulation is more powerful when combined with behavioral training to promote experience-dependent plasticity (Berthier et al., 2014; Berthier et al., 2017). INT was selected because previous work has demonstrated that short-term intensive language therapies are more effective than distributed therapies (Pulvermuller et al., 2001; Kurland et al., 2010; Berthier et al., 2014). Considering previous evidence, we expected that DP alone would induce significant improvements in attentional and executive functions and, as a result, that language functions would be enhanced. Further gains in language, attentional, and executive functions were expected from the synergistic action of the combined treatment with DP and INT. Since we also envisioned that gains in naming would be maintained after INT, several post-therapy evaluations were performed. To our knowledge, this is the first study evaluating the effects of DP and INT on language recovery in CA after TBI.
Case Description
Subject P was a 9-year-old right-handed girl [+100 on the Edinburgh Handedness Inventory (Oldfield, 1971)] who suffered a severe closed TBI after being hit by a car on a pedestrian crossing. At the time she was admitted to the emergency room of a local pediatric university hospital, she was in a profound coma, with bilateral otorrhagia and a right hemiparesis. An emergency computerized tomography scan of the brain revealed diffuse bilateral brain edema, peri-brainstem subarachnoid hemorrhage, and a focus of contusion in the left temporal-parietal region. A structural brain magnetic resonance imaging (MRI) scan 4 days later (acute stage) revealed marked communicating hydrocephalus and left temporo-parietal and parahippocampal non-hemorrhagic contusions (Figure 1). The hydrocephalus was uneventfully resolved with a ventriculo-peritoneal shunt. In the following days, subject P showed a gradual recovery of consciousness that uncovered a pronounced language impairment. Bedside language testing revealed that she was mute with null comprehension but had severe automatic echolalia, a profile compatible with mixed transcortical aphasia (Berthier, 1999). The aphasia and the right hemiparesis improved, and subject P was referred to our unit for evaluation of residual language deficits 6 months after the TBI. Her parents contacted the research team after reading about our work on combined treatments using cognitive-enhancing drugs and aphasia therapy to treat acquired language disorders. Subject P was of Chinese origin and was adopted at the age of 4 by a Spanish couple living in Malaga, Spain. Her medical records from China indicated that she had no medical problems and had shown typical motor, cognitive, and language developmental milestones. At the time of adoption, subject P only spoke Chinese, but she rapidly learned Spanish and started using it both at school and at home. At the time she suffered the TBI, she had normal language development and schooling records; she attended the third grade of elementary school, the academic course corresponding to her age.
Study Design
A single-subject, open-label design was used. Figure 2 depicts the study design. At the beginning of the trial (week 0), DP was started at a very low dose (2.5 mg/day) and titrated up to 5 mg/day one month after initiating treatment (week 4). This DP dose was maintained and administered alone for 8 weeks (weeks 4 to 12) and was then combined with INT (INT1) for 2 weeks (weeks 12 to 14). Thereafter, subject P continued treatment with DP alone (5 mg/day) for 12 weeks (weeks 14 to 26), after which the drug was gradually tapered off over 4 weeks (weeks 26 to 30). This intervention phase was followed by a washout period of 4 weeks (weeks 30 to 34) and then by 2 weeks of INT alone (INT2) (weeks 34 to 36). Language and neuropsychological evaluations (LNE) were performed at different timepoints (LNE1, LNE2, LNE3, LNE4; as illustrated in Figure 2) in order to track the language and cognitive impact of the different interventions. In addition, to evaluate the duration of the potential gains achieved with each INT, six post-therapy naming evaluations (NE) were performed after each INT phase, in which naming performance for treated and untreated control words was assessed. A baseline NE (NE0), including all treated and untreated words, was performed before INT1. Evaluations and language therapies were performed by the first author (GD), a neuropsychologist with experience in aphasia testing and treatment.
Drug Treatment
The cholinergic agent DP was used in accordance with the statement of ethical principles for medical research involving human subjects of the Declaration of Helsinki (section 37: Unproven Interventions in Clinical Practice). The protocol of this study was approved by the local Ethical Research Committee (Provincial of Malaga, Spain). DP has been used in several developmental and acquired cognitive and behavioral disorders involving children and adolescents (Popper, 2000; Wilens et al., 2000; Hardan and Handen, 2002; Spencer and Biederman, 2002; Heller et al., 2004; Pityaratstian, 2005; Doyle et al., 2006; Hazell, 2007; Spiridigliozzi et al., 2007; Cubo et al., 2008; Kishnani et al., 2010; Buckley et al., 2011; Handen et al., 2011; Srivastava et al., 2011; Castellino et al., 2012; Sahu et al., 2013; Castellino et al., 2014; Lassaletta et al., 2015; Tamasaki et al., 2016; Thornton et al., 2016). Treatment with DP in this population has proven to be safe (Hardan and Handen, 2002; Heller et al., 2004; Spiridigliozzi et al., 2007; Kishnani et al., 2010; Handen et al., 2011; Castellino et al., 2012; Sahu et al., 2013; Thornton et al., 2016) and has demonstrated a good efficacy profile (Popper, 2000; Wilens et al., 2000; Hardan and Handen, 2002; Spencer and Biederman, 2002; Heller et al., 2004; Pityaratstian, 2005; Doyle et al., 2006; Hazell, 2007; Spiridigliozzi et al., 2007; Cubo et al., 2008; Buckley et al., 2011; Srivastava et al., 2011; Castellino et al., 2012; Castellino et al., 2014; Lassaletta et al., 2015). Therefore, we considered that the prescription of this agent for an unapproved use was appropriate in this particular case of CA. The dose of DP was chosen based on the prescriptions used in previous studies of DP in pediatric populations (Kishnani et al., 2010; Srivastava et al., 2011; Sahu et al., 2013), on the child's weight (Hardan and Handen, 2002; Kishnani et al., 2004; Castellino et al., 2012), and on the proven tolerability of this agent (Heller et al., 2004; Spiridigliozzi et al., 2007). Subject P's parents were provided with the package leaflet of DP, and they were also fully informed about the pharmacological characteristics, potential benefits, and adverse events of the drug. Written informed consent was obtained from subject P and her parents. During both the titration phase and the drug treatment, they were contacted regularly to detect potential adverse events and to track adherence to the drug treatment.
Intensive Naming Therapy (INT)
INT based on a hierarchical cueing procedure was administered 1.5 h per day, 7 days per week, first for 2 weeks combined with DP (INT1, Figure 2) and then for 2 more weeks administered alone during the DP washout phase (INT2, Figure 2), resulting in a total duration of 4 weeks (~42 h). The stimuli in each INT session were black-and-white pictures representing Spanish nouns presented on a computer screen. Naming therapy based on a cueing hierarchy has been shown to be effective in the treatment of naming deficits (Fridriksson et al., 2005; Green-Heredia et al., 2009; Best et al., 2013; Suárez-González et al., 2015). After each picture was presented, subject P was required to name the depicted stimulus. If she could not name the target picture within 20 s, a written phonological cue (i.e., the first syllable of the stimulus name) was presented beneath the picture, and the word stem was read aloud by the therapist. If subject P was still unable to name the target word, the full written name was presented underneath the picture and was read aloud by the therapist; after hearing it, subject P was asked to repeat the word aloud. A set of 153 pictures consisting of black-and-white line drawings of living beings and non-living things, all representing nouns, was selected. Half of these stimuli were trained in the two INTs (INT1 and INT2), and the other half were used as control items. The selection of these pictures was based on two criteria: (i) pictures that subject P consistently failed to name in the naming tests included in the LNEs prior to INT1 (LNE1 and LNE2); and (ii) pictures selected from her natural science textbook, a subject in which her parents reported marked naming difficulties. Specifically, 117 items were selected from the following naming tests: the object naming subtest of the Western Aphasia Battery-Revised ([WAB-R], Kertesz, 2007), the Snodgrass and Vanderwart Object Pictorial Set ([SVOPS], Snodgrass and Vanderwart, 1980), the Boston Naming Test ([BNT], Kaplan et al., 1983), the Nombela 2.0 Semantic Battery ([NSB], Moreno-Martínez and Rodríguez-Rojo, 2015), and two naming subtests of the Psycholinguistic Assessments of Language Processing in Aphasia ([PALPA 53 and PALPA 54], Kay et al., 1992; Valle and Cuetos, 1995). The pictures from her natural science book (36 items) were selected by the therapist.
The 153 stimuli were divided into two sets, one containing 77 pictures and the other 76 pictures, which served as the to-be-trained and control items, respectively, for INT1 and INT2. Specifically, 37/77 pictures were trained (hereinafter treated words) in the INT1 phase and 40/77 pictures corresponded to the treated words in the INT2 phase. The remaining pictures (36/76 and 40/76) were used as control items (hereinafter untreated words) in the six NEs performed after INT1 and INT2, respectively. In each INT session, the full set of treated words assigned to that INT was presented twice. To avoid associative learning between items, the presentation order of the words was randomized: 10 lists were created, each containing all the items assigned to the INT but in a different presentation order, and two lists were used in each daily session.
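The randomization scheme described above is straightforward to reproduce. Below is a minimal Python sketch, assuming the treated items are available as a list of picture names; the item labels are placeholders, not the study's actual stimuli:

```python
# Build randomized presentation lists for an INT phase: each list contains
# every treated item exactly once, in a different random order.
import random

def make_presentation_lists(items, n_lists=10, seed=0):
    rng = random.Random(seed)                # fixed seed for reproducibility
    lists = []
    for _ in range(n_lists):
        order = items[:]                     # copy so the original is untouched
        rng.shuffle(order)
        lists.append(order)
    return lists

# Placeholder item names; INT1 used 37 treated words in the actual study.
treated = [f"item_{i:02d}" for i in range(1, 38)]
daily_lists = make_presentation_lists(treated)
# Two lists per daily session, so the full set is presented twice:
session_1 = daily_lists[0] + daily_lists[1]
```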
Language and Neuropsychological Evaluations
In order to assess treatment-induced changes, a set of primary and secondary outcome measures covering language, executive functions, attention, and memory was selected. In addition, visuoconstructive and visuospatial functions were measured at baseline only. Note that the same outcome measures were used for both interventions (DP alone and the combined treatment of DP and INT), in line with the expected changes.
Outcome Measures
The primary outcome measures consisted of different measures of the WAB-R (Kertesz, 2007), specifically the aphasia quotient (WAB-R AQ) and the WAB-R subtest scores: information content and fluency in spontaneous speech, comprehension, repetition, and naming. Although they contribute to the WAB-R AQ, the individual WAB-R subtests were also included as primary outcome measures because they are sensitive to treatment-induced changes and may show differentiated evolution patterns (Berthier et al., 2003; Berthier et al., 2006). The secondary outcome measures included a set of tests selected to assess relevant aspects of language and other cognitive functions, especially attentional and executive functions. As with the primary outcome measures, the functions targeted by these tests were expected to improve with the treatments. The selected language tests were: (a) the SVOPS, BNT, and NSB to evaluate naming; (b) the Peabody Picture Vocabulary Test ([PPVT-III], Dunn et al., 2006) to evaluate comprehension via word-picture matching; (c) the PALPA (Kay et al., 1992; Valle and Cuetos, 1995) to evaluate repetition, naming, comprehension, and reading; (d) the Token Test-short version ([TT-sv], De Renzi and Vignolo, 1962) to evaluate comprehension of syntax and spatial relationships; and (e) the Controlled Oral Word Association Task ([COWAT], Borkowski et al., 1967) to assess phonological verbal fluency (see Table 2). These tests, as well as those included as primary outcome measures, were administered in each LNE. The selected memory and executive function tests included the Test of Memory and Learning ([TOMAL], Reynolds and Bigler, 1996). Furthermore, although they were not expected to change with treatment and were therefore not considered primary or secondary outcome measures, visuoconstructive and visuoperceptive functions were also assessed at baseline to estimate premorbid cognitive functioning. For this purpose, the following two tests were used: the Rey-Osterrieth Complex Figure ([ROCF], Osterrieth, 1944) and the Benton Laboratory of Neuropsychology Tests ([BLNT], Benton et al., 1994) (see Table 2). As some of the tests employed in the LNEs are widely used in the Spanish population but may be unfamiliar in English-speaking countries, a brief description of these tests is provided in the Supplementary Material.
Naming Evaluations (NE)
First, a baseline naming assessment (NE0) comprising the full set of 153 words was performed after treatment with DP alone and before INT1 (Figure 2). Then, in order to track the maintenance of gains in naming performance for treated words and the potential generalization to untreated ones, multiple NEs were performed after each INT. Specifically, six NEs were performed after each of INT1 and INT2: 20 min after the end of each INT (NE1-1 and NE2-1), and at days 2 (NE1-2 and NE2-2), 7 (NE1-3 and NE2-3), 21 (NE1-4 and NE2-4), 49 (NE1-5 and NE2-5), and 84 (NE1-6 and NE2-6) (Figure 2). In each NE, treated and untreated words were evaluated. The presentation order of the words in each NE was randomized. No feedback was provided to subject P during the NEs.
Control Group
Since no normative data exist for most of the language tests used in the evaluation of subject P, a control group of healthy children (classmates and relatives of subject P) was recruited in order to obtain reference scores for these tests. The group was composed of 7 children (4 boys and 3 girls) matched with subject P for age (8.9 ± 0.69 years; range: 8-10 years; Crawford's t, two-tailed = 0.136; p = 0.896), general intelligence (verbal IQ: subject P = 95; control group = 112.43 ± 20.02 [Crawford's t, two-tailed = -0.814; p = 0.446]; non-verbal IQ: subject P = 103; control group = 115.57 ± 15.43 [Crawford's t, two-tailed = -0.762; p = 0.475]), and sociocultural background. The control children were administered the WAB-R and other tests (SVOPS, NSB, PALPA, and COWAT). Healthy adults tend to show a ceiling effect on the WAB-R AQ (AQ ≥ 93.8/100), and subjects with scores below this cut-off are considered to have aphasia. Regarding the use of the WAB-R in children, it was reasoned that healthy children aged between 8 and 10 years with a high verbal intelligence quotient (IQ) (≥ 110) would perform well on the WAB-R.
The parents of the control children were informed about the aim of the study and written informed consent was obtained.
Image Acquisition
The MRI acquisition was performed at baseline (6 months after TBI) on a 3-T MRI scanner (Philips Gyroscan Intera, Best, The Netherlands) equipped with an eight-channel Philips SENSE head coil.
Lesion-Based Approach to Mapping Disconnection
Two different methods were used to gain knowledge about the direct and remote structural effects of the brain lesion: Tractotron and Disconnectome Maps, both included in the BCB Toolkit (http://toolkit.bcblab.com/; Foulon et al., 2018). In order to apply these methods, subject P's lesion was manually delineated on the T1-weighted image in native space using the MRIcron software (Rorden and Brett, 2000). Then, both the T1-weighted image and the binarized lesion mask were normalized to MNI space using Statistical Parametric Mapping 12 (SPM 12, www.fil.ion.ucl.ac.uk/spm/). The normalized lesion was mapped onto tractography reconstructions of white matter pathways obtained from a group of 10 healthy controls (Rojkova et al., 2016). The analyses focused on different language-related dorsal and ventral tracts, as these white matter pathways are commonly affected in individuals with aphasia (Ivanova et al., 2016). Three ventral tracts were studied: (1) the IFOF, connecting fronto-temporal regions and crossing from one lobe to the other through the extreme capsule; (2) the ILF, which connects the posterior inferior, middle, and superior temporal gyri with the temporal pole; and (3) the UF, which links the temporal pole with frontal areas (Catani and Thiebaut de Schotten, 2008). Three dorsal tracts were also explored, corresponding to the three segments of the AF: (1) the long segment, which connects the frontal cortex (including Broca's area and the premotor cortex) with the temporal cortex (including Wernicke's area and the middle and inferior temporal gyri); (2) the anterior segment, which connects the same frontal areas with the angular and supramarginal gyri in the inferior parietal cortex; and (3) the posterior segment, which connects the same parietal areas with the inferior and middle temporal gyri (Catani et al., 2005). Different measures were explored for each of the studied tracts. First, Tractotron provided the probability of a given tract being affected by the brain lesion (≥ 50% was considered pathological) and the percentage of damage to each tract. Second, the Disconnectome Maps software provided a spatial map representing the probability that remote areas are indirectly affected by the lesion. These indexes allowed us to explore the remote impact of the focal brain lesions on the brain circuitry. Thus, the normalized lesion of subject P was used as a seed to identify which tracts passed through the lesion. Subject P's disconnectome map was thresholded at a value of p > 0.9. A detailed description of these methods and software is reported in Foulon et al. (2018).
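To illustrate the final thresholding step, the sketch below loads a disconnectome map and keeps voxels with disconnection probability above 0.9, as applied to subject P's map. This assumes the BCB Toolkit output is a NIfTI probability volume and uses the third-party nibabel library; the file names are hypothetical:

```python
# Threshold a disconnectome probability map at p > 0.9. Voxel values are
# assumed to encode the probability that a voxel's tracts are disconnected
# by the lesion.
import nibabel as nib
import numpy as np

img = nib.load("subjectP_disconnectome.nii.gz")   # hypothetical file name
prob = img.get_fdata()

mask = prob > 0.9                                 # study's threshold
print(f"voxels with disconnection probability > 0.9: {mask.sum()}")

# Save the thresholded map for visualization, preserving the affine.
out = nib.Nifti1Image(mask.astype(np.uint8), img.affine, img.header)
nib.save(out, "subjectP_disconnectome_thr09.nii.gz")
```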
Statistical Analyses
First, in order to evaluate longitudinal changes due to treatment effects, subject P's performance on each test included in the LNEs was compared either to the performance of the matched control group or to normative data. Specifically, for those tests that do not provide normative data for subject P's age range, her performance was compared to that of the control group. Note that this served the main aim of the study, namely to establish the effect of the different treatments on aphasia recovery, seeking the return of subject P to average performance on the primary and secondary outcome measures. Statistical comparisons were performed using one-tailed Crawford's modified t-tests (Crawford et al., 2010), as in previous studies (Francois et al., 2016; Birba et al., 2017; Cervetto et al., 2018). This statistic allows the comparison between a single subject and a control group; it has proven robust for non-normal distributions and has low rates of type-I error. Effect sizes for all results are reported as point estimates (ZCC) (Crawford et al., 2010) (see Table S1). In all analyses, the alpha level was set at p < 0.05. For those tests reporting normative data, raw scores derived from subject P's performance were standardized (percentile or decatype) (for details, see Tables 1 and 2), unless specified otherwise.
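For reference, Crawford and Howell's modified t-test compares a single case against a small control sample by inflating the standard error to account for the control-sample size: t = (x − mean) / (SD · √((n + 1)/n)) on n − 1 degrees of freedom. A minimal Python sketch follows; the example numbers are the verbal-IQ values reported in the Control Group section, and the implementation is an illustration rather than the authors' actual analysis code:

```python
# Crawford & Howell's modified t-test: compare one case against a control group.
# t = (x - mean_c) / (sd_c * sqrt((n + 1) / n)), with df = n - 1.
import math
from scipy import stats

def crawford_t(case, ctrl_mean, ctrl_sd, n, tail="one"):
    t = (case - ctrl_mean) / (ctrl_sd * math.sqrt((n + 1) / n))
    p = stats.t.sf(abs(t), df=n - 1)         # one-tailed p
    if tail == "two":
        p *= 2
    z_cc = (case - ctrl_mean) / ctrl_sd      # point estimate of effect size
    return t, p, z_cc

# Verbal IQ: subject P = 95 vs. controls (mean 112.43, SD 20.02, n = 7).
t, p, z = crawford_t(95, 112.43, 20.02, 7, tail="two")
print(f"t = {t:.3f}, p = {p:.3f}, Zcc = {z:.2f}")   # t ≈ -0.814, p ≈ 0.446
```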
Second, results derived from each NE were analyzed in three ways: (i) to explore naming gains promoted by each INT, two-tailed McNemar tests were used to compare performance in the first NE after each INT against naming performance for the same words in NE0; (ii) to track the evolution of the potential gains found in NE1 1 and NE2 1, performance in each of the subsequent NEs (2-6) was compared to performance in the first evaluation after each INT (NE1 1 and NE2 1). Analyses (i) and (ii) were performed independently for the treated and untreated words; (iii) performance on treated and untreated words in each NE was compared via chi-squared tests (with Yates' correction).
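A minimal sketch (Python) of both tests follows; the 2x2 tables are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar

# Paired naming outcomes for the same words before (NE0) and after (NE1_1)
# an intervention; rows = NE0 (correct, incorrect), cols = NE1_1.
paired = np.array([[20, 2],    # correct -> correct, correct -> incorrect
                   [15, 13]])  # incorrect -> correct, incorrect -> incorrect
res = mcnemar(paired, exact=True)  # exact binomial version, two-tailed
print(f"McNemar p = {res.pvalue:.4f}")

# Treated vs. untreated words in one NE: rows = word set,
# cols = (correct, incorrect); Yates' continuity correction applied.
table = np.array([[35, 15],   # treated
                  [18, 32]])  # untreated
chi2, p, dof, _ = chi2_contingency(table, correction=True)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```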
Findings From Language and Neuropsychological Evaluation 1 (LNE1): Baseline
In relation to the primary outcome measures, subject P obtained a WAB-R AQ score of 78.4, which is below the cut-off score for adults (≤ 93.8) and significantly lower than the mean of the age-matched control group (95.13 ± 2.73; Crawford's t, one-tailed = -5.73; p ≤ 0.001). Her language deficits were characterized by impoverished information content with fluent yet anomic speech production. Specifically, significantly lower scores were found in the WAB-R subtests targeting information content and fluency in spontaneous speech, and in naming (see Tables 1 and S1), whereas performance in comprehension and repetition did not differ from that of the control group. According to the WAB taxonomic criteria, this profile was compatible with anomic aphasia (Kertesz, 1982). The mean WAB-R AQ of the control group (95.13) was above the cut-off score (93.8) for the clinical diagnosis of aphasia in adults (Kertesz, 1982). However, 3 of 7 control children obtained AQ scores slightly below the cut-off (92.4, 92.5, and 93.3). These children, who were the youngest (8 years), made a few errors in the comprehension of reversible sentences in the sequential commands subtest of the WAB-R. Despite this age-dependent limitation, the WAB-R was considered appropriate for administration to subject P. Regarding secondary outcome measures, subject P's scores were significantly lower than those of the control group on all selected naming measures (SVOPS, BNT, picture naming and semantic fluency [NSB], PALPA-53, and PALPA-54), auditory comprehension of nouns (word-picture matching [NSB], PALPA-47, and PALPA-52), auditory comprehension of sentences (PALPA-55), word repetition (PALPA-9), nonword repetition (PALPA-8), sentence repetition (PALPA-12), and reading (PALPA-25, PALPA-32, PALPA-36, PALPA-37, PALPA-48, and PALPA-56). In addition, performance in auditory comprehension of words (PPVT-III) and of sentences (TT-sv) was lower than the normative data for her age. Lastly, subject P's performance on the verbal fluency test (COWAT) did not differ from controls, although a trend toward significance was found (see Tables 1 and S1).
In relation to other cognitive functions, compared to normative data subject P showed poor performance on most of the executive function tests (see Table 2). However, executive function impairments were not generalized, since subject P showed high scores on tests measuring inhibition of a prepotent response (inhibition [FDT]). Furthermore, subject P's performance on most verbal and nonverbal memory tests was within the normal range of the normative data (see Table 2), except for reduced auditory-verbal short-term memory (Digit Span). The scores obtained by subject P on tests assessing visuoconstructive and visuoperceptive functions, which are mostly related to undamaged right hemisphere functioning, were within the normal range (Table 2).
Findings From Language and Neuropsychological Evaluation 2 (LNE2): DP Alone
Regarding primary outcome measures, the WAB-R AQ of subject P significantly improved after 12 weeks of DP treatment alone (see LNE2 column in Table 1). The AQ score increased 14.2 points (from 78.4 to 92.6), indicating that she could be considered a "good responder" to the pharmacological intervention (Cherney et al., 2010). In fact, in LNE2 the scores obtained in information content and fluency in spontaneous speech improved and were comparable to the performance of the control group. Statistically significant lower scores were found only in the naming subtest of the WAB-R, which remained moderately impaired (Tables 1 and S1).
In relation to secondary outcome measures, subject P showed improved naming abilities in noun retrieval (picture naming and semantic fluency [NSB]) as well as in auditory comprehension of words (word-picture matching [NSB], PPVT-III, PALPA-47) and sentences (TT-sv, PALPA-55). Likewise, improvements promoted by DP alone were found in word and nonword repetition (PALPA-9 and PALPA-8). Performance on these tests was comparable to controls and, in the case of the PPVT-III, fell within the normal range of the normative data. Comprehension improved slightly for written words (PALPA-48) and written sentences (PALPA-56), yet remained significantly lower than the performance of the control group. Conversely, sentence repetition (PALPA-12) showed a mild decrement (Table 1).
[Table footnotes (partially recovered): ... (2007); 4 Brickenkamp and Cubero (2002); 5 Reynolds and Bigler (1996); 6 Gardner (1981); 7 Arango-Lasprilla et al. (2017); 8 Rey et al. (1999). There are no children's normative data for the BLNT; the reference values used are based on an adult sample. † Standard scores ranging from 1 to 10 (decatypes) have, by definition, a mean of 5.5 and a standard deviation of 2. ‡ Percentiles (standard scores 1 to 99) have a median of 50. Underlined scores are below two standard deviations of the normative mean. § Test cut-off value; scores greater than the cut-off are considered within the normal range.]
As regards other cognitive functions (
Findings From Language and Neuropsychological Evaluation 3 (LNE3): DP-INT1
This evaluation assessed gains in language after two weeks (weeks 12-14) of DP-INT1 and a further 12 weeks of DP treatment. Concerning primary outcome measures, there were no statistically significant differences between subject P and the control group in either the WAB-R AQ or the WAB-R subtests, meaning that subject P achieved average performance in all language domains. This was the first evaluation in which she showed a naming score comparable to the controls (naming subtest of the WAB-R) (see Table 1).
Regarding secondary outcome measures, all improvements observed after DP treatment alone (LNE2) were maintained in LNE3 (as revealed by comparison of subject P's performance with the control group's). In addition, at this endpoint further improvements were observed in almost all language-related secondary outcome measures: (a) noun retrieval tests (SVOPS, BNT, picture naming and semantic fluency [NSB], and PALPA-53); (b) all auditory word comprehension tests (word-picture matching [NSB], PPVT-III, PALPA-47, and PALPA-52); (c) auditory sentence comprehension (PALPA-55, but not TT-sv); (d) nonword repetition (PALPA-8); and (e) written recognition of spoken words (PALPA-52) and comprehension of written sentences (PALPA-56). However, subject P's performance on the picture naming x frequency subtest (PALPA-54) remained significantly lower than that of the control group, as did sentence repetition and most of the reading measures. In relation to other cognitive functions, no relevant changes in measures of executive, attention, or memory functions were observed at LNE3.
Findings From Language and Neuropsychological Evaluation 4 (LNE4): Washout
In week 26, the dose of DP was gradually tapered off and was suspended at week 30. Regarding primary outcome measures, LNE4 (week 34) showed that the improvements in the primary outcome measures (WAB-R AQ and subtests) were maintained four weeks after withdrawal of DP (see Table 1); no statistically significant differences were found between subject P and the control group on any of the primary outcome measures at this point.
Concerning secondary outcome measures, the benefits observed in comprehension of auditory sentences (PALPA-55), nonword repetition (PALPA-8), and comprehension of written sentences (PALPA-56) remained unchanged. Performance on some naming tests dropped (naming subtest of the WAB-R, BNT, and SVOPS), although only the SVOPS score was significantly lower than the control group's. A slight decline in semantic fluency (NSB) and word comprehension (word-picture matching [NSB] and PPVT-III) was also observed, although performance did not differ from that of the control group (NSB) or remained within the normal range (PPVT-III). In relation to other cognitive functions, compared to normative data, subject P maintained a within-average level of selective and sustained attention (d2 Test), cognitive flexibility (FDT), and attentional fluctuation (d2 Test) after drug withdrawal. Processing speed was within the normal range when measured with the d2 Test, but was impaired when measured with the FDT.
[FIGURE 3 | Performance of subject P in the multiple naming evaluations (NE). The percentage of correct words in each evaluation is shown. NE0 indicates performance in the baseline NE performed before INT1. NE0 at the left of NE1 1-6 indicates pre-treatment performance for the treated and untreated words used in INT1 and NE1 1-6; NE0 at the left of NE2 1-6 indicates pre-treatment performance for the treated and untreated words used in INT2 and NE2 1-6. Six NEs were performed after INT1 (NE1 1-6) and after INT2 (NE2 1-6): 20 min after the end of each INT (NE1 1 and NE2 1), and at days 2, 7, 21, 49, and 84 after the end of each INT.]
In relation to untreated words, performance in NE2 1 did not significantly differ from performance in NE0 (McNemar, p = 0.250). Naming performance in NE2 2-6 was comparable to that on NE2 1 (for all comparisons: McNemar, p = 1).
Neuroimaging Findings
Lesion Location
MRI performed 6 months after the TBI showed left cortical tissue damage, mostly involving the inferior temporal gyrus (Brodmann area [BA] 37) and, to a lesser extent, the middle temporal gyrus (BA21) and the angular gyrus (BA39) (Figure 4A).
There was also focal cortical atrophy and gliosis in the subcortical white matter, causing a discrete retraction of the temporal horn of the left lateral ventricle. The ventriculoperitoneal shunt was correctly placed in the occipital horn of the right lateral ventricle.
Mapping Disconnection: Tractotron and Disconnectome Maps
Tractotron revealed that, in the left hemisphere, the anterior segment of the AF had a 48% probability of being directly affected by the lesion, whereas the long and posterior segments of the AF had a 98% probability. Ventrally, the ILF showed a 100% probability of involvement and the IFOF a probability of 92%, while the UF was unlikely to be affected (probability of 0%). In the right hemisphere, none of these tracts was damaged (all probabilities were 0%). The high probability of involvement found for the posterior and long segments of the AF, the ILF, and the IFOF is in line with the cortical damage observed in subject P, which affected the inferior and middle temporal gyri and the angular gyrus, regions connected by these tracts (Catani and Thiebaut de Schotten, 2008).
However, the probability that a given tract is affected does not indicate the amount of damage. To obtain this measure, the proportion of damage was extracted. The proportion of each tract affected by the lesion was: AF anterior segment, 0%; AF long segment, 29%; AF posterior segment, 32%; ILF, 13%; IFOF, 4%; UF, 0%. Finally, the distant cortical areas that showed a high probability of disconnection (> 80%) due to the TBI are in fact those connected by the affected white matter pathways identified by Tractotron (Figure 4B).
DISCUSSION
In the present intervention study, we described the case of subject P, a girl with chronic anomic aphasia secondary to a TBI in the left temporo-parietal region. She received three successive treatments: (1) DP alone; (2) a combination of DP and INT; and (3) INT alone. Multiple evaluations of language and other cognitive domains were performed at baseline and at different time points (Figure 2) in order to track changes promoted by these interventions. Results obtained from these evaluations were compared to a socio-demographically matched control group.
Several important findings of our study should be highlighted. First, at baseline, subject P showed significantly lower scores than the control group on the primary and secondary outcome measures targeting language, attentional, and executive functions. Second, treatment with DP alone (week 0 to week 12) induced improvements in primary outcome measures (see LNE2 results). Aphasia severity and scores in different language domains (fluency and information content during spontaneous speech, and naming) improved, and at this point subject P's performance was comparable to the control group's on all primary outcome measures (WAB-R AQ and WAB subtests) except naming. Third, combined treatment with DP-INT1 (week 12 to week 26, see LNE3 results) further increased the WAB-R AQ, placing the language deficits of subject P in the non-aphasic range. Fourth, the combined intervention provided further gains in picture naming, the most affected language function at baseline (LNE1). Fifth, secondary outcome measures improved with DP alone (LNE2), denoting the beneficial effect of the drug, and most of the differences from controls observed at baseline disappeared with combined treatment (DP-INT1; LNE3). Lastly, most gains provided by the DP intervention were stable 4 weeks after withdrawal. It is noteworthy that at the washout evaluation, the WAB-R AQ remained within the normal range compared to the control group. Language disorders in childhood often have important implications for everyday life and represent a risk factor for developing anxiety and social problems in adulthood (Brownlie et al., 2016; Ryan et al., 2018). Currently, the only available treatment for CA is speech-language therapy, and although it often promotes recovery of linguistic and other cognitive functions, restoration is far from complete. Research aimed at finding new therapeutic strategies to improve outcomes in CA remains underdeveloped, and overlooking the investigation of new therapeutic approaches may have negative consequences, such as hindering the development of language and communication skills during childhood and adolescence. There is now encouraging evidence derived from model-based interventions indicating that adult aphasia outcomes can be improved with intensive aphasia therapy and other therapeutic approaches (pharmacotherapy, non-invasive brain stimulation) (Pulvermüller and Berthier, 2008; Breitenstein et al., 2017; Fridriksson et al., 2018). Taking advantage of data on these new interventions in adults with aphasia, in the present case study we used a similar therapeutic approach, demonstrating for the first time that DP is safe and well tolerated in CA and can be used alone and in combination with a tailor-made aphasia therapy (e.g., INT) to boost recovery of language and cognitive deficits.
Pre-Treatment Behavioral Profile and Brain-Behavior Relationships
Baseline testing (LNE1) with the WAB-R classified the language disorder in subject P as anomic aphasia (Kertesz, 1982), yet she also displayed deficits in phonology and semantic processing. On the WAB-R, the deficits were mainly observed in spontaneous speech (information content and fluency) and naming. Furthermore, subject P showed significantly lower scores than the control group on most of the secondary outcome measures (auditory and visual-verbal comprehension, repetition, noun naming, and reading). Subject P also showed low performance on tests measuring executive functions, attention, and auditory-verbal short-term memory, manifested by slow processing speed, limited cognitive flexibility, low selective and sustained attention, and reduced verbal span. Impairments in executive and attentional functions are common after TBI due to diffuse cerebral damage that frequently affects the white matter bundles in the frontal and temporal lobes (Levin, 2003; Vik et al., 2006). Although memory dysfunction is usually associated with oral language deficits in children with TBI (Conde-Guzón et al., 2009), subject P's performance on the different memory subscales revealed that this function was preserved, except for a reduced digit span. In addition, performance on visuoconstructive and visuoperceptive tests was preserved.
Structural MRI in the chronic stage showed a large contusion in the left temporo-parietal cortex together with focal cortical atrophy and gliosis in the subcortical white matter. Our lesion-based approach suggests that the tract with the greatest proportion of damage was the posterior segment of the AF, which connects regions that were specifically damaged in subject P (i.e., the inferior parietal cortex and the ventral posterior temporal cortex). This segment is part of the indirect connection of the AF system implicated in verbal repetition (Forkel et al., 2020) and reading (Thiebaut de Schotten et al., 2014). Notice that at the baseline evaluation (LNE1, Table 1), both nonword (PALPA-8) and sentence (PALPA-12) repetition, as well as reading (PALPA-25, PALPA-32, PALPA-36, PALPA-37), were impaired. Ventrally, the ILF was the tract with the greatest proportion of damage. This tract runs in parallel from posterior to anterior parts of the temporal lobe and is implicated in lexical access (Herbet et al., 2019), which is consistent with the fact that naming was the main deficit of subject P. Thus, the high probability of involvement of these tracts, together with the observed cortical involvement of temporo-parietal areas (BA21, BA37, and BA39), may explain the prominent naming difficulties found in subject P, as well as the pattern of errors committed (semantic paraphasias). For instance, axonal degeneration of the ILF is related to naming deficits and the production of semantic paraphasias in post-stroke aphasia (McKinnon et al., 2018) and in patients with brain tumors. In addition, the ILF has been systematically implicated in semantic processing and lexical access (Nugiel et al., 2016; Herbet et al., 2019) and in word learning involving lexical-semantic association in healthy subjects (Ripollés et al., 2017). In this line, BA37, which was damaged in subject P, is an important cortical hub for two distinct networks implicated in visual recognition (perception) and semantic functions (Ardila et al., 2015), and its damage is associated with fluency, comprehension, repetition, and naming impairments after stroke (Gleichgerrcht et al., 2015). The subtle involvement of BA21 in the posterior middle temporal cortex may have altered semantic control for comprehension (Noonan et al., 2013). Finally, despite the small size of the parietal cortical damage, the angular and supramarginal gyri may be disconnected due to involvement of the posterior segment of the AF, as revealed by the lesion analyses. Therefore, although the lesions were not large enough to induce major disconnections, they were strategically placed to interrupt intrinsic connections within the left perisylvian language area. Unfortunately, since we were only able to perform an MRI study at baseline (see footnote 2), we could not compare pre- and post-treatment MRIs to explore the brain correlates of the observed improvements in naming.
[Footnote 2: Multimodal MRI studies at baseline and repeated scanning at different time points were not performed because exposing patients with ventricular shunts to prolonged 3-Tesla MRI procedures poses a significant risk of unintentional changes in shunt settings. Therefore, subject P only underwent a single, rapid acquisition of structural MRI. Despite this, reprogramming of the shunt was needed after the study.]
Drug Treatment Alone Improves Language and Cognitive Deficits
Although the beneficial action of the cholinesterase inhibitor DP is controversial in adult TBI, with studies showing both positive effects and lack of benefit (Walker et al., 2004; Warden et al., 2006; Shaw et al., 2013), our findings clearly show that CA may be improved with cholinergic potentiation. After 12 weeks of DP treatment (LNE2), a decrease in aphasia severity was found in subject P, as revealed by increased scores on both the WAB-R AQ and its subtests, except for naming. In fact, improvements were found for some naming tests (NSB subscale) but not for others (SVOPS, BNT, PALPA-53, PALPA-54). Treatment with DP alone also induced significant improvements in measures of verbal fluency (semantic and phonological), auditory-verbal comprehension (words and sentences), and word and nonword repetition. These linguistic improvements may be associated with enhancement of selective and sustained attention, which eventually favored phonological and lexical processing of these stimuli. This is consistent with the role of anticholinesterase drugs like DP in improving sustained attention (Spiridigliozzi et al., 2007) and language function (Heller et al., 2004) in children. The bulk of the lesion in subject P was in the left temporal cortex, and this lobe contains more choline acetyltransferase than its homologous counterpart (Amaducci et al., 1981; Hutsler and Gazzaniga, 1996). Therefore, a neurobiological explanation for this finding would be that the language improvements could be accounted for by cholinergic-induced neural plasticity in left perilesional temporal cortical areas and white matter tracts (ILF and IFOF), although the contribution of remote right cortical regions cannot be dismissed. The gains produced by DP in selective and sustained attention were not associated with improvement in other frontal executive functions, which most likely resulted from diffuse axonal injury and the pressure effects of acute hydrocephalus on frontal tissue. Although an increase in processing speed was observed on the d2 Test, performance on other tests evaluating this domain remained low. Likewise, there were slight improvements in cognitive flexibility (ENFEN), but this finding was not substantiated by other tests. Repetition and written comprehension of sentences, reading functions, and auditory-verbal short-term memory (Digit Span) also remained altered. This is in accord with the findings of Martin and Ayala (2004), who reported significant correlations between the severity of language impairment (in both phonological and lexical-semantic measures) and the size of digit and word span in individuals with aphasia.
Reading problems persisted in subject P. The fact that associative visual areas in the left inferior occipito-temporal cortex, such as the visual word form area (VWFA), were damaged might be the simplest explanation of the reading deficits in subject P. The VWFA is a region specifically devoted to the recognition of written words in literate persons (Cohen et al., 2000; López-Barroso et al., 2020), and its damage causes alexia. Although compensation by recruitment of the VWFA homolog in the right hemisphere can take place (Cohen et al., 2003), this plastic shift may require intensive training.
Combined Therapy Increases Gains Obtained With Drug Monotherapy
Treatment with DP alone improved language deficits in subject P. Nevertheless, recent evidence suggests that cholinergic stimulation in adults with TBI is useful when combined with environmental enrichment (De la Tremblaye et al., 2019). The current findings further support the importance of augmenting the effect of DP on brain tissue with INT. Almost all scores obtained under treatment with DP alone showed further improvements after two weeks of combination therapy (LNE3). Moreover, in comparison with baseline (LNE1) and the evaluation after DP alone (LNE2), the highest gains after combination therapy (LNE3) were found in several measures of naming production (see Table 1). Naming evaluations post-INT1 (NE1) under ongoing DP treatment (weeks 14-26) showed significantly better performance for treated items than for untreated ones. During this period, gains in treated items were maintained, whereas the low scores for untreated items remained unchanged.
DP was then gradually tapered off (weeks 26-30), followed by a washout period (weeks 30-34) and a new phase of INT (INT2). The post-washout evaluation (LNE4) showed that the improvements observed in the WAB-R AQ decreased slightly but remained comparable to the scores of the control sample and well above subject P's baseline score. At this point, the score on the naming subtest of the WAB-R showed a slight decrease, but decrements were more evident in other naming tasks (SVOPS, semantic fluency, BNT). By contrast, the benefits observed in other language tasks (fluency, word and nonword repetition, auditory sentence comprehension, sentence reading comprehension, and phonological and semantic fluency) were stable. Likewise, washout testing revealed that subject P maintained an above-average level on several attentional and cognitive flexibility measures. Naming evaluations post-INT2 alone (NE2) showed a tendency similar to the outcomes of the naming evaluations in NE1, except for a more pronounced decline in the last two NE2s.
Two of these results were unexpected. First, although the beneficial effects of DP-INT1 generalized to several language and cognitive domains, it was surprising that untreated items showed no improvement. The lack of generalization did not result from differences in the selection of treated and untreated words, because both sets of words were closely matched, controlling key linguistic variables. Although this negative evidence deserves further research, our results suggest that the effect of DP on untreated nouns was not as powerful as when the drug was combined with intensive noun training aimed at strengthening experience-dependent plasticity. Similarly, combining dexamphetamine with naming therapy in two subjects with chronic post-stroke aphasia improved treated nouns but not untreated ones, nor a control nonword reading task (Whiting et al., 2007). The second unexpected finding was that the results of the post-INT2 (NE2) evaluations (without pharmacotherapy) were similar to those obtained in the post-INT1 (NE1) evaluations, while subject P was still under DP treatment. A likely explanation is that the previous prolonged treatment with DP (duration: 26 weeks) induced long-lasting brain changes that were then profitably exploited by the application of INT2 after a short washout period (4 weeks). Thus, a lesson to be learned from this finding is that once the brain has been primed with a combined intervention (DP-INT1), it may be similarly responsive to a single modality of intervention (INT2 or a drug) applied at a later stage (see Berthier et al., 2009; Berthier, 2020).
Finally, the results of the present study should be interpreted in light of some limitations. First, this is an open-label study performed in a single subject; thus, randomized controlled trials in larger samples are strongly needed. Second, we initiated the drug treatment before aphasia therapy, so the effect of the naming training alone could only be evaluated after previous treatment with DP; other designs should therefore be evaluated in the future. Lastly, it cannot be ruled out that some beneficial changes in subject P resulted from the continued maturation of cognitive and language processes, which may be partially blended with the beneficial effects of the two therapeutic interventions. Yet this is unlikely, at least for naming ability, since no improvements were seen for untreated items, which served as a control. A further strategy to reduce the confounding factor of language and cognitive development in the outcomes of an intervention trial in CA is to perform multiple baseline assessments, which were not used in this study. Note that we performed a very comprehensive language and cognitive evaluation that took several days to complete, which may preclude the use of multiple baseline testing. Indeed, longer and repetitive evaluations are very tiring, particularly for children, and may reduce motivation, putting adherence to evaluation and treatment at risk. The rationale for using such a large test battery in subject P was to examine, for the first time, the effect of DP and INT not only on language functions but also on several other cognitive domains, which are commonly affected after TBI and may influence outcomes. To overcome this limitation, future studies may perform multiple baseline assessments in the most affected language domain(s) (e.g., naming in subject P) or in the domain(s) targeted for the intervention.
In summary, subject P, who presented with acquired aphasia after a TBI involving the left temporo-parietal region, showed significant improvement of anomia and related cognitive deficits through the use of a cholinergic agent (DP), alone and in combination with INT.
DATA AVAILABILITY STATEMENT
The dataset supporting the findings of this study will be made available upon request from the corresponding author.
ETHICS STATEMENT
The Ethics Research Committee Provincial de Málaga approved this study.
AUTHOR CONTRIBUTIONS
All authors contributed to the article and approved the submitted version. GD, MM, MT-P, MB, LE, and DL-B were involved in conception and design, acquisition of data, or analysis and interpretation of data. GD, MM, MT-P, LM-C, LE, and DL-B were involved in cognitive and language assessment. GD, MT-P, MB, and DL-B interpreted neuroimaging data. GD, MT-P, MB, and DL-B drafted the article and revised it critically for important intellectual content.
ACKNOWLEDGMENTS
Data analysis included in this study and the writing of the article were carried out while the first author was at Sheffield Hallam University (United Kingdom) under the guidance of Professor Karen Sage. The authors thank the parents of subject P and the healthy control children for their kind participation in the study. Thanks also go to Alba Magarín and María Castillo for collaborating in the assessment of the control children. | 2020-07-31T13:13:44.542Z | 2020-07-31T00:00:00.000 | {
"year": 2020,
"sha1": "dccbd04c47ae1fd43b8d385df96c40bd4f03a307",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.01144/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dccbd04c47ae1fd43b8d385df96c40bd4f03a307",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14026318 | pes2o/s2orc | v3-fos-license | Two-loop renormalization group analysis of hadronic decays of a charged Higgs boson
We calculate next-to-leading QCD corrections to the decay $H^+ \to u\bar d$ for generic up and down quarks in the final state. A recently developed algorithm for the evaluation of massive two-loop Feynman diagrams is employed to calculate the renormalization constants of the charged Higgs boson. The origin and summation of large logarithmic corrections to the decay rate of the top quark into a lighter charged Higgs boson are also explained.
Introduction
The accelerators planned for the near future will provide insight into physics at the TeV energy scale and thus probe a region especially interesting from the point of view of electroweak interactions. We have therefore recently observed increased interest in various aspects of the phenomenology of electroweak symmetry breaking, in the framework of the Standard Model and its extensions, which predict one or more doublets of Higgs bosons.
As far as the experimental detection of Higgs particles is concerned, detailed knowledge of their decay properties is of special interest. In the present paper we concentrate on hadronic decays of charged Higgs bosons, predicted e.g. by the Minimal Supersymmetric Model (see ref. [1] for a review and further references). In the case of the Standard Model Higgs boson, hadronic decays have been analyzed in a number of publications. QCD and QED corrections were first calculated in [2], where it was noted that as the ratio of the mass of the decaying scalar particle to that of the fermions in the final state increases, the one-loop corrections diverge logarithmically. This problem was solved by renormalizing the quark mass at the energy scale equal to the mass of the decaying particle, thus absorbing the large correction into the tree-level decay rate expressed in terms of the running mass. This approach was subsequently extended by means of the renormalization group technique in refs. [3,4] (see also ref. [1] for a clear summary), where next-to-leading QCD corrections were summed in the case of large Higgs boson mass. The effect of three-loop QCD corrections was first calculated in ref. [5] and further analyzed in ref. [6]. The leading logarithmic approximation and various ways to parametrize the next-to-leading corrections to the Standard Model Higgs boson are the subject of several recent studies [7,8,9].
In the minimal extension of the Standard Model suggested by supersymmetry there are two Higgs boson doublets, and one of the charged modes becomes physical. It is therefore of great importance to examine the phenomenology of such charged scalar particles, since their discovery would give information about the theory underlying the Standard Model. Various radiative corrections to hadronic decays of charged scalar particles have been calculated and published recently: one-loop QCD corrections and the sum of leading logarithms can be found in [10,11], and the leading electroweak effects in the limit of large top quark mass in [12,13] (see also [14]).
The purpose of the present paper is to apply the renormalization group technique to calculate next-to-leading QCD corrections to the process H+ → ud, where u and d represent generic up- and down-type quarks, respectively. In section 2 we describe the theoretical framework of this calculation, and in section 3 we present the calculation of the two-loop mass and wave function renormalization constants of a charged Higgs boson. The results for the corrected rate of the Higgs decay are presented in section 4. In section 5 we digress to discuss an application of the summation of leading logarithms to the closely related process t → H+b. Our conclusions are given in section 6.
Decay rate and operator product expansion
As has been demonstrated in refs. [3,4], the hadronic decay rate of a Higgs boson can be represented in terms of C_0, the coefficient of the unit operator in the operator product expansion of the correlator of the scalar currents. In the present case the scalar current involves Z'_1, the renormalization constant of the charged Higgs-fermion vertex, the chiral projection operators R = (1+γ_5)/2 and L = (1−γ_5)/2, and coefficients a and b that depend on the specific model. We consider two models characterized by the absence of flavour-changing neutral currents, described in detail in [1], where references to the original papers can also be found. In model I both coefficients a and b are proportional to cot β, whereas in model II, which corresponds to the Higgs sector of the Minimal Supersymmetric Standard Model, the up-type coupling is proportional to m_u cot β and the down-type coupling to m_d tan β; here tan β is the ratio of the vacuum expectation values of the two Higgs doublets. We now describe the procedure of calculating next-to-leading QCD contributions to the coefficient function C_0(q²). Following ref. [3] and using the methods described in [16], one derives the renormalization group equation for space-like values of the argument of C_0 (q² < 0). The solution to this equation is found in the form of an expansion in the QCD coupling constant g_s and in the ratios of masses m_{u,d}/m_H (below we also use the common notation m_q ≡ m_{u,d} for the quark masses).
Keeping only the first two terms in the mass expansion is justified in the region far above the threshold of ud production; near the threshold not only is this expansion insufficient, but the perturbative treatment of the vacuum polarization diagrams is also questionable.
While the general method follows ref. [3] closely and needs no further discussion here, we want to concentrate on the novel feature of the present calculation, namely the computation of the renormalization constants of the charged Higgs boson.
Derivation of renormalization constants
We work in D = 4 − ω dimensional space, considering γ_5 to be anticommuting with the other γ-matrices, with γ_5² = 1. We may use such a scheme because there is no anomaly problem in the case considered (see, e.g., ref. [15]). Solution of the renormalization group equation requires knowledge of the 1/ω poles of the quantities S and T, built from Z'_3 and Z_{m_H}, the wave function and mass renormalization constants of the charged Higgs field. Hence we have to calculate the divergent parts of the self-energy diagrams depicted in figure 1. It should be noted that in the present case of two different quark masses in the loop, the diagrams corresponding to figs. 1(b) and 1(d) should be considered together with their counterparts with corrections on the other quark line; their sum is of course symmetric under m_u ↔ m_d.
A modification of the method developed in ref. [17] enabled us to obtain exact expressions for the divergent parts of all diagrams presented in figure 1, valid for any values of the external momentum k, m_u, and m_d. Since in the present paper we are mainly interested in the expansion in m_q²/m_H², we expand the relevant integrals in m_q²/k², keeping only the k² and k²(m_q²/k²) = m_q² terms. We first give the one-loop result (see figure 1(a)) and then the sum of all two-loop-order contributions (figures 1(b,c,d,e) together with their counterparts), eq. (8). In these formulae the colour factors are N_C = 3 and C_F = 4/3. It is remarkable that the terms containing ln(m_q²) and ln(−k²), which occur in the expressions for separate diagrams, disappear in the full sum (8). In particular, this fact enables us to consider the analytic continuation to time-like values of the momentum without difficulty. It should also be noted that the factors of π^{−ω/2} Γ(1 + ω/2) are included in the definition of the coupling constants g² and g_s², as is usually done in the framework of the MS scheme (this is also equivalent to a redefinition of the 1/ω poles).
We can now calculate the coefficients S_n and T_n of the 1/ω^n poles in S and T, including terms of order g_s², with s^(1) = 10/3 and t^(1) = 8/3. We have displayed S_2 and T_2 because they illustrate the agreement of our calculation with the equations following from the renormalization group analysis [3,16]: both these quantities can be found in the lowest relevant order of perturbation theory from the renormalization group relations, and we reproduce our expressions by putting γ_m^(0) = −8 (see footnote 5 for the conventions).
Corrected decay width
Here we present the formula for the decay rate of the charged Higgs boson including next-to-leading corrections. We define L = ln(m_H²/Λ²_QCD) and obtain the decay width, eq. (14), with the mass correction δ given by eq. (15). We have expressed the decay width of the charged Higgs boson in terms of the renormalization-group-invariant masses of the quarks m̄_q (q = u, d), and the coefficients â and b̂ defined by equations (4) and (5) with m_q replaced by m̄_q. Although eqs. (14) and (15) correspond to the partial decay width H+ → ud, it is clear that the main contribution to the sum over generations will be given by the term with the maximal quark masses allowed.
[Footnote 5: Here and below, γ_m^(n) and β_n correspond to the coefficients of the expansion (in g_s) of the anomalous dimension of mass, γ_m(g_s), and the beta function, β(g_s). In the normalization used (see, e.g., ref. [3] and references therein) we have γ_m^(1) = −404/3 + (40/9)N_F, where N_F is the number of quark flavours.]
[Footnote 6: If we formally put m̄_u = m̄_d = â = b̂ ≡ m̄_q in eqs. (14) and (15), we obtain the correct result for the partial decay width of a neutral Higgs into a q̄q pair (see eqs. (3.21)-(3.22) in [3]).]
Following the method of refs. [3,18], we use the threshold condition stating that the running mass of the quark at the energy scale of quark-antiquark pair production is equal to half this energy; from this condition we obtain the initial condition for the running mass, m(4m²) = m.
We now want to visualize the magnitude of the leading and next-to-leading order corrections. The leading order correction can be obtained from equations (14) and (15) by dropping all terms divided by L = ln(m_H²/Λ²_QCD). In the case of model II, our formulae in the leading logarithmic approximation reproduce the results of ref. [10] for both reactions H+ → cs and H+ → tb. To our knowledge, the corrections in model I have not been analyzed so far, even in the leading order. We present both the leading and the next-to-leading logarithmic corrections to the rate of the decay H+ → tb in fig. 2(a-c) for tan β = 2, for which this decay has a similar rate in both models (we take m_t = 150 GeV and m_b = 4.5 GeV). It turns out that although the leading corrections decrease the expected width of the charged Higgs boson, the next-to-leading terms can increase it by a large factor, especially for a light Higgs mass. On the other hand, this effect may be an artifact of the choice of the threshold condition. The sensitivity of the result to this choice is shown in fig. 2, where we present results using two different conditions: (a,b) m(4m²) = m and (c) m(m²) = m (where m denotes the running mass of the quark). The dependence on the initial condition has also been discussed in much detail in ref. [25].
The size of the corrections also depends strongly on the value of tan β, as can be seen in fig. 3. For this plot we have used the usual condition m(4m²) = m, and we see that for large tan β, where the decay is dominated by the coupling proportional to the bottom quark mass, the next-to-leading corrections slightly decrease the effect of the leading ones. The effect of an increased width mentioned above is therefore due to the corrections to the top quark mass; this may signal an insufficiency of the expansion in m_q/m_H for lighter Higgs bosons. We are going to address this issue in the future (see also ref. [19]).
On the decay t → H+b
The same term in the Lagrangian that is responsible for the decay of the charged Higgs boson into quarks will also enable a sufficiently heavy top quark to decay into a bottom quark and an H+. Both electroweak [20,21] and QCD [22,23,24] corrections to this decay have been studied at the one-loop level. It has been found [24] that in the two-Higgs-doublet model predicted by supersymmetry the relative size of the QCD corrections becomes large for growing values of tan β. We would like to discuss this effect here in order to demonstrate that this large correction can be absorbed into the Born rate if one uses the running mass of the b quark, just as in the case of the Standard Model Higgs boson [2,25].
For this purpose we compute the tree-level decay rate in the limit of very large tan β and large top quark mass, introducing a shorthand notation for the ratios of the relevant masses. The first-order QCD corrections, calculated in ref. [24], can be expressed in this limit in terms of coefficient functions G_i, whose explicit formulae can be found in ref. [24].
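As a hedged reconstruction of the tree-level rate in this limit (our notation, not necessarily the paper's: $\epsilon \equiv m_{H^+}^2/m_t^2$, with the $b$-quark mass neglected in the kinematics), the standard large-$\tan\beta$ Born width reads $\Gamma_0(t \to H^+ b) \simeq \frac{G_F\, m_t\, m_b^2 \tan^2\beta}{8\sqrt{2}\,\pi}\,(1-\epsilon)^2$; this is a textbook form consistent with the quoted limits, rather than the paper's exact equations.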
Here we only need the terms of order ln ε. Using these expressions we can calculate the asymptotic value of the first-order corrections for large values of tan β and m_t. We see that for α_s ≈ 0.1, m_b = 4.5 GeV, and m_t = 100 GeV this correction is of the order of −40%, in agreement with the diagrams presented in [24]. The size of the corrections becomes even larger as the mass of the top quark increases, and eventually the one-loop corrected decay rate becomes negative; such large corrections are a sign of a breakdown of perturbation theory. However, it is possible to avoid the large corrections by renormalizing the mass of the b quark not on the mass shell but at the energy scale characteristic of the process, which is the mass of the top quark. The running mass of the bottom quark at this energy is obtained with N_F = 6 quark flavours, and we take Λ_QCD = 150 MeV (in the MS scheme). We can expand this expression in a series in the coupling constant α_s, and it can then be seen from formula (21) that for large tan β and m_t the one-loop corrected decay rate approaches the Born rate expressed in terms of the running b quark mass. In figure 4 we show the dependence of the decay rate of the top quark on tan β. We note that for large values of tan β the QCD-corrected rate is not much different from the Born rate expressed in terms of the running b quark mass (22), and that we no longer face the problem of unreasonably large corrections.
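For a numerical illustration of this leading-logarithm resummation, the following minimal sketch (Python) uses the standard one-loop formulae, α_s(μ) = 12π/[(33 − 2N_F) ln(μ²/Λ²)] and m̄(μ) = m̄(μ₀)[α_s(μ)/α_s(μ₀)]^{12/(33−2N_F)}, with the inputs quoted in the text. These textbook expressions play the role of the paper's running-mass formula and its α_s expansion, whose exact equations are not reproduced here.

```python
import math

LAMBDA_QCD = 0.150   # GeV, MS-bar, as quoted in the text
N_F = 6              # number of quark flavours used in the text

def alpha_s(mu):
    """One-loop strong coupling alpha_s(mu)."""
    b0 = 33.0 - 2.0 * N_F
    return 12.0 * math.pi / (b0 * math.log(mu**2 / LAMBDA_QCD**2))

def running_mass(m0, mu0, mu):
    """Leading-order running quark mass, m(mu0) evolved to scale mu."""
    exponent = 12.0 / (33.0 - 2.0 * N_F)
    return m0 * (alpha_s(mu) / alpha_s(mu0)) ** exponent

m_b, m_t = 4.5, 100.0  # GeV, values used in the text
resummed = running_mass(m_b, m_b, m_t)

# First-order expansion: m_b(m_t) ~ m_b [1 - (alpha_s/pi) ln(m_t^2/m_b^2)],
# exhibiting the large logarithm that makes fixed-order corrections big.
a_s = alpha_s(m_t)
first_order = m_b * (1.0 - a_s / math.pi * math.log(m_t**2 / m_b**2))

print(f"alpha_s(m_t)      = {a_s:.3f}")
print(f"m_b(m_t) resummed = {resummed:.2f} GeV")
print(f"m_b(m_t) O(a_s)   = {first_order:.2f} GeV")
```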
On the other hand, for small values of tan β the Born rate remains approximately unchanged when expressed in terms of the running b mass. The reason is that the dominant coupling of the quarks to the charged Higgs in this region is m_t cot β, and the mass of the b quark does not play any important role. The same can be said about the analysis of the top quark decay in the non-supersymmetric two-Higgs-doublet model, which explains why no large logarithmic corrections were found there for any values of tan β [24].
Summary
We have found leading and next-to-leading order corrections to the decay width of the charged Higgs boson in the framework of two models. In the case of the leading corrections in the model motivated by supersymmetry we confirm the previously published formulae [10]; the remaining results are new. For a heavy Higgs boson or for large values of tan β we find that the next-to-leading order corrections sizably decrease the effect of the leading order corrections and increase the final result for the decay rate. We have also examined the process t → H+b, explaining the origin of the large logarithmic corrections found in a previous paper [24].
[Figure 4 caption (partially recovered): ... GeV and α_s = 0.1: Born rate (long dash), rate including first-order QCD corrections (short dash), and the improved Born rate (solid line).] | 2014-10-01T00:00:00.000Z | 1993-09-08T00:00:00.000 | {
"year": 1993,
"sha1": "dae49391640a47b7aa0f2afad4168047181d1036",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/9309245",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c6c7fd55947aa0b0d9ad91fc3204d20a0bc14d66",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
240339332 | pes2o/s2orc | v3-fos-license | Prediction of Human Population Responses to Toxic Compounds by a Collaborative Competition
The ability to computationally predict the effects of toxic compounds on humans would help address the shortcomings of existing chemical safety testing. Here, we report the outcomes of a community-based DREAM challenge to predict the toxicity of environmental compounds with potential adverse health effects for human populations. Our research quantified cytotoxicity for 156 compounds across 884 lymphoblastoid cell lines, for which genotype and transcriptional data are available as part of the Tox21 1000 Genomes Project. Challenge participants created algorithms to predict interindividual variability in toxic response from genomic profiles, and to predict population-level cytotoxicity parameters from the structural attributes of the environmental compounds considered. A total of 179 submitted predictions were evaluated against an experimental data set to which participants were blinded. The cytotoxicity predictions performed better than random, showing modest correlations, consistent with observations for other complex-trait genomic predictions. In contrast, the accuracy of population-level predictions of response to a variety of compounds proved higher. These outcomes highlight the possibility of forecasting health risks associated with unknown compounds, even though risk estimation accuracy remains suboptimal. Most computational means of predicting chemical toxicity are based on non-mechanistic cheminformatics solutions; they typically rely on descriptors from QSAR arsenals, which relate only loosely to chemical structures, and most employ black-box machine learning algorithms. Be that as it may, while such machine learning models may have lower capacities for generalization and interpretability, they often achieve high accuracy in predicting a variety of toxicity outcomes, as reflected unambiguously by the results of the Tox21 competition. Capitalizing on the ability of present-day artificial intelligence (AI) to handle the Tox21 benchmark data, we used a series of 2D-rendered chemical drawings, with no chemical descriptors whatsoever: unprocessed 2D molecular sketches were fed to a convolutional neural network (2DConvNet), and we demonstrated that today's image recognition technology yields prediction accuracy comparable to cutting-edge cheminformatics tools. Moreover, the image-based 2DConvNet model was evaluated comparatively on an external set of compounds from the Prestwick chemical library, leading to the experimental recognition of substantial and previously undocumented antiandrogen tendencies for diverse drugs in well-established generic categories.
INTRODUCTION
Determining the distribution of toxic responses in a given population would help establish safe exposure levels for alien compounds and identify the sections of the populace at significantly high risk of adverse effects from contamination. Present risk assessment cannot account for the individual disparities that exist in response to chemical exposure. In addition, standard safety testing is performed on only a small fraction of the environmental compounds already in existence (Judson, R. et al., 2009), and it employs costly, time-consuming animal models that are not consistent in reflecting the safety profiles of individuals (Jacobs, A.C. & Hatfield, 2013). Algorithms with the capacity to supply correct in silico predictions of human safety risks could provide an accurate and cost-efficient tool for identifying possible risks to given populaces (Bynagari, 2014). Nonetheless, predictions in the past have been hampered by the scarcity of data regarding population variability as well as the drawbacks associated with extrapolation from prototype organisms (Zeise, L. et al., 2013; Dorne, J.L.C.M., 2010).
The development of high-throughput in vitro toxicity studies, the utilization of human-based models (Abdo, N. et al., 2015), and rapidly decreasing sequencing costs have enabled sizable, genetically differentiated populations to undergo characterization. High-throughput in vitro systems have been used successfully to assess transcriptional (Burczynski, M.E. et al., 2000) and phenotypic (Uehara, T. et al., 2011) changes in response to exposure to the compounds considered (Kleinstreuer, N.C. et al., 2014).
Increasingly, genomically characterized cell lines (Caliskan, M., Cusanovich, D.A., Ober, C. & Gilad, Y., 2011) with ever-decreasing non-genetic sources of variation have been employed to identify genetic variants and transcripts associated with both in vitro and clinical drug responses (refs. 11 and 12). These technologies make it possible to run automated toxicity tests for an extensive array of compounds in human cell lines (Ganapathy, 2015), in order to assess the level of responses in the population and to examine the variation between risk profiles across its members (ref. 13). This project is a salient component of a community-facing open challenge within the Dialogue on Reverse Engineering Assessment and Methods, also known as DREAM (Margolin, A.A. et al., 2013; Costello, J.C. et al., 2014).
The researchers who participated in the DREAM project were asked to predict interpersonal variability in cytotoxic response based on genomic and transcriptional profiles. The challenge also had them predict population-level cytotoxicity parameters across chemicals based on the structural characteristics of the compounds involved. Cellular toxicity was examined for 156 compounds across lymphoblastoid cell lines derived from 884 individuals from distinct regional subpopulations sourced from Asia, Europe, Africa, and the Americas (1000 Genomes Project Consortium et al., 2012). The transcriptional and genetic information obtained from these cell lines was available under the auspices of the 1000 Genomes Project (1000 Genomes Project Consortium et al., 2010). The data set possesses double the count of cell lines and three times the number of compounds of the previously largest study (Brown, C.C. et al., 2014). This paper evaluated the submitted cutting-edge modeling approaches in order to benchmark the most reliable existing practices in the predictive modeling universe. Additionally, the community challenge was able to pinpoint algorithms capable of predicting, with better-than-random correctness, individual and group-level responses to various compounds using only genomic information. Although these outcomes represent an improvement over previous efforts to forecast cytotoxic response, significant improvements in predictive correctness are still critical.
The cytotoxicity data employed in the DREAM contest consist of EC10 values generated from the cell lines in response to the 156 common environmental compounds. Participants were equipped with a training set of cytotoxicity data, provided for 620 cell lines and 106 compounds, coupled with genotype data for all cell lines, RNA-seq data for 337 cell lines, and chemical characteristics for all compounds. DREAM was split into two independent subchallenges.
In the first subchallenge, participants were requested to forecast EC10 values for a held-out test set of 264 cell lines in response to the 106 environmental compounds; notably, only 91 toxic compounds were used in the final scoring procedure. In the second subchallenge, participants were asked to predict population-level parameters, namely the median EC10 values and the 5th-95th percentile interquantile distance, for a separate test set of 50 additional compounds.
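For concreteness, a minimal sketch (Python) of the second subchallenge's target quantities follows; the EC10 matrix is randomly generated here, and the 5th-95th percentile reading of the interquantile distance is our interpretation of the text rather than a verified challenge specification.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical EC10 matrix: rows = cell lines, columns = compounds.
ec10 = rng.lognormal(mean=1.0, sigma=0.5, size=(884, 50))

# Population-level parameters per compound: the median EC10 and the
# spread of the response distribution (5th-95th percentile distance).
median_ec10 = np.median(ec10, axis=0)
iq_distance = np.percentile(ec10, 95, axis=0) - np.percentile(ec10, 5, axis=0)

for j in range(3):  # show the first few compounds
    print(f"compound {j}: median EC10 = {median_ec10[j]:.2f}, "
          f"5-95 interquantile distance = {iq_distance[j]:.2f}")
```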
PARTICIPATION IN THE DREAM CHALLENGE
A total of 213 people from over 30 countries registered to enter the NIEHS-NCATS-UNC DREAM Toxicogenetics Challenge; the acronym describes the collaboration between the National Institute of Environmental Health Sciences, the National Center for Advancing Translational Sciences, and the University of North Carolina. Participants were supplied with data subsets to learn models for a period of 3 months, and the models were then evaluated on an additional subset of test data that participants were not allowed to see. The training data included, first, a per-cell-line quantification of cytotoxicity susceptibility, expressed as EC10 values, i.e., the concentration at which a 10 percent decrease in viability occurred; 106 compounds were tested across 337 cell lines.
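To illustrate how an EC10 value can be extracted from a concentration-viability curve, the sketch below (Python) fits a simple Hill-type model to synthetic data and solves for the concentration giving a 10% viability decrease; this functional form is a common choice, not necessarily the one used in the challenge's own curve fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, h):
    """Hill-type viability curve: 1 at conc = 0, decreasing with dose."""
    return 1.0 / (1.0 + (conc / ec50) ** h)

# Synthetic dose-response data for one cell line / compound pair.
rng = np.random.default_rng(1)
conc = np.logspace(-2, 2, 9)                       # micromolar, hypothetical
viab = hill(conc, 5.0, 1.2) + rng.normal(0, 0.02, conc.size)

(ec50_fit, h_fit), _ = curve_fit(hill, conc, viab, p0=(1.0, 1.0))

# EC10: viability = 0.9  =>  (c/EC50)^h = 1/9  =>  c = EC50 * (1/9)^(1/h)
ec10 = ec50_fit * (1.0 / 9.0) ** (1.0 / h_fit)
print(f"fitted EC50 = {ec50_fit:.2f}, Hill slope = {h_fit:.2f}, EC10 = {ec10:.2f}")
```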
The training data also included, second, genotypes for all 884 cell lines; third, RNA-seq-based measurements of gene transcripts for 337 cell lines; and fourth, the structural characteristics of all 156 compounds. A total of 34 research groups submitted 99 predictions of interpersonal variability for the first subchallenge, and another 23 research teams submitted 80 predictions of population-level toxicity parameters for the second subchallenge. The DREAM Toxicogenetics Challenge offered an unmatched opportunity to compare predictive performance across an extensive array of state-of-the-art approaches to the prediction of cytotoxic response to environmental compounds.
The First Sub Challenge: Determining Interpersonal Variability
In this sub challenge, models were evaluated on their capacity to predict EC10 values in a blinded test set of EC10 values measured in 264 cell lines that were not part of the training set. Prediction accuracy was graded using a pair of metrics. The first was the Pearson correlation (r), which evaluates the linear dependence between predicted and experimentally measured EC10 values. The second was a rank-based metric, the probabilistic C-index (pCi), which accounts for the probabilistic nature of the gold standard, arising from technical noise in the measurements, by evaluating the concordance between cell line cytotoxicity ranks.
Scoring analyses were restricted to 91 compounds, excluding 15 that did not induce cytotoxicity in any individual, in order to avoid introducing noise into the rankings. For both metrics, each team's overall ranking was obtained by ranking the teams separately for every compound and then averaging across all compounds. We first assessed whether the predictions were significantly better than random (Neogy & Paruchuri, 2014) by comparing the average (r) and the average pCi, computed across compounds for each submission, with a corresponding null model of randomly sampled, empirically derived EC10 values.
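The scoring logic just described can be summarized in a short sketch. The code below is our illustration, not the challenge's actual scoring implementation; matrix shapes and names are hypothetical. It computes a per-compound Pearson correlation for each submission, derives an overall team ranking by averaging per-compound ranks, and builds a null distribution by shuffling the empirically observed EC10 values across cell lines.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def score_submission(pred, true):
    """Average per-compound Pearson r for one submission.
    `pred` and `true` are (n_compounds, n_cell_lines) arrays."""
    return np.mean([pearsonr(p, t)[0] for p, t in zip(pred, true)])

def null_scores(true, n_perm=1000):
    """Null model: 'predictions' drawn by permuting the empirically
    observed EC10 values across cell lines, compound by compound."""
    out = []
    for _ in range(n_perm):
        shuffled = np.stack([rng.permutation(row) for row in true])
        out.append(score_submission(shuffled, true))
    return np.array(out)

def overall_ranking(per_compound_scores):
    """Rank teams separately for every compound, then average the
    ranks across compounds; input is (n_teams, n_compounds)."""
    # argsort of argsort yields ranks; negate so higher score = rank 1
    ranks = np.argsort(np.argsort(-per_compound_scores, axis=0), axis=0) + 1
    return ranks.mean(axis=1)
```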
Of the 99 submissions, the null hypothesis of randomly generated predictions could be rejected at a false discovery rate (FDR) below 0.05, which corrects for multiple hypothesis testing, for 46 submissions using (r), 47 using pCi, and 42 using both metrics. Across all compounds considered, the average values of (r) and pCi were quite modest, with mean values of 0.07 and 0.51, respectively. This suggests that cytotoxic response to chemical exposure is generally not well predicted from single nucleotide polymorphism data alone. Although average predictive capacity was low, performance was not uniform across compounds: predictive performance across compounds ranged from −0.21 to 0.28 in terms of (r) values.
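FDR control of this kind is, in common practice, implemented with the Benjamini-Hochberg procedure; assuming that choice (the paper does not name the exact procedure), a compact sketch is:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    null hypotheses rejected at false discovery rate `alpha`."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject
```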
For pCi values, performance ranged from 0.45 to 0.56. We tested whether the cytotoxicity of each compound could be predicted better than chance by comparing the predicted EC10 values of all teams against the null, random model. This analysis verified that, despite some poor individual performances, the predictions were significantly better than random guessing for the majority of compounds (55 out of 91). Rankings of the best-performing teams proved robust with respect to the compounds employed in the scoring process.
Sub Challenge One: the Method with the Best Performance
The best-performing method for forecasting interpersonal variability in cytotoxic response achieved the highest average (r) among submissions (average pCi = 0.51). As in the scoring analysis, this method excluded the 15 compounds that failed to induce a cytotoxic response. A set of SNPs was selected for inclusion in this analysis using a pair of approaches (Vadlamudi, 2016).
The first approach selected nonsynonymous SNPs located within any gene, including SNPs near genes, defined by the 2-kb upstream and 500-bp downstream regions. The second approach retained the remaining SNPs if they were located within genes belonging to 41 KEGG gene sets (Kanehisa, M. et al., 2014) documented in the MSigDB database (Subramanian, A. et al., 2005) as representing cell cycle, cell death, or cancer biology.
These SNPs were retained only where an eQTL analysis showed that they correlated (P > 0.06) with the expression of at least one local gene in the RNA-seq data. The resulting SNP set was then summarized into ten genetic clusters by applying k-means clustering to the first three principal components derived from multidimensional scaling analysis (Purcell, S. et al., 2007). The resulting genetic-cluster variable was highly representative of the known geographic subpopulations, but it also captured additional information not represented by subpopulation membership alone. Using this variable, a cytotoxicity model was built for each compound with the Random Forest algorithm, combining the genetic clusters with sex, geographic origin, and experimental batch. Cross-validation was used to choose the clustering parameters and to decide on the SNP-filtering approaches. In the final scoring stage, this prediction method achieved the strongest performance among the dozens of models considered, all judged against experimentally derived true-response data. Details of this modeling method are further addressed in the supplementary predictive models and online methods.
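A minimal sketch of this workflow, under the assumptions that principal component analysis stands in for the multidimensional scaling step and that the covariates are already numerically encoded, might look as follows; all input names are hypothetical and this is not the team's actual code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

def genetic_cluster_feature(genotypes, n_clusters=10, n_components=3):
    """Summarize the filtered SNP matrix (cell lines x SNPs) into ten
    genetic clusters: project onto the first three components, then
    apply k-means (PCA used here in place of multidimensional scaling)."""
    coords = PCA(n_components=n_components).fit_transform(genotypes)
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=0).fit_predict(coords)

def fit_per_compound_models(genotypes, covariates, ec10_by_compound):
    """One Random Forest per compound, with the one-hot genetic-cluster
    label plus sex / geography / batch covariates as features."""
    clusters = genetic_cluster_feature(genotypes)
    cluster_onehot = np.eye(clusters.max() + 1)[clusters]
    X = np.hstack([cluster_onehot, covariates])
    return {compound: RandomForestRegressor(
                n_estimators=500, random_state=0).fit(X, y)
            for compound, y in ec10_by_compound.items()}
```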
The Second Sub Challenge: Predicting Parameters at the Population Level
Prediction of parameters at the population level was graded using both the Pearson correlation (r) and the Spearman correlation (rs), following an approach similar to that of the first sub challenge. For both statistics, the overall performance of each team's submission was assessed by averaging the correlations computed separately for the median EC10 values, which represent the archetypal cytotoxic response, and for the distance between the 5th and 95th percentiles of the EC10 values, which quantifies the dispersion of the response across the population. Comparison with the null model of random predictions was performed to assess the statistical significance of the compound predictions. Of the 80 predictions submitted, the null hypothesis of randomly generated predictions was rejected (FDR < 0.05) for 13 predictions using the average (r) and for 17 using the average (rs); for 13 predictions, the null hypothesis was rejected under both metrics. Similar results were obtained when Fisher's method was employed to assess the significance of each submission.
To emphasize, rankings of the best-performing teams proved robust with respect to the compounds used in the scoring process. The median cytotoxicity of compounds (the median EC10) appeared easier to predict than the variability of the response across the population: for the median, (r) ranged from −0.31 to 0.66 and (rs) from −0.29 to 0.72, whereas for the interquantile distances, (r) ranged from −0.22 to 0.37 and (rs) from −0.14 to 0.48.
The overall evaluation criterion for the second sub challenge combined prediction of the median with prediction of the interquantile distance. The best-performing method predicted the median and the interquantile distance with r (rs) of 0.52 (0.45) and 0.37 (0.40), respectively. Its workflow comprised four major steps: feature selection, group selection, model development, and prediction for the test compounds. Features were selected from the structural attributes of the chemicals, computed in three comparable ways: CDK (Steinbeck, C. et al., 2003) and SiRMS (Kuz'min, V.E., Artemenko, A.G. & Muratov, E.N., 2008) descriptors, both provided by the Challenge organizers, and Dragon descriptors (Todeschini, R., Consonni, V., Mauri, A. & Pavan, M., 2004). After separate normalization, the chemical descriptors correlated with the core toxicity were used to train the models. Models using the Dragon descriptors performed best in both cross-validation and the final scoring (Vadlamudi, 2015). Next, compounds were partitioned into four distinct categories by hierarchical clustering of their EC10 profiles across all 487 cell lines, and a separate model was built for each compound category.
Best Performing Approach in Sub Challenge Two
Random Forest models were used to select the features most relevant for prediction within each group, using the compounds in that category as training data. For every new compound, toxicity was estimated using a weighted median of the predictions from all four category-specific models, where the weights were determined by the compound's similarity to each cluster, measured as the distance from the cluster in the category-specific descriptor spaces. The same learning scheme was applied to predict both the median EC10 values and the interquantile distances.
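The clustering and weighted-median blending steps can be sketched as follows; this is an illustrative reconstruction under stated assumptions (Euclidean distances to cluster centroids as the similarity measure), not the team's published code, and all names are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_compounds(ec10_profiles, n_groups=4):
    """Hierarchical clustering of compounds by their EC10 profiles
    across cell lines; returns one group label per compound."""
    Z = linkage(ec10_profiles, method="average")
    return fcluster(Z, t=n_groups, criterion="maxclust")

def weighted_median(values, weights):
    """Median of `values` weighted by non-negative `weights`."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return v[np.searchsorted(cdf, 0.5)]

def predict_new_compound(descriptors, group_models, group_centroids):
    """Blend the four category-specific models: each prediction is
    weighted by the new compound's inverse distance to that
    category's centroid in descriptor space."""
    preds, weights = [], []
    for g, model in group_models.items():
        preds.append(model.predict(descriptors.reshape(1, -1))[0])
        dist = np.linalg.norm(descriptors - group_centroids[g])
        weights.append(1.0 / (dist + 1e-9))
    return weighted_median(preds, weights)
```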
For the median EC10, cell-line-specific predictions were generated with separate models and then averaged. For the interquantile distance, an additional model was built to directly fit the measured interquantile distance for every compound. Additional information on this modeling approach is given in the Supplementary Predictive Models and Online Methods. Although this design achieved the best overall performance, other approaches, such as KSPA, offered better predictions of the median cytotoxicity, with (r) and (rs) of 0.65 and 0.72, respectively.
COMPOUND-BASED PREDICTABILITY
The compounds clearly separated into three clusters according to the accuracy of the cytotoxicity predictions: a cluster of 14 compounds for which predictions were high across all models, a cluster of 17 compounds for which predictions were poor across all models, and a cluster for which predictions varied between models. This separation proved consistent between the pair of metrics applied in the evaluation of team performance. We then tested for characteristics that could sharply differentiate between compounds in the high- versus low-predictability clusters. A good number of chemical descriptors proved able to distinguish between high- and low-predictability environmental compounds.
Notably, descriptors related to the Lipinski rule (Lipinski, C.A., Lombardo, F., Dominy, B.W. & Feeney, P.J., 2001) were among the distinguishing features. As expected, compounds in the highly predictable cluster had lower pooled variance, making their measurements considerably less noisy than those of compounds in the poorly predicted cluster. Contrary to expectations, compounds in the high- versus low-predictability clusters did not differ in the distribution of cytotoxic response across the population, in terms of median and interquantile distance, nor in the estimated heritability of compound-induced cytotoxicity (Bynagari, 2016). Nevertheless, we noticed that a principal component analysis of the cytotoxicity data could discriminate between compounds with high and low predictability.
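One way to see such a separation, assuming a compounds-by-cell-lines cytotoxicity matrix and known high- and low-predictability compound indices, is to compare group centroids in the space of the leading principal components; the sketch below is our illustration, not the analysis actually performed in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_group_separation(cytotox, high_idx, low_idx, n_components=2):
    """Project the compounds-x-cell-lines cytotoxicity matrix onto its
    leading principal components and measure the distance between the
    centroids of the high- and low-predictability compound groups."""
    coords = PCA(n_components=n_components).fit_transform(cytotox)
    centroid_high = coords[high_idx].mean(axis=0)
    centroid_low = coords[low_idx].mean(axis=0)
    return np.linalg.norm(centroid_high - centroid_low)
```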
This indicates that predictability was, at least in part, a consequence of the cytotoxic profile of the compounds across the surveyed population.
ANALYSIS OF SURVEYED DATA & CONCLUSION
For this paper, we received survey responses for 75 of the 99 submissions to the first sub challenge and for 51 of the 80 submissions to the second, corresponding to 21 of 34 teams and 14 of 23 teams, respectively. Reviewing the survey data, we observed an effect of the data and models used on submission performance (Bynagari, 2015). To account for the fact that each team could submit up to five predictions that were not independent of one another, predictions that used the same data and methods, according to the survey-derived information, were averaged and treated as a single prediction. This procedure yielded 49 independent submissions across the first and second sub challenges (Paruchuri, 2015).
Turning to the data used as input for the predictions: for the first sub challenge, 89 percent of the participants who completed the survey used the SNP data made available by the organizers, either alone or filtered with additional information from external sources such as pathway annotations and GO terms. RNA-seq data were used in nearly half (about 47 percent) of the submissions and generally improved performance. Only a small fraction of participants (about 16 percent) also included information on the chemical descriptors in their predictive models.
For the second sub challenge, the majority of submissions (78 percent) did not use any genetic information to predict the cytotoxicity of the new test compounds. With regard to chemical properties, around 76 percent used at least one of the chemical descriptor sets provided by the challenge organizers (CDK and SiRMS). Nonetheless, as much as 45 percent of the participating teams added information from external sources, such as the PubChem (Wang, Y. et al., 2012) and ChEMBL (Gaulton, A. et al., 2012) public databases, or additional chemical descriptors such as ECFP and Dragon. | 2021-11-01T15:10:49.951Z | 2017-12-31T00:00:00.000 | {
"year": 2017,
"sha1": "b40219e611a939debaf311fe0ac42a4aa54f117a",
"oa_license": "CCBYNC",
"oa_url": "https://i-proclaim.my/journals/index.php/ajhal/article/download/577/535",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c70e733b916927e5f24b7f555ecd65de3c5ca31e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
235642514 | pes2o/s2orc | v3-fos-license | Chimeric Antigen Receptor (CAR) T Cell Therapy for Metastatic Melanoma: Challenges and Road Ahead
Metastatic melanoma is the most aggressive and difficult to treat type of skin cancer, with a survival rate of less than 10%. Metastatic melanoma has conventionally been considered very difficult to treat; however, recent progress in understanding the cellular and molecular mechanisms involved in the tumorigenesis, metastasis and immune escape have led to the introduction of new therapies. These include targeted molecular therapy and novel immune-based approaches such as immune checkpoint blockade (ICB), tumor-infiltrating lymphocytes (TILs), and genetically engineered T-lymphocytes such as chimeric antigen receptor (CAR) T cells. Among these, CAR T cell therapy has recently made promising strides towards the treatment of advanced hematological and solid cancers. Although CAR T cell therapy might offer new hope for melanoma patients, it is not without its shortcomings, which include off-target toxicity, and the emergence of resistance to therapy (e.g., due to antigen loss), leading to eventual relapse. The present review will not only describe the basic steps of melanoma metastasis, but also discuss how CAR T cells could treat metastatic melanoma. We will outline specific strategies including combination approaches that could be used to overcome some limitations of CAR T cell therapy for metastatic melanoma.
Introduction
Melanoma is the most lethal type of skin cancer [1][2][3], which develops from uncontrolled proliferation of the melanin-producing cells within the skin, called melanocytes [1,4]. Patients with melanoma are staged based on the 2009 American Joint Committee on Cancer (AJCC) staging system [5]. Based on the location of the primary tumor, tumor size, number of tumors, lymph node involvement, and the presence or absence of metastasis, melanoma is classified into four stages [6]. According to the melanoma staging systems, patients with stage IV melanoma show tumor spread throughout the body, called metastatic melanoma [6][7][8][9]. Strategies for the treatment of metastatic melanoma include chemotherapy, radiation, targeted therapy, and immunotherapy [6,10,11]. Because of their non-specificity and treatment resistance, chemotherapy and radiation therapy are not considered good options for the treatment of metastatic melanoma [6]. New pharmaceutical agents, including anti-PD-1 checkpoint blockade immunotherapy and B-RAF inhibitor targeted therapy, have both been approved for metastatic melanoma. However, B-RAF inhibitors lead to treatment resistance, while checkpoint blockade can cause autoimmune disease [12][13][14].
Thus, researchers are still seeking new treatments for metastatic melanoma. Recently, adoptive T cell therapy (ACT) has been investigated as a new strategy for improving the treatment of metastatic melanoma [11]. CAR T cells kill tumor cells through recognition of target antigens on the surface of tumor cells in a non-MHC-restricted manner. Upon antigen recognition, CAR T cells release various cytotoxic molecules, such as granzyme and perforin, as well as cytokines, leading to tumor cell apoptosis and lysis [15]. CAR T cells are genetically modified T lymphocytes that express chimeric antigen receptors with three main regions: extracellular, transmembrane, and intracellular domains [16]. The extracellular domain contains the scFv (single chain variable fragment) of an antibody that targets a tumor antigen in a non-MHC-restricted manner. The scFv domain is linked via the transmembrane region to the intracellular domain CD3ζ, with or without additional co-stimulatory domains in the intracellular region, to trigger T lymphocyte activation [16]. Based on the number of intracellular domains and the inclusion of additional genes in the CAR transgene, CAR T cells have been classified into five generations. First-generation CARs consist of the CD3ζ molecule alone. Second-generation CARs include CD3ζ and one co-stimulatory molecule (e.g., CD28 or 4-1BB), while third-generation CARs consist of CD3ζ and two co-stimulatory molecules (e.g., CD28 and 4-1BB, or OX-40 and CD28). Fourth-generation CARs are based on the second-generation design paired with an inducible or constitutively expressed cytokine or chemokine. Fifth-generation CARs are also based on second-generation CARs, coupled with the truncated cytoplasmic domain of a particular cytokine receptor carrying a binding site for specific transcription factors such as STAT3/5 [17,18]. CAR T cell therapy has been shown to be a potent immunotherapeutic approach for the treatment of hematologic malignancies, and two types of CAR T cells have been approved by the US Food and Drug Administration (FDA) for the treatment of B-cell malignancies [19]. Nevertheless, there remain challenges, such as selecting the proper target antigen, the immunosuppressive tumor microenvironment (TME), and barriers preventing the infiltration of CAR T cells into the tumor microenvironment, which lower the efficacy against solid tumors [20]. In this present paper, we review the mechanisms of melanoma metastasis to find suitable antigen targets, summarize the preclinical and clinical studies of CAR T cell therapy in metastatic melanoma, together with the advantages and disadvantages, and provide some suggestions to increase treatment efficiency against metastatic melanoma.
Mechanisms of Melanoma Metastasis: Implications for CAR T Cell Therapy
Although regional lymph node involvement (stage III) is part of the metastatic process, according to the staging system only stage IV, in which tumor cells metastasize to distant organs, is considered metastatic melanoma [21]. Melanoma cells mostly metastasize through the lymphatic route [21], but the hematogenous route also seems to be involved in some cases [22]. Herein, we will not only review the mechanisms of metastasis that lead to lymph node involvement, hematogenous spread, and the involvement of distant organs (Figure 1), but also discuss how selecting an ideal antigen, one with high surface expression on cancerous tissue and low surface expression on normal tissue, is critical and could significantly decrease the risk of CAR T cell-mediated off-tumor toxicity [23]. However, the complete mechanisms of melanoma metastasis are not yet fully understood [21].
Angiogenesis
Angiogenesis describes the process by which new blood vessels are formed from pre-existing blood vessels, and is one of the most important factors involved in tumor progression and metastasis [24,25]. Several agents in the TME can stimulate angiogenesis through different mechanisms [21]. One of the most important drivers of angiogenesis is the family of vascular endothelial growth factors (VEGFs) [26]. VEGF-A, a member of the VEGF family, is strongly linked to angiogenesis [27], and its expression is upregulated in melanoma cells under hypoxic conditions [27]. Data have shown that VEGF-A may be involved in the progression of metastasis [28]. VEGF-A alone, or the VEGF-A/PGF heterodimer, contributes to VEGFR2 activation [28,29]. In line with this finding, studies have pointed out that VEGFR2 activation predominantly mediates the angiogenic response [29]. In addition to VEGFs, other growth factors, cytokines, and integrins, including basic fibroblast growth factor (bFGF), placental growth factor (PGF), urokinase plasminogen activator [30], IL8, αvβ3 and αvβ5 integrins, and angiopoietins, play important roles in melanoma angiogenesis [25]. Melanoma cells secrete other growth factors, including bFGF (FGF2) and PGFs. bFGF is a potent angiogenic factor that regulates many cellular functions such as angiogenesis [31]. It has been shown that the action of matrix metalloproteinases results in bFGF release; released bFGF binds to its receptor on endothelial cells, FGFR1. This interaction promotes melanoma metastasis by regulating endothelial cell proliferation and increasing angiogenesis [32,33]. (Figure 1 caption, continued: Melanoma cells migrating to distant organs require adhesion molecules and MMPs to intravasate into and extravasate out of vessels and to degrade the ECM. (C) Leukocyte-cancer cell fusion hypothesis: melanoma cells can fuse with leukocytes, especially macrophages, and cause metastasis following epigenetic reprogramming. (D) Embolisms: platelet aggregation facilitates melanoma cell metastasis by protecting the cells from NK cell cytolytic activity and increasing their migration through platelet-tumor cell interactions. (E) Cancer stem cells: these cells have the potential to induce angiogenesis, matrix degradation, intravasation, and extravasation to promote melanoma metastasis.)
PGFs (PGF1 and PGF2) interact with VEGFR1 and also with neuropilin-1 and neuropilin-2 receptors on endothelial cells. PGFs enhance angiogenesis in two ways: directly, by acting on pre-existing endothelial cells, and indirectly, by recruiting VEGFR-1-positive hematopoietic precursor cells from the bone marrow to blood vessels [34,35].
Urokinase plasminogen activator, which binds to the uPAR receptor, plays a role in melanoma progression and metastasis [36]. uPAR is expressed on both endothelial and melanoma tumor cells [25]. uPAR regulates angiogenesis by increasing endothelial cell migration and organization into tube-like structures [37].
IL-8 is another molecule with a significant role in angiogenesis. IL-8 binds to G-protein-coupled receptors (GPCRs), including CXCR1 and CXCR2 on endothelial cells, and induces endothelial cell migration and permeability. Experimental studies have shown that IL-8 over-expression could increase angiogenesis and melanoma metastasis [38,39].
Studies have also shown that lymphangiogenesis plays an important role in the spread of melanoma to regional lymph nodes, and in metastasis [40].
VEGF-C secreted from melanoma cells can bind to VEGFR2 and VEGFR3 on lymphatic endothelial cells [41], promote the formation of lymphatic vessels, and increase lymph node metastasis [42,43]. In 2008, Sini and colleagues reported that inhibition of VEGFRs reduced lymph node metastasis in an animal model of metastatic melanoma [44].
In addition to cellular adhesion, invasion and migration, αvβ3 and αvβ5 integrins expressed on endothelial cells can regulate angiogenesis by modulating VEGF and bFGF [30,45]. These pathways play a crucial role in the progression of localized melanoma to metastatic melanoma [46].
Extravasation and Intravasation
Intravasation is the process by which a single or a group of melanoma cells, which have become detached from the primary tumor, can enter the blood or lymphatic vasculature system [47]. After intravasation, tumor cells move out of the vessels into the surrounding tissues (extravasation) [48]. In these steps, the melanoma cells require adhesion molecules to stick to the endothelial cells, and proteolytic enzymes to invade into the extracellular matrix (ECM) [21,47].
Adhesion molecules include cadherins, integrins, and the immunoglobulin superfamily. Matrix metalloproteinases (MMPs) also play an important role in intravasation and extravasation, which are two important steps in melanoma metastasis [47].
Cadherins
Melanoma cells undergo loss of expression of E (epithelial)-cadherin, and at the same time gain expression of N (neural) cadherin. N-cadherin is a classical cadherin which leads to adhesion of melanoma cells to each other, and to other N-cadherin expressing cells such as endothelial cells [47]. An in vivo study in immunocompromised mice showed that silencing of N-cadherin inhibited melanoma cell extravasation and lung metastasis [49]. Therefore, this membrane protein may be a potential target for CAR T cell therapy for metastatic melanoma.
Integrins
Integrins are a family of adhesion molecules that contribute to angiogenesis, tumor cell proliferation, migration, and metastasis through cell-cell or cell-matrix interactions [50]. Expression of integrin αvβ3 on melanoma cells led to increased metastasis to the lungs [51]. Melanoma cells that express integrin α4β1 tend towards lymph node metastasis [52] through binding to vascular cell adhesion molecule-1 (VCAM-1) on endothelial cells [53]. Integrin α4β1 also facilitated migration of CCR9-bearing melanocytes to the small intestine [54]. As melanoma cells do not express β2-integrins (LFA-1 or Mac-1) on their surface, these cells can bind to the β2-integrins of neutrophils via their intercellular adhesion molecule-1 (ICAM1), and then move into the vessels [48]. During angiogenesis, the endothelial cells over-express αv-integrins that can bind to lactadherin (also known as milk fat globule-EGF factor 8 protein) on melanoma cells, thus increasing adhesion and migration [55]. The α6β4 integrin on melanoma tumor cells is able to interact with lung endothelial cell adhesion molecule-1 (Lu-ECAM or CLCA2), which is expressed on lung cells and can lead to lung metastasis [56]. Because of the role of integrins in angiogenesis, tumor growth, and metastasis, several integrin inhibitors are under investigation in clinical trials [57]. Most of the clinical trials are studying αvβ3 integrin inhibitors [58].
Immunoglobulin Superfamily (IgSF)
Several members of the IgSF, including MCAM (CD146), NCAM (CD56), ALCAM (CD166), and L1-CAM (CD171), have been associated with metastasis in several cancers such as melanoma [59]. The melanoma cell adhesion molecule (MCAM/MUC18), also known as CD146, is a member of the immunoglobulin superfamily and a cell adhesion molecule that mediates adhesion between melanoma cells themselves, as well as adhesion between melanoma cells and endothelial cells [60]. An in vivo study showed that an anti-MCAM/MUC18 antibody inhibited melanoma growth and metastasis [61]. Activated leukocyte cell adhesion molecule (ALCAM), CD166, or MEMD is a type I membrane protein and another member of the IgSF. Over-expression of ALCAM increased melanoma cell aggregation and metastasis [62][63][64]. Studies have shown that blocking of ALCAM by the secreted variant of ALCAM diminished metastatic capacity in nude mice [65]. NCAM (CD56), which mediates cell-cell adhesion, is expressed on several tumor types such as melanoma, where it increases metastasis [66]. An in vivo study demonstrated that the silencing of NCAM expression inhibited melanoma cell invasion and metastasis [67]. L1-CAM is another cell adhesion molecule, and its over-expression was associated with melanoma metastasis [68]. L1-CAM knock-down reduced metastasis in a melanoma xenograft model [69]. Platelet endothelial cell adhesion molecule 1 (PECAM-1), or CD31, is another member of the IgSF that is expressed on endothelial cells. Melanoma cells which over-express heparan sulfates can interact with endothelial cells by binding to PECAM-1 [70]. It was shown that the heparan sulfate-PECAM-1 interaction contributed to tumor cell arrest and extravasation [59]. An in vivo study showed that an anti-PECAM-1 mAb inhibited metastasis in melanoma [71], so PECAM-1 could be a potential target for CAR T cell therapy in metastatic melanoma.
Platelet factor 4 (PF4) or CXCL4 is a protein that can inhibit tumor metastasis by decreasing blood vessel integrity, angiogenesis inhibition, increasing myeloid-derived suppressor cells (MDSCs), and hematopoietic stem cells (HSCs) [72,73]. As such, based on studies [74], the PF4/CXCL4 recombinant protein could be a therapeutic option in combination with CAR T cell therapy for metastatic melanoma by preventing angiogenesis.
Matrix Metalloproteinases
MMPs are endopeptidase enzymes [75] that degrade and remodel the extracellular matrix (ECM) via proteolytic activity, and are involved in invasion and metastasis [76]. Members of this family include both soluble and membrane-bound types [77]. Although the soluble forms of MMPs may not be a direct target for CAR T cell therapy, they can be used indirectly in combination with CAR T cell therapy against other targets. Thus in this section, we will cover both soluble and membrane bound MMPs. Over-expression of MMP-1, 2, 9, 13 and 14 was shown to occur in invasive melanoma [76]. Studies have shown that inhibition of MMP-1 decreases melanoma metastasis in nude mice [78], whereas forced over-expression of MMP-1 in non-invasive melanoma cells induced a metastatic phenotype [79]. Moreover, the role of MMP-2 in human melanoma invasion and metastasis is also important [80]. Membrane type-1 MMP (MT1-MMP) (or MMP-14) is a cell-surface membrane protein, which activates MMP-2 leading to matrix degradation and melanoma cell invasion and metastasis [81]. MT1-MMP can also degrade ECM directly by cleavage of ECM components, such as collagens, laminin, and fibrins [77]. The expression of MT1-MMP on endothelial cells contributes to angiogenesis by ECM remodeling and promotion of vessel growth [82]. Active MMP-2 can either be membrane-bound or secreted [76]. The secreted form can bind to integrin αvβ3 and facilitate matrix degradation and melanoma cell migration [83]. The cell-surface hyaluronan receptor, CD44, binds to MMP9 on melanoma cells and forms a CD44/MMP9 complex. It has been shown that disruption of the CD44/MMP9 complex inhibits tumor invasion [84]. Based on an in vivo study, stromal derived MMP-13 (a collagenolytic enzyme) is also required for melanoma metastasis [85]. However, MMP-8 (a collagenase II enzyme) has anti-tumor and anti-metastatic activity in cancers such as melanoma. Therefore, CAR T cells that additionally express MMP-8 may be a good combination therapy for treatment of metastatic melanoma [86].
Leukocyte-Cancer Cell Fusion Hypothesis
In 1992, John Pawelek discovered metastatic melanoma cells that resembled fused macrophage-melanoma hybrid cells [87]. In vitro studies have shown that melanoma-macrophage hybrids displayed markedly higher tumorigenicity and metastatic capacity [88,89]. Macrophage and cancer cell fusion results in epigenetic reprogramming. The metastatic hybrid cells increase the expression of macrophage markers, including SPARC, SNAIL, MET, MITF, CD14, CD68, CD163, CD204, and CD206. They also express integrin subunits including α3, α5, α6, αv, β1, and β3; GnT-V (β1,6-acetylglucosaminyltransferase-V) and its enzymatic products, β1,6-branched oligosaccharides; and cell-surface LAMP1. These molecules lead to increased tumorigenicity and metastatic potential in these melanoma hybrid cells [90,91]. The hybrid cells also express melanocyte markers (ALCAM, MLANA) and stem cell markers (CD44, CXCR4) [91]. The glycan molecules, β1,6-branched oligosaccharides, contribute to motility, invasion, and metastasis of melanoma by influencing adhesion to extracellular matrix components [92,93]. Secreted protein acidic and rich in cysteine (SPARC), or osteonectin, is an extracellular protein [94] that increases tumor metastasis by driving vascular permeability and extravasation [95]. Snail is a transcription factor that induces epithelial-mesenchymal transition (EMT), a crucial characteristic of invading cancer cells, by suppressing the expression of E-cadherin [96,97]. MET is a receptor tyrosine kinase and membrane-bound protein that binds to hepatocyte growth factor (HGF). In vivo studies have indicated that over-expression of either MET or HGF promotes metastasis [98,99]. Moreover, MET inhibitors have been shown to inhibit metastasis in both melanoma animal models and in patients [100][101][102]. Microphthalmia-associated transcription factor (MITF) is a transcription factor that is required for metastasis in melanoma animal models, and its depletion from melanoma cells decreased lung metastasis [103]. Lysosome-associated membrane protein-1 (LAMP1), also known as CD107a, is a surface protein that is expressed on melanoma-macrophage hybrid cells and increases their invasion and metastatic potential [104]. Binding of this molecule to galectin-3 in the lungs is one mechanism that can lead to metastasis [105]. LAMP1 interaction with the ECM in target organs could be another mechanism [106]. Also, LAMP1 is a major carrier of β1,6-branched N-glycans [107]. Down-regulation of LAMP1 significantly reduced the metastatic capacity of melanoma cells [108]. GnT-V is a Golgi membrane-bound protein that catalyzes the formation of β1,6 N-acetylglucosamine in the Golgi apparatus, and its down-regulation inhibited metastasis in gastric cancer cells [109]. Therefore, targeting GnT-V, SPARC, SNAIL, and MITF in combination with CAR T cells that are specific to surface proteins of MTFs (macrophage-tumor fusion cells), such as β1,6-branched oligosaccharides, MET, and LAMP1, may improve CAR T cell performance in metastatic melanoma by eliminating leukocyte-cancer fusion cells. Since CD14 and CD68 are pan-macrophage markers [91], it would be risky to target them, but CD163, CD204, and CD206 are mainly expressed by M2 macrophages, which are the tumor-promoting phenotypes [91]. Therefore, targeting M2 markers may be a good strategy for the elimination of melanoma-macrophage hybrid cells. MLANA/MART-1 is a melanocyte-specific marker that is recognized by T cells.
One clinical trial investigated adoptive transfer of MART-1-specific TCR-engineered T cells in metastatic melanoma patients; the results showed clear evidence of tumor regression [110]. Although antibodies against this marker are used for the diagnosis of melanoma [111], this target has not been used for MAb therapy. CD44 is the receptor for hyaluronic acid and a transmembrane glycoprotein that is expressed on cancer stem cells [112]. Among CD44 variants, the CD44v3 splice variant is associated with metastasis in melanoma patients [113]. The targeting of CD44 variants with MAbs could be a novel immunotherapy for cancer treatment [114], and CD44v3 may be a potential target for CAR T cell therapy for metastatic melanoma. CXCR4 activation on tumor cells is associated with increased metastasis in several cancers by promoting invasion, tumor cell proliferation, matrix degradation, and neoangiogenesis [115]. Interestingly, CXCR4 is widely expressed on both T cells and hematopoietic stem cells [116]. It is possible to design anti-CXCR4 CAR T cells by inserting a CAR construct into the endogenous CXCR4 locus in T cells using a CRISPR/Cas9 approach [117]. However, the use of this molecule as a CAR T target should be further investigated due to its expression on hematopoietic stem cells.
Embolisms
The formation of a tumor cell embolism appears to contribute to hematogenous metastasis. In the emboli, tumor cells form aggregates with leukocytes and platelets [22]. Moreover, prothrombotic agents such as protease-activated thrombin receptor (PAR-1), thrombin, and platelet-specific receptor glycoprotein Ib-IX (gpIb-IX) play a role in embolism formation and melanoma metastasis [118,119]. PAR-1 is a seven transmembrane G-protein-coupled receptor that is expressed on metastatic melanoma cells and activated by proteolytic cleavage of the N-terminal domain of the receptor by serine proteases (especially thrombin) [120]. MMP-1 is also involved in PAR1 activation [121]. Maspin is a tumor suppressor protein that is negatively regulated via PAR-1 in metastatic melanoma. Maspin decreased lung metastasis in melanoma by inhibiting MMP-2 expression and activity [120]. Also, PAR-1 increases expression of connexin-43 in gap junctions, which is critical for tumor cell extravasation in metastatic melanoma [122]. PAR-1 activation increased expression of platelet activating factor receptor (PAFR) and its ligand (PAF). The PAFR/PAF complex can activate platelets and promote tumor-platelet aggregation [123]. Therefore, PAR-1 targeting could be a monotherapy or a combination approach with CAR T cell therapy for treatment of metastatic melanoma [120]. TR47 is a soluble peptide that is generated upon cleavage of PAR-1, and decreased melanoma metastasis in vivo [124]. Therefore, CAR T cells that express TR47 may be a good choice for treatment of metastatic melanoma. Platelet-specific receptor gpIb-IX is a major adhesion protein that is expressed on platelet membranes, and is activated after interaction with different ligands, such as von Willebrand factor (vWF), to form platelet aggregates [125]. Tumor cells express adhesion molecules, such as P-selectin glycoprotein ligand-1 (PSGL-1) and CD44, which bind to P-selectin on activated platelets to form aggregates [126]. Activated platelets in the thrombus protect circulating tumor cells from the cytolytic activity of NK cells by formation of platelet-tumor cell aggregates, and enable melanoma cells to extravasate from the circulation and metastasize to the lungs [119,127]. Although these glycoproteins play a major role in platelet aggregation, it is possible that targeting this receptor may cause platelet disorders including thrombocytopenia.
Cancer Stem Cells
In several solid tumors including metastatic melanoma, cancer stem cells (CSCs) are responsible for resistance to conventional treatment, recurrence, and progression of tumors. In metastatic melanoma, CSCs are also known as malignant melanoma stem cells (MMSCs) [128]. These cells contribute to melanoma metastasis by promoting neovascularization, angiogenesis, matrix degradation, intravasation, and extravasation. Melanoma stem cells can differentiate into endothelial-like cells and promote neovascularization. They can also lead to angiogenesis by expression of VEGFs. Growth factors and cytokines in the tumor microenvironment can affect melanoma stem cells and reprogram the expression of transcription factors that are involved in EMT. These cells can degrade the matrix by MMPs and destroy the endothelial barrier, resulting in intravasation and extravasation. Therefore, melanoma stem cells ultimately promote invasion and metastasis [129]. Melanoma stem cells express specific markers, including CD133, CD20, ABCB5, CD271, and ALDH1 [129]. Studies have shown that targeting of melanoma stem cells using CD133- and CD20-specific monoclonal antibodies attenuated tumor growth and lowered the metastatic potential [130,131]. ABCB5 promotes metastasis by activation of the NF-κB pathway, so this marker could provide a potential therapeutic target [132]. Immunotherapy by administering a CD271-specific antibody effectively suppressed metastasis in a melanoma animal model [133]. Targeting of ALDH1 (aldehyde dehydrogenase) reduced metastasis in melanoma [134]. As such, it appears that these markers may be good targets for CAR T cell therapy. Other candidate markers, including CD166, CXCR4, or neural precursor cell expressed developmentally down-regulated protein 9 (NEDD9), might also be involved in MMSC invasion, migration, and metastasis [128], but further studies are needed to confirm their involvement. Other markers such as jumonji AT-rich interactive domain 1B (JARID1B) are expressed on melanoma stem cells [129], but have not been correlated with melanoma invasion or metastasis [135].
Chemotactic Molecules
Chemokines and chemoattractant cytokines bind to their receptors, and play a key role in the metastatic process in several cancers, including melanoma [136]. Takeuchi, in 2004, showed that expression of the CCR7 chemokine receptor on melanoma cells increased the migration of these cells to lymph nodes by binding to the CCL21 chemokine [137]. However, since CCR7 is also expressed on naïve T cells and dendritic cells [138], it is possible that targeting of this receptor, although it could decrease metastasis, could also disrupt the migration of normal immune cells to the lymph nodes and inhibit the anti-tumor response. Therefore, this receptor is not thought to be a specific target for melanoma immunotherapy using CAR T cells. Preclinical studies have shown that CXCR3 expression on melanoma cells increases metastasis to lymph nodes, and that inhibition of CXCR3 by antisense RNA decreased lymph node metastasis [139]. IGF-1R and CXCR4 showed higher expression on uveal melanoma cells compared to normal melanocytes. Therefore, because of the high expression of the corresponding ligands (IGF and CXCL12, respectively) in the liver, uveal melanoma most often shows liver metastasis [140,141]. Moreover, CCL25 which is produced by small intestinal epithelial cells recruits CCR9-expressing melanoma cells [54].
In conclusion, several MAbs and antagonists have been used to block chemokine receptors in melanoma to inhibit metastasis. Application of CAR T cells to target chemokines and their receptors must be carefully investigated and monitored due to their wide expression and their role in the recruitment of immune cells to tumor sites. It is possible that SynNotch receptor CAR T cells or tandem CAR T cells could reduce the off-target toxicity of CAR T cells that recognize chemotactic molecules. Further, the use of fourth-generation TRUCK (T cells redirected for antigen-unrestricted cytokine-initiated killing) CAR T cells should be thoroughly investigated. Despite the benefits of the cytokines and chemokines produced by TRUCKs for CAR T cell function and endogenous immune system activation [142], it is possible that these cytokines and chemokines facilitate melanoma metastasis in cancer patients.
Collectively, we have discussed the mechanisms involved in melanoma metastasis and introduced several antigens that can be targeted by CAR T cells in melanoma (Table 1). It should be noted that many other targets exist in this context, but because they cannot be targeted by CAR T cells, mainly owing to a lack of surface expression, we have not discussed them fully here. For example, various studies have shown that targeting pathways involved in melanoma tumor growth and survival, including the BRAF and MAPK pathways, with specific inhibitors such as vemurafenib and trametinib, respectively, contributes to enhanced overall survival of metastatic melanoma patients [143,144]. Of note, acquired resistance and recurrence of disease are the main challenges in this type of therapy [145]. Therefore, we believe that CAR T cells might have an additional advantage compared to conventional inhibitor therapies and could be used in combination therapeutic regimens to enhance overall anti-tumor efficacy in clinical practice. Moreover, combination therapy using CAR T cells and immune checkpoint inhibitors (e.g., anti-PD-L1) could also be another appealing therapeutic strategy for patients with metastatic melanoma.
Preclinical and Clinical Studies Using CAR T Cell Therapy in Metastatic Melanoma
In the previous sections we summarized some candidate target antigens in metastatic melanoma and the metastasis process (Table 1). Here we discuss some preclinical studies and clinical trials that have been carried out (or are in progress) on melanoma and metastatic melanoma.
The chondroitin sulfate proteoglycan 4 (CSPG4) antigen, also known as MCSP, is strongly expressed on 90% of melanoma cell lines, as well as on other tumors such as glioblastoma, sarcoma, and leukemia. CSPG4 plays a crucial role in melanoma cell proliferation, migration, invasion, and metastasis [173]. One study showed that NKT cells could be transiently transfected with an anti-CSPG4 CAR using electroporation, with improved activity, and their in vitro functionality after stimulation with melanoma cells was assessed. Compared to conventional CAR T cells, the anti-CSPG4 NKT cells generated a lower amount of cytokines, such as IFN-γ and TNF, but could still effectively kill melanoma cells [174]. In another study, anti-CSPG4 CAR T cells transfected with siPD-1 and siCTLA4 showed reduced expression of PD-1, and were able to secrete higher quantities of cytokines and exert a good cytolytic effect against the A375M melanoma cell line [174]. In an interesting study, Simon et al. engineered T cells expressing both an anti-CSPG4 CAR and an anti-gp100 engineered TCR at the same time. These cells generated cytokines and carried out cytolytic activity when encountering either target antigen alone, with no mutual suppression of the two receptors observed. However, an improved cytolytic effect was observed after co-culture with target cells expressing both target antigens [168].
Inhibition of vascular endothelial growth factor receptor-2 (VEGFR-2) could theoretically inhibit tumor growth by an anti-angiogenic effect [148]. Pre-clinical studies targeting this marker have shown promising results [147][148][149]. These studies tested different approaches, all with satisfactory results. Chinnasamy et al. utilized CAR T cells targeting VEGFR-2 plus exogenous IL-2. This approach inhibited the melanoma tumor by attacking the vasculature rather than the tumor itself, because tumor expression of VEGFR-2 was low [147]. Another study combined TCRs targeting melanoma tumor antigens (gp100, TRP-1, and TRP-2) with anti-VEGFR-2 CAR T cells targeting the tumor vasculature. The authors concluded that the simultaneous approach had synergistic effects on tumor eradication and increased the infiltration and persistence of adoptively transferred tumor-specific T cells within the tumor microenvironment. No significant morbidity or mortality was observed in the mice receiving anti-VEGFR-2 CAR T cells, except in those administered both anti-VEGFR-2 CAR T cells and TRP-2 TCR-transduced T cells [149]. Inoo et al. delivered CAR-encoding messenger RNA (mRNA) by electroporation to produce anti-VEGFR-2 CAR T cells. The mRNA approach yielded 100% efficiency in preparing anti-VEGFR-2 CAR T cells from human or murine T cells for the first few days, without impairing human T cell activity or phenotype [148].
A recent study from Yang et al. reported tandem CAR T cells that targeted CD70 and B7-H3, and found that the tandem CAR T cells could simultaneously distinguish two tumor-associated antigens and boost the cytolytic effect against tumor cells, as well as specifically targeting a single antigen. A melanoma mouse model treated by TanCAR T cells exhibited a more pronounced reduction in tumor burden, when compared to controls, and to single CAR (CD70 or B7-H3) groups [164].
The αvβ3 integrin has been found to be over-expressed on a broad range of cancers, including melanoma, breast, prostate, and pancreatic cancer. CAR T cells that targeted αvβ3 were able to eradicate the metastatic A375 melanoma cell line in vitro, and boost the anti-tumor effect in vivo. A single inoculation of anti-αvβ3 CAR T cells in mice was able to inhibit melanoma growth and increase long-term survival. In summary, anti-αvβ3 CAR T cells killed αvβ3-positive tumor cells rapidly and specifically, secreted IL-2 and IFN-γ, and themselves underwent effective proliferation [152].
Overall, the studies discussed above mostly report satisfactory results of various CAR T cell targeting antigens against metastatic melanoma. Another review on this topic also discussed the future promise of several molecular targets for CAR T cell therapy in melanoma, including CSPG4, GD2, CD70, CD20, gp100, and NY-ESO-1 [175]. Nevertheless, achieving this goal requires continuing efforts of researchers to overcome these barriers, and to design the most efficient CAR T cells for future clinical studies.
Clinical Studies
Owing to the demonstrated success of CAR T cell therapy for melanoma in preclinical studies, researchers are conducting clinical trials to assess the effectiveness and safety of this treatment approach in patients. We identified ten clinical trials involving CAR T cell therapy in patients with melanoma (Table 3). Target tumor antigens included VEGFR2, GD2, cMet, hCD70, gp100, NY-ESO-1, CD20, IL13R-alpha2, B7H3, and bispecific B7H3xCD19 ( Figure 2). All of these trials are phase I or II non-randomized, or single-arm trials, which will warrant further investigation if successful.
These clinical trials are novel, and many of them are yet to be completed, with the earliest trial having been started in 2010. Only two studies are completed as of now, and only one of them has available published data. Six studies are still recruiting patients, and two others are suspended or have been terminated.

Summary of preclinical studies of CAR T cell therapy in melanoma (model and cell line; target; intervention; outcome):
- γ/δ engineered CAR T cells: safer activity, similar cytotoxicity, and reduced cytokine production against melanoma cells compared with conventional CAR T cells [169].
- T2, A1, and A375 melanoma cell lines; gp100 and MCSP; CSPG4 (MCSP)-specific CAR T cells, gp100-specific TCR α/β T cells, or T cells expressing both receptors (TETARs): similar melanoma tumor cell killing capacity, reduced unspecific response, and recognition of both antigens by TETARs [176].
- SK-Mel-28 melanoma cell line; GD2; anti-GD2 iCAR T cells plus pembrolizumab (PD-1 inhibitor): melanoma cell killing in vitro [171].
- SCID-Luc mice, 4405M or P1143 cell lines; GD2; NT or GD2 CAR T cells: significant anti-tumor activity of GD2 CAR T cells both in vitro and in vivo [170].
- BALB/c nude mice, GD3+ M21 cell lines; GD3; GD3 CAR T cells: enhanced cytotoxicity, proliferation, and cytokine production of scFv-CD28/TCRζ receptor-expressing T cells [172].
- NSG mice, A375-FFLuc cell line; B7-H3; B7-H3 CAR T cells: enhanced survival and significant anti-tumor activity against melanoma cells [164].
- NSG mice, NCI-H460 or A375 cell lines; CD70 and B7-H3; CD70 CAR or B7-H3 CAR T cells: reduction of tumor burden and increased overall survival of the mice.

The only published results come from a single study conducted on 24 patients, with a subset of these patients having metastatic melanoma. VEGFR-2 was the target antigen for the CAR T cells used in this study. Patients received different numbers of CAR T cells in varying numbers of cycles, combined with administration of low- or high-dose IL-2. The study was terminated because no objective response was observed, with 23/24 patients showing progressive disease. On the other hand, 23/24 patients also suffered from adverse events, 5 of which were serious adverse events. Three patients had seriously increased alanine transaminase (ALT), aspartate transaminase (AST), and bilirubin, 2/5 had hypoxia, and one case each of serious pain, infection, nausea, and vomiting occurred [177]. These disappointing results were in contrast with the promising results of previous pre-clinical studies that targeted VEGFR-2 in B16 melanoma-bearing mice [147][148][149]. One of these animal studies combined anti-VEGFR-2 CAR T cells with exogenous IL-2, similar to the clinical trial [147]. Interestingly, a previous pre-clinical study concluded that co-administration of anti-VEGFR-2 CAR T cells with TCR-transduced cells against tumor antigens (gp100, TRP-1, and TRP-2) dramatically improved tumor-free survival compared to anti-VEGFR-2 CAR T cells alone [149]. Therefore, simultaneous T cell therapies might be more effective in future clinical trials despite the failure of anti-VEGFR-2 CAR T cells in this clinical trial.
GD-2 was the only antigen targeted in two clinical trials [2,7], of which one has been completed, and the results might be published soon. The study added the C7R gene to the CAR T cells to increase their survival and provide a constant cytokine supply [7]. One study tested the anti-GD-2 CAR T cells on blood samples from melanoma patients, and SK-Mel-28 melanoma cell lines [176]. CAR T cells produced significant cancer cell killing, and concurrent PD-1 blockade enhanced their effectiveness. Therefore, we might expect durable responses from these two clinical trials targeting GD2.
Overall, current evidence from clinical studies is very limited in CAR T cell therapy for melanoma and metastatic melanoma, and no CAR-based therapy has yet produced promising results in the clinics. However, the completion of the ongoing clinical trials might decipher the unknowns in this field and pave the path for possible larger clinical trials.
Challenges and Future Directions
Despite the success of CAR T cells in cancer immunotherapy, several challenges still limit the efficacy of CAR T cell therapy in cancers, including antigen selection and off-target toxicity, antigen loss and heterogeneity, immunosuppressive tumor microenvironment, and insufficient infiltration and penetration of the T cells. Herein, we discuss these challenges and propose some solutions in the context of metastatic melanoma (Table 4).
Antigen Selection and Off-Target Toxicity
Several studies have shown that CAR T cell therapy can result in off-target toxicity in patients. This arises in patients who express the target antigen in their healthy tissues. Off-target toxicity can be reduced by improving the specific recognition of tumor cells and selecting safer target antigens. Melanoma-associated antigen (MAGE) and New York esophageal squamous cell carcinoma (NY-ESO1) have been targeted in metastatic melanoma, as well as in other cancers [182][183][184][185][186]. A few studies have reported neurological toxicity following MAGE-A3 targeting by TCR-engineered T cells [187]. Moreover, anti-VEGFR2 CAR T cells were examined in a clinical trial (NCT01218867) with 24 metastatic melanoma patients. The results showed that all patients had disease progression after treatment, except one patient who showed a partial response. Therefore, targeting VEGFR2 seems to be safe in patients, but more research should be performed to improve its effectiveness in the clinic. Another interesting target is CD248. Studies have shown that CD248 is expressed in 86% of metastatic melanoma specimens using tumor microarrays, with no expression in healthy tissues [188]. It is believed that CD248 is involved in the tumor vasculature [189]. Thus, CD248 could be a useful and safe antigen for CAR T cell therapy in metastatic melanoma patients. Despite all the benefits of CAR T cells, safety concerns should be considered, especially in high-dose CAR T cell therapies and in CAR T cell therapies that target the metastasis-associated molecules mentioned in Table 1. Numerous approaches have been proposed by several groups to overcome these problems, including inhibitory CAR T cells [190], dual or multivalent antigen recognition domains with split signaling [191], and insertion of suicide genes [192]. All these mechanisms could be used to reduce the off-target toxicity in CAR T cell therapy.
Antigen Loss and Heterogeneity
One of the least investigated challenges in adoptive T cell therapy is tumor antigen heterogeneity. It is possible that not all the tumor cells express the target antigen, or, if they do, that the antigen expression is variable. Moreover, immune editing induced by the therapy can further lead to immune escape and tumor outgrowth [193]. Several strategies to overcome this hurdle have been proposed, including exploiting the bystander killing effect, using armored CAR T cells, using dual or multivalent CARs, and administering drugs to upregulate the target antigens [149,[194][195][196][197][198]. Moreover, many studies have reported antigen heterogeneity in metastatic melanoma [199,200] and have discussed the implications for immunotherapy [201]. Still, no antigen loss or heterogeneity has yet been reported in CAR T cell therapy for metastatic melanoma, but this is predicted to occur in the future, after wider clinical use of CAR T cells in metastatic melanoma patients. Other strategies can be used to overcome antigen heterogeneity and loss in CAR T cell therapy, including using epigenetic reversal agents in combination with CAR T cell therapy. It has been shown that histone deacetylase inhibitors or DNA methyltransferase inhibitors can upregulate antigen expression in cancer cells. Kailayangiri et al. showed that pharmacological inhibition of Enhancer of Zeste Homolog 2 (EZH2, which is responsible for repressive histone methylation in the genome) induced surface expression of the GD2 antigen in Ewing sarcoma (EwS) cells [202]. They showed that EZH2 inhibition in EwS cells improved anti-GD2 CAR T cell anti-tumor activity. Thus, if antigen loss or heterogeneity were observed to present a problem in the application of CAR T cells in melanoma or metastatic melanoma patients, a combination of epigenetic drugs and CAR T cells could be used to overcome this roadblock.
Melanoma Resistant to CAR T Cell-Mediated Apoptosis
Another obstacle in CAR T cell therapy, especially in the application of CAR T cells for melanoma patients, is melanoma cell resistance to apoptosis. This matters because CAR T cell-mediated tumor cell killing mostly happens through inducing apoptosis in target cells [203]. In line with this notion, it has been established that IGF-1 (insulin-like growth factor 1) plays a key role in inducing resistance to apoptosis in melanoma cells. IGF-1 has been shown to increase expression of antiapoptotic members of the BCL2 family and of survivin (at both the mRNA and protein levels) to protect mitochondria from the damage that occurs during apoptosis [204]. Tumor necrosis factor (TNF)-related apoptosis-inducing ligand (TRAIL)-mediated apoptosis is another mechanism that immune cells use to kill cancer cells [205]. It has been shown that some melanoma cells in patients are resistant to TRAIL-mediated apoptosis [206]. A recent study demonstrated that while fully functional anti-CD19 CAR T cells are able to effectively kill tumor cells, administering anti-CD19 CAR T cells together with a TRAIL inhibitor can suppress the cytotoxic effect of the CAR T cells [207], indicating that TRAIL-mediated target cell killing is a key killing mechanism of CAR T cells. This finding may be applicable to CAR T cell therapy in the context of melanoma; however, this remains to be assessed. Interestingly, it has been reported that apoptosis resistance or partial apoptosis resistance of melanoma cells leads to increased aggressiveness and metastatic ability by triggering the c-Jun N-terminal kinase (JNK) pathway, as the RNA sequencing signature of JNK pathway activation has been found to be similar to that of metastatic melanoma. Moreover, it has been demonstrated that partial apoptosis in melanoma cells results in enhanced cell adhesion and chemotaxis, and increased melanoma migration and invasion [208]. To overcome this challenge, combination therapy using CAR T cells and (epi)drugs targeting antiapoptotic signaling pathways in melanoma cells could be an interesting solution.
Insufficient T Cell Infiltration and Penetration
Another potential obstacle to the clinical efficacy of CAR T cell therapy is insufficient infiltration and penetration of the CAR T cells into the tumor mass. This could be caused by stromal cells inhibiting CAR T cell penetration, or interfering with the ability of CAR T cells to infiltrate properly into the tumor bed. Chemokines and their receptors can regulate the infiltration of various immune cells into the tumor site, and have been shown to be involved in tumor progression and angiogenesis [209]. A study looking at metastatic melanoma biopsies showed that up-regulation of several chemokines, including CCL2, CCL3, CCL4, CCL5, CXCL9, and CXCL10, correlated with the presence of T cells in the tumor [210]. That study also showed that the corresponding chemokine receptors were up-regulated in effector T cells. Therefore, up-regulation of chemokines and their receptors could be used in CAR T cell therapy in metastatic melanoma patients to overcome insufficient infiltration of CAR T cells into the tumor site. In another study, temozolomide (TMZ), which is the most frequently used drug for metastatic melanoma patients, increased T cell infiltration in mouse models of transplanted melanoma and genitourinary tumors; however, despite a similar increase in CXCL9 and CXCL10 in all sites after TMZ exposure, no increased infiltration of T cells was seen in the cutaneous tumor. The same study further showed that the combined use of collagenase plus TMZ could induce infiltration of T cells into a cutaneous tumor site [211]. Thus, it can be concluded that stromal cells act as barriers in skin tumors, and the use of CAR T cells that target stromal cells could enhance T cell infiltration into the tumor site, leading to a better clinical outcome. Furthermore, Caruana et al. engineered CAR T cells to express heparanase, an enzyme that degrades extracellular matrix (ECM) components. Their results demonstrated improved efficacy of CAR T cells in degrading the ECM, along with enhanced infiltration and improved anti-tumor activity of the CAR T cells [212].
The manipulation of CAR T cells to over-express specific chemokine receptors, and/or using agents to up-regulate chemokine expression in the tumor site, as well as targeting stromal cells by CAR T cells, could result in enhanced infiltration and penetration of CAR T cells into tumors.
Immunosuppressive Tumor Microenvironment
Each cancer has a specific environment in and around itself called the TME. The TME can suppress the cytotoxic activity of immune cells and thus maintain tumor growth. The TME in metastatic melanoma is complex, with several factors including the extracellular matrix, cytokines, growth factors, hypoxic conditions, and various cells such as fibroblasts and immune cells [213,214]. All these factors working together can aid tumor growth and invasion, as well as decrease CAR T cell anti-tumor activity. Several attempts have been made to modify CAR T cells to overcome the hostile properties of the TME. For example, one study designed PD-1 knock-out TCR-engineered T cells specific for the Melan-A antigen. They showed that these engineered T cells had higher anti-tumor efficacy and could delay the progression of PD-L1-positive melanoma tumors in NSG mice [215]. The adenosine produced in the TME can be used by melanoma cells to evade immune surveillance. Pharmacological or genetic targeting of the adenosine 2A receptor in CAR T cells resulted in enhanced anti-tumor activity of the CAR T cells [216,217]. Indeed, targeting other inhibitory receptors in CAR T cells could yield CAR T cells with superior anti-tumor activity in patients. In another study, the authors engineered anti-VEGFR-2 CAR T cells to constitutively express single-chain IL-12. Their engineered CAR T cells were able to cause regression of established tumors without exogenous IL-2 administration. They showed that these anti-VEGFR-2 CAR T cells altered the immunosuppressive TME by reducing both systemic and intra-tumoral CD11b+ Gr1+ myeloid suppressor cell subsets [218].
However, it should be noted that since not all immunosuppressive factors in the TME can be overcome by manipulating CAR T cells themselves, we will discuss additional solutions to the immunosuppressive TME in the "combination therapy" section.
Combination Therapy
Combination therapies have been investigated, because single therapies in patients showed somewhat unsatisfactory results. As we discussed in this paper, metastatic melanoma cannot be easily treated with a single therapy, even with the best CAR T cells. Therefore, we discuss potential combination therapies that could be used in clinical trials in metastatic melanoma patients to increase their survival.
One of the most attractive combination therapies is using an oncolytic virus (OV) in combination with CAR T cells. The first and only FDA approved oncolytic virus for the treatment of advanced melanoma is called Talimogene laherparepvec (T-VEC) [219]. Other viral vectors, including Coxsackie viruses, adenovirus, HF-10, echovirus, reovirus, and Newcastle disease virus are currently under investigation [220]. Previous review articles [221,222] have suggested that oncolytic viruses are able to induce secretion of IFN type I in the TME, increase danger signals (PAMPs), reverse tumor immunosuppression, and enhance immune cell infiltration in the tumor site. Use of these viruses could increase CAR T cell infiltration, enhance TAA release by oncolytic virus-dependent tumor cell lysis, and promote the persistence, proliferation, and anti-tumor activity of CAR T cells in solid tumors. Thus, oncolytic viruses might be ideal partners for CAR T cells in tumor eradication. Several studies have pointed out the possible advantages of oncolytic viral therapy in combination with CAR T cells [223][224][225]. However, more investigations in this field should be conducted on melanoma and metastatic melanoma tumors. Guedan et al. used an adenovirus expressing hyaluronidase in a melanoma xenograft model [226]. They showed tumor regression and wide viral distribution following oncolytic adenovirus administration. This concept could be used in combination with CAR T cells to enhance CAR T cell infiltration into skin tumors such as melanoma and prevent melanoma metastasis.
In a phase I clinical trial using GD2 targeted CAR T cells in metastatic melanoma patients who expressed GD2, up-regulation of LAG-3 and PD-1 expression was observed in stimulated CAR T cells [227]. Thus, using checkpoint blockades in combination with CAR T cells in metastatic melanoma patients might increase overall survival. Another interesting approach to enhance checkpoint blockade efficacy in CAR T cell therapy is the use of armed oncolytic viruses expressing an anti-PD-L1 antibody [223]. This approach could overcome obstacles against CAR T cells in solid tumors, benefiting from both immune checkpoint blockades and oncolytic virus therapy.
A recent study showed that the type I IFN secreted in response to oncolytic virus therapy can interfere with CAR T cell anti-tumor activity [228]. The authors showed that the combined use of OVs and CAR T cells injected on the same days did not provide superior anti-tumor effects (compared to either treatment alone) in mice bearing B16EGFRvIII tumor cells. Thus, precise scheduling of OV and CAR T cell injections may be needed to achieve better results.
Immune checkpoint blockade (ICB) is another well-established immunotherapy that could be combined with CAR T cell therapy. Melanoma was the first cancer type to be treated with immune checkpoint inhibitors (ICIs) in the clinic. In 2011, ipilimumab (anti-CTLA4) gained Food and Drug Administration (FDA) approval for treatment of metastatic melanoma [229]. Shortly after, another ICI, pembrolizumab (anti-PD-1), was approved by the FDA in 2015 for patients with non-resectable or metastatic melanoma [230]. Nowadays, ICIs are considered standard care in metastatic melanoma patients. The combination of nivolumab and ipilimumab was shown to be the most efficient ICI therapy for advanced melanoma [231,232]. However, this combination is associated with high levels of toxicity [233]. In addition, the combination of ICIs and CAR T cells has proven to be safe and effective in hematological malignancies [234,235]. Several studies are in progress to assess the safety and efficacy of combinations of CAR T cells and ICI therapy in solid tumors, including glioblastoma (NCT04003649, NCT03726515). Surprisingly, despite the mentioned advantages of ICI and CAR T cell combination therapies, there have as yet been no studies or clinical trials assessing the efficacy and safety of combining these two promising anti-cancer therapies in advanced melanoma. Moreover, Chapuis et al. reported a single patient with advanced melanoma whose disease was refractory to both ICI monotherapy (ipilimumab) and monoclonal CTL therapy. However, the combination of IL-21-primed CTLs (MART1-specific CTLs) plus ipilimumab led to a durable complete remission in that patient [236]. Furthermore, clinical trial results have shown that combining antigen-specific CTLs with ipilimumab (CTLA-4 blockade) is safe as well as effective, and could produce durable clinical responses in patients with metastatic melanoma [237]. These results are promising for future combinations of CAR T cell therapies with immune checkpoint inhibitors in advanced/metastatic melanoma patients.
The inhibition or blockade of soluble molecules responsible for promoting metastasis in melanoma, including MMP1, MMP2, and MMP13, is another approach. Another possibility is the over-expression of soluble molecules that inhibit or decrease metastasis in melanoma, including PF4/CXCL4, TR47, and MMP8, which could be used in combination with CAR T cells that target surface molecules.

Table 4. Roadblocks and challenges in CAR T cell therapy.
Challenges | Solutions | Reference
Antigen selection and off-tumor toxicity | Using tumor-associated antigens (TAAs) as target antigens in CAR T cell therapy | [183,185,187]
Antigen selection and off-tumor toxicity | Control of CAR T cell therapy-associated toxicity, including iCARs, multivalent CARs, and implementation of suicide genes | [190][191][192]
Tumor heterogeneity and antigen loss | Using armored CAR T cells | [198]
Tumor heterogeneity and antigen loss | Using the bystander killing approach | [197]
Tumor heterogeneity and antigen loss | Using drugs, especially epigenetic drugs, to up-regulate TAAs | [202]
Tumor heterogeneity and antigen loss | Using multivalent CAR T cells | [194]
Insufficient infiltration and penetration | Using CAR T cells that express the corresponding chemokine receptors (suggestion) | [210]
Insufficient infiltration and penetration | Using engineered CAR T cells expressing heparanase or other enzymes that can degrade ECM components | [212]
Immunosuppressive tumor microenvironment | Using CAR T cells resistant to immunosuppressive molecules, including adenosine | [216,217]
Immunosuppressive tumor microenvironment | Using inhibitory receptor knock-out/knock-down CAR T cells | [215]
Conclusions
In this review, we have summarized the mechanisms of melanoma metastasis to identify surface markers that can be used for CAR T cell therapy (Table 1). However, there are still some gaps in our knowledge that need to be fully addressed. So far, plenty of CAR T studies have described the engineering of CARs against cell surface targets that take part in the metastatic process of melanoma. Some of these engineered CAR T cells have been investigated in preclinical studies and in clinical trials with remarkable results. Moreover, we discussed several potential markers for treatment of metastatic melanoma using CAR T cells alone or in combination with other types of immunotherapy. Nevertheless, more investigation needs to be done in order to evaluate the safety and efficacy of newly proposed targets for CAR T cell therapy. Indeed, in the future a "surfaceome" analysis will need to be done, especially for leukocyte-cancer fusion cells, to discover suitable target antigens on these cells that could be used in CAR T cell therapy. There are several challenges ahead involving the selection of target antigens and the engineering of new CAR T cells. Combination therapies can be a solution to some of these roadblocks. Using CAR T cells in combination with oncolytic viruses appears to be a particularly interesting possibility. Nevertheless, future studies need to be cautious about the possible adverse effects of combination treatments. | 2021-06-27T05:23:26.793Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "6bd78b51b5641d376471f3ede2b4ad3a894e8bbe",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/cells10061450",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6bd78b51b5641d376471f3ede2b4ad3a894e8bbe",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270080804 | pes2o/s2orc | v3-fos-license | ADLBiLSTM: A Semantic Generation Algorithm for Multi-Grammar Network Access Control Policies
Semantic generation of network access control policies can help network administrators accurately implement policies to achieve desired security objectives. Current semantic generation research mainly focuses on semantic generation for a single grammar and lacks work on automatically generating semantics for different grammatical strategies. Generating semantics for different grammars is a tedious, inefficient, and non-scalable task. Inspired by sequence labeling in the field of natural language processing, this article models automatic semantic generation as a sequence labeling task. We propose a semantic generation algorithm named ADLBiLSTM. The algorithm uses a self-attention mechanism and double-layer BiLSTM to extract the features of security policies from different aspects, so that the algorithm can flexibly adapt to policies of different complexity without frequent modification. Experimental results showed that the algorithm has good performance, can achieve high accuracy in semantic generation of access control list (ACL) and firewall data, and can accurately understand and generate the semantics of network access control policies.
Introduction
Network access control policies are a commonly used method to implement organizational security policies [1]. They define a series of rules and actions to match and process network packets. These rules are set based on various factors such as source address, destination address, port number, and protocol type, to ensure that only packets that meet specific conditions can pass through the network. However, in practical applications, the configuration and management of network access control policies face many challenges. Traditional network access control configuration methods mainly rely on manual operations, but due to the low readability of policies [2][3][4], manual configuration is inefficient and prone to errors, and when new security threats emerge, manual modification and updating of policies are required, resulting in poor scalability [5][6][7][8], which can lead to serious security issues in the network [9]. At the same time, the lack of unified standards and specifications increases the difficulty of configuring network access control policies with different grammars. Therefore, the automatic semantic generation of security policies is particularly important. Automatic semantic generation can effectively parse security policies with different syntaxes and realize the automation of network access control configuration, thus improving configuration efficiency and reducing errors. However, there are currently few studies on the automation of security policy semantic generation. In the early stage, semantic generation mainly relied on manual work to transform security policies into machine-understandable languages. Later, compilers were used to semantically generate security policies, but they required manual settings beforehand and adapted only to fixed security policies. Nowadays, the compilation technique proposed by researchers [10] can work without human intervention, but it is still only suitable for a single grammar and cannot cope with multi-grammar environments, which has become a bottleneck for the automation of network access control configuration. Given the current limitation of security policy semantic generation techniques in accommodating multi-grammar environments, this paper aimed to implement an algorithm that can automatically generate security policy semantics for multiple grammars. Inspired by advancements in natural language processing, this paper abstracted the task of automatic security policy semantic generation as a sequence labeling task in natural language processing and proposes a network security policy semantic generation algorithm named ADLBiLSTM, which is used to generate semantics for security policies of different types and grammatical structures. This model combines a self-attention mechanism and a double-layer BiLSTM for automatic feature extraction, which is used to generate semantics for security policies with different grammars and assist in the management of network security policies. Our contributions in this work can be summarized as follows.
(1) We propose an algorithm for semantic generation of network security policies, which combines the self-attention mechanism with a double-layer BiLSTM to automatically extract features of network security policies from different levels and generate semantics accordingly.
(2) Through experiments, we verify that the model performs excellently in automatically generating semantics for network security policies of different types and grammars.
This paper is organized as follows. Section 2 reviews related work. We propose the semantic generation algorithm in Section 3. We present the evaluation experiments in Section 4. Section 5 concludes the paper.
Related Work
Early security policies primarily relied on static, predefined rule sets [11], which were typically configured by security experts based on the specific needs and environments of the organization. In this case, semantic generation primarily involves converting rules into machine-readable formats for implementation in network devices or security systems, which relies heavily on manual operations. Increasingly complex security needs have gradually transformed simple rule sets into more advanced formal methods. Elfaki et al. [12] used first-order logic (FOL) to formalize firewall rules. Hamilton et al. [13] proposed an algorithm that converts firewall rules into conjunctive normal form (CNF) or disjunctive normal form (DNF) Boolean expressions. To represent and understand security policies more effectively, semantic models such as firewall decision diagram models [14] and ordered binary decision diagram (OBDD)-based models [15] have been introduced. These models not only enhance the expressive power of policies but also improve the flexibility and complexity of policy evaluation. However, manual optimization is still required during policy semantic generation to ensure that it accurately reflects the organization's security intentions. To address this issue, [10] designed a verified stateless firewall policy compiler, but this compiler can only perform semantic generation for specific security policies and requires reprocessing of data when dealing with security policies of other syntaxes. With the development of big data and artificial intelligence technologies, security policies are gradually relying on these technologies for optimization and automatic generation [16]. The accuracy of semantic generation is crucial when generating semantics for many different types of security policies. Deep learning has demonstrated its unique advantages in this scenario, as it can extract useful information and patterns from vast datasets for in-depth learning.
The advancements in the field of natural language processing (NLP) [17] have provided technical support for the semantic generation of security policies, with sequence labeling techniques being one of its key components. This technology possesses robust capabilities in information extraction, text understanding, and semantic analysis, enabling it to identify and label crucial concepts and entities in unstructured text, providing a solid foundation for automatic semantic generation. Long short-term memory neural networks (LSTMs), which are a form of recurrent neural network (RNN), are widely used in NLP [18][19][20][21]. RNNs face issues such as gradient vanishing and exploding when dealing with long sequences, making it difficult to capture long-term dependencies. LSTMs alleviate these problems through their gating mechanism and memory cells, but standard LSTMs are unidirectional, limiting their ability to capture contextual information. Therefore, researchers have begun to explore more effective network structures. Huang et al. [22] first proposed the BiLSTM-CRF structure, which uses a bidirectional LSTM (BiLSTM) plus a conditional random field (CRF) to solve the problem of text tagging. In that article, they compared long short-term memory networks (LSTMs), bidirectional long short-term memory networks (BiLSTMs), and BiLSTM-CRF for natural language tagging. This network structure is still one of the mainstream methods today. Subsequently, Bahdanau et al. [23] first introduced the attention mechanism into NLP tasks, an innovation that enabled the model to better allocate attention when processing long sentences, improving processing effectiveness. However, the structure of these models is relatively simple, and their feature extraction for complex data is not comprehensive. Therefore, Lin et al. [24] proposed an innovative model that adopts a hierarchical feature extraction method to extract more information from both the character level and the word level, such as suffix information, and applies attention mechanisms at both levels to distinguish the importance of information. This addresses the limitations and one-sidedness of extracting contextual features solely from characters or words in current settings, improving the model's ability to extract features from data. However, for non-natural language data, the importance of character-level features is lower than that of word-level features, thus requiring accurate capture of features between words. Liu et al. [25] proposed a novel unified architecture for text classification, which includes BiLSTM, an attention mechanism, and convolutional layers. The model performs well in text classification tasks, but for sequence labeling tasks, ensuring the legality of labels is crucial. Wang et al. [26] proposed a character-level entity recognition model based on multiple features, which adopts a double-layer BiLSTM-CRF structure and uses BERT for character vector embedding to identify named entities in unstructured daily scheduling data in the power system. This model performs well in feature extraction, and compared with traditional models, its F1 value is improved. Wu et al. 
[27] proposed an attention-based CNN-LSTM-BiLSTM model for short-term power load forecasting in integrated energy systems. These models perform well in non-natural language fields, but the types of data they handle are relatively stable. When the types of data change significantly, the model needs to be able to extract features from the data more effectively.
The Semantic Generation Algorithm
Current semantic generation methods mainly rely on compilers, which are typically only suitable for a single grammar or single vendor and require manual configuration. Therefore, they are difficult to adapt to environments with multiple grammar configurations. However, natural language processing has made significant progress in dealing with unstructured data, as it can convert unstructured text into structured data, enabling further utilization of these data. This is consistent with the goals we need to achieve in our task. Therefore, we can consider utilizing natural language processing techniques to design improved semantic generation methods and better adapt them to environments with multiple grammar configurations.
In this section, we present our proposed model, named ADLBiLSTM. The proposed model is shown in Figure 1 and can be divided, bottom-up, into three layers: the self-attention layer, the DL-BiLSTM layer, and the CRF layer. Firstly, the self-attention layer is used to capture the correlations between the input vectors a_i (where a_i is obtained from the input after word embedding) and to weight the original inputs; the output x_i is a representation of the rule that incorporates the importance of each word in the whole statement, and x_i serves as the input to the next DL-BiLSTM layer. Then, the DL-BiLSTM layer obtains the contextual features from the input vectors x_i and computes the score matrix of the labels that is input to the next layer. Finally, the score matrix is used as input to the CRF layer for joint training with the CRF layer's own feature scores, which restricts the label range to obtain the final label output. The final output of the model is the semantics of the input security policy.
To illustrate more specifically how this model can be applied to the semantic generation of security policies, we can use the ACL statement "access-list acl extended permit tcp any 2.178.0.0 0.0.0.255" as an example. This statement defines an access control rule that permits TCP traffic from any source address ("any") to the IP address range 2.178.0.0/24 (defined by the IP address 2.178.0.0 and the wildcard mask 0.0.0.255).
When we input this ACL statement into the ADLBiLSTM model, the model first converts the text into vector representations through word embeddings. Then, the self-attention layer captures the correlations between these vectors (e.g., the association within "permit tcp any 2.178.0.0") and weights the original input; in this case, "permit" and "tcp" have higher importance, followed by "any 2.178.0.0 0.0.0.255". Next, the DL-BiLSTM layer extracts contextual features from the weighted inputs and calculates a score matrix containing the label scores for each word in the security policy. Finally, the CRF layer performs joint training based on the score matrix and its own feature scores, outputting the final label sequence. This layer considers not only the final semantic generation result but also whether the semantics are reasonable. For example, in this case, it learns that the statement is valid when "any" represents the source address (sub_network) and 2.178.0.0 represents the destination address (obj_network_netadd).
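To make this sequence-labeling framing concrete, the following minimal Python sketch shows how such an ACL statement and its per-token labels might be paired as a training example. Only the labels sub_network, obj_network_netadd, and obj_network_mask are named in this paper; the remaining label names are illustrative placeholders rather than the authors' exact tag set.

# A minimal sketch of one ACL rule framed as a sequence labeling example.
# "sub_network", "obj_network_netadd", and "obj_network_mask" appear in the
# paper; the other label names are hypothetical placeholders.
tokens = ["access-list", "acl", "extended", "permit", "tcp",
          "any", "2.178.0.0", "0.0.0.255"]
labels = ["acl_keyword", "acl_name", "acl_type", "action", "protocol",
          "sub_network", "obj_network_netadd", "obj_network_mask"]

# Build word/label vocabularies so each token maps to an integer id,
# as required by an embedding layer.
word2id = {w: i for i, w in enumerate(dict.fromkeys(tokens))}
label2id = {l: i for i, l in enumerate(dict.fromkeys(labels))}

x = [word2id[w] for w in tokens]   # model input: one id per token
y = [label2id[l] for l in labels]  # supervision: one label id per token
print(list(zip(tokens, labels)))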
These three layers are described in detail below.
Self-Attention Layer
At the bottom, we apply the self-attention mechanism to fuse the relevance of words into the original input. The self-attention mechanism is at the bottom of the network and processes the raw inputs directly, weighting the raw inputs as the output to the next layer. A rule, after word embedding, can be represented as A = [a_1, a_2, …, a_n], where a_n denotes the n-th word in the rule. The input vectors are dotted with the coefficient matrices W_Q, W_K, and W_V to obtain Q = [q_1, …, q_n], K = [k_1, …, k_n], and V = [v_1, …, v_n], as in Equations (1)-(3):

Q = A · W_Q (1)
K = A · W_K (2)
V = A · W_V (3)
The query of each input word is dotted with the keys of the other input words (including itself) to obtain the attention scores, so that the correlation between the words can be obtained; the attention scores are then normalized using softmax to form the attention score matrix. Each value vector v_i is weighted by the attention score matrix, and the weighted vectors are summed to obtain the final output X = [x_1, …, x_n]. This weighted processing method makes the vital information more fully used, which improves the effect of semantic generation. Equation (4) describes the entire process, where d_k denotes the dimensionality of the word vectors; dividing by √d_k prevents computational overflow caused by too large a value of QK^T:

Attention(Q, K, V) = softmax(QK^T / √d_k) · V (4)

The self-attention mechanism considers the relationship between each element and the other elements, processing sequential data in an order-agnostic manner to better understand the context. In rules such as those for routers and firewalls, the same address may have different labels due to factors such as its location and specific connecting words. The self-attention mechanism can ignore non-critical parts of a rule and find the word with the highest correlation to an ambiguous word for judgment. In addition, security policies from the same vendor may adjust the original syntax due to changes in versions or operational requirements. Therefore, employing the self-attention mechanism to capture the relevance of each word in each rule can solve the problems of label inconsistency and syntactic heterogeneity in the semantic generation of security policies.
However, the self-attention mechanism does not directly include positional information, thus requiring additional mechanisms to capture sequential information.
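As a concrete illustration of Equations (1)-(4), below is a minimal sketch of single-head scaled dot-product self-attention in PyTorch. The class and variable names are ours rather than the authors' implementation, and positional encodings are deliberately omitted, mirroring the caveat above.

import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention, Equations (1)-(4)."""
    def __init__(self, dim):
        super().__init__()
        # Coefficient matrices W_Q, W_K, W_V of Equations (1)-(3).
        self.w_q = nn.Linear(dim, dim, bias=False)
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)
        self.scale = math.sqrt(dim)

    def forward(self, a):  # a: (batch, seq_len, dim), the embedded rule A
        q, k, v = self.w_q(a), self.w_k(a), self.w_v(a)
        # Pairwise attention scores, scaled by sqrt(d_k) to keep QK^T from
        # growing too large, then normalized with softmax (Equation (4)).
        scores = torch.softmax(q @ k.transpose(-2, -1) / self.scale, dim=-1)
        return scores @ v  # weighted output X = [x_1, ..., x_n]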
DL-BiLSTM Layer
Double-layer BiLSTM (DL-BiLSTM) consists of two layers of BiLSTM, which we utilize for context feature extraction. The given input X = [x_1, …, x_n] is a set of inputs that can embody long sequential information. Inputting it into the first BiLSTM procures a context vector c_t for each time step t. The hidden state of the first BiLSTM layer is h_{1,t}; the layer performs forward and backward operations on the input x_t, and c_t is denoted as the contextual feature vector for the t-th tag, i.e., at the t-th time point, the procured feature data contains both past and future information, as in Equation (5). The output c_t of the first BiLSTM layer is used as the input to the second BiLSTM layer.
c_t = [h_t^fwd : h_t^bwd] (5)

The second BiLSTM is recursively executed to perform label probability prediction, and then the prediction vectors are input to the dense layer to procure the score matrix.
The hidden state of the second BiLSTM layer at moment t is h_{2,t}, which is used to output the probability prediction vector p_n; the vectors p_n compose the predicted score matrix that serves as the input to the next layer. The process is shown in Figure 2.
Our security policy, as long sequence data, requires the model to fully understand its context. Therefore, we chose the BiLSTM structure, which consists of two LSTMs that are responsible for capturing the before and after information of the sequence, respectively, and improves the model's expressive ability.
To further enhance the model's ability to capture temporal information, we adopted a double-layer BiLSTM structure; i.e., stacking another layer on top of the standard BiLSTM, which enables the upper BiLSTM to process the output of the lower layer, thus extracting and integrating temporal sequence information in a deeper way. This structure not only extracts information from past and future directions, but also enhances the feature extraction ability of the model for complex data through double stacking. In addition, the double-layer BiLSTM has good generalization ability to capture rich timing information, strong expressive power, and stable and reliable performance in processing new data, which is crucial for dealing with new data or scenarios in real tasks.
However, due to its depth and complexity, it may face issues such as gradient vanishing or exploding during training, which requires adjusting parameters to enable the model to converge better.
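Below is a minimal PyTorch sketch of the DL-BiLSTM layer as described above, stacking two bidirectional LSTMs and a dense layer that emits per-token label scores; the hidden size is left as a free parameter here, whereas the paper sets it to the number of tags.

import torch.nn as nn

class DLBiLSTM(nn.Module):
    """Two stacked BiLSTMs plus a dense layer producing the per-token
    label score matrix (the emission scores for the CRF layer)."""
    def __init__(self, input_dim, hidden_dim, num_labels):
        super().__init__()
        # First BiLSTM: forward and backward passes over the inputs x_t,
        # whose concatenated hidden states form c_t (Equation (5)).
        self.bilstm1 = nn.LSTM(input_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Second BiLSTM: re-processes c_t to deepen the temporal features.
        self.bilstm2 = nn.LSTM(2 * hidden_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Dense layer mapping hidden states h_{2,t} to label scores p_t.
        self.dense = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, x):  # x: (batch, seq_len, input_dim)
        c, _ = self.bilstm1(x)
        h, _ = self.bilstm2(c)
        return self.dense(h)  # (batch, seq_len, num_labels)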
CRF Layer
After the output of the double-layer BiLSTM, we obtained the prediction score matrix, which, however, disregards the connections between labels. Therefore, we used CRF to impose certain constraints on the final prediction results to ensure the accuracy of label generation. CRF is a probabilistic graph-based model for modeling sequence labeling problems, which achieves specification of labels by learning the joint distribution between the input sequence and the output sequence, taking full account of the relationship between individual markers, and dealing with the dependencies between the markers. For a given input sequence X, its label sequence is y = [y_1, y_2, …, y_n]. When inputting X, we calculate its score to obtain the score matrix, as in Equation (6).
score(X, y) = Σ_i h_i[y_i] + Σ_i T_{y_{i-1}, y_i} (6)

where h_i[y_i] represents the emission score, which is the score when the prediction result is a particular label and comes from the output of the DL-BiLSTM, and T_{y_{i-1}, y_i} represents the transition score, which is the probability of the previous character's labeling result transitioning to the current character's labeling result. Considering the labeling process as a path, the true path (the correct prediction result) is the highest scoring of all the paths, and thus the formula is expressed as the sum of the emission scores and the transition scores over the path. We want the correct path's score to have a greater weight in the total score, and we typically apply softmax to the scores, so we defined the loss as in Equation (7):

loss = -log( exp(score(X, y)) / Σ_{y'} exp(score(X, y')) ) (7)

By training, we obtained the final prediction and, in the process, normalized the prediction labels.
As the CRF layer needs to calculate the scores for the entire sequence during training, this can lead to increased computational complexity.
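For illustration, the sketch below computes the path score of Equation (6) and the loss of Equation (7) for a single sequence. The partition sum over all label paths is enumerated brute-force purely for clarity; a practical CRF layer uses the forward algorithm instead (and Viterbi decoding at inference), which is what keeps the computation tractable.

import itertools
import torch

def path_score(emissions, transitions, labels):
    # score(X, y) = sum_i h_i[y_i] + sum_i T[y_{i-1}, y_i]  (Equation (6))
    # emissions: (seq_len, num_labels) scores from the DL-BiLSTM layer;
    # transitions: (num_labels, num_labels) learned transition matrix.
    score = emissions[0, labels[0]]
    for i in range(1, len(labels)):
        score = score + transitions[labels[i - 1], labels[i]] \
                      + emissions[i, labels[i]]
    return score

def crf_loss(emissions, transitions, gold_labels):
    # loss = -log( exp(score(X, y)) / sum_y' exp(score(X, y')) )  (Equation (7))
    # The sum over all paths y' is enumerated exhaustively here, which is
    # exponential in sequence length and meant only as an illustration.
    seq_len, num_labels = emissions.shape
    all_paths = itertools.product(range(num_labels), repeat=seq_len)
    log_z = torch.logsumexp(torch.stack(
        [path_score(emissions, transitions, list(p)) for p in all_paths]), dim=0)
    return log_z - path_score(emissions, transitions, gold_labels)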
Experiments
This section focuses on the experiments we designed and the experimental results. We designed two experiments on semantic generation. The experiments: (1) revealed the influence of different model structures on the generation of security policy semantics; and (2) verified the performance of the proposed model for generating security policy semantics with different grammars. The experimental results showed that the proposed model can generate highly accurate semantics for different security policies in different scenarios.
Dataset
There is no labeled dataset of security policies for network access control. Therefore, based on the role of each part of the security policy in the network, this research constructed a dataset of network access control policies [28]. A security policy primarily filters, allows, or discards data packets on interfaces by setting specific conditions. Therefore, we annotated the definition statements and conditions within the security policy to accurately express its semantics. In this experiment, we used two types of data: ACL data and firewall data.
ACL data contains seven datasets, among which Data_mix, used for training, is composed of actual data used by three vendors: Cisco, Huawei, and Ruijie. The six datasets used in the experiment were simulated based on the actual usage frequency of network configurations for standard ACLs and extended ACLs from the three vendors, with each dataset containing 3000 policy rules. Firewall data included two datasets, among which Firewall_train, used for training, is the actual data used by TOPSEC, and Firewall_test, used for the experiments, is simulated data based on the actual configuration frequency.
Our dataset was manually annotated. The volume of each dataset is shown in Table 1, and the number of labels of each dataset is also shown in Table 1. In Table 1, the number of labels corresponding to each dataset represents the number of distinct label categories that appeared in that dataset. Taking Cisco's extended ACL dataset as an example, this dataset includes two types of extended ACLs: one that contains port information and one that does not. The total number of distinct label categories for these two types of security policies is 10.
The example datasets used in this experiment are shown in Table 2, and due to the large number of firewall definition statements, only two types are shown here.
Network Parameters
Here, we describe the parameter settings used in the experiment.
Word embedding: We utilized embedding to generate a 100-dimensional vector for each word. The embedding layer was trained together with the neural network, so pre-trained embeddings were not used. The weights of the embedding were randomly initialized.
Optimization algorithm: We used small-batch stochastic gradient descent with an initial learning rate set to 0.001 and a decay rate of 0.0001. Gradient clipping was set to 0.5 to avoid gradient explosion.
The network structure: The self-attention layer was fed 100-dimensional word embeddings; its output, whose length equals the number of labels, was received by the DL-BiLSTM layer; the hidden layer size was also set to the number of tags for dynamic adjustment, and the output had the same dimension as the hidden layer, which was then fed to the subsequent layers.
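Putting the pieces together, the sketch below shows one way the full network and the stated hyperparameters might be wired up, reusing the SelfAttention and DLBiLSTM sketches from Section 3. The embedding dimension (100), learning rate (0.001), decay rate (0.0001), and clipping value (0.5) come from the text; interpreting the decay rate as weight decay, as well as the vocabulary size and label count, are our assumptions.

import torch
import torch.nn as nn

class ADLBiLSTMNet(nn.Module):
    """Embedding -> self-attention -> double-layer BiLSTM -> emission scores;
    the CRF loss of Equation (7) is applied on top of the returned scores."""
    def __init__(self, vocab_size, num_labels, embed_dim=100):
        super().__init__()
        # Randomly initialized and trained jointly with the network
        # (no pre-trained embeddings), as stated above.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.attention = SelfAttention(embed_dim)
        # Hidden size set to the number of labels, per the paper.
        self.encoder = DLBiLSTM(embed_dim, num_labels, num_labels)

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        return self.encoder(self.attention(self.embedding(token_ids)))

model = ADLBiLSTMNet(vocab_size=1000, num_labels=10)  # sizes are placeholders
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.0001)
# Inside the training loop, after loss.backward(), gradients are clipped
# at 0.5 before each update to avoid gradient explosion:
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
optimizer.step()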
Model Structure and Evaluation Metrics
In this paper, four experimental groups were set up to demonstrate the superiority of the proposed model. The first group was the baseline group; that is, the BiLSTM-CRF model. The second group added another layer of BiLSTM to the baseline to confirm the performance improvement of the double-layer BiLSTM in extracting contextual features of security policies. The third group added a self-attention mechanism to the baseline to verify the performance improvement of the self-attention mechanism in generating semantic representations of security policies. The fourth group was the model proposed in this paper, ADLBiLSTM, which adds a self-attention mechanism and uses a double-layer BiLSTM to extract contextual features of security policies. The specific model structure is shown in Table 3.
Network Structure | Name
BiLSTM + CRF | Baseline
Double-layer BiLSTM + CRF | Baseline + DL-BiLSTM
Self-attention + BiLSTM + CRF | Baseline + Self-attention
Self-attention + double-layer BiLSTM + CRF | Ours

The models in the four experimental groups were all trained using Data_mix in Table 1, and then the semantic representations of security policies from three vendors, Cisco, Huawei, and Ruijie, were generated respectively. To evaluate the performance of the models, this paper adopted common evaluation metrics, namely precision (P), recall (R), and F1 score (F1). We defined TP as the number of correctly generated semantics, NP as the number of incorrectly generated semantics, and N as the total number of semantics that should be generated. Therefore, these three metrics can be defined as follows in Equations (8)-(10):

P = TP / (TP + NP) (8)
R = TP / N (9)
F1 = 2 × P × R / (P + R) (10)

In the experiments, the dataset used was simulated based on usage frequency, resulting in a small amount of data for some types, which may lead to inaccurate evaluation of the model's performance. Therefore, we conducted multiple experiments and calculated the metrics to improve their stability and reliability.
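As a small sanity check, the metric computation of Equations (8)-(10) can be written directly from the definitions of TP, NP, and N above; the example numbers below are invented.

def precision_recall_f1(tp, np_, n):
    # tp: correctly generated semantics; np_: incorrectly generated
    # semantics; n: total semantics that should be generated.
    p = tp / (tp + np_)        # Equation (8)
    r = tp / n                 # Equation (9)
    f1 = 2 * p * r / (p + r)   # Equation (10)
    return p, r, f1

# Example: 2900 correct and 100 incorrect out of 3000 expected labels.
print(precision_recall_f1(2900, 100, 3000))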
Semantic Generation for ACLs
ACL policies include extended ACL and standard ACL, so this experiment separately generated the semantics of these data to verify the performance of the model. Table 4 presents the semantic generation results of the four models when processing standard ACL security policies with different syntaxes. The syntactical structure of standard ACL is relatively simple, and the length of the policies is also shorter. Therefore, the model proposed in this paper exhibits higher accuracy in extracting features of standard ACL. Especially for the standard ACLs of Huawei and Ruijie, the model achieved higher precision, which fully demonstrated the good performance of the model in processing security policies with clear and concise structures. This result not only verified the effectiveness of the model's feature extraction ability but also further proved the applicability of the self-attention mechanism and the double-layer BiLSTM network structure in the semantic generation task. By comparing the performance of different models on standard ACL, we found that the model combining the self-attention mechanism and double-layer BiLSTM has significant advantages in extracting and understanding concise syntactical structures. This advantage enabled the model to capture key information more accurately and generate semantic representations that conformed to the original meaning.
Table 5 shows the detailed results of semantic generation for security policies of extended ACL with different syntaxes using the four models. Compared to standard ACL, the syntactical structure of extended ACL is more complex and the length is longer. From the table, we can observe that for extended ACL, adding BiLSTM and the self-attention mechanism to the baseline both improved the performance of semantic generation for security policies. However, the model proposed in this paper significantly enhanced the P, R, and F1 scores. Taking Cisco's extended ACL as an example, P increased from 34.92% to 99.74%, R increased from 45.99% to 99.74%, and F1 increased from 39.48% to 99.74%. Our model achieved good results when generating semantics for Huawei's extended ACL and Ruijie's ACL, and the generated results achieved high precision. The self-attention mechanism can focus on the most important parts of the original data, which is helpful for processing structured and logical texts such as ACL. However, security policies often involve not only individual rules or conditions but also the relationships between these rules or conditions and their positions and roles in the entire security policy. Therefore, adding an additional layer of BiLSTM further enhanced the ability to extract contextual features of security policies. By combining the self-attention mechanism and double-layer BiLSTM, the model not only focused on the most important information but also understood the position and role of this information in the entire security policy, further improving the performance of the model. Figure 3 shows the number of correct semantic generations made by these four models for both standard ACLs and extended ACLs. As can be seen from the figure, compared to the baseline, the other three models significantly improved the number of correct semantic generations. For standard ACLs with simple grammatical structures and shorter sentences, the difference in the number of correct semantic generations among the three models was not obvious. However, for more complex extended ACLs, the gap between these three models was more pronounced. Overall, our model outperformed other models in terms of semantic generation performance. To further validate the role of the self-attention mechanism and double-layer BiLSTM in semantic generation, we generated a confusion matrix using the experimental results of Huawei's extended ACL as an example to verify the impact of these two layers of the network. As shown in Figure 4, adding BiLSTM and the self-attention mechanism to the baseline improved the model's ability to extract features from security policies. By comparing Figure 4a-c, it can be seen that "dest_host_ip" and "dest_host_ip_mask" were difficult to generate as labels. The reason is that the data for these two labels were in the form of IP addresses and very similar. Therefore, the baseline often generated these two labels as "obj_network_mask" and "obj_network_netadd". These two labels also corresponded to data in the form of IP addresses, but they represented different meanings in the security policy. The correct labels represented the address and mask of the destination host, while the incorrectly generated labels represented the address and mask of the source host. When we added a self-attention layer and an additional BiLSTM layer to the baseline, although the accuracy of other semantic generations improved, there were still errors in generating these two labels. However, our model, which combined the double-layer BiLSTM and the
self-attention mechanism, achieved a 100% accuracy rate when generating these two labels, as shown in Figure 4d.
Semantic Generation for Firewalls
Compared to ACL data, firewall data is more cumbersome and contains many definition statements, making it more difficult to extract features from firewall data. Our model had advantages in this scenario. Table 6 demonstrates the performance of the four models on the task of firewall security policy semantic generation. Through comparison, we found that the method proposed in this paper achieved significant improvements in precision (P), recall (R), and F1 score compared to the baseline model: P increased from 64.74% to 96.09%, R increased from 68.4% to 96.18%, and the F1 score increased from 66.51% to 96.13%. This was due to our model structure: as we can see from the data in the table, the self-attention mechanism preprocessed the complex data by extracting the importance of the original data, making it easier for the subsequent networks to extract contextual features. This is crucial in complex, redundant scenarios such as firewall data, as it helps the model capture key semantic information more accurately, and the double-layer BiLSTM can extract better contextual features than a single-layer BiLSTM; together, these improve the accuracy of semantic generation. The results of the above two experiments indicated that, owing to the advantages of the self-attention mechanism and the double-layer BiLSTM network structure, the model proposed in this paper performed well in automatic semantic generation. Whether faced with security policies of simple grammatical structure and short length, or of complex grammatical structure and cumbersome form, the model effectively performed automated semantic generation.
The self-attention mechanism used in the model captured long-distance dependencies in the input sequence and focused on the parts most relevant to the current task, thereby improving the model's semantic understanding ability. The double-layer BiLSTM structure fully utilized the before and after information of the sequence, encoded the input sequence bidirectionally, and further enhanced the model's context awareness.
Therefore, this structure combining the self-attention mechanism and the double-layer BiLSTM enabled our model to perform well in automatic semantic generation tasks, cope with security policies of different complexities, and generate their semantics.
Conclusions
In this article, we propose a semantic generation algorithm for security policies named ADLBiLSTM. The algorithm model consists of three layers, namely the self-attention layer, the DL-BiLSTM layer, and the CRF layer, which are used to extract features from security policies in different aspects. This algorithm can generate semantics for different types of security policies with varying grammatical structures. Experimental results demonstrated that our algorithm can achieve a precision rate of 100% for semantic generation of ACLs and a precision rate of 96.09% for semantic generation of firewalls.
The algorithm proposed in this paper can understand different types and grammatical structures of security policies, generating their corresponding semantic representations and avoiding errors caused by human factors, thus improving the enforceability of the policies. In addition, this algorithm can be applied to the automatic configuration of network security policies, enhancing accuracy while reducing management costs, to address the problems in configuring network security policies caused by new security threats.
Author Contributions: Conceptualization, J.Z.; methodology, J.Z. and X.L.; software, J.Z.; validation, J.Z.; data curation, J.Z.; writing, J.Z. and X.L. All authors have read and agreed to the published version of the manuscript.
Figure 2. The inner workings of the DL-BiLSTM.
Figure 3. The number of correct security policy semantics generated by four models: (a) generating the standard ACL semantics; (b) generating the extended ACL semantics.
Figure 4. Comparison of confusion matrices among four models: (a) performance of baseline; (b) performance of baseline + DL-BiLSTM; (c) performance of baseline + self-attention; (d) performance of our model.
Table 1. Data used in the experiments.
Table 2. Examples of the data used in the experiments.
Table 3. Experimental group design.
Table 4. The model's semantic generation results for the standard ACL.
Table 5. The model's semantic generation results for the extended ACL.
Table 6. The model's semantic generation results for firewall. | 2024-05-29T15:14:04.959Z | 2024-05-25T00:00:00.000 | {
"year": 2024,
"sha1": "eb934e07ae26731a1b2c21b744f189420c6cf8a5",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/14/11/4555/pdf?version=1716642993",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5c357c10dfea1a6dd2f5239dd9b6b06df1f5e66d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
266378899 | pes2o/s2orc | v3-fos-license | From cellular to fear memory: An epigenetic toolbox to remember
Throughout development, the neuronal epigenome is highly sensitive to external stimuli, yet capable of safeguarding cellular memory for a lifetime. In the adult brain, memories of fearful experiences are rapidly instantiated, yet can last for decades, but the mechanisms underlying such longevity remain unknown. Here, we showcase how fear memory formation and storage – traditionally thought to exclusively affect synapse-based events – elicit profound and enduring changes to the chromatin, proposing epigenetic regulation as a plausible molecular template for mnemonic processes. By comparing these to mechanisms occurring in development and differentiation, we notice that an epigenetic machinery similar to that preserving cellular memories might be employed by brain cells so as to form, store, and retrieve behavioral memories.
Introduction
Memories of the past have a fundamental role in life, providing individuals with a framework to structure their present and future behavior on the canvas of previous experiences. Understanding how the brain converts temporary external stimuli into long-lasting changes is a fundamental question of neuroscience, and generations of scientists have been confronted with the challenge to reconcile the transient nature of synapse-based events with the persistence of memory [1–4]. During development, programs of gene expression initially established in response to extracellular signals are maintained across multiple rounds of cell division by means of changes to the chromatin structure referred to as epigenetic modifications [5–7]. In essence, epigenetic mechanisms – defined as the perpetuation of altered gene activity states in the context of the same DNA sequence [6] – sit at the core of "cellular memories" that arise during lineage development. By ensuring the persistence of cell fate trajectories even in the absence of the initial signals, the newly established epigenetic landscape locks cell identity into a specialized function for life, as Conrad Waddington (1905–1975) had likely already envisioned when portraying the developmental history of a cell in his renowned illustration [8,9].
Similarly, in the adult central nervous system, synaptic inputs initiate signaling cascades that trigger specific transcriptional programs in response to environmental stimuli [10,11], which, as a growing body of evidence over the past two decades has shown, are regulated by epigenetic mechanisms [12–15]. Nevertheless, whether epigenetic mechanisms constitute the molecular equivalent for long-term memory storage is still a matter of debate. This question has been best studied in the context of fear learning (Figure 1), which is one of the longest-lasting forms of memory [16]. In particular, Pavlovian fear conditioning – the learning of associations between a neutral and a painful, dangerous, or threatening stimulus (Figure 1a) – offers an ideal paradigm to access the sequence of molecular events underlying long-term memory storage [17].
In this review we summarize the most recent studies in the field of epigenetic regulatory mechanisms of fear learning, focusing on DNA methylation, histone modifications, and higher-order chromatin organization (Figure 1b). In particular, by drawing parallels with the epigenetic principles of cellular memory, we explore the hypothesis that epigenetic mechanisms could be co-opted by the nervous system in the adult brain so as to register memories of past experiences using chromatin as a template. Finally, we outline emerging tools and future challenges for pushing the boundaries of epigenetic memory research even further. For advances in the role of histone variants, noncoding RNAs, and epitranscriptional modifications in memory processes, we refer the reader to further publications [18–21].
The Janus-faced property of DNA methylation for memory formation and storage

DNA methylation (DNAme) on the fifth position of cytosine (5mC), which in vertebrates mainly occurs at CpG sites [22], is a key layer of epigenetic regulation during development and throughout life [23]. Indeed, for any given cell type, the genome-wide distribution of DNAme often correlates with its transcriptional state and reflects specific differentiation patterns. By virtue of its ability to be maintained across cell replication, DNA methylation is traditionally viewed as the bona fide epigenetic mark for cellular memory [22,24].
Notwithstanding, as emerged in more recent years, DNA methylation is also a highly dynamic mechanism, whose homeostasis reflects the interplay between three distinct pathways: a) the establishment of de novo methylation marks, carried out by the DNA methyltransferases (DNMTs) DNMT3A and DNMT3B complexed with DNMT3L; b) the maintenance of existing methylation patterns across DNA replication, ensured by the activity of DNMT1; and c) the erasure of DNA methylation, which predominantly occurs through a cascade of oxidation reactions mediated by the Ten-eleven translocation (TET) family of enzymes [23].
By being at once dynamic and stable, DNA methylation has attracted the interest of neuroscientists since the early days of the neuroepigenetics field. Indeed, pioneering work demonstrated that in the adult brain, patterns of DNA methylation rapidly change upon neuronal activity in response to physiological and environmental stimuli, including cognitive processes [25,26]. Importantly, learning-induced bi-directional changes in DNA methylation levels were observed at promoters of plasticity-related genes (becoming de-methylated) and memory suppressor genes (gaining methylation), starting from 1 h and up to 4 weeks after memory encoding [27-29]. In addition, the pharmacological inhibition of DNMTs in the rat hippocampus either immediately after contextual fear conditioning (CFC) or one month later was shown to impinge on the formation and the maintenance of long-term memory, respectively [25,27]. Likewise, similar impairments in memory-related hippocampal functions were observed upon disruption of DNMT activity using targeted genetic approaches, such as conditional knockout mice or brain region-specific knockdowns [30-33]. Finally, manipulation of the TET pathway also affects memory performance, and it does so in a highly isoform-specific manner: constitutive TET1 knockout mice display normal fear memory formation but impaired extinction, whereas the loss of TET2 in adult excitatory neurons enhances the recall of fear memories [34,35]. Together, these findings show that manipulating DNA methylation either in the entire brain or in a brain region- or cell-type-specific manner changes the genomic responses to neuronal activity as well as memory performance.

Figure 1. Epigenetic mechanisms in fear learning. a) Using Pavlovian fear conditioning in mice to study the "lifespan" of a memory. During encoding, a neutral stimulus, such as a novel context, is paired with an aversive stimulus, such as a foot shock. A memory of the event begins to form within the population of neurons with higher excitability at the time of training. The mouse is then returned to its home cage, and the new memory trace is stabilized into a more long-lasting form. Upon subsequent exposure to the same context, the stored memory is retrieved and the mouse exhibits conditioned fearful responses, such as freezing. For a time-limited period, the retrieved memory becomes labile and needs to be reconsolidated to be stored further. Such further storage can be prevented by repeatedly re-exposing the mouse to the contextual cues without foot shock, a process called memory extinction. b) Several levels of epigenetic regulation have been shown to accompany the encoding, storage, retrieval, and extinction of fear memories. Highlighted in the boxes are the epigenetic mechanisms discussed in this review, alongside the most recent studies from the field, with icons representing the individual memory phases they focused on. lncRNA, long noncoding RNA; TADs, topologically associated domains; C, cytosine; 5mC, 5-methylcytosine; 5hmC, 5-hydroxymethylcytosine; 5fC, 5-formylcytosine; DNMT, DNA methyltransferase; TET, ten-eleven translocation; HAT, histone acetyltransferase; HDAC, histone deacetylase; HDACi, HDAC inhibitor; KMT, lysine methyltransferase; KDM, lysine demethylase.
Recent technological advances now offer the possibility to investigate the role of DNA methylation by targeting exclusively the neuronal ensemble that encodes a specific memory trace, commonly referred to as the engram. Broadly, an engram can be defined as a group of neurons that 1) are activated by a specific learning experience, 2) become modified by this experience, 3) are later re-activated by re-exposure to the same experience, and 4) thus store the content of the learned experience [36,37]. Importantly, restricting the overexpression of DNMT3a2, an isoform previously identified to regulate cognitive processes [32], to engram cells of the dentate gyrus (DG) increased engram reactivation at recall and strengthened fear memory [38]. Hence, dynamics of DNA methylation within the engram itself are likely crucial for successful memory retrieval, but where precisely across the genome DNA methylation changes remains to be investigated.
A first attempt in this direction comes from a study showing that, following novel environment exploration, the activated ensemble is characterized by differential methylation specifically occurring at genomic regions harboring bistable DNA methylation states, suggesting that variability in DNA methylation levels may account for why only certain cells react to a given environmental stimulation [39]. Intriguingly, this hypothesis resonates well with the observation that during development the oscillatory DNA methylation dynamics typically found at the pluripotency stage facilitate key lineage decisions by allowing the same external signal to have different transcriptional outputs, eventually resulting in the emergence of different cell lineages from the same pool of stem cells [40]. Along these lines, it would be interesting to explore whether, and the extent to which, the balance in DNA methylation governs the allocation of fear memory in the brain, especially in light of the observation that DNMT3a2 overexpression in a sparse and random population of cells prior to CFC did not bias the recruitment of individual neurons to the fear memory trace [38]. Whether memory allocation relies on DNMTs other than DNMT3a2, or on epigenetic marks other than DNA methylation, with dynamic posttranslational histone modifications (PTMs) being a likely candidate, remains to be investigated.
Histone acetylation as a mnemonic mark on chromatin
With a plethora of enzymatic factors involved in their deposition, removal, or reading, histone PTMs serve as a signal-integration template between genes and the environment in complex biological processes ranging from stem cell differentiation to immunological responses [6,41]. In the adult brain, early efforts focused on the role of histone acetylation, with pioneering work showing that learning triggers the recruitment of histone acetyltransferases (HATs) and an increase of histone acetylation at the promoters of synaptic plasticity- and memory-associated genes, whilst histone deacetylases (HDACs) such as HDAC2 negatively affect memory processes [42-44].
In the last five years, several studies have further refined the role of histone acetylation in memory formation and storage, using pharmacological, behavioral, or epigenetic editing tools. In parallel, three important papers started to shed light on the upstream regulatory mechanisms of histone acetylation, revealing an important metabolic contribution (Box 1). Multiple research efforts have explored the consequences of manipulating acetylation levels, either by directly inhibiting the activity of HDACs with HDAC inhibitors (HDACis) or by replacing the pharmacological intervention with a purely behavioral alternative, with the goal of ameliorating memory and rescuing cognitive impairments [45,46]. Indeed, several different types of HDACis were found to improve performance in CFC and extinction learning, as well as to rescue memory in mouse models of Alzheimer's disease [47-49]. Mechanistically, it is interesting to highlight that despite being administered systemically and lacking any inherent target specificity, HDACi treatment elicits electrophysiological, transcriptional, and epigenetic changes only when applied jointly with CFC [43,47]. In particular, HDACi treatment was found to enhance H3K27ac levels at genes involved in synaptic communication that were already acetylated by CFC, suggesting that the amelioration of behavioral responses is likely due to a reinforcement action by the HDACi [47]. To explain this phenomenon, the idea of cognitive epigenetic priming has been proposed [46], purposely evoking a concept widely used in developmental studies, epigenetic priming [50,51]. Epigenetic priming describes the state of chromatin regions in pluripotent cells that are neither silenced nor fully active, but are instead epigenetically bookmarked for rapid gene activation in response to signaling and developmental cues [50,51]. In developing sensory neurons, for example, immediate early genes (IEGs) are embedded in a unique chromatin signature carrying H3K27ac on promoters but repressive H3K27me on gene bodies [52]. Such an epigenetic signature prevents inappropriate transcription in response to non-relevant stimuli, but at the same time primes IEGs for fast induction following appropriate stimuli [52]. Similarly, glucocorticoid exposure during hippocampal neurogenesis induces long-lasting changes in DNA methylation that prime target genes for an enhanced responsivity to future stress exposures [53]. Although based on only a few lines of evidence, both histone acetylation and DNA methylation thus appear to be epigenetic mechanisms that the brain has co-opted from development for its adult functioning.
These advancements notwithstanding, a more fine-grained level of investigation is still missing, namely one addressing the causal link between the epigenetic modification per se and the storage of fearful experiences as long-term memory. Thanks to the development of transcriptional and epigenetic engineering technologies, it has in the meantime become possible to achieve precise transcriptional and epigenetic control only at genomic sites of interest. A first example along these lines used engineered zinc finger proteins (ZFPs) fused to the p65 transcriptional activation domain to upregulate the expression of the Cdk5 gene in the mouse hippocampus, resulting in long-term fear memory attenuation [54]. Furthermore, in a rat model of adolescent alcohol abuse, CRISPR-based epigenomic editing was shown to reduce anxiety by targeting histone acetylation marks at a specific enhancer that responds to synaptic activity [55]. To date, epi-editing technologies have not yet been exploited in fear memory studies, and whether the site-specific manipulation of histone acetylation in a defined brain region, or even only in its engram cells, would affect memory performance is one of the next open questions in the field.
Histone methylation: Bivalency at play
Similar approaches could also be used to investigate the contribution of other histone PTMs that have been less extensively studied in the neuroepigenetics of memory, such as histone methylation. Histone methylation is considered more durable and stable than histone acetylation, and its effects on gene expression depend on the specific residue and the degree (i.e., mono-, di-, tri-) of methylation. Namely, H3K4me3 is associated with genes that are either poised for activity or actively transcribed, whereas H3K27me3 is a mark of repressed chromatin, and H3K4me1 of silent or active enhancers [56,57]. For activating methylation marks, global levels of H3K4me3 were found to be elevated in the hippocampus after learning, and broader domains of H3K4me3 were found to be established at the promoters and super-enhancers of learning-associated genes by the histone lysine methyltransferases (KMTs) KMT2A and KMT2B [58,59]. For repressive methylation marks, an siRNA-mediated knockdown of the KMT EZH2 in the rat hippocampus was shown to reduce fear memory retrieval by affecting H3K27me3 levels [60].
So far, patterns of H3K4me3 and H3K27me3 have been investigated separately from each other. Yet, whether, and the extent to which, these marks co-exist in neurons activated by learning would be highly interesting to investigate in light of embryonic stem cell (ESC) differentiation. There, the promoters of key developmental genes are simultaneously enriched for both activating (H3K4me3) and repressive (H3K27me3) marks, forming so-called bivalent domains. These have been proposed to maintain genes in a "poised" state, maintaining repression in the absence of differentiation signals but at the same time allowing for rapid activation in response to external stimuli via the removal of H3K27me3 [61,62]. As bivalent domains are also found on specific genes in adult brain cells [63,64], it is tempting to speculate that a similar "poising" mechanism also operates on rapidly induced learning and memory genes. Emerging methods for single-cell sequencing of histone marks may shed light on this intriguing possibility in the future.

Box 1. The role of the metabolic-epigenetic axis in memory.
Acetyl-CoA is the metabolic substrate used by HATs to generate histone acetylation, by transferring the acetyl group from acetyl-CoA to the lysine residues of histones [88]. In neuronal nuclei, circulating acetate derived from alcohol consumption was found to be captured and turned into acetyl-CoA by the chromatin-bound acetyl-CoA synthetase 2 (ACSS2). In particular, ACSS2 has been shown to bind to the promoters of memory-related genes alongside the HAT CREB-binding protein (CBP) in the mouse hippocampus, suggesting a key role in the regulation of histone acetylation at these genomic sites [89,90]. Indeed, upon CFC, mice constitutively lacking ACSS2 show a reduction in the levels of H3K9ac and H4K5ac, two marks associated with learning, and in the expression of activity-dependent genes, as well as a deficit in the formation of long-term fear memory [89]. Notably, the same effects were also observed when blocking ACSS2 with a systemically administered small-molecule inhibitor, a result that showcases how the acetyl-CoA pathway could be amenable to pharmacological interventions targeting persistent memories of traumatic events [91]. In the future, it will be interesting to extend these lines of research to other metabolic substrates that also fuel epigenetic mechanisms involved in memory processes, for example S-adenosylmethionine (SAM) for DNA methylation.
Towards an integrated view: Multi-omics profiling of the memory engram
A crucial step towards understanding the molecular foundations of long-term memory storage is the integration, nowadays possible, of omics approaches in a cell type-specific manner, in particular in engram cells [65]. Recently, several studies have begun to profile different modalities of the engram's epi-transcriptional landscape throughout the various memory stages, from the initial learning event through the preservation of the fear experience over time to its final retrieval, producing insightful results [66-68].
By combining activity-dependent genetic labeling of engram cells in the mouse hippocampus with fluorescence-activated nuclei sorting, Marco and colleagues analyzed transcriptional changes, chromatin accessibility, and three-dimensional (3D) genome architecture over the lifespan of a 5-day-long fear memory [68]. Although memory formation led to an extensive chromatin reorganization characterized by an increase in enhancer accessibility, these newly opened regions surprisingly did not match changes in gene expression, indicating a potential priming mechanism. Then, during the consolidation period, the newly established epigenetic landscape was maintained and further stabilized by means of new promoter-enhancer interactions, which in turn led to a modest transcriptional activity, likely serving to facilitate memory expression at the time of recall (Figure 2a). When the memory ensemble was reactivated by memory retrieval 5 days post-encoding, primed engram cells finally underwent more robust transcriptional changes, resulting in the upregulation of genes involved in protein synthesis and synaptic morphogenesis [68]. Although the temporal stability of such an epi-transcriptional program has not been addressed beyond the 5-day experimental setup, it is tempting to speculate that it could also be maintained over longer periods of time. Indeed, when a similar study investigated, at single-cell resolution, the transcriptional signature of prefrontal cortex engram cells activated by the recall of a remote fear memory, profound alterations of gene expression signatures were found up to 14 days after encoding, in neurons as well as in astrocytes and microglia [66].
A further important confirmation that stimulus-induced epigenetic modifications in the brain persist over time comes from the transcriptomic and epigenomic profiling of hippocampal neurons activated by novel context exploration or kainic acid injection [67]. In this experimental model, neuronal activity triggered rapid changes in both gene expression and chromatin organization at the level of enhancer-promoter interactions and transcription factor (TF) binding site accessibility, but at later times only the epigenetic alterations remained [67].
Drawing parallels between epigenetic mechanisms for memory storage in the brain and in other systems, the sequences of molecular events occurring in neuronal cells activated by a fearful experience, a novel environment, or elevated neuronal activity are similar not only to one another, but also to the intracellular responses observed in immune cells upon differentiation or inflammation [69,70]. For example, treatment of murine epidermal stem cells with imiquimod (IMQ), a known inflammatory agent, induced fast transcriptional changes and increased chromatin accessibility at specific enhancers, which acquired H3K4me1 and H3K27ac marks [70]. Following IMQ withdrawal, transcriptional activity returned to baseline, while the open chromatin configuration remained, likely through the coordinated action of histone marks and homeostatic TFs (Figure 2b). Upon a further inflammatory challenge, a robust transcriptional response was promptly reinstated, leading to enhanced tissue inflammation [70], akin to a fear memory recall event triggering the epi-transcriptional program for long-term memory storage [66,68]. Thus, it appears that to retain a memory of inflammation, epidermal stem cells rely on an epigenetic toolbox similar to the one used by brain cells to remember a fearful experience (Figure 2). What dictates the specificity of the outputs in these cases remains unknown, but it likely lies in the interplay between the epigenetic landscape and the transcriptional state of each individual cell type.
Future strategies to disentangle the epigenetic basis of memory function
The application of epigenetic research tools to fear conditioning and other learning paradigms has significantly changed our interpretation of how the brain stores and retrieves memories of past experiences. While memory was traditionally thought to be an exclusive property of synapses, it is now clear that mnemonic processes also leave a significant imprint on chromatin. Here, we have showcased how modulations of the epigenetic landscape at the level of DNA methylation, histone modifications, and 3D chromatin structure parallel, and may indeed underlie, the cellular processes behind fear memory formation, persistence, and retrieval. To push the boundaries of neuroepigenetics research on memory even further, two major challenges must be met in the future: 1) achieving a composite and refined portrait of the epigenetic machinery for long-term memory; and 2) addressing causality by means of precise functional epigenetic manipulations.
Memory processes are highly dynamic, involving different cell types and multiple brain areas across an extensive timespan [71-74]. Nevertheless, most studies to date have only captured snapshots of the epigenetic mechanisms of memory, focusing on individual chromatin modifications and/or brain regions without accounting for cell-type diversity and temporal resolution. State-of-the-art technologies such as single-cell bisulfite sequencing (scBS-seq), single-cell Cleavage Under Targets and Tagmentation (scCUT&Tag), single-cell assay for transposase-accessible chromatin (scATAC-seq), and single-cell Hi-C (scHi-C) now make it possible to profile, respectively, DNA methylation, histone PTMs and TF occupancy, chromatin accessibility, and 3D chromatin organization in individual cells [75-78]. For some of these methods, combining transcriptomic and epigenomic profiling of the same cell in a single readout is already technically possible, and so-called spatial-omics approaches can even preserve a cell's positional information within the brain [79-82]. With these possibilities on the horizon, research in neuroepigenetics is poised to achieve hitherto unmatched levels of precision.

Figure 2 (caption, continued): ...ened, but chromatin is kept in a poised state with the retention of H3K4me1 and, in some cases, H3K27ac. Once the inflammation memory is later recalled by a second inflammatory event, primed chromatin sites become rapidly transcriptionally activated. Although specific histone PTMs and chromatin conformation changes were not assessed in a) and b), respectively, it is interesting to note that H3K4me1 and H3K27ac have been found at the boundaries of 3D chromatin loops in the adult brain, where they co-regulate the expression of genes important for spatial memory [87]. TF, transcription factor.
As the resolution of omics techniques continues to improve, the need also arises to infer causality between epigenetic states, transcriptional activity, and behavioral responses by means of functional validation. The explosion of epigenomic editing tools, in particular those based on CRISPR/dCas9, an enzymatically inactive (dead) variant of Cas9, opens up the possibility of testing the relevance of specific chromatin perturbations at the genomic site(s) of interest, both in vitro and in vivo [83]. Versions of the system already exist in which dCas9 has been fused to HATs or HDACs, to TET1 or DNMT3A, to writers and erasers of H3K4me and H3K9/K27me, and to mediators of chromatin looping or TF domains, and it is likely that dCas9 systems combining different epigenetic effectors at once will soon be engineered [84,85]. Moreover, a further level of precision can be achieved by controlling the activity of dCas9-based systems in both time and space, for example by using cell type-specific expression constructs or optogenetically and chemically inducible approaches [86]. Doing so will allow the interrogation of the mnemonic capacity of specific loci within specialized cell types at precise moments pre- and post-learning. At the same time, such experimental efforts will constitute precious resources for further exploring the therapeutic potential of epigenetic mechanisms as biomarkers and drug targets for memory disorders.
In conclusion, it is exciting to think outside the box of the adult central nervous system and notice that, in order to store memories of our past, the brain might co-opt molecular mechanisms, epigenetic in nature, similar to those that maintain cellular memory throughout development in other organs. To future research falls the challenge of exploring this captivating hypothesis further.
66*. Chen MB, Jiang X, Quake SR, Südhof TC: Persistent transcriptional programmes are associated with remote memory. Nature 2020, 587:437-442. Long-term contextual fear memory induces complex gene expression programs in engram cells of the medial prefrontal cortex. The activity-specific transcriptional alterations persist for weeks after the learning event and occur not only in neurons but also in astrocytes and microglia.
67*. Fernandez-Albert J, Lipinski M, Lopez-Cascales MT, Rowley MJ, Martin-Gonzalez AM, Del Blanco B, Corces VG, Barco A: Immediate and deferred epigenomic signatures of in vivo neuronal activation in mouse hippocampus. Nat Neurosci 2019, 22:1718-1730. Status epilepticus upon kainic acid stimulation triggers robust transcriptional and chromatin changes in hippocampal excitatory neurons. While transcriptional profiles return to baseline levels within 24 h, some chromatin changes and interactions, in particular those driven by AP-1, remain up to 48 h after neuronal activation.
68**. Marco A, Meharena HS, Dileep V, Raju RM, Davila-Velderrain J, Zhang AL, Adaikkan C, Young JZ, Gao F, Kellis M, et al.: Mapping the epigenomic and transcriptomic interplay during memory formation and recall in the hippocampal engram ensemble. Nat Neurosci 2020, 23:1606-1617. In engram cells of the hippocampus, memory encoding is characterized by increased chromatin accessibility at enhancers without co-occurring transcriptional changes. The newly established chromatin landscape is maintained during memory consolidation and results in the upregulation of specific synapse-related genes at the time of memory recall.
69. Ostuni R, Piccolo V, Barozzi I, Polletti S, Termanini A, Bonifacio S, Curina A, Prosperini E, Ghisletti S, Natoli G: Latent enhancers activated by stimulation in differentiated cells. Cell 2013, 152:157-171.
70*. Larsen SB, Cowley CJ, Sajjath SM, Barrows D, Yang Y, Carroll TS, Fuchs E: Establishment, maintenance, and recall of inflammatory memory. Cell Stem Cell 2021, 28:1758-1774.e8. Characterization of the changes in chromatin accessibility, histone modifications, and transcription factor binding occurring during an inflammation process, from the initial responses to its attenuation up to a second inflammatory event. | 2023-12-21T15:10:06.284Z | 2023-12-20T00:00:00.000 | {
"year": 2023,
"sha1": "aa364a2ba1f4f069398bc7a998d8e6424b50e170",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.conb.2023.102829",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "aa364a2ba1f4f069398bc7a998d8e6424b50e170",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7123957 | pes2o/s2orc | v3-fos-license | Localization for Schr\"odinger operators with Poisson random potential
We prove exponential and dynamical localization for the Schr\"odinger operator with a nonnegative Poisson random potential at the bottom of the spectrum in any dimension. We also conclude that the eigenvalues in that spectral region of localization have finite multiplicity. We prove similar localization results in a prescribed energy interval at the bottom of the spectrum provided the density of the Poisson process is large enough.
Introduction and main results
1.1. Background and motivation. Consider an electron moving in an amorphous medium with randomly placed identical impurities, each impurity creating a local potential. For a fixed configuration of the impurities, described by the countable set X ⊂ R^d giving their locations, this motion is described by the Schrödinger equation −i∂_t ψ_t = H_X ψ_t with the Hamiltonian
$$H_X := -\Delta + V_X \ \text{ on } L^2(\mathbb{R}^d), \tag{1.1}$$
where the potential is given by
$$V_X(x) := \sum_{\zeta \in X} u(x - \zeta), \tag{1.2}$$
with u(x − ζ) being the single-site potential created by the impurity placed at ζ.
Since the impurities are randomly distributed, the configuration X is a random countable subset of R^d, and hence it is modeled by a point process on R^d. Physical considerations usually dictate that the process is homogeneous and ergodic with respect to the translations by R^d, cf. the discussions in [LiGP, PF]. The canonical point process with the desired properties is the homogeneous Poisson point process on R^d. The Poisson Hamiltonian is the random Schrödinger operator H_X in (1.1) with X a Poisson process on R^d with density ̺ > 0. The potential V_X is then a Poisson random potential. Poisson Hamiltonians may be the most natural random Schrödinger operators in the continuum, as the distribution of impurities in various samples of material is naturally modeled by a Poisson process. A mathematical proof of the existence of localization in two or more dimensions has been a longstanding open problem (cf. the survey [LMW]). The Poisson Hamiltonian has long been known to have Lifshitz tails [DV,CL,PF,Klo3,Sz,KloP,St1], a strong indication of localization at the bottom of the spectrum. Up to now localization had been shown only in one dimension [Sto], where it holds at all energies, as expected.
In this article we prove localization for nonnegative Poisson Hamiltonians at the bottom of the spectrum in arbitrary dimension. We obtain both exponential (or Anderson) localization and dynamical localization, as well as finite multiplicity of eigenvalues. In a companion paper [GHK2] we modify our methods to obtain localization at low energies for Poisson Hamiltonians with attractive (nonpositive) single-site potentials.
In the multi-dimensional continuum, localization has been shown when the randomness is given by random variables with bounded densities. There is a wealth of results concerning localization for Anderson-type Hamiltonians, which are Z^d-ergodic random Schrödinger operators as in (1.1) but for which the locations of the impurities are fixed at the vertices of the lattice Z^d (i.e., X ≡ Z^d), and the single-site potentials are multiplied by random variables with bounded densities, e.g., [HM,CoH,Klo2,KiSS,Klo4,GK3,AENSS]. Localization was shown for a Z^d-ergodic random displacement model where the displacement probability distribution has a bounded density [Klo1]. In contrast, a lot less is known about R^d-ergodic random Schrödinger operators (random amorphous media). There are localization results for a class of Gaussian random potentials [FiLM, U, LMW]. Localization for Poisson models where the single-site potentials are multiplied by random variables with bounded densities has also been studied [MS, CoH]. What all these results have in common is the availability of random variables with densities, which can be exploited, in an averaging procedure, to produce an a priori Wegner estimate at all scales (e.g., [HM,CoH,Klo2,CoHM,Ki,FiLM,CoHN,CoHKN,CoHK]).
In contrast, for the most natural random Schrödinger operators on the continuum (cf. [LiGP,Subsection 1.1]), the Poisson Hamiltonian (simplest disordered amorphous medium) and the Bernoulli-Anderson Hamiltonian (simplest disordered substitutional alloy), until recently there were no localization results in two or more dimensions. The latter is an Anderson-type Hamiltonian where the coefficients of the single-site potentials are Bernoulli random variables. In both cases the random variables with bounded densities (or at least Hölder continuous distributions [CKM,St2]) are not available.
Localization for the Bernoulli-Anderson Hamiltonian has been recently proven by Bourgain and Kenig [BK]. In this remarkable paper the Wegner estimate is established by a multiscale analysis using "free sites" and a new quantitative version of unique continuation which gives a lower bound on eigenfunctions. Since their Wegner estimate has weak probability estimates and the underlying random variables are discrete, they also introduced a new method to prove Anderson localization from estimates on the finite-volume resolvents given by a single-energy multiscale analysis. The new method does not use spectral averaging as in [CoH, SW], which requires random variables with bounded densities. It is also not an energy-interval multiscale analysis as in [DrK, FrMSS, Kl], which requires better probability estimates.
The Bernoulli-Anderson Hamiltonian is the random Schrödinger operator H X in (1.1) with X a Bernoulli process on Z d (i.e., X = {j ∈ Z d ; ε j = 1} with {ε j } j∈Z d independent Bernoulli random variables). Since Poisson processes can be approximated by appropriately defined Bernoulli processes, one might expect to prove localization for Poisson Hamiltonians from the Bourgain-Kenig results using this approximation. This approach was indeed used by Klopp [Klo3] to study the density of states of Poisson Hamiltonians. But localization is a much subtler phenomenon, and such an approach turns out to be too naive.
There are very important differences between the Poisson Hamiltonian and the Bernoulli-Anderson Hamiltonian. While for the latter the impurities are placed on the fixed configuration Z d , for the former the configuration of the impurities is random, being given by a Poisson process on R d . Moreover, unlike the Bernoulli-Poisson Hamiltonian, the Poisson Hamiltonian is not monotonic with respect to the randomness. Another difference is that the probability space for the Bernoulli-Anderson Hamiltonian is defined by a countable number of independent discrete (Bernoulli) random variables, but the probability space of a Poisson process is not so simple, leading to measurability questions absent in the case of the Bernoulli-Anderson Hamiltonian. These differences are of particular importance in proving localization as Bourgain and Kenig required some detailed knowledge about the location of the impurities, as well as information on "free sites", and relied on conditional probabilities.
To prove localization for Poisson Hamiltonians, we develop a multiscale analysis that exploits the probabilistic properties of Poisson point processes to control the randomness of the configurations, and at the same time allows the use of the new ideas introduced by Bourgain and Kenig.
1.2. Main results. In this article the single-site potential u is a nonnegative, nonzero L^∞-function on R^d with compact support, with
$$u_-\,\chi_{\Lambda_{\delta_-}(0)} \le u \le u_+\,\chi_{\Lambda_{\delta_+}(0)} \quad \text{for some } 0 < \delta_- \le \delta_+ < \infty \text{ and } 0 < u_- \le u_+ < \infty, \tag{1.3}$$
where Λ_L(x) denotes the box of side L centered at x ∈ R^d. We need to introduce some notation. For a given set B, we denote by χ_B its characteristic function, by P_0(B) the collection of all countable subsets of B, and by #B its cardinality. Given X ∈ P_0(B) and A ⊂ B, we set X_A := X ∩ A and N_X(A) := #X_A. Given a Borel set A ⊂ R^d, we write |A| for its Lebesgue measure. We let $\Lambda_L(x) := x + \left[-\tfrac{L}{2}, \tfrac{L}{2}\right]^d$ be the box of side L centered at x ∈ R^d. By Λ we will always denote some box Λ_L(x), with Λ_L denoting a box of side L. We set χ_x := χ_{Λ_1(x)}, the characteristic function of the box of side 1 centered at x ∈ R^d. We write $\langle x \rangle := \sqrt{1 + |x|^2}$ and $T(x) := \langle x \rangle^{\nu}$ for some fixed ν > d/2. By C_{a,b,...}, K_{a,b,...}, etc., we will always denote finite constants depending only on a, b, . . ..
A Poisson process on a Borel set B ⊂ R^d with density (or intensity) ̺ > 0 is a map X from a probability space (Ω, P) to P_0(B), such that for each Borel set A ⊂ B with |A| < ∞ the random variable N_X(A) has Poisson distribution with mean ̺|A|, i.e.,
$$P\{N_X(A) = k\} = \frac{(\varrho |A|)^k}{k!}\, e^{-\varrho |A|}, \qquad k = 0, 1, 2, \dots, \tag{1.4}$$
and the random variables {N_X(A_j)}_{j=1}^n are independent for disjoint Borel subsets {A_j}_{j=1}^n (e.g., [K, R]). The Poisson Hamiltonian H_X is an R^d-ergodic family of random self-adjoint operators. It follows from standard results (cf. [KiM, PF]) that there exist fixed subsets of R such that the spectrum of H_X, as well as its pure point, absolutely continuous, and singular continuous components, are equal to these fixed sets with probability one. It follows from our assumptions on the single-site potential u that σ(H_X) = [0, +∞[ with probability one [KiM].
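The characterization (1.4) translates directly into a simulation recipe: on a bounded box, first draw the total number of points from a Poisson distribution with mean ̺|Λ|, then place that many points independently and uniformly in Λ. The following sketch (illustrative only; it is not part of the original paper, and the function name is ours) implements this in Python:

```python
import numpy as np

def sample_poisson_process(rho, L, d, rng=None):
    """Sample a homogeneous Poisson point process of density rho
    on the box Lambda_L(0) = [-L/2, L/2]^d.

    N_X(Lambda_L) has Poisson distribution with mean rho * L**d, and,
    conditioned on N_X, the points are i.i.d. uniform in the box.
    """
    rng = rng or np.random.default_rng()
    n = rng.poisson(rho * L ** d)                    # number of points
    return rng.uniform(-L / 2, L / 2, size=(n, d))   # their locations

# Example: density 0.5 on a box of side 10 in dimension d = 2,
# so the expected number of points is 0.5 * 10**2 = 50.
points = sample_poisson_process(rho=0.5, L=10.0, d=2)
print(points.shape)
```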
For Poisson random potentials the density ̺ is a measure of the amount of disorder in the medium. Our first result gives localization at fixed disorder at the bottom of the spectrum.
Theorem 1.1. Let H_X be a Poisson Hamiltonian on L²(R^d) with density ̺ > 0. Then there exist E_0 = E_0(̺) > 0 and m = m(̺) > 0 for which the following holds P-a.e.: The operator H_X has pure point spectrum in [0, E_0] with exponentially localized eigenfunctions with rate of decay m, i.e., if φ is an eigenfunction of H_X with eigenvalue E ∈ [0, E_0] we have
$$\|\chi_x \phi\| \le C_{X,\phi}\, e^{-m|x|} \quad \text{for all } x \in \mathbb{R}^d. \tag{1.5}$$
Moreover, there exist τ > 1 and s ∈ ]0, 1[ such that for all eigenfunctions ψ, φ (possibly equal) with the same eigenvalue E ∈ [0, E_0] we have
$$\|\chi_x \psi\|\, \|\chi_y \phi\| \le C_X\, \|T^{-1}\psi\|\, \|T^{-1}\phi\|\, e^{\langle y \rangle^{\tau}} e^{-|x-y|^{s}} \quad \text{for all } x, y \in \mathbb{Z}^d. \tag{1.6}$$
In particular, the eigenvalues of H_X in [0, E_0] have finite multiplicity, and H_X exhibits dynamical localization in [0, E_0], that is, for any p > 0 we have
$$\sup_t \left\| \langle x \rangle^{p}\, e^{-itH_X}\, \chi_{[0,E_0]}(H_X)\, \chi_0 \right\|_2^2 < \infty. \tag{1.7}$$
The next theorem gives localization at high disorder in a fixed interval at the bottom of the spectrum.
Theorems 1.1 and 1.2 are proved by a multiscale analysis as in [B, BK], where the Wegner estimate, which gives control on the finite volume resolvent, is obtained by induction on the scale. In contrast, the usual proof of localization by a multiscale analysis [FrS,FrMSS,Sp,DrK,CoH,FK,GK1,Kl] uses an a priori Wegner estimate valid at all scales. Exponential localization will then follow from this new single-energy multiscale analysis as in [BK, Section 7]. The decay of eigenfunction correlations exhibited in (1.6) follows from a detailed analysis of [BK, Section 7] given in [GK5], using ideas from [GK4]. Dynamical localization and finite multiplicity of eigenvalues follow from (1.6). That (1.6) implies dynamical localization is rather immediate. The finite multiplicity of the eigenvalues follows by estimating $\|\chi_x \chi_{\{E\}}(H_X)\|_2^2\, \|\chi_y \chi_{\{E\}}(H_X)\|_2^2$ from (1.6) and summing over x ∈ Z^d. Bourgain and Kenig's methods [BK] were developed for the Bernoulli-Anderson Hamiltonian. Let $\varepsilon_{\mathbb{Z}^d} = \{\varepsilon_\zeta\}_{\zeta \in \mathbb{Z}^d}$ denote independent identically distributed Bernoulli random variables, ε_ζ = 0 or 1 with equal probability. The Bernoulli-Anderson random potential is $V(x) = \sum_{\zeta \in \mathbb{Z}^d} \varepsilon_\zeta\, u(x - \zeta)$, and the Hamiltonian has the form (1.1). To see the connection with the Poisson Hamiltonian, let us introduce the Bernoulli-Poisson Hamiltonian. We consider a configuration Y ∈ P_0(R^d), and let $\varepsilon_Y = \{\varepsilon_\zeta\}_{\zeta \in Y}$ be the corresponding collection of independent identically distributed Bernoulli random variables. We define the Bernoulli-Poisson Hamiltonian by $H_{(Y,\varepsilon_Y)} := -\Delta + \sum_{\zeta \in Y} \varepsilon_\zeta\, u(x - \zeta)$. In this notation, the Bernoulli-Anderson Hamiltonian is $H_{(\mathbb{Z}^d, \varepsilon_{\mathbb{Z}^d})}$. If Y is a Poisson process on R^d with density 2̺, then X = {ζ ∈ Y; ε_ζ = 1} is a Poisson process on R^d with density ̺, and it follows that $H_X = H_{(Y,\varepsilon_Y)}$. Thus the Poisson Hamiltonian H_X can be rewritten as the Bernoulli-Poisson Hamiltonian $H_{(Y,\varepsilon_Y)}$.
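The identification $H_X = H_{(Y,\varepsilon_Y)}$ rests on the thinning property of Poisson processes: independently keeping each point of a density-2̺ process with probability 1/2 yields a density-̺ Poisson process. A quick numerical sanity check of this property (a sketch with hypothetical parameters, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
rho, L, d, trials = 1.0, 5.0, 2, 20000  # hypothetical parameters

counts = np.empty(trials)
for i in range(trials):
    # Y: number of points of a density-2*rho Poisson process in the box
    n_Y = rng.poisson(2 * rho * L ** d)
    # independent Bernoulli marks eps_zeta (0 or 1 with equal probability)
    eps = rng.integers(0, 2, size=n_Y)
    # X = {zeta in Y : eps_zeta = 1}, so N_X is the number of kept points
    counts[i] = eps.sum()

# For a Poisson variable the mean equals the variance; both should be
# close to rho * L**d = 25 here.
print(counts.mean(), counts.var())
```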
For the Bernoulli-Anderson Hamiltonian the impurities are placed on the fixed configuration Z^d, whereas for the Bernoulli-Poisson Hamiltonian the configuration of the impurities is random, being given by a Poisson process on R^d. Moreover, the probability space for the Bernoulli-Anderson Hamiltonian is quite simple, being defined by a countable number of independent discrete (Bernoulli) random variables, but the more complicated probability space of a Poisson process leads to measurability questions absent in the case of the Bernoulli-Anderson Hamiltonian. We incorporate the control of the randomness of the configuration into the multiscale analysis, ensuring detailed knowledge about the location of the impurities, as well as information on "free sites".
In order to control and keep track of the random locations of the impurities, and also to handle the measurability questions that appear for the Poisson process, we perform a finite volume reduction at each scale as part of the multiscale analysis, which estimates the probabilities of good boxes. We exploit properties of Poisson processes to construct, inside a box Λ_L, a scale-dependent class of Λ_L-acceptable configurations of high probability for the Poisson process Y (Definition 3.4 and Lemma 3.5). We introduce an equivalence relation for Λ_L-acceptable configurations and, showing that we can move an impurity a little bit without spoiling the goodness of boxes (Lemma 3.3), we conclude that goodness of boxes is a property of equivalence classes of acceptable configurations (Lemma 3.6). Basic configurations and events in a given box are introduced in terms of these equivalence classes of acceptable configurations, and the multiscale analysis is performed for basic events. Thus we have a new step in the multiscale analysis: basic configurations and events in a given box have to be rewritten in terms of basic configurations and events in a bigger box (Lemma 3.13). The Wegner estimate at scale L is proved in Lemma 5.10 using [BK, Lemma 5.1′]. Theorems 1.1 and 1.2 were announced in [GHK1]. Random Schrödinger operators with an attractive Poisson random potential, i.e., H_X = −∆ − V_X with V_X a Poisson random potential as in this paper, so that σ(H_X) = R with probability one, are studied in [GHK2], where we modify the methods of this paper to prove localization at low energies.
This paper is organized as follows. In Section 2 we describe the construction of a Poisson process X from a marked Poisson process (Y, ε Y ), and review some useful deviation estimates for Poisson random variables. Section 3 is devoted to finite volume considerations and the control of Poisson configurations: We introduce finite volume operators, perform the finite volume reduction, study the effect of changing scales, and introduce localizing events. In Section 4 we prove a priori finite volume estimates that give the starting hypothesis for the multiscale analysis. Section 5 contains the multiscale analysis for Poisson Hamiltonians. Finally, the proofs of Theorems 1.1 and 1.2 are completed in Section 6.
Preliminaries
2.1. Marked Poisson process. We may assume that a Poisson process X on R^d with density ̺ is constructed from a marked Poisson process as follows: Consider a Poisson process Y on R^d with density 2̺, and to each ζ ∈ Y associate a Bernoulli random variable ε_ζ, either 0 or 1 with equal probability, with ε_Y = {ε_ζ}_{ζ∈Y} independent random variables. Then (Y, ε_Y) is a Poisson process with density 2̺ on the product space R^d × {0, 1}, the marked Poisson process; its underlying probability space will still be denoted by (Ω, P). (We use the notation (Y, ε_Y) for the marked process.) The maps X, X′ : Ω → P_0(R^d), given by
$$X := \{\zeta \in Y;\ \varepsilon_\zeta = 1\} \quad \text{and} \quad X' := \{\zeta \in Y;\ \varepsilon_\zeta = 0\}, \tag{2.1}$$
are then Poisson processes on R^d, each with density ̺. If X is a Poisson process on R^d with density ̺, then X_A is a Poisson process on A with density ̺ for each Borel set A ⊂ R^d, with {X_{A_j}}_{j=1}^n being independent Poisson processes for disjoint Borel subsets {A_j}_{j=1}^n. Similar considerations apply to X′ and to the marked Poisson process (Y, ε_Y).
2.2. Poisson random variables. For a Poisson random variable N with mean µ we have (2.3) (e.g., [K, Eq. (1.12)]), and hence also (2.4). From (2.4) we get useful upper and lower bounds. When k > eµ > 1, we can use a lower bound from Stirling's formula [Ro] to get (2.7). In particular, if eµ > 1 and a > e², we get the large deviation estimate (2.9) for k = 1, 2, . . ..
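For orientation, standard Poisson deviation bounds of the kind invoked here can be written as follows (a generic reconstruction; the constants and numbering need not match the paper's (2.4)-(2.9)):

```latex
% For N a Poisson random variable with mean \mu:
\[
  \mathbb{P}\{N = k\} = \frac{\mu^k}{k!}\,e^{-\mu},
  \qquad
  \mathbb{P}\{N \ge k\} \le e^{-\mu}\,\frac{(e\mu)^k}{k^k}
  \quad\text{for } k > \mu .
\]
% Chernoff-type consequences:
\[
  \mathbb{P}\{N \le \tfrac{\mu}{2}\} \le e^{-\mu/8},
  \qquad
  \mathbb{P}\{N \ge a\mu\} \le e^{-a\mu}
  \quad\text{for } a > e^2 .
\]
```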
Finite volume and Poisson configurations
From now on H X will always denote a Poisson Hamiltonian on L 2 (R d ) with density ̺ > 0, as in (1.1)-(1.3). We recall that (Ω, P) is the underlying probability space on which the Poisson processes X and X ′ , with density ̺, and Y, with density 2̺, are defined, as well as the Bernoulli random variables ε Y , and we have (2.2). All events will be defined with respect to this probability space. We will use the notation ⊔ for disjoint unions: We also write H (Y,tY ) := H ∅,(Y,tY ) and 3.1. Finite volume operators. Finite volume operators are defined as follows: Given a box Λ = Λ L (x) in R d and a configuration X ∈ P 0 (R d ), we set where ∆ Λ is the Laplacian on Λ with Dirichlet boundary condition, and where ∇ Λ is the gradient with Dirichlet boundary condition. We sometimes identify L 2 (Λ) with χ Λ L 2 (R d ) and, when necessary, will use subscripts Λ and R d to distinguish between the norms and inner products of L 2 (Λ) and L 2 (R d ). Note that in general we do not have which suffices for the multiscale analysis. The multiscale analysis estimates probabilities of desired properties of finite volume resolvents at energies E ∈ R. (By L p± we mean L p±δ for some small δ > 0, fixed independently of the scale.) Definition 3.1. Consider an energy E ∈ R, a rate of decay m > 0, and a config- and Note that [BK,Lemma 2.14] requires condition (3.9) as stated above for its proof.
But goodness of boxes does not suffice for the induction step in the multiscale analysis given in [B, BK], which also needs an adequate supply of free sites to obtain a Wegner estimate at each scale. Given two disjoint configurations X, Y ∈ P 0 (R d ) and t Y = {t ζ } ζ∈Y ∈ [0, 1] Y , we recall (3.1) and define the corresponding finite volume operators H X,(Y,tY ),Λ as in (3.4) and (3.5) using X Λ , Y Λ and t YΛ , i.e., with R X,(Y,tY ),Λ (z) being the corresponding finite volume resolvent.
Definition 3.2. Consider an energy E ∈ R, a rate of decay m > 0, and two configurations X, Y ∈ P_0(R^d). A box Λ_L is said to be (X, Y, E, m)-good if X ∩ Y = ∅ and we have (3.8) and (3.9) with R_{X,(Y,t_Y),Λ_L}(E) for all t_Y ∈ [0, 1]^Y. In this case Y consists of (X, E)-free sites for the box Λ_L. (In particular, the box Λ_L is then (X, E, m)-good.)
3.2. Finite volume reduction of Poisson configurations. The multiscale analysis will require some detailed knowledge about the location of the impurities, that is, about the Poisson process configuration, as well as information on "free sites". To obtain this, and also to handle the measurability questions that appear for the Poisson process, we will perform a finite volume reduction as part of the multiscale analysis. The key is that we can move a Poisson point a little bit without spoiling the goodness of boxes, using the following lemma (Lemma 3.3). For its proof, pick φ ∈ C_c^∞(Λ) such that 0 ≤ φ ≤ 1 and φ ≡ 1 on some open subset of Λ which contains the supports of u and u′. It follows from the resolvent identity that (3.17) holds. To prove (3.12), we may assume that R′_Λ ≥ 1, since otherwise the result is trivial. The estimate (3.12) now follows immediately from (3.17) and (3.11). Using the resolvent identity, (3.17), (3.12), and $\tfrac{1}{2}e^{1/2} < 1$, we get (3.13).
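The resolvent identity invoked in this proof is the standard second resolvent identity; written out for two finite-volume operators differing only in their potentials (reproduced here for the reader's convenience; the norm bound is an immediate consequence):

```latex
% Second resolvent identity for H_\Lambda = -\Delta_\Lambda + V and
% H'_\Lambda = -\Delta_\Lambda + V' at an energy E in both resolvent sets:
\[
  R'_\Lambda(E) - R_\Lambda(E)
    = R_\Lambda(E)\,\bigl(V - V'\bigr)\,R'_\Lambda(E),
\]
% hence the perturbation bound
\[
  \|R'_\Lambda(E)\| \le \|R_\Lambda(E)\|
    \bigl(1 + \|V - V'\|_\infty \,\|R'_\Lambda(E)\|\bigr).
\]
```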
Part (ii) follows from part (i) as follows. Let β ≥ 2 and suppose (3.14) does not hold, i.e., $R'_\Lambda < e^{-\sqrt{\eta}}\beta$. Since $e^{-\sqrt{\eta}}\beta \ge 2e^{-1/2} > 1$, we may apply (3.12) to get a contradiction.
Lemma 3.3 lets us move one Poisson point a little bit, namely by η, and maintain good bounds on the resolvent. Since we will want to preserve the "goodness" of the box Λ = Λ_L, we will use Lemma 3.3 with $\gamma = e^{L^{1-}}$ (as in (3.8)), and take η ≪ e^{-L}.
To fix ideas we set $\eta = e^{-L^{10^6 d}}$. To move all Poisson points in Λ_L we will need to control the number of Poisson points in the box. Moreover, we will have to know the location of these Poisson points with good precision. That this can be done at very little cost in probability is the subject of the next lemma.
(3.24)
and consider the event defined in (3.24) (recall that Y is the Poisson process with density 2̺); its probability is controlled in view of (2.3) and Q_Λ. We require condition (3.21) for acceptable configurations to avoid ambiguities when changing scales (cf. Lemma 3.13), but we will then need Lemma 3.6 for acceptable′ configurations.
We now impose a condition on ̺ and L, stated in (3.26), that will always be satisfied when we do the multiscale analysis. From now on we assume (3.26).
Lemma 3.5 tells us that inside the box Λ, outside an event of negligible probability in the multiscale analysis, we only need to consider Λ-acceptable configurations of the Poisson process Y.
Given a box Λ = Λ_L(x), we define an equivalence relation for configurations; this induces an equivalence relation in both Q^{(0′)}_Λ and Q^{(0)}_Λ. The following lemma is an immediate consequence of Lemma 3.3(i); it tells us that "goodness" of boxes is a property of equivalence classes of acceptable′ configurations: changing configurations inside an equivalence class takes good boxes into just-as-good (jgood) boxes.
The remaining statement is immediate.
Remark 3.7. Proceeding as in Lemma 3.6, we find that changing configurations inside an equivalence class takes jgood boxes into what we may call just-as-just-as-good (jjgood) boxes, and so on. Since we will only carry out this procedure a bounded number of times, with the bound independent of the scale, we will simply call them all jgood boxes.
Similarly, we get the following consequence of Lemma 3.3(ii).
(3.38)
where we always implicitly assume B ⊔ B′ ⊔ S ∈ J_Λ. We also set C_Λ to be of the corresponding form, under the same implicit assumption. In other words, the Λ-bevent is specified by the data (B, B′, S). The number of possible bconfsets and bevents in a given box is always finite. In view of Lemma 3.6, we make the following definition: those (Λ, E, m)-good bevents and bconfsets that are also Λ-dense will be called (Λ, E, m)-adapted.
Changing scales. Since the finite volume reduction is scale dependent, it introduces new considerations in the multiscale analysis for Poisson Hamiltonians.
Given Λ ℓ ⊂ Λ, the multiscale analysis will require us to redraw Λ ℓ -bevents and bconfsets in terms of (Λ, Λ ℓ )-bevents and bconfsets as follows.
Definition 3.15. Fix p > 0. Given an energy E ∈ R and a rate of decay m > 0, a scale L is (E, m)-localizing if for some box Λ = Λ_L (and hence for all) we have a (Λ, E, m)-localized event Ω_Λ such that (3.57) holds. In Section 6 we will also need "just localizing" events and scales.
(3.58)
A scale L is (E, m)-jlocalizing if for some box Λ = Λ_L (and hence for all) we have a (Λ, E, m)-jlocalized event Ω_Λ, as in (3.58), such that the probability estimate (3.59) holds. An (E, m)-localizing scale L is (E, m)-jlocalizing in view of (3.56).
"A priori" finite volume estimates
Given an energy E, to start the multiscale analysis we will need, as in [B, BK], an a priori estimate on the probability that a box Λ L is good with an adequate supply of free sites, for some sufficiently large scale L. The multiscale analysis will then show that such a probabilistic estimate also holds at all large scales.
Fixed disorder.
Proposition 4.1. Let H_X be a Poisson Hamiltonian on L²(R^d) with density ̺ > 0, and fix p > 0. Then there exist a constant C_u > 0 and a scale L_0 = L_0(d, u, ̺, p) < ∞ such that for all scales L ≥ L_0 we have (3.26) together with the estimates stated in (4.1)-(4.2). The proof will be based on the following lemma.
Lemma 4.2. Let H_X be a Hamiltonian as in (1.1)-(1.3). Given δ_0 > 0 and L > δ_0 + δ_+, let Λ = Λ_L(x); then, with the definitions and hypotheses of (4.3), we have (4.4) and (4.6).
Proof. Given configurations X and Y such that X ∩ Y = ∅ and X satisfies (4.3), we pick ζ_j ∈ X_{Λ_{δ_0}(j)} for each j ∈ J_e, and set X_1 := {ζ_j ; j ∈ J_e} and X_2 := (X \ X_1) ⊔ Y. We claim that for all t_{X_2} we have (4.7), where C_u > 0. Although the first inequality is obvious, the second is not, since V_{X_1} need not admit a strictly positive lower bound on Λ. To overcome this lack of a strictly positive bound from below for V_{X_1} on Λ, we use the averaging procedure introduced in [BK]. Requiring δ_0 > δ_−, we have (4.9) by the definition of X_1 plus the lower bound in (1.3), and hence (4.12). It follows that there is δ_u ≥ δ_− such that for δ_0 > δ_u the required lower bound holds, and hence we get (4.7), which implies (4.4).
The multiscale analysis with a Wegner estimate
We can now state our version of [BK, Proposition A ′ ] for Poisson Hamiltonians.
The proof will require several lemmas and definitions.
Remark 5.5. The rate of decay m in (3.9), which by hypothesis equals m_0 as in (5.2) for all scales L ∈ [L_0^{ρ_1 ρ_2}, L_0^{ρ_1}], will vary along the multiscale analysis, i.e., the construction gives a rate of decay m_L at scale L. The control of this variation can be done as usual, as commented in [BK] (but we need a condition like (5.2)), so we always have m_L ≥ m_0/2; see, e.g., [DrK,FK,GK1,Kl]. We will ignore this variation, as in [BK], and simply write m for m_L. We will omit m from the notation in the rest of this section. The exponent 1− in (3.8) does not vary.
We now define an event that incorporates [BK, property (*)].
We first estimate $P\{\Omega^{(1)}_\Lambda\}$. This can be seen as follows. First, from (5.13) and (5.14) we obtain (5.23), and hence (5.24) for large L, using also (5.1). On the other hand, letting K_1 = C′(K′ − 1), it follows from (5.3) and (5.1) that the complementary event has at most K_1 (not necessarily disjoint) boxes Λ_{ℓ_1}(r) ∈ R with ω ∉ Ω^{Λ_{ℓ_1}(r)}_Λ; here C′ is chosen to make this so. The estimate (5.21) follows from (5.23) and (5.24). Moreover, it follows from (5.3) and (5.14) that each Ω^{(1)}_Λ(R′) is a disjoint union of (non-empty) events of the form (5.25). It remains to show that D_{R′} can be written as a disjoint union of (Λ, E)-prepared bevents. To do so, define S_{R′} as in [BK]; see (5.26). Since (5.10) yields the required bound and #(R \ R′) ≤ K_1, it follows as in [BK, Eq. (6.18)] that S_{R′} satisfies the density condition (5.18) in Λ. It follows from (3.48) and (5.26) that we can rewrite the event D_{R′} in (5.25) as a disjoint union of events, where {C_{Λ,A_j,A′_j,S_{R′}}}_{j∈J} are (Λ, E)-prepared bevents. We can now prove a Wegner estimate at scale L using [BK, Lemma 5.1′].
Lemma 5.10. Let C_{Λ,B,B′,S} be a (Λ, E)-prepared bevent, and consider a box Λ_{L_0} ⊂ Λ with L_0 = (2nα+1)ℓ_1 for some n ∈ N, ℓ_1 ≪ L_0 ≤ L, such that Λ_{L_0} is constructed as in (5.12) from a standard ℓ_1-covering of Λ. Then, for sufficiently large L there exist disjoint subsets {S_i}_{i=1,2,...,I} of S_0 := S ∩ Λ_0 such that (5.29) holds, and we have the conditional probability estimate (5.30), where the constants C_1, C_2 do not depend on the scale L. In particular, we get (5.31).
Proof. Let C_{Λ,B,B′,S} be a (Λ, E)-prepared cylinder event, consider Λ_{L_0} ⊂ Λ as above, and define the random operator H(ε_{S_0}) as in (5.32), where ε_{S_0} = {ε_s}_{s∈S_0} are independent Bernoulli random variables, with P_{ε_{S_0}} denoting the corresponding probability measure. All the hypotheses of [BK, Lemma 5.1′] are satisfied by the random operator H(ε_{S_0}) in the box Λ_{L_0}. In particular, it follows from the density condition (5.18) that S_0 is a collection of "free sites" satisfying the condition in [BK, Eq. (5.29)] inside the box Λ_{L_0}. (The fact that we have a configuration B_0 ∪ B′_0 ∪ S_0 ⊂ J_Λ instead of a subconfiguration of Z^d is not important; only the density condition [BK, Eq. (5.29)] and the fact that C_{Λ,B_0,B′_0,S_0} is (Λ_{L_0}, E)-prepared matter, and the specific location of the single-site potentials plays no role in the analysis.) Thus it follows from [BK, Lemma 5.1′] that, for L large, the desired conditional estimate holds, where the constants C_1, C_2 do not depend on the scale L. In other words, there is a subset Q ⊂ {0, 1}^{S_0} such that (5.34) holds for all ε_{S_0} ∈ Q. We now conclude from (5.34), recalling the definitions of ℓ_1 and ℓ_2, that there exist disjoint Λ-bevents {C_{Λ,B⊔S_i,B′⊔(S_0\S_i),S\S_0}}_{i=1,2,...,I}, with each S_i ⊂ S_0, such that we have (5.29) and (5.30).
Since the event Ω^{(1)}_Λ in (5.19) is a disjoint union of such (Λ, E)-prepared bevents, we have, using also Lemma 3.3 as in the derivation of (3.31) (and changing C_1 slightly), the estimate (5.35), and hence, using the probability estimate in (5.19), the desired (5.31) follows using (5.1).
We are now ready to finish the proof of Proposition 5.1.
6. The proofs of Theorems 1.1 and 1.2
In view of Propositions 4.1, 4.3, and 5.1, Theorems 1.1 and 1.2 are a consequence of the following proposition, whose hypothesis follows from the conclusion of Proposition 5.1. We recall Definition 3.16.
Proposition 6.1. Fix p = (3/8)d− and an energy E_0 > 0, and suppose there are a scale L_0 and m > 0 such that L is (E, m)-jlocalizing for all L ≥ L_0 and E ∈ [0, E_0]. Then the following holds P-a.e.: The operator H_X has pure point spectrum in [0, E_0] with exponentially localized eigenfunctions (exponential localization) with rate of decay m/2, i.e., if φ is an eigenfunction of H_X with eigenvalue E ∈ [0, E_0] we have
$$\|\chi_x \phi\| \le C_{X,\phi}\, e^{-\frac{m}{2}|x|} \quad \text{for all } x \in \mathbb{R}^d. \tag{6.1}$$
Moreover, there exist τ > 1 and s ∈ ]0, 1[ such that for eigenfunctions ψ, φ (possibly equal) with the same eigenvalue E ∈ [0, E_0] we have
$$\|\chi_x \psi\|\, \|\chi_y \phi\| \le C_X\, \|T^{-1}\psi\|\, \|T^{-1}\phi\|\, e^{\langle y \rangle^{\tau}} e^{-|x-y|^{s}} \quad \text{for all } x, y \in \mathbb{Z}^d. \tag{6.2}$$
In particular, the eigenvalues of H_X in [0, E_0] have finite multiplicity, and H_X exhibits dynamical localization in [0, E_0], that is, for any p > 0 we have
$$\sup_t \left\| \langle x \rangle^{p}\, e^{-itH_X}\, \chi_{[0,E_0]}(H_X)\, \chi_0 \right\|_2^2 < \infty. \tag{6.3}$$
Proof. The fact that the hypothesis of Proposition 6.1 implies exponential localization in the interval [0, E_0] is proved in [BK, Section 7]. Although their proof is written for the Bernoulli-Anderson Hamiltonian, it also applies to the Poisson Hamiltonian by proceeding as in the proof of Proposition 5.1. When [BK, Section 7] states that a box Λ is good at energy E, we should interpret this as the occurrence of the (Λ, E, m)-jlocalized event Ω_Λ as in (3.58), with probability satisfying the estimate (3.59), whose existence is guaranteed by the hypothesis of Proposition 6.1. We should rewrite such an event as in Lemma 5.2 when necessary, with p′ = (3/8)d− < p. With these modifications, plus the use of Lemmas 3.6 and 3.8 when necessary, the analysis of [BK, Section 7] yields exponential localization for Poisson Hamiltonians.
The decay of eigenfunction correlations given in (6.2) follows for the Bernoulli-Anderson Hamiltonian from a careful analysis of [BK,Section 7] given in [GK5], and hence it also holds for the Poisson Hamiltonian by the same considerations as above. Finite multiplicity and dynamical localization then follow as in [GK5]. | 2014-10-01T00:00:00.000Z | 2006-03-12T00:00:00.000 | {
"year": 2006,
"sha1": "8d5dccf83e15f977273a6998b5666472247464ab",
"oa_license": null,
"oa_url": "http://www.ems-ph.org/journals/show_pdf.php?iss=3&issn=1435-9855&rank=7&vol=9",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "4705ffc4dbf31e9a36bdb1532bcf0f243531e6b6",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
54176104 | pes2o/s2orc | v3-fos-license | Mathematical Method for Predicting Nickel Deposit Based on Data from Drilling Points
In this article we discuss several methods for predicting the nickel ore content of the soil under a given area/region. This prediction is the main objective of the exploration activity, which is very important, from an economic point of view, for conducting the exploitation activity. The prediction methods are based on data obtained from drilling at several "points". These data yield information on the nickel density at those points. The nickel density over the region is approximated (with an approximate function) by applying interpolation and/or extrapolation based on the data from those points. The nickel content is predicted by integrating the approximate function over the given region.
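The procedure described in the abstract (interpolate the nickel density from the drilling data, then integrate the approximation over the region) can be sketched as follows. The drill coordinates, density values, units, and grid resolution below are hypothetical, and the paper does not prescribe a specific interpolation routine; this is only one plausible realization of the idea:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.integrate import trapezoid

# Hypothetical drilling data: (x, y) locations in metres and the nickel
# density (tonnes per square metre) measured at each drilling point.
drill_xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
density = np.array([1.2, 0.8, 1.5, 0.9, 1.1])

# Approximate the density over the region on a regular grid by
# piecewise-linear interpolation between the drilling points.
xs = np.linspace(0.0, 100.0, 101)
ys = np.linspace(0.0, 100.0, 101)
X, Y = np.meshgrid(xs, ys)
Z = griddata(drill_xy, density, (X, Y), method="linear")

# Outside the convex hull of the drill points, fall back to the nearest
# measured value (a crude form of extrapolation).
Z_near = griddata(drill_xy, density, (X, Y), method="nearest")
Z = np.where(np.isnan(Z), Z_near, Z)

# Predicted deposit = double integral of the density over the region,
# evaluated with the trapezoidal rule in each direction.
deposit = trapezoid(trapezoid(Z, xs, axis=1), ys)
print(f"Predicted nickel deposit: {deposit:.1f} tonnes")
```

Other interpolants (e.g., radial basis functions or kriging) could be substituted for griddata here without changing the overall scheme.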
Introduction
This article was motivated by a site visit of the Mathematics Department, Universitas Haluoleo, to PT Aneka Tambang (Persero) Tbk in Pomalaa, Kolaka, Southeast Sulawesi. We refer to the mining company as PT Antam. The main purpose of the visits, which are conducted to some industries on a regular basis every year, is to improve the quality as well as the quantity of applied mathematics research in industrial fields. This activity also promotes the application of mathematics in a practical way to improving industrial processes, including the efficiency, effectiveness and accuracy of a process. This article is one example of the application of mathematics in the nickel (Ni) mining industry. Another motivation for this study is the fact that only a few studies have discussed nickel ore mining in Indonesia; two references that the authors were able to trace are by Guiry and Dalvi [5] and by van Leeuwen [8].
As in other mining processes, nickel ore mining activities also involve exploration, exploitation, and the processing of mining products in factories. Exploration activities include geological mapping in the form of structural materials, bedding planes, fractures and faults, whereas drilling is used to determine the mineral deposit and the ore depth.
From the exploration activities, one can also determine the types as well as the forms of the minerals reserved and the amount of deposit underneath the area. In some cases, the nickel deposit relates to the distribution of gold in the area (Sibbick and Fletcher [16]).
Exploitation is essentially taking out (mining) mineral ore from the earth. The exploitation activity is carried out only if the results of study, including risk analysis, show that it is, in particular, economically feasible; for instance, see Guj [6]. The exploitation activities (illustrated in Figure 1) include: clearing out the forest; stripping the top soil and overburden (top left). Top soil is a layer of soil containing nutrients; overburden is the soil just under the top soil and does not contain nutrients. In the mining process, the top soil and overburden are collected in one place and restored again after the mining is completed. This is intended to minimize the environmental destruction due to the mining. Next comes excavating the minerals (middle); minerals are taken out/mined/removed from the site in situ by using excavators. Finally, the ore materials are transported to the landfill (stockyard) (bottom).
The nickel ore processing at the plant includes several stages: (1) Nickel ore from the mining sites is collected in the stockyard. Semi-continuously, the nickel ore undergoes a drying process through a sieve (SOM). This process uses a rotary dryer at a temperature of 600 °C to reduce the nickel ore moisture from 30% to 21%. (2) The dried nickel ore is stored in ore bins with a capacity of 120 tons. Besides those for nickel ore, there are other bins used to store raw materials such as anthracite/coal and limestone. (3) Together with other supporting materials, the nickel ore is continuously fed into the rotary kiln. The material coming out of the rotary kiln is called calcine ore. (4) The calcine ore is then fed into the furnaces (smelters). Ferronickel is produced in the furnace; the production process requires a very large amount of electrical energy (each furnace requires approximately 17 MW of electrical power). (5) Intermittently, liquid ferronickel is removed from the furnace. It is then purified in the refinery by flowing oxygen (O2) into the liquid metal. The liquid metal is then ready for production. The ferronickel produced can be either high- or low-carbon; in terms of shape, it can be in the form of bars (ingots) or granules (shot).
Before the exploitation stage proceeds, the exploration should guarantee feasibility, at least from an economic perspective, i.e., that the mining is profitable. This, of course, requires knowledge of the nickel deposit in the areas where the mining process will be carried out. Unfortunately, this amount cannot be measured directly but must be calculated based on existing data, including the nickel densities obtained from drilling at several points.
The units of all variables in this study follow the International System of Units (SI). The rest of this article is organized as follows. The methods section describes the mathematical model of the nickel deposit in a particular mining area/region (in mathematical terms, a "bounded and finite domain"). The model itself is in the form of a double integral of the nickel 'density' function, which is defined at all points of the mining domain. However, the main issue is the fact that the nickel density function is not known. Therefore, this function is approximated based on the nickel density data obtained from drilling at several sites. The developed method of estimation is presented in this section. The results and discussion section deals with numerical simulation and the error analysis of the proposed approach. Finally, we end with a discussion of the conclusions and directions for future research.
Mathematical Model
There are many steps required in the exploration in order to obtain information about the amount of nickel reserves underneath the earth. One of these is the use of soil samples in geochemical exploration techniques, as carried out, for example, by Worthington et al. [18], Miller et al. [13], Li et al. [9], Brand [2], and Kebede [7], or by combining soil samples and some data from plants in biogeochemical techniques, as practiced by McInnes et al. [12] in Papua New Guinea.
The nickel identified in plants, on the one hand, indicates only that nickel is reserved in the soil where they grow. The nickel found in soil samples, however, indicates its density in the soil. If the sample is obtained from a drilling point (as commonly practiced by mining companies), then it represents the density of the nickel deposit at the drilling point. In this study, we develop a mathematical model based on the nickel density data obtained from the results of drilling at several points.
It is reasonable to assume that the mining area is very small compared to the whole surface of the earth. Therefore, this area can be considered as a flat field. Suppose $\Omega \subset \mathbb{R}^2$ represents the mining area of concern, where $\mathbb{R}$ is the set of real numbers and $\mathbb{R}^2$ is the Cartesian product of two sets of real numbers. For more about the mathematical concept of a closed and finite set as well as its corollaries, one can consult the theory of calculus or an introduction to real analysis; see, for example, Bartle and Sherbert [1]. The unit for the area is the square metre (m²). Practically, it is possible to determine the boundaries of $\Omega$ based on the fact that mining is not allowed outside the region, for example, because it is not profitable or because of conservation or settlement areas.
Suppose $f$ is a nonnegative function defined on the region, and $f(x, y)$ represents the nickel density at the point $(x, y)$. The function has unit kg·m⁻². Thus, the nickel deposit in the area $\Omega$ is given by
$$m = \iint_\Omega f(x, y)\, dA. \tag{1}$$
Note that the mass $m$ in (1) has unit kg. The value of the integral in (1) is very important, as it determines whether or not the exploitation is feasible. Although the integral in (1) looks simple, in practice it is not straightforward to evaluate. Before presenting why and how the problem can be handled, we first give the following remark.
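For concreteness, a minimal numerical sketch of Eq. (1) is given below. It assumes a made-up density function over the 225 m by 100 m rectangle used later in the simulation section; the function `density` and its parameters are purely illustrative, not field data.

```python
import numpy as np
from scipy.integrate import dblquad

# Hypothetical nickel density f(x, y) in kg/m^2 -- purely illustrative.
# In practice the true density in Eq. (1) is unknown.
def density(x, y):
    return 5.0 + 2.0 * np.sin(x / 40.0) * np.cos(y / 30.0)

# Rectangular region Omega: 0 <= x <= 225 m, 0 <= y <= 100 m.
# dblquad integrates func(y, x) with y as the inner variable.
mass, abs_err = dblquad(lambda y, x: density(x, y), 0.0, 225.0,
                        lambda x: 0.0, lambda x: 100.0)
print(f"Nickel deposit m = {mass:.1f} kg (quadrature error ~ {abs_err:.1e})")
```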
Remark 1
It is assumed that the depth of drilling is ignored. Thus, we only consider the surface area. The nickel density at a certain point is understood as the density which can be 'taken' by drilling beneath that point down to the feasible depth. Therefore, the SI unit for the density is given as kg·m⁻².
The reason why the calculation of integral (1) is not straightforward can be explained as follows. In general, the value of the function is not known, as we have no knowledge of the nickel density underneath the earth at an arbitrary point on the surface. Therefore, the nickel deposit must be predicted on the basis of drilling data at several sites, and the function is approximated from the available data. Suppose there are $N$ drilling locations/points, say $(x_i, y_i)$ for $i = 1, 2, \ldots, N$, and the nickel density at the $i$th point is denoted as $f_i$. Then, we have
$$f(x_i, y_i) = f_i \quad \text{for } i = 1, 2, \ldots, N. \tag{2}$$
This is illustrated in Figure 2.
The main concern in this study is: how do we predict (1) based on the data (2)? The following remark is based on the field study.
Remark 2
At PT Antam's Pomalaa mine, Kolaka, Southeast Sulawesi, the distance between two adjacent drilling points is 25 m. This drilling pattern is considered a detailed exploration.
Methods for Prediction
There are many methods for predicting (1) based on (2), including statistical models such as the Bayesian weighted models and logistic regression of Porwal et al. [15]. Mamuse et al. [10] have applied regression models to predict the density of nickel deposits. In this section, we discuss some proposed techniques and analyze them from a mathematical perspective. Unlike the models proposed by Porwal et al. [15] and Mamuse et al. [10], which are statistical, the method we develop in this article is deterministic.
The Method used at PT Aneka Tambang (Persero) Tbk
In this section, we briefly discuss the method used at PT Antam to predict the nickel deposit. The distance between exploration drilling sites used at PT Antam, see Remark 2, is 25 m, as illustrated in Figure 3. Table 1 shows an example of some collected data from PT Antam. The prediction method assumes that each drilling point lies on the diagonal intersection of a square with side 25 m, as illustrated in Figure 4.
Note that we do not show the complete data, as they might contain information important to the company, and the discussion is limited to the basic approach. It is assumed that the nickel density is uniformly distributed within each square. Suppose that there are $N$ drilling points, and that the density at the $i$th drilling point is $f_i$ for $i = 1, 2, \ldots, N$. Thus, the nickel deposit in the square around the $i$th drilling point is $f_i d^2$, with $d = 25$ m. So, the total nickel deposit in the exploration area is given by
$$m \approx \sum_{i=1}^{N} f_i\, d^2. \tag{3}$$
Note that the total area considered in this case is just $N d^2$. (A numerical sketch of this computation is given below, after the averaging method is introduced.)
Method of Averaging
The simplest way to approximate the integral in (1) is by applying the average method. This method assumes that the nickel density on $\Omega$ is equal to the average of the nickel densities over all drilling points, as given by
$$\bar{f} = \frac{1}{N} \sum_{i=1}^{N} f_i. \tag{4}$$
Therefore, the value of the integral in (1) is approximated by
$$m \approx \bar{f} \iint_\Omega dA. \tag{5}$$
Strictly speaking, (5) states that the nickel deposit is $\bar{f}$ times the total area of $\Omega$.
The relation between the average method (5) and the method used by PT Antam is as follows. By using (4), Equation (3) can be written as
$$m \approx N d^2\, \bar{f}. \tag{6}$$
As the total area is taken to be $N d^2$, equation (6) is equivalent to (5). Therefore, the method used by PT Antam is basically just the simple average method, as the sketch below illustrates.
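The following sketch demonstrates the equivalence of (3), (5), and (6) numerically. The densities are randomly generated placeholders, not PT Antam data.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 25.0                              # drilling spacing in metres (Remark 2)
f = rng.uniform(3.0, 8.0, size=50)    # hypothetical densities f_i in kg/m^2

# PT Antam / square-cell method, Eq. (3): each point represents a d x d square.
m_cells = np.sum(f * d**2)

# Average method, Eq. (5): mean density times total area N*d^2.
area = f.size * d**2
m_avg = f.mean() * area

print(m_cells, m_avg)                 # identical, illustrating equivalence (6)
assert np.isclose(m_cells, m_avg)
```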
Let us consider $\Omega$ as a closed and bounded set, and let $f$ be a continuous function defined on $\Omega$. The mean value theorem for multivariable integrals then guarantees that there exists a point $(x_0, y_0) \in \Omega$ such that
$$\iint_\Omega f(x, y)\, dA = f(x_0, y_0) \iint_\Omega dA. \tag{7}$$
For more discussion of multivariable integrals and the mean value theorem, one can refer to some textbooks; see, for example, functions of several variables (Fleming [4]), advanced calculus (Taylor and Mann [17]), and vector calculus (Marsden and Tromba [11] or Corwin and Szczarba [3]).
Therefore, the average method would accurately predict the nickel content if the average $\bar{f}$ of the nickel densities over all drilling sites were equal to $f(x_0, y_0)$. In theory, it would be enough to drill at that single point and the prediction would be accurate. Although the existence of such a point in $\Omega$ is guaranteed, finding the point itself is, however, a different issue. In practice, it is almost impossible to determine that point.
Piecewise Linear Method
The piecewise linear method is more advanced than the average one. It can be explained as follows. Suppose we consider a rectangle-shaped area whose length and width accommodate a regular grid of drilling points, with the distance between two adjacent drilling points $d = 25$ m; see Figure 4. So, there are $m \times n$ drilling points in the area, say $(x_i, y_j)$ for $i = 1, \ldots, m$ and $j = 1, \ldots, n$. To approximate the density function using a piecewise linear method, we proceed as follows. We consider an isosceles right triangle (as a partition element of the region) with vertices $(x_i, y_j)$, $(x_{i+1}, y_j)$, and $(x_i, y_{j+1})$. The nickel densities at these vertices are known from the drilling data, say $f_{i,j}$, $f_{i+1,j}$, and $f_{i,j+1}$, respectively. Then, the nickel density at a point in the triangle-shaped domain is approximated linearly based on these three values. Thus, the nickel deposit in this domain is the volume of a solid (a truncated prism) with the triangle as base and, on top, the plane passing through the three density values, as illustrated in Figure 4. The volume of this solid is given by
$$V_{i,j}^{1} = \frac{d^2}{2} \cdot \frac{f_{i,j} + f_{i+1,j} + f_{i,j+1}}{3}. \tag{8}$$
One can see that Equation (8) is just the product of the area of the right isosceles triangle ($d^2/2$) and the average of the densities at the three drilling points.
This is illustrated in Figure 6. In general, the nickel deposit over the lower triangle of the $(i,j)$th grid square is given by
$$V_{i,j}^{1} = \frac{d^2}{6} \left( f_{i,j} + f_{i+1,j} + f_{i,j+1} \right), \tag{9}$$
and the nickel deposit over the upper triangle is given by
$$V_{i,j}^{2} = \frac{d^2}{6} \left( f_{i+1,j} + f_{i,j+1} + f_{i+1,j+1} \right). \tag{10}$$
Therefore, the total nickel deposit reserved in the rectangular area is
$$m \approx \sum_{i=1}^{m-1} \sum_{j=1}^{n-1} \left( V_{i,j}^{1} + V_{i,j}^{2} \right). \tag{11}$$
Figure 6. The partition of $\Omega$ into triangular areas to approximate the nickel deposit using the piecewise linear approach.
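A compact sketch of the piecewise linear estimator (8)-(11) on a regular grid of drilling points follows; the grid of densities is hypothetical.

```python
import numpy as np

def piecewise_linear_deposit(f, d=25.0):
    """Estimate the deposit from a grid of densities f[i, j] (kg/m^2)
    sampled every d metres, following Eqs. (8)-(11): each grid square is
    split into two right triangles, and the density over each triangle is
    the average of the densities at its three vertices."""
    tri_area = d * d / 2.0
    lower = (f[:-1, :-1] + f[1:, :-1] + f[:-1, 1:]) / 3.0   # Eq. (9)
    upper = (f[1:, :-1] + f[:-1, 1:] + f[1:, 1:]) / 3.0     # Eq. (10)
    return tri_area * (lower.sum() + upper.sum())           # Eq. (11)

# Hypothetical 10 x 5 grid of drilling densities (not field data).
rng = np.random.default_rng(1)
f_grid = rng.uniform(3.0, 8.0, size=(10, 5))
print(f"Piecewise linear estimate: {piecewise_linear_deposit(f_grid):.1f} kg")
```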
Simulation
Suppose we have data on the nickel content at several drilling points, as given in Table 2. We also list the index of each value and the point coordinates in the table. Note that the data are for simulation purposes. To predict the nickel content under this area, in general we may apply any of the methods. Applying the piecewise linear method, we exploit equation (11) to obtain a prediction of the nickel content, which is … kg.
Note that the computation is very simple; even the case of 50 points may be done manually using a standard calculator. The way equation (11) is obtained is, however, more important than the computation itself. On the other hand, the error of this prediction is our concern.
Error Analysis
In this part we discuss the error analysis of the method proposed in the previous section. One should note that prediction error cannot be avoided in exploration activities. However, an exploration without any prediction can cause huge losses.
The error of the method used by PT Aneka Tambang (Persero) Tbk is just the error of the average method. This error can be calculated with the formula
$$E_{\text{avg}} = \left| \iint_\Omega f(x, y)\, dA - \bar{f} \iint_\Omega dA \right|. \tag{12}$$
Using the mean value theorem for multivariable integrals, the magnitude of this error is obtained as
$$E_{\text{avg}} = \left| f(x_0, y_0) - \bar{f} \right| \iint_\Omega dA,$$
with $f(x_0, y_0)$ the nickel density at the point $(x_0, y_0)$ in the mining area. This error is equal to zero if the average density of the sample data equals the nickel density at the point $(x_0, y_0)$.
Meanwhile, the error of the general piecewise linear method (11) is obtained as the absolute value of the difference between (1) and (11), that is,
$$E_{\text{pl}} = \left| \iint_\Omega f(x, y)\, dA - \sum_{i,j} \left( V_{i,j}^{1} + V_{i,j}^{2} \right) \right|. \tag{13}$$
In this case, one can calculate the integral (1) on each triangle of the partition of $\Omega$; thus (13) becomes a sum of triangle-wise differences (14). Based on the mean value theorem of integrals, there are points on the lower and upper triangles, say $(\tilde{x}^{1}, \tilde{y}^{1})$ and $(\tilde{x}^{2}, \tilde{y}^{2})$, respectively, at which the density attains its mean value over each triangle, giving (15) and (16) for $i = 1, \ldots, m-1$ and $j = 1, \ldots, n-1$. Consequently, the inequality (14), by using (9)-(10) and (15)-(16), can be written as the error margin (17) of the proposed method, i.e., the piecewise linear method.
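To make the error comparison concrete, the sketch below evaluates both estimators against a known toy density, for which the "true" deposit can be approximated on a fine grid. This illustrates the error definitions above; it is not an analysis of field data.

```python
import numpy as np

d = 25.0
x = np.arange(0.0, 225.0 + d, d)      # 10 columns of drilling points
y = np.arange(0.0, 100.0 + d, d)      # 5 rows of drilling points
X, Y = np.meshgrid(x, y, indexing="ij")
f = 5.0 + 2.0 * np.sin(X / 40.0) * np.cos(Y / 30.0)   # toy density samples

# "Truth": fine-grid evaluation of the toy density over the rectangle.
xf = np.linspace(0.0, 225.0, 2251)
yf = np.linspace(0.0, 100.0, 1001)
XF, YF = np.meshgrid(xf, yf, indexing="ij")
truth = (5.0 + 2.0 * np.sin(XF / 40.0) * np.cos(YF / 30.0)).mean() * 22500.0

m_avg = f.mean() * 22500.0                                   # Eq. (5)
tri = d * d / 2.0
m_pl = tri * (((f[:-1, :-1] + f[1:, :-1] + f[:-1, 1:]) / 3.0).sum()
              + ((f[1:, :-1] + f[:-1, 1:] + f[1:, 1:]) / 3.0).sum())  # Eq. (11)

print(f"average-method error:   {abs(truth - m_avg):.1f} kg")
print(f"piecewise-linear error: {abs(truth - m_pl):.1f} kg")
```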
Conclusion
We have briefly discussed the exploration and exploitation processes in nickel mining. From an economic perspective, it is very important to have an accurate approximation of the nickel deposit prior to the exploitation process. The main objective of the exploration activities is to predict the nickel deposit, and the prediction is based on data from some drilling points. PT Aneka Tambang (Persero) Tbk has applied the average double integral in predicting the nickel deposit in its mining area. In mathematical theory, if the nickel density function is continuous and bounded at every point of the domain, then there must exist a point at which the density equals the average nickel density over the area.
In this study, we have proposed an alternative mathematical method for predicting the nickel deposit in a certain region by using a piecewise linear approach. This method assumes that on each triangle whose vertices are three adjacent drilling points, the nickel density function is linear. We have also shown that an integral approach with the piecewise linear approximation performs better than one with the average approach. Future research will focus on the implementation of this method for predicting nickel deposits in a certain region based on available field data, and it will be at the stage of industrial research.
Figure 2. The abstraction of the nickel mining region (the enclosed curve) and the locations of exploitation drilling (points).
Figure 3. Illustration of drilling points on a rectangle-shaped mining area.
Figure 4. The drilling point as the diagonal intersection of a square with sides of 25 m.
Note that we do not consider real industrial data, for the sake of company privacy. Here we consider an area of 22,500 m² in the form of a rectangle, 225 m by 100 m.
Table 1. Some collected data from several drilling sites.
Table 2. Simulation data representing nickel content from several drilling sites. | 2018-11-30T12:31:32.498Z | 2012-01-24T00:00:00.000 | {
"year": 2012,
"sha1": "06a81cd44abfb4fe346b1fc616d9bfa5c44dc789",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.9744/jti.13.2.73-80",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "06a81cd44abfb4fe346b1fc616d9bfa5c44dc789",
"s2fieldsofstudy": [
"Mathematics",
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Engineering"
]
} |
13109163 | pes2o/s2orc | v3-fos-license | Enhancement of aging rat laryngeal muscles with endogenous growth factor treatment
Abstract Clinical evidence suggests that laryngeal muscle dysfunction is associated with human aging. Studies in animal models have reported morphological changes consistent with denervation in laryngeal muscles with age. Life-long laryngeal muscle activity relies on cytoskeletal integrity and nerve–muscle communication at the neuromuscular junction (NMJ). It is thought that neurotrophins enhance neuromuscular transmission by increasing neurotransmitter release. We hypothesized that treatment with neurotrophin 4 (NTF4) would modify the morphology and functional innervation of aging rat laryngeal muscles. Fifty-six Fischer 344 x Brown Norway rats (6- and 30-mo age groups) were used to determine whether NTF4, given systemically (n = 32) or directly (n = 24), would improve the morphology and functional innervation of aging rat thyroarytenoid muscles. Results demonstrate the ability of rat laryngeal muscles to remodel in response to neurotrophin application. Changes were demonstrated in fiber size, glycolytic capacity, mitochondria, tyrosine kinase receptors (Trk), NMJ content, and denervation in aging rat thyroarytenoid muscles. This study suggests that growth factors may have therapeutic potential to ameliorate aging-related laryngeal muscle dysfunction.
Introduction
Neurotrophins are involved in muscle innervation and the differentiation of neuromuscular junctions (NMJs). Clinical studies suggest that age-related laryngeal motor dysfunction may be of neuronal origin (Baker et al. 1998). In mammals, the neurotrophin family of genes includes NTF4, neurotrophin 3 (NTF3), neurotrophin 6 (NT6), nerve growth factor (NGF), and brain-derived neurotrophic factor (BDNF) (Lai and Ip 2003; Huang and Reichardt 2012). BDNF, NTF3, and NTF4 are expressed in skeletal muscle (Gonzalez et al. 1999), and can modulate synaptic efficiency via tyrosine kinase receptors (Trk). TrkB is located at both pre- and postsynaptic NMJs (Gonzalez et al. 1999). This localization presents an anatomical medium for the effects of NTF4 on neuromuscular morphology and transmission. NTF4 decreases in rat soleus muscle when neuromuscular transmission is blocked by botulinum toxin (Funakoshi et al. 1995). Inhibition of neuromuscular transmission using botulinum toxin also decreases growth factor expression (Chevrel et al. 2006). NTF4 decreases after sciatic nerve transection (Griesbeck et al. 1995). Increases in neurotrophins have been shown with exercise and with contractions induced by sciatic nerve stimulation (Chevrel et al. 2006).
Exogenous NTF4 treatment has been shown to improve neuromuscular transmission in adult rat diaphragm muscle (Mantilla et al. 2004). This improvement is blocked by inhibition of TrkB phosphorylation (Zhan et al. 2003). Neurotrophins may enhance neuromuscular transmission by increasing neurotransmitter release (Park and Poo 2013). NTF4 also increases Nav and voltage-gated calcium channel (VGCC) conductance, leading to an increase in MAPK activation and CREB phosphorylation (Lesser and Lo 1995; Lesser et al. 1997). The long-term effect of neurotrophins is thought to come from retrograde transport of the Trk complex and their regulation of acetylcholine receptors (AChRs). In sciatic nerve damage, NTF4 treatment reverses the loss of mass in recovering fast muscle fibers (Simon et al. 2004). Other growth factors have successfully been used in humans: hepatocyte growth factor (HGF), transforming growth factor beta (TGFB), and fibroblast growth factor (FGF) have recently been used for the treatment of vocal fold scarring (Hirano et al. 2008, 2012; Branski et al. 2009; Kishimoto et al. 2009). Collectively, these studies point to the promising effects of NTF4 as an injectable therapeutic for the treatment of intrinsic laryngeal muscle (ILM) denervation and dysfunction.
Laryngeal muscles contract rapidly and consistently, and are susceptible to the deleterious effects of aging. This activity is thought to contribute to the voice problems and dysphagia observed in persons over 65 years of age (Trupe et al. 1984; Ward et al. 1989; Gay et al. 1994; Hagen et al. 1996; Broniatowski et al. 1999; Lundy et al. 1999; Schindler and Kelly 2002). Mechanisms contributing to aging-related dysfunction include remodeling of the laryngeal mucosa, muscle fiber loss, atrophy, and changes in muscle fiber regeneration (Gambino et al. 1990). ILM atrophy leads to bowing of the vocal folds and inhibits glottic closure (Sinard 1998; Baker et al. 2001). Studies in animal models have reported morphological changes consistent with denervation in ILMs, including distal axonal degeneration, smaller endplates, increased variability in endplate architecture, and decreased axon terminals and ACh vesicular clusters (Gambino et al. 1990; Périé et al. 1997; Suzuki et al. 2001; Johnson et al. 2013). We hypothesized that treatment with NTF4 would improve muscle morphology and reduce age-related denervation in the rat thyroarytenoid muscle.
Animals
Thirty-two male Fischer 344-Brown Norway F1 rats were used for the systemic studies. There were eight 6-mo controls receiving saline osmotic pumps (7-day or 14-day) and eight 30-mo controls (7-day or 14-day). The NTF4 groups consisted of eight NTF4-treated 6-mo-old rats (7-day or 14-day) and eight treated 30-mo-old rats (7-day or 14-day). Twenty-four rats were used for direct injection: four rats of each age were used as controls, and eight rats in each age group (6- and 30-mo) received direct injections. The age groups selected represent two points on the life span curve of this strain (Turturro et al. 1999). Animals were kept in microisolator cages prior to implantation surgery and given Harlan Teklad food and water ad libitum. Prior to tissue collection, rats were anesthetized with ketamine hydrochloride and xylazine hydrochloride (100 mg/8 mg per kg body weight, intraperitoneal injection) and killed by exsanguination following a medial thoracotomy. This study was approved by the University of Kentucky Institutional Animal Care and Use Committee (IACUC).
Osmotic pump implantation
Ophthalmic ointment was applied and the dorsal aspect of the neck was shaved and scrubbed with iodine. Body temperature was maintained with heating pads. Aseptically prepared ALZET® osmotic pumps were filled with either NTF4 or saline and implanted subcutaneously on the dorsum of the rat, posterior to the scapulae, using sterile instruments. The pump was inserted into a subcutaneous pocket, with the delivery portal positioned to minimize the interaction between NTF4 and incision healing. A total of 200 ng of NTF4 in 50 µL saline was delivered over either 7 or 14 days, based on previously administered dosages of NTF4 and other neurotrophic factors (Simon et al. 2004; Hirano et al. 2008, 2012; Branski et al. 2009; Kishimoto et al. 2009). Meloxicam was administered as a preanesthetic medication and postoperatively for pain relief. Animals were observed in their home cages until they recovered from anesthesia.
Neurotrophin injection procedure
Direct injections of NTF4 were applied to the rats' left vocal fold on day one; the rats were then killed on day seven (Welham et al. 2008). Control rats received a 50 µL direct injection of saline. Rats were sedated with 1-2 mg/kg acepromazine and placed in an induction chamber at 5% isoflurane and 1000 mL/min O2 until the toe pinch reflex was lost. The rat was reclined in a supine position on a platform, and a speculum was used to maintain oral patency during the procedure. A 50 mm, 30 gauge, 100-µL syringe (Hamilton) was coupled to a 1.9 mm, 30-degree endoscope (Storz, Tuttlingen, Germany). This allowed for visualization of the vocal folds and guidance of the syringe. The NTF4 rats received 200 ng in 50 µL saline. This dosage was tolerated with no side effects. The dosage and the length of time prior to killing were based on our systemic data and on doses determined by other researchers (Simon et al. 2004; Hirano et al. 2008, 2012; Branski et al. 2009; Kishimoto et al. 2009).
Histology and immunohistochemistry
Larynges were dissected, placed in cryoprotectant, embedded in OCT compound, and frozen in 2-methylbutane in liquid nitrogen. Cross sections of 10 µm thickness were used to examine the thyroarytenoid muscles. Sections were collected serially; the middle sections on each slide corresponded to the mid-section of each muscle.
For overall morphology and mitochondrial content, sections were stained with hematoxylin and eosin or with modified Gomori's trichrome (Engel and Cunningham 1963; Sheehan and Hrapchack 1980; McMullen and Andrade 2006). Glycogen content was determined with the Schiff method (Sheehan and Hrapchack 1980). After staining, slides were dehydrated in an ethanol series, cleared with xylene, and mounted in Permount. NIH ImageJ software was used to measure the mean fiber area (Rasband 1997-2012; McMullen and Andrade 2006). Cross-sectional fiber area was measured at 40x magnification on 3585 fibers. Central nuclei were reported as a ratio of central to total nuclei. Glycogen content was reported as the percentage of glycogen-positive fibers out of a total of 3936 fibers counted.
To determine denervation, laryngeal sections were fixed with 4% paraformaldehyde, blocked with goat serum, and incubated overnight at 4°C with anti-Nav1.5 (Sigma, St. Louis, MO), followed by an Alexa Fluor conjugated secondary antibody (Invitrogen, Carlsbad, CA). Heart tissue was used as a positive control. Thresholding was based on the labeling intensity of the positive controls. Muscle fibers with >50% labeling were considered denervated (Kulakowski et al. 2011). The ratio of denervated to non-denervated fibers was calculated from three muscle sections per animal, for a total of 3630 thyroarytenoid fibers. NMJs were stained with FITC-labeled α-bungarotoxin (Invitrogen, Carlsbad, CA) and phalloidin, a marker of muscle actin (Invitrogen, Carlsbad, CA), to denote fibers (McMullen and Andrade 2009). We analyzed 59 sections/age from thyroarytenoid muscles.
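The >50% labeling criterion can be expressed compactly in code. The sketch below is a hypothetical re-implementation of the counting rule, not the ImageJ workflow the authors actually used; the function and variable names are our own.

```python
import numpy as np

def denervated_ratio(label_img, fiber_masks, frac=0.5):
    """Fraction of fibers whose thresholded Nav1.5-labeled area exceeds
    `frac` of the fiber area (the >50% criterion described above).
    `label_img` is a boolean array of thresholded Nav1.5 staining;
    `fiber_masks` is a list of boolean masks, one per muscle fiber.
    Illustrative only."""
    denervated = 0
    for mask in fiber_masks:
        labeled = np.logical_and(label_img, mask).sum()
        if labeled > frac * mask.sum():
            denervated += 1
    return denervated / len(fiber_masks)
```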
To determine TrkB intensity and NMJ quantity, sections were prepared in the same way as for the denervation staining, except using antibodies against TrkB (Santa Cruz Biotechnology, Inc., Dallas, TX), and mounted with SlowFade® containing DAPI (Invitrogen, Carlsbad, CA). Brain tissue was used as a positive control; primary antibody was not added to negative controls. The intensity of TrkB staining was measured with NIH ImageJ software (Rasband 1997-2012; Collins 2007; Kulakowski et al. 2011). Images were taken at the same light intensity. The background was removed from the images and RGB values were measured. Results shown are integrated intensity values for the green color channel (TrkB). We analyzed 58 sections/age.
Sections were imaged with a Nikon E6 microscope equipped with a Digital Sight DS-U3 camera and Elements software (v 2.0). A total of 20% of histological images were randomly selected and examined by two blinded raters for interrater reliability. Interrater reliability was assessed using the intraclass correlation coefficient (ICC; two-way, mixed model, single measure). Results demonstrated a high degree of agreement between raters (ICC = 0.934 NMJ counts, 0.943 glycogen counts, 0.913 Nav1.5, 0.965 fiber size, and 0.990 central nuclei counts).
Data analysis
Results are presented as means and standard error of the mean (SEM). Statistical significance was determined by separate 2 x 2 between-subjects analyses of variance on the dependent variables of thyroarytenoid fiber size, glycogen content, central nuclei, Nav1.5, TrkB intensity, and NMJ quantity. The independent variables in the systemic group were time and age. The independent variables in the direct injection group were thyroarytenoid muscle (left injected side vs. right noninjected side) and age (6-mo vs. 30-mo). Significant results were followed up with the Holm-Sidak method. Several measures from each animal were averaged for each dependent variable. Animal weights before and after treatment were compared by within-subjects analysis of variance. The significance level for rejection of the null hypothesis was set at P ≤ 0.05.
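As a rough illustration of the analysis design, the sketch below sets up a 2 x 2 between-subjects ANOVA in Python with statsmodels. The data frame layout, column names, and values are invented for illustration; the study's actual measurements are not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical layout: factors `age` (6-mo vs. 30-mo) and `time`
# (7-day vs. 14-day), dependent variable fiber area. Values are made up.
df = pd.DataFrame({
    "fiber_area": [812, 790, 955, 1010, 640, 655, 870, 905],
    "age":  ["6mo", "6mo", "6mo", "6mo", "30mo", "30mo", "30mo", "30mo"],
    "time": ["7d", "7d", "14d", "14d", "7d", "7d", "14d", "14d"],
})
model = ols("fiber_area ~ C(age) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction
```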
Systemic effects
Systemic NTF4 application produced significant differences in muscle fiber area (Fig. 1) (see Table 1 for systemic statistics). In the 7-day systemic NTF4 group, mean muscle fiber area significantly increased at 30-mo compared to controls. In the 14-day systemic NTF4 group, thyroarytenoid muscle fiber area showed no change at 30-mo compared to controls [F(7, 28) = 4.480, P < 0.001]. There was no effect of treatment on body weight (P > 0.05).
Direct injection effects
(See Table 2 for direct injection statistics.) The effects of direct injection of NTF4 into the thyroarytenoid muscles were determined 7 days post-treatment. With direct NTF4 injection, muscle fiber area at 30-mo was significantly decreased compared to controls [F(7, 24) = 5.672, P < 0.001].
Systemic effects
In the 7-day systemic NTF4 group, the thyroarytenoid mean percentage of central nuclei increased in the 30-mo group compared to aged controls, although the effect did not reach significance. In the 14-day systemic NTF4 group, regeneration effects were not observed in the 30-mo animals compared to controls. There was a significant main effect of treatment between the 7- and 14-day treatment groups as a whole [F(7, 24) = 11.22, P < 0.001].
Direct injection effects
With direct NTF4 injection, the mean percentage of central nuclei decreased significantly in the 6-mo injected group compared to controls. Central nuclei displayed an increasing trend in the 30-mo group compared to controls, although this effect did not reach significance. There was a main effect of treatment [F(7, 15) = 1.76, P < 0.001].
Direct injection effects
There was a trend toward an increase in glycogen-positive fibers in both treated groups, although the effects did not reach significance [F(7, 25) = 0.789, P = 0.603].
Systemic effects
In the 7- and 14-day systemic NTF4 groups, the percentage of thyroarytenoid denervated fibers, as measured by Nav1.5, significantly decreased in the 30-mo group compared to aged controls [F(7, 17) = 25.08, P < 0.001]. At 6-mo, there was a significant increase in NMJ quantity between the control and treated animals at 14 days. In the 7- and 14-day systemic NTF4 groups, thyroarytenoid mean NMJ quantity did not change significantly at 30-mo, but there was a main effect of treatment [F(7, 28) = 7.33, P < 0.001]. This may be due to a change in muscle fiber area (Fig. 3).
Direct injection effects
With direct NTF4 injection, the percentage of thyroarytenoid denervated fibers significantly decreased in the 30-mo group compared to aged controls [F(7, 15) = 14.71, P < 0.001]. There was a significant increase in NMJ quantity in the 30-mo injected animals. There was also a main effect of treatment [F(7, 28) = 6.79, P < 0.001].
Mitochondria content
Ragged red fibers were not found in 6-mo controls or in any NTF4-treated thyroarytenoid muscles, suggesting stable aerobic capacity with systemic and direct treatment (Fig. 4).
TrkB intensity
In the systemic group, there was a decrease in TrkB intensity with treatment in the 7-day 30-mo NTF4 group. Conversely, in the 14-day group, there was an increase in TrkB intensity with treatment at 30-mo compared to control [F(7, 28) = 31.26, P < 0.001] (Fig. 5).
Discussion
These data provide the first evidence for the effectiveness of neurotrophins in inducing ILM remodeling responses. We hypothesized that aging ILMs would change following treatment with NTF4. Changes found in oxidative, metabolic, and glycolytic capacity are consistent with fast-contracting and fatigue-resistant fiber types. A decrease in mean fiber area in the muscle after direct neurotrophin treatment is similar to changes seen in other skeletal muscles after endurance training.
Figure 2. Changes in aerobic capacity and innervation with NTF4 treatment. Representative glycogen (top) and Nav1.5 (bottom) stained sections from NTF4 14-day treated (right) and untreated (left) thyroarytenoid muscles at 30-mo of age. An increase in glycogen-positive muscle fibers indicates a change in respiratory capacity; darker pink fibers are considered glycogen positive. Reduction of denervation with age as measured by Nav1.5 labeling (green), with phalloidin to denote fibers (red). Left middle panels are representative Nav1.5 (green) and phalloidin (red) stained sections from untreated thyroarytenoid muscles; the right middle panels are treated muscles. The green insert is Nav1.5 staining alone. Notice the decrease in fibers stained for Nav1.5 after treatment with NTF4. The bottom left micrograph is a positive control consisting of heart muscle from a 6-month control animal (scale bar = 25 µm; P < 0.001).
The study of ILMs is important for developing effective interventions or preventative measures in the aging human. Previous studies have documented alterations in aging ILMs, including decreases in fiber size, mass, total number of fibers present, regenerative capacity, and NMJ size/quantity, as well as changes in myosin heavy chain isoforms and contractile function (Hagen et al. 1996; Andrade 2006, 2009; Kulakowski et al. 2011; Nishida et al. 2013). The purpose of this study was to determine if application of neurotrophins could ameliorate the muscle deterioration associated with age in the ILM.
Our results demonstrated an age-associated increase in fiber size in the thyroarytenoid muscle after 7 days of systemic NTF4 administration. Previous data show that aging rat muscle fibers change in the opposite direction compared to the muscles that were treated systemically with NTF4 (McMullen and Andrade 2006). With direct injection of NTF4, fiber size decreased at 30-mo of age, similar to what we have previously observed after laryngeal nerve stimulation (McMullen et al. 2011). The time and dosage administration results suggest the need for further investigation to help explain fiber size change differences observed between systemic and direct application of NTF4.
With application of neurotrophins in our study, there is a qualitative decrease in the appearance of fibrosis (Fig. 1). Fibrosis is a diminution of muscle quality due to an increase of fat and other noncontractile materials (Serrano et al. 2011). The aging process intensifies the fibrotic phenotype. In normal fibers, the nuclei are located in the periphery. A central nucleus within a fiber represents a nonspecific marker of muscle damage, such as fibrosis, and/or regeneration. Central nuclei are frequently seen in dystrophic muscle and during development (Banker and Engel 2004). With direct NTF4 treatment, the decrease or lack of change in central nuclei suggests a decrease in regenerative capacity. However, the nonsignificant effect for central nucleation with systemic NTF4 across age groups in the thyroarytenoid may indicate different regeneration capacities with treatment. The diminished regenerative capacity of the thyroarytenoid could be related to a lack of autophagy or impaired satellite cell function (McLoon and Wirtschafter 2003).
Figure 4. Evidence of ragged red fibers in aging thyroarytenoid muscle and changes with NTF4 treatment. Representative Gomori's trichrome images for overall mitochondrial content from control 30-mo (left) and 14-day NTF4-treated muscles (right). There appear to be more mitochondrial clusters (denoted by black arrows) and ragged red fibers (denoted by red arrows) in untreated aging muscle (scale bar = 25 µm).
Alternatively, other researchers have shown that the thyroarytenoid regenerates consistently throughout the lifespan to compensate for fiber loss related to disease or injury, although this capacity appears to decrease with advancing age (Lee et al. 2012). A static regenerative capacity in the thyroarytenoid muscle may be detrimental to key functions of the larynx, including respiration, swallowing, voice, and airway protection in the elderly. Another possibility is that these aging muscles do not need to regenerate, based on physiological differences between the muscles (McLoon and Wirtschafter 2003). We and others have demonstrated that aging ILMs display functional evidence of denervation (Périé et al. 1997; Johnson et al. 2013). A decrease in NMJ abundance would demonstrate a loss of morphologically defined innervation points in these muscles. We have previously demonstrated that NMJs become smaller and less abundant in aging ILMs (Périé et al. 1997). In systemically NTF4-treated thyroarytenoid muscle, NMJ quantity increased in the 6-mo 14-day treated group compared to control. Our current data also show that NTF4 treatment significantly reduced denervation in the 30-month animals, as measured by Nav1.5 labeling, in both the systemic and direct injection groups. NTF4 may thus be effective for the reinnervation of aging ILMs through enhanced muscle morphology. Changes in NMJs and innervation can also be due to a loss of fast motor units that occurs with normal aging (Jang and van Remmen 2011). It will be beneficial to determine if contractile function in the ILMs improves and motor unit number increases with NTF4 treatment.
Intracellular glycogen content at 6-mo accounts for 7% of muscle fibers, with the proportion of glycogen-positive muscle fibers significantly increasing to 27% at 30-mo (McMullen and Andrade 2006), thus indicating a shift in muscle cell respiratory physiology with age (Fig. 2). With NTF4 treatment, glycogen-positive thyroarytenoid muscle fibers decreased at 6- and 30-mo in the 7-day treatment groups. Alternatively, in the 14-day NTF4 treatment groups, 6-mo and 30-mo animals showed an increase in glycogen-positive muscle fibers, indicating an overall shift in respiratory capacity. This implies a differential effect as a function of treatment time. Both ages of the direct injection groups also showed a trend toward increasing glycogen content. These data suggest an interaction effect between changes in muscle aerobic capacity and treatment duration. Muscles require different isoenzymes at different ages. Results similar to the 14-day group finding have been shown in a previous publication in which we electrically stimulated the recurrent laryngeal nerve (McMullen et al. 2011). If there is a reduction in glycogen, then there will be an impairment of the glycogen breakdown needed for muscle metabolism. Depletion of glycogen can impair metabolism and have a negative effect on performance. The 14-day systemic group may have a better response in terms of muscle aerobic capacity. It is possible that NTF4 may have augmented glycogen loading (Garvey et al. 2015). We did not examine the content of glycogen intermediates such as maltopentaose or maltotriose. In future experiments, it would be interesting to test the effects of NTF4 application in rats with demonstrated alterations in glycogen transport or metabolism. An additional qualitative observation demonstrated changes in mitochondrial content (Fig. 4). These changes have been associated with alterations in muscle fiber size due to fiber atrophy and fiber loss (Hepple et al. 2004; Short et al. 2005). In aging ILMs, mitochondrial dysfunction may also be an important age-related event. Aging thyroarytenoid muscles from 30-mo-old rats contain fibers with abnormally large mitochondrial accumulations (ragged red fibers), representing 10.4% (±4.5) of counted muscle fibers (McMullen and Andrade 2006). These qualitative findings are suggestive of diminished aerobic capacity. Ragged red fibers were not found in 6-mo controls or in any NTF4-treated thyroarytenoid muscles, suggesting stable aerobic capacity for these muscles with treatment.
Finally, we examined the expression of TrkB (Fig. 5) and found differential effects on TrkB with systemic NTF4 treatment in both the 7- and 14-day groups. It has been suggested that TrkB expression at the NMJ is reduced with age (Personius and Parker 2013). Those experiments were conducted in soleus muscle, which is composed of mostly slow muscle fibers, whereas the ILMs are mostly fast muscle fibers (Conner et al. 2002). It has been proposed that TrkB may take part in the maintenance and protection of sensory structures in the laryngeal mucosa (Yamamoto et al. 2011). The appearance of higher levels of endogenous TrkB, and its increase with treatment, suggests possible enhancement of NMJ transmission with systemic treatment.
Conclusion
This study examined the effects of neurotrophin application in aging rat ILMs. These data display the effect of neurotrophin use to induce laryngeal remodeling responses in an animal model. Changes in oxidative, metabolic, and glycolytic capacity are consistent with fast-contracting and fatigue-resistant fiber type. Future study is needed to examine the functional effects of these neurotrophins on the aging laryngeal muscle. This study demonstrates that neurotrophins may have therapeutic potential on aging-related laryngeal muscle dysfunction. | 2016-09-14T22:35:13.896Z | 2016-05-01T00:00:00.000 | {
"year": 2016,
"sha1": "5e0b8a2ad02868cb4e8a4210cc7f9f239aab1a41",
"oa_license": "CCBY",
"oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.14814/phy2.12798",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9cd867f85d6f84071621541af85f509a874c5d44",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15574698 | pes2o/s2orc | v3-fos-license | Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography
We present the first ultrahigh-resolution optical coherence tomography (OCT) structural intensity images and movies of the human retina in vivo at 29.3 frames per second with 500 A-lines per frame. Data was acquired at a continuous rate of 29,300 spectra per second with a 98% duty cycle. Two consecutive spectra were coherently summed to improve sensitivity, resulting in an effective rate of 14,600 A-lines per second at an effective integration time of 68 μs. The turn-key source was a combination of two superluminescent diodes with a combined spectral width of more than 150 nm providing 4.5 mW of power. The spectrometer of the spectral-domain OCT (SD-OCT) setup was centered around 885 nm with a bandwidth of 145 nm. The effective bandwidth in the eye was limited to approximately 100 nm due to increased absorption of wavelengths above 920 nm in the vitreous. Comparing the performance of our ultrahigh-resolution SD-OCT system with a conventional high-resolution time domain OCT system, the A-line rate of the spectral-domain OCT system was 59 times higher at a 5.4 dB lower sensitivity. With use of a software-based dispersion compensation scheme, coherence length broadening due to dispersion mismatch between sample and reference arms was minimized. The coherence length measured from a mirror in air was equal to 4.0 μm (n = 1). The coherence length determined from the specular reflection of the foveal umbo in vivo in a healthy human eye was equal to 3.5 μm (n = 1.38). With this new system, two layers at the location of the retinal pigmented epithelium appear to be present, as well as small features in the inner and outer plexiform layers, which are believed to be small blood vessels. © 2004 Optical Society of America
OCIS codes: (170.4500) Optical coherence tomography; (170.4470) Ophthalmology; (170.3890) Medical optics instrumentation; (110.4280) Noise in imaging systems.
References and Links
1. D. Huang, E.A. Swanson, C.P. Lin, et al., "Optical coherence tomography," Science 254, 1178-81 (1991).
2. F.W. Campbell and D.G. Green, "Optical and Retinal Factors Affecting Visual Resolution," J. Physiol.-London 181, 576-593 (1965).
3. E.A. Swanson, D. Huang, M.R. Hee, et al., "High-Speed Optical Coherence Domain Reflectometry," Opt. Lett. 17, 151-153 (1992).
4. W. Drexler, U. Morgner, F.X. Kartner, et al., "In vivo ultrahigh-resolution optical coherence tomography," Opt. Lett. 24, 1221-1223 (1999).
5. W. Drexler, H. Sattmann, B. Hermann, et al., "Enhanced visualization of macular pathology with the use of ultrahigh-resolution optical coherence tomography," Arch. Ophthalmol. 121, 695-706 (2003).
6. W. Drexler, U. Morgner, R.K. Ghanta, et al., "Ultrahigh-resolution ophthalmic optical coherence tomography," Nat. Med. 7, 502-507 (2001).
7. American National Standards Institute, American National Standard for Safe Use of Lasers Z136.1. 2000: Orlando.
8. C.K. Hitzenberger, P. Trost, P.W. Lo and Q.Y. Zhou, "Three-dimensional imaging of the human retina by high-speed optical coherence tomography," Opt. Express 11, 2753-2761 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-21-2753
9. B. Cense, T.C. Chen, B.H. Park, M.C. Pierce and J.F. de Boer, "In vivo depth-resolved birefringence measurements of the human retinal nerve fiber layer by polarization-sensitive optical coherence tomography," Opt. Lett. 27, 1610-1612 (2002).
10. B. Cense, T.C. Chen, B.H. Park, M.C.
Pierce and J.F. de Boer, "In vivo birefringence and thickness measurements of the human retinal nerve fiber layer using polarization-sensitive optical coherence tomography," J. Biomed. Opt. 9, 121-125 (2004).
11. A.F. Fercher, C.K. Hitzenberger, G. Kamp and S.Y. Elzaiat, "Measurement of Intraocular Distances by Backscattering Spectral Interferometry," Opt. Commun. 117, 43-48 (1995).
12. G. Hausler and M.W. Lindner, "Coherence Radar and Spectral Radar - new tools for dermatological diagnosis," J. Biomed. Opt. 3, 21-31 (1998).
13. A.F. Fercher, W. Drexler, C.K. Hitzenberger and T. Lasser, "Optical coherence tomography - principles and applications," Rep. Prog. Phys. 66, 239-303 (2003).
14. T. Mitsui, "Dynamic range of optical reflectometry with spectral interferometry," Jpn. J. Appl. Phys. Part 1 - Regul. Pap. Short Notes Rev. Pap. 38, 6133-6137 (1999).
15. R. Leitgeb, C.K. Hitzenberger and A.F. Fercher, "Performance of Fourier domain vs. time domain optical coherence tomography," Opt. Express 11, 889-894 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-889
16. J.F. de Boer, B. Cense, B.H. Park, et al., "Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography," Opt. Lett. 28, 2067-2069 (2003).
17. M.A. Choma, M.V. Sarunic, C.H. Yang and J.A. Izatt, "Sensitivity advantage of swept source and Fourier domain optical coherence tomography," Opt. Express 11, 2183-2189 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-18-2183
18. N. Nassif, B. Cense, B.H. Park, et al., "In vivo human retinal imaging by ultrahigh-speed spectral domain optical coherence tomography," Opt. Lett. 29, 480-482 (2004).
19. M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski and A.F. Fercher, "In vivo human retinal imaging by Fourier domain optical coherence tomography," J. Biomed. Opt. 7, 457-463 (2002).
20. N.A. Nassif, B. Cense, B.H. Park, et al., "In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve," Opt. Express 12, 367-376 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-7-782
21. G.J. Tearney, B.E. Bouma and J.G. Fujimoto, "High-speed phase- and group-delay scanning with a grating-based phase control delay line," Opt. Lett. 22, 1811-1813 (1997).
22. J.F. de Boer, C.E. Saxer and J.S. Nelson, "Stable carrier generation and phase-resolved digital data processing in optical coherence tomography," Appl. Opt. 40, (2001).
23. A.F. Fercher, C.K. Hitzenberger, M. Sticker, et al., "Dispersion compensation for optical coherence tomography depth-scan signals by a numerical technique," Opt. Commun. 204, 67-74 (2002).
24. D.L. Marks, A.L. Oldenburg, J.J. Reynolds and S.A. Boppart, "Digital algorithm for dispersion correction in optical coherence tomography for homogeneous and stratified media," Appl. Opt. 42, 204-217 (2003).
25. D.L. Marks, A.L. Oldenburg, J.J. Reynolds and S.A. Boppart, "Autofocus algorithm for dispersion correction in optical coherence tomography," Appl. Opt. 42, 3038-3046 (2003).
26. S.H. Yun, G.J. Tearney, B.E. Bouma, B.H. Park and J.F. de Boer, "High-speed spectral-domain optical coherence tomography at 1.3 µm wavelength," Opt. Express 11, 3598-3604 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-26-3598
27. B.R. White, M.C. Pierce, N. Nassif, et al., "In vivo dynamic human retinal blood flow imaging using ultra-high-speed spectral domain optical Doppler tomography," Opt.
Express 11, 3490-3497 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-25-3490
28. D.M. Snodderly, R.S. Weinhaus and J.C. Choi, "Neural Vascular Relationships in Central Retina of Macaque Monkeys (Macaca fascicularis)," J. Neurosci. 12, 1169-1193 (1992).
Introduction
The time-domain variant of optical coherence tomography [1] (TD-OCT) is a technique that is clinically applied in ophthalmology for the detection of ocular diseases, as well as for monitoring disease progression and the effects of therapy. Based on interferometry with near-infrared light, TD-OCT allows for non-invasive and in vivo optical cross-sectioning of the retina and cornea. Since it is not possible to take a biopsy of the retina on a routine basis, TD-OCT is one of the few tools available to ophthalmologists for obtaining depth-resolved retinal information.
Compared to histology, one drawback of TD-OCT is its limited lateral and axial resolution. The lateral resolution of the technique is limited by the optics of the eye, i.e., the focal length, corneal aberrations and pupil diameter [2], and the optical design of the setup. Without adaptive optics, the combination of these parameters leads to a lateral resolution of approximately 20-30 µm. The axial resolution of an OCT system may be defined in terms of the coherence length, $l_{coh}$, which is determined by the center wavelength and bandwidth of the source and the index of refraction of the medium [3]. In vivo OCT measurements of an African frog tadpole with an axial resolution of ~1 µm can be made using a Ti:sapphire laser with a bandwidth of 350 nm centered around 800 nm [4]. Drexler et al. obtained retinal images with an axial resolution of 3 µm [5], using a technique called ultrahigh-resolution optical coherence tomography (HR-OCT) [4-6]. The image quality or signal-to-noise ratio (SNR) of a shot-noise-limited TD-OCT system depends on several factors [3]. The application of a source with a larger bandwidth decreases the SNR due to the increased electronic detection bandwidth required. In order to maintain the same SNR, either the A-line rate or the axial scan length should be decreased, or the power incident on the sample should be increased. If the dwell time of the imaging spot on the retina is kept short, ANSI standards allow an increase of sample power [7]. Short dwell times can be achieved if the OCT beam is scanned sufficiently fast over the retina. Hitzenberger et al. performed ophthalmic TD-OCT with short dwell times and incident power of up to 10 mW [8]. In previous measurements with our ophthalmic TD-OCT system, the power incident on the eye was less than 600 µW [9,10], because this power level is known to be safe for dwell times of up to eight hours [7]. An increase in sample arm power to 10 mW could result in more than a tenfold increase in SNR. However, more source power is required, and inexpensive powerful sources with a large bandwidth are not readily available. For current clinical ophthalmic applications of HR-OCT, either a lower SNR or a slower A-line rate is taken as a penalty. One can avoid such a penalty by using a more sensitive technique.
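As a numerical aside, for an ideal Gaussian spectrum the coherence length can be estimated from the center wavelength and bandwidth with the standard formula sketched below. This is a textbook estimate only; real source spectra are not perfectly Gaussian, so measured values (such as those reported in this paper) can differ from it.

```python
import numpy as np

def coherence_length_um(center_nm, fwhm_nm, n=1.0):
    """Coherence length (axial resolution) for an ideal Gaussian spectrum:
    l_coh = (2 ln 2 / pi) * lambda0^2 / (n * dlambda). Returns micrometres."""
    return (2.0 * np.log(2.0) / np.pi) * center_nm**2 / (n * fwhm_nm) * 1e-3

# Effective in-eye bandwidth of ~100 nm around 885 nm, n = 1.38 (retina):
print(f"{coherence_length_um(885.0, 100.0, n=1.38):.2f} um")  # ~2.5 um
```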
In spectral-domain optical coherence tomography (SD-OCT), also known as Fourierdomain OCT (FD-OCT), depth-resolved information is encoded in the cross-spectral density function, as measured in the detection arm of an interferometer.[11][12][13] SD-OCT offers a significant sensitivity advantage over TD-OCT.[14][15][16][17] Recently, in a direct comparison, an improvement of more than 2 orders of magnitude (21.7 dB) was experimentally demonstrated.[18] In addition, the reference arm length in SD-OCT is not modulated, making SD-OCT inherently faster than TD-OCT.The SNR of an SD-OCT system is defined as: [16] ( 1 ) with τ i the integration time required to record one spectrum.According to Eq. 1, the SNR performance of an SD-OCT system improves with increasing sample arm power or longer integration time.Most importantly, Eq. 1 shows that the SNR performance of an SD-OCT system is independent of the bandwidth of the source.In theory, by combining SD-OCT with an ultra-broadband source, ultrahigh-resolution imaging at high acquisition rates should become within reach.This new technique may facilitate the diagnosis and monitoring of several ocular diseases, such as glaucoma, diabetic retinopathy, cancer and age-related The first in vivo retinal SD-OCT images were presented by Wojtkowski et al. [19] In our previous work, we demonstrated an SD-OCT system suitable for in vivo video-rate ophthalmic imaging.[18,20] This system had a sensitivity of 98.4 dB, an acquisition rate of 29,300 A-lines per second and an axial resolution of 6 µm in the eye at a safe ocular exposure level of 600 µW.Motion artifacts within a frame due to involuntary eye movement were avoided at this frame rate.Furthermore, three-dimensional tomograms were created, which represent the true topography of the retina.In this paper we will quantify the signal to noise ration (SNR) and the axial resolution of an SD-OCT system equipped with an ultra-broadband source and identify not earlier seen features in the retina.One difficulty that arises from using ultra-broadband sources in a fiber-based OCT setup for ophthalmic imaging is chromatic dispersion in optically-dense materials like glass, tissue and water.The speed of light depends on the refractive index n(k) of the material, slowing down certain spectral components to a greater extent than others, hence dispersing the light.The total amount of dispersion increases linearly with length of the dispersive medium as well.Chromatic dispersion in air is negligible.Considerable amounts of dispersion can be tolerated if the dispersion in the two arms of the interferometer is equal, thus creating a coherence function that will be free of dispersion artifacts.However, when sample and reference arms contain different lengths of optical fiber or other dispersive media, a dispersion mismatch occurs.In the sample arm, the introduction of an eye with unknown axial length creates a similar effect.The coherence function will not only be broadened by unbalanced dispersion, but its peak intensity will decrease as well.Second order or group-velocity dispersion can be compensated for by changing the lens to grating distance in a rapid scanning optical delay line.[21] However, this method does not compensate for higher orders of dispersion.Alternatively, one can balance dispersion in an OCT system by inserting variable-thickness BK7 and fused silica prisms in the reference arm.[4] The previously-mentioned unknown factor introduced by an eye with unknown axial length requires a flexible method for 
dispersion compensation. An alternative to compensation in hardware is dispersion compensation in software. De Boer et al. induced dispersion in the delay line of a TD-OCT system equipped with an optical amplifier based source (AFC Technologies, λ0 = 1310 nm, ∆λ = 75 nm) and compensated for dispersion artifacts in structural intensity images obtained in an onion. [22] Fercher et al. compensated for dispersion induced by a glass sample. [23] Their broadband spectrum was generated with a high-pressure mercury lamp. Other dispersion compensation algorithms are described by Marks et al. [24,25] In our analysis we compensate in software for dispersion induced by the use of an ultra-broadband source and remove the resulting artifacts from retinal data. In the near future, this technology may facilitate the application of ultrahigh-resolution systems in the clinic.
Dispersion compensation
A dispersion mismatch introduces a phase shift e^{iθ(k)} in the complex cross-spectral density I(k) as a function of wave vector. Since spectrometer data is acquired as a function of wavelength, the data first has to be transformed to k-space. The relation between the phase θ(k) and the multiple orders of dispersion can best be described by a Taylor series expansion around the center wave vector k0,

θ(k) = θ(k0) + (∂θ/∂k)|_{k0} (k − k0) + (1/2)(∂²θ/∂k²)|_{k0} (k − k0)² + … ,

where the quadratic term describes group-velocity dispersion and the higher-order terms describe higher orders of dispersion. Determining this phase term for dispersion compensation of data obtained in the human eye in vivo requires a coherence function obtained from a well-reflecting reference point in the eye. We found that it is possible to use the center of the fovea (foveal umbo) for this purpose, because this part of the eye acts as a good reflector. To determine the phase term, after linear interpolation to k-space, the spectrum is Fourier transformed to z-space, where it is shifted such that the coherence function is centered on the origin. A complex spectrum in k-space is obtained after an inverse Fourier transformation. The phase term θ(k) is equal to the arctangent of the imaginary component divided by the real component, and indicates by how much subsequent wave numbers k are out of phase with each other. This function was fit to a polynomial expression of 9th order, yielding a set of coefficients α1-9. Individual spectra obtained from a volunteer were first multiplied with a phase e^{−iθ(k)} as determined from the last seven polynomial coefficients and then inversely Fourier transformed into A-lines, thus removing the dispersion.
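For readers who wish to reproduce this procedure, the following is a minimal numpy sketch of the phase extraction and correction steps described above; it assumes the spectra have already been resampled to evenly spaced k and that the DC background has been subtracted so the coherence peak dominates, and all function names are our own illustrative choices rather than the code used in this work.

```python
import numpy as np

def dispersion_phase(spectrum_k, order=9):
    # Fourier transform to z-space and shift the coherence peak to the origin.
    depth = np.fft.fft(spectrum_k)
    depth = np.roll(depth, -np.argmax(np.abs(depth)))
    # Inverse transform yields a complex spectrum; its argument is theta(k).
    analytic = np.fft.ifft(depth)
    theta = np.unwrap(np.angle(analytic))
    k = np.arange(len(theta))
    coeffs = np.polynomial.polynomial.polyfit(k, theta, order)
    # Keep only alpha_3..alpha_9: the constant and linear terms merely shift
    # the coherence function, and 2nd order is compensated in hardware.
    coeffs[:3] = 0.0
    return coeffs

def compensate(spectrum_k, coeffs):
    # Multiply the spectrum with exp(-i*theta(k)), then transform to an A-line.
    k = np.arange(len(spectrum_k))
    theta = np.polynomial.polynomial.polyval(k, coeffs)
    return np.abs(np.fft.ifft(spectrum_k * np.exp(-1j * theta)))
```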
Setup
The turn-key ultra-broadband source was a BroadLighter (Superlum, Russia), in which two superluminescent diodes at center wavelengths of approximately 840 nm and 920 nm were combined into one system with a center wavelength of 890 nm, a FWHM bandwidth of over 150 nm and an optical output power of approximately 4.5 mW. Fig. 1 shows the source spectrum and a reference arm spectrum that were recorded with a commercial optical spectrum analyzer (OSA). The reference spectrum was also recorded with our high-speed spectrometer. By comparing the blue and red curves of Fig. 1, one can see a significant drop in sensitivity of the line scan camera above 850 nm. The plot amplitudes were adjusted so that all three curves fit within the same graph.
Fig. 1. Source spectrum of the BroadLighter (black); spectrum returning from the reference arm (red). Both spectra were measured with a commercial optical spectrum analyzer. The reference spectrum in blue was recorded with our high-speed spectrometer; comparing the blue and red curves demonstrates the decrease in sensitivity of the line scan camera above 850 nm. Spectrum amplitudes were adjusted so that all three curves fit within the same graph.
A detailed description and drawing of our setup can be found in our earlier work. [18,20] Back-reflected light from the source was isolated with a broadband isolator; without the isolator, back reflections induce noise in the OCT system. After isolation, the power was split with a fiber coupler. The splitting ratio of this coupler was optimized to 80/20 at 830 nm; at longer wavelengths, the splitting ratio approaches 50/50. The manufacturer specification (Gould Fiber Optics, Millersville, MD) of the wavelength-dependent shift in splitting ratio is 0.3%/nm. The larger fraction was sent towards a stationary rapid scanning optical delay line, in which the lens-to-grating distance was optimized to minimize group-delay or second-order dispersion. The smaller fraction of the power was sent towards the sample arm, where a slit lamp-based scanner apparatus was available for retinal scanning. [10] Previously, we used a dichroic splitter in the slit lamp, so that the location of scans could be monitored by a charge-coupled device (CCD) camera. In order to reduce the losses in the slit lamp, this dichroic mirror was replaced by a gold mirror. After this replacement, the total attenuation in the slit lamp was 1 dB in single pass. Longer wavelengths were blocked with a short-pass filter with a cut-off wavelength of 950 nm. The power that was incident on the cornea after low-pass filtering was equal to 395 ± 5 µW. This power is well below the allowed maximum for scanning beams as specified by the ANSI standards. [7] Power returning from the eye and the reference arm interfered in the 80/20 fiber coupler. Interference fringes were detected with a high-speed spectrometer, comprising a collimator (f = 60 mm), a transmission grating (1200 lines/mm) at Littrow's angle, a three-element air-spaced focusing lens (f = 100 mm) and a line scan camera (Basler) with 2048 elements (each 10 µm x 10 µm). The bandwidth of the spectrometer was 145 nm, with a designed spectral resolution of 0.071 nm. [18,20] The maximum scan depth z was measured to be 2.7 mm in air and 2.0 mm in tissue (n = 1.38). The galvanometer mirror in the delay line was set in a neutral position and was not driven for these measurements. Since the source bandwidth was larger than the bandwidth of the spectrometer, the spectrum was clipped on one side by offsetting the reference arm mirror such that longer wavelengths were not reflected back into the interferometer. The reference arm power was adjusted with a neutral density filter in the delay line. In the spectrometer, consecutive interference spectra were read out using a custom-made program written in Visual C++. [18,20] Data was stored on a hard disk. The continuous acquisition rate was 29,300 spectra per second. The integration time per spectrum was 34 µs. The duty cycle was 98%, i.e. data was acquired during 98% of the total imaging time.
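As a quick plausibility check, the designed spectral resolution and the quoted scan depths follow directly from the spectrometer parameters; the snippet below reproduces them under the usual Nyquist-limited estimate z_max = λ0²/(4 δλ), which is our assumption rather than a statement from the paper.

```python
lam0 = 890e-9                  # center wavelength [m]
bandwidth = 145e-9             # spectrometer bandwidth [m]
n_pixels = 2048                # line scan camera elements
n_tissue = 1.38

dlam = bandwidth / n_pixels            # ~0.071 nm per pixel
zmax_air = lam0**2 / (4 * dlam)        # ~2.8 mm, close to the measured 2.7 mm
zmax_tissue = zmax_air / n_tissue      # ~2.0 mm, as quoted in the text
print(dlam * 1e9, zmax_air * 1e3, zmax_tissue * 1e3)
```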
Measurement procedure
In order to compensate for dispersion, coherence functions were obtained from a reflecting spot in the foveal umbo of a human eye, from a mirror in a water-filled model eye (Eyetech Ltd.) and from a mirror in air. The sensitivity of the OCT system and the depth dependence of dispersion were determined with the use of a mirror. The slit lamp setup was not used for this particular measurement; instead, a collimator, a focusing lens and a mirror on a translation stage were used. In order to extrapolate the results to the slit lamp experiments, a low-pass filter (λ_cut-off = 950 nm) was inserted in the beam path. The sample arm power could be attenuated with a variable neutral density filter. To verify that the system was free from any depth-dependent dispersion induced by the spectrometer, the reference arm length was changed and measurements were taken at different positions. The depth-dependent attenuation was measured as well. In order to determine whether the system was shot-noise limited, the variance of 1000 reference arm spectra was determined and fit with a theoretical expression for the shot noise. [20] In vivo measurements were performed on the undilated right eye of a healthy volunteer. The right eye was stabilized using an external fixation spot for the volunteer's contralateral eye. Multiple sets of B-scans were taken in the macular area at an acquisition rate of 29,300 spectra per second. During acquisition, an on-screen refresh rate of three frames per second without dispersion compensation was maintained. For each frame the fast axis of the retinal scanner was deflected once, while the slow axis of the scanner could be stepped between frames. B-scans that contain specular reflections from the foveal surface were analyzed in detail to compensate for dispersion.
Analysis
Data was analyzed after acquisition with a custom-made program written in Matlab. Raw data was processed in several steps to extract structural intensity images [18,20,26] as well as Doppler flow data. [27] In addition, we compensated for dispersion using a phase e^{−iθ(k)}, built from a set of polynomial coefficients α3-9 obtained from a specular reflection in the human retina. For the coherence length determination, the density of points within an A-line was increased eightfold by zero-padding the spectral data. [26] For all data obtained from a mirror, we averaged over 100 A-lines to reduce the influence of noise. The dispersion analysis in software, which required input from the operator, took approximately five minutes in Matlab. Raw data was converted to movies in Matlab as well. To improve sensitivity, two consecutive spectra were summed before Fourier transformation. At an effective A-line rate of 14,600 per second and an effective integration time of 68 µs per spectrum, movies of 500 A-lines per frame were created. The conversion of a data set acquired in 3 seconds into a movie took approximately twenty minutes in Matlab.
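The two processing steps that are easiest to get wrong — the eightfold zero-padding used for the coherence-length determination and the coherent summing of consecutive spectra — can be expressed compactly as below; this is a minimal numpy sketch of the operations described in the text, not the original Matlab code.

```python
import numpy as np

def upsample_aline(spectrum_k, factor=8):
    # Zero-padding in k-space interpolates the A-line in z-space eightfold.
    padded = np.concatenate([spectrum_k,
                             np.zeros((factor - 1) * len(spectrum_k))])
    return np.abs(np.fft.ifft(padded))

def sum_pairs(spectra):
    # Coherently add consecutive spectra (assumes an even number of rows):
    # halves the A-line rate (29,300 -> 14,600 per second) and doubles the
    # effective integration time, giving the quoted 3 dB sensitivity gain.
    return spectra[0::2] + spectra[1::2]
```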
Dispersion compensation
Fig. 2. The phase θ(k) obtained from a mirror in a model eye and from a specular reflection in the fovea (left axis). The residual dispersion not compensated for by the polynomial fit is given as a function of k (right axis).
In the graph of Fig. 2, the phase term θ(k) obtained from a mirror in a model eye (averaged over 100 A-lines) and from a specular reflective spot in the fovea (averaged over 5 A-lines) are shown. The differences between the measured phase terms and the polynomial fits (9th order) to the data are shown as well, with the corresponding axis on the right. The inclusion of more than 9 coefficients in the polynomial fit did not significantly improve the coherence function. Both phases show the same pattern, which indicates that the model eye and the real eye experience similar amounts of dispersion. All in vivo data was compensated using the phase that was obtained from the specular reflection of the fovea itself (red curve of Fig. 2).
In the graph of Fig. 3, the coherence function obtained from a mirror in air is plotted. The data shows the amplitude as a function of depth, where the amplitude is given by the absolute value of the Fourier components after transformation of the measured spectrum. In the same graph, a coherence function compensated for dispersion is plotted. For this plot, the same technique as applied in Fig. 2 was used, yielding a different set of coefficients, since the mirror was not located in the water-filled model eye. The dispersion compensation technique gives a significant reduction in coherence length as well as a threefold increase in peak height. Without dispersion compensation, the coherence length was 27.0 µm. After dispersion compensation it was estimated to be 4.0 µm (n = 1), equivalent to 2.9 µm in tissue with a refractive index of n = 1.38. After dispersion compensation, side lobes are present on both sides of the coherence function. These side lobes are a result of the non-Gaussian shape of the reference arm spectrum (Fig. 1).
In Fig. 4, coherence functions obtained from a mirror in air at different path length differences are plotted. The coherence function plotted in black was dispersion compensated using the earlier mentioned dispersion compensation technique. This process yielded a dispersion compensation phase e^{−iθ(k)}. All other data sets were multiplied with the same phase before Fourier transformation. Coherence functions from different depths were overlapped for comparison. The point density of all curves was increased by a factor of 8, using zero-padding. The coherence length for path length differences up to 1200 µm was 4.0 µm in air. At z = 1700 µm, the coherence length increased to 4.1 µm, and at z = 2200 µm it further increased to 4.3 µm. The depth-dependent attenuation was calculated from Fig. 4 [20] by comparing the peaks at z = 200 and 1200 µm, and was found to be equal to 7.2 dB.
Noise performance
Fig. 5. Shot noise measurement using the BroadLighter in an SD-OCT configuration. The shot noise level was determined with illumination of the reference arm only. The measured shot noise curve was fit with a theoretical equation for the shot noise, demonstrating that the system was shot-noise limited. [20]

An analysis of 1000 reference spectra demonstrated that the system was shot-noise limited, using the same method and well depth as in Ref. [20]. An isolator in the source arm was required to suppress optical feedback to the source.
Sensitivity measurements
The maximum power after the sample arm fiber collimator was 533 µW. Using a mirror in the sample arm, 101 µW returned to the detection arm. After attenuation with a 30.8 dB neutral density filter in the sample arm, 83.7 nW was detected in the detection arm, and a signal-to-noise ratio (SNR) of 58.0 dB was measured with our high-speed spectrometer at an acquisition rate of 29,300 spectra per second (see the coherence function at z = 700 µm). In theory, a detected power of 83.7 nW with our earlier reported spectrometer efficiency of 28% should give an SNR of 65.6 dB. [20] Comparing this value with the measured value of 58.0 dB, the spectrometer performed 7.6 dB below the theoretical SNR. The reduction is attributed to a reduced quantum efficiency of the detector at longer wavelengths. For a measured 400 µW sample arm power after the slit lamp, a return loss through the slit lamp of 1 dB and a 3 dB sensitivity gain by coherent addition of two consecutive spectra, the system sensitivity was 89.6 dB at a 14,600 A-lines/sec acquisition rate.
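These numbers are mutually consistent with the shot-noise expression of Eq. 1; the following back-of-the-envelope check (our illustration, not part of the original analysis) recomputes the theoretical SNR from η = 0.28, the detected 83.7 nW, and the 34 µs integration time.

```python
import numpy as np

h, c = 6.626e-34, 2.998e8            # Planck constant [J s], speed of light [m/s]
eta, P, tau = 0.28, 83.7e-9, 34e-6   # efficiency, detected power [W], integration [s]
E_photon = h * c / 890e-9            # photon energy at 890 nm [J]

snr_db = 10 * np.log10(eta * P * tau / E_photon)
print(round(snr_db, 1))              # ~65.5 dB, in line with the quoted 65.6 dB
```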
In vivo measurements on a human volunteer
Fig. 6. Structural image of the fovea. The dimensions of each image are 3.1 x 0.61 mm. The image is expanded in the vertical direction by a factor of 2 for clarity. Layers are labeled as follows: RNFL - retinal nerve fiber layer; GCL - ganglion cell layer; IPL - inner plexiform layer; INL - inner nuclear layer; OPL - outer plexiform layer; ONL - outer nuclear layer; ELM - external limiting membrane; IPRL - interface between the inner and outer segments of the photoreceptor layer; RPE - retinal pigmented epithelium; C - choriocapillaris and choroid. A highly reflective spot in the center of the fovea is marked with an R. A blood vessel is marked with a large circle (BV) and structures in the outer plexiform layer are marked with smaller circles. In the movie, these structures can also be seen in the IPL. Two layers at the location of the RPE at the left and right are marked with arrows and an asterisk (*). Click on the image to view the movie (29.3 frames per second and 500 A-lines per frame, aspect ratio 1:3.2; short version 45 frames (1.5 s, 2.4 MB), long version 90 frames (3.1 s, 5.3 MB)). In the movie, a floater can be seen in the vitreous at the left hand side above the retina. The repositioning of the galvo mirror after each scan creates an artifact on the right hand side of the image.
The movie in Fig. 6 was recorded in the fovea at an acquisition rate of 29.3 frames per second with 500 A-lines per frame. The movie is expanded in the vertical direction by a factor of 3.2. The retinal scanner made horizontal scans through the fovea. For this particular movie, the slow axis of the retinal scanner was not run, and as a result cross-sections from approximately the same location can be seen as a function of time. Individual frames in the movie were realigned using a cross-correlation technique to remove motion across frames. The dimensions of each image are 3.1 x 0.6 mm. A depth range between 0.4 and 1.0 mm is displayed in the images. The maximum dynamic range within the image was approximately 35 dB. Several layers can be recognized in this movie. [5] The upper dark band at the left and right of the image is the retinal nerve fiber layer, which becomes thicker further away from the fovea. Below this layer, we see two dark bands delineated by two whiter bands. The upper dark band consists of the ganglion cell layer and the inner plexiform layer; in several frames, one may be able to distinguish between these two layers. The two white bands are the inner and outer nuclear layers, and the second dark band is the outer plexiform layer. The first dark layer below the outer nuclear layer is the external limiting membrane, which extends over the whole width of the image. This layer is in general not visible with OCT employing an ordinary broadband source. Below this membrane we can see the interface between the inner and outer segments of the photoreceptor layer, which rises directly below the foveola. The lowest layer comprises the retinal pigmented epithelium (RPE). At the left and right side of the image at the location of the RPE, two layers seem to be present. We hypothesize that one of these layers might be Bruch's membrane. Below the RPE, a cloudy structure can be seen; this structure is the choriocapillaris and the choroid. The fast acquisition rate reveals the true topography of the retina. The coherence length was determined in vivo from the specular reflection in the center of the fovea, averaged over 5 A-lines. This coherence function is plotted in Fig. 7, and the coherence length after dispersion compensation was equal to 4.8 µm in air and 3.5 µm in tissue (n = 1.38). Compared to the coherence length displayed in Fig. 3, using the same index of refraction of 1.38, the coherence function is broadened. Due to absorption of longer wavelengths in the vitreous, the effective bandwidth is reduced, yielding a longer coherence length. Small, highly reflecting black dots can be seen in the ganglion cell layer and in both plexiform layers. We conclude that they are not caused by speckle, because they consistently appear at the same location over consecutive movie frames. The dots seem to be almost regularly spaced in the outer plexiform layer. We believe that these black dots are very small blood vessels. Snodderly et al. measured the distribution of blood vessels in an enucleated macaque eye by means of microscopy of frozen samples. [28] They report a very similar spacing of small blood vessels in the plexiform layers near the fovea. To positively identify these structures as blood vessels, we analyzed the data for blood flow as described in our earlier work. [27] Doppler flow analysis confirmed that flow occurs in the darker dots located in the ganglion cell layer, which can therefore be positively identified as blood vessels. In the plexiform layers, a clear correlation between the location of the highly reflective black dots and Doppler flow could not be found; flow was only detected incidentally in these layers at locations where black dots were seen. One explanation why no consistent flow was detected in the well-reflecting features in the plexiform layers is that these features are not blood vessels. Another explanation may be that our system is not sensitive enough to measure flow in such small vessels. For instance, the blood vessels may be too small, which reduces the number of A-lines and therefore the signal that can be used to determine the presence of blood flow. Furthermore, the analysis depends on measuring a Doppler component parallel to the direction of the beam; if a blood vessel is exactly perpendicular to the beam direction, a parallel Doppler flow component will be absent and no flow can be registered. In previous work, larger blood vessels were recognized by their white appearance and the shadow they cast, caused by light attenuation in the blood. Here blood vessels act as good scatterers, presumably due to their small size.
By running the slow axis of the retinal scanner, subsequent cross-sections of the fovea were made. The image below measures 6.2 by 1.2 mm and the slow axis scans over 3.1 mm. During processing, noise was filtered with a 2 by 2 median filter to better match the data to the pixel resolution of this video. The reflecting spot in the center of the foveola, as seen in the first movie, only shows up in a couple of frames, indicating that the specular reflection only occurs in the three-dimensional center of the foveola. The horizontal lines at the top of the image are residual fixed-pattern noise. In summary, the application of ultrahigh-resolution SD-OCT technology allows us to identify new features in the human retina and to measure the coherence length in the eye.
Discussion
In this study, the performance of an SD-OCT system equipped with an ultra-broadband source was determined by measuring the SNR and the coherence length on a mirror and in the eye. Comparing the measured SNR value of 89.6 dB with the theoretical value of 97.2 dB, our system performs 7.6 dB below the theoretical limit. In our previous system, described by Nassif et al., the difference between the measured and theoretical SNR was 2.2 dB. [20] The extra loss of 5.4 dB that we encounter in this system equipped with the BroadLighter can be attributed to the lower sensitivity of the line scan camera. In Fig. 1, the reference arm spectrum recorded with a commercial optical spectrum analyzer is compared with the same spectrum as recorded with our high-speed spectrometer; the sensitivity of our line scan camera drops by at least a factor of two above 850 nm. In a conventional HR-OCT system, the highest documented A-line rate is 250 A-lines per second, with a power incident on the cornea of 500-800 µW, resulting in a sensitivity of 95 dB. The A-line length was equal to 1-2.8 mm. [5] The effective A-line rate of our SD-OCT system was 14,600 A-lines per second, and the measured sensitivity was 89.6 dB, with a power of 395 ± 5 µW incident on the cornea. Comparing the two systems, the spectral-domain OCT system was 59 times faster at a 5.4 dB lower sensitivity.
Conclusion
The axial resolution of a TD-OCT system increases with bandwidth, but its SNR is inversely proportional to the bandwidth. Since the source bandwidth does not affect the SNR performance of an SD-OCT configuration, SD-OCT is preferred for ultrahigh-resolution ophthalmic imaging. However, ultra-broadband sources induce more dispersion than standard broadband sources. With dispersion compensation in software, we managed to reduce dispersion artifacts significantly. After dispersion compensation, the coherence length measured from a mirror in air was equal to 4.0 µm (n = 1). The dispersion-compensated axial resolution obtained from a reflecting spot in the fovea was equal to 3.5 µm (n = 1.38). To our knowledge, this is the first coherence length measured in the human eye in vivo. The combination of high axial resolution and a high data acquisition rate allows us to identify features that have not been seen in the human retina before with OCT. Movies at 29.3 frames per second with 500 A-lines per frame seem to indicate two layers at the location of the RPE, as well as previously unseen small structures in the two plexiform layers which, judging from their location, are believed to be blood vessels. Comparing the performance of our ultrahigh-resolution SD-OCT system with a conventional high-resolution time-domain OCT system, the A-line rate of the spectral-domain OCT system was 59 times higher at a 5.4 dB lower sensitivity.
Fig. 3. Coherence function obtained from a mirror in air. Uncompensated data (red) is compared with a coherence function after dispersion compensation (black). The density of points was increased by a factor of 8 using a zero-padding technique.
Fig. 4. Coherence functions obtained from a mirror at different path length differences z. The coherence function at z = 700 µm was dispersion compensated, and the data of all other curves was multiplied with the same phase e^{−iθ(k)} before Fourier transformation. The coherence length for path length differences up to 1200 µm was 4.0 µm in air, 4.1 µm for z = 1700 µm and 4.3 µm for z = 2200 µm.
Fig. 7. Coherence function obtained from a reflective spot in the fovea. The coherence length is equal to 4.8 µm in air.
Fig. 8. Structural image of the fovea. The dimensions of each image are 6.2 x 1.2 mm. The slow axis in the movie scans over 3.1 mm. Click on the image to view the movie (2.4 s, 45 frames, 29.3 frames per second, 500 A-lines per frame, 2.3 MB).
Uplift and towers of states in warped throat
We investigate the connection between the distance conjecture and the uplift potential. For this purpose, we consider a concrete model, the warped deformed conifold embedded into Type IIB flux compactifications, with the uplift potential produced by $\overline{\rm D3}$-branes at the tip of the throat. Whereas various mass scales associated with towers of states can be found, it turns out that the lightest tower mass scale satisfies a scaling behavior with respect to the uplift potential, which is meaningful provided the number of $\overline{\rm D3}$-branes is nonzero. This indicates that the effective theory becomes invalid in the vanishing limit of the uplift potential through the descent of an infinite tower of states from UV, as predicted by the distance conjecture. Since too large an uplift potential is also problematic, due to the runaway behavior of the moduli potential as well as the sizeable backreaction of $\overline{\rm D3}$-branes, the uplift potential is bounded from both above and below. In a simple model like the KKLT or the large volume scenario, in which the non-perturbative effect is dominated by a single term, this bound can be rewritten as a bound on the size of the superpotential.
Introduction
The low energy effective field theory (EFT) often suffers from naturalness issues, as quantum corrections to a scalar mass or the cosmological constant are sensitive to much higher energy scales in the absence of some symmetry reason. Presumably, they appear problematic because of our ignorance of quantum gravity, in the context of which the notion of naturalness may change drastically. This has recently been one of the important topics in the swampland program, which aims to identify quantum gravity constraints on the low energy EFT in light of observations in string theory [1] (for reviews, see [2,3,4,5,6]).
Many of the conjectured criteria distinguishing theories that are consistent with quantum gravity (belonging to the 'landscape') from those that are not (belonging to the 'swampland') rely on the distance conjecture [7]. It states that the infinite distance limit of the scalar moduli space corresponds to a particular corner of the landscape, beyond which the EFT breaks down as an infinite tower of states descends from UV. A typical example of a tower of states is a set of Kaluza-Klein (KK) modes, which become light if the moduli determining the size of the extra dimensions are stabilized at infinitely large values.
Concerning the cosmological constant Λ, the naturalness of the extremely small and positive observed value, ∼ 10^{-120} m_Pl^4, where m_Pl = 1/√(8πG) is the reduced Planck mass, can be studied in the context of the distance conjecture by asking whether the mass scales of towers of states remain heavy enough to be decoupled from the EFT in the vanishing limit of Λ. Indeed, it was pointed out that without introducing negative tension objects, not only the realization of the de Sitter (dS) and the Minkowski vacuum [8], but also the scale separation between the KK mass scale and |Λ| in the anti-de Sitter (AdS) vacuum is challenging [9] in flux compactifications (see also [10] for earlier discussion). Such an observation motivated the conjecture that, for consistency with quantum gravity, the vanishing limit of Λ in AdS space corresponds to the infinite distance limit of the moduli space. Then there exists a tower of states with mass scale ∆m following the scaling behavior

∆m/m_Pl ∼ (|Λ|/m_Pl^4)^α ,    (1)

where α is some positive number [11]. Extending this 'AdS distance conjecture' (ADC) to dS space, we can predict the existence of a tower of light states in a universe with a small, positive Λ as we observe it. If the tower is identified with the KK modes, α is constrained to lie in the range 1/4 ≤ α ≤ 1/2 [12], which is obtained by combining the observational bound on the size of extra dimensions [13] and the Higuchi bound [14]. We remark here that the breakdown of the EFT in the Λ → 0 limit claimed by the ADC does not exclude the Minkowski vacuum from the landscape. The ADC just tells us the discontinuity between the Minkowski vacuum with exactly vanishing Λ and the (A)dS vacuum in the Λ → 0 limit: they are different branches of the space of vacua in the landscape, hence cannot be interpolated by an EFT consisting of a finite number of fields.
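To make the scaling (1) concrete, one can evaluate the predicted tower mass scale for the observed vacuum energy; the short computation below is our illustration (not part of the original argument) and shows that α = 1/4 puts ∆m near the meV scale, while α = 1/2 pushes it far below.

```python
m_pl_ev = 2.4e27          # reduced Planck mass [eV]
lam = 1e-120              # observed |Lambda| in units of m_Pl^4

for alpha in (0.25, 0.5):
    dm = (lam ** alpha) * m_pl_ev   # tower scale from dm/m_Pl ~ |Lambda|^alpha
    print(alpha, dm)                # alpha=1/4: ~2e-3 eV; alpha=1/2: ~2e-33 eV
```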
Meanwhile, there are several counterexamples in string models allowing the scale separation (see, for example, [15] and references therein). Moreover, in the language of the low energy effective supergravity, the size of Λ is determined by the amount of supersymmetry (SUSY) breaking. More concretely, if SUSY is unbroken, the universe is in the AdS vacuum with the smallest negative Λ (thus the largest |Λ|), given by |Λ| = 3 m_Pl^2 m_{3/2}^2, where m_{3/2} is the gravitino mass. When SUSY is broken by an F-term, a D-term, or the antibrane uplift, the universe can be in the Minkowski or the dS vacuum, as well as in an AdS vacuum with smaller |Λ|. Then it may be m_{3/2} rather than |Λ| that is connected to a tower of states and hence to the distance conjecture, as claimed in the 'gravitino distance conjecture' [16,17] (see [18] for earlier discussion and [19] for a study on the size of extra dimensions in view of m_{3/2}).
In order to resolve all such ambiguities, we need to investigate in more detail the connection between the mass scale of a tower of states and the various ingredients used to determine Λ in string models, i.e., fluxes, non-perturbative effects, and uplift. For this purpose, we consider a concrete model, the warped deformed conifold supported by background fluxes [20] and realized in the orientifold compactifications of Type IIB string theory [21], with the uplift produced by antibranes at the tip of the throat [22]. We point out in this article that the antibrane uplift, which plays the crucial role in realizing the metastable dS vacuum [23,24], can be easily connected to the distance conjecture. Indeed, it was already observed in [25] that when the throat is strongly warped, both the KK mass scale m_KK and the uplift potential V_up produced by $\overline{\rm D3}$-branes are redshifted in the same way, satisfying the scaling behavior m_KK ∼ (V_up)^{1/4}. This gives rise to the following questions, which we try to answer in this article:

• Can we find a similar scaling behavior when the throat is weakly warped? The scaling behavior in [25] tells us that m_KK and V_up^{1/4} depend on the stabilized values of the conifold modulus and the volume modulus in the same way. Whereas the value of the conifold modulus does not play a crucial role in the extremely weakly warped throat, the volume dependence still remains, from which we can find the scaling behavior between V_up and the tower mass scale associated with the bulk.
• Why is V_up produced by $\overline{\rm D3}$-branes directly connected to a tower mass scale? A tower mass scale like the string or the KK mass scale is determined by the geometry of the internal manifold, such as the size of the throat or the internal volume. Meanwhile, the warping of the internal manifold also regulates the size of the four-dimensional spacetime over which the $\overline{\rm D3}$-branes are extended, and thus the size of V_up. From this, we expect that the tower mass scale and V_up can be connected in a direct way, which will be explored in detail in this article.
Moreover, whereas the size of V_up is typically identified with the AdS vacuum energy density before uplift, this makes sense only for a tiny cosmological constant as we observe today. Since they have different origins and can differ in size in a vacuum of sizeable |Λ|, we need to distinguish them. In this sense, from the model building point of view, it is V_up rather than |Λ| or m_{3/2} which needs to be considered in connection with the distance conjecture. Indeed, the exponent 1/4 in the scaling behavior found in [25] originates from the fact that $\overline{\rm D3}$-branes are extended over the noncompact four-dimensional spacetime, which reminds us of the argument in [12] that the lower bound on α in (1), given by 1/4, is interpreted as the inverse of the number of noncompact spacetime dimensions. We also find that whereas the lightest tower mass scale obeys the scaling behavior with respect to V_up, away from this there always exists a tower of states satisfying α = 1/4, even though its mass scale may not be the lightest tower mass scale. We emphasize that the scaling behavior is physically meaningful only if the number of $\overline{\rm D3}$-branes is nonzero. This indicates a discontinuity between the exactly vanishing V_up in the absence of $\overline{\rm D3}$-branes and a nonzero but very tiny V_up, in the following sense. Suppose we construct some AdS vacuum by tuning the fluxes and non-perturbative effects but without using the uplift. Here the moduli determining the sizes of the throat and the overall internal volume are stabilized appropriately, so that all possible towers of states are heavy enough not to affect the low energy EFT. If we try to find an AdS vacuum with the same size of Λ using the uplift in addition, however, the stabilized values of the moduli are strongly restricted and do not allow a very tiny V_up, since otherwise there appears a tower of states which becomes extremely light, invalidating the EFT. Then we can say that these two AdS vacua with the same size of Λ are different branches in the space of vacua. The extension of the argument to Minkowski or dS space is straightforward: in the moduli space, the Minkowski vacuum stabilized by the fluxes and non-perturbative effects only (see, for example, [26]) is separated from that obtained by a tiny uplift of AdS, in which a tower of states becomes extremely light as well. Moreover, the uplift is an essential ingredient to realize the dS vacuum. Whereas V_up cannot be too large in order not to induce a sizeable backreaction of the $\overline{\rm D3}$-branes or a runaway behavior of the moduli potential, our discussion indicates that a too tiny V_up is also problematic. Hence, the size of the AdS cosmological constant before the uplift should not be too small, and a Minkowski minimum before the uplift is not allowed when we try to realize a dS vacuum with tiny Λ as we observe it.
The organization of this article is as follows. Section 2 consists of three parts. In Section 2.1, we review the essential features of the warped deformed conifold and discuss the meaning of strong and weak warping more carefully. In Section 2.2, we consider the string excitations and the KK modes as possible towers of states in string compactifications and present their mass scales in the strongly and weakly warped throat, respectively. In Section 2.3, we investigate whether these mass scales follow the scaling behavior with respect to V_up. In order to explore the scaling behavior in more detail, we do not impose the tuning between the superpotential and V_up required for a metastable dS vacuum with almost vanishing Λ. Indeed, there is a priori no reason that the AdS cosmological constant determined by combining various superpotential terms of different origins, namely fluxes supporting the warped throat, fluxes supporting the bulk, and non-perturbative effects, must be almost the same as V_up in size. Meanwhile, as observed in Section 3, the superpotential and V_up are not completely unrelated, but are required to satisfy certain inequalities for consistency. First, regardless of the sign and the size of Λ, for the EFT we use to be reliable, it should be protected from the effects of towers of states. Therefore, the masses of the moduli under consideration must be lighter than the lightest tower mass scale, which imposes a lower bound on V_up through the scaling behavior. We discuss this constraint by considering the conifold modulus mass in Section 3.1 and the gravitino mass as well as the volume modulus mass in Section 3.2, respectively. In particular, the condition concerning the gravitino mass imposes an inequality that the superpotential and V_up must obey. Second, as discussed in Section 3.3, in a simple model like the KKLT [23] or the large volume scenario [24], V_up should not be too large compared to the size of the AdS cosmological constant before uplift, since otherwise the moduli potential shows a runaway behavior and the moduli are destabilized. All the discussions above can be rewritten as constraints on the superpotential: the various terms in the superpotential must be tuned such that, when they are summed up, the conditions considered in Section 3 are not violated. After emphasizing this, we close our discussion with concluding remarks. The appendices are devoted to reviews of the Klebanov-Strassler throat, the form of V_up describing the brane/flux annihilation, and the coefficient of the Gukov-Vafa-Witten superpotential, results of which are used throughout this article.
Notes on conventions
Throughout the article, we focus on Type IIB superstring theory, on which models like the KKLT [23] or the large volume scenario [24] are based. The string length scale is defined as ℓ_s = 2π√α′, the inverse of which, m_s = ℓ_s^{-1}, corresponds to the string mass scale. Our conventions follow the bosonic part of the Type IIB supergravity action written in the Einstein frame, where the string coupling constant g_s = e^{Φ_0} ≡ e^{⟨Φ⟩} is fixed by the dilaton stabilization. The Einstein frame metric is related to that in the string frame by the standard Weyl rescaling, G^{E}_{MN} = e^{-(Φ-Φ_0)/2} G^{string}_{MN}.

Connection between tower mass scale and uplift
Warped deformed conifold
To begin with, we consider the ten-dimensional metric

ds² = e^{2A(y)} e^{2Ω(x)} g_{µν} dx^µ dx^ν + e^{−2A(y)} g_{mn} dy^m dy^n ,

where e^{2Ω(x)} is a Weyl factor that can be chosen freely and g_{mn} is the metric of the Calabi-Yau threefold in which the deformed conifold (also known as the Klebanov-Strassler throat) is embedded (see Appendix A for a review of the throat geometry). The warp factor A(y) is obtained by solving the equation of motion sourced by the three-form fluxes and localized objects, where the tilde on the Laplacian and the upper indices indicate that the metric g_{mn} rather than G_{mn} = e^{−2A} g_{mn} is used. Since the equation of motion is invariant under the rescaling g_{mn} → λ g_{mn} and e^{2A} → λ e^{2A}, as well as under a y-independent shift of e^{−4A} [27], one may choose both λ² and the shift to be the same function of x, σ(x), such that the metric above is rewritten as [28,29]

ds² = e^{2A(y)} e^{2Ω(x)} g_{µν} dx^µ dx^ν + e^{−2A(y)} σ(x)^{1/2} g_{mn} dy^m dy^n ,

where the warp factor is given by

e^{−4A(x,y)} = 1 + e^{−4A_0(y)}/σ(x) ,

which is often denoted by h(y). Then σ(x) is interpreted as the volume modulus, the stabilization of which fixes the size of the overall internal volume. While e^{−4A_0} ≃ 0 in the bulk region, e^{−4A_0} near the tip of the throat is given by

e^{−4A_0}(y) = 2^{2/3} (α′ g_s M)² ǫ^{−8/3} I(η) ,

with I(η) an O(1) profile function of the throat coordinate η. Since the conifold deformation parameter ǫ has mass dimension −3/2, it is convenient to introduce the dimensionless parameter |z| = ǫ²/ℓ_s³. From √−G = e^{−2A} e^{4Ω} σ^{3/2} √−g_4 √g_6 and the fact that the four-dimensional part of the Ricci scalar is given by e^{−2A} e^{−2Ω} R_4, one finds that the coefficient of the four-dimensional Einstein-Hilbert term is proportional to e^{2Ω} σ^{3/2} ∫d⁶y √g_6 e^{−4A}. It is then convenient to choose the Weyl factor to be

e^{2Ω(x)} = V_0 / σ(x)^{3/2} ,

where V_0 = ⟨σ⟩^{3/2}, such that ⟨e^Ω⟩ = 1. We may also rescale the coordinates and σ(x) such that ∫d⁶y √g_6 e^{−4A} = ℓ_s⁶ is satisfied, hence the internal volume in units of the string length is simply written as V_0 = ⟨σ⟩^{3/2}. In this case, the gravitational coupling in four dimensions is given by m_Pl² = (4π V_0/g_s²) m_s², which also reads

m_s/m_Pl = g_s/(4π V_0)^{1/2} .    (12)

This will be used throughout this article to convert mass scales in terms of m_s into those in terms of m_Pl. Meanwhile, we will observe the behaviors of the potential and the particle spectrum in two limits, the strongly and the weakly warped throat. For this purpose, we need to specify the dominant effects in the strongly (weakly) warped throat more carefully. Through the moduli stabilization, the parameters σ and |z| ≡ ǫ²/ℓ_s³ are fixed at ⟨σ⟩ = V_0^{2/3} and ⟨|z|⟩ = Λ_0³ exp[−2πK/(g_s M)], respectively. Then we can say the throat is strongly warped if

|z|^{2/3} V_0^{1/3} ≪ g_s M/(2π)²    (13)

is satisfied (see also (7)). When the inequality is reversed, the throat may be said to be weakly warped. One caveat here is that the term containing e^{−4A_0}/σ ∼ (g_s M)²/[(2π)⁴ |z|^{4/3} V_0^{2/3}], which we will call the 'warping term', is not always subdominant in the weak warping case defined in this way. To see this, we note that the F-term potential for |z| produced by the fluxes is proportional to the inverse of the Kähler metric K_{zz̄} (see (38) and also Appendix A.3 for a review), which contains, schematically,

log(Λ_0³/|z|) + c (g_s M)²/[(2π)⁴ |z|^{4/3} V_0^{2/3}]

with c an O(1) constant. In the |z| → 0 limit, the condition (13) is satisfied, and the term in the parenthesis is evidently dominated by the second, warping term. Meanwhile, in the opposite limit |z|/Λ_0³ → 1, the warping term again dominates over the first, logarithmic term log(Λ_0³/|z|), which can be identified with the throat length η_UV (see the discussion below (80)), as η_UV becomes close to 0.
Indeed, the combination |z|^{2/3} [log(Λ_0³/|z|)]^{1/2} is maximized at |z|/Λ_0³ = e^{−3/4} ≃ 0.47, where it takes the value [3^{1/2}/(2e^{1/2})] Λ_0² ≃ 0.53 Λ_0². The logarithmic term then dominates over the warping term when |z| lies in a range around |z| = e^{−3/4} Λ_0³, whose lower (upper) bound becomes closer to zero (one) as V_0 gets larger.
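This maximization is elementary but easy to verify numerically; the sketch below (our illustration) scans u = |z|/Λ_0³ and confirms the maximum of u^{2/3}[−log u]^{1/2} at u = e^{−3/4} with value ≈ 0.53, in units of Λ_0².

```python
import numpy as np

u = np.linspace(1e-6, 0.999, 200000)
f = u**(2/3) * np.sqrt(-np.log(u))   # |z|^{2/3} [log(Lambda0^3/|z|)]^{1/2} / Lambda0^2

i = np.argmax(f)
print(u[i], np.exp(-0.75))           # both ~0.472
print(f[i])                          # ~0.525, i.e. sqrt(3)/(2*sqrt(e))
```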
If we restrict our attention to the case in which |z| is so small that |z|^{2/3} V_0^{1/3} ≪ (g_s M)/(2π)² is satisfied (strong warping in the sense of (13)), we may define the strongly warped throat in a more restrictive way by imposing

|z|^{2/3} V_0^{1/3} ≪ (g_s M)/[(2π)² η_UV^{1/2}] ,    (15)

i.e., the dominance of the warping term over the logarithmic term in K_{zz̄}, as considered in [30].
That is, the upper bound on the combination |z|^{2/3} V_0^{1/3} in the strongly warped throat is more restricted by the factor 1/η_UV^{1/2}, which is smaller than 1 for |z|/Λ_0³ ≪ 1. In the same way, the throat in which

|z|^{2/3} V_0^{1/3} ≳ (g_s M)/[(2π)² η_UV^{1/2}]    (16)

is satisfied also belongs to the weakly warped throat. Similarly, we can divide the case of the 'extremely weakly warped throat', in which (2π)² |z|^{2/3} V_0^{1/3}/(g_s M) is similar to or even larger than 1, so that the warping term is at most of order one and e^{−4A} takes a value around 1, into two classes: a) the warping term is subdominant compared to log(Λ_0³/|z|) = η_UV, and b) |z|/Λ_0³ is so close to 1 that the warping term dominates over the logarithmic term in K_{zz̄}. In this article, we mainly focus on the strongly (weakly) warped throat in the sense of (15) ((16)) and discuss the 'extremely weakly warped throat' separately.
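As an illustration of how these regimes depend on the flux numbers, the snippet below evaluates the combination appearing in (13) for sample values of g_s M, K and V_0; the chosen numbers are ours and serve only to show that modest changes in K move the throat between the strongly and weakly warped regimes.

```python
import math

def conifold_z(K, gsM, Lambda0=1.0):
    # Stabilized conifold modulus |z| = Lambda0^3 * exp(-2*pi*K/(g_s M)).
    return Lambda0**3 * math.exp(-2 * math.pi * K / gsM)

gsM, V0 = 20.0, 1e3                     # sample flux combination and volume
for K in (8, 16, 32):
    z = conifold_z(K, gsM)
    lhs = z**(2/3) * V0**(1/3)          # combination appearing in (13)
    rhs = gsM / (2 * math.pi)**2        # ~0.51 for g_s M = 20
    print(K, z, "strongly" if lhs < rhs else "weakly", "warped")
```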
Mass scales of towers of states
In this section, we explore the possible towers of states in the compactification of Type IIB string theory and their mass scales in the presence of the warped deformed conifold. First of all, the string excitations produce a tower of states with the mass scale m_s given by (12). Moreover, compactifying the ten-dimensional theory on a six-dimensional manifold naturally introduces a tower of states consisting of the KK modes. From the Laplacian associated with the metric (5), and the facts that ⟨e^Ω⟩ = 1 and V_0 = ⟨σ⟩^{3/2}, one finds that the KK mass scale is given by

m_KK ∼ e^{2A} σ^{−1/4}/(2πR) ,    (17)

where R is the typical length scale of the internal manifold.
For the KK modes in the bulk, e^{−4A} = 1 and 2πR = ℓ_s = m_s^{−1} are taken, and their mass scale is estimated as

m_KK ∼ m_s/V_0^{1/6} .    (19)

On the other hand, the mass scale of the KK modes localized near the tip of the throat is redshifted by the warp factor. Near the tip, the deformed throat is equivalent to S³ × S² × R (see (75)), and the typical length scale of the S³ is set by (α′ g_s M)^{1/2} [31]. Since we are primarily interested in the case of e^{−4A_0}/σ ≳ 1, we focus on the lowest KK mass scale in the presence of the warped throat, given up to O(1) factors by

m^w_KK ∼ [(2π)²/(g_s M η_UV)] |z|^{1/3} V_0^{1/6} m_s .    (20)

We now consider the extremely weakly warped throat, in which (2π)² |z|^{2/3} V_0^{1/3}/(g_s M) ≳ 1; then the mass scale of the KK modes localized in the throat is given by

m^{ew}_KK ∼ m_s/(η_UV |z|^{1/3} V_0^{1/6}) .    (21)

Imposing the extremely weak warping condition, the upper bound is m^{ew}_KK ≲ 2π m_s/[η_UV (g_s M)^{1/2}]. On the other hand, the condition g_s M > (2π)² can be imposed in addition, from the requirement that the squared length scale of the deformed conifold, α′ g_s M (see (77)), is larger than ℓ_s² for the metric to be a valid supergravity description [31]. Then, for η_UV of at least order a few, the upper bound on the ratio m^{ew}_KK/m_s is typically smaller than 1.
It is worth observing how the structure of the metric is reflected in that of the KK mass scale given by (17). Expressing the ten-dimensional metric in the more comprehensive form

ds² = e^{2Ω_4(x,y)} g_{µν} dx^µ dx^ν + e^{2Ω_6(x,y)} g_{mn} dy^m dy^n ,

with e^{Ω_4} = e^{A} e^{Ω} and e^{Ω_6} = e^{−A} σ^{1/4}, one immediately infers from (17) that the KK mass scale is written as m_KK ∼ e^{Ω_4}/(e^{Ω_6} R), where R ∼ η_UV ǫ^{2/3} = η_UV |z|^{1/3} ℓ_s. This evidently shows that e^{Ω_4} = e^{A} corresponds to the redshift factor. When the throat is warped such that the warping term is at least of order one, e^{Ω_6} ≃ e^{−A_0} ∼ (g_s M)^{1/2}/[(2π)|z|^{1/3}], in which the factor |z|^{−1/3} in e^{Ω_6} is cancelled by the factor |z|^{1/3} in R (see the discussion above (77)). Then the KK mass scale is estimated as m_KK ∼ [(2π)²/(g_s M η_UV)] |z|^{1/3} V_0^{1/6} m_s, which coincides with (20). In contrast, when the throat is extremely weakly warped, e^{A} is no longer a nontrivial redshift factor, since e^{−4A} ≃ 1.
Uplift potential and a tower of states
We now put $\overline{\rm D3}$-branes at the tip of the throat. This breaks SUSY and can be used to uplift the potential for the volume modulus σ(x) to a metastable dS vacuum. The uplift potential is given by the sum of the DBI action and the Chern-Simons term. Since these two are the same in magnitude, we obtain

V_up = 2p T_3 (√−γ/√−g_4) = 2p T_3 e^{4Ω_4} ,    (25)

where p is the number of $\overline{\rm D3}$-branes, T_3 is the $\overline{\rm D3}$-brane tension, and γ is the induced metric on the $\overline{\rm D3}$-branes. If the $\overline{\rm D3}$-branes are extended over the noncompact four-dimensional spacetime, the induced metric is given by ds²_{D3} = e^{2Ω_4(x,y)} g_{µν} dx^µ dx^ν = e^{2A(y)} e^{2Ω(x)} g_{µν} dx^µ dx^ν, which gives V_up = 2p T_3 e^{4A} e^{4Ω}, where e^{4A} corresponds to the redshift factor. We note that the factor 4 in the exponent of e^{4Ω_4(x,y)} comes from the four noncompact dimensions over which the $\overline{\rm D3}$-branes are extended. As sketched in Appendix B, the same result is obtained in the context of the brane/flux annihilation, which is described by the polarization of an NS5-brane wrapping the S³ part of the throat [22,32]. When the throat is warped such that the warping term is at least of order one,

V^w_up ∼ (4πp/g_s) [(2π)⁴ |z|^{4/3} V_0^{2/3}/(g_s M)²] m_s⁴    (26)

at ⟨e^Ω⟩ = 1, or equivalently V^w_up/m_Pl⁴ ∼ (2π)⁴ g_s³ p |z|^{4/3}/[4π (g_s M)² V_0^{4/3}]. As pointed out in [25], V^w_up depends on V_0 and |z| in the same way as (m^w_KK)⁴. More concretely, comparing (26) with (20), we obtain the scaling behavior, up to O(1) factors,

m^w_KK ∼ (g_s M² p)^{−1/4} η_UV^{−1} (V^w_up)^{1/4} .    (27)

We note that, in addition to the power-law dependence on |z| and V_0 which is relevant to the scaling behavior, m^w_KK also contains the logarithmic term η_UV = log(Λ_0³/|z|). Indeed, whereas V_up is generated by $\overline{\rm D3}$-branes localized at the tip of the throat and redshifted by e^{4A}, the KK mass scale is determined by the overall size of the throat, η_UV, with the same redshift effect.
Such a simple scaling behavior also appears in the extremely weakly warped throat, but the associated tower mass scale is not the throat KK mass scale. To see this, we compare V^{ew}_up ∼ 4πp m_s⁴/g_s obtained from (25) (⟨e^{Ω(x)}⟩ = 1 is used) with the throat KK mass scale given by (18), m⁴_KK ∼ e^{4A} R^{−4} ∼ e^{4A} (e^{Ω_6} |z|^{1/3} η_UV)^{−4} m_s⁴. We have seen that as the warping gets stronger, the combination e^{Ω_6} |z|^{1/3} = e^{−A} V_0^{1/6} |z|^{1/3} becomes independent of both V_0 and |z|; hence, away from η_UV, m⁴_KK depends on V_0 and |z| through the combination e^{4A} m_s⁴, just like V_up. In contrast, when we estimate the throat KK mass scale for the extremely weakly warped throat, e^{−A} ≃ 1 no longer cancels V_0^{1/6} and |z|^{1/3}. Indeed, the uplift potential in the extremely weakly warped throat is written as

V^{ew}_up = 4πp (m_s⁴/g_s) e^{4Ω} ,    (28)

such that V^{ew}_up/m_Pl⁴ = [g_s³/(4π)] p/V_0². While this expression presumes that the $\overline{\rm D3}$-branes are localized at the tip of the throat, this is in fact not well guaranteed in the extremely weakly warped throat. Indeed, the position of the $\overline{\rm D3}$-branes in the throat can be found from the value of η at which V^{ew}_up, with the η dependence restored through e^{4A(η)} = [1 + e^{−4A_0(η)}/σ]^{−1}, is stabilized. Since this is a monotonically increasing function of η, the $\overline{\rm D3}$-branes are stabilized at η = 0 classically. On the other hand, as the throat is extremely weakly warped such that the warping term is much smaller than one for any value of η, the increasing rate is also suppressed, so as a function of η, the position of the $\overline{\rm D3}$-branes in the throat, V^{ew}_up corresponds to a very shallow potential. Then, quantum mechanically, the probability that the $\overline{\rm D3}$-branes are located in another region of the throat, or even outside the throat, is not negligible. Nevertheless, as we learn from basic quantum mechanics, a bound state must exist even when the depth of the potential is very tiny, in which case the probability of finding the $\overline{\rm D3}$-branes inside the potential well is still larger than that of finding them outside the throat. Moreover, we also expect that the probability is maximized at η = 0, the tip of the throat, since this is the point at which V^{ew}_up is stabilized classically. This may become invalid if |z|/Λ_0³ becomes close to one, or equivalently η_UV ≃ 0, in which case the throat region terminates even before the potential increases by a meaningful amount, so the dominance of the probability of finding the $\overline{\rm D3}$-branes at the tip of the throat is not so strong. Therefore, in the following discussion, we focus on the case in which η_UV is still sizeable, so that the localization of the $\overline{\rm D3}$-branes at the tip of the throat is relatively reliable.
From (28), one finds that V^{ew}_up depends on σ(x) only through e^{4Ω(x)} = V_0²/σ(x)³, so unlike V^w_up, which is proportional to σ^{−2}, V^{ew}_up ∝ σ^{−3}. We note that since ⟨e^Ω⟩ = 1, after the stabilization of σ(x), V^{ew}_up can be written as 4πp (m_s⁴/g_s), which evidently shows that V^{ew}_up is independent of |z|. This explains why V^{ew}_up is not simply related to m^{ew}_KK or m^{ew}_W through the scaling behavior. Instead, we can find two possible towers of states which satisfy the scaling behavior with respect to V^{ew}_up. The first one is the string excitations: the string mass scale m_s satisfies the scaling behavior

m_s/m_Pl ∼ (V^{ew}_up/m_Pl⁴)^{1/4} .    (29)

This reflects the fact that when we fix m_Pl, m_s becomes light in the V_0 → ∞ limit. Indeed, as m_s → 0, the D-brane tension also decreases in size, which makes V^{ew}_up, given by the energy stored in the $\overline{\rm D3}$-branes, smaller. Another one is the bulk KK mass scale given by (19), satisfying

m_KK/m_Pl ∼ (V^{ew}_up/m_Pl⁴)^{1/3} .    (30)

This is not strange, because m_KK becomes light as V_0 increases. In any case, the towers of states satisfying the scaling behavior with respect to V^{ew}_up are associated with the overall internal volume V_0, not with the throat geometry. Moreover, as we discussed in Section 2.2, in the extremely weakly warped throat m_KK is typically lighter than m^{ew}_KK, so we can say that in both the strongly and the (extremely) weakly warped throat, the lightest tower mass scale obeys the scaling behavior with respect to V_up.
It is remarkable that we can always find a tower of states satisfying ∆m/m_Pl ∼ (V_up/m_Pl⁴)^{1/d}, where d is the number of noncompact spacetime dimensions. Here ∆m corresponds to the throat KK mass scale when the warping term is at least of order one, and to the string mass scale for the extremely weakly warped throat. We also emphasize that the scaling behavior makes sense only if the number of $\overline{\rm D3}$-branes is nonzero, i.e., p ≠ 0. Indeed, the existence of a tower of states is a priori irrelevant to the presence of the uplift. In the absence of the uplift, i.e., when p = 0, the spacetime geometry is given by the AdS vacuum, which is a well-defined four-dimensional supergravity solution so far as ∆m is well separated from the gravitino mass scale or the masses of the moduli under consideration. In the presence of the uplift potential, the scaling behavior becomes meaningful and indicates that the vacuum constructed by taking the V_up → 0 limit corresponds to the infinite distance limit of the moduli space. That is, the values of the stabilized moduli consistent with the V_up → 0 limit allow a tiny tower mass scale, indicating the descent of a tower of states from UV, as claimed in the distance conjecture.
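The two exponents can be exhibited numerically: using only the V_0 dependences quoted above (m_s ∝ V_0^{−1/2}, m_KK ∝ V_0^{−2/3} and V^{ew}_up ∝ V_0^{−2} in Planck units), the fitted slope of log(mass) against log(V_up) comes out as 1/4 and 1/3, respectively. The snippet below is our illustration; all prefactors are set to one.

```python
import numpy as np

V0 = np.logspace(2, 6, 50)          # internal volume in string units
m_s = V0 ** -0.5                    # string scale, in m_Pl units
m_kk = V0 ** (-2.0 / 3.0)           # bulk KK scale, in m_Pl units
V_up = V0 ** -2.0                   # extremely-weak-warping uplift, in m_Pl^4

alpha_s = np.polyfit(np.log(V_up), np.log(m_s), 1)[0]    # -> 0.25
alpha_kk = np.polyfit(np.log(V_up), np.log(m_kk), 1)[0]  # -> 0.333...
print(alpha_s, alpha_kk)
```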
Constraints on superpotential and uplift
Whereas the potential produced by the fluxes and non-perturbative effects, which we denote by V_AdS, stabilizes the volume modulus σ(x) in an AdS minimum (or possibly a (meta)stable Minkowski vacuum) with |Λ| given by |⟨V_AdS⟩| ≡ |V_AdS|, the dS or the Minkowski vacuum, as well as an AdS vacuum with smaller |Λ|, can be realized by adding the uplift potential V_up to V_AdS. As we have seen in the previous section, the mass scale of a tower of states obeys the scaling behavior with respect to V_up, implying that when V_up becomes too small, a tower of states descends from UV, invalidating the EFT. In other words, the AdS vacuum determined by V_AdS cannot be perturbed by a tiny uplift effect. On the other hand, V_up is required not to be too much larger than the sum of |V_AdS| and V_h, the height of the potential V_AdS at the local maximum, since otherwise the combined potential V_AdS + V_up does not have local minima but shows a runaway behavior. Indeed, when V_up becomes too large, the backreaction of the antibranes on the background geometry is no longer negligible. The sum |V_AdS| + V_h is more or less comparable to |V_AdS| in simple models like the KKLT or the large volume scenario, where the non-perturbative effect is dominated by a single term (in fact, in the KKLT scenario, V_AdS does not have a local maximum). In this case, while a V_up almost comparable to |V_AdS| can be used to realize the Minkowski or the (A)dS vacuum with tiny Λ, it should not be much larger than |V_AdS|, say O(10) × |V_AdS|. In the presence of more than two comparable non-perturbative effects, tuning between them may allow the sum |V_AdS| + V_h to be much larger than |V_AdS|. In this article, we restrict our attention to the simple case where the condition V_up ≲ O(10) × |V_AdS| is imposed. We also note that whereas the models we are considering aim to realize the metastable dS vacuum, we do not insist on this goal, but allow the AdS and the Minkowski vacuum, as well as a dS vacuum with sizeable Λ. From this, we investigate the range of the superpotential consistent with the bounds on V_up for a valid EFT description.
Meanwhile, the size of |V_AdS| cannot be larger than the supersymmetric vacuum energy given by Λ_SUSY = 3 m_Pl² m_{3/2}², where m_{3/2} is the gravitino mass. Since V_AdS is the F-term potential produced by the fluxes and non-perturbative effects, the size of |V_AdS| is determined by the amount of SUSY breaking parametrized by the F-term. In the minimal model of the KKLT scenario [23], the F-term vanishes, so V_AdS stabilizes σ(x) in the supersymmetric AdS minimum satisfying V_AdS = −Λ_SUSY. In this case, SUSY is broken by V_up only. In the large volume scenario [24], on the other hand, whereas the largest mass scale of the volume modulus is around m_{3/2} [33], the F-term has a nonzero vacuum expectation value, so |V_AdS| is suppressed compared to Λ_SUSY by log V_0/V_0. Then we can impose the minimal conditions on V_up and the superpotential in a simple model like the KKLT or the large volume scenario, given by

∆m > m_{3/2} and V_up ≲ O(10) × |V_AdS| ,    (32)

which may be further restricted depending on the model. For instance, the lower bound on ∆m is expected to be the heaviest mass among the moduli under consideration. We note that if the non-perturbative effects are tuned such that |V_AdS| + V_h becomes much larger than |V_AdS|, this introduces a mass scale m_h defined by m_h² ∼ (|V_AdS| + V_h)/m_Pl². Then one may expect that the volume modulus mass is not much enhanced compared to m_h, from which a constraint of the form (32) with m_{3/2} replaced by m_h can be imposed. This is quite model dependent, so we do not explore it in more detail.
As we have seen in Section 2.3, we can always find a tower of states obeying the scaling behavior with respect to V_up. When the throat is warped such that the warping term is at least of order one, the throat KK mass scale m^w_KK is the lowest tower mass scale and, at the same time, scales like m^w_KK ∼ (V^w_up)^{1/4}. Then the first condition in (32) reads (V^w_up)^{1/4} > m_{3/2}. For the extremely weakly warped throat, the bulk KK mass scale is typically the lowest tower mass scale. Since it satisfies m_KK ∼ (V^{ew}_up/m_Pl⁴)^{1/3} m_Pl, the first condition in (32) is equivalent to (V^{ew}_up/m_Pl⁴)^{1/3} m_Pl > m_{3/2}. In any case, combined with the second condition in (32), one finds that the uplift potential is bounded from both above and below.
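For orientation, the two conditions in (32) carve out a finite window for V_up; the toy numbers below are ours and purely illustrative, assuming a KKLT-like setup with |V_AdS| = Λ_SUSY = 3 m_Pl² m_{3/2}² and the strongly warped lower bound (V_up)^{1/4} > m_{3/2}.

```python
m32 = 1e-8                  # sample gravitino mass, in units of m_Pl
lower = m32 ** 4            # from (V_up)^{1/4} > m_{3/2}, strongly warped case
upper = 10 * 3 * m32 ** 2   # from V_up < O(10) * |V_AdS| with |V_AdS| = Lambda_SUSY
print(lower, upper)         # 1e-32 << 3e-15 in m_Pl^4: a wide but finite window
```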
To see the behavior of m_{3/2} more concretely, we first note that, given the Kähler potential of the model, the flux-induced Gukov-Vafa-Witten (GVW) superpotential takes the form W_GVW ∝ ∫ G_3 ∧ Ω [34], the coefficient of which is obtained by matching the flux term in the Type IIB supergravity action with the form of the F-term potential [33] (see also [30]), as reviewed in Appendix C. Then m_{3/2} is given by m_{3/2} = e^{K/(2m_Pl²)} |W|/m_Pl², where we replace the factor of m_Pl multiplying Ω by m_s, as the complex structure moduli in Ω are typically written in units of ℓ_s, just like ǫ² = ℓ_s³ z, giving i∫Ω ∧ Ω̄/ℓ_s⁶ ∼ O(1) under the normalization ∫d⁶y √g_6 e^{−4A} = ℓ_s⁶. Moreover, even though their origins are different, the coefficients of the non-perturbative effects are written in the same way as the coefficient of the GVW superpotential for convenience. For later use, we define the dimensionless part of the superpotential, which we denote by Ŵ. Indeed, the size of the GVW superpotential can be tuned by adjusting the amount of the harmonic (0,3)-form in G_3, which eventually determines the size of Λ_SUSY. Such a tuning can be easily analyzed by considering Ŵ.
Any model involving the warped deformed conifold also requires the stabilization of the conifold modulus $z$, the complex structure modulus determining the size of the warped throat. Given the $z$-dependent part of the Kähler potential, the F-term potential for $z$ follows as in (38) (see Appendix A.3 for a review). This shows that the conifold modulus is stabilized at $|z| = \Lambda_0^3 \exp[-2\pi K/(g_s M)]$. In addition, the mass of the conifold modulus in the absence of uplift is given by (39).
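For orientation, a small worked example of this stabilization (the flux values below are hypothetical, chosen only to illustrate the exponential hierarchy):

% Flux-stabilized conifold modulus and throat length (illustrative numbers)
|z| = \Lambda_0^3\, e^{-2\pi K/(g_s M)},
\qquad
\eta_{\rm UV} \equiv \log\!\big(\Lambda_0^3/|z|\big) = \frac{2\pi K}{g_s M}.
% Example: g_s = 0.1,\ M = 20,\ K = 8
% \Rightarrow \eta_{\rm UV} = 2\pi\cdot 8/(0.1\cdot 20) = 8\pi \approx 25,
% \quad |z|/\Lambda_0^3 \approx e^{-25} \approx 10^{-11}.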
Strongly warped throat
We first consider the mass of the conifold modulus in the strongly warped throat, in the sense of (15). Since the term in the square brackets in (39) is dominated by the second, warping term, we obtain $m_z \sim |z|^{1/3}/V_0^{1/3}$. As noted previously, the typical length scale associated with $\Omega$ is $\ell_s$, so we expect $i\int \Omega \wedge \bar\Omega/\ell_s^6$ to be $O(1)$. Then $m_z$ depends on the same combination as the KK mass scale $m^w_{\rm KK}$ given by (20), obeying the relation $m_z \sim \eta_{\rm UV}\, m^w_{\rm KK}$. Since the throat is strongly warped, i.e., $\eta_{\rm UV} > 1$, the KK mass scale $m^w_{\rm KK}$ is typically lighter than $m_z$. This indicates that the EFT based on the four-dimensional supergravity description can be invalidated by the KK modes lighter than the conifold modulus. As claimed in [31], the light KK modes may still be compatible with the four-dimensional description of the stabilization of $|z|$ if the species cutoff, above which the gravitational coupling in loops becomes strong due to the large number of particle species [35,36], is adjusted such that the light KK modes merely rescale the Kähler metric $K_{z\bar z}$ through the one-loop correction. We note that whereas the scaling behavior $m_z \sim (g_s M^2 p)^{-1/4} (V^w_{\rm up})^{1/4}$ holds only for nonzero $p$, i.e., it is discontinuous between zero and nonzero $p$, the relation $m_z \sim \eta_{\rm UV}\, m^w_{\rm KK}$ is independent of the existence of the uplift potential.
We can understand the origin of the relation $m_z \sim |z|^{1/3}/V_0^{1/3}$ in the following way. When the throat is strongly warped, the Kähler metric $K_{z\bar z}$ is enhanced by the warp factor $e^{-4A} \simeq e^{-4A_0}/\sigma \sim |z|^{-4/3} V_0^{-2/3}$. In the F-term potential, the factors $e^{K}$ and $(K_{z\bar z})^{-1}$ provide $V_0^{-2}$ and $e^{4A}$, respectively, and the redefinition of $\sigma(x)$ for the canonical kinetic term gives an additional $(K_{z\bar z})^{-1} \sim e^{4A}$. Combining them, we have $V_0^{-2}(e^{4A})^2 \sim |z|^{8/3}/V_0^{2/3}$. Meanwhile, $D_z W$ contains $\log(\Lambda_0^3/|z|)$, which originates from the monodromy behavior of $z$ around $z \simeq 0$. Then $V_{z\bar z}$ is dominated by the term containing $|\partial_z(D_z W)|^2 \sim 1/|z|^2$ at the minimum of the potential. Combining all ingredients, one finds $m_z^2 \sim (|z|^{1/3}/V_0^{1/3})^2$, consistent with the explicit result. While this reflects the throat geometry significantly, we have not found a simple argument connecting it to $m_{\rm KK}$ or $V_{\rm up}^{1/4}$. Our discussion so far is based on the assumption that $p$, the number of anti-D3-branes, is not so large: the mass scales are then not much modified by $V_{\rm up}$, and the backreaction of the anti-D3-branes is negligibly small. As pointed out in [37], the sum of the $z$-dependent potential terms, $V_{\rm KS} + V^w_{\rm up}$ (see (26) for $V^w_{\rm up}$ and (38) for $V_{\rm KS}$), stabilizes $z$ at a corrected value, from which $p$ is restricted to be smaller than $g_s M^2$. When this bound is violated, $z$ is stabilized at $0$, giving $\eta_{\rm UV} \to \infty$, which is incompatible with a compact internal volume. Whereas it has been argued that such runaway behavior may not appear once off-shell contributions, i.e., quantum fluctuations around the stabilized values of the moduli, are taken into account [38], it remains true that the geometry and the potential are drastically changed by the backreaction of the anti-D3-branes for large $p$. We also note that $V_{\rm KS}$ and $V^w_{\rm up}$ have a similar structure: since both $e^{K/m_{\rm Pl}^2}$ and $m_s^4/m_{\rm Pl}^4$ are proportional to $1/V_0^2$, the factor $e^{K/m_{\rm Pl}^2}(K_{z\bar z})^{-1} \sim V_0^{-2} e^{4A}$ in $V_{\rm KS}$ and the factor $m_s^4 e^{4A}$ in $V^w_{\rm up}$ contain the common combination $|z|^{4/3}/V_0^{4/3}$. But $V_{\rm KS}$ also contains the $z$-dependent factor $|D_z W|^2$, which plays the crucial role in stabilizing $|z|$ at a nonzero value.
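The power counting in this argument can be collected in a single schematic line (our rendering; all $O(1)$ factors are dropped and $e^{4A}\sim|z|^{4/3}V_0^{2/3}$ is used):

% Schematic origin of m_z ~ |z|^{1/3}/V_0^{1/3} in the strongly warped regime
m_z^2 \;\sim\;
\underbrace{e^{K}}_{\sim V_0^{-2}}\,
\underbrace{(K_{z\bar z})^{-1}}_{\sim e^{4A}}\,
\underbrace{(K_{z\bar z})^{-1}}_{\text{canonical norm.},\ \sim e^{4A}}\,
\underbrace{\big|\partial_z(D_z W)\big|^2}_{\sim 1/|z|^2}
\;\sim\; \frac{|z|^{8/3}}{V_0^{2/3}}\cdot\frac{1}{|z|^2}
\;=\; \frac{|z|^{2/3}}{V_0^{2/3}} .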
Weakly warped throat
When the condition (16) is satisfied, the logarithmic term in $K_{z\bar z}$ dominates over the warping term. In the extremely weakly warped throat, the logarithmic term loses its dominance as $|z|/\Lambda_0^3 \to 1$, but at the same time the localization of the anti-D3-branes at the tip of the throat is no longer guaranteed. Therefore, in both cases, so far as a valid EFT is concerned, the logarithmic term is the leading term in $K_{z\bar z}$. From (39), the conifold modulus mass in these cases follows. Now we can compare this with the lowest mass scale in the weak and extremely weak warping cases, respectively. For the weakly warped throat in the sense of (16), the ratio $m^w_{\rm KK}/m_z \sim |z|^{4/3} V_0^{2/3}/(g_s M)^2$ (see (20)) is smaller than $1/(2\pi)^4$ but larger than $1/[(2\pi)^4 \eta_{\rm UV}]$. Since $m^w_{\rm KK}$ is lighter than $m_z$, just as in the strongly warped case, the stabilization of the conifold modulus in the four-dimensional supergravity description can be invalid unless the light KK modes below the species cutoff merely contribute to the rescaling of $K_{z\bar z}$. For the extremely weakly warped throat, the ratios $m^{ew}_{\rm KK}/m_z \sim |z|^{2/3} V_0^{1/3}/(g_s M)$ (see (21)) and $m_{\rm KK}/m_z \sim |z| V_0^{1/3} \eta_{\rm UV}/(g_s M)$ (see (19)) are similar to or larger than $1/(2\pi)^2$ and $|z|^{1/3}\eta_{\rm UV}/(2\pi)^2$, respectively, so, as is well known, $m_z$ can be lighter than the KK mass scale; this is trustworthy provided $|z|/\Lambda_0^3$ is not too close to 1.
Before discussing the correction to the stabilized value of $|z|$ by $V_{\rm up}$, we note that if we are also interested in the subleading terms in the derivatives of the potential with respect to $z$, neglecting the warping term from the beginning can be misleading. To see this, consider $dV_{\rm KS}/dz$ without assuming weak warping (see (38)). In the second parenthesis in the first line of (44), the $1/2$ and the term containing $(g_s M)^2/[V_0^{2/3}|z|^{4/3}]$ come from the derivative of $\log(\Lambda_0^3/|z|)$ and of the warping term with respect to $z$, respectively. While the latter is larger than the former under the condition (16), it cannot be obtained if we drop the warping term before taking the derivative. In order to compare the term in the first line with that in the second line, let us impose the weak warping condition and ignore the $1/2$ in the second parenthesis of the first line. Since the warping term is subleading compared to $\eta_{\rm UV} = \log(\Lambda_0^3/|z|)$ in the last line, we obtain an expression in which, in the square brackets, the first term comes from the dominant term of the first line (with the $1/2$ ignored) and the second term comes from the second line of (44). This evidently shows that the term in the second line of (44) is the most dominant: the logarithmic term quickly approaches zero around the minimum and its coefficient is smaller than one.
In any case, the dominant term in $d(V_{\rm KS} + V_{\rm up})/dz$ can be written down, and the stabilized value of $|z|$ is corrected so as to satisfy $d(V_{\rm KS} + V_{\rm up})/dz = 0$. It is convenient to parametrize the correction to $|z|$ by the shift in $\eta_{\rm UV}$, the exponent of $|z|$, which we denote by $\varepsilon$, as in (47). From (16), one finds that for the weakly warped throat $\varepsilon$ lies in a finite range; from its lower bound, we can say that the correction to the stabilized value of $|z|$ in the presence of the uplift is controllable provided $p < g_s M^2/\eta_{\rm UV}$, similar to the bound on $p$ in the strongly warped throat. For the extremely weakly warped throat, the EFT is reliable only for $\eta_{\rm UV}$ not too close to $0$. Taking the derivative of the second term in (28) with respect to $z$, we obtain an estimate of $\varepsilon$ as defined by (47). The term in the parenthesis on the RHS is similar to or smaller than $1$ (and also than $\eta_{\rm UV}$ for $\eta_{\rm UV} < 1$), so a value of $\varepsilon$ smaller than $1$ is allowed provided $p < (g_s M^2)/\eta_{\rm UV}$.
Gravitino mass
We now move on to the other EFT validity condition, $m_{3/2} < \Delta m$. When the throat is sufficiently warped, $m^w_{\rm KK}$ is typically the lowest tower scale, and this condition can be rewritten accordingly. Assuming $i\int\Omega\wedge\bar\Omega/\ell_s^6 \sim O(1)$, one finds that the fluxes and non-perturbative effects must be tuned such that $W$ satisfies at least the bound (53). The upper bound on $W$ can be understood as follows. The factor multiplying $W$ in $m_{3/2}$ comes from $e^{K/(2m_{\rm Pl}^2)}$, which is proportional to $1/V_0$, just like $m_s^2 \propto 1/V_0$. Moreover, while $e^{K/(2m_{\rm Pl}^2)}$ contains $g_s^{1/2}$, the coefficient of the GVW superpotential contains $g_s^{3/2}$, as can be found in (34) (see also the last expression in (110)), so $m_{3/2}$ is proportional to $g_s^2$. Since $m_s^2$ is also proportional to $g_s^2$, $m_{3/2}$ can be estimated as $m_{3/2} \sim (m_s^2/m_{\rm Pl})\, W$. Then the bound $m_{3/2} < m^w_{\rm KK}$ becomes (53). We note that the bound $m_{3/2} < m^w_{\rm KK}$ discussed above is just a minimal requirement; depending on the model, we need to check whether the other moduli under consideration are also lighter than $m^w_{\rm KK}$. For the complex structure moduli, as we have seen, the conifold modulus is typically heavier than $m^w_{\rm KK}$, which may invalidate the model.³ The mass scales of the Kähler moduli are also model dependent. In the large volume scenario, the non-perturbative effect is dominated by that of the small cycle modulus, stabilizing the overall volume modulus at a large value (exponential in the small cycle modulus). The overall volume modulus contributes to the potential through the Kähler potential only, hence each term in the potential shows a power-law dependence on the overall volume modulus. Then the moduli masses are similar to or even much lighter than $m_{3/2}$ [33]. Meanwhile, in the KKLT scenario, where the single volume modulus $\sigma$ is taken into account, $V_{\sigma\sigma} \sim m_{3/2}^2$, but the normalization for the canonical kinetic term introduces an additional factor $(K_{\sigma\sigma})^{-1} \sim \sigma^2$, so the volume modulus mass is enhanced by $\sigma$, i.e., $m_\sigma \sim \sigma\, m_{3/2} \sim V_0^{2/3} m_{3/2}$ [41].⁴ Requiring $m_\sigma < m^w_{\rm KK}$, the bound on $W$ is much more constrained, as in (54). In the extremely weakly warped throat, the lowest tower mass scale is given by the bulk KK mass scale, so the condition can be written as (55). From this we obtain (56), where the upper bound arises for the same reason as the bound on $W$ in the previous case. In the KKLT scenario, we can additionally impose the condition $m_\sigma \sim V_0^{2/3} m_{3/2} < m_{\rm KK}$, which provides the stronger bound $W < 1/(g_s V_0^{1/3})$.
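Schematically, the chain of estimates behind these bounds reads (our rendering; the precise expressions are (53)-(56) in the text):

% Gravitino mass estimate and the resulting bound on the dimensionless W
m_{3/2} \;\sim\; e^{K/(2m_{\rm Pl}^2)}\,\frac{|W_{\rm GVW}|}{m_{\rm Pl}^2}
\;\sim\; \frac{m_s^2}{m_{\rm Pl}}\, W ,
\qquad
m_{3/2} < m_{\rm tower}
\;\Longleftrightarrow\;
W \;<\; \frac{m_{\rm Pl}\, m_{\rm tower}}{m_s^2} .
% Here m_tower = m^w_KK (warped) or m_KK (extremely weakly warped).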
Preventing runaway
The condition $V_{\rm up} \lesssim O(10)\times(|V_{\rm AdS}| + V_h)$ is imposed to prevent the potential including the uplift from exhibiting runaway behavior. We first consider the case in which the throat is sufficiently warped. In the large volume scenario, SUSY is broken by the F-term as well as by $V_{\rm up}$, so $|V_{\rm AdS}| \sim \Lambda_{\rm SUSY}(\log V_0/V_0)$ is smaller than $\Lambda_{\rm SUSY}$. The condition then simplifies, where $i\int\Omega\wedge\bar\Omega/\ell_s^6 \sim O(1)$ is assumed. Combined with the condition $m_{3/2} < m^w_{\rm KK}$ given by (53), which is equivalent to the condition that the Kähler moduli are lighter than $m^w_{\rm KK}$, one obtains a two-sided bound on $|W|^2$. Comparing the lower and the upper bound, one finds a consistency inequality, which provides a bound on the number of anti-D3-branes $p$; this can be taken into account in addition to $p < g_s M^2$ obtained from (42). In fact, the LHS of the inequality is smaller than $p\,(g_s^2 M)\,\eta_{\rm UV}^2/[(2\pi)^2 \log V_0]$ times a small constant, say of $O(10^{-1})$.

³ The stabilization of the axio-dilaton $\tau$ depends on the complex structure moduli and fluxes in the model. If the low energy effective superpotential is simply linear in $\tau$, the $\tau$ mass can be light or can even destabilize the vacuum through mixing with the volume modulus [39,40].

⁴ We may understand why such an enhancement does not arise in the large volume scenario from the following toy example. For a potential term $V \sim e^{-a_4\tau_4}/\tau_5$, where $\tau_4$ is a small cycle modulus and $\tau_5$ is the overall volume modulus, $V_{\tau_5\tau_5} \sim e^{-a_4\tau_4}/\tau_5^3$, showing the same power-law dependence on $\tau_5$ as $V$ up to the trivial inverse powers from the derivatives. This can be contrasted with the KKLT-type potential $V \sim e^{-\sigma}/\sigma$, in which $V_{\sigma\sigma}$ is dominated not by $e^{-\sigma}/\sigma^{1+2}$ but by $e^{-\sigma}/\sigma$, as the derivatives with respect to $\sigma$ can act on the exponential term as well. Then $m_\sigma^2$ is enhanced by the canonical normalization factor $(K_{\sigma\sigma})^{-1} \sim \sigma^2$.
In the KKLT scenario, the F-term potential is stabilized at the supersymmetric AdS minimum, so the runaway is prevented when $V_{\rm up} \lesssim O(10)\times\Lambda_{\rm SUSY}$, which can be recast as an upper bound on $|W|^2$. We note that the RHS of the inequality is smaller than $p/[(2\pi)^2 g_s]$ by the warping condition. Combining this with (54), one obtains a two-sided bound on $W$, which is consistent only if a corresponding inequality is satisfied; this inequality can be interpreted as a bound on $p$. We now consider the extremely weakly warped throat. The runaway condition in the large volume scenario again gives an upper bound on $|W|^2$; combined with the condition $m_{3/2} < m_{\rm KK}$ given by (56), one obtains a two-sided bound that makes sense provided the associated consistency inequality holds. Meanwhile, for the KKLT scenario, the condition $V_{\rm up} \lesssim O(10)\times\Lambda_{\rm SUSY}$ becomes a volume-independent condition; combined with the bound $m_\sigma < m_{\rm KK}$, given by $W < 1/(g_s V_0^{1/3})$, we obtain a window for $W$ that is valid provided the corresponding bound on $p$ is satisfied.
Conclusions
In this article, we have investigated the connection between the uplift and the distance conjecture by considering a concrete model: the warped deformed conifold embedded into a Type IIB flux compactification, with the uplift produced by anti-D3-branes at the tip of the throat. Whereas various mass scales associated with towers of states can be found, it turns out that the lowest tower mass scale obeys a scaling behavior with respect to $V_{\rm up}$, which is meaningful only if the number of anti-D3-branes is nonzero. Then, in the $V_{\rm up} \to 0$ limit, the EFT becomes invalid through the descent of a tower of states from the UV, as the distance conjecture predicts. Since a too-large $V_{\rm up}$ is also not allowed in the EFT, due to the sizeable backreaction and the possible runaway behavior of the moduli potential, the size of $V_{\rm up}$ consistent with the EFT is bounded from both above and below. In simple models like the KKLT or the large volume scenario, in which the non-perturbative effect is dominated by a single term, this bound can be rewritten as a bound on the size of the superpotential. The bound we found can be more restrictive depending on the details of the model. For instance, if the mass of the volume modulus becomes very heavy due to a tuning between two or more non-perturbative terms in the superpotential, the tower mass scale obeying the scaling behavior with respect to $V_{\rm up}$ is required to be heavier than the volume modulus mass. At the same time, the bound on $V_{\rm up}$ in this case involves not $|V_{\rm AdS}|$ alone but the sum of $|V_{\rm AdS}|$ and the height of $V_{\rm AdS}$ at the local maximum, so it cannot be rewritten as a bound on the superpotential in a simple manner.
Another issue not addressed in this article is that, whereas we have simply assumed $e^{-4A} \simeq 1$ outside the throat, a too-large value of $g_s M K$ compared to $V_0$ can result in singular points in the bulk region at which $e^{-4A}$ becomes zero or even negative [42]. This leads to a serious control issue for the KKLT scenario with almost vanishing $\Lambda$. We may avoid this problem in the large volume scenario or in moduli stabilization in an (A)dS vacuum, but the corresponding constraints on $V_{\rm up}$ and the connection to the distance conjecture are left for future study.
On the other hand, when two or more throats are homologically related, the length of each throat is shorter than $\log(\Lambda_0^3/|z|)$, as the $H_3$-flux accumulated in each throat is smaller than $K$ [43,44]. Moreover, when anti-D3-branes are located at the tip of only one of the throats, that throat is no longer equivalent to the others [45]. Then, through brane/flux annihilation, the uplift potential as well as the throat geometry changes until SUSY is restored. In these cases, we may need to revisit the criterion distinguishing strong warping from weak warping. Moreover, the hierarchies between mass scales are not as simple as those discussed in this article. We expect that such nontrivial, model dependent features will be helpful for understanding the naturalness criterion for string models, especially those realizing the tiny cosmological constant we observe, in light of the distance conjecture.
A Review of the Klebanov-Strassler throat
In this appendix we summarize the features of the background geometry described by the Klebanov-Strassler throat [20], a noncompact, asymptotically conical solution of Type IIB supergravity supported by fluxes. Here $\epsilon$ parametrizes the deformation of the tip of the throat, i.e., the smoothing of the $S^3$ singularity of the $T^{1,1} \sim S^3 \times S^2$ base, described by $\sum_{A=1}^{4} w_A^2 = \epsilon^2$ with $w_A \in \mathbb{C}$, and the metric is written in terms of the basis of 1-forms $\{g^i\}$. The deformation is also parametrized by $z \equiv \epsilon^2/\ell_s^3$, which is dimensionless and interpreted as the stabilized value of the conifold modulus, the complex structure modulus determining the size of $\epsilon$. Whereas the $S^3$ subspace is referred to as the A-cycle, the $S^2 \times \mathbb{R}$ subspace, in which $\mathbb{R}$ is parametrized by $\eta$, is called the B-cycle. Here $\eta$ extends over $[0, \eta_{\rm UV}]$, where $\eta_{\rm UV}$ is the coordinate at which the throat is glued to the compact bulk. The features of the geometry near and far from the tip of the throat can then be found by taking the limits $\eta \ll 1$ and $\eta \gg 1$, respectively.
A.3 Stabilization of the conifold modulus
The size of $\epsilon^2 = z\ell_s^3$ is determined by the stabilization of the conifold modulus. The Kähler potential for $z$ is studied in [48] (see also [38] and Appendix A of [30]), which we briefly sketch here. Using the fact that the warp factor satisfies $e^{-4A} \simeq 1$ in the bulk, and denoting $-\log\big(i\kappa_4^6\int_{\rm bulk}\Omega\wedge\bar\Omega\big)$ by $K^{\rm bulk}_{\rm cs}$, the Kähler potential for the complex structure moduli can be written down, from which the Kähler metric for the complex structure moduli, represented by the harmonic $(2,1)$-forms $\chi_a$, is given by $K_{a\bar b} = \big(i\int e^{-4A}\chi_a\wedge\bar\chi_b\big)/\big(i\int e^{-4A}\Omega\wedge\bar\Omega\big)$. For the conifold modulus $S \equiv \epsilon^2 = \ell_s^3 z$, which is localized in the throat, the numerator of $K_{a\bar b}$ is dominated by the throat part, whereas the denominator is still dominated by the bulk part. In the two limits $\eta \ll 1$ and $\eta \gg 1$, the three functions used to define $\chi_S$ have simple asymptotic forms, and using $\int \bigwedge_i g^i = 64\pi^3$, the Kähler metric can be written as a sum of two integrals. The first integral can be evaluated by noting that $f + F(k - f)$ becomes $0$ for $\eta \to 0$ and approaches its UV value (see (78)) for $\eta \to \infty$, where $e^{-4A} \simeq 1$. The second integral can be evaluated numerically. We note that while the first integral is dominated by the region $\eta \simeq \eta_{\rm UV}$, at which $e^{-4A} \simeq 1$, the second integral contains the variation of the warp factor, which is enhanced for sizeable $e^{-4A_0}/\sigma$ in the throat region. Then we obtain the result with $r_0 = (2^{5/3}/3)^{1/2}\, r_{\rm UV}$ and $c' \simeq 1.18$; this can be reproduced from the Kähler potential with $\Lambda_0 = r_{\rm UV}/\ell_s$.
Meanwhile, the flux-induced GVW superpotential can be written explicitly by introducing $(\alpha_I, \beta^I)$ ($I = 0, \dots, h^{2,1}$), the basis of the de Rham cohomology group $H^3(Z)$, and $(A^I, B_I)$, the Poincaré dual homology basis, in terms of which the fluxes are quantized; this is constrained by the tadpole condition involving $\int_{CY_3} H_3 \wedge F_3$, where $\chi$ is the Euler characteristic of the $CY_4$ in the F-theory compactification. Meanwhile, the holomorphic 3-form $\Omega$ is expanded in the periods $(Z^I, F_I)$. The complex structure moduli are then identified with $t^a = Z^a/Z^0$ ($a = 1, \dots, h^{2,1}$). In particular, in terms of the A-cycle ($S^3$) and the B-cycle ($S^2 \times \mathbb{R}$) of the throat, with flux numbers $(M^S, M_S) = (M, 0)$ and $(K^S, K_S) = (0, K)$, the conifold modulus and the corresponding prepotential can be written down, and hence the superpotential. Using (89) and (100), together with the Kähler potential given by (108) (giving $e^{K/m_{\rm Pl}^2} = V_0^{-1/2}(g_s/2)\big(i m_{\rm Pl}^6 \int\Omega\wedge\bar\Omega\big)^{-1}(m_s/m_{\rm Pl})^6$), one finds the F-term potential for $z$ given by (38).
B Uplift potential in terms of NS5-brane
In this appendix, we sketch how to obtain the uplift potential in terms of the NS5-brane, which extends over the four-dimensional noncompact spacetime and wraps an $S^2$ subspace of the A-cycle (the $S^3$ part of the throat). From (5) and (75), the induced metric on the NS5-brane takes the form $ds^2_{\rm NS5} = e^{2A(y)} e^{2\Omega(x)} \eta_{\mu\nu}\, dx^\mu dx^\nu + e^{-2A(y)} \sigma(x)^{1/2}(\cdots)$, where the ellipsis denotes the internal part along the wrapped cycle. We also note that the imaginary self-duality $\star_6 G_3 = i G_3$ gives $\star_6 H_3 = -g_s F_3$. Then we obtain the NS5-brane potential as a function of $\psi$. For $\psi = 0$, the potential reduces to the anti-D3-brane uplift potential given by (25), as the term in the square brackets becomes $2\pi p/M$. Indeed, when the condition $|z|^{2/3} V_0^{1/3} \ll (g_s M)/(2\pi^2)$ is satisfied, the coefficient of $\sin^4\psi$ in the square root simplifies to $b_0^4$, as in (105). The $O(1)$ coefficient $2^{1/3}/I(0) \simeq 1.75$ is often denoted by $c''$. We note that, in contrast to the D$p$-brane action, which is proportional to $g_s^{-1}$, the NS5-brane action is proportional to $g_s^{-2}$; but since the $F$-flux contribution is proportional to $g_s$, $V_{\rm NS5}$ reduces to the anti-D3-brane potential for $\psi = 0$.
C Coefficient of Gukov-Vafa-Witten superpotential
Here we sketch how to fix the coefficient of the GVW superpotential, following Appendix A of [33] (see also [30]). We begin with the fact that when the fluxes are turned on, the $|G_3|^2$ term in the Type IIB supergravity action (2) gives a potential for $G_3^{\rm IASD}$, the imaginary anti-self-dual part of $G_3$, consisting of the harmonic $(3,0)$- and $(1,2)$-forms (for a derivation see, e.g., the Appendix of [21]). This is interpreted as the F-term potential obtained from the GVW superpotential. Here the indices $a, b$ run over the complex structure moduli as well as $\tau$, and the inverse of the Kähler metric $K^{a\bar b}$ is obtained from the Kähler potential, which consists of the Kähler potential for the overall volume (Kähler) modulus, the axio-dilaton, the complex structure moduli, and the Kähler moduli other than the overall volume modulus. We note that whereas the last term is not taken into account in [33], it does not affect our discussion so long as all the Kähler moduli other than the overall volume modulus are heavier than the energy scale of the EFT (see [49] for more discussion of their properties). We also note that the mass dimensions of $\kappa_{10}^2$, $\Omega$, and $G_3$ are $-8$, $-3$, and $-2$, respectively, which is consistent with the mass dimensions of $V_{\rm flux}$ and $W$, namely $4$ and $-5$. Using $2\kappa_{10}^2 = g_s^2 \ell_s^8/(2\pi)$ and (12), $V_{\rm flux}$ can be rewritten in the form quoted in the main text.
⁵ $W(Z^I)$ and the prepotential $F(t^a)$ can be defined as $F(Z^I) = (Z^0)^2 F(t^a)$ with $F_I = \partial_I F$, from which the GVW superpotential (for the coefficient, see the first expression in (110)) is written as $W \propto (Z^I M_I + F_I M^I) - \tau (Z^I K_I + F_I K^I)$, with the overall coefficient involving $(g_s V_0)^{1/2}$ as given in (110). | 2023-03-21T01:27:03.085Z | 2023-03-17T00:00:00.000 | {
"year": 2023,
"sha1": "b9b4edc8751191368eaa308f2ffe1d2db917eb1f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP07(2023)082.pdf",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "b9b4edc8751191368eaa308f2ffe1d2db917eb1f",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
234427170 | pes2o/s2orc | v3-fos-license | Quantitative Research: A Successful Investigation in Natural and Social Sciences
Research is the framework used for the planning, implementation, and analysis of a study. The proper choice of a suitable research methodology can make original research effective and successful, and a researcher can reach his/her expected goal by following an appropriate methodology. Quantitative research methodology is preferred by many researchers. This article presents and analyzes the design of quantitative research. It also discusses the proper use and the components of quantitative research methodology, which is used to quantify attitudes, opinions, behaviors, and other defined variables, and to generalize results from a sample to a larger population by generating numerical data. The purpose of this study is to provide some fundamental concepts of quantitative research to general readers for the development of their future projects, articles, and/or theses. An attempt has been made here to study the aspects of quantitative research methodology in some detail.
Introduction
Research is a systematic and organized effort to investigate a specific problem and provide a solution. Its aim is to add new knowledge and develop theories, as well as to gather evidence to prove generalizations [Sekaran, 2000]. Research can be classified into three basic categories [Swanson & Holton, 2005; Kothari, 2008; Creswell, 2011]: 1) quantitative research, 2) qualitative research, and 3) mixed-method research. Each of these methods plays an important role in the research area. Researchers choose one of the above three types of research methods according to the research aim, the objectives, the nature of the topic, and the research questions, in order to identify, collect, and analyze information [Goertz & Mahoney, 2012].
Quantitative research is a formal, objective, rigorous, and deductive approach, with systematic strategies for generating and refining knowledge for problem-solving [Burns & Grove, 2005]. Its designs are either experimental or non-experimental. Loeb et al. recommend improving the ways in which quantitative descriptive findings are communicated throughout the education and research communities [Loeb et al., 2017]. Eyisi Daniel argues that although qualitative and quantitative research methods are different, both are useful in research for problem-solving and for seeking truth for the development of research; he also discusses the advantages, disadvantages, strengths, and weaknesses of both methods in some detail [Daniel, 2016]. In a review paper, Haradhan Kumar Mohajan discusses the reliability and validity of good research, which increase transparency and decrease opportunities to insert researcher bias in qualitative research [Mohajan, 2017]. In another paper, he studies the background of qualitative research methodology in the social sciences and some other related subjects [Mohajan, 2018a].
Lydell H. Hall tries to determine a correlation between the transformational leadership behavior exhibited by a leadership team and job satisfaction among California card room casino employees, and also between transformational, transactional, or laissez-faire leadership styles and job satisfaction among the same employees. He finds that in the first case employees report a higher level of job satisfaction, while in the second case they show a negative attitude toward job satisfaction [Hall, 2018]. Moses Kumi Asamoah reexamines the limitations and uses of correlational studies [Asamoah, 2014].
Edith de Leeuw, Joop Hox, and Don Dillman discuss aspects of survey research [de Leeuw et al., 2008]. Maninder Singh Setia states that in a cross-sectional study the investigator measures the outcome and the exposures in the study participants at the same time, and adds that the participants are selected simply on the basis of the inclusion and exclusion criteria set for the study [Setia, 2016]. Mo Wang and his coauthors attempt to clarify the conceptual, methodological, and practical issues that frequently emerge when researchers conduct longitudinal research [Wang et al., 2017]. Fernando Rajulton notes that longitudinal data create many complexities, which pose a great challenge to researchers, and briefly discusses the historical development of ideas related to longitudinal studies [Rajulton, 2001]. David A. Grimes and Kenneth F. Schulz observe that a cohort study is the best way to identify the incidence and natural history of a disease, and that it can be used to examine multiple outcomes after a single exposure [Grimes & Schulz, 2002].
Nelson Pinheiro Gomes and her coauthors discuss the development and consolidation of trend studies as a transversal area with transdisciplinary characteristics, developed in connection with the concepts and practices of cultural studies [Gomes et al., 2018]. Anthony D. Harris and his coauthors have performed quasi-experimental research in infectious diseases, in the area of interventions aimed at decreasing the spread of antibiotic-resistant bacteria [Harris et al., 2004]. Ronald R. Powell discusses several evaluation methods, such as input measurement, output/performance measurement, impact/outcomes assessment, service quality assessment, process evaluation, benchmarking, standards, quantitative methods, qualitative methods, cost analysis, organizational effectiveness, program evaluation methods, and library and information science (LIS) centered methods [Powell, 2006]. Monika Mueller and her coauthors conduct a systematic scoping review of published methodological recommendations on how to systematically review and meta-analyze observational studies; they extract and summarize recommendations on predefined key items, such as protocol development, research question, search strategy, study eligibility, data extraction, dealing with different study designs, risk of bias assessment, publication bias, heterogeneity, and statistical analysis [Mueller et al., 2018].
Reliability and validity are inevitable issues in any research. Quantitative research is more reliable than other types of research, and quantitative methodology is judged for rigor and strength based on validity, reliability, and generalizability [Morris & Burkett, 2011].
It eliminates, or attempts to eliminate, extraneous variables within the internal structure of the study, and the data produced can also be assessed by standardized testing [Duffy, 1985]. We have tried to discuss various types of quantitative research: experimental research (pre-experimental, truly experimental, and quasi-experimental) and non-experimental research, such as descriptive research (observation studies, correlational research, and survey research), evaluation research, existing data, meta-analysis, causal-comparative research, etc. In this study, we have tried our best to maintain reliability and validity throughout the research. We have also attempted to enrich the article by highlighting the characteristics, advantages and disadvantages, and strengths and weaknesses of quantitative research.
Objective of the Study
The leading objective of this study is to present the quantitative research methodology in the natural and social sciences. The other specific objectives are as follows: • To provide a historical background of quantitative research.
• To discuss steps, research design, and types of quantitative research.
• To highlight the characteristics, strengths and weaknesses, advantages, and disadvantages of quantitative research.
Historical Background
All modern research is based on two main epistemological orientations: postpositivism and constructivism. Over the years there have always been different opinions among constructivists and positivists, but each has its own unique way of gathering and analyzing data, and neither constructivists nor positivists claim that their instruments are more reliable and valid than the other's. The objective of each type of researcher is the same: to produce fruitful research for the welfare of society [Robson, 2002; Creswell, 2011; Daniel, 2016]. Four distinct paradigms are related to social research: constructivism, critical theory, positivism, and postpositivism. Constructivism and critical theory are related to qualitative research, while positivism and postpositivism are related to quantitative research [Lincoln & Guba, 1985; Daniel, 2016].
Scholars such as Sir Isaac Newton, Henri de Saint-Simon, Auguste Comte, and Karl Popper contributed to the positivistic idea of the absolute truth of knowledge in research. The French philosopher Auguste Comte (1798-1857), the founder of positivism, adapted the methodology of the natural sciences (e.g., physics, biology, chemistry, etc.) for use in the social sciences and called his theory "positivism". Positivism is defined as a scientific methodology that aims to reach the laws of human behavior and social life [Kincheloe & Tobin, 2009]. Positivistic research is founded on the belief that the study of human behavior should be conducted in the same way as studies in the natural sciences [Collis & Hussey, 2009].
Positivism is a position in the philosophy of science that emphasizes the importance of observation for the growth of knowledge. The main purpose of positivism is to reach objective truth, facts, and laws: in the positivist belief, there is a truth that science can observe, measure, and describe [Park et al., 2020]. Positivist research advances by proving or supporting a hypothesis [Popper, 2005]. Positivism seeks universal laws that govern behavior and argues that an objective external reality can be accurately and thoroughly understood. A positivist emphasizes quantitative methods, while a postpositivist considers both quantitative and qualitative methods to be valid approaches [Popper, 1963]. Historians identify two types of positivism: classical positivism, which is an empirical tradition, and logical positivism, which is most strongly associated with the Vienna Circle [Alexander, 1995]. Positivism is widely applied in the natural sciences, where empirical observation is used to build theories and models [Fox, 2008].
Positivistic attempts seek to identify, measure, and evaluate phenomena and to provide a rational explanation for them, establishing causal links and relationships between the different variables of the subject and relating them to a particular theory [Collis & Hussey, 2009]. The positivist approach acquired its methods from social and education research and consists of the following steps [Xining, 2002]: i) creating a hypothesis, ii) establishing variables (sampling) and measurement devices, iii) data collection, iv) data analysis, and v) conclusion. The quantitative approach has its origin in positivistic epistemology, an approach to the study of people that commends the application of the scientific method [Bryman, 2012].
The word "postpositivism" is named for the first time by Denis Charles Phillips [Phillips & Burbules, 2000]. Robert Dubin describes the basic components of a postpositivist theory [Zammito, 2004]. Famous philosophers Karl Popper, Willard Van Orman Quine, and Thomas Kuhn have been highly influential and led to the development of postpositivism. It is a metatheoretical stance that critiques and amends positivism. Postpositivists argue that theories, hypotheses, background knowledge, and values of the researcher can influence by the observational data. According to postpositivism, the world works according to fixed laws of cause and effect [Phillips & Burbules, 2000]. For postpositivists, science is slow, progressive, iterative, theory refining, and characterized by attempts to advance through proving a theory wrong or incomplete. A postpositivist believes in a single objective, external, tangible, measurable reality, adopting a perspective aligned with scientific realism but truth or the understanding of reality remains incomplete or probabilistic [Letourneau & Allen, 1999]. The key elements of post positivistic knowledge are [Creswell, 2011]: i) determinism, ii) theory verification, iii) empirical observation and measurement, and iv) reductionism (reality is reduced to small elements, such as variables, hypotheses, research questions, etc.).
Constructivism is a theory in education that recognizes learners' understanding and knowledge as based on their own experiences prior to entering school [Nola & Irzik, 2006]. It emphasizes the importance of knowledge, beliefs, and skills for learning and research, and indicates that people construct their understanding and knowledge of the world through experiencing things and reflecting on those experiences [Garbett, 2011]. Jean Piaget is the founder of constructivism; he identified impressionable and developmental aspects of human thought processes. Karl Mannheim first thought about constructivism (interpretivism), which was later elaborated in the works of Yvonna Sessions Lincoln and Egon G. Guba [Lincoln & Guba, 1985].
Constructivism indicates that knowledge is created by people through the interpretation and understanding of phenomena in social and historical perspectives. It is typically seen as an approach to qualitative research. Social constructivism is developed by Lev Semyonovich Vygotsky who argued that learning is a social and collaborative activity where people create meaning through their interactions among them [Aina, 2017]. Social constructivists believe that individuals seek an understanding of the world in which they live and work. The main elements of constructivist thinking are [Lincoln & Guba, 1985;Creswell, 2011]: i) understanding, ii) multiple participant meanings, iii) social and historical construction, iv) inductive reasoning, and v) theory generation.
Quantitative Research Design
A hypothesis is a tentative explanation that accounts for a set of facts and can be tested by further investigation. Quantitative researchers design studies that allow hypotheses to be tested. There are three kinds of variables in quantitative research: i) dependent variables, ii) independent variables, and iii) extraneous or confounding variables. The variables that are hypothesized to depend on or be caused by other variables, i.e., those through which the researcher monitors how subjects react by measuring the response in one or more outcome measures, are dependent variables; the variables believed to be the cause or influence, i.e., the ones the researcher manipulates, are independent variables; and the variables that confuse or confound the relationship between the dependent and independent variables are extraneous variables. Dependent variables are influenced by one or more independent variables. For example, in healthcare, wound healing is a dependent variable, the type of dressing is an independent variable, and patient age and the presence of diabetes mellitus are extraneous/confounding variables; similarly, smoking is an independent variable and lung cancer a dependent variable [White & Millar, 2014]. When variables are clearly defined and numerical data are prepared, quantitative research can be used properly, i.e., it tests objective theories by examining the relationships among variables [Polit & Hungler, 2013].
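To make the roles of these three kinds of variables concrete, here is a small, hypothetical Python sketch (not from the source paper): it simulates a study in which age confounds the relation between a treatment (independent variable) and healing time (dependent variable), and shows how including the confounder in a regression changes the estimated effect. All names and numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Extraneous/confounding variable: patient age (also influences treatment choice).
age = rng.uniform(20, 80, n)
# Independent variable: older patients are more likely to get the new dressing.
treatment = (rng.uniform(0, 1, n) < (age - 20) / 60).astype(float)
# Dependent variable: healing time rises with age; true treatment effect is -5 days.
healing_days = 10 + 0.5 * age - 5 * treatment + rng.normal(0, 3, n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(treatment.reshape(-1, 1), healing_days)
adjusted = ols(np.column_stack([treatment, age]), healing_days)

print(f"naive treatment effect:    {naive[1]:+.2f} days (biased by age)")
print(f"adjusted treatment effect: {adjusted[1]:+.2f} days (close to the true -5)")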
Steps of Quantitative Research
The quantitative research process generally consists of five steps, which allow the research to be performed efficiently [Swanson & Holton, 2005; Kumar, 2011]. Formulating a research problem is the first and most important step in the research process: it identifies the destination of the researcher [Kumar, 2011] and determines the basic questions that the researcher intends to answer with the study; these questions describe factors or variables of interest to the researcher [Swanson & Holton, 2005]. In the second step, the researcher determines the human participants in the study, which capitalizes on the advantage of using statistics to make inferences about larger groups from very small samples [Cooper & Schindler, 2008]. In the third step, the researcher selects methods to answer the questions, identifying the variables, measures, and research design to be used in formulating the specific research questions, methods, and participants of the study [Warfield, 2013]. In the fourth step, the researcher selects statistical analysis tools for analyzing the collected data; in these statistical analyses, the researcher determines how the variables describe, compare, associate, predict, and help explain the results and answer the propositions of the study [Cooper & Schindler, 2008]. In the fifth step, the researcher interprets the results of the analysis based on the statistical significance obtained [Swanson & Holton, 2005].
These five steps are used in quantitative research to code observations into accurate measurements and to generalize from the sample to the entire population; statistical methodologies then support the activities from which researchers draw their conclusions [Warfield, 2013].
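As a toy illustration of steps four and five (statistical analysis and interpretation), the following hypothetical Python sketch compares a measured outcome between two sampled groups with an independent-samples t-test; the data and threshold are assumptions for illustration only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Steps 2-3 (assumed): two randomly sampled groups and a numeric measure.
group_a = rng.normal(loc=72.0, scale=8.0, size=60)   # e.g., test scores
group_b = rng.normal(loc=76.0, scale=8.0, size=60)

# Step 4: statistical analysis - independent-samples t-test.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Step 5: interpretation against a pre-specified significance level.
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")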
Types of Quantitative Research
Quantitative research can be classified into experimental and non-experimental research [Cohen et al.].
Experimental Research
Experimental research is the most familiar type of research design for individuals in the physical sciences and some related fields. It tries to reduce all kinds of bias as much as possible [Nunan, 1992]. It indicates how the observations or measurements should be obtained to answer a question in a valid, efficient, and economical way. It is referred to as a hypothesis-testing or deductive research method, and it seeks to determine a relationship between a dependent and an independent variable; the results are not known in advance. It is the process of planning a study to meet specified objectives: one or more independent variables are manipulated and applied to one or more dependent variables to measure their effects on the latter. In this type of research, the researchers design the specific conditions to test their theories or propositions, controlling the experiment and collecting data so as to isolate the relationships between the defined independent and dependent variables [Swanson & Holton, 2005]. During experimental research, the researcher applies a treatment or intervention to the study group and then measures the outcomes of the treatment. Experiments consist of making an observable and quantifiable change in one variable (the independent variable) and then observing how that affects other variables (the dependent variables) [Leedy & Ormrod, 2001; Chen, 2011].
The goal of experimental research is to test a hypothesis to establish cause-and-effect relationships [Ary et al., 2010]. Some advantages of experimental research are [Mildner, 2019]: it controls the independent variables; it allows a straightforward determination of causal relationships; results can be verified through repeatability/replicability; and it offers the opportunity to create conditions that are not easily observed in natural settings or would take too long to arise. Some disadvantages of experimental research are [Mildner, 2019]: it can be unnatural, so it is difficult to apply the results to real-life situations; it raises ethical considerations; it cannot be applied to all types of research problems; and results may appear significant because of experimenter error or the inability to control all extraneous variables. There are three types of experimental approaches: pre-experimental, truly experimental, and quasi-experimental [Leedy & Ormrod, 2001; Ary et al., 2010].
Pre-experimental research involves an independent variable that does not vary or a control group that is not randomly selected. It uses a pre-test and post-test to see the result of the treatment but fails to include a control group [Campbell & Stanley, 1963; Ary et al., 2010].
The true experimental approach provides a higher degree of control in the experiment and produces a higher degree of validity. It was first described by D. T. Campbell [Campbell, 1957]. It is a systematic approach to quantitative data collection involving mathematical models in the analyses [Campbell & Stanley, 1963]. It examines cause-and-effect relationships between independent and dependent variables under highly controlled conditions, with both pre- and post-tests, experimental and control groups, and random assignment of subjects [Nunan, 1992]. It must enable the researcher to maintain control over the situation in terms of the assignment of subjects to groups, who receives the treatment condition, and the amount of the treatment condition that subjects receive [Christensen, 1988].
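A minimal sketch of the defining ingredient, random assignment of subjects to treatment and control groups, is shown below (the subject IDs are hypothetical and not from the source):

import random

subjects = [f"S{i:02d}" for i in range(1, 21)]  # 20 hypothetical subjects

random.seed(7)
random.shuffle(subjects)                 # randomize the order
half = len(subjects) // 2
treatment_group = subjects[:half]        # receives the intervention
control_group = subjects[half:]          # receives no treatment / placebo

print("treatment:", treatment_group)
print("control:  ", control_group)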
The word "quasi" means partial, half, or pseudo. The quasi-experimental design involves a non-random selection of study participants, where control is limited and true experimentation is impossible or difficult [Harris et al., 2004]. It has both pre-and post-test and experimental and control groups, but no random assignments of subjects. Since the variable cannot be controlled, validity may be sacrificed [Campbell & Stanley, 1963]. For example, two sick people of the same age and same physical structure are given the same antibiotic, and one of them is given an additional antibiotic with the common antibiotic. After seven days the two patients are examined, and their health condition is measured. The lack of random assignment is the major weakness of quasi-experimental research. The statistical association does not imply causal association if the study is poorly designed [Harris et al., 2004].
Non-Experimental Research
Non-experimental research lacks the manipulation of an independent variable and random assignment; the researcher simply measures variables as they occur. Non-experimental research is divided into descriptive, causal-comparative, evaluation, existing data, and meta-analysis approaches, among others.
Descriptive Research
Descriptive research falls under the broad heading of quantitative research methods. It is used when little is known about a particular phenomenon [Walker, 2005]. Descriptive research is widely used in education, epidemiology, nutrition, and the behavioral sciences. It tries to gather information about prevailing situations for the purpose of description and interpretation [Aggarwal, 2008]. It addresses the who, what, where, when, why, and sometimes how of the research, and should be thought of as a means to an end rather than an end in itself [Yin, 1994]. The goal of this type of research is to identify and describe trends and variations in populations, create new measures of key phenomena, or describe samples in studies aimed at identifying causal effects [Loeb et al., 2017].
Descriptive research depicts an accurate profile of people, events, or situations [Robson, 2002]. It refers to the type of research question, design, and data analysis applied to a given topic, and examines a phenomenon occurring at a specific place and time. It is concerned with conditions, practices, structures, differences, or relationships that exist, opinions that are held, and processes that are evident. It gathers and analyzes empirical data, and then organizes, tabulates, depicts, and describes the data collected in an attempt to develop knowledge [Glass & Hopkins, 1984; Best & Kahn, 2007].
It can be statistical research and tries to study frequencies, averages, and other statistical calculations. It generates data, both qualitative and quantitative, that define the state of nature at a point in time. It attempts to describe, explain, and interpret the conditions of the present [Koh & Owen, 2000].
It is a basic research method that examines the current situation. It identifies the characteristics of an observed phenomenon or explores correlations between two or more entities, and nothing is controlled or manipulated. In this type of research, a researcher can collect a large amount of data. It cannot be used as the basis of a causal relationship where one variable affects another. It is sometimes contrasted with hypothesis-driven research, which is focused on testing a particular hypothesis by means of experimentation [Casadevall & Fang, 2008]. The three types of descriptive research are observation studies, correlational research, and survey research.
Observation Studies: Observation study is one of the most important research methods in the natural and social sciences and has long been used for collecting data about people, processes, and cultures, especially in qualitative research. It refers to several different types of non-experimental studies in which behavior is systematically observed and recorded. It is an ethnographic research method with no clearly identified beginning, though some researchers state that its use began in the late 19th and early 20th centuries [Baker, 2006]. The Greek philosopher Aristotle used observational techniques in his botanical studies on the island of Lesbos, and Auguste Comte (1798-1857), the father of sociology, listed observation among the core methods of sociological research [Adler & Adler, 1994]. Observation is a preplanned research tool carried out purposefully to serve research questions and objectives, and it is related to positivist research [Angrosino, 2005]. It enables the researcher to combine it with questionnaires and interviews to collect objective information [Johnson & Turner, 2003]. Observation studies include the systematic recording of observable phenomena [Gorman & Clayton, 2005].
Observation study is among the least intrusive data collection methods, but it can nonetheless intrude on an individual's privacy [Adler & Adler, 1994]. Observation data are collected by naturalistic inquiry using a structured, unstructured, or semi-structured approach [Fry et al., 2017]. An individual's privacy must not be abused, and a researcher must remain unbiased during data collection. The method is often criticized for lacking reliability [Adler & Adler, 1994]. It is time-consuming, costly, and practically challenging, and a variety of techniques are used to collect data. To collect data, a researcher needs specialized training on how to observe, what and how to record, how to enter and leave the field, the length of time in the field, sampling, and data collection techniques. Hence it is a complex and challenging research method [Baker, 2006].
Direct observation is called the gold standard among qualitative data collection techniques [Murphy & Dingwall, 2007]. Observation data collection can improve understanding of the practice, processes, knowledge, beliefs, and attitudes embedded in social interactions [Fry et al., 2017].
Some advantages of observational research are as follows [Foster, 2006]: • research information can be revealed carefully through planned observation by a researcher over a period of time, • observational data seem more accurate, • the observer can see what participants themselves cannot, • observation can provide information on those who cannot speak (e.g., babies, very young children, and animals), and • data collected from observation can check on, and supplement, information obtained from other sources. There are some limitations of the observational research method, as follows [Foster, 2006]: • the behavior of interest may be inaccessible, and observation may simply be impossible where it is not permitted, • people sometimes consciously or unconsciously provide inaccurate information, • the researcher's preconceptions and existing knowledge may bias observation, and • it is very time-consuming and costly.
Correlational Research:
The term 'correlation' is a common and useful statistical concept applied in research. Francis Galton (1822-1911) first provided the idea of correlation in 1888, but it was Karl Pearson (1857-1936) who developed and promoted it as a scientific concept of universal significance [Aldrick, 1995]. It is a type of quantitative research method within the positivist paradigm. Mainly three types of correlational research have been identified: positive correlation, negative correlation, and no-correlation research [Anderson & Arsenault, 1998].
Correlational research describes what exists at the moment. It examines differences of characteristics or correlates two or more variables, and there is no manipulation of variables [Queirós et al., 2017]. In quantitative research, it involves explaining phenomena by collecting numerical data that are analyzed using mathematically based statistical methods [Asamoah, 2014]. It is a type of non-experimental, backward-looking, and dynamic research in which the researcher employs data derived from preexisting variables. It provides an evaluation of the strength and direction of the relationship among variables. A researcher gathers data to determine whether, and to what extent, a relationship exists between two or more quantifiable variables in a particular group; these data are numbers that reflect measurements of the characteristics in the research questions [Williams, 2007]. It plays an important role in the development and testing of theoretical models. The disadvantage of this research is that it only establishes a relationship among variables; no conclusions can be drawn regarding causality [Samuel & Okey, 2015].
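As a hypothetical illustration, Pearson's r for two quantifiable variables can be computed as follows (made-up data; a correlation here says nothing about causation):

import numpy as np
from scipy import stats

# Hypothetical paired measurements, e.g., weekly study hours vs. exam score.
study_hours = np.array([2, 4, 5, 7, 8, 10, 12, 14])
exam_score = np.array([51, 58, 60, 65, 69, 74, 80, 83])

r, p_value = stats.pearsonr(study_hours, exam_score)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")
# r near +1: strong positive correlation; near -1: negative; near 0: none.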
Survey Research: Among the many types of quantitative research, survey research is very popular in the natural and social sciences; it includes questionnaires, personal interviews, phone surveys, and normative surveys. Within a country, surveys are conducted to assess its economic, social, political, and cultural condition [Bethlehem, 2009]. Survey research is the systematic gathering of information from respondents for the purpose of understanding and predicting some aspects of the behavior of the population of interest. It was pioneered by Paul Lazarsfeld, George Gallup, and Hadley Cantril [Glasow, 2005; Sukamolson, 2007]. It focuses on people, the vital facts about them, and their beliefs, opinions, attitudes, motivations, and behavior [Kerlinger & Lee, 2000].
Survey research studies large and small populations by selecting samples chosen from the desired population, in order to discover the relative incidence, distribution, and interrelations of variables [Kerlinger & Lee, 2000]. It provides an important source of basic scientific knowledge. It uses scientific sampling and questionnaire design to measure the characteristics of the population with statistical precision [Sukamolson, 2007]. It is the best method of data collection when the researcher is interested in collecting original data for a population that is too big to test directly [Babbie, 2001].
Generally, many researchers conduct survey studies, such as economists for income and expenditure patterns among households, educationists for factors influencing academic performance, health professionals for the implications of health problems on people's lives, psychologists for the roots of ethnic or racial prejudice, political scientists for comparative voting behavior, sociologists for the effects on the family life of women working outside the home, etc. [de Leeuw et al., 2008;Kothari, 2008;Creswell, 2011]. It is the most common type of descriptive research in dietetic, nutrition, and health areas, which involves asking questions of a sample of individuals who are representative of the group or groups being studied [Koh & Owen, 2000].
The ultimate goal of survey research is to learn about a large population by surveying a sample of that population. In this method, a researcher poses a series of questions to the respondents and summarizes their responses in percentages, frequency distributions, and other statistical summaries. It is concerned with sampling, questionnaire design, questionnaire administration, and data analysis. Survey research typically employs face-to-face personal interviews, telephone interviews, panels, observations, e-mail and internet interviews, or the common approach using questionnaires [McClosky, 1969; Mathers et al., 2009; Mathiyazhagan & Nandan, 2010].
In survey research, biases may occur, and respondents may have difficulty assessing their own behavior. It is unsuitable where an understanding of the historical context of phenomena is required [Pinsonneault & Kraemer, 1993]. A survey does not yield exact measurements; it only provides estimates for the true population [Salant & Dillman, 1994]. There are two types of surveys: the cross-sectional survey and the longitudinal survey. The key difference between them is that the first occurs once, whereas the latter takes place on multiple occasions over time [Lynn, 2009].
Cross-Sectional Survey:
A survey that is carried out at just one point in time, or over a short period, is known as a cross-sectional or prevalence study. It is a type of observational study design, with participants selected based on the inclusion and exclusion criteria set for the study. It is used for population-based surveys and to assess the prevalence of diseases in clinic-based samples [Ross & Vaughan, 1986; Setia, 2016]. It provides a snapshot of what is happening in a group at a particular time and is used when the study is descriptive and survey-related. It sometimes investigates associations between risk factors and the outcome of interest [Levin, 2006]. It tells us what people are thinking or doing at one point in time, and it is useful in assessing the practices, attitudes, knowledge, and beliefs of a population concerning a particular health-related event [Buck, 2008]. If the research needs a pool of opinions and practices, a cross-sectional survey is appropriate: it collects information from a sample drawn from a population at one point in time. The period of data collection can vary depending on the scale of the study [Ross & Vaughan, 1986; Mathers et al., 2009].
Advantages of the cross-sectional survey are [Levin, 2006; Setia, 2016]: it is relatively inexpensive and can usually be conducted relatively quickly; there is no loss to follow-up; many outcomes and risk factors can be assessed; and it is useful for public health planning, monitoring, and evaluation. Disadvantages are [Levin, 2006; Setia, 2016]: it is difficult to make causal inferences; only a snapshot is provided, and different results might be obtained at another time; and it is prone to incidence-prevalence bias.
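For instance, a cross-sectional prevalence estimate with a 95% confidence interval can be sketched as follows (hypothetical counts; a normal-approximation Wald interval is used for simplicity):

import math

n = 400        # hypothetical sample size surveyed at one point in time
cases = 52     # hypothetical number with the condition of interest

p_hat = cases / n                        # point prevalence
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of a proportion
z = 1.96                                 # ~95% normal quantile

low, high = p_hat - z * se, p_hat + z * se
print(f"prevalence = {p_hat:.1%}, 95% CI = ({low:.1%}, {high:.1%})")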
Longitudinal Survey: The first recorded longitudinal study dates to 1759, when Guéneau de Montbeillard began recording his son's growth; the approach advanced in the 1920s, when a monumental study of the developmental histories of gifted children was undertaken by Lewis M. Terman of Stanford University [Buffon, 1837; Rajulton, 2001]. Rather than taking a snapshot, a longitudinal survey paints a picture of events or attitudes over prolonged periods, often years or decades. It is used in social science, natural science, and health research (e.g., clinical pathology, public health, child development, adolescent psychosocial development, genetic classification, epidemiological case registers) to study changes in behaviors, thoughts, and emotions over time in human and technical systems. It makes the observation of change more accurate and is applied in various other fields [Venkatesh & Vitalari, 1991; Carlson et al., 2009]. Organizational science researchers apply it to understand changes in organizational structure, causal mechanisms, and organizational adaptation [Tuma & Hannan, 1984].
If a researcher's objective is to compare differences in opinion and practices over time, a longitudinal survey is the ideal method: data collection is done at different points in time to observe the changes [Mathers et al., 2009]. Longitudinal designs vary along six parameters: length of study, number of data collection periods, duration between data collection efforts, method of data collection, research objectives, and unit of analysis [Venkatesh & Vitalari, 1991].
Cohort Studies: The word "cohort" has its origin in the Latin cohors, referring to 300–600-man units of warriors in the Roman army [Samet & Munoz, 1998; Grimes & Schulz, 2002]. The history of the cohort study was reviewed in 1988 by F. D. K. Liddell and in papers published from a 1983 American Cancer Society workshop on cohort studies; these works first discussed the terminology, definitions, and evolution of the cohort design [Liddell, 1988; NIH, 1997]. A cohort study is more difficult to carry out than a trend survey, because over a long period some participants move house, some fall ill or die, and some simply refuse to continue participating [Mathers et al., 2009].
A cohort is the aggregate of individuals who experienced the same life event within the same time interval; in a cohort study, the cohort is closed to new entries because such entries are impossible [Ruspini, 2002]. A cohort study tracks two or more groups forward from exposure to outcome, and provides the best way to ascertain both the incidence and the natural history of a disorder [Hulley et al., 2001]. Here a researcher specifies the population and lists the names of all its members; at each data collection point, the researcher selects a sample of respondents from this population and administers a questionnaire, and this is repeated at later points in time. Cohort studies are particularly useful in tracking the progress of particular conditions over time, on the order of 5 to 25 years or longer [Venkatesh & Vitalari, 1991].
The design offers accuracy of data collection with regard to exposures, confounders, and endpoints, and is useful for investigating multiple outcomes that might arise after single or multiple exposures in one cohort; for example, cigarette smoking (the exposure) can cause emphysema, oral cancer, stroke, and heart disease (the outcomes). In cohort studies, hypotheses can be generated [Grimes & Schulz, 2002; Euser et al., 2009]. The design also reduces the risk of survivor bias and is consequently less biased; its time-order is generally clear; and it allows the calculation of incidence rates, relative risks, and confidence intervals. It also has limitations: selection bias is built in; large samples are generally required; causal effects cannot be definitively established; loss to follow-up can be a difficulty; and partitioning might be needed to avoid a blurring of exposure, sometimes termed contamination [Hulley et al., 2001; Grimes & Schulz, 2002; Euser et al., 2009].
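To make the incidence-rate, relative-risk, and confidence-interval calculations concrete, here is a minimal Python sketch using an invented 2×2 cohort table; the counts are hypothetical and are not taken from any cited study.

```python
# Minimal sketch: incidence rates, relative risk, and a 95% CI from a
# hypothetical 2x2 cohort table (exposed vs. unexposed; invented counts).
import math

a, b = 30, 70    # exposed:   with outcome, without outcome
c, d = 10, 90    # unexposed: with outcome, without outcome

risk_exposed = a / (a + b)      # incidence in the exposed group
risk_unexposed = c / (c + d)    # incidence in the unexposed group
rr = risk_exposed / risk_unexposed

# Standard error of ln(RR), then back-transform the confidence limits.
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```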
Panel Studies: The history of panel research begins in 1759, when Count Philibert Guéneau de Montbeillard began recording his son's stature every six months from birth to age 18 [Baltes & Nesselroade, 1979]. Modern panel research was established in the 1930s, when several classic studies of human growth and development began [Bogin, 1999]. The use of panel data was first introduced by Paul F. Lazarsfeld in the 1940s in an analysis of public opinion, using market research gathered over time [Andreß, 2017]. Over the last 60 years, panel studies of individuals, firms, countries, and other entities have proliferated in the social, life, medical, and public-health sciences. They have enhanced our understanding of globalization, transnationalism, migration, the development of political-economic structures, acculturation, and the intergenerational transmission of culture [Gravlee et al., 2009].
A panel survey provides an efficient and cost-effective means of measuring changing behaviors and attitudes over time, and recognizes that individuals, firms, states, or countries are heterogeneous. Large-scale research facilities exist for panel research on human behavior and attitudes in real-life settings [Andreß, 2017]. It is a direct extension of the questionnaire or interview survey in which data are collected from the same people at two or more points in time. It is usually construed as short-term, typically of one to five years' duration [Venkatesh & Vitalari, 1991]. Remarkable panel studies were conducted even prior to 1970 [Duncan et al., 1987].
It generally samples the whole population rather than single years of age, in order to understand the dynamics of change of the whole population and its evolution over the lifetime of the study [Buck, 2008]. A researcher can identify a sample from the beginning and follow the respondents over a specified period of time, observing changes in specific respondents and highlighting the reasons why these respondents have changed [Mathiyazhagan & Nandan, 2010]. In a fixed panel study, data are collected from the same units on multiple occasions, and only death or emigration reduces the size of the eligible sample. A repeated panel study involves a series of panel surveys that may or may not overlap in time. A split panel combines cross-sectional and panel samples at each wave of the study [Buck, 2008].
Advantages of panel surveys are: they are analytically strong and provide an opportunity to link macro–micro issues; they allow more complicated behavioral models to be constructed and tested than purely cross-sectional data; they analyze information collected from individuals and households repeatedly over time; they provide more informative data, more variability, less collinearity among the variables, more degrees of freedom, and more efficiency; they allow researchers to track changes in an individual's attitudes and behaviors over time; they are better able to study the dynamics of adjustment; and they encourage researchers to collect a considerable amount of information on the participants [Greaves, 2017; Andreß, 2017].
Disadvantages of panel surveys are: they are costly and complex; results take a long time to become available; they are a long-term investment requiring considerable financial and human resources; attrition over time reduces the sample size and can result in biased inferences; interviews in earlier waves may influence interviews in subsequent waves, so that the actual value of a variable may change; and measurement errors may arise from faulty responses to unclear questions, memory errors, deliberate distortion of responses, misreporting by inappropriate informants, and interviewer effects [Lynn, 2009; Andreß, 2017].
Trend studies: Trend study is a transdisciplinary area that integrates concepts, perspectives, and methodologies from cultural studies, anthropology, marketing, design, etc. It gathers data from a particular population characterized by a specific variable, and different samples from a population whose members may change are surveyed at different points in time [Gomes et al., 2018]. A trend is a guiding direction, expressed in concepts, that carries cultural relevance into the various spheres of society; its effects are felt in the culture, society, or business sector in which it develops. Trend study can generate scenarios on society's evolution encompassing activities, attitudes, behaviors, and social concerns [Rech, 2016]. A trend is a forecast that something will happen in a certain way and will be accepted by most people, and it is an essential part of the emotional, physical, and psychological environment of the human being [Vejlgaard, 2008; Gomes et al., 2018]. Trends are driven by basic needs, drivers of change, and innovations [Mason et al., 2015].
Trend study takes repeated samples of different people each time but always uses the same core questions, and sets out to measure trends in public opinion and behavior over time. It reveals the current situation of a specific research subject in a specific field by focusing on the research topics, methodological approach, theoretical framework, etc. surrounding that subject [Ertem-Eray, 2019]. It focuses more on "why" than on "what", and is based on two foundations: cultural and commercial [Rech, 2016]. In addition to consumer trends, the other areas are: social, political, and economic; industry; new product categories; macro trends; fashion; and futurism [Mason et al., 2015].
Causal-Comparative Research
The term causal-comparative appears to have originated in the early 20th century. This type of research always starts with observed effects and seeks to discover the antecedents of these effects [Good et al., 1935]. Here the researcher investigates how independent variables are reflected in dependent variables, examining cause-and-effect relationships between the variables, and attempts to determine reasons, or causes, for an existing condition [Gay, 1996]. It is also called "ex post facto" or "after the fact" research because conditions are not manipulated: the presumed cause has already occurred among groups of individuals before the study is initiated [Kerlinger & Lee, 2000]. It is sometimes treated as a type of descriptive research since it describes conditions that already exist, and it is regularly used in education studies when experimentation is not possible [McMillan & Schumacher, 2009]. It discovers the possible causes and effects of personal characteristics by comparing individuals displaying a particular behavior pattern with individuals who do not display that pattern [Borg & Gall, 1989].
It provides the researcher with the opportunity to examine the interaction between independent variables and their influence on dependent variables, and tries to identify and determine the cause-and-effect relationship between two or more groups. It is less costly and time-consuming to conduct and is flexible by nature, although the researcher has little to no control over the independent variables [Fraenkel & Wallen, 1996].
There are two types of causal-comparative research: retrospective and prospective. In retrospective causal-comparative research, a researcher investigates a precise problem when the effects have already occurred and endeavors to determine whether one variable might have influenced another. In prospective causal-comparative research, a researcher begins with the causes and proceeds to evaluate the effects of a condition. The former is more common than the latter [Gay et al., 2006]. Some advantages of causal-comparative research are [Krathwohl, 1993; Fraenkel & Wallen, 1996; McMillan & Schumacher, 2009]: it shows a correlation where more rigorous experimentation is not possible, helps avoid artificiality in the research, and suggests cause-and-effect relationships. Some disadvantages are [Krathwohl, 1993; Fraenkel & Wallen, 1996; McMillan & Schumacher, 2009]: it lacks control over the independent variable and over the randomization of subjects; it may be regarded as too flexible; one can never be certain whether the causative factor has been included or identified; and a relationship between two factors does not establish cause and effect.
Evaluation Research
Evaluation research applies standard social research methods to the systematic measurement of the outcomes of a program, with the aim of improving the quality of policy [Weiss, 1998]. As an analytical tool, it investigates a policy program to obtain all information pertinent to the assessment of its performance, both process and result. It serves as the basis for negotiating a study commission or project contract and/or for conducting the actual evaluation research study [Wollmann, 2003].
Evaluation research is appropriate whenever some social intervention occurs or is planned. It is the systematic assessment of the worth of the time, money, effort, and resources spent in order to achieve a goal. It enhances knowledge and decision-making and leads to practical applications; it is a form of applied research intended to have some real-world effect, and a sophisticated tool for decision-makers. To conduct evaluation research, researchers must be able to specify and measure the program's features and outcomes.
Meta-Analysis
Meta-analysis tries to determine how study features influence effect sizes [Glass, 1976; Borenstein et al., 2016].
The historical roots of meta-analysis can be traced back to 17th-century studies of astronomy. It is believed that the first meta-analysis was conducted by Karl Pearson in 1904, when he attempted to synthesize independent vaccine studies concerning typhoid [Littel et al., 2008]. Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955 [Plackett, 1958; Glass & Smith, 1979]. The term "meta-analysis" was coined in 1976 by the statistician Gene V. Glass to describe the statistical combination of data from multiple studies [Glass, 1976]. After the 1980s, scientists began to develop meta-analysis further [Cooper, 1998]. It provides a more objective appraisal of the evidence than a narrative review and attempts to minimize bias by utilizing a methodological approach; it is based on mathematical and statistical rules [Egger et al., 1997; Lee, 2019]. However, it has no control over the timing or the size of the included studies. It is not a hypothesis-testing activity, and cannot reasonably be used to establish the reality of a reputed hazard or treatment [Charlton, 1996].
A fixed-effect meta-analysis assumes that there is one true intervention effect. The variation between studies is purely due to chance. A random-effects meta-analysis does not assume that there is one true effect [Hunter & Schmidt, 2004].
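The distinction can be made concrete with a short Python sketch of inverse-variance pooling; the effect sizes and variances below are invented, and the DerSimonian–Laird estimator is used for the between-study variance as one common (but not the only) choice.

```python
# Minimal sketch of fixed-effect vs. random-effects pooling with
# inverse-variance weights; all study-level numbers are invented.
effects = [0.30, 0.10, 0.45, 0.20]      # per-study effect estimates
variances = [0.02, 0.03, 0.05, 0.01]    # per-study sampling variances

# Fixed effect: one true effect, weights are 1 / variance.
w = [1 / v for v in variances]
fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Random effects (DerSimonian-Laird): add between-study variance tau^2.
q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

w_re = [1 / (v + tau2) for v in variances]
random_eff = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)

print(f"fixed effect = {fixed:.3f}, random effects = {random_eff:.3f}")
```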
Some strengths of meta-analysis are [Noble, 2006; Finckh & Tramèr, 2008; Lee, 2019]: it overcomes the small sample sizes of individual studies and the problems of traditional narrative reviews; it increases statistical power, the generalizability of results, and precision in estimating effects; it reduces the risk of false-negative results; and it summarizes and quantifies results from individual studies, analyzes differences in the results of various studies, and generates new hypotheses for further research.
Some weaknesses of meta-analysis are [Noble, 2006; Finckh & Tramèr, 2008; Lee, 2019]: it cannot overcome subjectivity; it typically includes only published studies and may therefore overestimate the actual magnitude of an effect (publication bias); summarizing large amounts of varying information with a single number is a controversial aspect of it; it deals only with main effects and may disagree with the findings of randomized trials; if it includes low-quality studies, its results will be biased and incorrect; and it requires that the included studies be homogeneous in terms of populations, interventions, controls, and outcomes.
Characteristics of Quantitative Research
The characteristics of quantitative research are associated with the positivist paradigm. A quantitative research approach is characterized as being structured, with predetermined variables, hypotheses and design [Bryman, 2012; Creswell, 2011]. It employs the traditional, positivist, experimental, or empiricist method to enquire into an identified problem [Smith, 1975]. It is used to obtain answers in numerical form that relate an independent variable to a dependent variable within a large population. A numerical output is easy to read and understand, and it is easier to deduce a conclusion from a numerical outcome than from a detailed narrative result. The output is usually presented as graphs, ranges of numbers, statistical data, tables, percentages, etc. to show trends, relationships, or differences among variables. In quantitative research, data are collected by random sampling to ensure accuracy, reliability and validity, which helps avoid bias in the results. Close-ended questionnaires are used, whose answers are more specific than those of open-ended questionnaires, which are more detailed and scattered. Moreover, responses to close-ended questionnaires are more reliable than answers to open-ended questionnaires [Polit & Beck, 2017].
Quantitative research has the following major characteristics [Brink & Wood, 1998; Burns & Grove, 2005]: • All aspects of the study are carefully designed before data collection begins.
• Data are collected in the form of numbers and statistics, often arranged in tables, charts, figures, percentages, or other non-textual forms. A numerical output is easy to read and understand, and a meaningful conclusion is easier to deduce from it than from a detailed narrative result. • Data are usually collected using structured, modern research instruments such as questionnaires, or with computer software for gathering numerical data. • Statistical analysis is conducted to reduce and organize data, determine significant relationships, and identify differences or similarities within and between different categories of data. • Data-gathering instruments contain items that solicit measurable characteristics of the population (e.g., age, number of children, educational status, economic status, etc.). • The results are based on larger sample sizes that are representative of the population.
• It is usually concise.
• It provides an accurate account of characteristics of particular individuals, situations, or groups.
• It emphasizes procedures for comparing groups or relating factors about individuals or groups in experiments, correlational studies, and surveys. • Standardized, pre-tested instruments guide data collection, ensuring accuracy, reliability (as participants answer close-ended questions), and validity of the data for repeated studies. • Moreover, the outcome of quantitative research is easy to understand and explain.
Strengths of Quantitative Research
Quantitative research applies statistical tests, such as the mean, median, and standard deviation, t-tests, multiple regression correlations (MRC), analyses of variance (ANOVAs), etc.; a short worked example of such tests follows the list below. Quantitative data are usually collected by surveys from large numbers of respondents randomly selected for inclusion. Sometimes secondary data, such as government statistics, census data, health system metrics, etc., are included in quantitative research. Some strengths of quantitative research are as follows [Walker, 2005; Atieno, 2009; Choy, 2014]: • Results from sample surveys can be generalized for entire populations.
• Relatively easy to analyze.
• Results can be aggregated and are comparable across population groups.
• Results can be broken down by socio-economic group for comparisons.
• Findings can be generalized if selection process is well-designed and sample is representative of study population. • Reliability of data and findings provides powerful indicators to guide policy.
• Replicability: publication of questionnaires and datasets permits scrutiny of findings.
• Transferability of dataset to other analysts means that analysis is not dependent on availability of an individual. • Data can be very consistent, precise and reliable.
• Precise professional or disciplinary minimum standards exist for much survey work.
• Statistical methods mean that the analysis is often considered reliable.
• Appropriate for situations where systematic, standardized comparisons are needed.
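As referenced at the start of this section, the following minimal Python sketch illustrates the descriptive statistics, t-test, and one-way ANOVA named there; the scores for the three hypothetical groups are invented.

```python
# Minimal sketch of common quantitative tests on invented group scores.
import numpy as np
from scipy import stats

g1 = np.array([12.1, 13.4, 11.8, 12.9, 13.0])
g2 = np.array([14.2, 13.9, 15.1, 14.8, 14.0])
g3 = np.array([12.5, 12.2, 13.1, 12.8, 12.6])

# Descriptive statistics for one group.
print("mean/median/SD:", g1.mean(), np.median(g1), g1.std(ddof=1))

t, p = stats.ttest_ind(g1, g2)       # two-sample t-test
print(f"t-test: t = {t:.2f}, p = {p:.4f}")

f, p = stats.f_oneway(g1, g2, g3)    # one-way ANOVA across three groups
print(f"ANOVA:  F = {f:.2f}, p = {p:.4f}")
```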
Weaknesses of Quantitative Research
Quantitative research is undoubtedly strong, but it has some weaknesses. Some weaknesses of quantitative research are as follows [Walker, 2005; Atieno, 2009; Choy, 2014]: • It sacrifices potentially useful information through the process of aggregation.
• It forces households or events into discrete categories, which can sacrifice useful detail.
• It neglects intra-household processes and outcomes.
• It commonly under-reports difficult issues, such as domestic violence, and struggles to access hard-to-reach individuals and households. • Large amounts of the dataset may never be used, even though such projects are very expensive.
• Poorly trained enumerators can make mistakes and inadvertently influence responses.
• Enumerators may give false data.
• May give a false impression of homogeneity in a sample.
Advantages of Quantitative Research
In quantitative research, statistical, computational, and mathematical techniques are applied; that is, it looks at measurable, numerical relationships to obtain accurate results. It is often seen as more accurate than qualitative research, which focuses on gathering non-numerical data [Bryman, 2012; Goertz & Mahoney, 2012]. In quantitative research, the Statistical Package for the Social Sciences (SPSS) is often used and data are processed by computer, which saves time and resources. Indeed, it is scientific in nature and its research findings are more reliable [Connolly, 2007]. Some advantages of quantitative research are as follows [Walker, 2005; Atieno, 2009; Choy, 2014]: • It requires careful experimental design and the ability for anyone to replicate both the test and the results. • It allows the researcher to measure and analyze data.
• It strives to control for bias so that facts, instances, and phenomena can be understood in an objective way. • It allows for statistical comparison between various groups. • The data is considered quantifiable and usually generalizable to a larger population.
• Hypotheses can be tested in experiments because of quantitative research's ability to measure data using statistics.
• When statistical tests are appropriately used, fewer errors occur during the research.
• The relationship between an independent and a dependent variable is studied in detail, which is advantageous because the researcher can be more objective about the findings of the research. • It emphasizes large samples that can provide an overview of an area and reveal patterns, inconsistencies, and so forth. • It measures levels of occurrence, actions, trends, etc.
• It can provide a clear, quantitative measure to be used for grants and proposals.
• It has precision, is definitive and standardized.
• It can be used when large quantities of data need to be collected.
• It indicates the extensiveness of attitudes held by people.
Disadvantages of Quantitative Research
Quantitative research is very popular and strong, but it also has some disadvantages. It strictly follows statistical relationships, which can overlook broader themes and relationships; as a result, it risks missing important information [Bryman, 2012; Goertz & Mahoney, 2012]. Some disadvantages of quantitative research are as follows [Atieno, 2009; Creswell, 2011; Choy, 2014]: • Results need to be calculated using Excel, Access, or SPSS, which may not always be accessible to a country program. • It can be limited in its pursuit of concrete, statistical relationships.
• The bias occurs earlier in the process of quantitative research.
• The context of the study or experiment is ignored.
• It is difficult to understand context of a phenomenon.
• Related secondary data is sometimes not available.
• Data may not be robust enough to explain complex issues. • It is time-consuming: the larger the sample, the more time it takes to collect the data and analyze the results. • It ignores a very important human element.
Quantitative Evaluation
Quantitative evaluation criteria comprise the combination of validity, reliability and generalizability. Validity is supported when results come from a randomized sample with the assurance that conflicts of interest have been minimized. Reliability is verified with an adequate sample size, so that conclusions can be drawn with precision and accuracy [Mohajan, 2017]. Generalizability is necessary, as it allows the results to be applied to the population at large [Morris & Burkett, 2011].
Ethical Reflections
Ethics pertains to morally good or correct practice and to avoiding any harm that may emanate from a study; it is essential in any research [Lillemoen & Pedersen, 2013]. The strength and integrity of a quantitative study depend on how researchers design their research. Ethics is an important element of any research, concerning the professional regulations and codes of conduct that guide researchers in their dealings with participants [Denzin & Lincoln, 2005].
In any research, respondents are assured that their names and the names of their organizations will be treated in the strictest confidence, and that their trust will not be exploited for personal gain or benefit by deceiving or betraying them in the research [Lubbe, 2003]. Researchers must "do no harm" as they collect data from someone and report findings to someone [Berg & Howard, 2012]. An ethical challenge arises where there are doubts, uncertainties or disagreements about what constitutes morally good or correct practice [Lillemoen & Pedersen, 2013].
Sometimes researchers do not follow ethical practice in this type of research. For example, during World War II, Nazi scientists conducted experiments such as immersing people in ice water to determine how long it would take them to freeze to death. They also injected prisoners with newly developed drugs to test their effectiveness, and many died in the process; such experiments were unethical and inhumane [Christensen, 1988].
According to William Lawrence Neuman, "Ethics begins and ends with you, the researcher". Some general ethical issues in research that result in prohibitions are [Neuman, 2012]: • never cause unnecessary or irreversible harm to participants; • secure prior voluntary consent when possible; and • never unnecessarily humiliate, degrade, or release harmful information about specific individuals that was collected for research purposes. In this study we have tried to maintain ethics strictly, giving proper references in the theoretical analysis and maintaining the ethical formalities throughout [Mohajan, 2018b].
Conclusions and Recommendations
In this study we have tried to discuss the components of quantitative research methodology in a systematic and logical order. We observe that this type of research is highly structured, and that its results are determined numerically or statistically. Researchers prefer quantitative research for its distinctive characteristics and strengths; worldwide, it is one of the most used approaches for conducting natural and social science research.
In the study we have observed that quantitative research methodology is founded on the scientific method: it uses experimental and observed measurements to develop theories and advance knowledge in the research area. In this type of research the variables are clearly defined and the results appear very precise, as they are obtained by mathematical formulae and statistical analyses. We have discussed the historical background, steps and types of quantitative research, and have also briefly reviewed the characteristics, strengths, weaknesses, advantages, disadvantages, and ethical reflections of this type of research.
A researcher must be especially conscientious during data collection and interpretation to avoid bias in quantitative research. We hope that, in the future, quantitative researchers will contribute more knowledge through rigorous and original research, and that this review paper will help new researchers to write quantitative research articles accurately and efficiently. | 2021-01-07T09:05:41.221Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "2f8f6992e8e85023fc232f23ed84558332a62b7b",
"oa_license": "CCBYNCSA",
"oa_url": "http://ojs.spiruharet.ro/index.php/jedep/article/download/679/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc5d4e9c3fb1531dc511ab19370383703ea853fa",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
8359192 | pes2o/s2orc | v3-fos-license | Comparison of performance of Sri Lankan and US children on cognitive and motor scales of the Bayley scales of infant development
Background There is no validated scale to assess the neurodevelopment of infants and children in Sri Lanka. The Bayley III scales have been used widely globally, but they have not been validated for Sri Lankan children. We administered the Cognitive and Motor Scales of the Bayley III to 150 full-term children aged 6, 12 and 24 months from the Gampaha District of Sri Lanka, and compared the performance of these children with that of US children. Results Compared to the US norms, at 12 months Sri Lankan children had significantly higher cognitive scores and lower gross motor scores, and at 24 months significantly lower cognitive scores. The test had high test-retest reliability among Sri Lankan children. Conclusions There were small differences in the cognitive and motor scores between Sri Lankan and US children. It is feasible to use the Bayley III scales to assess the neurodevelopment of Sri Lankan children. However, we recommend that the tool be validated using a larger sample representative of all population groups.
Background
Early intervention in children at risk for developmental delays can lead to better outcomes and improved child well-being. The prevalence of developmental disabilities in Sri Lanka has been estimated to be 12–29%, and the majority of children with clinically identifiable developmental problems in Sri Lanka are referred to child mental health services relatively late, generally when they are over 5 years of age, due to low rates of early recognition [1]. Currently, a culturally validated Denver Developmental Screening Test (DDST) is being used in the Maternal and Child Health programme of the Ministry of Health, Sri Lanka to screen child development. The Denver II has been shown to have limited specificity and high over-referral rates [2] and has been considered a test of questionable value in terms of screening for developmental delay [3]. Furthermore, it is only a screening tool and is not useful for diagnostic purposes. A reliable and valid tool for diagnosing developmental delays in very young children would enable the early identification of children with impairments. The Bayley Scales of Infant Development are a commonly used psychometric tool for assessing the development of children between 1 and 42 months of age. The instrument has been shown to be a valid diagnostic tool for identifying children with developmental delays at an early age and is widely used in clinical settings due to its solid theoretical background and robust psychometric properties [4]. The Bayley scales have been used in various countries such as Brazil [5], Taiwan [4] and Australia [6]; currently the instrument is in its third edition [7]. The Bayley III scales can be used to assess infant and toddler development across five key developmental domains: cognition, language, social-emotional, motor and adaptive behaviour [8]. The interpretation of Bayley scores is based on norms established for children in the United States (US), derived from a national standardized sample of 1700 children between 1 and 42 months of age, divided into 17 age groups.
Despite its wide use and well-established psychometric properties, the Bayley III scale has not been validated in Sri Lanka. Socio-cultural influences can occur early in life and affect the cognitive, sensorimotor and socio-emotional domains of behaviour [9], which may result in differences in the cross-cultural application of the Bayley Scales for the assessment of infants. For example, Taiwanese children had lower scores on the Bayley cognitive and motor scales when compared to the US norms at six and 24 months [4], and Brazilian children aged between one and 12 months had lower motor scores [5]. These differences may be due to heredity, differences in socio-economic status and child-rearing practices [4]. Hence, the use of US norms to assess the development of children from other cultures may not be appropriate; it is important to establish normative data for Sri Lankan children for the instrument to be used as a diagnostic tool in the clinical setting.
This manuscript compares the performance of Sri Lankan children 6, 12 and 24 months of age on the cognitive and motor scales of the Bayley III with that of US children.
Participants
Infants and toddlers aged 6, 12 and 24 months (±2 weeks) residing in the Ragama and Wattala Medical Officer of Health (MOH) areas in the Gampaha District of Sri Lanka participated in the study. Infants who were registered with the Public Health Midwife (PHM) of the area and visiting the community child welfare clinics were recruited for the study. A total of 150 Sri Lankan children participated, with 50 children from each age group. A comparison of 50 Sri Lankan children (SD = 2.0) with 100 US children (SD = 3.0) to detect a difference in means of 1.0, assuming a two-sided alpha error of 0.05, has a power of 68%. Only full-term infants were considered for enrollment, and pre-term infants were excluded. Prematurity was defined as a "gestation period of ≤36 weeks" (Bayley, [7], p28). As the objective of the study was to identify mean scores for apparently normal children, children with a birth weight of <2500 g and children with diagnosed acute, chronic or congenital medical conditions (such as progressive neurological disorders, congenital heart disease, etc.) or with developmental delays based on the medical history and the entries made in the Child Health Development Record (CHDR) were excluded. Children of mothers who experienced complications such as hypertension or gestational diabetes during the pregnancy were also excluded.
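The stated power of 68% can be checked with a normal-approximation calculation using only the figures given in the text; the authors' exact method of computation is not stated and may differ slightly.

```python
# Normal-approximation power check for a two-sample comparison:
# n1 = 50 (SD 2.0) vs n2 = 100 (SD 3.0), true mean difference = 1.0,
# two-sided alpha = 0.05.
import math
from scipy.stats import norm

n1, sd1 = 50, 2.0
n2, sd2 = 100, 3.0
delta, alpha = 1.0, 0.05

se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)   # SE of the mean difference
z_alpha = norm.ppf(1 - alpha / 2)           # 1.96 for alpha = 0.05
power = norm.cdf(delta / se - z_alpha)

print(f"power = {power:.2f}")               # ~0.68, matching the text
```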
Research design
A cross sectional study design was used.
Test administration procedure
Motor behavior is considered to be one of the best indicators of overall well-being in infants in the first year of life [5], and cognitive scales provide an indication as to whether a young child has achieved a typical developmental level. Hence, the cognitive scale and the motor scale of the Bayley III were used to obtain an overall picture of Sri Lankan children's development in this study. The cognitive scale and the fine motor and gross motor subtests of the motor scale of the Bayley III were administered to the selected children by a psychologist and two medical graduates. The test administrators were rigorously trained prior to conducting the study, as recommended in the guidelines of the Bayley III [7]. The psychologist was trained by a licensed clinical psychologist at the University of Alabama at Birmingham, USA; the two medical graduates were trained in Sri Lanka by the psychologist. To ensure the maintenance of accurate test administration and scoring, the following procedures were employed: the initial 20 administrations of the Bayley scales were co-scored by the two testers, and 20% of all test protocols were re-scored for accuracy by the test administration supervisor, i.e., the psychologist. Any tester making more than one mistake on a protocol had their following three protocols re-scored for accuracy. One in every four test administrations was directly observed by the test administration supervisor, who co-scored a protocol as the test was administered.
The instructions to children for the cognitive and motor scales were translated into Sinhala, the local language, by the principal investigator and back-translated to English by an independent person. The two English versions were found to be comparable. Participant children were all administered the test at a designated testing centre ensuring uniform test administration procedures being maintained throughout the study.
The cognitive scale, the fine motor and gross motor subtests of Bayley III were administered on participating children according to the guidelines of the manual [8]. Each administration took between 20 and 90 minutes, depending on the age of the child; the administration of the scale on older children required more time.
The three scales were re-administered to 10% of the total sample (n = 15) to establish the test-retest reliability of the scales. Two testers simultaneously scored the tests for the test-retest sample on both occasions of test administration. Test-retest reliability was assessed by the intra-class correlation (ICC) coefficient, based on a one-way random effects ANOVA model to test for absolute agreement in the ratings, using SPSS.
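As a sketch of the computation described here, the following Python snippet derives ICC(1,1) from a one-way random-effects ANOVA; the formula is the standard Shrout–Fleiss ICC(1,1), and the test–retest scores below are invented.

```python
# Minimal sketch of ICC(1,1) from a one-way random-effects ANOVA;
# rows are children, columns are the two testing occasions (invented).
import numpy as np

scores = np.array([
    [10, 11], [8, 8], [12, 13], [9, 10], [11, 11], [7, 8],
], dtype=float)

n, k = scores.shape
grand = scores.mean()
row_means = scores.mean(axis=1)

# Between-subjects and within-subjects mean squares.
ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))

icc_1_1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc_1_1:.3f}")
```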
Comparison of scores
Raw scores were calculated for each subtest by adding up the scores obtained for that subtest; scaled scores denote a child's performance on a particular subtest compared to his/her same-aged peers. The raw scores calculated for the Bayley cognitive, fine motor and gross motor scales for each participant were converted into scaled scores using the tables in the Bayley manual. The scaled scores, derived from the total raw scores on each of the subtests, have been scaled to a metric with a range of 1 to 19, a mean of 10, and a standard deviation of 3. For any age group, the average performance on a subtest would be a scaled score of 10 [8]. The mean scaled scores and standard deviations for the 3 subscales for each age group were compared with the US norms using t-tests.
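A plausible sketch of this comparison is a one-sample t-test of the scaled scores against the normative mean of 10; the scores below are invented, and the authors' exact t-test specification is not given, so this is illustrative only.

```python
# Illustrative one-sample t-test of invented scaled scores against the
# Bayley normative mean of 10 (the study's exact test may differ).
import numpy as np
from scipy import stats

scaled = np.array([11, 10, 12, 9, 11, 10, 12, 11, 10, 13], dtype=float)

t, p = stats.ttest_1samp(scaled, popmean=10.0)
print(f"mean = {scaled.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```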
Ethical considerations
Ethics approval was obtained from the Ethics Committee of the Faculty of Medicine, University of Kelaniya, Sri Lanka. All participants were volunteers. Written informed consent was obtained from a parent prior to administration of the test to children.
Results
The socio-demographic information of the participant children is presented in Table 1. 57% (n = 85) of the participant children were males (Table 2). 60% (n = 90) of the children had at least one sibling, and of those children having a sibling, 97.8% (n = 88) had an older sibling(s).
There were no differences in the cognitive, fine motor and gross motor scales of Bayley III between 6 month old Sri Lankan (SL) and US children (Table 3). Sri Lankan children scored significantly higher on the cognitive scale (p = 0.001) at 12 months and significantly lower (p = 0.048) at 24 months as compared to US children. At 12 months, Sri Lankan children scored significantly less on the gross motor subtest as compared to US children (p = 0.034); at 24 months, there was no difference in the gross motor subtest scores between the two groups of children. There were no differences in the fine motor subtest scores between the two groups of children at any of the ages considered.
Male and female children performed equally well on the Bayley scales; there was also no difference in performance between children with and without siblings (p >0.05).
The test-retest interval ranged from 5 to 17 days, with a mean test-retest interval of 12.6 days. The intra-class correlation ICC(1,1) coefficients for the cognitive, fine-motor and gross-motor scales were 0.897, 0.914 and 0.905, respectively.
Discussion
The objective of this study was to assess the appropriateness of using the Bayley III among Sri Lankan children by comparing the cognitive and motor scores of Sri Lankan children aged 6, 12 and 24 months with that of US children.
There were some differences in the performance of Sri Lankan and US children on the cognitive and motor scales of the Bayley III. While there were no differences between Sri Lankan and US children at six months in either the cognitive or the motor domains, the cognitive scores of 12-month-old SL children were significantly higher than those of their US counterparts (even after correcting for multiple hypothesis testing using the Bonferroni correction), but their gross motor scores were lower. At 24 months, Sri Lankan children had lower cognitive subtest scores than US children, but there were no differences in the motor scores. There were no differences in any of the subtests at any of the ages between male and female children.
The differences between Sri Lankan and US children in their gross motor scores at 12 months may be partially explained by differences in child-rearing practices. It was observed during the testing procedure that the mothers of 12-month-olds did not encourage their children's attempts to stand and walk, even with support, for fear of them falling. Sri Lankan parents, particularly those from rural areas, are reported to encourage clinging behavior in children, as this keeps the child close to them, and hence safe, and to actively discourage exploratory behavior [10]. Several items of the gross motor scale of the Bayley III require children to stand up with support, stand alone, etc., which may be the reason for SL children performing poorly on these items at this age. There were no differences between Sri Lankan and US children in the motor domain at 24 months, when children had gained more control of their movements and mothers may have been less fearful.
A previous study among Asian children had reported that first birth order and female gender was associated with a child's mental and motor scores on Bayley II [4]. In this study, there were no significant differences in the cognitive or motor scores between male and female children, or between children who had siblings and who did not have siblings. Socio-demographic factors such as father's occupation and maternal education level were shown in a previous study to be associated with infant's mental development [4]. The small sample size in this study did not permit meaningful comparisons between different groups; future studies should be designed to compare the performance of children from different socio-economic groups to identify predictors of infant development in Sri Lanka.
It has been observed that the length of the test can affect the performance of children, particularly at older ages. The recommended time for administration of the Bayley III to children 13 months or older is 90 minutes. In this study, even though only three out of the seven subscales of the Bayley III were administered, in some instances with 24-month-olds the test took about 90 minutes to administer. Some children got bored and lost attention or became restless and non-compliant. In order to maintain children's interest and obtain their optimal performance, test administrators would need to develop strategies to give children breaks between the administration of the different subscales without unduly lengthening the total administration time. It is important for test administrators to have a firm understanding of and experience in testing young children [8]. Test familiarity is a key factor that results in a longer or shorter testing time. If this test is to be used in a clinical setting, test administrators, whether they are psychologists or physicians, need to be rigorously trained in the use of the test. Considering the length of the test, and the training required to administer and interpret the scores, it is not feasible for general physicians to administer this test routinely in their clinical practice. Although there are no established standards for minimum acceptable levels of different statistics of reliability and agreement, the intra-class correlation coefficients exceeded the recommended minimum value of 0.75 [11] for each of the subtests tested, indicating the test-retest stability of the Bayley III in the Sri Lankan setting.
While this study has provided important preliminary information about Sri Lankan children's cognitive and motor development compared to US children, there are clear limitations in generalizing the findings to all Sri Lankan children. The sample was limited to a small geographic area in Sri Lanka; even though children from different ethnic and socio-economic groups were included, all the different subgroups in Sri Lanka may not have been represented. We also did not adjust for multiple hypothesis testing.
Conclusions
There were some differences between the performance of SL and US children on the motor and cognitive scales of the Bayley III, but the differences showed no consistent age-related pattern. It can be concluded that it is feasible to use the Bayley III to assess neurodevelopment in SL children. We recommend that the Bayley III be validated for different age bands, using a larger sample of children from different geographic regions and ethnic backgrounds in Sri Lanka for it to be used as a diagnostic tool in Sri Lanka. | 2017-06-28T08:39:42.355Z | 2014-05-16T00:00:00.000 | {
"year": 2014,
"sha1": "59bf8bda482b4a894f40ee407a6f5219fd1f78a1",
"oa_license": "CCBY",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/1756-0500-7-300",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd9bdb4d66fb6a3a92ffb227d9b894d1e9651aba",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
55630612 | pes2o/s2orc | v3-fos-license | Effects of Four Herbal Plants on Kidney Histomorphology in STZ-induced Diabetic Wistar Rats
Forty-two healthy adult Wistar rats (Rattus norvegicus) with an average weight of 153.4 g were randomly divided into seven groups (n=6). STZ (65 mg/kg) dissolved in citrate buffer was administered intraperitoneally to animals in groups B-G, while animals in group A received an equivalent volume of citrate buffer. Plant extracts (100 mg/kg) were administered daily (orally) to animals in groups C-F and glimepiride (an anti-diabetic drug) to animals in group G for fourteen days. At the end of the study the animals were sacrificed and the kidneys were excised and fixed in 10% formol saline for histology and morphometric analysis.
Introduction
Diabetes mellitus (DM) is a group of metabolic diseases characterized by hyperglycemia (high blood glucose level) which results from defects in insulin secretion, insulin action or both [1]. The chronic hyperglycaemia of DM is associated with long-term damage, dysfunction and failure of various body structures and organs especially the eyes, nerves, heart, blood vessels and also the kidney [1]. Existing therapy for DM are known to provide good glycaemic control, but are believed to do little in regards to the complications to various organs. Besides, these anti diabetic drugs are associated with mild to moderate side effects [2]. In view of this, the present study has investigated the effects of some common plants traditionally used in herbal management of diabetes amongst the Yorubas of Ile-Ife, Nigeria, on the histomorphology of the kidney in STZ-induced Wistar rats.
The herbal plants used for this study were leaves of Vernonia amygdalina, shaft of Citrullus colocynthis seed, leaves of Psidium guajava, and leaves of Ficus mucuso (SPP).
Vernonia amygdalina (VA), commonly called bitter leaf, belongs to the family Asteraceae. It has petiolate leaves of about 6 mm diameter and elliptic shape. The leaves are green with a characteristic odour and a bitter taste [3]. It is called 'Ewuro' by the Yorubas of Nigeria. The leaves have been used in traditional folk medicine as anthelmintic, antimalarial, antimicrobial, anticancer and laxative herbs [4]. Phytochemical substances in VA include oxalates, phytates and tannins [5,6], and also flavonoids [7,8].
Citrullus colocynthis (CC), popularly known as 'bitter apple', 'colocynth', and 'vine-of-Sodom', is a tropical plant belonging to the family Cucurbitaceae [9]. It is also commonly referred to as 'egusi' amongst the Yorubas of Nigeria. In traditional medicine, it has been used in the treatment of constipation [10], diabetes [11], oedema, fever, jaundice, leukaemia, bacterial infections and cancer, and as an abortifacient [12].
Psidium guajava (PG) is a semi-deciduous tropical tree commonly known as 'guava' belonging to the family Myrtaceae. Its phytochemical constituents have been shown to include vitamins C, B1, B2, and B6, and free sugars [13]. Guava fruits have been shown to have antioxidant properties [14] and to possess hypoglycaemic effects in diabetic mice and human volunteers [13]. Studies have indicated the presence of various flavonoids, terpenoids and their glycosides [15,16], and these compounds have been shown to be antidiabetic [17,18].
Ficus mucuso (FM) belongs to the family Moraceae. The Ficus genus has a wide distribution and is used traditionally as medicine, vegetable, food, fodder and fuel wood [19]. Phytochemical analyses of FM have revealed the presence of monoterpenoids and flavonoids [20].
Materials and Methods
Forty-two healthy adult Wistar rats (Rattus norvegicus) with an average weight of 153.4 g were procured from the animal house of the College of Health Sciences, Obafemi Awolowo University, Ile-Ife, Osun State. The animals were kept under standard laboratory conditions of good lighting, moderate temperature, and adequate ventilation in a hygienic environment. They were fed on standard rat chow containing proteins, carbohydrate, fats, vitamins and minerals. The animals were handled under standard laboratory protocols as stipulated by the Institutional Animal Care and Use Committee (IACUC, 2010).
Animal grouping and treatment
The animals were randomly divided into seven groups of 6 animals each • Group A -control normal rats administered with equivalent volume of citrate buffer.
• Group B -experimentally-induced diabetic rats were administered with single intraperitoneal injection of streptozotocin (65 mg/kg), • Group C -experimentally-induced diabetic rats (65 mg/kg) treated with aqueous extract of VA leaves (100 mg/kg) orally, dissolved in normal saline for 14 days, • Group D -experimentally-induced diabetic rats (65 mg/kg) treated with aqueous extract of shaft of CC seeds (100 mg/kg) orally, dissolved in normal saline for 14 days, • Group E -experimentally-induced diabetic rats (65 mg/kg) treated with aqueous extract of PG (100 mg/kg) orally, dissolved in normal saline for 14 days, • Group F -experimentally-induced diabetic rats (65 mg/ kg) treated with aqueous extract of FM (100 mg/kg) orally, dissolved in normal saline for 14 days, • Group G -experimentally-induced diabetic rats (65 mg/ kg) treated with a standard antidiabetic drug (2 mg/kg of glimepiride) orally, dissolved in normal saline for 14 days.
Plants materials
Preparation of extracts: The plant materials were procured from a local market in the Ile-Ife metropolis in Osun State, Nigeria, and taken to the herbarium in the Department of Botany, Obafemi Awolowo University, Nigeria, to confirm identification. The leaves and shaft were air-dried and powdered in a Waring blender. Extracts were prepared by soaking the powdered leaves of VA (425 g), PG (970 g) and FM (370 g) and the shaft of CC (615 g) in 2.9 L, 3.19 L, 3.5 L and 2.2 L of solvent, respectively, for 72 hr with intermittent shaking. Thereafter, the solution was filtered using filter paper. The filtrate was then concentrated in vacuo at 35°C using a rotary vacuum evaporator (Buchi Rotavapor, R110 Schweiz). The extracts were oven-dried at 37°C, and the respective yields (3.00 g, 2.65 g, 5.34 g and 1.76 g) were stored until ready for use. Aliquots of each extract were weighed and dissolved in normal saline for use on each day of the experiment.
Induction of diabetes
Diabetes mellitus was experimentally induced in groups B, C, D, E, F, and G by a single intraperitoneal injection of 65 mg/kg body weight of streptozotocin (Tocris Bioscience, UK) dissolved in 0.1 M sodium citrate buffer (pH 6.3) [21]. Diabetes was confirmed 48 hours after induction by determining the fasting blood glucose level with a digital glucometer (Accu-chek® Advantage, Roche Diagnostics, Germany), consisting of a digital meter and test strips, using blood samples obtained from the tail vein of the rats. The animals were stabilized for twenty-eight days before the commencement of extract and glimepiride administration, and fasting blood glucose was subsequently monitored throughout the experimental period. Animals in group A were given an equal volume of the citrate buffer used to dissolve streptozotocin, intraperitoneally.
Method of administration of extracts
The animals were fed orally using an orogastric tube. Each animal was held with a gloved left hand such that the fingers steadied the neck region while it was being fed. Treatment was given at 07.00 hours every day, before the animals were fed, over a period of two weeks (14 days).
Sacrifice and specimen collection
The animals were sacrificed by cervical dislocation 24 hours after the end of the experimental period. The kidneys, reddish-brown organs situated retroperitoneally on each side of the vertebral column, were excised following a midline abdominal incision and weighed.
Histological evaluation
The harvested kidneys were fixed in 10% formol saline for a minimum of 48 hours and processed routinely for paraffin embedding. Serial sections were obtained at 5 µm on a rotary microtome (Bright B5040, Huntington, England) and stained using the routine haematoxylin and eosin method. Stained sections were viewed under a LEICA digital microscope (DM 750) and photomicrographs were taken with the aid of an attached camera (Leica ICC50).
Histomorphometric analysis
The stained sections were subjected to morphometric analysis as recommended by the World Health Organization (W.H.O.) [22]. The procedure was as follows: the eyepiece scale (occulometer) was divided into 100 small divisions; the stage micrometer scale comprised 1 mm divided into 0.1 mm divisions, each of which was further divided into 0.01 mm; the eyepiece scale was inserted into the eyepiece of the microscope by removing the superior lens, thus placing the scale on the field stop; the stage micrometer was placed on the stage of the microscope; the stage scale was focused using the low-power objective lens (x4); the stage and eyepiece scales were adjusted until the two scales were parallel; and the number of eyepiece divisions corresponding to a known stage measurement was noted (for example, if 70 occulometer divisions equalled 14 μm, the objective lens was thereby calibrated, and all objective lenses were calibrated in this way). Calibration was repeated for each microscope used. The occulometer fitted to the Olympus microscope was then focused through the stained tissue sections to allow measurement of the parameters.
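A tiny worked example of the calibration arithmetic described above, using the 70-divisions-to-14-μm figure quoted in the text; the measured reading is hypothetical.

```python
# Calibration factor from the stated figures, then a sample conversion.
divisions_eyepiece = 70
stage_length_um = 14.0

um_per_division = stage_length_um / divisions_eyepiece    # 0.2 um/division

measured_divisions = 55                                   # hypothetical reading
print(f"{measured_divisions * um_per_division:.1f} um")   # 11.0 um
```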
Statistical analysis
Data were expressed as mean ± SEM. Data were analysed using One-way ANOVA, followed by Student Newman-Keuls (SNK) test for multiple comparisons. Significant difference was taken as p<0.05.
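As a sketch of this analysis pipeline, the snippet below runs a one-way ANOVA on invented relative-weight data; since the Student–Newman–Keuls test is not available in scipy or statsmodels, Tukey's HSD is shown as a stand-in post-hoc test (the study itself used SNK).

```python
# One-way ANOVA followed by a pairwise post-hoc comparison; the
# relative-weight values for three groups are invented for illustration.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

group_a = [0.72, 0.70, 0.75, 0.73, 0.71, 0.74]   # hypothetical values
group_b = [0.60, 0.58, 0.62, 0.59, 0.61, 0.57]
group_c = [0.65, 0.66, 0.63, 0.64, 0.67, 0.62]

f, p = f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")

values = np.concatenate([group_a, group_b, group_c])
labels = ["A"] * 6 + ["B"] * 6 + ["C"] * 6
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```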
Effects of extracts on relative weight of kidney
As shown in Table 1, the relative weights of the kidneys were significantly reduced in all diabetic groups (B–G) compared to the control group (A).
Effects of extracts on kidney histology
As shown in Figure 1, control animals showed normal kidney histology: the glomeruli were well demonstrated with normal Bowman's spaces, and the renal tubules filling the bulk of the kidney parenchyma were clearly observed. The diabetic kidneys in Group B showed atrophy of the glomeruli, while the tubules were fairly preserved. Administration of the extracts and the antidiabetic drug improved cellular regeneration, which was most prominent in Groups C and G.
Effects of extracts on histomorphometric glomerular density
As shown in Table 2, there was a significant decrease (p<0.05) in the glomerular density of diabetic animals (Group B) compared to the control, extract-treated (Groups D, E, F) and drug-treated groups. Group C (treated with VA extract) showed no significant improvement (p>0.05) in glomerular density, while Group E showed a significant increase in density compared to control.
Discussion
Long-term damage, dysfunction and failure of the kidneys are major complications of diabetes mellitus [1]. Disorders of the kidneys are a serious secondary consequence of diabetes, resulting in end-stage renal disease. Increased glucose levels in the blood have been shown to lead to oxidative stress, which is considered one of the causative factors for diabetes-associated kidney disorders [23]. STZ-induced diabetic rodents develop kidney disorders similar to the early stage of human diabetes-associated kidney disease [24]. Renal hypertrophy has been reported in diabetes [23].
Diabetic nephropathy is also known to cause renal failure, thus contributing to mortality and morbidity.
However, the histological and histomorphometric evaluation in the present study shows atrophy rather than hypertrophy of the glomeruli of diabetic animals, validated by the significant decrease in glomerular density and by glomerular shrinkage. These observations were also characterized by diminished cellular proliferation, decreased cellular volume and ischemia.
The primary function of the glomeruli is to produce an ultrafiltrate of the plasma, containing solutes such as Na+, water and urea, for further processing by the renal tubules, thus playing a vital role in the maintenance of fluid and electrolyte homeostasis.
Administration of the extracts improved the histoarchitecture of the kidney and, by extension, restored its functionality. The group administered the PG extract demonstrated a distinctly greater regenerative capacity than the other three extract groups, followed closely by the group administered the FM extract.
Previous studies have reported similar histopathological findings [24,25]. The plant extracts used in this study come from common herbal plants used traditionally in the management of diabetes among the Yorubas of Ile-Ife, Nigeria. Three of these plants (VA, CC and PG) have been reported to possess anti-diabetic properties [3,9,13]. All four medicinal plants used in this study are well known for their antioxidant properties, which are attributed to their high flavonoid content [7,8,11,15,20]. The present study provides useful information for the management of kidney-related disorders resulting from diabetes. | 2019-03-13T13:31:27.713Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "db06d6c4373b11bd74d2ea69536d9b6c0d6d53e2",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/effects-of-four-herbal-plants-on-kidney-histomorphology-in-stzinduced-diabetic-wistar-rats-2157-7099.1000210.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "386933863558505ce1828cfe9d2265038a06f34a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13670165 | pes2o/s2orc | v3-fos-license | Fe I Oscillator Strengths for the Gaia-ESO Survey
The Gaia-ESO Public Spectroscopic Survey (GES) is conducting a large-scale study of multi-element chemical abundances of some 100 000 stars in the Milky Way with the ultimate aim of quantifying the formation history and evolution of young, mature and ancient Galactic populations. However, in preparing for the analysis of GES spectra, it has been noted that atomic oscillator strengths of important Fe I lines required to correctly model stellar line intensities are missing from the atomic database. Here, we present new experimental oscillator strengths derived from branching fractions and level lifetimes, for 142 transitions of Fe I between 3526 Å and 10864 Å, of which at least 38 are urgently needed by GES. We also assess the impact of these new data on solar spectral synthesis and demonstrate that for 36 lines that appear unblended in the Sun, Fe abundance measurements yield a small line-by-line scatter (0.08 dex) with a mean abundance of 7.44 dex in good agreement with recent publications.
INTRODUCTION
The Gaia-ESO Public Spectroscopic Survey (GES) is currently taking place at the European Southern Observatory (ESO), employing the Fibre Large Array Multi Element Spectrograph (FLAMES) instrument at the Very Large Telescope (VLT) facility. Its aim is to obtain high quality spectroscopy of some 100 000 stars from all major components of the Milky Way to quantify the "kinematic multichemical element abundance distribution functions of the Milky Way Bulge, the thick Disc, the thin Disc, and the Halo stellar components, as well as a very significant sample of 100 open clusters" (Gilmore et al. 2012). Over the course of the survey, chemical abundances will be measured for alpha and iron-peak elements in all stars with visual magnitude less than nineteen. These data will probe stellar nucleosynthesis by examining nuclear statistical equilibrium and the alpha-chain. Ultimately, the abundances and radial velocities will be combined with high-precision position and proper motion measurements from the European Space Agency's Gaia mission, to "quantify the formation history and evolution of young, mature and ancient Galactic populations" (Perryman et al. 2001). Gilmore et al. (2012) also state that "Considerable effort will be invested in abundance calibration and ESO archive re-analysis to ensure maximum future utility." To achieve these high-level aims, it is vital that fundamental atomic data be available for lines in the GES spectral range: 4800 Å to 6800 Å for measurements with the high-resolution FLAMES Ultraviolet and Visual Echelle Spectrograph (UVES) and 8500 Å to 9000 Å for measurements with the mid-resolution FLAMES Giraffe spectrograph. The availability of absorption oscillator strengths, f (usually used as log(gf), where g is the statistical weight of the lower level), is particularly important for the correct modelling and analysis of stellar line intensities; especially so for abundant elements such as iron, which is also used to infer fundamental stellar parameters.
However, in preparing a list of iron lines to be targeted during the analysis of GES spectra, the GES line list team noted that of 449 well-resolved lines of neutral iron (Fe I) expected to be visible with sufficient signal-to-noise ratio, only 167 have published log(gf) values measured in the laboratory with uncertainties below 25 %. Experimental log(gf) values with large uncertainties (greater than 50 % in many cases) were available for an additional 162 lines. For the final 120 lines, no experimental log(gf )s were available at all. A similar observation was made by Bigot and Thévenin (2006) for lines of interest to the Gaia mission.
As a result of this inadequacy in the atomic database, and similar inadequacies observed by other astronomers (see Ruffoni et al. (2013a) and Pickering et al. (2011), for example), we have undertaken a new study of the Fe I spectrum with the aim of providing accurate log(gf ) values for lines of astrophysical significance. In Section 3 of this paper, we report accurate log(gf )s for 142 Fe I lines, 64 of which have been measured experimentally for the first time. The log(gf ) values of at least 38 of these lines are urgently needed for the GES survey.
EXPERIMENTAL PROCEDURE
Typically, log(gf)s are obtained in the laboratory from measurements of atomic transition probabilities, A (Thorne et al. 2007),

log(gf) = log(1.499 × 10⁻¹⁴ λ² g_u A_ul),  (1)

where the subscript u denotes a target upper energy level, and ul, a transition from this level to a lower level, l, that results in emission of photons of wavelength λ (nm). g_u is the statistical weight of the upper level. The A_ul values are found by combining experimental branching fractions, BF_ul, with radiative lifetimes, τ_u (Huber and Sandeman 1986),

A_ul = BF_ul / τ_u.  (2)
The BF_ul for a given transition is the ratio of its A_ul to the sum of all A_ul associated with u. This is equivalent to the ratio of observed relative line intensities in photons/s for these transitions,

BF_ul = A_ul / Σ_l A_ul = I_ul / Σ_l I_ul.  (3)
This approach does not depend on any form of equilibrium in the population distribution over different levels, but it is essential that all significant transitions from u be included in the sum over l. The BFs measured for this work were extracted from Fe I spectra acquired by Fourier transform (FT) spectroscopy, as described in Section 2.1. The radiative lifetimes required to solve Equation 2 were obtained through laser induced fluorescence (LIF), and are discussed in Section 2.2.
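For illustration, Equations 1 and 2 can be combined in a few lines of code. This is a minimal sketch only; the line parameters in the example (BF = 0.35, τ = 15 ns, λ = 520 nm, g_u = 9) are hypothetical and not taken from Table 3.

```python
import math

def log_gf(bf_ul, tau_u_s, wavelength_nm, g_u):
    """Combine a branching fraction and an upper-level radiative
    lifetime into an absorption oscillator strength, log(gf):
    A_ul = BF_ul / tau_u (Eq. 2), then
    log(gf) = log10(1.499e-14 * lambda_nm**2 * g_u * A_ul) (Eq. 1)."""
    a_ul = bf_ul / tau_u_s  # transition probability in s^-1
    return math.log10(1.499e-14 * wavelength_nm**2 * g_u * a_ul)

# Hypothetical line: BF = 0.35, tau = 15 ns, 520 nm, g_u = 9
print(round(log_gf(0.35, 15e-9, 520.0, 9), 2))  # ~ -0.07
```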
Branching Fraction Measurements
The BFs reported here were obtained from Fe I emission line spectra measured in two overlapping spectral ranges between 8200 cm −1 and 35500 cm −1 (between 1220 nm and 282 nm), labelled A and B in Table 1.
Spectrum A was measured between 8200 cm⁻¹ and 25500 cm⁻¹ (3920.5 Å and 12191.8 Å) on the 2 m Fourier transform (FT) spectrometer at the National Institute of Standards and Technology (NIST) (Nave et al. 1997). The Fe I emission was generated from an iron cathode mounted in a water cooled hollow cathode lamp (HCL) running at a current of 2.0 A in a Ne atmosphere of 370 Pa pressure. The response of the spectrometer as a function of wavenumber was obtained by measuring the spectrum of a calibrated tungsten (W) halogen lamp with spectral radiance known to ±1.1 % between 250 nm and 2400 nm. W lamp spectra were acquired both before and after measurements of the Fe/Ne HCL spectrum to verify that the spectrometer response remained stable. 220 individual Fe/Ne HCL spectra were acquired over two days and coadded to improve the signal-to-noise ratio of weak lines. However, due to different detector configurations being used on each day, the spectrometer response function varied significantly between the two. As a result, the files Fe080311.001 to .003 (containing 110 spectra, acquired with a Si photodiode on each output of the FT spectrometer) were coadded and intensity calibrated using the spectral response function labelled "Spectrum A (1)" in Figure 1, while Fe080411 B.001 to .003 (containing the remaining 110 spectra, acquired with a single Si photodiode mounted on the unbalanced output of the FT spectrometer) were coadded and calibrated using the response function labelled "Spectrum A (2)". These response functions were obtained with the aid of the FAST package (Ruffoni 2013b). The two intensity calibrated line spectra were then themselves coadded to produce the final spectrum.
Spectrum B was measured between 20000 cm −1 and 35500 cm −1 (2816.1Å and 4998.6Å) on the Imperial College VUV spectrometer (Thorne et al. 1996), which is based on the laboratory prototype designed by Thorne et al. (1987). The Fe emission was generated from an iron cathode mounted in a new HCL designed and manufactured at Imperial College London (IC). The lamp was operated at 700 mA in a Ne atmosphere of 170 Pa pressure to provide reasonable signal-to-noise ratio in the weaker lines while avoiding self-absorption effects in the stronger lines.
The spectrometer response function for spectrum B is also shown in Figure 1, and was again obtained from a calibrated W lamp, measured before and after each Fe/Ne HCL measurement. Uncertainties in the relative spectral radiance of the W lamp used at IC, and calibrated by the National Physical Laboratory (NPL), do not exceed ±1.4 % between 410 nm and 800 nm, and rise to ±2.8 % at 300 nm.
Many of the upper energy levels studied here are linked to transitions that produced spectral lines contained entirely within the range of either spectrum A or B. In these cases, all branching fractions pertaining to those levels were derived from a single spectrum. Where lines associated with a given upper level spanned both spectra, their intensities were put on a common relative scale by comparing the intensity of lines in the overlap region between the two spectra. The process of intensity calibrating overlapping spectra is discussed in detail in Pickering et al. (2001a) and Pickering et al. (2001b).
For each target upper level, the predicted transitions to lower levels were obtained from the semi-empirical calculations of Kurucz (2007). Emission lines from these transitions were then identified in our Fe spectra, and the XGremlin package (Nave et al. 1997) was used to fit Voigt profiles to those that were observed above the noise limit. The residuals from each fit were examined to ensure that the observed line profiles were free from self-absorption and not blended with other features.
The spectra and fit results from XGremlin were then loaded into the FAST package (Ruffoni 2013b), where the BFs for each observed target line were measured. Lines which were too weak to be observed (typically those predicted by Kurucz (2007) to contribute less than 1 % of the total upper level BF) were not considered, nor were lines that were either blended or outside the measured spectral range. Their predicted contribution to the total BF was assigned to a 'residual' value, which was used to scale the sum over l of the measured line intensity, I_ul, in Equation 3.
[Table 1 notes: The identification of commercial products does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the items identified are necessarily the best available for the purpose. (a) The equivalent wavelength ranges are: Spectrum A, λ = 3920.5 Å to 12191.8 Å; Spectrum B, λ = 2816.1 Å to 4998.6 Å. (b) The named spectra were coadded to improve the signal-to-noise ratio of weak lines.]
The calculation of experimental uncertainties in BFs measured by FT spectroscopy with FAST has been discussed in our recent papers (Ruffoni et al. 2013a; Ruffoni 2013b). The uncertainty in a given BF, ΔBF_ul, is

(ΔBF_ul / BF_ul)² = (1 − 2BF_ul)(ΔI_ul / I_ul)² + Σ_j (BF_uj)² (ΔI_uj / I_uj)²,  (4)

where I_ul is the calibrated relative intensity of the emission line associated with the electronic transition from level u to level l, ΔI_ul is the uncertainty in intensity of this line due to its measured signal-to-noise ratio and the uncertainty in the intensity of the standard lamp, and the sum runs over all branches from level u. From Equation 2, it then follows that the uncertainty in A_ul is

(ΔA_ul / A_ul)² = (ΔBF_ul / BF_ul)² + (Δτ_u / τ_u)²,  (5)

where Δτ_u is the uncertainty in our measured upper level lifetime. Finally, the uncertainty in log(gf) of a given line is

Δlog(gf) = log₁₀(e) (ΔA_ul / A_ul).  (6)
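To make the propagation concrete, here is a minimal sketch in code form; the branch data are hypothetical, and the 5 % lifetime uncertainty simply mirrors the LIF accuracy quoted in Section 2.2:

```python
import math

def log_gf_uncertainty(bf_k, d_ik_rel, all_branches, d_tau_rel):
    """Propagate intensity and lifetime uncertainties into log(gf),
    following Eqs. 4-6 above.

    bf_k         : branching fraction of the line of interest
    d_ik_rel     : relative intensity uncertainty of that line
    all_branches : (BF_j, dI_j/I_j) for ALL branches from the upper
                   level, including the line of interest itself
    d_tau_rel    : relative uncertainty of the upper-level lifetime
    """
    var_bf = (1 - 2 * bf_k) * d_ik_rel**2                 # Eq. 4, first term
    var_bf += sum(bf**2 * di**2 for bf, di in all_branches)
    d_a_rel = math.hypot(math.sqrt(var_bf), d_tau_rel)    # Eq. 5
    return math.log10(math.e) * d_a_rel                   # Eq. 6

# Hypothetical level with three branches; lifetime known to 5 %
branches = [(0.35, 0.03), (0.50, 0.02), (0.15, 0.10)]
print(round(log_gf_uncertainty(0.35, 0.03, branches, 0.05), 3))  # ~0.025 dex
```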
Upper Level Radiative Lifetimes
Radiative lifetimes are measured to ±5 % using time-resolved laser-induced fluorescence (LIF) on an atomic beam of iron atoms. A diagram of the apparatus is shown in O'Brian et al. (1991). The beam is produced by sputtering iron atoms in a hollow cathode discharge. The electrical discharge is operated in ≈50 Pa argon gas. A DC current of ≈30 mA maintains the discharge between ≈10 A, 10 µs duration pulses at 30 Hz repetition rate. The hollow cathode, which is lined with a foil of pure iron, is closed on one end except for a 1 mm hole which is flared on one side to act as a nozzle. Energetic argon ions accelerated through the cathode fall potential efficiently sputter the iron from the surface of the cathode. The iron atoms (neutral as well as singly-ionized) are differentially pumped through the nozzle amidst a flow of argon gas into a low pressure (10⁻² Pa) scattering chamber. This "beam" is slow (neutrals are moving ∼5 × 10⁴ cm/s and ions somewhat faster) and weakly collimated. Measurement of the odd-parity level lifetime required single-step laser excitation. In this technique the atomic beam is intersected at right angles by a single beam from a nitrogen-laser-pumped dye laser 1 cm below the nozzle. The delay between the discharge pulse and the laser pulse is adjustable, and optimized typically at ∼20 µs, which corresponds to the average transit time of the iron atoms. The scattering volume is at the center of a set of Helmholtz coils which zeroes the magnetic field to within ±2 µT. This very low field ensures that the excited iron atoms do not precess about the Earth's magnetic field, thus eliminating the potential for Zeeman quantum beats in the fluorescence. The dye laser is tunable over the range 205 nm to 720 nm using a large selection of dyes as well as frequency doubling crystals. It has a bandwidth of ∼0.2 cm⁻¹, a half-width duration of ∼3 ns and, more importantly for this work, terminates completely in a few ns. The laser allows for selective excitation of the level under study, eliminating the problem of cascade from higher-lying levels that plagued earlier, non-selective techniques. The laser is tuned to a transition between the ground state or a low-lying metastable level and the level under study. Identifying the correct transition is non-trivial, particularly for a dense, line rich spectrum such as Fe I and Fe II. The laser is tuned to within ≈0.1 nm of the transition by adjusting the angle of the grating, which is the tuning element of the laser, while measuring the wavelength with a 0.5 m monochromator. A LIF spectrum of 0.5 nm to 1.0 nm range is then recorded using a boxcar averager by slowly changing the pressure in an enclosed volume surrounding the grating. Pressure scanning provides exceptional linearity and reproducibility. The pressure scanned spectrum is then compared to the published linelist from the NIST database to correctly identify the line of interest.
Fluorescence is collected in a direction mutually orthogonal to the atomic and laser beams through a pair of fused-silica lenses comprising an f/1 optical system. A spectral filter, either a broadband colored-glass filter or a narrowband multilayer dielectric filter, is inserted between the two lenses where the fluorescence is approximately collimated. The filter is chosen to maximize fluorescence throughput while reducing or eliminating scattered laser light and eliminating possible cascade from lower-lying levels. Fluorescence is focused onto the photocathode of a RCA 1P28A photomultiplier tube (PMT) and the PMT signal is recorded using a Tektronix SCD1000 transient digitizer. The bandwidth of the PMT, digitizer and associated electronics is adequate to measure lifetimes down to ∼2 ns. The lifetimes reported here are in the 10 ns to 25 ns range and are well within the bandwidth limits. The characteristics of this PMT, i.e. fast rise time and high spectral response in the UV and visible, are favourable for radiative lifetime measurements.
The digitizer is triggered with the signal from a fast photodiode which is illuminated by light picked off from the nitrogen laser. Recording of the fluorescence by the digitizer is delayed until after the dye laser pulse has completely terminated, making deconvolution of the laser temporal profile and fluorescence signals unnecessary. Each data record consists of an average of 640 fluorescence decays followed by an average of 640 background traces with the laser tuned off-line. The data is divided into an early time and a late time interval for analysis. A linear least-square fit to a single exponential is performed on the background subtracted fluorescence decay to determine a lifetime for each interval. Comparison of the lifetimes in the two intervals is a sensitive indicator of whether the decay is a clean exponential or whether some systematic effect has rendered it non-exponential. Five of these decay times are averaged together to determine the lifetime. The lifetime of each odd-parity level is measured twice, using two different laser transitions. This redundancy helps to ensure that the transition is classified correctly, free from blends, and is identified correctly in the experiment.
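As a concrete illustration of the fitting step, the sketch below applies the same linear least-squares approach to a synthetic, background-subtracted decay (all numbers hypothetical):

```python
import numpy as np

def fit_lifetime(t_ns, signal, background):
    """Fit a single exponential by linear least squares on the log of
    the background-subtracted decay: ln(S) = ln(S0) - t/tau, so the
    slope of ln(S) versus t gives -1/tau."""
    s = np.asarray(signal, float) - np.asarray(background, float)
    mask = s > 0                          # ln() requires positive values
    slope, _ = np.polyfit(np.asarray(t_ns, float)[mask], np.log(s[mask]), 1)
    return -1.0 / slope                   # lifetime in ns

# Synthetic decay with tau = 18 ns on a flat background of 5 counts
t = np.linspace(0.0, 100.0, 200)
sig = 1000.0 * np.exp(-t / 18.0) + 5.0
print(round(fit_lifetime(t, sig, np.full_like(t, 5.0)), 1))  # 18.0
```

In the experiment this fit is performed separately on the early-time and late-time halves of the record, and agreement between the two lifetimes serves as the check for a clean exponential described above.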
Measurement of the even-parity levels reported here required two-step laser excitation. The introduction of a second laser results in an added layer of complexity in the excitation of the level and timing of the experiment, as well as more stringent requirements for the filtering of the fluorescence. The fluorescence detection, recording and analysis are identical to those of the one-laser experiment. While it is possible to pump two dye lasers using one nitrogen laser, this limits the power available in either laser beam. Instead, we used two dye lasers, each with its own nitrogen laser pump. The delay generator which, in the one-laser experiment, is used to trigger the laser ∼20 µs after the discharge pulse, is in this case used to trigger a second dual gate generator that has very precise timing (±1 ns) between its two gates. These gates are used to trigger the two nitrogen lasers which pump the dye lasers. Because the two nitrogen lasers have different thyratron charging and firing mechanisms, there is a substantial amount of timing jitter (approximately ±20 ns) between the resulting dye laser pulses. This jitter results in some additional shot-to-shot fluctuation in the final measurement as the population in the intermediate level has decayed more or less from its peak. The lifetimes of all but one of the intermediate levels used are substantially longer than this jitter (60 ns to 85 ns as measured by O'Brian et al. (1991)), so the added shot-to-shot noise was not severe. Even the measurement with the short-lived (9.6 ns as measured by O'Brian et al. (1991)) intermediate level had only ∼2 % statistical scatter in the final average. The delay between the two lasers is adjusted such that the laser which drives the transition from the intermediate level to the even-parity level being studied (laser 2) arrives on average ∼20 ns after that which drives a transition between the ground or low-lying metastable level and an intermediate odd-parity level (laser 1). The trigger signal for the boxcar and digitizer was from the fast photodiode illuminated with light from the laser 2 nitrogen laser.
The two lasers are sent through the scattering chamber at slight angles relative to each other, such that they intersect in the viewing volume. Once laser 1 is tuned onto the appropriate transition to drive the intermediate level, it is left there for the duration of the measurement. A narrowband, multilayer dielectric filter is inserted in the collection optics which completely blocks fluorescence from the intermediate level but transmits fluorescence from the upper level. Laser 2 was tuned on and off the transition to provide the fluorescence and background traces as in the one-step experiment. The fluorescence was observed to go away when either laser 1 or laser 2 was blocked and the other laser was allowed to pass through the system, ensuring that it was indeed from a two-step process. This provides the same assurance that the correct lifetime is being measured as the redundant measurement gives in the one-step experiment. Each two-step lifetime was therefore measured only once. Systematic effects such as Zeeman quantum beats and bandwidth limits are well-studied and controlled in the experiment. Another effect, the flight-out-of-view effect, is caused by atoms leaving the viewing volume before fluorescing. This effect is only a problem for long lifetimes, greater than 300 ns for neutrals and greater than 100 ns for ions, and is not a problem for the current set of lifetimes. In addition to understanding and minimizing these systematics, we also regularly measure a set of benchmark lifetimes, to compare our measured values to the known lifetimes. These benchmarks are lifetimes that are either very well known from theoretical calculations, or from an experiment which has significantly smaller and generally different systematic uncertainties from our own. For the current set of lifetimes, we measured three benchmarks which approximately bracketed the range of values reported here. These are: the 2p ²P₃/₂ level of singly ionized Be at 8.8519(8) ns (variational method calculation (Yan et al. 1998)); the 3p ²P₃/₂ level of neutral Na at 16.23(1) ns (accuracy of 0.1 % at 90 % confidence level) taken from the recent NIST critical compilation of Kelleher & Podobedova (2008); and the 2p₂ 4p′[1/2]₁ level of neutral Ar at 27.85(7) ns (beam-gas-laser-spectroscopy (Volz & Schmoranzer 1998)). Benchmarks are measured in exactly the same way as the Fe I lifetimes except that the cathode lining is changed in the cases of the Be⁺ and Na measurements. With these benchmarks we are able to quantify and make small corrections for any residual systematic effects, ensuring that our final results are well within the stated uncertainty of ±5 %. A recent comparison of LIF measurements in Sm II by Lawler et al. (2008) suggests that the ±5 % is a conservative estimate of the lifetime uncertainty.
The lifetime results are given in Table 2. A total of 1 odd-parity and 8 even-parity level lifetimes were measured, most for the first time. The even-parity e⁵D₄ level at 44677.003 cm⁻¹ was also measured by Marek et al. (1979) using delayed coincidence detection after laser excitation, and agrees with our lifetime to about ±1 %. This good level of agreement is what we have come to expect between modern, laser-based methods.
[Table 2 (caption fragment and notes): Radiative lifetimes for odd-parity Fe I levels using single-step excitation; column headers include 'Step 1' and 'Step 2'. Note: the configuration, term, and energy level data are taken from Nave et al. (1994). (b) Fluorescence was observed through ∼10 nm bandpass multi-layer dielectric filters; the filter angle was adjusted where needed to centre the bandpass at the indicated wavelength. (c) Marek et al. (1979).]
RESULTS
Table 2 lists the Fe I upper levels that were targeted in this study. They were selected because their branches to lower levels produce many spectral lines of interest to the GES survey that currently have either no experimentally measured log(gf) value in the literature, or a log(gf) known to worse than ±25 %. We also included two levels, those at 43633.530 cm⁻¹ and 51461.667 cm⁻¹, for which accurate lifetimes and log(gf)s were reported by O'Brian et al. (1991). These served primarily as a means to check the accuracy of log(gf)s produced with the aid of the FAST code, but in re-measuring them we were also able to improve upon the experimental uncertainty achieved by O'Brian et al. (1991) and provide log(gf)s for a number of weaker branches not included in their paper. Some further lines reported by O'Brian et al. (1991) appear in branches from other upper levels, as do a few lines for which accurate log(gf)s were reported by Blackwell et al. (1982) and Bard et al. (1991). Again, these served as a means to check the accuracy of our results.
Our measured branching fractions, transition probabilities, and log(gf )s are listed in Table 3 along with the most accurate log(gf )s previously available in the literature. The lower level terms, and transition vacuum wavenumbers and air wavelengths were taken from Nave et al. (1994), where possible. For the small number of lines not included in Nave et al. (1994), the transition vacuum wavenumber and air wavelength shown were obtained from our FT spectra by calibrating the measured wavenumber scale to match the calibrated scale used by Nave et al. (1994). These lines are marked in Table 3 by a '*' in the Lower Level column. Table 3 is sorted in order of ascending transition wavenumber, with lines grouped by common upper level energy. For each set of lines, the upper level energy, configuration, term, and J value, and measured lifetime are given as a header row. The unobserved 'residual' BF, described in Section 2.1, is given in the BF column at the end of each set. The lines that contribute to these residuals are given in Table 4 where they have either been observed in previous studies, or predicted by Kurucz (2007) to contribute more than 1 % to the total BF. Reasons for their omission in this study are given.
For six of the eleven upper levels (those at 43633.530 cm⁻¹, 44677.003 cm⁻¹, 47377.952 cm⁻¹, 47960.937 cm⁻¹, 48702.532 cm⁻¹, and 51294.217 cm⁻¹) the residual BF amounted to less than 5%, and arose solely from lines predicted by Kurucz (2007) to contribute to the total set of branches that were too weak to be observed experimentally. For the remaining five levels (those at 51461.667 cm⁻¹, 51770.554 cm⁻¹, 52039.889 cm⁻¹, 52067.446 cm⁻¹, and 54683.318 cm⁻¹) a large majority of branches were observed, but at least one stronger line was unavailable due to being unobserved above the spectral noise, blended with another line, or significantly separated in wavenumber from the rest of the branches (which prevents correct intensity calibration). In all cases, the missing BF was taken from previously published values, if they existed, or from Kurucz's calculations otherwise, as shown in Table 4. Any error in these values will affect the overall normalisation of log(gf)s for the level in question, in turn leading to a systematic error in their value. However, we expect this error to be small, and so have neglected it, for two reasons. Firstly, there is good agreement between our log(gf)s and those from O'Brian et al. (1991) for branches from the 51461.667 cm⁻¹ level, which has a residual BF of 0.124 (the largest of all levels) and secondly, this residual can be varied by as much as ±20% without the normalisation error exceeding the random uncertainty in log(gf) of any of the branches.
Lines that are of particular interest for the GES survey are marked in Table 3 in the "GES Target?" column. In some cases the log(gf)s for these lines have been measured in earlier studies, in which case we have sought to reduce their uncertainty. For lines originally measured by May et al. (1974), the quoted published log(gf)s are the corrected values given by Fuhr and Wiese (2006) in their recent critical compilation of Fe I log(gf)s. In preparing their compilation, these authors noted that the lifetimes used by May et al. (1974) originate from data produced in the 1960s and early 1970s. Comparing these to the cascade-free LIF lifetimes measured by O'Brian et al. (1991), they found that for 13 energy levels between 52000 cm⁻¹ and 57000 cm⁻¹ the lifetimes given by O'Brian et al. (1991) were systematically shorter by about 20%, most likely due to the absence of cascade effects. For levels below 36000 cm⁻¹, this systematic error vanished. Fuhr and Wiese (2006) therefore corrected the log(gf)s given by May et al. (1974) for levels above 36000 cm⁻¹ to make them consistent with the lifetime data of O'Brian et al. (1991). For the remaining log(gf)s, Fuhr and Wiese (2006) found fair agreement with the results of O'Brian et al. (1991) and Blackwell et al. (1982) where they overlapped. However, the scatter was "quite large", suggesting that the uncertainties given by May et al. (1974) should be significantly larger. In Table 3, the uncertainties in log(gf)s from May et al. (1974) are therefore given as a letter 'D' or 'E' to follow the notation used by Fuhr and Wiese (2006). A letter 'D' indicates that the uncertainty is likely to be up to 50%, whereas an 'E' indicates a probable uncertainty greater than 50%, but within a factor of two in most cases. Figure 2 shows a comparison between our new log(gf)s and those published previously. The top panel shows the difference between our values and those reported by O'Brian et al. (1991), Blackwell et al. (1982), and Bard et al. (1991). The long dashed, short dashed and dotted horizontal lines indicate uncertainties of ±2 %, ±10 % and ±25 %, respectively, corresponding to uncertainties coded 'A', 'B' and 'C' by Fuhr and Wiese (2006). The work of Blackwell et al. (1982) continues to serve as a gold standard for Fe I log(gf)s in the literature. Five lines from their study are also included in our work, and the log(gf) for each agrees within their combined experimental uncertainty of ±5%. There is also very good agreement with the results of O'Brian et al. (1991) and Bard et al. (1991). 25 of the 29 log(gf)s from these papers agree within the combined experimental uncertainties with no discernible systematic offset between the published results and our new values. Together, these testify to the general accuracy of our log(gf) measurements and the accuracy of the FAST code in extracting log(gf)s from FT spectra.
The lower panel of Figure 2 shows the difference between our values and the corrected log(gf)s given by Fuhr and Wiese (2006) for the data reported by May et al. (1974). The dashed and dotted lines this time indicate uncertainties of ±50 % and ±100 %, respectively, which correspond to uncertainties coded 'D' and 'E' by Fuhr and Wiese (2006). 31 of the 39 corrected log(gf)s from May et al. (1974) agree with our new values when considering the enlarged uncertainties attributed to them by Fuhr and Wiese (2006), but there is considerable scatter in the results, as was also noted by Fuhr and Wiese (2006). There is also a systematic offset of log(gf)(New−Pub) = 0.12 for these lines. Our new log(gf)s for these lines are accompanied by considerably smaller uncertainties; typically less than 25%, with some as low as 5% for stronger lines.
IMPACT ON SOLAR SPECTRAL SYNTHESIS
The Sun offers an excellent test-bed for new atomic data, with its high-resolution spectrum (Kurucz et al. 1984) and accurately known fundamental parameters ("The Astronomical Almanac" 2013).
[Figure 2 caption (fragment): … Blackwell et al. (1982), and Bard et al. (1991), which agree well with our new log(gf) values. The lower panel shows the corrected results from May et al. (1974), which have a considerably lower accuracy.]
To assess the impact of our new log(gf)s on stellar syntheses, and also verify their general accuracy, we have determined line-by-line solar Fe abundances for a subset of 36 lines listed in Table 3 using both our new log(gf)s and the best previously published values that are not of astrophysical nature. These lines, shown in Table 5, were selected as they are blend-free at the spectral resolution of the Kitt Peak Fourier Transform Spectrometer (R ≈ 200000) flux atlas (Kurucz et al. 1984), and are accompanied by good broadening parameters and accurate continuum placement. The synthesis and abundance determination were performed under the assumption of local thermodynamic equilibrium (LTE), with the one-dimensional, plane-parallel radiative transfer code SME (Valenti & Piskunov 1996), using a MARCS model atmosphere (Gustafsson et al. 2008). We adopted a solar effective temperature T_eff = 5777 K, a surface gravity log g = 4.44, a microturbulence of v_mic = 1.0 km s⁻¹, and a projected rotational velocity of v_rot sin(i) = 2.0 km s⁻¹. The radial-tangential macroturbulence velocity, v_mac, was varied between 1.5 km s⁻¹ and 2.5 km s⁻¹ to match the observed profile. The instrumental profile was assumed to be Gaussian. The line profiles were fitted individually using χ²-minimization, with the resulting Fe abundance expressed as

log ε(Fe) = log[N(Fe)/N(H)] + 12,

where N(Fe) and N(H) are the number of iron and hydrogen atoms per unit volume, respectively. Reassuringly, the new experimental data result in a small line-to-line scatter (0.08 dex) and a mean abundance of 7.44, which is in good agreement with recent publications, such as 7.43 ± 0.02 from Bergemann et al. (2012) (MARCS, LTE result). In contrast, the best previously published values (omitting the discrepant semi-empirical values shown in Figure 3) produce an abundance of 7.49 ± 0.13, with the significantly larger scatter driven by the lines with no previous laboratory measurements. The observed and best-fit synthetic profiles of three of these lines are shown in Figure 4. They all fall within the GES wavelength windows, and two of them are also in the near-infrared Gaia Radial Velocity Spectrometer window (Katz et al. 2004).
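Because weak, unsaturated lines respond one-to-one to the adopted oscillator strength (the derived abundance shifts by −Δlog(gf)), improved log(gf)s translate directly into a tighter line-by-line abundance distribution. A purely illustrative sketch with hypothetical numbers:

```python
import statistics

# (abundance from old loggf, loggf_old - loggf_new) per line; hypothetical
lines = [(7.60, -0.15), (7.35, 0.10), (7.55, -0.12), (7.30, 0.12), (7.50, -0.05)]

old = [a for a, _ in lines]
new = [a + d for a, d in lines]   # weak-line approximation: A_new = A_old + d
print(f"old: {statistics.mean(old):.2f} +/- {statistics.stdev(old):.2f} dex")
print(f"new: {statistics.mean(new):.2f} +/- {statistics.stdev(new):.2f} dex")
```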
Other lines with significant improvements in the solar modelling, but not shown in Table 5, are those at 4079.2Å, 4933.9Å, and 5171.7Å, which are partly blended with astrophysically interesting lines such as the Ba II 4934.0Å line, the Mn I 4079.2Å line, and the Mg-I triplet line at 5172.7Å.
SUMMARY
In Table 3, we have provided new log(gf ) values for 142 Fe I lines from 12 upper levels, which include 38 lines of particular interest for the analysis of stellar spectra obtained by the GES survey. Where log(gf )s existed for these lines in the literature, we have found good agreement with our new values, which in many cases have smaller experimental uncertainties than those previously reported. This is especially true for uncertainties in log(gf )s from May et al. (1974), which have been reduced from 50% or more to less than 25% in most cases.
This work represents part of an on-going collaboration between Imperial College London, U. Wisconsin, and NIST to provide the astronomy community with Fe I log(gf ) values needed for the analysis of astrophysical spectra. Further publications will follow in the near future.
ACKNOWLEDGEMENTS
MPR and JCP would like to thank the UK Science and Technology Facilities Council (STFC) for supporting this research and the European Science Foundation (ESF), under GREAT/ESF grant number 5435, for funding international travel to discuss research plans with the wider GES team. EDH and JEL acknowledge the support of the US National Science Foundation (NSF) for funding the LIF lifetime measurements under grants AST-0907732 and AST-121105. KL acknowledges support by the European Union FP7 programme through European Research Council (ERC) grant number 320360.
Please note that the identification of commercial products in this paper does not imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the items identified are necessarily the best available for the purpose.
[Table 5 notes: (a) Broadening data (…1991, 1995) are expressed in the standard packed notation, where the integer component is the broadening cross section, σ, in atomic units, and the decimal component is the dimensionless velocity parameter, α. Values less than zero are the log of the VdW broadening parameter, γ₆ (rad s⁻¹), per unit perturber number density, N (cm⁻³), at 10000 K (i.e. log[γ₆/N] in units of rad s⁻¹ cm³). These were used only when ABO data were unavailable. See Gray (2005) for more details. (b) Data from May et al. (1974) are given the uncertainty codes 'D' and 'E' to follow the notation used by Fuhr and Wiese (2006). A letter 'D' indicates that the uncertainty is likely to be up to 50 %. A letter 'E' indicates a probable uncertainty greater than 50 % but within a factor of two in most cases. All numeric uncertainties are quoted as they appear in the source publication.] | 2014-04-22T18:10:05.000Z | 2014-04-22T00:00:00.000 | {
"year": 2014,
"sha1": "99fcd60119379712a2bfae2882b9ce457bf87100",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/441/4/3127/13765247/stu780.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "99fcd60119379712a2bfae2882b9ce457bf87100",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
234351359 | pes2o/s2orc | v3-fos-license | Sarcoidosis presenting with glazy mucoid sputum and dyspnea: a case report
Background Patients with pulmonary sarcoidosis commonly present with a dry cough; a productive cough suggests a complicating airway infection or an alternative diagnosis such as tuberculosis or bronchiectasis. Case presentation A 36-year-old European (Frisian) woman recently diagnosed with pulmonary sarcoidosis presented with debilitating exertional dyspnea and cough productive of glazy mucoid sputum. Several different attempts including video-assisted thoracoscopic biopsies failed to reach a second or alternative diagnosis including an infectious, autoimmune or collagen-vascular condition. She responded to steroids but with poor tolerance to this treatment, which could not be tapered. After she was started on anti-tumor necrosis factor alpha (TNF-α) therapy with infliximab, 200 mg at three-monthly intervals, she has been fine for well over a decade. Conclusions In this patient with sarcoidosis who had a productive cough accompanied by fever, an extensive workup and prolonged follow-up, an alternative or second diagnosis could be ruled out; we therefore conclude that this highly unusual presentation is part of the clinical spectrum of sarcoidosis.
Introduction
Sarcoidosis is a chronic inflammatory condition characterized by granulomatous inflammation of unknown origin [1]. Both pulmonary and extrapulmonary symptoms and signs may be present as clinically recognizable syndromic patterns, but unusual presentations may be challenging [2]. A dry cough is common [3], but a productive cough suggests an alternative diagnosis.
We present the case history of a patient who meets the classical radiographic and histopathological pattern of sarcoidosis, complicated by a cough productive of glazy, mucoid sputum. Based on an extensive diagnostic workup, combined with a persistent beneficial response to anti-inflammatory treatment alone, without any antimicrobial or other treatment modalities, we propose that this unusual, unique presentation should be considered part of the spectrum of the symptomatology of sarcoidosis.
Case presentation
In 2004, a then 36-year-old European (Frisian) woman was referred because of fever, dyspnea and a cough productive of shiny, glazy, mucoid sputum, accompanied by arthralgias. She was a lifetime non-smoker and had worked as a part-time teacher for hairdressing students, but had no inhalational exposure to organic dust. Chest auscultation revealed coarse and fine crackles, especially over the right anterior lung field, and sporadic scattered wheezes. Laboratory findings showed only mildly elevated C-reactive protein; no blood eosinophilia was found. At the first manifestation of her chest symptoms, sputum cultures grew Staphylococcus aureus, Acinetobacter baumannii-calcoaceticus complex and Haemophilus influenzae; she had received targeted antimicrobial treatment without relief of her symptoms. Her chest X-ray and computed tomography (CT) scan showed mediastinal and bi-hilar lymphadenopathy, mainly suggestive of stage 1 sarcoidosis (Fig. 1a, b). Bronchoscopic lung lavage showed lymphocytic inflammation; bronchoscopic biopsies revealed loose granulomas. Cultures from blood, sputum and lavage fluid did not show bacterial, fungal or mycobacterial pathogens. Her chest symptoms improved after starting 30 mg of prednisolone daily, although during steroid therapy she did not feel well and could hardly sleep. Two years later, after several attempts to wean her from steroids, all of which were followed by recurrence of all symptoms including fever and productive cough, she was referred to our University Medical Center. She had by then tapered steroids to 15 mg daily, with osteoporosis prophylaxis. No diagnosis other than the initial diagnosis of sarcoidosis could be made; attempts to further taper steroids failed. In two other specialized centers for sarcoidosis in the Netherlands, her clinical presentation with productive cough had been considered incompatible with the diagnosis of sarcoidosis; therefore, an infectious condition was suspected but not confirmed. In an attempt to further taper prednisolone, she was started on inhaled budesonide combined with salmeterol. We introduced methotrexate (15 mg weekly) as a steroid-sparing regimen, as suggested in Dutch national guidelines at the time. However, she experienced gastrointestinal side effects, and steroids could not be tapered during methotrexate treatment, which we therefore subsequently stopped. When she experienced a subsequent exacerbation of disease activity in 2006, with fever, dyspnea and cough productive of the same whitish glazy material, she was admitted to the hospital. Her past medical history revealed no new information; she had had two uncomplicated pregnancies with two healthy children, and the family history was negative for sarcoidosis, tuberculosis and bronchiectasis. Apart from the obstetric care, she had never received medical or socio-psychological care, or been prescribed medications other than those for her current chest symptoms. She had only traveled to Mediterranean countries for family holidays, with no exposure to respiratory infections, fumes, or organic or inorganic dusts. She was a lifetime non-smoker, and no one in the family smoked indoors. Her alcohol intake was limited to an occasional glass of wine on weekends; there was no illicit drug use. Because of her chest symptoms, she had given up her work; she denied any earlier change in her condition when she had occasionally tried to resume work.
Her medications included inhaled budesonide 250 µg and salmeterol 50 µg twice daily, oral prednisolone 5 mg daily, calcium 500 mg, and Actonel 35 mg/week. On examination, she was in distress: blood pressure 95/55 mmHg; pulse 99 beats per minute; pulse oxygen saturation 98%; respiration 25 breaths per minute. Temperature was 38.6°C. No skin or eye abnormalities were detected; in particular, no evidence of erythema nodosum, induration of scars or iridocyclitis was noted, and no enlarged lymph nodes were found on palpation. There was an expiratory wheeze, no crackles; heart sounds were normal. Abdomen and extremities were normal. Routine laboratory examinations showed C-reactive protein increased to 110 mg/L; white blood cell count 19.4 × 10⁹/L, deemed consistent with steroid use; hemoglobin 6.8 mmol/L (mild anemia); all other blood chemistry results, including liver enzymes, electrolytes and renal function parameters, were in the normal range. Arterial blood gas analysis showed pH 7.55, partial pressure of carbon dioxide 3.3 kPa, partial pressure of oxygen 9.9 kPa, oxygen saturation 97%, and bicarbonate 21 mmol/L; gas exchange for oxygen was impaired: the calculated alveolar-arterial oxygen difference was 6 kPa (normal range, 1-2 kPa). Blood and sputum cultures, a multiplex polymerase chain reaction (PCR) test for common respiratory pathogens (influenza, respiratory syncytial virus, coronaviruses, rhinoviruses, human metapneumovirus and Mycoplasma pneumoniae) and urine tests for Legionella pneumophila type 1 and Streptococcus pneumoniae were all negative: no bacterial, fungal, parasitic or viral pathogens were identified. Pulmonary function testing showed mild restriction. High-resolution CT scanning showed, besides mediastinal and bi-hilar lymphadenopathy, ground-glass attenuation predominantly in the upper lobes without bronchiectasis (Fig. 2a-d).
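As a side note on the gas-exchange arithmetic, the reported alveolar-arterial difference can be reproduced with the alveolar gas equation on room air. The sketch below uses assumed standard constants (FiO2 0.21, barometric pressure 101.3 kPa, water vapour pressure 6.3 kPa, respiratory quotient 0.8), which are not values stated in the report:

```python
def a_a_gradient_kpa(pao2_art, paco2, fio2=0.21, p_atm=101.3, p_h2o=6.3, r=0.8):
    """Alveolar-arterial O2 difference (kPa) from the alveolar gas
    equation: PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / R."""
    p_alv_o2 = fio2 * (p_atm - p_h2o) - paco2 / r
    return p_alv_o2 - pao2_art

# Patient's blood gas: PaO2 = 9.9 kPa, PaCO2 = 3.3 kPa
print(round(a_a_gradient_kpa(9.9, 3.3), 1))  # ~5.9 kPa, i.e. ~6 kPa as reported
```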
Bronchoscopy with bronchoalveolar lavage showed lymphocytic inflammation; no mycobacterial, bacterial, fungal or viral pathogens were identified by culture or PCR. Video-assisted thoracoscopic biopsies of the right middle and upper lobes showed granulomas compatible with sarcoidosis but no other diagnostic clues (Fig. 3). Biochemical analysis of sputum showed nondiagnostic mucopolysaccharides; cultures remained negative.
Considering that her presentation, although highly unusual, best fit the earlier diagnosis of sarcoidosis [1,4], we started her on infliximab [5]. We argued that TNF-α is the cytokine that plays a central role in the formation and maintenance of the granulomatous inflammatory response, even though most patients with pulmonary sarcoidosis benefit little from this treatment [6]. Infliximab is a chimeric, monoclonal immunoglobulin G1 (IgG1) antibody with dual effects: it neutralizes the effect of circulating TNF-α and resolves granulomas in affected tissues [7]. She received 4 mg/kg (200 mg) infliximab intravenously at 3- and later 6-12-week intervals and made a remarkable recovery; she resumed her part-time work as a teacher after an absence of several years. On an attempt 2 years later to wean her from infliximab, she experienced a relapse, and after restarting three-monthly infliximab she has not experienced any relapses or intercurrent medical or surgical problems in subsequent years. At the time of writing this report, she was well.
Discussion
This patient with sarcoidosis (radiographically, stage 1) presented with a highly unusual, debilitating syndrome with a cough productive of mucoid glazy material, without any evidence of infection, accompanied by fever and transient arthralgias.
Sarcoidosis patients typically have a dry cough [1]; production of sputum suggests an alternative diagnosis: mycobacterial infection, granulomatous airway involvement of Crohn's disease [8] or diffuse panbronchiolitis [9]. Most patients with a productive cough typically also have bacterial organisms, such as Haemophilus influenzae or Pseudomonas spp., present in their sputum. In the 14 years we followed her, she never had bowel symptoms, making Crohn's disease unlikely. She never had symptoms suggesting paranasal sinusitis, and her cough and sputum production subsided without macrolide use; these observations make the diagnosis of diffuse panbronchiolitis unlikely, and we therefore propose that all of her symptoms are consistent with a highly unusual presentation of sarcoidosis.
The workup included chemical and microbiological analysis of sputum and high-resolution CT, followed by bronchoscopic and video-assisted thoracoscopic biopsies, which were also cultured and subjected to PCR to detect a possible infectious origin. Taking all the evidence together, we conclude that infectious, metabolic, allergic, neoplastic and collagen-vascular disorders other than sarcoidosis could be ruled out.
We found one previously reported case of sarcoidosis presenting with a productive cough, but complicating bronchiectasis was not ruled out [10]. The sustained response to anti-TNF-α therapy during 12 years of follow-up suggests the latter [11]. As the incidence of sarcoidosis appears to increase over time, less common presentations might also become more prevalent [12].
Conclusion
We describe a highly unusual presentation of sarcoidosis, with a cough productive of glazy mucoid sputum, accompanied by fever and transient arthralgias, radiographically stage 1, without evidence of complicating infection, and followed by complete resolution of symptoms with TNF-alpha inhibitor (infliximab) therapy, a response that persisted for over 12 years, and a relapse of symptoms following each of two different attempts to taper her infliximab therapy. This highly unusual presentation should be considered in patients with sarcoidosis and productive cough, after airway infection and bronchiectasis have been ruled out. | 2021-05-11T14:27:43.204Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "82a0dad1c3366eb805beb0645b4f56717d08d90d",
"oa_license": "CCBY",
"oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/s13256-021-02809-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82a0dad1c3366eb805beb0645b4f56717d08d90d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
100392067 | pes2o/s2orc | v3-fos-license | Using GC-MS to Analyze Bio-Oil Produced from Pyrolysis of Agricultural Wastes-Discarded Soybean Frying Oil, Coffee and Eucalyptus Sawdust in the Presence of 5% Hydrogen and Argon
Introduction
Nowadays, bio-oils (biodiesel and biofuels) are attracting increasing interest around the world because of their benefits for people and for the environment. Biodiesel is an oxygenated fuel consisting of long-chain fatty acid esters that contain 10-15% oxygen by weight [1,2], and it contains neither sulfur nor aromatics. These properties lead biodiesel to undergo more complete combustion with less emission of particulate matter. The biomass pyrolysis process is an economically feasible option for producing chemicals and/or fuels [3,4]. The bio-oil resulting from the pyrolysis process consists of a mixture of more than 300 organic compounds [5]. From an environmental standpoint, biodiesel is more acceptable than fossil fuel, as it produces less carbon and smoke, which are responsible for global warming [6,7]. On the other hand, biodiesel has a higher molecular weight, density, viscosity and pour point than conventional diesel fuel [8,9]. The higher molecular weight and viscosity of biodiesel cause low volatility, poor fuel atomization, injector coking and piston-ring sticking, leading to incomplete combustion [10]; its poor cold-flow properties are also a barrier to its use in cold weather [11]. Nevertheless, the greatest benefit of bio-oils is that they are prepared from renewable sources such as crops, plants, trees and residues. Approximately 100 years ago, Rudolf Diesel tested bio-oil as the fuel for the engine that was available to him [12,13]. According to scientists and researchers, 350 oil-containing crops and plants have been identified; among them, only soybean, rapeseed, coffee, sunflower, cottonseed, peanut, safflower and coconut oils are considered to have the potential and quality to serve as alternative fuels for diesel engines [14,15]. Bio-oils have the capacity to substitute for a part or fraction of petroleum products, distillates and petroleum-based petrochemicals in the future. Because they are more expensive than petroleum, bio-oil fuels are not yet competitive with petroleum fuels. However, owing to rising petroleum prices and the uncertainties concerning petroleum availability, there is renewed interest in using bio-oils in diesel engines [16]. The emergence of transesterification can be dated back to 1846, when Rochieder described glycerol preparation through methanolysis of castor oil; since that time, alcoholysis has been studied in many parts of the world. Scientists and researchers have also investigated the important reaction conditions and parameters for the alcoholysis of triglycerides such as tallow and fish, sunflower, soybean, rapeseed, linseed, cottonseed, safflower and peanut oils [17,18]. Soybean oil has been transesterified into ethyl and methyl esters, and the performance of these fuels has been compared with diesel [19,20]. Methyl esters have also been prepared from palm oil by transesterification using methanol in the presence of a catalyst (NaOH or KOH) in a batch reactor [21]. Ethanol is a preferred alcohol in the transesterification process compared with methanol because it is derived from natural agricultural products and is renewable and biologically less objectionable in the environment. The success of rapeseed ethyl ester production would mean that biodiesel's two main raw materials would be agriculturally produced, renewable and environmentally friendly [22].
Methyl, ethyl, 2-propyl and butyl esters were prepared from canola and linseed oils through transesterification using KOH and/or sodium alkoxides as catalysts. In addition, methyl and ethyl esters were prepared from rapeseed and sunflower oils using the same catalysts [23,24].
Experimental
Materials: discarded soybean frying oil, coffee grounds, eucalyptus sawdust and other reagents. Discarded soybean frying oil, coffee grounds and eucalyptus sawdust were collected in Porto Alegre, a Brazilian city. The bio-oil was obtained by pyrolysis of a mixture (1:1:1 by mass) of discarded soybean frying oil, coffee grounds and eucalyptus sawdust. The frying oil was mixed with the solids after their granulometric reduction (to 0.21 mm). Calcium oxide was added to this mixture (at 20% by mass), along with sufficient water to produce a malleable mass that could be shaped into cylinders (50 mm × 180 mm). After being formed, the cylinders were dried at ambient temperature for 3 days. Before the pyrolysis, the system was purged for 20 minutes with argon containing 5% hydrogen (100 mL/min). The purpose of the hydrogen and argon is to improve stability and fuel quality by decreasing the contents of organic acids, aldehydes and other reactive compounds, such as oxygenated and nitrogenated species; without H2 and Ar, these species not only lead to high corrosiveness and acidity but also create many obstacles to applications.
Production of CSSB
The bio-oil was produced from the pyrolysis of discarded soybean frying oil, coffee grounds and eucalyptus sawdust in the presence of 5% hydrogen in argon. A round, block-shaped sample was formed inside filter paper (the filter paper served as the side wall of the sample block to hold the biomass tightly); the weight of this sample was kept at 400 g. After preparation, the sample block was placed inside the stainless-steel chamber of the pyrolysis system, which is connected to two other chambers, as shown in the diagram in Figure 1.
The temperature of the chamber holding the biomass was increased from 15°C to 800°C with the help of the heater and the temperature-controller cabinet; the biomass was thereby converted to biogas, and the biogas was then condensed in the other two chambers, which condensed its fractions to bio-oil at 100°C and 5°C, respectively. The two condensed fractions from these chambers (HTPO and LTPO) were collected and subjected to further analysis.
GC-MS analysis of HTPO and LTPO (CSSB)
Bio-oil identification and composition determination were performed on an Agilent 6890 series GC equipped with an Agilent 5973 series mass-selective detector and a polar capillary wax column coated with polyethylene glycol (PEG) (length of 30 m, internal diameter of 0.25 mm, and film thickness of 0.25 μm).
The chromatographic conditions were as follows: injection volume of 0.2 μL; oven held at 40°C for 1 min, then ramped at 6°C min−1 up to 300°C (held for 10 min); split mode with a ratio of 100:1; injection temperature of 290°C. The total run time was 54.3 minutes, with helium (He) as the carrier gas at a flow rate of 2.9 mL min−1.
Results and Discussion
Chemical composition of CSSB
CSSB is a dark, sticky liquid. The compounds detected in CSSB can be classified into hydrocarbons, alcohols, phenols, ethers, aldehydes, ketones, carboxylic acids, and esters. The large GC/MS peaks mostly correspond to aromatic, aliphatic and cyclic hydrocarbons, while the small peaks correspond to the other groups. Compounds were identified by library matching based on probability score, and each compound was detected clearly and with a high probability value. According to the GC/MS analysis summarized in Tables 1 and 2, the sample was enriched mostly in aromatic and aliphatic groups; after the GC/MS analysis, each peak of the chromatogram was matched against the library one by one.
Enrichment of chemicals in CSSB sample
According to the GC/MS analysis summarized in Tables 1 and 3, C11-C17 alkanes, alkenes, cyclic hydrocarbons and aromatic hydrocarbons were enriched in the CSSB sample.
Enrichment of C11-C17 aliphatic hydrocarbons (alkanes, alkenes, cyclics): As Tables 1 and 4 show, aliphatic hydrocarbons with C11-C17 are predominant in the sample, with % areas of 51.255, 34.843 and 31.204 in LTCO, LTPO and HTCO, respectively. Figure 4 and Table 2 show only the aliphatic hydrocarbons in CSSB. Enrichment of aromatic hydrocarbons: Table 4 and Figure 5 show only the aromatic hydrocarbons detected in CSSB; these also occupy a large part of the sample, with % areas of 0.246, 30.008, 32.241 and 24.892 in HTPO, LTCO, LTPO and HTCO, respectively. Enrichment of other compounds: alcohols, aldehydes, ketones, ethers, esters, phenols and nitrogenous compounds also account for part of the CSSB; among these classes, phenols and ketones occupy more area than the others, as shown in Table 5. This means that these fractions also contain phenols and ketones, which need to be separated. Figure 6 shows the % areas of alcohols, ethers, ketones, phenols, N-compounds, and aliphatic, aromatic and cyclic compounds in HTPO (blue) and LTPO (red), while Figure 7 shows the same for HTCO (blue) and LTCO (red); details are given in Table 5.
Conclusion
More than 120 compounds were detected in the CSSB. Among them, aromatic, aliphatic and cyclic hydrocarbons, especially alkanes, alkenes and benzene-containing compounds, were dominant. Although this work is a laboratory-scale effort to improve the efficiency of the process, the process could be applied successfully in large-scale operations, because the demand for liquid transportation fuels is increasing day by day and biofuels may be one of the best solutions to this problem. Technologies for converting biomass to biodiesel, which include the pretreatment of biomass, are at various stages of development. Although the cost of biomass, or the cost of processing it, can be high, it may for the time being serve as an alternative to fossil fuels. Future work will aim to improve the recovery of phenols, ketones and other chemicals from the CSSB.
"year": 2016,
"sha1": "69d9d9acd9b35e4a29158d22aefc4af46bf0a13a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2155-9872.1000300",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b2ed7d3a84021b0234b259f4141542b619871b2c",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Identification of Degenerate Nuclei and Development of a SCAR Marker for Flammulina velutipes
Flammulina velutipes is one of the major edible mushrooms in the world. Recently, abnormalities that have a negative impact on crop production have been reported in this mushroom. These symptoms include slow vegetative growth, a compact mycelial mat, and few or even no fruiting bodies. The morphologies and fruiting capabilities of monokaryons of wild-type and degenerate strains that arose through arthrospore formation were investigated through test crossing. Only one monokaryotic group of the degenerate strains and its hybrid strains showed abnormal phenotypes. Because the monokaryotic arthrospore has the same nucleus as the parent strain, these results indicated that only one aberrant nucleus of the two nuclei in the degenerate strain was responsible for the degeneracy. A sequence-characterized amplified region marker that is linked to the degenerate monokaryon was identified based on a polymorphic sequence that was generated using random primers. Comparative analyses revealed the presence of a degenerate-specific genomic region in a telomere, which arose via the transfer of a genomic fragment harboring a putative helicase gene. Our findings have narrowed down the potential molecular targets responsible for this phenotype for future studies and have provided a marker for the detection of degenerate strains.
Introduction
The basidiomycete Flammulina velutipes is one of the most cultivated edible mushrooms. One of the reasons for the recent increase in mushroom production is consumer interest in their high content of macromolecules that possess antitumor, immunomodulatory, and antiviral properties, such as polysaccharides and glycoproteins [1,2].
Filamentous fungi grown on a nutritionally rich medium frequently exhibit instability, manifested in morphological and physiological variations. The morphological variations include a lack of or reduction in sporulation, fluffy mycelial-type growth, and variations in hyphal pigmentation [3,4]. Previous studies implicated infections with double-stranded RNA viruses [5] and the instability of fungal nuclear and mitochondrial genomes in changes in fungal morphology and physiology. Such changes might be expected because fungal genomes are dynamic and capable of a rapid accumulation of genome rearrangements, particularly amplifications and deletions [6]. Likewise, edible mushrooms that were preserved by serial passaging in culture on nutritionally rich media exhibited abnormal mycelial growth and poor yields [7]. Abnormalities in Agaricus bisporus led to lower yields and inferior product quality, resulting in economic losses [8,9,10]. These abnormalities may be due to changes in the ribosomal-DNA copy number, the loss of heterozygosity at specific loci, de-heterokaryotization, chromosomal loss, or chromosomal-length polymorphisms [10]. In the case of F. velutipes, abnormalities such as malformed fruiting bodies and the complete loss of fruiting-body development have been observed [11]. Despite the relative frequency of these occurrences, the underlying cause is not entirely clear. Notwithstanding its economic importance, many basic questions involving the biology of this fungus remain unanswered; hence, a method for detecting mutant strains of F. velutipes would be very useful. F. velutipes, unlike many basidiomycetes, produces abundant arthrospores in both monokaryotic and dikaryotic mycelia [12]. Moreover, most of the arthrospores contain only one haploid nucleus, which is generated without resorting to fruiting-body formation or chromosomal rearrangement [13]. Thus, the normality of each nucleus can easily be determined by test crossing. Additionally, molecular selection markers such as sequence-characterized amplified regions (SCARs) are required to assist mushroom farmers in cultivating wild-type strains rather than degenerate clones. Herein, we describe the characterization and identification of a degenerate nucleus and report a specific marker for differentiating wild-type and degenerate nuclei.
Strains and pure cultures
The wild-type (Fv1-5) strain and 3 degenerate strains (Fv1-5^d1, Fv1-5^d2, and Fv1-5^d3) of F. velutipes were obtained from various mushroom farms (Table 1). The mycelia were grown on mushroom complete medium (MCM) at 25°C. Because F. velutipes mycelia are typically transferred as a plug rather than as a single cell, the collected degenerate strains most likely contained some normal cells. To enhance the purity of the degenerate strains, the mycelia were serially cultured on modified BTB-sawdust agar containing 0.45% peptone, 0.75% yeast extract, 0.5% oak sawdust, 0.025% bromothymol blue (BTB, Sigma-Aldrich, St. Louis, MO, USA), and 2% agar, growth on which has been reported as an indicator of normal function [11]. At each round of isolation, the mutants that did not decolorize the BTB agar and exhibited abnormal morphologies, such as a reduced growth rate and compact mycelial colonies, were transferred to fresh medium via cells at the ends of the mycelia. The isolated degenerate and wild-type strains were grown on MCM agar for subsequent analyses, and their mycelial growth rates along a perpendicular line were determined at 3, 5, and 7 d (dikaryons) or at 8, 10, and 15 d (monokaryons) after inoculation. The experiments were performed in triplicate.
Monokaryon isolation, test crossing, and fructification
The purified wild-type and degenerate strains were grown on MCM agar for 7 d at 25°C. Arthrospores were obtained from the solid medium by adding 1 to 2 mL of sterile water to the cultures and gently prodding them for a few seconds. The suspension was serially diluted, plated onto MCM agar after filtering through glass wool, and grown at 25°C. The germinated mycelia were isolated, and clamp connections were observed using phase-contrast microscopy at 40× magnification (Olympus IX71; Olympus, Tokyo, Japan). Monokaryons with no clamp connection were subjected to reciprocal crossing to identify their mating type (Table 1). To determine the abnormal monokaryon group among the two compatible groups derived from the degenerate strain, all possible test-crossing combinations were performed between monokaryons derived from the wild-type and degenerate strains. The hybrids were subjected to growth-rate analysis and the fruiting test. A substrate containing pine sawdust (23%), corncob (29%), rice bran (18%), beet pulp (4%), wheat bran (14%), cottonseed hull (4%), shell powder (4%), and soybean powder (4%), with a 65% water content, was autoclaved at 121°C for 1.5 h, cooled to 20°C, inoculated with four plugs (1×1 cm) of cultured hybrids, and finally placed in an incubation room maintained at 19°C to 20°C for spawn running. On the 35th day of the spawn-running period, the outer area of the substrate was removed by scraping to induce the formation of primordia. Then, the cultures were placed in a cultivation room maintained at 15°C and 90% relative humidity to induce fruiting. When the fruiting bodies reached a length of approximately 2 cm, the temperature was lowered to 5°C for 5 to 7 d to normalize the size of the fruiting bodies. Subsequently, the cultures were maintained at 10°C with 75 to 80% relative humidity (RH) to allow the fruiting bodies to grow.
Extraction of genomic DNA
Genomic DNA was extracted from dikaryons and monokaryons using the Exogene Plant SV kit (GeneAll Biotechnology, Korea) according to the manufacturer's protocol. The yields of DNA were quantified using a NanoDropND-1000 UV spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA).
Identification of an abnormality-associated marker
A draft genomic sequence of the monokaryotic strain Mono3 (derived from a meiotic spore of Fv1-5) of F. velutipes that was obtained in a previous study (http://112.220.192.2/fve/) and the complete genomic sequence of KACC42780 [14] were used as sources for SSR marker development and comparative analysis. The SSR candidates were identified using SSR Locator I [15]. Primers were designed accordingly, using the Primer3 program [16]. Random-amplified polymorphic DNA (RAPD) PCR was performed in 20-μL mixtures containing 30 ng of genomic DNA, 1× e-Taq buffer, 0.2 mM dNTPs, 0.2 mmol of Taq polymerase (Solgent, Korea), and 1 mM random primers of the OPB, OPC, OPD, OPE, OPF, OPG and OPI series (Operon Tech., California, USA). PCRs were performed using a Gene Atlas system (ASTEC, Japan) and the following protocol: an initial denaturation step of 5 min at 95°C; followed by 40 cycles of 1 min of denaturation at 95°C, 1 min of annealing at 40°C, and a 2-min extension at 72°C; with a final extension step of 5 min at 72°C. The SSR PCR cycles consisted of an initial denaturation step of 3 min at 95°C; followed by 35 cycles of 20 s at 95°C, 40 s at 52°C, and 30 s at 72°C; with a final step of 5 min at 72°C. The PCR products were resolved using 1% (RAPD) or 3% (w/v; SSR) agarose gels (Life Technologies, USA) in TAE buffer (400 mM Tris, 200 mM sodium acetate, and 20 mM EDTA, pH 8.3) containing RedSafe (Intron, Korea). The specific degenerate DNA bands were excised from the gel, purified using an Expin PCR SV kit (GeneAll Biotechnology, Korea) as described in the manufacturer's protocol, ligated into a vector using a Dr. TA TOPO cloning kit (Doctor Protein, Korea) and then used for bacterial transformation. The recombinant plasmids were isolated using a Plasmid DNA purification Hybrid-Q kit (GeneAll Biotechnology, Korea) and sequenced (Macrogen Corp, Korea). To increase the reproducibility and reliability of the results obtained using the random primer set, we constructed a SCAR marker (22-mer) based on the sequence of the polymorphic RAPD band (Table S1).
Comparative analysis
The acquired sequences specific to the degenerate monokaryons were aligned with the genomic sequences of the monokaryotic strain Mono3 using the DNAMAN program (Lynnon Corp, Canada) to determine their flanking sequences. Because the degenerate-specific region of the degenerate (D4) strain was located on scaffold 59 of the Mono3 genomic sequence, PCR was performed using primer sets spaced every 1 kb (for the sequence from 1 to 10 kb), 10 kb (for the sequence from 10 to 60 kb) or 60 kb (for the sequence from 60 to 121 kb) along the scaffold 59 sequence, for synteny analysis of the wild-type (W4) and D4 strains. To determine the genomic region corresponding to the degenerate-specific sequence and the boundary regions of the W4 and D4 genomic sequences, the primer sets for positions 9,244 to 10,004 and 9,500 to 10,500 and for other regions were used to amplify DNA fragments, which were then sequenced (Table S1). PCR using the same mixture described above was performed in 20-μL volumes using the following protocol: initial denaturation for 5 min at 95°C, followed by 35 cycles of 1 min denaturation at 94°C, 1 min annealing at 60°C, and a 90-s extension at 72°C. The PCR products were evaluated using electrophoresis on 1% (w/v) agarose gels. The PCR products were ligated into a vector using a Dr. TA TOPO cloning kit and sequenced in the forward and reverse directions using M13 primers. The sequences were assembled using the DNAMAN program (Lynnon Corp). The three sequences from the W4, D4 and KACC42780 strains were aligned using the DNAMAN program. The sequence specific to the D4 strain was analyzed to predict the presence of genes and the functions and domains of the gene products using the FGENESH program (http://www.softberry.com/), the UniProt program (http://www.uniprot.org/) and the Protein BLAST program from the NCBI. Tandem repeats in the genomic DNA were identified using the Tandem Repeats Finder program (http://tandem.bu.edu/trf/trf.html).
Results and Discussion
Growth characteristics and morphological traits of the degenerate mycelium
The degenerate strains exhibited abnormal phenotypes, including reduced growth rates and the formation of compact mycelial mats, compared to the wild-type strain (Fig. 1A); however, the extent of these abnormalities was strain-dependent (data not shown). The wild-type F. velutipes strain elicited a color change in BTB from blue to yellow, whereas the degenerate strains did not [11]. The decolorization of the dye by these fungi suggests that laccase and peroxidase were responsible for the degradation activity via oxidative reactions [17]. The phenolic structure of BTB, which includes a methyl group, is similar to that of lignin and is targeted by ligninolytic enzymes just as lignin is degraded [18]. During the initial transfers (i.e., within the first and second rounds of transfer), the non-decolorized regions were scattered at the edge of the growing mycelial colony in the degenerate strains, whereas the wild-type strain completely decolorized the BTB. The decolorization activity was enhanced when the MCM + BTB medium was supplemented with sawdust (data not shown). After serial transfers of the non-decolorizing regions to medium supplemented with BTB and sawdust, the non-decolorizing strains Fv1-5^d1, Fv1-5^d2, and Fv1-5^d3, which exhibited consistent phenotypes throughout their mycelial mats, were obtained. The growth rate of the degenerate mycelia (Fv1-5^d1) was 1.87 cm/3 d, whereas that of the wild-type mycelia was 3.36 cm/3 d (Fig. 1B). Poor quality and yield have also been reported to be accompanied by abnormal microscopic morphology of the mycelia of A. bisporus, including the branching pattern and thickened hyphae [8,9]. However, no such difference between the wild-type and mutant strains was observed in our study, consistent with the results of Magae et al. (2005) on mycelial morphology at the microscopic level (data not shown).
To identify the abnormal-nucleus group, two compatible monokaryons each of the wild-type strain and the degenerate strains were prepared through arthrospore formation followed by mating. Three isolates of each mating group of the wild-type and degenerate strains were selected for further study. The mating types were divided into two compatible categories, with the wild-type group "a" (W1, W2, and W3) and the degenerate group "a^d" (D1, D2, and D3) on one side and the wild-type group "b" (W4, W5, and W6) and the degenerate group "b^d" (D4, D5 and D6) on the other (Table 1, Fig. 2A). The growth rate of the mycelia in group b^d was less than half that of its wild-type counterpart, but the mycelial mat of the former was more compact (Fig. 2B). In contrast, the morphology (Fig. 2B) and growth rate (Fig. 2D) of the other monokaryons (a^d) were not distinguishable from those of the wild-type group (a). To elucidate the effect of monokaryotic abnormalities on the dikaryon, hybrids were obtained from inter- and intra-compatible combinations of the monokaryons of Fv1-5 and Fv1-5^d1. Clamp connections were observed despite the presence of chromosomal abnormalities (data not shown), indicating that the formation of clamp connections is not influenced by the degeneration of these strains. The hybrids a×b^d (W1×D4, W2×D5, W3×D6) and a^d×b^d (D1×D4, D2×D5, and D3×D6) exhibited phenotypes similar to those of Fv1-5^d1, including slow propagation of the mycelia and the formation of a mycelial mat on the MCM agar plates (Figs. 2C, 2E). The phenotypes of the hybrids (a×b and a^d×b; W1×W4, W2×W5, W3×W6, D1×W4, D2×W5, and D3×W6) were not significantly different from those of the wild-type hybrids. Previous reports [19,20] have identified mitochondrial disorders as causative agents of mycelial abnormalities, but our results excluded that possibility, because the group a^d and b^d monokaryons shared an identical mitochondrial pool, yet only one group showed a distinctly degenerate phenotype.
Fruiting-body formation by the hybrids
To compare the fruiting-body formation of the wild-type and degenerate monokaryons, hybrid mycelia were cultivated in bottles on sawdust medium at a commercial scale. The D1×W4 hybrid showed spawn running and development characteristics similar to those of the wild-type strain (Fig. 3A). In contrast, for the W1×D4 hybrid, the spawn did not completely run on the sawdust medium at 35 d, nor were primordia or fruiting bodies observed (Fig. 3B). Other sibling combinations of W1×D4 showed consistent results (data not shown). In contrast, the hybrids of the a×b group and the a^d×b group showed normal spawn running, primordial emergence, pinheading, and fruiting-body development that were indistinguishable from those of the wild-type (Fig. 3A, B). Considering the abovementioned observations, the b^d group appeared to be responsible for the serious defect in fruiting-body differentiation.
Identification of an abnormality-associated marker
Polymorphisms were frequently detected in the RAPD PCR products of compatible monokaryons (groups a^d and b^d), whereas polymorphisms were very rare between the RAPD PCR products of groups b and b^d. Of the 438 primers tested, only the OPD-05 primer detected a polymorphism, at approximately 2.1 kb (Fig. 4A). The SCAR marker was designed based on a specific amplified sequence obtained from a single fragment (Fig. 4B), which appears to be present in the b^d, a, and a^d strains but not in the b strain. However, this marker could not discriminate between the dikaryotic wild-type and dikaryotic degenerate strains. The SCAR-marker primer set consistently amplified the degenerate-specific band from other monokaryons (of the same mating group as b) of the degenerate strains (Fv1-5^d2 and Fv1-5^d3) (data not shown). These results suggested the transfer of a genomic region that included the SCAR primer binding sites from an a-strain nucleus to a b-strain nucleus. Although this screening procedure was not applied to other F. velutipes cultivars in the present study, Fv1-5 is a widespread cultivar in Asia, and the degenerate strains collected from farms exhibited similar abnormalities, including compact mycelia and slow growth. Thus, our procedure will be helpful to commercial farmers for distinguishing degenerate mycelia from the complex mixture of normal and degenerate structures that are frequently found on mushroom farms.
Genomic localization of the region responsible for the abnormalities
The alignment of the sequence acquired from D4 with the Mono3 genomic sequence [21] showed more than 99% sequence similarity with bases 2,859 to 5,041 of scaffold 59 (data not shown). Based on the synteny analysis of the wild-type and degenerate genomic regions corresponding to this scaffold, in the region from approximately 1 to 10 kb, specific bands of the expected size could be detected in D4 but not in its counterpart W4 (Fig. 5A). In contrast, similar band patterns in the wild-type and degenerate strains were detected in the regions beyond 10 kb (Fig. 5B). To determine the boundary region, PCR was performed using the primers that hybridized at 9,244 bp in the genomic sequence, which only yielded a band for D4. However, primers that hybridized at 9,500 bp yielded amplicons for both D4 and W4, indicating that the boundary between the wild-type and degenerate genomic regions lies between 9,244 and 9,500 bp (Fig. 5A, B). The alignment of the sequences from D4, W4 and KACC42780 showed that the region corresponding to the degenerate region (1 to 9,244 bp of scaffold 59) was detected only in the D4 genome, indicating the involvement of this region in the degenerate phenotypes (Fig. S1, S2). Comparative analysis of the genomic region between 9.5 and 9.7 kb of D4 with the KACC42780 genome revealed that the degenerate-specific region is located near the terminus of chromosome 8 (~116 kb from the end) (Fig. S2). At the 3' terminus of the D4 genome, multiple telomeric repeating units (TTAGGG), like those found in the telomeric region of P. ostreatus (Perez, 2009), were detected (Fig. S1), indicating that the degenerate-specific sequence is likely part of a telomeric region. The similarity between this region in scaffold 59 (Mono3) and its corresponding region in Ch8 (KACC42780) ranged from 30 to 79%, because Mono3 is a highly inbred cultivar, whereas KACC42780 was isolated as a wild-type strain.
Based on the FGENESH, UniProt and Protein BLAST analyses, the degenerate-specific genomic region of the D4 strain encodes a protein of 2,432 amino acids with a molecular weight of 268 kDa, which shows similarity to the putative ATP-dependent helicase C17A2.12 of Thanatephorus cucumeris and contains HELICc and HepA domains (Fig. S1). Such helicases are reportedly involved in various functions, such as replication, transcription, translation, recombination, DNA repair, and ribosomal biogenesis [22]. Abnormal helicase activity causes several developmental defects in fungi and plants, such as slow growth, premature aging and aberrant assembly of chromosomes [23,24]. These observations suggest a close relationship between the observed abnormalities and this helicase. The absence of the 1-10 kb sequence from the W4 (wild-type) genome and its presence in the telomeric region of the D4 (degenerate) genome, a region that displays a high rate of recombination and translocation in P. ostreatus, yeast, and humans [25,26,27], supports the hypothesis that a genomic region harboring a putative helicase was transferred from an a-strain nucleus to a b-strain nucleus to form the b^d strain. Thus, this putative helicase might be required for vegetative growth and fruiting-body development in F. velutipes. Likewise, the transfer of the foreign helicase gene might unbalance its activity and result in the observed abnormalities. However, we cannot rule out the possibility that another region is responsible for the degeneration, because the F. velutipes genome is large (35 Mb).
Chromosomal abnormalities can cause the degeneration of mushrooms [10,29]. In fungi, extraordinary chromosome rearrangements resulting from insertion/deletion, duplication and translocation within and among chromosomes have been reported (for a review, see [28]). Some of these chromosomal alterations may allow fungi to gain a new pathogenicity or to adapt to a novel environment [28]. However, in mushroom fungi, in which strain stability is critical for stable cultivation, genomic rearrangement should be avoided. Recombination predominantly occurs during meiosis, but this strain of edible mushroom likely degenerated during serial passaging (mitosis). In further studies, the mechanism by which a genomic region from an a-group cell was transferred to a b-group nucleus to form a degenerate b (b^d) strain should be addressed. Moreover, F. velutipes produces abundant arthrospores in both monokaryotic and dikaryotic mycelia [12] on the media used on farms. Thus, a nucleus from a-group mycelia could be transferred to the dikaryotic strain (Fv1-5) via plasmogamy. We attempted to identify mobile elements that may cause genomic rearrangements [29,30] throughout the flanking sequences of the putative helicase gene, but no such element was detected. Somatic recombination has been reported in fungi [29] and may be a possible explanation for the mechanism by which a genomic region from an "a" group strain, corresponding to the degeneration, was integrated into a "b" strain genome. The results of the present study should be helpful for identifying the potential molecular targets responsible for the degenerate phenotype. Further studies are required to determine whether helicases are directly involved in the abnormalities of F. velutipes and to confirm the mechanism by which a genomic region is transferred from one nucleus to another.
Figure S1. Alignment of the sequences corresponding to the degenerate-specific region from the degenerate (D4), wild-type (W1) and KACC42780 strains. Gene prediction was conducted using FGENESH, and the protein function and domain analyses were conducted using the UniProt program (http://www.uniprot.org/) and the Protein BLAST program of NCBI. The exons of the putative helicase gene are shown in bold, and the start and stop codons are underlined. Asterisks (*) indicate the residues that are identical in the three sequences. Dots and arrows indicate the sequence amplified using the SCAR marker primers that discriminated the degenerates. Double underlines and arrows indicate the genomic sequences corresponding to the HELICc and HepA domains. The telomeric repeating units (TTAGGG) are in shaded boxes. (PDF)
Figure S2. Deduced map of the genomic region corresponding to the degenerate-specific sequence in the wild-type (W1), degenerate (D4) and KACC42780 strains. Unknown but deduced sequences are shown in boxes with broken lines. The arrow indicates the direction of transcription of the putative helicase gene. The black boxes in the W1 and D4 sequences indicate high similarity (99%), and the gray boxes indicate low similarity (30 to 70%). (TIF)
Author Contributions
Conceived and designed the experiments: J-SR SYK. Performed the experiments: J-SR SYK. Analyzed the data: J-SR K-HK CHI. Contributed reagents/materials/analysis tools: CYL WSK. Contributed to the writing of the manuscript: J-SR SYK AA.
"year": 2014,
"sha1": "07dd927315656cd714a3b74397164b4b34110aea",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0107207&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07dd927315656cd714a3b74397164b4b34110aea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Informed Source Separation using Iterative Reconstruction
This paper presents a technique for Informed Source Separation (ISS) of a single-channel mixture, based on the Multiple Input Spectrogram Inversion method. The reconstruction of the source signals is iterative, alternating between a time-frequency consistency enforcement and a re-mixing constraint. A dual-resolution technique is also proposed, for sharper transient reconstruction. The two algorithms are compared to a state-of-the-art Wiener-based ISS technique, on a database of fourteen monophonic mixtures, with standard source separation objective measures. Experimental results show that the proposed algorithms outperform both this reference technique and the oracle Wiener filter by up to 3 dB in distortion, at the cost of significantly heavier computation.
I. INTRODUCTION
Audio source separation has attracted a lot of interest in the last decade, partly due to significant theoretical and algorithmic progress, but also in view of the wide range of applications for multimedia. Whether in video games, web conferencing or active music listening, to name but a few, the extraction of the individual sources that compose a mixture is of paramount importance. While blind source separation techniques (e.g. [1]) have made tremendous progress, in the general case they still cannot guarantee a sufficient separation quality for the above-noted applications when the number of sources gets much larger than the number of audio channels (in many cases, only 1 or 2 channels are available). The recent paradigm of Informed Source Separation (ISS) addresses this limitation by providing the separation algorithm with a small amount of extra information about the original sources and the mixing function. This information is chosen at the encoder in order to maximize the quality of separation at the decoder. ISS can then be seen as a combination of source separation and audio coding techniques, taking advantage of both simultaneously. Actually, the challenge of ISS is to find the best balance between the final quality of the separated tracks and the amount of extra information, so that it can easily be transmitted alongside the mix, or even watermarked into it.
Techniques such as [2], [3], [4] for stereo mixtures, and [5], [6], also applicable to monophonic mixtures, are all based on the same principle: coding energy information about each source in order to facilitate the posterior separation. Sources are then recovered by adaptive filtering of the mixture. For the sake of clarity, we will assume a monophonic case with a linear and instantaneous mixture (further extensions are discussed in Section VI-D): $J$ sources $s_j(t)$, $j = 1 \ldots J$, are linearly mixed into the mix signal $m(t) = \sum_j s_j(t)$. If the local time-frequency energy of all sources is known, noted $|S_k(f,t)|^2$, $k = 1 \ldots J$, then the individual source $s_j(t)$ can be estimated from the mix $m(t)$ using a generalized time-frequency Wiener filter in the Short-Time Fourier Transform (STFT) domain. Computing the Wiener filter $\alpha_j$ of source $j$ is equivalent to computing the relative energy contribution of the source with respect to the total energy of the sources. At a given time-frequency bin $(t,f)$, one has:

$$\alpha_j(t,f) = \frac{|S_j(t,f)|^2}{\sum_{k=1}^{J} |S_k(t,f)|^2} \quad (1)$$

The estimated source $\hat{s}_j(t)$ is then computed as the inverse STFT (e.g., with overlap-add techniques) of the weighted signal $\alpha_j(t,f)\, M(t,f)$, with $M$ the STFT of the mix $m$. This framework has the advantage that, by construction, the filters $\alpha_j$ sum to unity, and this guarantees that the so-called re-mixing constraint is satisfied:

$$\sum_{j=1}^{J} \hat{s}_j(t) = m(t) \quad (2)$$

The main limitation, however, lies in the estimation of the phase: only the magnitude $|\hat{S}_j(t,f)|$ of each source is estimated by this adaptive Wiener filter, and the reconstruction uses the phase of the mixture. While this might be a valid approximation for very sparse sources, when 2 or more sources are active in the same time-frequency bin, this leads to biased estimations, and therefore potentially audible artifacts. In order to overcome this issue, alternative source separation techniques have been designed [7], [8] that take advantage of the redundancy of the STFT representation. They are based on the classical algorithm of Griffin and Lim (G&L) [9], which iteratively reconstructs the signal knowing only its magnitude STFT. Again, these techniques only use the energy information of each source as prior information, but perform iterative phase reconstruction. For instance, the techniques developed in [7], [8] are shown to outperform the standard Wiener filter. However, in return, reconstructing the phases breaks the remixing constraint (2).
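For concreteness, the following is a minimal Python/NumPy sketch of this Wiener baseline. The function name, the STFT parameters (2048-sample windows, 50% overlap) and the assumption that the transmitted power spectrograms share the mixture's STFT grid are illustrative choices, not the paper's reference implementation.

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_separate(mix, powers, fs=44100, nperseg=2048, noverlap=1024, eps=1e-12):
    """Generalized Wiener filtering of a mono mixture, given the power
    spectrogram |S_j|^2 of each source on the mixture's STFT grid."""
    _, _, M = stft(mix, fs=fs, nperseg=nperseg, noverlap=noverlap)
    total = np.sum(powers, axis=0) + eps               # denominator of eqn (1)
    estimates = []
    for P in powers:
        alpha = P / total                              # soft mask of eqn (1)
        _, s_hat = istft(alpha * M, fs=fs, nperseg=nperseg, noverlap=noverlap)
        estimates.append(s_hat)                        # masks sum to 1, so eqn (2) holds
    return estimates
```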
The goal of this paper is to propose a new ISS framework based on a joint estimation of the source signals by iterative reconstruction of their phase. It builds on a technique called Multiple Input Spectrogram Inversion (MISI) [10], which at each iteration distributes the remixing error $e(t) = m(t) - \sum_j \hat{s}_j(t)$ amongst the estimated sources and therefore enforces the remixing constraint. It should be noted that, within the context of ISS, it uses the same prior information (spectrograms, i.e. the squared magnitudes of the STFTs, or quantized versions thereof) as the classical Wiener estimate. Therefore, the results of the oracle Wiener estimate will be used as the baseline throughout this paper, "oracle" meaning here perfect (non-quantized) knowledge of the spectrogram of every source.
In short, the two main contributions of this article can be summarized as follows : • the modification of the MISI technique to fit within a framework of ISS. The original MISI technique [10] benefits from a high overlap between analysis frames (typically 87.5 %), and the spectrograms are assumed to be perfectly known. The associated high coding costs are not compatible with a realistic ISS application, where the amount of side information must be as small as possible.
We show that a controlled quantization, combined with a relaxed distribution of the remixing error, leads to good results even at small rates of side information. • a dual-resolution technique that adds small analysis windows at transients, significantly improving the audio quality where it is most needed, at the cost of a small (but controlled) increase in the amount of side information. All these experimental configurations are evaluated on a variety of musical pieces, in a context of ISS.
The paper is organized as follows: a state of the art is given in Section II, where the G&L and MISI techniques are presented. In Section III, we propose an improvement to MISI, with preliminary experiments and discussion. In Section IV, we address the problem of transients and update our method with a dual-resolution analysis. In Section V, the full ISS framework is presented, describing both coding, decoding and reconstruction strategies. Experimental results are presented in Section VI, with a discussion on various design parameters. Finally, Section VII concludes this study.
II. STATE OF THE ART
A. Signal reconstruction from magnitude spectrogram
By nature, an STFT computed with an overlap between adjacent windows is a redundant representation. As a consequence, an arbitrary set of complex numbers $S \in \mathbb{C}^{M \times N}$ does not systematically represent a real signal in the time-frequency (TF) plane. As formalized in [11], the function $G = \mathrm{STFT} \circ \mathrm{iSTFT}$ is not a bijection, but rather a projection of a complex set $S \in \mathbb{C}^{M \times N}$ onto the sub-space of the so-called "consistent" STFTs, which are the TF representations that are invariant through $G$.
The G&L algorithm [9] is a simple iterative scheme to estimate the phase of the STFT from a magnitude spectrogram $|S|$. At each iteration $k$, the phase of the STFT estimate is updated with the phase of the consistent STFT obtained from the previous iteration, leading to the estimate:

$$\hat{S}^{(k+1)} = |S|\, e^{\, i \angle G(\hat{S}^{(k)})} \quad (3)$$

It is shown in [9] that each iteration decreases the objective function

$$\mathcal{D}(\hat{S}^{(k)}) = \sum_{t,f} \left| G(\hat{S}^{(k)})(t,f) - \hat{S}^{(k)}(t,f) \right|^{2}.$$

However, this algorithm has intrinsic limitations. Firstly, it processes the full signal at each iteration, which prevents an online implementation. This has been addressed in other implementations based on the same paradigm, see e.g. Zhu et al. [12] for online processing and LeRoux et al. [11] for a computational speedup. Secondly, the convergence of the objective function does not guarantee the reconstruction of the original signal, because of phase indetermination. The reader is referred to [13] for a complete review of iterative reconstruction algorithms and their convergence issues.
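A compact sketch of the G&L iteration, under the same illustrative STFT conventions as above (the frame-count trim is a guard for scipy's boundary handling, not part of the algorithm):

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=50, fs=44100, nperseg=2048, noverlap=1024):
    """Iterative reconstruction from |S|: alternate the consistency projection
    G = STFT(iSTFT(.)) with re-imposing the known magnitude (eqn (3))."""
    S = magnitude.astype(complex)                      # zero-phase initialization
    for _ in range(n_iter):
        _, x = istft(S, fs=fs, nperseg=nperseg, noverlap=noverlap)
        _, _, C = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        C = C[:, :magnitude.shape[1]]                  # keep the frame counts aligned
        S = magnitude * np.exp(1j * np.angle(C))       # magnitude clamp + phase update
    return istft(S, fs=fs, nperseg=nperseg, noverlap=noverlap)[1]
```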
B. Re-mixing constraint and MISI
In an effort to improve the convergence of the reconstruction within a source separation context, Gunawan et al. [10] proposed the MISI technique, which extracts additional phase information from the mixture. Here, the estimated sources should not only be consistent in terms of time-frequency (TF) representation, they should also satisfy the re-mixing constraint, so that the re-mix of the estimated sources is close enough to the original mixture. Let us consider the time-frequency remixing error $E_m$ such that:

$$E_m(t,f) = M(t,f) - \sum_{j=1}^{J} G(\hat{S}_j)(t,f) \quad (4)$$

Note that $E_m = 0$ when using the Wiener filter, whereas in the case of an iterative G&L phase reconstruction, $E_m \neq 0$ in general. Here, MISI distributes the error equally amongst the sources, leading to the corrected source at iteration $k$, $C_j^{(k)}$:

$$C_j^{(k)} = G(\hat{S}_j^{(k)}) + \frac{E_m}{J} \quad (5)$$

where $J$ is the number of sources. Therefore, if the spectrogram of each source is perfectly known, MISI simply adapts the G&L technique with an additional phase update based on the re-mixing error:

$$\hat{S}_j^{(k+1)} = |S_j|\, e^{\, i \angle C_j^{(k)}} \quad (6)$$

and the MISI algorithm alternates steps (4), (5) and (6). It should be emphasized that, with MISI, the time-domain estimated sources do not exactly satisfy the remixing constraint (equation (2)), the error term (4) playing a role only in the estimation of the phase.
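Under the same assumptions, one MISI run can be sketched as follows; the mixture-phase initialization matches the paper's experimental setup, while the STFT helper and array shapes are again illustrative (frame counts are assumed to round-trip):

```python
import numpy as np
from scipy.signal import stft, istft

def misi(mix, magnitudes, n_iter=50, **stft_kw):
    """MISI sketch: G&L-style consistency plus equal sharing of the
    time-frequency remixing error among the J sources (eqns (4)-(6))."""
    _, _, M = stft(mix, **stft_kw)
    J = len(magnitudes)
    S = [mag * np.exp(1j * np.angle(M)) for mag in magnitudes]  # mixture-phase init
    for _ in range(n_iter):
        G = []
        for Sj in S:                                   # consistency enforcement
            _, x = istft(Sj, **stft_kw)
            G.append(stft(x, **stft_kw)[2])
        E = M - sum(G)                                 # remixing error, eqn (4)
        S = [mag * np.exp(1j * np.angle(Gj + E / J))   # eqns (5) and (6)
             for mag, Gj in zip(magnitudes, G)]
    return [istft(Sj, **stft_kw)[1] for Sj in S]
```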
III. ENHANCING THE ITERATIVE RECONSTRUCTION
The MISI technique [10] presented in the previous section assumes that the spectrogram of every source is perfectly known. However, in the framework of ISS, we have to transmit the spectrogram information of each source with a data rate that is as small as possible, i.e. with quantization. At low bit rates (coarse quantization), the spectrograms may be degraded up to the point that modulus reconstruction is necessary. Therefore we will not only perform a phase reconstruction as in MISI, but a full TF reconstruction (phase and modulus) from the knowledge of both the mixture and the degraded spectrogram.
A. Activity-based error distribution
It is here assumed that only a degraded version of the source spectrogram is given. Equation (5) can still be used to rebuild both magnitude and phase of the STFT. However, a direct application of this technique leads to severe crosstalk, as some re-mixing error gets distributed on sources that are silent.
In order to only distribute the error where needed, we define a TF domain where a source is considered active, based on its normalized contribution $\alpha_j$ as given by the Wiener estimate in eqn. (1). For source $j$, the activity domain $\Psi_j$ is the binary TF indicator marking where the normalized contribution $\alpha_j$ of source $j$ is above some activity threshold $\rho$:

$$\Psi_j(n,m) = \begin{cases} 1 & \text{if } \alpha_j(n,m) > \rho \\ 0 & \text{otherwise} \end{cases} \quad (7)$$

Now, the error is distributed only where sources are active:

$$C_j^{(k)} = G(\hat{S}_j^{(k)}) + \Psi_j(n,m)\, \frac{E_m(n,m)}{D(n,m)} \quad (8)$$

where $D(n,m)$ is a TF error distribution parameter. It is possible to compute $D(n,m)$ as the number $N_a$ of active sources at TF bin $(n,m)$ (i.e., $D(n,m) = \sum_j \Psi_j(n,m)$). However, it was noticed experimentally that a fixed $D$ such that $D \gg N_a$ provides better results. This means that only a small portion of the error is added at each iteration, and that the successive TF consistency constraint enforcements (the $G$ function) validate or invalidate the added information. The exact tuning of the parameters $D$ and $\rho$ is based on experiments, as discussed in Section III-B. We expect that the lower $\rho$, the fewer the artifacts of the reconstruction, but also the higher the crosstalk (source interference), because the remixing error is distributed over a larger number of bins.
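The gated, damped error distribution can be sketched as below; the $\alpha_j$ maps are assumed available at the decoder (decoded from the side information), which is an assumption of this sketch rather than a statement about the bitstream format:

```python
import numpy as np

def distribute_error(G_list, M, alphas, rho=0.01, D=40):
    """Relaxed error distribution of eqns (7)-(8): a source only receives
    remixing error on its activity domain, and only a fraction 1/D of it
    per iteration (here with the fixed D >> N_a found to work best)."""
    E = M - sum(G_list)                                # TF remixing error
    corrected = []
    for Gj, aj in zip(G_list, alphas):
        psi = aj > rho                                 # binary activity domain, eqn (7)
        corrected.append(Gj + psi * E / D)             # gated, damped correction, eqn (8)
    return corrected
```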
B. Preliminary experiments
A first test is performed to validate the proposed design and to experiment with the various parameters. We use a monophonic electro-jazz music mixture in 16-bit/44.1 kHz format. Five instruments play in this mixture: a bass, a drum set, percussion, an electric piano and a saxophone. These instruments have characteristics that interfere with one another. For instance, the bass guitar and the electric piano interfere heavily in the low frequencies, whereas drums and percussion both have strong transients. The saxophone is very breathy, but the breath contribution is far below the energy of the harmonics.
The spectrograms are log-quantized (in dB, cf. [14], [6]) with three quantization steps: u = 0 (no quantization), 2 and 4 dB. For each of these three conditions, we use two overlap values of 50% and 75% and a window size of 2048 samples at a 44.1 kHz sampling rate. Two values of the activity threshold are tested: ρ = 0.1 and 0.01. The phase of each source is initialized with the phase of the mixture, and 50 iterations are performed.
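The paper does not spell the quantizer out; one plausible reading of "log-quantized (in dB) with step u", used here purely for illustration, rounds each bin's dB level to the nearest multiple of u:

```python
import numpy as np

def quantize_db(mag, u, eps=1e-12):
    """Round the dB level of each magnitude bin to a multiple of u (u = 0: exact)."""
    if u == 0:
        return mag
    db = 20.0 * np.log10(np.maximum(mag, eps))
    return 10.0 ** (u * np.round(db / u) / 20.0)
```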
We test 3 variants of the proposed separation method : 1) M1 : with D = 40 and activity detection.
2) M2 : with D = N a and activity detection.
3) M3 : with D = N a and no activity detection.
For this evaluation, we use the three objective criteria of the BSS Eval toolbox [15], namely the Source to Distortion Ratio (SDR), the Source to Interference Ratio (SIR) and the Source to Artifact Ratio (SAR). The results given in Figure 1 are relative to the performance of the oracle Wiener filter estimate, taken as the reference. In the present experiment, the absolute mean (respectively, standard deviation) figures of the oracle Wiener filter were: SDR = 9.0 (1.3) dB, SIR = 21 (5.1) dB, SAR = 9.4 (1.2) dB, for both 50% and 75% overlap. Results of MISI on the same signal are given in Figure 2.
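The paper's evaluation uses the Matlab BSS Eval toolbox [15]; for readers working in Python, the mir_eval package provides an implementation of the same metrics (substituting it here is our suggestion, not the paper's setup):

```python
import numpy as np
import mir_eval  # Python implementation of the BSS Eval metrics

def evaluate(references, estimates):
    """SDR/SIR/SAR for stacked (n_sources, n_samples) reference/estimate arrays."""
    n = min(references.shape[1], estimates.shape[1])   # align signal lengths
    sdr, sir, sar, _ = mir_eval.separation.bss_eval_sources(
        references[:, :n], estimates[:, :n])
    return sdr, sir, sar
```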
C. Discussion
The results are presented in Figures 1 and 2, and the reconstructed sources are available on the demo webpage [16]. The performance of unquantized MISI is very high, but decreases rapidly as quantization increases. This is directly linked to the fact that the spectrogram is constrained, which would be even more problematic when part of the spectrogram is missing for bitrate-reduction purposes. The activity-based error distribution (M1 and M2 vs. M3) significantly improves the three objective criteria, both in mean and in standard deviation. This is expected, as the activity domain prevents reconstruction of a source on a bin where its contribution to the mixture is negligible. One can also see that lowering the activity threshold ρ (from 0.1, upper line, to 0.01, lower line) improves the SAR but lowers the SIR: a lower value of ρ distributes the error over a larger number of bins. While this produces fewer "holes" in the reconstructed TF representation (higher SAR), it also involves more crosstalk between sources (lower SIR). In every condition, the tradeoff when lowering ρ seems to be a loss of about 1 dB in SIR for a gain of 1 dB in SAR. Since the SIR of the oracle Wiener filter is already high (> 15 dB), it seems a better tradeoff to favor the SAR, in order to improve the global SDR gain. Therefore, the lower value ρ = 0.01 is used for the rest of the paper.
The improvement brought by D ≫ N_a (M1) compared to D = N_a (M2) is less important. The precise choice of D is explored in Fig. 3. Large values of D seem to provide better convergence: the energy of the error that is distributed to a source but does not belong to it (on a consistency basis) is easily discarded, because of its small value and because of the energy-smearing effect of the G function.
When the spectrogram is quantized with a u = 4 dB quantization step, the reconstruction performance reaches a maximum with D = 40 for 50 iterations. Finally, the effect of spectrogram quantization is clear. As expected, increasing the quantization step lowers the SDR, and it also dramatically lowers the SAR because of the added artifacts caused by the quantization. Figure 4 presents the SDR improvement when varying the quantization step u, for algorithm M1. Even for a relatively coarse quantization step of 4 dB, the results still outperform the oracle Wiener filter.
To summarize the results of this preliminary experiment, we have shown that, at least for the sounds under test, the proposed method M1 (activity detection, D = 40) can outperform the oracle Wiener filter while keeping the amount of side information low, with a crude quantization of the spectrograms (u = 4 dB). However, these results are not perfect, especially in terms of perception. When listening to the sound examples (available online [16]), one can hear a number of artifacts, especially at transients. Indeed, transient reconstruction from a spectrogram or from a Wiener filter is a well-known issue [17], as time-domain localization is mainly conveyed by the phase. The next section alleviates this problem by using multiple analysis windows.
IV. IMPROVING TRANSIENTS RECONSTRUCTION
The missing phase information at transients leads to a smearing of the energy, pre-echo, or an impression of over-smoothness of the attack. In order to prevent these issues, window switching can be used, with a shorter STFT at transients [17], [18], [19]. In Advanced Audio Coding (AAC), for instance, the window switches from 2048 to 256 samples when a transient is detected. Here, because we want the same TF grid for sources that can have very different TF resolution requirements, we do not switch between window sizes but rather use a dual resolution at transients, keeping both window sizes. Note that this leads to a small overhead in the amount of side information to encode (both short- and long-window spectrograms have to be quantized and transmitted at transients), but does not require transition windows.
A. Transients detection
We use the same non-uniform STFT grid for every source and for the mixture, preserving the ability to perform TF addition and subtraction for the error distribution. In order to obtain this non-uniform grid, we proceed in three steps at the coding stage (a code sketch follows the list below): 1) a binary transient indicator $T_j(t)$ is computed for each source $j$, using the Complex Spectrum Difference [20]: $T_j(t)$ equals 1 if a transient is detected at time $t$, and 0 otherwise. 2) The transients are combined into $T_{all}$ so that $T_{all}(t) = \bigoplus_j T_j(t)$, where ⊕ is the logical OR function.
3) $T_{all}$ is cleaned so that the time between two consecutive transients is greater than or equal to the length of two large windows. The non-uniform STFT is then constructed by concatenating the large-window STFT on all frames, plus the short-window STFT on the transient frames flagged in $T_{all}$. Figure 5 shows this dual-resolution STFT when a transient is detected.
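A sketch of the three steps above; note that a plain frame-to-frame spectral difference stands in for the Complex Spectrum Difference of [20], which also uses phase prediction, so this detection function is a simplification:

```python
import numpy as np

def transient_grid(source_stfts, hop, long_win, thresh):
    """Per-source onset flags (step 1), OR-combination (step 2), and pruning of
    detections closer than two large windows (step 3)."""
    T_all = np.zeros(source_stfts[0].shape[1], dtype=bool)
    for S in source_stfts:
        d = np.sum(np.abs(np.diff(S, axis=1)), axis=0)  # simplified onset function
        T_all[1:] |= d > thresh                         # logical OR across sources
    min_gap = int(np.ceil(2 * long_win / hop))          # spacing of two large windows
    last = -min_gap
    for t in np.flatnonzero(T_all):
        if t - last < min_gap:
            T_all[t] = False                            # too close to the previous one
        else:
            last = t
    return T_all            # frames that additionally receive the short-window STFT
```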
B. Experiments
In order to evaluate the improvements brought by dual resolution, we use the same sound sample as before: a 15-second electro-jazz piece composed of 5 sources. The same parameters are also used: 50 iterations, D = 40, ρ = 0.01, and two overlap values, 50% and 75%. The large and small window sizes are set to 2048 and 256 samples, respectively.
Results are presented in Figure 6, showing improvement over the Wiener filter as before. Note that we used the same (single-resolution) Wiener filter reference throughout this experiment. The results with transient detection at 50% overlap (leading to an increase in data size of 15 to 25%, depending on the number of detected transients) are close to the results obtained with a uniform STFT at 75% overlap (100% more data): transient detection brings the same separation benefits as increasing the overlap, with the added value of sharper transients. Audio examples are available on the demo web page [16].
V. PRACTICAL IMPLEMENTATION IN AN ISS FRAMEWORK
This section presents the new source reconstruction method in a full ISS framework. We call our method Informed Source Separation using Iterative Reconstruction (ISSIR). First the coding scheme will be presented, together with parameter tuning. Then, the decoding scheme will be presented.
A. Coder
Data coding is used to format and compact the information needed for the posterior reconstruction. The size of this coded data is of prime importance:
• In the case of watermarking within the mixture (which would then be coded in PCM), high-capacity watermarking may be available [21], limited by a constraint of perceptual near-transparency. The lower the bit rate, the higher the quality of the final watermarked mixture used for the source reconstruction.
Of course, increasing the bit rate would eventually lead to the particular case where simple perceptual coding of all the sources (for instance with MPEG 2/4 AAC) would be more efficient than informed separation. In order to achieve optimal data compaction, we make the following observation: most of the music signals are sparse and mostly described by their most energetic bins. Therefore, spectrograms coding should not require the description of TF bins with an energy threshold T lower than e.g. -20dB below the maximum energy bin of the TF representation. What we propose is then to discard the bins that are lower than T in Energy. T is the first parameter to be adjusted in order to fit the target bit rate, with T ≤ −20 dB. Note that former work, e.g. [6], also threshold the spectrogram, but much lower in energy (-80 dB). The second parameter for data compaction is the quantization of the spectrogram with step u. As seen before, increasing u decreases the reconstruction quality but lowers the number of energy levels to be encoded. Since increasing u did not change much the entropy of the data distribution, we choose u = 1dB for the whole experiment. The third parameter ρ used for the activity domain is set to .01 and is not modified in our experiments.
The data size of the activity domain is thus fixed throughout the experiments. In order to compact this information even more, we group time-frequency bins on the frequency scale using logarithmic rules similar to the Equivalent Rectangular Bandwidth (ERB [22]) scale. This psychoacoustics-based compression technique has also been used for informed source separation in [4], [5]. For the experiments in this paper we use 75, 125 or 250 non-overlapping bands on large windows (1025 coded bins) and 25 bands on small windows (129 coded bins), as presented in Figure 7 (logarithmic bin grouping in subbands, for 25 subbands and 129 bins).
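A possible band-grouping helper is sketched below; the exact logarithmic spacing and the OR-reduction of the activity mask within a band are our assumptions, as the paper only specifies ERB-like, non-overlapping log bands:

```python
import numpy as np

def log_band_edges(n_bins, n_bands):
    """Approximately logarithmic band edges over the coded STFT bins."""
    edges = np.round(np.logspace(0, np.log10(n_bins), n_bands + 1)).astype(int)
    return np.maximum.accumulate(edges)        # keep the edges monotone

def group_activity(psi, edges):
    """Collapse a per-bin activity mask (freq x time) to one flag per band."""
    return np.array([psi[lo - 1:hi].any(axis=0)
                     for lo, hi in zip(edges[:-1], edges[1:])])
```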
Additional parameters such as the spectrogram normalization coefficients, the STFT structure, the transient locations and the quantization step are transmitted separately: such information represents a negligible amount of data, as most of it is fixed for the whole file duration. At the end of the coding stage, basic entropy coding is added (in our experimental setup, we used bzip2). Figure 8 shows the coding scheme, with the feedback loop for the adjustment of the model parameters to the target bit rate in kb/source/s. The target bit rate is a mean amongst the sources, as some sources require more information to be encoded than others. Such a framework allows mean data rates as low as 2 kb/source/s.
B. Decoder
The decoder performs all the previous operations backwards. It first initializes each source using the log-quantized data and the phase of the mixture M . Then, the iterative reconstruction is run for K iterations and the signals are finally reconstructed using the decoded activity domain Ψ j .
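Putting the pieces together, a decoder loop could look as follows (reusing distribute_error from Section III). Keeping the corrected STFT wholesale, instead of clamping it back to the quantized magnitudes at each iteration, is one reading of the "full TF reconstruction (phase and modulus)" remark of Section III, and is flagged as an assumption:

```python
import numpy as np
from scipy.signal import stft, istft

def issir_decode(mix, init_mags, alphas, n_iter=50, rho=0.01, D=40, **stft_kw):
    """ISSIR decoder sketch: initialize with the decoded magnitudes and the
    mixture phase, alternate TF-consistency with gated error injection, then
    synthesize each source restricted to its activity domain."""
    _, _, M = stft(mix, **stft_kw)
    S = [mag * np.exp(1j * np.angle(M)) for mag in init_mags]
    for _ in range(n_iter):
        G = []
        for Sj in S:
            _, x = istft(Sj, **stft_kw)
            G.append(stft(x, **stft_kw)[2])               # consistency enforcement
        S = distribute_error(G, M, alphas, rho=rho, D=D)  # magnitude + phase update
    return [istft(Sj * (aj > rho), **stft_kw)[1] for Sj, aj in zip(S, alphas)]
```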
VI. EXPERIMENTS
In this section we validate our complete ISSIR framework on different types of monophonic mixtures. As the problem of informed source separation is essentially a tradeoff between bit rate and quality, we perform the experiments by setting different thresholds T and filter bank sizes for the single and dual window STFT algorithm presented before. The baseline for comparison is a state-of-the-art ISS framework based on Wiener filtering [6], where JPEG image coding is simply used to encode the spectrograms. For a fair comparison, we also use this method with the same ERB-based filter bank grouping. For reference, we also compute the results of the original MISI method, with spectrogram quantization and coding.
The test database is composed of 14 short monophonic mixtures from the Quaero database, from 15 to 40 s long, covering various musical styles (pop, rock, industrial rock, electro jazz, disco) and different instruments. Each mixture is composed of 5 to 10 different sources, for a total of 90 source signals. The relation between the sources and the mixture is linear, instantaneous and stationary; however, the sources include various effects such as dynamic processing, reverberation or equalization, so that the resulting mixtures are close to what would have been obtained by a sound engineer on a Digital Audio Workstation. Figure 9 presents the mean and standard deviation of the improvements over the oracle Wiener filter for the whole database. As before, the SDR, SIR and SAR are used for the comparison of the different methods. Reported bit rates are averaged over the whole database for a given experimental condition. Four mixtures under Creative Commons license are given as audio examples on the demo web page [16]:
• Arbaa (Electro Jazz) - mixture no. 2 - 5 sources
• Farkaa (Reggae) - mixture no. 4 - 7 sources
• Nine Inch Nails (Industrial Rock) - mixture no. 8 - 7 sources
• Shannon Hurley (Pop) - mixture no. 12 - 8 sources
A. Bit rates and overall quality
As expected, increasing the bit rate improves the reconstruction on all criteria. The two ISSIR algorithms always outperform the baseline method of [6], although not significantly at very low bit rates when the non-uniform filterbank is used. The dual-resolution framework requires more data, and only outperforms the single-resolution algorithm at bit rates higher than 10 kb/source/s, where the latter tends to reach its maximum of 1.7 dB improvement over the oracle Wiener filter. At 32 kb/source/s, the dual-resolution method reaches its own maximum of approximately 3 dB improvement over the oracle Wiener filter. For even higher bit rates, MISI gives significantly better results, but the high amount of total side information is not compatible with a realistic ISS usage.
[Fig. 10. Separation results compared to the oracle Wiener filter for every tested mixture, with mean and standard deviation, at a bit rate of 10 kb/source/s.]
B. Performance as a function of the sound file
The previous experiments exhibit a strong variance: results are highly dependent both on the type of music and on the sources. Figure 10 presents the SDR results for the 14 sound files, at an average bit rate of 10kb/source/s. It can be observed that the variation occurs both from mixture to mixture and between sources within a mixture. At this bit rate, the dual-resolution algorithm may not always perform better than the single-resolution algorithm, as can be seen for mixtures 3, 5, 13, and 14. However, the proposed technique (single or dual resolution) always outperforms the reference method of [6].
C. Computation time
Since the proposed reconstruction algorithm is iterative, decoding requires a heavier computational load than simple Wiener estimates. A Matlab implementation of the dual-resolution scheme led to computation times of 6 to 9 s per second of signal, for 50 iterations, on a standard computer.
As a proof of concept, the single-resolution iterative reconstruction was also implemented in parallel with the OpenCL [23] API, using a fast iterative signal reconstruction [11]. On a mid-range graphics card, the computation time dropped to 0.3 to 0.4 s per second of signal. The adaptation of this fast scheme to the dual-resolution case is, however, not straightforward.
D. Complex mixtures
In the case of complex mixtures (multichannel, convolutive, etc.), the main issue is the error distribution of equation (8), which itself requires a partial inversion of the mixing function. In fact, the actual source separation is done at this level, and this paper shows that a simple binary mask at this stage is sufficient to achieve good results on monophonic mixtures. The framework presented in this paper could then be adapted to a wide variety of source separation methods, especially when the mixing function is known. In the case of multichannel mixtures, for instance, the error distribution could be done using beamforming techniques.
VII. CONCLUSION
This paper proposes a complete framework for informed source separation using an iterative reconstruction, called Informed Source Separation using Iterative Reconstruction (ISSIR). In experiments on various types of music, ISSIR outperforms, on standard objective criteria, a state-of-the-art ISS technique based on JPEG compression of the spectrogram, and even outperforms oracle Wiener filtering by up to 3dB in source-to-distortion ratio.
Future work should focus on the optimization of the algorithm in order to lighten the computational load, and on its extension to multichannel and convolutive mixtures. Psychoacoustic models should also be considered as a way to compact and shape the side information. Finally, formal listening tests should confirm the objective results, although it should be emphasized that setting up a whole methodology for such ISS listening tests (which is not as established as in other fields, e.g., audio coding) is a work in itself that goes beyond the current study. | 2012-02-09T10:39:07.000Z | 2012-02-09T00:00:00.000 | {
"year": 2013,
"sha1": "fc9f6eba4d977e0d6d341b58ebf485334184cef3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1202.2075",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "cd23570308fa55b5683ec3b6d26908cea0eeba68",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
119198070 | pes2o/s2orc | v3-fos-license | Space-time deformations as extended conformal transformations
A definition of space-time metric deformations on an $n$-dimensional manifold is given. We show that such deformations can be regarded as extended conformal transformations. In particular, their features can be related to perturbation theory, giving a natural picture in which gravitational waves are described by small deformations of the metric. As a further result, deformations can be related to approximate Killing vectors (approximate symmetries), by which it is possible to parameterize the deformed region of a given manifold. The perspectives and some possible physical applications of this approach are discussed.
I. INTRODUCTION
The issue of finding a general way to deform space-time metrics is not new. It has been posed in different ways and is related to several physical problems, ranging from the spontaneous symmetry breaking of unification theories up to gravitational waves, considered as space-time perturbations. In cosmology, for example, one faces the problem of describing an observationally lumpy universe at small scales which becomes isotropic and homogeneous at very large scales, according to the Cosmological Principle. In this context, it is crucial to find a way to connect background and locally perturbed metrics [1]. For example, McVittie [2] considered a metric which behaves as a Schwarzschild one at short range and as a Friedmann-Lemaître-Robertson-Walker metric at very large scales. Gautreau [3] calculated the metric generated by a Schwarzschild mass embedded in a Friedmann cosmological fluid, trying to address the same problem. On the other hand, the post-Newtonian parameterization can, as a standard, be considered as a deformation of a background, asymptotically flat Minkowski metric.
In general, the deformation problem has been explicitly posed by Coll and collaborators [4,5,10], who conjectured the possibility of obtaining any metric from the deformation of a space-time with constant curvature. The problem was solved only for 3-dimensional spaces, but a straightforward extension would be to achieve the same result for space-times of any dimension.
In principle, new exact solutions of the Einstein field equations can be obtained by studying perturbations. In particular, treating perturbations through matrices of scalar fields $\Phi^A_C$ proves particularly useful. Firstly, they transform as scalars with respect to coordinate transformations. Secondly, they are dimensionless and, at each point, the matrix $\Phi^A_C$ behaves as the element of a group. As we shall see below, such an approach can be related to conformal transformations, giving an "extended" interpretation and a straightforward physical meaning of them (see [7,8] and references therein for a comprehensive review). Furthermore, scalar fields related to space-time deformations have a straightforward physical interpretation, which could contribute to explaining several fundamental issues such as the Higgs mechanism in unification theories, inflation in cosmology, and other pictures where scalar fields play a fundamental role in the dynamics.
In this paper, we discuss the properties of the deforming matrices $\Phi^A_C$ and derive, from the Einstein equations, the field equations for them, showing how they can parameterize the deformed metrics, according to the boundary and initial conditions and to the energy-momentum tensor.
The layout of the paper is the following. In Sec. II, we define space-time perturbations in the framework of the metric formalism, giving the notion of first and second deformation matrices. Sec. III is devoted to the main properties of deformations. In particular, we discuss how deformation matrices can be split into their trace, traceless and skew parts. We derive the contributions of deformations to the geodesic equation and, starting from the Riemann curvature tensor, the general equation of deformations. In Sec. IV we discuss the notion of linear perturbations from the standpoint of deformations. In particular, we recast the equation of gravitational waves and the transverse traceless gauge in the language of deformations. Sec. V is devoted to the action of deformations on Killing vectors; the result is a notion of approximate symmetry. Discussion and conclusions are given in Sec. VI. In the Appendix, we discuss in detail how deformations act on affine connections.
II. GENERALITIES ON SPACE-TIME DEFORMATIONS
In order to start our considerations, let us take into account a metric g on a space-time manifold M. Such a metric is assumed to be an exact solution of the Einstein field equations. We can decompose it by a co-tetrad field $\omega^A(x)$,

$g = \eta_{AB}\,\omega^A \otimes \omega^B$.  (1)

Let us now define a new tetrad field $\tilde\omega^A = \Phi^A_C(x)\,\omega^C$, with $\Phi^A_C(x)$ a matrix of scalar fields. Finally, we introduce a space-time $\widetilde{M}$ with the metric $\tilde g$ defined in the following way,

$\tilde g = \eta_{AB}\,\Phi^A_C(x)\,\Phi^B_D(x)\,\omega^C \otimes \omega^D = \gamma_{CD}(x)\,\omega^C \otimes \omega^D$,  (2)

where $\gamma_{CD}(x)$ is also a matrix of fields which are scalars with respect to coordinate transformations.
When the deformation is trivial (for instance $\Phi^A_C = \delta^A_C$, so that $\tilde g = g$), the two metrics coincide; otherwise we say that $\tilde g$ is a deformation of g and $\widetilde{M}$ is a deformed M. If all the functions $\Phi^A_C(x)$ are continuous, then there is a one-to-one correspondence between the points of M and the points of $\widetilde{M}$.
In particular, if $\xi$ is a Killing vector for g on M, the corresponding vector $\tilde\xi$ on $\widetilde{M}$ is not necessarily a Killing vector.
A particular subset of these deformation matrices is given by

$\Phi^A_C(x) = \Omega(x)\,\delta^A_C$,

which defines conformal transformations of the metric,

$\tilde g = \Omega^2(x)\,g$.

In this sense, the deformations defined by Eq. (2) can be regarded as a generalization of the conformal transformations.
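As a small consistency check of this statement, the following sympy sketch verifies that a pure-trace deformation matrix reproduces a conformal rescaling; it assumes the relation $\gamma_{AB} = \eta_{CD}\Phi^C_A\Phi^D_B$ given below, and is only an illustration.

```python
import sympy as sp

Omega = sp.symbols('Omega', positive=True)
eta = sp.diag(-1, 1, 1, 1)        # Minkowski matrix in tetrad indices
Phi = Omega * sp.eye(4)           # pure-trace first deformation matrix
gamma = Phi.T * eta * Phi         # second deformation matrix gamma_AB
# A pure-trace deformation is exactly a conformal rescaling by Omega**2.
assert sp.simplify(gamma - Omega**2 * eta) == sp.zeros(4, 4)
print(gamma)                      # -> Omega**2 * diag(-1, 1, 1, 1)
```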
We call the matrices $\Phi^A_C(x)$ first deformation matrices, while we can refer to

$\gamma_{AB}(x) = \eta_{CD}\,\Phi^C_A(x)\,\Phi^D_B(x)$

as the second deformation matrices, which, as seen above, are also matrices of scalar fields. They generalize the Minkowski matrix $\eta_{AB}$, with constant elements, in the definition of the metric. A further restriction on the matrices $\Phi^A_C$ comes from the theorem proved by Riemann, by which an n-dimensional metric has n(n-1)/2 degrees of freedom (see [5] for details). With these definitions in mind, let us consider the main properties of the deforming matrices.
Let us take into account a four-dimensional space-time with Lorentzian signature. A family of matrices $\Phi^A_C(x)$ is defined on such a space-time. These functions are not necessarily continuous and can connect space-times with different topologies. A singular scalar field introduces a deformed manifold $\widetilde{M}$ with a space-time singularity.
As is well known, the Lorentz matrices $\Lambda^A_C$ leave the Minkowski metric invariant,

$\eta_{AB}\,\Lambda^A_C\,\Lambda^B_D = \eta_{CD}$.

It follows that the $\Phi^A_C$ give rise to right cosets of the Lorentz group, i.e. they are elements of the quotient group GL(4,R)/SO(3,1). On the other hand, a right-multiplication of $\Phi^A_C$ by a Lorentz matrix induces a different deformation matrix.
The inverse deformed metric can be split into its symmetric and antisymmetric parts through the decomposition of the first deformation matrix into trace, traceless symmetric and skew-symmetric pieces, where $\Omega = \Phi^A_{\ A}$ is the trace, $\Theta_{AB}$ is the traceless symmetric part and $\varphi_{AB}$ is the skew-symmetric part of the first deformation matrix, respectively. Standard conformal transformations are then nothing else but deformations with $\Theta_{AB} = \varphi_{AB} = 0$ [9].
Finding the inverse matrix $\Phi^{-1\,A}_{\ \ C}$ in terms of $\Omega$, $\Theta_{AB}$ and $\varphi_{AB}$ is not immediate but, as above, it can be split into three terms, where $\alpha$, $\Psi^A_C$ and $\Sigma^A_C$ are respectively the trace, the traceless symmetric part and the antisymmetric part of the inverse deformation matrix. The second deformation matrix then takes the corresponding form from the above decomposition. In general, the deformed metric can be split into a conformal part and a residual part. In particular, if $\Theta_{AB} = 0$, the deformed metric simplifies and, if moreover $\Omega = 1$, the deformation of the metric consists in adding a tensor $\gamma_{ab}$ to the background metric. We have to remember that all these quantities are not independent since, by the theorem mentioned in [5], they have to form at most six independent functions in a four-dimensional space-time.
Similarly, the contravariant deformed metric can always be decomposed in an analogous way. Let us find the relation between $\gamma_{ab}$ and $\lambda_{ab}$. By using $\tilde g_{ab}\tilde g^{bc} = \delta^c_a$, one obtains the corresponding matrix equation; if the deformations are conformal transformations, we have $\alpha = \Omega^{-1}$, and assuming such a condition one finally obtains $\lambda_{ab}$ through $(\delta + \Omega^{-2}\gamma)^{-1}$, the inverse tensor of $(\delta^b_a + \Omega^{-2}\gamma^b_a)$. To each matrix $\Phi^A_B$, we can associate a (1,1)-tensor $\phi^a_b$ such that the deformation can be written in tensor form, which can be decomposed as in Eq. (16); vice versa, from a (1,1)-tensor $\phi^a_b$ we can define a matrix of scalar fields. The Levi-Civita connection corresponding to the metric (14) is related to the original connection (see the Appendix for details and [9]). Therefore, in a deformed space-time, the connection deformation acts like a force that deviates test particles from the geodesic motion of the unperturbed space-time; this is encoded in the geodesic equation for the deformed space-time. The deformed Riemann curvature tensor, the deformed Ricci tensor obtained by contraction, and the deformed curvature scalar then follow. From the above curvature quantities, we finally obtain the equations for the deformations. In the vacuum case, we obtain an equation in which $R_{ab}$ must be regarded as a known function. In the presence of matter, we consider the corresponding equation with sources, assuming for the sake of simplicity $8\pi G = c = 1$. This last equation can be improved by considering the Einstein field equations, which yields the most general equation for deformations.
IV. METRIC DEFORMATIONS AS PERTURBATIONS AND GRAVITATIONAL WAVES
Metric deformations can be used to describe perturbations. To this aim, we can simply consider the deformations together with their derivatives; within this approximation, we immediately find the inverse relation. As a remarkable example, gravitational waves are generally described, in the linear approximation, as perturbations of the Minkowski metric. In our case, we can extend such an approximation in a covariant way. If $\varphi_{AB}$ is an antisymmetric matrix, the first-order terms in $\varphi^A_B$ vanish and $\gamma_{ab}$ is of second order. Let us consider a background metric $g_{ab}$, solution of the Einstein equations in vacuum. We obtain the equation of perturbations by keeping only the linear terms in Eq. (32) and neglecting the contributions of quadratic terms; by the explicit form of $C^d_{ab}$, this equation becomes a wave-type equation. Imposing the transverse traceless gauge on $\gamma_{ab}$, i.e. the standard gauge conditions $\nabla^a\gamma_{ab} = 0$ (48) and $\gamma = \gamma^a_{\ a} = 0$ (49), Eq. (47) reduces to a wave equation for $\gamma_{ab}$ (see also [9]). In our context, this is a linearized equation for deformations, and it is straightforward to regard perturbations and, in particular, gravitational waves as small deformations of the metric. This result can be immediately translated into the scalar field matrix equations above. Note that such an equation can be applied to the conformal part of the deformation, when the general decomposition is considered.
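For reference, the linearized system discussed here takes the standard transverse-traceless form; the restatement below is hedged, since on a curved vacuum background an additional curvature coupling term appears, and the flat-space wave equation shown is the one used in the Minkowski example that follows.

```latex
\begin{align}
  \tilde{g}_{ab} &= g_{ab} + \gamma_{ab}, \qquad |\gamma_{ab}| \ll 1, \\
  \nabla^{a}\gamma_{ab} &= 0, \qquad \gamma \equiv \gamma^{a}{}_{a} = 0
  \qquad \text{(transverse-traceless gauge, Eqs. (48)-(49))}, \\
  \Box\,\gamma_{ab} &= 0 \qquad \text{(flat-background wave equation for the deformation)}.
\end{align}
```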
As an example, let us take into account the deformation matrix equations applied to the Minkowski metric, when the deformation matrix assumes the form (36). In this case, equations (47) become ordinary wave equations for $\gamma_{ab}$. In terms of the deformation matrices, for a tetrad field of constant vectors, these become wave equations for the scalar fields, with the above gauge conditions rewritten accordingly. This result shows that gravitational waves can be fully recovered starting from the scalar fields which describe the deformations of the metric. In other words, such scalar fields can assume the meaning of gravitational wave modes.
V. APPROXIMATE KILLING VECTORS
Another important issue which can be addressed starting from space-time deformations is related to the symmetries. In particular, they assume a fundamental role in describing when a symmetry is preserved or broken under the action of a given field. In General Relativity, the Killing vectors are always related to the presence of given space-time symmetries [9].
Let us take an exact solution of the Einstein equations which satisfies the Killing equation, where $\xi$, the generator of an infinitesimal coordinate transformation, is a Killing vector. If we take a deformation of the metric through the scalar field matrices introduced above, we obtain the corresponding condition on the deformed space-time. If there is some region D of the deformed space-time $M_{\rm deformed}$ where this condition holds only approximately, we say that $\xi$ is an approximate Killing vector on D. In other words, these approximate Killing vectors allow one to "control" the space-time symmetries under the action of a given deformation.
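In standard notation, the exact and approximate conditions referred to in this section can be restated as follows; the threshold $\varepsilon$ is an illustrative way of making "approximately" precise, not necessarily the paper's exact criterion.

```latex
\begin{align}
  \nabla_{a}\xi_{b} + \nabla_{b}\xi_{a} &= 0
  && \text{(Killing equation on } (M, g)\text{)}, \\
  \big|\tilde{\nabla}_{a}\tilde{\xi}_{b} + \tilde{\nabla}_{b}\tilde{\xi}_{a}\big|
  &< \varepsilon \quad \text{on } D
  && \text{(approximate Killing vector on the deformed manifold)}.
\end{align}
```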
VI. DISCUSSION AND CONCLUSIONS
In this paper, we have proposed a novel definition of space-time metric deformations, parameterizing them in terms of scalar field matrices. The main result is that deformations can be described as extended conformal transformations. This fact gives a straightforward physical interpretation of conformal transformations: conformally related metrics can be seen as the "background" and the "perturbed" metrics. In other words, the relations between the Jordan frame and the Einstein frame can be directly interpreted through the action of the deformation matrices, contributing to solving the issue of which is the true physical frame [7,8].
Moreover, space-time metric deformations can be immediately recast in terms of perturbation theory, allowing a completely covariant approach to the problem of gravitational waves.
Results related to those presented here have been proposed in [4,5]. There it is shown that any metric on a three-dimensional manifold can be decomposed in a form (61) in which $h_{ab}$ is a metric with constant curvature, $\sigma(x)$ is a scalar function, $s_a$ is a three-vector and $\epsilon = \pm 1$. A relation has to be imposed between $\sigma$ and $s_a$, and then the metric can be defined by at most three independent functions. In a subsequent paper [6], Llosa and Soler showed that (61) can be generalized to arbitrary dimensions in a form (62) in which $g_{ab}$ is a constant-curvature metric, $F_{ab}$ is a two-form, and $\lambda(x)$ and $\mu(x)$ are two scalar functions. These results are fully recovered and generalized by our approach as soon as the deformation of a constant-curvature metric is considered and suitable conditions on the tensor $\Theta_{AB}$ are imposed. In general, we have shown that, turning to the tensor formalism, we can work with arbitrary metrics and arbitrary deforming $\gamma_{ab}$ tensors. In principle, with arbitrary deformation matrices, not necessarily real, we can pass from a given metric to any other metric. As an example, a noteworthy result was achieved by Newman and Janis [11]: they showed that, through a complex coordinate transformation, it is always possible to obtain a Kerr metric from a Schwarzschild one. In our language, this means that a space-time deformation allows one to pass from spherical symmetry to axial symmetry. Furthermore, it has been shown [12,13] that three-dimensional black hole solutions can be found by identifying points of 3-dimensional anti-de Sitter space under the action of a discrete subgroup of SO(2,2).
In all these examples, the transformations which lead to the results are considered as "coordinate transformations". We think that this definition is somewhat misleading, since one does not covariantly perform the same transformation on all the tensors defined on the manifold. On the other hand, our definition of metric deformations and deformed manifolds can be straightforwardly related to the standard notion of perturbations since, in principle, it works on a given region D of the deformed space-time (see, for example, [14,15]).
VII. APPENDIX
We can calculate the modified connection $\widetilde\Gamma^c_{ab}$ in many alternative ways. Let us introduce the tetrad $e_A$ and cotetrad $\omega^B$ satisfying the orthogonality relation

$i_{e_A}\,\omega^B = \delta^B_A$  (63)

and the non-integrability condition (anholonomy). The corresponding connection follows. If we deform the metric as in (2), we have two alternative ways to write this expression: either writing the "deformation" of the metric in the space of tetrads, or "deforming" the tetrad field as in the following expression. In the first case, the contribution of the Christoffel symbols, constructed from the metric $\gamma_{AB}$, appears. In the second case, using (64), we can define the new anholonomy objects $\widehat{C}^A_{BC}$.
After some calculations, we obtain the deformed anholonomy objects. As we are assuming a constant metric in tetradic space, the deformed connection $\widehat\Gamma^A_{BC}$ follows; substituting (69) into (70), the final expression of $\widehat\Gamma^A_{BC}$, as a function of $\Omega^A_{BC}$, $\Phi^A_B$, $\Phi^{-1\,D}_{\ \ C}$ and $e^a_G$, is obtained. | 2007-12-03T09:39:58.000Z | 2007-12-03T00:00:00.000 | {
"year": 2007,
"sha1": "76fc0390f6ab862b4967a9d6fd769efb119819cb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0712.0238",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "76fc0390f6ab862b4967a9d6fd769efb119819cb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
262350457 | pes2o/s2orc | v3-fos-license | Research gaps for three main tropical diseases in the People’s Republic of China
This scoping review analyzes the research gaps for three diseases: schistosomiasis japonica, malaria and echinococcosis. Based on available data in P.R. China, we highlight the gaps between control capacity and prevalence levels, and between diagnostic/drug development and population need for treatment at different stages of the national control programmes. After reviewing the literature from 848 original studies and consulting experts in the field, the gaps were identified as follows. Firstly, the malaria research gaps include (i) a deficiency of active testing in the community and the lack of an appropriate technique to evaluate elimination, (ii) a lack of sensitive diagnostic tools for asymptomatic patients, and (iii) a lack of drugs safe for mass administration. Secondly, gaps in schistosomiasis research include (i) incongruent policies in the implementation of the integrated control strategy for schistosomiasis, (ii) a lack of effective tools for Oncomelania sp. snail control, (iii) the lack of a more sensitive and cheaper diagnostic test for large population samples, and (iv) a lack of new drugs in addition to praziquantel. Thirdly, gaps in echinococcosis research include (i) low capacity in field epidemiology studies, (ii) a lack of sanitation improvement studies in epidemic areas, (iii) the lack of a sensitive test for early diagnosis, and (iv) a lack of more effective drugs for short-term treatment. We believe these three diseases can eventually be eliminated in mainland China if all of these research gaps are bridged within a short period of time.
Background
Schistosomiasis, malaria and echinococcosis are three tropical diseases that threaten more than two billion people worldwide [1-3]. These diseases mostly affect poor rural communities in developing countries [4], and those infected with such pathogens not only suffer indisposition but also experience various degrees of morbidity, which leads to poorer standards of living [5-9].
Continuous economic growth over the last 30 years has allowed the government of the People's Republic of China (P.R. China) to continuously increase its budget for parasitic disease control, resulting in a significant reduction in disease burden and transmission capacity throughout the country [10-12]. Furthermore, an effective national strategy has successfully brought the prevalence levels of schistosomiasis japonica and malaria down compared to levels of 50 years ago [13-15]. Echinococcosis has also been controlled to a stable level, according to the five-year national surveillance report made by the Ministry of Health (MOH). Such achievements can be attributed to political commitment, control strategies adapted to the national control programme, and innovative research and research capacity building [16].
The major research achievements of the last 30 years are reflected in progress in drug development, the evaluation of diagnostics within the national control programme, and the formation of a strong team for operational research, providing the required information and tools for the national control programmes. Because these diseases are at various stages of the national control programme, the required sensitivity of diagnosis also differs (see Figure 1). It is believed that, without advances in operational research, it will be an uphill struggle to reach the aim of eliminating schistosomiasis japonica and malaria in P.R. China. Moreover, without a new strategy and technical support, the currently controlled situation for echinococcosis may worsen.
Review
The main purpose of this paper is to summarize and disseminate relevant research findings on the three parasitic diseases, so as to identify research gaps in the existing literature. A consultation study was undertaken through the platform of the Chinese Network on Drug and Diagnostic Innovation (China NDI, www.chinandi.org.cn), focusing on the gaps (i) between research capacity and prevalence, and (ii) between diagnostic/drug R&D capacity and population need for treatment at different stages of the national control programme.
Methods
Our scoping review uses an adapted version of the Arksey and O'Malley (2005) framework [17], involving the following steps. 1. Identification and development of the research questions.
The overarching research questions are as follows: What is the gap between control capacity and prevalence among these three diseases? What is the gap between diagnostic/drug R&D capacity and population needs for treatment at the different stages of the national control programmes?
2. Location, screening and selection of relevant publications. Terms were searched in two databases: the PubMed database (http://www.pubmed.com), together with international articles from the Medline database, for English-language articles, and the Wanfang database (http://www.wanfangdata.com.cn) for Chinese articles. The Cqvip, Cnki and Wanfang databases are the three most popular Chinese periodical databases; their overlap in core journals is more than 95% in the research areas relevant to this scoping review (biotechnology, medicine and health). We selected the Wanfang database after comparing the quality of information obtained by testing key words across the three databases mentioned above (see endnote a).
3. Publication selection
The searching procedure yielded 10,835 abstracts. A team consisting of eight professional researchers was responsible for publication selection. Level 1 relevancy testing went as follows: 1) titles, authors and abstracts were scanned to determine whether they were relevant to the epidemiology, diagnosis or chemotherapy of the three parasitic diseases we focus on in China; 2) the study must have been conducted in China and include at least one Chinese author. After the level 1 relevancy testing, 1677 citations were identified for inclusion. Full articles were then obtained for level 2 relevancy testing. Team members eliminated publications that focused on single case reports or treatments, vaccines, animal models, species validation, surgical treatment or phylogenetic studies. Through this filtering, 860 articles were deemed relevant and selected for inclusion. Of these, 12 were excluded at the data extraction stage because their full text could not be classified into epidemiology, diagnosis or chemotherapy. A total of 848 articles were included in this research, with the review process outlined in Figure 2.
4. Data extraction.
The extracted data were consolidated in a 'data extraction form' using a database programme, which divided the three diseases across three research fields (epidemiology, diagnosis and chemotherapy). We collated a mixture of general information about each study along with specific information relevant to our research, recording the following: author(s), year of publication, location and study site(s); intervention type; populations; aim; methodology; outcome measures; results. These data formed the basis of the analysis. We sought a uniform approach to all 848 studies included in the review; in practice we found that approximately 8% of the included articles did not present all the information we needed.
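A minimal sketch of one row of such a form is shown below; the field names mirror the list above, while the types and defaults are illustrative assumptions rather than the database schema actually used.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    """One row of the data extraction form (field names illustrative)."""
    authors: List[str]
    year: int
    location: str                 # study site(s)
    disease: str                  # malaria / schistosomiasis / echinococcosis
    research_field: str           # epidemiology / diagnosis / chemotherapy
    intervention_type: str
    population: str
    aim: str
    methodology: str
    outcome_measures: List[str] = field(default_factory=list)
    results: str = ""
```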
5. Consultation exercise.
As indicated in the background, this scoping study also included a consultation element. Since 2010, more than 300 scholars have attended our public meetings to discuss drugs, diagnostics innovation and the control of tropical diseases in China. The minutes of these meetings were extracted for gap analysis and considered in comparison with the literature review.
6. Collating, summarizing and reporting the results.
Once the outlined steps were completed, we were able to present our narrative account of the findings. Particular attention was given to a basic numerical analysis of the studies included in the review. Since there was great diversity and/or overlap among reports, we omitted much of the very detailed information in order to make the tables clearer. We hope these tables give readers an intuitive impression of the three diseases in China.
In addition, the team evaluated the full text of each paper, and supplementary information was collated in the corresponding summary tables.
Gaps between research capacity and disease prevalence
After extracting and analyzing the chosen articles, the relevant data were collected and summarized into thematic findings from the full text of the articles (Table 1).
Malaria
Prevalence
In 2011, 4479 malaria cases were reported through the infectious disease reporting system from 782 counties in 27 provinces of P.R. China (out of a total of 2856 counties in 31 provinces); 68.9% of the reported cases were in Yunnan, Anhui, Jiangsu, Henan and Sichuan provinces [18]. According to the Chinese Health Statistical Digest published by the Ministry of Health of P.R. China, the total number of reported malaria cases has declined significantly over the past ten years (Figure 3), yet within about two years imported malaria cases began to dominate and became prevalent throughout the country [19]; imported cases accounted for 66.4% of all malaria cases in 2011.
Gaps in eliminating malaria
When the goal of the national malaria control programme changes from control to elimination, the strategy regarding target populations and control methods needs to be revised accordingly [20,21]. Two gaps were identified in eliminating malaria.
Gap 1: Lack of active surveillance in the public communities.
In the pre-elimination control stage, malaria carriers may be the main source of infection in the remaining epidemic areas [22,23]. In this research, only 5% of the malaria articles were relevant to active surveillance in epidemic areas. This pattern was also apparent in the national malaria survey of 2010: in that year, 7.1 million people were tested for malaria in fever clinics, while the field-survey population was only 0.14 million. Therefore, we recommend that more active surveillance activities be put in place in the future to detect malaria carriers in suspected residual epidemic sites.
Gap 2: There is no appropriate technique to effectively evaluate elimination.
As indigenous malaria cases decreased sharply, the proportion of imported falciparum malaria cases among all malaria cases increased from 1.5% in 2003 to 31.6% in 2011 [10,18,24-29]. However, there are still no published techniques that can accurately distinguish imported cases from local cases [30]; this has also made it difficult to distinguish new cases from recurring ones. In the later stages of malaria elimination, it is therefore difficult to evaluate whether an area has truly eliminated malaria or not. Effective biomarkers urgently need to be developed in order to distinguish imported cases from local cases.
Schistosomiasis japonica
Prevalence
In China, schistosomiasis japonica was epidemic in 12 provinces after liberation in the 1950s. By 1995, five provinces had blocked the transmission of Schistosoma japonicum [31]. However, in 2010 transmission still occurred in provinces along the Yangtze River and the areas to its south, particularly in Hunan, Hubei, Jiangxi, Anhui and Jiangsu, and in the mountainous and hilly regions of Sichuan and Yunnan provinces. From 2004 to 2012, the number of reported acute cases of schistosomiasis japonica declined dramatically from 816 to 13 [32-39] (Figure 4). However, approximately 68 million individuals remain at risk [39]. In this study, 238 articles were judged relevant to the epidemiology of S. japonicum in China. In Table 1, the average survey sizes per article for humans, Oncomelania sp. snails and definitive animal hosts are 192,061, 57,251 and 49,773, respectively. Most of these studies were reported from the 12 provinces mentioned above. To support the large amount of surveillance work needed to collect this information, China has thousands of professional staff responsible for the supervision and control of schistosomiasis japonica. In Hunan province, for example, the Hunan Institute of Parasitic Diseases (HIPD) is an important guarantor of schistosomiasis control work throughout the province; the institute has over 400 professional staff and a hospital equipped with 300 beds for treating patients with schistosomiasis [40].
Gaps in controlling schistosomiasis japonica
Gap 1: Present policy cannot completely ensure the execution of an integrated intervention strategy for schistosomiasis control.
In China, the national control programme for schistosomiasis japonica is now at the transmission control stage. The goal at present is to decrease the schistosomiasis infection rate to 1% by 2015; Chinese authorities have also discussed the possibility of achieving elimination of schistosomiasis japonica by the year 2020. To realize these goals, a four-pronged approach has been pursued since 2008 [41,42]. The objective of the approach is to interrupt the environmental contamination by schistosome eggs as follows: first, replace buffaloes with tractors; second, restrict marshland used for pasturing and encourage fenced cattle-farming; third, improve sanitation facilities in houses; and fourth, provide toilets for mobile populations (e.g. fishermen).
In general, the integrated measures for schistosomiasis japonica control, with an emphasis on controlling the sources of infection, have had strong effects [43-45]. However, in some less developed areas, particularly regions retaining the traditional custom of using buffaloes for farming, considerable resistance to implementation has been encountered because the strategy runs contrary to the interests of the local economy [46-48]. Some marshland is still contaminated by infected cattle, and many residents remain under threat of schistosomiasis japonica [29]. Echinococcosis is also a zoonosis [59]. According to the "Prevention and treatment of echinococcosis Action Plan (2010-2015)", the number of infected livestock in P.R. China amounts to about 50 million every year; echinococcosis is thus one of the main factors driving large numbers of herdsmen into poverty [60].
Gaps in control of echinococcosis
Gap 1: Low capacity for controlling echinococcosis. According to Chinese echinococcosis control policy, it is important to treat every dog in epidemic areas each month to control the main source of infection [61,62]. The strategy is relatively straightforward, but its execution is very hard considering the enormous number of dogs in these pasturing areas, which cover 40% of the land in P.R. China [63-65]. In addition, most residents in these areas lack the corresponding prevention knowledge and are reluctant to change living habits such as feeding sheep viscera to dogs and touching dog fur [66-69].
Gap 2: Manpower restrictions in field surveys.
According to the studies of echinococcosis in the relevant papers, the average survey population per article (6,022) is far smaller than for the other two diseases (192,061 and 110,958). Low population density in epidemic areas, together with manpower restrictions, is the main reason for this situation. It is imperative to train more echinococcosis researchers and improve their remuneration in these epidemic areas.
Gap 3: Not enough support for hygiene and sanitation in high-transmission areas.
In high-transmission regions, hygiene and sanitation conditions are usually unsatisfactory [70,71]. Most of these areas lack access to tap water; households collect water several times a day from contaminated lakes or rivers for personal use, and drink it without boiling [72,73]. In addition, the limited water supply is not sufficient for washing hands [72,73]. These risk factors are hard to change given the living environments and livelihoods, especially in poor domestic economic situations.
Gaps between diagnostic/drug development and population need for treatment
Relevant data have been collected and summarized to identify the research gaps in diagnostic/drug development for the three diseases (Tables 2 and 3).

Gaps in diagnostic development

GICA as a diagnostic tool appears to be a popular area of research and development, as it is easy to use and has a lower cost, but its sensitivity is not satisfactory (88%-95%); we concluded this from [76-78]. Since an enormous number of samples will need to be tested for schistosomiasis elimination in the near future, the ideal diagnostic method should be more sensitive and significantly cheaper than any presently available. Gap 3: Lack of a sensitive test for early diagnosis of echinococcosis.

In China, ultrasonography is the most frequently used test for echinococcosis diagnosis [79,80]. Immunodiagnostic tests are also widely used in fieldwork. The most widely used kits are ELISA-based serological tests, using either an Echinococcus granulosus hydatid cyst fluid antigen or an E. multilocularis crude vesicular fluid for primary screening. The sensitivity for hepatic cases of CE (cystic echinococcosis) ranges from 85% to 98%. For AE (alveolar echinococcosis), purified or recombinant E. multilocularis antigens exhibit high diagnostic sensitivities ranging between 91% and 100%, with overall specificities of 98%-100% [81]. However, for echinococcosis in organs other than the liver, the specificities are less than 50%. DIGFA appears to be the current area of interest for research and development.
In general, more imaging procedures should be used in echinococcosis epidemic areas to diagnose patients. For immunodiagnostic tests, the gap is to develop new serological tests with higher sensitivity and specificity (>95%) than those based on hydatid fluid, and to bring them to a world standard. The resulting test should also be straightforward and cheap to produce, with the capacity to diagnose early-stage echinococcosis and allow for early treatment.
Gaps in drug development
Gap 1: Lack of drugs that are both safe for mass administration and active against the hypnozoites of P. vivax. Antimalarial drugs are essential in moving from control to elimination [82]. An 8-day therapeutic schedule using chloroquine plus primaquine for P. vivax has now replaced the old 21-day and 14-day schedules in China; the cure rate of the 8-day chemotherapy is 96.9%-100%. Artemisinin-based combination therapy is also widely used in China to treat P. falciparum, with cure rates of 82.4%-100%. Still, no new drug has been successfully developed in China in the past 10 years. P. vivax was detected in more than 99% of microscopically diagnosed local malaria cases. A characteristic of P. vivax infections is relapse originating from hypnozoites, so there is a need for a radical cure to remove this source of infection during malaria elimination [83]. New drugs should be developed against hypnozoites and should be safe for patients with G6PD deficiency. Gap 2: The public needs a new drug for prophylactic treatment of schistosomiasis. Praziquantel is the sole drug for treatment and morbidity control of schistosomiasis japonica. Praziquantel is safe, cheap and effective against adult worms; in China, cure rates of 85%-95.3% have been achieved, but complete cures (100%) are seldom obtained. Praziquantel does not prevent reinfection because its effects last only a few hours and it cannot kill immature worms [84]. Artemether was shown in the laboratory to kill immature worms during the first 21 days post-inoculation [85,86], yet there has been no further research to support its application and dissemination in large-scale chemotherapy combined with praziquantel against S. japonicum.
Many animal experiments have been conducted using mefloquine against schistosomiasis [87-89]. The results indicate that mefloquine causes extensive and severe damage to both juvenile and adult S. japonicum harbored in mice. Nevertheless, this is still a long way from field application, and the public need for a new drug in addition to praziquantel remains. Firstly, the new drug is required to be cheap, safe and effective so that it can be used for mass drug administration. Secondly, it should not produce cross-resistance with praziquantel. Thirdly, it should preferably be effective against both juvenile and adult worms. Gap 3: Lack of a more effective drug for short-term treatment of echinococcosis. In China, more than 94% of echinococcosis patients were treated by chemotherapy, with only 6% of patients requiring an operation.
Changing the dosage form of albendazole tablets has been the main achievement in echinococcosis chemotherapy in China [90,91]. Both albendazole emulsion and liposomal albendazole have proved more effective than albendazole tablets against E. granulosus [92-94]. However, the current chemotherapy has shortcomings: it requires a long course of treatment (at least 3 months, in some cases lifelong) and achieves relatively low cure rates (8.2%-74.5%) [94-96]. An adverse-effect rate of more than 20% is another main reason for patients to abandon their course of medication [97,98]. Therefore, a new, more effective drug urgently needs to be developed.
Gaps identified by consultations
The minutes of four consultation meetings were distilled into a list of the most important gaps presented by experts. This list, together with the full text of the included articles, provides further detail for our study (see Table 4).
Discussion
The choice of scoping framework in this article, based on gap identification, draws on our previous experience of scoping studies. The scoping review centered on the identification and development of key research questions; the subsequent steps concerned the location, screening and selection of relevant publications; publication selection; data extraction; and collating, summarizing and reporting the results. The research gaps reported in our study relied on two main sources: first, the literature review, from which we extracted papers and summarized the information in the selected papers; and second, the gaps extracted in consultation. Overall, we have identified, summarized and reported 13 research gaps in this paper. Experts in this study then gave corresponding advice on how to overcome these gaps (see Table 5).
In this research, we endorse a consultation element to enhance a scoping review study. Choosing this route can enhance the veracity of the final gap analysis and also ensure that the results are more practically applicable. For instance, in the consultation meetings, experts presented the idea that P.R. China needs a new technique to evaluate malaria elimination; since P.R. China announced the objective of eliminating malaria only two years ago, few papers mention this gap. Had we used a traditional systematic review, we might not have identified such immediate and forward-looking research gaps. Although such an element may be considered an 'optional extra' for a scoping review, the consultation exercise did indeed provide added value to the literature review. A limitation of this research is the absence of vaccine analysis: none of our team members works in the vaccine area and, being concerned about giving vaccine research a fair evaluation, we did not incorporate it into the themes of the study.
When comparing the research priorities among the three diseases, malaria, schistosomiasis and echinococcosis, regarding diagnostics and drugs, the major differences were found in two areas: the required sensitivity and specificity of diagnosis, and the effectiveness of drug treatment, owing to the different stages of the disease control programmes. For instance, in the pre-elimination stage of malaria, diagnostics of higher sensitivity and specificity are needed both for individual and for population diagnosis [99]. At the same time, the tools used in the certification of malaria elimination also need further development [100]. Innovative drugs or drug combinations are likewise given a higher priority to reduce the risk of drug resistance [101,102]. For the transmission-control stage of schistosomiasis, high-sensitivity tools are necessary to screen the at-risk population, in order to guide MDA or selective chemotherapy according to different endemicity levels and further reduce transmission risks [5,77,78]. New drugs for prophylactic treatment need to be developed that effectively target all stages of the parasite [2]. For the morbidity- or infection-control stage of echinococcosis, it is necessary to develop early diagnostics, for individual and population use [103]; effective drugs allowing short-course chemotherapy also require development [3]. Both early diagnostics and short-course schemes for effective treatment will contribute significantly to reducing the burden of disease [104]. Therefore, this comparative approach provides a clear indication of the diagnostics and drugs needed at the different stages of a control programme.
A major problem has been that investment in the national control or elimination programmes has fallen along with the burden of disease. We suggest that sustained investment in the development of diagnostics and drugs is required. This needs to be well budgeted at all stages, with the argument put to policy makers that marginal effects differ across the stages of a national control programme, whether moving from morbidity or infection control to elimination.
Conclusions
Today, although some areas of research on schistosomiasis japonica, malaria and echinococcosis have been neglected, they are now attracting high levels of concern from many nations, including P.R. China. The government is providing sustained financial and technical support underpinned by key control targets. Thus we believe these three diseases can eventually be controlled or eliminated in mainland China following years of sustained effort.
Endnote a (search strategy): In the Wanfang database, articles were searched by ("disease name"[MeSH Terms] or "disease name"[All Fields]) and ("therapy"[MeSH Terms] or "treatment" or "drug-resistance"[MeSH Terms] or "adverse event"[MeSH Terms] or "diagnosis"[MeSH Terms] or "examination" or "test" or "epidemiology"[MeSH Terms] or "survey" or "surveillance" or "investigation" or "monitoring"), from June 2007 to June 2012.
Additional file
Additional file 1: Multilingual abstracts in the six official working languages of the United Nations.
revised the manuscript and provided intellectual input to the interpretation of the findings. BJ, LLX, CSL, LLH, LPD and NBW conceived the project and carried out data collection and analysis. SZL, ZGX, WPW and WH conceived the project and revised the manuscript. HBZ mainly conceived the project and revised the manuscript. All authors read and approved the final manuscript. | 2017-07-01T17:24:25.934Z | 2013-07-29T00:00:00.000 | {
"year": 2013,
"sha1": "312f70634bec1fa0a606946b2860840d145117dd",
"oa_license": "CCBY",
"oa_url": "https://idpjournal.biomedcentral.com/counter/pdf/10.1186/2049-9957-2-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a7c301fc6dddf1864de701816e73f33f2037478c",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234414898 | pes2o/s2orc | v3-fos-license | Decision making analysis for water distribution improvement projects
The regional water utility company in Jakarta is determined to improve the quality of service to its customers. The water distribution system must meet criteria of quantity, quality, and continuity of flow, and the company continues to make improvements to maintain them. The company's own evaluation showed that it could decrease non-revenue water by only 1% per year against a target of 2% per year. Decision support systems play an increasingly important role in selecting the areas where pipe improvement projects for customers will be carried out. The supporting methods used here are ELECTRE and linear programming (LP). ELECTRE is used in the decision-making analysis to determine the repair areas; based on the ELECTRE results, areas B01, B02, B13, and B20 have the highest priority and can be recommended for improvement projects. LP is then used to select the projects that maximize profit from a limited budget; based on the LP results, 13 of 16 projects were selected under the maintenance budget constraint of 16.7 BIDR.
Introduction
The problem. The water distribution system suffers from problems such as pipe corrosion and pipe leakage. Preventive action is therefore the most appropriate step, and the maintenance history can be used to produce maintenance standards on a regular basis. The purpose of this preventive action is an important objective for future water distribution maintenance planning. At the same time, steps to raise capital for the rehabilitation of the pipeline have become a concern of management. A maintenance management system integrating routine maintenance and capital improvement planning is an important goal for the future water distribution system.
Method and materials
This study applies the ELECTRE method and linear programming to the water maintenance project problem, and presents an approach for budgeting maintenance costs in water distribution networks. The decision-making process starts by choosing the projects that the company will plan for 2020. Rehabilitation and replacement alternatives were evaluated for each pipeline, based on a field study.
Object sample
The sample was selected in West Jakarta. The water utility company has divided the region into 20 permanent service areas. Permanent service areas make it easy to determine the water loss in each area, so that priorities can be set based on the data shown in Table 1 and the area mapping in Figure 1.
Method
The methods used in the decision-making process are ELECTRE and linear programming. ELECTRE is used to select three permanent areas to be analysed in the field. Linear programming is used to choose projects with the objective of maximizing sales. The final results of this method are the selected projects and their costs; its purpose is maximum profit for the company under the budget constraint.
Electre
A decision matrix is built based on the decision-maker's considerations, and the values in the matrix are normalized. A set of categories must then be defined a priori. The definition of a category rests on the fact that all potential actions assigned to it will be treated further in the same way. In the sorting problematic, each action is considered independently from the others in order to determine its category [3]. The best alternative is then the one that dominates the other alternatives.
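The concordance/discordance computation at the core of this comparison can be sketched compactly in Python, as below; the threshold values and the normalization by the global value range are illustrative choices in the spirit of ELECTRE I, not the exact parameters used in this study.

```python
import numpy as np

def electre_outranking(X, w, c_hat=0.6, d_hat=0.4):
    """Minimal ELECTRE I sketch over a normalized decision matrix.

    X : (m alternatives x n criteria), normalized, larger = better
    w : criterion weights summing to 1
    c_hat, d_hat : concordance / discordance thresholds (illustrative)
    """
    m, _ = X.shape
    C = np.zeros((m, m))              # concordance index per ordered pair
    D = np.zeros((m, m))              # discordance index per ordered pair
    span = X.max() - X.min()          # global scale for discordance
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            C[a, b] = w[X[a] >= X[b]].sum()          # weight of criteria where a >= b
            worse = np.maximum(X[b] - X[a], 0.0)     # a's disadvantages vs b
            D[a, b] = worse.max() / span if span > 0 else 0.0
    # a outranks b when concordance is high enough and discordance low enough.
    return (C >= c_hat) & (D <= d_hat)

# Illustrative numbers: 4 alternatives (areas), 3 criteria.
X = np.array([[0.9, 0.4, 0.7],
              [0.6, 0.8, 0.5],
              [0.3, 0.9, 0.6],
              [0.8, 0.5, 0.9]])
w = np.array([0.5, 0.3, 0.2])
out = electre_outranking(X, w)
print("net outranking score:", out.sum(axis=1) - out.sum(axis=0))
```

Alternatives can then be ranked by their net outranking score, with the top-ranked areas recommended for improvement projects.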
Linear Programming
Linear programming is a mathematical method with linear structure that finds a solution by maximizing an objective function subject to a set of constraints [1]. The model here is a binary problem: each variable can only take the value 0 or 1, representing selection or rejection of a project. The objective of the linear programme is to maximize the total potential sales of the selected projects; a sketch of this formulation is given below.
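The binary project-selection problem described here is a 0-1 knapsack: maximize total potential sales subject to the maintenance budget. The sketch below uses SciPy's milp solver (SciPy >= 1.9) with made-up sales and cost figures; the actual per-project data and the 16.7 BIDR budget come from the study's tables.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Illustrative data only: potential sales and capital cost per candidate
# project, both in billion IDR (BIDR).
sales = np.array([1.2, 0.8, 2.1, 1.5, 0.9, 1.7])
cost  = np.array([1.9, 1.1, 3.0, 2.2, 1.4, 2.5])
budget = 6.7

res = milp(
    c=-sales,                                        # milp minimizes, so negate
    constraints=LinearConstraint(cost, ub=budget),   # total cost <= budget
    integrality=np.ones_like(sales),                 # x_i integer ...
    bounds=Bounds(0, 1),                             # ... and in {0, 1}
)
chosen = res.x.round().astype(bool)
print("selected projects:", np.flatnonzero(chosen),
      "total sales:", sales[chosen].sum())
```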
Evaluation of Water System Distribution
The evaluation data reveal the root problems. The data cover the period from January 2018 to August 2019. Figure 2 shows the average supply and sales; the resulting value of NRW (non-revenue water) is 42.1%.
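Non-revenue water here is the standard water-balance ratio, i.e. the share of supplied volume that is never billed; a one-line check with illustrative volumes reproduces the 42.1% figure.

```python
def nrw_percent(supply, sales):
    """Non-revenue water as a share of system input: (supply - sales) / supply."""
    return 100.0 * (supply - sales) / supply

# Illustrative volumes (e.g. m^3/month): 42.1% NRW means only 57.9% of the
# water supplied is actually billed.
print(nrw_percent(supply=1000.0, sales=579.0))   # -> 42.1
```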
Electre
The decision-making process used six criteria: supply, sales, non-revenue water, complaints, number of standard customers, and number of key-account customers, as shown in Table 2. The criteria were used to convert the primary data, and the converted values were processed with the ELECTRE formulas. The primary data are shown in Table 3 and the conversion results in Table 4. The data processing in this research ranks the water service areas, using the ELECTRE formulas, to recommend project areas; the process produces concordance and discordance values for each alternative. Based on the ELECTRE results, areas B01, B02, B13, and B20 have the highest priority and can be recommended for improvement projects. The results are shown in Table 5 and the area mapping in Figure 3. In the results table, one column lists the candidate projects and the third column the capital required for each project. Examination of these results shows that the projects selected by the model are B01-001, B01-002, B01-004, B01-005, B02-001, B13-001, B13-002, B13-003, B20-001, B20-002, B20-003, B20-004, and B20-005.
Conclusion
The decision-making process is not decided by only one participant, but by many, because decision making covers the interests of many stakeholders that must be considered, so that the decisions made can be satisfactory for all of them. In decision making there are many alternative decisions [3]. Decision makers often rely on intuition, so the results of decision making are not always right. Based on the results of the ELECTRE analysis, areas B01, B02, B13, and B20 have the highest priority and can be recommended for improvement projects. Examination of the linear programming results shows that the projects selected by the model are B01-001, B01-002, B01-004, B01-005, B02-001, B13-001, B13-002, B13-003, B20-001, B20-002, B20-003, B20-004, and B20-005. Three permanent areas and thirteen projects were thus planned by the model. | 2021-01-07T09:07:09.752Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "8582b1e7edce6d21f8793d478fdf25c5504ce446",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/1007/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cd803dc45b04c74d8e7bdc5d2329d2a605373efa",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
18823630 | pes2o/s2orc | v3-fos-license | Supporting the Appropriation of ICT: End-user Development in Civil Societies
Introduction
Information and Communication Technology (ICT) has become an important factor in our personal lives as well as in our social organizations: at work, at home, in our hospitals, in the political institutions and in the public media. While in work settings the dynamics of shared business goals, shared task systems, and professional delegation structures result in a relatively predictable and organized design context, the more open-ended and less organized contexts of home or society present considerable challenges for applications of ICT. The goals and interests of the diverse actors in these more general contexts are quite unstable and unpredictable; home and society provide only weak structures of specialization and delegation regarding the use of ICTs. One approach to these challenges is to cede design power to the participating users, so that they can develop solutions that match problems and intentions for action.
There have always been motivations to involve users in the design and development of ICTs. On the one hand, the quality of products might be improved by involving end users in the early phases of design (the "User-Centred Design" tradition); on the other hand, end users have claimed the right to participate in the development of ICTs that affect their (working) environments (e.g., the Scandinavian tradition of "Participatory design"). Beyond these approaches to "change design" by changing design methodologies or other aspects of the setting of professional design work, there have also been approaches to "design for change" by offering technologies and tools that provide the flexibility to be thoroughly modified at use time (Henderson and Kyng, 1991). The latter approaches have been proffered under the labels of 'Tailoring Support' and 'End-User Development' (Sutcliffe and Mehandjiev 2004, Lieberman et al. 2005), and complement earlier research on 'End-User Computing' and Adaptability/Adaptivity.
Active support for technology appropriation
At some point it is no longer sufficient to provide the necessary flexibility for (re-)configuring tools and technologies while in use. It is also necessary to provide stronger support for managing this flexibility. Keeping the tool interaction simple and providing good manuals may be one strategy, but the adaptation and appropriation of tools is often more a social activity than a problem of individual learning and use. Knowledge sharing and delegation structures often develop, although in home and other informal usage settings these structures are likely to be much more spontaneous and less organized than in professional environments. End-User Development methods can address the social aspects of computing by treating users as a '(virtual) community of tool/technology users', and by providing support for different appropriation activities that users can engage in to make use of a technology. Examples of such activities (Pipek 2005) include:
− Basic Technological Support: Building highly flexible systems
− Articulation Support: Support for technology-related articulations (real and online)
− Historicity Support: Visualise appropriation as a process of emerging technologies and usages, e.g. by documenting earlier configuration decisions and providing retrievable storage of configuration and usage descriptions.
− Decision Support: If an agreement is required in a collaborative appropriation activity, provide voting, polling, etc.
− Demonstration Support: Support showing usages from one user (group) to another user (group); provide the necessary communication channels.
− Observation Support: Support the visualisation of (accumulated) information on the use of tools and functions in an organisational context.
− Simulation Support: Show effects of possible usage in an exemplified or actual organisational setting (only makes sense if the necessary computational basis can be established).
− Exploration Support: Combination of simulation with extended support for technology configurations and test bed manipulations; individual vs. collaborative exploration modes.
− Explanation Support: Explain reasons for application behaviour; fully automated support vs. user-user or user-expert communication.
− Delegation Support: Support delegation patterns within configuration activities; provide remote configuration facilities.
− (Re-)Design Support: Feedback to designers on the appropriation processes.
These are support ideas derived from the observation of activities that users perform to make use of a technology. They have been partially addressed in earlier research, for example by providing flexibility through component-based approaches (Morch et al., 2004), or by offering sandboxes for tool exploration (Wulf & Golombek, 2001). Pipek (2005) also gave the example of 'Use Discourse Environments' as one possibility to support the user community in some of these appropriation activities. These environments tightly integrate communication mechanisms with representations of the technologies under consideration, for instance by integrating discourse processes with the configuration facilities of tools, or by providing easy citations of technologies and configuration settings in online discussion forums. By these means, technology needs and usages become more easily describable by end users, and communication among people sharing a similar use background (typically not the professional tool designer) is eased. However, evaluations of these environments suggest that the problem cannot be solved by offering technological support alone; additional social or organizational measures (establishing/mediating conventions, stimulation of communication) must also be considered to guarantee long-term success.

Supporting 'Virtual Communities of Technology Practice'

The approach to actively support user communities in their appropriation activities promises to alleviate the lack of professional support in home/volunteering settings of ICT usage. It may stimulate the spreading of good practice among users, and it offers a platform to actively deal with conflicts that occur between different stakeholders involved in a shared activity that involves ICT use (e.g., conflicts about visibility of actions and about the configuration of access rights). | 2016-02-01T17:59:50.645Z | 2006-02-08T00:00:00.000 | {
"year": 2006,
"sha1": "5951d37bb6ae46dfb36918869aa665ecee257f74",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.15353/joci.v2i2.2091",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "5951d37bb6ae46dfb36918869aa665ecee257f74",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Engineering",
"Business",
"Computer Science"
]
} |
233489582 | pes2o/s2orc | v3-fos-license | Study on Text Reconstruction of English Textbooks in Chinese Senior High School from the Perspective of Language Images
Text reconstruction of textbooks is a common means of supporting classroom guidance of students' learning and the development of critical thinking. The image configuration of English textbooks in Chinese senior high school has become a new hot topic concerning the logical thinking of English language learning and cognitive symbols. English teachers' competence in text reconstruction is an important part of the knowledge structure needed for the cross-cultural cultivation of Chinese senior students' cognitive development. This study aims to: 1) find out how to stimulate students' new interest in classroom language learning through the text reconstruction of English textbooks; 2) investigate both teachers' and students' attitudes towards the text reconstruction of English textbooks published by the People's Education Press (abbreviated as PEP); 3) optimize the logical relationship between language and images in English textbooks in Chinese senior high school, using mixed qualitative and quantitative research methods based on the theoretical foundations of language image theory and multimodal discourse analysis. A total of 200 teachers from China were investigated in this survey study. The conclusions are: 1) language images can be found in English textbooks of senior high school, but a logical system of language images has not yet formed; 2) English teachers need improved training in text reconstruction in order to scientifically and accurately understand and extract the logical relationship between images and texts in English textbooks. Suggestions for English teachers are to: 1) broaden the content of the different knowledge types of language images related to the text theme; 2) pay attention to cultivating students' perception of the image logical system, so as to give them a better understanding of the knowledge structure of the logical level system, cultural connotation and humanistic thinking expressed in the text; 3) pay attention to improving classroom competence in text reconstruction, which can help students understand the deeper meaning between language and images in the process of textbook explanation.
Introduction
The Ministry of Education of the People's Republic of China issued the English Curriculum Standards for Chinese Senior High Schools (2017 Edition) in 2018 [1], focusing on "viewing competence" among the language skills, with "visual literacy" as a key ability that enriches the connotation of core literacy. This guidance helps teachers and students to effectively use the images in English textbooks, and also prompts a new round of critical thinking about the text reconstruction of English textbooks in Chinese senior high school. Text reconstruction of textbooks is a matter of knowledge structure that can focus on the logical relationship between language and image, so as to improve classroom guidance of students' learning and the development of critical thinking. The more effectively English teachers use text reconstruction, the better the knowledge structure and mind mapping students will form. The language images of textbooks, to which English teachers should pay attention in classroom teaching, refer to the images that supplement and support the explanation of the text content in English textbooks. The cultivation of students' English language ability has changed from focusing only on language knowledge and language ability to focusing on value orientation and thinking quality in the knowledge structure between different cultures. This is a turn from language-centered education to competence education centered on human development. From the perspective of the cognitive competence of senior high school students, attention should also be paid to the enlightening complementary mode of images in students' cognitive thinking, which is conducive to the formation of students' cognitive concepts, knowledge system and cross-cultural awareness between different cultures. Based on the view of "text reconstruction" proposed by Pu Zhu [2] and the "language image theory" proposed by Wittgenstein, this study investigates the current situation of English teachers' text reconstruction of English textbooks in Chinese senior high school, and tests attitudes towards the contents of language images, such as language translation or images of Chinese elements, in order to provide suggestions for the future compilation reform of English textbooks. The study also builds on the researcher's earlier discussion with teachers and students on text reconstruction [3].
Research Background
The key to text reconstruction is the creative use of teaching materials, and it embodies the skill of English teachers who carry out the secondary development of the teaching materials in English textbooks. Before the concept of text reconstruction was proposed, many Chinese experts and scholars had already studied the integration, creative use, and secondary development of textbooks, and they have contributed a lot to the domain of English teaching. For example, Hongzhen Yu discussed in detail how to apply secondary development to teaching materials from the aspects of principles, methods, dimensions, strategies and techniques [4]. Jimei Xia elaborated on how teachers can effectively use teaching materials from four perspectives: teaching materials, learning materials, using materials, and researching materials [5]. Xiaotang Cheng mentioned the selection and adjustment of textbooks in The Analysis and Design of English Textbooks [6]. The teaching focus in the English classroom has changed from traditionally "teaching the materials" to "teaching with the materials." It can be seen that text reconstruction is exactly the interpretation and extension of the creative use of English textbooks and will help the cultivation of students' cross-cultural competence.
Foreign experts have not yet systematically studied the concept of text reconstruction, but it can be seen from the related research that they have advocated the creative use of teaching materials since the 1990s. For example, McDonough and Shaw put forward the concepts of external factors and internal factors when discussing the choice and adjustment of English textbooks [7]. Cunningsworth believed that teachers have a wide range of choices and adjustments for textbooks [8]. McGrath (2002) argued that the arrangement of textbooks and students' learning methods should be reconsidered [9]. Tomlinson pointed out that there was a lack of theoretical basis for the framework or steps of the textbook development process [10]. Waters (2006) contended that English reading materials were simple and their content was asymmetrical with students' psychological level, including emotional complexity and cognitive difficulty [11]. In summary, scholars in foreign countries have addressed various aspects of the creative use of textbooks. Among them, the adjustment of methods for using teaching materials, language selection, classroom situation, teaching process, topic and culture in English textbooks has given some guidance to the development of this research.
Main Concepts
Sound research results require a comprehensive understanding of the core concepts. Language image and text reconstruction are the key concepts in this study.
Language Image
"Language image" theory was proposed by Wittgenstein [12]. Wittgenstein holds that language is not a matter of words or sentences, but a basic proposition. A propositional symbol is a fact, proposition is a real image, so language is the image of the world. According to Wittgenstein's linguistic image theory, proposition is the image of reality, and image is a fact; image represents its meaning; proposition is language. With the development of criticism, language image theory of Wittgenstein changed to use theory [13].
Text Reconstruction
Pu Zhu, an English teaching and research expert in Shanghai, China, put forward the concept of text reconstruction in 2010 and pointed out that text reconstruction teaching is also called independent paragraph teaching. It means that in the process of communication, the reader should abandon the habit of taking words and sentences as the basic unit, in order to promote the learning of discourse and context, and put the learned words, phrases, sentence patterns and other scattered knowledge into appropriate paragraphs for overall understanding and application. In addition, Pu Zhu advocates the effective learning of sentences with verbs, based on paragraph and context. That is to say, teachers are required to teach words and sentence patterns in specific contexts, stimulate students' interest in learning, and mobilize students' pragmatic interest, so as to improve the effectiveness of learning. Weibo Mao defines text reconstruction as a teacher's creative adaptation and integration of the original text of the textbook based on the content of the textbook and the level of the students, so as to form new language learning material with a situational and operable context, which can help to improve students' comprehensive language ability [14]. It has also been said that the original intention of images or illustrations in books was to decorate books and increase readers' interest, but images can carry a power beyond what words alone convey. Specifically, the functions of textbook images are mainly reflected in the following three aspects. Firstly, textbook language images can stimulate new interest in language learning. Interest is the best teacher, and inspiring interest is an important task in classroom teaching. Colorful cartoon pictures in textbooks can stimulate students' interest, meet their visual needs, stimulate their curiosity and improve their learning efficiency. Teachers can use images to create teaching situations, introduce teaching contents, and add corresponding games or interactive links to make the course process more flexible. For example, the design of English images whose language expression accords with the logical expression of sports science will improve students' cognitive level when learning about sports in English and promote their interest in physical exercise. Secondly, textbook language images can increase the amount of information. The images in textbooks are highly concentrated; students can acquire rich knowledge through cognitive images and stimulate rich associations from them. Thirdly, images are intuitive. Students can quickly obtain information through cognitive images, and the memory effect of pictures also assists the memory of the relevant text content. Therefore, the study of the images of English textbooks not only presents the language, culture, customs and regional characteristics of different language countries, but also provides students with more specific and intuitive research objects, deepening the cultivation of cross-cultural awareness.
Text Reconstruction Representation
The "text reconstruction" of textbooks mainly refers to the appropriate deletion, adjustment and processing of textbook contents by teachers and students in the process of classroom implementation on the basis of the first development of textbooks, and the reasonable selection and development of other teaching textbooks, so as to make them better adapt to the specific education and teaching situation and students' learning requirements. In the process of teaching, teachers are no longer given the right to impart and deduce the content of textbooks, but to teach them creatively. In addition to text mining, it should also pay attention to the logical expression of image language and the auxiliary effect of text reconstruction in the text reconstruction of English textbooks in Chinese senior high school, which is of great significance to the cultivation of students' intercultural communicative thinking ability in senior high school.
Research Significance
In terms of practical significance, the study can help stimulate students' interest in learning the English language, increase the amount of information carried by language images, and optimize the complementary structure of English textbooks. The colorful cartoon images in the textbooks can stimulate students' interest, meet their visual needs, arouse their curiosity and improve their learning efficiency. Teachers can use illustrations to create teaching situations, introduce teaching contents, and add corresponding games or interactive links to make the course process more flexible. Students can acquire rich knowledge and stimulate rich associations through cognitive images. At the same time, images are intuitive: students can quickly obtain information through them, and the memory effect of images can also assist the memory of the relevant text content. Through the study of the language images in the English textbooks of Chinese senior high school, it is instructive to pay attention to the representation of image language, image color, image modeling, image narrative and image logic in future textbooks. It is also helpful for teachers to reconstruct texts and further optimize teaching materials.
Research Problems
This study mainly explores the following three research problems: 1) What are the characteristics of the language images in the PEP version of the English textbooks of senior high school, and what is the educational value of text reconstruction? 2) What are English teachers' attitudes towards the text reconstruction adopted in the process of English classroom teaching? 3) What are the attitudes towards the content and representation of language images in the English textbooks of senior high school, and what suggestions can be made for text reconstruction?
Research Objectives
The research objectives are as follows: 1) To find out how to stimulate students' new interest through text reconstruction in language teaching, the characteristics of the language images in the PEP version of the English textbooks of senior high school, and the educational value of text reconstruction. 2) To investigate English teachers' attitudes towards the text reconstruction adopted in the process of English classroom teaching. 3) To optimize the content and representation of language images in the English textbooks of senior high school and provide suggestions for text reconstruction.
Research Methods
This study focuses on the characteristics of language images and on teachers' attitudes towards language images and the text reconstruction of the English textbooks published by PEP for Chinese senior high school. The research is designed as a mixed study of qualitative and quantitative methods. It uses content analysis to collect qualitative data from the PEP English textbooks of senior high school, and uses a quantitative survey questionnaire to collect data from English teachers, mainly in Henan Province, China.
To develop a deep understanding of text reconstruction, this study uses the text analysis method, which aims to make a statistical analysis of the images in English textbooks and analyze the logical relationship between the language in the texts and the images, in order to find a better way of achieving text reconstruction. The process of studying the knowledge structure of the graphic logical relationship is divided into three steps. The first step is a macro statistical analysis of the images presented in the English textbooks: the images are sorted and classified into categories such as Chinese elements, foreign elements, society, nature, science and technology, and so on. The second step is to collect and analyze data on the above elements in the teaching materials, in order to find the distribution of images in the texts. The third step is to study the auxiliary role of images, survey English teachers' attitudes towards text reconstruction, and improve the development and utilization of the complementarity of pictures and texts in teaching materials more widely.
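As a minimal sketch of the first, macro-statistical step, the snippet below tallies image-category labels; the labels and counts here are invented for illustration, not figures taken from the PEP textbooks:

```python
# Minimal sketch of the step-1 macro statistics; the per-image labels
# below are hypothetical, not counts from the studied textbooks.
from collections import Counter

image_labels = ["life", "nature", "humanities", "technology",
                "life", "humanities", "others", "life"]

counts = Counter(image_labels)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:12s} {n:3d} images ({100 * n / total:.1f}%)")
```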
In researching teachers' attitudes towards the different images and pictures of the English textbooks when they are involved in text reconstruction, a questionnaire is used to find out English teachers' attitudes towards the different contents of the images in the English textbooks, and especially whether teachers' attitudes change when they begin to pay attention to the application of text reconstruction based on language image theory. In the quantitative research, English teachers' attitudes towards the different cultural elements in the images in the English textbooks are discussed from the following five aspects: teachers' attitudes towards the importance of image-text construction; teachers' general attitudes towards the contents; content forms; the usage of images; and students' attitudes towards textbook images from the teachers' perspective in the English textbooks of Chinese senior high school.
Research Hypotheses
Eight hypotheses have been set for the purposes of the study, as follows: H1: There are systematic language images in the English textbooks of Chinese senior high school.
H2: There is no significant positive correlation between teachers' gender and their attention to language images and text reconstruction in English textbooks.
H3: There is no significant positive correlation between teachers' age and their attention to language images and text reconstruction in English textbooks.
H4: There is no significant positive correlation between teachers' professional ranking and their attention to language images and text reconstruction in English textbooks.
H5: There is no significant positive correlation between the cities teachers come from and their attention to language images and text reconstruction in English textbooks.
H6: There is no significant positive correlation between the grade a teacher teaches and their attention to language images and text reconstruction in English textbooks.
H7: There is no significant positive correlation between teachers' length of teaching and their attention to language images and text reconstruction in English textbooks.
H8: There is no significant positive correlation between the textbook version teachers use and their attention to language images and text reconstruction in English textbooks.
Data Collection Method
This study analyzes the characteristics of the language images in the English textbooks of Chinese senior high school, and investigates English teachers' attitudes towards language images and text reconstruction in their classroom teaching. The language images are grouped into "life", "nature", "humanities", "technology" and "others" categories for statistical analysis. 200 questionnaires on language images and text reconstruction in the English textbooks of senior high school, completed by English teachers from different cities of China, are discussed. Based on the content of the language images in the English textbooks of senior high school and the attitudes of English teachers towards language images and text reconstruction, this study puts forward some suggestions on how to further improve the text reconstruction of language images in the English textbooks of senior high school.
The methods used to collect data are as follows: to define the concepts of language image and text reconstruction in the English textbooks of senior high school; to design the teacher questionnaire survey for data collection and carry out the quantitative research on English teachers' attitudes towards language images and text reconstruction in the English textbooks of senior high school; and to collect and analyze the data. The design of the teachers' questionnaire includes two parts: the first is a basic information survey of the participants, including gender, age group, professional ranking, school, grade, and length of teaching; the second consists of 16 single-choice questions designed using a 5-level Likert scale, where "1" refers to "strongly disagree", "2" refers to "disagree", "3" refers to "not sure", "4" refers to "agree", and "5" refers to "strongly agree". The scale of the questionnaire includes the following 5 items: Item 1: English teachers' cognition of the importance of paying attention to the logical relationship between images and texts in English textbooks.
Item 2: English teachers' cognition of the relationship between images and texts in English textbooks.
Item 3: English teachers' cognition of the distribution of the relationship between images and texts in English textbooks.
Item 4: English teachers' cognition of the handling of the relationship between images and texts in English classroom teaching.
Item 5: English teachers' attitude towards students' preference for a scientific and reasonable relationship between images and texts.
Data Analysis and Discussion
In this part, the characteristics and functions of the images in the textbooks are investigated on the basis of the data analysis. The total number of images in the five volumes of the English textbook published by PEP was counted and sorted in two cognitive stages. The first is the visual cognition stage, which classifies and describes the images of the English textbooks at the level of the statistical data. The second is the sense cognition stage, which makes a variety of comparative analyses of the images in this set of textbooks [15]. Therefore, this study aims to: 1) find out how to stimulate students' new interest in language learning through the text reconstruction of English textbooks; 2) investigate both teachers' and students' attitudes towards the text reconstruction of English textbooks in Chinese senior high school; 3) optimize the logical relationship between language and images in English textbooks using mixed qualitative and quantitative methods based on the theoretical foundations of language image theory and multimodal discourse analysis. The specific analysis is shown in the following table:
The Classification of Language Images in English Textbooks
In the process of summarizing image language, there are many images that can match and express the theme content, but the content of these images lacks system and logical connotation. In volume 1, unit 1 "Earthquakes" of the English textbooks for Chinese senior high school, almost all of the content describes the earthquake and compares the situation before and after it, lacking humanistic care, rescue activities, life education, image guidance and educational content; for the relief scene, there is only one language-image presentation of the Tangshan earthquake rescue. In fact, scenes of the Wenchuan earthquake rescue could be added; supplying such material or personal experience is more likely to deepen students' understanding of the text and draw more attention to self-rescue in earthquake disasters in China. It is also more likely to evoke an emotional resonance of reverence for life and stimulate the motivation to study solutions to natural disasters for all human beings in the world. In addition, there is a lack of description of the history of earthquake science, its development and its representative technologies. For example, the seismograph invented by Heng Zhang, a famous ancient Chinese scientist, was the first step in ancient earthquake exploration and early warning in Chinese history. It would be helpful to add, both vertically and horizontally, the image logic of the scientific development of Chinese earthquake research and technology history in the textbooks.
The Content Distribution of Language Images in English Textbooks
The language images in this set of textbooks are generally appropriate, but the image logic is still scattered and lacks system. At the basic level there are more "life" images and fewer "science and technology" images, which indicates that in the English textbooks the content relating to international scientific and technological development is insufficient and cannot keep pace with the development of society; at the same time, there are more "humanistic" images and fewer "natural" images, indicating an imbalance between the "humanity" and "nature" images in the English textbooks. The two elements "life" and "humanistic" are very important for constructing students' knowledge structure, but the textbooks should pay attention to the balance of educational materials for a universal view of human beings and nature. The importance of images should receive attention just as the importance of language learning does. The images of English culture in foreign countries are well represented, but their logical expression mostly lies at the primary communication level of greetings among people, rather than rising to the advanced level of harmonious coexistence between man and nature. It would be helpful to form an image logical system combining the cognitive and psychological characteristics of students in the process of cultural expression between foreign countries and China; therefore, the richness of the image content needs to be improved, and language images can be reconstructed in the English textbooks from a cognitive perspective. In order to find out the related representation characteristics and distribution proportions, the content of the different types of language images related to the theme should be broadened, and attention should be paid to the cultivation of students' sensory awareness, so that they have a better understanding of the degree of language learning, logical system, cultural connotation and humanistic thought expressed in the textbooks. On this basis, this study accepts hypothesis 1, "There are uneven language images in English textbooks of senior high school".
Reliability Statistics and Validity Analysis of Questionnaire
The following data collection is based on English teachers who participated in practical teaching and a real questionnaire survey. This study uses SPSS 26.0 for data analysis. Through the analysis, it examines the presentation characteristics of the images, the logical relationship between images and texts, and the cognition of teachers and students regarding the distribution of the picture-text relationship in the textbooks. Reliability analysis is used to study whether the data are true and reliable, that is, whether the research sample has answered the questions consistently (questionnaire P6). The commonly used measure is Cronbach's alpha, which should generally be greater than 0.7. It can be seen from Table 3 that Cronbach's alpha is 0.934, greater than 0.7, so the reliability of the scale is high. Validity analysis is used to determine whether the research questions effectively express the conceptual information of the research variables or dimensions, that is, whether the design of the research questions is reasonable. The Kaiser-Meyer-Olkin measure of sampling adequacy is 0.901, greater than 0.7, so the scale has high validity. In order to better understand the situation of English teachers who use English textbooks for text reconstruction in Chinese senior high school, this study distributed questionnaires to teachers online in different cities and schools. The questionnaires were anonymous.
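As a minimal illustration of the reliability statistic reported above, the sketch below computes Cronbach's alpha for a hypothetical Likert score matrix; the respondents, items and scores are invented for illustration and are not the study's data:

```python
# Cronbach's alpha for an (n_respondents, n_items) matrix of Likert
# scores; the data below are hypothetical illustration values.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

data = np.array([[5, 4, 5, 5],
                 [4, 4, 4, 5],
                 [3, 3, 4, 3],
                 [5, 5, 5, 4],
                 [2, 3, 2, 3],
                 [4, 4, 5, 4]])

print(f"Cronbach's alpha = {cronbach_alpha(data):.3f}")  # > 0.7 is acceptable
```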
Correlation Analysis of Basic Information and Attention (Total Score)
According to the survey data, 77.5% of the respondents were female teachers, and 74.0% of the teachers came from Henan. The teachers were mainly between 31 and 40 years old, indicating that most of them are young and middle-aged teachers. Young and middle-aged teachers usually have a wide range of knowledge, strong plasticity, a certain affinity, and strong sensitivity and attention to teaching; they are more likely to complete text reconstruction using diverse methods. From the data collected, most of the English textbooks used are from PEP, accounting for 48.5%. The teachers have certain teaching experience and are distributed across three different grades, among whom 53.5% have middle or senior professional rankings. Therefore, the teachers were serious when filling in the questionnaire, and the survey results are rigorous.
It can be seen from Figure 1 that the total scores of the 200 senior high school English teachers are roughly normally distributed, so Pearson correlation analysis can be used. As can be seen from Table 6, the Pearson correlation coefficient is 0.115 (< 0.3) and sig. is 0.104 (> 0.05), so there is no significant positive correlation between gender and the total score; the study therefore accepts hypothesis 2. As can be seen from Table 7, the Pearson correlation coefficient is -0.052 (< 0.3, judged as uncorrelated) and sig. is 0.465 (> 0.05), so there is no significant positive correlation between age and the total score; hypothesis 3 is accepted. As can be seen from Table 8, the Pearson correlation coefficient is -0.089 (< 0.3, judged as uncorrelated) and sig. is 0.212 (> 0.05), so there is no significant positive correlation between professional ranking and the total score; hypothesis 4 is accepted. As can be seen from Table 9, the Pearson correlation coefficient is 0.022 (< 0.3, judged as uncorrelated) and sig. is 0.755 (> 0.05), so there is no significant positive correlation between school and the total score; hypothesis 5 is accepted. As can be seen from Table 10, the Pearson correlation coefficient is -0.085 (< 0.3, judged as uncorrelated) and sig. is 0.230 (> 0.05), so there is no significant positive correlation between grade and the total score; hypothesis 6 is accepted. As can be seen from Table 11, the Pearson correlation coefficient is -0.049 (< 0.3, judged as uncorrelated) and sig. is 0.487 (> 0.05), so there is no significant positive correlation between the length of teaching and the total score; hypothesis 7 is accepted. As can be seen from Table 12, the Pearson correlation coefficient is 0.083 (< 0.3, judged as uncorrelated) and sig. is 0.245 (> 0.05), so there is no significant positive correlation between the textbook version teachers use and the total score; hypothesis 8 is accepted.
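The decision rule applied in each of these tests can be sketched as follows; the data below are randomly generated stand-ins, not the survey responses, and only the |r| < 0.3 and sig. > 0.05 thresholds mirror the analysis above:

```python
# Pearson correlation with the paper's decision rule, on hypothetical data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
gender = rng.integers(0, 2, size=200)       # hypothetical 0/1 coding
total_score = rng.normal(72, 8, size=200)   # hypothetical questionnaire totals

r, p = pearsonr(gender, total_score)
print(f"r = {r:.3f}, sig. = {p:.3f}")
if abs(r) < 0.3 or p > 0.05:
    print("No significant positive correlation -> hypothesis accepted.")
```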
Part 1, Part 2, Part 3, Part 4 and Part 5 in Table 13 respectively refer to the importance of a reasonable graphic relationship, teachers' cognition of the graphic relationship in the textbooks, teachers' cognition of the distribution of the graphic relationship in the specific contents of the textbooks, teachers' handling of the graphic relationship in teaching, and students' attitude towards a scientific and reasonable graphic relationship from the perspective of teachers. It can be seen from the table that the correlation coefficient values between each of the five factors are greater than 0.3, indicating that they are related to each other; sig. is 0.000, indicating that P < 0.05 and that there is a linear correlation.
The next part of the questionnaire is a descriptive analysis of English teachers' attitudes towards language images and text reconstruction in English textbooks of senior high school. For the first group of questions, on the importance of a reasonable graphic relationship, most respondents agreed; the mean value was 4.67 and the standard deviation was 0.568. The average value of these three questions is close to 5, which indicates that teachers pay considerable attention to the importance of a reasonable relationship between images and texts; the standard deviation is relatively close to 0, which indicates a low degree of dispersion and a concentrated attitude on this issue. Table 15 shows teachers' cognition of the relationship between images and texts in English textbooks. As for question 12, 148 people (74.0%) agreed very much and 49 people (24.5%) agreed; the mean value was 4.73 and the standard deviation was 0.480. For question 13, 142 people (71.0%) agreed very much and 53 people (26.5%) agreed; the mean value was 4.68 and the standard deviation was 0.538. As for question 14, 116 people (58.0%) agreed very much and 64 people (32%) agreed; the mean value was 4.44 and the standard deviation was 0.787. The average value of these three questions is close to 5, which indicates that teachers have a certain cognition of the relationship between images and texts in English textbooks; the standard deviation is relatively close to 0, indicating low dispersion and relatively concentrated cognition. For the questions on the distribution of the graphic relationship in the specific contents of the textbooks, 24.5% of respondents expressed agreement on the last item; the average is 4.38 and the standard deviation is 0.824. The average value of these four questions is close to 5, which indicates that teachers have a certain cognition of the relationship between images and texts in the specific content of the English textbooks; the standard deviation is relatively close to 0, indicating low dispersion and relatively concentrated cognition. For the questions on teachers' handling of the graphic relationship in teaching, 33% of respondents expressed agreement on the last item; the average is 4.57 and the standard deviation is 0.631. The average value of these four questions is close to 5, which indicates that teachers attach importance to handling the relationship between pictures and texts in teaching; the standard deviation is relatively close to 0, indicating low dispersion and a concentrated approach to this problem. Table 18 shows students' attitudes towards a scientific and reasonable graphic relationship from the perspective of teachers. As for question 11, 138 people (69%) agreed very much and 52 people (26%) agreed; the mean value was 4.64 and the standard deviation was 0.595. As for question 23, 155 people (77.5%) agreed very much and 43 people (21.5%) agreed; the mean value was 4.77 and the standard deviation was 0.448. The average value of these two questions is close to 5, which indicates that, from the perspective of teachers, students regard a scientific and reasonable graphic relationship as important; the standard deviation is relatively close to 0, indicating low dispersion.
Main Findings and Conclusion
The above discussion shows that the images presented in English textbooks are a very important teaching resource and play an irreplaceable role in text reconstruction. Neither teachers nor students should ignore the role of images in English textbooks and English classroom teaching, nor underestimate the role of language images in the text reconstruction of English teaching.
The main findings are as follows: 1) English teachers' own knowledge foundation for text reconstruction is insufficient, and they need to enlarge their knowledge structure; 2) English teachers are not aware of the importance of images in English textbooks; 3) English teachers have not developed the habit of reading images when preparing for class; 4) English teachers are not aware of the importance of text reconstruction for students' knowledge structure and cognitive competence.
At the same time, some problems can be seen in students' responses to the use of images in classroom learning: 1) students lack a certain knowledge structure regarding the logical relationship between language and images; 2) students lack autonomous learning; 3) students lack effective methods for reading pictures and their deeper connotations.
Suggestions for English teachers are to: 1) broaden the content of the different knowledge types of language images related to the text theme; 2) pay attention to cultivating students' perception and cognition, so that they gain a better understanding of the cognitive level system, cultural connotation and humanistic thinking expressed in the text; 3) pay attention to improving classroom competence in text reconstruction, which can help students understand the deeper meaning between language and images in the process of textbook explanation.
From the data analysis, it is found that the textbooks lack some vivid images matching the text; however, according to the needs of the text and through some analysis, a lot of implicit and explicit knowledge and information can be obtained from the images to help students understand the text, acquire skills or knowledge, and gain some enlightenment. If teachers pay attention to guiding students to analyze and understand the images during text reconstruction, this will play a great auxiliary role in understanding the text and improving language learning. Although there are still many problems in the image configuration of the selected textbooks, if teachers can carefully analyze and flexibly use the textbook images, the classroom atmosphere can become more active and classroom teaching efficiency can be improved.
The limitations of this study lie in the following two aspects: 1) the English teachers surveyed are not enough to fully represent the overall situation of all English teachers in senior high schools; 2) some teachers may not have answered the questions according to their real teaching practice. In view of the above limitations, this study will make further improvements: 1) broaden the scope of the respondents; 2) conduct in-depth interviews with respondents to understand their attitudes in depth. | 2021-05-04T12:44:27.677Z | 2021-03-26T00:00:00.000 | {
"year": 2021,
"sha1": "68e75d8214473018e4753cf1c6ee960af9fb82f7",
"oa_license": "CCBY",
"oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.edu.20211002.12.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "68e75d8214473018e4753cf1c6ee960af9fb82f7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
18648197 | pes2o/s2orc | v3-fos-license | Stabilization of a Wind Farm Using Static VAR Compensator (SVC) Based Fuzzy Logic Controller
In this article, a Static VAR Compensator (SVC) has been used to improve the transient stability and power oscillation damping of a wind farm connected to a power system. Different fault types and different fault durations were considered in order to investigate the effect of the SVC based on an FLC on system stability. Different locations are considered for the SVC in the studied system. The proposed controller provides the wind farm system with a damping effect during transient conditions and a much smoother and quicker response after fault clearance. The proportional plus integral (PI) controller is used for the comparative study.
INTRODUCTION
Increasing energy consumption, the deficit in fossil fuels and growing energy demand have turned the world's attention towards renewable energy, in the hope that it may provide some of its energy needs. Wind energy has become one of the mainstream power sources in many countries all over the world. Besides being consumer- and environment-friendly, it requires a shorter construction time. Due to technical development it has become one of the most competitive sources of renewable energy. However, wind power has some disadvantages. For instance, most wind-powered generators are induction generators (IGs), which absorb reactive power during normal operating conditions. This may cause low voltage and dynamic instability in the power system they are connected to. There are two major types of IG that are used very widely: the first is the squirrel cage induction generator and the second is the doubly fed induction generator (DFIG) [1][2][3][4][5].
The DFIG is a 'special' variable speed induction generator that is widely used in modern large wind turbine generators [6]. It is one of the most important generators for wind energy conversion systems; both grid-connected and stand-alone operation are feasible. The most important advantages of variable speed wind turbines compared with conventional constant speed systems are the improved dynamic behavior, resulting in reduced drive-train mechanical stress and electrical power fluctuation, and increased captured power [7]. Many studies in system stability propose flexible AC transmission system (FACTS) devices as an effective method to improve system stability. FACTS controllers such as the TCSC, SVC, STATCOM, SSSC, UPFC, IPFC and HPFC can enhance different power system performance parameters such as the voltage profile, damping of oscillations, loadability, reduction of active and reactive power losses, subsynchronous resonance (SSR) problems, transient stability, and dynamic performance [8]. A wind energy system (WES) can be effectively stabilized with a DFIG system or a STATCOM system, as described in [9]. Using STATCOMs and SVCs significantly increases system stability, i.e., SVC and STATCOM devices increase the bus voltages, power limits, line powers, and loading capability of the network [10].
The static VAR compensator (SVC) is a shunt-type FACTS device that uses power electronics to regulate voltage, control power flow and improve transient stability in power systems. The SVC regulates the voltage at its terminals by controlling the amount of reactive power injected into or absorbed from the power system [11]. It has been widely used in power systems for voltage regulation, dynamic stability enhancement and power factor correction [11]-[15]. Many techniques have been used in the control of SVCs, such as fuzzy logic control (FLC) [16]-[17], neural networks [5] and neuro-fuzzy logic [18].
FLC is one of the best and most successful techniques among expert control strategies, and is well known as an important tool to control non-linear, complex, and ill-defined systems. Fuzzy set theory provides effective control based on the knowledge and technical experience of operators, and the establishment of intelligent control on this basis has been found to be favored in industry [5].
WIND TURBINE DOUBLY FED INDUCTION GENERATOR
The wind turbine (WT) with DFIG system is an induction-type generator in which the stator windings are directly connected to the three-phase grid and the rotor windings are connected to the grid through three-phase back-to-back pulse width modulation (PWM) converters. The back-to-back PWM converter includes three parts: the rotor-side converter (RSC), the grid-side converter (GSC) and the DC-link capacitor placed between the two converters. Its controller includes three parts: the rotor-side converter controller, the grid-side converter controller and the wind turbine controller, as shown in Fig. (1), in which the grid-side converter and rotor-side converter are controlled independently of each other [4]. The main idea is that the rotor-side converter controls the active and reactive power by controlling the rotor current components, while the grid-side converter controls the DC-link voltage and ensures converter operation at unity power factor (zero reactive power). Depending on the operating conditions of the rotor, power is fed into or out of the rotor. In an over-synchronous condition, power flows from the rotor via the converter to the grid, whereas power flows in the opposite direction in a sub-synchronous condition. In both cases, the stator feeds power into the grid [19].
Wind Turbine Model
The wind turbine is characterized, as in [20], by non-dimensional curves of the power coefficient C_p as a function of both the tip speed ratio λ and the blade pitch angle β. The tip speed ratio λ is the ratio of the linear speed at the tip of the blades to the speed of the wind. It can be expressed as follows:

λ = Ω R / V_w (1)

where R is the WT rotor radius, Ω is the mechanical angular velocity of the WT rotor and V_w is the wind velocity. For the wind turbine used in the study, equation (2) approximates C_p as a function of λ and β. The mechanical torque of the wind turbine, T_m, can be calculated using equation (3):

T_m = ρ A C_p V_w^3 / (2 Ω) (3)

where ρ is the air density and A is the area swept by the blades.

Figure 1. Structure of DFIG wind power generation system
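As a minimal numeric illustration of this model — with an assumed rotor radius, rotor speed and air density, and, since the paper's exact C_p approximation of equation (2) is not given here, a commonly used empirical stand-in for C_p(λ, β) — the following sketch evaluates the tip speed ratio and the mechanical torque:

```python
# Sketch of equations (1) and (3); the rotor data and the C_p(lambda, beta)
# approximation are illustrative assumptions, not the paper's values.
import numpy as np

def tip_speed_ratio(omega, radius, v_wind):
    """lambda = Omega * R / V_w  (equation (1))."""
    return omega * radius / v_wind

def power_coefficient(lam, beta):
    """A common empirical C_p form, used here as an assumed stand-in."""
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return (0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i)
            + 0.0068 * lam)

def mechanical_torque(rho, area, cp, v_wind, omega):
    """T_m = rho * A * C_p * V_w^3 / (2 * Omega)  (equation (3))."""
    return 0.5 * rho * area * cp * v_wind**3 / omega

R, Omega, V_w, beta = 35.0, 2.2, 8.0, 0.0  # assumed radius [m], speed [rad/s], wind [m/s], pitch [deg]
lam = tip_speed_ratio(Omega, R, V_w)
cp = power_coefficient(lam, beta)
Tm = mechanical_torque(1.225, np.pi * R**2, cp, V_w, Omega)
print(f"lambda = {lam:.2f}, Cp = {cp:.3f}, Tm = {Tm / 1e3:.1f} kNm")
```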
DFIG Mathematical Model [19,21]
The voltage and magnetic flux of the stator can be written as in equations (4-7).
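A standard synchronously rotating dq-frame form of these stator equations — an assumed reconstruction, since the exact notation of equations (4)-(7) is not shown here — reads:

```latex
% Standard dq-frame stator voltage and flux equations (assumed form;
% the original equations (4)-(7) are not reproduced in the text).
\begin{align}
v_{ds} &= R_s i_{ds} + \frac{d\psi_{ds}}{dt} - \omega_s \psi_{qs} \\
v_{qs} &= R_s i_{qs} + \frac{d\psi_{qs}}{dt} + \omega_s \psi_{ds} \\
\psi_{ds} &= L_s i_{ds} + L_m i_{dr} \\
\psi_{qs} &= L_s i_{qs} + L_m i_{qr}
\end{align}
```

where R_s is the stator resistance, L_s and L_m are the stator and magnetizing inductances, and ω_s is the synchronous angular frequency.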
STATIC VAR COMPENSATOR (SVC)
Static VAR compensators, commonly known as SVCs, provide an excellent source of rapidly controllable reactive compensation for dynamic voltage control through their use of thyristor-switched/controlled reactive devices, which give faster control over the bus voltage but require more sophisticated controllers than mechanically switched conventional devices. SVCs are shunt-connected FACTS devices capable of generating or absorbing reactive power by controlling the output capacitive or inductive current. Fig. (2) shows the SVC configurations: the Thyristor Controlled Reactor (TCR), the Thyristor Switched Reactor (TSR) and the Thyristor Switched Capacitor (TSC), or a combination of all three in parallel. The TCR uses firing angle control to continuously increase or decrease the inductive current, whereas in the TSR the connected inductors are switched in and out stepwise, with no continuous control of the firing angle [22]. One of the major reasons for installing an SVC is to improve dynamic voltage control and thus increase system loadability and provide damping of system oscillations [23]. The SVC also increases power transfer during low-voltage conditions while a fault is on the system by decreasing generator acceleration, and vice versa when the fault is cleared; it thus reduces the adverse impact of the fault on the generator's ability to maintain synchronism. The SVCs in use nowadays are of the variable susceptance type [16]. Fig. (3) shows the schematic diagram of the SVC control system simulated in MATLAB. The control system consists of: a measurement system for measuring the positive-sequence voltage to be controlled; a voltage regulator that uses the voltage error (the difference between the measured voltage V_m and the reference voltage V_ref) to determine the SVC susceptance B needed to keep the system voltage constant; a distribution unit that determines the TSCs (and eventually TSRs) that must be switched in and out, and computes the firing angle α of the TCRs; and a synchronizing system using a phase-locked loop (PLL) synchronized on the secondary voltages and a pulse generator that sends appropriate pulses to the thyristors [11]. The SVC can be operated in two different modes: in voltage regulation mode and in VAR control mode (in which the SVC susceptance is kept constant) [22].
SVC Mathematical Model [17]
The TCR consists of an inductance coil L connected in series with two anti-parallel thyristors. Through variation of the firing angle α, the amplitude of the fundamental reactive current can be controlled as follows:

I_1(α) = (V / (ωL)) · (2(π − α) + sin 2α) / π

where V is the amplitude of the voltage, ω is the angular frequency of the voltage and L is the inductance of the reactor. When the firing angle α is 90°, the TCR is in full conduction and the current reaches its maximum. When the firing angle α is 180°, the TCR is disconnected and the current reaches zero. By associating a fixed capacitor with the TCR (Fig. 4), the resulting SVC susceptance B_SVC is given by:

B_SVC(α) = B_C − B_L(α), with B_L(α) = (2(π − α) + sin 2α) / (πωL)

The TSC consists of a fixed capacitor switched on and off by bidirectional thyristors connected in series, so its current varies from 0 to I_max. The capacitor is connected to a coil in order to avoid resonance with the supply network. The current that flows through the capacitor is proportional to the fundamental voltage V.
The combination of both TCR and TSC provides good dynamic compensation; the combined reactance is given by the parallel combination of the TCR and TSC branches.
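A minimal numeric sketch of the FC-TCR susceptance control described above follows; the system frequency, reactor and capacitor values are illustrative assumptions, while B_L(α) is the standard TCR firing-angle relation:

```python
# FC-TCR net susceptance vs. firing angle; L, C and the 50 Hz system
# frequency are assumed illustration values.
import numpy as np

OMEGA = 2 * np.pi * 50    # assumed 50 Hz system [rad/s]
L, C = 0.05, 200e-6       # assumed reactor [H] and capacitor [F]

def b_tcr(alpha):
    """TCR susceptance for firing angle alpha in [pi/2, pi] rad."""
    return (2 * (np.pi - alpha) + np.sin(2 * alpha)) / (np.pi * OMEGA * L)

def b_svc(alpha):
    """Net FC-TCR susceptance: fixed capacitor minus controlled reactor."""
    return OMEGA * C - b_tcr(alpha)

for deg in (90, 120, 150, 180):
    a = np.deg2rad(deg)
    print(f"alpha = {deg:3d} deg -> B_SVC = {b_svc(a) * 1e3:8.3f} mS")
```

At α = 90° the TCR conducts fully (maximum inductive susceptance), and at α = 180° it is blocked, matching the behaviour described above.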
SVC V-I Characteristics
When the SVC is operated in voltage regulation mode, the SVC voltage varies between V_min and V_max, as shown in Fig. (4). The aim is to maintain the voltage at a desired constant value; the calculation of the slope gives the voltage drop. The characteristic equation of the regulation slope is given, as in [17], by equation (17):

V = V_ref + X_s · I (17)

where X_s is the slope (droop) reactance and I is the SVC current, with inductive current taken as positive. The V-I characteristic and operating region of the SVC are described by three regions, as in [22]: 1) Regulation zone: this zone is governed by equation (18). 2) Zone of under-voltage: in this zone the SVC behaves like a pure capacitor.
3) Zone of over-voltage: in this zone, the SVC behaves like a pure inductance.
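The three zones can be summarized by a small piecewise model, as in the sketch below; the slope and the susceptance limits are illustrative per-unit assumptions:

```python
# Piecewise SVC V-I characteristic: regulation line V = Vref + Xs * I
# (inductive current positive), clipped to the pure-capacitor and
# pure-inductor zones. All parameter values are assumed, in per unit.
V_REF, XS = 1.0, 0.03   # set point and regulation slope [pu]
B_C, B_L = 1.0, 1.0     # capacitive / inductive susceptance limits [pu]

def svc_current(v: float) -> float:
    i = (v - V_REF) / XS            # current on the regulation line
    return min(max(i, -v * B_C),    # under-voltage: pure capacitor
               v * B_L)             # over-voltage: pure inductor

for v in (0.90, 0.98, 1.00, 1.02, 1.10):
    print(f"V = {v:.2f} pu -> I = {svc_current(v):+.3f} pu")
```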
PROPORTIONAL INTEGRAL (PI) SVC CONTROL
Due to its simple structure, easy design and low cost, the PI controller is used in the SVC as the voltage regulator in most industries [11]. The SVC can be operated to provide reactive power control or closed-loop AC voltage control. For closed-loop AC voltage control, the line voltage, as measured at the point of connection, is compared to a reference value and an error signal is produced. This is passed to a PI controller to generate the required susceptance value (B) [24]. The SVC based on a PI controller is shown in Fig. (5).
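A minimal discrete-time sketch of this closed loop is given below; the PI gains, sample time and susceptance limits are assumed values, with the command clamped to the SVC rating:

```python
# PI voltage regulator producing the susceptance command B from the
# voltage error; gains and limits are illustrative assumptions.
class PIVoltageRegulator:
    def __init__(self, kp=0.5, ki=30.0, b_min=-1.0, b_max=1.0):
        self.kp, self.ki = kp, ki
        self.b_min, self.b_max = b_min, b_max
        self.integral = 0.0

    def step(self, v_ref: float, v_meas: float, dt: float) -> float:
        error = v_ref - v_meas                  # voltage error
        self.integral += error * dt             # integral action
        b = self.kp * error + self.ki * self.integral
        return min(max(b, self.b_min), self.b_max)  # clamp to rating

reg = PIVoltageRegulator()
for v_meas in (0.95, 0.96, 0.98, 1.00):         # sampled bus voltage [pu]
    print(f"B = {reg.step(1.0, v_meas, dt=1e-3):+.4f} pu")
```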
FUZZY LOGIC CONTROL
Fuzzy logic has attracted considerable attention as a novel computational system because of the variety of advantages it offers over conventional computational systems [11]. Fuzzy logic controllers have been successfully applied to control nonlinear dynamic systems [12,13], especially in the field of adaptive control making use of on-line training. Unlike other classical control methods, such controllers are model-free controllers, i.e., they do not require an exact mathematical model of the controlled system. Moreover, rapidity and robustness are their most profound and interesting properties in comparison with other classical schemes [25]. One problem in the design of an SVC for good performance is the tuning of the PI controller, which may not be achieved in a simplistic manner. The fuzzy controller is a nonlinear and robust control method that is based on expert knowledge, with no need for an accurate model of the system. There are two main types of Fuzzy Logic Controllers (FLCs): Mamdani's type and Takagi-Sugeno (T-S) [24].
Fuzzy Logic Principles to Control an SVC
Fuzzy logic can be used to develop the control laws, such as the calculation of the susceptance B, or in Power System Stabilizers (PSS). Fuzzy logic enables the formalization of the uncertainty due to the lack of global comprehensive knowledge of a complex nonlinear system. This approach involves three basic steps: fuzzification, the elaboration of the inference rules, and defuzzification. Due to the simplicity of Mamdani's model and its ease of implementation in hardware, Mamdani's type is considered in this paper. Our main objective is to replace the PI regulator with a fuzzy controller in order to determine the susceptance B in the SVC device, as shown in Fig. (6). The input/output membership functions of the FLC are shown in Fig. (7).
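A minimal Mamdani-style sketch of these three steps, applied to computing B from the voltage error, is given below; the membership functions and the three-rule base are illustrative assumptions, not the controller tuned in this paper:

```python
# Mamdani fuzzy regulator: fuzzify the voltage error, fire min-inference
# rules, aggregate with max, and defuzzify by centroid. Membership
# functions and rules are assumed illustration values.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

B_UNIVERSE = np.linspace(-1.0, 1.0, 201)    # output universe for B [pu]

def fuzzy_susceptance(error: float) -> float:
    # 1) Fuzzification of the error (pu): negative / zero / positive.
    neg = tri(error, -0.2, -0.1, 0.0)
    zero = tri(error, -0.1, 0.0, 0.1)
    pos = tri(error, 0.0, 0.1, 0.2)
    # 2) Inference (min) and aggregation (max): low voltage (positive
    #    error) demands capacitive B, high voltage demands inductive B.
    agg = np.maximum.reduce([
        np.minimum(neg, tri(B_UNIVERSE, -1.0, -0.8, 0.0)),
        np.minimum(zero, tri(B_UNIVERSE, -0.4, 0.0, 0.4)),
        np.minimum(pos, tri(B_UNIVERSE, 0.0, 0.8, 1.0)),
    ])
    # 3) Defuzzification: centroid of the aggregated fuzzy set.
    return float(np.sum(agg * B_UNIVERSE) / (np.sum(agg) + 1e-12))

for e in (-0.15, 0.0, 0.05, 0.15):
    print(f"error = {e:+.2f} pu -> B = {fuzzy_susceptance(e):+.3f} pu")
```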
STUDIED SYSTEM DESCRIPTION
The studied power system has a wind farm consisting of six 1.5 MW wind turbines connected to a 25 kV distribution system exporting power to a 120 kV grid through a 30 km, 25 kV feeder. A 2300 V, 2 MVA plant consisting of a motor load (1.68 MW induction motor at 0.93 PF) and a 200 kW resistive load is connected on the same feeder at bus B25. A 1.5 MW load is also connected on the 575 V bus of the wind farm [6,26]. The single-line diagram of this system is illustrated in Fig. (8). An SVC rated at 6 MVAr capacitive and 2.5 MVAr inductive reactive power is connected on the 575 V bus [24], which is taken as the monitoring point of the whole studied wind farm for monitoring the total exported (generated) active power from the wind farm to the grid, the total reactive power absorbed from the grid, the flowing current, and the terminal voltage at B575 of the wind farm. Each wind turbine has a protection system monitoring voltage, current and generator speed. The simulation is carried out at a wind speed of 8 m/s and zero pitch angle. The wind farm must stay connected during the fault, with the voltage at the interconnection point dropping to zero. The simulation model is built using the MATLAB software. The wind farm DFIG parameters are listed in Table 1 [26], and the set parameters of the wind farm protection system are given in Table 2 [27]. As shown in Fig. (10), the results demonstrate that even with a stronger fault the proposed FLC is a very effective control method; the fault is within the limits of the wind farm protection devices, so the farm stays connected to the grid with or without the SVC. The simulation results show that the SVC with either controller supports the wind farm in staying connected to the grid during these severe disturbances, whereas the system without the SVC could not, i.e., the protection devices disconnect the wind farm. The proposed controller provides better damping performance, lower oscillation and faster response, as shown in Fig. (11). Figure 11. Three line to ground fault. Case (4): The effect of different fault durations is studied under a three line to ground fault at bus B25 with 50 ms, 80 ms and 100 ms durations, respectively, with the proposed FLC control system. With increasing fault duration, system oscillation increases and the system response becomes slower. At a fault duration of 100 ms the system could no longer support the wind farm; the system protection trips and disconnects the wind farm from the grid, as shown in Fig. (12).
CONCLUSION
It can be concluded from the simulation results that the SVC with either controller provides better performance, i.e. the SVC supports the system's reactive power. The fuzzy control technique is a robust control method: the FLC provides a smoother, much quicker response and remarkably improves the damping performance of the studied system under severe disturbance conditions. The SVC location greatly affects the performance of the wind farm: when the SVC is close to the wind farm it can support the power system during the transient state, providing much better performance and a quicker response. | 2016-01-29T17:58:53.149Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "438c67add819fe3be2fe4805c076c8c2d84c5ca7",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20150510/AEP4-18103659.pdf",
"oa_status": "GOLD",
"pdf_src": "Grobid",
"pdf_hash": "438c67add819fe3be2fe4805c076c8c2d84c5ca7",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
167800734 | pes2o/s2orc | v3-fos-license | Distinguishing Financially Healthy From Unhealthy SMEs in China
This paper presents empirical evidence on the financial characteristics of Chinese entrepreneurial SMEs, both state-owned and private enterprises, listed on the Chinese stock markets during 2006-2009. Building on the extant literature and using a parametric approach on 359 sample SMEs for the 2006-2009 period, the study examines the financial characteristics embedded in financially healthy and unhealthy Chinese SMEs. The findings suggest that financially healthy Chinese SMEs are stronger in terms of liquidity, profitability, and leverage ratios than financially unhealthy SMEs, with the exception of a few cases in leverage and profitability. These findings help in understanding the distinctive financial characteristics of Chinese SMEs and are useful for policy makers dealing with issues related to financially distressed SMEs in China.
Introduction
SMEs (small- and medium-sized enterprises) have a predominant presence and contribution in most economies across the globe, in both developed and emerging countries. The economic activities of the vast majority of SMEs flow towards big enterprises and other sectors of the economy. It is widely held that no economy can be sustained without a vibrant SME sector to reinforce social well-being and equity. The support of financially healthy SMEs seems even more indispensable for emerging countries like China than for industrialized ones, which motivates this study of the SMEs of the world's fastest-growing emerging economy, China. China has achieved great success in economic development over the past three decades. The National Bureau of Statistics reported that GDP growth reached 8% in 2008 and 9.2% in 2009, with a further increase to 10.3% in 2010 (Chinadaily, 2011). With the start of China's reforms in the late 1970s, SMEs in China began to flourish, as symbolized by the booming township and village enterprises (TVEs) in rural areas. After nearly three decades of development, the number of SMEs in China amounted to 22 million (China Labor Statistical Yearbook, 2005), and the share of SMEs in the total number of enterprises was 99.3% in 2004 (Yu, 2007). Moreover, since 2006 China has become an international hub in attracting foreign investment, both direct and portfolio, and in accumulating currency reserves. In spite of these facts, this achievement was not reflected in the overall performance of the stock markets in Shanghai and Shenzhen. These markets fell sharply in 2001, leading many businesses to collapse for numerous reasons. With the introduction of the first Bankruptcy Law, which came into effect in November 1988, many companies, especially non-listed ones, filed for liquidation or bankruptcy. Many researchers and other stakeholders identified, with academic rigor and insight, the underlying causes of the market meltdown, such as inadequate market transparency, poor government regulation, and the lack of sound and reliable models to support the assessment of a company's financial situation and the identification of potential distress (Altman, Heine, Zhang, & Yen, 2007). These obstacles have a major influence on all types of enterprises in China, whether large or small, private or state-owned enterprises (SOEs).
Hypothesis Development
In the Chinese context, SMEs include state-owned SMEs, urban concentration SMEs, township enterprises, and private and individual enterprises. The majority of SMEs are non-state-owned. SMEs in China are involved in many major economic sectors: industry (including manufacturing, mining, electricity, and the production and supply of fuel gas and domestic water), construction, transportation, the postal service, wholesale and retail sales, and lodging and catering. Enterprises are classified as small or medium in terms of sales and/or total assets as well as the number of employees; the classification criteria can be seen in Table 1. In this study, a financially healthy SME refers to a firm that shows none of the distress criteria such as bond default, bank loan default, delisting, government intervention via special financing, or filing for bankruptcy and liquidation. A firm that does not possess these qualities can thus be classified as financially non-distressed. In this study, the financially healthy qualities of a firm also include positive operating cash flow and profit at the time the sample was taken. The 769 financial statements (balance sheets, statements of comprehensive income, and statements of cash flow) of companies listed on the Shenzhen Stock Exchange between 2006 and 2009 that showed these healthy qualities were used as the sample of financially non-distressed (healthy) SMEs. Similarly, companies that had defaulted on bonds and loans, had sought financial aid through government intervention, or showed negative operating cash flow and low profit margins were used to represent the financially distressed (unhealthy) firms. Of the SMEs on the Chinese stock markets that showed these qualities, a total of 188 financial statements were collected and used in this study. Thus, by categorizing the ratios into liquidity, leverage and profitability ratios, the main hypothesis of the study is adopted as: "There are significant differences in the financial ratios of healthy and unhealthy Chinese SMEs".
Based on this, the related sub-hypotheses applied to the SMEs in China are:
H1: The liquidity of financially healthy Chinese SMEs is higher than that of unhealthy SMEs.
H2: The leverage of healthy Chinese SMEs is less than that of unhealthy SMEs.
H3: The profitability of healthy Chinese SMEs is higher than that of unhealthy SMEs.
Research Methodology
The study focuses on the small- and medium-sized enterprises (SMEs) listed on the SME Board of the Shenzhen Stock Exchange. The total number of companies listed on the SME Board on 3 March 2011 was 564. Of these, 359 companies were selected as the study sample, using secondary, publicly available on-line resources such as published financial statements. The sample comprises about 64 percent of the SME Board listings in Shenzhen, which is fairly representative. A total of 957 financial statements of the selected Chinese SMEs covering 2006-2009 (4 years) were collected and used to determine the differences between financially unhealthy and healthy firms as represented on the Shenzhen Stock Exchange. The research employed parametric tests (dependent paired-sample and independent-samples t-tests) in the Statistical Package for the Social Sciences (SPSS) during data analysis. The validity of the study is limited by the reliability of the financial ratios collected from the on-line financial statements of the listed SMEs. The study analyzed numerous financial ratios capable of differentiating financially unhealthy firms from healthy firms, using three significance levels: 0.05, 0.01, and 0.001. An illustrative version of this test procedure is sketched below.
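As a rough illustration of the procedure, the following sketch runs the same kind of independent-samples t-test in Python (the paper used SPSS); the data here are synthetic stand-ins, not the study's figures.

```python
# Sketch of the paper's parametric comparison on synthetic stand-in data:
# an independent-samples t-test flagged at the three significance levels.
import numpy as np
from scipy import stats

def compare(healthy, unhealthy, alphas=(0.05, 0.01, 0.001)):
    # Pooled-variance test, so df = n1 + n2 - 2 (= 955 for 769 + 188 statements).
    t, p = stats.ttest_ind(healthy, unhealthy, equal_var=True)
    stars = "*" * sum(p < a for a in alphas) or "NS"
    return t, p, stars

rng = np.random.default_rng(0)
cacl_healthy = rng.normal(2.8, 1.5, 769)    # hypothetical CACL values, healthy group
cacl_unhealthy = rng.normal(2.2, 1.4, 188)  # hypothetical CACL values, unhealthy group
t, p, sig = compare(cacl_healthy, cacl_unhealthy)
print(f"CACL: t(955) = {t:.3f}, p = {p:.4f} [{sig}]")
```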
Table 2. Variable definitions

Liquidity measures (liquidity refers to how quickly and cheaply an asset can be converted into cash, i.e. the ability of current assets to meet current liabilities when due):
1. CACL, current assets to current liabilities ratio (unit: times): the amount of cash, accounts receivable, bills, inventory and other current assets divided by current liabilities.
2. WCTA, working capital to total assets ratio (unit: per cent): current assets less current liabilities as a percentage of total assets.
3. CFCL, cash flow to current liabilities ratio (unit: per cent): net total cash flow as a percentage of current liabilities.

Leverage measures (leverage, also known as gearing or levering, refers to the use of debt to supplement investment, i.e. the degree to which a business uses borrowed money):
4. LLTA, long-term liabilities to total assets ratio (unit: per cent): long-term liabilities as a percentage of total assets.
5. TLTA, total liabilities to total assets ratio (unit: per cent): short-term plus long-term liabilities as a percentage of total assets.
6. DE, debt to equity ratio (unit: times): the amount of debt divided by equity.

Profitability measures (profitability refers to a firm's ability to generate net income on a consistent basis):
7. TITA, total income to total assets ratio (unit: per cent): total core and other income as a percentage of total assets.
8. INTEBIT, interest expense to earnings before interest and tax ratio (unit: per cent): interest expense as a percentage of earnings before interest and tax.
9. EBITTA, earnings before interest and tax to total assets ratio (unit: per cent): all earnings before interest and tax expenses as a percentage of total assets.
10. EAITTA, earnings after interest and tax to total assets ratio (unit: per cent): all earnings after interest and tax expenses as a percentage of total assets.

The two groups of SMEs, healthy (769 statements) and unhealthy (188 statements), together yield 957 sets of financial statements. As defined in Table 2, a total of 10 independent variables were selected based on (1) those most commonly used in previous studies and (2) the availability of the data. These variables were divided into three categories according to the stated hypotheses. A sketch of the corresponding ratio calculations is given below.
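For concreteness, a hypothetical helper computing these ten ratios from statement items is sketched below; the field names are invented for the example, and the percentage-based ratios are scaled to match the units in Table 2.

```python
# Hypothetical computation of the study's ten ratios from statement items.
# Field names are invented for this sketch; percentage ratios returned as %.
def financial_ratios(s: dict) -> dict:
    ta, cl = s["total_assets"], s["current_liabilities"]
    ebit = s["earnings_before_interest_tax"]
    return {
        "CACL": s["current_assets"] / cl,                        # times
        "WCTA": 100 * (s["current_assets"] - cl) / ta,           # %
        "CFCL": 100 * s["net_cash_flow"] / cl,                   # %
        "LLTA": 100 * s["long_term_liabilities"] / ta,           # %
        "TLTA": 100 * (cl + s["long_term_liabilities"]) / ta,    # %
        "DE": s["total_debt"] / s["equity"],                     # times
        "TITA": 100 * s["total_income"] / ta,                    # %
        "INTEBIT": 100 * s["interest_expense"] / ebit,           # %
        "EBITTA": 100 * ebit / ta,                               # %
        "EAITTA": 100 * s["earnings_after_interest_tax"] / ta,   # %
    }
```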
Results and Discussion
Starting with some basic characteristics of the sample companies, there is a mixture of ownership types: 79% (283 companies) of the SMEs are private enterprises, while the remaining 21% (76 companies) are state-owned enterprises (SOEs). The majority of these SMEs (90%) were established between 6 and 20 years ago; specifically, 55% were established 6-10 years ago and 36% 11-20 years ago.
The study divides the 10 ratios of healthy and unhealthy firms into the three categories for both groups. The means of the three liquidity and four profitability ratios of healthy firms indicate higher liquidity and profitability than those of unhealthy firms, while unhealthy firms show higher leverage than healthy firms. Although the cash flow to current liabilities ratio (CFCL) of healthy firms suggests higher liquidity (66.94 for healthy firms versus 12.59 for unhealthy firms), its large standard deviation (SD 289.97) means an inferential test is needed; the same holds for the total income to total assets ratio (TITA) (SD 78.53). (Table 3 notes: *** significant at the 0.1% level (0.001); * significant at the 5% level (0.05); NS: not significant.) A variance-aware version of the test is sketched below.
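Given such dispersion, one might also verify the equal-variance assumption before pooling. The following guard is our illustrative addition, not a step reported in the paper (which reports pooled-variance tests with df = 955).

```python
# Check variance equality (Levene) and fall back to Welch's t-test if needed.
from scipy import stats

def guarded_ttest(a, b, alpha=0.05):
    _, p_levene = stats.levene(a, b)               # H0: equal variances
    equal = p_levene >= alpha
    t, p = stats.ttest_ind(a, b, equal_var=equal)  # Welch's test when unequal
    return t, p, ("pooled" if equal else "welch")
```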
The study aims to examine the uniqueness of financially healthy and unhealthy SMEs in China and to identify whether there is any significant difference between them in terms of the selected financial variables, using a parametric method (independent-samples t-test).
Liquidity: Table 3 (column T-test) documents that financially healthy firms had significantly higher liquidity ratios than unhealthy firms, based on the calculated means of CACL (current assets to current liabilities), WCTA (working capital to total assets) and CFCL (cash flow to current liabilities). The parametric t-tests were statistically significant for the CACL ratio, t(955) = 5.862, p < 0.001; for the WCTA ratio, t(955) = 2.539, p < 0.05; and for the CFCL ratio, t(955) = 4.674, p < 0.001. Healthy firms possessed higher current assets relative to current liabilities than unhealthy firms, with net cash flow exceeding 60% of current liabilities. The CACL and CFCL variables showed strongly significant differences between the two groups, while the WCTA variable showed only a weakly significant difference. These findings indicate that the liquidity ratios of financially healthy SMEs are superior to those of financially unhealthy SMEs. Thus, the first hypothesis is fully accepted.
Leverage: In regard to the leverage ratios, the LLTA (long-term liabilities to total assets) ratio did not show a significant difference between the two groups; the parametric t-test likewise indicated non-significance. This may be because firms mainly use short-term rather than long-term liabilities. The ratio of CLTA (current liabilities to total assets) amounted to 32.82% and 42.51% for the healthy and unhealthy SMEs, respectively. However, the TLTA (total liabilities to total assets) ratio showed a significant difference between the healthy and unhealthy groups: t(955) = -6.025, p < 0.001. The mean DE (debt to equity) ratio, again, did not differ significantly between the two groups. Therefore, the second hypothesis is partially accepted.
Profitability: Unlike the liquidity ratios, the profitability ratios showed mixed results regarding significant differences between the healthy and unhealthy SMEs. Both the TITA (total income to total assets) and INTEBIT (interest expense to earnings before interest and tax) ratios failed to reveal any significant difference between the two groups, even though the group means appear to differ. The high standard deviation of 359.24 for unhealthy firms shows that their values were widely scattered. On the other hand, profitability differences between the two groups can still be observed through the EBITTA (earnings before interest and tax to total assets) and EAITTA (earnings after interest and tax to total assets) ratios: EBITTA, t(955) = 4.082, p < 0.001, and EAITTA, t(955) = 8.577, p < 0.001. Thus, the third hypothesis is partially accepted.
The findings provide insight into the financial characteristics of both healthy and unhealthy SMEs and lead to the conclusion that the liquidity and profitability of the financially healthy enterprises were greater than those of the financially distressed (unhealthy) SMEs. Conversely, the unhealthy firms carried higher liabilities, especially current liabilities, than the healthy firms. Given the non-significant results for the TITA and INTEBIT ratios, both healthy and unhealthy firms had similar ratios of total income and interest expense relative to total assets. This is in line with the LLTA and DE ratios, where no statistical significance was found. This leads to the conclusion that the funding of both groups came from a combination of their own funds (equity) and short-term liabilities raised from creditors, suppliers and bank loans. Although these firms were listed on the stock exchange to raise capital from the open market, direct financing still remains a big challenge for most SMEs. Given the condition of listed SMEs, it is even harder to gauge the difficulties that unlisted SMEs generally confront in meeting their financing needs.
The results also support the conclusion that the unhealthy firms faced difficulties in several areas of business, such as the cost of manufacturing goods, distribution costs and the final cost to the consumer, as well as internal administrative and general expenses (the significant difference in the EBITTA ratio). Both groups have to deal with high taxation costs, as indicated by the significance of the EAITTA ratio, in particular during the global economic crisis, when export costs were higher than usual. When taxation costs and fees account for 20% of total business costs even with tax incentives for SMEs (Zhou, Guo, & Lu, 2010), the simple solution would appear to be lower tax rates and increased tax relief, yet this requires the support of the relevant organizations such as government departments and commercial banking institutions.
Conclusion
The differences in the financial ratios between the two groups were tested and confirmed. Both leverage and profitability ratios show only partial distinctiveness between financially healthy and unhealthy SMEs. This is contrary to the expectation that financially healthy companies would outperform financially unhealthy companies on every financial variable tested. Considering the internal financial structure of enterprises within both groups, the financially healthy firms have higher liquidity and profitability than the unhealthy firms, while the unhealthy firms carry higher liabilities, especially current liabilities, than the healthy firms. It is important to note that the long-term liabilities to total assets ratio (LLTA) of the non-distressed and distressed firms does not differ significantly, which implies that both groups finance their business not with long-term liabilities but largely with short-term liabilities and equity. Therefore, the statistical findings support the main hypothesis only partially, not completely.
This result contradicts previous research, such as that of Davidson and Dutia (1991), who found that distressed firms had a high proportion of long-term liabilities. This might reflect a distinctive trait of Chinese enterprises, which show a strong tendency towards self-financing. Both healthy and unhealthy enterprises tended to take on short-term rather than long-term liabilities. Still, current liabilities and equity enabled the financially distressed SMEs to continue business operations with a degree of profitability. In other countries, unhealthy firms would normally carry a high proportion of both long-term liabilities to total assets (for example, over 100 percent of total assets in the case of Thai distressed SMEs) and total liabilities to total assets (over 300 percent of total assets for Thai distressed SMEs; Terdpaopong, 2009; Terdpaopong & Mihret, 2011). Interestingly, this is not the case in China. The results make it possible to discern the distinguishing financial characteristics of healthy and unhealthy SMEs in China in terms of the percentage of long-term liabilities to total assets, highlighting the unique financial profile of Chinese SMEs.
This study has some limitations in terms of the number and types of independent variables. All of the variables are financial ratios derived from the financial statements of the sample firms. A wider range of variables, including non-financial data such as age of the business, education of business owners or managers, changes of auditors, and other qualitative details, could help researchers detect the signs of financial problems more effectively. Several areas require further academic attention, such as establishing a clear and concise definition of financially distressed SMEs for use in academic research, identifying the causes of failure and other difficulties faced by SMEs, detecting indicators of potential future failure, and developing sophisticated econometric models for predicting potential failures. | 2019-05-29T13:11:48.181Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "9e1803470f3ca6ea84840cfcf1b93814a57bd395",
"oa_license": "CCBYNC",
"oa_url": "http://www.davidpublisher.com/Public/uploads/Contribute/5514b4a292a5f.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "37767b5e69b4e1341c8b3923143196ea2baea323",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
39054716 | pes2o/s2orc | v3-fos-license | Management of Spontaneous Rectal Perforation at Up-Coming Medical College Hospital: A Rare Occurrence
Spontaneous perforation of the large bowel, compared with the small bowel, is very uncommon, and spontaneous rectal perforation in particular is a very rare cause of acute abdomen in surgical practice. Its presentation is similar to that of any other cause of peritonitis; a definite pre-operative diagnosis remains a diagnostic dilemma, and emergency surgical intervention is required for the best outcome. We report the case of an 83-year-old male who presented to our Emergency Department, referred from B. P. Koirala Institute of Health Sciences, with features suggestive of peritonitis due to hollow viscus perforation, with free air under the right dome of the diaphragm on a PA chest X-ray. An emergency exploratory laparotomy was performed with a provisional diagnosis of perforated duodenal ulcer, the commonest cause in this region, with appendicular or small bowel perforation with peritonitis as differentials. On exploration, however, no perforation was found in the duodenum, small bowel, appendix or large bowel. Only with considerable difficulty was the rectal perforation located, while removing plaques adherent to the rectal wall. A surgeon should be aware of the possibility of this fatal disease despite its rare incidence. It is very important to recognize this condition at an early stage, because it carries a high mortality if surgical intervention is delayed. Intra-operatively, too, a thorough search should be directed towards the rectum if a perforation is not readily found at other sites. The prognosis is better with early surgical intervention and worst with late intervention. Birat Journal of Health Sciences 2016 1(1): 87-90
INTRODUCTION
Hollow viscus perforation is a very common entity in surgical practice leading to acute abdomen. Rectal perforation usually occurs with a pre-existing pathology such as trauma or iatrogenic injury, e.g. during colonoscopy; spontaneous rectal perforation, however, is extremely rare [1]. There are no specific clinical manifestations of this disease, and, in the absence of distinctive signs and symptoms, it usually progresses to severe peritonitis by the time the diagnosis is made. Hence, a surgeon should be aware of this condition, must maintain a high index of suspicion to diagnose such cases, and early surgical exploration is required to achieve a good outcome and reduce the mortality rate. We report a case of severe peritonitis due to spontaneous rectal perforation that was successfully managed surgically, with the patient discharged with a functioning loop colostomy on the 8th post-operative day.

CASE REPORT

An emergency exploratory laparotomy was performed after initial resuscitation. Intra-operatively, a gush of air and about 1000 ml of feco-purulent, foul-smelling thick fluid were found spread throughout the peritoneal cavity, mainly in the pelvis. A solitary perforation of approximately 0.5 cm diameter with healthy margins was identified in the anterior wall of the rectum, about 20 cm from the anal verge and about 12 cm proximal to the peritoneal reflection. There was no diverticular disease of the colon; the sigmoid colon was loaded with fecal matter. Simple closure of the perforation followed by a diverting transverse loop colostomy was performed after thorough irrigation of the peritoneal cavity. No biopsy was taken from the perforation site for histopathological examination. A tube drain was kept in the pelvis. The abdomen was closed with Vicryl no. 1 as mass closure and the skin with Ethilon 2/0 interrupted sutures.
On-table maturation of the colostomy was performed and a non-adhesive dressing applied. The post-operative period was uneventful. The patient was managed in the ICU for two days, then shifted to the general surgical ward, and discharged on the 8th post-operative day.
The patient was brought back to the Emergency Department after 20 days with pus discharge from the main wound and the drain site, with a normally functioning colostomy. On examination he was found to have a burst abdomen with a cavity in the right paracolic gutter, so a corrugated drain was placed through the right drain site and interrupted sutures were applied to the main wound. He was discharged after two days with advice for daily dressings, a high-protein, high-calorie diet, and care of the colostomy, with regular follow-up in the OPD. At follow-up after two months, the patient was tolerating oral intake well, the colostomy was functioning normally, and the wound was well healed (Figures 3, 4). He was advised to report to the SOPD for closure of the loop colostomy after three months.
DISCUSSION
The incidence of spontaneous perforation of the sigmoid colon or rectum is very low. It occurs in all age groups, the youngest reported case being six years old and the oldest ninety-six years old [2]. The perforation is often associated with raised intra-luminal pressure during defecation acting on a pre-existing pathology such as diverticulitis, colitis, ulceration, malignancy, irradiation or adhesions, or occurs as a consequence of iatrogenic injury or blunt abdominal trauma [1]. Chronic straining due to pre-existing disease causes progressive deepening of the recto-vesical and recto-uterine pouches, leading to thinning of the rectal wall. Various theories have been postulated to explain the mechanism of spontaneous rectal perforation, the most prevailing being:
1. Intramural hematoma formation resulting in dissection and weakening of the rectal wall.
2. Congenital anal dysplasia coexisting with a weakened area of the rectal wall.
3. Progressive deepening of the pouch of Douglas which, in combination with a sudden increase in intra-abdominal pressure, could cause rupture of the rectum [3].

The perforation occurs almost always in the distal part of the colon because of the physiological characteristics of the rectosigmoid, such as the lower water content of the stool, a relatively poor blood supply and high pressure due to the narrowed intra-luminal diameter [4], and it typically involves the anterior wall of the rectum just proximal to the peritoneal reflection at the anti-mesenteric border of the rectosigmoid junction. In most cases it presents with features of diffuse peritonitis, and a chest X-ray shows free air under the right dome of the diaphragm (Figure 1: CXR showing free air under the right dome of the diaphragm). The treatment of this entity requires urgent surgical exploration with closure of the perforation and a proximal diverting colostomy. With delayed treatment, the mortality rate is over 60%.

CONCLUSION

Though spontaneous rectal perforation is very rare, surgeons working at the periphery with limited resources should maintain a high index of suspicion so that the diagnosis is made early and surgical intervention is performed at the earliest for the best outcome. Rectal perforation is one of the important causes of acute abdomen and should always be kept in mind when a patient with chronic constipation presents with severe abdominal pain, even in the absence of overt features of peritonitis. The mortality rate of this condition is very high, although its incidence is very low, and timely intervention can save the life in most cases. | 2017-10-24T05:14:01.813Z | 2017-03-31T00:00:00.000 | {
"year": 2017,
"sha1": "eb0df44521057c269b304a680c617ea048763e78",
"oa_license": "CCBY",
"oa_url": "https://www.nepjol.info/index.php/bjhs/article/download/17108/13912",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "eb0df44521057c269b304a680c617ea048763e78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9317471 | pes2o/s2orc | v3-fos-license | 10 Protein Kinases in the Pathogenesis of Muscle Wasting
The skeletal muscle is a very heterogeneous tissue that is in charge of a broad range of functions such as movement, stability, heat production and cold tolerance. It represents approximately 50% of total body protein and plays a central role in whole-body metabolism (Bassel-Duby & Olson, 2006). In the last two decades, the skeletal muscle, previously considered a mere protein reservoir, has been shown to release cytokines and other humoral factors (Pedersen & Febbraio, 2008). This tissue plays a pivotal role in the overall energy balance. Indeed, it regulates lipid flux, takes up and stores most of plasma glucose, and modulates insulin sensitivity. In this regard, the skeletal muscle likely plays a crucial role in pathological states characterized by peripheral insulin resistance, such as obesity, as also suggested by recent evidence showing the occurrence of a cross-talk between muscle and the adipose tissue (reviewed in Clarke & Henry, 2010).
Introduction
The human body comprises about six hundred different muscles, composed of multinucleated cells organized to form muscle fibers. The myofiber contains many parallel myofibrils, characterized by alternating light (I) and dark (A) bands. The latter are bisected by a dark region (the H zone), while the I bands comprise a dark Z line (Z disk). The interspace between two Z disks is termed the sarcomere, the functional unit of the myofibril. Myofibril number defines the cross-sectional area (CSA) of the myofiber and determines its force-generating capacity. The myofibrillar contractile proteins myosin and actin form the thick and thin filaments, respectively. Muscle myosin consists of two heavy chains (MyHC), endowed with ATPase activity, and two pairs of light chains (MyLC). Seven different genes coding for embryonic, neonatal and adult MyHC isoforms have been described in humans. Myosin is organized in units assembled in a mobile side-by-side complex, where the head of myosin is at the distal tip of the filament and the tail at the center, rendering the thick filaments bipolar. Myosin heads interact with titin, which connects the thick filaments to the Z disk. In the thin filaments, globular actin monomers are arranged in a double-helical conformation, associated with tropomyosin, troponin and nebulin, which regulate the interactions between actin and myosin. Troponin binds Ca2+ released from intracellular stores. In addition to the myofibers, the adult muscle harbors cell populations that display myogenic activity when cocultured with primary myoblasts or in response to muscle injury or Wnt signaling. The existence of distinct subsets of myogenic cells likely suggests that multiple mechanisms may support regeneration in the adult skeletal muscle, although the contribution of these cells to the maintenance or repair of skeletal muscle under physiologic conditions is uncertain, and their therapeutic potential has not been clearly established.
Physiological regulation of skeletal muscle mass
In principle, changes in myofiber number and/or size, the latter better defined by CSA, result in modulations of skeletal muscle mass. However, modifications of myofiber number are rarely seen, while variations of CSA occur frequently. In particular, CSA increases during normal growth or during hypertrophy induced, for example, by exercise, and decreases in conditions of inactivity, injury, disease, or aging.

While muscle hypertrophy reflects an accumulation of contractile proteins, the opposite occurs in skeletal muscle atrophy, where both CSA and the content of contractile proteins are reduced. Along these lines, protein content is the main factor regulating skeletal muscle mass. The amount of protein in a cell is strictly regulated by the balance between synthesis and degradation rates. In the healthy state, protein synthesis and breakdown do not exceed each other, allowing normal protein turnover without modification of skeletal muscle mass. Although modulations of both sides of turnover eventually converge to produce a new steady state, physiologic muscle hypertrophy mainly results from increased rates of protein synthesis, which responds earlier than degradation to the inducing stimuli. By contrast, increased breakdown rates are responsible for protein depletion in many situations characterized by muscle atrophy.

A complex interplay among humoral mediators, such as insulin and IGFs, and amino acids is involved in regulating the rates of intracellular protein synthesis. In this regard, signaling through the insulin/IGF-1 receptor, as well as increased amino acid levels, has been shown to simultaneously stimulate synthesis and inhibit protein catabolism. Protein synthesis induction by classical anabolic signals such as insulin or IGF-1 relies on the activation of a transduction pathway involving phosphoinositide 3-kinase (PI3K), Akt/PKB, mTOR (mammalian Target Of Rapamycin), and p70S6K (p70 ribosomal S6 kinase). As an example, this pathway has been shown to account for the muscle hypertrophy induced by resistance exercise (reviewed in Adamo & Farrar, 2006). The demonstration that the PI3K/Akt/mTOR pathway is crucial to skeletal muscle growth has come from studies reporting that expression of a constitutively active form of Akt in skeletal muscle cells, or its conditional activation in the skeletal muscle of adult rats, results in a hypertrophic phenotype (Rommel et al., 2001; Lai et al., 2004). Similar patterns can be reproduced by administering a mixture of the three branched-chain amino acids (BCAAs: leucine, isoleucine, and valine), or even leucine alone. In addition to providing substrate for the assembly of new proteins, amino acids interfere with different transduction pathways involved in the regulation of mRNA translation (Kadowaki & Kanazawa, 2003). In particular, increased intracellular leucine concentrations have been shown to enhance the rate of translation by activating p70S6K and eIF-4F (eukaryotic Initiation Factor 4F), independently of Akt (Lang and Frost, 2005). The body protein-sparing effect of leucine has been suggested by the observation that nitrogen balance is improved in fasting volunteers treated with leucine alone or with BCAA keto acid analogues (Choudry et al., 2006).
The regulation of protein synthesis exerted by amino acids mainly relies on mTOR, a serine/threonine kinase crucially involved in cell growth. mTOR stimulates protein synthesis through three key regulatory proteins: p70S6K, 4E-BP1 (eukaryotic initiation factor 4E-binding protein 1) and eIF-4G (eukaryotic Initiation Factor 4G). Reduced mTOR-mediated signaling has been reported in the skeletal muscle of fasted rats compared with the fed state. As expected, the levels of phosphorylated 4E-BP1 are also decreased in fasted animals; this results in eIF-4E sequestration, inhibiting the assembly of the initiation complex eIF-4F. By contrast, 4E-BP1 is markedly hyperphosphorylated in the skeletal muscle of rats fed a high-protein diet, promoting the formation of the eIF-4F complex. Moreover, leucine also promotes the phosphorylation of p70S6K (Anthony et al., 2000). Akt activation also induces GSK-3 phosphorylation, resulting in its inactivation. GSK-3 negatively regulates molecules involved in several anabolic processes, and most of its effects are mediated by the PI3K/Akt pathway, in which GSK-3 acts both as a downstream target and as a negative regulator (Hanada et al., 2004). Consistently, non-competitive inhibition of GSK-3, by transfection with a dominant-negative cDNA or by pharmacological compounds, activates the PI3K/Akt pathway, resulting in myotube hypertrophy (Rommel et al., 2001; Van der Velden et al., 2006). In addition, the increased proteolysis observed in muscles isolated from burned rats can be prevented by the addition of GSK-3 inhibitors to the incubation medium (Fang et al., 2005). Other studies have shown that GSK-3 is involved in the pathogenesis of Alzheimer disease, prion diseases, Huntington chorea, and gp120 HIV-related neurotoxicity (Jope, 2003). All these considerations suggest that specific GSK-3 inhibitors may hold therapeutic potential.

As for protein breakdown, it is also highly relevant to muscle homeostasis. Indeed, this process not only accounts for the degradation of damaged proteins as well as of regulatory molecules such as cyclins and their inhibitors, but also plays a crucial role in maintaining the correct cellular size (Waterlow, 1984). The mobilization of muscle protein may have physiological significance when aimed at providing substrates for both gluconeogenesis and the synthesis of acute phase reactants. However, up-regulation of protein degradation exceeding protein synthesis may result in skeletal muscle wasting (see below). From this point of view, the intuitive means to counteract the loss of muscle mass resulting from protein hypercatabolism is to increase protein synthesis. However, in terms of rate equations, protein synthesis is a zero-order process, while degradation of the bulk of cell proteins is a first-order process described by a fractional rate constant. Consequently, under a given set of regulations, the size of the protein pool does not affect the fraction of proteins degraded. This means that, if the breakdown rate constant is higher than physiologic levels, protein loss will occur irrespective of the protein synthesis rate (cf. Costelli and Baccino, 2003).
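The zero-order/first-order argument can be made explicit with a minimal turnover model; this formalization is our illustrative addition, using standard notation rather than anything taken from the cited sources:

\[
\frac{dP}{dt} = k_s - k_d P, \qquad P_{ss} = \frac{k_s}{k_d},
\]

where $P$ is the protein pool, $k_s$ a zero-order synthesis rate (mass per unit time) and $k_d$ a first-order fractional degradation constant (per unit time). Because degradation removes the fixed fraction $k_d$ of whatever pool is present, a persistently elevated $k_d$ lowers the steady-state pool $P_{ss}$ in inverse proportion, and synthesis would have to rise by the same factor merely to hold muscle protein mass constant.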
Mechanisms involved in muscle mass changes
Muscle protein mass is regulated by both anabolic and catabolic signals. In particular, alterations in the balance between the two modulate skeletal muscle size, towards accretion or depletion when anabolic or catabolic pathways prevail, respectively. In this regard, pathological muscle depletion is characterized by a negative nitrogen balance, which results from disruption of the equilibrium between anabolism and catabolism due to reduced synthesis, increased breakdown, or both.

Generally speaking, muscle hypertrophy, whether compensatory or due to working overload, is associated with up-regulation of protein synthesis. As reported above, particularly relevant in this regard are the activation of the PI3K/Akt/mTOR pathway induced by engagement of both insulin and IGF-1 receptors, as well as amino acid availability, BCAAs in particular (see above). In addition, an important role in skeletal muscle enlargement is played by the protein phosphatase calcineurin (Musarò et al., 1999). As an example, rat myoblasts exposed to IGF-1 show a marked hypertrophic response that involves enhanced calcineurin expression and that can be inhibited by the immunosuppressant agent cyclosporin A (Musarò et al., 1999). Similar observations have been made in the whole animal, where muscle hypertrophy induced by functional overload can be prevented by pharmacological inhibition of calcineurin with cyclosporin A or FK506 (Dunn et al., 1999).
While the regulation of protein synthesis is substantially well defined, the mechanisms underlying the activation of cell protein degradation to supraphysiological levels have not been completely elucidated. Intracellular proteolysis in the skeletal muscle is operated by several systems. The lysosomal and the proteasomal ones are able to degrade proteins into amino acids or small peptides. By contrast, both the Ca2+-dependent and the caspase pathways, characterized by a restricted catabolic specificity, only lead to a limited proteolysis of their substrates.

The ATP-ubiquitin-dependent proteasome system is mainly involved in the degradation of damaged or rapid-turnover proteins. Degradation of its substrates mostly requires the covalent attachment of at least four ubiquitin molecules; such a polyubiquitin chain targets the substrate to the 26S proteasome, a large cytosolic proteolytic complex. Both proteasomal activity and substrate ubiquitylation are ATP-dependent processes. About twenty years ago, the ubiquitin-proteasome system was shown to contribute significantly also to bulk protein degradation. This became clear when increased expression of molecules pertaining to this proteolytic system was reported in experimental conditions characterized by muscle wasting, and the more so when two muscle-specific ubiquitin ligases, namely MAFbx/atrogin-1 and MuRF1, were identified (reviewed in Costelli and Baccino, 2003). The former, in particular, is a component of the SCF complex, involved in targeting proteins for proteasomal degradation; this complex is formed by two molecules, SKP-1 (S) and Cullin-1 (C), which may associate with a large series of F-box subunits (F) responsible for substrate specificity (Kipreos and Pagano, 2000). The results reported in the literature show that muscle wasting in several conditions, such as sepsis, denervation, AIDS, diabetes, and cancer, is associated with increased gene expression of both atrogin-1 and MuRF1 (Lecker et al., 2004). While the mechanisms regulating these ubiquitin ligases are not yet completely elucidated, hyperexpression of atrogin-1 has been proposed to depend on reduced signaling through the insulin/IGF-1 anabolic pathway (Sandri et al., 2004; Stitt et al., 2004), while activation of the NF-κB transcription factor, likely cytokine-dependent, seems to drive the increase of MuRF1 mRNA levels (Cai et al., 2004).
The autophagic-lysosomal degradative pathway, relatively non-selective, is mostly responsible for the degradation of long-lived proteins as well as for the disposal of damaged organelles (reviewed in Scott & Klionsky, 1998). Autophagy relies on the sequestration of portions of cytoplasm into double-membrane vesicles (autophagosomes). These fuse with lysosomes, where the autophagic body is lysed, its content broken down, and the resulting degradation products made available for recycling (see Scott and Klionsky, 1998). Autophagy has been described in mammalian cells since the 1960s; however, the underlying molecular mechanisms have been elucidated only in recent years, with the identification of a set of genes named ATG (autophagy-related; Klionsky et al., 2003). Autophagy occurs at a basal rate under normal growth conditions, but it can be markedly enhanced by specific environmental stresses. A crucial role in the regulation of the autophagic rate is played by mTOR (see above). Under nutrient-rich conditions mTOR is active and autophagy is inhibited. By contrast, mTOR is inactivated by nutrient starvation, and autophagic degradation is enhanced (Codogno & Meijer, 2005). The contribution of autophagy to skeletal muscle protein breakdown has been recognized only in recent years, although altered lysosomal function had previously been reported in several myopathies (Bechet et al., 2005). In this regard, the skeletal muscle has been shown to respond to a classical autophagic stimulus such as starvation by increasing the levels of the autophagic marker LC3B-II (Mizushima et al., 2004). These results are consistent with, and further substantiate, previous reports showing that autophagy is the main proteolytic pathway involved in the amino acid-dependent regulation of proteolysis in cultured myotubes (Bechet et al., 2005). Along this line, increased gene expression of cathepsins L or B has been reported in the skeletal muscle of septic or tumor-bearing animals (Deval et al., 2001) as well as in muscle biopsies from lung cancer patients (Jagoe et al., 2002). In addition, skeletal muscle wasting in tumor-bearing rats has been shown to be associated with enhanced activity of lysosomal proteases (Greenbaum and Sutherland, 1983; Tessitore et al., 1993). Consistently, administration of leupeptin, an inhibitor of cysteine proteases, counteracts the loss of muscle mass that occurs in sepsis and in experimental cancer cachexia (Ruff and Secrist, 1984; Tessitore et al., 1994). More recently, ATGs have been shown to be induced in muscle by denervation or fasting through a FoxO3-dependent mechanism (Zhao et al., 2007). In this regard, FoxO3 has been proposed to regulate both autophagy and proteasome-dependent proteolysis (Zhao et al., 2007). However, a sort of hierarchy appears to exist between these two processes, since a parallel study shows that autophagic degradation induced by starvation or FoxO3 overexpression is sufficient to determine muscle depletion even if ubiquitin-proteasome degradation is blocked using pharmacological or genetic approaches (Mammucari et al., 2007).
Quite intriguing is the role of the Ca2+-dependent proteolytic system in the pathogenesis of muscle protein hypercatabolism. Cysteine proteases called calpains, and a physiological inhibitor named calpastatin, are the components of the Ca2+-dependent proteolytic system. Calpains have been involved in processes such as cell proliferation, differentiation, migration, apoptotic death, and gene expression (Suzuki et al., 2004). A number of proteins, among which protein kinase C, Cdk5, Ca2+/calmodulin-dependent protein kinase IV, calcineurin, titin and nebulin, have been proposed as in vivo calpain substrates (reviewed in Suzuki et al., 2004). Because of their restricted specificity, however, calpain action is limited and generally leads to irreversible modifications of the substrates, resulting in modulations of their activity or in increased susceptibility to the action of other degradative pathways (cf. Saido et al., 1994; Williams et al., 1999). Although thiol proteinase inhibitors have been proposed to be ineffective in counteracting muscle protein degradation in experimental cachexia (Temparis et al., 1994; Baracos et al., 1995), other reports have shown that administration of leupeptin is able to protect rats bearing the Yoshida ascites hepatoma AH-130 from muscle wasting, and that Ca2+-dependent proteolysis is activated in the muscles and heart of the AH-130 hosts (Costelli et al., 2001, unpublished observations). Similar observations suggesting the involvement of calpains in the pathogenesis of muscle depletion have been reported in septic rats administered dantrolene, an inhibitor of intracellular Ca2+ release; such treatment prevents muscle wasting as well as the hyperexpression of calpains and of molecules pertaining to the ubiquitin-proteasome system (Williams et al., 1999; Wray et al., 2002). These reports are particularly intriguing since they propose that Ca2+-dependent proteolysis may be a necessary step to allow the release of myofibrillar proteins from the sarcomere, rendering them susceptible to degradation by the ubiquitin-proteasome system. Finally, one report has demonstrated that hyperexpression of calpastatin partially protects mice from unloading-induced muscle atrophy (Tidball & Spencer, 2002).
Similarly to calpains, the caspase system can only operate a partial proteolysis of its substrates. Caspases are a family of cysteine proteases mostly known for their role in the initiation and execution of the apoptotic process. A few years ago, some studies proposed that caspase-3 could also share with calpains the role of trigger of the initial proteolytic step needed to render myofibrillar proteins available for degradation by the proteasome. In this regard, recombinant caspase-3 has been shown to cleave actomyosin complexes, and caspase-3 inhibitors can prevent the accumulation of actin fragments in the skeletal muscles of diabetic or uremic rats (Du et al., 2004). Consistently with these observations, caspase-3 knock-out mice have been shown to be resistant to denervation-induced muscle atrophy (Plant et al., 2009). In addition, myofibrillar proteins damaged by oxidation appear more susceptible to degradation by caspase-3 (Smuder et al., 2010), while a recent study reports that caspase-3 cleaves specific proteasome subunits in myotube cultures, leading to enhanced proteasome enzymatic activity (Wang et al., 2010). Finally, the muscle atrophy that occurs in Duchenne muscular dystrophy or in heart failure has been associated with a reduced number of myonuclei, suggesting that caspases may contribute to muscle depletion also by inducing apoptotic events (Sandri, 2002).
While several lines of evidence support the concept that hypercatabolism is the major cause of muscle protein depletion, the trigger(s) of such enhanced catabolism remain elusive. In this regard, humoral mediators are now widely accepted to play a crucial role. Indeed, altered production/release of classical hormones and cytokines generates a complex network that results in inhibition of anabolic and/or anticatabolic signals, favoring the degradative side of protein turnover. Consistently, the muscle wasting observed in experimental and human cachexia or in aging-associated sarcopenia has been shown to be prevented by insulin administration or by local overexpression of IGF-1 (Tessitore et al., 1994; Musarò et al., 2001; Lundholm et al., 2007). On the other side, circulating glucocorticoids are frequently elevated to supraphysiological levels in several chronic pathologies and have been shown to exert a clear catabolic effect (see Schakman et al., 2009). At least the proteasomal and lysosomal proteolytic systems are susceptible to regulation by the hormonal milieu. Indeed, insulin is one of the most powerful autophagy inhibitors (Pfeifer, 1977), is able to reduce the expression of both ubiquitin and 14-kDa E2 mRNA, and down-regulates proteasome activities (Wang et al., 2006). By contrast, glucocorticoid treatment increases the expression of ubiquitin, 14-kDa E2 and the 20S proteasome subunits in rat skeletal muscle (see Schakman et al., 2009). Muscle wasting and modulations of ubiquitin expression and proteasome activities have also been reported in experimental animals treated with the cytokines TNF or IL-1 (Tisdale, 2008). The relevance of cytokines to the onset of muscle wasting, at least in cancer cachexia, has been demonstrated by studies showing that loss of muscle mass, protein hypercatabolism and ubiquitin hyperexpression can be prevented by administration of antibodies against TNF, IFN or IL-6 (reviewed in Costelli and Baccino, 2003). Consistently with these observations, perturbations in cytokine homeostasis have also been reported in cancer patients, in whom they correlate positively with both disease progression and mortality rate (Attard-Montalto et al., 1998; Nakashima et al., 1998). In addition, proinflammatory cytokines have been shown to contribute to muscle depletion also in non-neoplastic chronic diseases. Indeed, sepsis is characterized by increased circulating levels of TNF, IL-1 and IL-6, which appear correlated with severity and lethality. Similarly, a shift towards the proinflammatory side of the cytokine balance has been reported in patients affected by AIDS (Kedzierska & Crowe, 2001), likely accounting for the muscle protein hypercatabolism that frequently occurred in such patients before the adoption of combined anti-retroviral therapy (HAART; Mangili et al., 2006). Finally, the sarcopenia and the loss of muscle quality that characterize aging are also associated with enhanced levels of proinflammatory mediators (Lee et al., 2007).
In addition to altered protein turnover rates, modulations of the myogenic process have been proposed to contribute to the pathogenesis of muscle wasting. In this regard, one key mediator of muscle depletion, TNF, has been reported to regulate myogenesis with opposite outcomes. A local increase of TNF in cardiotoxin-injured muscle has been shown to promote regeneration (Chen et al., 2005), while a systemic increase of TNF in vivo and elevated concentrations of the cytokine in vitro inhibit skeletal myogenesis (Guttridge et al., 2000; Coletti et al., 2002; 2005). In particular, exposure of C2C12 myotube cultures to TNF leads to down-regulation of both MyoD and myogenin (Guttridge et al., 2000). MyoD also appears down-regulated in a TNF-dependent experimental model of cancer cachexia (Costelli et al., 2005). A different study has shown that TNF induces MyoD degradation through an unusual mechanism involving NF-κB activation (Guttridge et al., 2000), while more recently MyoD has been demonstrated to be a substrate of the ubiquitin ligase atrogin-1 (Tintignac et al., 2005). Down-regulation of myogenesis may also depend on impaired stem cell recruitment. In this regard, deregulation of stem cell number or activation has been shown to result in decreased muscle mass (Nicolas et al., 2005). Moreover, TNF has been proposed to abrogate stem cell function, resulting in delayed or impaired muscle regeneration in mice after injury (Moresi et al., 2008). A compromised regenerative capacity has also been reported in tumor-bearing mice (Coletti et al., 2005; Penna et al., 2010a); such a pattern is associated with hematopoietic stem cell infiltration of the skeletal muscle, quantitatively more prominent in tumor hosts than in controls (Coletti et al., 2005). Muscle atrophy induced in mice by aging or hindlimb suspension has also been associated with loss of muscle precursor cells, resulting in reduced regenerative potential (Mitchell and Pavlath, 2004).
Protein kinases in the pathogenesis of skeletal muscle wasting
A few kinase systems have been implicated in the pathogenesis of muscle atrophy: the one regulated by growth factors such as insulin or IGF-1, the mitogen-activated protein kinases (MAPKs), and the energy sensor AMP-activated protein kinase (AMPK).
Insulin/IGF-1 receptors are endowed with an intrinsic tyrosine kinase activity, that is stimulated by interaction with the specific ligands.After engagement, receptor autophosphorylation allows the recruitment of IRS (insulin receptor substrate) factors.Tyrosine-phosphorylated IRS activates PI3K, producing phosphoinositide-3,4,5triphosphate (PIP3).PIP3 acts on phosphoinositide-dependent kinase 1 (PDK1), which in turn phosphorylates and activates Akt.This kinase is well known for mediating anabolic signals (see above) through the indirect activation of mTOR, that requires the inhibition of TSC (tuberous sclerosis complex).Once phosphorylated, mTOR may participate to two different protein complexes, the Raptor-containing TORC1, sensitive to inhibition by rapamycin, and the Rictor-containing TORC2, which cannot be blocked by rapamycin (reviewed in Schiaffino & Mammucari, 2011).While the latter is required for Akt activation, mTORC1 phosphorylates p70 S6K , stimulating protein synthesis.In addition to TORC1, protein synthesis induction also relies on Akt-dependent GSK3 inhibition, that consequently removes the blockade impinging on the elongation factor eIF-2B.Active Akt also down-regulates protein breakdown by inactivating FoxO factors, thus inhibiting the transcription of the so called 'atrogenes', among which the muscle-specific ubiquitin ligases atrogin-1 and MuRF1 (Sandri et al., 2004;Stitt et al., 2004).FoxO3, in particular, has been proposed to contribute also to the regulation of LC3, an essential actor in the hyperactivation of the autophagic-lysosomal proteolysis (Zhao et al., 2007).Akt activation is influenced by several regulative mechanisms.Indeed, it is inhibited by p70 S6K , through IRS inactivation by phosphorylation of serine residues, while it is induced by mTORC2 (see above).The PI3K/Akt pathway plays a pivotal role in modulating the skeletal muscle mass; indeed, it is upregulated in conditions characterized by muscle hypertrophy, while its disruption results in muscle atrophy (Glass, 2010).Not only, a hypertrophic phenotype occurs when Akt is hyperexpressed in skeletal muscle cells or is conditionally activated in the muscle of adult rats (Rommel et al., 2001;Lai et al., 2004).In addition, a protection against denervation-induced atrophy has been shown in transgenic mice overexpressing Akt (Bodine et al., 2001).Perturbations of the IGF-1 signaling pathway have been reported in both in vitro and in vivo models of muscle atrophy (reviewed in Glass, 2010).Indeed, the levels of active Akt are significantly reduced in C2C12 myotubes exposed to glucocorticoids or nutrient deprivation (Sandri et al., 2004).Decreased activity of the PI3K/Akt pathway has also been shown to occur in muscle wasting induced by denervation (Hornberger et al., 2001), disuse (Sugiura et al., 2005), aging (Clavel et al., 2006) or glucocorticoid treatment (Schakman et al., 2008).By contrast, levels of phosphorylated Akt in the skeletal muscle of tumor-bearing animals are comparable to controls, or even increased (Penna et al., 2010b), although a down-regulation of Akt activation has been reported in patients affected by pancreatic cancer (Schmitt et al., 2007).The maintenance of p-Akt levels in experimental cancer cachexia is particularly intriguing, since circulating IGF-1 and insulin levels are markedly reduced in the tumor-bearing animals (Costelli et al., 2006), and muscle wasting can be prevented by administration of insulin, though not of IGF-1 (Costelli et al., 2006;Tessitore et al., 1994).Akt 
phosphorylation mainly relies on the balance between the activity of PI3K and the phosphatases PTEN and PP2A. In particular, reduced PTEN activation has been observed in the skeletal muscle of fasted animals, likely to counteract Akt downregulation in an attempt to preserve muscle proteins (Hu et al., 2007). However, both phosphatases are comparably expressed in the skeletal muscle of control and tumor-bearing animals. In addition to Akt, other molecules involved in the regulation of protein synthesis, such as eIF2, eIF-4B, and p70S6K, are in an active state in the skeletal muscle of tumor-bearing animals; however, previous results show that the rates of protein synthesis are not increased, but just maintained at control levels (Costelli et al., 2005; Tessitore et al., 1994). Whether this results from the lack of specific amino acids or from the activation/inactivation of other unknown mechanisms is not clear. In this regard, an inhibition of protein synthesis could result from the atrogin-1-dependent degradation of eIF-3F, a scaffold protein that coordinates both mTOR- and p70S6K-mediated translation. Along the same line, protein synthesis has been proposed to be regulated by MuRF-1, independently from the PI3K/Akt pathway (Clarke et al., 2007; Koyama et al., 2008).
Four main MAPKs have been identified in mammals: JNK (1-3) and p38 (isoforms α-δ), activated by stress conditions, and the extracellular signal-regulated kinases ERK1/2 (hereafter referred to as ERK) and ERK5, or big MAPK (Raman et al., 2007). MAPKs are activated by phosphorylation of both threonine and tyrosine residues by MAPK kinases (MKKs) and inactivated by specific phosphatases such as MAPK phosphatase 1 (MKP-1; Raman et al., 2007). MAPKs are recognized as being of crucial importance in the process of myogenesis, although their role in the different steps of new fiber formation and specification still needs to be clarified. As an example, Ras-dependent ERK activation has been shown to lead to MHC-I expression, resulting in slow-fiber-type differentiation (Murgia et al., 2000). These observations, however, are in contrast with different studies reporting that ERK activation inhibits myotube formation (Miyake et al., 2009), while recent reports show that ERK activation is higher in fast- than in slow-twitch muscles (Shi et al., 2007) and that inhibition of MAPK signaling leads to a shift of fast fibers towards the slow-twitch phenotype (Shi et al., 2008). The activation of p38 appears to be required to phosphorylate substrates involved in myogenesis, as well as to induce MHC-IIx expression in myoblasts (Meissner et al., 2007). Indeed, p38 modulates the expression of myogenic regulatory factors (MRFs), such as Myf5, and the activities of transcription factors belonging to the MEF2 and MyoD families. A reciprocal regulation has been proposed to exist between p38 and ERK. While the former inhibits ERK, withdrawing myocytes from the cell cycle and enhancing muscle differentiation, ERK inhibition results in marked activation of p38 (Keren et al., 2006). In this regard, the interaction between these two kinases, likely leading to a defective activation of p38, has been proposed to play a role in the development of rhabdomyosarcoma (Puri et al., 2000). In addition to the reciprocal regulation with ERK, a cross-talk between p38 and JNK also takes place. Initially described in cardiomyocytes, it has now been demonstrated also in the skeletal muscle. In particular, p38 has been shown to antagonize the proliferative signal driven in myoblasts by JNK-dependent cyclin D1 transcription, shifting cells towards differentiation (Perdiguero et al., 2007). Consistently, p38-deficient myoblasts are characterized by prominent JNK phosphorylation, which appears to depend, at least partially, on reduced expression of MKP-1 (Perdiguero et al., 2007). Finally, JNK has also been involved in the activation of caspases in atrophying skeletal muscles (Supinski et al., 2009).
Several situations characterized by muscle wasting, among which aging, type II diabetes, COPD, and inflammatory myopathies, are associated with increased MAPK phosphorylation, p38 in particular (reviewed in Glass, 2010). Activation of p38 stimulates atrophy by enhancing the expression of atrogin-1 and MuRF1 (Li et al., 2005; Romanello et al., 2010). This is also evident from in vitro experiments showing that the increased expression of atrogin-1 and MuRF-1 induced by TNF-α in C2C12 myotubes, as well as the induction of ubiquitin-specific protease-19 by cigarette smoke in L6 cultures, are prevented by p38 inhibitors (Li et al., 2005; Liu et al., 2011). Similarly, atrogin-1 upregulation and muscle mass depletion induced by lipopolysaccharide (LPS) in mice depend on p38 activation; indeed, such effects are inhibited by curcumin administration, which leaves intact the ability of LPS to modulate both NF-κB and Akt activity (Jin & Li, 2007). LPS exerts its bioactivity through Toll-like receptors (TLR), in particular TLR4, expressed on both macrophages and muscle cells. Signaling through this receptor may significantly impinge on muscle protein degradation for multiple reasons: TLR4 engagement leads to p38 and NF-κB activation; this could result in the upregulation of atrogin-1 and MuRF1 in the muscle, directly, or indirectly, through the release of proinflammatory mediators by macrophages. In addition, TLR4 has recently been involved in the activation of autophagy, increasing autophagosome formation by a p38-dependent mechanism (Doyle et al., 2011). Activation of p38 has also been shown to occur in response to mechanical or electrical stimulation and functional overload of the skeletal muscle (Boppart et al., 2001; Huey, 2006; Sakamoto et al., 2003), suggesting that this kinase plays a role in both anabolic and catabolic responses. Among the targets of p38 is MAPKAP kinase 2 (MK2), which appears to be involved in mediating p38 nuclear export (Gorog et al., 2009). MK2 is phosphorylated by p38 at two threonine residues, both necessary for activation (Engel et al., 1995); phosphorylation at T317 allows MK2 export in a complex containing p38 itself (Ben-Levy et al., 1998; Meng et al., 2002). Heat shock protein 27 (HSP27; Stokoe et al., 1992), involved in the regulation of actin filament dynamics, is a substrate of MK2 (Guay et al., 1997). Phosphorylation of HSP27 is increased in skeletal muscle hypertrophy and decreased during atrophy (Huey, 2006; Kawano et al., 2007), while HSP27 overexpression is able to reduce skeletal muscle depletion due to disuse (Dodd et al., 2009). Finally, MK2 expression is also reduced in denervation-induced atrophy (Norrby and Tagerud, 2010). The occurrence of a cross-talk between the MK2/p38 and PI3K/Akt/mTOR pathways has been proposed. In this regard, the MK2/p38 complex exported from the nucleus appears to interact with a cytoplasmic HSP27/Akt complex (Wu et al., 2007). Similarly to Akt, MK2 can also phosphorylate TSC2 and FoxO1, thus impinging on both protein synthesis and catabolism (reviewed in Rosner et al., 2008).
The involvement of ERK in the pathogenesis of skeletal muscle atrophy is quite controversial. ERK inactivation has been shown to result in muscle atrophy in the rat, irrespective of the fiber type (Shi et al., 2009), and to inhibit the hypertrophic response induced in fast muscles by treatment of the animals with β2-adrenergic agonists or IGF-1 (Haddad & Adams, 2004; Shi et al., 2007). In addition, reduced levels of phosphorylated ERK have been demonstrated in age-induced sarcopenia (Carlson et al., 2009). In C2C12 myotubes, ERK inhibition appears to be required to stimulate ubiquitin ligase expression (Shi et al., 2008). Consistently, the ubiquitin overexpression induced in L6 myotubes by glucocorticoids has been shown to depend on the activity of both MEK, the kinase upstream of ERK, and the Sp1 transcription factor (Marinovic et al., 2002). Contrasting observations have been reported, however. As an example, ERK activation in C2C12 cultures has been shown to result in reduced myotube size (Rommel et al., 1999), while its inhibition leads to a hypertrophic phenotype similar to that elicited by IGF-1 (Rommel et al., 1999). Along the same line, the protection exerted by IGF-1 treatment against oxidative stress-induced damage in both C2C12 and L6 myocytes has been proposed to involve ERK activity (Yang et al., 2010). Finally, muscle atrophy due to immobilization by hind-limb suspension has been associated with increased levels of phosphorylated ERK (Kato et al., 2002). In addition to contributing to the modulation of adult skeletal muscle mass, ERK has also been involved in the myogenic process. Indeed, FGF-induced activation of ERK has been shown to enhance the regenerative capacity of human satellite cells isolated from both young and old subjects, while proliferating fusion-competent myoblasts cannot be observed when ERK is inhibited (Carlson et al., 2009). Several humoral factors, such as IGF-1, proinflammatory cytokines, and myostatin, can contribute to ERK activation in the skeletal muscle. In particular, recent observations from our laboratory have shown that the TNF-α-induced reduction in myotube size in C2C12 cultures is associated with ERK activation and increased myostatin expression (Lenk et al., 2009). Similar observations have also been reported in the skeletal muscle of tumor-bearing mice (Penna et al., 2010a). In this regard, myostatin has previously been proposed to activate ERK and to repress differentiation of C2C12 myocytes (Yang et al., 2006), pointing to a causal relationship between myostatin and ERK biological activities.
Increased levels of phosphorylated JNK in the skeletal muscle are characteristically observed in conditions of insulin resistance, such as obesity or type II diabetes (Masharani et al., 2011). JNK activation mediates insulin resistance by enhancing IRS phosphorylation at serine residues, thus inhibiting the transduction of IGF-1/insulin-dependent signals (Masharani et al., 2011). Oxidative stress consequent to lipotoxicity, as well as proinflammatory mediators (cytokines and others, such as homocysteine), derived or not from the adipose tissue, likely participate in activating JNK. The latter, together with the ERK and p38 MAPKs, is also activated in the skeletal muscle after exercise; such activation depends on exercise-induced oxidative stress, being prevented by treatment of healthy volunteers with the antioxidant N-acetylcysteine (Petersen et al., 2011). Both JNK and its preferential substrate c-Jun are activated in the muscle of patients with chronic kidney failure (Verzola et al., 2011). Recent reports have shown that the activation of JNK that characterizes denervation-induced muscle atrophy can be prevented by targeted ablation of the adapter protein TRAF6 (Paul et al., 2010). By contrast, no changes in the levels of phosphorylated JNK have been observed in the skeletal muscle of animals bearing experimental tumors (Penna et al., 2010a). The signal transduction pathway dependent on JNK plays a role in the apoptotic response in several cell systems (Dhanasekaran & Reddy, 2008). In this regard, muscle injury induced by cardiotoxin injection has been shown to be initially associated with JNK activation and perturbations in the Bax/Bcl-2 system, and subsequently with classical signs of apoptotic death such as cytochrome c release from mitochondria, caspase activation, and PARP cleavage (Sinha-Hikim et al., 2007). The mechanisms underlying JNK-dependent apoptosis in cardiotoxin muscle injury are still unclear; however, increased NO production through iNOS induction might be involved (Sinha-Hikim et al., 2007). Consistent with these observations, diaphragm weakness induced by endotoxin treatment has been associated with JNK phosphorylation and caspase 8 activation (Supinski et al., 2009).
Finally, quite recent evidence supports a role for AMPK in the pathogenesis of skeletal muscle wasting. This kinase mainly works as a sensor of the intracellular energetic balance, but it is also involved in the regulation of protein turnover. AMPK is switched on when the energy state of the cell is low; in the skeletal muscle, fiber contraction, which is an energy-dissipating process, also leads to AMPK activation (Mihaylova & Shaw, 2011). The interference exerted by AMPK on protein synthesis is mainly related to its ability to inhibit mTOR signaling (Mihaylova & Shaw, 2011). On the other hand, AMPK has been shown to modulate protein degradation rates. Indeed, administration of AICAR, an AMPK agonist, to mice results in increased levels of phosphorylated AMPK, associated with enhanced atrogin-1 expression through a FoxO-dependent mechanism; such a pattern is inhibited by treating the animals with Compound C, an AMPK inhibitor (Nakashima et al., 2007; Romanello et al., 2010). AMPK activation has also been reported in the skeletal muscle of tumor-bearing animals (Penna et al., 2010a; White et al., 2011), where it is associated with marked alterations of mitochondrial morphology (Penna et al., unpublished observations). The AMPK-dependent pathway links the alterations in the mitochondrial system, including reduced ATP production, with the onset of muscle atrophy. In this regard, energy deficiency could result in AMPK-dependent FoxO activation. Taking into account that FoxO transcription factors have also been involved in the regulation of autophagy, and that this latter process is in charge of the sequestration and degradation of damaged organelles, FoxO activation may contribute to mitochondrial loss, further enhancing the energy imbalance.
Protein kinase inhibitors to prevent skeletal muscle wasting
Muscle wasting is now well accepted to derive from metabolic alterations due to the combined action of several factors that act in a complex network involving different signal transduction pathways. The result of such networking is clearly reflected in muscle protein turnover, ultimately leading to the onset of a protein hypercatabolic state. Muscle wasting in patients affected by chronic diseases, but also in 'healthy' elderly people (sarcopenia of aging), is a highly debilitating condition that markedly impairs quality of life, recovery from illness, and tolerance to therapies. The result is a significant complication in the management of these persons, with important consequences also at the social care level. In this regard, therapeutic approaches aimed at interfering pharmacologically with the onset of tissue wasting need to be pursued. On the basis of results obtained in experimental models, a number of drugs have been proposed to counteract the development of muscle wasting. Among these are protein kinase inhibitors; the rationale for their use stems from the observations reported by several studies demonstrating that protein kinases are of crucial relevance to the activation/inactivation of mechanisms involved in the depletion/preservation of skeletal muscle mass.
The first natural kinase inhibitor, staurosporine, able to block protein kinase C but also many other kinases, was discovered about 25 years ago. Subsequently, the specific p38 inhibitor SB203580 became available, opening the search for heterocyclic "drug-like" structures able to discriminate between different kinases. As an example, SB203580 binds to p38, but not to the closely related JNK (Dar and Shokat, 2011). Up to now, a number of kinase inhibitors have been approved by the FDA (imatinib, gefitinib, sorafenib, erlotinib, sunitinib, dasatinib, nilotinib, lapatinib, pazopanib, PLX4032); however, there are many other small molecules endowed with similar properties. Most of these inhibitors bind in the ATP site, thus preventing kinase activation. The most relevant use for the FDA-approved inhibitors, which mainly work as tyrosine-kinase blockers, is in antineoplastic therapy. In this regard, imatinib, whose main target is the BCR-Abl kinase, was the first to be used in the treatment of chronic myelogenous leukemia, while sorafenib is currently administered to patients affected by renal or hepatocellular carcinoma (Dar and Shokat, 2011).
Only recently have protein kinase inhibitors been proposed as a means to counteract the onset of skeletal muscle wasting. In this regard, we have recently demonstrated that treatment of mice bearing the C26 tumor with the ERK inhibitor PD98059 partially but significantly protects tumor hosts from the onset of body weight loss and muscle mass depletion (Penna et al., 2010a). ERK inhibition also results in the normalization of atrogin-1 hyperexpression, independently of the state of activation of Akt. Among the targets of ERK is the AP-1 transcription factor, which is activated in tumor-bearing animals (Costelli et al., 2005) and may contribute to muscle atrophy, since the latter is improved by inhibiting AP-1 with a dominant-negative c-Jun (TAM67; Moore-Carrasco et al., 2006). AP-1-regulated genes may contribute to muscle depletion; as an example, cyclin D1 expression (Moore-Carrasco et al., 2006) could induce satellite cell proliferation, which is not necessarily followed by differentiation, resulting in impaired myogenesis. The differential expression of specific factors defines the phenotype of satellite cells. In particular, while MyoD can be detected more or less throughout the myogenic process, high levels of Pax7 associated with low myogenin expression characterize proliferating satellite cells; an opposite pattern can be observed in differentiating cells (Halevy et al., 2004). Indeed, previous reports have shown that Pax7 hyperexpression results in inhibition of myogenesis (Olguin & Olwin, 2004). Low rates of myogenesis (satellite cell activation and differentiation) participate in the maintenance of physiological skeletal muscle mass (Nicolas et al., 2005). This is confirmed by observations showing that aging- or hindlimb suspension-induced muscle atrophy is associated with a reduced regenerative potential (Mitchell & Pavlath, 2004). Consistently, Pax7 expression is significantly increased in the muscle of C26 hosts with respect to controls, while myogenin levels are reduced. The pattern of Pax7 and myogenin expression in the C26-bearing mice is compatible with an impaired regenerative process and suggests the possibility that activated satellite cells accumulate in tumor host muscle because of enhanced proliferation, impaired differentiation, or both. Altered expression of myogenic factors has previously been reported in AH-130 hepatoma-bearing rats (Costelli et al., 2005), in cancer patients (Ramamoorthy et al., 2009), and in an experimental model of chronic kidney disease (Zhang et al., 2010). In the latter report, downregulation of IGF-1 signaling appears responsible for the impaired regeneration (Zhang et al., 2010). The results obtained in our laboratory suggest an alternative mechanism based on ERK activation: when the C26 hosts are treated with PD98059, and ERK is thus inhibited, Pax7 and myogenin expression is restored to control values. These observations suggest that ERK activation likely contributes to maintaining satellite cells in an undifferentiated state (Penna et al., 2010a).
Although several reports have shown that p38 is involved in the induction of atrogenes as well as in the hyperactivation of protein degradation in different model systems, there are no studies demonstrating that its pharmacological inhibition in experimental animals may protect the skeletal muscle from wasting. In this regard, we have tested the effectiveness of the p38 inhibitor SB203580 in preventing skeletal muscle depletion in animals implanted with the C26 tumor. After a subcutaneous inoculum of about 10^6 C26 cells, death of the animals occurs in about 15 days; tumor growth is associated with progressive loss of body weight, depletion of both skeletal muscle and adipose tissue mass, as well as with markedly increased circulating levels of IL-6 (Penna et al., 2010a). Control and tumor-bearing mice were treated daily with a subcutaneous injection of 3 or 30 mg/kg initial body weight of SB203580, dissolved in DMSO and then diluted in saline, starting from the day after tumor implantation. No significant differences could be observed in terms of muscle force (evaluated by grasping test; Fig. 1A), food intake (Fig. 1B), or tumor mass (C26: 231±93 mg, C26+SB203580 30 mg/kg: 249±31 mg). By contrast, an increase in body weight occurred in the group of tumor bearers that received SB203580 at the dose of 30 mg/kg when compared to both treated and untreated controls (Fig. 1C). Body weight accretion, however, does not reflect an effect on skeletal muscle mass, which remains close to the values of untreated tumor hosts, while the weight of both liver and spleen is significantly higher in treated than in untreated C26 bearers, possibly reflecting drug toxicity (Fig. 1D). The results obtained by treating the C26 hosts with SB203580 may seem unexpected, since p38 has been shown to be involved in different mechanisms contributing to skeletal muscle depletion (see above). However, when analyzing the state of activation of MAPKs in two different experimental models, namely rats bearing the AH-130 hepatoma and the C26 hosts, only ERK appears phosphorylated (active; Penna et al., 2010a), suggesting that the usual pattern of p38 involvement in muscle wasting likely does not apply to cancer cachexia. Conversely, p38 inhibition could have led to a differential modulation of the mechanisms impinging on cancer-associated muscle wasting, mainly protein hypercatabolism and impaired myogenesis, worsening the latter and improving the former, or vice versa, resulting in a lack of changes in muscle mass. Further work is warranted to clarify this point.
Finally, imatinib mesylate (IM) administration has been shown to improve muscle pathology in mdx mice, an accepted model of Duchenne-type muscle dystrophy. In particular, IM-treated mice show a lesser degree of muscle necrosis, inflammation, and fibrosis than control animals. Such effects appear to depend on the inhibition exerted by the drug on the activation of both c-Abl and PDGFR in both peritoneal macrophages and muscle-resident fibroblasts (Huang et al., 2009). Similar observations have been independently reported by another group (Bizario et al., 2009). Both reports suggest that treatment with IM exerts a marked anti-inflammatory effect, lowering the levels of proinflammatory cytokines. Keeping this in mind, we decided to test the effectiveness of IM in preventing muscle wasting in cancer cachexia, where the role played by the inflammatory state is widely accepted. The above-cited C26 model has been used. IM was administered by daily subcutaneous injection at a dose of 400 mg/kg initial body weight, dissolved in water. Food intake was recorded daily. At day 15, the animals were sacrificed to evaluate the effects exerted by the treatment on body and tissue weight. The data reported in Figure 2 show that IM does not induce any detectable modifications when administered to healthy mice, demonstrating that the drug itself does not exert toxic effects on the animals. The treatment, however, is not able to correct the wasting pattern caused by the growth of the C26 tumor. Indeed, loss of body weight (Fig. 2A), cumulative food intake reduction (Fig. 2B), and mass depletion of gastrocnemius, tibialis anterior, and heart (Fig. 2C) are comparable between treated and untreated tumor hosts. By contrast, spleen hypertrophy, a constant finding in the C26 hosts (Penna et al., unpublished observations; Fig. 1D, 2C), is completely prevented by treatment with IM (Fig. 2C), confirming the anti-inflammatory effect of this drug. Finally, no significant differences could be observed in terms of tumor mass between mice administered IM or vehicle (C26: 325±86 mg, C26+IM: 253±79 mg, not statistically significant).
These results show that, while able to improve the muscle phenotype in mdx mice (Bizario et al., 2009; Huang et al., 2009), IM is ineffective in preventing muscle wasting in tumor-bearing animals, although it likely exerts an anti-inflammatory action, as shown by the protection against spleen hypertrophy. These observations may suggest that the tyrosine kinases blocked by IM are not involved in the pathogenesis of muscle wasting in cancer cachexia. However, the lack of effect could also depend on the different inflammatory situations occurring in the muscle of mdx mice and of tumor-bearing animals. Indeed, while the former is characterized by a marked inflammatory infiltrate associated with an important fibrotic response, these alterations are largely absent in the latter.
Conclusions
Evidence coming from different experimental models has demonstrated the possibility of interfering with the onset of muscle protein hypercatabolism by several means, such as exercise, nutritional, and pharmacological interventions, preferably combined. The growing body of knowledge about the mechanisms underlying the alterations of muscle protein metabolism is highly relevant in this regard. The observation that several experimental results point to kinases as crucially involved in the activation/enhancement of the mechanisms leading to skeletal muscle depletion, in either physiological or pathological states, deserves particular attention. Along this line, the availability of specific kinase inhibitors has opened the way to a direct evaluation of their potential use as therapeutic tools to treat conditions characterized by skeletal muscle wasting. At present, the results available in the literature are very few, although at least some of them appear encouraging. However, a note of caution should be introduced, since inhibiting protein kinases would also impinge on physiologically relevant transduction pathways, making an accurate estimate of the risk/benefit ratio unavoidable.
Such inhibitors could be useful from a clinical point of view in order to correct muscle hypotrophy in wasting diseases such as cancer, HIV, cardiac cachexia, and diabetes, and to interfere with the pathogenic mechanisms of the above-cited neurologic diseases.
Fig. 1. Effect of treatment with SB203580 on cachexia in mice bearing the C26 colon carcinoma.
(A) Voluntary muscle strength, evaluated by dynamometer, expressed in Newtons; (B) food intake, expressed as the mean amount of food (g) consumed by the animals every two days; (C) body weight (g), inclusive of the tumor; (D) muscle and tissue weight, expressed as percentages of controls. Data are represented as means ± SD (where not indicated, SD is within 10% of the means), n = 8 for each experimental group. Significance of the differences: *p<0.05 vs. untreated controls, §p<0.05 vs. untreated C26 hosts.
Fig. 2. Effect of treatment with imatinib mesylate (IM) on cachexia in mice bearing the C26 colon carcinoma.
(A) Body weight (g), inclusive of the tumor; (B) cumulative food intake (g) over the whole experimental period; (C) muscle and tissue weight, expressed as percentages of controls. Data are represented as means ± SD (where not indicated, SD is within 10% of the means), n = 8 for each experimental group. Significance of the differences: *p<0.05 vs. untreated controls, §p<0.05 vs. untreated C26 hosts.
"year": 2012,
"sha1": "4f6ff171efcac110fad5b2d355c51e0c09130795",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/38476",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "548218727bdcd01bfd04abd79de53c8306365e87",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
Robust motion estimation on a low-power multi-core DSP
This paper addresses the efficient implementation of a robust gradient-based optical flow model on a low-power platform based on a multi-core digital signal processor (DSP). The aim of this work is to carry out a feasibility study on the use of these devices in autonomous systems such as robot navigation, biomedical assistance, or tracking, which have not only power restrictions but also real-time requirements. We consider the C6678 DSP from Texas Instruments (Dallas, TX, USA) as the target platform of our implementation. This research is of particular interest in the optical flow field because such a system can be considered an alternative solution for mid-range video resolutions when in-processor parallelism is combined with optimizations such as efficient memory-hierarchy exploitation and multi-processor parallelization.
Introduction
Motion estimation has been deeply investigated during the last 50 years; however, it is still considered by the scientific community an emerging field of special interest due to the plethora of applications that support the interpretation of the real world, such as navigation, sports tracking, surveillance, video compression, robotics, vehicular technology, etc. It is also useful in the neuroscience field, where the task of modeling neuromorphic algorithms and systems that fit well with the evidence from the human brain is an open and common research problem.
Motion estimation determines motion vectors and describes the transformation of an entire two-dimensional (2D) image into another, usually taken from contiguous frames in a video sequence using pixels or specific parts such as shaped patches or rectangular blocks.
Motion occurs in three dimensions, but images are a projection of the three-dimensional scene onto a two-dimensional plane, therefore posing a mathematically ill-posed problem [1-3], usually known as the 'aperture problem'. To overcome these drawbacks, external knowledge regarding the behavior of objects, such as rigid-body constraints or other models that might approximate the motion of a real video camera, becomes necessary. These models are based on the motions of rotation, translation, and zoom, in all three dimensions.
The optical flow paradigm is not exactly the same concept as motion estimation, although they frequently come up associated. Optical flow is the apparent motion of image objects or pixels between frames [3]. Two assumptions are usually applied to optical flow [4]:
- Brightness constancy: although the 2D position of the discriminant image characteristics, such as brightness, color, etc., may change, they keep their value constant over time. Algorithms for estimating optical flow exploit this assumption in various ways to compute a velocity field that describes the horizontal and vertical motions of every pixel in the image.
- Spatial smoothness: it arises from the observation that neighboring pixels usually belong to the same surface and are therefore inclined to present the same image motion.
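For reference, the brightness constancy assumption admits a standard formal statement, added here for clarity (it does not appear verbatim in the original text). Denoting the image intensity by $I(x, y, t)$ and the velocity by $(u, v)$,

$$I(x, y, t) = I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t),$$

whose first-order Taylor expansion yields the well-known optical flow constraint equation

$$I_x u + I_y v + I_t = 0,$$

where $I_x$, $I_y$, and $I_t$ are the partial derivatives of the intensity. Since this is a single equation in two unknowns, additional constraints (such as spatial smoothness) are needed, which is precisely the aperture problem mentioned above.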
Optical flow has many drawbacks that increase the burden of estimating it. For instance, the optical flow is ambiguous in homogeneous image regions due to the brightness constancy assumption. Additionally, in real scenes, the assumption is violated at motion boundaries as well as by occlusions, noise, illumination changes, reflections, shadows, etc. Therefore, only synthetically generated motion can be recovered with no ambiguity. These two assumptions may thus lead to errors in the flow estimates.
There are a number of common examples which deliver a non-null value for motion estimation but a zero value for optical flow, e.g., a rotating sphere under constant illumination. Similarly, a static sphere under changing light will deliver optical flow while the motion field remains null [1], and an old barber pole in motion shows a real velocity field perpendicular to the estimated optical flow.
Classifying the state of the art in algorithms and techniques, we find a common taxonomy of methods used to estimate optical flow. They generally fall into one of the following categories:
- Pattern-matching methods [3] are probably the most intuitive. They operate by comparing the positions of image structure between adjacent frames and inferring velocity from the change in location. The aim of block-matching methods is to estimate motion vectors for each macro-block within a specific, fixed search window in the reference frame. These exhaustive or semi-exhaustive search algorithms match all macro-blocks within a search window in the reference frame in order to find the macro-block that minimizes a block-matching error metric (a minimal sketch is given after this list).
- Motion energy methods are probabilistic methods that use space-time oriented filters tuned to respond optimally to specific image velocities. Banks of such filters are used to respond to a range of visual motion possibilities [1]; therefore, motion estimation is not unique for every single stimulus. These methods usually work in Fourier space.
- The gradient-based or differential technique family uses derivatives of image intensity in space and time. Combinations and ratios of these derivatives yield explicit measures of velocity [2,5]. The particular implementation of the algorithm used in this paper belongs to this family and is based on Johnston's work [6,7]. The multi-channel gradient model (McGM) was developed as part of a research effort aimed at improving our understanding of the human visual system. This model also allows us to make predictions that can be tested through psychophysical experimentation, such as the separate motion illusions observed by humans in experiments [8].
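As an illustration of the pattern-matching family, the following is a minimal C sketch of exhaustive block matching using the sum of absolute differences (SAD) as the error metric. The function name and parameters are ours and purely illustrative; fast search strategies and sub-pixel refinement are omitted.

#include <stdlib.h>
#include <limits.h>

/* Exhaustive block matching with SAD over a +/-R search window.
 * img_ref/img_cur: grayscale frames of size nx*ny (row-major).
 * (bx, by): top-left corner of the B x B macro-block in img_cur.
 * Outputs the motion vector (*mvx, *mvy) minimizing the SAD.   */
static void block_match_sad(const unsigned char *img_ref,
                            const unsigned char *img_cur,
                            int nx, int ny, int bx, int by,
                            int B, int R, int *mvx, int *mvy)
{
    unsigned best = UINT_MAX;
    *mvx = 0; *mvy = 0;
    for (int dy = -R; dy <= R; dy++) {
        for (int dx = -R; dx <= R; dx++) {
            int rx = bx + dx, ry = by + dy;
            if (rx < 0 || ry < 0 || rx + B > nx || ry + B > ny)
                continue;                     /* stay inside the frame */
            unsigned sad = 0;
            for (int j = 0; j < B; j++)
                for (int i = 0; i < B; i++)
                    sad += abs((int)img_cur[(by + j) * nx + (bx + i)] -
                               (int)img_ref[(ry + j) * nx + (rx + i)]);
            if (sad < best) { best = sad; *mvx = dx; *mvy = dy; }
        }
    }
}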
One of the main drawbacks of the McGM model is the high hardware requirements needed to achieve real-time processing. On one hand, McGM presents an uptrend in temporal data storage which translates into non-negligible memory requirements; on the other hand, optical flow processing requires substantial computational capabilities to meet real-time requirements. Previous works [9,10] have fulfilled those requirements by exploiting the inherent data parallelism of McGM using both modern multi-core processors and hardware accelerators such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs).
The limitation in power consumption of current embedded devices makes it necessary to consider energy-related issues in the implementation of optical flow algorithms. There are ad hoc solutions in the literature to solve the motion estimation problem under power constraints. As an example, there are countless proposals under low-power conditions for pattern-matching family algorithms, but most are in the video compression field [11,12]. Another approach with central processing units (CPUs) [13] presents a parallel scheme applied to a model based on the well-known Lucas-Kanade approach, which reduces power consumption in terms of thermal design power (TDP) and still meets real-time requirements when low-power chipsets (TDPs of 20 to 30 W) are used. Moreover, Honegger et al. [14] implement a low-power stereo vision system with an FPGA based on the Nios II processor (Altera, San Jose, CA, USA). Furthermore, processor manufacturers are now concerned with concepts such as green computing. The aim is to develop chips that are more efficient not only in terms of performance rates (throughput measured in terms of floating-point operations per second (FLOPS) or Mbits per second) but also in terms of energy efficiency [15]. Besides modern and efficient multi-core CPUs, hardware accelerators such as GPUs or Intel MIC (Intel Corp., Santa Clara, CA, USA), and reconfigurable devices (FPGAs), one of the latest additions among specific-purpose architectures applied to general-purpose computing is the low-power digital signal processor (DSP). One of the primary examples in this field is the C6678 multi-core DSP from Texas Instruments (TI; Dallas, TX, USA), which combines a theoretical peak performance of 128 GFLOPS (billions of floating-point operations per second) with a power consumption of roughly 10 W per chip. Besides, one of its most appealing features is the ease of programming, adopting well-known programming models for sequential and parallel implementations.
Our contribution provides an efficient implementation of an optical flow gradient-based model on a low-power DSP, exploiting different levels of parallelism. To the best knowledge of the authors, this is the first attempt to use a DSP architecture to implement a robust optical flow gradient-based model. Only a few approaches in the literature exploit gradient-based motion estimation methods on DSP platforms, such as the one proposed by Shirai et al. [16] in the early 1990s, implementing the classical Horn-Schunck algorithm [17] using many boards, each with a TMS320C40 DSP. That algorithm supplements the optical flow constraint with regularizing smoothness terms, while our work uses spatio-temporal constancy. Besides, performance and/or energy consumption is not considered in that work. Rowenkap et al. [18] implemented the same algorithm in 1997, using the same DSP and reaching throughputs of 5 frames per second (fps) and 15 fps (for 128 × 128 image resolution) when using one and three DSPs, respectively. The last work considered is the neuromorphic implementation of Steiner [19], which uses the Srinivasan algorithm [20] on a dsPIC33FJ128MC804 processor; this algorithm is based on a simple procedure of image interpolation.
The challenge addressed in this paper is based on the efficient exploitation of the available resources in TI's C6678 DSP, taking into account the particular features of McGM:
- Exploit loop-level parallelism by means of the very long instruction word (VLIW) processor capability available in TI's C6678 DSP.
- Take advantage of data-level parallelism by means of its multimedia extension capability.
- Make use of thread-level parallelism available in TI's C6678 multi-core DSP.
- Exploit the memory system hierarchy with the efficient use of cache levels and on-chip shared memory.
Our experimental evaluation includes a comparison of the DSP implementation with other state-of-the-art architectures, including general-purpose multi-core CPUs and other low-power architectures.
The rest of the paper is organized as follows. Section 2 walks through a specific neuromorphic model and describes the particularities of each stage. Section 3 gives an overview of the DSP architecture, together with the main motivations for choosing this platform for our motion estimation approach. In Section 4, we give details about the specifics of the implementation of McGM on the DSP and provide an experimental analysis of the implementation. Finally, Section 5 provides some concluding remarks and outlines future research lines.
Multi-channel gradient model
The McGM model, proposed by Johnston et al. [6,7], implements a vision processing scheme described by Hess and Snowden [21], combining the interaction between ocular vision and brain perception and simplifying the human vision model [8]. In order to solve the problems with the basic motion constraint equation, many gradient measurements (Gaussian derivatives) have been introduced into the velocity measure via a Taylor expansion representation of the local space-time structure.
Figure 1 shows a simplified scheme of the stages to be completed. From the point of view of data processing, the McGM algorithm involves increasing temporal data generation in each stage. Hence, this algorithm may be considered a data-expansive processing algorithm. Moreover, the nature of its dataflow makes it essential to fully conclude a given stage before starting the next one, which inhibits the ability to apply latency reduction techniques similar to those used in a pipelined processor, since the stage times differ substantially. The only way to reduce motion estimation latency is to minimize the computation time at the stage level. This work focuses on the optimal exploitation of the high data- and loop-level parallelism available in each McGM stage. In order to clarify these aspects, Figure 2 shows the processing dataflow, while the memory consumption at each stage is detailed in the next subsections.
FIR-filtering temporal
In the finite impulse response (FIR) temporal filtering stage, the McGM algorithm models three different temporal channels based on the experiments carried out by Hess and Snowden [21] on the visual channels discovered in human beings: one low-pass filter and two band-pass filters with center frequencies of 10 and 18 Hz, respectively. The input signal is filtered according to Equation 1, where α and τ represent the peak and spread of the log-time Gaussian function, respectively. In practice, for an input movie of N frames with nx × ny resolution, this stage produces approximately (N − L) × nx × ny × nTemp_filt temporal data, as indicated by T1, T2, and T3 (nTemp_filt = 3) in the temporal filtering stage of Figure 2; these intermediate data structures are provided to the spatial filtering stage as inputs.
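Although the body of Equation 1 is not reproduced here, the surrounding description (a log-time Gaussian with peak α and spread τ) suggests a temporal kernel along the lines of

$$k(t) \propto \exp\left(-\frac{(\ln t - \alpha)^2}{2\tau^2}\right), \quad t > 0,$$

which should be read as a plausible reconstruction rather than the authors' exact expression.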
FIR-spatial derivatives
The FIR spatial derivative stage is based on space-domain computation, where the shape of the receptive fields of the primary visual cortex is modeled using either Gabor functions or a derivative set of Gaussians [22]. A Gaussian kernel, $G_0(x) = e^{-x^2/2\sigma^2}$, is derived to obtain higher-order differential operators using Hermite polynomials. In our case, the nth derivative can be obtained by multiplying the corresponding Hermite polynomial by the original Gaussian function,

$$G_n(x) = \frac{d^n}{dx^n} G_0(x) = \left(\frac{-1}{\sigma\sqrt{2}}\right)^{n} H_n\!\left(\frac{x}{\sigma\sqrt{2}}\right) G_0(x),$$

with σ the standard deviation of the normal distribution.
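The following C sketch samples such an nth-derivative-of-Gaussian kernel using the Hermite recurrence. It is our illustration rather than the authors' code; normalization and the truncation policy for the kernel support are simplified.

#include <math.h>

/* Sample the n-th derivative-of-Gaussian kernel on [-half, half].
 * Uses the physicists' Hermite recurrence
 *   H_0(u) = 1, H_1(u) = 2u, H_{k+1}(u) = 2u H_k(u) - 2k H_{k-1}(u)
 * and d^n/dx^n exp(-x^2/2s^2) = (-1/(s*sqrt(2)))^n H_n(u) exp(-u^2),
 * with u = x / (s*sqrt(2)). Buffer length must be 2*half + 1.     */
static void gauss_deriv_kernel(float *kernel, int half, int n, float sigma)
{
    const double s2 = sigma * sqrt(2.0);
    const double scale = pow(-1.0 / s2, (double)n);
    for (int i = -half; i <= half; i++) {
        double u = (double)i / s2;
        double hm1 = 0.0, h = 1.0;          /* H_{-1}, H_0 */
        for (int k = 0; k < n; k++) {       /* build H_n(u) */
            double hp1 = 2.0 * u * h - 2.0 * k * hm1;
            hm1 = h; h = hp1;
        }
        kernel[i + half] = (float)(scale * h * exp(-u * u));
    }
}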
From the point of view of data-path processing, for nSpat_filters Gaussian filters (see Figure 2), this stage generates (N − L) × nx × ny × nTemp_filters × nSpat_filters output data.
Steering filtering
The steering filtering stage synthesizes filters at arbitrary orientations, formed as a linear combination of other filters in a small basis set. More specifically, if we call m and n the derivative orders in the x and y directions, respectively, θ the projected angle, and D the derivative operator, the general expression is obtained as a linear combination of filters of the same order applied to the basis filter (G_0); see Equation 4.
From the data-path perspective, this is the most memory-consuming and computationally demanding stage. Resource consumption is closely related to the number of orientations considered, denoted by nθs in Figure 2. More specifically, the amount of data produced at this stage is close to (N − L) × nx × ny × nTemp_filters × nSpat_filters × nθs.
Product and Taylor and quotients
Quotient calculation is the last stage derived from the common pathway. The goal here is to compute a quotient for every component of the sextet. From the point of view of dataflow, McGM changes its trend at this point and starts to converge, which means a considerable reduction in the amount of data to compute. The data stored amount to approximately (N − L) × nx × ny × nθs × 6.
Velocity primitives
The velocity primitive stage implements the modulus and phase estimation with separate expressions. After that, speed measurements, parallel and orthogonal to the primary directions, are taken to yield a vector of speed measures (parallel and orthogonal speed components). The raw measurements of speed are also conditioned by including measurements of the image structure, XY/XX and XY/YY, where the final conditioned speed vectors span the number of orientations at which the speed is evaluated. The inverse speed is calculated in a similar way.
Modulus and phase
Finally, the motion modulus is calculated through a quotient of determinants, and the direction of motion is extracted by calculating a phase measurement that is then combined across all speed-related measures. Lastly, modulus and phase have size (N − L) × nx × ny (one piece of data per input pixel).
Overview of the C6678 DSP architecture
The C6678 digital signal processor from Texas Instruments is a high-performance, low-power DSP with floating-point capabilities [23]. It presents eight C66x VLIW cores and runs at 1 GHz. The whole device dissipates a maximum power of 10 W. Besides low power, high performance, and floating-point capabilities, one of the strengths of the C6678 device is the number of standard peripherals it supports: a PCIe interface to communicate with a CPU host, Serial RapidIO and HyperLink for fast, low-latency inter- and intra-chip communication, and direct memory access (DMA) to overlap computation with transfers between the external memory and the on-chip memory.
C66x core architecture
The C66x core illustrated in Figure 3 is the base of the multi-core C6678 DSP architecture. It is implemented as a VLIW architecture, taking advantage of different levels of parallelism:
• Instruction-level parallelism. In the core, eight different functional units are arranged in two independent sides. Each side has four processing units, namely L, M, S, and D. The M units are devoted to multiplication operations. The D unit performs address calculations and load/store instructions. The L and S units are reserved for additions and subtractions, logical, branch, and bitwise operations. Thus, this eight-way VLIW machine can issue eight instructions in parallel per cycle. In our case, we will use OpenMP as the tool to manage thread-level parallelism.
Memory hierarchy
The memory hierarchy of the C6678 device is shown in Figure 3. The compiler supports OpenMP 3.0 to allow rapid porting of existing multi-threaded codes to multi-core DSPs. The OpenMP runtime performs the appropriate cache control operations to maintain the consistency of the shared memory when required, but special precaution must be taken to keep data coherence for shared variables, as no hardware support for cache coherence across cores is provided.
Work environment
All codes were evaluated using a TMDXEVM6678LE evaluation module that includes an on-board C6678 processor running at 1 GHz. The board has 512 MB of DDR3 RAM available for image storage or generation. Our tests were developed on top of SYS/BIOS using the OpenMP implementation from Texas Instruments, MCSDK version 2.1, and Code Generation Tools version 7.4.1 with OpenMP support enabled. Single-precision floating-point arithmetic was used for all the experiments. We have not observed any precision issues in our DSP implementations compared with previous results on other architectures [9,24,25]. Therefore, our experimental section focuses exclusively on a performance analysis instead of a qualitative analysis of the obtained numerical results.
Implementation and experimental results
In this section, we present relevant algorithmic and implementation details of each stage of the McGM method. Whenever possible, we provide a list of incremental optimizations applied in order to improve the performance of our implementation on the multi-core DSP. Due to the high computational requirements of the first three stages of the algorithm (temporal filtering, spatial filtering, and steering), we will focus on those parts. However, some notes about the last stages, together with experimental results, are also given.
The optimizations proposed are DSP specific and address four of the most appealing features of the architecture: instruction, data, and thread parallelism extraction, and the exploitation of the flexibility of the memory hierarchy, plus the usage of DMA to overlap computation and communication.
Relevant parameters for McGM
Evaluating the performance of McGM is a hard task, mainly due to the large number of parameters that can be modified in order to tune the algorithm's behavior. Many of those parameters have a great impact not only on the precision of the solution but also on the overall attained performance. Table 1 lists the main configurable parameters associated with the first three stages of McGM. The column labeled 'Typical values' provides an overview of the most common values, although different ones can be used to vary the motion estimation accuracy.
In Table 1, we also add four different parameter configurations that will be used for global throughput evaluation.Although all experimental results are reported for video sequences with square frames, our implementation is prepared for non-squared images, and no qualitative differences in the performance results have been observed.
McGM implementation on the DSP
The main implementation details and optimization techniques applied in our McGM porting task to the multi-core DSP are detailed in this section. As exposed in the algorithm description, we will divide the overall procedure into stages, describing each one in detail. Many of the optimization techniques applied for the DSP are quite similar for all stages. Therefore, the common optimization techniques are explained in detail next.
1. Basic implementation. We establish a baseline C implementation for comparison purposes. It includes the necessary compiler optimization flags and common optimization techniques to avoid unnecessary calculations and benefit from data locality and the cache hierarchy. No further DSP-specific optimizations are applied in the code of this naive implementation.
2. DMA and memory-hierarchy optimization. One of the strengths of the DSP is the ability to explicitly manage the on-chip memory levels (L1 cache, L2 cache, and MSMC memory). Thus, one can define buffers, assign them to a particular memory-hierarchy level (using the appropriate #pragma annotations in the code), and perform data copies between them as necessary. In addition, DMA capabilities are offered in order to overlap computation with data transfers between memory levels. The usage of blocking and double buffering is required. This involves the allocation of the current block of each frame to be processed and of the next block, which is transferred through DMA while CPU computation is in progress. This technique effectively hides memory latencies, improving the overall throughput. In our case, we have mapped the temporal buffers that accommodate blocks of the input frames to the on-chip MSMC memory, in order to improve memory throughput in the computation stage.
3. Loop optimization. VLIW architectures require careful loop optimization in order to let the compiler effectively apply techniques such as software pipelining, loop unrolling, and data prefetching [26]. In general, the aim is to keep the (eight) functional units of the core fully occupied as long as possible. To achieve this goal, the developer guides the compiler with safe loop unrolling factors, fixed unroll counts (using appropriate #pragma constructions), or pointer disambiguation (using the restrict keyword on those pointers that will not overlap during the computation). Even though this type of optimization is not critical on superscalar processors, which defer the extraction of instruction-level parallelism to execution time, it becomes crucial for VLIW architectures, even more so for algorithms heavily based on loops such as McGM. We have performed a full search to find the optimal unroll factor for each loop in the algorithm (a sketch combining this and the following optimizations is given after this list).
4. SIMD vectorization. As mentioned in Section 3, each C66x core is able to execute single-cycle arithmetic and load/store instructions on vector registers up to 128 bits wide. Naturally, this feature is supported at the ISA level and can be programmed using intrinsics [26]. In McGM, data parallelism is massive and can be exploited by means of SIMD instructions in many scenarios. Intermediate data structures are stored using single-precision floating point (32 bits wide). Thus, in the convolution step, input data can be grouped and processed in SIMD fashion using 128-bit registers (usually referred to as quad registers) for multiplications and 64-bit registers for additions. Given that the C66x architecture can execute up to eight SP multiplications (four per M unit) and eight SP additions (two per L and S unit), each core can potentially execute up to eight SP multiply-adds per cycle if SIMD is correctly exploited. At this stage, we load and operate on four consecutive pixels of the image, unrolling the corresponding loop by a factor of 4. Special caution must be taken in order to meet the memory alignment restrictions of the load/store vector instructions; to meet them, we apply zero-padding to the input image when necessary, according to its specific dimensions.
5. Loop parallelization. Up to this point, all the optimizations have been focused on exploiting parallelism at the core level. The last stage of the optimization involves the exploitation of thread-level parallelism to leverage the multiple cores in the DSP. The parallelization is carried out by means of OpenMP. Special care must be taken with shared variables, as cache coherence is not automatically maintained. Thus, data structures must be zero-padded to fill a complete cache line and avoid false sharing, and explicit cache write-back and/or invalidate operations must be performed in order to keep coherence between the local memories of each core.
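The sketch below condenses optimizations 3-5 into a single illustrative routine: restrict-qualified pointers and TI C6000 loop pragmas (MUST_ITERATE, UNROLL) guide the VLIW compiler, OpenMP distributes rows across cores, and an explicit write-back keeps the shared output coherent. It assumes the TI compiler and the SYS/BIOS Cache API; the function name, unroll factor, and trip-count hints are our choices, not the authors'.

#include <xdc/std.h>                /* TRUE/FALSE (SYS/BIOS environment) */
#include <ti/sysbios/hal/Cache.h>   /* Cache_wbInv (assumed available)   */
#include <omp.h>

/* Row-parallel 1D horizontal convolution of a T-tap filter. */
void conv_rows(const float *restrict in, const float *restrict coef,
               float *restrict out, int nx, int ny, int T)
{
    int x, y, k;
    #pragma omp parallel for private(x, k)
    for (y = 0; y < ny; y++) {
        const float *row = in + y * nx;
        float *orow = out + y * nx;
        for (x = 0; x <= nx - T; x++) {
            float acc = 0.0f;
            /* Trip-count hints enable software pipelining and unrolling. */
            #pragma MUST_ITERATE(3, , )
            #pragma UNROLL(4)
            for (k = 0; k < T; k++)
                acc += row[x + k] * coef[k];
            orow[x] = acc;
        }
        /* No HW coherence across cores: write back this core's rows. */
        Cache_wbInv(orow, nx * sizeof(float), Cache_Type_ALL, TRUE);
    }
}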
Stage 1: temporal filtering
Algorithm and implementation. In order to obtain the temporal derivative of the image, it is necessary to perform a convolution of each image sequence with each one of the three temporal filters obtained (one low-pass and two band-pass filters). Algorithm 1 outlines the basic behavior of the temporal filtering stage. As for all stages, the calculation of the corresponding filter is performed off-line, prior to computation, if necessary. As the number of temporal filters usually remains constant and small (i.e., nTemp_filters = 3), the performance of this stage greatly depends on the window size (L) over which the temporal filters are applied and on the frame dimensions (nx × ny). As output, nTemp_filters matrices of the same dimensions as each input frame are generated as a result of the convolution of each frame with the corresponding convolution filter. These matrices will be the input for the second stage (spatial filtering).
DSP optimizations and performance results. Figure 4 reports the experimental results obtained after the implementation and optimization of the temporal filtering stage. Throughput results are given in terms of frames per second, considering increasing square frame dimensions and increasing window sizes (L) for the temporal convolution. We do not report results for a different number of temporal filters, as three is the most common configuration for McGM. We compare the throughput attained by the basic implementation using one core with that of a version with all the exposed optimizations applied on one core, and parallelized across the eight available cores in the C6678.
At this stage, the critical factors affecting performance are the frame size (nx × ny) and the temporal window size (L). In general, for a fixed L, throughput decreases for increasing frame dimensions. For a fixed frame dimension, the impact of increasing the window size also translates into a decrease in performance, although not by a relevant factor.
Independently of the evaluated frame resolution and window dimensions, core-level optimizations (usage of DMA, loop optimizations, and SIMD vectorization) translate into performance improvements between ×1.5 and ×2, depending on the specific selected parameters. When OpenMP parallelization is applied, the throughput improvement is between ×5.5 and ×7 compared with the optimized sequential version. In general, the throughput obtained by applying the complete set of optimizations improves on the original basic implementation by a factor between ×7 and ×14. We would like to highlight the multiplicative effect observed when both in-core and multi-core optimizations are carried out.
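For concreteness, a minimal (unoptimized) C sketch of the stage-1 kernel corresponding to Algorithm 1 could look as follows; the function name and pointer layout are our assumptions, not the authors' code.

/* Convolve the temporal profile of every pixel with nTemp FIR kernels
 * of length L. frames: N pointers to nx*ny images (oldest first);
 * out[c]: buffer holding the filtered images for temporal channel c. */
void temporal_filter(const float *const *frames, float *const *out,
                     const float *const *kern, int nTemp,
                     int L, int N, int nx, int ny)
{
    const int npix = nx * ny;
    int f, c, p, t;
    for (f = 0; f + L <= N; f++)            /* output frame index   */
        for (c = 0; c < nTemp; c++)         /* temporal channel     */
            for (p = 0; p < npix; p++) {    /* pixel within a frame */
                float acc = 0.0f;
                for (t = 0; t < L; t++)
                    acc += kern[c][t] * frames[f + t][p];
                out[c][f * npix + p] = acc;
            }
}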
Stage 2: spatial filtering
Algorithm and implementation. From the algorithmic point of view, spatial filtering does not differ dramatically from the previous stage; see Algorithm 2. For each one of the spatial filters generated a priori and each one of the temporally filtered frames, we apply a bi-dimensional convolution. Note that the amount of generated data increases with respect to that received from the previous stage by a factor of nSpat_filters. The window size in the convolution (the T parameter) is key in terms of precision and performance. As a result of this stage, we obtain a set of intermediate spatially filtered frames that will be provided as input to the steering stage.
DSP optimizations and performance results
Besides the basic implementation derived from the algorithmic definition of the stage, our optimizations (loop optimization, vectorization, and parallelization) are focused on the bi-dimensional convolution kernel in order to adapt it to the DSP architecture. More specifically, we leverage the separability of the bi-dimensional convolution to perform highly optimized one-dimensional (1D) vertical and horizontal convolutions, applying optimizations at the instruction level (loop unrolling), data level (vectorization in the 1D convolution loop body), and thread level across cores (through OpenMP).
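A plain C sketch of the separable scheme is given below (our illustration; border handling and the per-pass optimizations described above are omitted). Separability turns an O(T^2) 2D convolution per pixel into two O(T) 1D passes.

/* Separable 2D convolution: a T-tap vertical pass into a scratch
 * buffer (tmp), followed by a T-tap horizontal pass into out.
 * kv/kh: vertical and horizontal 1D kernels of odd length T.     */
void conv2d_separable(const float *in, float *tmp, float *out,
                      const float *kv, const float *kh,
                      int nx, int ny, int T)
{
    const int h = T / 2;
    int x, y, k;
    /* Vertical pass (interior rows only). */
    for (y = h; y < ny - h; y++)
        for (x = 0; x < nx; x++) {
            float acc = 0.0f;
            for (k = -h; k <= h; k++)
                acc += kv[k + h] * in[(y + k) * nx + x];
            tmp[y * nx + x] = acc;
        }
    /* Horizontal pass (interior columns only). */
    for (y = h; y < ny - h; y++)
        for (x = h; x < nx - h; x++) {
            float acc = 0.0f;
            for (k = -h; k <= h; k++)
                acc += kh[k + h] * tmp[y * nx + x + k];
            out[y * nx + x] = acc;
        }
}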
Figure 5 reports the experimental results obtained after the implementation and optimization of the spatial filtering stage. Results are presented for different frame dimensions and increasing spatial window sizes. As for the temporal stage, we compare a baseline version, a version with in-core-level optimizations, and an optimized version parallelized across multiple cores.
At this stage, frame size (nx × ny) and spatial window size (T) substantially impact performance. As for the previous stage, for a fixed T, throughput decreases for increasing frame dimensions. However, for a fixed frame dimension, increasing the spatial window size translates into higher throughput; from our analysis, our separable bi-dimensional convolution implementation attains better performance as the window size increases, mainly due to the avoidance of memory latency effects. This improvement, though, is expected to stabilize for larger window sizes (which are usually not common in McGM).
Core-level optimizations translate into performance improvements of between ×1.6 and ×2.2, depending on the evaluated frame and window dimensions. Thread-level parallelization yields an improvement of between ×5 and ×6.5 when compared with the optimized sequential version. In general, the throughput obtained by applying the complete set of core-level optimizations and thread-level parallelization improves on the original basic implementation by a factor of between ×8 and ×13.
Stage 3: steering filtering
Algorithm and implementation
Algorithm 3 describes the necessary steps to perform the steering stage of the McGM method. Basically, the algorithm proceeds by applying a convolution between each spatially filtered frame obtained from the previous stage (I in the algorithm) and a previously calculated oriented filter F θ. The response of each one of the temporally and spatially filtered frames to this oriented filter is the output of this stage.
Algorithm 3 R = stage III (spac filt, N, L, nTemp filters, nSpat filters, nOrtho Orders, nθs)
for θ = 0 to nθs do
  for oo = 0 to nOrtho Orders do
    for sf = 0 to nSpat filters do
      for tf = 0 to nTemp filters do
        for fr = 0 to N − L do
          frame = frames in (fr)
          for all p = pixel ∈ frame do
            compute the convolution of frame with F θ at p
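A hedged C rendering of this loop nest could look as follows; apply_oriented_filter() is a hypothetical helper standing in for the 2D convolution with the precomputed oriented filter F θ, and the flat array layouts are assumptions.

#include <stddef.h>

/* Hypothetical helper: convolve one frame with the oriented filter selected
 * by (theta index, orthogonal order), writing the response to dst. */
void apply_oriented_filter(const float *src, float *dst,
                           int nx, int ny, int th, int oo);

/* Steering stage: every temporally and spatially filtered frame is convolved
 * with each precomputed oriented filter, following the nest of Algorithm 3. */
void steering_stage(const float *spac_filt, float *R,
                    int N, int L, int nTempFilters, int nSpatFilters,
                    int nOrthoOrders, int nThetas, int nx, int ny)
{
    const int nFrames = N - L;              /* frames surviving the temporal window */
    const size_t npix = (size_t)nx * ny;
    for (int th = 0; th < nThetas; ++th)
      for (int oo = 0; oo < nOrthoOrders; ++oo)
        for (int sf = 0; sf < nSpatFilters; ++sf)
          for (int tf = 0; tf < nTempFilters; ++tf) {
            #pragma omp parallel for        /* thread-level parallelism across frames */
            for (int fr = 0; fr < nFrames; ++fr) {
                const float *frame = spac_filt
                    + (((size_t)sf * nTempFilters + tf) * nFrames + fr) * npix;
                float *dst = R
                    + (((((size_t)th * nOrthoOrders + oo) * nSpatFilters + sf)
                        * nTempFilters + tf) * nFrames + fr) * npix;
                apply_oriented_filter(frame, dst, nx, ny, th, oo);
            }
          }
}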
DSP optimizations and performance results
The optimizations applied to this stage follow the same lines as those presented for the previous stages. Data parallelism is heavily exploited where possible, and loops are optimized after a deep search for the optimal unrolling parameters. OpenMP is used to extract thread-level parallelism and leverage the power of the eight cores of the C6678. Special caution must be taken at this stage with memory consumption, as it reaches the maximum memory requirements of the McGM algorithm. More specifically, at this point, both the spatially filtered frames and their steering-filtered counterparts must coexist in memory. However, this potential issue is conditioned by input algorithm parameters that are known beforehand.
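To make the memory pressure concrete, the following back-of-the-envelope sketch estimates the footprint when the spatially filtered frames and the steering responses coexist; all parameter values and the single-precision layout are illustrative assumptions, not figures from the paper.

#include <stdio.h>

/* Rough steering-stage memory estimate: the spatially filtered frames plus
 * the steering responses must be held simultaneously. Values are assumed. */
int main(void)
{
    const double npix    = 64.0 * 64.0;   /* nx * ny, assumed */
    const double nFrames = 10.0;          /* N - L, assumed   */
    const double nTemp = 3.0, nSpat = 6.0, nOrtho = 3.0, nThetas = 8.0; /* assumed */
    double spatial  = npix * nFrames * nTemp * nSpat;     /* floats */
    double steering = spatial * nOrtho * nThetas;         /* floats */
    printf("spatial+steering = %.1f MiB\n",
           (spatial + steering) * sizeof(float) / (1024.0 * 1024.0));
    return 0;
}

Since all the factors are known before execution, a worst-case bound of this kind can be checked against the DSP's memory map ahead of time, which is what makes the issue manageable in practice.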
Figure 6 reports the experimental results obtained after the implementation and optimization of the steering stage. Results are presented for different frame dimensions and different numbers of angles (orientations). As in previous stages, we compare the throughput attained by the basic implementation with that of a version with all the exposed core-level optimizations applied on one core, and with these optimizations combined with the computation distributed across eight cores.
At this stage, the factors affecting performance are frame size (nx × ny) and the number of orientations (nθs). For a fixed number of angles, throughput decreases for increasing frame dimensions. For a fixed resolution, increasing the number of angles considered also yields higher throughput. Core-level optimizations are more significant here, the reason being the higher arithmetic intensity in the loop bodies. These optimizations yield performance improvements of between ×4 and ×5, depending on the evaluated frame dimensions and number of angles. Thread-level parallelization yields an improvement of between ×1.5 and ×2.5 taking the optimized sequential version as a reference, with higher improvements as the number of orientations increases. In general, the throughput obtained by applying the complete set of optimizations outperforms the basic implementation by a factor of between ×10 and ×12.5.
Final stages
Final stages are not considered in detail, as they are mainly compressive from the data perspective and usually require a non-significant fraction of time. Related to this, Figure 7 reports a detailed analysis of the percentage of time devoted to each stage in a typical execution of the McGM algorithm. Similar results have been observed for other experimental configurations. In general, the first three stages of McGM consume around 90% of the total execution time. The remaining time is dedicated to compressive stages, memory management routines, and precalculation of filters prior to the execution. However, we observed benefits similar to those for the previous stages when applying the equivalent core-level and thread-level optimizations to the final stages, and they are included in the global throughput results in Section 4.3. In general, global throughput is reduced for increasing frame resolutions. While this rate is high for the minimum tested resolution (up to 650 fps for 32 × 32 frames), it dramatically decreases for larger frames, reaching a minimum of 9.74 fps for the largest resolution tested (128 × 128). Differences between the several parameter configurations are especially significant for small frame dimensions but not critical for the rest. Comparing the global performance results with those for each one of the stages presented in Figures 4, 5, and 6, the main insight is that the steering stage is the clear limiting factor. Global throughput is far from that attained in the temporal and spatial filtering stages (which were in the order of thousands of frames per second, depending on the resolution) and closer to that attained for the steering stage. This confirms the time breakdown detailed in Figure 7, which illustrates that most of the overall execution time is devoted to this stage.
In order to put the results into perspective, Table 3 compares the throughput (in terms of frames per second) for a collection of platforms representative of current multi-core technology. We have selected a high-end general-purpose processor (Intel Xeon) and, as representatives of current low-power solutions, the TI DSP C6678 processor (eight cores), an Intel Atom, and an ARM Cortex A9. The table also reports the TDP in order to give an overview of the peak power consumption of each one of the platforms. Note that the TI C6678 DSP can be considered a low-power architecture, especially compared with the Intel Xeon (10 vs. 190 W when the two sockets of the latter are used). However, it is still far from the reduced power dissipated by the ARM Cortex A9.
Clearly, the multi-threaded implementation of McGM on the eight cores of the Intel Xeon yields the highest throughput rate for all the evaluated frame dimensions. For input images of 128 × 128 pixels, the throughput rate is roughly 21 fps. When only one core of the Intel Xeon is used, this rate is reduced to 4 fps. Our optimized implementation on the C6678 DSP outperforms the sequential results on the Intel Xeon, achieving a peak rate of 9.74 fps for the largest tested frame dimensions. Considering a rate of around 20 fps acceptable for real-time processing (performance rates meeting real-time processing are in italic in Table 3), the parallel implementation on the Intel Xeon can attain real time at resolutions up to 128 × 128, while the TI DSP can attain real-time processing at frame dimensions up to 96 × 96. Given the scalability observations extracted from our experimental results, we do not observe any relevant limitation to better performance results on future multi-core DSP architectures, possibly equipped with a larger number of cores.
Power efficiency considerations
These throughput rates must be considered in the context of the real power dissipated by each platform. To illustrate the power efficiency of each platform when executing McGM, Table 3 also provides a comparative analysis of the efficiency of each architecture in terms of thousands of pixels processed per second (kpps) per watt. The best power efficiency ratios are indicated with superscript letters in the table. Note that even though the ultra-low-power ARM is the most efficient architecture for the smallest input images (32 × 32), the TI DSP is clearly the most efficient platform for larger images. In this sense, the TI DSP offers a trade-off between performance and power that can be of wide appeal for applications and scenarios in which power consumption is a restriction but real time is still a requirement for medium/large image inputs. General-purpose multi-core architectures deliver lower rates in terms of power efficiency but are a requirement if real-time processing is needed for the largest tested images. On the other two low-power architectures (Intel Atom and ARM Cortex A9), real-time processing is only achieved for low-resolution images (32 × 32 in both cases). Thus, our DSP implementation, and the DSP architecture itself, can be considered appealing not only when low power is desired but also when throughput is a limiting requirement.
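The efficiency metric of Table 3 is straightforward to reproduce; the short program below uses the 9.74 fps and 10 W figures quoted in the text for the DSP at 128 × 128, while the metric definition (kpps per watt) follows the table caption.

#include <stdio.h>

/* Power efficiency as reported in Table 3: thousands of pixels processed
 * per second (kpps) per watt, here for the TI C6678 at 128x128 using the
 * throughput and TDP figures quoted in the text. */
int main(void)
{
    const double nx = 128.0, ny = 128.0;
    const double fps = 9.74;        /* DSP throughput for 128x128 frames */
    const double tdp_watts = 10.0;  /* TI C6678 TDP                      */
    double kpps = nx * ny * fps / 1000.0;
    printf("kpps = %.1f, kpps/W = %.2f\n", kpps, kpps / tdp_watts);
    return 0;
}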
Conclusions
In this paper, we have presented a detailed performance study of an optimized implementation of a robust motion estimation algorithm based on a gradient model (McGM) on a low-power multi-core DSP. Our study reports a general description of each stage of the multi-channel algorithm, together with several optimizations that offer appealing throughput gains for a wide range of execution parameters.
We do not propose the TI DSP architecture as a replacement for current high-end architectures, such as novel multi-core CPUs or many-core GPUs, but as an attractive solution for scenarios with tight power-consumption requirements. DSPs allow trading off performance, precision, and power consumption, with clear gains compared with other low-power architectures in terms of throughput (fps). In particular, while real-time processing is attained only for low-resolution image sequences on current low-power architectures (typically 32 × 32 frames), our implementation elevates this limit up to images with a resolution of 96 × 96 or higher, depending on the input execution parameters. These results outperform those on a single core of a general-purpose processor and are highly competitive with optimized parallel versions, in exchange for a dramatic reduction in power requirements.
These encouraging results open up the possibility of considering these architectures in mobile devices, where power consumption is a severe limiting factor but throughput is a requirement. Our power consumption considerations are based on the estimated peak dissipated power provided by manufacturers in the processor specifications. Nevertheless, to be more accurate in terms of power consumption, as future work we will consider a more detailed energy evaluation study, offering real measurements at both core and system level.
Figure 1 Scheme of the multi-channel gradient model with several stages.
Figure 2 Data processing in the multi-channel gradient model through several stages. (I) Temporal filtering, (II) spatial filtering, (III) steering, (IV) product and Taylor, (V) speed and inverse speed, and (VI) velocity and direction.
Figure 4 Throughput of the DSP implementation of the temporal filtering stage, for different frame dimensions and temporal window sizes.
Figure 5 Throughput of the DSP implementation of the spatial filtering stage, for different frame dimensions and spatial window sizes.
Figure 6 Throughput of the DSP implementation of the steering stage, for different frame dimensions and numbers of angles.
Figure 7 Percentage of time devoted to each stage in a typical execution of the McGM algorithm.
Table 2 Throughput of the DSP implementation of McGM for different parameter configurations
Besides the isolated throughput attained by each individual stage of the algorithm, analyzed in detail in the previous sections, an overall view of the performance attained by the complete pipeline execution is necessary. Table 2 reports the throughput, in terms of frames per second, of the complete McGM implementation, considering only the most optimized version of each stage. Due to the wide variety of parameter combinations in the algorithm, we have chosen four different representative configurations, labeled 'Conf. 1' to 'Conf. 4', whose parameters are detailed in Table 1. Results are provided for increasing frame resolutions.
Table 3 Throughput and power efficiency of McGM implementations on different architectures, using Conf. 4 and different frame sizes
Numbers in italic meet real-time requirements. a: The best efficiency achieved for each frame size. kpps: thousands of pixels processed per second. | 2017-07-06T16:24:13.429Z | 2013-05-10T00:00:00.000 | {
"year": 2013,
"sha1": "fa3d901535d6853fa902d2cf11ba29b7f05bd392",
"oa_license": "CCBY",
"oa_url": "https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1186/1687-6180-2013-99",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "fcbb8681b89db45f063d1173611ba3d101ce5ae2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
261768141 | pes2o/s2orc | v3-fos-license | Standardising policy in a nonstandard way: a public/private standardisation process in Norway
Abstract
Standards developed by standard-setting organisations (SSOs) – sometimes labelled private rulemaking – are part of larger practices of governance in most societies yet are underinvestigated from a policy process perspective. Utilising and developing the multiple streams approach (MSA), this article investigates a policy process moving between government and the SSO Standards Norway (SN). The study finds standardisation by SSOs to be an ambiguous institutional arrangement. Institutional barriers that are strong in theory did not work as such in the case investigated. This article argues that the differentiation between responsibility for process (SN) and content (committee) makes the standardisation process vulnerable. The concept of "institutional deficit" is introduced to describe a potential mismatch between SSOs producing policy in a government-like institution, but where the SSOs are not capable of taking responsibility for policies in a government-like way. This article finds the adjusted MSA useful in this potentially least likely case.
Introduction
Standards developed by standard-setting organisations (SSOs), sometimes labelled voluntary standards (Fouilleux and Loconto 2017) or private rulemaking (Weimer 2006; Büthe and Mattli 2011), are part of larger practices of governance in most societies today. SSO standards were originally technical instruments of socioeconomic coordination (Higgins and Hallström 2007), but a substantial part of what is standardised today are organisational processes (Bartley 2018), such as risk management and corporate responsibility (Rasche 2010; Aven and Ylönen 2019). Although social science has engaged with standardisation more broadly, relative to its ubiquitous development (Brunsson et al. 2012), limited scholarly attention has been paid to SSOs' standardisation (Timmermans and Epstein 2010), and the knowledge is fragmented (Botzem and Dobusch 2012; Bartley 2018). Since SSOs are ruled by private law, they have largely escaped investigation from a public policy perspective. SSO standards are important for public policy, however. Regulatory work that was earlier done by the state is now often done by SSOs (Gustafsson 2020) in decentred networks of public and private actors (Ansell and Baur 2018). Standards are often intertwined with public regulation (Frankel and Højbjerg 2007; Galland 2017) and are much used as a basis for public policies (Olsen 2020). SSOs' standardisation should thus be investigated from a public policy perspective.
This article presents a case study of a standardisation process in Norway, investigated as a policy process. The investigation follows the policy proposal (a risk assessment approach) as it "travels" through three empirically distinct phases. First, three governmental agencies within the police and military developed a guideline on terrorism protection (NSM et al. 2010), including a security-risk assessment approach. Second, this approach was introduced to Standards Norway (SN), the national SSO in Norway, as an option for standardisation. The initiative resulted in a Norwegian standard on security-risk assessment (Standards Norway 2014). Third, after the standard had been published, a debate unfolded among security and risk professionals and public servants about the usefulness of the approach presented in the standard (Busmundrud et al. 2015; Heyerdahl 2022a, 2022b).
The three distinct phases of the process, combined with the stable policy proposal (the security-risk assessment approach), allow for a within-case, longitudinal comparative design (Gerring 2007), investigating how different institutional contexts enable and constrain the policy process (Zahariadis 2016). The study builds on a primarily abductive logic (Timmermans and Tavory 2012; Ashworth et al. 2019), where the multiple streams approach (MSA) is utilised and further developed through engagement with empirical findings (Kingdon 2014; Herweg et al. 2018). The article asks: How can we account for the establishment of the standard utilising an MSA perspective, and how do the different institutional contexts enable and constrain the policy process? The three phases are investigated, before the article zooms in on the institutional characteristics of standardisation by SSOs and how they shaped the process. Prioritising SSO standardisation does not imply a "free pass" for the governmental part of the process but reflects that standardisation is underinvestigated from a policy process perspective, compared to governmental policymaking. This article also discusses the usefulness of the MSA for the case at hand.
The study contributes to our understanding of how policies (in this case a risk assessment approach) are created in flux between government and private institutions, and how such policy processes might unfold. The case constitutes a national standardisation process. Most literature studying SSOs' standardisation investigates it trans- or internationally. A national standardisation process, however, can provide insights beyond the national context. SN generally follows rules and norms for standardisation set by the International Organization for Standardization (ISO), making the case an example of such standardisation. A standardisation process in a small country like Norway is moreover relatively "simple", potentially shedding light on characteristics of standardisation that may be blurred in a more complex, international context. Finally, the national level allows for a comparison of the standardisation process with a policy process within government, contributing to cross-fertilisation between public policy perspectives and the SSO literature.
This article notes the many ambiguities of SSO standardisation and how they create possibilities for manoeuvring. It finds that, contrary to the governmental phase, the institutional structures of standardisation did not withstand pressure and could be circumvented. In theory, SN has strong institutional barriers, such as a consensus requirement. The barrier did not work as such, however. Instead, an institutional restructuring was created to solve a disagreement. This article argues that the differentiation between responsibility for process (SN) and policy content (committee) makes the standardisation process vulnerable. Finding common ground with other research on SSO standardisation, this article introduces the concept of "institutional deficit": the SSO produces policy in a government-like institution,1 but the SSO is not structured such that it takes responsibility for policies in a government-like way.
1 By "government-like" we here simply refer to government being responsible for both the process and content of governmental policymaking.
This article contributes to the policy process literature in three ways. First, the policy process perspective enables a comparison across organisational boundaries, shedding light on the institutional construction of standardisation by SSOs and the difference from traditional, hierarchical government. Second, the study is an atypical MSA study in that the process is minuscule compared to most MSA studies, with little public exposure and with public servants and professionals as central actors. This article thus explores the MSA's ability to "travel" to a very different environment.
Third, this article further develops the MSA framework. The MSA, originally developed by Kingdon (2014), has recently been elaborated both theoretically and empirically (i.e. Cairney and Jones 2016; Herweg 2016; Shephard et al. 2021). This article builds on, and refines, the call to "bring institutions back" into MSA theory (Zohlnhöfer et al. 2016; Sager and Thomann 2017; Reardon 2018) and investigates the link between institutional context and the policy process. Institutions are seen both as formal structures (Zohlnhöfer and Rüb 2016) and as knowledge and ideas (Schmidt 2008). The study thus responds to calls for integrating mainstream and interpretive policy studies (Durnová and Weible 2020). A second development of the MSA consists of the process streams being understood as logics, not dependent on quantitative complexity. This enables the MSA to be utilised also in small policy processes.
The standard under investigation pertained to security-risk assessment. Security refers here to risks posed by intentional, malicious acts, unlike safety risks, which relate to accidents and natural disasters (Jore 2019). As professional areas, security and safety come from different traditions (Pettersen Gould and Bieder 2020); security has been linked to defence and crime prevention, and safety to areas such as engineering and management. The policy in this analysis, the risk assessment approach, differs from traditional understandings of risk, where risk is presented as a combination of probability and consequence. In the risk approach investigated, risk is a combination of asset, threat, and vulnerability, without reference to probability (NS 5832; Heyerdahl 2022a). Key to the debate has been the role of probability in security-risk assessments. This is not a uniquely Norwegian question, as probabilistic risk assessment approaches to security have been debated in other countries as well, for example in relation to US Homeland Security (National Research Council 2010; Brown and Cox 2011; Mueller and Stewart 2011).
In the next subsection, SSO standardisation is presented, followed by chapters on theory and method. The case is then investigated as three historical phases, each consisting of empirical findings and an analysis utilising the adjusted MSA framework. The influence of knowledge as a structuring institution is included at the end of the chapter. The subsequent discussion and conclusion focus on the ambiguous features of standardisation by SSOs, how these influenced the case, and how they can be interpreted. The usefulness of the MSA is discussed before we appeal for more policy-oriented research on SSO standardisation.
Standards and standardisation
Standards can be defined as rules "for common and voluntary use, decided by one or several people or organizations" (Brunsson et al. 2012, 616). Standards from SSOs differ from government regulation in that they are voluntary, although they sometimes become de facto binding (Jacobsson and Brunsson 2000). Their legitimacy is intimately linked to their capacity to solve problems and improve policy, so-called output legitimacy (Botzem and Dobusch 2012): standards are assumed to be best practice, or, as ISO puts it, the "formula that describes the best way of doing something ... the distilled wisdom of people with expertise in their subject matter" (ISO n.d.; see also Jacobsen 2000).
The authority of standards also depends on trust in the standardisation process: the procedural or input legitimacy (Botzem and Dobusch 2012). Roughly, a standardisation process of the kind investigated has three key principles: 1) it is open to all relevant parties, 2) participation is voluntary, and 3) the process strives for consensus (Standards Norway 2018b; see also Wiegmann et al. 2017). Standardisation is thus an arena for deliberation and bargaining between participants. It may draw authority through its inclusiveness of a broad range of actors (Boström 2006) or by including key stakeholders (Engen 2020). This links standardisation to ideas of network governance, where private and public actors interact, aiming at what is deemed more competent, knowledge-based, problem-solving policy, either in contrast to, or supplementing, traditional public government (Torfing and Sorensen 2014; Pierre and Peters 2020).
The impact and potential coupling of public policy and SSO standardisation have been investigated, for example in studies of the delegation of authority to SSOs through the EU's New Approach (Egan 1998; Borraz 2007) and of government involvement in committee-based standardisation, through either "hard" or "entrepreneurial" approaches (Wiegmann et al. 2017).
SSOs are often nongovernmental, voluntary meta-organisations2 that create and publish formal, written standards (Jacobsson and Brunsson 2000; Higgins and Hallström 2007). ISO and its national member organisations are examples of committee-based standardisation (Wiegmann et al. 2017). The committees settle the content of the standards, whereas the SSOs facilitate the process. By and large, most committee members come from industry, with a smaller part from public administration or NGOs (Gustafsson 2020).
The interest in this article is in the national SSO Standards Norway (SN), Norway's member of ISO and the European Committee for Standardisation (CEN). SN contributes experts to, and participates in the governing of, ISO and CEN and is bound by the regulations for standardisation set by CEN and ISO (Standards Norway 2018b). Most standards published by SN are implementations of international or transnational standards, but sometimes it is the other way around.3 SN is part of the Norwegian polity, as a national SSO is an EU requirement4 and SN is obliged to implement all CEN standards (Standards Norway 2018b).
Theoretical framework: the MSA
The MSA theory draws on bounded rationality theory from organisational studies (Cohen et al. 1972; Zahariadis 2003; Kingdon 2014). Key to the MSA is the idea of three analytically independent process streams: problems, policies, and politics (Ackrill et al. 2013). Problems are conditions perceived as in need of being changed and where government action is needed (Béland and Howlett 2016). Problems may press themselves upon the system, such as a crisis, or build on feedback, such as from indicators (Kingdon 2014). The second process stream consists of policies presented as potential solutions.5 It is the ideas and accumulation of knowledge that generate policy proposals. This stream consists of, but is not limited to, expert knowledge. Ideas and policy proposals "float" around in a "policy primeval soup" and wait for the right moment to be presented as the solution to a problem (Kingdon 2014, 19; Béland 2016). Consensus building is based on persuasion (the better alternative wins through) and diffusion (ideas spread).
The political stream manifests itself in the political system, influenced by organised political forces (Kingdon 2014; Herweg et al. 2018). Consensus within this stream is built through bargaining and building coalitions. It is the "winning" and "losing", the ability to build a majority or not, that defines the stream. Kingdon does not mention power, but the political stream clearly includes the struggle for power (Herweg and Zahariadis 2017). According to Kingdon, governmental actors influence processes in two ways: through changes in key personnel and through questions of jurisdiction or "turf" (2014).
Kingdon's theory is strongly identified with the individual agent and the concept of policy entrepreneurs (PEs) (Jabotinsky and Cohen 2019). PEs are "advocates who are willing to invest their resources – time, energy, reputation, money – to promote a position in return for anticipated future gain" (Kingdon 2014, 179). Successful PEs can frame the problem and present a policy alternative as a solution. PEs need to "soften up" policy communities by educating and convincing them (Kingdon 2014).
The most cited component of the MSA is the idea of policy windows (Jones et al. 2016). A policy window is open when a problem is pressing, a solution exists, and the political conditions are such that a majority can be created. It is an opportunity for PEs to actively couple the streams (Dolan 2021), either putting an issue on the agenda or making policy decisions (Zohlnhöfer et al. 2016).
A provocative assumption of the MSA is stream independence, as it "thwarts the seemingly natural expectation that problems and solutions must be logically tied to each other" (Winkel and Leipold 2016, 110). Stream independence is empirically not always the case; it is an analytical assumption, which makes it possible to "uncover rather than assume rationality" (Herweg et al. 2018, 39).
The MSA builds on two conditions, ambiguity and temporal sorting (Zahariadis 2003). Ambiguity refers to "a state of having many ways of thinking about the same circumstances or phenomena" (Feldman, cited in Zahariadis 2003, 2–3). The MSA is applicable only if there is ambiguity (Zahariadis 2003), as this opens room for manoeuvring (Herweg et al. 2018) and sometimes "venue shopping" (Ackrill et al. 2013). Higher ambiguity in an institutional context increases PEs' chances of manoeuvring and utilising a policy window (Bolukbasi and Yıldırım 2022). Temporal sorting implies that time and temporal order, more than consequential considerations, are key in decision-making (Cohen et al. 1972). Choices are made because of a simultaneous materialisation of factors in time, more than because these factors are inherently correlated (Zahariadis 2003).
The MSA has been criticised for neglecting the impact of institutions and structural characteristics (Cairney and Heikkila 2014; Béland 2016; Zahariadis 2016; Zohlnhöfer et al. 2016; Koebele 2021). Zohlnhöfer, Herweg, and Huß argue that, especially when analysing decision-making, formal political institutions must be included in a systematic way, as they "define which majority will suffice and which actors need to agree to adopt a policy" (2016, 250).
A different strand of the MSA literature attending to institutions goes in an ideational or discursive direction. Policy is, in this view, about interpreting reality, and policymaking is a struggle over interpretations (Winkel and Leipold 2016). The MSA is combined with framing theory to investigate how PEs (Brown 2020), elite actors (Fawcett et al. 2019), or networks (Reardon 2018) frame problems and solutions to influence policy. Béland suggests knowledge regimes as a refinement of the MSA, highlighting the role of institutions in mediating the impact of ideas (2016). Winkel and Leipold investigate the MSA through an interpretive lens, utilising understandings from interpretive policy analysis and discursive institutionalism (2016; Hajer and Versteeg 2005; Schmidt 2008). Here, discourse, the structure and construction of meaning, is regarded as a decisive institutional context of policy developments (Schmidt 2010), and the policy streams are conceived of in terms of perceptions of problems, policies, and politics (Winkel and Leipold 2016).
Application and development of the MSA
This case study is an atypical MSA study; one could argue it falls outside the scope of the original theory, as the case lacks involvement by politicians, interest groups, or public opinion, all key to the MSA. The actors involved in the case were public servants, participants in the standardisation process, and risk and security experts. When the MSA is still viewed as suitable, it is because of the underlying assumptions of ambiguity and analytical stream independence, which enable an investigation of how the streams shaped the process, as well as of the role of actors, and especially PEs, in the outcomes both within each phase and across phases. The MSA needs, however, adjustment to fit the case at hand.
Kingdon regarded the empirical processes he studied as "extraordinarily complex" (2014, 20), pointing to the many participants, policy ideas, etc. Complexity is linked to metaphors of scale, such as policy ideas floating in a "primeval soup". Zahariadis (2013) specifies that issue complexity, not necessarily institutional complexity, also makes the MSA useful. This article theorises that complexity and room for manoeuvring do not require quantitative complexity but can manifest themselves in qualitative characteristics of the process. The streams are thus seen as different logics. The problem stream encompasses what needs to be changed, the policy stream how it should be changed, and the political stream who decides. Regarding the streams as logics detaches them from the link to specific actors or organisations. Actors can move between activities related to different streams and have an impact across streams. This moves the analysis in an interpretative direction (Weible and Schlager 2016). For the policy stream to be active, there do not have to be quantities of ideas, as in a "primeval soup"; at least one policy idea about how something should be changed suffices.
There are fewer pregiven criteria to rely on when the streams are seen as logics. The analysis thus rests on interpretations of reasoning and actions within institutional contexts, such as the content of a discourse (i.e. did they discuss what the problem is, did they argue in favour of a certain policy, or how to solve a disagreement?), but also on what influenced the process (i.e. a single veto-power's decision). See also the method section below.
A second adjustment of the MSA is that the present article attends to institutional contexts in two ways, reflecting the theoretical developments described above. First, formal institutional structures are linked to the political stream (Zohlnhöfer et al. 2016). Institutions "define the rules of the political game, and as such they define who can play and how they play" (Steinmo 2015, 181). Crucially for this article, they decide veto possibilities (Thelen and Mahoney 2010).
Second, the professional background and knowledge of those involved in the different phases shape how problems are framed and which policy proposals can be communicated about, and how. This builds on insights into how discourse shapes, and is shaped by, a policy process. Table 1 summarises the analytical framework.
Method and data
The case study builds on abduction, where a "situational fit" between observed facts and theory is searched for (Alvesson and Sköldberg 2018; Ashworth et al. 2019). Key to the abductive process is the refinement of theories when existing theories are unable to frame the findings (Timmermans and Tavory 2012). The analytical framework is thus developed during the study and "tested" in part against new data, but also through revisiting old data in a back-and-forth process.
This article is based on a case study of the establishment of a standard in Norway (2006–2018), studied as a policy process. The investigation starts with an initiative to coordinate two governmental guidelines and ends when the standard had been in place for some time and the controversy had "faded". The study compares three phases with distinctly different institutional arrangements. This enables a within-case, longitudinal comparative design (Gerring 2007).
Very little is known about national standardisation processes by SSOs from a social science perspective, and it is thus hard to know what type of case this is in the larger universe of national SSO standardisation processes (Ragin and Becker 1992; Levy 2008). The study is thus exploratory, aiming at analytical insights. As an MSA study, it can be viewed as a least likely case (Levy 2008). This article thus explores the theory's ability to "travel", its ability to "make sense" or "sensitise" (Blumer 1954; Timmermans and Tavory 2012) in a very different environment.
The data used are primarily documents (government archives, reports, popular and academic writing) and interviews.6 Of a more supplementary nature are fieldwork at a standardisation course,7 webpages (blogs, newspapers, advertisements), and audio recordings from a conference (FFI 2015).
The interview data consist of 40 transcribed interviews with 34 people from the government, the private sector, SN, and academic/research institutions, selected through a combination of strategic and snowball sampling; see Table 2. Nine of the interviews were conducted by Busmundrud et al., with verified interview summaries included in an appendix (2015). The other interviews were conducted by the present author (2018–2021). The interviews were in-depth, mostly face-to-face, and semi-structured, with thematic questions allowing for flexibility and dialogue. Interviews are anonymised to encourage an open dialogue. In all the phases, interviews are a key data source. In the first phase, documents (letters, minutes, notes, and drafts) archived by the Ministry of Justice and Public Security are also central. In the second phase, interviews are the main data source. The third phase builds on a combination of written material and interviews. In general, there is high coherence between the various descriptions/data sources.
Interviews and key documents were coded in NVivo, with memos supplementing the coding as an analytical tool. Coding was initially sorting-based (Tjora 2018), pertaining to the MSA but also to topics raised in the interviews. The analytical process consisted of coding, analysing, and refining key codes, comparing with and refining the analytical framework, revisiting the data (refining coding, comparing data, memos), collecting new data (additional interviews), and sensitising the analysis through engagement with the literature.
The present author has a background of nearly 20 years in the civil service; see the Supplementary material for elaborations.
From a minor detail to a popular standard: the establishment of a standard for security-risk assessment
Phase 1: the development of a guideline on terrorism protection
The first phase of the process took place within government, when the Norwegian Police Security Agency (PST), the National Police Directorate (POD), and the Norwegian National Security Agency (NSM) developed a guideline on terrorism protection. NSM originated from the military, but all three agencies were hierarchically under the jurisdiction of the Ministry of Justice and Public Security (hereafter the Ministry).
In the aftermath of 9/11, two classified guidelines on terrorism protection were produced, one by NSM and one by the police. The Ministry saw two guidelines on the same topic as not communicating a unified, coherent message from the government and gave the three agencies an assignment to publish one guideline together.9 POD disagreed with the assignment.10 After some dispute, stalemate, and attempts to rephrase the assignment, the Ministry insisted on the original goal of a unified guideline.11 By May 2008, a common draft guideline had been completed, needing only formal sign-off by the three agencies. At this time, a new employee entered a key position in PST, proposing that the guideline should present risk management and -assessment as a suitable tool in terrorism protection. Proposing a change at this point was controversial. A draft had been finished, the deadline was overdue, and the work had been more difficult and time-consuming than anticipated. The new employee argued for his/her approach and got the support of PST and NSM.12 Both agencies became convinced that introducing risk management would offer better guidance on terrorism protection.
POD objected, wanting to finalise the existing draft.13 It also rejected a new draft guideline from PST and NSM and once again proposed two guidelines to the Ministry. The Ministry once again rejected this, insisting on the need for one, unified guideline. A bargaining process between representatives of the three agencies resulted in a finalised guideline in 2010 (NSM et al. 2010). It stated that terrorism protection planning should be conducted using risk management and -assessment, in line with the new employee's proposal.
Utilising the MSA, the problem stream clearly defined the Ministry's perspective in its insistence that a single, unified guideline should be produced. The Ministry did not get involved in policy questions, and it had only one thing on the agenda: a unified message from the government on terrorism protection. When the Ministry supported a rewriting of the draft to introduce risk management into the guideline, it supported the majority of the agencies (two against one).
Kingdon states that changes in policy processes often occur through a change in key personnel (2014). This is the case here, when a new employee introduced the idea that terrorism protection should be conducted through risk assessment and -management. At this point, the policy stream was defining the course of events. The process was prolonged to incorporate a new policy solution. Attention to policy also brought the underlying problem somewhat to the fore, as it was argued that planning through risk management was the better solution for protecting against terrorism. The problem and policy streams were empirically linked, as is often the case (Winkel and Leipold 2016).
Initially, the question of jurisdictional boundaries was key to the process. Should a policy (guideline) from different agencies (the police/the military) be coordinated? This is the civil service version of the political stream, about battles over turf (Kingdon 2014). Although the policy stream changed the course of events when the new employee argued for a different policy, the political stream was the defining logic most of the time, both before and after. Little substantive policy discussion, stalemates, and attempts to rephrase the terms of the assignment indicate that the process was mostly shaped by the question of who gets their way, that is, who decides. The battle was so fierce that it was sarcastically referred to by some as "the suicide project". The reason the process resulted in a policy was, we may conclude, the clear hierarchical structure and the single veto-power, the Ministry.
The new employee, and eventually more people, worked actively, persistently, and with a willingness to "fight it through" to get a guideline with their preferred policy. They did not put the guideline itself on the agenda, but they put risk assessment/management on the agenda within the framework of the guideline. Put together, their actions were much in line with the MSA concept of PEs. Table 3 sums up the first phase.
Phase 2: standardisation of the risk assessment approach
Independently of the process described above, an initiative was taken by SN to identify the need for standards for crime protection within the building and construction sector, and the relevant committee (SNC 296) initiated a working group (WG). The key PEs who had developed the risk assessment approach in the terrorism guideline got involved in the working group.
The movement to SN radically changed the institutional context of the process. First, it was no longer under the jurisdiction of the government, although some of the key actors were civil servants. Unlike in governmental processes, however, their position was not privileged, as the consensus requirement meant that all participants had veto-power. Second, all the actors were new, except for the PEs who had taken the initiative to standardise the approach. The people involved now primarily came from the building and construction sector and physical security. Finally, the claim to authority changed; the source was no longer governmental but linked to the authority of SSO standards.
The SNC 296 committee had originally envisioned standards on physical security, but a PE took the initiative to change priorities:
I saw this as my golden opportunity, because I ... was convinced that my approach was better than what had been there before. How do I spread this? That has a lot to do with being in a position of power. In [agency X] I was in a position of power, you are the organization. Whether true or not, people think a person coming from [X] knows a lot about security, just because they come from that organization. And then I thought, let's make this more universal. So I pushed for making standards, but not the ones they [the SN board] wanted. They wanted to make standards on buildings and technical matters ... I was a bit smart and saw an opportunity and said, "let's make a standard on terminology and one on [risk assessment] method."
The interviewee refers to him/herself both as an active shaper of the process ("I was a bit smart") and as responding to an opportunity that opened up ("my golden opportunity"), in line with the MSA concept of a policy window.
The need for standards on risk management within a security framework was a new idea in need of acceptance. The WG and the SN committee became convinced of the advantages of risk management standards and changed their priorities accordingly. A series of standards on security-risk management was proposed (the NS 583X series). The policy stream thus played a key role initially.
The policy proposal was tightly linked to a new problem description. The problem was now framed as a lack of foundational professional standards tailor-made to the field of protective security (intentional, malicious acts), contrasted with the field of safety (accidents, natural disasters) (SNC 296, Crime Protection Working Group 2009). One of the standards initiated was on risk assessment for security. A draft standard was developed, building on the risk assessment approach presented in the guideline on terrorism protection previously agreed on by NSM, PST, and POD (phase 1). The approach to risk differed from traditional approaches, as described in the introduction.
When the draft standard was finished, a new person from a key governmental organisation working with safety joined the 296 committee. This new member, supported by his/her organisation, disagreed with the draft standard, arguing that it was unfortunate that a risk assessment standard defined risk differently than established risk standards (ISO 2018; Standards Norway 2008).
The proponents of the approach did not alter their position, however. The key argument for producing the standard was precisely to customise the understanding of risk to the field of security. Those arguing in favour of the draft standard were frustrated that someone outside the field of security wanted to stop what they regarded as a professional development of the field.
A key requirement of SN is that new standards should be consistent with existing national and international (CEN, ISO) standards. Consensus within the committee was also required. Both were lacking, so SN stopped the process, entering a period of stalemate.
To find a solution, those in favour of the draft standard proposed a conceptual change. The standard should not be one on risk assessment (for security) but on security-risk assessment, the difference being between risk and security-risk. Everyone accepted this solution as a compromise. The approach could no longer be confused with other risk assessment standards. A new concept (security-risk) and a new practice (security-risk assessment) were established. SN's two concerns, consensus and consistency, were no longer a barrier, and the standard NS 5832 was published (Standards Norway 2014).
The solution was, arguably, a political solution, not a question of policy. Although there were some attempts to discuss substance when the new committee member entered the process, there was not much policy discussion, and the process soon went into a stalemate. It was a question of who would "win", with the politics stream as the defining logic. This made the decision-making process dependent on the rules and regulations of standardisation. Since standards are based on consensus, and all participants have veto-power, opposition from one party sufficed, easily creating a stalemate. When the concept of security-risk was introduced, the purpose was to get around this stalemate by creating two professional domains: security-risk assessment and risk assessment.
Requirements for coherence and consensus are strong institutional barriers imposed on the standardisation process. We argue that they did not work as such in this case. Instead, they became incentives for creating a differentiation between two types of risk, representing an institutional restructuring between two professional "turfs" on which to professionalise. See the discussion below.
Finally, we need to look at the role of human agency in the second phase. The PEs who moved the policy to SN were active both in identifying the policy window and in initiating a change in what should be standardised. They convinced key people of their perspective, enlarging the group who worked in favour of the standard. They did not give in to requirements to comply with established NS and ISO standards on risk or to objections from established risk assessment milieus through the new committee member. They also presented a creative solution to the problem of the stalemate. We may conclude that proponents of the new standard took the role of PEs, as described in the MSA, also within the second phase of the process.
As in the first phase, a change of personnel played a decisive role: first when the PEs influenced the type of standards that were made a priority, but also when the new committee member acted as an antithesis to a PE, saying stop. Table 4 sums up the second phase.
Phase 3: after standardisation – the nonevent
We have worked ... in accordance with the recommendations from the National Security Agency, by using the approach from the Norwegian Standard 5832.
Chief Police Officer Odd Reidar Humlegård, open hearing regarding the Office of the Auditor General of Norway's report on protective security measures (The Norwegian Storting 2018).
As the quote above indicates, the new standard (NS 5832) became, in some areas, a point of reference for professional conduct within protective security work. It was one of SN's best-selling standards, according to interviewees.14 NSM recommended the standard in guidelines (2015, 2016a), using it as a basis for its risk assessments (2016b). The police and other government authorities also used it as a professional basis (such as The Norwegian Coastal Administration 2018; PST 2022). Risk assessments of the physical security of the Ministries were conducted based on the approach (Busmundrud et al. 2015).
Primarily after the standardisation, risk scholars and practitioners started debating the standard and its risk assessment approach. The Norwegian Defence Estates Agency (FB) commissioned a report from the Norwegian Defence Research Establishment (FFI) to compare the two approaches (NS 5814 and NS 5832). The report found weaknesses in both standards, but it especially criticised the lack of probability/likelihood as part of the expression of risk in the NS 5832 standard (Busmundrud et al. 2015). The report demonstrated some fierce disagreement, with one interviewee describing the controversy as "almost like a religious war" (2015, 45).
As part of the dissemination of the report, FFI organised a conference (FFI 2015).
One participant reflected on the sudden engagement:
There was little interest until it was published. Then something happened. Interesting. No-one wanted to participate in the [standardization] work, no-one cared during the hearing. But when it was published ... many people loved it. Finally! Most people. But there were also some who disagreed ... I think there were great discussions ... that brings the profession forward. It was a lot worse when we just sat there and no-one cared. There were 200 people at, and a waiting list for, a [FFI] conference on risk assessment ... In the SN committee, there were 8 people, maybe 4 showed up. Whoever wants to can show up at these committees. Suddenly, afterwards, 30 people showed up. Then they chose to show up. Now they wanted to participate.
The interviewee contrasts the process during the standardisation, with few people engaged, with the situation afterwards, with a lot of interest. Although the quote expresses enthusiasm for the attention from a wider audience, the debate mostly took place within each separate professional community. The academic risk assessment community wrote academically on risk assessment approaches for security threats (i.e. Maal et al. 2016; Askeland et al. 2017; Jore 2019); security professionals mainly worked practically with security, as consultants and within government, and did not engage with the academic or traditional risk-assessment community. The standard was disseminated through courses and practical security-risk assessment work.
The third phase represents a puzzle. Why did a policy debate emerge after the standardisation? In the first two phases, few, if any, people with a knowledge background in traditional risk assessment were involved in the process, except for the new member of the standardisation committee. After the standardisation, a broader risk assessment community came to regard the standard as relevant to its professional domain and became involved.
Why did they bother? Standards are voluntary and could, we may assume, simply be ignored. The dilemma is that standards are also important. When the approach was published as a Norwegian Standard, it was sanctioned as good practice and as "expert knowledge stored in the form of rules" (Jacobsen 2000, 41):
A completely different weight is gained when you can refer to a ... standard. That is beyond doubt. Referring to our guideline compared to a standard would probably mean a whole lot for people in charge ... If I had been in charge, I would have felt more confident that it was "best practice", that this is something you can trust and base your decisions on. A [government] guideline does not have the same weight, of course. When you can refer to a wider professional group that has agreed on a standard, that gives it a completely different weight.
The interviewee, coming from a governmental agency, sees standards as communicating quality and best practice to a much greater extent than government guidelines. As the chief police officer's quote at the beginning of this section illustrates, following standards communicates professional conduct. All the interviewees asked about standards see them as trusted to be good practice, although a few question whether this is really the case. Standards seem to convey neutral, apolitical, professional best practice, communicating "pure policy" not linked to any organisation with (narrow) interests and political agendas.
The risk assessment professionals who started to raise questions after the standard came out got involved, we may thus argue, because when the security-risk approach became a standard, the "bar was raised". It was no longer a policy idea floating around; it was not "just" a governmental guideline; the authority of the standardisation institute had sanctioned it as a sound, professional risk assessment approach.
One interviewee commented that some people felt like they had "been asleep at their desks", suddenly getting involved after the standard was published. There is no obligation to participate in standardisation, though. Standardisation is time-consuming, voluntary work. The flat decision-making structure does not privilege any position, making the reward and outcome of participation uncertain. The dilemma is that standards are "innocent" (voluntary, consensual) while potent (sanctioning good practice).
The standard drew criticism not only from the risk and safety community but also from security professionals, such as from FFI and FB (Busmundrud et al. 2015; FFI 2015). Given that standards are supposed to be based on broad consensus, why was the standard not reassessed within the framework of SN? If the legitimacy of standards builds on the premise of broad consensus, then consensus in a "narrow" group should not be enough.
There are two reasons for the standard not being reassessed, we argue. First, many security professionals were positive towards the standard. Since standards are voluntary and a market product once produced, one can buy them or not. If standards are supposed to convey consensus and good practice, however, market demand is not enough. The second reason for SN not reassessing the standard amid criticism may be found in SN's role as a mere process facilitator, a "neutral link between involved parties" (Standards Norway n.d.). All professional judgement is outsourced to committees and working groups. No-one mobilised SN or the SN committee; that is, no-one activated the political stream. SN thus did not relate to the policy concerns that had arisen regarding its own standard.
A few incremental changes did occur, however. NSM, the key government agency within protective security, eventually stopped promoting the standard as the (only) preferred one and was no longer represented on the NS 296 committee. Regarding the standard itself, nothing substantial happened, and the debate faded. The process was no longer a process.
Viewed through the MSA lens, the main active stream in the third phase was the policy stream. The policy question that had not previously been debated at any length, the quality of the risk assessment approach, now drew attention. The policy discussion was to some extent linked to the problem stream, in the sense that concerns were raised as to potential unfortunate consequences of two risk assessment standards, such as creating a need for two separate professional milieus in organisations. The standard was thus discussed both as policy and as a potential problem. Table 5 sums up the last phase.
Educational background and knowledge
Lastly, we turn to knowledge background, the part of the analytical framework that has so far not been systematically discussed. This is best observed across phases. In all the phases, the established knowledge base was challenged by new people with new perspectives. In the first phase, a knowledge base from public administration within the police/the military was challenged by the introduction of risk management. In the second phase, the building and construction knowledge base in the committee was challenged, first by the PE who introduced risk management and then by the new committee member whose knowledge background was from traditional risk assessment. In the third phase, the key knowledge base came from the risk assessment community and safety backgrounds (academics, public administration) but also from security (FFI, FB). At least partly, these perspectives had a stronger link to academia, and practically oriented security professionals mostly did not participate in their debates.
The mismatch in the different phases between the knowledge backgrounds of the establishment and the challengers meant that new people could formulate perspectives the "old" knowledge base had a weak basis for dealing with. We may counterfactually argue that this resulted in an "easier match" for the PEs than if the established knowledge base had been familiar with risk assessment in the two first phases. It might also explain why the process moved so quickly from policy to politics in the two first phases. Instead of further investigating policy options, something that requires ideas, concepts, and vocabularies to discuss with, the process moved into stalemates and thus into "politics".
Utilising Carstensen and Schmidt, we may conclude that in the two first phases it was power through ideas, "the capacity of actors to persuade other actors" (2016, 318); in the last phase it was "power in ideas", when a hegemony decided on which ideas were considered in the SN committee (2016). Put differently, "[p]ast policies empower some groups over others" (Bolukbasi and Yıldırım 2022, 12).
Summing up the three phases, in the end it is simply one dimension that makes this case into a separate process, into one case. This is the policy proposal (the risk assessment approach), "travelling" through all the phases. Additionally, the people who moved the proposal from the first to the second phase, the PEs, are essential in it being a process. Different formal institutional arrangements, new people, and new knowledge bases in each phase make for a fractured process; see Table 6.
Discussion and conclusion
In the following, this article zooms in on SSO standardisation, how it creates ambiguities, and how this links to the case in question. A brief discussion of the usefulness of the MSA framework follows before concluding remarks.
Before zooming in on the SSO standardisation, however, a short discussion of the first, governmental phase is called for. We noted above that there was little policy debate in the first phase, and different approaches to risk assessment were not investigated or elaborated upon. This shortcoming of the first phase spilled over to the second phase, as the only approach considered initially in the second phase was the risk assessment approach from the first phase. One could thus argue that the problem in phase 2 lies in phase 1. Although there is some merit to this argument, SSO standardisation builds on producing quality standards independent of government. The SSO standardisation process should thus withstand scrutiny, independent of the previous, governmental phase.
Standardisation: an ambiguous institutional arrangement
The two last phases were shaped by the standardisation institute, and we have pointed out ambiguous characteristics of standardisation as they have unfolded during the case. We noted that standards are "innocent" (voluntary, consensual) while simultaneously potent (sanctioning good practice) (Rasche and Seidl 2019). Standards from SSOs are also ambiguous in another sense: They are framed as making the world better and more efficient, a reason for time-consuming voluntary work. SSO standardisation is also legitimised as being good for business (Jacobsson and Brunsson 2000; Menonpublication 2018), implying interest-based reasons for participating in standard development. When a standard is finished, it becomes a market product sold by SSOs (Rasche and Seidl 2019). In this case, key actors started working in the private sector as consultants, and the standard became a product that could be utilised in consultancy practice. Standards are, in other words, both common goods and business opportunities.

The role of government in standardisation is also ambiguous. On the one hand, standards are sometimes government policy, as people from the government participate in standardisation, influencing the content of standards. Standards are also a normative and professional basis for governmental conduct (Olsen 2020). Public authority "plays an important role in legitimizing the genesis of standards" (Botzem and Dobusch 2012, 739; Gustafsson and Tamm Hallström 2018). On the other hand, government members do not have a privileged position in framing the standards, and the government is not responsible. Standards and standardisation are, and are not, government policy. This can be seen in the case under scrutiny. Actors from government moved in and out of the committee. More illuminating is how NSM changed from promoting the standard to merely presenting it as one possibility. Since NSM was not responsible, it did not have to work out a new policy, and it could simply change how it referred to the standard. Standardisation gives government organisations flexibility and room for manoeuvring.
Responsibility for standards is also ambiguous. Formally speaking, a standard is issued by SN. The claim to authority, and the de facto responsibility for the content of the standard, lies, however, in the committee. Committee membership is voluntary work. Members come and go; they are not responsible in a more fundamental way for the quality and impact of standards.15 In the case at hand, when the standard was criticised from a wider professional community, no one representing SN or the relevant committee felt responsible for going into the policy discourse. The standardisation thus led to a decoupling of policy and politics, leading to stream independence.
Hajer describes a situation in much policymaking today of an institutional void, where actors negotiate and conceptualise rules and boundaries during policymaking (Hajer 2003; Leong 2017). The potential risk of SSO standardisation might be better described as creating an institutional deficit. SSOs produce policy in a government-like institution, but the SSO is not structured such that it takes responsibility for policies in a government-like way. A mismatch may arise between what has been produced (standards) and the means to govern what has been produced (shifting, voluntary committee members). Responsibility may become diluted (Brunsson 2000), and it becomes unclear who governs (Gustafsson and Tamm Hallström 2018). This deficit may not occur in a single standardisation process.
Over time, however, the constantly evolving membership of committees (Wiegmann et al. 2022), and SSOs acting as mere process facilitators, may create an institutional deficit.
The small body of literature using the MSA on standardisation by SSOs points in a similar direction of institutional deficits. Rashid and Simpson (2019) conclude that SSOs have assumed a public policy-making role in wireless communication but have failed to fill this role. Tang et al. point to how SSOs have created "a plethora of rules and procedures" (2019, 502) in the international trade system, but where there is a "general lack of a centralized authority responsible for developing a consistent policy in the regulatory sphere" (2019, 514). Harcourt et al. note a spillover tendency into SSOs in international internet governance (2020). Actors are attracted to SSOs, they argue, because decision-making is seen as more efficient, but also because it changes who coordinates, and it filters the type of influence and the resources available. They all describe SSOs as institutions that offer government-like regulation, but where responsibility is diluted.
Utilising the comparison between the governmental (phase 1) and the SSO (phase 2) policymaking, we may observe a difference in how "firm" the formal institutions stood amid disagreement. When the process took place within government, the hierarchical structure and single veto-power resulted in a decision, creating "winners" and "losers". The rules of the formal system structured the policymaking. During the standardisation, on the other hand, the solution to the disagreement created an institutional restructuring. The success of the PEs in the second phase was, arguably, linked to the process being manoeuvred into something it was officially not. The norms of standardisation state that a broad group of experts create policy through consensus. This was reversed, in that the policy field was split in two so that the group who needed to agree was narrowed down to those who agreed on policy. The possibility to manoeuvre around the formal barriers thus became an important reason for the establishment of the standard. Rules and boundaries were negotiated during policymaking (Hajer 2003). One can argue that, contrary to the governmental phase, the SSO institution in this case did not only structure policymaking; the policymaking structured the institution.
Utilising the MSA on a least likely Norwegian case

We have introduced, and further developed, the MSA to investigate a policy process very different from the theory's origin. The study is exploratory, utilising an abductive approach, and it thus does not test the theory. A discussion of the "situational fit" (Timmermans and Tavory 2012) between the adjusted MSA and the case is, however, called for.
Although the case lacks key characteristics of most MSA studies, we find many MSA assumptions and concepts useful when investigating the case. One such premise is stream independence. Empirically, we note that stream independence was often not the case, such as in the movement from the policy stream to politics when agreement on policy was not accomplished. The case also shows examples of empirical stream independence, most notably in the third phase, when the policy stream was active but the potential political stream was not activated. Seeing stream independence as an analytical, not empirical, assumption makes the premise of stream independence useful, we argue. It makes it possible to investigate how the different parts of the policy process unfold, where the relationships between problems, policies, and politics are investigated, not assumed (Herweg et al. 2018).
Kingdon describes two ways governmental actors influence processes: through turnover of key personnel and through "turf" (2014). This fits the case well. New people decisively influenced the process, as described above. The same is the case with "turf": both the objection to a coordinated terror guideline in phase 1 and the differentiation between two types of risk have to do with boundaries or "turfs". The key MSA assumption that policymaking depends on active coupling by PEs is also supported by the study.
The case is investigated through a comparison between phases. The movement and manoeuvring from one phase to the next, the "venue shopping" (Ackrill et al. 2013), are, however, also important. To grasp policymaking today, we may need to see processes as part of larger structures of polycentric governance (Berardo and Lubell 2016) or governing regimes (Gustafsson 2020), where private and public governance interact. The MSA fits polycentric governance well, with the theory's independence of organisational boundaries, the premise of ambiguity and temporal sorting, analytical stream independence, and the PEs' ability to conduct venue shopping.
The many calls to incorporate institutional characteristics into the MSA framework are supported by the study. PEs are important; they may seize the moment, but they act within institutional structures paramount to the outcome.
All in all, the MSA enables a sensitivity to the coincidental, to simultaneity and timing (Christensen et al. 2018), but also to the strategic and opportunistic act of coupling the streams (Greer 2015). The study suggests that small policy processes, too, consist of different streams, here seen as logics, that need to be coupled in ripe policy windows by PEs. We may conclude that in a least likely case like the one in question, given some adjustments, the MSA sensitises the analysis in meaningful ways.
Concluding remarks
This article asked how we can account for the establishment of the standard and the role of institutional context in this regard. In the first phase, the risk assessment approach was institutionalised, we may conclude, because a new employee introduced it, and the Ministry utilised its single veto-power. This article notes the many ambiguities of SSO standardisation and how it creates possibilities for manoeuvring. In the second phase, key to the establishment of the standard was the introduction of a new concept (security-risk), circumventing policy disagreement and thereby creating an institutional differentiation between two professional "turfs". Strong institutional barriers in theory thus did not work as such in practice. We argue that the differentiation between responsibility for process (SN) and content (committee) makes the standardisation process vulnerable to stream detachment. In the third phase, SN (the potential political stream) did not relate to criticism of its own standard (the policy stream). Building also on other SSO research, we raise a concern for a potential "institutional deficit": a potential mismatch whereby SSOs produce policies but are not structured to take de facto responsibility for them.
The public policy literature is diverse but mainly centred on government processes. As this case indicates, we should not take the public/private distinction for granted, since it "has de facto become a boundary within a political order" (Frankel and Højbjerg 2007, 96, italics in original). The journey into a Norwegian standardisation process shows that the process stream perspective can shed light on some key characteristics of standardisation by SSOs. Importantly, we do not know if the case investigated represents a typical SSO process, an extreme one, or something in-between. Conclusions have been drawn in part pertaining to the case itself, in part analytically. Further research on SSO standardisation from a policy process perspective is called for, as it has become a global and highly influential phenomenon.
Supplementary material. To view supplementary material for this article, please visit https://doi.org/10.1017/S0143814X23000223
Table 6. Overview of the three phases, institutional dimensions, and the MSA
| 2023-09-14T15:16:33.179Z | 2023-09-12T00:00:00.000 | {
"year": 2023,
"sha1": "dd291bcfd57122c5e965b6926cdacf1b1fb233fb",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5ABB33A01AD3F6E96DB99E1B8D3D9C16/S0143814X23000223a.pdf/div-class-title-standardising-policy-in-a-nonstandard-way-a-public-private-standardisation-process-in-norway-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "98d2c3a957dc310622a22272982d9cc6f94357e5",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": []
} |
67775715 | pes2o/s2orc | v3-fos-license | Di(isothiocyanato)bis(4-methyl-4'-vinyl-2,2'-bipyridine) Ruthenium(II) Films Deposited on Titanium Oxide-Coated, Fluorine-Doped Tin Oxide for an Efficient Solar Cell
Dye-sensitized titanium oxide electrodes were prepared by immobilizing a novel ruthenium complex, di(isothiocyanato)bis(4-methyl-4'-vinyl-2,2'-bipyridine)ruthenium(II) [(NCS)2(mvbpy)2Ru(II)], or the ruthenium complex/sodium 4-vinylbenzenesulfonate, onto the surface of a titanium oxide-coated, fluorine-doped tin oxide (TiO2/FTO) electrode through a new electrochemically initiated film formation method, in which the electrolysis step and the film deposition step were performed separately. The incident photon-to-current conversion efficiency (IPCE) of the Ru complex film on a TiO2/FTO electrode was disappointingly low (1.2% at 440 nm). In sharp contrast, the Ru(II) complex/sodium 4-vinylbenzenesulfonate composite film deposited on the surface of a TiO2/FTO electrode showed a maximum IPCE of 31.7% at 438 nm.
Dyes are commonly immobilized onto a semiconductor (typically TiO2) surface through a functional group in the dye, such as a carboxyl, phosphono or thio group. Until now, Ru(II) complexes lacking these functional groups could not be candidates in DSSC research because they could not be immobilized; this drawback has limited the development of effective photosensitizers.
1H NMR spectra were measured on a Bruker Ascend 400 spectrometer. Chemical shifts were determined with respect to the residual solvent peak (δ in ppm, J in Hz). Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectra were acquired on a Bruker autoflex speed-KE in reflection mode with α-cyano-4-hydroxycinnamic acid (CCA) as a matrix. FT-IR spectra were recorded on a JASCO FT/IR-610 spectrometer as KBr pellets. Ultraviolet-visible absorption (UV-vis) spectra were measured on a Perkin Elmer Lambda 19 UV/vis/NIR spectrometer. Cyclic voltammograms were recorded on a CH Instruments 701 electrochemical analyzer at a sweep rate of 100 mV·s−1 by using 0.1 M tetrabutylammonium perchlorate (TBAP) as an electrolyte solution, a Pt wire as a counter electrode, and Ag/AgCl as a reference electrode.
2-(4'-Methyl-2,2'-bipyridin-4-yl)ethanol
4,4'-Dimethyl-2,2'-bipyridine (6.8 g, 36.91 mmol) was dissolved in dry THF (400 mL), and the solution was cooled to 0 °C with ice/water. Lithium diisopropylamide (20.3 mL, 2 M in THF, 40.60 mmol) was added dropwise to the solution. The purple solution was stirred at 0 °C for 2 h, and then a large excess of dry paraformaldehyde (5.5 g, 183 mmol as formaldehyde) was added all at once. The mixture was allowed to warm to room temperature and stirred for 24 h. The color of the solution gradually turned from purple to light yellow. The reaction was quenched with distilled water (200 mL), and the solution was concentrated to about 50 mL under reduced pressure. The residue was extracted with dichloromethane (3 × 50 mL), and the extracts were concentrated under reduced pressure. The remaining yellow viscous liquid was purified by column chromatography on neutral alumina; after washing off unreacted 4,4'-dimethyl-2,2'-bipyridine with hexane/ethyl acetate (19:1, v/v), the yellow-colored fraction eluted with hexane/ethyl acetate (2:1, v/v) was collected. The fraction was concentrated under reduced pressure to afford a light-yellow oil (2.27 g, 29%).
Immobilization
The immobilization was performed in three steps under argon atmosphere. All solvents used were purged with argon for 15 min.
Step 1: The electrochemical treatment of an electrode was performed in a one-compartment cell according to the three-electrode method (TiO2-coated, fluorine-doped tin oxide (TiO2/FTO) as a working electrode, a Pt wire as a counter electrode, and Ag/AgCl as a reference electrode) by using a CH Instruments 701 electrochemical analyzer. The electrodes were dipped in an electrolytic solution containing 0.1 M TBAP and swept between 0-n-0 V (n = 2, 4, 6, 8, and 10) at a rate of 100 mV·s−1, with argon bubbling during the electrolysis.
Step 2: After the electrolysis, the working electrode was soaked in a washing solvent for several seconds to remove adsorbents on the TiO2/FTO surface.
Step 3: The electrode was dipped into a solution of (NCS)2(mvbpy)2Ru(II) (1 mM) or the Ru complex/sodium 4-vinylbenzenesulfonate (1 mM and 1 mM, respectively) in DMF (30 mL) and kept for the appropriate number of hours in the dark under argon atmosphere. Then, the electrode was taken out, washed with acetone (30 mL), and dried at room temperature in air to give the Ru complex film.
Formation of a Di(isothiocyanato)bis(4-methyl-4'-vinyl-2,2'-bipyridine) Ruthenium(II) Film on the Surface of a Titanium Oxide-Coated, Fluorine-Doped Tin Oxide Electrode
Because the present Ru(II) complex, di(isothiocyanato)bis(4-methyl-4'-vinyl-2,2'-bipyridine)ruthenium(II) [(NCS)2(mvbpy)2Ru(II)], has no functional group reactive with TiO2, such as a carboxyl, phosphono or thio group, this complex cannot be immobilized onto the surface of a TiO2-coated, fluorine-doped tin oxide (TiO2/FTO) electrode with the conventional film formation methods adopted in the fabrication of DSSCs. Noting that the Ru complex has vinyl groups, we first tried to immobilize the Ru complex onto a TiO2/FTO surface by the electrolysis of a solution of the Ru complex. However, the attempt failed, most likely because of decomposition of the Ru complex during the electrolysis. After numerous trials, we finally found a stepwise, electrochemically induced film formation method that gives a film-like deposit of the Ru complex on the surface of TiO2/FTO. This method is composed of three steps: Step 1, the electrolysis of a TiO2/FTO electrode only; Step 2, the washing of the electrode with a solvent; Step 3, the immobilization of the Ru complex upon immersing the washed electrode in a solution of the Ru complex. Figure 1 schematically represents the immobilization procedure. The immobilization was found to be very sensitive to the history of the potential scanned onto the TiO2/FTO electrode (Step 1). Figures 2 and 3 show the cyclic voltammograms during the electrolysis of the electrodes in ranges of 0-n-0 V (n = 2, 4, 6, 8, and 10) at a rate of 100 mV·s−1 and the UV-vis spectra of the resultant electrodes. As shown in Figure 2, an irregular trace in the cyclic voltammograms was observed in the range of 1-3.5 V, independent of the potential applied, which would correspond to irreversible oxidation.
Moreover, it is clear from Figure 3 that the electrochemically initiated film deposition of the Ru complex took place only when the electrode potential-swept between 0-8-0 V at a rate of 100 mV·s−1 was used. As shown in Figure 4, the electrode was colored after Step 3, clearly indicating the film deposition. Moreover, the deposited film was insoluble in any solvents, including aprotic polar solvents such as DMF and DMSO. These changes strongly support the deposition of the Ru complex through some reaction.
The electrochemically initiated film deposition method was also affected by the solvents used in Steps 1-3. As shown in Figure 5, the solvent used in Step 1 influenced, to a considerable extent, the quantity of the Ru complex finally immobilized on the surface of a TiO2/FTO electrode; although dichloromethane gave a better result than acetonitrile or methanol, it was very hard to maintain steady conditions for the electrolysis because dichloromethane evaporated upon argon bubbling. Acetonitrile, which was better than methanol, was therefore used in Step 1. In Step 2, no film was formed on the surface of TiO2/FTO when the electrode was washed with DMF instead of acetonitrile. Because the Ru complex is soluble only in DMF and DMSO among common organic solvents, the immobilization (Step 3) was carried out by using DMF or DMSO. To our surprise, the film was formed on a TiO2/FTO surface only when DMF was used as the solvent. Figure 6 shows the effect of the dipping time in Step 3 on the electrochemically initiated film deposition. The absorption arising from the Ru complex on the surface of TiO2/FTO increased continuously as the dipping time was prolonged. In sharp contrast, when a TiO2/FTO electrode without any electrochemical treatment was directly dipped into a DMF solution of the Ru complex for 24 h, no absorption of the Ru complex was observed, indicating that no film formed on the TiO2/FTO surface without the electrochemical treatment.
Figure 5. Effect of solvents on the formation of the Ru complex deposit.
Active Species Generated on the Surface of TiO2/FTO by the Electrochemical Treatment
In order to identify the active species generated on the TiO2/FTO surface by the electrochemical treatment (Step 1), 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO), a typical radical scavenger, was added to the washing solvent in Step 2 (acetonitrile); after the electrolysis in Step 1, the electrode was transferred into acetonitrile containing TEMPO (50 mM) for 2 h in Step 2. Then, the electrode was immersed in a DMF solution of the Ru complex for 24 h (Step 3). As shown in Figure 7, the deposition of the Ru complex was considerably inhibited by TEMPO. On the basis of this result, we postulated the formation of a radical species, which may react with the vinyl group in the Ru complex. The radical formation would be consistent with the fact that irreversible oxidation took place during the electrolysis of the TiO2/FTO electrode. The real character of the active species on the TiO2/FTO surface and the reaction mechanism for the deposition of the Ru complex are not clear at present; we will report on them elsewhere.
Incident Photon-to-Current Conversion Efficiency of the Ru Complex Film
Incident photon-to-current conversion efficiency (IPCE), which defines the light-to-electricity conversion efficiency at a certain wavelength (λ), is one of the important parameters for evaluating dye-sensitized solar cell (DSSC) performance. The maximum IPCE of the Ru complex film obtained by the present method was disappointingly only 1.2% at 440 nm under the standard AM 1.5 G irradiation conditions. The low efficiency would arise from fast back electron transfer. In general, electron transport through films takes place via successive electron transfer between neighboring redox centers [21]. After an electron transfer from an excited dye to a neighboring ground-state dye, two kinds of charged moieties, positively and negatively charged, are formed. Strong electrostatic interaction between the two charged dyes can accelerate electron recombination and decrease the IPCE.
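For reference, the IPCE at a given wavelength is conventionally obtained from the short-circuit photocurrent density Jsc and the incident light power Pin; the relation below is the standard textbook definition rather than an equation quoted from this work:

IPCE(λ) [%] = 100 × 1240 × Jsc / (λ × Pin), with Jsc in mA·cm−2, λ in nm, and Pin in mW·cm−2.

For example, a photocurrent of 1.0 mA·cm−2 at 440 nm under 100 mW·cm−2 illumination corresponds to an IPCE of about 2.8%.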
Improvement of Incident Photon-to-Current Conversion Efficiency by Using the Ru Complex/Sodium 4-Vinylbenzenesulfonate Composite
We then reasoned, conversely, that the IPCE would be improved if the electron recombination between the layers were blocked. On the basis of this consideration, we let sodium 4-vinylbenzenesulfonate coexist with the Ru complex in the immobilization (Step 3), with the expectation that the sodium cations and the sulfonate groups would partially stabilize the electron-accepting dye moieties and the electron-donating dye moieties, respectively. Composite films consisting of the Ru complex and sodium 4-vinylbenzenesulfonate were similarly deposited by using a DMF solution containing the Ru complex and sodium 4-vinylbenzenesulfonate (1:1 molar ratio) in the immobilization (Step 3). Although the UV-vis absorption of the composite film decreased with the addition of sodium 4-vinylbenzenesulfonate, the IPCE of the composite film was dramatically increased, showing a maximum IPCE of 31.7% at 438 nm under the standard AM 1.5 G irradiation conditions. The highly improved IPCE would arise from the electronic neutrality of the composite even after charge separation, which was secured by sodium 4-vinylbenzenesulfonate; that is, sodium cations would neutralize electron-accepting dyes while sulfonate groups would neutralize electron-donating dyes. Thus, the composite film could provide much higher IPCE through the inhibition of back electron transfer.
Conclusion
A novel ruthenium complex, di(isothiocyanato)bis(4-methyl-4'-vinyl-2,2'-bipyridine)ruthenium(II) [(NCS)2(mvbpy)2Ru(II)], having vinyl groups was synthesized by a method similar to that of (bpy)2(NCS)2Ru(II), in which 4-methyl-4'-vinyl-2,2'-bipyridine (mvbpy) was used in place of 2,2'-bipyridine (bpy). The Ru complex could be immobilized on the surface of a TiO2-coated, fluorine-doped tin oxide (TiO2/FTO) electrode by a newly developed, electrochemically induced film formation method, which consists of three steps: the electrolysis of an electrode, the washing of the electrode, and the film deposition of the Ru complex upon immersing the electrode in a solution of the Ru complex. This is the first example of the immobilization of a Ru(II) complex without a functional group reactive with TiO2, such as -COOH, -PO(OH)2, -OH, and -SH. The Ru(II) complex film on TiO2/FTO thus obtained gave a maximum incident photon-to-current conversion efficiency (IPCE) of 1.2%. In contrast, the composite film consisting of the Ru(II) complex and sodium 4-vinylbenzenesulfonate showed a much higher maximum IPCE of 31.7% at 438 nm. The new method for the generation of active sites on the surface of a TiO2/FTO electrode by electrolysis and the new strategy for the stabilization of charge-separated dyes would contribute to the development of highly efficient dye-sensitized solar cells.
Acknowledgements
We acknowledge Dr. Masataka Ohtani and Dr. Yasuhiro Ishida (RIKEN, Advanced Science Institute) for the MALDI-TOF MS measurement. | 2018-12-21T04:54:25.711Z | 2013-05-14T00:00:00.000 | {
"year": 2013,
"sha1": "7a080b70a94201ef2eb66176a7eb682fd6ab5112",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=31973",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "722c72b5263f53068f89cb1a0c543f01baf599ed",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
225361091 | pes2o/s2orc | v3-fos-license | Changes Detected in Five Bioclimatic Indices in Large Romanian Cities over the Period 1961–2016
Bioclimatic indices are very important tools to evaluate the thermal stress of the human body. The aims of this study were to analyze the general bioclimatic conditions in ten big cities in Romania and to find out if there has been any change in five bioclimatic indices over a 56-year period: 1961–2016. The indices considered were: equivalent temperature, effective temperature, cooling power, universal thermal climate index and temperature-humidity index. They were calculated based on the daily meteorological data of air temperature, relative humidity, and wind speed recorded at 10 weather stations in Romania: Bucharest-Băneasa, Botoșani, Cluj-Napoca, Constanța, Craiova, Galați, Iași, Oradea, Sibiu and Timișoara. The features investigated for trend detection consisted of the frequency and length of the occurrence period for each class and for each index. The test used for trend detection was Mann-Kendall, and the magnitude of the trend (the slope) was calculated by employing Sen's slope method. The main results are based on frequency analysis. Three indices showed the comfort class as dominant, whereas the other two indicated cold stress conditions as dominant in the area. There was a shift from cold stress conditions to warm and hot ones for all the indices. The most stressful conditions for hot extremes did not indicate significant change. The climate in the big cities of Romania became milder during the cold season and hotter during the warm period of the year. The analysis of the length of each thermal class indicated mainly longer occurrence periods during the year for the comfortable or warm stress classes.
Introduction
Much interest is shown nowadays, in the era of climate change, in information about the local impact of climate change in every domain of our lives, from politics to public health and even science [1]. It is already widely known that one of the main consequences of rising global air temperature is the increasing frequency of intense, extreme events [2–4]. They lead, mostly during extreme seasons, to severe impacts on different social and economic sectors, causing serious health problems for the population and disturbing agriculture, transportation, the building industry and tourism activities. It is considered that the most affected climatic variables will be temperature, precipitation, humidity and wind, mostly because of the increase in their extreme values, in terms of frequency, intensity and persistence [5]. Furthermore, international specialists in territorial planning state that climate change also negatively affects quality of life in urban areas, especially through heat waves that occur in urban heat islands (UHI), leading to an increase in energy and water consumption as well as an increase in polluting substances. Additionally, there is a higher risk of infrastructure damage and of harm to winter tourism. For all these, the mitigation and adaptation strategies propose both general and specific measures, such as the continuous information and education of the population, preservation of natural resources, encouraging infrastructure and geo-system resilience processes, as well as expanding green spaces and parks, building green infrastructure, and the innovation and diversification of winter tourism through finding new solutions independent of snow [6]. However, some of the effects of UHI under high temperature conditions (e.g., heat wave intensification) on the human body are extreme thermal stress for the cardiovascular system and an increase in the mortality rate, especially for children and elderly people [7,8]. Therefore, some of the solutions proposed by the same institution are developing pilot projects for acclimatization, infrastructure and green spaces, as well as the development of local, regional and national climate change adaptation strategies.
In this context, common people show a high interest in knowledge, forecasts and predictions about weather conditions [5,9].
It has been believed since ancient times that weather plays an important role in the functioning of the human body, and the latest scientific research has proved it. The sensitivity of the organs and the psycho-physiological reactions are heightened by external atmospheric conditions. The ability to adapt to sudden changes in weather conditions differs from individual to individual and widely depends on genetic predisposition and specific characteristics [10]. Over the last 20 years, a multitude of studies have been conducted in order to explain the relationships between humans and their environment, trying to investigate how thermal comfort or thermal stress in the outdoor environment influences human behavior during daily activities. Two comprehensive reviews have been developed [11,12], whereas some other research papers have focused on specific issues. The great majority of these studies have focused on thermal sensations in urban areas [1,13–26].
One of the most realistic and objective ways to assess thermal perception and stress for humans is based on using appropriate indices [1]. Even though there are different opinions within the meteorological and biometeorological community about which indices should be used, it is generally considered that the more indices used, the better and more reliable the picture of the changes [5,27]. Indices, serving as tools for heat stress and thermal comfort analysis, have a major role in describing the combined effect of meteorological variables on humans in terms of thermal stress or comfort [28].
In Romania, the number of biometeorology studies is quite small, most of them being focused on bioclimatology and more likely presenting theoretical aspects [29–35]. In 2008, impressive work was done by Nicoleta Ionac and Sterie Ciulache, who produced the Romanian Bioclimatic Atlas [36]. The number of studies that integrated biometeorological indices is quite low. One of them focused on the analysis of some biometeorological indices in southern Dobrogea, which is one of the hottest and driest regions of the country [37]. Most recently, a bioclimatic analysis in the context of the urban environment and tourism was developed, based on the temperature-humidity index, described by the fractal Higuchi Dimension, and covering a period of 17 years (2001-2017) in a mid-sized city (Focșani). The study emphasized the increasing air temperature defined by this index [38].
The main objectives of this study are: (i) to assess the general bioclimatic conditions, based on five bioclimatic indices, in 10 of the largest cities of Romania, and (ii) to find out if there has been any change in the bioclimatic indices over a 56-year period (1961-2016) in terms of the duration of their occurrence period (DOP) and the frequency of occurrence, considering the number of days for each class (FO). With this study, we intend to present regional differences in changes in bioclimatic conditions in Romania.
Study Area
Romania is located in South-Eastern Europe, on the north-western shore of the Black Sea (extending over about 9° of longitude and approximately 5° of latitude), and it covers more than 237,000 km2 (Figure 1). With a generally temperate continental climate, there is a large diversity of climate sub-types induced by several important influences: extreme continental in the eastern and southern regions, more humid conditions generated by the moist oceanic air masses originating over the North Atlantic in the central and western regions, and an altitude-influenced climate in the mountain region. In general, they lead to specific bioclimatic conditions: milder and more humid in the western regions, and hotter in summer, colder in winter and drier over the entire year in the eastern and southern regions. The mean multiannual temperature varies from more than 11.0 °C in the southern regions and on the coastline to sub-zero values in the mountains. Extreme temperatures can reach more than 40 °C in the summertime and below −20 °C in the wintertime. The largest daily and annual temperature ranges are specific to eastern and southern Romania. In terms of precipitation, the multiannual amount is above 500 mm in the western and central regions, but between 300 and 700 mm in Eastern and Southern Romania [39]. In the mountains, the value rises above 1000 mm. Generally, the important difference in climatic conditions between the western and central regions on the one hand and the eastern, southern, and south-eastern ones on the other hand is imposed by the presence of the Carpathians, which are considered to be a natural barrier for western moist air masses toward Eastern Europe [8]. In Romania, most studies conducted on climate change focused on air temperature and precipitation, and they revealed a significant increase in air temperature (mean and extreme temperature indices) and no significant change for precipitation [8,39–43]. Sunshine hours significantly increased, especially in spring and summer, whereas wind speed significantly decreased in most of the locations considered [41]. No study on changes in relative humidity or cloudiness has been conducted in Romania so far.
Data Used
The historical data derived from direct observations at 10 weather stations were used to calculate a set of five bioclimatic indices over a 56-year period (1961-2016). The meteorological parameters employed for indices calculation were daily average data for: air temperature (T) (°C), relative humidity (RH) (%), wind speed at 10 m (v10) (m/s), and cloudiness (N) (%). In this research, we also used the daily maximum air temperature (TX) and the daily minimum relative humidity (RHmin).
The weather stations used in the study provide good spatial coverage of the whole country and include all the climatic regions of Romania. Their location is presented in Figure 1, and their geographical coordinates and elevation are listed in Table 1. Since all the weather stations considered are inside the built area of the cities (except for the Craiova weather station, which is located 200 m away from the built area limit), we consider that they capture the weather conditions in the cities' low-rise building areas quite well. The climatic data for this study were derived from four main sources. Most of them were provided by the National Meteorological Administration (NMA). The missing values for two of the weather stations were supplemented with data from existing online databases. The missing N data for the Bucharest-Băneasa weather station for 2001 were completed with data available on www.meteomanz.com. For the Cluj-Napoca weather station, the T values were freely downloaded from the European Climate Assessment and Dataset project (ECA&D) database (non-blended data) [40] and from www.meteomanz.com [44]. For 2016, the values of the parameters v10, RH and N were downloaded from the databases www.meteomanz.com and www.rp5.ru [45]. Moreover, for all the weather stations and the entire period, the TX values were extracted from the ECA&D and www.meteomanz.com databases [46,47]. Although four databases were used, data homogeneity was ensured by the common source of the data: raw SYNOP messages issued by the weather stations considered. Random checking was performed for the common periods where possible.
Biometeorological Indices
In the present study, we analyzed the following bioclimatic indices: the equivalent temperature (TeK), the effective temperature (TE), the cooling power (H), the Universal Thermal Climate Index (UTCI), and the temperature-humidity index (THI). They are widely used for the assessment of bioclimatic conditions in different regions [18,20,23,47–49]. For some cities in Romania, these indicators have been calculated and presented in other short studies [50,51].
To calculate all the indices mentioned above, as well as their parameters, the freely available software BioKlima 2.6 (https://www.igipz.pan.pl/bioklima.html) [52] was used. For wind speed conversion from the 10 m altitude (v10) to 1.2 m (v), we employed the logarithmic formula for wind speed extrapolation (1), available at https://websites.pmc.ucsc.edu/~jnoble/wind/extrap/ [53]:

v = v_ref × ln(z/z_0) / ln(z_ref/z_0) (1)

where v is the velocity to be calculated at height z; z is the height above ground level for velocity v; v_ref is the known velocity at height z_ref; z_ref is the reference height where v_ref is known; and z_0 represents the roughness length in the current wind direction. Below, we briefly present the main information related to the indicators we calculated (focusing on the calculation formula and the bioclimatic comfort classes).
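As an illustration only (the authors used BioKlima rather than custom code), Equation (1) is straightforward to implement; the roughness length z0 below is a placeholder that would have to be chosen for the actual terrain around each station:

import math

def extrapolate_wind_speed(v_ref, z_ref=10.0, z=1.2, z0=0.1):
    # Logarithmic wind profile, Equation (1): convert the wind speed measured
    # at z_ref (10 m) to the target height z (1.2 m).
    # z0 is the surface roughness length in metres (assumed placeholder value).
    return v_ref * math.log(z / z0) / math.log(z_ref / z0)

print(round(extrapolate_wind_speed(3.0), 2))  # ~1.62 m/s for a 3 m/s wind at 10 m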
TeK Calculation
TeK evaluates the influence of air temperature and water vapor pressure (e) on the human body. This index was introduced by Dufton [54,55], and Bedford described its use [56,57]. For its calculation, Equation (2) was used:

TeK = T + 1.5 × e (2)

where T is the air temperature (°C) and e is the water vapor pressure (hPa). The vapor pressure e was also derived from air temperature and relative humidity by using the BioKlima software [52], as Equation (3), where T is the air temperature and RH is the relative humidity.
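A minimal sketch of the TeK computation in Python; the Magnus-type saturation vapor pressure formula below is a common meteorological approximation and an assumption on our part, since the exact form of Equation (3) implemented in BioKlima is not reproduced here:

import math

def vapor_pressure_hpa(T, RH):
    # Actual water vapor pressure e (hPa) from air temperature T (deg C)
    # and relative humidity RH (%), via a Magnus-type approximation
    # (assumed form; BioKlima's exact Equation (3) may differ slightly).
    e_sat = 6.112 * math.exp(17.62 * T / (243.12 + T))  # saturation pressure, hPa
    return e_sat * RH / 100.0

def tek(T, RH):
    # Equivalent temperature TeK = T + 1.5 * e (deg C), Equation (2).
    return T + 1.5 * vapor_pressure_hpa(T, RH)

print(round(tek(25.0, 60.0), 1))  # ~53.4 deg C on a warm, humid day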
TE Calculation
TE evaluates the combined influence of air temperature, wind speed, and the relative humidity of the air. The index establishes a relationship between the identical state of the human body's thermoregulatory capacity (the perception of warm and cold stress sensations) and the differing temperature and humidity of the surrounding environment [48]. It was calculated by using two different equations, (4) and (5), depending on the wind speed values [49]:

- for v ≤ 0.2 m/s:

TE = T − 0.4 × (T − 10) × (1 − RH/100) (4)

- for v > 0.2 m/s:

TE = 37 − (37 − T)/[0.68 − 0.0014 × RH + 1/(1.76 + 1.4 × v^0.75)] − 0.29 × T × (1 − RH/100) (5)

where T is the air temperature (°C), v is the wind speed (m/s), and RH is the relative humidity (%).
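A compact implementation of the two wind-speed regimes; the closed forms above follow the Missenard-type formulation commonly cited for the effective temperature, which we take as a working assumption for Equations (4) and (5):

def effective_temperature(T, RH, v):
    # Effective temperature TE (deg C) from air temperature T (deg C),
    # relative humidity RH (%) and wind speed v (m/s) at 1.2 m.
    # Two regimes split at v = 0.2 m/s (Missenard-type formulation, assumed).
    if v <= 0.2:
        return T - 0.4 * (T - 10.0) * (1.0 - RH / 100.0)
    return (37.0
            - (37.0 - T) / (0.68 - 0.0014 * RH + 1.0 / (1.76 + 1.4 * v ** 0.75))
            - 0.29 * T * (1.0 - RH / 100.0))

print(round(effective_temperature(30.0, 70.0, 1.5), 1))  # ~26.2 deg C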
H Calculation
The H (W/m2) index was calculated according to Hill's empirical equations (6) and (7), depending on the wind speed [49,58–60]: Equation (6) applies at v ≤ 1 m/s and Equation (7) at v > 1 m/s, where T is the air temperature and v is the wind speed. For this index, thermal sensations are assessed according to the modified scale developed by Petrovič and Kacvinsky [49,58,59].
UTCI Calculation
UTCI is the most comprehensive index for calculating heat stress in the outdoor environment [61,62]. Its values are calculated as a polynomial regression function of up to the sixth order, and the input data include meteorological variables (air temperature, mean radiant temperature, water vapor pressure or relative humidity, and wind speed at an elevation of 10 m) and non-meteorological ones (a metabolic rate of 135 W m−2, a walking speed of 1.1 m s−1, and the thermal resistance of clothing) [57,60,63].
The mean radiant temperature was calculated using Equation (8) [63], where R is the solar radiation absorbed by the outer layer of clothing on a standing man. R was calculated using the statistical SolAlt model [63–67], based on Equations (9) and (10) [63].
In Equations (9) and (10), T is the air temperature, e is the vapour pressure, and Tg is the temperature of the ground surface, calculated as in Equations (11)-(13) [63], which distinguish cloudiness and temperature cases (e.g., N < 80% with T ≥ 0 °C, and N < 80% with T < 0 °C), where T is the air temperature and N is the cloudiness.
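Because the sixth-order UTCI regression polynomial is far too long to reproduce, a practical route in code is an existing implementation; the sketch below assumes the open-source pythermalcomfort Python package, which is our illustration and not used in the paper itself (the authors used BioKlima):

# pip install pythermalcomfort  (assumed third-party package, not used by the authors)
from pythermalcomfort.models import utci

# Inputs: air temperature tdb (deg C), mean radiant temperature tr (deg C),
# wind speed v at 10 m (m/s), relative humidity rh (%)
print(utci(tdb=30, tr=35, v=2.0, rh=60))  # UTCI equivalent temperature, deg C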
THI Calculation
THI is an index developed for the warm and hot periods of the year, and it is the only index used by the NMA in Romania in the national weather forecast and for releasing early warning messages for heat stress during summertime [35].
In this study, it was calculated based on Equation (14), considering the daily maximum temperature (TX) and the daily minimum relative humidity (RHmin).
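For illustration, a sketch of the THI computation in Python; since the body of Equation (14) is not reproduced above, the formula below is the Thom-type temperature-humidity index in Fahrenheit units, as commonly cited for the Romanian NMA, and should be read as our assumption rather than as the paper's exact equation:

def thi(tx_celsius, rh_min):
    # Temperature-humidity index from the daily maximum temperature TX (deg C)
    # and the daily minimum relative humidity RHmin (%).
    # Thom-type formulation in Fahrenheit units (assumed form of Equation (14));
    # values above about 80 units are typically treated as heat stress.
    tf = tx_celsius * 1.8 + 32.0  # convert to degrees Fahrenheit
    return tf - (0.55 - 0.0055 * rh_min) * (tf - 58.0)

print(round(thi(35.0, 40.0), 1))  # ~82.8 units for a hot, fairly dry day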
Thermal Comfort and Discomfort Classes
Each of the thermal indices has been described as having a specified number of classes, usually ranging from extreme cold to extreme heat discomfort. In most situations, the classes do not coincide among the indices, and this makes comparison among the indices very difficult. The thermal comfort classes of the indicators used for this study are summarized in Table 2.

Table 2. Classes for thermal comfort and discomfort of the used indices.
Calculating Indices Parameters
Investigating features such as FO and DOP is of great importance to highlight the possible period of a certain thermal comfort/stress class during the year and to identify any change over the analyzed period. Therefore, for all the indices, we first identified the class (thermal comfort/discomfort class) of each day of each year in the 56-year period. After that, the FO was calculated for each year. FO is the number of days included in each class (a certain comfort/discomfort condition) between the first and the last day of occurrence of the specified class in each year.
For the DOP calculation, the first and the last days of occurrence of each class for every year were detected. The DOP was calculated as the total number of consecutive days between the first and the last day of occurrence for each class each year, no matter if all the days belonged to the same class or not.
As a calculation procedure, for the warm and hot stress classes, a number from 1 to 366 was assigned to each day of the year, from the 1st of January to the 31st of December. For the cold stress classes, however, the numbering was made differently, so that the DOP, calculated as the difference between the last day and the first day of occurrence, could be homogeneous and not include long missing periods. For instance, the first three classes of comfort for the H index are missing every year during the summer months; hence, the numbering for these classes started from the 1st of September and ran to the 31st of July, so that the length of the DOP could be as exact as possible.
All these operations were made using the Macro option in Microsoft Excel 2013 software.
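The same bookkeeping can be reproduced outside of Excel; the following minimal Python sketch (our illustration, not the authors' macro) computes the FO and DOP of one class for one year from a daily series of class labels:

def fo_and_dop(daily_classes, target_class):
    # daily_classes: list of class labels, one per day, already ordered
    # according to the class-specific day numbering described above.
    # FO = number of days falling in target_class;
    # DOP = number of days between the first and last occurrence, inclusive.
    days = [i for i, c in enumerate(daily_classes) if c == target_class]
    if not days:
        return 0, 0
    return len(days), days[-1] - days[0] + 1

# Toy example: a 10-day series with two classes
series = ["cold", "cold", "comfort", "cold", "comfort",
          "comfort", "cold", "comfort", "cold", "cold"]
print(fo_and_dop(series, "comfort"))  # (4, 6): 4 days, spanning day 3 to day 8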
Trend Detection and Mapping Methods
For trend detection in the two features of the considered indices, we used the Mann-Kendall test [70,71], and the magnitude of the trend (the slope) was calculated by employing Sen's slope method [72]. The significance level was set at 0.05.
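For readers without access to XLSTAT, both statistics can be computed directly; below is a compact NumPy/SciPy sketch of the Mann-Kendall test (normal approximation, tie correction omitted for brevity) and Sen's slope, offered as an illustration rather than a reproduction of the authors' exact setup:

import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    # Mann-Kendall trend test (normal approximation, no tie correction).
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    if p < alpha:
        return ("increasing" if s > 0 else "decreasing"), p
    return "no trend", p

def sens_slope(x):
    # Sen's slope: median of all pairwise slopes (x[j] - x[i]) / (j - i).
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.median([(x[j] - x[i]) / (j - i)
                      for i in range(n - 1) for j in range(i + 1, n)])

# Synthetic 56-year series with a weak upward trend
years = np.arange(56)
series = 0.05 * years + np.random.default_rng(0).normal(0.0, 0.5, 56)
print(mann_kendall(series), sens_slope(series))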
All the calculations were performed by employing XLSTAT ProPlus software (Addinsoft, Paris, France), and the spatial distribution of trend types was mapped using ArcMap 10.2 software (ESRI, Bucharest, Romania).

FO and DOP Spatial Distribution

For most of the cases, the DOP exceeds 150 days/year, with the cool and slightly cool classes exceeding 200 days/year, or even 250 days/year in western Romania; thus, these conditions are met almost all year round, except for hot summer days, which are characterized by a DOP varying from 100 to 150 days/year for slightly sultry sensations and by less than 50 days/year (apart from Constanța) for sultry conditions. However, the DOP values are considerably higher for the extra-Carpathian regions, especially for those located in plain and low hilly or tableland areas. The lowest values characterize the center of the country (the Cluj-Napoca and Sibiu weather stations) (Figure 2).
Changes Detected in TeK Index Parameters
Increasing trends were dominant for both parameters considered. Most of them were found to be statistically insignificant when all the data sets were considered: 53% of the FO series and 72% of the DOP (Figure 3a,c).
The analysis revealed that the FO by comfort/discomfort classes increased for half of the locations, and for 20% of them the increase was found to be statistically significant (Figure 3a). These increases were specific mainly to the slightly sultry and sultry condition classes in the eastern and western regions (Figures 3b and 4). Out of the total data series, 40% indicated downward trends and 17% were characterized by statistical significance (Figure 3a). These were specific mainly to the cold stress and comfortable sensation classes.
In the case of the DOP, only 15% of the data series indicated significant changes (10% increasing and 5% decreasing). In general, the DOP for cool and cold conditions decreased, whereas for the other classes (slightly cool, comfortable, slightly sultry and sultry) it increased, yet not statistically significantly. The only region in Romania where a significant change was detected is the south-eastern one, located on the coast of the Black Sea: the length of the comfortable and hot stress periods increased over the 56-year period. A few stationary trends were detected, but they were not spatially coherent (Figure 4).
FO and DOP Spatial Distribution
For the considered cities, the extreme heat discomfort class of TE (Hot) had a very low frequency at all stations, with a maximum of 2 days/year. Since it seemed irrelevant, we decided not to include it in this study.
The best represented class for the FO was the one corresponding to very cold conditions: 90-140 days/year as average values, depending on the location. The maximum values reached 140-180 days/year. It was followed by the cool and cold sensation classes, with more than 90 days/year as average values. According to this index, the comfortable or warm stress sensation classes were less frequent, usually less than 15-20 days/year (Figure 5).
By analyzing the DOP, we found that the most representative class was the one characterized by cold sensation, which was present throughout the entire year, with more than 300 days/year. It was followed by the very cold and cool ones, covering about 200 days/year. Based on this index, the comfortable and warm conditions were the least frequent and did not exceed maximum values of 60 days/year (Figure 5).
Changes Detected in the TE Index Parameters
The analyses revealed that most of the detected trends for the TE index increased, reaching up to 50% of the total data series considered for the FO parameter in each class and 53% of the series for the DOP; statistically significant trends represent more than 30% in both cases (Figure 6a,c). The statistically significant decreasing ones were specific to the FO in the case of very cold and cool days and, respectively, to the DOP of very cold conditions. Significant increases were found in cold, fresh, comfortable and warm conditions for both FO and DOP (Figure 6b,d).
While the very cold and cool stress classes had the highest frequencies in terms of the number of days over the considered period, this parameter indicated a downward trend for all the locations, and for most of them the trend was statistically significant. For the fresh, comfortable and warm sensation classes, the great majority of upward trends were found to be statistically significant (Figure 7). When attention was paid to the DOP, negative trends were detected for all the locations considered during very cold days; for eight of them, the decrease was statistically significant. For the other classes, significant increases were specific mainly to the eastern and south-eastern cities of the country, while for comfortable and warm conditions, significant increases were specific to the western cities. In the case of the fresh, comfortable and warm classes, an increasing trend, statistically significant or not, was dominant. Only a few series indicated no change.
FO and DOP Spatial Distribution
This index places most of the days in the "middle" classes (cool, slightly cool, neutral and hot sensation), with the extreme ones (extremely cold and windy, very cold and very hot) lasting less than 10 days/year. The cold conditions had the highest frequency in eastern Romania (Constanța, Galați and Iași). The class that covered the majority of days was the neutral one (80-130 days/year). The other three classes (cool, slightly cool and hot) did not differ very much with regard to FO values: the averages were not higher than 100 days/year. By class, the minimum FO values ranged from 50 days/year, at Timișoara, for cool conditions, to 55 days/year for hot days, at Galați, and to 70 days/year for the slightly cool class (Figure 8).
The mean DOP varied between 300 and 350 days/year at most weather stations for the cool, slightly cool and neutral conditions, meaning that they were present over almost the entire year. For the cold and hot classes, it amounted to around 100-150 days/year. In some cases, the values were much higher: for cold days at Iași, they exceeded 300 days/year. For the extreme classes, the values were less than 50 days/year (Figure 8).
Changes Detected in the H Index Parameters
The trend type for which the FO had the highest share was the significantly decreasing one, covering 31% of the data sets, followed by the significantly increasing one (24% of the data sets). The remaining series were equally distributed between non-significant upward trends, non-significant downward trends and no trend (15% for each type) (Figure 9a).
Slightly cool, hot and very hot conditions significantly increased with regard to FO in most of the cities (e.g., 7 out of 10 for slightly cool conditions) (Figure 10).
For the DOP series, decreasing trends were dominant (58% of the series). Among them, 37% were detected to be statistically significant (Figure 9c), and they characterized especially the cold stress classes, from extremely cold to neutral (Figure 9b). Increasing trends were detected for classes from cool to very hot conditions, for both parameters. The cold sensation class was characterized by a significant decrease for all the locations considered (Figure 9b,d).
The DOP indicated negative slopes, equally significant and insignificant, for the extremely cold class. The significant decrease was specific to eastern and southeastern Romania. Additionally, no trend characterized four cities, located mainly in the southwestern part of Romania, but this might be attributed to the lack of data (due to extremely low values, the trend could not be calculated) (Figure 10).
FO and DOP Spatial Distribution
For this index, no extreme heat stress days were identified over the entire period considered in the analyzed cities of Romania. Only four very strong heat stress days were registered at one single location; thus, due to the extremely low frequency, the two classes (extreme heat stress and very strong heat stress) were not considered for further analysis.
According to the UTCI, in terms of FO, most days were characterized by no thermal stress (140-180 days/year). The slight cold stress days ranged from 60 days/year, at Galați, to 80 days/year, at Sibiu; the number of moderate cold stress days varied between 50 days/year, at Timișoara, and 80 days/year, at Iași; and moderate heat stress characterized, on average, 30-60 days/year. The remaining classes covered less than 50 days/year for all the locations (Figure 11).
The DOP reached the maximum length for slight cold stress at all the weather stations, except for Bucharest, reaching up to almost 350 days/year, followed by the no thermal stress class. Moderate cold stress was recorded over 350 days/year only at three stations: Botoșani, Galați and Iași; it was present elsewhere for less than 200 days/year. The interval when strong and very strong cold stress days can occur did not exceed 150 days at any station. The strong heat stress and extreme cold stress conditions were found to last no more than 50 days/year; however, extreme cold stress occurred at four weather stations over the entire period analyzed (Figure 11).
Changes Detected in the UTCI Index Parameters
For both the FO and DOP of the UTCI index, no change was the most common outcome, detected in 41% of the FO series (Figure 12a) and 42% of the DOP series (Figure 12c). This could be explained by the small number of days attributed to extreme thermal (hot or cold) discomfort conditions. Significant changes were found in 38% of the data sets considered for the FO and in 37% of those for the DOP. The increasing trends were dominant for the heat stress classes (mainly for strong and moderate heat stress), whereas decreasing trends were dominant for the cold stress conditions, especially in the case of the strong and very strong cold classes (Figure 12b,d). The DOP also indicated significant increasing trends for the heat stress classes, as well as for the slight and moderate cold stress ones. For the no thermal stress class, the statistically non-significant increase indicated a higher rate than the significant one.
As for the spatial distribution, the DOP seemed to increase significantly for strong heat stress in the western and southern regions of the country; for the other regions, no change was detected. Moreover, the no thermal stress class was found to have a mainly positive trend all over the country, except for the northernmost city (Botoșani), for which an insignificant decrease was detected. The upward trends were significant in the extreme western and eastern cities. Statistically significant negative slopes were specific to most cities for the duration of the periods characterized by strong and very strong cold stress. Only two cities experienced no trends (Oradea and Bucharest) for the strong cold stress class, and just a single one (Oradea) had a downward trend for the very strong cold stress class (Figure 13). A statistically significant increasing trend for the FO in the case of the strong and moderate heat stress classes, as well as for the slight cold stress class, was detected for the great majority of the locations considered. The significant decreasing trends were specific to the strong, very strong, and extreme cold stress classes (Figure 13).
The great number of locations with stationary trends detected in the extreme cold stress class, for both FO and DOP (Figure 13), may be explained by the very low number of days characterized by the given conditions for both parameters.
FO and DOP Spatial Distribution
According to the THI classification, most days of the year belong to the comfortable sensation class, their average number varying between 140 days, at Craiova, and 150 days, at Sibiu. The chill and cold sensation classes characterized, on average, approximately the same number of days, which varied with the location from 50 to 70 days/year. Very cold days covered between 30 (Constanța) and 60 days/year (Botoșani, Cluj-Napoca and Iași). Hot days reached their highest mean value of 50 days/year in Bucharest, and their lowest one, 25 days/year, in Sibiu. The mean multiannual number of very hot days did not exceed 20 days/year, on average, whereas their maximum values went up to 50 days/year. Sultry conditions occurred occasionally (less than 5 days/year) (Figure 14).
The thermal classes with the longest DOP are those characterized by chill and comfortable sensations, which cover more than 250 days/year, as well as the cold one, which mostly accounted for 150 days/year at each location. The occurrence period of the hot and cold classes can exceed 100 days/year, while sultry days were quite rare in the region; they only occurred in four cities, for less than 10 days/year.
Changes Detected in the THI Index Parameters
Half of the trends identified for the FO data series were increasing, and among them, 36% were found statistically significant (Figure 15a), whereas the other half was shared between decreasing trends and no trend: 34% of the series had a downward trend (among them, 21% statistically significant) and the remaining series indicated no change. From a more detailed perspective, the significant increase was mainly specific to hot and very hot conditions (90% and 80% of the locations, respectively), whereas a statistically significant decrease was dominant for the very cold (90%) and comfortable (60%) thermal conditions. For the sultry, chill and cold sensations, the great majority of the series indicated no significant change (Figures 15b and 16).
Further analysis revealed that 62% of the DOP series indicated an increase, and 26% of them were detected to be statistically significant (Figure 15c). These were specific mainly to very hot, hot, comfortable and chill conditions (Figure 15d) and to the eastern cities (Figure 16). The length of the occurrence period showed no change for extreme thermal conditions for the majority of the locations considered: sultry (90%), cold and very cold (40%) (Figures 15d and 16).
Discussion and Conclusions
With this study, we intended to present regional differences in the changes in bioclimatic conditions in Romania. The detailed analysis of each index trend points out that, in general, both parameters considered in this study, the frequency and the length of the occurrence period, indicated a similar pattern for the majority of indices and their thermal condition classes. Since the analyzed cities are spread across the whole country, the similar patterns identified lead to the conclusion that, although the study was developed on a local scale, its findings can be extended to the regional scale.
From 1961 until 2016, regardless of the calculation method, its purpose or the number of classes, the trend analysis for the FO reveals a shift from cold stress conditions to warm and hot ones for each index. However, the most stressful hot extreme conditions did not indicate significant change. Under these circumstances, one can say that the climate in the big cities of Romania became milder during the cold season and hotter during the warm periods of the year. Furthermore, all indices, except for TE, indicated a general negative trend in the number of comfortable days in terms of thermal sensation. For the DOP, the conclusion was similar, with a longer occurrence interval during the year for the comfortable or warm stress classes, except for the H index.
However, based on the FO analysis, comfort (H, UTCI and THI indices) or even cold stress (TE and TeK indices) conditions were still dominant in the area. Thus, the increasing trends detected will intensify the hot stress, according to some indices, or, on the contrary, will lead to a more comfortable climate, based on other indices.
Given previously obtained results indicating significant changes in temperature, especially in extreme hot temperatures and sunshine hours, in Romania over a period similar to that considered for this study, temperature and sunshine seem to be the triggering factors for the identified changes in the bioclimatic indices.
Since the great majority of the weather stations are inside the built areas of the selected cities, they are representative of peripheral urban areas with low-rise buildings and green spaces around them. The general assumption is that, for all the indices, under the impact of urban heat islands, in the high-rise or dense building areas specific to central areas and big neighborhoods, the hot stress intensifies, especially during extreme high temperature conditions (such as heat waves), and the cold stress diminishes during the cold season. All the cities considered for this study are mid-size and large cities: from more than 100,000 inhabitants to more than 2,000,000 inhabitants (Bucharest); however, most of them shelter between 200,000 and 350,000 people. In their central areas, we can expect that, during the warm season, under the impact of high-rise buildings and rush-hour traffic conditions [73,74], the bioclimatic conditions change considerably compared to those identified in the peripheral areas and become more stressful in terms of hot stress. For the assessment of the real bioclimatic conditions in all types of local climate zones of the cities, installing urban climate monitoring systems is of crucial importance. They will allow identification of the most critical areas in terms of thermal stress (hot spots) as well as the most comfortable ones. Moreover, the difference between the results of the indices considered for this study leads to the conclusion that a scientific validation of these indices against people's perception, on a national scale, for the bioclimatic conditions of Romania should become a research priority. After validation of the thermal stress classes, the index identified as the most relevant should be used for general biometeorological forecasting and considered for implementation in the early warning system for extreme weather events. In the case that different indices are found appropriate for different regions, a regional approach should be considered and implemented by the Regional Weather Forecast Centers for a more efficient protection of the population. Under these circumstances, this study could become an extremely useful tool for local and regional authorities in order to adopt the best adaptation measures in terms of thermal stress in the urban areas considered. | 2020-08-06T09:06:09.374Z | 2020-08-03T00:00:00.000 | {
"year": 2020,
"sha1": "d51c156cc444369bf5f64100eca21379dd631c03",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/11/8/819/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1c9bb6c28e72739e1c4ebcbca02eb2c961670777",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
1719292 | pes2o/s2orc | v3-fos-license | Polynomial Poisson structures on affine solvmanifolds
An $n$-dimensional Lie group $G$ equipped with a left invariant symplectic form $\omega^+$ is called a symplectic Lie group. It is well-known that $\omega^+$ induces a left invariant affine structure on $G$. Relatively to this affine structure, we show that the left invariant Poisson tensor $\pi^+$ corresponding to $\omega^+$ is polynomial of degree 1 and any right invariant $k$-multivector field on $G$ is polynomial of degree at most $k$. If $G$ is unimodular, the symplectic form $\omega^+$ is also polynomial and the volume form $\wedge^{\frac{n}{2}}\omega^+$ is parallel. We show also that any left invariant tensor field on a nilpotent symplectic Lie group is polynomial; in particular, any left invariant Poisson structure on a nilpotent symplectic Lie group is polynomial. Because many symplectic Lie groups admit uniform lattices, we get a large class of polynomial Poisson structures on compact affine solvmanifolds.
Introduction and main results
Recall that an affine manifold is a differential manifold $M$ together with a special atlas of coordinate charts such that all coordinate changes extend to affine automorphisms of $\mathbb{R}^n$. These distinguished charts are called affine charts. The data of a flat and torsion free connection $\nabla$ on a manifold $M$ is equivalent to the data of an affine structure. A tensor field on an affine manifold $M$ is called polynomial if, in affine coordinates, its coefficients are polynomial functions. A Poisson structure on an affine manifold is called polynomial if the space of local polynomial functions is closed under the Poisson bracket; in an equivalent way, this means that the associated Poisson bivector is polynomial. For some general results on polynomial tensor fields see [4,6,7,8,16]. Let us describe briefly the affine structure associated to a Lie group endowed with a left invariant symplectic form. This affine structure is the context in which we will state our main results on the polynomial nature of some tensor fields and some Poisson structures. Let $G$ be a Lie group with Lie algebra $\mathcal{G} = T_eG$, where $e$ stands for the unit of $G$. For any tensor $T$ on $\mathcal{G}$, we denote by $T^+$ and $T^-$ respectively the left invariant tensor field and the right invariant tensor field on $G$ associated to $T$. If $\omega$ is a scalar non degenerate 2-cocycle of $\mathcal{G}$, the differential 2-form $\omega^+$ on $G$ is a left invariant symplectic form on $G$ and $(G, \omega^+)$ is called a symplectic Lie group. Symplectic Lie groups were studied by several authors; see, for instance, [1,2,3,10,11,13,14]. A connected Lie group $G$ is symplectic if and only if its universal covering $\widehat{G}$ admits an etale representation by affine transformations of $\mathcal{G}^*$ with linear part the coadjoint representation of $\widehat{G}$ and infinitesimal part a skew-symmetric 1-cocycle (see [12]). This implies that the formula
$$\omega(\nabla_u v, w) = -\omega(v, [u, w]), \qquad u, v, w \in \mathcal{G}, \tag{1}$$
defines a left invariant flat and torsion free connection $\nabla$. This affine structure will be called the affine structure associated to the symplectic Lie group $(G, \omega^+)$.
Let us state our main results.
Theorem 1.1 Let $(G, \omega^+)$ be a connected symplectic Lie group of dimension $n$ endowed with the associated affine structure. Then the following assertions hold.
1. In a neighborhood of any element of $G$, there exists an affine chart $(x_1, \dots, x_n)$ such that, for any $i, j = 1, \dots, n$, the Poisson bracket of $x_i$ and $x_j$ associated to $\omega^+$ is given by
$$\{x_i, x_j\} = \sum_{k=1}^{n} C_{ij}^k\, x_k + \mu_{ij},$$
where the $C_{ij}^k$ are the structure constants of the Lie algebra of $G$ and the $\mu_{ij}$ are constants.
2. Any right invariant $k$-multivector field on $G$ is polynomial of degree at most $k$.
3. If $G$ is unimodular, then the symplectic form $\omega^+$ is polynomial of degree at most $n-1$, the volume form $\wedge^{\frac{n}{2}}\omega^+$ is parallel, and any right invariant differential form on $G$ is polynomial.

Theorem 1.2 Let $(G, \omega^+)$ be a connected nilpotent symplectic Lie group endowed with the associated affine structure. Then any left invariant multivector field on $G$ is polynomial. In particular, any left invariant Poisson structure on $G$ is polynomial.
There are some interesting implications of Theorems 1.1 and 1.2.
1. Let $(G, \omega^+)$ be a connected $n$-dimensional symplectic Lie group. If $G$ admits a uniform lattice, it is well known that $G$ is unimodular. On the other hand, according to a result of Medina-Lichnerowicz [11], the affine structure associated to $(G, \omega^+)$ is geodesically complete if and only if $G$ is unimodular, and in this case $G$ is solvable.
Consequently, if $\Gamma$ is a (uniform) lattice in $G$ then $M = \Gamma\backslash G$ is a compact solvmanifold which carries an affine structure and a symplectic form $\omega$ such that: (a) the Poisson bracket corresponding to $\omega$ is polynomial of degree 1; (b) the symplectic form $\omega$ is polynomial of degree at most $n-1$ and the volume form $\wedge^{\frac{n}{2}}\omega$ is parallel. Note that a compact affine manifold with a parallel volume form possesses interesting properties (see [9]). 3. Let $(G, \omega^+)$ be a symplectic Lie group and let $r \in \mathcal{G} \wedge \mathcal{G}$ be the solution of the classical Yang-Baxter equation associated to $\omega$. According to Theorem 1.1, $r^+$ is a polynomial Poisson structure of degree 1 and $r^-$ is a polynomial Poisson structure of degree at most 2. Thus, we recover a result of Diatta-Medina (see [5]) which states that the Lie-Poisson bivector $r^+ - r^-$ is polynomial of degree 2.
The paper is organized as follows. In Section 2, we give some properties of the affine structure associated to a symplectic Lie group. The proofs of Theorems 1.1 and 1.2 are developed in Section 3. Using Theorem 1.1 and the results of Medina-Revoy on lattices in symplectic Lie groups [14], we exhibit in Section 4 an infinity of non-homeomorphic compact affine solvmanifolds endowed with polynomial Poisson tensors.
Acknowledgment. This work was finalized during the stay of the first author at the University Montpellier II. The first author thanks the department of mathematics for having invited him.
2 Some properties of the affine structure associated to a symplectic Lie group

This section is a preparation for Section 3, in which we will prove Theorems 1.1 and 1.2. First, we consider the affine structure given by (1) from a different point of view which will be useful throughout the paper. Let $\pi^+$ be a left invariant Poisson bivector on a Lie group $G$ and denote by $\pi^+_\#: T^*G \longrightarrow TG$ the associated homomorphism. Recall that the Koszul bracket associated to $\pi^+$ is given by
$$[\alpha, \beta]_{\pi^+} = L_{\pi^+_\#(\alpha)}\beta - L_{\pi^+_\#(\beta)}\alpha - d\big(\pi^+(\alpha, \beta)\big),$$
where $\alpha$ and $\beta$ are differential 1-forms on $G$ and $L$ denotes the Lie derivative. This bracket endows $\Omega^1(G)$ with a structure of a Lie algebra and, for any 1-forms $\alpha$ and $\beta$, $\pi^+_\#\big([\alpha, \beta]_{\pi^+}\big) = \big[\pi^+_\#(\alpha), \pi^+_\#(\beta)\big]$. An easy calculation gives formula (2) for any vector field $X$ on $G$ and any differential 1-form $\alpha$. We deduce easily from (2) the value of $[\alpha^-, \beta^-]_{\pi^+}$ for any right invariant 1-forms $\alpha^-$ and $\beta^-$ and any right invariant vector field $X^-$. Thus, for any right invariant 1-forms $\alpha^-$ and $\beta^-$, the bracket $[\alpha^-, \beta^-]_{\pi^+}$ vanishes, and hence
$$\big[\pi^+_\#(\alpha^-), \pi^+_\#(\beta^-)\big] = 0. \tag{3}$$
With this remark in mind, we consider a connected symplectic Lie group $(G, \omega^+)$ and we denote by $\pi^+$ the associated left invariant Poisson tensor. From (3), for any basis $(\alpha_1, \dots, \alpha_n)$ of $\mathcal{G}^*$, $\big(\pi^+_\#(\alpha_1^-), \dots, \pi^+_\#(\alpha_n^-)\big)$ is a commuting parallelism of vector fields on $G$. This defines a flat and torsion free linear connection $\widetilde\nabla$ by putting $\widetilde\nabla\,\pi^+_\#(\alpha^-) = 0$ for any right invariant 1-form $\alpha^-$ on $G$. Moreover, for any right invariant vector field $X^-$, we get from (2) that $X^-$ is an affine infinitesimal transformation, and hence $\widetilde\nabla$ is a left invariant linear connection. Now, let us compare the linear connection $\nabla$ defined by (1) with $\widetilde\nabla$. More precisely, we will show that $\nabla = \widetilde\nabla$.
The symplectic form $\omega^+$ gives rise to an isomorphism $\omega^\flat: TG \longrightarrow T^*G$, $u \mapsto \omega^+(u, \cdot)$. This isomorphism and its inverse $(\omega^\flat)^{-1}: T^*G \longrightarrow TG$ define an isomorphism between the space of tensor fields of type $(p, q)$ and the space of tensor fields of type $(q, p)$. For any tensor field $T$, we denote by $T^\omega$ its image under this isomorphism. For instance, for any vector field $X$ and any 1-form $\alpha$, $X^\omega$ is the 1-form $i_X\omega^+$ and $\alpha^\omega = -\pi^+_\#(\alpha)$. It is obvious that $(T^\omega)^\omega = T$.
Proposition 2.1 For any tensor field $T$ on $G$ and any left invariant vector field $X^+$,
$$(\nabla_{X^+} T)^\omega = L_{X^+}(T^\omega),$$
where $L_{X^+}$ is the Lie derivative in the direction of $X^+$.
A tensor field $T$ is parallel with respect to $\nabla$ if and only if $T^\omega$ is right invariant.
Proof. Note that the second assertion is an immediate consequence of the first one. Let us establish the first assertion. Since, for any tensor fields $T_1$ and $T_2$, $(T_1 \otimes T_2)^\omega = T_1^\omega \otimes T_2^\omega$, and since both $L_{X^+}$ and $\nabla_{X^+}$ are derivations, it suffices to establish the relation for left invariant vector fields and left invariant differential 1-forms. Let $Y^+$ and $Z^+$ be left invariant vector fields; a direct computation gives the formula in this case, and one can deduce easily the formula for a left invariant 1-form. □

An immediate consequence of this proposition is that $\nabla\,\pi^+_\#(\alpha^-) = 0$ for any right invariant 1-form $\alpha^-$, and hence $\nabla = \widetilde\nabla$. Now, given a connected symplectic Lie group $(G, \omega^+)$, let us construct an affine atlas corresponding to $\nabla$.
Since $\omega^+$ is left invariant, for any $u \in \mathcal{G}$ the vector field $u^-$ is symplectic, i.e., $L_{u^-}\omega^+ = 0$. Thus, for any basis $(u_1, \dots, u_n)$ of $\mathcal{G}$, there exist in a neighborhood of any element of $G$ local coordinates $(x_1, \dots, x_n)$ such that
$$i_{u_i^-}\,\omega^+ = dx_i, \qquad i = 1, \dots, n. \tag{4}$$
We get from Proposition 2.1 that $\nabla dx_i = 0$, and we deduce that $(x_1, \dots, x_n)$ are affine coordinates. Now we will express $\omega^+$ and $\pi^+$ in the affine coordinates constructed above. Fix a basis $(u_1, \dots, u_n)$ of $\mathcal{G}$, denote by $(\alpha_1, \dots, \alpha_n)$ its dual basis and consider the affine coordinates $(x_1, \dots, x_n)$ given by (4). In these coordinates we obtain the expressions (5) and (6). The following proposition will play a crucial role in the proof of Theorem 1.1.
Proposition 2.2 Let $(G, \omega^+)$ be a connected symplectic Lie group endowed with the associated affine structure. Then, for any $u, v \in \mathcal{G}$, the function $\omega^+(u^-, v^-)$ is polynomial of degree at most 1.

Proof. The right invariant vector fields $u^-$, $v^-$ are symplectic, and hence $[u^-, v^-]$ is hamiltonian. By using Proposition 2.1, we get $\nabla d\,\omega^+(u^-, v^-) = 0$ and the result follows. □

Putting $A = \big(\omega^+(u_i^-, u_j^-)\big)_{1\le i,j\le n}$, we obtain, in particular, the expression (7) of the right invariant vector fields in the affine coordinates and the expression (9) of $\pi^+(\alpha^-, \beta^-)$ in terms of $A^{-1}$. Consider now the volume form $\Omega^+ = \wedge^{\frac{n}{2}}\omega^+$. From (7), we obtain the expression (11) of $\Omega^+(u_1^-, \dots, u_n^-)$ in terms of $\det(A)$.

Proposition 2.3 Let $(G, \omega^+)$ be a connected unimodular symplectic Lie group endowed with the associated affine structure and let $\pi^+$ be the left invariant Poisson structure corresponding to $\omega^+$. Then $\pi^+(\alpha^-, \beta^-)$ is a polynomial function of degree at most $n - 1$, for any differential forms $\alpha^-, \beta^-$.
Proof. Since $G$ is unimodular, $\Omega^+$ is right invariant, and then $\Omega^+(u_1^-, \dots, u_n^-)$ is constant. Hence, from (11), $\det(A)$ is a constant. On the other hand, from Proposition 2.2, the coefficients of $A$ are polynomial functions of degree 1; consequently, the coefficients of the inverse $A^{-1}$ are polynomial of degree at most $n-1$, and the proposition follows from (9). □

The following lemma will be useful in the proof of Theorem 1.2.

Lemma 2.1 A function $f$ on $G$ is polynomial if and only if $u^-(f)$ is polynomial for any $u \in \mathcal{G}$.

Proof. From (7), if $f$ is polynomial then $u^-(f)$ is polynomial according to Proposition 2.2. For the converse, we deduce from (12) that $f$ is polynomial. □

Proof of Theorem 1.1

1. Fix a basis $(u_1, \dots, u_n)$ of $\mathcal{G}$, denote by $(\alpha_1, \dots, \alpha_n)$ its dual basis and consider the affine coordinates $(x_1, \dots, x_n)$ given by (4). For any $i, j$, we get $d\{x_i, x_j\} = \sum_{k=1}^n C_{ij}^k\, dx_k$, and the desired relation follows.

2. From (7) and Proposition 2.2, we deduce that any right invariant vector field on $G$ is polynomial of degree at most 1, and hence any right invariant $k$-multivector field must be polynomial of degree at most $k$.
3. This is an immediate consequence of Proposition 2.3, (5), (9) and (10). □

Proof of Theorem 1.2. To prove the theorem, it suffices to show that if $G$ is nilpotent then any left invariant vector field is polynomial. Fix a basis $(u_1, \dots, u_n)$ of $\mathcal{G}$ and consider the affine coordinates $(x_1, \dots, x_n)$ given by (4). Let $u^+$ be a left invariant vector field on $G$. By induction, we obtain an expression of the iterated derivatives $u_{j_1}^- \cdots u_{j_r}^-\big(\omega^+(u_i^-, u^+)\big)$ for any $1 \le j_1, \dots, j_r \le n$. Since $G$ is nilpotent, these vanish for $r$ large, and we deduce from Lemma 2.1 that $\omega^+(u_i^-, u^+)$ is a polynomial function; the theorem is proved. □
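The degree bound used in the proof of Proposition 2.3 rests on the adjugate formula $A^{-1} = \operatorname{adj}(A)/\det(A)$: if the entries of $A$ are polynomials of degree at most 1 and $\det(A)$ is constant, then the cofactors, and hence the entries of $A^{-1}$, are polynomials of degree at most $n-1$. The following symbolic sanity check illustrates this on a toy matrix (an illustrative example only, not the matrix $A$ of the paper):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy matrix with degree-1 polynomial entries and constant determinant
A = sp.Matrix([[1, x, y],
               [0, 1, x],
               [0, 0, 1]])

assert sp.simplify(A.det()) == 1      # determinant is constant
A_inv = sp.simplify(A.inv())

# Every entry of A_inv is a polynomial of degree at most n - 1 = 2
max_deg = max(sp.Poly(e, x, y).total_degree() for e in A_inv)
print(A_inv)
print("max degree of the entries:", max_deg)  # -> 2
```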
Examples
In this section, we give a large class of four dimensional solvmanifolds which admit polynomial symplectic forms and polynomial Poisson structures. The construction is based on Theorems 1.1 and 1.2 and on the results of Medina-Revoy on lattices in four dimensional symplectic Lie groups (see [14,15]). Indeed, in [12] Medina and Revoy showed that there are four non-abelian real unimodular Lie algebras of dimension four endowed with a scalar non degenerate 2-cocycle. Moreover, the connected and simply connected Lie group of any of these Lie algebras has an infinity of non-isomorphic lattices.
For any Lie algebra in the list of Medina-Revoy, we consider the corresponding connected and simply connected Lie group endowed with a left invariant symplectic form, we give a global affine chart and we express the symplectic form and the Poisson bivector in this chart. Finally, we give a description of lattices in this group.
Example 4.1
1. We consider the Lie group $G_1 = \mathbb{R}^4$ with the product. We denote by $\mathcal{G}_1$ its Lie algebra and by $(e_1, e_2, e_3, e_4)$ the canonical basis of $\mathcal{G}_1$. We consider the scalar non degenerate 2-cocycle on $\mathcal{G}_1$ given by $\omega = e_4^* \wedge e_3^* + e_1^* \wedge e_2^*$. A direct computation gives the corresponding symplectic 2-form on $G_1$, and we get an affine chart $(X, Y, Z, T)$.
Here the matrix $A = \big(\omega^+(e_i^-, e_j^-)\big)_{1\le i,j\le 4}$ is computed directly and, by using (6), we obtain the expression of $\omega^+$ in the affine chart.
The inverse of $A$ is then computed explicitly.
It is clear that $\Gamma = \{(m, n, k, 2r) : m, n, k, r \in \mathbb{Z}\}$ is a uniform lattice in $G_1$. Actually, $G_1$ admits an infinity of non-isomorphic lattices.
2. We denote by $\mathcal{G}_2$ its Lie algebra and by $(e_1, e_2, e_3, e_4)$ the canonical basis of $\mathcal{G}_2$. The nonzero brackets of $\mathcal{G}_2$ are the following: $[e_1, e_2] = e_2$ and $[e_1, e_3] = -e_3$.
We consider the scalar non degenerate 2-cocycle on $\mathcal{G}_2$ given by $\omega = e_1^* \wedge e_4^* + e_2^* \wedge e_3^*$. A direct computation gives the corresponding symplectic 2-form on $G_2$. We put $(X, Y, Z, T) = (t + yz, z, -y, -x)$ and we get an affine chart.
Here the matrix $A = \big(\omega^+(e_i^-, e_j^-)\big)_{1\le i,j\le 4}$ and its inverse are computed and, by using (6), we obtain the expression of $\omega^+$ in the affine chart. The Lie group $G_2$ is a direct product of $G_2'$ with the abelian group $\mathbb{R}$. Hence, if $\Gamma_1$ is a lattice in $G_2'$ then $\Gamma = \Gamma_1 \times \mathbb{Z}$ is a lattice in $G_2$. But, according to [13], a lattice

3. We consider the Lie group $G_3 = \mathbb{R}^4$ with the product
$$(x, y, z, t)(x', y', z', t') = \big(x + x',\; y + y'\cos(x) - z'\sin(x),\; z + y'\sin(x) + z'\cos(x),\; t + t'\big).$$
We denote by $\mathcal{G}_3$ its Lie algebra and by $(e_1, e_2, e_3, e_4)$ the canonical basis of $\mathcal{G}_3$. We have
$$e_1^+ = \partial_x, \quad e_2^+ = \cos x\,\partial_y + \sin x\,\partial_z, \quad e_3^+ = -\sin x\,\partial_y + \cos x\,\partial_z, \quad e_4^+ = \partial_t.$$
The nonzero brackets of $\mathcal{G}_3$ are the following: $[e_1, e_2] = e_3$ and $[e_1, e_3] = -e_2$. We consider the scalar non degenerate 2-cocycle on $\mathcal{G}_3$ given by $\omega = e_1^* \wedge e_4^* + e_2^* \wedge e_3^*$. A direct computation gives the corresponding symplectic 2-form on $G_3$. We put $(X, Y, Z, T) = (t - yz, z, -y, -x)$ and we get an affine chart.
Here the matrix $A = \big(\omega^+(e_i^-, e_j^-)\big)_{1\le i,j\le 4}$ and its inverse are computed and, by using (6), we obtain the expression of $\omega^+$ in the affine chart. For the last group of the list, the analogous computations hold; there, the subgroup $\Gamma_{p,q,r}$, where $p, q, r$ are fixed integers with $pqr \neq 0$, is a lattice in $N_3$. It is commensurable with $\Gamma_{1,1,1}$; in fact, $\Gamma_{p,q,r}$ contains $\Gamma_{1,1,1}$ as a subgroup of index $p^2q^2r$.
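The left invariant vector fields stated above for $G_3$ can be checked mechanically by differentiating the group product with respect to the coordinates of the second factor and evaluating at the unit; a short symbolic sketch (the plain symbol names are ours):

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
xp, yp, zp, tp = sp.symbols('xp yp zp tp')  # coordinates of the second factor

# Group law of G3 as given in the text
product = sp.Matrix([
    x + xp,
    y + yp * sp.cos(x) - zp * sp.sin(x),
    z + yp * sp.sin(x) + zp * sp.cos(x),
    t + tp,
])

# Left invariant vector fields: d/d(second factor) at the unit (0, 0, 0, 0)
unit = {xp: 0, yp: 0, zp: 0, tp: 0}
for name, coord in [('e1+', xp), ('e2+', yp), ('e3+', zp), ('e4+', tp)]:
    field = sp.simplify(product.diff(coord).subs(unit))
    print(name, '=', list(field))
# e2+ -> [0, cos(x), sin(x), 0], i.e. cos(x) d/dy + sin(x) d/dz, as stated.
```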
To get examples in dimension greater than or equal to 6, one can use the results given in [1] or [10]. | 2008-02-04T09:00:45.000Z | 2008-02-04T00:00:00.000 | {
"year": 2008,
"sha1": "45f21ab44678febbf491a9711524a20785d56692",
"oa_license": null,
"oa_url": "http://www.intlpress.com/site/pub/files/_fulltext/journals/jsg/2011/0009/0003/JSG-2011-0009-0003-a004.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f4c23cc9112d090db7bd2c8db269dad5f9272cc2",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257425756 | pes2o/s2orc | v3-fos-license | Metabolic syndrome after liver transplant in patients at the specialized Center San Vicente Fundación, Rionegro, Antioquia, Colombia, 2013-2017
ABSTRACT The medical records of all liver transplant patients attended at the Centro Especializado San Vicente Fundación between January 2013 and June 2017 were reviewed in order to determine the frequency of post-transplant metabolic syndrome (MS). We collected sociodemographic data, pathological history, toxicological history, complications, and ATP III criteria in a validated instrument. The statistical analysis was carried out with OpenEpi 3.01; p < 0.05 was considered statistically significant. Of the 112 reviewed medical records, 73 met the inclusion criteria (no MS diagnosis prior to transplant and complete information for the instrument) and were analyzed. Most patients were male (59%), older adults (64%) and married (62%). The frequency of MS after liver transplant was 66%. The association between MS and history of hypertension and diabetes was significant. We confirmed that MS is a frequent complication in liver transplant recipients and that history of hypertension and diabetes are the most frequent associated factors.
INTRODUCTION
After renal transplantation, liver transplantation is the most frequent in the world; according to the Global Observatory on Donation and Transplantation, 139,024 organ transplants were performed in 2017, of which 65% were renal transplants and 23% were liver transplants (1). Specifically in Colombia, for the year 2017, a total of 1342 patients received solid organ transplantation, of which 21% corresponded to liver transplantation (2). Regional #2 of the National Donation and Transplantation Network, which includes the authorized institutions of the departments of Antioquia, San Andrés and Providencia, Chocó, Córdoba, and Caldas, was the second in terms of number of donors and transplants performed, with 334 transplants, of which 19% were liver transplants. In addition, the Centro Especializado San Vicente Fundación Rionegro performed the most transplants in 2017 among the five specialized centers, with 122 transplants; 6% of these were liver transplants (2).
KEY MESSAGES
https://doi.org/10.17843/rpmesp.2022.394.11992
Motivation for the study: there is a lack of studies in Latin America on the frequency of metabolic syndrome in patients who receive liver transplants.
Main findings: two-thirds (66%) of patients who received liver transplantation between 2013 and 2017 at the Specialized Center San Vicente Fundación de Rionegro, Antioquia, Colombia, subsequently presented metabolic syndrome.
Implications: this study confirms that liver transplant recipients very frequently develop metabolic syndrome; however, the frequency found by this study (66%) was almost double that reported in other regions of the world, suggesting that patients from the Specialized Center San Vicente Fundación de Rionegro, Antioquia, Colombia, may present some additional condition.
The optimization of the surgical technique and immunosuppressive treatment has made it possible to achieve excellent survival rates after liver transplantation, reaching 90% after one
year and 80% after five years; nevertheless, this high survival rate is accompanied by an increase in medical complications such as the development of de novo neoplasms, recurrence of the underlying disease, and metabolic and cardiovascular complications, which currently are the main causes of death unrelated to the graft (3). Metabolic syndrome (MS) is a chronic condition in which metabolic abnormalities coexist, such as central obesity, increased triglycerides, atherogenic dyslipidemia, hyperglycemia, and arterial hypertension, which constitute risk factors for developing cerebrovascular events and type 2 diabetes mellitus (4,5). Several international organizations and scientific groups have defined the diagnostic criteria for MS, such as the National Cholesterol Education Program - Adult Treatment Panel III (NCEP-ATP III), the World Health Organization, the International Diabetes Federation and the European Study Group on Insulin Resistance (EGIR); the criteria of the International Diabetes Federation and the Adult Treatment Panel III (ATP III), in its modified 2015 version, are the most widely used for the diagnosis of MS (4).
The ATP III defined the presence of three of the following five factors as determinant for the diagnosis of MS: a) abdominal obesity (>102 cm in men and >88 cm in women), measured by abdominal perimeter; b) hypertriglyceridemia, >150 mg/dL; c) low HDL levels (<40 mg/dL in men and <50 mg/dL in women); d) blood pressure >130/85 mmHg; e) glycemia >100 mg/dL (5.6 mmol/L).
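Encoded directly, the ATP III rule above amounts to counting how many of the five criteria are met (a minimal sketch; the thresholds and strict inequalities follow the text as quoted):

```python
def has_metabolic_syndrome(sex, waist_cm, triglycerides_mg_dl, hdl_mg_dl,
                           systolic_mmhg, diastolic_mmhg, glucose_mg_dl):
    """ATP III rule: MS is diagnosed when at least 3 of 5 criteria are met."""
    criteria = [
        waist_cm > (102 if sex == 'M' else 88),       # a) abdominal obesity
        triglycerides_mg_dl > 150,                    # b) hypertriglyceridemia
        hdl_mg_dl < (40 if sex == 'M' else 50),       # c) low HDL
        systolic_mmhg > 130 or diastolic_mmhg > 85,   # d) elevated blood pressure
        glucose_mg_dl > 100,                          # e) hyperglycemia
    ]
    return sum(criteria) >= 3
```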
Previous studies report that the prevalence of MS ranges from 44% to 58% in liver transplant patients (3,6,7). Furthermore, it has been reported that MS increases the risk of developing cardiovascular disease and death 1.78 times; this syndrome, together with immunosuppression, represents the main risk factor for the development of cardiovascular disease, which accounts for 19% to 42% of all deaths not associated with the graft (3) and increases the use of resources allocated to health care due to a higher number of hospitalizations during the first year after liver transplantation (8,9). There are no clear data from Colombia on the prevalence of MS in liver transplant patients, which limits the available knowledge on the epidemiological behavior of this syndrome; in addition, there are no studies carried out in patients from the San Vicente Fundación Hospital in Rionegro that would allow establishing a relationship between the appearance of MS and risk factors in this population. This information is important because it could help address the modifiable risk factors in order to avoid the appearance of the syndrome or to minimize its negative impact on the selected group of patients, thus reducing morbidity and mortality and improving quality of life, both physically and psychologically, since this syndrome can affect the lifestyle of patients. In addition, timely intervention reduces the demand for health resources.
This study aimed to determine the frequency of MS in liver transplant patients from the Centro Especializado San Vicente Fundación Rionegro between 2013 and 2017, to describe the sociodemographic characteristics of these patients, and to explore the association of post-transplant MS with possible risk factors in the studied population.
Type of study and population
Retrospective observational study conducted on liver transplant patients from the Centro Especializado San Vicente Fundación Rionegro between January 2013 and June 2017.
Inclusion and exclusion criteria
We included the medical records of all patients who underwent liver transplantation between January 2013 and June 2017, provided that there was no diagnosis of MS prior to the transplant and that the record contained complete information for the data collection instrument.
Statistical analysis
The online OpenEpi software, version 3.01, and Office Excel were used for data analysis. Univariate analysis was carried out by calculating absolute and relative frequencies.
The Chi-square test or Fisher's exact test, chosen according to the expected values, was used for the bivariate analysis. Statistical significance was defined as a p-value of less than 0.05.
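A sketch of this bivariate analysis with standard routines is shown below; the 2x2 counts are purely illustrative and are not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = history of hypertension (yes/no),
# columns = post-transplant MS (yes/no). Counts are illustrative only.
table = np.array([[20, 3],
                  [28, 22]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
# Use Fisher's exact test when expected counts are small
# (a common rule of thumb: any expected cell count below 5).
if (expected < 5).any():
    _, p = fisher_exact(table)
else:
    p = p_chi2
print(f"p = {p:.3f} (significant at 0.05: {p < 0.05})")
```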
Ethical considerations
The project was evaluated and approved by the research committee of the Fundación Universitaria San Martín, act 010 of 2017. The informed consent and confidentiality agreement were signed with the Centro Especializado San Vicente Fundación de Rionegro.
FINDINGS
A total of 112 medical records corresponding to all liver transplant recipients who attended the institution between January 2013 and June 2017 were reviewed. Of these, 39 were excluded from the analysis for the following reasons (Figure 1): 18 because of a previous MS diagnosis; 19 because they did not have enough information for the diagnosis of MS; and 2 because the patients were transferred to another hospital and could not be followed up, so they were considered as losses.
A total of 73 medical records were included in the analysis; the frequency of MS was 65.8%. The sociodemographic variables of the patients are described in Table 1. Most were men (59%), older adults (64%), mestizos (90%) and married (62%). Most patients (37%) had upper-secondary education and 43% were employed. Cryptogenic cirrhosis (19%) and hepatocellular carcinoma (14%) were the most frequent indications for liver transplantation (Table 2). Regarding pathologic history prior to transplantation, arterial hypertension was found in 32% of the patients, diabetes in 23%, dyslipidemia in 14% and cardiovascular disease in 6%; in addition, 36% of the patients consumed alcohol and 25% smoked cigarettes.
The year with the highest number of transplants was 2015, with 16%. The drugs most commonly used in immunosuppressive therapy were mycophenolate (50 patients), tacrolimus (45 patients) and prednisone (33 patients); however, since each patient received several drugs within their scheme, the association between immunosuppressive treatment and MS could not be evaluated. Six deaths were reported (8% mortality) during the follow-up period of this study; the most frequent cause of death was septic shock (67%) (Table 2). The present study confirms, once again, that liver transplant patients very frequently develop MS; however, our results showed that the prevalence of MS (66%) was almost double that reported by Thoefner et al. in a systematic review and meta-analysis carried out in 2015 that included 16 papers and 3,539 liver transplant patients, in whom they found a prevalence of MS of 39% (range 16-64%) and an incidence of MS of 35% (range 21-49%) (10). This suggests that patients from the Centro Especializado San Vicente Fundación de Rionegro, Antioquia, Colombia, may present some additional condition, which should be studied, that makes them more likely to develop MS compared to liver transplant recipients from other regions of the world. Our findings are also in agreement with studies from around the world (11)(12)(13)(14)(15)(16)(17)(18)(19), as well as with the meta-analysis by Thoefner et al. (10), which confirms the high impact of history of diabetes on the development of MS after liver transplantation (OR = 4.03; 95% CI = 2.81-5.80; I2 = 0%).
In this study, it was not possible to evaluate the association between immunosuppressive therapy and the incidence of MS, because the therapy combines several drugs and the patients underwent changes in both doses and drugs during treatment. However, the systematic review by Thoefner et al. (10) found that immunosuppressive drugs are not a risk factor for MS after liver transplantation.
There are two main limitations of this study. First, we could only include a limited number of medical records (73 of 112), because only those had complete information. Second, it was not possible to include obesity as a variable in the analysis, because no information on the body mass index or abdominal perimeter of the patients was found in the medical records.
In conclusion, the prevalence of post-liver-transplant MS at the Centro Especializado San Vicente Fundación de Rionegro, Antioquia, Colombia is high, even higher than the prevalence reported in other regions of the world. History of arterial hypertension and diabetes are the most relevant associated factors for the development of this syndrome.
Author contributions: SFJE conceptualized the study, designed the methodology, analyzed the data, managed research activities, and reviewed the final version. AFEM wrote the initial draft and the final version. MTM, RCAM and LCDA conceptualized the study, designed the methodology, conducted the research, analyzed the data, wrote the initial draft, and reviewed the final version. TRLG conceptualized the study, designed the methodology, managed research activities, and reviewed the final version.
Funding: this research was funded by the Fundación Universitaria San Martín and the Centro Especializado San Vicente Fundación, Rionegro, Colombia.
Conflicts of interest: none to declare.
Figure 1. Diagram of medical record selection in order to calculate the frequency of metabolic syndrome in liver transplant recipients at the Hospital San Vicente Fundación, Rionegro, Antioquia, Colombia, 2013-2017.
Table 3. Factors associated with the occurrence of metabolic syndrome after liver transplantation in patients from the Hospital San Vicente Fundación, Rionegro, Antioquia, Colombia, 2013-2017.
a Chi-square test; b Fisher's exact test. | 2023-01-12T16:34:28.647Z | 2022-12-22T00:00:00.000 | {
"year": 2022,
"sha1": "828e0a763693e538aa1b11d96f30ab2d81b31d8b",
"oa_license": "CCBY",
"oa_url": "https://rpmesp.ins.gob.pe/index.php/rpmesp/article/download/11992/5173",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b4815e23505c77419d664ddfbee4283afbe2aff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
261598068 | pes2o/s2orc | v3-fos-license | The Effect of a Very-Low-Calorie Diet (VLCD) vs. a Moderate Energy Deficit Diet in Obese Women with Polycystic Ovary Syndrome (PCOS)—A Randomised Controlled Trial
We performed an open-label, randomised controlled trial to compare the effects of a very-low-calorie diet (VLCD) vs. moderate energy deficit approach on body weight, body composition, free androgen index (FAI), and metabolic markers in obese women with polycystic ovary syndrome (PCOS). Forty eligible patients were randomly assigned to a VLCD (n = 21) or a conventional energy deficit approach (n = 19) over the same period. After eight weeks, both groups experienced significant weight loss; however, this was greater in the VLCD arm (−10.9% vs. −3.9%, p < 0.0001). There was also a trend towards a reduction in FAI in the VLCD group compared to the energy deficit group (−32.3% vs. −7.7%, p = 0.07). In the VLCD arm, two women (18%) had a biochemical remission of PCOS (FAI < 4); this was not the case for any of the participants in the energy deficit arm. There was a significant within-group increase in the sex-hormone-binding globulin (p = 0.002) and reductions in fasting blood glucose (p = 0.010) and waist to hip ratio (p = 0.04) in the VLCD arm, but not in the energy deficit arm. The VLCD resulted in significantly greater weight reduction and was accompanied by more pronounced improvements in hyperandrogenaemia, body composition, and several metabolic parameters in obese women with PCOS as compared to the energy deficit approach.
Introduction
Polycystic ovary syndrome (PCOS) is the most prevalent endocrine disorder, affecting 5-21% of reproductive-aged women [1,2]. These prevalence rates are reported to vary depending on the definition employed and the population under investigation [1,2]. Insulin resistance (IR) and the resulting hyperandrogenism are cardinal features of PCOS, contributing to clinical symptoms including hirsutism, acne, and polycystic ovary morphology on ultrasound [3]. PCOS is recognised as a leading cause of anovulatory infertility, while in the case of pregnancy, it increases the risk of associated complications [3,4]. In addition to these unfavourable reproductive consequences, women with PCOS are at greater risk of metabolic disorders, including type 2 diabetes mellitus (T2DM), metabolic syndrome, and cardiovascular disease. They are also more likely to experience compromised psychological wellbeing, as evidenced by a high prevalence of anxiety, depression, and body dissatisfaction alongside the lower quality of life reported in this population [5][6][7].
Many women with PCOS experience difficulties in maintaining a healthy body weight. Indeed, previous research has shown up to 75% of women with PCOS are overweight/obese [8], whilst affected women may experience increased weight gain longitudinally [9]. Obesity and, in particular, central-type obesity, appear to exacerbate IR, hyperandrogenism, reproductive disturbances, and cardiovascular risk factors and intensify psychological consequences [9]. Conversely, weight loss has beneficial effects on PCOS-related outcomes. Lifestyle modifications (diet, physical activity, and behavioural changes) and weight management are recommended as first-line therapy for PCOS to enhance hormonal abnormalities and fertility and prevent long-term metabolic complications [3]. Lifestyle interventions and weight loss are also recommended before conception and initiation of infertility treatments [3] and may lead to higher ovulation rates than treatment with oral contraceptives [10]. Recent data also suggest improvements in psychological outcomes after weight loss in PCOS [11].
Studies involving dietary energy restriction for weight loss in PCOS have mainly focused on moderate reductions in energy intake to induce a deficit of 500-1000 kcal/d with/without the use of anti-obesity/anti-diabetes medication [12][13][14][15][16], whilst the effects of very-low-calorie diets (VLCDs) remain understudied in this population [17][18][19]. VLCDs are defined as dietary plans that provide ≤800 kcal/d. They typically involve partial or complete replacement of meals with synthetic formulas (e.g., shakes, soups, or bars), which are commonly nutritionally replete (i.e., sufficient amounts of vitamins and minerals) to meet dietary requirements. Although VLCDs are recommended for short periods (8-16 weeks) and under medical supervision due to their extreme caloric restriction, they can result in rapid weight loss (20-30%) and, potentially, weight loss maintenance [20]. A growing body of evidence advocates the use of VLCDs in adults with T2DM, as adherence to this type of diet has been shown to augment insulin secretion from the pancreas and reduce HbA1c levels to pre-diabetic and normal levels, thus reversing T2DM [21]. Women living with PCOS have a similar metabolic profile to patients with T2DM [22], and thus, VLCDs may be an attractive, yet underexplored, option in this population.
Thus, the present study aimed to compare the effects of a VLCD vs. a conventional energy deficit approach on body weight and body composition, androgen levels, and other hormonal and metabolic parameters in overweight/obese women with PCOS.
Study Objectives and Design
The primary study objective was to assess the effects of VLCD and conventional energy deficit diet on change in free androgen index (FAI), while the secondary study objective was to assess the effect of both diets on changes in weight, waist circumference, body composition, and other metabolic parameters.
This open-label, randomised, comparative study in women with PCOS was performed in the Academic Diabetes, Endocrinology and Metabolism research centre at Hull Royal Infirmary. Participants were included if they were women wishing to lose weight, aged between 18 and 45 years, had a body mass index (BMI) between 30 and 45 kg/m² (based on the dimensions of the DEXA scanner), and were diagnosed with PCOS based on the Rotterdam criteria (biochemical hyperandrogenism, as indicated by a FAI > 4, and self-reported oligomenorrhoea (cycle length > 35 days and nine or fewer periods per year) or amenorrhoea (absence of menses for a period ≥ 3 months)). To be included, participants must have been willing to use a reliable form of non-hormonal contraception throughout the duration of the study. Women with differential diagnoses of non-classical 21-hydroxylase deficiency, hyperprolactinaemia, Cushing's disease, and androgen-secreting tumours were excluded from participation. Additional exclusion criteria included menopause and perimenopause, pregnancy or intention to become pregnant, breastfeeding, weight loss > 5 kg within the last 6 months, substance abuse, acute illness, diagnosis of diabetes, history or presence of malignant neoplasms within the last 5 years, history of gallstones/gout, inadequately controlled thyroid disorder, diagnosis of eating disorder or purging in the last 12 months (based on patient reporting and results of the Eating Disorder Inventory-3 Referral Form (EDI-3 RF) interpreted by a clinical psychologist), known intolerance to the ingredients of investigational products used in the study (e.g., soy, lactose, gluten), or coeliac disease. Participants who were using the following drugs (within the last three months) were also excluded from participation unless cessation of the drug was agreed upon between the medical team and the patient and a wash-out period of 4-8 weeks was achieved; these drugs were: oral hormonal contraceptives and hormone-releasing implants, anti-androgens (e.g., spironolactone, flutamide, finasteride), metformin or other insulin-sensitising medications, clomiphene citrate or estrogen modulators, gonadotropin-releasing hormone (GnRH) modulators (e.g., leuprolide), minoxidil, anti-obesity drugs, or other medication that may affect appetite (e.g., oral steroids). All participants provided their written informed consent. Ethical approval was granted by the Yorkshire and Humber-Sheffield Research Ethics Committee, NHS, HRA (17/2/17 REC-16/YH/0518).
Participants attended an initial visit (Visit 1), during which they were screened against inclusion and exclusion criteria by medical history and clinical examination, routine blood tests (i.e., full blood count, liver function tests, urea and electrolytes, clotting screen), and anthropometric measurements. To screen out eating disorders and provide a baseline level for anxiety and depression, participants completed an eating disorder questionnaire, the EDI-3 RF, and the Beck inventory questionnaires assessing levels of anxiety and depression. At screening, these were assessed by a psychologist, and a decision was made as to whether the participant should be excluded. Eligible participants were randomly assigned to either a VLCD or a conventional approach of moderate energy deficit for 16 weeks (8 weeks of intervention and 8 weeks of diet reintroduction and follow-up). The allocation was generated by an individual independent of the study team (the unit manager) to ensure that the allocation was truly random and unbiased. Eligible participants were randomised in a 1:1 ratio using an online web-based randomisation service (https://www.randomizer.org/ (accessed on 1 January 2017)).
During Visit 2 (baseline), conducted within 4 weeks of Visit 1, participants underwent anthropometric evaluation (weight, BMI, waist circumference (WC), and hip circumference (HC)) and an evaluation of body composition. They also had blood samples taken (fasting glucose, fasting insulin, HOMA-IR, total cholesterol, low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), triglycerides (TG), and high-sensitivity C-reactive protein (hs-CRP)), and their blood pressure measured. For the first eight weeks, the VLCD group was instructed to follow a prescription of 800 kcal a day (irrespective of their baseline body weight), in the form of soups and drinks made from pre-prepared sachets provided by the Cambridge Weight Plan™ company (Corby, UK). Each meal replacement drink provided 200 kcal, 21 g of carbohydrate, 15 g of protein, and 3-4 g of fat, and was nutritionally complete for micronutrients, as the sachets are specifically designed to be used as a sole source of nutrition and as total meal replacement diets (The Cambridge Weight Plan Ltd., Corby, UK). Participants in this group were provided support and information regarding the consumption of the food replacement sachets, fibre supplement prescription, and fluid consumption. After these first eight weeks, participants in the VLCD arm were given a stepped return, with an increase of 200 kcal every 2 weeks whilst reducing meal replacement drinks until ~1600 kcal/d was reached. The energy deficit approach group acted as the control group in this trial. The kcal prescription was bespoke for each patient and calculated using the Henry equation based on gender, age, and weight to ascertain basal metabolic rate [23], which was then multiplied by physical activity level (PAL); a worked sketch of this calculation is given below. Once the patients' daily kcal requirements had been calculated, a deficit of 600 kcal from requirements was applied [24]. Both groups received dietetic support and education on different aspects, including portion sizes and kcals, practical measures to achieve the given energy prescription, and healthy eating practices based on the "Eat Well Guidelines" (https://www.nhs.uk/live-well/eat-well/food-guidelines-and-food-labels/the-eatwell-guide/ (accessed on 4 January 2017)). Participants returned for review two weeks after commencement of the VLCD and conventional energy deficit approach (Visit 3), and thereafter support was provided every two weeks via face-to-face or telephone consultation (Visits 4, 5, 7, 8, and 9). For the purposes of these visits, participants were asked to complete a 3-day food and mood diary (2 weekdays and 1 weekend day), which was reviewed by research staff at each review appointment. Specifically, this review visit included an assessment of bowel habits, dietary intake, compliance, level of motivation, support/education, and encouragement. The details of data collected at each visit are given in Supplementary Table S1. The decision to use a diary for only 3 days (2 weekdays and 1 weekend day) instead of a full diary was based on practical considerations related to the study design and participant burden.
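To make the control-arm prescription concrete, the sketch below reproduces the calculation pipeline described above: a Henry-equation basal metabolic rate multiplied by a physical activity level, minus the fixed 600 kcal deficit. The weight-only coefficients for women are taken from the published Henry (2005) tables converted from MJ/day to kcal/day and should be checked against the original source; the function names, PAL value and example numbers are illustrative, not drawn from the trial.

```python
def henry_bmr_kcal(weight_kg: float, age_years: float) -> float:
    """Approximate BMR (kcal/day) for women from the Henry (2005)
    weight-only equations, converted from MJ/day (1 MJ ~ 239 kcal).
    Coefficients are assumptions; verify against the original tables."""
    if 18 <= age_years < 30:
        return 13.1 * weight_kg + 558
    elif 30 <= age_years <= 60:
        return 9.74 * weight_kg + 694
    raise ValueError("outside the age range covered by this sketch")

def energy_prescription_kcal(weight_kg: float, age_years: float,
                             pal: float = 1.4, deficit: float = 600) -> int:
    """Daily kcal target: (BMR x PAL) minus a fixed 600 kcal deficit."""
    requirement = henry_bmr_kcal(weight_kg, age_years) * pal
    return round(requirement - deficit)

# e.g. a 35-year-old woman weighing 108 kg with a sedentary PAL of 1.4
print(energy_prescription_kcal(108, 35))  # -> 1844 kcal/day
```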
The primary study results presented are from Visit 6, corresponding to the 8-week follow-up, during which all the measurements performed during Visit 2 (baseline) were repeated.
Procedures
Height and weight were recorded with participants wearing light clothing and no shoes using a stadiometer and a weighing scale (MS-4202L, Marsden Weighing Machine Group Limited, Rotherham, UK). BMI was calculated as weight (kg) divided by the square of height (m²). Blood pressure was measured using an automated device (NPB-3900; Nellcor Puritan Bennett, Pleasanton, CA, USA); for this measurement, subjects were seated quietly for at least 5 min with the right arm supported at heart level. Three readings were taken, each at least 2 min apart, and the mean value of the readings was calculated. Waist circumference was measured using a tape measure. The tape measure was wrapped around the participant's waist at the midway point between the bottom of the ribs and the top of the hips (iliac crest). The participants were encouraged to breathe naturally during the procedure, relax their abdominal muscles and not hold their breath. Body composition, including total fat and trunk mass, lean body mass (LBM), fat-free mass, bone mineral content (BMC), and bone mineral density (BMD), was measured at baseline and follow-up visits by dual-energy X-ray absorptiometry (DEXA). An oral glucose tolerance test (OGTT) was performed after an overnight fast using a 75 g glucose load.
Biochemical Analysis
Venous blood samples were collected in the fasting state after an overnight fast and after a 2 h OGTT at baseline and at 8 weeks. Serum and plasma samples were separated by centrifugation at 2000× g for 15 min at 4 °C, and the aliquots were sent immediately for routine biochemical analysis or stored at −80 °C until batch analysis. Serum insulin was assayed using a chemiluminescent immunoassay on the Beckman Coulter UniCel DxI 800 analyser (Beckman Coulter UK Ltd., High Wycombe, UK). Plasma glucose was measured using a Beckman AU 5800 analyser (Beckman-Coulter, High Wycombe, UK) according to the manufacturer's recommended protocol. Insulin resistance was computed using the homeostatic model assessment of insulin resistance (HOMA-IR = (fasting serum insulin (µU/mL) × fasting plasma glucose (mmol/L))/22.5). Serum testosterone was quantified using isotope-dilution liquid chromatography tandem mass spectrometry (LC-MS/MS). Sex-hormone-binding globulin (SHBG) was measured using a chemiluminescent immunoassay on the UniCel DxI 800 analyser (Beckman-Coulter, High Wycombe, UK), applying the manufacturer's recommended protocol. The FAI was calculated as: (total testosterone/SHBG) × 100. An FAI of ≥4 was considered significant hyperandrogenaemia, and an FAI of <4 at follow-up was considered biochemical remission of PCOS. Free testosterone and the FAI are effective in detecting elevated androgen levels. In women, a significant amount of testosterone is bound to SHBG, making the interpretation of free testosterone levels more difficult. The FAI compensates for this dependence on SHBG by taking it into account. There are no universally accepted definitions for biochemical remission of PCOS; however, FAI levels of 5 or higher are considered indicative of PCOS [25,26]. The cut-off value of FAI for the diagnosis of PCOS is lab specific, and in our hospital, a FAI of more than 4 is regarded as indicative of PCOS. Hence, we defined FAI < 4 as biochemical remission of PCOS. Total cholesterol, triglycerides, HDL-C, alanine aminotransferase (ALT), and aspartate aminotransferase (AST) levels were measured enzymatically using a Beckman AU 5800 analyser (Beckman-Coulter, High Wycombe, UK). Low-density lipoprotein cholesterol (LDL-C) was calculated using the Friedewald equation.
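As a quick illustration of the two derived indices defined above, the following sketch computes HOMA-IR and the FAI from raw laboratory values and applies the lab-specific FAI cut-off of 4 used in this study. The unit conventions follow the formulas in the text (insulin in µU/mL, glucose in mmol/L, testosterone and SHBG both in nmol/L); the numbers in the example call are made up.

```python
def homa_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    # HOMA-IR = (fasting insulin [uU/mL] * fasting glucose [mmol/L]) / 22.5
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

def free_androgen_index(total_testosterone_nmol_l: float, shbg_nmol_l: float) -> float:
    # FAI = (total testosterone / SHBG) * 100, both in nmol/L
    return total_testosterone_nmol_l / shbg_nmol_l * 100

def biochemical_remission(fai: float, cutoff: float = 4.0) -> bool:
    # FAI >= 4 was considered significant hyperandrogenaemia;
    # FAI < 4 at follow-up was considered biochemical remission of PCOS
    return fai < cutoff

fai = free_androgen_index(total_testosterone_nmol_l=1.8, shbg_nmol_l=52.0)
print(round(homa_ir(14.0, 5.4), 2), round(fai, 2), biochemical_remission(fai))
# -> 3.36 3.46 True
```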
Statistical Analysis
Continuous variables are summarised as means ± SD, while categorical data are presented as n (%). Mean changes from baseline to the 8-week follow-up within each treatment group were analysed using a paired t-test. Mean differences for all parameters, expressed as % change from baseline, were compared between groups using independent samples t-tests. The power of this study was nominally based on a correlation between weight loss and free-androgen reduction: ten patients completing the study would allow us to detect a correlation of 0.75 (power = 0.80, alpha = 0.1). Alpha was one-tailed, since we were only interested in a one-directional change. All statistical analyses were performed in R 4.1.1 (https://www.r-project.org/ (accessed on 1 January 2021)), with p-values of less than 0.05 denoting statistical significance.
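The analysis plan above maps directly onto two standard tests: a paired t-test for within-group change and an independent-samples t-test on the percentage changes between arms. The study itself used R; the Python sketch below only illustrates the same layout, and the arrays are placeholders, not trial data.

```python
import numpy as np
from scipy import stats

# Placeholder data: baseline and 8-week weights (kg) for one arm
baseline = np.array([107.0, 99.5, 112.3, 104.8, 118.0])
week8    = np.array([ 95.2, 90.1, 101.4,  94.0, 103.9])

# Within-group change: paired t-test on baseline vs follow-up values
t_within, p_within = stats.ttest_rel(baseline, week8)

# Between-group comparison: independent t-test on % change from baseline
pct_change_vlcd    = (week8 - baseline) / baseline * 100
pct_change_control = np.array([-4.1, -2.8, -5.0, -3.2, -4.4])  # placeholder
t_between, p_between = stats.ttest_ind(pct_change_vlcd, pct_change_control)

print(f"within-group:  t={t_within:.2f}, p={p_within:.4f}")
print(f"between-group: t={t_between:.2f}, p={p_between:.4f}")
```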
Results
Figure 1 shows the study CONSORT diagram. We screened 63 women living with PCOS, of whom 23 were not randomised because they did not meet the eligibility criteria. Overall, 21 women were randomised to the VLCD arm, and 19 women living with PCOS were randomised to the conventional energy deficit diet. Subsequently, 11 participants in the VLCD arm and 11 participants in the conventional energy deficit arm completed the 8-week follow-up. Table 1 shows the demographic characteristics of the study population.
Effect on Free Androgen Index (FAI)
Table 2 and Figure 2 show the effects of the VLCD and the conventional energy deficit approach on FAI. In the VLCD arm, there was a statistically significant reduction in the FAI at the 8-week follow-up (−32.3% change from baseline levels; baseline: 9.9 ± 4.3, 8-week follow-up: 6.1 ± 1.9, p = 0.005), while the conventional energy deficit group experienced a 7.7% reduction in FAI, which was, however, not statistically significant (p = 0.26). Between-group comparison of the mean % reductions from baseline in FAI showed a trend towards a greater reduction in the VLCD group (p = 0.07). In the VLCD arm, 36% (4 out of 11) of the participants had more than a 50% reduction in FAI, and 73% (8 out of 11) had more than a 20% reduction in FAI at the end of the 8-week period. In the conventional energy deficit arm, 9% (1 out of 11) of the participants had more than a 50% reduction in FAI, and 36% (4 out of 11) had more than a 20% reduction in FAI at the 8-week follow-up. Two women in the VLCD arm (18%), but none in the conventional energy deficit arm, had a biochemical remission of PCOS (FAI < 4). Across both study arms, there was a significant correlation between weight loss and reductions in FAI (r² = 0.51, p = 0.01).
Figure 2 shows the box and whisker plot comparing the free androgen index (FAI) at baseline, after the intervention, and after the reintroduction of the diet in the two study arms. The symbol "X" in the box and whisker plot shows the mean value, and the orange dot shows a value 1.5 times the interquartile range above the upper or lower quartile. The p-value is derived from a t-test comparing baseline FAI and FAI at the eight weeks' follow-up in the moderate energy deficit arm and the VLCD arm.
Body Weight and Waist Circumference
Table 2 and Figures 3 and 4 show the effects of the VLCD and conventional energy deficit approach on body weight and waist circumference in women living with PCOS. Participants in the VLCD arm experienced a significant 10.9% reduction in their body weight after 8 weeks of the VLCD (baseline: 107.1 ± 13.6 kg, 8-week follow-up: 95.4 ± 13.2 kg, p < 0.0001). Participants who followed the conventional energy deficit approach also experienced a significant reduction in their body weight, of 3.9% (baseline: 108.3 ± 20.5 kg, 8-week follow-up: 104.1 ± 20.6 kg, p < 0.0001). Comparisons between groups revealed significantly greater weight loss in the VLCD group compared to the conventional energy deficit group (p < 0.0001). There was a significant reduction in waist circumference in the VLCD arm (Figure 4) (baseline: 114.4 ± 12.6 cm, 8-week follow-up: 102.9 ± 9.1 cm, p = 0.003), but not in the conventional energy deficit arm. In the VLCD arm, all the participants lost >5% of their body weight, with seven participants losing >10%; in the energy deficit arm, four participants lost >5% of body weight, and none lost >10%.
Figure 3 shows the box and whisker plot comparing weight at baseline, after the intervention, and after the reintroduction of the diet in the two study arms. The symbol "X" in the box and whisker plot shows the mean value.
The p-value is derived from a t-test comparing baseline weight and weight at eight weeks' follow-up in the moderate energy deficit arm and VLCD arm.
Figure 4 shows the box and whisker plot comparing WC at baseline, after the intervention, and after the reintroduction of the diet in the two study arms. The symbol "X" in the box and whisker plot shows the mean value.
The p-value is derived from a t-test comparing baseline WC and WC at the eight weeks' follow-up in the moderate energy deficit arm and the VLCD arm.
Metabolic Parameters
In the VLCD group, there was a significant increase in SHBG levels (p = 0.002) and significant reductions in total cholesterol (p = 0.01) and fasting blood glucose (p = 0.01) levels after eight weeks of intervention; however, there were no significant changes in 2 h glucose levels after an OGTT, nor in HbA1c or TG levels. Total cholesterol levels were also reduced in the energy deficit group (p = 0.01); however, no further significant changes were seen for other metabolic parameters within the same timeframe. The increase in SHBG (p = 0.02) and the reduction in fasting blood glucose levels (p = 0.04) were significantly larger in the VLCD arm than in the conventional energy deficit arm. There was also a significant reduction in HOMA-IR in both the VLCD arm (p = 0.0007) and the conventional energy deficit group (p = 0.009), but no significant difference between the two arms (p = 0.24).
Parameters of Body Composition
There were significant reductions in total and trunk fat in both study arms (Table 3); however, these were more pronounced in the VLCD group than in the conventional energy deficit group (total body fat: −15.8% vs. −4.9%, p < 0.0001; trunk fat: −17.3% vs. −5.2%, p < 0.0001). Both diets were associated with significant reductions in LBM and fat-free body mass (FFM) (p < 0.05), although these changes were smaller in the conventional energy deficit group (LBM, p = 0.002; FFM, p = 0.001). There were no significant changes in BMC or BMD in either study arm.
FAI and Weight at 16-Week Follow-Up
Between 8 and 16 weeks (end of the VLCD and reintroduction of a normal diet), two participants were lost to follow-up, and four participants did not have their FAI and weight measured in the VLCD arm. One participant withdrew from the study in the energy deficit diet arm over the same period. At the end of the 16-week period, participants in the VLCD arm had statistically significantly more weight loss than the energy deficit arm (−14.3% vs. −6.4%, p = 0.0001); however, there were no significant differences in FAI changes (−15.9% vs. −19.6%, p = 0.79) between the two groups.
Side Effects in Study Arms
The most prevalent side effects in both study arms were gastrointestinal disturbances. The majority of the study participants experienced transient constipation, bloating, and minor abdominal discomfort at some point during the eight weeks of the VLCD or energy deficit arm; however, these resolved after prescribing a fibre supplement and/or providing advice on fluid intake. One of the study participants in the VLCD arm was admitted to the hospital with acute cholecystitis and had an uneventful recovery. No other major side effects were reported during the trial period.
Discussion
In this first randomised controlled trial looking into the effects of a VLCD compared to a conventional energy deficit approach on PCOS-related outcomes, we showed that although both strategies can induce short-term weight loss with favourable changes in body composition, implementation of a VLCD resulted in greater weight loss and more pronounced improvements in body composition, hyperandrogenaemia, and metabolic parameters in obese women with PCOS.
Excess weight is an independent risk factor for hyperandrogenaemia, insulin resistance, and menstrual irregularities in women living with PCOS. Weight loss through lifestyle and dietary changes is the mainstay of management of women with PCOS, and it is associated with significant improvements in hyperandrogenaemia, menstrual irregularities, ovulation, and emotional wellbeing in this population [3]. Several dietary interventions have been proposed for the management of PCOS, including VLCDs [18,19,27], energy deficit diets [28,29], and low-GI [30] and ketogenic diets [31,32]. There is, however, no consensus on dietary interventions for optimal weight loss strategies [33]. Recent data from people living with T2DM have shown up to 15% weight loss with VLCDs over twelve weeks in this population [21], which was sustained in over one-third of the study participants at the end of two years. Since the insulin-resistant state in people living with T2DM is also seen in women living with PCOS [22], this diet is an attractive option for weight loss in PCOS.
In this study, we presented both within-group and between-group analyses. Although within-group analysis in an RCT setting can be biased due to small sample sizes [34], the effectiveness of VLCDs in T2DM is established, and we wanted to compare their effects in the PCOS population. Indeed, in our study, we showed that participants in the VLCD arm lost on average ~11% of their initial body weight after 8 weeks of following a VLCD, while some participants who successfully completed the 16-week follow-up had a mean weight loss of 16%. Participants in the conventional energy deficit group experienced a significant, albeit modest, weight loss (−3.9% from baseline at 8 weeks), suggesting that the use of VLCDs is superior for weight loss in this population, at least in the short term. Our results are in line with previous research [17,35] investigating the effect of VLCDs in women living with PCOS. A mean weight loss of 17% was reported in a retrospective analysis [35] of a 12-week community-based dietary intervention (LighterLife Total (LLT)) consisting of a commercial VLCD in combination with group behavioural change sessions in women with PCOS. Interestingly, this study included a control group of women without PCOS, who experienced a similar weight loss after the use of a VLCD over the same period. In another study involving the implementation of an energy-restricted diet providing 1000 kcal/day, Tolino et al. [17] reported that 54% of the participants had a ≥5% reduction of their baseline weight; nevertheless, no control group was available. Moran et al. [9] tested the efficacy of partial meal replacement with commercial products as part of an energy-restricted diet, and participants reduced their body weight by 5.6 ± 2.4 kg after an 8-week period. Notably, their energy restrictions were more moderate (1000-1200 kcal/d) compared to the energy prescription in our VLCD group (800 kcal/d based on full meal replacement). We reported a slightly lower weight loss with the VLCD in our study as compared to other T2DM cohorts [21]; this could be due to the difference in the mechanism of insulin resistance in women with PCOS as compared to T2DM, and further studies are needed to confirm this effect.
There are very limited data on the effects of VLCDs on hyperandrogenaemia and metabolic outcomes in women with PCOS [18,19,27,36]. In this RCT, we showed a significant mean 32% reduction in FAI in the VLCD arm and a non-significant 8% reduction in the conventional energy deficit arm. An observational study [36] compared three months of weight reduction using a VLCD with the use of an oral contraceptive (OC) pill containing norethisterone in women living with PCOS. This study showed a significant reduction in free testosterone in both the VLCD and OC arms; however, as expected, no significant reductions in body weight or BMI were observed in the OC arm. In the study of Tolino et al., caloric restriction (1000 kcal/d) for four weeks resulted in an increase in SHBG levels and decreases in free testosterone and insulin, with consequent improvement in symptoms of PCOS. Taken together, these findings suggest that a VLCD can be used as an effective management strategy in women living with PCOS who require both amelioration of symptoms of excess androgens and weight reduction.
In this study, we showed that the significant weight loss after eight weeks of both interventions was due to improvements in total and trunk fat masses but also due to reductions in FFM, with these results being in line with earlier studies in PCOS and non-PCOS populations [19]. These changes were greater in magnitude in the VLCD group and can be explained by the greater weight loss experienced by this group. There is increasing evidence that hyperandrogenism in women with PCOS contributes to insulin resistance and metabolic dysfunction by favouring abdominal and visceral adiposity [37]. The improvement in hyperandrogenism and central obesity mediated by the VLCD can improve metabolic dysfunction and associated complications in women with PCOS. Indeed, in parallel with weight loss, reductions in body fat, and hyperandrogenaemia, we observed favourable changes in metabolic markers, including decreases in total cholesterol and fasting blood glucose, in particular in the VLCD group.
This study shows that both VLCD and energy deficit diets can be effectively used in women with PCOS to achieve weight loss with relatively few side effects. One of the study participants in the VLCD arm had abdominal pain, was diagnosed with cholecystitis, and recovered completely. Some participants had gastrointestinal side effects, such as constipation and bloating, after starting the VLCD; they were prescribed a fibre supplement to resolve these symptoms, and none of them withdrew from the study for this reason. It should also be noted that there was a rise in AST and ALT levels in the VLCD arm, which, however, did not reach significance. Increases in hepatic enzymes are a known consequence of rapid weight loss [38], and given the high prevalence of non-alcoholic fatty liver disease in women with PCOS [39], careful monitoring of liver function is warranted while using a VLCD. Neither the VLCD nor the conventional energy deficit approach was associated with significant reductions in BMC or BMD in this study. There are conflicting data [40][41][42][43] in the literature on the effect of short-term weight loss on BMC and BMD, and future studies are needed to confirm these findings using longer-term protocols.
Our study had several limitations. It was a small, single-centre trial, and thus future multicentre trials will be needed to further understand the feasibility and efficacy of VLCDs in PCOS. Furthermore, half of the study participants did not complete the clinical trial after randomisation in both study arms. Similar to our study, some previous weight loss studies in PCOS have also reported low completion rates, whilst qualitative studies have shed light on a number of barriers to weight management in this population. In our study, several participants randomised to the conventional energy deficit approach were disappointed by their allocation, and some of them were lost to follow-up a few weeks after the initiation of the intervention despite the repeated efforts of the research team to keep them in the study and explain the benefits of weight loss and the advantages of the conventional energy deficit approach. In the VLCD arm, many participants limited their engagement with the trial in the food reintroduction period. This limited engagement could be because the main study dietician, who was in constant touch with the study participants, had moved away in the later stage of the trial, which highlights the importance of the interpersonal touch during dietary intervention studies in young populations. It is also possible that the participants did not feel the need to engage with the trial protocols after losing a considerable amount of weight. Other challenges mentioned by the participants over the follow-up were fatigue with sticking to study regimens, challenges in keeping up with social life and family issues, and commitment to follow-up. Finally, the last follow-up after the end of the intervention (eight weeks) was at twelve weeks, and the sustainability of weight loss with these two approaches beyond this period will have to be examined in studies with longer follow-up periods.
Conclusions
In summary, the results of this randomised controlled trial comparing a VLCD based on full meal replacement with a conventional energy deficit approach in PCOS suggest that both approaches can be used to achieve short-term weight loss in this population. However, the VLCD resulted in greater weight loss and more pronounced improvements in body composition, hyperandrogenism, and metabolic aspects related to PCOS. While these findings are promising, they are based on a small, single-centre study, and further large multicentre RCTs are needed to evaluate the widespread use of VLCDs and moderately energy-restricted diets for managing overweight/obesity in women with PCOS.
Figure 2 .
Figure 2. Comparison of baseline FAI, FAI at completion of intervention, and after the diet reintroduction in the VLCD and moderate energy deficit approach.
Figure 3 .
Figure 3. Comparison of baseline weight, weight at completion of intervention, and after the diet reintroduction in the VLCD and moderate energy deficit approach.
Figure 4 .
Figure 4. Comparison of baseline waist circumference (WC), WC at completion of intervention, and after the diet reintroduction in the VLCD and moderate energy deficit approach.
Table 1 .
Baseline characteristics of the study population.
Table 2 .
Changes in metabolic and hormonal parameters in women with PCOS with VLCD and moderate energy deficit at 8-week follow-up.
Table 3 .
Changes in parameters of body composition in women with PCOS with VLCD and moderate energy deficit at 8-week follow-up. BMC = bone mineral content. Results are expressed as mean (±SD) or percentages. Significant p-values are indicated in bold. ¹ p-value for pre-post changes within group. ² p-value for difference in % changes between groups. | 2023-09-08T15:20:03.562Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "b8d561ecf9cd9f6b75a177ea41e3f1152436c0db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/15/18/3872/pdf?version=1693980027",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d252780112bb8afffdc4bfe1aa1d3b13201a1861",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
55186193 | pes2o/s2orc | v3-fos-license | Estimates of genetic parameters for test day milk yields of a Holstein Friesian herd in Turkey with random regression models
Genetic parameters for test day milk yields of Holstein Friesian cows were estimated using a random regression model (RRM). Data consisted of 1487 monthly test day milk yield records of cows calving between 1987 and 1993 at Sarmısaklı Farm, Turkey. Data were restricted to first lactations of at least 150 d and at most 308 d in length. Additive genetic and permanent environmental (co)variances were modeled with polynomial regressions of the same order. Residual (measurement) error variance was assumed to be constant throughout lactation. The quadratic (k=3) order orthogonal polynomial regression was found to be sufficient. Heritability estimates for test day milk yields were highest in the middle of lactation and ranged from 0.07 to 0.32. Genetic correlations of milk yields between consecutive test days were high but decreased as the interval between test days increased. Genetic correlations ranged from 0.51 to 0.99. Residual error variance was estimated to be 13.77 kg².
Introduction
Test day records are expressions of a trait that changes over time (SWALVE, 1995a; Van der WERF et al., 1998). These records are used to predict total 305-d yields, which are required to evaluate the additive genetic merit of sires and cows in traditional evaluation (ALI and SCHAEFFER, 1987). For the genetic evaluation of dairy cows, using individual test day yields rather than total lactation production has a number of advantages. Test day models (TDM) allow: 1) direct correction for environmental effects on the test day, 2) better accounting for variation in the number of tests recorded per animal, and 3) accounting for variation in the shape of the lactation curve (Van der WERF et al., 1998; SWALVE and GUO, 1999). A common approach to investigate genetic associations between test day yields is to consider every yield at each time period as a separate trait and then to estimate the genetic correlations between these traits. This approach has some disadvantages when large numbers of test day yields are considered, and biological interpretation of a large number of correlations is often difficult (VEERKAMP and THOMPSON, 1999). At the same time, heritability estimates for test day yields are usually lower than for 305-d milk yields (SWALVE, 1995a; BAFFOUR-AWUAH et al., 1996; STRABEL and MISZTAL, 1999). ALI and SCHAEFFER (1987) obtained heritability estimates of first lactation 305-d milk yields of 0.28, 0.31 and 0.30 with three different TDM. PTAK and SCHAEFFER (1993) used repeatability models for first lactation test day milk yields; they assumed constant genetic variance and genetic correlations among test day records. SWALVE (1995b) estimated a heritability of 0.39 for 305-d milk yield, while estimates from several test day milk records ranged from 0.18 to 0.36. These heritability estimates were higher than the estimates reported by ALI and SCHAEFFER (1987) with repeatability models. Alternatively, some researchers (ALI and SCHAEFFER, 1987; KIRKPATRICK et al., 1994; Van der WERF et al., 1998; GUO and SWALVE, 1997; JAMROZIK, 1997; MEYER and HILL, 1997; HORSTICK and DISTL, 2002; AMIN, 2003) have utilized covariances among all test day yields to improve the accuracy of predictions. The covariance structure is described by a covariance function, estimated by fitting a set of orthogonal polynomials or other defined covariables as random regressions on the time of repeated records (OLORI et al., 1999b). A random regression model (RRM) allows different shapes of lactation curves for each cow by the inclusion of random regression coefficients for each animal (SCHAEFFER and DEKKERS, 1994). Using a RRM, the lactation curve for an individual cow is described by two sets of regressions on days in milk (DIM). Fixed regressions for all cows describe the general shape of lactation for cows belonging to the same subclass, for example regions, age at calving and season of calving, and the random parts of the regressions for each cow describe the genetic deviation of the individual regression from the fixed regressions, which allows each cow to have a genetically different shape of lactation curve (JAMROZIK et al., 1997; SWALVE and GUO, 1999). Random regression coefficients were suggested by HENDERSON (1984), but SCHAEFFER and DEKKERS (1994) first developed this approach into a RRM. Many recent studies have used RRM to estimate genetic parameters for production traits (Van der WERF et al., 1998; STRABEL and MISZTAL, 1999; VEERKAMP and THOMPSON, 1999; LIU et al., 2000; RÖHE et al., 2000; HORSTICK and DISTL, 2002). In these studies, third order RRM were used. Van der WERF et al. (1998) found high estimates of heritabilities at the periphery of the trajectory, in contrast to STRABEL and MISZTAL (1999), VEERKAMP and THOMPSON (1999) and LIU et al. (2000). On the other hand, in most studies, RRM of different orders were used. High estimates of heritability (0.59-0.40) for daily milk yield have been reported by JAMROZIK and SCHAEFFER (1997) and KETTUNEN et al. (1997, 1998) when using the function of ALI and SCHAEFFER (1987). Conversely, other studies (REKAYA et al., 1999; OLORI et al., 1999b; LIU et al., 2000; POOL et al., 2000; ROMERO and CARABANO, 2003) have found less extreme heritability estimates, varying from 0.20 to 0.46, at the beginning and end of lactation from different RRM. Considering previous studies, there has been no study on the estimation of genetic parameters for test day milk yields of Turkish Holstein Friesians using a RRM. In this study, the first goal was to determine the order of Legendre polynomial that gives the best fit of RRM among different orders of fit. The second aim was to estimate the additive genetic and permanent environmental (co)variances and heritability values for first lactation test day milk yields of Holstein Friesian cows using a RRM.
Data
The complete data set comprised 1506 monthly test day milk yields of Holstein Friesian cows obtained from Sarmısaklı Farm, in the northwest region of Turkey. The cows were daughters of 56 sires and 119 dams and calved from 1987 through 1993. A total of 139 animals were evaluated, and there were 184 test date subclasses. Test day milk yields were recorded at successive 28-d periods throughout lactation, and these periods were considered as monthly intervals (TD1-TD11). Test day months are used as the time variable rather than days in milk. Only first lactations with lengths of at least 150 d and at most 308 d were used.
Model
The following RRM was used in the analysis:

$$y_{ij} = TD_i + \sum_{m=0}^{k_B - 1}\beta_m\,\phi_m(t_{ij}) + \sum_{m=0}^{k_A - 1}\alpha_{jm}\,\phi_m(t_{ij}) + \sum_{m=0}^{k_P - 1}p_{jm}\,\phi_m(t_{ij}) + e_{ij}$$

where $y_{ij}$ is the i-th test day record of the j-th cow, $TD_i$ is the fixed effect of test day (month) of recording i, $\beta_m$ is the m-th fixed regression coefficient, $\alpha_{jm}$ and $p_{jm}$ are the m-th random regression coefficients for the additive genetic and permanent environmental effects of cow j, $k_B$, $k_A$ and $k_P$ are the orders of the fitted fixed, random additive and random permanent environmental regressions, $t_{ij}$ is the i-th standardized lactation month of the j-th animal, $\phi_m$ is the m-th polynomial evaluated at $t_{ij}$, and $e_{ij}$ is the random residual effect.
The model can be written in matrix notation as

$$\mathbf{y} = \mathbf{Xb} + \mathbf{Za} + \mathbf{Wp} + \mathbf{e}$$

where vector b includes the fixed regression coefficients $\beta_m$ and the TD effects, vectors a and p include the random regression coefficients for additive genetic and permanent environmental effects, e is the vector of residual effects, and X, Z and W are the corresponding incidence matrices. The (co)variance structure for the random effects in the model was defined as

$$\operatorname{Var}\begin{bmatrix}\mathbf{a}\\ \mathbf{p}\\ \mathbf{e}\end{bmatrix} = \begin{bmatrix}\mathbf{A}\otimes\mathbf{G} & \mathbf{0} & \mathbf{0}\\ \mathbf{0} & \mathbf{I}\otimes\mathbf{P} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} & \mathbf{R}\end{bmatrix}$$

where G is the genetic covariance matrix of the random regression coefficients, assumed to be the same for all cows, P is the analogous covariance matrix of the permanent environmental regression coefficients (whose diagonal gives the permanent environmental variances $\sigma_p^2$), A is the additive genetic relationship matrix among animals, I is the identity matrix, R is the diagonal matrix of residual variances, and ⊗ is the Kronecker product (SEARLE, 1982). Variance components were estimated by derivative-free REML (DFREML) using a RRM with the DXMRR statistical package (MEYER, 1997). To reduce the number of parameters to be estimated and the dimension of the likelihood searches, the residual variance was assumed to be constant throughout lactation. Additive genetic and permanent environmental (co)variances were modeled with the same order of polynomial regression. Legendre polynomials were used because they are orthogonal and normalized and result in better convergence and more accurate results than conventional polynomials (KIRKPATRICK et al., 1990). Significant differences in the fit of Legendre polynomials of order k=2 to k=6 were tested using a chi-square (χ²) test of the likelihood.
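To make the covariance-function machinery concrete, the sketch below evaluates normalized Legendre polynomials at a standardized lactation month t ∈ [−1, 1] and converts a 3×3 coefficient matrix G into the additive genetic variance at that point, σ²_a(t) = φ(t)ᵀ G φ(t), then forms the heritability from the variance components. The G and P matrices shown are illustrative placeholders, not the estimates of this study; only the residual variance (13.77 kg²) and the TD1-TD11 time scale follow the text.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(t: float, k: int = 3) -> np.ndarray:
    """Normalized Legendre polynomials phi_0..phi_{k-1} evaluated at
    t in [-1, 1] (the standardized lactation month)."""
    phis = []
    for m in range(k):
        coeffs = np.zeros(m + 1)
        coeffs[m] = 1.0
        # normalization sqrt((2m+1)/2) makes the basis orthonormal on [-1, 1]
        phis.append(np.sqrt((2 * m + 1) / 2) * legendre.legval(t, coeffs))
    return np.array(phis)

# Illustrative coefficient matrices (NOT the study's estimates)
G = np.array([[6.0, -0.8, 0.2], [-0.8, 0.9, -0.1], [0.2, -0.1, 0.05]])
P = np.array([[25.0, 1.5, -0.5], [1.5, 4.0, 0.3], [-0.5, 0.3, 0.8]])
sigma2_e = 13.77  # residual variance assumed constant over lactation

for month in range(1, 12):                       # test days TD1..TD11
    t = -1 + 2 * (month - 1) / 10                # map months 1..11 onto [-1, 1]
    phi = legendre_basis(t)
    var_a = phi @ G @ phi                        # additive genetic variance
    var_p = var_a + phi @ P @ phi + sigma2_e     # phenotypic variance
    print(f"TD{month:2d}: h^2 = {var_a / var_p:.2f}")
```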
Results
The maximum log likelihood values and the changes in the log likelihoods from the models with different orders of fit and one measurement error class are presented in Table 1. Log likelihood values and their changes decreased with increasing order of the model. The changes in the log likelihood for the quadratic, cubic and quartic models were significant (P<0.05). When the models were compared based on significant differences in fit using a χ² test of the likelihood, the quadratic polynomial had the largest change (5.54%) in log likelihood value. The changes in likelihood for the cubic, quartic and quintic models were only 2.79%, 2.26% and 0.67%, respectively (Table 1). Because the first three eigenvalues of the additive genetic coefficient matrix were large for all orders, the first three eigenvalues are shown in Table 2. In this table, the proportion of each eigenvalue in the total is also given to indicate their importance. Since the change in the likelihood value for the quadratic polynomial regression was the largest, this model was chosen as sufficient to fit the additive genetic and permanent environmental (co)variances together with the fixed regression in this study. Moreover, the first, second and third eigenvalues obtained in the quadratic order of fit for the additive genetic covariance function were 13.55, 0.46 and 0.61E-03, respectively. The first eigenvalue of the coefficient matrix of the additive genetic covariance function accounted for about 97% of the total eigenvalues for the quadratic model.
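The order-of-fit comparison just described is a likelihood-ratio test: twice the difference in maximum log likelihoods between nested models is referred to a χ² distribution whose degrees of freedom equal the difference in the number of (co)variance parameters. A minimal sketch, with placeholder log-likelihood values and an illustrative df:

```python
from scipy.stats import chi2

def lrt(loglik_small: float, loglik_big: float, df: int, alpha: float = 0.05):
    """Likelihood-ratio test between two nested random regression models."""
    statistic = 2.0 * (loglik_big - loglik_small)
    p_value = chi2.sf(statistic, df)
    return statistic, p_value, p_value < alpha

# Placeholder log likelihoods for orders k=2 and k=3; increasing the order
# of both the genetic and permanent environmental fits adds several new
# (co)variance parameters, so df here is illustrative only
stat, p, significant = lrt(loglik_small=-3050.0, loglik_big=-3020.0, df=6)
print(f"LRT = {stat:.1f}, p = {p:.3g}, reject smaller model: {significant}")
```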
In contrast to the first, the second and third eigenvalues account for a negligible proportion of the total eigenvalues, and hence of the variation in the additive genetic variance.
Eigenfunctions of the additive genetic coefficient matrix of the covariance function for the quadratic model are plotted in Figure 1. The first eigenfunction decreased slightly as the test days increased. In contrast, the second and third eigenfunctions show an increasing pattern throughout the test days. Heritabilities for test day yields are given in Table 3, and the trend of heritability estimates for test day milk yields from the quadratic order of fit is plotted in Figure 2. Heritabilities for test day milk yields ranged from 0.07 to 0.32. It is evident that the test day milk yields at the beginning and end of lactation have lower heritability than the test days in the middle part.
Additive genetic and phenotypic correlations between test day milk yields were given in Table 3.The additive genetic correlations were higher than the phenotypic correlations.While the additive genetic correlations were changed from 0.51 to 0.99, the phenotypic correlations for test day milk yields varied from 0.11 to 0.60.Discussion Use of RRM makes it possible to study changes in test day records over time and a better understanding of lactation genetics (SWALVE and GUO, 1999;JAKOBSEN et al., 2001).But, feasibility of the RRM depends on the order of fit because of the computational difficulties.Therefore, order of fit should not exceed three (POOL et al., 2000), although regression models with higher orders estimate parameters more accurately (JAMROZIK et al., 1997;Van der WERF et al., 1998;STRABEL and MISZTAL, 1999).As a matter of fact, many studies show that third order polynomial RRM was found sufficient if complete lactations were used for parameter estimation (Van der WERF et al., 1998;POOL and MEUWISSEN, 1999;OLORI et al., 1999b;VEERKAMP and THOMPSON, 1999;KETTUNEN et al., 2000;STRABEL et al., 2003).In this study, to reduce computational problems, same order of fit for fixed and random effects was used.Results indicated that the maximum log likelihood values increased when the order of fit increased.The third order polynomial gave the best fit when considering the largest changes in log likelihood with the difference between number of parameters as a degrees of freedom versus table value of 2 χ .Moreover, the first three eigenvalues of the additive genetic coefficient matrix were greater in all models.These results support that the third order of fit is sufficient for modeling of test day records random regression coefficients as mentioned in literature (OLORI et al., 1999b;POOL and MEUWISSEN, 1999).
An eigenvalue is proportional to the amount of genetic variation in the population corresponding to the related eigenfunction (KIRKPATRICK et al., 1990). For the third order model, the first eigenvalue of the coefficient matrix of the additive genetic covariance function accounted for about 97% of the total eigenvalues. From the large first eigenvalue it can be concluded that selection on the first eigenfunction would cause quick changes in the lactation curve. On the other hand, the second and third eigenvalues accounted for about 3% of the total eigenvalues and represent an unimportant proportion of the variation in additive genetic variance. Similar results were reported by Van der WERF et al. (1998) and AKBAŞ et al. (2004). Values of the first eigenfunction were almost constant in the early part of lactation and then started to decrease in the second part of lactation. This means that the underlying factors affect most of the genetic variation of milk yield equally in the early part of lactation, while this was not the case for test days in the second part of lactation.
Since the first eigenfunction is related to the largest eigenvalue, selection based on these factors would slightly decrease the milk yields for late test days. The second and third eigenfunctions show an increasing pattern with increasing test days. This change reveals that one or more factors affect milk yield at different levels in the early and late stages of lactation (OLORI et al., 1999b). However, because the second and third eigenvalues are small, selection based on the factors defined by the third eigenfunction would not alter the milk yields.
Heritability estimates for test day milk yields were similar to the results reported by LIU et al. (2000), KETTUNEN et al. (2000) and DRUET et al. (2003). However, they were significantly higher than the results of STRABEL and MISZTAL (1999) and LIDAUER and MANTYSAARI (1999), and lower than the results of BAFFOUR-AWUAH et al. (1996), JAMROZIK and SCHAEFFER (1997), KETTUNEN et al. (1998), OLORI et al. (1999b), POOL et al. (2000) and ROMERO and CARABANO (2003). The shape of the heritability curve from the third order polynomial model showed a decreasing pattern as the test days increased. The heritability values at the beginning and last part of lactation were lower than the estimates obtained for test day milk yields in the middle part, as described by VEERKAMP and THOMPSON (1999), LIU et al. (2000) and POOL et al. (2000).
The additive genetic correlations between milk yields obtained at different test days were higher than the phenotypic correlations. Genetic and phenotypic correlations between milk yields obtained at consecutive test days were positive and high, but they decreased as the interval between test days increased. Furthermore, the phenotypic correlations of first test day milk yields with the other test day milk yields were relatively low. These results agree with previous work (KETTUNEN et al., 1998; KETTUNEN et al., 2000; VEERKAMP and THOMPSON, 1999). They suggest that selection for increased milk yield at any test day will positively affect milk yields at later test days.
Changes in additive genetic variance over test days were similar to those reported in the literature (VEERKAMP and THOMPSON, 1999; OLORI et al., 1999b; POOL et al., 2000), showing higher values in the middle of lactation. However, OLORI et al. (1999b) found slightly higher variance estimates for the middle part than the value estimated in this study. On the other hand, in contrast to the additive genetic variance, the phenotypic and permanent environmental variances increased after the second part of the lactation. This finding was also very similar to the reported pattern (OLORI et al., 1999a,b; POOL et al., 2000).
In this study, the error variance was assumed constant throughout lactation. This assumption causes the residual variance in early lactation to be underestimated in RRM but has no significant effect on the other variance components (OLORI et al., 1999a). The estimates of total variance and heritability for milk yield at the early stage of lactation are affected by the constant error assumption. This may also partly explain the inconsistent heritability in the early and late stages of lactation (OLORI et al., 1999a). Finally, to estimate the genetic structure of test day milk yields in Turkish Holstein Friesians, further investigations using a larger data set seem to be required, with varying orders of RRM and also accounting for heterogeneous measurement error variances in the analysis of test day milk yields.
Table 2
Eigenvalues of the coefficient matrix of the additive genetic covariance function and their relative proportions. Estimates of additive genetic, phenotypic and permanent environmental variances ranged from 3.01 to 9.32, 4.78 to 21.84, and 26.40 to 38.61, respectively. Residual error variance was 13.77 kg². Additive genetic, phenotypic and permanent environmental variances over test days are plotted in Figure 3. The shapes of the curves for the phenotypic and permanent environmental variances from the third order polynomial model follow oscillatory patterns. Permanent environmental variances over test days show changes opposite to those of the additive genetic and phenotypic variances. | 2019-04-02T13:12:05.914Z | 2007-10-10T00:00:00.000 | {
"year": 2007,
"sha1": "1ea04875313e4724dedae11b6b81eb45de7e4e67",
"oa_license": "CCBY",
"oa_url": "https://aab.copernicus.org/articles/50/327/2007/aab-50-327-2007.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "d1d1b27b4273ff95f3e520c9a5fd0982a8f5f666",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
49210377 | pes2o/s2orc | v3-fos-license | Correlation Tracking via Robust Region Proposals
Recently, correlation filter-based trackers have received extensive attention due to their simplicity and superior speed. However, such trackers perform poorly when the target undergoes occlusion, viewpoint change or other challenging attributes due to the pre-defined sampling strategy. To tackle these issues, in this paper we propose an adaptive region proposal scheme to facilitate visual tracking. To be more specific, a novel tracking monitoring indicator is advocated to forecast tracking failure. Afterwards, we incorporate detection and scale proposals, respectively, to recover from model drift as well as handle aspect ratio variation. We test the proposed algorithm on several challenging sequences; the results demonstrate that the proposed tracker performs favourably against state-of-the-art trackers.
Introduction
VISUAL tracking is one of the fundamental tasks in computer vision, with a plethora of applications such as robotics, video surveillance and human-computer interaction. Recently, tracking approaches based on Correlation Filters (CF) [1][2][3][4][5][6] have received considerable attention due to their high computational efficiency from the use of fast Fourier Transforms and their outstanding performance on public evaluation datasets and benchmarks. Such approaches regress all the circularly-shifted versions of the input features to generated soft labels, producing a less ambiguous response map than a traditional binary classifier, which shows greater stability. Despite their huge success and development, such trackers still suffer from the model drift problem due to limitations of the tracking task itself. Unlike other tasks with clearly-defined target categories, the goal of visual tracking is to estimate the state and trajectory of an arbitrary target with reliable information given only at the initial frame. However, such information may be ambiguous or misleading under some circumstances. Fig. 1 illustrates a vehicle tracking procedure. In the first frame, the car back is designated as the target by the bounding box. As the sequence goes on, the target's pose varies significantly. It is hard to determine whether the car back bounded in yellow or the whole car bounded in red is the better tracking result. Up to now, an explicit definition of the target to be tracked is still absent. After looking through several common online tracking datasets, we observe an unwritten assumption that the tracking target is usually the whole object rather than its constituent parts. (Fig. 1: with only the initial bounding box given (left), it is hard to judge which tracking result, the red or the yellow box, is more accurate, as the viewpoint changes significantly.)
Hence, traditional CF trackers, which adopt a template-matching approach, easily overfit and drift gradually in challenging sequences due to the absence of prior knowledge about the target. To tackle this limitation, in this paper we propose an adaptive region proposal scheme to facilitate tracking. We first advocate a novel criterion to monitor the tracking condition as well as to determine potential failure. In addition, we show that generating a small number of high-quality candidate samples, with objectness information [7] taken into account, is effective in recovering from tracking failure caused by challenging attributes. For scale and aspect ratio estimation, the proposed tracker performs more effectively than some existing scale-adaptive correlation-based tracking methods due to its flexible sampling manner and weaker assumptions. Experiments on challenging sequences have demonstrated that the proposed algorithm performs favourably against existing state-of-the-art trackers.
Proposed Method
In this section, we propose a two-stream tracking framework using the adaptive region proposal method. First, we give a brief introduction to the baseline tracker [3]. Afterwards, we propose a tracking monitoring indicator to determine the tracking confidence in each frame. When the tracking condition is desirable, the tracker utilizes scale proposals to handle scale and aspect ratio variations. On the other hand, if the confidence score is low, which indicates a tracking failure, we generate detection proposals to search for the region that may contain the lost target.
Kernelized Correlation Filter
The correlation filter tracker can be broken down into three components: training, detection and updating. In the training stage, the KCF tracker learns a filter $w$ which minimizes the error between the training samples $\varphi(x_i)$ and the regression labels $y_i$. The training goal can be formulated as:

$$\min_{w} \sum_i \left(\langle w, \varphi(x_i)\rangle - y_i\right)^2 + \lambda \|w\|^2 \qquad (1)$$

where $\lambda$ is the regularization parameter that penalizes over-fitting. Eq. (1) can be solved directly in the Fourier domain, since a circulant matrix can be diagonalized by the DFT matrix, giving:

$$\hat{w} = \frac{\hat{x}^{*} \odot \hat{y}}{\hat{x}^{*} \odot \hat{x} + \lambda} \qquad (2)$$

Here, $\odot$ denotes the element-wise product, and $\hat{\cdot}$ and $^{*}$ indicate the Discrete Fourier Transform and complex conjugation, respectively.
The kernel trick is applied to obtain a more powerful filter in the case of non-linear regression. Under this condition, $w$ is denoted by a linear combination of the training samples, $w = \sum_i \alpha_i \varphi(x_i)$, where $\alpha$ is the dual parameter of $w$. The optimization solution then transfers to $\hat{\alpha}$ as:

$$\hat{\alpha} = \frac{\hat{y}}{\hat{k}^{xx} + \lambda} \qquad (3)$$

After the training process, detection is carried out on an image patch $z$ in the newly arriving frame within an $M \times N$ window centred at the last target position. The response can be derived as:

$$f(z) = \mathcal{F}^{-1}\left(\hat{k}^{xz} \odot \hat{\alpha}\right) \qquad (4)$$

where $\mathcal{F}^{-1}$ denotes the inverse DFT (IDFT) and $\hat{k}^{xz}$ is the so-called kernel correlation whose $i$-th element is $k(z_i, x)$, with $z_i$ the $i$-th cyclic shift of $z$. The position of the target in each frame is therefore determined by the maximum response $f(z)_{\max}$. Finally, in order to maintain the historical appearance of the target, linear interpolation is incorporated to update the dual coefficients $\alpha$ and the base sample template $\bar{x}$ with a fixed learning rate $\eta$:

$$\hat{\alpha}_t = (1-\eta)\,\hat{\alpha}_{t-1} + \eta\,\hat{\alpha}, \qquad \bar{x}_t = (1-\eta)\,\bar{x}_{t-1} + \eta\,x \qquad (5)$$
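To make the training-detection-update cycle concrete, the following minimal single-channel sketch implements Eqs. (1)-(5) with a linear kernel in NumPy. The patch size, label sigma, regularization and learning rate are illustrative choices; a practical tracker would add a cosine window and multi-channel HOG features.

```python
import numpy as np

# Minimal single-channel KCF sketch of Eqs. (1)-(5) with a linear kernel.

def gaussian_labels(h, w, sigma=2.0):
    # soft regression targets y, with the peak aligned to the zero shift
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def kernel_corr(x1, x2):
    # linear-kernel correlation k^{x1 x2}, evaluated in the Fourier domain
    return np.real(np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2)))) / x1.size

def train(x, y, lam=1e-4):
    # Eq. (3): dual coefficients alpha_hat in the Fourier domain
    return np.fft.fft2(y) / (np.fft.fft2(kernel_corr(x, x)) + lam)

def detect(alpha_hat, x, z):
    # Eq. (4): response map f(z)
    return np.real(np.fft.ifft2(np.fft.fft2(kernel_corr(z, x)) * alpha_hat))

def update(alpha_hat, x_bar, alpha_new, x_new, eta=0.02):
    # Eq. (5): linear interpolation of model and template
    return (1 - eta) * alpha_hat + eta * alpha_new, (1 - eta) * x_bar + eta * x_new

x = np.random.randn(64, 64)            # stand-in for an extracted feature patch
alpha_hat = train(x, gaussian_labels(64, 64))
resp = detect(alpha_hat, x, x)
print("peak at", np.unravel_index(resp.argmax(), resp.shape))  # ~ (0, 0)
```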
Tracking Confidence Monitoring
Most traditional correlation filter-based trackers update their model every frame or at a fixed interval. However, we argue that such a strategy may introduce noise or background information when tracking is inaccurate. In this section, we advocate a novel criterion to evaluate the tracking results. More specifically, we consider the maximum value and the distribution of the response map simultaneously.
The response map indicates the feature similarity between the target template and the input samples across the searching window. Under ideal conditions, there should be only one sharp peak, located at the target's actual position. However, the response map may fluctuate due to surrounding distractors, temporary occlusion or other challenging factors. For example, in Fig. 2 the response of the target is lower than that of the background or distractors. In such cases, adopting the maximum-searching strategy of the traditional KCF would lead to model drift. Hence, in this section we propose a novel criterion, the average peak-sidelobe ratio (APSR), to evaluate the response map and reveal the tracking condition precisely:

$$\mathrm{APSR} = \frac{f_{\max}(z) - f_{\min}(z)}{\operatorname{mean}_{i \neq i_{\max}}\left(f_i(z) - f_{\min}(z)\right)}$$

Here $f_{\max}(z)$ and $f_{\min}(z)$ indicate the maximum and minimum values of the response map, respectively. The denominator is the mean of the response map excluding the peak value, which is used to evaluate the sidelobe. The APSR value will be large if there is only one sharp peak. We record the peak value $f_{\max}(z)$ and the APSR value for each frame and compare them with their historical average values, used as thresholds, to determine whether model drift occurs. Fig. 2 shows the response map in the Jogging sequence, where the response value of the distractor is competitive with that of the target. There the KCF tracker drifts from the target due to its inflexible updating scheme, whereas the APSR value decreases rapidly when occlusion occurs, so our tracker can forecast the potential distraction and recover from model drift once the occlusion ends.
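A compact sketch of the APSR computation and the history-based monitoring decision is given below. The threshold coefficients c1 and c2 are hypothetical placeholders, since the comparison against historical averages does not state explicit scaling factors.

```python
import numpy as np

# APSR monitoring sketch: one sharp peak -> large APSR; a flat or multi-modal
# response (occlusion, distractors) -> small APSR.

def apsr(response):
    f_max, f_min = response.max(), response.min()
    sidelobe = response[response != f_max]          # all values except the peak
    return (f_max - f_min) / (np.mean(sidelobe - f_min) + 1e-12)

def tracking_reliable(response, peak_hist, apsr_hist, c1=0.6, c2=0.6):
    # compare the current peak and APSR against their historical means
    value = apsr(response)
    ok = (response.max() >= c1 * np.mean(peak_hist)) and \
         (value >= c2 * np.mean(apsr_hist))
    peak_hist.append(response.max())
    apsr_hist.append(value)
    return ok

r = np.zeros((31, 31)); r[15, 15] = 1.0             # ideal single-peak response
print(apsr(r))                                       # very large value
```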
Integrating Objectness Proposals into Tracking
In this subsection, we employ re-detection and scale estimation based upon the EdgeBox method [8], chosen for its fast speed and high recall. EdgeBox first computes an edge response for each pixel of the input image using a Structured Edge detector [9]. It then traverses the whole image in a sliding-window manner and scores every sampled box to select high-quality candidate samples. In [10], EdgeBox is applied as a post-processing step to improve the tracker's adaptability to scale and aspect ratio change. In [11], Gao et al. employ the EdgeBox method to facilitate the Struck tracker. Hence this is not the first time the EdgeBox approach has been introduced into a tracking framework. It should be mentioned, however, that these methods are substantially different from our work, where we integrate the region proposal method into a correlation tracking framework by resorting to a novel monitoring criterion. Instead of only handling scale variation, the proposed algorithm can also address the tracking failure issue.
As mentioned above, the tracker calculates the value of $f_{\max}(z)$ and the APSR to determine whether tracking failure occurs. Once it happens, we activate the re-detection module by performing detection over several instance region proposals which cover the target. Instead of directly applying the computed high-scored proposals for tracking, we argue that the object instance level should be taken into account, since EdgeBox is a generic proposal generator which may be prone to generating false positive samples in a cluttered background. To this end, we incorporate an online updated SVM classifier to learn the target appearance as in [11], but we update the SVM classifier using the generated instance-aware proposals as training data only if the tracking condition is ideal, as decided by the monitoring criterion. We select the optimal detection result based upon the following objective function:

$$c^{*} = \arg\max_{c_i \in B_t}\, f(c_i) + \zeta \cdot d\left(P_{t-1}, P_{c_i}\right)$$

Here, $B_t = \{c_1, c_2, \ldots, c_n\}$ denotes the union of candidate samples generated by the EdgeBox method at frame $t$, $P_{t-1}$ indicates the centre of the tracking box in the last frame, $P_{c_i}$ is the centre of the $i$-th candidate sample in the current frame, and $f(\cdot)$ is the response output between the template and the candidate samples, as introduced before. The second term takes the motion constraint into consideration so as to reject model drift caused by distractors with similar appearance; $\zeta$ is the penalty parameter balancing the two factors. We employ the same function as [11] to represent the motion constraint:

$$d\left(P_{t-1}, P_{c_i}\right) = \exp\left(-\frac{\left\|P_{t-1} - P_{c_i}\right\|^2}{b^2}\right)$$

where $b$ is the diagonal length of the searching window. It should be mentioned that the selected optimal position may vary abruptly and significantly. In order to maintain consistency and robustness, we update the location $(x^{*}, y^{*})$ with a damping factor $\gamma_1$:

$$(x_t, y_t) = (x_{t-1}, y_{t-1}) + \gamma_1\left[(x^{*}, y^{*}) - (x_{t-1}, y_{t-1})\right]$$

On the other hand, if the monitoring criterion indicates an ideal tracking circumstance, we tackle scale variation and aspect ratio change by incorporating scale proposals. First, the tracker generates numerous proposals centred at $P_{t-1}$. Second, we sort them by their objectness score and pick the top 200 candidates for further processing. A proposal rejection technique is then applied to filter out any proposal whose intersection over union (IoU) with the bounding box is smaller than 0.6 or larger than 0.9: samples whose overlap rate exceeds 0.9 are too similar to the current tracking result, while candidates whose IoU with the tracking result is lower than 0.6 are most likely false proposals. Afterwards, the remaining candidate samples of different sizes and aspect ratios are resized to a fixed size (normally the template's size) and their responses are computed as in Eq. (4) in the spatial domain. Similar to the detection procedure, we choose as the target's current size the optimal scale proposal which yields the maximum response. Meanwhile, the updating strategy is the same as the one used in the re-detection procedure, in order to guarantee that the target size changes smoothly.
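The following sketch illustrates the two proposal streams: re-detection with the motion-constrained objective, and IoU-based rejection of scale proposals. The `score` callable stands in for the correlation response of Eq. (4), and the value of `zeta` is an assumption; the IoU bounds (0.6, 0.9) and the top-200 cut follow the text.

```python
import numpy as np

# Proposal handling in the two-stream framework; boxes are (x, y, w, h).

def center(box):
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])

def iou(a, b):
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    return inter / (a[2] * a[3] + b[2] * b[3] - inter + 1e-12)

def select_redetection(cands, score, prev_box, win_diag, zeta=0.1):
    # c* = argmax f(c) + zeta * d(P_{t-1}, P_c); zeta is a hypothetical value
    def d(c):
        return np.exp(-np.sum((center(c) - center(prev_box)) ** 2) / win_diag ** 2)
    return max(cands, key=lambda c: score(c) + zeta * d(c))

def filter_scale_proposals(cands, cur_box, lo=0.6, hi=0.9, top_k=200):
    cands = cands[:top_k]                  # assumed sorted by objectness score
    return [c for c in cands if lo <= iou(c, cur_box) <= hi]

boxes = [(10, 10, 40, 40), (12, 11, 42, 38), (60, 60, 30, 30)]
prev = (11, 11, 40, 40)
pick = select_redetection(boxes, lambda c: iou(c, prev), prev, win_diag=80.0)
print("re-detected box:", pick)
print("kept scale proposals:", filter_scale_proposals(boxes, prev))
```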
Experiments
To evaluate the effectiveness and robustness of our algorithm, we empirically validate the proposed tracker on 5 challenging sequences from the Online Tracking Benchmark [12] against other state-of-the-art methods. These trackers can be broadly categorized into three classes: (1) baseline CF trackers, including KCF [3] and DSST [5]; (2) trackers using a region proposal method or a re-detection module, such as EBT [11] and LCT [6]; (3) other representative trackers reported in the Tracking Benchmark, such as Struck [13] and TLD [14].
Quantitative results
We evaluate all the trackers by adopting one common criterion: the overlap ratio. We denote the ratio

$$R = \frac{S(B_T \cap B_G)}{S(B_T \cup B_G)}$$

where $B_T$ denotes the tracking result and $B_G$ is the ground-truth bounding box, so $R$ indicates the IoU of the two boxes. The success rate is the percentage of frames with $R > t$, evaluated over all thresholds $t \in [0, 1]$. The average overlap rate is shown in Table 1; it demonstrates that our tracker outperforms the other state-of-the-art methods on these sequences. Fig. 3 illustrates qualitative results on challenging sequences compared with the other state-of-the-art trackers. Occlusion is a big challenge for visual tracking, as it destroys the holistic appearance of the target. We test 2 sequences (David3, Jogging2) with severe occlusion. One can see that only LCT, EBT and our tracker lock onto the target precisely, thanks to the monitoring indicator and detection module. It should be mentioned that even though TLD is able to re-detect the target, it is sensitive to similar distractors. In Dog1, the target undergoes a large scale change. Struck, TLD and KCF cannot adapt to such appearance variation, while DSST, LCT, EBT and our tracker follow this challenging state for the entire video, which can be attributed to their scale searching strategies (scale pyramid or scale-instance proposals). However, in the sequences with viewpoint change and aspect ratio variation (Freeman3, CarScale), the DSST and LCT trackers gradually drift from the target due to their inflexible pre-defined candidate sampling schemes. Moreover, even though the EBT tracker can handle aspect ratio change like ours, it can hardly recover from tracking failures caused by occlusion and fast appearance change. With objectness information, motion trajectory and tracking confidence all taken into consideration, our tracker deals with the above issues and performs better than the other trackers.
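For reference, the overlap-based evaluation reduces to a few lines; the per-frame IoU values below are a toy example.

```python
import numpy as np

# Success rate S(t) = fraction of frames with IoU > t, swept over t in [0, 1];
# the mean of this curve is the average overlap reported in benchmark tables.

def success_curve(ratios, thresholds=np.linspace(0.0, 1.0, 101)):
    ratios = np.asarray(ratios)
    return np.array([(ratios > t).mean() for t in thresholds])

R = [0.8, 0.75, 0.1, 0.9, 0.6]          # per-frame IoU values (toy example)
curve = success_curve(R)
print("AUC =", curve.mean())
```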
Conclusion
In this paper, an effective two-stream framework is presented to enable tracking condition monitoring, failure re-detection and scale adaptation. Specifically, we integrate the region proposal technique, which generates few yet high-quality candidate samples, into the well-known correlation tracking approach. With the consideration of tracking confidence and target objectness information, the proposed tracker performs favourably against other state-of-the-art trackers. | 2018-06-14T13:30:30.000Z | 2018-06-14T00:00:00.000 | {
"year": 2019,
"sha1": "84efa1995711871c4601896e19847d5004cf6c3f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1049/joe.2019.0307",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "84efa1995711871c4601896e19847d5004cf6c3f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
248505957 | pes2o/s2orc | v3-fos-license | On multi-soliton solutions to a generalized inhomogeneous nonlinear Schrödinger equation for the Heisenberg ferromagnetic spin chain
A generalized inhomogeneous higher-order nonlinear Schrödinger (GIHNLS) equation for the Heisenberg ferromagnetic spin chain system in (1+1)-dimensions under a zero boundary condition at infinity is considered. Spectral analysis is first performed to generate a related matrix Riemann-Hilbert problem on the real axis. Then, by solving the resulting matrix Riemann-Hilbert problem with the jump matrix taken to be the identity matrix, the general bright multi-soliton solutions to the GIHNLS equation are attained. Furthermore, the one- and two-soliton solutions are written out and analyzed by figures.
Introduction
Solitons are stable, nonlinear pulses which show a fine balance between nonlinearity and dispersion. They often arise from real physical phenomena described by integrable nonlinear partial differential equations (NLPDEs) modelling shallow water waves, nonlinear optics, electrical network pulses and many other applications in mathematical physics [1][2][3]. Both theoretical and experimental investigations [4][5][6] have been carried out on solitons. The derivation of abundant soliton solutions [7,8] to NLPDEs has attracted close attention from scholars in mathematics and physics, and a variety of approaches and their extensions have been established and applied to NLPDEs to date, such as the Hirota bilinear method [9,10], the Darboux transformation [11,12], the Riemann-Hilbert method [13][14][15], and the Lie symmetry method [16,17]. In the past years, a considerable literature has grown up around applications of the Riemann-Hilbert technique to solve integrable NLPDEs with zero or nonzero boundary conditions, including the coupled NLS equation [18], the Kundu-Eckhaus equation [19], the six-component fourth-order AKNS system [20], the multicomponent mKdV system [21], the coupled Hirota equation [22], and the fifth-order NLS equation [23].
In this paper, we focus on a generalized inhomogeneous higher-order nonlinear Schrödinger (GIHNLS) equation for the Heisenberg ferromagnetic spin system [26] in (1+1)-dimensions,

$$i q_t + \varepsilon\left(q_{xxxx} + 8|q|^2 q_{xx} + 2 q^2 q_{xx}^{*} + 4 q |q_x|^2 + 6 q^{*} q_x^2 + 6 |q|^4 q\right) + \left(\tfrac{1}{2} - 3\varepsilon\right) q_{xx} + (1 - 6\varepsilon)\, q^2 q^{*} - i h\, q_x = 0, \qquad (1)$$

where $q$ denotes a complex function of the scaled spatial variable $x$ and temporal variable $t$, the real number $\varepsilon$ is a perturbation parameter, the real number $h$ stands for the inhomogeneities in the medium [24,25], and the asterisk and subscripts denote complex conjugation and partial derivatives, respectively. Equation (1) is an integrable model. When $h = 0$, Eq. (1) reduces to a fourth-order NLS equation, which governs the Davydov solitons in the alpha-helical protein with higher-order effects [27]. Many studies have previously been conducted on Eq. (1): the Lax pair [24] was first presented; a gauge transformation was used to construct soliton solutions [26]; the generalized Darboux technique was applied to generate higher-order rogue wave solutions [28]; and, in a follow-up study, some solutions were computed by the Hirota bilinear method and infinitely many conservation laws were derived based upon the AKNS system [29].
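As a sanity check of the form of Eq. (1) typeset above (which is itself a reconstruction, so this verifies only that form), one can substitute a plane wave $q = A e^{i(kx - \omega t)}$ with real amplitude $A$ and solve for the dispersion relation symbolically:

```python
import sympy as sp

# Plane-wave consistency check of Eq. (1) in the reconstructed form above.
x, t, k, A, eps, h = sp.symbols("x t k A epsilon h", real=True)
w = sp.Symbol("omega", real=True)

q = A * sp.exp(sp.I * (k * x - w * t))
d = sp.diff

expr = (sp.I * d(q, t)
        + eps * (d(q, x, 4)
                 + 8 * q * sp.conjugate(q) * d(q, x, 2)
                 + 2 * q**2 * sp.conjugate(d(q, x, 2))
                 + 4 * q * d(q, x) * sp.conjugate(d(q, x))
                 + 6 * sp.conjugate(q) * d(q, x)**2
                 + 6 * (q * sp.conjugate(q))**2 * q)
        + (sp.Rational(1, 2) - 3 * eps) * d(q, x, 2)
        + (1 - 6 * eps) * q**2 * sp.conjugate(q)
        - sp.I * h * d(q, x))

# expr is proportional to q, so expr/q is polynomial in omega, k, A, eps, h
dispersion = sp.solve(sp.simplify(expr / q), w)
print(sp.expand(dispersion[0]))
```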
The rest of the paper is arranged as follows. In Section 2, we formulate a matrix Riemann-Hilbert problem by carrying out the spectral analysis and obtain the reconstruction formula for the potential. In Section 3, we obtain soliton solutions from a specific Riemann-Hilbert problem on the real axis, in which the jump matrix is taken to be the identity matrix. The final section is a brief conclusion.
Matrix Riemann-Hilbert problem
What we intend to describe in this section is a matrix Riemann-Hilbert problem. We start by considering the Lax pair [28] for Eq. (1),

$$\psi_x = U\psi, \qquad \psi_t = V\psi,$$

where $\psi = (\psi_1, \psi_2)^{\mathrm{T}}$ is the spectral function, the symbol $\mathrm{T}$ stands for the vector transpose, and $\lambda \in \mathbb{C}$ is a spectral parameter. In our analysis, we suppose that the potential vanishes rapidly at infinity. Introducing a suitable exponential transformation $J = \psi\, e^{i\lambda \Lambda x}$, with $\Lambda = \mathrm{diag}(1, -1)$, converts the $x$-part of the Lax pair into

$$J_x = -i\lambda\,[\Lambda, J] + QJ,$$

where the square brackets denote the usual matrix commutator and $Q$ is the potential matrix. In what follows, this spectral problem will be analyzed, and $t$ will be treated as a constant. We represent the matrix Jost solutions $J_{\pm}(x, \lambda)$ with the boundary conditions

$$J_{\pm} \to I_2, \qquad x \to \pm\infty,$$

where the subscripts of $J$ refer to which end of the $x$-axis the boundary conditions are imposed at, and $I_2$ is the identity matrix of size 2. Using these boundary conditions, one obtains the Volterra-type integral equations

$$J_{\pm}(x, \lambda) = I_2 + \int_{\pm\infty}^{x} e^{-i\lambda \Lambda (x - \xi)}\, Q(\xi)\, J_{\pm}(\xi, \lambda)\, e^{i\lambda \Lambda (x - \xi)}\, d\xi.$$

From these equations, we find that $[J_+]_1$ and $[J_-]_2$ are analytic for $\lambda \in \mathbb{C}_-$ and continuous for $\lambda \in \mathbb{C}_- \cup \mathbb{R}$, while $[J_-]_1$ and $[J_+]_2$ are analytic for $\lambda \in \mathbb{C}_+$ and continuous for $\lambda \in \mathbb{C}_+ \cup \mathbb{R}$, where $\mathbb{C}_-$ and $\mathbb{C}_+$ are the lower and upper half $\lambda$-planes. Applying Abel's identity, we see that $\det J_{\pm}$ are independent of $x$, since $\mathrm{tr}\, Q = 0$.
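Numerically, the Jost solutions can be obtained by integrating the $x$-part of the Lax pair from the boundary. The sketch below assumes the standard AKNS form written above, with an illustrative sech potential and spectral value, and checks that $\det J$ stays equal to 1 (Abel's identity).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Jost solution J_-(x, lam) of J_x = -i*lam*[Lambda, J] + Q J, J -> I as
# x -> -infinity, for a sample rapidly decaying potential.

Lam = np.diag([1.0, -1.0])

def Q(x):
    qx = 2.0 / np.cosh(2.0 * x)                        # illustrative potential
    return np.array([[0.0, qx], [-np.conj(qx), 0.0]])  # anti-Hermitian, tr Q = 0

def rhs(x, y, lam):
    J = y.reshape(2, 2)
    dJ = -1j * lam * (Lam @ J - J @ Lam) + Q(x) @ J
    return dJ.ravel()

def jost_minus(lam, L=20.0):
    y0 = np.eye(2, dtype=complex).ravel()              # boundary value at x = -L
    sol = solve_ivp(rhs, (-L, L), y0, args=(lam,), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

J = jost_minus(0.5 + 0.5j)
print("det J_-(+L) =", np.linalg.det(J))               # stays ~ 1 + 0j
```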
A matrix Riemann-Hilbert problem is associated with two matrix analytic functions. In view of the analytic properties of $J_{\pm}$, the matrix function analytic in $\mathbb{C}_+$ is constructed as $P_1 = \left([J_-]_1, [J_+]_2\right)$. Because $P_1$ solves the spectral problem, we make an asymptotic expansion of $P_1$ at large $\lambda$ and substitute it into the spectral problem; comparing the coefficients of like powers of $\lambda$ shows that $P_1(x, \lambda) \to I_2$ as $\lambda \in \mathbb{C}_+ \to \infty$. For the construction of a matrix Riemann-Hilbert problem, we still need the analytic counterpart $P_2$ in $\mathbb{C}_-$.
Constructing $P_2$ from the rows of the inverse Jost solutions $J_{\pm}^{-1}(x, \lambda)$ in the same manner, one finds that $P_2$ is analytic in $\mathbb{C}_-$ and $P_2(x, \lambda) \to I_2$ as $\lambda \in \mathbb{C}_- \to \infty$. Having presented the two matrix functions $P_1$ and $P_2$, analytic in $\mathbb{C}_+$ and $\mathbb{C}_-$ respectively, a matrix Riemann-Hilbert problem on the real axis can be formulated:

$$P^+(x, \lambda) = P^-(x, \lambda)\, G(x, \lambda), \qquad \lambda \in \mathbb{R},$$

in which $P_1 \to P^+$ as $\lambda \in \mathbb{C}_+ \to \mathbb{R}$ and $P_2 \to P^-$ as $\lambda \in \mathbb{C}_- \to \mathbb{R}$, and the canonical normalization conditions are $P^{\pm}(x, \lambda) \to I_2$ as $\lambda \to \infty$. Next, we present the reconstruction formula for the potential. Since $P_1(x, \lambda)$ solves the spectral problem, expanding

$$P_1(x, \lambda) = I_2 + \lambda^{-1} P_1^{(1)}(x) + O(\lambda^{-2}), \qquad \lambda \to \infty,$$

and inserting this expansion into the spectral problem expresses the potential through the off-diagonal entries of $P_1^{(1)}$. By now, we have achieved the reconstruction of the potential.
Soliton solutions
To calculate soliton solutions to Eq. (1), we assume that $\det P_1(\lambda)$ and $\det P_2(\lambda)$ can be zero at certain discrete locations in their analytic domains. Based on $\det J_{\pm} = 1$ and the scattering relation, we find that $\det P_1(\lambda) = s_{11}(\lambda)$ and $\det P_2(\lambda) = \hat{s}_{11}(\lambda)$; that is, $\det P_1(\lambda)$ and $\det P_2(\lambda)$ have the same zeros as $s_{11}(\lambda)$ and $\hat{s}_{11}(\lambda)$, respectively. We now need the locations of these zeros. Notice that the potential matrix satisfies the anti-Hermitian property $Q^{\dagger} = -Q$, where $\dagger$ denotes the Hermitian conjugate.
Taking the Hermitian conjugate in the construction of $P_1$ and using the anti-Hermitian property, we find that each zero $\lambda_k$ of $\det P_1$ produces a zero $\hat{\lambda}_k = \lambda_k^{*}$ of $\det P_2$. Let $N$ be a free natural number. Generally, we assume that $\det P_1$ and $\det P_2$ have simple zeros at $\lambda_k \in \mathbb{C}_+$ and $\hat{\lambda}_k = \lambda_k^{*} \in \mathbb{C}_-$, $1 \le k \le N$, respectively. In this case, the kernels of $P_1(\lambda_k)$ and $P_2(\hat{\lambda}_k)$ each contain a single basis column vector $v_k$ or row vector $\hat{v}_k$:

$$P_1(\lambda_k)\, v_k = 0, \qquad \hat{v}_k\, P_2(\hat{\lambda}_k) = 0.$$

Taking the Hermitian conjugate of the first relation and using the symmetry shows that $\hat{v}_k = v_k^{\dagger}$. Computing the $x$-derivative and $t$-derivative of these kernel relations and using the Lax pair yields the space-time dependence of the vectors $v_k$, up to constant vectors $v_{k,0}$ (and $v_{k,0}^{\dagger}$). For soliton solutions, we consider the reflectionless case, namely $G(x, \lambda) = I_2$. The resulting special Riemann-Hilbert problem [30] can be solved in closed form. Combining the established results with $v_{k,0} = (a_k, b_k)^{\mathrm{T}}$ and $\theta_k = -\lambda_k x + \left(8\varepsilon\lambda_k^4 + 6\varepsilon\lambda_k^2 - 2\lambda_k^2 - h\lambda_k\right) t$, the general $N$-soliton solution to Eq. (1) can be written out. In what follows, we discuss the one-, two-, and three-soliton solutions graphically.
(ii) For $N = 2$, the two-soliton solution is given by the general formula. Through assuming $a_1 = a_2$ and $b_1 = b_2 = 1$, the solution (29) takes a compact form. In order to show the interaction behaviors between two solitons, some graphs are plotted and two cases are considered here.
We first consider the case of two solitons traveling at different velocities. In this case, the solution parameters in (30) are first chosen as $a_1 = 1$, $a_2 = 1$, $\lambda_1 = \tfrac{1}{10} + 3i$, $\lambda_2 = \tfrac{1}{10} + 2i$, $\varepsilon = 1$, $h = 1$. According to these values, some plots are made to shed light on the localization and dynamical behaviors. Figure 2(a) clearly shows the localized structure of this solution in the $(x, t)$-plane, a typical crossing of two bright solitons. It can be observed that an overtaking collision between the solitons takes place, as depicted in Fig. 2(b), where two solitons with different velocities move together in the same direction along the $x$-axis. The (taller) soliton with the larger amplitude travels much faster than the other (shorter) soliton with the smaller amplitude, and the taller soliton catches up with the shorter one over time. Both solitons then continue to proceed in the same direction. At the moment $t = 0$, the amplitude of the two solitons reaches its maximum. Before and after the collision, their speeds and shapes are unchanged; in other words, the overtaking is an elastic interaction.
In Fig. 3, we show the head-on collision between two solitons, with the parameters chosen as $a_1 = 1$, $a_2 = 1$, $\lambda_1 = \tfrac{1}{10} + 2i$, $\lambda_2 = \tfrac{1}{6} + 3i$, $\varepsilon = 1$, $h = 1$. The taller soliton crashes into the shorter one moving in the opposite direction along the $x$-axis. After the collision, their amplitudes, widths, speeds and directions are the same as before, except for phase shifts; see Fig. 3(b). Evidently, the head-on interaction of two solitons is also elastic.
With regard to the second case, we consider two solitons traveling at the same speed. The solution parameters in (30) are specified as $a_1 = 1$, $a_2 = 1$, $\lambda_1 = 3i$, $\lambda_2 = 2i$, $\varepsilon = 1$, $h = 1$. The bound state of two solitons is shown in the $(x, t)$-plane in Fig. 4, in which the two solitons are localized spatially and keep together in propagation.
Indeed, this solution represents a breather: as the two solitons propagate, the amplitude function oscillates periodically in time. Following similar lines as our discussion of two solitons, we now examine the dynamics among three solitons. The parameter values in (31) are first given by $a_1 = a_2 = a_3 = 1$, $b_1 = b_2 = b_3 = 1$, $\lambda_1 = \tfrac{1}{10} + 2i$, $\lambda_2 = \tfrac{1}{10} + \tfrac{2}{3}i$, $\lambda_3 = \tfrac{1}{10} + 3i$, $\varepsilon = 1$, $h = 1$. Based on these values, a special solution is obtained at once, and the velocity relation for the three solitons is $v_1 < v_2 < v_3$. Here we denote the solitons from left to right in Fig. 5(a) as $S_1$, $S_2$ and $S_3$. Figure 5 presents an elastic overtaking process among three solitons moving together in the negative direction of the $x$-axis. As time evolves, $S_2$ overtakes $S_1$, and $S_3$ overtakes $S_1$ and $S_2$. When $t = 0$, the peak amplitude is maximal. Then, we take the parameters as $a_1 = a_2 = a_3 = 1$, $b_1 = b_2 = b_3 = 1$, $\lambda_1 = \tfrac{1}{10} + 2i$, $\lambda_2 = \tfrac{1}{6} + 3i$, $\lambda_3 = \tfrac{1}{8} + i$, $\varepsilon = 1$, $h = 1$. Denoting the solitons from left to right in Fig. 6(a) as $S_1$, $S_2$ and $S_3$, respectively, it is found in Fig. 6 that $S_1$ moves in the positive direction of the $x$-axis, opposite to the propagation direction of $S_2$ and $S_3$. As time goes on, $S_1$ collides head-on with $S_2$ and $S_3$, while $S_3$ overtakes $S_2$. After the interactions, the three solitons $S_1$, $S_2$ and $S_3$ continue to move along their original directions. Both the head-on and overtaking interactions in this process are elastic. Additionally, the head-on interaction of two solitons in a bound state with another soliton during propagation can be observed in Fig. 7, and Fig. 8 shows the evolution of a bound state of three solitons.
Conclusion
In this study, a generalized inhomogeneous higher-order nonlinear Schrödinger equation for the Heisenberg ferromagnetic spin chain system in (1+1)-dimensions with a zero boundary condition was considered.
A matrix Riemann-Hilbert problem was built, based on which multi-bright-soliton solutions to the examined equation were derived. Moreover, the explicit forms of the one-, two-, and three-bright-soliton solutions were given, and a few vivid plots were made to exhibit their spatial structures in three dimensions and their dynamical behaviors in two dimensions, after specifying the parameter values properly with the aid of the Maple software.
Data availability
Our manuscript has no associated data. | 2022-05-04T01:15:51.049Z | 2022-05-03T00:00:00.000 | {
"year": 2022,
"sha1": "1ec0ad9a367f2d46247c1a3619ebacaf7dffe427",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2205.01318",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "abf6faae5f8fde1159b02570864d23225edd7730",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
261005789 | pes2o/s2orc | v3-fos-license | Delay-differential SEIR modeling for improved modelling of infection dynamics
SEIR (Susceptible–Exposed–Infected–Recovered) modeling is a classic approach frequently used to study infectious diseases. However, in the vast majority of such models, transitions from one population group to another are described using the mass-action law. This makes it impossible to reproduce observable features of infection dynamics such as the incubation period or the progression of the disease's symptoms. In this paper, we propose a new approach to simulating epidemic dynamics based on a system of differential equations with time delays and instant transitions, which approximates the durations of transition processes more correctly and makes the model parameters clearer. The suggested approach can be applied not only to COVID-19 but also to the study of other infectious diseases. We utilized it in the development of delay-based models of the COVID-19 pandemic in Germany and France. The models take into account the testing of different population groups, the progression of symptoms from mild to critical, vaccination, the duration of protective immunity and new virus strains. The Stringency Index was used as a generalized characteristic of the non-pharmaceutical government interventions in the corresponding countries to contain the virus spread. The parameter identifiability analysis demonstrated that the presented modeling approach makes it possible to significantly reduce the number of parameters and to make them more identifiable. Both models are publicly available.
Despite the fact that initial results of the numerical study of SEIR models played an essential role in determining both the basic laws of the early development of the COVID-19 pandemic and the core characteristics of the current pandemic situation, the overwhelming majority of models of this type use mass-action laws to describe the transitions between states. Because of that, such models cannot always adequately reproduce the dynamics of these transitions. This methodological constraint of SEIR models can be resolved by using delayed differential equations, which are able to explicitly capture the durations of the latent, quarantine and recovery periods 20,21. Thus, Shayak and coauthors numerically investigated the simplest retarded logistic equation with time delay to model the spread of COVID-19 in a city and demonstrated that the solution of the model is significantly sensitive to small changes in the parameter values 22. At the same time, more conventional SEIR-based delay differential equation models were proposed to reproduce the COVID-19 dynamics in Germany, China, South Korea, India and Japan [23][24][25][26] and to predict the epidemic dynamics in Italy and Spain when the epidemic was in its early stages. However, these models did not take into account asymptomatic carriers and non-tested subpopulations, nor the progression of the disease's severity.
Herein, we propose a novel mathematical model based on the model developed by 16, using differential equations with weighted sums of delayed arguments mixed with instant processes, which allow us not only to model transition processes adequately with respect to clinically observed data, but also to directly quantify the proportions of hospitalized patients with moderate and severe symptoms, patients in ICU, and the asymptomatic, tested and untested among them, which can be compared with the available statistics. The main goal of the study is to present a new approach to modelling epidemiological processes, combining delay-differential terms and instant processes which may be fitted separately from the rest of the model. This approach reduces the number of model parameters and makes them more epidemiologically interpretable and more identifiable compared to the classic SEIR approach. The results of the numerical analysis and model validation are demonstrated on the example of two European countries, Germany and France.
Results
Model structure. The final version of the proposed delay differential equation (DDE) model consists of the following subpopulations or groups (Fig. 1): 1. S-susceptible to the SARS-CoV-2 virus. 2. V-vaccinated subpopulation, considered to be immune to the virus.
3. E-exposed to the virus. After the incubation period, they transit either to the asymptomatic or to the symptomatic group. Here we do not use additional subgroups, because the time intervals for both transitions are equal. 4. A-asymptomatic individuals, who will recover over time but can infect others 27. 5. I-mild symptomatic group. It comprises three subgroups: individuals with onset of symptoms ( I_O ), who are instantly divided into those who will recover ( I_R ) and those who will progress to severe symptoms ( I_H ).
The transition is done according to the fraction of severely symptomatic among those who show any symptoms ( F_H ). 6. H-severe symptomatic group, which comprises four subgroups: those with just-onset severe symptoms ( H_O ) instantaneously transit into subgroups of individuals who will eventually recover ( H_R ), die ( H_D ) or progress to critically ill ( H_C ). Transitions are performed according to the disease lethality ( F_D ) and the fraction of critically ill ( F_C ). 7. C-critically ill group, where an ICU is required in order to recover; if no ICU is available, these patients will die. All critically ill patients are considered to be automatically tested for the virus infection. 8. R-recovered from COVID-19.
Figure 1. Overall SEIR-like model with instant and delayed processes. All abbreviations of the population groups are described in the main text.
9. D-deceased due to COVID-19. 10. All infected subgroups (except critically ill) also have "registered" or "tested for COVID-19" counterparts: A_T, E_T, I_T, H_T, R_T, D_T. In the model, patients may be tested at three different stages: (1) when being exposed to the virus, through contact tracing procedures; the percentage of those exposed to the virus who will be tested and registered is set by the T_E parameter; (2) upon symptom onset; the percentage of mildly symptomatic individuals who will be registered is given by the parameter T_I; (3) upon severe symptom onset; the percentage of severely symptomatic individuals who will be registered is given by the parameter T_H.
Most transitions in the model are described as either instant processes or as preliminarily fitted processes (blue and green arrows, respectively, in Fig. 1). To fit the delayed processes, we used data from 28 for the incubation period and from 29 for the other epidemiological processes.
The process of release from hospital may be fitted using data provided by Our World in Data for France on recovery/death in hospitals. To this end, we constructed a partial model describing the processes of hospital admission, transition to ICU and leaving hospital (Fig. 2). Given the daily numbers of hospital admissions, ICU admissions and hospital patients, we fitted the process of leaving hospital utilizing formula (3) (see "Methods"). It should be noted that we assume that all severely ill patients are tested and moved to hospital. However, this is not always the case and should be addressed in an updated version of the model.
The overall scheme of the hospitalization model in SBGN format, as well as the result of fitting the model to statistical data on hospitalization in France, is presented in Fig. 2. It should be noted that the hospital stay duration is assumed to be the same for the whole pandemic duration and not dependent on the virus strain.
There are also four transitions in the model treated differently: 1. Duration of the protective immunity. We assume that the immunity acquired either after recovery or vaccination lasts 180 days, according to the average experimental evaluations 30,31. 2. Vaccination. Based on the known statistical data, we established the number of individuals vaccinated each day in a certain country. For model purposes, we allow vaccination only for susceptible individuals (either never infected before or those who lost their immunity over time). The kinetic law for this process is zeroth-order: dV/dt = k_V, with k_V changing each day based on the tabular data for the given country. We also assume that vaccines have 100% efficiency immediately after the first dose, and the immunity via vaccination declines according to the duration period (see item 1). 3. ICU admittance. We considered this process to require a free ICU bed and to be instant in most cases. However, a lower value of the kinetic constant may be used to reflect the fact that not everyone who needs an ICU gets it. 4. The infection process. The transition from susceptible to exposed is defined using the Total Infection Coefficient (TIC), which we calculated similarly to the model described in 16 with modifications (Supplementary material, S1-S6). We have divided the overall time duration into four intervals or waves. The values of some model parameters were changed between waves to reflect changes in the pandemic progression. The most significant changes were made to the infection coefficient, which causes a spike in new cases and reflects the spread of new, more contagious SARS-CoV-2 variants [33][34][35][36].
1. The first wave: this interval starts somewhere in January 2020. From this time point, infected individuals started to arrive in the country in significant numbers. Patient zero in Germany entered the country on 20 January and was registered on 27 January 37. We assumed in the model that the import of infection to Germany began on 20 January. This import ended on 16 March 2020 (t = 76), when the European Union as a whole announced the closure of all its external borders to non-citizens 38. We assumed the import rate to be linearly increasing during that time period. The maximum number of imported infected individuals was estimated at 500 individuals per day just before the borders were closed.
For France, the first case was identified on 24 January. However, individuals infected by SARS-CoV-2 were present as early as December 2019, according to some sources 39,40. Unfortunately, we do not have data on how many infected individuals arrived in France or Germany before the borders were closed on 16 March 2020. We kept the same number of persons per day for France as for Germany and estimated the start of infection import to France to be 15 January.
2. The second wave: starting from summer 2020, the number of new cases began to rise again, implying a second epidemic wave in the region with many more registered cases. It may be attributed to a new European strain (EU1) emerging in both countries; however, its transmissibility is considered to be the same as that of the original variant 41. Another reason is the relaxation of anti-epidemic restrictions, which can be traced by lower levels of the Stringency Index. In both models it caused a second wave in accordance with the existing statistics. Despite the fact that the number of cases is much higher than during the first wave, the number of hospitalized patients is roughly the same. In both models this was reflected by changing the fraction of severely ill among symptomatic patients, F_H. The new value was fitted according to the statistical data on patients in hospitals and ICU. The new values were set in the model at time = 200 (20.07.2020), which agrees well with the appearance of the new strain in both countries according to the covariants.org web site. However, it should also be noted that the number of cases in the first pandemic wave could be dramatically underestimated due to limitations on testing, compared to the further deployment of the testing system. To estimate the correct number of initial cases, data on seroprevalence during the first wave are required, which are also limited.
3. The third wave: starting from early 2021, a new significant rise in the number of cases begins in both countries. It may be connected with the spread of new strains of the virus. Indeed, a new virus lineage with additional mutations in the spike region, the B.1.1.7 strain 42, was rapidly spreading in some European countries in this time period. This Alpha strain is much more contagious and has an increased mortality rate according to cohort studies [33][34][35]. This was modeled by multiplying all probabilities of being infected upon contact by the same multiplier. The multiplier's value was assumed to be the same for both countries and fitted to be 1.6. Thus, the probability of being infected upon contact is 60% larger for the new variant, which is consistent with the estimated range of the transmissibility of the Alpha strain compared to the predecessor lineage 43. The fraction of severely ill, F_H, was not changed at that time, as the previous value still agreed well with the statistical data. The start of the third wave in the model was fitted for the two countries on two different dates. Here it should be noted that the obtained profile of new cases for France does not agree well with the statistical data from ourworldindata.org.
4. The fourth wave starts in June 2021 and can be attributed to another virus variant, B.1.617.2 (Delta), which is significantly more contagious than the previous ones. The new infection coefficient was fitted to be 2.3 times larger than for the Wuhan strain, which agrees with estimates in the published data 44,45. The fraction of severely ill, F_H, was fitted to be even lower than for the two previous strains, to reflect the lowered ratio of hospitalized to registered patients. Although the risk of hospital admission for COVID-19 was approximately doubled in patients with the Delta compared to the Alpha strain 36, the overall hospital admissions involving COVID-19 in 2021 were significantly lower than in 2020 (e.g., see ONS data on COVID-19 latest insights: Hospitals, 9 June 2022). Apparently, the increased number of vaccinated people, the protective effectiveness of the developed vaccines in preventing SARS-CoV-2 infections, as well as the much greater proportion of the population that had recovered from COVID-19 by the moment of the Delta variant's emergence in 2021, can explain the decline of this ratio compared to the first pandemic year. Once again, the time point at which the new parameter values were introduced into the model was fitted for the two countries on two different dates to obtain the required profiles of new cases.
The four waves described cover the first two years of the pandemic. As can be seen from the simulation results (Figs. 3 and 4), the model accurately reproduces the reported new cases per week and the total number of cases, as well as the number of hospitalized patients, patients in ICU and total deaths in each country over the first two years of the pandemic.
Sources used for deriving model parameters include: data on the average number of contacts between individuals 46, the fatality rate in ICU 47,48, and the transmissibility of the original virus strain [49][50][51]. Particular values and ranges of parameters may be found in Supplementary material 1.
Automatic model generation for other countries. A consequent development of our approach is the automatic generation of epidemiological models for other countries (both European and non-European). Most of the model processes are fixed by fitting weighted sums of delays (see Methods). Because of that, the number of parameters whose values should be estimated is significantly lowered. That, in turn, allowed us to carry out the estimation procedures automatically for other countries using the statistical data provided by Our World in Data. It is worth noting that the generated models are preliminary and a further fine-tuning procedure is required to quantitatively reproduce the observed epidemiological dynamics. However, this provides fast generation of an initial version of the epidemiological model for a certain country.
Automatic model generation goes as follows: 1. A copy of the base model is created. 4. Parameters of the generated model are estimated so that the weekly new cases simulated by the model agree with the statistical data for the particular country. At the current stage, we estimated the model parameters for the first year of the epidemic in each country. The parameters whose values were estimated are presented in Supplementary material 2 (Table S1).
Simulation results of the automatically generated and fitted models for 12 countries are presented in Supplementary material 2 (Figures S1-S4). It is worth emphasizing that the simulation results agree quite well with statistical data for most European countries, while optimization of the models describing COVID-19 epidemiology in non-European countries like Brazil and Argentina was not able to adequately reproduce the observed trajectory of the epidemics. For example, one can see that the number of total cases in South Korea agrees well with real data, but the model failed to reproduce the first two waves of the pandemic. That is due to the fact that those waves are short and represented by only a few data points (2-3 weeks), and the automatic parameter estimation ignores these small waves. Probably, manual model fitting and optimization are required in those cases. Of course, the generated models are not a final in silico tool to describe and predict epidemics even in European countries, but they may serve as a basis for models of the COVID-19 epidemic in the corresponding countries.
Discussion
We have proposed a methodology to overcome some shortcomings of classic SEIR-based epidemiological models via a novel epidemiological model which utilizes DDEs to take into account the different time scales of epidemiological processes and an instantaneous splitting procedure to describe competing processes. The essential benefits of the developed model are: 1. Most of the epidemiological processes (symptom onset, recovery, dying, etc.) are described using kinetic laws with delayed arguments. These processes can be fitted separately from the rest of the model, and the applied kinetic laws reproduce the real properties of those processes more precisely than mass-action laws with a single parameter. 2. If a model has two or more competing transitions, a division into separate subpopulations removes their undesired mutual influence and allows simulating fast and slow transitions with correct fractions of patients undergoing each transition. 3. Model parameters that are not parts of the previously fitted processes have a direct mechanistic meaning (i.e., disease lethality, susceptibility, probability of different symptom severity) and can be drawn from statistics.
4. The overall decrease in the number of parameters makes them more identifiable, as demonstrated in the comparison between the delay-based and classic models in this study. It also allows for faster and simpler adaptation of the model to other regions and countries.
The combination of these advantages makes the model more consistent with the real properties of the pandemic and eliminates most of the abstract parameters usually used in SEIR-like models. It is worth noting that there are other studies addressing these issues using the Erlang distribution 52 and delayed equations 53. However, to our best knowledge, ... with the virus and importing infected individuals into the modeling region or country. In particular, parameters describing the fractions of different symptom severities should be correctly attributed to the patient's age, which requires an extension of the model. One of the crucial issues in COVID-19 epidemiological modeling is to correctly transfer government NPIs (limits on mass gatherings, lockdown, curfew, etc.) into model parameters. One can easily see that introducing a "social distance" multiplier to the infection rate and fitting its value to experimental data makes it possible to reproduce almost any observable epidemiological trajectory. In order to tackle this problem, we utilized the Stringency Index 32 instead of trying to fit the "social distance" factor over the course of the pandemic in a certain country. However, we still have quite abstract aggregated numerical values of the indicator. Thus, a further step for the model development in this direction is to use individual components, or NPIs, of the Stringency Index and attempt to assess their individual effects on the epidemic dynamics.
It should be noted that the main focus of the study is to present a novel, as far as we know, combination of delay-differential terms and instant processes which may be fitted separately from the rest of the SEIR model in order to simulate the pandemic. However, this approach and the developed model of the COVID-19 epidemic have some constraints and limitations. Firstly, the delay-differential version of the SEIR model, like others, assumes population homogeneity, which can be overcome only by application of an agent-based modeling approach. Secondly, the COVID-19 pandemic shows wave-like behavior in any country, and the model analysis and fitting show that the predictive power of this type of SEIR model, like others, is low: the COVID-19 trajectory cannot be predicted by the pre-fitted model without additional modifications taking into account the emergence of new viral strains with different epidemiological characteristics, and the consequent refitting of the model. So we associate each wave with the appearance and spread of new, more contagious variants of the virus and model it by changing the infection parameters. However, this may be modeled more correctly using a modified version of the model with two or more similar modules, each containing a full disease progression scheme and accounting for a separate strain. This is also part of our roadmap for the development of the model. The model fitting to statistical data for both countries demonstrated a decrease of the infection fatality rate during the COVID-19 epidemic, which corresponds to early statements that the fatality rate of the Delta (B.1.617.2) variant of COVID-19, for example, is lower than that of the original variant. However, this might be caused by the age of the unvaccinated people who were infected by the Delta virus strain and hospitalized with severe symptoms. According to the report published by Public Health England 54, for instance, the majority of COVID-19 cases caused by the Delta variant were detected in people under 50 years old in the UK, who are less likely to die from COVID-19 compared to those older than 50. So the current comparison of the case fatality rate of the B.1.617.2 variant with that of the wild-type virus is biased due to the vaccination strategies in some European countries and the age stratification of the population. This statistical bias indicates the necessity to specify the developed model for each age group and to integrate the groups into a more complex DDE model considering the age distribution in a certain country. In addition, the vaccination scheme is oversimplified in the current version of the model and does not consider essential factors like two-stage and booster vaccinations, vaccine efficiency against different strains (with regard to protective immunity against infection and against more serious symptoms), vaccination of the asymptomatic cohort, and the different strategies or vaccine campaigns specific to a particular country. So the roadmap for the extension and further development of the model includes the following biological aspects of the virus and the epidemiology of the COVID-19 pandemic: • Age-specific modules (infection and death rates, hospitalization and severity of the disease, B and T cell response) according to the statistical data 55. • Explicit strain emergence with other viral indicators like infectivity, resistance to neutralization, and vaccine effectiveness in the population 36,[56][57][58][59][60][61][62]. • Waning B and T cell immunity and neutralization activity of specific antibodies [63][64][65][66].
• Different vaccination strategies and their effectiveness against infection and severe outcomes for emerging viral strains.
Methods
SEIR-like model. The overwhelming majority of SEIR-like models use the mass-action kinetic law (mainly, the first-order rate law) for transitions between different stages of a disease (e.g. between the exposed and infectious periods). Explicit drawbacks of this approach are that: 1. The parameters of these reactions are quite abstract and cannot be easily related to real biological characteristics of the virus. 2. The model fails to correctly represent processes delayed in time.
Here we will try to address these two problems. In the current study we use SBGN (Systems Biology Graphical Notation) 68 for the visual representation of mathematical models. Let us consider a SEIR-like model with two levels of symptom severity (Fig. 5).
Duration of processes.
A numerical value of the parameter α in the model (1) is related to the median incubation period in the population. For example, if we set α = ln(2)/5.1, then 50% of individuals who were exposed to the virus at time t = 0 will have become symptomatic by time t = 5.1 days.
The distribution of the incubation period in this model, compared to the experimental data from 28, is presented in Fig. 6A. One can easily observe that it is inconsistent with the statistical data on the incubation period. For example, according to the first-order model, more than 10% of the infected have their symptom onset within one day of infection, which does not match the data.
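The mismatch is easy to quantify: under the first-order law the fraction of a cohort that is symptomatic by time t is 1 − exp(−αt), which can be evaluated directly.

```python
import numpy as np

# First-order (mass-action) incubation model (1): fraction symptomatic by
# time t is F(t) = 1 - exp(-alpha * t), alpha = ln(2)/5.1 (5.1-day median).
alpha = np.log(2) / 5.1

for day in (1, 5.1, 14):
    print(f"day {day}: {1 - np.exp(-alpha * day):.3f}")
# day 1 already gives ~0.13, far above the clinically observed early-onset
# fraction, so a single rate constant cannot fit the distribution.
```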
Unfortunately, having only one parameter α, we cannot fit the curve to these statistical data. This is also the case for other processes delayed in time with a certain distribution of durations.
A possible solution to overcome the issue is the use of a different form of the kinetic law. Herein, we use a weighted sum of the delayed numbers of exposed individuals, so that the transition rate takes the form

$$v(t) = \sum_{j=1}^{m} k_j\, E(t - \tau_j).$$

This gives 2m parameters (the weights k_j and the delays τ_j) which can be estimated to reproduce experimental data. Typically, m = 1 or m = 2 is enough to fit the data comprehensively, keeping the number of parameters reasonably low.
For example, to fit the data from 28 we employ only two delays. Simulation results of the incubation period model, demonstrating the differences between the theoretical curves obtained using the two methodologies, are presented in Fig. 6.
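One consistent way to realize such a delayed transition numerically is to convolve the exposure inflow with the weighted delays, so that individuals exposed at time s develop symptoms at s + τ_j with probability w_j. The weights and delays below are illustrative placeholders rather than the fitted values.

```python
import numpy as np

# Delayed transition realized as a convolution of the exposure inflow with the
# weighted delays; the values of w and tau here are hypothetical.
w   = np.array([0.45, 0.55])        # split between two delays, sums to 1
tau = np.array([4.0, 7.0])          # delays in days

dt, T = 0.1, 30.0
n = int(T / dt)
inflow = np.zeros(n)
inflow[0] = 1000.0                  # one cohort exposed at t = 0

onset = np.zeros(n)                 # symptom-onset flux
for wj, tj in zip(w, tau):
    shift = int(round(tj / dt))
    onset[shift:] += wj * inflow[:n - shift]

cum = np.cumsum(onset)
print("symptomatic by day 5: ", cum[int(5 / dt) - 1] / 1000.0)   # 0.45
print("symptomatic by day 10:", cum[int(10 / dt) - 1] / 1000.0)  # 1.00
```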
Another benefit of the time-delay based approach is the opportunity to reproduce experimental data on diverse epidemiological processes (incubation period, recovery, worsening of symptoms from mild to severe, and others) once and separately from the rest of the model structure, based only on the known distributions of the durations of those processes for the particular infectious disease.
Competing processes. Another opportunity to improve the original model and bring it closer to reality is to consider different possible transitions from the same subgroup. For example, infected patients may either recover or progress to severe symptoms (Fig. 7). In that case, the parameters δ and γ1 are related not only to the durations of the corresponding processes but also to the recovery rate for the mildly symptomatic (or the fraction of severely symptomatic among the mildly symptomatic). An issue with two alternatives arises when the faster process has lower probability and therefore a smaller fraction of patients involved in this direction of the infectious process. According to the statistics 29, the process of worsening of symptoms (median time 5 days) is faster than recovery (median time 14 days), implying that the δ value should be larger than the γ1 value. As in the previous subsection, we may set these parameters as δ = ln(2)/5, γ1 = ln(2)/14. However, in that case (Fig. 7) the model will show that the fraction of patients who transit to severe symptoms is larger than the fraction who recover, which is not the case in reality 69.
The possible solution to overcome the discrepancy is to consider these two processes separately. First, patients instantaneously transit from "Symptomatic" to "Symptomatic who will recover in the future" ( I_R ) and then from I_R to R. In a similar way, another fraction of symptomatic patients instantaneously transits to "Symptomatic who will need hospitalization" ( I_H ) and only afterwards from I_H to H. The updated model is presented in Fig. 8, with the instantaneous split described by the rates

$$I_O \xrightarrow{K_{\mathrm{Large}} \cdot F_R \cdot I_O} I_R, \qquad I_O \xrightarrow{K_{\mathrm{Large}} \cdot F_H \cdot I_O} I_H,$$

where F_R = 0.2 is the fraction of infectious individuals who will not develop worse symptoms, F_H = 0.8 is the fraction of infectious individuals who will develop worse symptoms, F_R + F_H = 1, and K_Large is a constant value large enough to render the reactions effectively instant.
The total number of individuals with mild symptoms is calculated as the sum of all subgroups:

$$I = I_O + I_R + I_H \qquad (3)$$

This technique can be combined with the delayed equations described in the previous section. Thus, we can construct a version of the model (1) taking into account the durations of the infectious processes and the existence of competing processes. The final version of the model is presented in Fig. 9.
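A toy integration of the instant-split scheme shows that the recovered and severe fractions now converge to F_R and F_H regardless of the different process speeds; K here plays the role of K_Large.

```python
import numpy as np

# Instant split of I_O between I_R and I_H with fractions F_R/F_H, after which
# each branch drains at its own speed; terminal fractions approach F_R, F_H.
F_R, F_H, K = 0.2, 0.8, 1e3
gamma_R, gamma_H = np.log(2) / 14, np.log(2) / 5   # slow recovery, fast worsening

dt, days = 1e-3, 60
I_O, I_R, I_H, R, H = 1000.0, 0.0, 0.0, 0.0, 0.0
for _ in range(int(days / dt)):
    split = K * I_O * dt
    I_O -= split
    I_R += F_R * split - gamma_R * I_R * dt
    I_H += F_H * split - gamma_H * I_H * dt
    R += gamma_R * I_R * dt
    H += gamma_H * I_H * dt

print(f"recovered: {R / 1000:.2f}  severe: {H / 1000:.2f}")  # -> ~0.2 / ~0.8
```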
The advantage of the updated model is that only three epidemiologically interpretable parameters, the infection rates β1, β2, β3, have to be fitted to experimental data, instead of the 8 parameters α, β1, β2, β3, γ1, γ2, δ, µ, which do not explicitly correspond to real characteristics of the infectious processes. We also have two fraction parameters whose values may be explicitly derived from statistics: F_H, the fraction of symptomatic individuals with severe symptoms, and F_D, the disease lethality. All other parameters of the new model are preliminarily fitted to experimental data on the corresponding processes, separately from the rest of the model.
The comparison is given in Table 1. Decreasing the number of parameters also leads to better identifiability of the remaining parameters. In order to demonstrate this, we fitted two types of SEIR model (classic and delay-differential) to data on the first 180 days of the pandemic in Germany in 2020. We simulate the start of the pandemic in a simplistic way and use two additional parameters: Start, the day when the infection was imported to the country, and E_Start, the number of individuals in the incubation period imported to the country on day Start. Results of the parameter estimation and parameter identifiability analysis for both the delay-based and classic SEIR models are presented in Supplementary material 3; more details may also be found in the corresponding Jupyter Notebook at https://gitlab.sirius-web.org/covid-19/dde-epidemiology-model. According to the analysis, for the delay-based SEIR model: β1 is an identifiable parameter (i.e. if its value is changed from the estimated β1 = 0.165 and fixed, then the model cannot be successfully fitted by changing the other parameters' values); β2 is non-identifiable in the very small range [0, 0.0026] (i.e. its value may be anywhere in this range); and β3 is non-identifiable in the quite small range [0, 0.029]. We can see that all three parameters can be identified quite precisely.
However, the parameter identifiability analysis demonstrated that for the classic SEIR model: β1 is non-identifiable in the range [0, 9.92], and β2 and β3 are non-identifiable in the range [0, 10]. If we fix the value of one of these parameters anywhere in that range, we may still fit the model by changing the values of the other parameters. Start is identifiable, with the identified value being day 83 (23.03.2020), while E_Start is non-identifiable in the range [10570, 2000] and α is non-identifiable in the range [0.57, 10]. The other parameters are also non-identifiable in large ranges.
Thus, the delay-differential modelling approach not only provides a smaller number of epidemiologically interpretable parameters, but also improves the identifiability of the model parameters compared to the classic SEIR model.
Initial model.
As a basis for our model, we used the previously created SEIR model 16 of the COVID-19 epidemic. This model differs from most SEIR models by differentiating between tested and non-tested infected subjects. It was created in the systems biology software COPASI 70, which allows one to specify the kinetics of the processes mechanistically. COPASI translates these specifications into differential equations, which it integrates either as a function of time or by requiring a steady state. The software honors restrictions specified in terms of algebraic equations and 'events' which instantaneously change numeric values of the model parameters, triggered by logical expressions transiting from "false" to "true". COPASI models are SBML compatible and can be exported into that format, which greatly facilitates model reuse and reproduction. Data sources. Statistical data for Germany and France were taken from the Our World In Data web site (https://ourworldindata.org/). This web portal provides data on the total number of cases, new cases each day, the numbers of hospitalized patients, patients transferred to ICU and vaccinated individuals, and the total number of deaths.
To take into account statistical data on government actions in the model, we employed the Stringency Index developed by the Blavatnik School of Government of the University of Oxford 32. The BioUML platform also incorporates a module for the automatic and manual fitting of parameters to experimental data. Models developed in BioUML are based on the main standards in systems biology: (1) SBML (Systems Biology Markup Language) 72 for the mathematical description and (2) SBGN for the visual representation. A model can be built and edited in the platform as a visual diagram (e.g. in SBGN notation), based on which Java code is generated for model simulations. Additionally, BioUML is integrated with JupyterHub (https://jupyter.org/) for interactive data and model analysis, an essential and user-friendly tool for the reproducibility of simulation results.
Parameter identifiability. Assessment of parameter identifiability was conducted using a method implemented in the platform 73. Identifiability analysis examines whether a set of model parameter values can be uniquely estimated from a given model and data set. According to the methodology, the agreement between experimental data and the observables predicted by the model is measured by an objective function, commonly the weighted sum of squared errors. The analysis goes as follows: one of the parameters is selected, its value is fixed, and the model is fitted to the observed data using the other parameters. Then the fixed parameter value is changed in order to find the range of values of the given parameter within which the model can still be fitted to the observed data. The region in which the model may be fitted for a given parameter value is the range in which this parameter is unidentifiable (its value cannot be uniquely determined from the current data, model and set of parameters under estimation). The procedure is then repeated for the next model parameter.
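The scan can be sketched in a few lines, with a toy two-parameter growth model standing in for the SEIR model: fix one parameter on a grid, refit the rest, and compare the refitted objective with the global best.

```python
import numpy as np
from scipy.optimize import minimize

# Identifiability scan sketch: a flat ratio means the fixed parameter is
# non-identifiable in that region; a sharp rise means it is identifiable.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 31)
data = 50.0 * np.exp(0.2 * t) * rng.normal(1.0, 0.02, t.size)

def model(theta):                    # theta = [E_start, beta], toy growth model
    return theta[0] * np.exp(theta[1] * t)

def sse(theta):
    return np.sum((model(theta) - data) ** 2)

def refit_with_fixed(idx, value, theta0):
    obj = lambda free: sse(np.insert(free, idx, value))
    return minimize(obj, np.delete(theta0, idx), method="Nelder-Mead").fun

theta0 = np.array([40.0, 0.25])
best = minimize(sse, theta0, method="Nelder-Mead").fun
for beta in (0.15, 0.20, 0.25):
    ratio = refit_with_fixed(1, beta, theta0) / best
    print(f"beta fixed at {beta:.2f}: refit SSE / best SSE = {ratio:.1f}")
```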
Figure 2. (A) Partial model of hospitalization due to COVID-19 using delay equations for hospital release; data on daily hospital and ICU admissions taken from ourworldindata.org. (B) Results of fitting the model to the number of hospitalized patients in France; the model was fitted up to time = 550 days.
4. Overall, decreasing the number of parameters makes them more identifiable, as demonstrated by the comparison between the delay-based and classic models in this study. It also allows for faster and simpler adaptation of the model to other regions and countries.
Figure 6. A comparison between two models of the incubation period: (A) based on the mass-action kinetics law (2); (B) using a weighted sum of delays (4). Statistical data on the incubation period quantiles are taken from 28.
Figure 7. The simple mass-action model with two competing processes. (A) SBGN representation of the model; here I denotes infected individuals, R recovered individuals, and H patients with severe symptoms. (B) Simulation results of the model.
Figure 8. A part of the epidemiological model with alternative competing processes. (A) Visual representation in SBGN format. (B) Simulation results of the model for the fractions of recovered patients and patients with severe symptoms.
Figure 10. Stringency of government measures (blue) and new reported cases per day (yellow).
2020 (e.g., see ONS data on COVID-19 latest insights: Hospitals, 9 June 2022). Apparently, the increased number of vaccinated people, the protective effectiveness of the developed vaccines in preventing SARS-CoV-2 infections, as well as the much greater proportion of the population that had recovered from COVID-19 by the time of the Delta variant's emergence in 2021, can explain the decline of this ratio compared with the first pandemic year. Once again, the time point at which new parameter values were introduced into the model was fitted for both countries at two different dates to obtain the required profiles of new cases.
Table 1. Comparison of parameters in the classic SEIRHD model and the delay-based model. | 2023-08-20T06:17:31.676Z | 2023-08-18T00:00:00.000 | {
"year": 2023,
"sha1": "bd3f03f5f590853eedc8e75e95787b0df84bdd95",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-40008-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "242ee2a323952165e5b4485bba1d0a7c67e223d8",
"s2fieldsofstudy": [
"Mathematics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15244630 | pes2o/s2orc | v3-fos-license | Enzyme-activated intracellular drug delivery with tubule clay nanoformulation
Fabrication of stimuli-triggered drug delivery vehicles is an important milestone in treating cancer. Here we demonstrate selective anticancer drug delivery into human cells with biocompatible 50-nm diameter halloysite nanotube carriers. Physically adsorbed dextrin end stoppers secure the intracellular release of brilliant green. Drug-loaded nanotubes penetrate through the cellular membranes, and their uptake efficiency depends on the cells' growth rate. Intracellular glycosyl hydrolase-mediated decomposition of the dextrin tube-end stoppers triggers the release of the lumen-loaded brilliant green, which allowed for preferential elimination of human lung carcinoma cells (A549) as compared with hepatoma cells (Hep3b). The enzyme-activated intracellular delivery of brilliant green using dextrin-coated halloysite nanotubes is a promising platform for anticancer treatment.
The targeted delivery of drugs directly into biological cells requires nanoscale design of the pharmaceutical vehicles. Nanoscopic particles are promising drug carriers due to their adjustable size, shape, porosity, and surface properties 1. The unquestioned potential of nanosized carriers as platforms for transporting drugs into cells has stimulated the design of a variety of liposome and micellar systems 2, polymeric conjugates 3, porous silica 4 and magnetic nanoparticles 5, graphene oxide nanosheets 6, supramolecular containers 7, and mesoporous carbon nanospheres 1. Nanodrug transport through cellular membranes, combined with controlled release of the encapsulated cargo triggered by an external or internal stimulus, is the ultimate goal for stimuli-responsive drug nanocarriers 8.
To achieve this, nanocontainers loaded with drugs are provided with an additional surface coating responsive to the triggering stimuli 9. So far, either external (laser irradiation 10) or internal (intracellular pH gradient 11) factors, or a combination of both 12, have been employed to initiate the intracellular release from internalised drug nanocontainers. Typically, drug-loaded nanoparticles are coated with a responsive coating (e.g., a pH-responsive silane layer), which restricts the release while the carriers are outside the cells 13. Drugs entrapped within self-assembled block copolymer nanoparticles were released inside the cells after photolysis induced by laser irradiation or carbon nanotube decapping 14. Although these approaches appear to be effective in vitro, the use of external triggers, such as laser irradiation, is not always applicable for the treatment of internal organs, whereas pH-triggered release requires the introduction of synthetic polymers and may lead to increased toxicity.
Despite the numerous reports describing on-command drug delivery, it appears that the pH-responsive 14, electric field 15 or light-activated 10 carriers are too complicated for real-life applications. Systems based on the activation of drug release by intracellular enzymes are regarded as a promising alternative 16. In particular, hydrocarbon molecules covalently linked to silanes were utilised to control the glycoside hydrolase-triggered release of the anticancer drug doxorubicin from mesoporous silica nanoparticles (MSN) and demonstrated effective action against cancer cells in vitro 17. MSN are used in the fabrication of drug delivery carriers 13, although their applications are limited by small pore size (2-3 nm) and potential in vivo toxicity 18. Carbon nanotubes are utilised as potent drug carriers since they can easily be internalised by mammalian cells 19 and provide room for drugs until they reach the target cell, while the open tube end gives access to the inner volume 20. However, carbon nanotubes are regarded as potentially toxic materials 21, thus stimulating the quest for alternative tubular carriers 22.
Clay nanotubes have been suggested as versatile nanocarriers 23, combining effective drug loading into the tubule lumen with well-developed techniques of surface modification. Halloysite is a tubular aluminosilicate clay with an external diameter of 50-60 nm, a lumen diameter of 12-15 nm, and a length of ~1 μm 24. Its SiO2 outer surface is negative at neutral pH and its Al2O3 inner lumen surface is positive, which allows selective loading of the clay nanotubes with charged drug molecules 25 or modification with nanoparticles 26. Halloysite nanotubes (HNTs) form a stable dispersion in water; their colloidal properties are similar to those of 100-nm diameter silica nanoparticles. Halloysite has good biocompatibility, which was assessed for both cell cultures 27 and whole animals 28. Various drugs can be loaded into halloysite nanotubes from concentrated solutions or from melt. Dried loaded nanotubes may be kept for a long time and release drugs within 10-20 hrs when exposed to water (e.g. gentamicin, ciprofloxacin, tetracycline, dexamethasone, and brilliant green). A polymeric coating clogs the tube ends and slows down the drug release rate from hours to days and weeks 29, allowing for the fabrication of drug-delivery systems based on DNA-wrapped nanotubes 25 and antimicrobial coatings 30.
Here we report the fabrication of a novel drug delivery system based on HNTs loaded with a model anticancer drug and coated with dextrin (DX), cleavable by intracellular glycosyl hydrolases for controlled release inside cells. As a model drug, we utilised the triphenylmethane dye brilliant green (BG), capable of suppressing mitochondria in malignant cells 31. This platform benefits from the effective uptake of the biocompatible clay nanotubes by human cancer cells, followed by the hydrolysis of the dextrin coating by cellular glycosyl hydrolases 17, facilitating the release of the drug from the nanotubes and the subsequent inhibition of mitochondria in the cells. The designed cell-targeting nanocontainers have to be biocompatible and provide effective drug encapsulation, and the stimuli-responsive coating should be affected by cytoplasmic factors, preferably enzymes. Halloysite nanotubes served as transmembrane carriers, and we utilised two biocompatible compounds: brilliant green as a cytotoxic substance 31 and dextrin as an enzyme-activated tube-end stopper 17.
Cell cultivation. A549, human lung carcinoma epithelial cells, and Hep3b, hepatocellular carcinoma cells, were obtained from the American Type Culture Collection (ATCC, Rockville, Maryland, USA). Cells were incubated in a humidified atmosphere with 5% CO2 at 37 °C. Cells were cultivated in Dulbecco's modified minimal Eagle's medium (DMEM) with L-glutamine, supplemented with 100 U mL−1 penicillin, 100 μg mL−1 streptomycin and 10% fetal bovine serum (PAA Laboratories). Typically, the cells were passaged after approaching 85-90% confluence with 0.05% trypsin-EDTA solution during a 5 min incubation and split at a ratio of 1:10.
Fabrication of HNTs loaded with BG and coated with dextrin stoppers.
Prior to loading with BG, HNTs were washed twice with ethanol and then with sterile water and dispersed using an ultrasonic bath 30. Loading was initiated by adding 1 ml of 1% BG solution in 60% aqueous ethanol to 100 mg of washed HNTs, and the suspension was sonicated for 30 s. Next, the vial containing HNTs and BG was placed in a vacuum desiccator for 1 h to ensure the vacuum-facilitated suction of BG into the HNT lumens. Then, to remove free BG, the suspension was washed twice with sterile water followed by centrifugation. Finally, BG-loaded HNTs were dried at 45 °C and milled to a fine powder. The efficiency of BG loading was measured gravimetrically using identical samples of HNTs; a control sample (BG-free) was used to evaluate the loss of HNTs during the washing procedures. 10 mg of BG-loaded HNTs were mixed with 1 ml of aqueous dextrin solution (10 mg mL−1) and sonicated for 10 s. Next, the vial containing nanotubes and dextrin was placed in a vacuum desiccator for 1 h, followed by washing with sterile water to remove unbound dextrin. At all stages, the zeta-potential of HNTs in water was monitored using a Malvern Zetasizer Nano ZS instrument with standard U-shaped cells.
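The gravimetric bookkeeping reduces to a one-line calculation. In the sketch below, the function and the example mass values are our own illustration of the procedure; the BG-free control, run through the same washing steps, corrects for clay lost along the way.

```python
def bg_loading_per_100mg(m_loaded_dry, m_control_dry):
    """BG mass loaded per 100 mg of HNTs (all masses in mg).

    m_loaded_dry:  dried mass of the BG-loaded sample after washing
    m_control_dry: dried mass of the BG-free control after identical washing
    """
    return 100 * (m_loaded_dry - m_control_dry) / m_control_dry

# hypothetical masses chosen so the result matches the reported loading
print(bg_loading_per_100mg(140, 100))   # -> 40.0 mg BG per 100 mg HNTs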
Electron microscopy imaging of HNTs. A Carl Zeiss Libra instrument was used to obtain TEM images of pure and BG-loaded HNTs. A drop of dilute HNT suspension was placed onto formvar-coated copper grids (Agar) and left to evaporate; the HNTs were then imaged at 120 kV accelerating voltage. An Auriga (Carl Zeiss) instrument was used to obtain SEM images of pure and dextrin-coated HNTs (DX-HNTs). Samples were sputter-coated with an Au (60%) and Pd (40%) alloy using a Q150R (Quorum Technologies) instrument. Images were obtained at 3 × 10−4 Pa working pressure and 10 kV accelerating voltage using the InLens detection mode.
Release kinetics investigation. An aqueous (10 mg mL−1) suspension of BG-loaded HNTs (with and without dextrin stoppers) was incubated under stirring for 24 h. The supernatant was then separated by centrifugation and analysed at 623 nm using a Lambda 35 UV/Vis spectrophotometer (Perkin Elmer, USA) to estimate the amount of BG released from the HNTs. To estimate the total releasable amount of BG loaded into the nanotubes, 100 mg of BG-loaded HNTs in 20 ml of water was sonicated for 10 min, and the total amount of released BG was measured as described above.
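Since the released fraction is a ratio of two absorbances measured at the same wavelength, the Beer-Lambert calibration constant cancels. The sketch below makes that explicit; the slope value and absorbance readings are placeholder assumptions, not reported data.

```python
def released_fraction(abs_supernatant, abs_total, slope=0.15):
    """Fraction of releasable BG found in the supernatant.

    Beer-Lambert: A = slope * c, so the assumed slope cancels in the ratio;
    abs_total is the absorbance after sonication-assisted full release.
    """
    return (abs_supernatant / slope) / (abs_total / slope)

# hypothetical readings reproducing the ~60 wt% release from uncapped HNTs
print(f"{released_fraction(0.45, 0.75):.0%}")   # -> 60%
```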
Cellular uptake of DX-HNTs.
To investigate the uptake of DX-HNTs, 10 5 cells were seeded in each well of 12-well culture plates. Then 25, 50 or 100 μg of DX-HNTs were added to the wells. After 24 hours of incubation, the plates were analysed using DIC contrast microscopy (Carl Zeiss Axio Observer inverted microscope (Germany) operated using ZEN software) to evaluate the confluent growth of the cells.
Enhanced dark-field microscopy imaging (EDF). The distribution of DX-HNTs in cells was observed using EDF microscopy. The cells were grown on glass cover slips and stained with DAPI (1 μg mL−1) for 5 min. They were then fixed with cold acetone (−20 °C) and embedded in mounting media. EDF microscopy images were obtained using a CytoViva® enhanced dark-field condenser attached to an Olympus BX51 upright microscope equipped with fluorite 100x objectives and a DAGE CCD camera. A CytoViva® Dual Mode Fluorescence system (UV excitation) was used to visualise DX-HNTs in human cells with DAPI nuclear stain.
Transmission electron microscopy. TEM images of the thin-sectioned cells and DX-HNTs were obtained using a JEOL 1200 EX microscope operating at 80 kV. The cells were fixed with 2.5% glutaraldehyde, gradually dehydrated using a series of ethanol solutions, and embedded in Epon resin; thin sections were then cut using an LKB ultramicrotome equipped with a diamond knife and mounted on copper grids. The thin-sectioned cells were stained with 2% aqueous uranyl acetate and lead citrate.
Atomic force microscopy (AFM). The distribution of DX-HNTs inside cells was investigated with a Dimension Icon microscope (Bruker, USA) in ScanAsyst Peak Force Tapping (in air) mode. Cells were grown on glass cover slips, fixed, washed with water to remove salt and debris, dehydrated, and imaged in air. ScanAsyst-Air cantilevers (tip radius 2 nm, spring constant 0.4 N m−1) (Bruker) were used throughout. The AFM images obtained were processed using NanoScope Analysis software v1.5 (Bruker, USA).
Cell viability investigation via viability staining. Fluorescence microscopy was employed to assess the ratios of viable and necrotic cells stained using acridine orange/ethidium bromide (AO/EtBr) dyes. AO intercalates with the DNA and RNA of cells (green fluorescence), whereas EtBr penetrates only into the nuclei of necrotic (dead) cells (red "dead" fluorescence). Cells were treated with DX-HNTs, plated into 96-well plates (10 4 cells per well), and incubated for 24 h. Then 10 μl of AO/EtBr (1% and 0.5%, respectively) solution was added to each well for 5 min; the wells were then washed twice with DPBS and imaged.
Cell index monitoring. An xCELLigence Real-Time Cell Analyzer (ACEA Biosciences, USA) was employed to monitor the cell index (a dimensionless parameter which reflects the adhesive properties and proliferation rate of cells by measuring electrical impedance) in the cell cultures studied. Cells were treated with DX-HNTs and then plated in a 12-well plate (E-plate) with gold electrodes on the bottom at a density of 7 × 10 3 cells per well. The plates were installed in the xCELLigence analyser, which was placed in a humidified atmosphere with 5% CO2 at 37 °C for 24 h. The cell index was monitored in real time using the xCELLigence software.
LD50 estimation.
To estimate the LD50 value for BG-loaded HNTs with and without dextrin coating, we used the resazurin assay. Cells treated with HNTs were seeded in 96-well plates at a density of 7 × 10 3 cells per well and cultivated for 24 h (the concentrations of BG-loaded HNTs, with and without dextrin coating, were 25, 50 and 100 μg per 10 5 cells). After 24 hours of incubation, 1 μl of resazurin solution (0.4 mg mL−1) was added to each well and the plates were incubated overnight. The absorbance was measured at 570 nm using a microplate reader (Multiskan FC, Thermo Fisher Scientific, USA). The efficiency of resazurin reduction by cells is directly proportional to the number of viable cells 32. The absorbance of the control sample was taken as 100% viability, and all further calculations were made relative to the control cells. The concentration of BG-loaded HNTs which killed approximately 50% of the cells was taken as the LD50.
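Reading off the LD50 then amounts to interpolating the dose-viability curve between the two doses that bracket 50% viability. The sketch below uses hypothetical viability numbers purely to illustrate the calculation.

```python
import numpy as np

doses = np.array([0, 25, 50, 100])        # ug of BG-loaded HNTs per 1e5 cells
viability = np.array([100, 82, 55, 31])   # % of control (made-up values)

# np.interp needs ascending x, so reverse the descending viability curve
ld50 = np.interp(50, viability[::-1], doses[::-1])
print(f"LD50 ~ {ld50:.0f} ug per 1e5 cells")   # linear interpolation -> ~60
```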
Results and discussion
The halloysite nanotubes employed in this study were characterised using scanning electron and atomic force microscopy (Fig. 1) to demonstrate the typical dimensions of the HNTs (ca. 15 nm lumen diameter, 50 nm outer diameter, and 1-1.5 μm length).
Our strategy is shown schematically in Fig. 2. The HNT lumens were first loaded with brilliant green using the straightforward vacuum suction technique 30. Next, the BG-loaded nanotubes were directly surface-functionalised with dextrin, which produced an enzyme-responsive coating with tube-end clogging to retain the BG molecules during delivery and to allow induced release inside the cells.
Transmission electron microscopy (TEM) images (Fig. 3 A,B) demonstrate the brilliant green loading into the lumens of halloysite after 1 h incubation with 1 wt% BG in ethanol, followed by washing, drying, and milling to a fine powder, as described in the Experimental section (Figure S1, Supporting Information).
The loading efficiency (40 mg per 100 mg HNTs) was determined gravimetrically. This exceeds the halloysite nanotube lumen volume and indicates that the loading occurs both inside the tubes and in the outer pores formed by the loosely rolled external aluminosilicate sheets.
Next, the BG-loaded nanotubes were similarly coated with dextrin via vacuum-facilitated deposition, resulting in the formation of a physically adsorbed dextrin layer on the HNTs (DX-HNTs), which can be clearly seen in the scanning electron microscopy (SEM) images (Fig. 3 C,D). The zeta-potential of the HNTs (Fig. 4 A) was monitored after BG loading and dextrin coating, suggesting that the dextrin coating would reduce the release. In fact, 60 wt% release of BG from the uncapped halloysite took 24 hrs, while the dextrin stoppers reduce the release of BG twofold as compared with the uncoated tubes, allowing for enhanced drug delivery upon tube opening (Fig. 4 B).
This technique of enzyme-activated carriers with polysaccharide stoppers is easier to implement since it does not require any covalent modification of the nanotubes, unlike the earlier reported system based on chemical binding of saccharides onto mesoporous nanoparticles 17.
An important issue in using halloysite nanotubes as drug delivery vehicles is the ability of HNTs to penetrate through cellular membranes. It appears that the efficiency of HNT uptake depends on the growth rate of the particular cell culture. We chose two types of cells: adenocarcinomic human alveolar basal epithelial cells (A549) and human hepatoma cells (Hep3b). These cells exhibit significantly different proliferation rates, which allows us to investigate the uptake as a function of the cell growth speed. The cells were cultured with increasing concentrations of the DX-HNT halloysite formulation (from 25 to 100 μg per 100,000 cells, BG-free). After 24 hours of incubation, the cells were analysed microscopically. First, we employed enhanced dark-field (EDF) microscopy (Fig. 5), demonstrating that the uptake behaviour of A549 cells and Hep3b cells was distinctly different. Lung carcinoma cells internalised DX-HNTs and concentrated them in perinuclear regions (counter-stained with DAPI and visualised using transmission-light fluorescence microscopy) as clearly visible aggregates, whereas Hep3b cells appeared to distribute the DX-HNTs randomly in the cytoplasm.
Suspended cells, detached from the culture plates, were imaged to demonstrate the spatial distribution of DX-HNTs in these two cultures (Figure S2, Supporting Information), confirming the differential uptake. These findings were further confirmed by TEM images (Fig. 6) of A549 and Hep3b cells incubated with 100 μg DX-HNTs per 100,000 cells, the highest concentration used. The TEM images suggest that in A549 cells the nanotube aggregates seen in the dark-field images (Fig. 5 A) are randomly distributed in lysosomes, which could facilitate the enzymatic decomposition of the dextrin tube-end stoppers with subsequent enhanced cargo release. On the contrary, Hep3b cells accumulate nanotubes preferentially on the cellular membranes, thus reducing the access of intracellular glycosyl hydrolases (Fig. 5 B). We also assume that in the case of Hep3b cells the nanotubes remain suspended in the media rather than being actively internalised by the cells. We employed atomic force microscopy (AFM) to investigate the distribution of DX-HNTs in fixed, dried samples of A549 and Hep3b cells (Fig. 7). Typically, AFM imaging reveals surface-adsorbed nanotubes; however, as the imaging is applied to dried samples in which the volume of the cells is no longer preserved, the cell membranes collapse and solid nanotube aggregates can be probed by AFM. One can clearly distinguish the relatively large, amorphous and rather smooth aggregates inside the A549 cells from the well-resolved surface-adsorbed nanotubes on the Hep3b cell membranes (numerous single tubes can be seen). We attribute the smooth topography of the larger DX-HNT aggregates in A549 cells to the dried membrane/cytoplasm films over the internalised nanotube aggregates. The AFM images confirm that the overall morphology of both A549 and Hep3b cells remains unaffected by increasing concentrations of DX-HNTs. However, we noted that internalised DX-HNTs in A549 cells formed crater-like concentric circular regions around the nucleus (seen both in top-view and side-view images), while only negligible numbers of nanotubes were detected in the distal regions or on the membrane close to the nucleus. On the contrary, in Hep3b cells the nanotubes were detected mostly on the cellular membranes, contributing to the increased height of the cupola-like profile of the cells (side view). These results suggest that the DX-HNTs are preferentially internalised by certain types of cells, which may be exploited in the differential treatment of target cells. Previously, several reports indicated that HNTs are relatively non-toxic towards human cells 27, microorganisms 33 and soil organisms 28. Here we carefully investigated the effects of dextrin-coated HNTs on A549 and Hep3b cells employing a set of physiological activity tests. First, we stained the cells treated with increasing concentrations of DX-HNTs with acridine orange/ethidium bromide dyes, allowing us to distinguish viable cells from necrotic ones.
Viable cells appear green in fluorescence microscopy images, indicating intact membranes, while the lack of red ethidium bromide-mediated fluorescence in viable cells confirms the integrity of the cellular membranes (unlike in necrotic cells). The results (Fig. 8) demonstrate that DX-HNTs per se, even at higher concentrations, do not significantly affect either A549 or Hep3b cells, since the percentage of viable cells is only slightly reduced.
In addition, we investigated the effects of DX-HNTs on cytoskeleton formation. We found that the HNTs taken up by A549 (Fig. 8 C,D) and Hep3b (data not shown) cells do not induce any detectable changes in the cytoskeleton organisation of the cells.
Next, we analysed the transformation of the redox metabolism indicator resazurin in DX-HNT-treated cells. Viable cells reduce resazurin to the pink product resorufin, detectable by spectrophotometry. We found that the metabolic activity in DX-HNT-treated cells was not affected compared with control cells (Fig. 9 A). We also investigated the attachment and growth rate of DX-HNT-treated cells employing real-time cell index monitoring. As shown in Fig. 9 B,C, the highest concentration of nanotubes (100 μg DX-HNTs per 10 5 cells) did somewhat reduce the growth dynamics; however, the lower concentrations investigated did not affect the cells. Hence, we concluded that HNTs equipped with dextrin stoppers are not toxic to the cells.
Finally, we focused on the delivery of brilliant green into A549 and Hep3b cells. We assume that the dextrin coating seals the drug inside the nanotubes before their internalisation; afterwards, the coating is hydrolysed by intracellular glycosyl hydrolases, and the nanotube content is released into the cytoplasm and kills the target cell (Fig. 10).
We found that the dextrin stoppers substantially reduce the toxicity of the nanotube BG formulation towards Hep3b cells (a two-fold reduction) (Fig. 11 A, Figure S3), whereas the median lethal dose (LD50) value of the BG nanotube formulation for A549 cells did not depend on the dextrin coating (Fig. 11 B, Figure S4). This may be related to the differential uptake of DX-HNTs in these two types of human cancer cells discussed above. The A549 cells take up most of the DX-HNT nanocarriers available from the media; on the contrary, the uptake by Hep3b cells is smaller (Figs 6 and 7), and BG-loaded DX-HNTs do not have a significant influence on Hep3b cells even after decomposition of the enzyme-activated coating. Most of the BG-loaded DX-HNTs remain outside the cells, and the dextrin stoppers are not affected by the lysosomal enzymes (no release of brilliant green occurs). The reduced amount of free brilliant green spontaneously released from the nanotubes is not sufficient to kill the Hep3b cells, which remain viable and continue to proliferate (Figure S4). As a result, selective killing of the A549 cells can be achieved, while Hep3b cells remain viable at the same concentration (i.e., 25-50 μg per 10 5 cells in the case of BG-HNTs).
Conclusions
We developed a novel strategy for drug loading into clay nanotubes coated with dextrin to clog the tube openings until a cell absorbs these nanocarriers. The accumulation and enzymatically induced release of the drug occurred exclusively in cells prone to internalising the nanotubes, i.e. those with higher proliferation rates, which is characteristic of malignant cells. This means that non-malignant cells will not suffer from the introduction of halloysite loaded with an anticancer drug; as a result, drug-loaded DX-HNTs will accumulate selectively in tumor cells. This would allow the use of several weak drugs with low cytotoxic effects, where a high concentration of the agent inside the cells is required to damage and kill them, whereas at lower concentrations these drugs are not harmful. Non-cancerous cells with a poor "HNT appetite", resulting from their slower proliferation and lower level of metabolism, will not be affected by the drug, unlike with current directly applied anticancer drugs.
"year": 2015,
"sha1": "eae786186c984712b4df826d768c74e54f2c2d46",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep10560.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eae786186c984712b4df826d768c74e54f2c2d46",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry",
"Materials Science"
]
} |
2409015 | pes2o/s2orc | v3-fos-license | Standard Logics Are Valuation-Nonmonotonic
It has recently been discovered that both quantum and classical propositional logics can be modelled by classes of non-orthomodular and thus non-distributive lattices that properly contain the standard orthomodular and Boolean classes, respectively. In this paper we prove that these logics are complete even for those classes of the former lattices from which the standard orthomodular lattices and Boolean algebras are excluded. We also show that neither quantum nor classical computers can be founded on the latter models. It follows that logics are "valuation-nonmonotonic" in the sense that their possible models (corresponding to their possible hardware implementations) and the valuations for them change drastically when we add new conditions to their defining conditions. These valuations can even be completely separated by putting them into disjoint lattice classes, using a technique presented in the paper.
Introduction
A good deal of artificial intelligence research is focused on artificial neural networks, on the one hand, and on default/nonmonotonic logic, on the other. Neural networks are characterized by heavy reliance on logic gates. On the other hand, nonmonotonic inference rules formalize generalizations of standard logic that admit changes, in the sense that the values of propositions may change when new information (axioms) is added to or old information is deleted from the system. In this paper, we show that standard logics themselves (classical as well as quantum), whose monotonicity is usually taken for granted, are nonmonotonic both at the level of the logic gates that implement them and at the level of their valuations, i.e., the mappings from the logic to its models.
We consider two standard logics (in contrast to, e.g., modal logics) in this paper: propositional classical logic and propositional quantum logic. In practice, classical logic relies almost exclusively on the {0,1} valuation, i.e., the two-valued truth table valuation, for its propositional part. This valuation extends to the sentences of all theories that make use of classical logic, such as set theory, model theory, and the foundations of mathematics. However, there are also non-standard valuations generated by non-distributive lattices, which correctly model classical propositional logic, and by non-orthomodular lattices, which correctly model quantum logic. An immediate consequence of this valuation dichotomy is that classical logic modelled by such nondistributive lattices does not underlie present-day classical computers, since non-standard valuations cannot be used to run them. Only classical logic modelled by a Boolean algebra and having a {0,1} valuation can serve us for such a purpose. Hence, whenever we want to utilize a logic for a particular application we have to specify the model we would use as well.
Before we go into details in the next sections, we should be more specific about our distinction between standard and non-standard valuations. Let us illustrate it with the graphical representation of the O6 lattice given in Figure 1, which can serve as a model for classical logic in the same way that the {0,1} Boolean algebra can. Lines in the figure indicate ordering. Thus we have 0 ≤ x ≤ y ≤ 1 and 0 ≤ y′ ≤ x′ ≤ 1, where 0 and 1 are the least and the greatest elements of the lattice, respectively. Can this model be given a linearly ordered or numerical interpretation, for instance the interpretation provided by the probabilistic semantics for classical logic [1]? The answer is no, because for x, y ≠ 0, 1 an ordering between x and either x′ or y′, and between y and either x′ or y′, is not defined, and it is assumed that it cannot be defined. Hence, the symbols 1 and 0 in the figure cannot be interpreted as the numbers 1 and 0. If they were numbers, 0 < x < y < 1 and 0 < y′ < x′ < 1 would imply that x, y and x′, y′ were also numbers, and we would, for example, have x = 0.3 and x′ = 0.7. This means we would have x < x′, which yields x = x ∩ x ≤ x′ ∩ x = 0, i.e., x = 0, which is a contradiction, since x ≠ 0.
Therefore, when we speak of a standard valuation of the propositions of classical logic, we mean any valuation for which we can establish a correspondence with the real numbers and their ordering, i.e., whose corresponding model can be totally ordered. For instance, with the two-valued ({TRUE, FALSE}) Boolean algebra we can ascribe the number 1 to TRUE and the number 0 to FALSE, and in the probabilistic interpretation of classical logic [1] all values from the interval [0,1] are real numbers, which are totally ordered. When we deal with values from our O6 example above, there is no way to establish a correspondence between O6 elements and real numbers, and we shall call such a valuation non-standard. The point here is that the latter valuation cannot be implemented in present-day binary computers, whose hardware usually deals with numerical values such as voltages, and consequently also not in the corresponding artificial intelligence, at the level of the underlying logic gates building their hardware.
This means that a statement from a logic can be "true" or "false" in one model in one way and in some other model in another way. When it "holds" (i.e., is "true") in a standard model, say the two-valued Boolean algebra, we can ascribe a number to it, say "1". When it "holds" in a non-standard model, meaning, e.g., that it is equal to 1 in Figure 1, we cannot do so and we cannot evaluate the model for the statement directly with binary logic gates.
It is usually taken for granted that logic is about propositions and their values. For example, we are tempted to assume that proposition p meaning "Material point q is at position r at time t" is either true or false. However, with non-standard valuations x and y from Figure 1, we can ascribe neither a truth value nor even a probability to p, although "p or non-p" is certainly always valid meaning p ∪ p ′ = 1. The {0,1} Boolean algebra and the probabilistic model, on the other hand, are the only known classical logical models that allow ascribing {0,1} standard (i.e., numerical) values to propositions and hence "found[ing] the mathematical theories of logic and probabilities" [2]. Classical logic defined by nothing but its axiomatic syntax is a more general theory, in terms of the possible valuations it may have, than its non-isomorphic semantics (e.g., a predicate logical calculus with standard valuation 1 which is nothing but a "predicate Boolean algebra").
The standard/non-standard dichotomy can be understood even better with the example of quantum logic, which, when taken together with its orthomodular lattice model, underlies Hilbert space and therefore could be implemented in would-be quantum computers and eventually in quantum artificial intelligence. According to the Kochen-Specker theorem, a {0, 1} valuation for quantum logic does not exist, 2 but there is an analogy between a Boolean algebra (distributive ortholattice) and an orthomodular (ortho)lattice that underlies the Hilbert space of quantum mechanics. Every orthomodular lattice is a model of quantum logic just as every Boolean algebra (distributive ortholattice) is a model of classical logic. However, as with classical logic, there are also non-orthomodular lattices which are models of quantum logic but on which no Hilbert space can be built. Therefore quantum logic in general (not modelled by any model, i.e., without any semantics), or more precisely its syntax, would be of limited use if we wanted to implement it in quantum computers. Only one of its models, an orthomodular lattice, can serve this goal, and therefore we call the valuations defined on the elements of the latter model standard valuations, as opposed to the non-standard valuations on the former non-orthomodular models.
In this paper, we prove the nonmonotonicity of both classical and quantum logic with respect to particular intrinsically different, disjoint classes of models. The result separates two kinds of models that have so far been assumed to belong to overlapping classes. In particular, the general families of non-distributive and non-orthomodular lattices called weakly orthomodular and weakly distributive ortholattices (WOML and WDOL), which are models of quantum and classical propositional logics, respectively, and for which we previously proved soundness and completeness [7,8], do include their standard models, orthomodular lattices (OML) and Boolean algebras (BA) [distributive ortholattices (DOL)]. Here we prove that these lattices can be separated, in the sense that the logics can also be modelled by WOML and WDOL classes from which the standard orthomodular lattices and Boolean algebras are excluded. 3 Soundness and completeness of these propositional logics are proved.
Specifically, we consider the proper subclasses of these lattice families that exclude those lattices that are orthomodular (for the WOML case) and distributive (for the WDOL case), i.e., WOML\OML and WDOL\BA (where "\" denotes set-theoretical difference). Using them as the basis for a modification of the standard Lindenbaum algebra technique, we present a new result showing that quantum and classical propositional logics are respectively complete for these proper subclasses, in and of themselves, as models. In other words, even after removing every lattice from WOML (WDOL) in which the orthomodular (distributive) law holds, quantum (classical) propositional logic is still complete for the remaining lattices.
In both classical and quantum logics, when we add new conditions to the defining conditions of the lattices that model the logics, we get new lattices that also model these logics but with changed valuations for the propositions from the logics. This property of standard logics and valuations of their propositions is what we call valuation-nonmonotonicity. The more conditions we add, the fewer choices we have for valuations. This is why we consider subclasses that exclude lattices obtained by adding new conditions. For instance, WOML\OML will provide us only with valuations on weakly orthomodular lattices that are not orthomodular, and by adding the orthomodularity condition to WOML we get OML, which contains only valuations on orthomodular lattices. Apart from the orthomodularity condition, there are many more (if not infinitely many) conditions in between WOML and OML that all provide different valuations and new proper subclasses, as we show and discuss in Sections 8 and 9 below.
We will study the quantum logic case first, since the results we obtain for WOMLs will automatically hold for WDOLs and simplify our subsequent presentation of the latter. In Section 2, we define orthomodular and weakly orthomodular (ortho)lattices, and in Section 3 distributive and weakly distributive ones. In Section 4, we define the classes of proper weakly orthomodular and proper weakly distributive ortholattices. In Section 5, we define quantum and classical logics and prove their soundness for the models defined in Section 4. In Sections 6 and 7, we prove the completeness of quantum logic for WOML\OML and WDOL\BA models respectively. In Section 8, we define valuation-nonmonotonicity, and in Sections 8 and 9, we discuss the differences between the completeness proofs for WOML\OML, WDOL\BA, WOMLi\OML, WDOLi\BA, WOML\WOMLi, and WDOL\WDOLi we obtain in Sections 6-9 and the completeness proofs for WOML and WDOL we obtained in [7,8]. And finally, we discuss and summarize the results we obtained in this paper in Section 10.
Definition 2.5 If, in an ortholattice, a = (a ∩ b) ∪ (a ∩ b′), we say that a commutes with b, which we write as aCb.
Definition 2.6 If, in an ortholattice, a ≡ ((a ∩ b) ∪ (a ∩ b ′ )) = 1, we say that a weakly commutes with b, and we write this as aC w b.
Definition 2.7 The commutator of a and b, C(a, b), is defined as C(a, b) = (a ∩ b) ∪ (a ∩ b′) ∪ (a′ ∩ b) ∪ (a′ ∩ b′).
Definition 2.8 (Pavičić and Megill [7]) An ortholattice in which the weak orthomodularity condition, Eq. (9), holds is called a weakly orthomodular ortholattice (WOML).
Using Definition 2.2, we can also express Eq. (9) as either of two equations, Eqs. (10) and (11), which are equivalent to it in an ortholattice.
Definition 2.9 An ortholattice in which either of the two standard orthomodularity conditions, (12) and (13), holds (each is equivalent to the orthomodular law: a ≤ b implies b = a ∪ (a′ ∩ b)) is called an orthomodular lattice (OML) [11].
The equations of Definition 2.1 determine a (proper) class of lattices, called an equational variety, [12, p. 352] that we designate OL. Thus the term OL will have two meanings, depending on context. When we say a lattice is an OL, we mean that the equations of Definition 2.1 hold in that lattice. When we say a lattice is in OL, we mean that it belongs to the equational variety OL determined by those equations. While these two statements are of course equivalent, the distinction will matter when we say such things as "the class OL properly includes the class OML." Similar remarks apply to OML, WOML, and the other varieties in this paper.
We recall that whereas every OML is a WOML, there are WOMLs that are not OMLs. [7] In particular, the lattice O6 (Fig. 1) is a WOML but is not an OML. On the one hand, the equations that hold in OML properly include those that hold in WOML, since WOML is a strictly more general class of lattices. But there is also a sense in which the equations of WOML can be considered to properly include those of OML, via a mapping that Theorem 2.11 below describes. First, we need a technical lemma.
Proof. Most of these conditions are proved in [7], and the others are straightforward.
Theorem 2.11
The equational theory of OMLs can be simulated by a proper subset of the equational theory of WOMLs.
Proof. The equational theory of OML consists of the equality axioms (a = a; a = b ⇒ b = a; a = b & b = c ⇒ a = c; and substitution of equals), the OL axioms, Eqs. (1)-(6), and the OML law, Eq. (12). Any theorem of the equational variety of OMLs can be proved by a sequence of applications of these axioms. We construct a mapping from these axioms into equations that hold in WOMLs as follows. We map each axiom, which is an equation of the form t = s or an inference of the form t1 = t2, . . . ⇒ t = s (where t, s, and t1, t2, . . . are terms), to the equation t ≡ s = 1 or the inference t1 ≡ t2 = 1, . . . ⇒ t ≡ s = 1. These mappings hold in any WOML by Eqs. (14)-(26), respectively, of Lemma 2.10. We then simulate the OML proof by replacing each axiom reference in the proof with its corresponding WOML mapping. The result is a proof that holds in the equational variety of WOMLs. Such a mapped proof uses only a proper subset of the equations that hold in WOML: any equation whose right-hand side does not equal 1, such as a = a, will never be used. Proof. In any ortholattice, t = 1 iff t ≡ 1 = 1, by Eq. (28). Therefore, the inference of the theorem can be restated as follows: Proof. Theorem 2.12 shows that all equations of this form hold in a WOML.
Theorem 2.15 (Foulis-Holland theorem, F-H)
In any OML, if at least two of the three conditions aCb, aCc, and bCc hold, then the distributive law a ∩ (b ∪ c) = (a ∩ b) ∪ (a ∩ c) holds. Proof. See [12, p. 25].
Proof. By Lemma 2.14, we can replace the conditions with aC w b, aC w c, and bC w c. Then the conclusion follows from F-H and Theorem 2.11. As Theorem 2.11 shows, if t and s are terms, then the equation t ≡ s = 1 holds in all WOMLs iff the equation t = s holds in all OMLs. One might naively expect, then, that if t = s is the OML law, then t ≡ s = 1 will be the WOML law. This is not always the case: the ≡-mapped form of the OML law given by Eq. (13) in fact already holds in any OL. However, there is a version of the OML law with this property, as the following theorem shows.
Theorem 2.17
An ortholattice is an OML iff it satisfies Eq. (30); an ortholattice is a WOML iff it satisfies Eq. (31). Proof. For Eq. (30): it is easy to verify that Eq. (30) holds in an OML, for example by applying F-H. On the other hand, the equation fails in lattice O6 (Fig. 1), which means that it implies the orthomodular law by Theorem 2 of [12, p. 22]. It is also instructive to prove Eq. (13) directly; in that derivation, the penultimate step follows from Eq. (30) with a ∪ b substituted for b, and all other steps hold in OL.
On the other hand, substituting b′ and a′ for a and b in Eq. (31), we obtain an equivalent form. Theorem 2.18 An ortholattice is a WOML iff it satisfies the condition given as Theorem 3.9 of [7]. Proof. See Theorem 3.9 of [7].
Definition 3.2 An ortholattice to which the distributive law a ∩ (b ∪ c) = (a ∩ b) ∪ (a ∩ c) (Eq. (34)) is added is called a distributive ortholattice (DOL) or (much more often) a Boolean algebra (BA).
We recall that whereas every BA is a WDOL, there are WDOLs that are not BAs. [7] In particular, the lattice O6 (Fig. 1) is a WDOL but is not a BA.
The first part of the following theorem will turn out to be very useful, because it will let us reuse all of the results we have already obtained for WOMLs.
Figure 2. (a) Modular lattice MO2. (b) Non-WDOL lattice from [13], Fig. 3.
On the other hand, the modular (and therefore WOML) lattice MO2 (Fig. 2a) violates Eq. (33): if we put x for a and y for b, the equation evaluates to 0 = 1.
We are now in a position to prove two important equivalents to the WDOL law. We call them weak distributive laws, since they provide analogs to the distributive law of Boolean algebras.
Theorem 3.4 An ortholattice is a WDOL iff it satisfies either of the following equations, Eqs. (35) and (36). Proof. First, we prove that these laws can be derived from each other in any OL. Assuming Eq. (35) and using the fact that (a ∩ b) ∪ (a ∩ c) ≤ a ∩ (b ∪ c) holds in any OL, we obtain the WOML law. This lets us use our previous WOML results.
Starting from the last equality in the first sentence of the previous paragraph, in any OL we also have 1 = (a ∩ (b ∪ c)) →0 ((a ∩ b) ∪ (a ∩ c)). Therefore, using the footnote to Definition 2.3 and Theorem 2.12, it follows that the corresponding equation holds in any WOML, and therefore (by the previous paragraph) in any OL. Theorem 3.6 An ortholattice is a WDOL iff it satisfies the following condition, Eq. (39). Proof. First, we show that Eq. (39) implies the WOML law. Putting d for a and d ∩ e for b transforms the hypothesis, in an OL, into the required form; also putting e for c transforms the conclusion accordingly. The other conjunct is satisfied similarly, by symmetry.
Expanding the definition of ≡0 and discarding the left-hand conjunct then gives the result. An essential characteristic of the WDOL law and its equivalents is that they must fail in the modular (and therefore OML and WOML) lattice MO2. However, such a failure is not sufficient to ensure that we have a WDOL law equivalent.
Theorem 3.7 The following condition, Eq. (40), holds in all WDOLs and also fails in the modular lattice MO2. However, when added to the equations for OL, it does not determine the equations of WDOL.
Proof. To verify that this condition holds in any WDOL, we first convert the hypothesis to the OL-equivalent hypothesis (a ≡ 0 b) ≡ 1 = 1 using Eq. (29).
By using the WDOL law C(a, b) = 1 to satisfy the hypotheses of any uses of wF-H, it is then easy to prove that this condition holds in any WDOL. In particular, the reverse implication holds in any OL. The failure of Eq. (40) in MO2 is verified by putting x for a and y for b; then the left-hand side holds but the right-hand side becomes 0 = 1. On the other hand, Eq. (40) implies neither the WDOL law nor even the WOML law: it passes in the non-WOML lattice of Figure 2b. On the one hand, the equations that hold in BA properly include those that hold in WDOL, since WDOL is a strictly more general class of lattices. But there is also a sense in which the equations of WDOL can be considered to properly include those of BA, via a mapping that Theorem 3.8 below describes. Proof. The equational theory of BA consists of the equality axioms (see the proof of Theorem 2.11), the OL axioms, Eqs. (1)-(6), and the distributive law, Eq. (34). Any theorem of the equational variety of BAs can be proved by a sequence of applications of these axioms. We construct a mapping from these axioms into equations that hold in WDOLs as follows. We map each axiom, which is an equation of the form t = s or an inference of the form t1 = t2, . . . ⇒ t = s (where t, s, and t1, t2, . . . are terms), to the equation t ≡0 s = 1 or the inference t1 ≡0 t2 = 1, . . . ⇒ t ≡0 s = 1. These mappings hold in any WDOL by Eqs. (14)-(25) and (35), respectively, after converting ≡ to ≡0 with Eq. (40). We then simulate the BA proof by replacing each axiom reference in the proof with its corresponding WDOL mapping. The result is a proof that holds in the equational variety of WDOLs. Such a mapped proof uses only a proper subset of the equations that hold in WDOL: any equation whose right-hand side does not equal 1, such as a = a, will never be used. Proof. In any ortholattice, t = 1 iff t ≡0 1 = 1, by Eq. (29). Therefore, the inference of the theorem can be restated as follows: But this is exactly what we prove when we simulate the original BA proof of the inference in WDOL, using the method in the proof of Theorem 3.8. Thus, by Theorem 3.8, the inference holds in WDOL.
The Classes of Proper Weakly Orthomodular and Proper Weakly Distributive Ortholattices
One of the main aims of this paper is to prove that quantum and classical logics are sound and complete with respect to a class of weakly orthomodular ortholattices (WOMLs) in which orthomodularity fails for every lattice and a class of weakly distributive ortholattices (WDOLs) in which distributivity fails for every lattice, respectively. To prove the soundness and completeness of quantum logic, we shall consider a new class of lattices that belong to the class WOML but not to the class OML. We denote the resulting class WOML-OML. In other words, WOML-OML denotes the set-theoretical difference WOML \ OML. A member of the class WOML-OML is a lattice, specifically a member of the class WOML, and we will call such a lattice a proper WOML. Thus a proper WOML is one that satisfies the WOML equations but violates the OML equations. Lattice O6 is an example of a proper WOML. Lattice MO2 is an example of a WOML that is not a proper WOML, i.e., that does not belong to the class WOML-OML, since it also belongs to the class OML.
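Because O6 has only six elements, its status as a proper WOML can be verified mechanically. The following sketch (our own encoding, not part of the paper) enumerates all pairs of elements and checks that the orthomodular equation a ∪ (a′ ∩ (a ∪ b)) = a ∪ b fails in O6 while its ≡-weakened counterpart, with a ≡ b computed as (a ∩ b) ∪ (a′ ∩ b′), evaluates to 1 throughout.

```python
elems = ['0', 'x', 'y', 'yp', 'xp', '1']
comp = {'0': '1', '1': '0', 'x': 'xp', 'xp': 'x', 'y': 'yp', 'yp': 'y'}

# order of O6: 0 <= x <= y <= 1 and 0 <= yp <= xp <= 1
leq = {(a, a) for a in elems} | {('0', a) for a in elems} \
      | {(a, '1') for a in elems} | {('x', 'y'), ('yp', 'xp')}

def join(a, b):
    ups = [c for c in elems if (a, c) in leq and (b, c) in leq]
    return next(u for u in ups if all((u, c) in leq for c in ups))

def meet(a, b):
    lows = [c for c in elems if (c, a) in leq and (c, b) in leq]
    return next(m for m in lows if all((c, m) in leq for c in lows))

def equiv(a, b):                 # a == b  :=  (a n b) u (a' n b')
    return join(meet(a, b), meet(comp[a], comp[b]))

oml_failures, weak_form_holds = [], True
for a in elems:
    for b in elems:
        lhs = join(a, meet(comp[a], join(a, b)))   # a u (a' n (a u b))
        rhs = join(a, b)
        if lhs != rhs:
            oml_failures.append((a, b))
        if equiv(lhs, rhs) != '1':
            weak_form_holds = False

print("orthomodular equation fails at:", oml_failures)   # [('x','y'), ('yp','xp')]
print("weakened (==) form always 1:", weak_form_holds)   # True
```

Running this reports failures exactly at the pairs (x, y) and (y′, x′), the two places where orthomodularity breaks in the hexagon, while the weakened form holds everywhere.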
Notice that WOML-OML is not an equational variety like WOML, because we cannot turn WOML into WOML-OML by adding new equational conditions to those defining WOML. If we try to add the orthomodularity condition (12) [14,11] to WOML-OML, we get the empty class.
In Section 6 we shall show that quantum logic is complete for WOML-OML: every wff whose valuation equals 1 for all members of WOML-OML is a provable statement in quantum logic. This is not obvious a priori: quantum logic (QL) is not necessarily complete for an arbitrary collection of WOMLs. For example, it is not complete for the subset of WOML-OML consisting of the singleton set {O6}, since O6 is a model for classical logic.
The significance of this result can be explained as follows. Since QL is already complete for OML models, it might be argued that completeness for the more general WOML models ( [7]) has its origin in the OML members of the equational variety WOML, rather than being an intrinsic property of the non-OML members. We show that this is not the case by completely removing all OMLs from the picture.
In order for the completeness proof to go through, we will have to construct a special Lindenbaum algebra that belongs to WOML-OML. This requires a modification to the standard Lindenbaum algebra (which, in the standard proof, "wants" to be an OML). The technique that we use, involving cutting down the equivalence classes for the Lindenbaum algebra to force it to belong to WOML-OML, might be useful for other completeness proofs that are not amenable to the standard Lindenbaum-algebra approach.
Following an analogous blueprint, in Section 7 we will also show that classical logic is complete for the class of models WDOL-BA, defined as the set-theoretical difference WDOL \ BA (where WDOL and BA here denote equational varieties), which again by definition has nothing to do with Boolean algebras. In fact, a simpler result is possible: Schechter [15, p. 272] has proved that classical logic (CL) is complete for the single WDOL lattice O6. Schechter's result can be strengthened to show that classical logic is complete for any subset of WDOL. This is an immediate consequence of the fact that classical logic is maximal, i.e., no extension of it can be consistent. So if classical logic is sound for a model, it is automatically complete for that model.
Logics and Their Soundness for Our Models
Logic (L) is a language consisting of propositions and a set of conditions and rules imposed on them, called axioms and rules of inference. The propositions we use are well-formed formulae (wffs), defined as follows. We denote elementary, or primitive, propositions by p 0 , p 1 , p 2 , ..., and have the following primitive connectives: ¬ (negation) and ∨ (disjunction). The set of wffs is defined recursively as follows: p j is a wff for j = 0, 1, 2, ...
¬A is a wff if A is a wff.
A ∨ B is a wff if A and B are wffs.
We introduce conjunction with the following definition: A ∧ B =def ¬(¬A ∨ ¬B). The operations of implication are the following ones (classical, Sasaki, and Kalmbach, respectively) [16]: A →0 B =def ¬A ∨ B, A →1 B =def ¬A ∨ (A ∧ B), and A →3 B =def (¬A ∧ B) ∨ (¬A ∧ ¬B) ∨ (A ∧ (¬A ∨ B)). We also define the equivalence operations as follows: A ≡ B =def (A ∧ B) ∨ (¬A ∧ ¬B) and A ≡0 B =def (A →0 B) ∧ (B →0 A). Connectives bind from weakest to strongest in the order →, ≡, ∨, ∧, ¬. Let F • be the set of all propositions, i.e., of all wffs. Of the above connectives, ∨ and ¬ are primitive. Wffs containing ∨ and ¬ within logic L are used to build an algebra F = F • , ¬, ∨ . In L, a set of axioms and rules of inference are imposed on F. From the set of axioms, by means of the rules of inference, we get other expressions, which we call theorems. Axioms themselves are also theorems. A special symbol ⊢ is used to denote the set of theorems. Hence A ∈ ⊢ iff A is a theorem. The statement A ∈ ⊢ is usually written as ⊢ A. We read this: "A is provable", since if A is a theorem, then there is a proof of it. We present the axiom systems of our propositional logics in schemata form (so that we dispense with the rule of substitution).
Quantum Logic and Its Soundness for WOML-OML Models
We present Kalmbach's quantum logic because it is the system that has been investigated in the greatest detail in her book [12] and elsewhere [17,13]. Quantum logic (QL) is defined as a language consisting of propositions and connectives (operations) as introduced above, together with the following axioms and a rule of inference. We will use ⊢ QL to denote provability from the axioms and rule of QL and omit the subscript when it is clear from context (such as in the list of axioms that follows).
Axioms
To prove soundness means to prove that all axioms, as well as the rules of inference (and therefore all theorems), of QL hold in its models. Whenever the base set L of a model belongs to WOML-OML, we say (informally) that the model belongs to WOML-OML. In particular, if we say "for all models in WOML-OML" or "for all proper WOML models", we mean for all base sets in WOML-OML and for all valuations on each base set. The term "model" may refer either to a specific pair L, h or to all possible such pairs with the base set L, depending on context. For brevity, whenever we do not make it explicit, the notations ⊨M A and Γ ⊨M A will always be implicitly quantified over all models of the appropriate type, in this section for all proper WOML models M. Similarly, when we say "valid" without qualification, we will mean valid in all models of that type.
Rule of Inference (Modus Ponens)
The following theorem shows that if A is a theorem of QL, then A is valid in any proper WOML model. In [7,8] we proved the soundness for WOML and OML. We now prove the soundness of quantum logic by means of WOML-OML, i.e., that if A is a theorem in QL, then A is valid in any proper WOML model, i.e., in any WOML-OML model.
Proof. By Theorem 29 of [18], any WOML is a model for QL. Therefore, any proper WOML is also a model.
Classical Logic and Its Soundness for WDOL-BA Models
We make use of the PM classical logical system CL (Whitehead and Russell's Principia Mathematica axiomatization in Hilbert and Ackermann's presentation [19], but in schemata form, so that we dispense with their rule of substitution). In this system, the connectives ∨ and ¬ are primitive, and the →0 connective shown in the axioms is implicitly understood to be expanded according to its definition. We will use ⊢ CL to denote provability from the axioms and rule of CL, omitting the subscript when it is clear from context.
Axioms
Rule of Inference (Modus Ponens)
We assume that the only legitimate way of inferring theorems in CL is by means of these axioms and the Modus Ponens rule. We make no assumption about valuations of the primitive propositions from which wffs are built, but instead are interested in wffs that are valid in the underlying models. Soundness and completeness will show that the theorems that can be inferred from the axioms and the rule of inference are exactly those that are valid.
We define derivability in CL, Γ ⊢ CL A or just Γ ⊢ A, in the same way as we do for system QL. The models and validity of formulae in a model are also defined as for QL above.
The following theorem shows that if A is a theorem of CL, then A is valid in any proper WDOL model.
In [7,8] we proved the soundness for WDOL and BA. We now prove the soundness of classical logic by means of WDOL-BA, i.e., that if A is a theorem in CL, then A is valid in any proper WDOL model, i.e., in any WDOL-BA model.
Proof. By Theorem 30 of [18], any WDOL is a model for CL. Therefore, any proper WDOL is also a model.
The Completeness of Quantum Logic for WOML-OML Models
Our main task in proving the soundness of QL in the previous section was to show that all axioms as well as the rules of inference (and therefore all theorems) from QL hold in WOML-OML. The task of proving the completeness of QL is the opposite one: we have to impose the structure of WOML-OML on the set F • of formulae of QL.
We start with a relation of congruence, i.e., a relation of equivalence compatible with the operations in QL. We make use of an equivalence relation to establish a correspondence between formulae of QL and formulae of WOML-OML. The resulting equivalence classes stand for elements of a proper WOML (i.e., a member of WOML-OML) and enable the completeness proof of QL by means of WOML-OML.
Our definition of congruence involves a special set of valuations on the lattice O6 (shown in Figure 1), called O6 and defined as the set of all mappings o from the set of wffs F• to the lattice O6 that respect the operations, i.e., o(¬A) = o(A)′ and o(A ∨ B) = o(A) ∪ o(B). The purpose of O6 is to let us refine the equivalence classes used for the completeness proof, so that the Lindenbaum algebra will be a proper WOML, i.e., one that is not orthomodular. This is accomplished by conjoining the term (∀o ∈ O6)[(∀X ∈ Γ)(o(X) = 1) ⇒ o(A) = o(B)] to the equivalence relation definition, meaning that for equivalence we also require that, whenever the valuations o of the wffs in Γ are all 1, the valuations of wffs A and B map to the same point of the lattice O6. Thus the wffs A ∨ B and A ∨ (¬A ∧ (A ∨ B)) become members of two separate equivalence classes, which, by Theorem 6.7 below, amounts to the non-orthomodularity of the Lindenbaum algebra. Without the conjoined term, these two wffs would belong to the same equivalence class. The point of doing this is to provide a completeness proof that is not dependent in any way on the orthomodular law and to show that completeness does not require that any of the underlying models be OMLs.
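To make the special role of O6 concrete, here is a minimal computational sketch (ours, not from the original): it encodes the hexagon lattice O6 in its standard presentation, 0 < a < b < 1 and 0 < b′ < a′ < 1 with ′ the orthocomplementation, and verifies that the orthomodular law fails on it.

```python
# Hexagon (benzene ring) ortholattice O6: 0 < a < b < 1 and 0 < b' < a' < 1.
ELEMS = ["0", "a", "b", "b'", "a'", "1"]

# Reflexive, transitive order relation of the Hasse diagram.
LEQ = ({("0", x) for x in ELEMS} | {(x, "1") for x in ELEMS} |
       {(x, x) for x in ELEMS} | {("a", "b"), ("b'", "a'")})

# Orthocomplementation.
ORTHO = {"0": "1", "1": "0", "a": "a'", "a'": "a", "b": "b'", "b'": "b"}

def leq(x, y):
    return (x, y) in LEQ

def meet(x, y):  # greatest lower bound (exists because O6 is a lattice)
    lower = [z for z in ELEMS if leq(z, x) and leq(z, y)]
    return next(z for z in lower if all(leq(w, z) for w in lower))

def join(x, y):  # least upper bound
    upper = [z for z in ELEMS if leq(x, z) and leq(y, z)]
    return next(z for z in upper if all(leq(z, w) for w in upper))

# Orthomodular law: x <= y implies y = x v (x' ^ y). It fails on O6:
violations = [(x, y) for x in ELEMS for y in ELEMS
              if leq(x, y) and join(x, meet(ORTHO[x], y)) != y]
print(violations)  # [('a', 'b'), ("b'", "a'")] -> O6 is a WOML but not an OML
```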
Theorem 6.2 The relation of equivalence ≈Γ,QL, or just ≈, defined by

A ≈ B  iff  Γ ⊢ A ≡ B and (∀o ∈ O6)[(∀X ∈ Γ)(o(X) = 1) ⇒ o(A) = o(B)],

where Γ ⊆ F•, is a relation of congruence in the algebra F.
Proof.
Let us first prove that ≈ is an equivalence relation. A ≈ A follows from A1 [Eq. (41)] of system QL and the identity law of equality. If Γ ⊢ A ≡ B, we can detach the left-hand side of A12 to conclude Γ ⊢ B ≡ A, through the use of A13 and repeated uses of A14 and R1. From this and commutativity of equality, we conclude A ≈ B ⇒ B ≈ A. (For brevity we will not usually mention further uses of A12, A13, A14, and R1 in what follows.) The proof of transitivity runs as follows.
In the last line above, Γ ⊢ A ≡ C follows from A2, and the last metaconjunction reduces to o(A) = o(C) by transitivity of equality. Hence the conclusion A ≈ C by definition.
In order to be a relation of congruence, the relation of equivalence must be compatible with the operations ¬ and ∨. These proofs run as follows.
In the second step of Eq. (64), we used A3. In the second step of Eq. (65), we used A4 and A10. For the quantified part of these expressions, we applied the definition of O6.

Theorem 6.7 The orthomodular law does not hold in F•/≈.

Proof. This is Theorem 3.27 from [7], and the proof provided there runs as follows. We take a valuation o ∈ O6 with o(A) = a and o(B) = b for the wffs A ∨ B and A ∨ (¬A ∧ (A ∨ B)). Then o(A ∨ B) = a ∪ b, whereas o(A ∨ (¬A ∧ (A ∨ B))) = a ∪ (a′ ∩ (a ∪ b)), and in O6 we have a ∪ b ≠ a ∪ (a′ ∩ (a ∪ b)), providing a counterexample to the orthomodular law for F•/≈.
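Continuing the sketch above (reusing ELEMS, ORTHO, meet, and join), the specific counterexample can be checked directly: a valuation o ∈ O6 with o(A) = a and o(B) = b assigns different lattice elements to the two wffs.

```python
# Reuses ORTHO, meet, and join from the O6 sketch above.
lhs = join("a", "b")                               # o(A v B) = a u b = b
rhs = join("a", meet(ORTHO["a"], join("a", "b")))  # o(A v (~A ^ (A v B))) = a u (a' n b) = a
print(lhs, rhs, lhs == rhs)                        # b a False -> distinct equivalence classes
```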
Lemma 6.8 The Lindenbaum algebra A, together with the natural valuation f, is a proper WOML model.

Proof. Follows from Lemma 6.5 and Theorem 6.7.

Now we are able to prove the completeness of QL, i.e., that if a formula A is a consequence of a set of wffs Γ in all WOML-OML models, then Γ ⊢ A. In particular, when Γ = ∅, all valid formulae are provable in QL. (Recall from the note below Definition 5.9 that the left-hand side of the metaimplication below is implicitly quantified over all proper WOML models M.)

Theorem 6.9 (Completeness) Γ ⊨M A implies Γ ⊢ A.

Proof. Γ ⊨M A means that in all proper WOML models M, if f(X) = 1 for all X in Γ, then f(A) = 1 holds. In particular, it holds for M = ⟨A, f⟩, which is a proper WOML model by Lemma 6.8. Therefore, in the Lindenbaum algebra A, if f(X) = 1 for all X in Γ, then f(A) = 1 holds. By Lemma 6.6, it follows that Γ ⊢ A.
The Completeness of Classical Logic for WDOL-BA Models
We have to impose the structure of WDOL-BA on the set F• of formulae of CL. We start with a relation of congruence, i.e., a relation of equivalence compatible with the operations in CL. We make use of an equivalence relation to establish a correspondence between formulae of CL and formulae of WDOL-BA. The resulting equivalence classes stand for elements of a proper WDOL (i.e., a member of WDOL-BA) and enable the completeness proof of CL by means of WDOL-BA. We will closely follow the procedure outlined in Section 6 and will often implicitly assume that definitions and theorems given in that section for QL have a completely analogous form for CL.
Theorem 7.1 The relation of equivalence ≈Γ,CL, or just ≈, defined as in Theorem 6.2 with ⊢CL in place of ⊢QL, is a relation of congruence in the algebra F.

Lemma 7.2 The Lindenbaum algebra A = F•/≈ is a WDOL.
Lemma 7.3 In the Lindenbaum algebra A, if f(X) = 1 for all X in Γ implies f(A) = 1, then Γ ⊢CL A.
Proof. As given in [18].
Theorem 7.4 Distributivity does not hold in A.
Proof. Analogous to the proof of Theorem 6.7: valuations in O6 provide a counterexample to the distributive law in F•/≈.
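A direct check (again reusing the O6 helpers from the sketch in Section 6) confirms the non-distributivity of O6 on which this argument rests.

```python
# Distributivity x ^ (y v z) = (x ^ y) v (x ^ z) fails on O6:
lhs = meet("b", join("a", "b'"))             # b ^ (a v b') = b ^ 1 = b
rhs = join(meet("b", "a"), meet("b", "b'"))  # (b ^ a) v (b ^ b') = a v 0 = a
print(lhs, rhs, lhs == rhs)                  # b a False -> O6 is not distributive
```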
Lemma 7.5 The Lindenbaum algebra A is a proper WDOL, i.e., its base set belongs to WDOL-BA.

Proof. Follows from Lemma 7.2 and Theorem 7.4.
Theorem 7.6 (Completeness) Γ ⊨M A implies Γ ⊢CL A.

Proof. Analogous to the proof of Theorem 6.9.
Valuation-Nonmonotonicity
In Sections 5, 6, and 7 we proved the soundness and completeness of both quantum (QL) and classical (CL) standard logic for proper weakly orthomodular (WOML-OML) and weakly distributive (WDOL-BA) ortholattices, respectively. As we stressed in the Introduction and in Section 4, WOML-OML is the class of all those ortholattices (see Definition 2.1) that satisfy Definition 2.8 (WOML) but do not satisfy Definition 2.9 (OML). Analogously, WDOL-BA includes all those ortholattices that satisfy Definition 3.5 (WDOL) but do not satisfy Definition 3.2 (BA).
The set-theoretical differences WOML\OML (WOML-OMLs) and WDOL\BA (WDOL-BAs) determine the valuations that quantum and classical logic, respectively, can make use of. The valuations that can be assigned to logical propositions are simply elements of particular lattices, e.g., of O6 given in Figure 1. If we add Eq. (68) to the WOML conditions, we get a family of lattices, let us call it WOMLi, which is strictly smaller than WOML and strictly larger than OML. One of its valuations is on the O6 lattice but not on the Rose-Wilkinson lattice. In analogy to the way we introduced proper WOMLs in Section 4, we can define WOMLi-OML as the class WOMLi\OML, each member of which is a proper WOMLi. Now, the class WOML contains both the Rose-Wilkinson and O6 lattices. The class WOMLi-OML will contain O6 but not the Rose-Wilkinson lattice. The class OML will contain neither Rose-Wilkinson nor O6. A slight modification of the proof of Section 6 (replacing WOML with WOMLi) shows that quantum logic is complete for WOMLi-OML, and it is also complete for WOMLi itself, as follows from the completeness proofs of quantum logic for WOML given in [7,8].
Alternatively, we can obtain a hierarchy of classes of models for quantum logic by adding conditions to the equations determining the class WOML. Rather than restricting WOML by subtracting OML from it (to obtain WOML-OML), we restrict WOML by adding new conditions (stronger than the WOML law but weaker than the orthomodular law) to its defining equations, obtaining smaller equational varieties in between OML and WOML. We obtain the analogous hierarchy for classical logic by substituting "WDOL" for "WOML," "BA" for "OML," and "distributive" for "orthomodular." For instance, if we start with WOML, we can choose any model from it we wish: O6, Rose-Wilkinson, Beran 7b [20, Fig. 7b], or any other WOML lattice. When we add condition (68), we can no longer use, e.g., the Rose-Wilkinson lattice/valuation. When we add the orthomodular law, we can no longer use the O6, Rose-Wilkinson, or Beran 7b valuations. Thus by adding conditions to the definitions of WOML and WDOL, we change the values (valuations) of logical propositions, and we call this valuation-nonmonotonicity; this is stated formally as Theorem 8.1, with the corresponding proofs for WOML and WDOL given in [7,8]. The soundness and completeness proofs for OML and BA are well known; see, e.g., [17] and [19]. Soundness and completeness proofs for any lattice in between WOML and OML and in between WDOL and BA follow from the respective proofs for WOML and WDOL. For the soundness part of the proof, this is because any such WOMLj or WDOLj (j = 1, 2, . . .) is a WOML or WDOL, respectively. We can obtain a proof that quantum (classical) logic is complete for WOMLj (WDOLj) by rewriting the completeness proof of Section 6 (Section 7) so that the set of mappings to O6 that refines the equivalence relations is replaced by a set of mappings to a lattice that satisfies WOMLj (WDOLj) but violates WOMLj+1 (WDOLj+1), e.g., the Rose-Wilkinson lattice for WOMLj = WOML and WOMLj+1 = WOMLi. The part of the proof that refers to adding conditions is obvious from the very definitions of WOML, WOMLj, OML, WDOL, WDOLj, and BA.
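The two hierarchies can be summarized compactly; this is our restatement of the inclusions just described (with WOMLi playing the role of the first added-condition variety), not a formula from the original:

```latex
\mathrm{WOML} \supsetneq \mathrm{WOML}_1 \supsetneq \mathrm{WOML}_2 \supsetneq \cdots \supsetneq \mathrm{OML},
\qquad
\mathrm{WDOL} \supsetneq \mathrm{WDOL}_1 \supsetneq \mathrm{WDOL}_2 \supsetneq \cdots \supsetneq \mathrm{BA}
```

Quantum logic is sound and complete for each WOMLj in the first chain (and, via the set-difference construction of Section 9, for classes such as WOML\WOMLi), and classical logic correspondingly for each WDOLj.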
We stress here that we cannot mix the two alternative ways of choosing valuations (restricting classes by forming set differences vs. valuation-nonmonotonicity), because if we added, e.g., the conditions defining OML (BA) to WOML-OML (WDOL-BA), we would simply get empty sets.
Completeness for Smaller Model Subclasses
The reader familiar with the authors' earlier completeness proofs in [7] will notice that the new proofs here, in Sections 6 and 7, are identical except for the replacement of WOML (WDOL) with WOML-OML (WDOL-BA) in certain places. This yields a stronger result for each logic (QL and CL), i.e., each is complete for a smaller class of models. If a logic is complete for a class of models, it obviously continues to be complete if more models for the logic are added to that class. Thus the earlier completeness results follow immediately from the new ones, since WOML is obtained from WOML-OML by adding back the OML models for QL (and analogously WDOL for CL).
The key idea that allowed us to exclude OML from WOML in the QL completeness proof was refinement of the equivalence relation in Theorem 6.2 with the set of mappings O6. This resulted in smaller equivalence classes, allowing us to construct a Lindenbaum algebra that violated the orthomodular law and is thus a proper WOML.
In fact, the O6 "trick" is not limited to the use of lattice O6. We can rewrite the completeness proof for, e.g., QL using any lattice that is a proper WOML (a WOML but not an OML) in place of O6. This will result in a completeness proof for a different class of models, which can be an even smaller subclass of WOML.

For example, the Rose-Wilkinson lattice of Figure 3 is a proper WOML. If we use it in place of O6, an analogous completeness proof shows that QL is complete for the class WOML\WOMLi, which is strictly smaller than WOML-OML. Since WOML\WOMLi does not include O6, this shows that QL is complete for a class of models that is not only unrelated to OMLs but is even unrelated to the "natural" OML counterexample O6, which up to now has served as our prototypical WOML example.
As mentioned earlier, for classical logic CL we have an even stronger completeness result: CL is complete for single WDOL lattices, not just for classes of them. For example, it turns out that the Rose-Wilkinson lattice is also a proper WDOL (as well as a proper WOML). Thus the Rose-Wilkinson lattice, by itself, provides a model for which classical logic is sound and complete, showing that the hexagon O6 is not the only "exotic" non-Boolean lattice model for CL.
Conclusion
The main result we obtained in the previous sections is that logics can be modelled by disjoint classes of different ortholattices. Classical logic can be modelled by non-distributive lattices and quantum logic by nonorthomodular lattices. These lattices represent different disjoint valuation sets, where the valuation is a mapping from propositions to a lattice. Thus by adding conditions (axioms) to the original definition of an ortholattice we determine classes of lattices that in turn determine valuations that one can ascribe to logical propositions. We call the latter property of logical propositions valuation-nonmonotonicity (see Theorem 8.1). But by considering disjoint classes of lattices we can further restrict valuations we want to use. This can be done as follows.
We considered varieties of non-distributive weakly distributive lattice (WDOL, Definition 3.5) models of classical propositional logic and non-orthomodular weakly orthomodular lattice (WOML, Definition 2.8) models of quantum propositional logic, and proved their soundness and completeness for those models (see Theorems 5.10, 5.11, 6.9, and 7.6). In particular, we considered subclasses of WDOL and WOML that do not contain Boolean algebras (BAs, Definition 3.2) and orthomodular lattices (OMLs, Definition 2.9), respectively, while in Sections 8 and 9 we also considered a possibly infinite sequence of subclasses of WDOL and WOML that do not contain the lattices WDOLi and WOMLi, respectively, which in turn properly contain BA and OML, and for all of which we have proved soundness and completeness. We denoted these classes (varieties of WDOL and WOML) as WDOL\WDOLi and WOML\WOMLi, respectively.

At the level of logical gates, classical or quantum, with today's technology for computers and artificial intelligence, we can use only bits and qubits, respectively, i.e., only valuations corresponding to {0,1} BA and OML, respectively. And when we talk about logics today, we take for granted that they have the latter valuation: {TRUE, FALSE} in the case of classical logic and Hasse (Greechie) diagrams in the case of quantum logic [21]. This is because a valuation is all we use to implement a logic. In its final application, we do not use a logic as given by its axioms and rules of inference but instead as given by its models. Indeed, logics given only by their axioms and rules of inference (as in Sections 5.1 and 5.2), i.e., without any models and any valuations, cannot be implemented in any hardware at all.
It would be interesting to investigate how other valuations, i.e., various ortholattices, might be implemented in complex circuits. That would provide us with the possibility of controlling essentially different algebraic structures (logical models), implemented in radically different hardware (logic circuits consisting of logic gates), by the same logic as defined by its axioms and rules of inference. | 2008-12-14T16:49:12.000Z | 2008-12-01T00:00:00.000 | {
"year": 2008,
"sha1": "76910b08fb78fe8fb242af3075daffc8703c1e16",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0812.2702v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "245360829b2c966481c2a43b32411c63d4612458",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
22716565 | pes2o/s2orc | v3-fos-license
Behavior-Dependent Activity and Synaptic Organization of Septo-hippocampal GABAergic Neurons Selectively Targeting the Hippocampal CA3 Area
Summary

Rhythmic medial septal (MS) GABAergic input coordinates cortical theta oscillations. However, the rules of innervation of cortical cells and regions by diverse septal neurons are unknown. We report a specialized population of septal GABAergic neurons, the Teevra cells, which selectively innervate the hippocampal CA3 area, bypassing CA1, CA2, and the dentate gyrus. Parvalbumin-immunopositive Teevra cells show the highest rhythmicity among MS neurons and fire with short burst duration (median, 38 ms), preferentially at the trough of both CA1 theta and slow irregular oscillations, coincident with the highest hippocampal excitability. Teevra cells synaptically target GABAergic axo-axonic and some CCK interneurons in restricted septo-temporal CA3 segments. The rhythmicity of their firing decreases from the septal to the temporal termination of individual axons. We hypothesize that Teevra neurons coordinate oscillatory activity across the septo-temporal axis, phasing the firing of specific CA3 interneurons, thereby contributing to the selection of pyramidal cell assemblies at the theta trough via disinhibition.
INTRODUCTION
Activity in the hippocampal CA1 area is spatially and temporally tuned during context-dependent behavior, and the spiking of pyramidal cells and interneurons is organized within theta and gamma frequency oscillatory timescales. This temporal organization is supported by well-characterized glutamatergic projections from CA3 (Amaral and Witter, 1989; Middleton and McHugh, 2016) as well as from the entorhinal cortex (EC) (Brun et al., 2008; Witter et al., 1988). These inputs mediate both dendritic excitation and feedforward inhibition (Buzsáki, 1984) of pyramidal cells. In addition to these cortical inputs, medial septal (MS) cholinergic (Gielow and Zaborszky, 2017), glutamatergic (Justus et al., 2017; Huh et al., 2010; Robinson et al., 2016; Fuhrmann et al., 2015), and GABAergic neurons innervating the hippocampus are part of a subcortical theta rhythm generating network involving the brainstem, thalamus, and hypothalamus (Vertes and Kocsis, 1997). Disruption of MS input results in loss of theta power and impaired performance in spatial learning (Winson, 1978), disrupted learning in contextual fear conditioning (Calandreau et al., 2007), and a slowed rate of acquisition of delayed eyeblink conditioning (Berry and Thompson, 1979). A striking yet underappreciated feature of GABAergic septal afferents to the hippocampus is the extensive targeting of interneurons in CA3 and in the hilus and granule cell layer of the dentate gyrus (DG), compared to CA1 (Freund and Antal, 1988). The key role of CA3 pyramidal cells in the hippocampal circuit is underlined by their bilateral projections, topographically organized through highly interconnected cell assemblies and providing the numerically largest innervation to CA1 (Witter, 2007). Interestingly, CA3 inactivation does not hamper rate coding in CA1; however, it is required for the emergence of theta sequences (Foster and Wilson, 2007; Middleton and McHugh, 2016). Furthermore, the CA3 area and the DG are likely to be involved in distinct aspects of spatial coding (Neunuebel and Knierim, 2014), raising the hypothesis that septal GABAergic inputs to the hippocampal subsystems might have distinct connectional and temporal organization. However, the organization of MS inputs to hippocampal or cortical areas at single-cell resolution is largely unknown.
In CA3-CA1, pyramidal cells are active at the trough of dorsal hippocampal CA1 theta oscillations (Lasztóczi et al., 2011). This activity is coordinated by diverse local interneurons of the hippocampus, which provide temporally coordinated rhythmic inhibition to distinct pyramidal subcellular compartments (Somogyi et al., 2013). Some of these GABAergic interneurons, e.g., axo-axonic cells (Viney et al., 2013), do not fire when the pyramidal cells are most excitable, whereas others (e.g., bistratified and O-LM cells; Katona et al., 2014) fire maximally together with the overall population of pyramidal cells. How these differences among GABAergic interneurons are brought about in the network is beginning to emerge from analysis of their long-range synaptic inputs (Leão et al., 2012; Fuhrmann et al., 2015; Kaifosh et al., 2013). A key missing link is the theta firing-phase preference of subcortical inputs to defined types of hippocampal interneuron.
Do septo-cortical long-range projection neurons follow target-region-specific axonal distributions and cell-type-specific theta-phase firing preferences? In order to define the contribution of septal inputs at single-cell resolution, we set out to determine whether rhythmic septal neurons with similar activity patterns project to the same or distinct hippocampal areas. We used a combination of extracellular multiunit recordings, targeted single-neuron recording, and juxtacellular labeling (Pinault, 1996) in behaving head-fixed mice to reveal the rules of septo-hippocampal connectivity. Here, we report the activity patterns of a distinct group of rhythmic MS GABAergic neurons, named "Teevra cells," which selectively target interneurons in spatially restricted domains of the CA3 region of the hippocampus but do not innervate the DG or CA2 and only minimally innervate CA1. We have determined their molecular profiles, synaptic partners, and organizational principles along the hippocampal septo-temporal axis.
Subpopulations of MS Rhythmic Neurons Based on Spike Train Dynamics: Teevra and Komal Neurons
Using multichannel extracellular probes, we recorded neuronal activity in the septum of head-fixed mice (n = 7) during running (RUN) and pauses (REST) while they navigated on a virtual linear maze. The location of the probe and recording sites was established histologically in fixed brain sections post hoc, and further analysis was restricted to the cases where several recording sites were confirmed to be in the medial septum (MS) (n = 4 mice, Figure 1A). The action potential firing frequency of recorded neurons in the MS during running varied widely (median: 23.98 Hz; interquartile range [IQR]: 13.4-38.5 Hz, n = 81 neurons) and was higher than that of adjacent lateral septal (LS) neurons (median: 2.55 Hz, IQR: 1-7.1 Hz, n = 18 neurons; Kruskal-Wallis test, p < 10⁻⁸). All MS neurons recorded in this configuration were phase coupled to ongoing theta oscillations recorded in dorsal hippocampal CA1, whereas this was the case for only 27% of LS neurons (Rayleigh test, p < 0.05). Thus, MS neurons differed from adjacent LS neurons both by their firing rate during locomotion and by their phase coupling to local field potential (LFP) theta oscillations in CA1.
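For reference, a minimal sketch of the circular statistics involved (our reconstruction; the authors cite the Rayleigh test, but their exact implementation is in the STAR Methods). Phases are in radians, with phase 0 at the CA1 theta trough as in the recordings described below; the function name and the large-sample p-value approximation are ours.

```python
import numpy as np

def rayleigh_test(phases_rad):
    """Mean phase, resultant length R, and approximate Rayleigh p-value for
    spike phases (radians); large-sample approximation after Zar."""
    n = len(phases_rad)
    vec = np.mean(np.exp(1j * np.asarray(phases_rad)))  # mean resultant vector
    R = np.abs(vec)
    z = n * R ** 2
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))   # first-order correction
    return np.angle(vec), R, float(np.clip(p, 0.0, 1.0))

# Example: spikes concentrated near the theta trough (phase 0 by convention)
phases = np.random.vonmises(mu=0.0, kappa=2.0, size=500)
print(rayleigh_test(phases))  # small p -> significant phase coupling
```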
Rhythmic burst firing is considered to be a characteristic feature of MS GABAergic neurons (Borhegyi et al., 2004; Dragoi et al., 1999; King et al., 1998; Simon et al., 2006). We observed a striking diversity in the burst duration and the extent of rhythmicity of action potential firing among simultaneously recorded MS neurons (Figure 1A). To capture this, we estimated the burst duration (see STAR Methods) and calculated a rhythmicity index (RI; see STAR Methods), bounded between 0 and 1 in order of increasing rhythmicity. During RUN periods, simultaneously recorded MS neurons (range: 5 to 19) exhibited varying burst durations (median: 55 ms, IQR: 40.6-87.9 ms, n = 81 neurons) and extents of rhythmicity (median: 0.13, IQR: 0.04-0.31, n = 81 neurons). Additionally, the preferential mean firing phases of the individual cells, collectively, covered the entire theta cycle as referenced to ongoing LFP theta oscillations in the pyramidal layer of CA1. We further found that simultaneously recorded individual MS neurons could increase, decrease, or not change their mean firing rate between REST and RUN periods, and this was consistent for a given cell across different periods of the recording session. In order to capture this behavioral state dependence, we computed a rate change score from REST to RUN, bounded between −1 and 1 (see STAR Methods).
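Both scores are specified here only by their bounds (the exact formulas are in the STAR Methods, which are not reproduced in this text), so the sketch below is a plausible reconstruction rather than the authors' code: a normalized rate difference is bounded in [−1, 1], and an autocorrelogram peak/trough contrast in the theta range is bounded in [0, 1].

```python
import numpy as np

def rate_change_score(rate_run_hz, rate_rest_hz):
    """Normalized REST->RUN rate change, bounded in [-1, 1] (assumed form)."""
    total = rate_run_hz + rate_rest_hz
    return 0.0 if total == 0 else (rate_run_hz - rate_rest_hz) / total

def rhythmicity_index(spike_times_s, bin_s=0.01, max_lag_s=0.35):
    """Peak/trough contrast of the spike-train autocorrelogram in the theta
    range, bounded in [0, 1] (assumed form; 71-167 ms lags ~ 6-14 Hz)."""
    t = np.sort(np.asarray(spike_times_s))
    lags = [t[j] - t[i] for i in range(len(t)) for j in range(i + 1, len(t))
            if t[j] - t[i] <= max_lag_s]
    hist, edges = np.histogram(lags, bins=np.arange(0, max_lag_s, bin_s))
    centers = edges[:-1] + bin_s / 2
    peak = hist[(centers >= 0.071) & (centers <= 0.167)].max()
    trough = hist[(centers > 0.167) & (centers <= 0.30)].min()
    ri = float(peak - trough) / (peak + trough) if (peak + trough) else 0.0
    return max(0.0, ri)

print(rate_change_score(20.0, 10.0))   #  0.33 -> rate increases during RUN
print(rate_change_score(10.0, 20.0))   # -0.33 -> rate decreases during RUN
```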
Similar activity dynamics could be identified and measured from extracellular glass electrode recordings of single MS neurons in behaving head-fixed mice (n = 65 neurons, N = 24 mice). Further analysis was restricted to MS neurons with a rhythmicity index >0.1 from both recording configurations (n = 43 tetrode, n = 46 glass electrode). We calculated the rate change score and burst duration for all neurons (n = 89) and fed them into an unsupervised hierarchical clustering algorithm to explore the characteristics of the major clusters (Figure 1B).
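A sketch of such an unsupervised step using SciPy; the toy feature values, the choice of Ward linkage, the z-scoring, and the two-cluster cut are illustrative assumptions, not the authors' stated settings (the study separated four groups).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# One row per neuron: [rate change score, burst duration (ms)] -- toy values.
features = np.array([
    [0.05, 38.0], [0.02, 41.0], [-0.08, 35.0],   # short bursts, stable rate
    [0.60, 95.0], [0.55, 110.0], [0.48, 88.0],   # long bursts, rate increase
])
X = zscore(features, axis=0)           # put both features on a comparable scale

Z = linkage(X, method="ward")          # agglomerative hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)                          # e.g., [1 1 1 2 2 2]
```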
The mean firing-phase preference of septal neurons with respect to ongoing theta oscillations recorded in dorsal CA1 provides information about possible temporal specializations in their activity and influence. We tested whether Teevra and Komal neurons differed in mean firing-phase preference relative to CA1 theta, a parameter not used in the clustering. The pooled firing-phase preferences of Teevra and Komal neurons were significantly different (Figures 1D and S1; p < 0.002, Watson's U² test, difference of circular means = 160°), with most Teevra neurons firing preferentially around the trough, while most Komal neurons preferred the peak of dorsal CA1 stratum pyramidale theta LFP. Note that within both groups there are individual neurons with diverse firing-phase preferences. For Teevra cells, the trough phase preference correlated with a higher rhythmicity index (angular-linear correlation coefficient: 0.49, p = 0.003, n = 48; Figure 1D).

(Figure 1D legend: Preferential theta phase of firing of Teevra and Komal cells with rhythmicity index as the radius (RUN periods). Most Komal cells (purple) fire preferentially at the peak of CA1 pyramidal cell layer theta oscillations, whereas most Teevra cells (green) fire phase coupled to the trough with increasing rhythmicity index. See also Movie S1 and Figure S2.)
Rhythmic Activity of Teevra Cells Is Coincident with Heightened CA1 Excitation
Having identified distinct groups of MS neurons based on activity dynamics, we selected the largest group, the Teevra cells, which had the highest rhythmicity index (median: 0.3, IQR: 0.18-0.55, n = 48), for testing the hypothesis that these neurons represent a distinct population in the septo-cortical circuit. The rhythmicity indices of the other groups were: group 1 (median: 0.19, IQR: 0.1-0.3, n = 4), group 3 (median: 0.19, IQR: 0.15-0.32, n = 23), and group 4 (median: 0.19, IQR: 0.12-0.29, n = 14) (p = 0.039, 4 groups, Kruskal-Wallis test). The identification of Teevra cells was achieved by single-unit extracellular recording for cell selection based on firing patterns, with subsequent juxtacellular labeling (n = 13; Table 1) to aid their visualization and anatomical analysis (Figure 2A). Hippocampal circuit activity is known to be influenced by the behavioral state of the animal, a feature thought to reflect the particular stage of information processing. To assess the contribution of Teevra cells to the hippocampal circuit, we first evaluated the behavioral-state-dependent change in rhythmicity index from REST to RUN periods. We found that Teevra cells maintained rhythmic burst discharge during both REST and RUN periods (Figures 2B and 2C), but their rhythmicity increased during RUN (median rhythmicity index REST: 0.07, IQR: 0.04-0.1; median rhythmicity index RUN: 0.3, IQR: 0.18-0.55; Wilcoxon paired-sample test, p = 1.6 × 10⁻⁹). Consistent with the increase in theta frequency during running (Sławińska and Kasicki, 1998), the oscillatory frequency (OF) of Teevra cells also increased during RUN (median oscillatory frequency REST: 6.4 Hz, IQR: 6.04-6.5 Hz; median oscillatory frequency RUN: 7 Hz, IQR: 7-7.34 Hz; Wilcoxon paired-sample test, p = 4.9 × 10⁻⁷). Examples of juxtacellularly labeled Teevra neurons AJ42m and AJ45h (Figures 2B and 2C) show such an increase in rhythmicity index and in the oscillatory frequency of firing. The bursts of Teevra cells often started on the descending phase of CA1 theta, in line with the reported increase in CA3 pyramidal cell firing (Mizuseki et al., 2009).
Next, we explored the activity patterns of Teevra cells during bouts of REST periods when the hippocampal CA1 field potential was dominated by large amplitude irregular activity (LIA). Interestingly, the firing of Teevra cells coupled to the falling transitions of the LIA, following the variable duration of the slow cycles (Figure 2E) and mirroring the theta trough coupling during RUN. Thus, during both REST and RUN, Teevra cells become active at times of LFP troughs, coincident with heightened excitation in CA1 (Mizuseki et al., 2009). Their bursting follows the frequency at which LFP troughs occur, both during regular theta oscillations and during the irregular slower waves at 2-6 Hz, which are accompanied by high-frequency bursts of Teevra cells at the negative phase of the wave.
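A sketch of the alignment underlying this analysis, assuming falling zero crossings of the slow LFP separated by more than 200 ms (the criterion given in the Figure 2E legend); the function names and the window length are ours.

```python
import numpy as np

def falling_zero_crossings(lfp, fs, min_interval_s=0.2):
    """Times (s) of positive-to-negative LFP zero crossings, keeping only
    crossings separated by more than min_interval_s (cf. >200 ms cycles)."""
    idx = np.flatnonzero((lfp[:-1] > 0) & (lfp[1:] <= 0))
    kept, last = [], -np.inf
    for i in idx:
        t = i / fs
        if t - last > min_interval_s:
            kept.append(t)
            last = t
    return np.asarray(kept)

def align_spikes(spike_times_s, event_times_s, window_s=0.4):
    """Per-event spike times relative to each crossing (time 0), for building
    the raster and normalized firing histogram."""
    spikes = np.asarray(spike_times_s)
    return [spikes[np.abs(spikes - t0) <= window_s] - t0 for t0 in event_times_s]
```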
In rats, MS GABAergic cells could be active or inhibited during sharp wave ripple (SWR) oscillations (Borhegyi et al., 2004; Dragoi et al., 1999; Viney et al., 2013). Under our behavioral paradigm of head-fixed mice, sharp wave ripples (130-230 Hz) were infrequent but could be observed for some Teevra cells (n = 4), which did not change their firing significantly during ripple events (Figure 2F).

Teevra Cells Are GABAergic and Immunopositive for Parvalbumin and SATB1 but Not for mGluR1a in the Somatic Membrane

Teevra cells comprised a distinct subpopulation of MS neurons based on physiological parameters. Next, we tested whether they represent a distinct cell type according to molecular markers and transmitter phenotype. Labeled Teevra cells were immunopositive for the calcium binding protein parvalbumin (PV, Figure 3A) (n = 9/10 tested) and the transcription factor SATB1 (Figure 3B, n = 10/10 tested), but lacked detectable immunoreactivity for mGluR1a in the somatic plasma membrane (n = 10/11 tested). A weak cytoplasmic signal may represent a pool of receptor in the endoplasmic reticulum. Approximately half of the PV+ neurons were immunopositive for SATB1 in the entire MS complex (unpublished data), and MS neurons showed all four possible combinations of immunoreactivity for PV and mGluR1a, including double-immunonegative neurons. This indicates a differentiation among various GABAergic MS neurons to be defined in future studies with respect to their projections. Labeled Teevra cells were also tested for the molecular phenotype of their boutons; all tested cells (n = 4/4; Table 1) were immunopositive for VGAT but not VGlut2 (Figure 3C), confirming that they were GABAergic and not glutamatergic neurons. Teevra cells emitted axonal collaterals and boutons in the MS (n = 5/5 tested). These collaterals targeted mainly PV+ somata or dendrites (n = 10/11 tested targets, AJ42m axon, Figure 3D; Table S1). All of the tested neurons innervated on their soma in the MS were PV+ and SATB1+ (n = 5/5). Varicosities of the axon of Teevra cell AJ42m were tested for the presence of synapses using labeling for the postsynaptic junction scaffolding protein gephyrin (Lardi-Studler et al., 2007), and the majority of boutons formed GABAergic synapses (n = 42/47 tested varicosities, Figure 3D). The identity of the PV+ MS neurons innervated by Teevra cells remains to be determined. They may be other Teevra cells synchronized by local interconnections (Leão et al., 2015) or GABAergic MS neurons projecting to other cortical areas or to different types of interneuron, as suggested by Borhegyi et al. (2004).
GABAergic Teevra Cells Preferentially Innervate the CA3 and Target PV+ Axo-Axonic Cells as well as CCK+ Interneurons

MS GABAergic neurons innervate all hippocampal regions and many extra-hippocampal cortices (Freund and Antal, 1988; Unal et al., 2015), though it is not known whether single neurons innervate one or multiple cortical areas. We are unaware of any target area visualization of single GABAergic septal neurons with known activity patterns in the literature. Accordingly, to explain the basis of the influence of Teevra cells on cortical activity, we tested the distribution of their axonal terminals. All labeled Teevra cells projected to either the left or the right hippocampus. Among the labeled Teevra cells whose axon could be followed to branches in the gray matter (n = 11/13), all innervated the CA3 region of the hippocampus preferentially, and no branches or varicosities were observed in the DG or CA2 (Figures 3E and 4; Table 1). The axons of Teevra cells traveled to the hippocampus either via the dorsal fornix (n = 2) or via the fimbria (n = 11). In CA3, Teevra cells innervated interneurons (Figure 4). Of a total of 472 sampled boutons from 12 coronal hippocampal sections (n = 3 cells, section thickness 70-80 μm), 91.5% of boutons were in CA3 (n = 432), and only 8.5% of boutons were observed in CA1 (n = 40). The main axon terminated in the hippocampus, and no branches were observed in the retrosplenial cortex, the subiculum, the pre- and para-subiculum, or the entorhinal cortex. This preferential termination in CA3 was accompanied by a septo-temporal specialization of axonal branching, with collaterals innervating only a restricted septo-temporal domain of CA3. For single Teevra cells, the majority of axonal branches and boutons were observed through only 5-8 coronal sections (section thickness: 70-80 μm). Although we cannot exclude the possibility of incomplete labeling, most terminal axon collaterals ended in boutons, indicating a restricted area of termination. The most sensitive axon visualization method, horseradish peroxidase (HRP) reaction following freeze-thaw permeabilization and diaminobenzidine (DAB) reaction end-product intensification with osmium (see STAR Methods), was applied to the full course of labeled axons to detect potential collateral branches. It is therefore unlikely that we missed substantial projections to CA1. The intrahippocampal spatial positions of the collaterals of the least rhythmic labeled Teevra cell, in temporal CA3 (MS11b), did not overlap at all with those of the most rhythmic neuron, in septal CA3 (AJ42m), showing the change in rhythmicity together with spatial progression along the septo-temporal extent of the hippocampal formation.
To test the synaptic targets of Teevra cells in CA3, we analyzed two labeled Teevra cells (AJ42m and MS90g). The targets of the few boutons encountered in CA1 were not tested. The axon of AJ42m was the most strongly labeled, as its collateral branches could be followed to terminal boutons throughout the axonal arbor in CA3, both by fluorescence microscopy and following HRP reaction. We determined the molecular characteristics of 22 cellular target profiles (Table S1). In CA3, 18 out of 22 tested targets were PV+: 11 were dendrites and 7 somata. The PV+ somatic profiles were all SATB1-immunonegative neurons (Figures 5A and S2). Another Teevra neuron, MS90g, was more weakly labeled, and, although we followed the axon and branches both by fluorescence microscopy and HRP reactions, the terminal boutons were not well resolved in fluorescence microscopy. Triple immunoreactions for PV, SATB1, and CCK in immunofluorescence, followed by HRP reaction for bouton visualization, helped to identify 6 innervated somata by light microscopy. All targeted neurons were PV+ and SATB1-immunonegative (Table S1). This combination is a strong indicator of axo-axonic cells in the CA1 and CA3 areas (Viney et al., 2013).

(Figure 2 legend, continued: (D) Preferential coupling of spikes to CA1 theta troughs during RUN. (E) Rhythmic burst firing of a Teevra neuron (AJ43n) during REST, a period dominated by large amplitude irregular activity (LIA) in CA1 stratum pyramidale (top). Consecutive zero crossings at falling transitions of the LFP (red lines) are marked. Spike raster plot (middle) and normalized spike firing histogram (bottom) show correlation of spikes (gray dots) with the timing of slow LFP oscillation cycles. Consecutive LFP zero crossings (>200 ms apart) are ordered according to their duration and marked by red lines; spikes identified within two consecutive cycles are colored black; time 0 is the zero crossing of LFP falling transitions. Note the additional burst at ~6 Hz between the time points marked by red lines. (F) During ripple oscillations (pink, top), a Teevra neuron (MS11b) does not change its firing probability. Spike raster plot (middle) and firing probability (bottom) show sustained activity during ripple epochs (pink bars in the histogram) compared to ±0.4 s (gray) surrounding the peak of sharp wave ripple events. Raster plots were aligned to the peak sharp wave ripple power, and pink lines delineate the beginning and end of sharp wave ripples; spikes within the sharp wave ripple period are colored black. Time 0 is the peak power of each ripple oscillatory event.)
To demonstrate the remarkable cell-type selectivity of Teevra cells, we estimated the proportion of PV+ neurons that could be classified as axo-axonic cells in the CA3 region of the mouse hippocampus based on their molecular marker combination (30% of PV+ cells, Figure S3). This allowed us to estimate the probability of the observed number of axo-axonic cells as synaptic targets if all PV+ cells were innervated uniformly. The probability for AJ42m (7 axo-axonic cell targets found sequentially) is p = 0.3⁷ ≈ 2.2 × 10⁻⁴, and for MS90g (6 axo-axonic cell targets found sequentially) is p = 0.3⁶ ≈ 7.3 × 10⁻⁴. Therefore, we conclude that Teevra cells selectively target axo-axonic cells among PV+ neurons of the CA3 hippocampal area, but this does not exclude other interneuron types, as we also found some CCK+ interneurons (4 somata, AJ42m, Figure 5B) as target cells. These 4 CCK+ targeted profiles were confirmed to be PV−, SATB1−, somatostatin− (SST−), and calbindin− (CB−). This combination of molecular markers is indicative of CCK basket cells (Lasztóczi et al., 2011) in CA3. Interestingly, both of these neuronal populations fire spikes at the peak or ascending phase of dorsal CA1 pyramidal layer theta LFP oscillations (Somogyi et al., 2013), out of phase with the trough firing of MS Teevra cells.
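The quoted probabilities follow directly from the 30% estimate, treating each sequentially recovered target as an independent draw:

```python
# If all PV+ cells were innervated uniformly, each sequentially recovered
# target would be axo-axonic with probability ~0.3 (Figure S3 estimate).
p_aac = 0.30
print(p_aac ** 7)  # ~2.19e-04 -> AJ42m, 7 axo-axonic targets in a row
print(p_aac ** 6)  # ~7.29e-04 -> MS90g, 6 axo-axonic targets in a row
```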
Electron microscopic examination of the main axons of Teevra cells in the fimbria adjacent to the CA3 area showed that they are covered by myelin sheaths (AJ45h, Figure 5C), which are 2 to 3 times thicker than those of nearby axons of CA3 pyramidal cells. The main axonal branches in CA3 are also myelinated, and the terminal collaterals are unmyelinated, 0.1 to 0.3 μm thick, forming boutons in clusters. We tested whether synaptic junctions could be predicted from axonal swellings located next to a target cell.
All of the boutons in a tested area (n = 11 boutons, AJ45h) formed type II synaptic junctions (Figure 5C) with two nearby interneuron somata in CA3b stratum pyramidale (n = 5 and 6 boutons per soma, respectively). The boutons of AJ42m tested by electron microscopy (n = 12) were distributed in CA3c strata pyramidale, lucidum, and radiatum. They formed synapses with an interneuron soma (n = 5 synapses, Figure 5C), 4 interneuron dendrites identified by receiving additional type I synapses (Figure 5C), and 3 unidentified dendritic shafts, which received no additional synapse in the range of sections that were followed.
Rhythmicity of Teevra Cells along the Septo-temporal Axis of the Hippocampus

Theta oscillations are traveling waves along the septo-temporal and medio-lateral extent of the hippocampal formation, and the power, but not the frequency, of theta oscillations decreases along the longitudinal axis (Lubenov and Siapas, 2009; Patel et al., 2012; Long et al., 2015). The location of the somata of Teevra cells relative to the midline of the MS predicted the axonal distribution in the left or right hippocampus, respectively, as their axons did not cross the midline (Table 1). Teevra cells have multiple thick, long, and non-spiny dendrites originating from the soma. The dendrites branch infrequently in the septum and may cross the septal midline (Figure 6A). As Teevra cells innervated distinct and restricted domains of CA3 along the septo-temporal axis of the hippocampus, we asked whether the rhythmicity index, oscillatory frequency, and firing-phase preference of septo-CA3-projecting Teevra neurons showed any correlation with the innervated hippocampal area. Using 3D Euclidean distance along the hippocampal formation (see STAR Methods), we observed that the rhythmicity of the firing of septal neurons decreases the more caudal the termination in the hippocampus (linear correlation coefficient: r = −0.96, p = 0.0001, n = 8 neurons, Figure 6B), but the oscillatory burst firing at theta frequency does not change (p = 0.27, n = 8 neurons, Figure 6B). This shows that the depth of modulation of the rhythmic GABAergic input to CA3 from the MS decreases the more caudo-ventral the termination in CA3. We also assessed the mean firing-phase distribution of neurons across the septo-temporal axis and found no correlation between these two variables (angular-linear correlation coefficient: r = 0.38, p = 0.57, n = 8 neurons).
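A sketch of the two correlation measures used here; we assume the "angular-linear" statistic is Mardia's circular-linear coefficient, and the numbers below are hypothetical illustrations, not the recorded data.

```python
import numpy as np
from scipy.stats import pearsonr

def circ_linear_corr(x, theta_rad):
    """Mardia's circular-linear correlation between a linear variable x and
    angles theta (radians); assumed to match the 'angular-linear' statistic."""
    rxc = pearsonr(x, np.cos(theta_rad))[0]
    rxs = pearsonr(x, np.sin(theta_rad))[0]
    rcs = pearsonr(np.cos(theta_rad), np.sin(theta_rad))[0]
    return np.sqrt((rxc**2 + rxs**2 - 2 * rxc * rxs * rcs) / (1 - rcs**2))

# Hypothetical illustration: rhythmicity falls with increasing septo-temporal
# distance of the axonal termination.
distance_mm = np.array([0.5, 1.1, 1.8, 2.4, 3.0, 3.6, 4.1, 4.7])
ri = np.array([0.62, 0.55, 0.47, 0.40, 0.33, 0.27, 0.21, 0.15])
r, p = pearsonr(distance_mm, ri)
print(r, p)  # strongly negative r, cf. r = -0.96 reported above
```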
DISCUSSION
We have demonstrated that a population of septal GABAergic neurons selectively targets the CA3 region, which predicts that other regions of the hippocampus and related cortical areas also receive region- and target-cell-type-specific subcortical inputs. This organizational principle endows distinct cell types in the MS and diagonal band nuclei with a flexible role in coordinating functionally related cortical areas, with each parallel pathway adapted to the specific role and requirements of its target area. Teevra cells formed the largest subpopulation of MS rhythmic neurons, which have been hypothesized to be the coordinators of hippocampal theta oscillations (Alonso et al., 1987; Gaztelu and Buño, 1982; Gogolák et al., 1968; Petsche et al., 1962; Stumpf et al., 1962). Teevra neurons have a short burst duration, do not significantly change their firing rate from REST to RUN, and fire action potentials at dorsal hippocampal CA1 theta troughs recorded in stratum pyramidale, coincident with the maximal firing of CA1 pyramidal cells (Mizuseki et al., 2009; Csicsvari et al., 1999).

(Figure 5 legend: Postsynaptic Targets of Teevra Neurons Are PV+ or CCK+ Interneurons. (A) Left: axonal terminals (green) of Teevra cell AJ42m innervate PV+ (magenta, asterisks) and SATB1-negative (cyan) cells in a basket-like formation, also following their dendrites (merged channels). Nearby PV+ and SATB1+ cells (double arrows) and a CCK+ cell (single arrow) were not innervated. CCK and PV were sequentially reacted and imaged. Right: the two innervated cells from the left panel, channel by channel. (B) Axonal terminals (green, arrows) of Teevra cell AJ42m innervate two interneurons (asterisks), which are CCK+ (red) and SATB1− and PV− (magenta, sequential reactions), in CA3 str. radiatum. Bottom left of image: a non-targeted SATB1+ PV-immunonegative neuron.)
We focused on neurons showing a rhythmicity index of more than 0.1, and all such cells labeled and tested were GABAergic, but this does not exclude that less rhythmic GABAergic neurons also exist in the MS. Having initially identified Teevra cells by their activity patterns, we used subsequent juxtacellular labeling to reveal their axonal termination area and synaptic target neurons. The most remarkable feature of Teevra cells is their selective termination in restricted spatial domains along the septo-temporal axis of CA3, largely avoiding other hippocampal areas. These findings reveal an unexpected sophistication in the spatiotemporal organization of the septo-hippocampal projection. Based on our analysis of the synaptic targets of Teevra cells, and assuming that the high-frequency bursts fired at the trough of CA1 theta lead to inhibition, we propose that Teevra cells innervate those CA3 interneurons, such as axo-axonic cells and CCK basket cells (Lasztóczi et al., 2011; Somogyi et al., 2013), which preferentially fire around the peak of theta. This would lead to the disinhibition of CA3 pyramidal cell assemblies (Tóth et al., 1997), driving pyramidal cell firing in CA1 at the trough of theta in the pyramidal layer. Because axo-axonic cells do not innervate other interneurons, the coincidence of Teevra cell firing and the highest discharge probability of pyramidal cells also supports a disinhibitory role (Figure 7). Consistent with the proposed disinhibition of CA3 pyramidal cells by Teevra cells, pyramidal cells fire at the highest rate during the trough of CA1 theta oscillations in anesthetized rats (Lasztóczi et al., 2011). In this temporally coordinated circuit, during retrieval of stored contextual associations around the theta trough (Hasselmo et al., 2002), disinhibition provided by Teevra cells may enable the CA3 pyramidal cell output to contribute to temporal coding in the CA1 ensemble (Fernández-Ruiz et al., 2017; Middleton and McHugh, 2016). Such a proposed role remains to be tested directly.
Synaptic Targets of Theta Synchronized Teevra Cells in CA3
Following the discovery that septal GABAergic neurons selectively innervate hippocampal interneurons (Freund and Antal, 1988), it was hypothesized that PV+ septal neurons firing at the peak of CA1 theta inhibit both trough-firing MS neurons and GABAergic hippocampal neurons that innervate pyramidal cell dendrites in CA1; in turn, theta-trough-preferring MS neurons innervate both the peak-firing MS neurons and "peri-somatic" terminating inhibitory cells in CA1 (Borhegyi et al., 2004). Indeed, some trough-preferring Teevra cells innervate other PV+ MS neurons, but the firing phase of these target cells is unknown. Moreover, none of the theta-trough-firing rhythmic Teevra cells innervated CA1 significantly; instead, they targeted PV+ axo-axonic cells and CCK+ cells in CA3. The binary phase-preference hypothesis is also complicated by the fact that both hippocampal GABAergic cells (Klausberger and Somogyi, 2008) and theta-rhythmically firing GABAergic MS neurons fire at all phases of CA1 theta. The key missing information for explaining this diversity has been the axonal area and target cell preference of any septal neuron with known theta-phase firing. The Teevra cells reported here show one example of a sophisticated and highly selective septo-hippocampal connection. Information on the firing-phase preference of identified CA3 interneurons is sparse in awake animals. In anesthetized rats, identified axo-axonic cells in CA3 fired bursts of spikes at the peak of dorsal CA1 theta oscillations (Varga et al., 2014; Viney et al., 2013). Axo-axonic cells innervate exclusively the axon initial segment of pyramidal neurons, which is particularly well developed in CA3, with up to 150 synapses on a single axon initial segment (Kosaka, 1980). Their action is mediated by GABA-A receptors (Buhl et al., 1994), with fast (~1.7 ms) inhibitory postsynaptic currents in their synaptic targets (Ganter et al., 2004), at the site where the action potential is generated. Interneurons expressing CCK also fire spikes at around the peak of CA1 theta oscillations (Lasztóczi et al., 2011). Thus, the axo-axonic and CCK target cells postsynaptic to Teevra neurons fire preferentially at the theta peak, counter-phased with the rhythmic input from Teevra cells at the trough of theta. This counter-phase firing has been suggested earlier for CA1 (Somogyi et al., 2013) and might indeed be a biological mechanism of theta-phase modulation of long-range synaptic partners. However, theta-peak-firing MS neurons with long burst duration had no terminals in CA1 or CA3; instead, their main axon projected beyond the hippocampus, likely innervating extrahippocampal structures. Our results do not exclude the innervation of various other interneuron types by Teevra cells in CA3, such as PV+ basket cells, which fire phase locked to the trough of CA1 theta (Tukker et al., 2013), but coincident firing with and synaptic input from Teevra cells is unlikely. It is possible that other MS GABAergic neuronal types with theta-phase preferences different from Teevra cells also innervate CA3. Besides a dominant role of septal GABAergic neurons in determining interneuronal firing-phase preference, interactions with other rhythmic synaptic inputs, e.g., from the raphe nuclei, the supramammillary nucleus, and from local interneurons and pyramidal cells, may also contribute to the determination of the theta-phase firing preference of interneurons.
The effects of a potential synaptic influence of CCK-expressing interneurons, which are innervated by Teevra cells, on axo-axonic cells (similar to PV+ basket cells innervated by CCK+ interneurons in CA1; Karson et al., 2009) remain to be tested.
Synaptic inputs to CA1 are temporally organized within theta and gamma timescales. The CA3 pyramidal cell input at the trough and descending phase of CA1 pyramidal layer theta coincides with slow gamma (30-80 Hz) oscillations, while medium gamma (60-120 Hz) is coupled to the peak of pyramidale theta oscillations (Fernández-Ruiz et al., 2017; Lasztóczi and Klausberger, 2014; Schomburg et al., 2014). It was previously suggested that MS neuronal firing at gamma intraburst frequencies (30-120 Hz) might contribute to these oscillations (Borhegyi et al., 2004; Viney et al., 2013). However, two oscillations occurring at the same frequency might not lead to amplification unless they are phase coupled. Unlike under anesthesia, MS Teevra neurons in awake animals, on average, had higher intraburst frequencies than medium gamma (up to 300 Hz during RUN). The phase coupling of Teevra cell spikes to various gamma oscillations, especially putative CA3-coordinated slow gamma, remains to be investigated with multielectrode arrays to reveal current sources.

(Figure 7 legend: (A) From recorded data, referenced to dorsal CA1 pyramidal cell layer LFP. On average, Teevra cells (n = 12, current study) discharge maximally at the trough of CA1 theta oscillations, inhibiting AACs (data from Viney et al., 2013; non-anesthetized rat, CA1 and CA2 AACs averaged) and putative CCK basket cells (Lasztóczi et al., 2011; anesthetized rat, CA3), leading to disinhibition of CA3 pyramidal cells, which provide the largest excitatory input to CA1 pyramidal cells (average firing probabilities from Mizuseki et al., 2009). The cell-type-specific temporal modulation of firing rates during theta cycles contributes to the implementation of oscillatory increases and decreases of excitability in pyramidal cell networks via subcellular-compartment-specific disinhibition.)
Teevra Cell Firing Is Modulated over Multiple Timescales: Within Theta Cycles and during Rest and Running
A fascinating feature of the firing of Teevra cells is that their firing rate did not change significantly from rest to running, and could even decrease during RUN. This is in contrast to the firing of most CA1 interneurons, which increase their firing rate from rest to running periods (Czurkó et al., 2011; Varga et al., 2012), possibly due to a combination of decreased inhibition from MS GABAergic cells, increased excitatory input from CA3 pyramidal cells, and/or long-range inputs (Fuhrmann et al., 2015). Although the firing rate of Teevra cells may not differ between REST and RUN periods, the temporal dynamics of their firing pattern changed, with a clear increase in the rhythmicity index and oscillatory frequency during running. If, on average, each action potential provides a similar amount of GABA released at the synaptic terminals, why do Teevra cells maintain a high level of GABA release during rest, when their targets are much less theta rhythmic? We have shown that Teevra cells continue to fire at the negative deflections of slow irregular activity, increasing their firing around the hippocampal LFP troughs irrespective of the frequency at which these occur, mirroring the heightened excitation in CA1.
Sharp wave ripple episodes are more frequent during rest and consummatory behavior, representing the highest population activity in the hippocampus. During sharp wave ripples, some MS GABAergic neurons are strongly active (Borhegyi et al., 2004; Viney et al., 2013), while others are inhibited (Borhegyi et al., 2004; Dragoi et al., 1999) or, like Teevra cells, do not change their firing. MS GABAergic cells are innervated by hippocampo-septal GABAergic projection neurons (Tóth et al., 1993), which are strongly activated during sharp wave ripple events (Jinno et al., 2007; Katona et al., 2017). These MS cells might correspond to the sharp-wave-ripple-inhibited rhythmic population (Dragoi et al., 1999). Teevra cells, on the other hand, did not change their firing rate during sharp wave ripples, suggesting that they are not targets of hippocampo-septal projection neurons. Thus, during REST periods, Teevra cells differentiated between sharp wave ripples and the irregular-duration events of increased hippocampal excitability, during which their firing was strongly coupled to hippocampal LFP troughs.
Spatial Organization of Teevra Cells and Traveling Theta Waves
The power of theta oscillations decreases along the longitudinal hippocampal axis, and the phase of theta shifts by 180° across the septo-temporal extent of the hippocampus (Lubenov and Siapas, 2009; Patel et al., 2012). Strikingly, we have found a strong correlation between the rhythmicity of Teevra neurons and the position of their axons along the longitudinal axis. Highly rhythmic septal Teevra neurons innervate interneurons in the septal pole, and less rhythmic neurons innervate interneurons in the temporal pole of the hippocampus, while the oscillatory frequency at which the MS input is organized does not change along this axis. This parallels the decrease in theta power and the reduction in theta rhythmic neurons in ventral compared to dorsal CA3 (Royer et al., 2010).
Multiple mechanisms have been suggested for the generation of theta waves. One suggestion was the existence of a chain of oscillators residing within the septal area, with theta waves being a reflection of the phase-delayed septal outputs (Patel et al., 2012). The septo-temporally restricted axons demonstrated here could support mechanisms of locally variable theta oscillations, as suggested by Kang et al. (2015). However, we have found no correlation between the firing-phase preference and the position of the axon.
The axons of septal GABAergic neurons are heavily myelinated (Borhegyi et al., 2004), which we confirmed here, and exhibit high conduction velocities (0.5-5 m/s; Jones et al., 1999). Thus, conduction delay is unlikely to contribute significantly to a delay of transmission, as conduction is an order of magnitude faster than the propagation velocity of theta waves (0.16 m/s; Patel et al., 2012). However, propagation at a similar velocity to theta waves was reported in hippocampal CA3 slices in vitro while glutamatergic transmission was blocked (Miles et al., 1988); thus, the hippocampal system is capable of generating such a wave through inhibitory connections. In vivo, the uniquely positioned Teevra neurons might provide a temporally coherent, synchronized GABAergic input, which rhythmically inhibits CA3 interneurons along the hippocampal long axis, thus coordinating pyramidal cell excitability.
Outlook
We have defined a novel septo-hippocampal GABAergic cell type using congruent neuronal features including physiological parameters, molecular expression profiles, and axonal termination area. The results demonstrate the cellular diversity in the MS and provide a spatiotemporal framework for understanding the long-range, parallel subcortical innervation, which coordinates network oscillations in the cortex via local inhibitory neurons. We hypothesize that such cortical region-specific GABAergic innervation by physiologically distinct septal neuronal types supports the coordination of network oscillations.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
ACKNOWLEDGMENTS
We would like to thank Linda Katona for training and critical advice during the course of the project. We thank Rob Stewart and Linda Katona for providing scripts for rhythmicity and sharp wave ripple analysis. We thank Thomas Klausberger and Balint Lasztóczi for training and advice on analyzing the slow irregular activity. We thank Vitor Lopes dos Santos, Balazs Hangya, Stephanie Trouche, and John Tukker for comments on an earlier version of the manuscript. We thank Michael Howarth for 3D reconstruction of AJ48j and Amar Sharma for somato-dendritic reconstructions. We thank Kristina Wagner, Ben Micklem, and Katja Hartwich for excellent technical assistance and Joszef Somogyi for advice on confocal microscopy.

Tóth, K., Freund, T.F., and Miles, R. (1997). Disinhibition of rat hippocampal pyramidal cells by GABAergic afferents from the septum. J. Physiol. 500, 463-474.
CONTACT FOR REAGENT AND RESOURCE SHARING
Further information and requests for reagents and resources may be directed to the Lead Contact, Peter Somogyi (peter.somogyi@pharm.ox.ac.uk).
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Extracellular electrophysiological recordings were performed in adult male C57Bl6/J mice using either multichannel silicon probes (n = 7 mice) or glass electrodes (n = 24 mice). At the time of surgery, mice ranged between 3 and 6 months of age. Mice were housed with littermates until surgical implantation of the head-plate, after which they were housed individually and maintained on a 12 hr light/dark schedule with lights off at 7:00 pm. All behavioral training and recording occurred in the light phase. All procedures involving experimental animals were carried out under approved personal and project licenses in accordance with the Animals (Scientific Procedures) Act, 1986 (UK), and associated regulations.
Surgery
For surgical implantation, mice were deeply anesthetized with isoflurane (induction chamber 3%-4% v/v with airflow, reduced to 1%-2% v/v after the animal was positioned in the stereotaxic apparatus) and given a subcutaneous injection of buprenorphine (Vetergesic, 0.08 mg/kg). The skull was exposed under aseptic conditions, and three sites were marked: the bregma, the MS craniotomy (antero-posterior (AP): +0.86 mm; medio-lateral (ML): 0 mm; dorso-ventral (DV): 3.5 mm; 0° angle), and the hippocampal craniotomy (AP: −2.4 mm; ML: 1.7 mm; DV: 1.2 mm; 10° postero-anterior angle). The head-plate (either a 0.7 g or 1.1 g version, custom made at the Department of Physics, Oxford University) used for head-fixation was secured to the skull using dental cement and three small M1 screws. Two such screws were fixed to the skull above the cerebellum and served as the ground and electrical reference for the recordings.
Behavioral procedures
Upon recovery from surgery (typically 3 to 4 days), mice were trained to run on an air-flow suspended styrofoam ball (for multichannel silicon probe recordings) or a spherical treadmill (for single-cell recordings). After recovery, food restriction (to 90% of initial pre-surgery body weight) was used to motivate running, and mice received small drops of sucrose solution (20%) as reward upon reaching the end of the linear maze on the styrofoam ball (jetball, PhenoSys). Mice that performed anticipatory licking and whose tail was balanced, as if running on a linear track, were considered trained. This training was a crucial behavioral control for each animal.
Recordings and single unit identification
Recording sessions lasted between 30 and 60 min for multi-channel silicon probe recordings, and up to 3 hr for single-cell recordings while searching for neurons with a particular firing pattern. The activity of MS cells was recorded during multiple running (RUN) and resting (REST) periods. LFP theta oscillations were recorded in the pyramidal cell layer or in stratum oriens of CA1 and referenced to a screw in contact with the dura above the cerebellum, with 0° set as the trough. We defined this location in CA1 by positive-going ''sharp waves'' in the LFP during REST, recognized by the co-occurrence of 130-230 Hz filtered ''ripple'' oscillations; sharp waves appear as a negative potential in stratum radiatum. Wide-band (0.1-6000 Hz; 20 kHz sampling rate) recordings were performed using a 2-shank acute silicon probe (150 µm intershank distance; 2 tetrodes per shank; 25 µm spacing between contacts within a tetrode; Neuronexus) connected to an RA16-AC preamplifier (Tucker-Davis). Recordings were then digitally high-pass filtered (0.8-5 kHz), and neuronal spikes were detected using a threshold-crossing-based algorithm. Detected spikes were automatically sorted using the algorithm implemented in KlustaKwik (Kadir et al., 2014), followed by manual adjustment of the clusters to obtain well-isolated single units, based on cross-correlations, spike waveforms, and refractory periods. Multiunit or noise clusters, and those with fewer than 300 spikes, were discarded from the analysis.
Juxtacellular labeling
Extracellular single-cell recording and juxtacellular labeling with neurobiotin reveal the identity of the labeled cells in conjunction with their in vivo activity patterns, target areas, and synaptic targets. Using multi-unit recordings in awake head-fixed mice, we were able to identify the stereotyped activity patterns of Teevra neurons. Spikes of putative Teevra neurons were recorded during RUN and REST periods (Movie S1) using an extracellular glass electrode filled with 3% neurobiotin in 0.5 M NaCl, and the spike output was converted into an audio signal. Subsequently, if the neuron was deemed to be a Teevra cell, based on the ''sound of the burst'' corresponding to a short burst duration, and if the firing rate did not appear to change from REST to RUN, an attempt was made to label the cell with neurobiotin using the juxtacellular method (Pinault, 1996). We found four classes of MS rhythmic neurons. In addition to the most numerous Teevra cells, the second group, which we named Komal (''soft sound of burst''), could reliably be differentiated from Teevra cells based on their long burst duration and increase in firing rate during RUN. The potential success of juxtacellular labeling was predicted by analyzing cellular health after labeling in vivo, comparing the spiking of the cells before and after the firing modulation attempt. Successful labeling attempts were defined as those in which the cell was modulated for more than 30 s and action potentials could still be seen and heard after the modulation attempt. In such cases, neurobiotin was left to be transported in the neurons for 4 to 8 hr to allow for potential terminal bouton labeling, which was assessed after processing the brain.
Tissue processing and immunohistochemistry
Mice were deeply anesthetized with sodium pentobarbital (50 mg/kg, i.p.) and transcardially perfused with saline followed by a fixative solution (4% paraformaldehyde, 15% v/v saturated picric acid, 0.05% glutaraldehyde in 0.1 M PB at pH 7.4). Some brains were postfixed overnight in glutaraldehyde-free fixative. After washing in 0.1 M PB, coronal sections (70-80 µm) were cut using a vibratome (Leica VT 1000S, Leica Microsystems) and stored in 0.1 M PB with 0.05% sodium azide at 4 °C for further processing. The neurobiotin-labeled processes were visualized in selected sets of sections using streptavidin-conjugated fluorophores, after permeabilization in Tris-buffered saline (TBS) with 0.3% Triton X-100 (TBS-Tx) or by rapid 2× freeze-thaw (FT) over liquid nitrogen (cryoprotected in 20% sucrose in 0.1 M PB). To visualize proteins within selected labeled cellular domains or in their synaptic targets, the sections were first blocked for 1 hr at room temperature (RT) in 20% normal horse serum (NHS) and then incubated in primary antibody solution containing 1% NHS for 2 to 4 days at 4 °C. To test the specificity of the method, negative controls lacking the primary antibodies were processed in parallel. Primary antibodies (Key Resources Table) were detected with fluorophore-conjugated secondary antibodies for wide-field epifluorescence and confocal microscopy (Ferraguti et al., 2004; Lasztóczi et al., 2011). After primary antibody incubation, sections were washed three times for 10 min and transferred to a secondary antibody solution containing 1% NHS for 4 hr at RT or overnight at 4 °C. Following secondary antibody incubation, sections were washed three times for 10 min each and mounted on glass slides in VectaShield. Our strategy exploits the distinct subcellular locations of target proteins (e.g., nucleus, plasma membrane). Using this approach, the labeled process or synaptic target neurons could be processed simultaneously with four primary antibodies, and subsequently multiple additional times if the cellular domain to be tested lacked detectable immunoreactivity in the previous round of reactions or if the molecule was not in the same cellular compartment. We were able to test a given cellular profile in a single section for up to 6 different molecules, and across different sections for up to 8 molecules of a single cell. For light microscopic visualization, the neurobiotin signal was amplified by incubating the TBS-Tx or FT processed sections in avidin-biotin complex (VECTASTAIN Elite ABC HRP Kit) for 3 to 7 days depending on the strength of labeling (longer incubation sometimes improved the neurobiotin visualization). The sections were then processed using horseradish peroxidase-based diaminobenzidine (DAB) reactions, with the glucose oxidase method for the generation of H2O2 and nickel-intensified DAB as chromogen. The sections were treated with osmium tetroxide (0.5%-1% in 0.1 M PB), sequentially dehydrated, and mounted on slides in epoxy resin (Unal et al., 2015). Selected sections for electron microscopy were incubated in 2% wt/vol uranyl acetate during dehydration.
Anatomical data analysis
Light and fluorescence microscopy
Wide-field epifluorescence microscopy and confocal microscopy (Carl Zeiss LSM 710) were used to evaluate antibody-reacted sections. Sections were first observed with a wide-field epifluorescence microscope (Leitz DMRB; Leica Microsystems) equipped with PL Fluotar objectives. Multichannel fluorescence images were acquired with ZEN 2008 software v5.0 on a Zeiss LSM 710 laser scanning confocal microscope (Zeiss Microscopy), equipped with DIC M27 Plan-Apochromat 40×/1.3 numerical aperture, DIC M27 Plan-Apochromat 63×/1.4 numerical aperture, and Plan-Apochromat 100×/1.46 numerical aperture oil-immersion objectives. The following channel specifications were used (laser/excitation wavelength, beam splitter, emission spectral filter) for detection of Alexa405, Alexa488/EYFP, Cy3, and Cy5: 405-30 solid-state 405 nm with attenuation filter ND04, MBS-405, 409-499 nm; argon 488 nm, MBS-488, 493-542 nm; HeNe 543 nm, MBS-458/543, 552-639 nm; HeNe 633 nm, MBS-488/543/633, 637-757 nm. The pinhole size was set to 1 Airy unit for the shortest wavelength, and, to keep all channels at the same optical slice thickness, the pinhole sizes of the longer-wavelength channels were adjusted to values close to 1 Airy unit. Thus, the optical section thickness for all channels was based on the pinhole size set for the shortest-wavelength channel.
The osmium-treated sections mounted in resin after the HRP enzyme reaction (DAB) were analyzed using a transmitted-light microscope. To reveal the axonal arbor of Teevra cells in CA3 and to sample boutons in different regions and layers, two-dimensional reconstructions were made with a drawing tube attached to the transmitted-light microscope equipped with a 63× oil-immersion objective. For all visualized axons, the first and last antero-posterior sections containing boutons and the last section containing the axon were established.
Electron microscopy
We tested the correspondence between light microscopically predicted putative synapses (target cells apposed to axonal varicosities) and synaptic junctions by electron microscopy of selected axon collaterals in osmium-treated sections. This enabled us to determine the probability of detecting synaptic junctions based on axonal swellings next to a target profile. Serial sections (70 nm) were cut and mounted on single-slot, pioloform-coated copper grids for conventional transmission electron microscopy. Images were acquired with a Gatan UltraScan 1000 CCD camera. All neurobiotin-containing boutons cut at the section plane were followed in serial sections to locate synaptic junctions. We identified synapses as Gray's type I (often called asymmetrical) or type II (often called symmetrical) based on their fine structure: type I synapses have a thick postsynaptic density, whereas type II synapses are characterized by a thin postsynaptic density (Gray, 1959).
Calculation of 3D distance in the hippocampus
We used two estimates of the distance along the hippocampus at which the first branch, or branch with boutons, was observed: (1) the linear distance from the section containing the soma to the section containing the first axonal collateral in the hippocampus, as a measure of the anterior-posterior distance from the septum; and (2) using the Scalable Brain Atlas (Bakker et al., 2015), we approximated the medio-lateral, antero-posterior, and dorso-ventral coordinates of the location in the hippocampus and computed the 3D Euclidean distance of each coordinate from the septal pole of the hippocampal formation (ML: 0; AP: −0.98; DV: −1.4). These distances were highly correlated (linear correlation coefficient: r = 0.89, p = 0.003).
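For illustration, a minimal MATLAB sketch of estimate (2) and its comparison with estimate (1); the variable names (ml, ap, dv, linearDist) are assumptions, not the authors' original script:

```matlab
% Hedged sketch of the two distance estimates (coordinates in mm).
septalPole = [0, -0.98, -1.4];                        % [ML, AP, DV] reference
coords = [ml(:), ap(:), dv(:)];                       % per-cell atlas coordinates
d3 = sqrt(sum(bsxfun(@minus, coords, septalPole).^2, 2)); % 3D Euclidean distance
[r, p] = corr(d3, linearDist(:));                     % compare with estimate (1)
```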
Electrophysiological data analysis
Data were analyzed in MATLAB (R2014a, MathWorks) and Spike2 (CED). RUN was defined as locomotion, whereby the jetball (in virtual reality) or circular treadmill (for single-cell experiments, Table 1) was physically advanced by the mouse, based on virtual reality feedback or video observations. For single-cell experiments, this additionally included signals from a motion sensor within the circular treadmill. The initiation of locomotion was also included in RUN. Small postural shifts in the absence of limb motion were excluded from RUN. For labeled Teevra cells MS71c, MS73c, and MS90g, in addition to video analysis, signals from an external accelerometer in contact with a running disc were used to define RUN. REST periods included everything outside RUN periods. There was little or no contamination from sleep, as the animals were mostly awake and alert even during REST periods. To detect the zero crossings of the LIA, the CA1 reference LFP channel was downsampled to 1 kHz, the signal was rectified (time constant: 0.08 s), and the zero crossings were detected at falling level (minimum interval: 200 ms).
Rhythmicity Index
The rhythmicity index is based on the ''theta index'' (Royer et al., 2010), but it has fewer parameters (3 versus 6 in Royer et al.) for a more robust fit. Data were prepared by calculating the spike-time autocorrelogram (bin width 10 ms, maximum lag 500 ms) for spikes in periods of RUN or REST. Next, the autocorrelogram was normalized by dividing by the peak value between 100 and 200 ms (range chosen to match the first theta-frequency side band), and center values were clipped so that the overall maximum is 1. We then fit a linear trend line to the above (dotted line in the figures) and performed a nonlinear fit (using the MATLAB lsqnonlin function) to the detrended data. The fitting function is a Gaussian-modulated cosine with three parameters: (1) cosine (theta) frequency in Hz (between 4 and 8); (2) the peak value of the Gaussian scaling function (a high value indicates strong short-latency theta modulation); and (3) the standard deviation (width) of the Gaussian scaling function (a high value indicates prolonged theta modulation). The solid red lines in Figures 2 and 6 are the fitted sinusoid functions (oscillatory frequency of the neuron), and the trends are shown by dotted lines. A coefficient of determination is computed at this stage to measure the goodness of fit. After fitting, the rhythmicity index is calculated as follows: (1) for each peak and trough between 50 and 500 ms, the absolute value of the fitted sinusoid is divided by the corresponding trend line value (between zero and one); and (2) the rhythmicity index is taken as the mean of these trend-normalized peak values.
Burst duration
Spikes were gathered by growing groups such that nearest-neighbor spikes were iteratively added unless the interspike interval (ISI) exceeded a threshold; when both neighboring ISIs exceeded the threshold, the burst was delimited (minimum spikes = 2, threshold = mean ISI). Mean burst durations during RUN are reported throughout.
Rate change score and speed correlation
Mean firing rates (Hz) were calculated during RUN and REST periods. A rate change score was then computed according to the following formula:

rate change score = (rate(RUN) − rate(REST)) / (rate(RUN) + rate(REST))

The score ranges from −1 (decrease of rate during running) to 1 (increase of rate during running). The speed modulation of MS unit firing was also calculated for each neuron recorded using multi-channel probes. Data were binned at 1 cm/s. For each speed bin (> 2 cm/s), the corresponding firing frequency was calculated as the number of detected action potentials while the mouse was moving at this speed, divided by the duration the mouse ran at this speed. The firing frequencies per speed bin were linearly fitted, with a weighting function for each bin equal to the square root of its duration. A linear correlation coefficient was computed for each neuron, and it was determined whether the firing was positively modulated (r > 0, p < 0.05), negatively modulated (r < 0, p < 0.05), or not correlated with the running speed of the animal (p > 0.05).
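As a minimal MATLAB sketch of the Gaussian-modulated cosine fit described above (not the authors' original script; the variable names lags and ac for the detrended autocorrelogram are assumptions):

```matlab
% lags : autocorrelogram lag times in seconds (10 ms bins, up to 0.5 s)
% ac   : normalized, detrended autocorrelogram values at those lags
fitfun = @(p, t) p(2) .* exp(-t.^2 ./ (2*p(3)^2)) .* cos(2*pi*p(1)*t);
p0 = [6, 0.5, 0.2];                   % start: 6 Hz, moderate peak and width
lb = [4, 0, 0];  ub = [8, Inf, Inf];  % theta frequency constrained to 4-8 Hz
resid = @(p) fitfun(p, lags) - ac;    % residuals minimized by lsqnonlin
pFit = lsqnonlin(resid, p0, lb, ub);
% pFit(1): theta frequency (Hz); pFit(2): Gaussian peak (modulation depth);
% pFit(3): Gaussian SD (persistence of the theta modulation)
```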
Mean firing phase preference
For each recorded neuron, we determined the mean depth of theta modulation and the preferential mean theta phase of firing (circular mean ± circular SD) using Rayleigh's method (Zar, 1999).
Hierarchical clustering
All analyses were performed using the ''Cluster Analysis'' toolbox in MATLAB (R2014a, MathWorks). All parameters (m) for all neurons (n) were prepared in an ''m-by-n'' data matrix. The data were linearly rescaled to a newly defined minimum and maximum (−1 to 1; rescale function, MATLAB) and stored in a data matrix (e.g., X). Next, we calculated the pairwise Euclidean distance between each pair of observations in X (using the MATLAB ''pdist'' function) and stored this in a matrix (e.g., D). We then calculated the linkages and created a hierarchical cluster tree: the MATLAB function linkage returns a tree (e.g., Z) that encodes hierarchical clusters of the real matrix X, using the ''Ward'' method to measure the distance between clusters. The tree encoded by Z was plotted (using the MATLAB function ''dendrogram''), four major branches were selected (using the MATLAB function ''cluster''), and the smallest height at which a horizontal cut through the tree leaves 4 clusters was identified (using the ''maxclust'' option). The cluster assignment for each neuron was stored in a matrix (e.g., C). The silhouette method was used to determine the goodness of clustering. The silhouette value (MATLAB, Cluster Analysis toolbox; range: −1 to 1) for each point is a measure of how similar that point is to points in its own cluster versus points in other clusters, according to the following formula:

s_i = (b_i − a_i) / max(a_i, b_i)

where a_i is the average distance from the i-th point to the other points in the same cluster as i, and b_i is the minimum average distance from the i-th point to points in a different cluster, minimized over clusters. The silhouette value for a cluster is reported as the mean silhouette of its individual members. Large positive values indicate that the cluster is compact and distinct from other clusters.
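A sketch of this pipeline in MATLAB (the data layout — rows = neurons, columns = parameters — and the variable name dataMatrix are assumptions, not the authors' original script):

```matlab
X = rescale(dataMatrix, -1, 1);        % linearly rescale values to [-1, 1]
D = pdist(X);                          % pairwise Euclidean distances
Z = linkage(D, 'ward');                % hierarchical tree, Ward's method
dendrogram(Z);                         % plot the tree
C = cluster(Z, 'maxclust', 4);         % cut into the four major branches
s = silhouette(X, C);                  % per-neuron silhouette values (-1 to 1)
meanSil = accumarray(C, s, [], @mean); % mean silhouette per cluster
```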
QUANTIFICATION AND STATISTICAL ANALYSIS
Standard functions and custom-made scripts in MATLAB were used to perform all analyses. We did not estimate the minimal population sample for statistical power, but the numbers of animals and labeled neurons were similar to or larger than those employed in previous works (Katona et al., 2014; Varga et al., 2014; Viney et al., 2013). The data were tested for normal distribution. Parametric tests were used for normally distributed data, and non-parametric tests were applied to all other data. For comparisons of the firing phase preferences of different cell types, we used Watson's U² test. Kruskal-Wallis one-way analysis of variance was used to compare two groups. Box plots represent the median and 25th-75th percentiles, and their whiskers show the data range. Outliers are shown as a '+' sign.
DATA AND SOFTWARE AVAILABILITY
The software used for data acquisition and analysis is available for download. Data will be made available upon request.
"year": 2017,
"sha1": "694f35366c0a3741393f09cdb249f17c0a2402ab",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S0896627317310267/pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "694f35366c0a3741393f09cdb249f17c0a2402ab",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Effect of Qiangdi 863 Nanosynergids Treated Water, Nitrogen, Phosphorous and Potassium Fertilizers on Rice Growth Physiology and Grain Quality
Nanotechnology is an emerging field that helps address biotic and abiotic constraints in agriculture and thereby enhance crop productivity. This study therefore tested whether the Qiangdi 863 nanosynergid biological-assisted growth apparatus, combined with nitrogen, phosphorous, and potassium (NPK) fertilizers, improves rice germination, early growth, physiology, and yield. An experiment was performed on five rice varieties for three consecutive years (2017-2019). The nanosynergids-treated water (NTW) significantly improved the speed of germination (25.3, 35.6, and 32.3%), final emergence percentage (100%), seed emergence energy percentage (80, 95, and 90%), radicle (1.25, 1.7, and 2.35 cm) and plumule growth (1.29, 1.24, and 1.66 cm), soil-plant analysis development (SPAD) values (46, 45, and 47), and antioxidant enzyme activities, such as catalase (34,376, 33,264, and 34,453 µg−1 FW h−1), superoxide dismutase (18,456, 19,445, and 19,954 µg−1 FW h−1), and peroxidase (745, 734, and 752 U g−1 FW), while decreasing malondialdehyde (4.5, 5.1, and 4.2 µmol g−1 FW), for the three years respectively in KSK 133. Irrigation with nano-treated water enriched the biomass of rice seedlings. Overall, the nanosynergid treatments enhanced the endogenous hormone contents of salicylic acid (6,016.27, 5,823.22, and 5,922.12 pmol/L), jasmonates (JA; 5,175.6, 4,231, and 5,014.21 pmol/L), and brassinosteroids (BR; 618.2, 546.83, and 582.1 pmol/L), as well as the 1,000-grain weight (22.3, 22, and 23.2 g) of KSK 133. Hence, the overall results showed that NTW can effectively enhance the early growth and yield of rice varieties.
INTRODUCTION
Almost half of the world's population (2.7 billion people) depends on rice to satisfy their food requirements. Over the last 30 years, the worldwide area devoted to rice cultivation has been 155.5 million hectares, growing at about 0.39% annually. The rice-dependent population may increase from 2.7 billion to 4.6 billion people by 2050. Hence, to meet the increased demand for rice, annual worldwide production may have to increase by 70%, from 520 million tons to 880 million tons, by 2025, and possibly to 1 billion tons by 2050. However, achieving this productivity target is constrained by the limited area available for rice cultivation. The cost of developing new land is very high due to the lack of water resources and the expansion of urban and industrial areas in Asia (Dobermann and Fairhurst, 2002). Therefore, food shortage has become a major global issue as the population grows. At present, the decline in food quality and quantity is the main challenge in the agriculture sector, and it is hard to address with traditional cultivation techniques, whose main shortcomings are losses in crop production and difficulties in maintaining soil structure and fertility (Younas et al., 2020).
To meet global challenges, such as population growth, environmental change, and plant nutrient deficiency, a new revolution in agriculture is needed, and nanotechnology is one of its drivers. Nanotechnology has been playing a significant, transformative role in the agro-food sector (Parisi et al., 2014). Nanomaterials have been proposed as an answer to some current agri-food challenges and can provide valuable resources to improve the whole agricultural and food chain system, including the production of nano-based agricultural products and the use of nano-fungicides for pathogen eradication (Sekhon, 2014). The Qiangdi nanometer 863 is a nano-device widely used in Chinese agriculture. The nano-863 is a high-tech product built on a ceramic carrier material with strong absorbing properties. Previous studies showed that nanosynergid-863-treated water (NTW) used in japonica rice seed germination, growth assays, and pesticide dilution preparation improved plant growth and development; nanometer 863 also improved the average germination rate and early growth of legumes (Vigna unguiculata), cucumber (Cucumis sativus), and cabbage (Brassica oleracea var. capitata) (Fang et al., 2004).
Nanotechnology is the science of the deliberate design, modification, and characterization of extremely small particles and macromolecules; it enables the creation of innovative structures with exceptional properties at the nanoscale. The chemical, physical, and biological properties of these nanomaterials differ in essential and valuable ways from those of individual atoms, molecules, or bulk matter (Nel et al., 2006). Nanotechnology is now considered a promising tool in modern cultivation technology, and nano-enabled agriculture has become a dynamic source of revenue. Nanomaterials act as agrochemical agents that can effectively increase production while reducing nitrogen fertilizer consumption. Advanced cultivation methods can be used to increase crop yield, avoid excessive damage to soil and water, and minimize nutrient leaching (Mahil et al., 2019; Ahmad et al., 2020). The objective of this study was to examine the effects of Qiangdi nano-863 nanosynergids-treated water in combination with NPK fertilizers on five rice varieties, one from China and four from Pakistan.
Soil Physiochemical Analysis
The experimental area was characterized for its physical and chemical properties.
Plant Material and Growth Conditions
Seeds of commonly cultivated rice varieties, namely Zhongzao 39 (a Chinese Indica variety) and PK 1121 Aromatic, KSK 133, KS 282, and Super Basmati (Pakistani Indica varieties), were used as germplasm. The experiment was conducted in a split block design with three replicates and a plot size of 3 m × 2 m.
Nanosynergids-Treated Water and Rice Irrigation
The Qiangdi nano-863 biological-assistant growth apparatus (disk) was placed in a plastic bucket with 20 L of water for 72 h (3 days) to produce nano-treated water. Rice seeds were presoaked in tap water for 24 h and then soaked in nano-treated water for 24 h; germination started in the treated seeds after 36 h. In each of the three years (2017, 2018, and 2019), 300 seeds of each variety were sown in three replicates (100 seeds per replicate).
Water and Fertilizer Management
Fertilizer application after the transplanting stage was the same as in common rice production. The same (homologous) nano-treated water was used to irrigate the rice seedlings originating from nano-treated seeds. The recommended doses of NPK fertilizer for rice were 53 kg N, 16 kg P, and 33 kg K ha−1 (Figure 1).
Nursery Bed Preparation
Seeds were sown during the second week of June in 2017, 2018, and 2019 in trays (1.5 m × 2.0 m) on raised nursery beds. Farmyard manure was mixed into the soil, and uniform nursery beds were prepared on a 6-7 cm layer of soil. Nursery beds were flooded with water as in conventional rice cultivation. After 1 month, the seedlings in the nursery beds were ready for transplantation.
Transplantation
The seedlings were transplanted into the field in the same split block design, with three replicates and a plot size of 3 m × 2 m.
Germination and Seedling Growth
For the nano-treated-water experiment, seed germination was documented daily according to the Association of Official Seed Analysis protocol (Association of Official Seed Analysis, 1990) until it became constant. From the recorded data, the speed of germination (SG), final germination percentage (FGP %), and germination energy percentage (GE %) were calculated (Ruan et al., 2002; Anbumalarmathi and Mehta, 2013) using the formulae below.
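The formulae themselves did not survive extraction here; the following standard definitions, consistent with the cited sources, are given as an assumed reconstruction:

$$\mathrm{SG}=\sum_i \frac{n_i}{t_i},\qquad \mathrm{FGP}\,(\%)=\frac{N_{\text{germinated}}}{N_{\text{total}}}\times 100,\qquad \mathrm{GE}\,(\%)=\frac{N_{\text{germinated by the energy period}}}{N_{\text{total}}}\times 100$$

where $n_i$ is the number of seeds newly germinated on day $t_i$, $N_{\text{total}}$ is the number of seeds sown, and the energy period is the fixed early count day (here, day 3).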
Dry Weight (g)
Ten random seedlings per treatment were selected for measuring dry weight (g). Shoot and root dry weights (10 seedlings) were recorded after oven drying at 70 °C for 24 h in a drying oven (Islam et al., 2018).
Chlorophyll Content (mg g−1 FW)
Chlorophyll was extracted from 0.2 g of fresh leaves by soaking leaf samples in 25 ml of an acetone and alcohol solution (v:v = 1:1) for 24 h in the dark at room temperature. The absorbance of the extract was measured at 663, 645, and 470 nm using a UV-VIS spectrophotometer (UV-2600, Shimadzu, Japan) to estimate chlorophyll a (C_a), chlorophyll b (C_b), carotenoid content (C_t), and total chlorophyll content according to the scheme described by Marschall and Proctor (2004):
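The equations from Marschall and Proctor (2004) were not recoverable from the source; widely used Arnon-type relations for acetone extracts, given here as an assumed reconstruction, have the form:

$$C_a = 12.7\,A_{663} - 2.69\,A_{645},\qquad C_b = 22.9\,A_{645} - 4.68\,A_{663},\qquad C_{a+b} = 20.2\,A_{645} + 8.02\,A_{663}$$

in mg L−1 of extract, converted to mg g−1 FW via the extract volume and sample mass; carotenoids are estimated from $A_{470}$ corrected for the chlorophyll terms.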
Soil-Plant Analysis Development Value
Chlorophyll content was also characterized as Soil-Plant Analysis Development (SPAD) values of the rice seedlings (Esfahani et al., 2008). SPAD values were measured on rice flag leaves and the second and third leaves from the top, at 10-day intervals before transplantation, using a chlorophyll meter (SPAD-502 Plus).
Antioxidant Enzyme Activities
The activity of superoxide dismutase (SOD) was determined by the method described by Gupta et al. (2018), measured through the inhibited photo-reduction of nitro-blue tetrazolium (NBT). The SOD reaction mixture contained 25 mmol of sodium phosphate buffer (pH 7.8), 13 mmol of methionine, 2 µmol of riboflavin, 10 µmol of EDTA-Na2, 75 µmol of NBT, and 0.1 ml of leaf extract, in a total volume of 3 ml. The test tubes containing the reaction solutions were irradiated with light (fluorescent lamps, 300 µmol m−2 s−1) for 20 min, and the activity was measured at 560 nm.
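One unit of SOD is conventionally defined as the amount of enzyme causing 50% inhibition of NBT photoreduction; on that assumption (the source does not state the formula), the activity follows as:

$$\mathrm{SOD} = \frac{(A_{\text{blank}}-A_{\text{sample}})/A_{\text{blank}}}{0.5}\times\frac{V_t}{V_s \times W}$$

where $A$ is the absorbance at 560 nm, $V_t$ the total reaction volume, $V_s$ the enzyme extract volume, and $W$ the fresh weight of the sample.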
Peroxidase (U g−1 FW)
The peroxidase (POD) activity was based on the determination of guaiacol oxidation by H2O2 at 470 nm and was expressed as U g−1 FW. The change in absorbance at 470 nm was recorded every 20 s with a spectrophotometer (HITACHI U-3900). One unit of POD activity is the amount of enzyme that causes the decomposition of 1 µg of substrate at 470 nm in 1 min in 1 g of fresh sample at 37 °C (Sheteiwy et al., 2017).
Malondialdehyde (µmol g−1 FW)
The content of malondialdehyde (MDA) was determined by the method of Chun and Wang (2003). Enzyme extract (2 ml) was added to 1 ml of 20% (v/v) trichloroacetic acid and 0.5 ml (v/v) of thiobarbituric acid. The mixture was heated in a preheated water bath at 95 °C for 20 min, cooled to room temperature, and then centrifuged at 10,000 × g for 10 min. Lipid peroxidation absorbance was measured at 450, 532, and 600 nm with a spectrophotometer (UV-VIS Spectrophotometer-2600, Shimadzu). The MDA content was calculated using an extinction coefficient of 155 mM−1 cm−1 (Heath and Packer, 1968) and expressed as µmol g−1 FW.
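Using the stated extinction coefficient, the MDA content follows the standard relation (an assumed form, consistent with Heath and Packer, 1968; the source does not reproduce it):

$$\mathrm{MDA}\ (\mu\mathrm{mol\ g^{-1}\ FW}) = \frac{(A_{532}-A_{600})\times V}{155\ \mathrm{mM^{-1}\,cm^{-1}}\times W}$$

where $V$ is the extract volume (mL) and $W$ the sample fresh weight (g); the $A_{450}$ reading serves to correct for interfering compounds in three-wavelength variants of the assay.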
Quantifications of Plant Growth Hormones
Jasmonates (JA) are commonly present in plants and act as plant growth regulators (Ahmad et al., 2019). Brassinosteroids (BR) play an important part in controlling a broad spectrum of developmental processes and plant growth (Sharma et al., 2015). Salicylic acid (SA) is a major endogenous signal in plant disease resistance, flowering, and thermogenesis (Yang et al., 2004). JA, SA, and BR were quantified using a MULTISKAN MS microplate reader.
Parameters of Yield Determination/Quantitative Data of Rice
Yield parameters, namely plant height, biomass, number of panicles, number of seeds per panicle, filled grains per panicle, unfilled grains per panicle, and thousand-grain weight, were measured (Kheyri et al., 2019).
Statistical Analysis
All data recorded from the five rice varieties (Zhongzao 39, KSK 133, KS 282, Super basmati, and PS 2) were subjected to statistical analysis as the mean ± standard error (SE) of three replicates. Statistical analyses were performed using standard analyses of variance (four-way ANOVA) in SPSS v. 17 (Zheng et al., 2016). The mean variance of the data was examined using the least significant difference (LSD) test at the 0.05 probability level.
Soil Physiochemical Analysis
Soil samples were collected before sowing and after harvesting the crop in each of the three years. The physical and chemical analyses showed soil pH values of 8.2, 8.7, and 8.5, together with the electrical conductivity.
Effect of Nanosynergids-Treated Water on Seedling Emergence
Seed germination experiments were conducted for the assessment of different physiological parameters (Tables 1A-C). Germination under NTW treatment was observed for 72 h (3 days). Assessment parameters included speed of emergence (SE), percentage emergence (PE), and seed emergence energy percentage (SEEP). NTW proved significantly effective for rice germination. SE improved more in KSK 133, Super basmati, and PK 1121 Aromatic than in KS 282 and Zhongzao 39. After 3 days, the highest SEEP was observed in KSK 133 (80% in 2017, 95% in 2018, and 90% in 2019) (Table 2).
Effect of Nanosynergids-Treated Water on Growth Characteristics (Radicle and Plumule Lengths) at the Early Seedling Stage
Nanosynergids-treated water significantly enhanced radicle and plumule lengths in all rice varieties (Figure 2 and Table 3). Overall, the three years (2017, 2018, and 2019) of NTW data revealed improved radicle growth. According to the present experimental observations, the nanosynergid is a good tool for the enhancement of germination and growth.
Dry Weight of Early Seedlings
NTW improved the dry weight of rice seedlings (Figures 3A-C). Overall, across the three growing seasons, the highest seedling dry weight was observed in KSK 133 (1.18 g in 2017, 1.16 g in 2018, and 1.31 g in 2019) and the smallest enhancement in Zhongzao 39 (0.88 g in 2017, 0.90 g in 2018, and 0.72 g in 2019). Super basmati, KS 282, and PK 1121 Aromatic also showed improved dry weight under NTW treatment. Total seedling dry matter production is considered very important for interpreting the yield of the rice crop (Figures 3A-C). Four-way ANOVA for dry weight with five rice varieties, three years, two locations, and treatments displayed significant results (Table 2).
Soil-Plant Analysis Development Value
Soil-plant analysis development (SPAD) values are mainly used for diagnosing the N status of crops. All five varieties showed variation in SPAD values after the application of NTW (Figures 4, 5D-F). Four-way ANOVA for SPAD indicated a significant interaction between the five varieties, three years, two locations, and treatments (Table 2).
Oxidative Enzymes
The NTW showed effective results for biochemical components such as enzymes (antioxidant and oxidant enzymes), reactive oxygen species (ROS), protein, starch, and amino acids. SOD is a main superoxide scavenger owing to its enzymatic activity, and SOD activity in rice seedlings increased under nano-treated water relative to the control in all rice varieties. Compared with the control, KSK 133 showed the highest SOD activity (18,456 µg−1 FW h−1 in 2017 and 19,455 µg−1 FW h−1 in 2018), while other varieties (17,416 µg−1 FW h−1 in 2017, 16,451 µg−1 FW h−1 in 2018, and 17,345 µg−1 FW h−1 in 2019) and PK 1121 aromatic (12,485 µg−1 FW h−1 in 2017, 11,543 µg−1 FW h−1 in 2018, and 12,134 µg−1 FW h−1 in 2019) also performed better than the control. Four-way ANOVA for SOD with five rice varieties, three years, two locations, and treatments displayed significant results (Table 2).
Catalase (CAT) activity is one of the ROS-scavenging enzyme activities of plants. The three-year experiments showed an increase in CAT activity under nano-treated water in all rice varieties (Figures 4D-F), with CAT activity in 2017 most improved in KSK 133 (34,376 µg−1 FW h−1; Table 2).
The present study showed that NTW exposure had a marked effect on the SOD, CAT, and POD antioxidant enzymes. Antioxidant enzymes can keep ROS in check and reduce their toxicity, protecting rice cells from damage. The increased CAT activity under NTW might be the most important factor detoxifying ROS and decreasing MDA contents (Figures 5D-F).
Malondialdehyde (µmol g−1 FW)
Malondialdehyde content is an important indicator of the extent of lipid peroxidation. Higher concentrations of MDA affect the plant and indicate cell membrane damage. Increased MDA is produced when polyunsaturated fatty acids in the membrane undergo oxidation through the accumulation of free oxygen radicals.
The lowest total chlorophyll contents were 898 ± 1.10 mg g−1 FW in 2017, 899 ± 1.10 mg g−1 FW in 2018, and 956 ± 1.21 mg g−1 FW in 2019. Four-way ANOVA for chlorophyll content with five varieties, three years, two locations, and treatments showed significant results (Table 2), as did the corresponding four-way ANOVA across five rice varieties, three years, two locations, and treatments (Table 2). Salicylic acid is an important phenolic compound present in plants at various levels; in the present study, SA varied from 454.03 to 6,186.86 pmol/L. SA is produced from benzoic acid and is present in leaves as the free acid. The present treatment increased antioxidant activity, photosynthetic activity, and the level of endogenous SA in rice leaves (Tables 6A-C). Endogenous SA plays a vital antioxidant role in protecting rice from oxidative stress. The highest SA was found in KSK 133 (6,016.27 pmol/L in 2017, 5,823.22 pmol/L in 2018, and 5,922.12 pmol/L in 2019) and the lowest in Zhongzao 39 (454.03 pmol/L in 2017, 467.11 pmol/L in 2018, and 460.43 pmol/L in 2019). Four-way ANOVA for the SA hormone with five rice varieties, three years, two locations, and treatments showed significant results (Table 2).
Agronomic Parameters for the Yield of Rice
The NTW application had a significant impact on yield-attributing characters, i.e., plant height (cm), remaining biomass (RB), branch weight without seeds (g), panicle weight (g), number of panicles, total number of seeds per panicle, filled grains per panicle, unfilled grains per panicle, and 1,000-grain weight (Table 7). The maximum yield was observed in KSK 133, with 1,000-grain weights of 22.3, 22, and 23.2 g in 2017, 2018, and 2019, respectively. Four-way ANOVA for 1,000-grain weight with five rice varieties, three years, two locations, and treatments showed significant results (Table 2).
Multivariate Analysis (Principal Component Analysis) Based on Physiological, Biochemical, and Yield Parameters
Principal component analysis (PCA) was used to aid the interpretation of the complex data on physiological, biochemical, and yield parameters (Figure 6 and Table 8).
Plots 1, 2, and 3 of the PCA summarize the biochemical, physiological, and yield results of the Chinese and Pakistani rice varieties exposed to NTW. In 2017, principal component 1 (PC 1) explained 33.45% of the variance and PC 2 explained 24.10%, for a cumulative 57.55%. CAT, POD, chlorophyll content, and 1,000-grain weight grouped with positive loadings on the upper side of the biplot, suggesting that these parameters were positively correlated. Dry weight, JA, SA, and BR loaded positively on the lower side, whereas SPAD and SOD were negatively correlated in the biplot. In 2018, PC 1 explained 45.63% and PC 2 explained 18.83%, for a cumulative 64.46%. Dry weight, SA, JA, BR, MDA, and 1,000-grain weight were on the upper right side of the biplot, suggesting a positive correlation among themselves; CAT, SOD, POD, chlorophyll, and SPAD were on the lower side. In 2019, the cumulative variance of the first two components was 63.49% (46.75% for the first and 16.75% for the second). Dry weight, chlorophyll, SPAD, CAT, POD, SOD, MDA, and 1,000-grain weight were positively correlated on the upper right side, and the hormones JA, SA, and BR, also positively correlated, were on the lower side of the biplot. Thus, the most important descriptor was the 1,000-grain weight (yield parameter), which showed positive correlations in PC 1 (2017), PC 2 (2018), and PC 3 (2019). PCA can be used to eliminate redundancy in the data set.
DISCUSSION
In the present experiment, we verified the effect of NTW on rice growth and yield attributes. Growth parameters, such as seed germination, seedling growth, SPAD value, chlorophyll content, antioxidant enzymes, endogenous hormone quantification, and yield characteristics, were particularly considered.
Seed germination is the first stage of the plant life cycle. Moreover, seed germination tests offer numerous benefits, such as simplicity, sensitivity, cost-effectiveness, and suitability for assessing mobilized sugars for germination in rice samples (Wang et al., 2018; Acharya et al., 2020). In this study, germination was observed 24 h (1 day), 48 h (2 days), and 72 h (3 days) after NTW treatment. SE, PE, and SEEP were evaluated to check the impact of NTW on the emergence of rice seed; 72 h of nano-water treatment showed significantly better results than the control. SE improved more in KSK 133, Super basmati, and PK 1121 Aromatic than in KS 282 and Zhongzao 39. Nanometer pottery trays (NPTs) and high-energy nanomaterials have likewise been shown to improve rice seed germination (Jun-rong et al., 2016). In the current study, NTW showed a pronounced effect on FEP. In the growing seasons of 2017, 2018, and 2019, after NTW application, the most prominent seedling emergence was recorded in KSK 133 and the lowest in Zhongzao 39 (Tables 1A-C). At the emergence stage, nano-treated water irrigation exhibited a gradual increase in seed germination in the order KSK 133 > Super basmati and PK 1121 Aromatic > KS 282 > Zhongzao 39. Jun-rong et al. (2016) made the same observation when exploring the effects of four NPT treatments on the biological properties of rice, finding a positive influence on seed germination and early seedling growth.
The Qiangdi 863 nanosynergid is manufactured from a composite nano far-infrared technology material. The nanosynergid comprises nanomaterials that absorb at a beneficial vibration frequency (λ) and release far-infrared waves that exert a phyto-stimulatory effect on plant growth. NTW significantly enhanced radicle and plumule lengths in all rice varieties. In previous field experiments, rice seeds soaked using the nano device significantly increased rice production by more than 10%, and the head rice ratio and gel consistency increased by 31.2 and 15.0%, respectively, after treatment with nano devices (Liu and Liao, 2008; Wang et al., 2011). Overall, the three years (2017, 2018, and 2019) of NTW data revealed improved radicle growth. The greatest radicle and plumule lengths were observed in KSK 133 and the smallest in Zhongzao 39, while KS 282, Super Basmati, and PK 1121 aromatic also showed improved radicle and plumule growth compared with the control (Table 3). According to the present experimental observations, the nanosynergid is a good tool for the enhancement of germination and early growth.
The positive outcomes of NTW included improved dry weight of rice seedlings. Overall, in the three growing seasons, the highest seedling dry weight was observed in KSK 133 and the lowest in Zhongzao 39 (Figures 3A-C). Total seedling dry matter production is considered very important for interpreting the yield of the rice crop (Figures 3A-C). In previous studies, carbon-based nanomaterials improved growth through activated cell growth, and such nanomaterials had a sound effect on plants with different growth rates (Lahiani et al., 2016). The relation between leaf nitrogen level and chlorophyll in plant leaves can be expressed as a SPAD reading (Swain and Sandip, 2010; Zafar et al., 2017). All five varieties showed variation in SPAD values after the application of NTW (Figures 3D-F). The data from the present study showed that SPAD values of flag leaves in Zhongzao 39 and KSK 133 were improved, suggesting that NTW had a significant effect in delaying the senescence of rice seedlings. Hong et al. (2005) applied TiO2 nanoparticles and observed an improved photosynthetic rate and photochemical reaction activity, such as light absorbance, the transformation of light energy to electron energy, photophosphorylation efficacy, and oxygen evolution. After treatment with nanotechnologies, roots were more developed and could absorb more nutrients, thereby increasing biomass. The total absorption and content of phosphorus in plants both increased with nitrogen, phosphorus, and potassium application, which was closely related to the synergistic effect of nano device-treated water on phosphorus.
Therefore, it can be concluded that NTW, in combination with other inputs, also improves the factors contributing to SPAD.
Nanotechnologies and nanomaterials are a double-edged sword, with both positive and negative consequences (Service, 2004). To reduce the negative effects, the selection of the crop and of a specific nanosynergid and nanotechnology with suitable energies is very important. Nanosynergids cannot penetrate the plant cell because their specific energies only take part in breaking water molecules; these energies dissipate after the formation of activated water, reducing the chance of nanotoxic effects on the plant cell (Hussain et al., 2020). The effect of NTW on growth characteristics varied from variety to variety. In this trial, plant growth responded well to NTW: the data showed a significant increase in whole plant length under NTW in all rice varieties (Table 4). Earlier studies have revealed that nano fertilizers may have a synergistic effect on nutrient uptake by plant cells, resulting in optimal growth (Morteza et al., 2013).
Oxidative enzymes are a key factor in abiotic and biotic stress (Chini et al., 2004), and H2O2 acts as a secondary messenger activating defensive genes (Pellinen et al., 2002). NTW showed effective results in accelerating biochemical components such as enzymes (antioxidant and oxidant enzymes), ROS, protein, starch, and amino acids. Among the enzymatic antioxidants, SOD is a main superoxide scavenger owing to its enzymatic activity (Sharma et al., 2012). SOD activity in rice seedlings increased under NTW relative to the control in all rice varieties. Compared with the control, the data from the three years (2017, 2018, and 2019) showed the highest SOD activity in KSK 133 and the lowest in Zhongzao 39 (Figures 4A-F), while the remaining three varieties, KS 282, Super basmati, and PK 1121 Aromatic, also performed better than the control.
Catalase is one of the ROS-scavenging enzymes of plants. The experiments conducted over three years showed an increase in CAT activity with NTW in all rice varieties (Figures 4D-F). CAT activity in 2017 was least improved in Zhongzao 39 and highest in KSK 133, while Super Basmati, KS 282, and PK 1121 aromatic also showed improved CAT activity compared with non-treated water. Laware and Raskar (2014) reported that increased antioxidant enzyme activities, such as POD, SOD, and CAT, in soya bean seeds germinated with nano-SiO2 and nano-TiO2 could significantly promote seedling growth (Hong et al., 2005).
Peroxidase is an important element in overcoming the cascade of uncontrolled oxidation and protecting the plant from oxidative damage (Fahad and Mohammed, 2020). The smallest enhancement of POD was observed in Zhongzao 39 and the highest in KSK 133 in all three years. The descending order of POD enhancement among the rice varieties was: KSK 133 > Super basmati > KS 282 > Aromatic 1121 > Zhongzao 39. In the present study, CAT and POD were significantly improved with the nanosynergid (Figures 5A-C).
The present study showed that NTW exposure had a marked effect on the SOD, CAT, and POD antioxidant enzymes. SOD converts the negatively charged superoxide radical (O2−) into H2O2 and O2, whereas CAT and POD transform the H2O2 into H2O and O2 (scavenging of H2O2) (Anjum et al., 2015; Zafar et al., 2020). Therefore, antioxidant enzymes can keep ROS in check, reduce their toxicity, and protect rice cells from damage. Increased CAT activity under NTW might be the most important factor detoxifying ROS and decreasing MDA contents (Figures 5D-F).
Malondialdehyde content is an important tool for describing the extent of lipid peroxidation: higher concentrations affect the plant and indicate cell membrane damage. Increased MDA is produced when polyunsaturated fatty acids in the membrane undergo oxidation through the accumulation of free oxygen radicals, and increased lipid peroxidation is the main indicator of oxidative damage in plants (Bor et al., 2003). The present experiment displayed higher MDA content in the control treatments of all five varieties. Previous studies showed decreased MDA content mediated by calcium phosphate nanoparticles (NPs) in both root and shoot compared with the control, which could be due to the variation of ROS in plants (Upadhyaya et al., 2017). The MDA content under NTW decreased in the descending order Zhongzao 39 > KS 282 > PK 1121 aromatic > Super basmati > KSK 133 (Figures 5D-F). In our trial, MDA content was lower in nano-treated rice seedlings than in the control.
In the present study, we found that chlorophyll a, chlorophyll b, and total chlorophyll content were enhanced by NTW, which can be closely related to photochemical reaction activity. Experiments with nano-TiO2 showed an improved photosynthetic rate and photochemical reaction activity, such as light absorbance, the transformation of light energy to electron energy, photophosphorylation efficacy, and oxygen evolution (Hong et al., 2005). Total chlorophyll content (Chl a, Chl b, and carotenoids) decreased in the control and increased under NTW across the different rice varieties (Table 5). These results indicate that chlorophyll contents are basic indicators of photosynthetic activity in rice leaves, and low chlorophyll content is a major cause of low growth and yield in rice plants (Table 5). Although nanomaterials and nanotechnologies positively affect plant seed germination and growth, some serious challenges remain, such as nanomaterial reactions that lower photosynthetic activity and cause phytotoxicity (Tripathi et al., 2017).
Brassinosteroids are polyhydroxylated steroidal hormones or growth regulators associated with different physiological functions, e.g., seed germination, cell elongation, cell division, and root development, and they also mediate responses to various biotic and abiotic stresses (De Vleesschauwer et al., 2012). BR signaling genes improved rice architecture and increased grain yield (Bajguz, 2011). In the present study, BR levels were higher in nanosynergid-treated rice than in the control: KSK 133 showed the highest amount of BR, while the lowest was recorded in Zhongzao 39 (Tables 6A-C). Previous studies claimed that BR activates specific transcription factors that stimulate BR-targeted genes and regulates antioxidant enzyme activities, SPAD value (photosynthetic capacity), and chlorophyll contents to improve plant growth (Anwar et al., 2018). In rice, BR promoted plant growth and immunity against blast fungal disease (Pyricularia oryzae). In conformity with the earlier reports, BR remarkably increased plant growth activities related to defense mechanisms and biomass production.
Jasmonic acids are lipid-derived compounds, derived from α-linolenic acid, which play an important role in the rice defense system against microbial infection (Tables 6A-C) (Yang et al., 2019). Such lipid-derived compounds assist in plant responses to, and protection from, biotic and abiotic stress (Schaller and Stintzi, 2009). The results of the present study are in accordance with the literature, in that JA and nanosynergids showed a stimulatory effect on rice immunity together with an increased level of JA in rice; earlier studies stated that JA is involved in a range of processes from development to light responses (Wasternack and Kombrink, 2010). Jasmonates do not act individually but within a complex signaling network of interacting plant hormone pathways (Ahmad et al., 2019). The present study showed an increased level of endogenous JA in rice (Tables 6A-C).
Salicylic acid is produced from benzoic acid and is an important phenolic compound present in plants at various levels; rice contains high basal SA levels (5,000-30,000 ng g−1 fresh weight). SA is present in leaves as the free acid, and the rice plant maintains higher SA levels in leaves than in the shoot and roots (Koo et al., 2020). SA helps control redox reactions and protects against oxidative, biotic, and abiotic stress (Yang et al., 2019). In the present study, NTW increased antioxidant activity, photosynthetic activity, and the level of endogenous SA in rice leaves (Tables 6A-C).
Endogenous SA plays a vital antioxidant role in defending rice from oxidative stress, so a high amount of SA can be directly related to triggering antioxidant responses, modulating redox balance, and scavenging ROS (Grant and Loake, 2000). The present findings showed the highest SA concentration in KSK 133 and the lowest in Zhongzao 39. In the foliage of Alternanthera tenella, SA improved antioxidant activity and increased betacyanin content, which are associated with antioxidant action (Lucho et al., 2019).
The cross-talk of plant hormones is central to plant stress responses: SA and JA are resistance factors, and BR is responsible for above-ground plant growth. The present study therefore indicates that endogenous hormones play an important role in growth, developmental processes, and plant immunity (Figure 7). These hormones balance oxidative stress, growth activities, and the defensive system of rice, and can improve crop quality and stress tolerance in agricultural harvests. The same was observed in previous studies of the BR and JA pathways, which involve a balance between growth and defense, with SA controlling early defense gene expression and JA inducing late defense gene expression.
The nanosynergid application had a significant impact on yield-attributing characters, i.e., plant height (cm), RB, branch weight without seeds (g), panicle weight (g), number of panicles, total number of seeds per panicle, filled grains per panicle, unfilled grains per panicle, and 1,000-grain weight (g), compared with the control (Table 7). In previous studies, root, shoot, and grain parameters were improved by ZnO2 nanoparticles acting as a nano fertilizer (Bala et al., 2019). The same was observed in the current study, where the yield parameters were significantly higher with nanosynergids in KSK 133. A major reason for the higher rice yield is that nano-treated irrigation water can increase the production of filled panicles. Earlier studies also support this argument: seeds primed by Qiangdi nano-863 achieved good yield in japonica rice (Jun-rong et al., 2016).
As confirmed, nanosynergids enhanced grain yield in rice and are currently used in agriculture owing to their lack of toxicity, biodegradability, and edibility (Lemraski et al., 2017). In the present experiment, the maximum plant height, RB, and branch weight without seed were recorded in NTW rice. Branch weight and panicle weight were most improved in NTW KSK 133 and least in Zhongzao 39 (Table 7). The number of panicles indicates the grain yield of a rice plant; KSK 133 exhibited the highest filled grains per panicle in all three years. In previous studies, nanomaterials increased root biomass (31-37%) and root area (12-35%) and improved overall leaf area. Fertilizer absorption, physiological activity, and function were improved in rice, and growth vigor and stress resistance were stronger than in the control; development was accelerated, precocity was promoted, and yield was increased. Seed inspection and yield measurements demonstrated that the spike number, spike length, grain number per panicle, and 1,000-grain weight per unit area in the treatment plots were all significantly higher than those of the control (Zhang et al., 2007). Therefore, agronomists recommend the application of nano-fertilizers, which can significantly influence biomass and grain yield (Janmohammadi et al., 2016).
FIGURE 7 | The Qiangdi nano-863 nanosynergid releases electromagnetic waves, which break the macro-molecules of water into micro-molecules. Micro-molecules of water entering the seed activate the hormone (GA) and amylase and speed up the germination process. Activation of oxidative enzymes (SOD, CAT) and riddance of reactive oxygen species (ROS) lower the production of MDA and maintain the redox reactions in different subcellular structures. H2O2 is generated in normal metabolism via the different organelle electron transport chains in mitochondria, chloroplast PS-I and PS-II, and the cytosol. SA and JA also help in the oxidative response and rice immunity, while BR promotes growth and antioxidant activity. The rice plant then shows faster germination, an established root system, enhanced tillering, flowering, and fully filled grains.
Extensive datasets are increasingly common and are often difficult to interpret. PCA is a method for reducing the dimensionality of such datasets, increasing interpretability while minimizing information loss (Jolliffe and Cadima, 2016). PCA was therefore used in the current study for better interpretation of the complex data; the physiological parameters, biochemical parameters, and yield of the five rice varieties were summarized by PC1 in 2017, PC2 in 2018, and PC3 in 2019 (Figure 6 and Table 8). Principal components 1, 2, and 3 were obtained from the biochemical parameters, physiological parameters, and yield of the Chinese and Pakistani rice varieties irrigated with NTW. Eigenvalues greater than one were used to determine the PC score of each factor (Mandal et al., 2008). Earlier studies have shown the value of this analysis for detailed crop assessment (Shen et al., 2021).
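As a worked illustration of this kind of analysis, the sketch below runs a PCA on a small table of physiological, biochemical, and yield measurements and retains the components with eigenvalues greater than one, mirroring the criterion cited above. The variable names and values are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical measurements for five rice varieties (rows):
# columns = chlorophyll, SOD, CAT, MDA, plant height, 1000-grain weight
X = np.array([
    [38.1, 210.0, 95.2, 4.1, 101.3, 24.8],
    [41.5, 250.3, 110.4, 3.2, 108.9, 26.1],
    [36.8, 198.7, 88.9, 4.8, 97.5, 23.9],
    [40.2, 241.1, 104.6, 3.5, 105.0, 25.7],
    [37.9, 220.5, 99.8, 4.0, 100.1, 24.5],
])

# Standardize so each variable contributes equally
Xs = StandardScaler().fit_transform(X)

pca = PCA()
scores = pca.fit_transform(Xs)

# Eigenvalues of the correlation matrix; keep PCs with eigenvalue > 1
eigenvalues = pca.explained_variance_
kept = [i for i, ev in enumerate(eigenvalues) if ev > 1]
print("eigenvalues:", np.round(eigenvalues, 2))
print("retained components:", [f"PC{i+1}" for i in kept])
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
```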
Proposed Mechanism of Qiangdi Nano-863-Treated Water-Induced Seed Germination and Physiological and Biochemical Attributes of Rice
The Nanometer Qiangdi 863 nano disk has strong light-absorbing properties, and its ceramic material acts as a carrier for electron transport. The nano-ceramic disk has electrical and chemical properties with low toxicity and high biocompatibility. The Qiangdi nano-863 disk emits electromagnetic waves (2 ∼ 25) sufficient to produce declustered (activated) water molecules of high energy (10 −4) (Jun-rong et al., 2016). Activated water can easily enter plant cells and stimulate metabolism through current/potential and redox kinetics. The magnetic waves influence the crystallization process, association, dissociation, and nucleation rates of water (Zlotopolski, 2017).
Nano-treated water can enhance seed germination: the influx of NTW into the seed triggers amylase activity. Through the activation of antioxidant enzymes (CAT and SOD) and the associated control of MDA, the Nanometer Qiangdi 863 system maintains ROS within the optimum range, where they act as signaling molecules triggering the essential metabolic activity of rice seedling development (germination, dry weight, and chlorophyll content) and growth (Figure 7).
These oxidative enzymes also reduce the toxicity of ROS generated in the Krebs (citric acid) cycle, in the mitochondria as well as the cytosol. MDA results from the oxidation of polyunsaturated fatty acids in the membrane by accumulated free oxygen radicals. MDA was negatively correlated with the activities of ROS-scavenging enzymes; thus, oxidative enzymes lower the MDA content in rice, improve rice performance, and enhance its immune competence. Plant endogenous hormones such as JA, SA, and BR also play an important role in growth, development, and rice immunity (protection from biotic and abiotic stress).
The NTW induces improved vegetative growth and increases the number of productive panicles, which also results in the achievement of high yield (Figure 7).
CONCLUSION
The nanosynergid Qiangdi 863 has great potential for precision agriculture. NTW has distinctive characteristics: nanosynergids emit electromagnetic waves that generate high-energy resonance between water molecules, and NTW shows enhanced light absorption at specific wavelengths, which changes the structure and energy of the water molecules. Activated water absorbed by the seed enhances amylase activity and continuously stimulates the cells toward germination. The present study applied the nanosynergid Qiangdi 863 in a field experiment. The nanosynergid Qiangdi 863 provides a prolonged, effective nutrient supply and is involved in all steps of the crop cycle, from sowing to transplanting and harvest. Cell energy is activated and cell function is stimulated, which enhances the metabolism of rice seedlings. NTW feeds into the oxidative enzyme system: SOD, the main superoxide scavenger, converts the negatively charged superoxide radical (O2−) into H2O2, while CAT and POD transform H2O2 into water and molecular oxygen. Oxidative enzymes lower the MDA content in rice; rice performance is improved and its immune competence enhanced. Plant endogenous hormones such as JA, SA, and BR also play an important role in growth, development, and rice immunity (protection from biotic and abiotic stress). The wave range (4 ∼ 25) of the nanosynergids is safe with reference to the restoration of genetic diversity.
Future Perspective
Nanosynergids require further investigation, particularly the development of nanosynergid systems that improve the release of NPK fertilizers for plant growth without significant environmental damage.
DATA AVAILABILITY STATEMENT
The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. | 2022-07-14T13:57:28.963Z | 2022-07-14T00:00:00.000 | {
"year": 2022,
"sha1": "5ab630d92b5d072daaf514ac5f27af9b0a986d95",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "5ab630d92b5d072daaf514ac5f27af9b0a986d95",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
866873 | pes2o/s2orc | v3-fos-license | Knowledge, attitude, and practices among Iranian neurologists toward evidence-based medicine
Background: Evidence-based medicine (EBM) is a current practice in medicine that produces clinical practice guidelines from well-designed, randomized, controlled trials. We studied the knowledge, attitude, and practice of EBM among neurologists who participated in the Iranian congress of neurology. Methods: A self-administered anonymous questionnaire was distributed to and filled in by neurologists. Results: A total of 200 neurologists were randomly sampled, with a response rate of 56%. 33.9% of responders had previously participated in EBM courses. The average total knowledge score was 4.05 ± 0.80 out of a maximum possible score of 5.0. Textbooks were still the most favored source of knowledge for our neurologists. Lack of time was the most frequently mentioned barrier to using EBM, and lack of motivation the least. Conclusion: Overall, Iranian neurologists had acceptable knowledge of and attitudes toward EBM, similar to those found in other studies.
Introduction
As neurologists, we are living in an era of exponential growth of knowledge. Needless to say, practicing medicine is very different today than it was 15 years ago, and it will be even more different in 2020. This will be challenging, especially with the current rate of medical knowledge doubling every 3 years; the doubling time is expected to be every 73 days by 2020. 1 Evidence-based medicine (EBM) is the process of producing clinical guidelines from well-designed, randomized, controlled trials published in literature databases. 2,3 Hence, the main goal of EBM is to optimize clinical decision-making and keep health practitioners' knowledge up-to-date. 4 The knowledge of medicine is growing very rapidly, and it is becoming more difficult (if not impossible) for medical professionals to stay up-to-date using all available resources. Therefore, physicians' attitudes toward EBM have become markedly more positive. 5 By learning how to practice EBM and adopting evidence-based practice protocols, medical professionals can keep pace with medical advances, and this can help enhance their clinical performance. 6 Neurologists, like other medical professionals worldwide, are being encouraged to apply EBM to improve their clinical care.
To the best of our knowledge, little is known about knowledge, attitude, and practice of EBM among Iranian neurologists.
To gain more information, we designed and conducted a survey.
The aim of our study was to answer five research questions on these topics: • What are the main resources Iranian neurologists use to answer their clinical questions?
• Do Iranian neurologists know enough about important clinical aspects of EBM?
• How frequently do they use EBM methods in their clinical practice?
• What are their limitations in using EBM? • Have they ever participated in EBM workshops?
Materials and Methods
A cross-sectional observational study was conducted on Iranian neurologists who participated in the 22nd Iranian Congress of Neurology and Electrophysiology, held in Tehran in 2014. We chose this congress because it was popular and participation was expected to be high.
We used a convenience-sampling procedure to select participants, and a self-administered anonymous questionnaire was distributed to them.
The questionnaire had been translated, validated, and used by Navabi, et al. 7 The reliability of the questionnaire was confirmed and rated as optimal by previous studies, and all of the items were deemed highly appropriate for this group of physicians.
The questionnaire included 22 questions and evaluated background data, knowledge, attitude, and practice, in four sections. The first section covered background data, including gender, age, level of education, and type of practice.
The second section evaluated their knowledge level of EBM. They were asked to answer five statements on a three-point scale: "correct," "incorrect," or "do not know." The total knowledge score was calculated by scoring the answers: one point for a correct answer and zero points for a wrong or "do not know" answer. The sum of the scores was the basis for calculating the level of knowledge.
In the next section, neurologists were asked to state their favorite sources for obtaining medical information. They were also tested on their knowledge of EBM terms such as EBM, clinical effectiveness, relative risk, systematic review, critical appraisal, and the Cochrane Collaboration. They had to state their knowledge level as "know well," "little is known," or "do not know anything." The participants were given enough time, and the questionnaires were completed anonymously.
One of the objectives of the survey was to understand their level of online medical resources usage, and their limitations in using EBM. We were also interested to know if they had participated in other EBM workshops.
Data were analyzed using Pearson's chi-squared test for the comparison of frequencies and mean scores, with SPSS (version 16, SPSS Inc., Chicago, IL, USA). A P < 0.05 was considered significant.
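For readers who want to reproduce this style of analysis, the sketch below scores a toy set of five knowledge items as described above and applies a chi-squared test to a hypothetical 2×2 table (e.g., prior EBM course attendance versus a high knowledge score). The answer key and counts are invented for illustration; only the procedure mirrors the paper's analysis.

```python
from scipy.stats import chi2_contingency

# Score one respondent's five knowledge items:
# 1 point per correct answer; "incorrect"/"do not know" score 0.
def knowledge_score(answers, key):
    return sum(1 for a, k in zip(answers, key) if a == k)

key = ["correct", "incorrect", "correct", "correct", "incorrect"]
respondent = ["correct", "do not know", "correct", "incorrect", "incorrect"]
print("knowledge score:", knowledge_score(respondent, key))  # 3 out of 5

# Hypothetical 2x2 contingency table:
# rows = attended EBM course (yes/no), cols = knowledge score >= 4 (yes/no)
table = [[25, 13],   # attended course
         [30, 45]]   # did not attend
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```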
Results
A total of 200 survey questionnaires were distributed among neurologists who attended the congress; 113 were returned, giving a 56% response rate. 81.4% of the responders were male, with a mean age of 44.4 years. 33.6% were university faculty members, with 11.8 years of work experience after finishing their neurology training.
The responders achieved a mean knowledge score of 4.05 ± 0.80 out of a maximum of 5.
Our survey responders reported their sources of professional knowledge: 84.1% used textbooks to answer their clinical questions. Other sources of medical knowledge were online resources (79.6%), expert opinion (61.9%), and personal experience (24.8%).
Only 33.9% of the responders reported having attended EBM courses before. "Clinical effectiveness" was the most familiar and "EBM" the least familiar term among the participants. More detail is shown in figure 1.
We also asked about limitations in using EBM among Iranian neurologists. Figure 2 shows the frequencies of these barriers. By far the most frequently mentioned barrier was lack of time (69.9%), while lack of motivation was the least mentioned (0.9%). On average, participants in our survey spent 1-5 hours/week online finding answers to their clinical questions.
Discussion
This was a questionnaire-based survey of Iranian neurologists who participated in the annual congress of the Iranian Neurology Association. An acceptable number of them kindly returned the self-administered questionnaire. 8 In general, neurologists who participated in our survey welcomed EBM and had acceptable basic knowledge and understanding of it. Compared with the same survey of dentists by Navabi, et al., 7 neurologists had a better mean total knowledge score and were more likely to have participated in an EBM workshop. Both groups admitted that they usually prefer textbooks to other sources for resolving uncertainty in clinical practice. Lack of time was expressed as the predominant barrier to using EBM. Ghahremanfard, et al. 9 studied knowledge of and attitudes toward EBM among medical students in Semnan, Iran, in 2014. Based on their results, only 24.5% of medical students had good basic information about and familiarity with the term EBM. A positive attitude toward EBM existed in 89.3% of participants. Only 29% of medical students reported having had formal training in search strategies.
Our findings were quite similar to those of other studies conducted in the Middle East assessing healthcare providers' attitudes toward EBM. They also showed that the major reported barrier to practicing EBM was lack of free personal time. 10,11 Barghouti, et al. 8 assessed Jordanian family practitioners' attitudes toward and awareness of EBM in 2009. Only 20.4% had received formal training in research and critical appraisal. They also found that lack of personal time was the main perceived barrier to practicing EBM.
In a study of Australian general practitioners, the most commonly cited barrier to EBM was patient demand for treatment despite the lack of evidence of effectiveness. In that group of physicians, the next most highly rated barrier was lack of time, which was rated a "very important barrier" by significantly more participants than lack of skills. 12 The same findings were reported in a Norwegian study in 2009. 13 The evidence-based approach can be rationalized as the best treatment in resource-limited countries like Iran. It can also be the most cost-effective approach, by reducing clinical practices that have no proven benefit. At present, EBM faces major barriers, such as its inherent complexity, misperceptions, absence from the medical curriculum, and unawareness among practicing clinicians. 14 In general, physicians experience significant barriers to integrating EBM into clinical practice. 15 The following steps are recommended to overcome these barriers: effective teaching of EBM skills during residency, motivating established clinicians, formulating locally applicable guidelines, increasing internet accessibility, providing telemedicine facilities at remote centers, and disseminating appropriate information via free journals or even newspapers. Strong political commitment is needed so that these steps can help lay the foundation of EBM in Iran.
• Textbooks were the main resource for Iranian neurologists to answer their clinical questions; they relied less on expert opinion, which we consider a major change in their outlook.
• Iranian neurologists knew the important clinical aspects and terms of EBM practice well enough.
• They used EBM methods in their clinical practice more than other medical practitioners in similar studies.
• They had good motivation for using EBM, but lack of time was a major barrier.
Conflict of Interests
The authors declare no conflict of interest in this study. | 2018-04-03T04:31:42.045Z | 2017-01-04T00:00:00.000 | {
"year": 2017,
"sha1": "ad74b62532e9e78e2ed6a796f5c1d9cbad781dfb",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4da51c0592aff2a5ccd3e3248c5f08134db4c192",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7003528 | pes2o/s2orc | v3-fos-license | Formulation development and evaluation of zolmitriptan oral soluble films using 2² factorial designs
Objective: The present investigation involves the development of zolmitriptan oral soluble film (OSF) formulations, their optimization with quality by design (QbD) using natural polymers, and their evaluation. Materials and Methods: Initially, various natural polymers such as sodium alginate, pectin, and gelatin were screened by casting films using the solvent casting technique, and the prepared films were evaluated. Based on the physical and mechanical properties, sodium alginate was selected as the best film former, and zolmitriptan-loaded films were cast. The formulation was optimized with the help of a 2² factorial experimental design (QbD), in which sodium alginate concentration and plasticizer concentration were the factors, each at two levels. The drug-loaded films were evaluated for various mechanical and physicochemical properties and for in vitro drug release. Factor effects were interpreted by calculating the main factor effects and by plotting the interaction plots. Results: The thickness of the films, disintegration time, and percent drug loading efficiency were in the ranges 0.698 ± 0.13-1.318 ± 0.22 mm, 175 ± 3.1-280 ± 1.7 s, and 68.34 ± 0.5-94.70 ± 0.7% w/v, respectively. Cumulative percent drug released was 61.8 ± 2.6-94.7 ± 4.1% after 30 min. Polymer concentration, at both levels of plasticizer, had a statistically significant effect on drug loading efficiency and in vitro drug release rate. The X2 formulation was found to be excellent in drug loading efficiency and in vitro drug release; hence, drug-excipient compatibility studies using Fourier transform infrared spectroscopy and stability studies for 60 days were carried out for the X2 formulation, which was found to be stable. Conclusion: Sodium alginate OSFs containing zolmitriptan were successfully prepared, optimized, and evaluated.
INTRODUCTION
Migraine is a type of neurological syndrome affecting at least 12-28% of the world's population at any one time. [1] This disorder is characterized by splitting headaches that come in waves, is very debilitating, and becomes a major handicap for the sufferer. Clinical scientists and medical doctors have long known about the functional impairment of the brain in patients during a migraine attack. One recent study has also confirmed that the cognitive abilities of patients who suffer repeated migraine attacks decrease over time. [2,3] Several drugs and dosage forms are available, differing in their onset of action. In a liquid formulation, the drug shows action immediately but has a very short half-life, and drugs are generally unstable in liquid preparations. [7] To overcome these drawbacks, oral soluble films (OSFs) came into existence. OSF drug delivery has emerged as an advanced alternative to traditional tablets, capsules, and liquids. OSFs are similar in size, shape, and thickness to a postage stamp. An OSF not only ensures more accurate administration of drugs but can also improve the ease of administration. [8,9] These properties are especially beneficial for pediatric, geriatric, and neurodegenerative-disease patients, for whom proper and complete dosing can be difficult. Therefore, the focus of the present study was to develop, with the help of factorial designs, an optimized OSF formulation containing zolmitriptan using natural polymers.
Materials
Zolmitriptan was a gratis sample from Dr. Reddy's Laboratories, India; sodium alginate, pectin, gelatin, mannitol, and aspartame were purchased from Qualigens Fine Chemicals, India; ascorbic acid and propylene glycol were purchased from Loba Chemie, India; and all other materials were of analytical grade and purchased from NSB Pharmaceuticals, Vijayawada, India.
Solubility studies of zolmitriptan
Solubility studies of zolmitriptan were carried out in different phosphate buffer solutions. Phosphate buffers of pH 6.4, 6.8, and 7.4 were prepared as per Indian Pharmacopoeia specifications, and 5 ml of each buffer solution was taken in three different conical flasks; an excess amount of drug was added, and the flasks were kept on an orbital shaker at 100 rpm for 2 h. The conical flasks were then kept aside overnight to equilibrate the dissolved and undissolved portions of drug. After 24 h, the samples were filtered, and the absorbance was measured at 224.2 nm using an ultraviolet (UV)-visible spectrophotometer (ELICO SL 159) after making the necessary dilutions. The concentration of dissolved drug was calculated using the standard calibration curve.
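The last step, converting absorbance to concentration via a calibration curve, is simple linear regression. Below is a minimal sketch with invented calibration standards and an invented dilution factor; the paper does not report these numbers.

```python
import numpy as np

# Hypothetical calibration standards for zolmitriptan at 224.2 nm
conc_ug_ml = np.array([2, 4, 6, 8, 10])          # concentration (ug/ml)
absorbance = np.array([0.11, 0.22, 0.34, 0.45, 0.56])

# Fit the Beer-Lambert line A = m*C + b by least squares
m, b = np.polyfit(conc_ug_ml, absorbance, 1)

def concentration(a_sample, dilution_factor=1.0):
    """Back-calculate the concentration of a diluted sample."""
    return (a_sample - b) / m * dilution_factor

# Example: sample absorbance 0.40 after a hypothetical 100-fold dilution
print(f"solubility ~ {concentration(0.40, dilution_factor=100):.1f} ug/ml")
```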
Screening of film forming polymers
Films were prepared using natural polymers such as pectin, gelatin, and sodium alginate by the solvent casting technique. The polymer was dissolved in water to form a viscous solution. All other ingredients, viz., the plasticizer (propylene glycol) and a combination of mannitol and aspartame as sweeteners, were added, except the drug. The solution was then sonicated for 15 min to remove entrapped air. Finally, the films were cast on a plain glass plate over a measured area and allowed to dry for 1 h in a hot-air oven at 60°C. The different film-forming agents cast into films were examined for their physical and mechanical properties, such as appearance, thickness, folding endurance, and time to dissolve.
Preparation and optimization of drug-loaded films using quality by design (2² factorial design)
Sodium alginate was selected as the best film former, based on the physical and mechanical properties of the cast placebo films, for the preparation of the zolmitriptan OSFs. Sodium alginate was allowed to soak in the required quantity of water for sufficient time (about 10 min) until it formed a uniform viscous solution. The drug was dissolved in 10 ml of water and added to the polymeric solution. All other ingredients were then added, and the entire mixture was sonicated to remove entrapped air. This solution was cast as a film on a glass plate over a measured area and allowed to dry for 1 h in a hot-air oven at 60°C. [10] The formulation was optimized by quality by design, i.e., 2² factorial experiments. An experiment in which two or more factors are investigated simultaneously is called a factorial design; the different designated categories of the factors are called levels. In factorial designs, we may study not only the effects of the individual factors but also, if the experiment is properly conducted, the interaction between the factors. [11] The factors, their levels, and the compositions of the various drug-loaded films are given in Tables 1-3.
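To make the design concrete, the sketch below enumerates the four runs of a 2² full factorial design. The low/high values for polymer and plasticizer are placeholders, since the actual levels are given in Tables 1-3.

```python
from itertools import product

# Placeholder low/high levels (the real values are in Tables 1-3)
polymer_levels = {"low": 2.0, "high": 4.0}        # % w/v sodium alginate (assumed)
plasticizer_levels = {"low": 1.0, "high": 2.0}    # % v/v propylene glycol (assumed)

# Enumerate the 2^2 = 4 runs: X1..X4
runs = []
for i, (a, b) in enumerate(product(("low", "high"), repeat=2), start=1):
    runs.append({
        "formulation": f"X{i}",
        "polymer (A)": polymer_levels[a],
        "plasticizer (B)": plasticizer_levels[b],
    })

for run in runs:
    print(run)
```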
Characterization of oral soluble films

Thickness
The thickness of the film was measured at five locations (the center and the four corners) using Vernier calipers and averaged to determine the mean thickness in mm. Samples with air bubbles or nicks, or with a mean thickness variation of >5%, were excluded from analysis.
Folding endurance
To determine the folding endurance, a strip of film was cut and repeatedly folded at the same place until it broke. The number of times the film could be folded at the same place without breaking gave the value of folding endurance. The procedure was repeated three times, and the average value was calculated.
Tensile strength
Tensile strength was measured using an analog tensile tester (model TKG, FSA, India) in triplicate. Tensile strength was computed from the applied load at rupture and the cross-sectional area of the fractured film using the following equation: Tensile strength = Load at rupture / Cross-sectional area of the film.
Percentage elongation
Percentage elongation was calculated by measuring the increase in length of the film after the tensile strength measurement, using the following formula:

Percentage elongation = [(L − L0) × 100]/L0, where L is the final length and L0 the initial length. [11]

Time to dissolve the film

One square centimeter of film of each formula was added to 50 ml of distilled water to determine the time to dissolve the film, and the time was noted. The procedure was repeated, and the average value was determined in seconds.
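The two mechanical formulas above (the load-per-area definition of tensile strength and the percentage elongation) amount to a few lines of arithmetic; here is a minimal sketch with invented sample measurements.

```python
# Invented sample measurements for one film strip
load_at_rupture_n = 5.2          # N, applied load at rupture
cross_section_mm2 = 1.05 * 10.0  # thickness (mm) x width (mm)
initial_length_mm = 50.0
final_length_mm = 57.5

tensile_strength = load_at_rupture_n / cross_section_mm2  # N/mm^2
percent_elongation = (final_length_mm - initial_length_mm) * 100 / initial_length_mm

print(f"tensile strength   = {tensile_strength:.3f} N/mm^2")
print(f"percent elongation = {percent_elongation:.1f} %")
```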
Estimation of drug content
A film equivalent to 10 mg of drug was weighed accurately and transferred to a glass beaker, to which 10 ml of methanol was added. The contents were thoroughly mixed, sonicated for 5 min, and filtered; 0.1 ml of the filtrate was taken in a 10 ml volumetric flask and made up to volume with distilled water. The absorbance was then measured at 224.2 nm using a UV-visible spectrophotometer, and the amount of drug present was calculated using the calibration curve. The procedure was repeated thrice, and the average drug content (% w/v) was calculated.
In vitro drug release studies of zolmitriptan drug loaded films
In vitro dissolution studies were carried out using a USP-II paddle apparatus with pH 6.4 phosphate buffer as the dissolution medium. The vessels were filled with 200 ml of dissolution medium, the temperature was maintained at 37 ± 0.5°C throughout the study, and dissolution was carried out for 45 min. Samples were withdrawn at 2, 4, 6, 8, 10, 15, 30, and 45 min. Sink conditions were maintained by replacing an equal volume of buffer during dissolution to mimic in vivo conditions. The absorbance of the collected samples was measured after making the necessary dilutions with buffer using a UV-visible spectrophotometer, and the amount of drug released was calculated from the calibration curve.
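Because each withdrawn aliquot is replaced with fresh buffer, the cumulative amount released at each time point needs a volume-replacement correction. The sketch below shows one common form of that bookkeeping with invented sample concentrations; the paper does not report the aliquot volume, so the 5 ml value is an assumption.

```python
# Hypothetical measured concentrations (ug/ml) at each sampling time (min)
times = [2, 4, 6, 8, 10, 15, 30, 45]
conc = [4.1, 7.9, 11.2, 14.0, 16.5, 20.3, 26.8, 29.5]

V_medium = 200.0   # ml, dissolution medium (from the paper)
V_sample = 5.0     # ml, withdrawn aliquot (assumed; not reported)
dose_ug = 10_000   # 10 mg of drug per film (assumed label claim)

cumulative = []
removed = 0.0  # drug carried out in all previously withdrawn aliquots (ug)
for c in conc:
    amount = c * V_medium + removed       # correct for what was sampled out
    cumulative.append(100 * amount / dose_ug)
    removed += c * V_sample

for t, pct in zip(times, cumulative):
    print(f"{t:>3} min : {pct:5.1f} % released")
```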
Drug interaction studies

Fourier transform infrared spectroscopy

Preparation of samples to obtain FTIR spectra
Fourier transform infrared (FTIR) spectra were recorded using an FTIR spectrophotometer (Shimadzu). The samples were ground and mixed thoroughly with potassium bromide, an infrared matrix, at a 1:5 (sample:KBr) ratio. The KBr discs were prepared by compressing the powders at a pressure of 5 tons for 5 min in a hydraulic press. Forty scans were obtained at a resolution of 4 cm−1, from 4000 to 400 cm−1.
Stability studies
Films of formula X2 were wrapped in butter paper followed by aluminium foil and kept in an aluminium pouch, which was heat-sealed at the end, and stored at 30°C and 60% relative humidity. The films were evaluated periodically for percent drug content and time to dissolve. Stability studies were carried out for a period of 3 months.
Solubility studies of zolmitriptan in different buffers
The salivary pH is not the same for all individuals; it varies depending on a person's diet, health condition, and other factors. The pH of a normal individual is in the range of 6.2-7.4. As oral films are intended to dissolve in saliva within the oral cavity and release the drug, solubility studies of zolmitriptan were conducted in different buffers within the salivary pH range, and the results are displayed in Table 4. Zolmitriptan was found to be most soluble in phosphate buffer pH 6.4, so this buffer was used as the dissolution medium for the in vitro drug release studies.
Screening of the film formers
Natural polymers such as pectin, gelatin, and sodium alginate were screened for the preparation of films because these polymers are freely soluble in water, nontoxic, biocompatible, nonirritant, devoid of side effects, and have good wetting and spreading properties. These polymers also exhibit good tensile strength, are readily available, and are economical. [12] The films were cast by the solvent casting technique on a plain glass plate and examined for their physical and mechanical properties, such as appearance, thickness, folding endurance, and time to dissolve. The results are shown in Table 5. Based on these physical and mechanical properties and the disintegration time, sodium alginate was found to be the best polymer and was therefore used for further study.

Casting of drug-loaded films and optimization using 2² factorial designs

Using the 2² factorial design, four formulations were prepared with sodium alginate by the procedure described in the experimental methods. The prepared films were evaluated for mechanical and physicochemical properties, drug loading efficiency, in vitro drug release, and drug-excipient compatibility.
Mechanical properties

Thickness, tensile strength, percent elongation, elastic modulus, and folding endurance
The thickness of the films increased as the percent weight of the film-forming polymer was increased. The tensile strength and percent elongation of a film are important for resisting the mechanical stresses that occur during packing, storage, and shipping. All the films possessed good tensile strength. The films of X2 were smoother than those of the other formulations. [13][14][15] The folding endurance was highest for the X2 films. All the mechanical properties of the films are given in Table 6.
Drug loading efficiency
The percentage drug loading efficiency of all the formulations was in the range of 68.34 ± 0.5% to 94.70 ± 0.7%. Drug loading efficiency was highest for formula X2; the results are given in Table 6. From the factor analysis, it was observed that the polymer had a negative effect on percentage drug content.
In vitro drug release studies
In vitro drug release studies were carried out for 45 min. The cumulative percent drug released, the rate of drug release, and T50 were computed using the first-order rate equation, and it was evident from the R² values that the rate of drug release in all the compositions followed first-order kinetics. The dissolution data were also plotted in accordance with the Hixson-Crowell cube root law, i.e., the cube root of the initial concentration minus the cube root of the percent remaining, as a function of time; a nonlinear relationship was observed in all cases, so the drug release profiles did not follow the Hixson-Crowell model. All the dissolution profiles are shown in Figures 1 and 2.
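The model discrimination described here, first-order versus Hixson-Crowell, comes down to comparing the linearity (R²) of two transformed plots. A minimal sketch with invented release data (not the measured profiles in Figures 1 and 2):

```python
import numpy as np

# Hypothetical percent-released data over 45 min
t = np.array([2, 4, 6, 8, 10, 15, 30, 45], dtype=float)
released = np.array([12, 22, 31, 39, 46, 58, 80, 92], dtype=float)
remaining = 100.0 - released

def r_squared(x, y):
    """R^2 of a least-squares straight-line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# First-order: ln(% remaining) vs t should be linear
r2_first_order = r_squared(t, np.log(remaining))

# Hixson-Crowell: 100^(1/3) - (% remaining)^(1/3) vs t should be linear
r2_hixson = r_squared(t, 100 ** (1 / 3) - remaining ** (1 / 3))

print(f"first-order    R^2 = {r2_first_order:.4f}")
print(f"Hixson-Crowell R^2 = {r2_hixson:.4f}")

# First-order rate constant and T50 from the slope of ln(% remaining) vs t
k = -np.polyfit(t, np.log(remaining), 1)[0]
print(f"k = {k:.4f} 1/min, T50 = {np.log(2) / k:.1f} min")
```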
Factor effects on percent drug released, disintegration time, and percent drug content
The responses considered were the % drug released at 30 min, the disintegration time, and the % drug content of the films. [16,17]
Effect of factor A (polymer concentration)
The factor A effect was calculated by subtracting the average response of all experimental runs with A at its low level from the average response of all runs with A at its high level, using the following formula (reconstructed by analogy with the factor-B expression given below): Main effect of A = (a2b1 − a1b1) + (a2b2 − a1b2)
Effect of factor B (plasticizer quantity)
The factor B effect was calculated by subtracting the average response of all experimental runs with B at its low level from the average response of all runs with B at its high level, using the following formula: [18,19]

Main effect of B = (b2a1 − b1a1) + (b2a2 − b1a2)

The factor effects are given in Table 7. The polymer concentration had a negative effect on the percentage of drug released and on drug loading efficiency, indicating that the rate of drug release and the drug loading efficiency decreased as the polymer concentration increased. The plasticizer had a negligible effect on the percentage of drug released. The polymer concentration had a positive effect on disintegration time, indicating that disintegration time increased with polymer concentration, whereas the plasticizer had negligible influence on disintegration time. The interaction plots in Figure 3 also show that the plasticizer, at both levels of polymer, did not influence drug loading efficiency, in vitro drug release, or disintegration time. The films were, however, tackier when the plasticizer concentration was at its high level, because the plasticizer softens the polymer. [20,21]

Drug excipient compatibility studies

Drug-excipient compatibility studies were carried out by FTIR, and the results are given in Figure 4a and b.
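Returning to the two factor-effect formulas above, the sketch below computes the main effects of A and B (and, for completeness, the AB interaction) from the four run responses. The response values are invented placeholders, not the measured data in Table 7; note the paper's formulas sum rather than average the paired differences, and the code follows the paper.

```python
# Responses of the four 2^2 runs, keyed by (A level, B level).
# Values are invented placeholders, e.g. % drug released at 30 min.
response = {
    ("a1", "b1"): 94.7,
    ("a2", "b1"): 63.0,
    ("a1", "b2"): 93.5,
    ("a2", "b2"): 61.8,
}

# Main effect of A: high-A runs minus low-A runs, summed over B levels
effect_A = (response[("a2", "b1")] - response[("a1", "b1")]) + \
           (response[("a2", "b2")] - response[("a1", "b2")])

# Main effect of B: high-B runs minus low-B runs, summed over A levels
effect_B = (response[("a1", "b2")] - response[("a1", "b1")]) + \
           (response[("a2", "b2")] - response[("a2", "b1")])

# AB interaction: effect of B at high A minus effect of B at low A
interaction_AB = (response[("a2", "b2")] - response[("a2", "b1")]) - \
                 (response[("a1", "b2")] - response[("a1", "b1")])

print(f"main effect of A = {effect_A:+.1f}")  # negative: more polymer, less release
print(f"main effect of B = {effect_B:+.1f}")
print(f"AB interaction   = {interaction_AB:+.1f}")
```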
Stability studies
The films did not show any statistically significant change in appearance, % drug content, or disintegration time on storage. The % drug content and disintegration responses were the same as before storage, indicating that the X2 film was stable.
CONCLUSION
From the above experimental results, it can be concluded that sodium alginate has good film-forming properties and can be used for the preparation of OSFs. With increasing polymer concentration, the drug loading efficiency and the rate of drug release decreased; this was confirmed from the interaction plots and by calculating the factor effects. Plasticizer concentration did not have a statistically significant influence on any of these responses at either level of polymer concentration. Maximum drug loading efficiency was found in the X2 formulation, and the rate of drug release followed first-order kinetics.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T03:34:54.816Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "89ba3e37d7774817d9d816491103a6cbece35a49",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.jpionline.org/index.php/ijpi/article/download/281/260",
"oa_status": "BRONZE",
"pdf_src": "Anansi",
"pdf_hash": "999f3560cf131367af4e268e897587b514222157",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
10584891 | pes2o/s2orc | v3-fos-license | Primary healthcare in Portugal: 10 years of contractualization of health services in the region of Lisbon
1 Gabinete de Auditoria Interna, Administração Regional de Saúde de Lisboa e Vale do Tejo. Av. Estados Unidos da América 77, 1749-096 Lisboa, Portugal. ricardo.monteiro.baltazar@gmail.com
2 Diretora Executiva do ACeS Oeste Norte, Administração Regional de Saúde de Lisboa e Vale do Tejo. Lisboa, Portugal.
3 Núcleo de Estudos e Planeamento, Administração Regional de Saúde de Lisboa e Vale do Tejo. Lisboa, Portugal.
4 Departamento de Planeamento e Contratualização, Administração Regional de Saúde de Lisboa e Vale do Tejo. Lisboa, Portugal.
Cuidados primários em saúde em Portugal: 10 anos de contratualização com os serviços de saúde na Região de Lisboa
Introduction
In Portugal, the right to health was only recognized in 1971. This led to the establishment of first-generation health centers, whose primary concern was to integrate the multiple institutions with preventive and public health roles, until then organized vertically. These centers were strongly influenced by public health concepts, oriented mainly to vaccination, mother-and-child surveillance, and school health.
Subsequently, second-generation health centers were characterized by a bureaucratic organizational structure within sub-regional structures, coordinated among themselves in the Regional Health Administrations (established in 1982). The activity of these health centers was based on the medical career of general practitioners set in 1983. The organizational model of these health centers proved to be out of alignment with the needs of both users and professionals, and it created space for an experimental project of organizational innovation, the Alfa Project (1996), and for the Experimental Remuneration Scheme (RRE) (1998).
The Alfa Project was an example of a new way of organizing and providing healthcare by general practitioners and family doctors, with the aim of granting autonomy in exchange for objective accountability for improved access and quality. It also sought to encourage small groups of family physicians, in collaboration with other health professionals, to care autonomously and innovatively for a list of users, taking into account their capacity and available means.
The evaluation of this project pointed to the centrality of professional remuneration in achieving the best performance. The RRE (1998) was thus established, defining a new payment mode for physicians: a salary component associated with the quantity of work and the quality of professional performance, plus a weighted training component, as an alternative to the traditional salary model. The cost-effectiveness of this measure and the satisfaction of the professionals attested to the benefit of introducing the RRE.
In 1996 (with the experience gained in ARSLVT), Portugal started the contractualization process as an important instrument to support financing, in a perspective of greater equity and guaranteed access to health by citizens. The Health Service Monitoring Agencies (AASS) 1, founded in 1997 and later renamed Health Service Contractualization Agencies 2, one for each health region, were to be the intervening entities in the system, with representation of citizens and the administration, and with the mission of making health needs known and advocating the interests of society in general. Equity and technical rationality should therefore guide the distribution of financial resources among health institutions in each region.
The establishment of agencies aimed mainly at ensuring the best use of public health resources, associated with the concept of a "middle agency" between citizens and healthcare services. It is governed by a clear distinction between State financing of providers and payment for the health care actually provided. The relationship of "perfect agency" would be one in which the health status achieved would be exactly what users would have been able to achieve had they been able to choose with full knowledge of the situation. 3
Thus arise program-agreements, which combine the planned activity with the level of financial resources delivered to the institution, instead of tying these financial transfers to the internal structure. As a first step, the program-budget model was partially implemented and involved only part of the public health institutions. According to one author, 4 a second stage would extend it to general services, and new financing models would be developed in a third stage.
These AASSs became five (in 1998 and 1999) and were to contractualize with all hospitals and health centers. In those years, contractualization with health centers was only successful in the ARS of Lisbon and Tagus Valley and of Alentejo, whereas in the remaining three health regions of the continent contractualization occurred only with hospitals. In these cases, the ARS carried out the performance control.
In 1999, the AASS were renamed Health Care Contractualization Agencies, and the National Council of Agencies was established, responsible for cross-linking and concerted analysis of information and action in pursuit of the respective attributions. 5
The ARSLVT Contractualization Agency started its activities with the competence of negotiating the programmed budgets 6 of the hospitals of the region, and in 1999 the process came to include contractualization of Primary Health Care (PHC), with an evaluation of the performance of the 86 health centers existing at the time, based on a list of analysis and follow-up indicators provided in documents prepared 7 by the Health Services Monitoring Agency.
The analysis and monitoring indicators were bundled into five groups: production and production progress; productivity and productivity progress; access and progress in access and use; health surveillance (quality); and efficiency and expenditure control. Reference values (benchmarks) were set for each indicator, from which a comparative analysis of health centers was performed and a final classification for each institution was calculated.
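As an illustration of how such a classification might be computed, the sketch below normalizes each indicator against its reference value and averages within and across the five groups. The indicator names, values, cap, and aggregation rule are all invented; the original documents do not specify how the final classification was calculated, so this is only a plausible reading.

```python
# Invented indicator values and reference values (benchmarks) for one
# health center, grouped as in the text; the aggregation rule is assumed.
groups = {
    "production":          {"consultations_per_capita": (2.9, 3.0)},
    "productivity":        {"consultations_per_doctor": (4200, 4000)},
    "access":              {"users_with_family_doctor_pct": (88, 90)},
    "health surveillance": {"vaccination_coverage_pct": (96, 95)},
    "efficiency":          {"drug_cost_per_user": (105, 110)},  # lower is better
}
lower_is_better = {"drug_cost_per_user"}

def indicator_score(name, value, reference):
    # Ratio to benchmark, capped at 1.2 so one indicator cannot dominate
    ratio = reference / value if name in lower_is_better else value / reference
    return min(ratio, 1.2)

group_scores = {
    g: sum(indicator_score(n, v, r) for n, (v, r) in inds.items()) / len(inds)
    for g, inds in groups.items()
}
final = sum(group_scores.values()) / len(group_scores)

for g, s in group_scores.items():
    print(f"{g:<20} {s:.2f}")
print(f"final classification  {final:.2f}")
```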
In 2000, in addition to the process of negotiating program-budgets with health centers, the Contractualization Agency submitted an opinion on the distribution of operating subsidies to be allocated to each of the three Health Subregions 8 and to each health center, an innovative exercise that changed the logic of financial distribution, previously based exclusively on historical expenditure. In that same year, however, the contractualization agencies' activity was interrupted, although legislation never extinguished them.
In 2005, the PHC reform began in Portugal. It was to be based on a new structure to respond to the population's health problems at local, regional, and national levels. This was one of the most successful public service reforms of recent decades in Portugal, implemented by the Mission for Primary Health Care (MCSP), which was responsible for the overall project of launching, coordinating, and monitoring the reconfiguration strategy of health centers and the implementation of family health units (USF).
The process of change had two fundamental vectors: one "top-down," in line with the restructuring of the State Central Administration and the new macro-structure of the Ministry of Health; the other "bottom-up," with the involvement of health professionals through voluntary applications to create autonomous health care teams, the USFs. These applications began in March 2006, and the first USFs were formalized in the field in September of that same year. Applications to form a USF are subject to technical evaluation and must comply with a series of previously established requirements, in a dynamic process of continuous adjustment of the organizational model.
Resumed with the PHC reform, the contractualization process established as a priority its introduction into all SNS services. Normative Order No. 22.250/2005 9, of October 3, and Normative Order No. 23.825/2005 10, of November 22, relaunched the project and rehabilitated the ACSS. The relaunch of contractualization is also associated with the implementation and functioning of USFs; it was meant to be a tool for inducing greater accountability and demand, so that better health outcomes could be achieved with greater efficiency, while guaranteeing transparency, equity, and appropriateness of the whole process to the professionals involved and to society. The lack of experience of all those involved suggested that contractualization should be seen as a learning process, not guided by the levels of demand accepted at the outset. In this context of change, the process of contractualization with USFs began on October 2, 2006, with the implementation of the first stage of contractualization, which consisted of direct negotiation with groups of healthcare providers without financial and administrative autonomy. This stage had two phases: the first involved all USFs that began their activity by September 30, 2006; the second involved all USFs and included contractualization for 2007 (beginning October 1, 2006). In May 2006, the Central Administration of the Health System, I.P. (ACSS) identified a "Matrix of indicators to be contractualized for 2006 and 2007," consisting of 49 indicators distributed in four classes: access, care performance, perceived quality, and economic performance, resulting from work developed by the MCSP. The indicator tables currently in use can be consulted at the ACSS. 11
Compliance with the goals was to be evaluated quarterly, in an automated way, without additional workload for USF members, assuming full accuracy of the information in the computer system used by the USF. Compliance with contractualized indicators was associated with incentives.
Health facilities' models in the Primary Health Care Reform and contractualization
It should be noted that Model A and Model B USFs were established in the PHC reform, distinguished as follows: (1) Model A corresponds to a phase of learning and of improving teamwork in family health, while also representing the initial contribution to the development of internal contractualization practice; it is an indispensable phase in situations where isolated individual work is deeply rooted and/or where there is no tradition or practice of assessing technical-scientific performance in family health. (2) Model B is indicated for teams with greater organizational maturity, where family health teamwork is an effective practice and there is willingness to accept the contractualization of more demanding performance levels.
The right to incentives implies drawing up a Plan for the Application of Institutional Incentives, to be applied in training, documentation, equipment, and the rehabilitation of infrastructure. This institutional incentive represents a qualification of investment; that is, it prioritizes investment in the units that comply with the contractualized objectives.
The contractual methodology conceived for Model A USFs, in operation since 2006, already included institutional incentives for the respective USFs. Decree-Law No. 298/2007 12 consecrated and expanded this possibility to all USFs, regardless of their model.
This legal document envisaged granting financial incentives to professionals of Model B USFs, with physicians compensated for specific activities and the incentives of the other professionals (nurses and administrative personnel) integrated into pay-for-performance. Finally, Administrative Rule No. 301/2008, of April 18, 13 regulated the criteria and conditions for attributing institutional and financial incentives to USFs and their professionals.
Also in 2006, a study commissioned by the MCSP was published comparing the performance of the RREs (very similar to the USFs) with the "conventional" health centers; in a second stage, the budgetary impact of USFs was analyzed in 2007. The conclusions of Gouveia et al. 14 pointed to a reduced cost per patient, per consultation, in medicines, and in CDTM under the RRE model, compared with the model current in the "conventional" health centers.
Decree-Law No. 222/2007, of May 29, 15 published the new ARS structure, with the objective of creating a new model centered on simplifying the existing organic structure and reinforcing its attributions, toward the greater autonomy and functional accommodation required by the progressive extinction of the health sub-regions. In the RSLVT this determined the extinction of the three health sub-regions (Lisbon, Setúbal, and Santarém) and the creation of 22 ACeS through Decree-Law No. 28/2008, of February 22. 16
The Statutes of the Regional Health Administration of Lisbon and Tagus Valley (ARSLVT), I.P. were published in Administrative Rule No. 651/2007, of May 30, 17 which formalized the establishment of the Contractualization Department (CD), with competencies defined in Article 5 of the diploma, resulting from the 10-year experience of the Monitoring/Contractualization Agencies and extended to the other four health regions of Portugal. Among its competencies is proposing the allocation of financial resources to healthcare institutions and services through the negotiation, conclusion, and review of program-agreements.
Thus, as part of the PHC reform, in 2007 each USF began to contractualize with the respective CD 20 indicators for the basic services portfolio and one indicator for each activity conducted within the additional services portfolio. From 2008, 15 indicators were contractualized, of which 13 were common to all USFs and 2 were selected by each USF from among the indicators validated by the ACSS.
PHC external and internal contractualization
The contractualization methodology for 2009 corresponded to Step 2 of the contractualization process; in other words, although the ACeS figure already existed, the ACeS would be in the installation stage during the first semester of 2009, which is why contractualization continued to be carried out, as before, between the USFs and the CD of the ARS. There was also a concern with the development of Local Health Units (ULS): since these had previously established a program-agreement with the Ministry of Health, it was important that a representative of the Board of Directors be appointed to monitor the contractualization process between the USFs and the CD, so that the contractualized indicators would be reflected in the program-agreement.
Other important changes occurred during 2009: (1) a working paper published by the ACSS 18 stated that the definition of goals would depend on negotiation between the USF and the CD and should consider the trend of indicators in the USF itself and in the surrounding health centers; the goals should be demanding but viable, based on good practices, so that the characteristics inherent to the constitution of USFs would deliver better health outcomes without undermining the organizational implementation and development of the teams; (2) a working group 19 was set up to develop contractualization arrangements with PHC; (3) USFs were strengthened and expanded, and the ACeS were implemented.
At the contractualization level, this was a turning point: the stage of direct negotiation with the USFs ended, and a cycle of internal and external contractualization began, in which the support of the ARS CDs for the ACeS, to substantiate their management autonomy, would be fundamental for adapting their management structures to a culture of excellence through governance and accountability. This is one of the great challenges currently faced by contractualization, through the dynamics of articulation between the strategic and operational management of health organizations. The dialectic between the two forms of contractualization is as follows: the external form implies a philosophy of accountability and transparency, transposed within the organization through internal contractualization. According to Matos et al., 20 internal contractualization is an objectives-based management tool capable of aligning externally contractualized objectives with the mission of health institutions, in which the achievement of effective health gains, and not just the production of deeds, should be valued.
Implicitly, internal contractualization entails a new form of internal relationship and alternative decision-making methodologies, presenting itself as a participatory management model that creates consistency among all the activities of the organization by aligning activities with a strategy, aiming to achieve objectives outlined from the external component, the existing means, and the desired results.
This new way of looking at contractualization was consolidated during the implementation of the third stage of contractualization, with two distinct moments: in the first, an internal contractualization process took place, in which the ACeS negotiated the various goals with the functional units; in the second, an external contractualization process took place, in which the CDs contractualized with the Executive Directors of the ACeS, resulting in the signing of program-agreements.
In this context of ACeS implementation, the USFs should be considered functional units of the ACeS, while the whole philosophy regarding indicators, evaluation metrics, and the allocation of institutional and financial incentives was maintained. The CDs also supported the ACeS in the internal contractualization process (with USFs and Customized Health Care Units), participated as analysts in contractualization meetings, and developed critical analyses to identify aspects to be improved. The contractual model with the ACeS implied an additional effort and qualification in the identification of needs, in planning, in the contractualization of health care, and in the sophistication of payment modes (Figure 1). The Performance Plan and the Program-Agreement are central here. The first is a strategic document, negotiated annually with the ACeS, that integrates sociodemographic and socioeconomic population indicators as well as health outcomes. The second, established between the ACeS and the ARS, sets out the commitment in terms of objectives and care goals, in accordance with the Performance Plan. The negotiation process between the CDs and the ACeS culminates in the signing of the annual Program-Agreement.
Methodology
This case study drew on a review of the literature and of files, both legislative and administrative, in order to contextualize the contractualization process, and then analyzed the experience and results obtained in the ACeS Northern West, in comparison with the average obtained in the Lisbon region, using the Regional Health Administration Information System (SIARS) database.
Contractualization results and experience of ACeS Northern West
The West is an area united by a common cultural heritage, with a strong agricultural and fishing component and a very strong attachment to land and sea. The resident population is 172,168 inhabitants (estimate as of December 31, 2015), with a population density (162.9 inhabitants/km²) above the country average (114.3 inhabitants/km²); the population enrolled in the ACeS Northern West, however, was 191,275 users (December 2015).
The ACeS Northern West stems from the aggregation of six municipalities (Alcobaça, Bombarral, Caldas da Rainha, Nazaré, Óbidos, and Peniche) and consequently of the health centers in these municipalities. It includes: (1) six Customized Health Care Units (UCSP); (2) eight USFs that provide personalized care; (3) three Community Care Units (UCC) providing health care and psychological and social support at home and in the community, especially to the most vulnerable people, families, and groups, in situations of increased risk or physical and functional dependence; they also work in health education, integrating family support networks (social networks) in partnership with the authorities, and in school health programs; (4) a Public Health Unit (USP) that operates as a health observatory of the geo-demographic area of the ACeS in which it is integrated and is responsible, among other things, for producing public health information and plans (e.g., the Local Health Plan); (5) a Shared Care Resources Unit (URAP) that provides consulting and care services to the functional units, composed of technicians from various areas, namely social workers, psychologists, physiotherapists, and oral health technicians; doctors with hospital specialties may also be part of this unit and respond to requests from the various units of the ACeS (Figure 2).
In summary, the inhabitants of the geographical area covered by the ACeS Northern West are mostly female, with a birth rate ranging from 7 (Alcobaça) to 9.3 (Nazaré), evidencing an aging population. Indeed, the highest aging index is 172.5, in Bombarral, and the lowest 138.4, in Peniche, values in line with the dependency rates of 44.9 and 39.6, respectively. All these values are higher than the averages recorded in the region (Table 1).
In terms of compliance with the indicators contractualized between the ACeS Northern West and ARSLVT from 2012 to 2015, access always remained above the ARSLVT average, although it fluctuated downward between the first and last years. The significant increase in the registration of smoking habits among people aged 14 or older (from 25.2% in 2012 to 53.6% in 2015) is noteworthy, as is the proportion of low-birthweight term newborns, which in 2015 returned to the 2012 value of 3.04%.
In terms of care performance, it is worth highlighting the incidence of major lower limb amputations in residents, which fell from 0.9‰ in 2012 to 0.3‰ in 2015 (Table 2).
How did we arrive at these results? One of the strategies used to involve professionals in the ACeS Northern West was quarterly meetings of all unit coordinators, at which results were shared, their level of compliance was analyzed, and the strategies developed by each unit were discussed. Cases of more difficult compliance required the involvement of the clinical council, mainly in relation to the form of registration in the electronic clinical record.
At the same time, each month, units received maps with the evolution of their contractualized indicators (internal contractualization), as well as of those contractualized by the ACeS. According to data published in the 2015 PHC contractualization methodology by the ACSS 21, there is a growing codification of health problems at the national level, reflecting the increase in computerized clinical records and the increasing demands of users and health care providers. This allows for better planning and better management of health care. As a whole and at the functional units' level, knowledge of the health issues of the user lists allows for better clinical governance.
By the end of 2011, 20.6 million health issues had been identified at the national level, and by the end of 2013 this figure had increased to 30.2 million. The percentage of ICPC-2-coded PHC consultations in Portugal also increased (69.2% in 2011, 83.9% in 2012 and 84% in 2013) 21.
The list of a user's active problems includes the health problems for which there is a follow-up plan, the relevant diseases, and those requiring continuous medical treatment, which allows us to characterize users and the activity developed, or to schedule future activity. With good records, we can ensure the adequacy of care, and monitor and evaluate the work performed and the care provided to the population.
In Table 3 we can observe, in comparison with the values for the ARSLVT, the development of the incidence and prevalence of the problems coded with ICPC-2 in the ACeS Northern West. Worth highlighting are the results obtained in the cardiovascular, diabetes and mental health areas, which are covered by local health plan projects. In 2013, their values differed considerably from those of the ARSLVT, but by 2015 they had converged.
However, these same areas should continue to be targets of intervention in the near future, given their prevalence in the population of the ACeS Northern West, which, according to Table 3, remains above the ARSLVT average. The aging demographic profile of the population is likely a contributing factor.
In spite of all the interventions conducted in schools for several years with regard to obesity and overweight, we have found that the prevalence of this problem remains above the ARSLVT average.
Discussion
Investment in PHC is essential and a prerequisite for achieving efficiency, effectiveness and equity in a universalist health system such as the Portuguese one, on the premise that good results in these indicators will lead to better population health. This holds for the case briefly described in this paper: improved access, improved health care and efficiency performance, related to the application of the contractualization model and the attribution of incentives to professionals.
The Portuguese incentive scheme closely follows the ideas developed in the United Kingdom, where the Quality and Outcomes Framework (QOF) was introduced in 2004 as the most comprehensive pay-for-performance scheme in PHC 22. An evaluation of the QOF concluded that its effect on performance was modest, with uncertainties about possible adverse effects. It therefore recommended that policymakers be cautious in disseminating such projects, including with regard to quality indicators (such as user satisfaction and equity), and called for cost/benefit assessment, since incentive payments remain an imperfect approach to PHC improvement and should be considered as one among the existing alternatives for obtaining continuous improvement in this area of health care.
This advised caution is of particular relevance in contextual terms. Both the creation of the National Health Service, the great and profound reform of the Portuguese health system, and the very significant and successful PHC Reform were carried out counter-cyclically. The former was contemporaneous with the first oil crisis, the April Revolution and the mass return of the Portuguese population residing in the former colonies; the latter is contemporaneous with the greatest financial crisis experienced worldwide since 1930. The former lacked crucial funding for launching the foundations of the SNS; the latter lacked the financial basis for completing the entire reform process. Thus, the context described has witnessed an escalation of the aggregate health deficit imposed by financial and operational pressures on the SNS. The use of efficiency-seeking strategies through reduced funding and staff expenditure is making the situation unsustainable. Funding cutbacks, the main cause of the diagnosed deficits, are not in keeping with growing demand and the strong epidemiological change.
In a context where the practice of General and Family Medicine remains in low demand among health professionals and where restrictions on the allocation of human resources persist, the health system has chosen to pay for production, whether or not higher volumes of service are required. More recently, the focus has been shifting from volume to quality and efficiency. Guided by epidemiological changes and the complexity of health states, and in a context of increasingly scarce resources, the response to the transformation of the health system has shifted towards remuneration policies.
Final considerations
During these 10 years of reform, the provision of care and the information collected have greatly evolved. In addition to data on indicator compliance, we have a background that allows us to define strategies for effective health gains. In this first decade, the programs defined for intervention were linked to pathologies that represent a greater burden of disease in the Portuguese population, often defined in a purely clinical perspective. It is urgent to diversify these areas and to adopt a more family- and community-oriented approach, also covering the psychosocial dimension. If, until a few years ago, prevention was mainly primary and secondary, special emphasis is now placed on quaternary, hospital-centered prevention, given that we face citizens with comorbidities rather than a single pathology. For effective health gains, it is urgent to define integrated care plans, in which each sector intervenes as part of a whole, with the individual at the center of the intervention.
The definition of new clinical areas of intervention (mental health and respiratory diseases), the integration of care (facilities / ACeS / hospitals / continued care), the empowerment and autonomy of users, the involvement of the community and the reorganization of the health care supply should be considered as points of evolution, and the expectations of the population should be weighed.
Providers must also be diversified, with teams acquiring other knowledge and techniques that improve users' quality of life and gradually increase their life expectancy.
The driving force of the next 10-year strategy should be curbing hospitalizations and the use of hospital services in the broad sense. It will be necessary to change the PHC paradigm in order to pursue this principle. PHC should be equipped with new technologies that spare users from seeking secondary care facilities (hospitals) to undergo complementary exams and to receive care after working hours or on weekends.
Collaborations
BR Monteiro was responsible for data collection, writing, critical review and version to be published; AMSA Pisco, for data collection, analysis and writing; F Candoso, for data analysis and paper critical review; M Reis for data collection and interpretation; and S Bastos for data analysis and interpretation.
Figure 1. Contractualization model of an ARS with the ACeS (external) and of the ACeS with their respective functional units (UF) (internal).
Figure 2. Map of the ARSLVT and locations of the respective ACeS.
Table 1. Residents, population density, proportion of women, birth rate, mortality rate, dependency rate and aging rate in ACeS Northern West, 2015.
Source: www.ine.pt (2011 Census final data on resident population).
Table 2. 2012–2015 development of the indicators contractualized between ACeS Northern West and the ARSLVT.
Table 3. 2013–2015 development of the prevalence of ICPC-2-coded problems in ACeS Northern West compared to the ARSLVT, Portugal. | 2017-08-29T23:27:36.339Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "a707998662ed65e3eaf125f446f5ced6432e50ca",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/csc/v22n3/en_1413-8123-csc-22-03-0725.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "85e526e0144d4e3ae9d103ed2bed638917060378",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": []
} |
12080099 | pes2o/s2orc | v3-fos-license | Impact and attribute of each obesity-related cardiovascular risk factor in combination with abdominal obesity on total health expenditures in adult Japanese National Health insurance beneficiaries: The Ibaraki Prefectural health study
Background The aim of this study was to examine the attribution of each cardiovascular risk factor in combination with abdominal obesity (AO) on Japanese health expenditures. Methods The health insurance claims of 43,469 National Health Insurance beneficiaries aged 40–75 years in Ibaraki, Japan, from the second cohort of the Ibaraki Prefectural Health Study were followed-up from 2009 through 2013. Multivariable health expenditure ratios (HERs) of diabetes mellitus (DM), high low-density lipoprotein cholesterol (LDL-C), low high-density lipoprotein cholesterol (HDL-C), and hypertension with and without AO were calculated with reference to no risk factors using a Tweedie regression model. Results Without AO, HERs were 1.58 for DM, 1.06 for high LDL-C, 1.27 for low HDL-C, and 1.31 for hypertension (all P < 0.05). With AO, HERs were 1.15 for AO, 1.42 for DM, 1.03 for high LDL-C, 1.11 for low HDL-C, and 1.26 for hypertension (all P < 0.05, except high LDL-C). Without AO, population attributable fractions (PAFs) were 2.8% for DM, 0.8% for high LDL-C, 0.7% for low HDL-C, and 6.5% for hypertension. With AO, PAFs were 1.0% for AO, 2.3% for DM, 0.4% for low HDL-C, and 5.0% for hypertension. Conclusions Of the obesity-related cardiovascular risk factors, hypertension, independent of AO, appears to impose the greatest burden on Japanese health expenditures.
Introduction
Ratios of total health expenditures to Gross Domestic Product have been increasing in most Organization for Economic Cooperation and Development (OECD) countries. 1 Many countries have health care systems that can be divided into three categories: (i) social insurance systems, such as in Japan, France, and Germany; (ii) tax-based systems, such as in the United Kingdom (UK) and Sweden; and (iii) limited services to the elderly or disabled in the United States of America (USA), that is, Medicare and Medicaid. 2 The proportion of population coverage reached 100% in 22 of 31 OECD countries. Moreover, the coverage in the USA is being expanded by the Affordable Care Act. 3 Cardiovascular diseases, such as ischemic heart diseases and cerebrovascular diseases, result in high mortality in most OECD countries, 1 which can lead to high expenditures. Many previous studies have shown the associations of abdominal obesity, diabetes mellitus, high serum total cholesterol levels, low high-density lipoprotein cholesterol (HDL-C) levels, and hypertension, which are major cardiovascular risk factors in the context of obesity and metabolic syndrome, with health expenditures. 4-14 Several studies have shown that metabolic syndrome per se and abdominal obesity might not have an important effect on health expenditures in the USA and Taiwan. 4,5 However, to the best of our knowledge, no study has investigated the relationships of each cardiovascular risk factor in combination with abdominal obesity and their effects on health expenditures. Thus, the aim of the present study was to examine the impact, as indicated by the health expenditure ratio (HER), and attribution, as indicated by the population attributable fraction (PAF), of cardiovascular risk factors, with or without abdominal obesity, on health expenditures in Japan.
Health insurance system in Japan
The Japanese government has organized several health insurance schemes, 15 and all citizens are obliged to join one. The National Health Insurance, one of them, is mainly for farmers, self-employed persons, retired persons, and their nonworking dependents aged less than 75 years. The Late-Stage Medical Care System for the Elderly is mainly for persons aged 75 years or more. Access to medical care is unlimited for all persons. The unit prices for medical care are decided by the government. Individuals must pay 10–30% of the medical fees, and the remainder is paid by the insurance. In the present study, the health expenditures included the portion of medical expenses paid individually.
Study cohort and population
A total of 90,442 beneficiaries aged 40–75 years who completed a health check-up conducted by 21 of 44 Japanese National Health Insurance schemes in Ibaraki Prefecture in fiscal year 2009 (from April 2009 to March 2010) were recruited into the second cohort of the Ibaraki Prefectural Health Study. Written informed consent was obtained from 58,757 of those beneficiaries. However, 5418 individuals could not be matched with their health check-up data because of invalid numbering. Thus, the second cohort of the Ibaraki Prefectural Health Study consisted of 53,339 participants. The participants' health claim records were followed-up until the end of fiscal year 2013.
In this cohort study, 4550 participants with missing values for waist circumference, both fasting blood glucose and glycohemoglobin A1c (HbA1c) levels, low-density lipoprotein cholesterol (LDL-C) levels, HDL-C levels, systolic blood pressure (SBP), diastolic blood pressure (DBP), age, sex, smoking status, drinking habits, proteinuria, and/or the follow-up period were excluded. Furthermore, 5320 participants with a history of stroke, heart disease, and/or kidney disease were excluded. Thus, the data of 43,469 participants were analyzed.
Baseline measurements
At baseline, the subjects completed a self-administered lifestyle questionnaire. The questionnaire included questions about smoking status and drinking habits.
The health check-ups were conducted by the Ibaraki Health Service Association for the participants in 20 districts and by the Hitachi Medical Center for the participants in one district. Most (96%) of the participants underwent the health check-ups by the Ibaraki Health Service Association.
For the check-ups conducted by the Ibaraki Health Service Association, waist circumferences were measured in light clothing. SBP and DBP were measured using the right arm of seated subjects by trained observers using automated sphygmomanometers. Blood samples were drawn from seated subjects; fasting was not required. Plasma glucose levels were measured using an enzymatic method if the subjects were fasting. HbA1c levels were measured using immunoassay if the subjects were non-fasting or had unknown fasting status. In this study, the HbA1c levels are shown according to the National Glycohemoglobin Standardization Program. LDL-C levels were measured using an enzymatic method. Proteinuria levels were tested using a dipstick, and trace positive samples of proteinuria were re-examined using a sulphosalicylic acid test. An interview was conducted to ascertain the use of antidiabetic medication, anti-dyslipidemic medication, and antihypertension medication, as well as histories of stroke, heart disease, and kidney disease.
For the check-ups conducted by the Hitachi Medical Center, HbA1c levels were measured using a high-performance liquid chromatography method if the subjects were non-fasting or had unknown fasting status. However, both laboratories participated in the same quality control programs directed by the Japan Medical Association, the National Federation of Industrial Health Organization, and the Ibaraki Association of Medical Technologists. Other measurement methods were the same as for check-ups conducted by the Ibaraki Health Service Association.
Follow-up surveillance
To ascertain the medical expenditures of the participants, medical (outpatient and inpatient), dental, and pharmaceutical health insurance claim records of the participants were obtained from the Ibaraki National Health Insurance Organization. Moreover, the dates of acquisition and loss of eligibility of a recipient were obtained to ascertain the days the participants were eligible recipients. Follow-ups were restarted if the participants who had lost their eligibility to be recipients re-acquired their eligibility to be recipients. In addition, the follow-ups were continued even if the participants changed from the National Health Insurance to the Late-Stage Medical Care System for the Elderly.
Definitions
Abdominal obesity was defined as waist circumference ≥85 cm for men and ≥90 cm for women, referring to the Japanese definition of metabolic syndrome. 16 Diabetes was defined as fasting blood glucose ≥126 mg/dL (≥7.0 mmol/L), HbA1c ≥6.5%, or anti-diabetic medication use according to the guidelines of the World Health Organization. 17,18 High LDL-C was defined as LDL-C ≥160 mg/dL (≥4.1 mmol/L) or anti-dyslipidemic medication use, and low HDL-C was defined as HDL-C <40 mg/dL (<1.0 mmol/L) according to the executive summary of the Third Report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults III. 19 Hypertension was defined as systolic blood pressure (SBP) ≥140 mm Hg, DBP ≥90 mm Hg, or anti-hypertensive medication use according to the 2014 Evidence-Based Guideline for the Management of High Blood Pressure in Adults Report from the Panel Members Appointed to the Eighth Joint National Committee. 20 Smoking habit was divided into three categories: non-smoker, ex-smoker, and current smoker. Alcohol intake was divided into seven categories: non-drinker, ex-drinker, 1–3 days per month, 1–2 days per week, 3–4 days per week, 5–6 days per week, and every day. Proteinuria was divided into four categories: −, +, 2+, and 3+.
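For concreteness, the cut-offs above can be expressed as a small classification routine. The following is a minimal Python sketch; the record layout and field names are illustrative assumptions, not from the paper.

    # Sketch of the risk-factor definitions above; field names are hypothetical.
    def classify_risk_factors(r):
        """Return binary risk-factor flags for one participant record `r` (a dict)."""
        abdominal_obesity = r["waist_cm"] >= (85 if r["sex"] == "M" else 90)
        diabetes = (
            (r.get("fasting_glucose_mgdl") is not None and r["fasting_glucose_mgdl"] >= 126)
            or (r.get("hba1c_pct") is not None and r["hba1c_pct"] >= 6.5)
            or r["on_antidiabetic_med"]
        )
        high_ldl = r["ldl_mgdl"] >= 160 or r["on_antidyslipidemic_med"]
        low_hdl = r["hdl_mgdl"] < 40
        hypertension = (r["sbp_mmhg"] >= 140 or r["dbp_mmhg"] >= 90
                        or r["on_antihypertensive_med"])
        return {"abdominal_obesity": abdominal_obesity, "diabetes": diabetes,
                "high_ldl": high_ldl, "low_hdl": low_hdl, "hypertension": hypertension}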
Statistical analysis
The unit of health expenditures was set as the Japanese yen (i.e., 123.9 Japanese yen = 1 US dollar, at the exchange rates published on June 17, 2015). The P values for differences in baseline characteristics according to the presence or absence of each risk factor were calculated using analysis of variance for age, and using χ² tests for sex, smoking status, drinking habits, and proteinuria.
HERs and 95% confidence intervals (CIs) according to the presence or absence of the risk factors were calculated with reference to no risk factor using a generalized linear model under the assumption of a Tweedie (compound Poisson-gamma) distribution, which is the most widely used method in insurance claims modeling. 21 The link function was logarithmic. The log-transformed years of eligibility to be a recipient were treated as an offset variable. Covariates included age and sex in the age- and sex-adjusted model; covariates included age, sex, smoking status, drinking habits, proteinuria, and the risk factors of interest in the multivariable model. All statistical tests were 2-sided, and P < 0.05 was regarded as significant. The statistical analyses were conducted with SAS, version 9.4 (SAS Institute, Inc., Cary, NC, USA). Furthermore, PAFs were calculated as PAF = pd × (mHER − 1)/mHER, where pd is the proportion of expenditures from the group exposed to the risk factor among total expenditures, and mHER is the multivariable HER. 22 The 95% confidence intervals of PAF (CI_PAF) were calculated using a variance-based formula, 23 in which V is a variance estimate for log(mHER), e1 is the expenditure from exposed persons, and e2 is the expenditure from non-exposed persons.
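As an illustration of this modeling step, the following is a hedged Python sketch using statsmodels in place of the SAS procedure actually used; the file name, column names, and the treatment of smoking status, drinking habits, and proteinuria as single numeric covariates are simplifying assumptions.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("claims.csv")  # hypothetical analysis file

    # Tweedie GLM with log link; log person-years of eligibility as offset.
    X = sm.add_constant(df[["age", "male", "smoking", "drinking",
                            "proteinuria", "hypertension"]])
    fit = sm.GLM(
        df["expenditure_jpy"], X,
        family=sm.families.Tweedie(var_power=1.8, link=sm.families.links.Log()),
        offset=np.log(df["eligible_years"]),
    ).fit()

    # Multivariable HER: exponentiated coefficient under the log link.
    mher = np.exp(fit.params["hypertension"])

    # PAF via the formula above; `share` is the exposed group's share of
    # total expenditures (pd in the text).
    share = (df.loc[df["hypertension"] == 1, "expenditure_jpy"].sum()
             / df["expenditure_jpy"].sum())
    paf = share * (mher - 1) / mher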
Standard protocol approvals, registrations, and patient consent
Written informed consent was obtained from all individual participants included in the study. The Ethics Committee of Ibaraki Prefecture and the Bioethics Committee of Dokkyo Medical University approved this study.

Results

The baseline characteristics are shown in Table 1. Mean age and the proportions of men, smoking status, drinking habits, and proteinuria were significantly different according to the presence or absence of abdominal obesity, diabetes mellitus, high LDL-C, low HDL-C, and hypertension, except for proteinuria with high LDL-C.
The HERs and PAFs according to the presence or absence of each risk factor for the participants are shown in Table 2. Significant associations between each risk factor and health expenditures were found in the age- and sex-adjusted analyses, as well as in the multivariable analyses. The highest multivariable HER was for diabetes mellitus. On the other hand, the highest PAF was observed for hypertension.
The multivariable HERs and PAFs according to the combination of risk factors and abdominal obesity are shown in Table 3. Significantly higher multivariable HERs were observed for all combinations, except for high LDL-C with abdominal obesity. The highest multivariable HER was observed for diabetes mellitus without abdominal obesity. The highest PAF was observed for hypertension without abdominal obesity followed by hypertension with abdominal obesity.
Discussion
To the best of our knowledge, the present study is the first to show that the obesity-related cardiovascular risk factor with the greatest attributable burden on Japanese National Health Insurance expenditures is hypertension, independent of abdominal obesity. The present results support the importance of preventing hypertension, both in persons with and in those without abdominal obesity, for containing health expenditures in Japan. In addition, metabolic syndrome, which treats abdominal obesity as indispensable, might have to be regarded as one of multiple risk factors from the perspective of primary prevention for a sustainable health insurance system.
Several studies have investigated the relationships between cardiovascular risk factors and health expenditures. 4-14 The Cardiovascular Health Study 5 showed higher costs, as a share of total Medicare costs, of 14.9% (95% CI, 4.3%–26.7%) for abdominal obesity, 15.8% (95% CI, 1.7%–31.8%) for low HDL cholesterol, and 20.4% (95% CI, 10.1%–31.7%) for elevated blood pressure. The study also showed that there was no significant association between metabolic syndrome and Medicare costs. The Chicago Heart Association Detection Project in Industry 6 showed an association between high blood pressure (SBP ≥120 mm Hg or DBP ≥80 mm Hg) and Medicare costs related to cardiovascular disease in men and women. In another report of the Chicago Heart Association Detection Project in Industry, 7 the adjusted average annual total Medicare costs of subjects with body mass index (BMI) 25.0–29.9 kg/m² compared with BMI 18.5–24.9 kg/m² were 1.12-fold higher in men and 1.20-fold higher in women. A large prospective cohort study 8 showed that the average total Medicare expenditures of beneficiaries with diabetes were approximately 1.7-fold higher than those of beneficiaries without diabetes.
The Elderly Nutrition and Health Survey in Taiwan 4 showed adjusted cost ratios of 1.45 for high fasting glucose (≥100 mg/dL or current use of diabetes medication) and 1.46 for high blood pressure (SBP ≥130 mm Hg, DBP ≥85 mm Hg, or current use of antihypertensive medication) in men, whereas central obesity (waist circumference ≥90 cm for men or ≥80 cm for women) was not significant. The Ohsaki Study 9 showed that the total medical costs of participants with BMI 25.0–29.9 kg/m² and with BMI ≥30.0 kg/m² were approximately 1.1- and 1.2-fold higher, respectively, than those of participants with BMI 21.0–22.9 kg/m², and the excess direct costs attributable to BMI ≥25.0 kg/m² were 3.2% of total health care expenditures. The subgroup analysis of the Ohsaki Study 13 indicated that the excess adjusted total cost was 9.4% for overweight/obesity (BMI ≥25 kg/m²), 35.6% for hypertension (SBP ≥140 mm Hg, DBP ≥90 mm Hg, or antihypertensive medication use), 42.2% for hyperglycemia (casual plasma glucose ≥150 mg/dL or history of diabetes), and there was no excess cost for dyslipidemia. Another study also showed that the adjusted geometric mean of beneficiaries with Stage 1 hypertension compared to those with normal blood pressure was 1.24-fold higher for men and 1.01-fold higher for women, and that the percentage of stage 1 and 2 hypertension-related medical costs for the beneficiaries was 14.2%. Another report of the Health Promotion Research Committee of the Shiga National Health Insurance Organizations 11 showed that total medical expenditure was approximately two-fold higher for subjects with diabetes (history of diabetes) than for those without diabetes. Furthermore, in another report of the Health Promotion Research Committee of the Shiga National Health Insurance Organizations, 14 the excess medical expenditures attributable to cardiovascular risk factors were higher in those with normal weight than in those who were overweight (BMI ≥25.0 kg/m²). Those previous studies suggested that each cardiovascular risk factor (i.e., obesity, hyperglycemia, dyslipidemia, and high blood pressure) was associated with health expenditures, as in the present study. However, those previous studies did not investigate the association of each cardiovascular risk factor with health expenditures in combination with abdominal obesity.
The strength of the present study was the statistical analysis with a Tweedie distribution, which is the most widely used mixture distribution model in insurance claims modeling. The Tweedie distribution is defined as a Poisson sum of gamma random variables, known as an aggregate loss in actuarial science. 21 The Tweedie model is a generalized linear model from the exponential family. A Tweedie random variable Y can be represented as a Poisson sum of gamma-distributed random variables; that is, Y = Y_1 + Y_2 + ... + Y_N, where N has a Poisson distribution and the Y_i have independent, identical gamma distributions. In this case, Y has a discrete mass at 0, Pr(Y = 0) = Pr(N = 0) = exp(−λ), where λ is the mean of the Poisson distribution, and the probability density f(y) of Y is represented by an infinite series for y > 0. 24 That is, a Tweedie distribution can simultaneously handle gamma-distributed positive values and zero-inflated data, such as health claim data (a small simulation illustrating this construction is given below). In Tweedie distributions, a compound Poisson-gamma distribution is produced when the power parameter p is >1 and <2. In the present study, all estimated power parameters were approximately 1.8 (data not shown), which indicated that the distributions were valid.

On the other hand, the present study has several limitations. First, the definition of abdominal obesity did not refer to the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults III criteria. 19 However, the Japanese waist circumference criteria 16 have been adopted on a nationwide scale in Japan. For instance, the Japanese Society of Hypertension Guidelines for the Management of Hypertension adopted the Japanese waist circumference criteria (≥85 cm for men and ≥90 cm for women) as defining abdominal obesity. According to the National Health and Nutrition Survey in Japan, 2013, 25 conducted by the Ministry of Health, Labour and Welfare, Japan, the prevalence of high waist circumference (≥85 cm for men and ≥90 cm for women) was 34% among a representative sample of the Japanese population aged 20 years or older. Second, further studies are warranted to ascertain the generalizability of the findings because of the low participation rate (34%) in health check-ups in the study area and the low proportion of study subjects among health check-up participants.

Table 2. Health expenditure ratios and population attributable fractions according to the presence or absence of each risk factor among 43,469 Japanese National Health Insurance beneficiaries aged 40–74 years in Ibaraki, Japan, 2009–2013.
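To make the compound Poisson-gamma construction concrete, the following small Python simulation (with arbitrary parameter values, not taken from the paper) shows the point mass at zero alongside the continuous positive part:

    import numpy as np

    rng = np.random.default_rng(0)

    def tweedie_sample(lam, shape, scale, size):
        """Draw Y = sum of N i.i.d. gamma variables, with N ~ Poisson(lam)."""
        counts = rng.poisson(lam, size)
        return np.array([rng.gamma(shape, scale, k).sum() for k in counts])

    y = tweedie_sample(lam=0.5, shape=2.0, scale=100.0, size=100_000)
    # Empirical Pr(Y = 0) should match exp(-lambda) = exp(-0.5) ~ 0.607.
    print((y == 0).mean(), np.exp(-0.5))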
In summary, hypertension, independent of abdominal obesity, appears to impose the greatest burden among the obesity-related cardiovascular risk factors on Japanese National Health Insurance expenditures. | 2018-04-03T00:26:09.803Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "b0787aaa8f8effb07b4dad929f9a4643d989b3d4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.je.2016.08.009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b0787aaa8f8effb07b4dad929f9a4643d989b3d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9358210 | pes2o/s2orc | v3-fos-license | Differential analysis of mutations in the Jewish population and their implications for diseases
Sequencing large cohorts of ethnically homogeneous individuals yields genetic insights with implications for the entire population rather than a single individual. In order to evaluate the genetic basis of certain diseases encountered at high frequency in the Ashkenazi Jewish population (AJP), as well as to improve variant annotation among the AJP, we examined the entire exome, focusing on specific genes with known clinical implications, in 128 Ashkenazi Jews and compared these data to other non-Jewish populations (European, African, South Asian and East Asian). We targeted American College of Medical Genetics incidental-finding recommended genes and the Catalogue of Somatic Mutations in Cancer (COSMIC) germline cancer-related genes. We identified previously known disease-causing variants and discovered potentially deleterious variants in known disease-causing genes that are population specific or substantially more prevalent in the AJP, such as in the APC and HGFAC genes, associated with colorectal cancer and pancreatic cancer, respectively. Additionally, we tested the advantage of utilizing the database of the AJP when assigning pathogenicity to rare variants in independent whole-exome sequencing data of 49 Ashkenazi Jewish early-onset breast cancer (BC) patients. Importantly, population-based filtering using our AJP database enabled a reduction in the number of potential causal variants in the BC cohort by 36%. Taken together, population-specific sequencing of the AJP offers valuable, clinically applicable information and improves AJP filter annotation.
Introduction
High-throughput sequencing, also known as next-generation sequencing (NGS), reduced the cost and increased the yield of DNA sequencing. As whole-exome sequencing (WES) and whole-genome sequencing (WGS) are increasingly integrated into practical medical care, the importance of studying the genetic structure of ethnically diverse populations using NGS rises. Although most of the variant sites in the human genome are shared among individuals, allele frequencies vary substantially between populations (The International HapMap Consortium, 2005; 1000 Genomes Project Consortium et al., 2012; Visscher et al., 2012; Carmi et al., 2014; The Genome of the Netherlands Consortium, 2014; Gudbjartsson et al., 2015; Nagasaki et al., 2015). The value and advantages of sequencing diverse populations have already been shown in: genome-wide association studies (Visscher et al., 2012); discovering rare and de novo variants; improving variant calling sensitivity and specificity; and improving the accuracy of curating pathogenic variants (Carmi et al., 2014; The Genome of the Netherlands Consortium, 2014; Gudbjartsson et al., 2015). Substantial efforts have been devoted to sequencing large numbers of individuals from diverse populations in order to create public databases that can assist human genetic studies, such as the 1000 Genomes Project (1KG) (1000 Genomes Project Consortium et al., 2012), the Exome Sequencing Project (ESP; http://evs.gs.washington.edu/EVS/) and the Exome Aggregation Consortium (ExAC; http://exac.broadinstitute.org/). The Ashkenazi Jewish population (AJP) is known to have a high rate of several diseases affecting individuals of that ethnic origin compared with other world ethnicities (Rosner et al., 2009). These include both autosomal recessive disorders due to the founder effect (Slatkin, 2004; Bray et al., 2010; Carmi et al., 2014), such as Gaucher disease (Beutler et al., 1993), cystic fibrosis (Abeliovich et al., 1992) and Tay-Sachs disease (Myerowitz & Costigan, 1988), as well as more common, adult-onset autosomal dominant diseases such as Parkinson's disease (PD) (Ozelius et al., 2006) and hereditary BC and ovarian cancer (Struewing et al., 1997). Notably, the AJP has not been included as part of large-scale international sequencing projects. A recent NGS study of an AJP cohort demonstrated an improvement in imputation accuracy and modelling of Jewish history (Carmi et al., 2014). However, further research is warranted in order to elucidate the possible clinical implications of the AJP allelic architecture and to improve the curation and accuracy of pathogenic variant screening in current and future AJP studies.
Recently, new recommendations for the AJP screening panel were published based on the same dataset as ours (Baskovich et al., 2016). However, that study focused only on the identification of pathogenic variants for the purpose of clinical screening in the AJP, whereas the current study takes a more global view by focusing on the genome and gene-level trends, rather than particular genetic variants, examining the utility of using an AJP-specific reference panel in interpreting clinical sequencing projects involving AJP individuals.
In this study, we focused on the clinical utility and practical implications resulting from WES analysis of 128 Ashkenazi Jews, of whom 74 individuals had no discernible disease and 54 were controls in a PD study. We examined the genetic differences between the AJP and other non-Jewish populations (NJPs) and searched for genes that are more likely to carry pathogenic variants among the AJP than in NJPs. Finally, we applied our findings to 49 independent Ashkenazi Jewish BC patients in order to evaluate the value of utilising an Ashkenazi Jew-specific database as a filtering tool.
Ashkenazi Jew variants
We used an unfiltered variant calling file (VCF) of 128 verified Ashkenazi Jewish individuals who underwent WGS as a part of a population genetic study of the AJP (Carmi et al., 2014). WGS was conducted by Complete Genomics with high coverage (average coverage >50×). Seventy-four of the individuals were considered healthy and 54 were controls in a PD study. We extracted variants from the whole-exome region only, based on Illumina's TruSeq Exome Enrichment Kit targets (https://www.illumina.com/content/dam/illumina-marketing/documents/products/datasheets/truseq-exome-data-sheet-770-2015-007.pdf), and did not include areas outside this region in our bioinformatics analysis. The target region size was 62 Mb, targeting 20,794 genes and 96·4% of RefSeq43-coding exons. We performed quality check (QC) and applied different filtrations (see Supplementary Methods; available online), which resulted in 222,179 high-quality single-nucleotide variants (SNVs).
BC patient variants
The VCF of 49 Ashkenazi Jewish BC patients, suspected to be hereditary, was obtained using the Genome Analysis Toolkit (GATK) best practice pipeline (McKenna et al. 2010), followed by QC (see Supplementary Methods), which resulted in 173,300 variants for the same exome region as the 128 Ashkenazi Jews.
1KG control groups
As control groups, and in order to compare the AJP with other populations, we used the European, African, East Asian (EAS) and South Asian (SAS) populations from the 1KG Project version 3 database (1000 Genomes Project Consortium et al., 2012). The data for these datasets were generated using the Illumina platform, and the variants were called by combining different variant callers, among them GATK's variant caller (http://www.1000genomes.org/analysis). For each population, 128 individuals were selected randomly, and the same region that was examined for the AJP was extracted.
Results
In this study, we analysed the whole-exome data of 128 Ashkenazi Jewish individuals. We detected 222,179 SNVs, of which 30·6% (68,139) were singletons and 81·7% were shared and annotated in other European population databases, including the European samples of ESP, ExAC and 1KG. Although this rate of overlap between the AJP and the European population is in line with the known relatedness and genetic similarity between the European population and the AJP (Behar et al., 2003; Costa et al., 2013), approximately 20% of the detected variants were unique to the AJP. The overlap rates between AJP variation and genetically more distant populations, including African, EAS and SAS populations (inferred from the ExAC and 1KG databases), were significantly smaller, as expected (49–68%, Fig. 1(a)), further strengthening the validity of our data. Only 3·2% of the AJP variants were present in one of these distantly related populations but not in the European dataset, resulting in 13·3% (29,221) AJP-unique (i.e. novel) variants not reported in any of the population databases or in dbSNP142 (Fig. 1(b)).
Next, we functionally annotated the coding variants and classified the exonic variants into three categories by severity: (i) 'high impact' including stop-gain or stop-loss variants and variants within 2-bp of a splicing junction; (ii) 'moderate impact' included exonic missense variants; and (iii) 'low impact' included synonymous variants and exonic variants of unknown type due to incomplete gene structure information. Using this classification scheme, 831 variants (19 splice site variants) with high impact were identified, 54,585 were moderate-impact variants and 45,876 were low-impact variants. A similar distribution of variant severity was observed in the 128 European individuals ( Supplementary Fig. S1).
Evaluating ACMG and COSMIC set of genes
We evaluated the clinical implications of the high-impact, very rare variants by comparing the existence of these variants in two gene sets: the Catalogue of Somatic Mutations in Cancer (COSMIC; http://cancer.sanger.ac.uk/cosmic) and the American College of Medical Genetics and Genomics (ACMG; https://www.acmg.net/). COSMIC's Cancer Gene Census catalogues genes that exhibit mutations causally implicated in cancer pathogenesis (see Supplementary Material for the complete list). Of all COSMIC genes harbouring germline cancer mutations (n = 87) associated with cancer predisposition, six high-impact variants in five cancer predisposition genes were noted (Supplementary Table S2). Five of the variants were singletons, and one was a doubleton: rs34295337 in ERCC3, a gene associated with xeroderma pigmentosum type B (Ma et al., 1994), which is a rare autosomal recessive disease that is associated with skin cancer (Paszkowska-Szczur et al., 2013). One variant, rs11571833, in the BRCA2 gene, was described previously as being associated with an increased risk of developing a variety of cancer types including lung, breast, prostate, gastric and aerodigestive tract cancer (Wang et al., 2014; Delahaye-Sourdeix et al., 2015; Thompson et al., 2015; Meeks et al., 2016; Vijai et al., 2016). Two variants, one in the DICER1 gene and one in the NF1 gene, were novel. The NF1 gene harboured one additional high-impact variant. Notably, NF1 germline mutations underlie the neurofibromatosis type 1 phenotype, a disease that is reportedly diagnosed at higher rates in the AJP than in the European population (Garty et al., 1994).
The ACMG recommendation for reporting incidental findings in clinical sequencing includes 56 genes (22 genes intersect with COSMIC genes; see Supplementary Material for the complete list). High-impact variants were noted in two ACMG genes. The first variant (rs11571833) in the BRCA2 gene was already described and discussed above. The second variant, rs200563280, results in a premature stop codon in the RYR1 gene, a gene that is associated with malignant hyperthermia (Robinson et al., 2006). Thus, the rate of actionable incidental findings in the AJP is 1·56%, similar to the estimate for Europeans at approximately 2% (Amendola et al., 2015). None of the above variants were mentioned in a recent study, based on the same dataset that expanded the recommendations for an AJP screening panel (Baskovich et al., 2016).
AJP-specific variants
We next examined AJP-specific variants. We defined variants as AJP specific if they were unique (i.e. novel) or very rare (minor allele frequency (MAF) <1%) in the NJPs, but more prevalent in the AJP (MAF >1%). Of the total AJP variants, 17,977 (8%) were AJP specific. To confirm that our dataset is enriched with variants that are unique to the AJP, we performed the same analysis on 128 verified Europeans from the Personal Genome Project (PGP) (Church, 2005; see Supplementary Methods). Only 8748 variants (3·6% of the PGP dataset) had a frequency above 1% in the PGP dataset while being very rare in the NJPs (both European and non-European populations).
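This rule can be written down directly. The following minimal Python predicate encodes the thresholds stated above; the inputs (an AJP MAF and a list of per-population NJP MAFs, with None meaning the variant is absent from that population) are illustrative assumptions.

    def is_ajp_specific(ajp_maf, njp_mafs):
        """True if novel or very rare (<1%) in every NJP but MAF > 1% in the AJP."""
        rare_elsewhere = all(maf is None or maf < 0.01 for maf in njp_mafs)
        return ajp_maf > 0.01 and rare_elsewhere

    # e.g. a variant at 4.7% in the AJP, rare or absent elsewhere -> True
    print(is_ajp_specific(0.047, [0.002, None, 0.0005, 0.008]))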
We then looked at genes that are enriched for moderate-to high-impact variant groups that are AJP specific. This analysis yielded 5142 variants. Most genes harboured up to one such variant, 840 genes exhibited two variants and 196 genes displayed three or more moderate-to high-impact variants ( Supplementary Fig. S2). After QC (see Supplementary Methods), three outlier genes were filtered out ( Supplementary Fig. S2). In this analysis, virtually no correlation between the number of variants and the genomic length of the gene was observed (Pearson's correlation = 0·1). Next, we examined the residual variation intolerance score (RVIS) (Petrovski et al., 2013) in order to identify genes under purifying selection that harbour unique or prevalent mutations in the AJP. Briefly, RVIS measures the tolerance of a gene to contain damaging variation. Genes with a low RVIS are predicted to be less tolerant to variation, and hence are more likely to exhibit a phenotype due to non-synonymous variants. The APC gene harboured a high number of AJP-specific variants (n = 7) and is in the lowest 0·2 percentile of RVIS ( Fig. 2(a)). Mutations in the APC gene are associated with a specific form of inherited predisposition to colorectal cancer. Overall, colorectal cancer is more prevalent in the AJP than in NJPs (Feldman, 2001). Notably, the p.I1307K missense mutation in APC (rs1801155), which has been previously shown to moderately increase colorectal cancer risk in the AJP (Woodage et al., 1998), was among the identified variants (MAF = 0·047), and was recommended for inclusion in AJP screening (Baskovich et al., 2016). However, additional susceptibility variants were detected in the APC gene, suggesting that other variants may contribute to the increased prevalence of colorectal cancer in the AJP. Other genes with low RVIS and harbouring four AJP-specific damaging variants are ABCA12, TULP4, DNMT1, DMXL1 and HECW1.
To the best of our knowledge, the prevalence of the phenotypes associated with these genes (Supplementary Table S3) is not significantly higher in the AJP compared with other NJPs. Hence, the clinical implications and significance of this seemingly high rate of damaging variants in these genes warrant further investigation in additional extended Ashkenazi Jewish studies.
To assess the effect of the AJP-specific variants on protein function, we used the MetaLR (Dong et al., 2015) ensemble tool, which integrates different prediction tools using logistic regression to predict whether a variant is deleterious (see Supplementary Methods). Overall, we obtained 649 AJP-specific deleterious variants in 580 different genes. Only eight genes had at least three AJP-specific deleterious variants (Fig. 2(b) and Supplementary Table S3): APC, ABCA12, LRP2, EPPK1, HGFAC, ACAD11, HLCS and NOX1. APC and ABCA12 were discussed; the HGFAC gene (three variants) is a member of the peptidase S1 protein family and is associated with pancreatic cancer (Kitajima et al., 2008), a cancer type that is known to be more frequent among the AJP (Feldman, 2001). The EPPK1 gene (four variants) encodes a protein that belongs to the plakin family and is related to VACTERL association disorder (Hilger et al., 2013). The phenotype of this disorder encompasses Fanconi anaemia, a phenotype that is diagnosed at a higher frequency in the AJP compared with NJPs (Kutler & Auerbach, 2004), and hence these variants may contribute to these higher occurrence rates. The other genes are associated with different types of rare diseases, but to the best of our knowledge, these conditions are not diagnosed at an increased rate in the AJP (Supplementary Table S3).

Fig. 2. (a) Only six genes had a very low RVIS and four or more high- to moderate-impact Ashkenazi Jewish population (AJP)-specific variants, including the APC gene, which had the lowest RVIS and the highest number of variants at seven. (b) Histogram of the number of AJP-specific deleterious variants per gene. While most of the genes had two or fewer of these variants, eight genes had three to five variants.
Furthermore, to examine whether the genes harbouring AJP-specific deleterious variants were previously implicated as AJP-prevalent phenotypes, we queried VarElect (http://varelect.genecards.org/) using the term 'Ashkenazi'. VarElect can prioritise genotype-phenotype associations based on various databases. Of the 580 queried genes, 14 genes harbouring 17 variants (Table 1) were found to be directly related to the 'Ashkenazi' term, denoting conditions that are common to the AJP. Five of the 17 variants are considered to be pathogenic by the Clinvar database, four of the variants were also included in the recent recommendation for the AJP screening panel (Baskovich et al., 2016) and four of the genes are included in the AJP screening panel, but for different variants. To verify our results, we did the same for the 128 European individuals looking at European-specific variants, meaning genes with variants that were very rare in the non-European population but not in the European population (423 genes), and tried to find genes that were related to the 'Ashkenazi' phenotype. Although 20 genes were found to be related, none of the variants in them was found to be pathogenic by Clinvar, which further supports our results. Taken together, these results suggest that additional variants, among these 17 variants, are plausibly causal and hence should be further investigated.
Using the Ashkenazi Jewish database in an analysis of Ashkenazi Jewish early BC patients
The major objective of clinical sequencing is to identify the causative mutation from amongst numerous detected variants. To that end, non-synonymous variants with rare allele frequencies are considered initially as plausible causative mutations. Since the AJP is not included in any of the public databases of international sequencing efforts, the MAFs of closely related populations such as Europeans (Haas et al., 2012; Lee et al., 2012; Rees et al., 2012) are often utilised as surrogates. We evaluated the advantages of using AJP-specific MAFs when screening the WES data of Ashkenazi Jewish samples. Of the 55,416 high- and moderate-impact mutations, 57·7% were classified as very rare based on the general European MAF versus 50·6% based on the AJP MAF, leading to the filtering out of approximately 3900 variants (Fig. 3(a)). Likewise, based on the maximum MAF (MMAF) of all NJPs, 50·1% of the variants were classified as very rare, compared to 40·8% when including the AJP. These results are in line with Carmi et al. (2014). For rare variants (MAF <5%), the advantage of using AJP-specific MAFs is somewhat less significant (1·2% difference), in line with the notion that population-specific variants are predominantly very rare (1000 Genomes Project Consortium et al., 2015). Similarly, potentially deleterious variants are prioritised in clinical NGS applications. Based on the AJP MAF, 79·0% of deleterious variants, based on MetaLR, were considered very rare, whereas 89·6% were considered very rare based on the European MAF (Fig. 3(b)). Furthermore, combining the AJP MAF with the NJP MMAF substantially improved filtering, from 85·9% of the variants classified as very rare to just 72·9%. Since the MAFs of numerous populations, but not the AJP, are included in the MetaLR model, adding the AJP MAF can significantly improve the filtering of deleterious variants. Taken together, these significant population-specific differences in rare variants indicate that by utilising AJP-specific MAFs, finer filtration and lower false-positive rates can be achieved in Ashkenazi Jewish sequencing studies.
Importantly, we evaluated the utility of the AJP-specific screening approach using the independent WES data of 49 Ashkenazi Jewish samples derived from high-risk BC cases who do not harbour mutations in the predominant underlying genes -BRCA1 and BRCA2. Of the 2638 predicted deleterious variants, 81·3% were very rare according to the European MAF, compared to 77·5% using the AJP MAF. Similarly, combining the AJP with the NJP MMAF improved filtering by approximately 10% from 75·9% to 64·5% (Supplementary Fig. S3).
In our actual disease gene analysis of the Ashkenazi Jewish BC sample, we screened for very rare variants that are potentially deleterious by MetaLR and are present in at least three BC cases, resulting in 450 potentially deleterious variants. Filtering by using the European MAF resulted in 189 variants in 148 genes, while using the MMAF of the Ashkenazi Jewish and Europeans filtered an additional 69 variants, resulting in 120 potential variants (36%). In comparison, using the MMAF of Europeans and 128 individuals from African, EAS or SAS populations resulted in minor additional filtering of only seven, two and 13 variants, respectively ( Supplementary Fig. S4). Using all populations' MMAFs (AJP + NJP) versus only the NJP MMAF resulted in 100 variants in 72 genes compared to 157 variants in 126 genes (36%) (Fig. 3(c)). We then used VarElect to search for genes related to the keyword 'breast'. The MSH6 gene scored highest using VarElect (Supplementary Table S4) and by the MetaLR deleterious score (0·88). The protein coded by this gene is a member of the DNA mismatch repair MutS family, and rare variants in this gene are associated with familial BC (Wasielewski et al., 2010). Mutations in MSH6 are traditionally associated with Lynch syndrome (Baglietto et al., 2010), a syndrome that seems to encompass BC susceptibility according to recent publications (Win et al., 2013). This finding requires further examination of a larger cohort in order to draw better conclusions about the role of these variants in BC predisposition.
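The screening step just described can be sketched as follows in Python. This is a hedged illustration rather than the authors' actual pipeline; the file name, column names and MetaLR encoding are assumptions.

    import pandas as pd

    variants = pd.read_table("bc_variants.tsv")  # hypothetical annotated variant table
    pop_cols = ["maf_eur", "maf_afr", "maf_eas", "maf_sas", "maf_ajp"]

    mmaf = variants[pop_cols].max(axis=1)       # maximum MAF across populations
    keep = (
        (variants["metalr_pred"] == "D")        # predicted deleterious by MetaLR
        & (mmaf.fillna(0) < 0.01)               # very rare in every population
        & (variants["n_bc_carriers"] >= 3)      # present in at least three BC cases
    )
    candidates = variants[keep]
    print(len(candidates), "candidate variants retained")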
Discussion
In this study, a comprehensive analysis of the whole exome in 128 Ashkenazi Jewish individuals using high-coverage NGS technology was carried out and compared with the same data generated from a closely related European population. By targeting AJP-specific variants, the clinical utility of using NGS technology to genotype entire populations is clearly demonstrated. Using such an approach, applying a variety of bioinformatics and predictive tools and querying several publicly available databases, we revealed novel variants and genes that may be associated with an increased risk of developing a host of diseases in the AJP. Some of these variants occur within genes related to diseases that are known to be more commonly diagnosed in the AJP than in NJPs: colorectal cancer (APC gene) and pancreatic cancer (HGFAC gene). Although these variants are predicted to be pathogenic and may indeed affect cancer risk, the current evidence is still tentative and cannot be clinically applied until validation and expansion of these results is provided by future studies. The EPPK1 gene harboured a few AJP-specific deleterious variants. Homozygous mutations in this gene are associated with Fanconi anaemia, a disorder that is more commonly encountered in the AJP (Kutler & Auerbach, 2004). Moreover, heterozygous mutations in Fanconi anaemia genes are associated with increased cancer risk, primarily BC (Mathew, 2006; Alan & D'Andrea, 2010), and indeed, two of the four AJP-specific deleterious variants in the EPPK1 gene were also detected in the high-risk BC cohort. Among the observed AJP-specific deleterious variants, five were known to be pathogenic variants that increase the risk of five different diseases that are common to the AJP, and three of them were included in a new recommended screening panel for the AJP (Baskovich et al., 2016). These overlaps confirm the effectiveness of the methodology applied in the present study for finding population-based pathogenic variants, as well as supporting the potential of population screening using NGS. Additionally, by examining specific genes with known and valuable clinical implications and consequences (i.e. ACMG incidental findings genes and COSMIC germline mutation-harbouring genes), a number of variants were identified in genes that lead to a phenotype that is seen at a higher occurrence in the AJP than in other populations (e.g. the NF1 gene).

Fig. 3. Frequencies of high- to moderate-impact variants (a) and deleterious variants (b) by populations' minor allele frequency (MAF) (orange = very rare; yellow = rare; green = common). By joining the Ashkenazi Jewish population (AJP) MAF to the non-Jewish population (NJP) MAF and using the maximum MAF, the percentages of very rare variants were reduced by 10% and 13%, respectively. (c) Filtration of very rare variants of 49 Ashkenazi Jewish (AJ) breast cancer patients. Adding the AJ MAF filtered an additional 57 (36%) of the variants, demonstrating the utility of using the same population database. AFR = African; EAS = East Asian; EUR = European; SAS = South Asian.
Based on the results of the present study and the current ACMG incidental findings recommendations, in approximately 3/200 (1·56%) members of the AJP who undergo WES, an incidental finding will emerge. As information about the role of each variant in the exome/genome accumulates and the pathogenicity prediction tools and functional analyses continue to evolve, some of the moderate-impact variants of these genes might also be reclassified as pathogenic, so that the rate of incidental findings may still be altered.
The present study also illustrated the importance of using the Ashkenazi Jewish-specific database in the course of analysing the genetic basis of inherited cancer in the AJP. Using the dataset and analysis tools, the number of potential causal sequence variants underlying an inherited predisposition to BC was reduced by 36%. Such a filtering step is critical to defining a bona fide causal mutation. Therefore, this provides further support for the importance of creating and using a population-specific database when investigating the genetic basis of inherited diseases, rather than using genetically related but not identical populations.
While a recent study of 5685 Ashkenazi Jewish exomes has been published (Rivas et al., 2016), the current study provides evidence that by using whole-exome data from a relatively small number (n = 128) of Ashkenazi Jewish individuals, clinically relevant information and improvements in filter annotation are feasible. Thus, the potential research value and clinical benefits of using NGS technology at a population level are further emphasised. | 2017-05-23T02:37:18.927Z | 2017-05-15T00:00:00.000 | {
"year": 2017,
"sha1": "7b71c94ab1e4e23903732927b990a31b0e02d53c",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/47602503E378F9E5F5BDAE34033B52F8/S0016672317000015a.pdf/div-class-title-differential-analysis-of-mutations-in-the-jewish-population-and-their-implications-for-diseases-div.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4974925bc77fe0158d1b75851e9abf2aa836de6f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
232877175 | pes2o/s2orc | v3-fos-license | Electrochemical study of Zr–1Nb alloy with oxide coatings formed by plasma electrolytic oxidation
Plasma electrolytic oxidation (PEO) coatings were formed on Zr–1Nb alloy in an electrolyte containing 9 g/L sodium silicate, 5 g/L sodium hypophosphite, and 6 g/L submicron yttrium oxide powder at current densities of 20, 30, and 40 A/dm². The coating surface morphology was studied by scanning electron microscopy. The electrochemical behavior of coated and uncoated samples was investigated after 1- and 7-day exposure in 10 % HCl. The samples with PEO coatings formed at a current density of 30 A/dm² had the best corrosion protective properties after 1-day exposure in 10 % HCl. After 7-day exposure in 10 % HCl, the samples with PEO coatings formed at current densities of 30 and 40 A/dm² showed greater corrosion resistance than the samples with PEO coatings formed at 20 A/dm² and the uncoated samples.
Introduction
Plasma electrolytic oxidation (PEO) of zirconium and its alloys is the subject of many modern studies. PEO coatings are promising for the protection of reactor structural materials against corrosion, accelerated oxidation at high temperatures, embrittlement, and absorption of oxygen and hydrogen [1,2]. Zirconium and its alloys with PEO coatings are also promising in orthopedics and dental prosthetics [3]. Owing to their low thermal conductivity, PEO coatings are also being considered as thermal barrier coatings [4][5][6].
PEO is an electrochemical process that uses the energy of electrical microdischarges acting on the surface of the material being treated [7]. During the PEO process, electrolyte components can be incorporated into the coatings, forming oxides and various compounds with components of the base material. Powders of insoluble compounds can also be added to slurry electrolytes to impart certain properties to the coatings: wear resistance, corrosion resistance, heat resistance, etc. For example, the addition of nanoparticles of oxides such as Al₂O₃, CeO₂, and ZrO₂ to PEO electrolytes improves the corrosion resistance of zirconium alloys (up to 10³ times compared with uncoated alloys) [1]. In the present work, PEO coatings were formed on Zr–1Nb alloy in an electrolyte with an addition of yttrium oxide submicron particles. Yttrium oxide additives can increase the corrosion protective properties of PEO coatings, as well as stabilize the tetragonal and cubic ZrO₂ phases in the oxide layer on zirconium alloys, which improves their thermal stability and hardness.
Experimental setup and characterization techniques
PEO coatings were formed on Zr–1Nb alloy in the electrolyte containing 9 g/L sodium silicate, 5 g/L sodium hypophosphite, and 6 g/L submicron yttrium oxide powder. The slurry electrolyte was treated for 3 min using a homogenizer at an ultrasonic vibration frequency of 40 kHz to stabilize the suspension. Plasma electrolytic oxidation was carried out for 60 min in an AC electrical mode with equal anodic and cathodic currents and total current densities of 20, 30, and 40 A/dm².
The surface morphology and cross-sectional thickness of the PEO coatings were investigated using a Quanta 600 scanning electron microscope (SEM). The electrochemical behavior of uncoated and coated samples was investigated in 10 % HCl. Experimental curves were obtained by polarization from the cathodic to the anodic region at a sweep rate of 1 mV/s after 1- and 7-day exposure in 10 % HCl. The studies were carried out in a standard three-electrode cell with a silver chloride (Ag/AgCl) reference electrode. Polarization was carried out using a PI-50-1 potentiostat.
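Corrosion parameters are commonly extracted from such polarization curves by Tafel extrapolation; the sketch below shows one way this analysis might be done, with the fitting windows and sample data being illustrative assumptions rather than values from this work.

```python
# Minimal Tafel-extrapolation sketch (assumed analysis; not the authors' code).
# Fits the linear Tafel regions of log10|i| vs E on both sides of Ecorr and
# intersects them to estimate the corrosion current density icorr.
import numpy as np

def tafel_fit(E, i, ecorr, window=(0.05, 0.15)):
    """E: potentials (V), i: current densities (A/cm^2), ecorr: corrosion
    potential (V). 'window' is the assumed distance range from Ecorr (V)
    over which each branch is taken to be linear."""
    logs = np.log10(np.abs(i) + 1e-30)
    fits = {}
    for name, sign in (("cathodic", -1), ("anodic", +1)):
        d = sign * (E - ecorr)
        mask = (d > window[0]) & (d < window[1])
        # Linear fit log10|i| = m*E + b in the assumed Tafel region.
        m, b = np.polyfit(E[mask], logs[mask], 1)
        fits[name] = (m, b)
    (ma, ba), (mc, bc) = fits["anodic"], fits["cathodic"]
    e_int = (bc - ba) / (ma - mc)          # intersection potential
    icorr = 10.0 ** (ma * e_int + ba)      # corrosion current density
    return icorr, e_int

# Synthetic Butler-Volmer-like data purely for demonstration.
E = np.linspace(-0.5, 0.5, 500)
i = 1e-6 * (10 ** (E / 0.12) - 10 ** (-E / 0.12))
print(tafel_fit(E, i, ecorr=0.0))
```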
Results and discussion
Cross-sectional SEM study showed that the average thickness of the PEO coatings formed at current densities of 20, 30, and 40 A/dm² is ~40, ~110, and ~170 μm, respectively. The surface layer of the PEO coatings is characterized by crater-like regions, regions of globular structure (figure 1, a), and yttrium oxide submicron particles up to 300 nm in size incorporated into the coating (figure 1, b). Figure 1b shows that the yttrium oxide particles cover the PEO coating surface quite uniformly, which can decrease the open porosity. In addition, yttrium oxide can melt and form a solid solution with zirconia when it enters the areas where micro-discharges operate, whose temperature reaches several thousand degrees. In this case, stabilization of the high-temperature zirconia phases occurs [8]. Thus, increasing the PEO process current density to 30 and 40 A/dm² leads to higher corrosion protective properties of the oxide coatings on Zr–1Nb alloy. This may be due to the formation of a denser barrier layer as a result of higher local temperatures in the discharges during the PEO process. Increasing the current density also leads to more intensive incorporation of submicron yttrium oxide particles into the PEO coating structure and to stabilization of the tetragonal and cubic ZrO₂ phases, as was shown in [8]. It was also reported in [1,2] that electrophoretic interaction could be responsible for the migration of yttrium oxide nanoparticles towards the anode during the PEO process. The addition of yttrium oxide nanoparticles to a PEO electrolyte increases the corrosion resistance of coated zirconium alloys by several orders of magnitude [1]. The data obtained in the present work suggest a similar effect for submicron particles.
Conclusions
PEO coatings were formed on Zr–1Nb alloy in the electrolyte containing 9 g/L sodium silicate, 5 g/L sodium hypophosphite, and 6 g/L submicron yttrium oxide powder at current densities of 20, 30, and 40 A/dm². The corresponding average coating thicknesses were ~40, ~110, and ~170 μm. The electrochemical behavior of coated and uncoated samples was investigated after 1- and 7-day exposure in 10 % HCl. The samples with PEO coatings formed at 30 A/dm² had the best corrosion protective properties after the 1-day exposure in 10 % HCl. After 7-day exposure in 10 % HCl, the samples PEO-modified at 30 and 40 A/dm² showed greater corrosion resistance, which may indicate the presence of denser barrier layers under the PEO coatings and more intensive incorporation of submicron yttrium oxide particles into their structure compared with the coatings formed at 20 A/dm². | 2021-04-04T14:15:36.765Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "0f6294ede3fd6a3867a3558ea9fb4166ec49330b",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1713/1/012039/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "60b0783636eae2edc481d619a875a6fd77cdd8c3",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
236532578 | pes2o/s2orc | v3-fos-license | Epiglottitis in Patients With Preexisting Autoimmune Diseases: A Nationwide Case–Control Study in Taiwan
Objectives: The role of autoimmune diseases in the risk for acute epiglottitis remains uncertain. This study aimed to delineate the association between epiglottitis and autoimmune diseases using a population database. Methods: A population-based retrospective study was conducted to analyze claims data from the Taiwan National Health Insurance Research Database collected from January 2000 to December 2013. Results: In total, 2339 patients with epiglottitis were matched with 9356 controls without epiglottitis by sex, age, socioeconomic status, and urbanization level. The correlation between autoimmune diseases and epiglottitis was analyzed by multivariate logistic regression. Compared with controls, patients with epiglottitis were much more likely to have preexisting Sjögren syndrome (adjusted odds ratio [aOR]: 2.37; 95% CI: 1.14-4.91; P = .021). In addition, polyautoimmunity was associated with an increased risk of epiglottitis (aOR: 2.08; 95% CI: 1.14-3.80; P = .018), particularly in those aged >50 years (aOR: 2.61; 95% CI: 1.21-5.66; P = .015). Conclusions: Among autoimmune diseases, we verify the association between epiglottitis and Sjögren syndrome in Taiwan. Furthermore, we present the novel discovery that patients with epiglottitis have an increased risk of polyautoimmunity, particularly those aged >50 years.
Introduction
Epiglottitis is a life-threatening medical emergency, with the potential risk of sudden airway obstruction secondary to the extensive inflammatory response in the epiglottis, aryepiglottic folds, and arytenoid regions.4 Although epiglottitis is a severe inflammatory disease, it also occurs in patients with a compromised immune system or an impaired inflammatory response.5,6 Epiglottitis in immunocompromised patients is reported together with various underlying conditions and is associated with higher mortality than epiglottitis in immunocompetent patients, which makes early diagnosis and treatment crucial.5 A primary defect in the immune system, the long-term use of immunosuppressive therapy, complement abnormalities, alterations to the innate and adaptive immune response, and splenic dysfunction may all contribute to patients with autoimmune diseases having increased susceptibility to encapsulated bacteria such as Haemophilus influenzae as well as to meningococcal, salmonella, and pneumococcal infection, which are major pathogens of infectious epiglottitis.5,7,8 Studies have highlighted the high prevalence of supraglottic involvement in patients with Sjögren syndrome (SS), Behçet disease, rheumatoid arthritis (RA), and systemic lupus erythematosus (SLE),10,12-16 and case reports have indicated that epiglottitis might present as the initial or flare manifestation of SLE.11,17 Although the aforementioned studies have discovered crucial connections between the 2 rare disease entities, the lack of a control group and small samples have limited the research. Considering expansion of these research findings, an association can be reasonably hypothesized to exist between autoimmune diseases and epiglottitis; however, clear clinical advice cannot yet be formulated because of the lack of robust evidence. In the current nationwide population-based case-control study, we investigated the association between epiglottitis and autoimmune diseases, particularly SS, RA, SLE, and ankylosing spondylitis (AS), so as to determine whether this association is substantial and thereby to identify early risk factors for acute epiglottitis in the Taiwanese population.
Study Design and Data Source
We performed this retrospective nationwide case-control study using data from the Taiwan National Health Insurance (NHI) Research Database (NHIRD). The Taiwan government established the NHI in 1995; it covered >99.6% of residents in 2018,18 and the NHIRD details all the medical information of NHI beneficiaries, including surgery and intervention procedure types, prescribed drugs, residence location, monthly income level, and disease diagnoses at clinic visits or hospitalization according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes.18 In the present study, the Longitudinal Health Insurance Database 2005 (LHID2005) data were employed to generate case and control groups for the formal analysis. Longitudinal Health Insurance Database 2005 is a representative subdatabase of the NHIRD and comprises the entire medical claims of 1 million insured people, who were randomly selected in 2005 through systematic sampling of the NHIRD.19 According to the reports from the National Health Research Institutes, all individuals in LHID2005 have nonsignificant differences in their age, sex, and health care costs compared with all enrollees in the NHIRD.19 This study was approved by the Institutional Review Board of Chang Gung Memorial Hospital (201901186B1), which waived the need for obtaining informed consent because NHIRD data are made suitable for public research through anonymization and deidentification by scrambling identification codes. This study was also conducted in compliance with the Declaration of Helsinki principles.
Selection of Cases and Controls Through Matching
A flowchart showing how patients were selected is presented in Figure 1. To form the case group, we enrolled from LHID2005 all patients with a new otolaryngologist-made diagnosis of epiglottitis (ICD-9-CM codes 464.3, 464.30, and 464.31) in an inpatient claim or at least 3 outpatient claims during January 1, 2000, to December 31, 2013. Only cases of epiglottitis diagnosed by otolaryngologists were included, and patients with concomitant deep neck infection were excluded to increase the accuracy of diagnosis. Finally, 2339 patients with epiglottitis comprised the case group. To ensure that our statistical power would be high, 1:4 matching of cases with randomly selected controls without epiglottitis was performed. The control group was obtained from LHID2005 according to the case index date. In total, the control group comprised 9356 patients, frequency-matched to the case group in age, sex, degree of urbanization level, and socioeconomic status.
Autoimmune Diseases and Other Covariates
The present study's primary outcome of interest was the prevalence of preexisting autoimmune diseases during the study period. Patients in both groups who had a diagnosis of a specific comorbid autoimmune disease were identified by searching for the corresponding ICD-9-CM codes (e.g. 720 and 720.0 for AS and 714 for RA). Polyautoimmunity is defined as at least 2 autoimmune diseases being recorded in a single patient during the study period.20 Whether each case patient had an autoimmune disease before epiglottitis and whether each control patient had an autoimmune disease before the matched index date were determined. As well as for age and sex, adjustments were made for degree of urbanization of residence, monthly income, and the following concomitant covariates related to epiglottitis21,22: liver cirrhosis (LC, ICD-9-CM 571.xx), chronic obstructive pulmonary disease (COPD, ICD-9-CM 490-496), coronary artery disease (CAD, ICD-9-CM 410-414), hypertension (HTN, ICD-9-CM 401-405), DM (ICD-9-CM 250.xx, A-code A-181), and chronic kidney disease (CKD, ICD-9-CM 582, 582.xx, 585, and 586). In order to enhance the accuracy of the diagnosis, a diagnostic code for the above-mentioned diseases in an inpatient care claim or at least 3 outpatient claims prior to the index date was used to indicate that a patient had the comorbid disease.23 We analyzed the covariates as binomial variables.
Statistical Analysis
Intergroup differences in baseline characteristics were determined using Pearson chi-square testing for nominal data. In order to investigate the association of autoimmune diseases with acute epiglottitis, conditional logistic regression analysis was performed with control for potential confounders (including sex, age, urbanization level, monthly income, COPD, DM, HTN, LC, CAD, and CKD) to calculate the adjusted odds ratio (aOR) and 95% CI. Logistic regression was performed in subgroup analysis to identify the associations between epiglottitis risk and the number of preexisting autoimmune diseases when adjusting for age, sex, income, degree of urbanization of residence, and covariates. All analyses were performed on SAS (version 9.4; SAS Institute). We considered a 2-sided P of <.05 to indicate the significance of the associated result.
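For readers who want to reproduce this style of analysis outside SAS, the sketch below fits an ordinary (unconditional) logistic regression and exponentiates the coefficients to obtain adjusted odds ratios with 95% CIs; the variable names, the synthetic data, and the use of Python/statsmodels instead of SAS conditional logistic regression are assumptions for illustration only.

```python
# Illustrative aOR computation via logistic regression (not the study's SAS
# code; unconditional logistic regression is used here for simplicity).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "epiglottitis": rng.integers(0, 2, n),   # outcome (case/control)
    "sjogren": rng.integers(0, 2, n),        # exposure of interest
    "age": rng.integers(18, 90, n),          # covariates...
    "male": rng.integers(0, 2, n),
    "copd": rng.integers(0, 2, n),
})

X = sm.add_constant(df[["sjogren", "age", "male", "copd"]])
fit = sm.Logit(df["epiglottitis"], X).fit(disp=False)

# exp(coefficient) gives the adjusted odds ratio; exp(CI) gives its 95% CI.
aor = np.exp(fit.params["sjogren"])
ci_low, ci_high = np.exp(fit.conf_int().loc["sjogren"])
print(f"aOR = {aor:.2f} (95% CI: {ci_low:.2f}-{ci_high:.2f})")
```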
Availability of Data and Material
The data underlying this study is from the NHIRD, which has been transferred to the Health and Welfare Data Science Center (HWDC).Interested researchers can obtain the data through formal application to the HWDC, Department of Statistics, Ministry of Health and Welfare, Taiwan (http://dep.mohw.gov.tw/DOS/np-2497-113.html).
Characteristics of Case and Control Cohorts
Between January 1, 2000, and December 31, 2013, the case group inclusion criteria were met by 2339 patients (1203 males and 1136 females) with a new diagnosis of acute epiglottitis coded by an otolaryngologist. The matched control group comprised 9356 patients without epiglottitis. Approximately half of both groups were male (51.4%), and the majority of patients were 50 years or younger (72.7%). The median age of all 11 695 study enrollees was 34 (interquartile range: 10-51) years. The case group had higher incidences of preexisting SLE (P = .043) and SS (P = .005, Table 1). The prevalence of the other 2 major autoimmune diseases, AS and RA, was higher in the case group, but not statistically significantly so (P = .167 and .067, respectively). For some diseases, such as dermatopolymyositis, Behçet disease, systemic sclerosis, and myasthenia gravis, the number of patients was too small to be analyzed. Significant intergroup differences were also discovered in the prevalence of polyautoimmunity (P = .016), COPD (P < .001), DM (P = .004), HTN (P = .031), LC (P = .002), and CAD (P = .008).
Characteristics of Patients With Epiglottitis
The case group's baseline characteristics are presented in Table 2, which shows that the diagnosis was made at similar ages in the male and female patients with epiglottitis (P = .734). Hashimoto thyroiditis, multiple sclerosis, and myasthenia gravis were only found in the female patients with epiglottitis, and polyautoimmunity was more common in the female patients with epiglottitis (P = .016, Table 2).
Association Between Autoimmune Diseases and Epiglottitis
Compared with the controls, for the patients with epiglottitis, the odds of preexisting autoimmune diseases were greatest for SS (aOR: 2.37; 95% CI: 1.14-4.91; P = .021, Table 3) after adjustments for age, sex, monthly income, degree of urbanization of residence, and covariates. Because epiglottitis may not be associated with an autoimmune disease diagnosed shortly before it attacked, we repeated the analysis excluding patients with autoimmune disease diagnosed within 6 and 12 months before epiglottitis to ensure a lag time between the preexisting autoimmune disease and epiglottitis. The results were consistent and are presented in the Supplement File. Other concomitant covariates, including COPD and LC, also had significant associations with epiglottitis. Risk of epiglottitis was not found to be significantly associated with a history of RA, SLE, AS, uveitis, Hashimoto thyroiditis, Graves disease, multiple sclerosis, or psoriasis (P = .170, .775, .261, .829, .778, .461, .808, and .465, respectively).
Associated Autoimmune Disease Counts Stratified by Age and Sex
Table 4 details the subgroup analysis on the association between the risk of epiglottitis and polyautoimmunity. Compared with no autoimmune disease, patients with polyautoimmunity were susceptible to epiglottitis (aOR: 2.08; 95% CI: 1.14-3.80; P = .018). We then performed age-group stratification with a cutoff age of 50 years,24 which demonstrated a significant association between epiglottitis and polyautoimmunity among those aged 50 years or older (aOR: 2.61; 95% CI: 1.21-5.66; P = .015); this association was absent in the younger age-group. In both sexes, having more than one autoimmune disease was associated with elevated epiglottitis risk, but the result was nonsignificant (aOR: 2.39; 95% CI: 0.84-6.81; P = .103 for men, and aOR: 1.98; 95% CI: 0.94-4.15; P = .072 for women).
Discussion
Based on our literature review, the present research is the first population-based study on the epiglottitis-autoimmune disease association. In working to this end, obtaining a sufficiently large sample with a long observation period from only one medical institution would be difficult given the rarity of the diseases investigated. Accordingly, we used a nationwide database to recruit enough patients with epiglottitis while minimizing selection bias. In addition, we adopted the aOR to compare the case and control groups to minimize the effects of potential covariates. Once various covariates had been accounted for (including COPD, CKD, DM, HTN, CAD, and LC), associations were found of preexisting SS and polyautoimmunity with elevated risk of epiglottitis in this case-control study involving 11 695 people. According to the age-stratified analysis, the polyautoimmunity-epiglottitis association was strongest in the 50 years and older age-group, and this may be explained by an overlap of the age distributions of adult patients with epiglottitis and those with more than one autoimmune disease.24,27 This study extends the literature on epiglottitis and can raise awareness regarding the association of SS and polyautoimmunity with high epiglottitis risk. For patients with epiglottitis and repeated episodes or a refractory treatment course, underlying comorbid SS or polyautoimmune diseases should also be considered, particularly in those more than 50 years old.
Patients with autoimmune disease are susceptible to a higher risk of respiratory tract infection caused by Streptococcus pneumoniae and H. influenzae,28,29 common pathogens involved in epiglottitis. The clinical presentation of epiglottitis in patients with autoimmune or immunocompromised disorders follows a highly variable course, ranging from a mild symptomatology to fulminant, life-threatening airway compromise.5,26 Odynophagia, dysphonia, stridor, and dysphagia should alert primary care physicians to the presence of epiglottitis,30 which warrants appropriate differential diagnosis, timely referral, and thorough radiologic or fiberoptic laryngoscopic evaluation for early diagnosis and intervention.10,31 Epiglottitis treatment, particularly in the case of patients with comorbid autoimmune diseases, is centered on airway management, given that laryngeal involvement in autoimmunity may exacerbate the upper airway obstruction.25,32 An 8-year retrospective review revealed that 13.2% and 3.6% of patients admitted for epiglottitis underwent intubation and tracheostomy, respectively.21 In patients with both epiglottitis and autoimmune disease, stridor and respiratory distress may result either from supraglottic inflammation and edema or from cricoarytenoid arthritis, both of which are robust predictors of airway intervention.13,32,33 Subtler signs and symptoms, such as rapid symptom onset, tachypnea, and tachycardia, which may mislead clinicians, were also reported to predict airway compromise in a retrospective review.33 The use of corticosteroid as an adjunct in epiglottitis treatment21 may not only control the underlying autoimmune comorbidities but also shorten the intensive care unit stay and overall hospital stay.33,34 However, steroid use in the management of epiglottitis is controversial.35 Further prospective studies are warranted to probe the effects of steroids in patients with epiglottitis comorbid with autoimmune disease. The association between autoimmune disease and epiglottitis has been investigated by few researchers. Bizaki et al retrospectively investigated 308 patients with epiglottitis and discovered that 20.8% of the patients had concomitant autoimmune disease.2 They indicated that unusual pathogens may lead to epiglottitis in immunocompromised patients with increased risk of death, but their study lacked a control group, making further analysis of the association between the 2 disease entities impossible. Although the exact pathogenesis linking epiglottitis and autoimmune disease remains unclear, our study obtained the novel finding that epiglottitis risk is much higher in patients with polyautoimmunity than in those without autoimmune disease, particularly in patients aged >50 years. This novel observation may be partially explained by the prevalence of antinuclear antibodies, the most common biomarker of autoimmunity, being higher in older adults (age ≥50 years)24 and by patients with polyautoimmunity having more complicated underlying immune dysregulation and receiving more intense treatment, such as immunosuppressant or immunomodulatory drugs, leading to vulnerable respiratory immunity and higher susceptibility to epiglottitis after a long disease course. Polyautoimmunity is also a frequent condition in patients with SS,36 and our study results showed that among autoimmune diseases, SS had the strongest association with the risk of epiglottitis. Increasing numbers of studies suggest that SS is associated with several lower respiratory tract manifestations, such as chronic interstitial lung disease and tracheobronchial disease, and recurrent respiratory infections are reported in 10% to 35% of patients with SS.37 However, both the upper aerodigestive tract and the lower airway may be involved in the disease course of SS. From a population-based study, Chang et al reported SS to be a risk factor for chronic sinusitis, with the risk being increased 2.5-fold.38 Thick secretion in the upper respiratory tract may lead to impaired mucociliary function39 and exacerbated upper airway obstruction.40 Decreased amylase and carbonic anhydrase secretions may further cause a deficit of supraglottic innate immunity in patients with SS.41 In addition, Belafsky and Postma discovered that individuals with SS were predisposed to laryngopharyngeal reflux, which may lead to chronic mucosal destruction and inflammation of the supraglottic area.12 In a histopathological examination, patients with SS had severe glandular atrophy and marked lymphocytic infiltration in the supraglottic airway, suggesting that SS may be essential to the pathogenesis of noninfectious inflammatory epiglottitis.42
These study results may explain the findings in our study and provide crucial clues for elucidating the pathogenesis of epiglottitis from an immunologic perspective. Nevertheless, our results do not appear to be completely consistent with those of prior studies,9,10 which were mainly case reports or series, because we did not observe a significant association between epiglottitis and RA or SLE, possibly due to ethnic differences and the lack of statistical power.44-46 However, determining whether these diseases were associated with epiglottitis was difficult in our study because of the small sample size.
The present study has various strengths. First, it included a large sample (2339 patients with epiglottitis) from a nationwide population database and had a long study duration. Epiglottitis studies performed in tertiary referral centers usually lack generalizability because of referral bias; this population-based study broadens the general applicability of this issue. Second, we considered epiglottitis to have been confirmed only when the diagnosis had been specifically coded in claims data by an otolaryngologist in addition to an ICD-9 code being present in the database. In addition, we did not include patients with concomitant deep neck infection; therefore, our results should not be biased by the presence of these diseases. There are several limitations in this study. First, despite the use of a national database in the present study, the sample size was small because of the rarity of epiglottitis. The even lower number of patients with autoimmune comorbidities precluded us from analyzing their associations with epiglottitis. Second, detailed laboratory data indicating inflammation, infection, and imaging reports are not currently available in the database; hence, we could not examine whether the severity of autoimmune disease was correlated with the severity of epiglottitis. Third, no data are available in the NHIRD regarding certain potential risk factors for epiglottitis and impaired immunity, such as cigarette smoking and alcohol consumption.47 However, a study identified COPD as an independent factor associated with cigarette smoking48; adjustment for COPD likely reduced the potential impacts of smoking on the present results. Fourth, the overlapping nature of autoimmune diseases, particularly SS,49 may lead to a degree of bias in favor of SS and polyautoimmunity as contributing factors of epiglottitis. In view of this, we performed subgroup analysis across both sexes and patients younger or older than 50 years, and more large-scale prospective studies are warranted to further clarify our observations. Finally, the lack of a definite etiology of epiglottitis in the NHIRD is another limitation that precluded us from comparing diseases, such as immunoglobulin G4-related disease,50 with SS in causing epiglottitis. Although the study results supported our hypothesis and reached statistical significance, the reader should consider the aforementioned limitations when interpreting the findings.
In conclusion, this is the first study to investigate the association between autoimmune disease and epiglottitis in Taiwan using a real-world database. We confirm the association between epiglottitis and SS. A novel finding is that patients with epiglottitis, particularly those older than 50 years, are at higher risk of polyautoimmunity. The present results serve as a reference for clinicians with regard to the early diagnosis and treatment of epiglottitis in patients with autoimmune comorbidities. The mechanism underlying SS, polyautoimmunity, and the development of epiglottitis warrants further investigation.
Figure 1. Flow chart of the case and control group selections.
Table 1. Demographic Characteristics of Patients With Epiglottitis and Matched Control Group.
a P values calculated with Pearson chi-square.
Table 2. Characteristics of Patients With Epiglottitis.
Abbreviations: AS, ankylosing spondylitis; IQR, interquartile range; RA, rheumatoid arthritis; SLE, systemic lupus erythematosus; SS, Sjögren syndrome. a P values for male versus female were calculated by Wilcoxon rank sum test for continuous variables and Pearson chi-square test or Fisher exact test for nominal data. All P values were 2-tailed.
Table 4. Subgroup Analyses for Associations Between Epiglottitis and Autoimmune Disease Counts.
a The model is adjusted for sex, age, urbanization level, monthly income, and covariates.
Table 3. Association Between Epiglottitis and Autoimmune Diseases.
Abbreviations: AS, ankylosing spondylitis; OR, odds ratio; RA, rheumatoid arthritis; SS, Sjögren syndrome; SLE, systemic lupus erythematosus. a Odds ratio was adjusted for sex, age, urbanization level, monthly income, and covariates before index dates. | 2021-08-01T06:17:07.651Z | 2021-07-30T00:00:00.000 | {
"year": 2021,
"sha1": "9a3ffe4fc47ab993b9a6c2fb826311ba34582cfc",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/01455613211033689",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "33888b657036dd0d29baa065fd1d2a90bc36fa7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247473357 | pes2o/s2orc | v3-fos-license | Monitoring fluid intake by commercially available smart water bottles
Fluid intake is important to prevent dehydration and reduce recurrent kidney stones. There has been a trend in recent years to develop tools to monitor fluid intake using “smart” products such as smart bottles. Several commercial smart bottles are available, mainly targeting health-conscious adults. To the best of our knowledge, these bottles have not been validated in the literature. This study compares four commercially available smart bottles in terms of both performance and functionality. These bottles are the H2OPal, HidrateSpark Steel, HidrateSpark 3, and Thermos Smart Lid. One hundred intake events for each bottle were recorded and analyzed versus ground truth obtained from a high-resolution weight scale. The H2OPal had the lowest Mean Percent Error (MPE) and was able to balance out errors throughout multiple sips. The HidrateSpark 3 provided the most consistent and reliable results, with the lowest per sip error. The MPE values for HidrateSpark bottles were further improved using linear regression, as they had more consistent individual error values. The Thermos Smart Lid provides the lowest accuracy, as the sensors do not extend through the entire bottle, leading to many missed recordings.
Methods
Each bottle was tested in two phases: (1) a controlled sip volume phase and (2) a free-living phase. In both phases, the results recorded by the bottle (obtained from the products' mobile apps used on Android 11) were compared to the ground truth obtained using a 5 kg weight scale (Starfrit Electronic Kitchen Scale 93756). All bottles were calibrated before data collection using the apps. In Phase 1, sip sizes ranging from 10 to 100 mL in increments of 10 mL were measured in a random order, five times each, for a total of 50 measurements per bottle. These events were not actual drinking events by a human; the liquid was poured out so the volume of each sip could be better controlled. In this phase, the bottles were recalibrated if the sip error was larger than 50 mL and were re-paired if the app lost the Bluetooth connection with the bottle. In the free-living phase, a single user drank from the bottles freely during the day, taking varying sip sizes of their choice. This phase also consisted of 50 sips over time, but not all in succession. Therefore, each bottle had a dataset of 100 measurements in total.
To determine summative liquid intake and ensure proper daily hydration, it is more important to have accurate volume intake detection throughout the whole day (24 h) than for each individual sip. However, to determine just-in-time intervention prompts, there is a need for each sip to have a low error, as done in the study by Conroy et al.2 If a sip is not recorded or is poorly recorded, it is crucial that the bottle can balance out the volume in the next recordings. Therefore, the error (measured volume − actual volume) was adjusted manually. For example, assume that the subject drinks 10 mL and the bottle reports 0 mL, but then subsequently the subject drinks 20 mL and the bottle reports 30 mL total; then the adjusted error is 0 mL. The percent-error metrics are defined as

Sip MPE = (1/n) Σ_{i=1}^{n} [(S_i^est − S_i^act) / S_i^act] × 100%,
Cumulative MPE = [(C_k^est − C_k^act) / C_k^act] × 100%,

where S_i^act and S_i^est are the actual and estimated intake volume for the i-th sip, respectively, and n is the total number of sips; C_k^act and C_k^est represent the cumulative intake volume from the last k sips. The Sip MPE looks at the percent error for each individual sip, and the Cumulative MPE looks at the total percent error over time. According to the results in Table 1, the H2OPal has the minimum number of missed recordings, the lowest Sip MPE, and the lowest Cumulative MPE. When determining the total intake over a period of time, the Mean Error is preferred as a comparative metric over the Mean Absolute Error (MAE), because it accounts for the bottle's ability to recover from a poor measurement over time when recording the subsequent measurements. The Sip MAE has also been included for applications where the accuracy of each sip is important, as it calculates the absolute error of each sip. The Cumulative MPE also measures how well the measurements balance out over the entire phase and does not penalize individual sips. Another observation was that 3 out of 4 bottles underestimated the volume intake per sip, shown in Table 1 with negative numbers.
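A minimal sketch of how these metrics could be computed from paired measurement arrays is shown below; the array values and function names are illustrative assumptions, not the authors' analysis scripts.

```python
# Illustrative computation of the error metrics used in this comparison
# (assumed helper code, not the authors' scripts).
import numpy as np

def sip_mpe(actual, estimated):
    """Mean percent error over individual sips (signed)."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return np.mean((estimated - actual) / actual) * 100.0

def cumulative_mpe(actual, estimated):
    """Percent error of the cumulative intake over all recorded sips."""
    c_act, c_est = np.sum(actual), np.sum(estimated)
    return (c_est - c_act) / c_act * 100.0

def sip_mae(actual, estimated):
    """Mean absolute error per sip, in mL."""
    return np.mean(np.abs(np.asarray(estimated, float) - np.asarray(actual, float)))

# A missed recording can be modeled as an estimated sip of 0 mL.
actual = [10, 20, 50, 80]      # ground truth from the weight scale (mL)
estimated = [0, 30, 45, 78]    # volumes reported by the bottle's app (mL)
print(sip_mpe(actual, estimated), cumulative_mpe(actual, estimated), sip_mae(actual, estimated))
```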
Results and discussion
The Pearson correlation coefficients (r) for all bottles are also shown in Table 1. The HidrateSpark 3 provided the highest correlation coefficient. Although the HidrateSpark 3 had some missed recordings, the majority of those were small sips (< 40 mL), so they did not affect the correlation coefficient as heavily. The H2OPal and HidrateSpark Steel both had high correlations with r = 0.88, whereas the Thermos Smart Lid had the lowest correlation (r = 0.75).
The Bland-Altman plots in Fig. 2 also confirmed that the HidrateSpark 3 had the smallest Limits of Agreement (LoA) compared to the other three bottles. The LoA analyzes the extent to which the actual and measured values agree. In addition, almost all measurements were in the range of the LoA, confirming that this bottle provides consistent results, as shown in Fig. 2c. However, most of the values are below zero, meaning that generally the sip sizes are being underestimated. The same is true for the HidrateSpark Steel in Fig. 2b, where most of the error values are negative. Therefore, these two bottles provided the highest MPEs and Cumulative MPEs compared to the H2OPal and Thermos Smart Lid, where the errors were distributed above and below 0, as seen in Fig. 2a,d.
The HidrateSpark Steel and H2OPal had similar standard deviations of 20.04 mL and 21.41 mL, respectively. Figure 2a,b also demonstrated that the HidrateSpark Steel's values bounced consistently around the mean but generally stayed within the LoA region, while the H2OPal had more values outside of the LoA region. The Thermos Smart Lid had the largest standard deviation of 35.42 mL, and more than 10% of its measurements were outside of the LoA region, as shown in Fig. 2d. This bottle provided the minimum Sip Mean Error and a relatively small Cumulative MPE, despite having the highest number of missed recordings and the largest standard deviation. The Thermos SmartLid had many missed recordings because the sensor straw does not extend to the bottom of the container, causing missed recordings when the liquid contents are below the sensor stick (around 80 mL). This should lead to underestimating the fluid intake; however, the Thermos is the only bottle that had a positive MPE and Sip Mean Error, meaning that the bottle is overestimating the fluid intake. Therefore, the reason the Thermos had a very low mean sip error is that nearly every recorded measurement is a large overestimation. When these overestimations are averaged together with the many missed sips that are not recorded at all (or "underestimated"), the mean result balances out. When excluding the missed recordings from the calculation, the Sip Mean Error became +10.38 mL, confirming that there is a large overestimation of individual sips. Though this may appear to be positive, in reality this bottle is inaccurate at estimating individual sips and unreliable, as it misses many drinking events. Additionally, as seen in Fig. 2d, the Thermos SmartLid appears to have an increased error as the sip size increases.

In summary, the H2OPal is the most accurate at estimating sips over time and was the most reliable at capturing recordings. The Thermos Smart Lid was the least accurate and missed more sips than the other bottles. The HidrateSpark 3 bottles had more consistent error values; however, they underestimated the majority of sips, leading to poorer performance over time.
Calibration. The results suggest that the bottles may have a certain offset that could be compensated using a calibration algorithm. This is especially true for the HidrateSpark bottles, which have small standard deviations of errors and always underestimate individual sips. The least-squares (LS) method was used with Phase 1 data, while excluding any missed recordings, to obtain the offset and gain values. The obtained equation was applied to the measured sip intakes of Phase 2 to calculate the actual value and determine the error after calibration. Table 2 shows that the calibration improved the Sip Mean Error for both HidrateSpark bottles, but not for the H2OPal or Thermos Smart Lid.
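The sketch below illustrates this kind of gain/offset calibration via a least-squares fit; the sample sip volumes are invented for demonstration and are not the study's data.

```python
# Illustrative gain/offset calibration via least squares (assumed example data).
import numpy as np

# Phase 1: controlled sips; missed recordings (reported 0 mL) are excluded.
actual = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], float)
measured = np.array([6, 14, 23, 31, 40, 49, 58, 66, 75, 84], float)

# Fit actual = gain * measured + offset.
gain, offset = np.polyfit(measured, actual, 1)

# Phase 2: apply the calibration to new measurements from the bottle.
phase2_measured = np.array([18, 42, 77], float)
calibrated = gain * phase2_measured + offset
print(f"gain={gain:.3f}, offset={offset:.2f} mL, calibrated={calibrated.round(1)}")
```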
Bottle liquid level comparison. In Phase 1, each bottle was refilled multiple times to complete all the measurements, so it is possible that the calculated MAE is impacted by the fill level of the bottle. To determine this, each bottle was divided into three liquid levels, high, medium, and low, based on the total volume of each bottle. For the measurements in Phase 1, a one-way ANOVA test was performed to determine whether the liquid level had a significant effect on the absolute error. For the HidrateSpark 3 and Steel, there were no significant differences in the error across the three categories. For the H2OPal and the Thermos bottles, there was a borderline significant difference (p < 0.05) using the Welch test for unequal variance. Subsequently, a multiple-comparison Tukey HSD test was performed on these two bottles. In both cases, the significant difference was between the "high" level and the "low" level categories. For the H2OPal, the "high" category had the largest mean and standard deviation, meaning that there is a higher error when the bottle is more filled. However, in the Thermos, the "low" category had the highest error. This is likely because the sensor does not extend to the bottom of the bottle.
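A sketch of this kind of level-wise comparison is given below, using one-way ANOVA followed by Tukey's HSD; the error samples are fabricated for illustration.

```python
# Illustrative one-way ANOVA + Tukey HSD across fill-level groups
# (fabricated absolute-error samples; not the study's measurements).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
high = np.abs(rng.normal(25, 12, 15))   # |error| in mL at high fill level
medium = np.abs(rng.normal(15, 8, 15))
low = np.abs(rng.normal(12, 7, 15))

# One-way ANOVA across the three fill-level groups.
f_stat, p_val = stats.f_oneway(high, medium, low)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# If significant, follow up with pairwise Tukey HSD comparisons.
errors = np.concatenate([high, medium, low])
levels = ["high"] * 15 + ["medium"] * 15 + ["low"] * 15
print(pairwise_tukeyhsd(errors, levels))
```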
Real world vs simulated phases. A two-tailed t-test was conducted to compare the Phase 1 and Phase 2 errors for each bottle. For all bottles, we obtained p > 0.05, meaning that the two populations were not significantly different. However, it was observed that the number of missed recordings was much greater in Phase 2 for both HidrateSpark bottles. For the H2OPal, the numbers of missed recordings were almost equal (2 vs 3), and for the Thermos SmartLid there were fewer missed recordings in the free-living scenario (6 vs 10). Since the HidrateSpark bottles were both improved after calibration, a t-test was also conducted after calibration. For the HidrateSpark 3, the Phase 1 and 2 errors were borderline significantly different (p = 0.046). This is most likely due to the larger number of missed recordings in Phase 2 compared to Phase 1.
Usability analysis and limitations. This section provides insight into the usability of the bottles and their apps, as well as additional functionality information. Although the accuracy of the bottles is important, the usability factors are also of interest when choosing a bottle.
App performance and functionality. The HidrateSpark 3 and HidrateSpark Steel are equipped with LEDs that blink to remind the users to drink if they are not on track to meet their goal, or to blink a certain number of times a day (set by the user). They can also be set to blink every time the user drinks. The H2OPal and Thermos Smart Lid do not have any visual feedback to remind the user to drink. However, all purchased bottles have mobile notifications to remind the users to drink via the mobile app. The number of notifications per day can be customized in the HidrateSpark and H2OPal apps.
The HidrateSpark 3 and Steel use a linear trend to guide the user on when to drink, and give an hourly suggested target the user should aim for to reach the goal by the end of the day. The H2OPal and Thermos Smart Lid only provide one total daily target. In all bottles, if the device is not connected to the app by Bluetooth, the data is stored locally and syncs once paired.
None of these four bottles focuses on elderly hydration. Additionally, the formulae the bottles use to determine the daily intake goal were not available, so it is difficult to determine if they are appropriate for seniors. The bottles are mostly large and heavy and are not tailored to seniors. The use of the mobile app may also not be ideal for seniors, though it could be useful for researchers to collect data remotely.

Hardware and software limitations. None of the bottles could determine whether the liquid was consumed, discarded, or spilled. All bottles also needed to be placed down on a surface after each sip to record the intake accurately. This means that it is possible to miss drinks if the bottle is not placed down, especially if it is refilled. Another limitation is that the devices needed to be re-paired regularly with the app to synchronize the data. The Thermos needed to be re-paired every time the app was opened, and the HidrateSpark bottles often had difficulty finding the Bluetooth connection. The H2OPal was the easiest to re-pair with the app if the connection was interrupted. All bottles were calibrated before the testing began, and all had to be recalibrated at least once during the process. The HidrateSpark bottles and H2OPal had to be emptied and filled fully to calibrate.
None of the bottles offered the option to download or save data long term. Additionally, none could be accessed via an API.
Battery. The HidrateSpark 3 and H2OPal use replaceable lithium-ion batteries, and the HidrateSpark Steel and Thermos SmartLid use rechargeable batteries. The rechargeable batteries should last up to 2 weeks on a full charge, as stated by the manufacturers; however, the Thermos SmartLid had to be charged almost every week when used frequently. This is a limitation, as many people will not remember to charge the bottles regularly.
Additional factors. There are various factors that impact the selection of a smart bottle, especially when the users are elderly. The heaviness and bulkiness of the bottle is an important factor, as it needs to be easy to use for frail older adults. As mentioned, these bottles are not tailored to older adults. The price and the amount of liquid each bottle can hold are also factors. Table 3 shows the height, weight, liquid volume, and price of each bottle. The Thermos Smart Lid is the cheapest and lightest, as it is made entirely of lighter plastic. It can also hold the largest amount of liquid of the four bottles. Conversely, the H2OPal is the tallest, heaviest, and most expensive among the study bottles.
Conclusion and future works
Commercially available smart bottles are very useful to researchers, as there is no need to prototype a new device. Though there are many smart water bottles available, the most prevalent issue was that the data or raw signals were not accessible to the users, and only some had the results displayed in a mobile app. The development of a widely available smart bottle with high accuracy and completely accessible data is needed, especially one tailored to older adults. Of the four bottles tested, out of the box the H2OPal had the lowest Sip MPE, Cumulative MPE, and number of missed recordings. The HidrateSpark 3 had the highest linearity, the smallest standard deviation, and the lowest MAE. The HidrateSpark Steel and HidrateSpark 3 can be simply calibrated manually with the LS method to decrease the Sip Mean Error. For more accurate recording of individual sips, the HidrateSpark 3 is the preferred bottle, and for more consistent measurements over a period of time, the H2OPal is preferred. The Thermos SmartLid had the most unreliable performance, with the largest number of missed sips and a large overestimation of individual sips.
This study is not without limitations. In a real-world scenario, many users will drink from other vessels, especially for hot liquids, store-bought beverages, and alcohol. Future work should evaluate how the form factor of each bottle might affect the error to guide smart water bottle design. | 2022-03-17T06:24:07.444Z | 2022-03-15T00:00:00.000 | {
"year": 2022,
"sha1": "420c0f245bcbdf0af954e8007416701ee1e0cb21",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-022-08335-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd99113bc6d152d2728720af57a60b5090ee7fe1",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221376818 | pes2o/s2orc | v3-fos-license | Initialization Process of a Power System Transient Simulation Scheme for Stability Studies
The initialization process of a novel power system transient simulation scheme for stability studies is put forward by further developing a "time-domain harmonic power-flow algorithm". The initialization process is formulated as an algebraic problem to ensure that the power system under study is in steady state and operated at a specified operating point at the beginning of a transient simulation run. The algebraic problem is then solved efficiently by a preconditioned finite difference Newton-GMRES method. Case studies verify the validity and efficiency of the initialization process. The proposed initialization process is general-purpose and can be applied to other power system transient simulation schemes.
INTRODUCTION
Transient simulation is a powerful tool for studying dynamic behavior of power systems [1]-[4]. For stability studies, a transient simulation run is typically required to start from the steady state [3], [4]. Furthermore, power flow conditions are also to be satisfied, which characterize the operating point of the system [3], [4]. The initialization process of transient simulation for stability studies has to meet these requirements.
For three-phase balanced power systems, the initialization process is mature and available in the literature [3], [4]. Nevertheless, that for general unbalanced power systems is much more complicated and difficult to achieve [1], [5]. Power flow computation [3], [4] and its multiphase version [6] are no longer applicable due to the presence of harmonics [7]. Time-domain techniques for steady-state computation have been applied to power systems [8], [9]; however, the power flow conditions were not considered. How to initialize a possibly unbalanced power system is also a huge challenge encountered by the novel transient simulation scheme proposed in an earlier work of the authors [10], which is based on frequency-response-optimized integrators considering the second-order derivative and is especially suitable for stability studies on general unbalanced power systems.
In fact, the initialization process for unbalanced power systems may be linked to the "time-domain harmonic power-flow algorithm" proposed in [11]. Based on the sensitivity circuit analysis [12], [11] formulates a problem aiming at obtaining the steady-state solution for general power systems so that the state variables are periodic with respect to the nominal fundamental period while the power flow conditions are satisfied. That paper then uses Newton's method to solve the problem. The Jacobian matrix is constructed by the finite difference method at the first iteration; it is updated with Broyden's method in the later iterations. The initial guess of the unknowns is done by a flat start. This paper further develops the work of [11] to put forward the initialization process of the novel transient simulation scheme [10]. Contributions of this paper are threefold. First, the initial value condition is added to account for variables which cannot be periodic with respect to the nominal fundamental period. Based on an alternative and straightforward derivation, the initialization process is formulated as an algebraic problem to simultaneously tackle the periodic boundary value condition, the initial value condition, and the power flow conditions, which is to be solved via Newton's method. Second, a preconditioned finite difference generalized minimal residual (GMRES) method is introduced to solve each Newton iteration [13], [14]. The combination of Newton's method and the finite difference GMRES method is referred to as the finite difference Newton-GMRES method. Compared to constructing the Jacobian matrix by the finite difference method, the approach adopted in this paper significantly improves the computational efficiency. It is enhanced by an initial guess achieved based on three-phase power flow computation [6], [16], [17]. Third, the functionality of the novel transient simulation scheme [10] is strengthened in that it is able to start simulation runs from true steady states satisfying power flow conditions. The remainder of this paper is organized as follows. Section II provides a brief overview of the objective of the initialization process. Section III formulates the initialization process as an algebraic problem. Section IV introduces the preconditioned finite difference Newton-GMRES method to solve the problem. Section V verifies the validity and efficiency of the proposed initialization process via case studies. Section VI concludes the paper and points out some directions for future research.
II. OBJECTIVE OF INITIALIZATION PROCESS
To start a transient simulation run, the initial values of the variables, the external inputs to the power system under study, and its parameters have to be specified. Some parameters are intrinsic to the power system, such as the resistance and inductance of a transmission line. These parameters can be found in the input data for the simulation. Other parameters and the external inputs depend on the specific operating point. Examples of the dependent parameters include the resistance and inductance of a static load. Examples of the external inputs include the voltage reference of an excitation system and the load reference of a turbine-governor system. The objective of the initialization process is to give proper values to the variables, the dependent parameters, and the external inputs.
As mentioned in Section I, stability studies require these quantities to be provided so that the power system is initially in steady state. In steady state, the dependent parameters and the external inputs should be constant. However, their specific values have to be determined so that the power system is operated at a certain operating point, which is described by power flow conditions. The following section will detail how to meet these requirements in the initialization process.
A. Periodic Boundary Value Condition
Note that the nominal fundamental period is also a period of any higher-order harmonics. The DC offset, if there is any, is constant in steady state. Therefore, after a nominal fundamental period, a typical quantity in the power system regains its original value. Suppose that x is a typical variable in the simulation. x should satisfy the periodic boundary value condition

x(t_0 + T) = x(t_0),   (1)

where t_0 is the starting time and T is the nominal fundamental period. The residual function is

f = x(t_0 + T) − x(t_0).   (2)
B. Initial Value Condition
Although most of the quantities in the power system satisfy the periodic boundary value condition, some quantities are exceptional. For example, the rotor angle of an induction machine with respect to the synchronously rotating reference frame generally cannot regain its original value after a nominal fundamental period. If these quantities are considered as variables in the simulation, their initial value has to be specified, resulting in the initial value condition. Suppose that y is one of these variables and y_0 is its specified initial value. Then

y(t_0) = y_0,   (3)

and the residual function is

f = y(t_0) − y_0.   (4)
C. Power Flow Conditions
Power flow conditions usually appear in the input data for the simulation. They specify the terminal conditions of individual devices in steady state in terms of voltage magnitude or angle, real or reactive power injection or consumption, and so on. Several related comments are made as follows.
First, power flow conditions are conventionally considered as specified at busses or nodes in the power system; terms such as "swing bus", "PV bus", and "PQ bus" are used. In fact, power flow conditions are assigned to devices. A bus or a node itself generates or consumes no power; it is the devices connected to the bus or node that generate or consume power. Furthermore, a three-phase synchronous generator and a three-phase load may be connected to the same bus, but their power flow conditions are specified separately. Second, power flow conditions in the existing input data are mainly prepared for power flow computation. Power flow computation is based on the phasor representation at nominal fundamental frequency, whether it is for transmission or distribution systems, positive sequence or three-phase [3], [4], [6], [15]-[17]. Therefore, in this paper, the voltage magnitude and voltage angle in power flow conditions are understood as the magnitude and angle of the voltage phasor, respectively; the complex power is calculated by multiplying the voltage phasor and the conjugate of the current phasor; the real power and reactive power are respectively the real part and imaginary part of the complex power. Note that voltages and currents are given as waveforms by the novel transient simulation scheme [10]. Consequently, a waveform-to-phasor conversion is necessary to take the power flow conditions into account. Third, most power flow conditions for three-phase devices are prepared for positive sequence power flow computation [3], [4], [15]. Therefore, the related voltage phasor magnitude and angle, real and reactive power are understood as the positive sequence values in this paper. As a result, a phase-to-sequence conversion [3], [4], [15] is performed to extract the positive sequence information so that the later calculations can be carried out. Fourth, the assumptions made in the second and third comments are a compromise with the existing input data. If more detailed data become available in the future, the specific calculations will need minor modifications, but the overall initialization process will remain valid and feasible.
Power flow conditions for individual types of devices are discussed as follows.
1) Vθ device: At least one device in the power system has to be assigned a voltage phasor magnitude and angle at its terminal so that it can serve as an angle reference for the system. Such a type of device is called a Vθ device. (5) and (6) are to be satisfied:

Re{V_device} = V cos(θ),   (5)
Im{V_device} = V sin(θ),   (6)

where V_device is the voltage phasor across the device, V is the specified voltage phasor magnitude, and θ is the specified voltage phasor angle. The residual functions are

f = Re{V_device} − V cos(θ),   f = Im{V_device} − V sin(θ).

2) PV device: Some devices are assigned a real power output and the terminal voltage phasor magnitude. Such a type of device is called a PV device. The real power generation of a PV device is calculated as

S_device = V_device · I_device*,   P_device = Re{S_device},

where S_device is the complex power generation of the device, I_device is the current injection phasor from the device, * denotes the conjugate, and P_device is the real power generation of the device. (11) and (12) are to be satisfied:

P_device = P,   (11)
|V_device| = V,   (12)

where P and V are the specified real power generation and voltage phasor magnitude, respectively.

3) PQ device: Some devices are assigned real power and reactive power generation or consumption. Such a type of device is called a PQ device. Note that power generation is understood as negative power consumption. The specified power consumption can be made voltage-dependent. The well-known ZIP load [3] is an example. Specifically, for a ZIP load,

P = P_0 [a_P (|V_device|/V_0)² + b_P (|V_device|/V_0) + c_P],

with an analogous expression for Q, where V_0 is the nominal voltage magnitude and the coefficients weight the constant-impedance, constant-current, and constant-power components. The power consumption of a PQ device is calculated as

S_device = V_device · I_device*,   P_device = Re{S_device},   Q_device = Im{S_device},

where S_device is the complex power consumption of the device, I_device is the current extraction phasor, P_device is the real power consumption of the device, and Q_device is the reactive power consumption of the device. (20) and (21) are to be satisfied:

P_device = P,   (20)
Q_device = Q,   (21)

where P and Q are the specified real and reactive power consumption.
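As an illustration of how these device conditions translate into residuals on complex phasors, a small sketch follows; the device data and function names are invented for demonstration and do not reproduce the paper's implementation.

```python
# Illustrative residual evaluation for V-theta, PV and PQ devices using
# complex phasors (invented example values; not the paper's code).
import cmath

def vtheta_residuals(v_dev, v_mag, theta):
    """Residuals of the specified terminal voltage magnitude and angle."""
    target = cmath.rect(v_mag, theta)
    return v_dev.real - target.real, v_dev.imag - target.imag

def pv_residuals(v_dev, i_dev, p_spec, v_mag):
    """Residuals of specified real power generation and voltage magnitude."""
    s = v_dev * i_dev.conjugate()       # complex power S = V * conj(I)
    return s.real - p_spec, abs(v_dev) - v_mag

def pq_residuals(v_dev, i_dev, p_spec, q_spec):
    """Residuals of specified real and reactive power consumption."""
    s = v_dev * i_dev.conjugate()
    return s.real - p_spec, s.imag - q_spec

v = cmath.rect(1.02, 0.1)               # per-unit terminal voltage phasor
i = cmath.rect(0.5, -0.2)               # per-unit current phasor
print(vtheta_residuals(v, 1.0, 0.0))
print(pv_residuals(v, i, 0.45, 1.02))
print(pq_residuals(v, i, 0.45, 0.15))
```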
D. Waveform to Phasor Conversion
Several mature techniques may be adopted to convert a waveform into its nominal fundamental frequency phasor [18], such as curve fitting, the fast Fourier transform (FFT), and digital filtering. In this paper, curve fitting [2] is adopted.
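As an illustration of the curve-fitting approach, the fundamental-frequency phasor of a sampled waveform can be extracted by a linear least-squares fit of a cosine/sine pair (a minimal sketch; the sampling parameters are illustrative):

import numpy as np

def waveform_to_phasor(samples: np.ndarray, t: np.ndarray, f0: float) -> complex:
    # Fit x(t) ~ A*cos(w0 t) + B*sin(w0 t); return the phasor (peak convention).
    w0 = 2 * np.pi * f0
    basis = np.column_stack([np.cos(w0 * t), np.sin(w0 * t)])
    (a, b), *_ = np.linalg.lstsq(basis, samples, rcond=None)
    # A*cos + B*sin = Re[(A - jB) e^{j w0 t}], so the phasor is A - jB.
    return a - 1j * b

# Example: one 60 Hz period sampled at 25 points per period.
f0, n = 60.0, 25
t = np.arange(n) / (n * f0)
x = 1.02 * np.cos(2 * np.pi * f0 * t - np.deg2rad(3.0))
print(waveform_to_phasor(x, t, f0))   # ~ 1.02 * exp(-j * 3 deg)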
E. Phase to Sequence Conversion
Given the three-phase phasors X_a, X_b and X_c, the positive sequence phasor X_1 is calculated as [3], [4], [15]

X_1 = (1/3)(X_a + a X_b + a^2 X_c),   a = e^{j 2π/3}
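A direct sketch of this conversion (Python; the unbalanced example phasors are illustrative):

import numpy as np

A = np.exp(1j * 2 * np.pi / 3)   # operator a = 1 at an angle of 120 degrees

def positive_sequence(xa: complex, xb: complex, xc: complex) -> complex:
    # Symmetrical-component positive sequence phasor.
    return (xa + A * xb + A**2 * xc) / 3.0

# Example: a slightly unbalanced set of three-phase voltage phasors.
xa = 1.00 * np.exp(1j * np.deg2rad(0.0))
xb = 0.97 * np.exp(1j * np.deg2rad(-121.0))
xc = 1.02 * np.exp(1j * np.deg2rad(119.5))
print(positive_sequence(xa, xb, xc))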
F. Initialization Process as an Algebraic Equation Set
Based on the discussion in this section, the initialization process is formulated as an algebraic equation set, whose residuals are evaluated by transient simulation runs of one nominal fundamental period:

F(X) = 0   (25)

where F is a vector consisting of the individual residual functions introduced in this section; X is a vector of unknowns consisting of the variables, the dependent parameters and the external inputs, as discussed in Section II; 0 is a zero vector of proper dimension. In fact, X is the input to a transient simulation run while F(X) is the outcome. The initialization process forces the residual function vector to zero, which solves (25); details will be given in the next section.
A. Initial Guess
To start Newton's method, an initial guess of the vector of unknowns is needed. If the initial guess is close enough to the final solution, the convergence of Newton's method will be sped up [19].
Although three-phase power flow computation is not able to give the true steady state of an unbalanced power system [17], it does give a solution which is close [6], [16], [17]. Therefore, a three-phase power flow computation is first performed to obtain the nodal voltage phasors and branch current phasors of the power system network. Three-phase power flow computation is classical; details can be found in [6], [16] and [17]. Once a phasor is obtained, it can be readily translated into an instantaneous value.
To initialize the three-phase devices, the phase to sequence conversion is first performed at each device's terminals to extract the positive sequence information. The device is then temporarily initialized in the conventional way [3], [4] as if the external system were balanced. Note that in conventional positive sequence transient stability (TS) simulation, a single-phase equivalent is used [3], [4]; the same initialization is thus applicable to single-phase devices. Similarly, the controllers are also initialized. During the initialization of the devices, some heuristic simplifying assumptions are made to obtain explicit expressions for the variables to be initialized, so that iteration is avoided. Such assumptions include neglecting saturation and neglecting limits. These assumptions are justified because the objective here is only to obtain an initial guess that is relatively close to the final solution; exact accuracy is not the main concern when choosing a starting point for the initialization process.
Due to the simplifying assumptions mentioned above, the resulting initial guess of the vector of unknowns may be incompatible in the sense that it may violate Kirchhoff's current law (KCL). This issue is taken care of by treating the first time step as a discontinuity event [10].
B. Finite Difference Newton-GMRES Method
An implementation of the finite difference Newton-GMRES method is presented in Fig. 1. The implementation of Newton's method is based on [15]; the implementation of the GMRES method is based on [13]. The dimension of the vector of unknowns is assumed to be m. The evaluation of F(X) is introduced in Section III. tolerance is a user-defined tolerance for Newton's method. c and s are m-dimensional vectors. Q is an m×(m+1) matrix. H is an (m+1)×m matrix. The absolute tolerance for the GMRES method, abstol, is heuristically set to 0.5×tolerance. The relative tolerance for the GMRES method, reltol, is user-defined; 0.001 is recommended. ε is a small number for the finite difference method; 0.0001 is a typical value. The upper triangular linear equation set is solved by backward substitution.
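For orientation, the overall structure of such a matrix-free iteration can be sketched as follows. This is only an illustrative simplification that delegates the inner solve to SciPy's GMRES (keyword names assume SciPy >= 1.12) instead of the Givens-rotation implementation of Fig. 1; the toy residual function stands in for the one-period transient simulation run, and the parameter values follow the recommendations above:

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(F, x0, tolerance=1e-6, reltol=1e-3, eps=1e-4, max_newton=20):
    # Solve F(x) = 0 without ever forming the Jacobian explicitly.
    x = np.asarray(x0, dtype=float)
    fx = F(x)
    for _ in range(max_newton):
        if np.linalg.norm(fx) <= tolerance:
            break
        # Jacobian-vector product by forward finite differences:
        # J v ~ (F(x + eps*v) - F(x)) / eps, one extra F evaluation per product.
        jacobian = LinearOperator(
            (x.size, x.size),
            matvec=lambda v, x=x, fx=fx: (F(x + eps * v) - fx) / eps,
        )
        dx, info = gmres(jacobian, -fx, rtol=reltol, atol=0.5 * tolerance)
        if info != 0:
            raise RuntimeError("GMRES failed to converge")
        x = x + dx
        fx = F(x)
    return x

# Toy residual standing in for the one-period simulation run:
F = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] + 1.0])
print(newton_gmres(F, np.array([1.0, 1.0])))   # converges to [1.0, 2.0]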
C. Preconditioning
If the eigenvalues of the Jacobian matrix are tightly clustered on the complex plane, the GMRES method converges quickly; if, on the other hand, the eigenvalues are widely scattered, the convergence of the GMRES method will be very slow, if it converges at all [13]. One idea to accelerate the convergence of the GMRES method is to solve a modified, well-behaved linear equation set: with a right preconditioner M, the linear system J ΔX = -F is replaced by (J M) Z = -F, and the update is recovered as ΔX = M Z. Reference [14] reports a strategy to form the right preconditioner for the GMRES method. An initial preconditioner is first specified by the user, which can be the identity matrix of proper size; it is then updated via Broyden's method. The updates are performed at both the Newton iteration and the GMRES iteration, referred to as the outer updates and the inner updates respectively; see the outer and inner "while" loops in Fig. 1. The same strategy is implemented in this paper and can be activated if desired. The implementation is described as follows.
Before entering Newton's iteration, the initial preconditioner M is specified, which is the identity matrix. This step is added at the beginning of Fig. 1.
Immediately after (A) in Fig. 1, the outer update is performed: if iter > 1, the result of the inner updates M_0 is passed to M, and M is then updated via Broyden's method, as in (29). Immediately after (B) in Fig. 1, M is passed to M_0. At (C) in Fig. 1, the equation is replaced with

z = M Q(:, k)   (30)

Immediately after (D) in Fig. 1, the inner update is performed: M_0 is updated via Broyden's method, as in (31). At the end of Fig. 1, the update of X is calculated as (32) instead:

X = X + M Q(:, 1:k) y   (32)

V. CASE STUDY

The initialization process proposed in this paper is applied to a test system in this section. The test system is a modification of the well-known 3-generator 9-bus power system [20], as shown in Fig. 2. The original static load at Bus 5 is replaced with an induction motor. Note that the real power consumption of an induction motor is specified as the power flow condition [16], [17]; the original real power is thus assigned to the induction motor while the reactive power is relaxed. System information, including bus parameters, branch parameters and power flow conditions, can be found in [20]. The generator and induction motor dynamic parameters used in this paper are given in the appendix. Unbalance is introduced into the system by non-uniform allocation of the static loads to the individual phases: the total complex power load S at a bus is split into the Phase A, B and C loads S_A, S_B and S_C according to an allocation factor k; in this paper k = 0.1. Static loads are assumed to be constant impedance in transient simulation. For the transient simulation runs during the initialization process, a step size of one 25th of the nominal fundamental period is used (around 667 μs). The tolerance for Newton's method in Fig. 1 is set to 10^-6. There are 274 elements in the vector of unknowns in (25). The convergence behavior of the initialization process is listed in Table I. As can be seen from Table I, the finite difference Newton-GMRES method adopted in the initialization process converges to the final solution within 3 iterations, regardless of whether preconditioning is used. Preconditioning accelerates the convergence, which is reflected in the smaller number of transient simulation runs required.
If the algebraic equation set (25) were to be solved for this study case by constructing the Jacobian matrix with the finite difference method, at least 276 transient simulation runs would be needed. This estimate holds even under the very generous assumption that only one Newton iteration is required: the first run generates the vector of residual functions, 274 runs construct the Jacobian matrix column by column (one per unknown), and the last run checks the convergence criteria so that the initialization process can be terminated. In contrast, the number of transient simulation runs required by the finite difference Newton-GMRES method is much smaller than the number of unknowns. The finite difference Newton-GMRES method is therefore significantly more efficient.
After the initialization process is completed, a step size of 250 μs is used for the subsequent transient simulation. Note that such a small step size is not necessary for accuracy with the novel transient simulation scheme [10]; it is used to better display the time domain waveforms and to show that the step size used in the initialization process and that used in the later transient simulation can differ. Bus 5 Phase A voltage and Generator 2 rotor speed are plotted in Figs. 3 and 4, respectively. As the system is unbalanced, the rotor speed exhibits a second-harmonic component. These variables are indeed in periodic steady state, verifying the validity of the initialization process. Fig. 5 shows the induction motor rotor angle with respect to the synchronously rotating reference frame. Note that it does not satisfy the periodic boundary value condition, which confirms the necessity of adding the initial value condition into the initialization process.
VI. CONCLUSION AND FUTURE WORK
The initialization process of the novel transient simulation scheme [10] is put forward in this paper. Case studies demonstrate its validity and efficiency. Although the initialization process is proposed for a specific transient simulation scheme, it can be applied to other schemes; in fact, it has also been successfully implemented for an iterative electromagnetic transients (EMT) simulator based on conventional numerical integrators [10].
In the future, the initialization process may be applied to larger systems to test its scalability. Other strategies for constructing the preconditioner may be investigated to further speed up the process. Research effort may also be directed toward constructing the Jacobian matrix analytically to improve computational efficiency.
"year": 2021,
"sha1": "1217fbedf869b5e21816b6d40daf77bb2e822723",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2008.13059",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1217fbedf869b5e21816b6d40daf77bb2e822723",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
Electrochemical cell design and impedance spectroscopy of cement hydration
Understanding the complexity of the chemical and microstructural evolution of cement during hydration remains a controversial subject, and although numerous techniques have been used to assess this process, further insight is still needed. Alternating current impedance spectroscopy has been demonstrated to be a sensitive and powerful technique for cement characterisation in both fresh and hardened states; however, it has also shown certain experimental limitations (e.g. data interpretation, electrode effects, and parasitic effects) that prevent its wider acceptance. This study assesses electrochemical cell design and the impedance response during cement hydration. The results show that a significant decrease in the parasitic effects at high frequencies (caused mainly by lead and electrode effects) can be achieved through an optimal cell design and correction of the impedance measurements, enabling correlation of the impedance measurements with particular aspects of the cement hydration process. However, due to the limited solid phase microstructural development and the high conductivity of cement paste at low degrees of hydration, the parasitic effects could not be fully eliminated for fresh or early-age cement pastes.
Introduction
Portland cement is one of the most-used materials in the world (> 4 Gt p.a.), but it hardens and gains strength via what is arguably one of the most chemically complex non-biological reaction processes studied by scientists and engineers. Important aspects of this material are therefore still not fully understood. Cement hydration is critically important, since it determines the final microstructure and the physical and mechanical properties of the hydrated cement paste.
The hydration process consists of a series of simultaneous and sequential chemical reactions, involving dissolution of multi-mineral clinkers and ancillary sulphate phases, water consumption, hydrate product formation, heat release, and the development of a solid microstructure containing a high ionic-strength pore solution [1][2][3][4]. The full understanding of cement hydration is of great importance to enhance its early and final properties and to achieve better performance (technical and environmental) in its applications.
Many different techniques, tools, and interpretation methods have been used to assess cement hydration. One of these techniques is alternating current impedance spectroscopy (ACIS). Although ACIS has been demonstrated to be an effective and powerful technique, and has been used in many studies in the cement research field because it can provide electrical, chemical, and microstructural information for different materials, it is not yet fully accepted in practice due to its limitations (e.g. electrode effects, parasitic effects at high frequencies, instrument drawbacks, and data interpretation) [5][6][7][8][9][10].
ACIS measures the electrical response of a system (electrolyte-electrode) as a function of frequency by applying a sinusoidal voltage perturbation, and is commonly studied as a function of time and/or temperature. Because cement can be considered as a circuit (electrochemical system) with complex behaviour made up of electrical components (e.g. resistive and capacitive behaviour), cement hydration can be assessed by this technique. ACIS measurements are often analysed as complex impedance spectra (conventionally represented as Z* = Z' - jZ'', where Z' is the real part, j = √-1, and -Z'' is the imaginary part). The measurements obtained are divided into the high-frequency region (i.e. material bulk response), the low-frequency region (i.e. material-electrode response), and the intercept point between the high- and low-frequency responses (i.e. material resistivity) [11][12][13].
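For illustration, the impedance formalism and the Nyquist coordinates can be computed as follows (a minimal Python sketch assuming, purely for illustration, a series resistance with a parallel resistor-capacitor element as an idealised bulk arc; all component values are ours):

import numpy as np

def impedance(v_phasor: complex, i_phasor: complex) -> complex:
    # Complex impedance Z* = V/I = Z' - jZ''.
    return v_phasor / i_phasor

# Synthetic example: ohmic resistance in series with a parallel R-C element,
# a common idealisation of a bulk semicircular arc in a Nyquist plot.
r_s, r_b, c_b = 20.0, 150.0, 1e-9          # ohm, ohm, farad (illustrative)
f = np.logspace(2, 6, 50)                  # 100 Hz to 1 MHz sweep
w = 2 * np.pi * f
z = r_s + r_b / (1 + 1j * w * r_b * c_b)
z_real, minus_z_imag = z.real, -z.imag     # Nyquist plot coordinates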
ACIS is a highly sensitive technique, and its measurements can be affected by parasitic effects (e.g. inductance and capacitance) which may arise from the system under analysis, the cell design, the experimental procedure, and/or the lead configuration. These effects are often overlooked, leading to data misinterpretation, so ACIS experimentation for in situ analysis of cements requires significant care to obtain accurate and reliable data. For example, a small cable in combination with a low-impedance cell will exhibit stray inductance, which will increasingly oppose the current flow as the frequency increases [9,14,15]. In addition, effects such as magnetic coupling, produced by the current flowing in the leads, can produce a perturbation (false inductance) in the impedance measurements [5,10,16,17].
To enable more accurate assessment of the hydration of cement by ACIS, the aim of this paper is to investigate and evaluate the parameters that could produce parasitic effects in the ACIS measurements, and to provide guidance for appropriate experimental protocols. Understanding these parameters and their influence on the ACIS measurements is crucial in obtaining reliable data and thus correct data interpretation.
Sample preparation
Samples were prepared at room temperature (20 ± 3 °C and 50 ± 15% relative humidity) by mixing water with white Portland cement (wPc; Lafarge Blue Circle Snowcrete) classified as CEM I 52.5R under BS EN 197-1, at a water to cement ratio (w/c) of 0.45. The chemical composition and physical properties of the wPc are given in Table 1.
Each 300 g sample was hand-mixed by combining the components for 3 min to form a homogeneous paste, and then transferred into a custom-designed cell as described below for ACIS measurements. Prior to the start of the analysis, all samples were vibrated for 2 min to reduce entrapment of air bubbles.
Instrumental analysis
The samples were subjected to ACIS measurements using an impedance analyser with a single channel (Metrohm AutoLab, PGSTAT204) [18] connected to a custom two-electrode cell design (Fig. 1). The initial custom cell was designed using a cylindrical polypropylene container (⌀6 × 11.2 cm) and two threaded stainless-steel electrodes (⌀0.5 × 8 cm). The design of the initial cell was based on the characteristics of its constituents, simplicity of the design, low cost of the materials, its performance, reproducibility, and convenience for laboratory experimentation (Table 2). To avoid sample leakage, the electrodes were inserted in the bottom face of the container and attached with a hard-plastic adhesive. ACIS measurements (50 data points per cycle) were collected at room temperature over a frequency range of 100 Hz to 1 MHz, with an applied perturbation amplitude of 10 mV and a current range up to 1 mA. The frequency range 1 MHz-100 Hz was found to be representative of the most important behaviour in this analysis (i.e. cement bulk response, cement-electrode response). Measurements were conducted in triplicate. Data were obtained every 5 min during the first 24 h after mixing. The measurements were consistent to within 8% at 5 min and 3% at 24 h, and the timing of the main features observed was repeatable to within ± 5 min. The data obtained from this technique are conveniently represented as a Nyquist plot, where the imaginary component is conventionally plotted as -Z'' and the real component is Z' of the complex impedance formalism, Z*.
Experimental methodology
To assess the hydration process of cement by ACIS, the experimentation was divided into two stages.
The objective of the first stage was to evaluate different cell designs (changing one parameter at a time) and thereby select a custom cell design and procedure capable of providing reliable impedance measurements, with minimal noise and external interference (parasitic effects), for wPc at early hydration ages.
To achieve this, the initial cell set-up (Table 3) involved the selection of an electrode attachment method capable of maintaining the electrodes at a fixed position in the container, avoiding sample leakage and unwanted contributions to the ACIS measurements.
To verify linearity of the ACIS response, and the sensitivity of the cement system at early ages, evaluation of the amplitude of perturbation was carried out, followed by an analysis of the leads (identified as the main source of parasitic effects). Finally, the sample/cell geometry was evaluated.
After selection of the initial cell set-up parameters, the second segment (Table 4), electrode effects, focused on comparing the impedance response of wPc for different electrode specifications, such as surface area, material, and electrode position.
After evaluation of the first stage, the second experimental stage comprised the custom cell calibration and the ACIS measurement correction procedure. The calibration was determined by measuring the impedance response of the cell in a short-circuit arrangement (without sample) before the samples were tested, to enable minimisation of the parasitic effects associated with the cell components and leads. The measurement correction was applied at each frequency, treating the parasitic effects as additive contributions to the measured ACIS response of the cement sample [19][20][21]. The final cell design was selected by optimising the cell components according to their capacity to minimise parasitic effects in the impedance measurements.
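A sketch of such a per-frequency additive correction is shown below (Python; it assumes, as described above, that the short-circuit spectrum can simply be subtracted from the measured spectrum, and the array values are illustrative):

import numpy as np

def correct_spectrum(z_measured: np.ndarray, z_short: np.ndarray) -> np.ndarray:
    # Subtract the short-circuit (cell + leads) impedance at each frequency.
    return z_measured - z_short

# z_measured and z_short are complex arrays sampled at the same frequencies.
z_measured = np.array([55.0 - 4.0j, 60.0 + 2.0j, 72.0 + 9.0j])
z_short = np.array([3.0 - 5.0j, 2.5 - 1.0j, 2.0 - 0.2j])
print(correct_spectrum(z_measured, z_short))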
Results and discussion
In previous investigations, the hydration of white Portland cement (wPc) was studied by ACIS during the first 72 h after mixing [22]. The impedance measurements obtained were affected by parasitic inductance effects at high frequencies (appearing below the Z' axis intercept), potentially leading to unreliable data interpretation. Figure 2 shows the impedance spectra of wPc during the first 72 h. Throughout the experiment, the high-frequency data (below the Z' axis, red line) show parasitic inductive effects mainly related to the effects of the leads, and influenced by factors including the high conductivity and low microstructural development of the cement paste, magnetic coupling and electrode effects, cell design, potentiostat response, and working frequency range [16,20,21]. With an increase in frequency, the inductance increases, and as a result the capacitive-resistive arc at high frequencies disappears. This indicates that the information in that frequency range is altered by the inductance effects and cannot be analysed or considered reliable data for characterisation of the cement. Even though the parasitic inductance effects only visibly affected the high-frequency data, it is possible that all frequency ranges (high, medium, and low) were affected by the same parasitic effects, since they can influence the potential-working electrode response and the measured capacitance at different frequencies.

Table 2 Basis of the initial cell design [1][2][3][4][5][6][7][8][9]
Two-electrode set-up: commonly used to measure the potential and bulk resistance across the sample; allows the assessment of electrochemical impedance measurements at high frequency (> 100 kHz).
Sample dimensions: allow measurement of the bulk properties of the sample; allow use of a small perturbation amplitude without affecting the measurement.
Polypropylene container: inexpensive; inert to the majority of chemical reactions; suitable for alkaline solutions; more volume per unit wall area (cylindrical geometry).
Electrode material: stainless steel has suitable corrosion resistance properties in highly alkaline solutions.
Electrode geometry and dimensions: high active surface area; allows a higher current density at the surface; uniform potential and current distribution; uniform electric field across the sample; increased measurement sensitivity.
Electrode texture: improves contact between the electrodes and the cement paste; increases the surface-active area of the electrode.
Electrode separation: increases the signal-to-noise ratio; used to assess mutual and self-inductance.
The ACIS measurements obtained were evaluated by examination of Lissajous and resolution plots, and Nyquist plots. The information obtained regarding the cement material response at early age was limited because of the high conductivity of the cement paste produced by the water content, the high ionic strength of the pore solution, the hydration kinetics, and the continuous and open pore structure [14,23,24].
Initial cell set-up
System linearity

ACIS relies on the use of an amplitude small enough to obtain a linear response which can be expressed analytically. The linearity of the system is directly related to the amplitude of the perturbation applied to the system. To find the optimal perturbation amplitude and verify the linear response for cement systems, raw impedance data were used to generate Lissajous and resolution plots. The amplitude must be small enough to minimise perturbation of the material behaviour during cement hydration, but large enough that a high-fidelity signal can be recorded.
The Lissajous plots (e.g. Fig. 4) show the AC potential (x-axis) against the AC current (y-axis). These plots allow verification of the linearity of the ACIS response: the AC amplitude should be small enough that the response of the electrochemical cell can be considered linear, but large enough to measure the system response [13,[25][26][27]. The resolution plots (e.g. Fig. 5) show the AC current and the AC potential (y-axis) as a function of time (x-axis), allowing assessment of the sensitivity of the system and the significance of noise in the processed data. Two different perturbation amplitudes (10 mV and 1 mV) were used to verify the sensitivity and linear response of the experimental procedure for cement systems, in the frequency range 1 MHz to 100 Hz and at a fixed current of 1 mA.
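For illustration, a Lissajous figure of this kind can be generated from sampled excitation and response signals as follows (a minimal sketch with synthetic data; in the experiments these arrays would come from the analyser):

import numpy as np
import matplotlib.pyplot as plt

f, fs, cycles = 1e3, 1e6, 5                    # 1 kHz signal sampled at 1 MHz
t = np.arange(int(cycles * fs / f)) / fs
v = 0.010 * np.sin(2 * np.pi * f * t)          # 10 mV perturbation
i = 0.0008 * np.sin(2 * np.pi * f * t - 0.2)   # response with a phase lag

plt.plot(v, i)                                 # Lissajous: AC potential vs AC current
plt.xlabel("AC potential (V)")
plt.ylabel("AC current (A)")
plt.show()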
At an amplitude of 10 mV (Fig. 3), the Lissajous plots show that the linearity of the ACIS response is maintained, with central symmetry of a straight line with respect to the origin of the plots at each frequency. In the same way, the resolution plots (Fig. 4) show a high resolution of both signals at each frequency.
Conversely, at an amplitude of 1 mV (Fig. 5), the Lissajous plots show a strongly nonlinear response, as the central symmetry of the plot is not maintained and the shape is disturbed by noise. Likewise, the resolution plots (Fig. 6) show a high resolution for the current signal, but a low resolution for the potential signal.
Based on the comparison between the two amplitudes, a linear response and accurate data can be obtained using 10 mV as the preferred amplitude for analysis of cement systems in the apparatus described here. This amplitude reduces the errors that could be produced by charge-transfer resistance, noise, and polarisation during the impedance measurements. Smaller amplitudes degrade and distort the impedance measurements, leading to inaccurate measurements and a nonlinear response [28][29][30][31].
Electrode attachment
The electrode-cell attachment was assessed by comparing the ACIS responses measured during wPc hydration using the SS electrodes attached with SS nuts and with hard-plastic adhesive. Figure 7 shows the ACIS response of wPc as a function of the electrode attachment method, at hydration times of 5 min and 24 h. The electrodes attached with hard-plastic adhesive show fewer inductance effects at high frequency (which are likely to impact the impedance measurements) at both ages.
The electrodes attached with SS nuts show higher inductance effects due to the decrease in resistance resulting from the increase in the number of flux lines and the amount of energy stored in the electrodes. Furthermore, the changes in the ACIS values at both ages, produced by the increase in the electrode surface area and the decrease in the electrode separation at the bottom of the cell caused by attaching the SS nuts to the SS electrodes, indicate that an uneven current flows through the SS electrodes and nuts, which act as both working and counter-electrodes (Fig. 1) [32][33][34].
For the cement at an early stage of hydration, with both attachment methods, it is not possible to identify a high-frequency semicircular arc, as the measured values fall below the Z' axis. As mentioned before, this is probably due to the high conductivity of the fluid cement paste with a highly connected aqueous phase of relatively high ionic strength, and the parasitic effects of the leads and cell. At longer ages, cement hydration proceeds, inductance decreases, and an increasing tendency in the resistance is observed, progressively yielding a more noticeable high-frequency semicircular arc. This can be attributed to the microstructural development, water consumption, and reduction of the connectivity of the pores [14,16]. These trends are discussed in the ''Calibration and measurement corrections'' section.
Lead effects
Parameters related to the leads, such as length and diameter, degree of grounding and shielding, weak end contacts, and positioning, are some of the main sources of noise and error in ACIS measurements. Parasitic perturbations produced due to the leads have been found in both high-and low-impedance cells in the form of stray capacitance or stray inductance, respectively [35,36]. These measurement perturbations are difficult to evaluate due to the interconnecting wiring, the external environment, and the parameters previously mentioned. For example, the AC current that passes through the current-transporting leads produces a magnetic field which couples to the leads from which the potential is measured, leading to unwanted AC voltages which could lead to mutual inductance errors in the ACIS measurements. Previous studies have proposed different solutions to minimise these parasitic perturbations [15,19,37].
To evaluate the lead effects on ACIS measurements for cement pastes at early age, different lead positions and lengths were analysed using the initial cell design. Figure 8 shows a schematic representation of the parameters evaluated; (a) distance of the leads between the working surface and the electrochemical cell, (b) twisting of leads, (c) alignment of connections between the leads and the SS electrodes, and (d) lead length. Figure 9a, b shows that there is not a significant change in the impedance spectra as a result of changing the distance of the leads between the working surface and the electrochemical cell, or by twisting the leads. These results confirm that these lead arrangements (i.e. conductor, insulation, binder, braid, jacket, and connectors) are suitable for the following experiments.
Similarly, there are no meaningful changes in the ACIS measurements when changing the alignment of the connections between the leads and the SS electrodes (Fig. 9c), since the SS electrode position (3 cm separation distance) is restricted and does not allow the leads to separate further from each other. Also, the results are an indication that the impedance measurements are not affected by the magnetic coupling or the pickup effects produced by the low-intensity magnetic field of the leads.
However, it can be observed that an increase in the lead length has a meaningful impact on the ACIS measurements. Figure 9d shows the changes in the parasitic effects and Z' values as the length of the leads increases. Increasing the lead length raises the resistance, negatively affecting the measured signal amplitude as the waveform deteriorates with distance from the energy source. Also, when AC is applied, a phase shift between the applied amplitude and the current can occur [33,37,38]. The following experiments were therefore conducted using the standard cables (150 cm); it was not possible to further reduce the length of the leads due to the limitations of the available equipment.
Electrode effects
The cement paste parameters measured by ACIS can be divided into two categories: the first corresponds to the properties and behaviour of the cement itself (e.g. conductivity, kinetics, pore solution), and the second relates to the electrode-cement interaction (e.g. diffusion, adsorption, capacitance, electrical double layer capacitance). ACIS measurements can be affected by the electrode performance, which depends on the system under analysis, the electrode specifications (e.g. surface area and material), and the electrode position. A significant change in any of these parameters will have a significant impact on electrode performance and stability, which may lead to the collection of erroneous data or misinterpretation of ACIS measurements [39].
An ideal electrode design should have a surface area which is able to deliver a uniform current density to ensure a uniform potential distribution over the electrodes and the sample [40]. The electrode alignment needs to be symmetrical between the WE and RE, not only to avoid uneven current distribution but also to decrease errors produced by differences in potential distribution. In addition, the study of the electrode material should be performed considering the sample material under investigation, since interactions with the sample can affect the electrode surface (e.g. corrosion and/or passivation layers), leading to an unwanted contribution to the measured impedance and erroneous data interpretation. Cement paste and its pore solution have a highly alkaline environment (pH around 12 to 14) which can in turn influence the passivation properties of the electrode material [2,4,19,41]. The electrode material needs to have good electrical properties and performance to obtain a good electric field distribution and reduce unwanted impedance responses. It is fundamental to understand the electrode effects to obtain a better cell configuration which could yield more, and more reliable, information about the cement system. This section discusses the influence of the electrode effects on cement ACIS measurements, with the aim of gaining insight into the relationship between the electrode effects and the ACIS response. To enable comparison of the results obtained using electrodes of differing surface areas, the results were normalised by multiplying the impedance obtained by the electrode surface area under investigation.
Electrode surface area

Figure 10 shows the effects of the electrode surface area on the cement ACIS measurements of wPc pastes. At early age (5 min after mixing the cement pastes), the impedance values are slightly affected by the changes in the electrode surface area when either the length (Fig. 10a) or the diameter (Fig. 10b) is varied. As the electrode surface area increases, parasitic inductance effects are seen at high frequencies, and the values on the Z' axis increase, showing a correlation between the electrode surface area and the impedance values. The information at high frequency is obscured by parasitic inductance effects which arise from the lead effects (as discussed in the preceding section) and the state of the cement paste (poorly developed microstructure and high conductivity). At longer ages (24 h), the results show a greater influence of the electrode surface area on the ACIS measurements, following the same tendency as at early ages. The increase in the resistance and the reduction of the inductance effects are produced by the microstructural development of the cement paste. These changes can be observed through the appearance of a high-frequency semicircular arc at longer ages.
It is important to note that the changes in the ACIS measurements produced by changing the electrode texture from threaded to flat (Fig. 10c) are small at early age, and almost null for a more mature cement paste. Such texture changes do not drastically influence the effective surface area (in terms of the distribution of electrical flux lines), but they do affect the contact between the electrodes and the cement paste.
Parameters measured by ACIS, such as the electrical double layer capacitance (EDLC), the electron transfer resistance (ETR), and the uncompensated electrolyte resistance, depend on the ionic concentration and the ion types in the aqueous phase, the temperature, the reaction kinetics, the electrode surface area, and the current distribution [12,[42][43][44][45]. As the electrode surface area increases, the electrode-sample reaction kinetics, the parameters previously mentioned, and the current distribution increase, leading to differences in the ACIS response, as observed in the results (Fig. 10). Considering the experimental system as a circuit, as the electrode surface area is changed, the magnetic flux through the circuit, the amplitude, and the current dispersion through the cement paste also change, affecting the parasitic effects and the impedance values. Figure 11 shows the short-circuit cell ACIS measurements obtained for two different SS electrode diameters (Figure 11: ACIS data for the short-circuit calibration measurement). The measurements show negative -Z'' values, in which the inductance effects (-Z'') and the resistance values (Z' axis) produced by the leads, the cell, and the frequency dependence of both parasitic components are observed [10,20,21]. The difference in impedance values between the two diameters is explained by Eq. 1:
R = ρ l / A   (1)

Here, R is the electrical resistance of the electrode, ρ is the specific resistivity, l is the length, and A is the cross-sectional area [20,46]. From Eq. 1, the electrical resistance of the electrode varies inversely with its cross-sectional area, while the inductance effects maintain the same values because they are caused by the leads, and not by the cell specifications.
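As a quick numerical check of Eq. 1 (illustrative values only; the resistivity of stainless steel is taken here as roughly 7 × 10^-7 Ω m):

import math

def electrode_resistance(rho: float, length: float, diameter: float) -> float:
    # R = rho * l / A for a cylindrical electrode.
    area = math.pi * (diameter / 2) ** 2
    return rho * length / area

rho_ss = 7e-7                            # ohm*m, approximate for stainless steel
for d in (0.003, 0.005):                 # 3 mm and 5 mm diameters
    r = electrode_resistance(rho_ss, 0.07, d)
    print(f"d = {d * 1e3:.0f} mm: R = {r * 1e3:.3f} mOhm")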
Regarding the final custom cell design as a result of this parametric study, it was decided to use threaded electrodes with 3 mm diameter and 7 cm length, of which 6 cm would be in contact with the cement paste and 1 cm would be outside the cell for connection of the WE and the RE.
The decision to use this electrode specification was made considering that the threaded texture did not have a significant effect on the effective surface area for the ACIS measurements but did significantly increase the electrode-sample contact to reduce the likelihood of debonding at that interface. The electrode length (7 cm) was selected to ensure a uniform current distribution, larger values of the EDLC, and effective surface area through the cement paste [42,47,48]. Finally, it was observed in Fig. 10b that a decrease in the electrode diameter was able to reduce the parasitic effects, and it is expected that the lower total internal resistance of the cell would increase the fractional contribution of the cement bulk to the overall impedance measurements. Figure 12 shows the influence of the electrode material on the measured ACIS spectra of wPc pastes. At early age, the ACIS measurements in the high frequency range and the inductance effects are not considerably affected by the choice of electrode materials among those tested here. However, at low frequencies it is possible to observe a difference in the part of the response that is attributed to diffusional processes, due to the variation in the electrode material having an impact on the reaction rates between the cement paste and the electrode [45,49,50]. At longer ages, there is a greater impact on the ACIS measurements due to the choice of electrode material.
Electrode material
The increase in resistance, the disappearance of the inductance effects, and the emergence of a high-frequency semicircular arc when using mild steel electrodes can all be related to the combined response of the developing cement microstructure and the formation of a protective iron oxide film on the steel surface generated by the alkaline environment of the cement paste [51][52][53]. The electrode surface film has a strong influence on the ACIS measurements obtained using mild steel electrodes. Conversely, the ACIS measurements obtained with SS electrodes are determined mainly by the cement bulk and the electrode-cement interface [49,52,54,55]. This is attributed to the chromium-rich oxide film (passive film) on the SS electrode surface, which results in a more stable electrode response. The reaction between the cement paste and the SS electrodes is slower than the reaction involving mild steel electrodes.
It is evident that the correct electrode material selection can enhance the ACIS measurements in terms of specific capacitance, diffusion rates, EDLC, accuracy, and performance [13,55,56]. Comparison of the ACIS results showed that SS electrodes are suitable in a highly alkaline environment, without affecting the impedance measurements and ensuring a stable electrode-cement interface and interaction, uniform current distribution, and better performance than mild steel electrodes.
Stainless steel and mild steel were considered suitable materials in this investigation. Other electrode materials, such as graphite or platinum, were not considered because of the single-use application and the higher cost.
Electrode position

Figure 13 shows the effects of electrode separation on the ACIS spectra of wPc pastes. Taking the electrode separation distance of the initial cell (3 cm) as a reference, the results at both ages show an increase in the impedance values on the Z' axis (ohmic resistance) when the electrode separation is either increased or decreased. The low-frequency response, dominated by diffusional behaviour, appears significantly less sensitive to the electrode separation. At 1.5 cm electrode separation, inductance increases slightly because of mutual inductance effects, as the separation between the electrodes is insufficient, whereas at 6.0 cm electrode separation, inductance increases considerably as the mutual inductance decreases and the self-inductance of the leads increases. Figure 14 shows the effects of the electrode positions on the impedance spectra of wPc pastes. The electrode positions studied were with the electrodes located at the bottom (Fig. 1), top, and lateral faces of the polypropylene cell, without changing the distance between the SS electrodes.
At early age, the impedance measurements are not affected by the majority of the electrode positions. The only position which influences the impedance measurements is at the top of the cell, where the ohmic resistance and inductance effects increase. This tendency is probably due to cement bleeding and air entrapment produced by electrode insertion from the top of the cell filled with cement paste.
At later age (24 h), the ACIS measurements and spectra are more notably affected by the electrode positions. The vertical, horizontal, and top electrode positions show an increase in the ohmic resistance values (Z' axis), and the inductance effects at high frequencies are reduced, resulting in the emergence of a semicircular arc. These ACIS measurements are the combined response of the cement microstructural development and the ohmic resistance produced by differential shrinkage and potential cracking of the cement generated by thermal restraints and the stress/load imposed by the electrode position [14,57].
The influence of the position of the electrodes on the ACIS measurements at later age arises because the sample geometry and the electrode direction change the restraint of shrinkage of the cement, leading to cracks that induce an increase in the ohmic resistance values [44,[58][59][60][61][62].
Based on these results, and considering the practicalities of cell construction and loading, it was decided to position the electrodes at the bottom face of the custom cell, since this location showed a better performance in the ACIS measurements, without cracks appearing in the hardened cement, and ensured consistent measurement of the cement paste. (Figure 14: ACIS data for wPc pastes as a function of the electrode position.)
Calibration and measurement corrections
Calibration experiments followed the same experimental procedure and set-up used in the previous cell design tests, as any change in the configuration affects the ACIS measurements and therefore the calibration values. The ACIS measurement corrections were carried out after verifying the reproducibility of the short-circuited custom cell measurement, treating the calibration measurements as an additive correction to the ACIS measurements [19][20][21][63]. Figure 15 shows the impedance spectra before and after application of these corrections, for wPc at early age. After the correction, the impedance spectra show high-frequency data above the Z' axis, with a high-frequency semicircular arc, while the measurements at low frequencies do not change. For correct data interpretation, it is necessary to apply this correction to the raw ACIS measurements.
Final cell design and ACIS data for white Portland cement hydration
To select a cell design, the evaluation of ACIS measurements for different cell parameters was presented in the preceding sections. Figure 16 shows the final custom cell design, which used threaded SS electrodes (⌀0.3 × 7 cm) and the cylindrical polypropylene container (⌀6 × 11.2 cm) for the following experiments. Measurements were acquired every 5 min (0-24 h), 10 min (24-48 h), 15 min (48-72 h), and 20 min (72-92 h). Subsequently, the impedance data were calibrated and corrected. Figure 17a shows the ACIS spectra of wPc during the first 92 h after mixing. Before 3.5 h, the inductance effects are removed from the ACIS spectra by application of the calibration and correction described above. However, after 3.5 h, inductance effects suddenly appear (followed by an increase in Z' values), showing a decreasing trend until they disappear after 30 h. At longer ages, as the hydration proceeds, the semicircular arcs at high frequency become more developed, while the increase in Z' axis values slows as the thickness of the hydrated products increases and the hydration process decelerates [64][65][66].
The conductivity was obtained from the resistivity of the wPc paste by dividing the cell constant by the Z' axis intercept of the impedance spectra. The cell constant was obtained by measuring the ACIS response of NaOH solutions of known conductivity at different concentrations [46,[67][68][69][70], using the cell shown in Fig. 16. Figure 17b shows the conductivity and resistivity as a function of time for wPc. On the first day of the hydration reaction, three perturbations are observed as the resistivity increases slightly and the conductivity drops quickly. At longer ages, the resistance increases rapidly, showing an increase in the amplitude and number of perturbations, while the conductivity decreases, reaching a point where further changes in conductivity are minor. Figure 17c shows a second perspective of the impedance spectra of wPc, by plotting the real component (Z') as a function of time. Cement hydration is conventionally divided into the stages of dissolution, induction/dormant, acceleration, deceleration/diffusion, and long-term reaction [1,71,72]. The chemical and microstructural processes taking place during these stages are also identifiable in the ACIS data. At early ages (dissolution and induction stages), the results show small impedance and resistivity values due to the high conductivity of the cement paste (with ions in the aqueous phase supplied by the rapid dissolution of soluble alkali and calcium sulphates) and the limited solid phase microstructural development. At 3.5 h after mixing, the ACIS values at high frequencies are affected by the sudden emergence of inductance effects. Between the end of the induction period and the beginning of the acceleration period, the dissolution of C3S and C2S increases the ionic strength of the cement paste pore fluid, followed by the nucleation of C-S-H and the initial crystallisation of CH. At this point, the resistivity starts to decrease. During the deceleration period, the heat flow and the reaction rate of the silicates decrease, and the microstructure is affected by water consumption, pore reduction, and space limitation. At this point, the results show increasing Z' values (Fig. 17c), and the inductance effects (-Z'') remain unchanged. At approximately 10 h, the low-frequency values in Fig. 17b decrease, while the Z' values in Fig. 17c show a perturbation at both high and low frequencies.
At the end of the deceleration period and during the long-term reaction period (~ 15 h), the inductance effects (-Z'') start to decrease, until they disappear at 30 h. The Z' values keep increasing at both frequencies, probably because of the microstructural development, the reduced water content, and the partial closure and eventual depercolation of the pore structure.
At longer ages (> 30 h), a high-frequency arc starts to emerge; while the microstructure continues to develop slowly, the diameter of the high-frequency arc increases, as can be observed in Fig. 17a. The conductivity decreases to a point where no further significant changes can be observed, while the resistivity keeps increasing due to the slow microstructural development [8,71,73].
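For reference, the conversion from the Z' axis intercept to conductivity described above amounts to the following (a minimal sketch; the cell constant and resistance values are illustrative, not measured):

def conductivity_from_intercept(z_intercept_ohm: float, cell_constant_per_m: float) -> float:
    # Conductivity (S/m) = cell constant (1/m) / bulk resistance (ohm).
    return cell_constant_per_m / z_intercept_ohm

cell_constant = 21.0            # 1/m, illustrative calibration result from NaOH solutions
r_bulk = 35.0                   # ohm, Z' axis intercept of the spectrum
sigma = conductivity_from_intercept(r_bulk, cell_constant)
print(f"conductivity = {sigma:.3f} S/m, resistivity = {1 / sigma:.3f} ohm*m")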
Conclusions
This study has assessed electrochemical cell design and ACIS measurements during the early stages of cement hydration, in both the fresh (fluid) and hardened states. The results demonstrate the importance of correct assessment of the parameters (e.g. electrode, lead and parasitic effects, and procedure) in the cell design to reduce the parasitic effects that appear in ACIS data. A good correlation between the ACIS measurements and the cement hydration stages was obtained. However, due to the limited solid phase microstructural development and the highly conductive condition of cement at early hydration periods, the parasitic effects could not be fully corrected until the cement had hydrated sufficiently to yield a microstructure able to raise the resistivity of the paste.
It is therefore possible to highlight the following conclusions:
1. ACIS response and parasitic effects are directly affected by electrode effects and cell design.
2. Cement conductivity and resistivity behaviour, and their variation as a function of time during hydration, correlate with existing conceptual models developed from calorimetric and other data.
3. ACIS has been shown to be a sensitive and versatile technique for assessing the different stages of cement hydration, from the fresh to the hardened state, which is very difficult to probe truly continuously by any other single technique in a time-resolved manner. However, in order to fully understand this process and its microstructural development, the behaviour and interpretation of ACIS measurements and the parasitic effects that complicate the data processing and analysis need further investigation, supported by other characterisation techniques.
Author contributions
ASG and JP contributed to methodology, experimental plan, interpretation of data, and draft preparation; ASG contributed to experimental work and data analysis; JP contributed to funding acquisition, supervision, resources, and review & editing.
Compliance with ethical standards
Conflict of interest The authors do not hold any conflict of interest related to the work described in this article.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2020,
"sha1": "9119d11ab84b346d0592a725f3782985a84faf65",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10853-020-05397-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "9119d11ab84b346d0592a725f3782985a84faf65",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Mitochondrial ROS Produced by Skeletal Muscle Mitochondria Promote the Decisive Signal for UPRmt Activation
The mitochondrial unfolded protein response (UPRmt) can repair and remove misfolded or unfolded proteins in mitochondria and enhance mitochondrial protein homeostasis. Reactive oxygen species (ROS) produced by regular exercise are a crucial signal for promoting health, and skeletal muscle mitochondria are the primary source of ROS during exercise. To verify whether UPRmt is related to the ROS produced by skeletal muscle mitochondria during regular exercise, we applied MitoTEMPO, a mitochondria-targeted antioxidant, to scavenge the ROS produced by mitochondria. Our results showed that mitochondrial ROS is a key factor in activating UPRmt through different pathways.
Introduction
Regular exercise and physical activity improve physiological ability and body functions [1] and reduce the risk of chronic diseases, including type II diabetes, cardiovascular disease, and cancers [2,3]. Skeletal muscle is an important participant in physical activity. Exercise can significantly increase the metabolism of skeletal muscle [4,5], and contraction of skeletal muscle produces reactive oxygen species (ROS) [6]. Previous studies have shown that intense exercise and muscle contraction increase ROS production, and the resulting damage to and modification of cellular proteins, lipids, and DNA can lead to skeletal muscle fatigue and injury [7][8][9][10][11][12][13]. Therefore, antioxidant supplements are often prescribed to counteract the adverse reactions to exercise [14][15][16][17]. However, there is increasing evidence that exogenous antioxidant supplements have adverse effects on some acute and chronic responses of skeletal muscle to exercise [18][19][20][21][22][23][24][25][26], weakening the normal redox signaling pathway in muscle [13] and the adaptive response to endurance training [19][20][21].
Mitochondria are critical for maintaining cell homeostasis by regulating energy production, cell signaling, and apoptosis [27]. Mitochondria are one of the primary sources of ROS during skeletal muscle contraction [28][29][30]. The mitochondrial unfolded protein response (UPRmt) is a stress response pathway that allows mitochondria to communicate with the nucleus, activates the expression of nuclear transcription factors, activates mitochondrial chaperones and proteases, and promotes the repair and clearance of misfolded and unfolded proteins, thereby maintaining mitochondrial protein homeostasis and reducing the stress response [31][32][33]. Previous studies have shown that UPRmt in the mitochondrial matrix mainly relies on activation of the mitochondrial chaperone HSP60 and the protease CLPP [32,[34][35][36], whereas UPRmt in the mitochondrial intermembrane space (IMS) is associated with the expression of the IMS protein HTRA2 [37][38][39]. In addition, the antioxidant arm of the UPRmt sirtuin axis is also crucial for maintaining mitochondrial protein stability [35]. To date, there is no report on eliminating the effect of mitochondrial ROS on UPRmt in skeletal muscle mitochondria after long-term exercise. Therefore, we focus here on the relationship between mitochondrial ROS, the health-promoting effects of exercise, and UPRmt.
MitoTEMPO is a novel cell-penetrating, mitochondria-targeted antioxidant, containing a hydrophobic tetramethylpiperidine (TEMPO) group and a lipophilic triphenylphosphonium cation (TPP+). It can penetrate mitochondria and remove the ROS they produce [40][41][42][43]. Therefore, we selected MitoTEMPO as a targeted scavenger of mitochondrial ROS, and C57BL/6J mice subjected to long-term exercise with or without MitoTEMPO treatment were used to investigate the relationship between skeletal muscle mitochondrial health, mitochondrial ROS, and UPRmt during long-term regular exercise. Our results showed that MitoTEMPO offset the mitochondrial ROS produced during regular exercise: the mitochondrial membrane potential of mouse skeletal muscle decreased significantly, and the degree of UPRmt activation mediated by different pathways fell markedly. We therefore conclude that the mitochondrial ROS produced by exercise is one of the important stressors promoting skeletal muscle health, and that moderately activated UPRmt is one of the mechanisms underlying this health-promoting effect.
Materials and Methods
2.1. Animals. Thirty male C57BL/6J mice (8 weeks old) were purchased from Beijing Vital River Laboratory Animal Technology Co. Ltd. and housed in an environmentally controlled room on a 12-hour light/dark cycle. Prior to the experiment, all animals were given one week of adaptive feeding, with ad libitum access to food and water. All experimental protocols were carried out in accordance with the National Research Council's standards for the care and use of laboratory animals and were approved by the Animal Ethics Committee of the Tianjin Institute of Physical Education.
2.2. Exercise Program and MitoTEMPO Intervention. The animals were randomly divided into three groups: control, exercise, and exercise + MitoTEMPO. The exercise group received 2 days of adaptive treadmill training in advance and then treadmill training at 15 m/min for 12 consecutive days. MitoTEMPO (Sigma, SML0737) dissolved in normal saline was injected intraperitoneally at 0.7 mg/kg to establish the exercise + MitoTEMPO group [44][45][46][47][48][49]. The control and exercise groups were injected with the same volume of normal saline.
2.3. Detection of Mitochondrial Membrane Potential. The mice were sacrificed by cervical dislocation, and the skeletal muscle was dissected. The tissue was placed in precooled medium A (120 mM KCl, 5 mM MgCl2·6H2O, 20 mM HEPES, 1 mM EGTA, 0.5% BSA, pH 7.4) and minced into small pieces with a pair of sharp scissors (until the tissue became a mash), with drops of medium added while cutting. The shredded tissue was then transferred to a precooled glass homogenizer for grinding and centrifuged at 600 g at 4°C for 10 minutes. The supernatant was transferred to a new centrifuge tube and centrifuged at 12,000 g at 4°C for 10 minutes; the supernatant was discarded, and the mitochondria were resuspended in a small volume of isolation medium B (300 mM sucrose, 20 mM HEPES, 0.1 mM EGTA, pH 7.4). The mitochondria were then stored on ice and used within 3-4 h.
After the mitochondria were purified by differential centrifugation, the JC-1 mitochondrial membrane potential detection kit (Beyotime, C2006) was used to detect the mitochondrial membrane potential. For detecting the JC-1 monomer, the excitation wavelength was set at 490 nm and the emission wavelength at 530 nm. For detecting the JC-1 polymer (aggregate), the excitation wavelength was set at 525 nm and the emission wavelength at 590 nm. The larger the JC-1 polymer/monomer ratio, the higher the mitochondrial membrane potential. All experiments were performed in triplicate.
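The ratio readout described above is simple to sketch computationally. The following Python snippet is illustrative only: the fluorescence values, blank handling, and group names are hypothetical and are not the authors' data; it merely shows how background-corrected JC-1 polymer/monomer ratios might be computed from plate-reader readings and normalized to the control group.

```python
# Illustrative sketch (not from the paper): computing JC-1 polymer/monomer
# ratios from plate-reader fluorescence readings. All values are hypothetical.

def jc1_ratio(polymer_rfu, monomer_rfu, blank_polymer=0.0, blank_monomer=0.0):
    """Background-corrected JC-1 polymer (Ex 525/Em 590) over
    monomer (Ex 490/Em 530) fluorescence; a higher ratio indicates
    a higher mitochondrial membrane potential."""
    return (polymer_rfu - blank_polymer) / (monomer_rfu - blank_monomer)

# Triplicate (polymer, monomer) readings per group, in relative fluorescence units.
groups = {
    "control":            [(820, 410), (790, 400), (805, 395)],
    "exercise":           [(1010, 405), (980, 390), (995, 400)],
    "exercise_mitotempo": [(700, 420), (690, 410), (705, 415)],
}

ratios = {g: [jc1_ratio(p, m) for p, m in reps] for g, reps in groups.items()}
means = {g: sum(r) / len(r) for g, r in ratios.items()}

# Normalize each group mean to the control group, as done for the tables.
for g, mean in means.items():
    print(f"{g}: mean ratio = {mean:.2f}, fold vs control = {mean / means['control']:.2f}")
```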
2.4. Western Blotting. RIPA lysis buffer (Beyotime, P0013B) was used to extract total protein from the skeletal muscle of mice, and a mixture of protease and phosphatase inhibitors (Beyotime, P1050) was added. The BCA protein concentration assay kit (Beyotime, P0012) was used to determine the protein concentration. Equal amounts of protein were separated by 10% SDS-PAGE electrophoresis and transferred to 0.22 μm PVDF membranes (Millipore, Billerica). At room temperature, the PVDF membrane was blocked with 5% skim milk for 1 hour. GAPDH (1 : 1000, Abcam) and β-tubulin
Offsetting Mitochondrial ROS Induced Loss of Membrane Potential in the Skeletal Muscle Mitochondria of Trained Mice. The mitochondrial membrane potential was measured using the fluorescence labeling method. Compared with the sedentary control group, 12 days of regular exercise increased the mitochondrial membrane potential by 22% (P < 0.01), while the mitochondrial membrane potential in skeletal muscle decreased by 17% (P < 0.01) after MitoTEMPO was administered to eliminate mtROS (Figure 1 and Table 1).
mtROS Is the Key Signal to Activate Mitochondrial Matrix CHOP-Dependent UPRmt. After processing the muscle protein, Western blotting showed that 12-day regular exercise promoted the activation of the mitochondrial matrix CHOP-dependent UPRmt.
mtROS Activated the Expression of MnSOD in the SIRT3-Dominated Antioxidant System. In addition, we found that the expression of SIRT3 protein in skeletal muscle did not increase significantly after 12 days of regular exercise. However, the expression of MnSOD protein in the mitochondrial matrix increased by 67% (P < 0.01). The expression of SIRT3 and MnSOD in skeletal muscle decreased by 65% (P < 0.01) and 62% (P < 0.01), respectively, after MitoTEMPO intervention (Figure 4 and Table 4).
Discussion
Mitochondrial membrane potential is often used as an indicator of mitochondrial membrane integrity to evaluate the health status of mitochondria. Our results show that regular exercise training can improve the mitochondrial membrane potential, while MitoTEMPO significantly reduces the mitochondrial membrane potential through the targeted elimination of mtROS. Thus, the results suggest that mtROS is critical for maintaining mitochondrial membrane potential, but how mtROS helps maintain mitochondrial quality is unclear. UPRmt, as an essential quality control mechanism for maintaining mitochondrial protein homeostasis, is closely related to mitochondrial membrane potential [50][51][52]. However, whether mtROS generated during exercise is an important signal for UPRmt activation remains to be investigated.

Table 3: HTRA2 protein in the mitochondrial intermembrane space, normalized to the control group (mean ± SD). Columns: Control; Exercise; Exercise + MitoTEMPO.

Mitochondria have their own chaperone and protease repertoire. When the accumulation of misfolded and unfolded proteins exceeds the folding capacity of mitochondria, it can cause damage and organelle dysfunction [53]. Therefore, a moderate UPRmt is beneficial for maintaining mitochondrial health. The best characterized mitochondrial UPRmt reaction occurs in the mitochondrial matrix [54]. This UPRmt reaction mainly depends on retrograde signal transduction activated by CHOP, leading to the transient expression of the nuclear-encoded mitochondrial chaperone HSP60 and the protease CLPP [55], which play a role in controlling protein quality. The degree of this reaction is closely related to the level of unfolded protein in the mitochondria [33]. When unfolded or misfolded proteins accumulate in the mitochondrial matrix and produce stress, they are first cleaved by the CLPP protease, and the peptides are transported out of the mitochondria through a still unclear transport mechanism, resulting in the activation of c-Jun N-terminal kinase (JNK2). JNK2 stimulates the transcription factor CHOP, and the dimer of CHOP and C/EBPβ binds to the CHOP element on the promoters of UPRmt genes encoding the mitochondrial chaperone protein HSP60 and the protease CLPP [54,56,57]. The nuclear-encoded mitochondrial chaperone HSP60 mainly helps the further folding of mitochondrial unfolded proteins [58], while the protease CLPP is mainly responsible for the degradation of abnormal proteins in the mitochondrial matrix [55]. In addition, there is a CHOP-independent unfolded protein response, the UPRmt sirtuin axis, and cells can activate this axis to promote the expression of MnSOD in the matrix and thereby maintain mitochondrial protein homeostasis [59,60]. This axis involves SIRT3, which is located in the mitochondria and controls mitochondrial metabolism [61]. SIRT3 regulates the deacetylation of the transcription factor FOXO3A, which causes FOXO3A to localize to the nucleus [62][63][64], stimulating the transcription of mitochondrial superoxide dismutase 2 (MnSOD) [65,66] and thereby exerting an antioxidant role. A previous study has shown that CHOP gene knockout and RNAi-mediated SIRT3 inhibition do not affect the classical UPRmt transcription reaction [51]. Therefore, the UPRmt reaction involving the SIRT3-FOXO3A axis is independent
of the CHOP-based mitochondrial quality control system, and the antioxidant activity of the UPRmt sirtuin axis may be highly complementary to the classical UPRmt transcription reaction in ensuring mitochondrial health [52]. Our results showed that exercise activated the mitochondrial matrix UPRmt response and increased the content of MnSOD. However, the expression of c-JUN, CHOP, HSP60, CLPP, SIRT3, and MnSOD decreased to different extents after applying MitoTEMPO to inhibit mtROS, suggesting that mtROS is a key factor through which exercise promotes both the CHOP-dependent and the sirtuin axis-based UPRmt.
Unfolded protein protection programs are also present in the mitochondrial intermembrane space (IMS) [54]; IMS protein quality control differs from the matrix protein protection procedures and comprises two mechanisms. First, misfolded proteins targeted to the IMS are ubiquitinated and degraded by the 26S proteasome in the cytoplasm. Second, when unfolded and excess proteins enter the IMS, they can be eliminated by the protease HTRA2 (also known as OMI) [67]. Specifically, the phosphorylation of protein kinase B (AKT) is activated when misfolded or unfolded proteins accumulate excessively in the IMS; estrogen receptor α (ERα) in the nucleus is thereby activated, and the expression of the IMS protease HTRA2 is induced to control the quality of IMS proteins [68]. Our results show that exercise activates the IMS UPRmt-related protein HTRA2 by upregulating AKT and pAKT, while the expression of AKT, pAKT, and HTRA2 is suppressed after mtROS is scavenged, suggesting that exercise-generated mtROS is an important signal for activating the IMS UPRmt.
Conclusion
Regular exercise training generates mtROS as a key signal that activates UPRmt through different pathways. The production of appropriate amounts of mtROS enhances protein homeostasis in mitochondria, which maintains the integrity of mitochondrial membranes and safeguards the efficiency of energy metabolism.
Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare that this article is free of conflicts of interest. | 2022-02-24T16:20:45.997Z | 2022-02-21T00:00:00.000 | {
"year": 2022,
"sha1": "3e2b5e344be7197747e9a5ccdafd9c1d2ec975fb",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2022/7436577.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2bf5b37891c14465fc611d1bcb331660f60d4745",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
196201849 | pes2o/s2orc | v3-fos-license | Simple Unsupervised Summarization by Contextual Matching
We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained) and one that is specific to the target domain. We show that, by using a product-of-experts criterion, these are enough for maintaining continuous contextual matching while preserving output fluency. Experiments on both abstractive and extractive sentence summarization data sets show promising results for our method without it being exposed to any paired data.
Introduction
Automatic text summarization is the process of formulating a shorter output text than the original while capturing its core meaning. We study the problem of unsupervised sentence summarization with no paired examples. While data-driven approaches have achieved great success based on various powerful learning frameworks such as sequence-to-sequence models with attention (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016), variational auto-encoders (Miao and Blunsom, 2016), and reinforcement learning (Paulus et al., 2017), they usually require a large amount of parallel data for supervision to do well. In comparison, the unsupervised approach reduces the human effort needed to collect and annotate large amounts of paired training data.
Recently researchers have begun to study the unsupervised sentence summarization task. These methods all use parameterized unsupervised learning to induce a latent-variable model: for example, Schumann (2018) uses a length-controlled variational autoencoder, Fevry and Phang (2018) use a denoising autoencoder (but only for extractive summarization), and Wang and Lee (2018) apply a reinforcement learning procedure combined with GANs, which takes a further step toward the goal of Miao and Blunsom (2016) of using language as a latent representation for semi-supervised learning.
This work instead proposes a simple approach to this task that does not require any joint training. We utilize a generic pretrained language model to enforce contextual matching between sentence prefixes. We then use a smoothed, problem-specific target language model to guide the fluency of the generation process. We combine these two models in a product-of-experts objective. This approach does not require any task-specific training, yet experiments show results on par with or better than the best unsupervised systems while producing qualitatively fluent outputs. The key aspect of this technique is the use of a pretrained language model for unsupervised contextual matching, i.e. unsupervised paraphrasing.
Model Description
Intuitively, a sentence summary is a shorter sentence that covers the main point succinctly. It should satisfy the following two properties (similar to Pitler (2010)): (a) Faithfulness: the sequence is close to the original sentence in terms of meaning; (b) Fluency: the sequence is grammatical and sensible to the domain.
We propose to enforce these criteria using a product-of-experts model (Hinton, 2002),

p(y|x) ∝ p_cm(y|x) · p_fm(y|x)^λ, (1)

where the left-hand side is the probability that a target sequence y is the summary of a source sequence x, p_cm(y|x) measures the faithfulness in terms of contextual similarity from y to x, and p_fm(y|x) measures the fluency of the token sequence y with respect to the target domain. We use λ as a hyper-parameter to balance the two expert models.
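To make the combination concrete, here is a minimal Python sketch of product-of-experts scoring over a candidate list, assuming the form of (1) shown above, in which λ weights the fluency expert so that λ = 0 recovers pure contextual matching (consistent with the ablation described later). The function names and example numbers are illustrative, not the authors' code.

```python
import math

def poe_log_score(log_p_cm, log_p_fm, lam=0.11):
    """Log of the unnormalized product-of-experts score
    p_cm(y|x) * p_fm(y|x)**lam; lam balances the two experts."""
    return log_p_cm + lam * log_p_fm

def poe_step_distribution(q_cm_probs, p_fm_probs, lam=0.11):
    """Combine per-token expert distributions over the candidate list C
    into a normalized next-token distribution."""
    scores = [math.log(q) + lam * math.log(p) for q, p in zip(q_cm_probs, p_fm_probs)]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    return [e / z for e in exps]

# Example: three candidate words with (hypothetical) expert probabilities.
print(poe_step_distribution([0.5, 0.3, 0.2], [0.1, 0.6, 0.3]))
```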
We consider the distribution (1) as being defined over all possible y whose tokens are restricted to a candidate list C determined by x. For extractive summarization, C is the set of word types in x. For abstractive summarization, C consists of word types relevant to x, obtained by taking the K closest word types from a full vocabulary V for each source token, as measured by pretrained embeddings.
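A sketch of how such a candidate list might be built follows; it is illustrative only, and the embedding table and helper names are assumptions rather than the authors' implementation.

```python
import numpy as np

def build_candidate_list(source_ids, embeddings, k=6):
    """Abstractive candidate list C: for each source token, take the K
    nearest word types in the full vocabulary V under cosine similarity
    of (pretrained) embeddings, and union the results.

    embeddings: (|V|, d) array of word-type embeddings.
    source_ids: list of vocabulary indices for the source tokens.
    """
    # Normalize once so the dot product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.clip(norms, 1e-8, None)

    candidates = set()
    for idx in source_ids:
        sims = unit @ unit[idx]            # cosine similarity to every word type
        nearest = np.argsort(-sims)[:k]    # K closest (includes the token itself)
        candidates.update(int(i) for i in nearest)
    return sorted(candidates)

# Toy usage with a random embedding table.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 64))
print(len(build_candidate_list([3, 17, 256], E, k=6)))
```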
Contextual Matching Model
The first expert, p_cm(y|x), tracks how close y is to the original input x in terms of a contextual "trajectory". We use a pretrained language model to define the left-contextual representations for both the source and target sequences. Define S(x_{1:m}, y_{1:n}) to be the contextual similarity between a source and target sequence of length m and n, respectively, under this model. We implement this as the cosine similarity of a neural language model's final states with inputs x_{1:m} and y_{1:n}. This approach relies heavily on the observed property that similar contextual sequences often correspond to paraphrases. If we can ensure close contextual matching, it will keep the output faithful to the original.
We use this similarity function to specify a generative process over the token sequence y, p_cm(y|x) = ∏_{n=1}^{N} q_cm(y_n | y_{<n}, x).
The generative process aligns each target word to a source prefix. At the first step, n = 1, we compute a greedy alignment score for each possible word w ∈ C, s_w = max_{j≥1} S(x_{1:j}, w), over all source prefixes up to length j. The probability q_cm(y_1 = w | x) is computed as softmax(s) over all target words. We also store the aligned context z_1 = argmax_{j≥1} S(x_{1:j}, y_1).
For future words, we ensure that the alignment is strictly monotonically increasing, such that z_n < z_{n+1} for all n. Monotonicity is a common assumption in summarization (Yu et al., 2016a,b; Raffel et al., 2017). For n > 1 we compute the alignment score s_w = max_{j>z_{n−1}} S(x_{1:j}, [y_{1:n−1}, w]) to only look at prefixes longer than z_{n−1}, the last greedy alignment. Since the distribution conditions on y, the past alignments are deterministic to compute (and can be stored). The main computational cost is in extending the target language model with each candidate word given the current prefix and calculating the similarity scores to find the best match. This process is terminated when a sampled token in y is aligned to the end of the source sequence x, and the strictly monotonically increasing alignment constraint guarantees that the target sequence will not be longer than the source sequence. The generative process of the above model is illustrated in Fig. 1.
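The step just described can be sketched in a few lines of Python. In the snippet below, encode_with_lm is only a stand-in for the forward language model used in the paper (here it simply averages token embeddings), and all tokens and embeddings are toy data; the point is the strictly monotonic alignment search and the softmax over candidates.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def encode_with_lm(tokens, emb):
    """Stand-in for a left-to-right LM final state: here just the mean of
    token embeddings. In the paper this would be the ELMo forward LM."""
    return emb[tokens].mean(axis=0)

def step(prefix, candidates, x_states, z_prev, emb):
    """One generation step of q_cm: score each candidate w by the best
    similarity S(x_{1:j}, [prefix, w]) over source prefixes j > z_prev
    (strictly monotonic alignment), then softmax over candidates."""
    scores, aligns = [], []
    for w in candidates:
        y_state = encode_with_lm(prefix + [w], emb)
        sims = [cosine(x_states[j], y_state) for j in range(z_prev + 1, len(x_states))]
        scores.append(max(sims))
        aligns.append(z_prev + 1 + int(np.argmax(sims)))
    s = np.array(scores)
    probs = np.exp(s - s.max())
    probs /= probs.sum()
    return probs, aligns

# Toy usage: 5-token source, cumulative prefix states, 3 candidate words.
rng = np.random.default_rng(1)
emb = rng.normal(size=(50, 16))
x = [4, 9, 12, 7, 3]
x_states = [encode_with_lm(x[: j + 1], emb) for j in range(len(x))]
probs, aligns = step(prefix=[4], candidates=[9, 12, 30], x_states=x_states, z_prev=0, emb=emb)
print(probs, aligns)
```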
Domain Fluency Model
The second expert, p_fm(y|x), accounts for the fluency of y with respect to the target domain. It is based directly on a domain-specific language model. Its role is to adapt the output to read closer to the shorter sentences common in the summarization domain. Note that unlike the contextual matching model, where y explicitly depends on x in its generative process, in the domain fluency language model the dependency of y on x is implicit, through the candidate set C that is determined by the specific source sequence x.
The main technical challenge is that the probabilities of a pretrained language model are not well calibrated with the contextual matching model within the candidate set C, and so the language model tends to dominate the objective because it has much lower variance (it is more peaked) in the output distribution than the contextual matching model. To manage this issue, we apply kernel smoothing over the language model to adapt it from the full vocabulary V down to the candidate word list C.
Our smoothing process focuses on the output embeddings from the pretrained language model. First we form the Voronoi partition (Aurenhammer, 1991) over all the embeddings using the candidate set C. That is, each word type w' in the full vocabulary V is assigned to exactly one region represented by a word type w in the candidate set C, such that the distance from w' to w is not greater than its distance to any other word type in C. As above, we use cosine similarity between corresponding word embeddings to define the regions. This results in a partition of the full vocabulary space into |C| distinct regions, called Voronoi cells. For each word type w ∈ C, we define N(w) to be the Voronoi cell formed around it. We then use cluster smoothing to define a new probability distribution,

p_fm(y_n = w | y_{<n}) = Σ_{w' ∈ N(w)} lm(w' | y_{<n}),

where lm is the conditional probability distribution of the pretrained domain fluency language model. By our construction, p_fm is a valid distribution over the candidate list C. The main benefit is that it redistributes probability mass otherwise lost to terms in V to the active words in C. We find this smoothing approach balances integration with p_cm.
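A compact numpy sketch of this cluster smoothing follows, under the assumption that cells are assigned by maximal cosine similarity to a candidate word; the inputs below are random toy data, not the trained language model.

```python
import numpy as np

def cluster_smooth(lm_probs, embeddings, candidate_ids):
    """Cluster smoothing sketch: assign every word type in V to the
    Voronoi cell of its most similar candidate word (cosine similarity),
    then sum the LM probability mass inside each cell onto that candidate.

    lm_probs: (|V|,) next-token distribution from the fluency LM.
    embeddings: (|V|, d) output embeddings of the LM.
    candidate_ids: indices of the candidate list C.
    Returns a valid distribution over C.
    """
    unit = embeddings / np.clip(
        np.linalg.norm(embeddings, axis=1, keepdims=True), 1e-8, None)
    cand = unit[candidate_ids]                # (|C|, d)
    # For every word type, the index of its nearest candidate (its cell).
    cell = np.argmax(unit @ cand.T, axis=1)   # (|V|,)
    p_fm = np.zeros(len(candidate_ids))
    np.add.at(p_fm, cell, lm_probs)           # sum mass per Voronoi cell
    return p_fm / p_fm.sum()

# Toy usage.
rng = np.random.default_rng(2)
V, d = 500, 32
E = rng.normal(size=(V, d))
lm = rng.random(V)
lm /= lm.sum()
print(cluster_smooth(lm, E, candidate_ids=[3, 40, 99, 250]).round(3))
```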
Summary Generation
To generate summaries, we maximize the log probability (1) to approximate y* using beam search. We begin with a special start token. A sequence is moved out of the beam if it has aligned to the end token appended to the source sequence. To discourage extremely short sequences, we apply length normalization to re-rank the finished hypotheses. We choose a simple length penalty lp(y) = |y| + α, with α a tuning parameter.
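The final re-ranking step is small enough to show directly; this illustrative helper (hypothetical names, toy hypotheses) normalizes each finished hypothesis's accumulated log score from (1) by the length penalty lp(y) = |y| + α.

```python
def rerank(finished, alpha=0.0):
    """Return the best finished hypothesis under length normalization:
    each hypothesis is (tokens, accumulated_log_score), and we divide
    the log score by lp(y) = |y| + alpha before comparing."""
    def normalized(hyp):
        tokens, log_score = hyp
        return log_score / (len(tokens) + alpha)
    return max(finished, key=normalized)

# Toy usage: (tokens, accumulated log probability) pairs.
hyps = [(["nec", "agrees"], -3.1),
        (["nec", "agrees", "to", "join", "forces"], -6.0)]
print(rerank(hyps, alpha=0.1))
```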
Experimental Setup
For the contextual matching model's similarity function S, we adopt the forward language model of ELMo (Peters et al., 2018) to encode tokens to the corresponding hidden states in the sequence, resulting in a three-layer representation, each layer of dimension 512. The bottom layer is a fixed character embedding layer, and the two layers above are LSTMs associated with the generic unsupervised language model trained on a large amount of text data. We explicitly manage the ELMo hidden states to allow our model to generate contextual embeddings sequentially for efficient beam search. The fluency language model component lm is task specific and pretrained on a corpus of summaries. We use an LSTM model with 2 layers, with both the embedding size and hidden size set to 1024. It is trained using a dropout rate of 0.5 and SGD combined with gradient clipping.
We test our method on both abstractive and extractive sentence summarization tasks. For abstractive summarization, we use the English Gigaword data set pre-processed by Rush et al. (2015). We train p_fm using its 3.8 million headlines in the training set and generate summaries for the inputs in the test set. For extractive summarization, we use the Google data set from Filippova and Altun (2013). We train p_fm on 200K compressed sentences in the training set and test on the first 1000 pairs of the evaluation set, consistent with previous works. For generation, we set λ = 0.11 in (1) and the beam size to 10. Each source sentence is tokenized and lowercased, with periods deleted and a special end-of-sentence token appended. In abstractive summarization, we use K = 6 in the candidate list and use the fixed embeddings at the bottom layer of the ELMo language model for similarity. Larger K has only a small impact on performance but makes generation more expensive. The hyper-parameter α for the length penalty ranges from -0.1 to 0.1 for different tasks, mainly to control the desired output length, as we find ROUGE scores are not sensitive to it. We use the concatenation of all ELMo layers as the default in p_cm.
Results and Analysis
Quantitative Results. The automatic evaluation scores are presented in Table 1 and Table 2. For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods. Our method outperforms commonly used prefix baselines for this task, which take the first 75 characters or 8 words of the source as a summary. Our system achieves comparable results to Wang and Lee (2018), a system based on both GANs and reinforcement training. Note that the GAN-based system needs both source and target sentences for training (they are unpaired), whereas our method only needs the target domain sentences for a simple language model. In Table 1, we also list scores of the state-of-the-art supervised model, an attention-based seq-to-seq model of our own implementation, as well as the oracle scores of our method obtained by choosing the best summary among all finished hypotheses from beam search. The oracle scores are much higher, indicating that our unsupervised method does allow summaries of better quality, but with no supervision it is hard to pick them out with any unsupervised metric. For extractive sentence summarization, our method achieves a good compression rate and significantly raises a previous unsupervised baseline on token-level F1 score.

Table 2: Experimental results of extractive summarization on the Google data set. F1 is the token overlapping score, and CR is the compression rate. F&A is an unsupervised baseline used in Filippova and Altun (2013), and the middle section is supervised results.

Results show the effectiveness of our cluster smoothing method for the vocabulary-adaptive language model p_fm, although temperature smoothing is an option for abstractive datasets. Additionally, contextual embeddings have a huge impact on performance. When using word embeddings (the bottom layer only from the ELMo language model) in our contextual matching model p_cm, the summarization performance drops significantly, to below the simple baselines. This is strong evidence that encoding independent tokens in a sequence with generic language model hidden states helps maintain the contextual flow. Experiments also show that even when only using p_cm (by setting λ = 0), utilizing the ELMo language model states allows the generated sequence to follow the source x closely, whereas normal context-free word embeddings would fail to do so.

Table 4 shows some examples of our unsupervised generation of summaries, compared with the human reference, an attention-based seq-to-seq model we trained using all the Gigaword parallel data, and the GAN-based unsupervised system from Wang and Lee (2018). Besides our default of using all ELMo layers, we also show generations obtained by using the top and the bottom (context-independent) layers only.

Table 4: Abstractive sentence summary examples on the Gigaword test set. I is the input, G is the reference, s2s is a supervised attention-based seq-to-seq model, GAN is the unsupervised system from Wang and Lee (2018), and CM is our unsupervised model. The third example is a failure case we picked where the sentence is fluent and makes sense but misses the point as a summary.

I: japan 's nec corp. and UNK computer corp. of the united states said wednesday they had agreed to join forces in supercomputer sales
G: nec UNK in computer sales tie-up
s2s: nec UNK to join forces in supercomputer sales
GAN: nec corp. to join forces in sales
CM (cat): nec agrees to join forces in supercomputer sales
CM (top): nec agrees to join forces in computer sales
CM (bot): nec to join forces in supercomputer sales

I: turnout was heavy for parliamentary elections monday in trinidad and tobago after a month of intensive campaigning throughout the country , one of the most prosperous in the caribbean
G: trinidad and tobago poll draws heavy turnout by john babb
s2s: turnout heavy for parliamentary elections in trinidad and tobago
GAN: heavy turnout for parliamentary elections in trinidad
CM (cat): parliamentary elections monday in trinidad and tobago
CM (top): turnout is hefty for parliamentary elections in trinidad and tobago
CM (bot): trinidad and tobago most prosperous in the caribbean

I: a consortium led by us investment bank goldman sachs thursday increased its takeover offer of associated british ports holdings , the biggest port operator in britain , after being threatened with a possible rival bid
G: goldman sachs increases bid for ab ports
s2s: goldman sachs ups takeover offer of british ports
GAN: us investment bank increased takeover offer of british ports
CM (cat): us investment bank goldman sachs increases shareholdings
CM (top): investment bank goldman sachs increases investment in britain
CM (bot): britain being threatened with a possible bid

Our generation has fairly good quality, and it can correct verb tenses and paraphrase automatically. Note that the top representation actually finds more abstractive summaries (such as in example 2), and the bottom representation fails to focus on the proper context. The failed examples are mostly due to missing the main point, as in example 3, or to summaries that would need to reorder tokens in the source sequence. Moreover, as a byproduct, our unsupervised method naturally generates hard alignments between summary and source sentences in the contextual matching process.
Conclusion
We propose a novel methodology for unsupervised sentence summarization using contextual matching. Previous neural unsupervised works mostly adopt complex encoder-decoder frameworks; in contrast, we achieve good generation quality and competitive evaluation scores with a simple combination of language models. We also demonstrate a new way of utilizing pretrained generic language models for contextual matching in untrained generation. Future work could compare language models of different types and scales in this direction. | 2019-07-14T07:01:41.595Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "513084a723ffcbd6546b5a84f1f01ecaa7dfeddd",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/P19-1503.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "ef3c9b9f590a323f0873ad67cf59e04666810c99",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
239244463 | pes2o/s2orc | v3-fos-license | WAYS TO ACHIEVE PROGRESSION IN THE FOOTBALL GAME
The paper presents the most important aspects of how the ball is moved forward by the players towards finalisation in the game of football. Considering that there are many problems with achieving progression in Romanian football, particularly due to the lack of a unitary philosophy to be put into practice from the earliest age (starting with juniors), we aim to present a coherent, concrete model of training based on tactical periodisation. The game model is fundamental for creating specificity in relation to the multitude of situations that the game can offer at any moment. In this case, the player needs to have as many opportunities as possible to solve game situations, which will make him more creative. This theoretical model simplifies the way in which progression is achieved and highlights how individual and collective tactical actions are put into practice, each time taking into account the defensive balance in the context of creating superiority through ball circulation and player movement relative to the opponent, the possibilities of achieving these aspects depending on the physical, technical and especially mental qualities of the team players. The paper discusses how to approach progression throughout the week in the context of using training means based on work with the ball. The results of the research can contribute to a national philosophy based on the specific characteristics of Romanian football players.
Introduction
As we still do not have a unitary philosophy and a coherent concept implemented at the national level and put into practice from the earliest age (starting with juniors), Romanian football encounters problems in organizing the game, especially at the level of the junior and senior national teams. If this model is not appropriately understood and efficiently used by each club, the results will remain a matter of chance.
This paper aims to achieve a model of progression in the football game, which is based on tactical periodisation.Progression in football has been a permanent concern for the authors of football concepts.According to Doblado (2010, p. 33), progression refers to the actions taken by a team in order to advance into the opponents' field; the author says that progression may be more or less rapid, but must be clearly conceived and established.
Tactical periodisation is a modern training model in which the main element is the preparation of all training factors within tactical training. In practice, the physical, technical, theoretical and psychological factors are developed through tactical training. Frade is recognized by the field specialists as the founder of this training concept, the model having been taken up and practiced in various forms by football practitioners.
The model of play is fundamental in creating some sort of specificity in relation to the multitude of situations that the game can offer at all times.Meneses (2016, p. 19, quoting Rui Faria) has emphasised that the specificity of the game model is fundamental and we must create conditions for the player to face as many game situations as possible.In this case, the player needs to have as many opportunities as possible to solve the game situations, which will make him more creative.
Progression can be assimilated to the attacking phase, but we must take into account the moment when the ball is driven by the players towards the opponents' goal. This phase requires players with an exceptional individual technique, based primarily on physical fitness and genetic qualities.
This theoretical model simplifies the way in which progression is made and highlights how individual and collective tactical actions are put into practice, taking into account each time the defensive balance in the context of creating superiority in the ball circulation and player movement depending on the opponent and the possibilities to achieve these aspects according to the physical, technical and especially mental qualities of the own team players, but also of the opponents.
The paper discusses how to approach progression based on several criteria, including the players' qualities, the philosophy specific to the club, the days of the week and the opponent's qualities, in the context of ball-based training. The results of the research can be a milestone in creating a national philosophy according to the specifics of Romanian football players.
Progression is often mistaken for attack; it involves carrying the ball towards the opponent's goal and the concrete ways of reaching it. There are two ways to achieve progression: the general way, through the tactical and technical attack and defence actions common to all systems, and the specific way, determined by how the players are positioned within the chosen game system.
We intend to present a progression model for the 1-4-3-3 game system, taking into account two representative ways of achieving it, based on the football-specific factors presented above: the overall tactical and technical way, which exists in all the game systems used; and the specific way used by the players according to the philosophy of the coach or club (in our case, in the 1-4-3-3 system).
Topic Addressed
Progression model for the 1-4-3-3 game system. General technical and tactical way to achieve progression: the individual tactical actions through which a player can achieve progression are demarcation, dribbling and the free kick.
1. Demarcation is the individual action with the highest frequency, which occurs near the opponents' goal. The most important thing is to perform it in depth. However, there are plenty of situations when a lateral or backward demarcation will contribute to more effective progression. Many coaches consider that the basic technical elements of receiving and kicking the ball contribute greatly to achieving demarcation.
The most important features of the player who performs an effective demarcation are: To perform it in free space so that the ball cannot be intercepted by the opponent; To represent a real solution for the teammate; To represent a future way of continuing the phase through a coherent (oriented) first touch. 2. Dribbling is an action used increasingly rarely, owing to the collective play imposed by today's modern game. It can be performed by all the team players, even the goalkeeper. Most often, opponents are taken on when there is a 1 vs. 1 situation and the defender is not doubled up in defence. There are also plenty of situations in which the player in possession of the ball is not closely marked by an opponent, and then the best option is to attack the free spaces with the ball at the foot - individual progression.
The most important features specific to the player who achieves an effective dribbling/progression are: To have a high-level technique put into practice especially in situations of adversity; To have good speed in all manifestations; To analyse the situation correctly when making such a decision, to have motor and mental intelligence.
3. The free kick is a set piece from which the game continues after the opponent's improper interventions. It is exclusively an individual decision of the player who takes it, who will have to resort to the best solution rehearsed in training or to a new solution suited to the specific positioning of the opponents.
The most important features specific to the player who makes an effective free kick are: To have the technique to kick the ball very well; To have a high mental ability to analyse the situation; To collaborate effectively with teammates especially at fixed moments of approaching the goal.
The collective tactical actions through which a football player can achieve progression are passing the ball, one-two tactical combinations between two or more players and the tactical movement of the players as a result of demarcations relative to teammates and opponents, as well as movements through which crosses or overlapping runs are made to occupy space in the opponents' half.
The specific objectives will direct, focus and channel the theoretical and practical activities. According to Gómez and Doblado (2010, p. 442), they refer to: Describing the historical evolution of football and knowing its current state; Understanding football as a team game involving a number of factors in the game behaviour, which are extracted from a structural and functional football analysis; Knowing the structural elements of the game and how football works; Establishing the principles on which the teaching model is based; Building a conceptual and terminological framework based on strategy and tactics, as well as the mental involvement determined by the cognitive actions adapted to the development of the football game; Knowing and explaining the principles of action or means of play; Understanding the basic game systems applicable to the game; Knowing and applying the main concepts of the football game through exercises with an increasingly large number of players, and through positioning and possession games which aim at progression.

Ways to achieve progression in football: 1. Passing the ball is the first and easiest tactical element of collaboration with a partner and is performed using the technical elements of ball transfer: kicking the ball, heading it and the throw-in. The player who performs effective progression by passing the ball must: Possess exceptional technique in transmitting the ball over short, medium and long distances; Analyse the game situation correctly, passing the ball to the best placed teammate with real chances of continuing the game; Do it in depth. 2. Tactical combinations, as well as the tactical movement of the players with and without the ball, will follow the game philosophy. Players with solid features are needed in terms of: Physical and mental development; Creativity in complex situations and anticipation of the opponent's reactions; Complex possibilities of penetrating the opponents' defence.
The specific way to achieve progression in the 1-4-3-3 game system. The general lines to which effective progression relates will consider: Establishing a solid philosophy based on a selection that complies with the requirements of the game system used and with the principles and sub-principles that the team players must observe; Careful analysis of the opponent, which changes weekly; Coaches who adopt the club's philosophy because they genuinely embrace it. According to these considerations, the coach will first prepare his own philosophy, which will be passed on to the players.
Game situations will be prepared and put into practice: Travelling with the ball along lines and corridors - from line to line, with medium and long passes forward; Passing back to gain space in progression - static control construction; Build-up patterns in 1-4-3-3 - different approaches; Contradiction of systems - reaction in construction according to the opponent.
Every game situation will take one of the forms of attack. We will analyse several game situations specific to 1-4-3-3, where progression unfolds as follows: 1. The counterattack is the first tactical solution for the player who has won the ball. In this case, progression is achieved by playing the ball directly towards the striker or an intermediate player. From his own half, the player who has won the ball sends it directly to the striker. We have the following cases: a. The ball is won by the goalkeeper (Figure 1). He will pass it: Directly to the striker in depth; To an intermediate player on the side of the pitch; To a teammate in the centre of the pitch who has space created by the width of the team. The counterattack tactics will take into account the following: Precision in the long transmission of the ball into the future position of the striker; A fast response in deciding to trigger the counterattack and move players towards the opponent's goal; Strength in duels with the opponents; Running with the ball using feints and dribbling at speed; Coordination of player movements, with perfect timing; Finding the player positioned in front of the goal with a perfect pass. 2. The rapid attack is characteristic of teams possessing very good technique at speed and players with rapid reactions to crowded game situations, making decisions according to the opponent's positioning. The first pass will be transmitted, if possible, towards the opposing goal so as to leave as many opponents as possible behind the line of the ball and to have more space for quick construction in the opponents' half.
The main ways to achieve progression in the rapid attack (Figure 3) for the 1-4-3-3 game system are: Progression in attack where players act in their position-specific roles without changing places during the action; Progression in attack where players change positions between themselves; Progression in attack in which defensive players advance into the opponents' defensive system.
Figure 3. Making progression through direct attack - 1-4-3-3 game system
The main requirements for success in the rapid attack are: Making a quick and accurate decision to transmit the first pass in depth; Controlling fast and efficient construction situations according to the opponent's position; Avoiding progression by running with the ball; Making a rapid transition into the rapid attack, with actions performed with amplitude and depth; On finalisation, putting quick decisions into practice at high speed; A permanent concern for players to provide passing solutions for finalisation. 3. The combined attack, based on solid principles, will take into account our players' characteristics and the defensive organization of the opposing team, as well as their reaction to our offensive movements.
The objectives of an elevated progression will include: The precise transfer of the ball to the best placed teammate; Multiple game-construction solutions starting from the goalkeeper to the back line, the midfield and even the attacking players; Developing automatisms in construction, where progression is conceived during training sessions; Making creative and inventive decisions, where players resort to instinctive solutions; Forming the ability to attack free spaces, penetrate the defence and transmit the ball towards the opponents' goal; Welding the team together, creating the unity in which each player self-directs, thinks and acts as part of a whole. Considering that moving the team from defence into attack involves a lot of offensive movements, it is important that these should be theoretically known by the team players and practiced intensively during training sessions. There is practically a new arrangement in which the team will have amplitude and depth, which means large spaces among the players. We will continue by presenting some situations of progression originating in clearly determined principles (Figure 4) in the positional-combinative attack within the 1-4-3-3 game system. According to Marziali and Mora (2008, p. 194), construction and organization can be very effective when all the knowledge and motor experience lead to the storage of situations. The implementation and learning of construction in the football game is done gradually, starting from one individual up to the total number of players involved in the offensive organization. After setting up the game system, the coach starts working on compartments and on individual and group progression.
A. The construction of the game after the ball has gone out over the goal line, with the goalkeeper restarting play from the 5.5 m box, can be done as follows (Figure 5): Transmitting the ball to a centre back positioned at the edge of the 16 m box, who will continue to the defensive midfielder with progression; transmission to the fullback and then to the defensive midfielder (the first exercise will be performed without adversity, then with two and three semi-active and active opponents, with the four defenders and the defensive midfielder - 1 + 5 vs. 2 (3)); Transmitting the ball to the central defensive midfielder dropping between the central defenders; Transmitting the ball to one of the fullbacks positioned with amplitude. B. After working with the five players, the two midfielders are introduced into the training. Their movements may differ depending on the requirements. They will work primarily without adversity, and then five defenders will be placed in a 1 + 7 vs. 5 adversity relationship (Figure 6), from the goalkeeper in lines 1 and 2, then in line 3. The offensive movements will be developed according to the figure below:
Discussion
Progression in the 1-4-3-3 game system has special features, being considered beneficial when executed within a game system with three strikers. Each player has tasks specific to his position. The construction of the game model, with progression as its main element, is made differently according to the club's philosophy and is put into practice by a coach who agrees with this philosophy.
When such a philosophy exists, it is important to have several variants of response to the opponent's possible reactions.
The 1-4-3-3 game system offers a multitude of game construction variations in which progression will focus on possession of the ball. There have been many cases (Barcelona) in which it took 40-50 passes to reach finalisation. Therefore, progression is often preceded by backward passes meant to unbalance the opponent, which may appear unproductive but is beneficial within the team's strategy.
Construction with progression is based on principles and sub-principles established in the game philosophy and put into practice by the team players. The characteristics of the players implementing the 1-4-3-3 system must be well established, and selection within the team is based on these features. Such players will put into practice individual and collective tactical actions in which progression gains specific connotations.
The occurrence of the forms of attack will depend on the quality of the own team, as well as on the opponent. The longer possession is maintained, the more positional attacks will occur, to the detriment of counterattacks and rapid attacks.
Conclusion
Teams in advanced football countries such as Spain, Italy, England, France or Germany are based on a carefully built philosophy initiated long ago, so every team now knows very well how to attack.
Any player, first of all, theoretically knows the basic principles of the game, which he puts into practice very easily. Players have very good mental preparation, built mainly through psycho-sociological and theoretical training, with the information mostly acquired in practical training.
Progression is the individual and collective tactical principle of attack, based on a philosophy built on the characteristics of one's own players, as well as on a careful analysis of the opponent, and put into practice in competition.
Building one's own training philosophy for the football game at the level of clubs and the federation will clearly lead to positive results.
Figure 4. Progression in the football game | 2020-04-02T09:34:48.184Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "c96638f9f755f329da7fd6676904f6605ba7cb1c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.35189/iphm.icpesk.2019.4",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "13ff9c03ce60feb8e96ebc6374d6e110d3947b53",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
25434219 | pes2o/s2orc | v3-fos-license
Protein O-Linked Mannose β-1,4-N-Acetylglucosaminyltransferase 2 (POMGNT2) Is a Gatekeeper Enzyme for Functional Glycosylation of α-Dystroglycan*
Disruption of the O-mannosylation pathway involved in functional glycosylation of α-dystroglycan gives rise to congenital muscular dystrophies. Protein O-linked mannose β-1,4-N-acetylglucosaminyltransferase 2 (POMGNT2) catalyzes the first step toward the functional matriglycan structure on α-dystroglycan that is responsible for binding extracellular matrix proteins and certain arenaviruses. Alternatively, protein O-linked mannose β-1,2-N-acetylglucosaminyltransferase 1 (POMGNT1) catalyzes the first step toward the other various glycan structures present on α-dystroglycan, whose functions are unknown. Here, we demonstrate that POMGNT1 is promiscuous for O-mannosylated peptides, whereas POMGNT2 displays significant primary amino acid selectivity near the site of O-mannosylation. We define a POMGNT2 acceptor motif in α-dystroglycan, conserved among 59 vertebrate species, that, when engineered into a POMGNT1-only site, is sufficient to convert the O-mannosylated peptide into a substrate for POMGNT2. Additionally, an acceptor glycopeptide is a less efficient substrate for POMGNT2 when two of the conserved amino acids are replaced. These findings begin to define the selectivity of POMGNT2 and suggest that this enzyme functions as a gatekeeper to prevent the vast majority of O-mannosylated sites on proteins from becoming modified with glycan structures functional for binding laminin globular domain-containing proteins.
rise to the phosphotrisaccharide core M3 glycan structure while still in the ER (12)(13)(14). From here, it has recently been demonstrated that FKTN and FKRP appear to be responsible for extending the core M3 phosphotrisaccharide in the Golgi by the addition of two ribitol phosphate units in phosphodiester linkages (15). TMEM5 then apparently adds a xylose to the distal ribitol, which is followed by B4GAT1-catalyzed addition of glucuronic acid in a β-1,4 linkage to the xylose (16,17). This primer permits LARGE1 to catalyze the addition of a repeating disaccharide (α-1,3-linked xylose and β-1,3-linked glucuronic acid) that is the functional component, termed matriglycan, responsible for binding to the LG domains of ECM proteins (2,18,19).
Human α-DG has at least 25 O-mannosylation sites (20-22). The majority of the O-mannosylation sites on α-DG are populated by core M1 and M2 glycan structures via the action of POMGNT1 (M1) followed by MGAT5B (M2) (20-23). Site mapping studies have identified only two positions, Thr-317 and Thr-379, on α-DG with M3 core structures, although some evidence suggests Thr-319 and Thr-381 may also be sites of M3 modification (Fig. 1) (12,18,20,21). Paradoxically, from a spatiotemporal perspective, O-Man-modified α-DG encounters POMGNT2 in the ER before POMGNT1 in the cis-Golgi but is preferentially modified by POMGNT1. This led us to hypothesize that POMGNT2 must demonstrate substrate selectivity beyond simply an O-Man-modified amino acid.
Here, we explore the specificity of POMGNT2 and compare it with POMGNT1. We synthesized multiple O-mannosylated peptides derived from known M1- and M3-modified sites of α-DG and tested their ability to be acceptor substrates for the two enzymes. POMGNT2 displays selectivity based on the primary amino acid sequence in proximity to the site of O-mannosylation, whereas POMGNT1 is promiscuous. We identified a sequence motif, highly conserved in vertebrates, in α-DG that appears to modulate POMGNT2 substrate specificity in vitro. We demonstrated sufficiency of the extended motif by engineering the sequence into a typical M1 O-mannosylated peptide, which resulted in it becoming a POMGNT2 acceptor. We also demonstrate that replacement of conserved amino acids compromises an M3 peptide for extension by POMGNT2. Intriguingly, a conservative degenerate sequence based on our identified motif is present in several human membrane/secreted proteins.
Results
Acceptor Selectivity of POMGNT1 and POMGNT2 Using Synthetic α-DG Glycopeptides-To identify primary amino acid determinants of POMGNT2 selectivity, we used solid-phase peptide synthesis to generate synthetic glycopeptides whose sequences are those from known O-mannosylated regions of human α-DG (Table 1). Direct physical evidence for core M3 extension at position 379 in α-DG has been shown previously (24), whereas the threonine at position 341 in α-DG has been demonstrated to be a POMT1/POMT2 acceptor that does not carry an M3 core (25). We selected these two sites (379 and 341) because we predicted that their extensions differ in core glycan structure whereas their immediate primary amino acid sequences share a similar Thr(-O-Man)-Pro-Thr (TPT) motif. The synthetic glycopeptides were designed to be 21 amino acids in length with the mannosylated threonine as the central residue (residue 11) to evaluate nearby C-terminal and N-terminal amino acid determinants (Table 1).
To establish whether the synthetic glycopeptides were substrates for POMGNT1 and POMGNT2, we performed overnight radioactive transfer assays. Recombinant human POMGNT1 catalyzed GlcNAc transfer to both the Man341 and Man379 synthetic glycopeptides (Fig. 2, A and B). To confirm the composition of the POMGNT1 reaction products, parallel transfer assays using non-radiolabeled UDP-GlcNAc were performed using the Man341 and Man379 glycopeptides as acceptor substrates, and the reaction products were analyzed by mass spectrometry (MS) (Fig. 2, C and D). The observed peaks at 859.121 and 899.504 m/z in the full FTMS correspond to the addition of an N-acetylhexosamine residue (+203) to the Man341 and Man379 glycopeptides, respectively. Thus, POMGNT1 will extend the mannose in synthetic glycopeptides at positions 341 and 379 of the α-DG sequence in vitro. These results clearly demonstrate that POMGNT1 exhibits minimal acceptor selectivity between core M1 and M3 sites on these two α-DG-derived glycopeptides.

FIGURE 1. Core O-Man structures on α-dystroglycan. A, POMGNT1 is responsible for generating the M1 core glycan structure that can be branched by MGAT5B to generate the M2 core, whereas POMGNT2 is responsible for generating the M3 core glycan structure. B, schematic of known O-mannosylated sites on α-dystroglycan addressed in this study. Thr-317 and Thr-379 are elaborated with the M3 core glycan structure, whereas Thr-341 and Thr-414 are elaborated with M1 core glycan structures that can be further elaborated to core M2 glycan structures. Glycan symbols follow the guidelines outlined in Ref. 38. SP, signal peptide.
In comparison, POMGNT2 showed preferential acceptor selectivity for the known M3 site in the α-DG sequence. Radiolabel transfer assays showed no detectable transfer of the sugar to the acceptor Man341 glycopeptide by POMGNT2 (Fig. 2A) but did show transfer of GlcNAc to the Man379 synthetic glycopeptide (Fig. 2B). Parallel non-radioactive transfer assays followed by MS analysis identified the composition of the POMGNT2 reaction products and verified the transfer results observed in the radioactive assays (Fig. 2, E and F). The predominant peak at 791.427 m/z in the full FTMS corresponds to the unmodified Man341 glycopeptide with a single mannose (Fig. 2E). In contrast to the results seen in Fig. 2E, the observed peak at 899.504 m/z in the full FTMS in Fig. 2F corresponds to the addition of an N-acetylhexosamine residue (+203) to the Man379 glycopeptide. These results suggest that POMGNT2 preferentially modifies specific sites on α-DG intended for core M3 glycan elaboration.
Kinetic Parameters of POMGNT1 and POMGNT2 with Synthetic α-DG Glycopeptides-To further characterize the substrate specificities of POMGNT1 and POMGNT2, additional core M1 and core M3 synthetic glycopeptides were generated. Man414 and Man317 are known O-mannosylated regions of α-DG (18,21). Evidence for core M3 extension at position 317 in α-DG has been shown previously (12,18). Man317 is 21 amino acids in length, contains the TPT motif, and has the mannosylated threonine as the central residue (residue 11), similar to Man379 and Man341 (Table 1). Man414 is only seven amino acids in length and lacks the TPT motif (Table 1). However, the kinetics of Man414 with POMGNT1 have been studied previously (26), and the homologous residue in rabbit (Oryctolagus cuniculus) has been site-mapped with mannose (21), making it a useful predicted core M1 glycopeptide for this study.
Glycosyltransferase reaction kinetics for POMGNT1 with the four synthesized glycopeptides were investigated by UDP-Glo™ assays. The α-DG sequences in the four synthetic glycopeptides were all utilized by POMGNT1 as acceptors (Fig. 3, A-D). Inspection of the K_m values derived from non-linear regression analyses of the experimentally obtained values reveals that the affinity of POMGNT1 for synthetic acceptor glycopeptides containing a TPT motif (Man317, Man379, and Man341) is greater than the affinity for the ShortMan414 synthetic glycopeptide lacking a TPT motif (Table 1). POMGNT1 has the fastest turnover (k_cat) with the Man341 synthetic glycopeptide, but the catalytic efficiency (k_cat/K_m) is an order of magnitude greater for the core M3 synthetic glycopeptides Man317 and Man379 (Table 1).
To validate the acceptor selectivity of POMGNT2, we also performed UDP-Glo assays with the four synthesized glycopeptides to investigate glycosyltransferase reaction kinetics. Transfer of GlcNAc to Man341 and Man414 by POMGNT2 (Fig. 4, A and C) was below the level of detection, whereas Man379 and Man317 (Fig. 4, B and D) are clearly acceptor substrates for POMGNT2 activity. The measured K_m, k_cat, and k_cat/K_m values for the Man317 and Man379 synthetic glycopeptides with POMGNT2 are similar (Table 1). These data are consistent with the results obtained in our initial transfer assays and support the proposal that features of the primary amino acid sequence in the region of the TPT sequence are determinants of POMGNT2 selectivity.
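For orientation, K_m and k_cat values like those in Table 1 are conventionally obtained by non-linear regression of initial rates against the Michaelis-Menten equation, v = k_cat[E][S]/(K_m + [S]). The following Python sketch shows such a fit on invented data; it is purely illustrative and is not the authors' analysis pipeline (the substrate concentrations, rates, and enzyme concentration are hypothetical).

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """v = Vmax * [S] / (Km + [S]); with a known enzyme concentration [E],
    kcat = Vmax / [E] and catalytic efficiency = kcat / Km."""
    return vmax * s / (km + s)

# Hypothetical acceptor glycopeptide concentrations (uM) and initial rates
# (e.g., uM UDP released per minute in a UDP-Glo-style readout).
S = np.array([5, 10, 25, 50, 100, 250, 500], dtype=float)
v = np.array([0.8, 1.5, 2.9, 4.1, 5.2, 6.1, 6.5])

(vmax, km), _ = curve_fit(michaelis_menten, S, v, p0=(v.max(), np.median(S)))
enzyme_conc = 0.05  # uM, hypothetical
kcat = vmax / enzyme_conc
print(f"Km = {km:.1f} uM, kcat = {kcat:.2f} /min, kcat/Km = {kcat / km:.4f} /(uM*min)")
```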
A Primary Amino Acid Motif in α-DG Is Favorable for POMGNT2 Activity-It was suggested previously that all O-mannosylated sites on α-DG have a conserved TPT motif at the mannosylated threonine (18,25), although site mapping studies have demonstrated that only a subset of mapped sites follow this pattern (20-22). Indeed, the primary amino acid sequences around mapped O-mannose sites on α-DG (excluding sites Thr-317 and Thr-379) are heterogeneous (Fig. 5A). To identify primary sequence elements that govern the observed preferences in POMGNT2 acceptor substrate selectivity, α-DG amino acid sequences surrounding sites Thr-317 and Thr-379 from 59 vertebrate species with orthologues of human DAG1, POMGNT1 or POMGNT2, and FKTN or B4GAT1 were aligned using WebLogo (27) (Fig. 5B). The previously identified TPT motif was evident in our alignment, but other
conserved amino acids were observed that are not present around Thr-341. Interestingly, our alignment of the 317/379 M3 sites across species demonstrated that arginines at −6 and −8 and an Ile at −3 were conserved, in addition to the Pro at +1 and the Thr at +2. Thus, the RXRXXIXXTPT motif is a proposed conserved sequence for M3 extension (Fig. 5B). Because this motif is only present at known core M3 sites and not at core M1 sites, we hypothesized that the RXR portion of the primary amino acid sequence motif of α-DG might confer extension by POMGNT2. To test this, we synthesized a modified version of the Man341 peptide, which already contains the TPT sequence and an Ile at −3, that we refer to as Man341-RPR. In this glycopeptide, we replaced the two divergent amino acids at −6 and −8 with arginines to introduce the conserved RXR motif. Glycosyltransferase reaction kinetics of POMGNT2 with this modified glycopeptide were investigated by UDP-Glo assay. In contrast to the undetectable reaction with Man341, POMGNT2 transferred GlcNAc to Man341-RPR (Fig. 6A and Table 1). Thus, we successfully converted a core M1 non-acceptor peptide into a core M3 acceptor for POMGNT2 in vitro by the addition of our identified motif.

TABLE 1. Comparison of POMGNT1 and POMGNT2 kinetics with various M1 and M3 synthetic glycopeptide acceptors. An asterisk in the acceptor sequence indicates the mannosylated threonine residue. Kinetic parameters of Man341 and Man414 with POMGNT2 were not measurable, as indicated by a dash. Columns: Acceptor; Acceptor sequence; Core glycan structure; kinetic parameters.
To further test that the RXR portion of the primary amino acid sequence motif of α-DG is important for glycan extension by POMGNT2, we synthesized a new glycopeptide based on the sequence at a known core M3 site (379) but with the two N-terminal arginine residues altered to the divergent residues of a core M1 acceptor (Man341). We have designated this modified core M3 glycopeptide as Man379-ETP. Glycosyltransferase reaction kinetics of POMGNT2 with this modified glycopeptide were investigated by UDP-Glo assays. In comparison with the kinetics of POMGNT2 with Man379, POMGNT2 has a lower affinity and a greater than 5-fold reduction in catalytic efficiency for Man379-ETP (Fig. 6B and Table 1). The replacement of our identified RXR motif in a core M3 acceptor with divergent residues reduced but did not eliminate POMGNT2 activity.

[Figure legend fragment: E and F, FTMS spectra verifying the POMGNT2 Man341 product (1.11-ppm mass accuracy) (E) and POMGNT2-extended Man379 (1.11-ppm mass accuracy) (F). The green circle represents a mannose, and the blue square represents an N-acetylglucosamine (38).]
Lastly, to test the necessity of the RXR portion of the primary amino acid sequence motif of α-DG for POMGNT2 activity, we synthesized a truncated version of the Man379 core M3 glycopeptide that we refer to as ShortMan379. The N terminus of this glycopeptide begins immediately following the RXR motif. Glycosyltransferase reaction kinetics of POMGNT2 with this truncated glycopeptide were investigated by the UDP-Glo assay. POMGNT2 utilized ShortMan379 as an acceptor substrate with an affinity similar to Man379 but with less than a 1-fold reduction in catalytic efficiency (Fig. 6C and Table 1). Thus, the RXR portion of the motif appears not to be essential for POMGNT2 activity in the context of a short synthetic peptide.
Discussion
Although POMGNT2 is poised to modify α-DG in the ER before it encounters POMGNT1 in the cis-Golgi, only two core M3 sites have been identified on α-DG. Thus, it seems likely that POMGNT2 demonstrates acceptor substrate preferences beyond simply an O-Man-modified residue. We tested this hypothesis regarding specificity by examining the impact of the local primary amino acid sequence around O-Man sites on synthetic peptides as acceptor substrates for POMGNT1 and POMGNT2.
Using a set of O-Man glycopeptide substrates, we have shown that POMGNT2 has a preference for acceptors with mannosylated residues at positions Thr-317 and Thr-379, whereas POMGNT1 has no significant acceptor substrate preferences among the various synthetic glycopeptides tested (Figs. 2-4 and Table 1). Analysis of the sites that are POMGNT2-dependent demonstrates an RXRXXIXXTPT motif that is conserved among vertebrate α-DGs (Fig. 5B). We also observed that this sequence is not found in any of the mapped sites from other O-mannosylated proteins (9), consistent with α-DG being the only demonstrated protein to contain core M3 glycans (Fig. 5A). We found that replacement of a divergent sequence on a POMGNT1 acceptor that was only missing the conserved RXR motif converted it to a POMGNT2 acceptor, demonstrating that replacing the two amino acids was sufficient to confer activity (Fig. 6A). Likewise, replacement of the arginines in the Man379 peptide with the amino acids found in the core M1 peptide Man341 reduced the efficiency with which POMGNT2 catalyzes the addition of GlcNAc to the O-Man peptide by more than 5-fold (Fig. 6B). However, although the addition of the RXR motif to a core M1 acceptor is sufficient to make it a substrate for POMGNT2, the complete removal of the RXR motif from a core M3 acceptor does not abolish POMGNT2 activity (Fig. 6C). Taken together, these results suggest that the RXR motif allows for extension by POMGNT2 at core M3 sites but is not essential in a short synthetic O-Man peptide. These results support a case of sufficiency in the absence of necessity, deviating from the usual necessary-and-sufficient or necessary-but-not-sufficient arguments. We would rationalize that when there is sequence upstream of the site of action, such as that actually found in the full-length α-dystroglycan protein, non-basic amino acids replacing the RXR portion of the motif generate steric or electrostatic clashes that prevent proper binding of the substrate protein.
Interestingly, for Man317, the identified RXR motif is upstream of the known furin cleavage site. However, as POMGNT2 is an ER-resident glycosyltransferase and furin is located in the Golgi, POMGNT2 acts first and thus has the capability to interact with residues upstream of the furin cleavage site, and this may at least partially explain the requirement for the N terminus for synthesis of functionally glycosylated mature α-DG (19).
Our current model, based on the data presented here, is that POMGNT2 selectivity determines which sites on α-DG become modified with the core M3 glycan structure. In turn, only the core M3 glycan structure can be extended by B3GALNT2, phosphorylated by POMK, and further elaborated to become the functional matriglycan of α-DG (2,13). Functional glycosylation of α-DG and, in particular, matriglycan synthesis stemming from the POMGNT2-dependent core M3 glycan structure are required for binding to ECM proteins with LG domains and maintaining overall ECM integrity (11). Thus, POMGNT2 acts as a gatekeeper enzyme for functional glycosylation of α-DG.
The strict RXRXXIXXTPT motif that we have presented here is not present in any other secreted or membrane-associated protein in humans except for α-DG at Thr-317/319 and Thr-379/381 (2,13). Relaxing the sequence constraints to allow for conservative replacements generates a motif of (R/K)X(R/K)XX(I/L/V)XX(T/S)P(T/S). This motif is found in a handful of membrane/secreted human proteins, including SRPX, CLEC18C, FREM2, MANBA, SEMA3E, SPACA7, and TMEM182. However, if we examine conservation of the motif in these proteins across 59 vertebrate species, as we did for α-DG, we see poor conservation (data not shown). This lends further support to the working model that only α-DG contains sequences that are substrates for POMGNT2 and go on to become functionally glycosylated with matriglycan (2).
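Scanning a proteome for the strict and relaxed motifs described above reduces to a regular-expression search. The sketch below is a generic illustration, not the authors' pipeline; both example sequences are hypothetical stand-ins for FASTA entries from a proteome download.

```python
import re

# Strict and relaxed forms of the candidate core M3 motif.
STRICT = re.compile(r"R.R..I..TPT")
RELAXED = re.compile(r"[RK].[RK]..[ILV]..[TS]P[TS]")

# Hypothetical stand-ins for proteome sequences keyed by accession.
proteome = {
    "DAG1_fragment": "AARPRAEIVPTPTGAA",     # illustrative only, not real DAG1
    "OTHER_protein": "MKTAYIAKQRQISFVKSHFS",
}

for name, seq in proteome.items():
    for pat, label in ((STRICT, "strict"), (RELAXED, "relaxed")):
        for m in pat.finditer(seq):
            print(f"{name}: {label} motif {m.group()} at position {m.start() + 1}")
```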
We have identified and partially characterized a primary amino acid sequence motif governing acceptor specificity for POMGNT2 toward O-mannosylated substrates. Additional studies are required to fully characterize the functional roles of individual amino acids in this motif. Structural analyses of POMGNT2 in complex with various acceptor substrates would greatly assist in defining the molecular details of the POMGNT2 gatekeeping mechanism that we have established here. Furthermore, future in vivo studies testing the role of the RXRXXIXXTPT motif in POMGNT2 acceptor selectivity will be invaluable to complement our in vitro findings presented here.
Experimental Procedures
Cell Culture and Protein Purification-The catalytic domains of human POMGNT1 (amino acid residues 60-660; UniProt Q8WZA1) and POMGNT2 (amino acid residues 25-580; UniProt Q8NAT1) were expressed as soluble, secreted fusion proteins by transient transfection of HEK293 suspension cultures (28). The coding regions were amplified from Mammalian Gene Collection (29) clones using primers that appended a tobacco etch virus protease cleavage site (30) to the N-terminal end of the coding region and attL1 and attL2 Gateway adaptor sites to the 5′- and 3′-terminal ends of the amplimer products. The amplimers were recombined via BP clonase reaction into the pDONR221 vector, and the DNA sequences were confirmed. The pDONR221 clones were then recombined via LR clonase reaction into a custom Gateway-adapted version of the pGEn2 mammalian expression vector (28,31) to assemble a recombinant coding region comprising a 25-amino acid N-terminal signal sequence from the Trypanosoma cruzi lysosomal α-mannosidase (32) followed by a His8 tag, a 17-amino acid AviTag (33), "superfolder" GFP (34), and the nine-amino acid sequence encoded by the attB1 recombination site, followed by the tobacco etch virus protease cleavage site and the respective glycosyltransferase catalytic domain coding region. Suspension-culture HEK293f cells (Life Technologies) were transfected as described previously (28), and the culture supernatant was subjected to nickel-nitrilotriacetic acid Superflow chromatography (Qiagen). Enzyme preparations eluted with 300 mM imidazole were concentrated to ~1 mg/ml using an ultrafiltration pressure cell membrane (Millipore) with a 10-kDa molecular mass cutoff.
Glycopeptide Synthesis-The glycopeptide synthesis here extends earlier work describing the synthesis of O-Man-Ser and -Thr peptide synthesis building blocks as well as O-Man glycopeptides (35). The glycopeptides were prepared as C-terminal carboxamides and acetylated at the N terminus to emulate the situation in the native protein. For this work, all couplings except those for glycosylated residues were carried out on an automated microwave-assisted solid-phase peptide synthesizer (CEM Corp. Liberty microwave synthesizer) using standard protocols in the instrument software on Rink amide resin (~0.5 meq/g; Novabiochem) via an N-α-(9-fluorenyl)methoxycarbonyl (Fmoc)-based approach with N,N-dimethylformamide (DMF) as the primary solvent. 20% 4-methylpiperidine in DMF was used for Fmoc removal. 2-(1H-Benzotriazole-1-yl)-oxy-1,1,3,3-tetramethyluronium hexafluorophosphate/1-hydroxybenzotriazole in the presence of N,N-diisopropylethylamine (DIPEA) were used as the coupling reagents for standard amino acids. For the coupling of the glycosylated amino acid Fmoc-Thr(α-D-Man(Ac)4)-OH (Sussex Research), the peptide resin was removed from the synthesizer, and coupling was performed manually using a CEM Corp. Discover microwave apparatus. 2-(7-Aza-1H-benzotriazole-1-yl)-1,1,3,3-tetramethyluronium hexafluorophosphate and 1-hydroxy-7-azabenzotriazole in the presence of DIPEA were the activating reagents. Typically, two couplings at an ~1.5-fold excess of glycosylated amino acid over the resin loading were performed for this amino acid to conserve reagent. Upon completion of the manual coupling reaction, as determined by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), glycopeptide resins were returned to the automatic synthesizer to complete assembly. After final N-deprotection, the glycopeptides were manually N-acetylated by treatment with DMF/acetic anhydride/DIPEA (85:10:5, v/v) for ~30 min, and O-acetyl protection on the mannosyl residues was subsequently removed by two successive treatments with hydrazine/MeOH (70:20, v/v) for an hour each. Glycopeptides were then cleaved from the resin as C-terminal carboxamides, with simultaneous removal of remaining amino acid side chain protection, through treatment with TFA/triisopropylsilane/H2O (95:2.5:2.5) for ~4 h. The resin was filtered off, and the TFA solution was concentrated on a rotary evaporator to a few milliliters. The remaining concentrate was added dropwise to cold ether, from which the crude glycopeptides precipitated. After centrifugation and removal of the ether supernatant, the glycopeptides were redissolved and purified via HPLC over an Ultra II 250 × 10.0-mm, 5-μm C18 column (Restek Corp.) with a 0.1% TFA in water/0.1% TFA in acetonitrile solvent gradient. Purity was verified by analytical HPLC and MALDI-TOF MS (see supplemental Fig. 1). Yields were in the range of 30-50%.
Radiolabel Transfer Assays-The radiometric assays were carried out in reactions containing 100 mM MES (pH 6.5), 10 mM MnCl2, and 2 mM UDP-GlcNAc mixed with 10 nCi of [3H]UDP-GlcNAc and 1 mM glycopeptide acceptor. Reactions were incubated for 21 h at 37 °C, then quenched by addition of 5 μl of 1% TFA and boiled at 100 °C for 5 min. Reaction products were purified by reverse-phase separation using C18 Sep-Pak microspin columns (The Nest Group) by loading and washing with 0.1% formic acid and eluting with 80% acetonitrile with 0.1% formic acid. Disintegrations per minute (dpm) were counted using a liquid scintillation counter (Beckman) to determine the amount of [3H]GlcNAc incorporated into the glycopeptides. The data presented represent the average of at least three independent experiments.
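Converting scintillation counts into the amount of GlcNAc transferred follows from the specific activity of the donor pool. The sketch below shows that arithmetic; the reaction volume and dpm reading are hypothetical assumptions, while the 2 mM UDP-GlcNAc and 10 nCi spike come from the text above.

```python
# Minimal sketch: converting scintillation counts to product formed in a
# radiolabel transfer assay.
DPM_PER_CI = 2.22e12          # disintegrations/min per curie (definition)

reaction_volume_ul = 50.0     # assumed reaction volume (not stated above)
udp_glcnac_mM = 2.0           # cold UDP-GlcNAc concentration
spike_nCi = 10.0              # 3H-labeled UDP-GlcNAc added

total_dpm = spike_nCi * 1e-9 * DPM_PER_CI
total_nmol = udp_glcnac_mM * reaction_volume_ul      # mM * uL = nmol
specific_activity = total_dpm / total_nmol           # dpm per nmol of donor

measured_dpm = 1.5e3          # hypothetical counts in the eluted glycopeptide
product_nmol = measured_dpm / specific_activity
print(f"specific activity = {specific_activity:.0f} dpm/nmol")
print(f"GlcNAc transferred = {product_nmol:.3f} nmol")
```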
Mass Spectrometry-Cold glycosyltransferase reactions used for analysis by mass spectrometry were carried out identically to the radioactive transfer assays but without radioactive UDP-GlcNAc. After reverse-phase separation, the product was vacuumed to dryness and resuspended in 100 μl of 0.1% formic acid. Samples were filtered using a 0.2-μm Nanosep microcentrifuge filter (Pall Life Sciences) and transferred to an autosampler vial with a glass insert (Thermo Scientific). The samples were run on a Thermo Scientific Orbitrap Fusion Lumos mass spectrometer. Full Fourier transform MS spectra were analyzed using Xcalibur Qual Browser software, and MS/MS scans were analyzed using Byonic version 2.6.46 (Protein Metrics Inc.) with a precursor mass tolerance of 10 ppm and a fragmentation mass tolerance of 0.3 daltons, followed by manual interpretation.
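The ppm mass accuracies quoted for product verification (e.g., the 1.11-ppm values) follow the standard mass-error formula. The sketch below is a generic illustration with made-up m/z values, not data from this study.

```python
def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Parts-per-million mass error: (obs - theo) / theo * 1e6."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical masses for a glycopeptide ion (illustrative only).
theoretical = 1203.5641
observed = 1203.5654
print(f"{ppm_error(observed, theoretical):+.2f} ppm")   # +1.08 ppm
```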
UDP-Glo Glycosyltransferase Assays-UDP-Glo glycosyltransferase assays (Promega) were performed using 50 mM Tris-HCl (pH 7.5), 5 mM MnCl2, 100 μM UDP-GlcNAc, 40 ng of enzyme, and varying amounts of glycopeptide acceptor substrates at 37 °C for 2 h in a white, flat-bottom, 384-well plate. After the glycosyltransferase reaction, an equal volume of UDP detection reagent was added to simultaneously convert the UDP product to ATP and generate light in a luciferase reaction. The light generated was detected using a GloMax-Multi+ luminometer (Promega). Luminescence was correlated to UDP concentration by using a UDP standard curve. Kinetic parameters were extracted from the data after fitting to the Michaelis-Menten equation using the non-linear regression fit in GraphPad Prism version 7.1. The data presented represent the average of at least three independent experiments. Sequences were aligned (37), and the 10 amino acids upstream and downstream of the threonine at positions 317 and 379 in human DAG1 for all species were extracted from the alignment and used for analysis with Berkeley's WebLogo program (version 3) (27). | 2018-04-03T06:22:31.281Z | 2016-12-08T00:00:00.000 | {
"year": 2016,
"sha1": "372ff83e0f8523c8a0c410c9f50603914a477484",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/292/6/2101.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "3e42d1657fdad29e8dedf8dbfa8e817563a07197",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
246309287 | pes2o/s2orc | v3-fos-license | Blood speckle imaging compared with conventional Doppler ultrasound for transvalvular pressure drop estimation in an aortic flow phantom
Background Transvalvular pressure drops are assessed using Doppler echocardiography for the diagnosis of heart valve disease. However, this method is highly user-dependent and may overestimate transvalvular pressure drops by up to 54%. This work aimed to assess transvalvular pressure drops using velocity fields derived from blood speckle imaging (BSI), as a potential alternative to Doppler. Methods A silicone 3D-printed aortic valve model, segmented from a healthy CT scan, was placed within a silicone tube. A CardioFlow 5000MR flow pump was used to circulate blood mimicking fluid to create eight different stenotic conditions. Eight PendoTech pressure sensors were embedded along the tube wall to record ground-truth pressures (10 kHz). The simplified Bernoulli equation with measured probe angle correction was used to estimate pressure drop from maximum velocity values acquired across the valve using Doppler and BSI with a GE Vivid E95 ultrasound machine and 6S-D cardiac phased array transducer. Results There were no significant differences between pressure drops estimated by Doppler, BSI and ground-truth at the lowest stenotic condition (10.4 ± 1.76, 10.3 ± 1.63 vs. 10.5 ± 1.00 mmHg, respectively; p > 0.05). Significant differences were observed between the pressure drops estimated by the three methods at the greatest stenotic condition (26.4 ± 1.52, 14.5 ± 2.14 vs. 20.9 ± 1.92 mmHg for Doppler, BSI and ground-truth, respectively; p < 0.05). Across all conditions, Doppler overestimated pressure drop (Bias = 3.92 mmHg), while BSI underestimated pressure drop (Bias = -3.31 mmHg). Conclusions BSI accurately estimated pressure drops only up to 10.5 mmHg in controlled phantom conditions of low stenotic burden. Doppler overestimated pressure drops of 20.9 mmHg. Although BSI offers a number of theoretical advantages to conventional Doppler echocardiography, further refinements and clinical studies are required with BSI before it can be used to improve transvalvular pressure drop estimation in the clinical evaluation of aortic stenosis. Supplementary Information The online version contains supplementary material available at 10.1186/s12947-022-00286-1.
Background
Doppler echocardiography is routinely used in clinical practice to assess the severity of aortic stenosis. The maximum velocity of blood flow through the aortic valve during systole is recorded, and the simplified Bernoulli equation is used to estimate the transvalvular pressure drop (a more accurate term than the widely used "gradient") across the valve [1]. This technique is preferred to cardiac catheterisation as it is non-invasive, widely accessible and inexpensive [2].
Despite this, applying the simplified Bernoulli equation, as is the case in Doppler echocardiography, has been shown to overestimate transvalvular pressure drops by up to 54% when compared with an equation accounting for the complete haemodynamic profile at the point of maximum constriction [3]: taking only the peak velocity event in the Bernoulli formulation ignores the momentum of blood flow across the entire vascular cross-section, which is key to estimating the actual pressure drop. In addition, pressure drop estimation using Doppler echocardiography is highly user-dependent. If the angle of insonation is not fully aligned with the direction of blood flow, the maximum velocity will be missed [4]. Several non-invasive alternatives have been studied but are not yet applied clinically [5].
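For reference, the factor of 4 in the simplified Bernoulli relation used throughout this work follows from the energy form of Bernoulli's equation once the proximal velocity is neglected; the standard derivation below assumes a blood density of roughly 1060 kg/m³ and the usual unit conversions.

```latex
% Energy form of Bernoulli's equation along a streamline (steady, inviscid):
%   p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2
\Delta p = \tfrac{1}{2}\rho\left(v_2^2 - v_1^2\right) \approx \tfrac{1}{2}\rho v_2^2
  \qquad (v_1 \ll v_2)
% With \rho \approx 1060~\mathrm{kg\,m^{-3}}, v_2 in m/s, and 1~\mathrm{mmHg} = 133.3~\mathrm{Pa}:
\Delta p~[\mathrm{mmHg}] \approx \frac{0.5 \times 1060}{133.3}\, v_2^2 \approx 4\, v_2^2
```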
Blood speckle imaging (BSI) has recently emerged as an alternative methodology for the assessment of aortic stenosis severity [6,7]. By the direct measurement and visualisation of blood vector velocity fields, captured at ultra-high frame rates in the kilohertz range [6,8,9], it has the potential to overcome the angle-dependence and acquisition of single peak velocities, which currently limit conventional Doppler echocardiography [3]. BSI utilises existing technology from tissue speckle-tracking that is commonly used to evaluate myocardial deformation [10]. A small image kernel is defined in the first image of the vessel and the same speckle signature is tracked in the following frame using a "best match" search algorithm. This is then repeated for a grid of measurements to quantify the velocity and direction of the blood flow [6,11]. Acquiring blood flow velocity data in this way is advantageous as it potentially allows pressure drop to be calculated from velocity data across a cross-sectional profile [3], rather than from a single streamline as in conventional Doppler echocardiography, although this is not investigated in this report.
The principal aim of the current manuscript was to evaluate blood speckle imaging (BSI) and Doppler echocardiography for pressure drop estimations derived from maximum velocity values against ground-truth pressure sensors in a bespoke aortic phantom with a 3D-printed aortic valve at various flow rates.
Pressure drop phantom
In order to investigate the accuracy and utility of novel techniques for pressure drop estimation, a bespoke aortic phantom was developed [12,13]. The phantom was designed to simulate the human aorta and to deform under pulsatile and constant flow conditions, allowing comparison of novel pressure drop estimation techniques with ground-truth pressure drop data from pressure sensors across a 3D-printed aortic valve. Pressure drop values measured across valves manufactured using these methods and placed into the same phantom system were representative of those reported in vivo [13].
A silicone 3D-printed aortic valve model was placed within a semi-compliant silicone tube of 32 mm internal diameter, suspended in an acrylic box (Fig. 1). The valve was designed to resemble a healthy aortic valve, segmented from patient CT scans. The valve design was converted to a mould, before degassed Ecoflex 0030 silicone (Smooth-On Inc., Macungie, PA, USA) was poured in and left to cure. Valve mounts were also 3D printed using poly-lactic acid.

[Fig. 1. The Ecoflex 0030 silicone aortic valve used for the experiments, pictured from the front, outside of the tube (left), and from the back, in situ (right).]
Eight PendoTech pressure sensors (PRESS-S-000 sensor, PendoTech, Princeton, NJ, USA) were embedded along the tube wall, 1 situated before the valve and 7 downstream. The position of each pressure sensor was chosen to capture both the event of maximum constriction (i.e. the location of the vena contracta) and the distal net pressure drop (i.e. characterisation of the pressure recovery). The positions of the sensors, relative to the valve, were -3.0 cm, 1.5 cm, 3.0 cm, 5.0 cm, 7.5 cm, 10.0 cm, 20.0 cm and 50.0 cm. The sensors were calibrated and validated using a pressure catheter (Mikro-Cath, Millar Inc, Houston, TX, USA). The pressure sensors were wired to input modules of a data acquisition USB chassis (National Instruments, Austin, TX, USA) to record ground-truth pressures at the 8 locations along the silicone tube at a sampling frequency of 10 kHz (Fig. 2).
A 20 L external reservoir containing blood mimicking fluid [14] was connected to a CardioFlow 5000MR flow pump (Shelley Medical Imaging Technologies, Ontario, Canada). The pump was programmed, via a control unit, to circulate approximately 15 L of blood mimicking fluid under eight different pump conditions (100, 150, 200 and 250 mL/s, each with constant and pulsatile flow). Flow was maintained at a constant rate throughout each acquisition in the constant flow conditions. The pulsatile flow conditions were programmed to closely resemble the flow waveforms produced by the human heart, with fluctuations in pressure corresponding to systole and diastole.
Velocity and ground truth pressure data acquisition and analysis
Continuous wave Doppler and BSI data were acquired using a Vivid E95 ultrasound machine and 6S-D cardiac phased array transducer (GE Healthcare, Oslo, Norway) across the valve for each flow condition. Ground-truth pressure drop data were calculated using the methods outlined below. A metal clamp held the probe in a fixed position during all acquisitions. The tilt angle between the tube and probe was recorded and used for angle correction calculations (Fig. 3). Each experimental condition was repeated on a second day of experiments.
Pressure sensor data were extracted and analysed offline using MATLAB (Mathworks, Natick, MA, USA). To calibrate the pressure sensors for random error, the mean pressure across the 8 pressure sensors was calculated with static fluid in the phantom, before and after each experimental condition. For each condition, a correction was applied to each pressure sensor based on its deviation from this set mean. Pressure data in pulsatile conditions were smoothed with a Butterworth filter to reduce the random peaks and noise in the raw temporal pressure transients from each sensor. For each experimental condition and flow rate, pressure data were recorded over 8 s. The maximum number of available cycles was then segmented before the mean and standard deviation of the pressure transient were calculated; in most cases, 6 full cycles were used for these analyses. The instant of peak pressure difference between the valve and channels 2 or 3 was selected for further analysis. It was assumed that the pressure at valve level (0 cm) was the same as in Channel 1 (-3 cm) after correcting for a time shift to account for the time taken for the pulse wave to travel between Channel 1 and the valve. The mean and standard deviation values from each pressure sensor were then plotted against their physical position in the phantom, relative to the valve at point 0 cm. Modified Akima piecewise cubic Hermite interpolation was performed between the valve and channel 4, the region of the vena contracta, to estimate the maximum pressure drop and its location relative to the valve (Fig. 4A). In constant flow conditions, a Butterworth filter was applied to the pressure signals before the mean and standard deviation values from each pressure sensor were plotted against their physical position in the phantom (Fig. 4B).
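The filtering-and-interpolation pipeline above was implemented in MATLAB; the sketch below is a rough Python analogue, with an assumed filter order and cutoff (the paper does not state them) and random stand-in data in place of the recorded sensor signals.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import Akima1DInterpolator

FS = 10_000            # sensor sampling rate, Hz (10 kHz per the methods)
CUTOFF_HZ = 50         # assumed low-pass cutoff; the paper does not state one
b, a = butter(4, CUTOFF_HZ / (FS / 2), btype="low")

# raw: (n_samples x 8) array of sensor pressures; here random stand-in data.
raw = np.random.default_rng(0).normal(15.0, 0.5, size=(8 * FS, 8))
smoothed = filtfilt(b, a, raw, axis=0)   # zero-phase Butterworth filtering

# Mean pressure per sensor vs. its position relative to the valve (cm).
positions = np.array([-3.0, 1.5, 3.0, 5.0, 7.5, 10.0, 20.0, 50.0])
mean_p = smoothed.mean(axis=0)

# Interpolate between the valve and channel 4 to locate the vena contracta.
interp = Akima1DInterpolator(positions, mean_p)   # modified-Akima analogue
x = np.linspace(-3.0, 5.0, 500)
i_min = np.argmin(interp(x))
print(f"max pressure drop ~{mean_p[0] - interp(x)[i_min]:.2f} mmHg "
      f"at {x[i_min]:.2f} cm from the valve")
```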
To estimate maximum velocity using Doppler, a continuous wave acquisition was made with the cursor placed at the valve opening. Once acquired, the E95 machine was used to manually select the maximum velocity observed in the acquisition (Fig. 5A). To estimate maximum velocity using BSI, the blood speckle imaging setting on the Vivid E95 machine was used to acquire BSI velocity data at frame rates in the kilohertz range [6]. A movie containing examples of BSI acquisitions at the different flow rates can be viewed in the additional files (Additional File 1). The computation of the velocities acquired using BSI was performed with the manufacturer code for BSI velocity quantification (GE Healthcare, Oslo, Norway). The detection of peak velocities in the region of interest, positioned at the vena contracta where velocity was maximal, was programmed in-house. Secondary validation was performed by an observer, using the interface shown in Fig. 5B, to confirm that the maximum velocity vector was indeed observed within the expected region of interest where the jet after the valve is observed.
Pressure drop values were estimated from Doppler and BSI acquisitions by applying an angle correction, using the measured probe angle (Fig. 3), to the maximum velocity (m/s) value measured by each technique. The simplified Bernoulli formulation was then applied to the angle-corrected velocity value to convert it to a transvalvular pressure drop (mmHg).
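As a concrete illustration of this conversion step, the sketch below applies the cosine angle correction and the simplified Bernoulli relation; the input velocity is hypothetical, while the 24° angle matches one of the reported probe angles.

```python
import math

def transvalvular_drop_mmHg(v_measured: float, probe_angle_deg: float) -> float:
    """Angle-correct a Doppler/BSI velocity and apply simplified Bernoulli.

    v_measured: peak velocity along the beam (m/s).
    probe_angle_deg: angle between the beam and the flow direction.
    """
    v_true = v_measured / math.cos(math.radians(probe_angle_deg))
    return 4.0 * v_true ** 2          # dP [mmHg] = 4 * v^2 for v in m/s

# Hypothetical peak velocity with the 24-degree probe angle.
print(f"{transvalvular_drop_mmHg(1.5, 24.0):.1f} mmHg")   # ~10.8 mmHg
```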
Data and statistical analysis
Pressure drop and flow velocity data are presented as the mean ± standard deviation. To calculate the significance level of the values estimated by each technique, a paired two-tailed t-test was used with a significance level of p < 0.05. To calculate the significance level of the bias in the Bland-Altman analyses, a one-sample two-tailed t-test was used with a significance level of p < 0.05.
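The Bland-Altman quantities reported in the Results (bias, limits of agreement, and the one-sample t-test on the bias) can be computed as in the sketch below; the paired values shown are illustrative, not the study's full dataset.

```python
import numpy as np
from scipy import stats

def bland_altman(estimates, ground_truth):
    """Bias, 95% limits of agreement, and one-sample t-test on the bias."""
    diffs = np.asarray(estimates) - np.asarray(ground_truth)
    bias = diffs.mean()
    half_width = 1.96 * diffs.std(ddof=1)
    t, p = stats.ttest_1samp(diffs, popmean=0.0)
    return bias, bias + half_width, bias - half_width, p

# Illustrative paired pressure drops (mmHg): technique vs. sensors.
doppler = [10.4, 14.1, 18.9, 26.4]
truth = [10.5, 13.3, 17.0, 20.9]
bias, upper, lower, p = bland_altman(doppler, truth)
print(f"bias={bias:+.2f} mmHg, LoA=[{lower:.2f}, {upper:.2f}], p={p:.3f}")
```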
Results
The probe angles were 24° and 40° on two days of experiments, respectively. The mean pressure drops across the valve, recorded by the ground-truth pressure sensors, were 9.65 ± 0.07, 12.3 ± 0.00, 16. The pressure drop values estimated across the 4 flow rates by the ground-truth pressure sensors, BSI and Doppler methods are presented in Fig. 6. No significant differences were observed between pressure drops estimated by Doppler, BSI and ground-truth sensors at the 100 mL/s pump flow rate (10.4 ± 1.76, 10.3 ± 1.63 vs. 10.5 ± 1.00 mmHg, respectively; p > 0.05). Under the 150, 200 and 250 mL/s pump flow rates, pressure drops estimated by BSI (10.5 ± 1.47, 13.7 ± 2.27 and 14.5 ± 2.14 mmHg, respectively) were significantly lower than ground-truth pressure drops (13.3 ± 1.20, 17.0 ± 0.99 and 20.9 ± 1.92 mmHg, respectively; p < 0.05). On the other hand, pressure drops estimated from Doppler velocity data were significantly higher than ground-truth pressure drop under 250 mL/s pump flow conditions (26.4 ± 1.52 vs. 20.1 ± 1.78 mmHg, respectively; p < 0.05).
Bland-Altman analyses of the Doppler and BSI techniques, compared with ground-truth pressure drop, are presented in Fig. 8 [15]. Figure 8A shows a statistically significant bias of pressure drop estimations made using Doppler, when compared with ground-truth pressure drop, of 3.91 mmHg (p < 0.05). The upper limit of agreement was 14.6 mmHg and the lower limit was -6.74 mmHg (Fig. 8A). Figure 8B shows a statistically significant bias of pressure drop estimations made using BSI, when compared with ground-truth pressure drop, of -3.31 mmHg (p < 0.05). The upper limit of agreement was 2.82 mmHg and the lower limit was -9.43 mmHg (Fig. 8B).
Intra-technique reproducibility of the pressure drop estimations made across the two days of experiments by the three methods is presented in Fig. 9. Bland-Altman analysis produced statistically significant intra-technique bias values of 0.46 mmHg and 5.19 mmHg for ground-truth and Doppler, respectively (p < 0.05). The 1.61 mmHg bias for BSI was not statistically significant (p > 0.05). The upper and lower limits of agreement were 1.23 mmHg and -0.31 mmHg, 16.3 mmHg and -5.89 mmHg, and 6.73 mmHg and -3.51 mmHg for ground-truth, Doppler and BSI, respectively (Fig. 9).
Discussion
This study provides evidence to show that both BSI and Doppler techniques can make accurate estimations of low pressure drops in a controlled and reproducible aortic phantom. However, for stenotic conditions of clinical relevance in the setting of aortic stenosis, BSI underestimates while Doppler overestimates the pressure drop.
The assessment of the pressure drop by echocardiography in conventional clinical practice is subject to important methodological limitations that cannot be solved by current BSI technology. Significantly different pressure drop estimations, large percentage errors, bias values and wide limits of agreement exist for Doppler and BSI when compared with ground-truth pressure drop estimations. This is coupled with poor intra-technique reproducibility across two days of experiments. These findings illustrate that despite its theoretical advantages, further development of BSI or alternative novel and more comprehensive methods for pressure drop estimation are required to improve clinical practice.
Pressure drop estimation
A good agreement between pressure sensors, Doppler and BSI was found at low stenosis levels (10.5 ± 1.00 mmHg), but BSI significantly underestimated pressure drop at the next stenotic condition (Fig. 6). With the onset of pressure drop underestimation using BSI occurring at these low stenotic conditions, BSI in its current form is inappropriate for the classification of aortic stenosis severity, which begins at 20 mmHg [16]. BSI would likely be accurate in the estimation of trans-mitral valve pressure drops, where the upper classification limit is 10 mmHg [17]. That said, when comparing Doppler and BSI to ground-truth pressure drop estimations, the limits of agreement are wide for both methods (upper limit 14.6 mmHg, lower limit -6.74 mmHg for Doppler vs. upper limit 2.82 mmHg, lower limit -9.43 mmHg for BSI; Fig. 8). Over/underestimations of transvalvular pressure drop by these margins are clinically significant, as they could lead to incorrect diagnoses and the misclassification of disease severity. The differences between the experimental conditions should only be attributed to the ability to capture the peak velocity events, since both methods used the simplified Bernoulli formulation to estimate pressure drop. Comparing ground-truth pressure values acquired using the pressure sensors with velocity-based estimations of pressure drop demonstrates these discrepancies.
At the higher pressure drops, with higher flow velocities, BSI significantly underestimates pressure drop (Fig. 6). A negative linear relationship is observed for absolute error (Fig. 7) and agreement (Fig. 8B). Absolute error at the highest flow rate was significantly different to that at the lowest flow rate (Fig. 7). Underestimation of pressure drop with BSI is therefore more pronounced at higher flow rates. These results are consistent with previous findings conducted in vitro/in silico, whereby BSI was shown to underestimate flow velocity [18][19][20][21]. The largest in vivo study to date was performed in 51 healthy paediatric controls, where underestimations of velocity values acquired using BSI were also observed. The same study also revealed that the difference tended to increase at higher velocities [8]. High velocity gradients and considerable out-of-plane flow generated across flow obstructions lead to speckle decorrelation [18,22]. This explains the underestimation of pressure drop at the higher flow rates using BSI.
On the other hand, pressure drops estimated using Doppler were significantly higher than ground-truth pressure drop at the greatest level of stenosis tested (20.9 ± 1.92 mmHg). This is likely due to the error in estimation of momentum from a single velocity value: the characterisation of the pressure drop requires the full velocity profile [3,5]. This finding has clinical significance as a pressure drop of 20 mmHg is the lower limit for the classification of moderate aortic stenosis [16]. Significant overestimation of pressure drops in this range could result in misclassification of aortic stenosis severity and the inappropriate treatment of patients. The bias of Doppler measurements across the experimental conditions was 3.92 mmHg (p < 0.05); with peak overestimations of up to 20 mmHg (Fig. 8A). These findings are consistent with those reported by Donati et al. (2017), where pressure drop values obtained using the Simplified Bernoulli formulation were shown to overestimate the true pressure drop by 54% [3].
The peak pressure drop values measured by the pressure sensors in the phantom are different to the net pressure drops measured in clinical practice during cardiac catheterisation, which capture the pressure difference between the left ventricular outflow tract and the ascending aorta, downstream of the vena contracta [23]. The peak pressure drop measured by the pressure sensors in the phantom is at the location of the vena contracta, as is the case for the Doppler data. The effect of pressure recovery further downstream, which underlies the traditional catheter-based net measurement, is therefore not captured by the peak values compared here.

The absolute error of measurements made using Doppler increased with flow rate, and a significant difference in absolute error was observed between the highest and lowest flow rates (Fig. 7). A linear increase in the difference between the estimated pressure drop of Doppler vs. ground-truth with increasing level of stenosis can also be observed in the respective Bland-Altman agreement plot in Fig. 8A. These findings support the observation that the overestimation of pressure drop by Doppler echocardiography is more pronounced at higher flow rates. The simplified Bernoulli formulation is accurate for uniform spatial velocity profiles (i.e. at the cross-section), observed at low flow velocities [3]. As flow rate increases, the spatial flow profiles, driven by viscous effects, become sharper and more paraboloidal in shape (Fig. 10A), deviating from the flatter spatial flow profiles, driven by inertial effects, observed at low flow rates (Fig. 10B). Donati et al. (2017) report that the variable deviations from the flat velocity profile cause an uncontrolled source of overestimation of pressure drop when applying the simplified Bernoulli equation (Supplemental material C-D of Donati et al. 2017) [3]. Our results indicate that the high velocity regimes, driven by viscous effects, gradually introduce a larger overestimation. This explains why Doppler is less accurate under these higher flow regimes (Fig. 7). Further variability would be expected if different valve models and geometries were studied, or indeed if using in vivo data. These conditions would likely increase the degree of mismatch between true and estimated pressure drops further.
As a final minor remark, no significant difference was observed between the mean pressure drops achieved under constant and pulsatile conditions, allowing them to be grouped within each pump flow rate ( Fig. 6; n = 4).
Intra-technique reproducibility
The intra-technique reproducibility analysis demonstrates a small, but statistically significant, bias of the ground-truth pressure readings (Bias = 0.46 mmHg (p < 0.05), upper limit 1.23 mmHg, lower limit -0.31 mmHg; Fig. 9A). Albeit small, this significant bias should be considered when interpreting the agreement between Doppler and BSI vs. the pressure sensors. Variability is greater in the pulsatile estimations despite the fact that each pressure drop is calculated as the mean over 6 cycles. BSI is less variable than Doppler (Bias = 1.61 mmHg (p > 0.05), upper limit 6.73 mmHg, lower limit -3.51 mmHg vs. Bias = 5.19 mmHg (p < 0.05), upper limit 16.3 mmHg, lower limit -5.89 mmHg, respectively; Fig. 9B-C). However, the reproducibility of BSI may be positively influenced by its inability to track high pressures, resulting in false clustering of measurements (Fig. 9C). The reproducibility of Doppler measurements is influenced by large disagreement at the greatest pulsatile flow rate (Fig. 9B). Disagreement also exists in the ground-truth measurements under this condition (Fig. 9A), which may reflect intrinsic variability of the experimental conditions rather than the measurement devices, and should be considered when interpreting the reproducibility of Doppler pressure drop measurements at higher flow rates.
Limitations
These experiments were performed using a single model of a healthy aortic valve. The ultrasound probe was placed directly against the phantom to make the BSI and Doppler acquisitions with low penetration depth. These two factors represent a best-case scenario for the acquisition data for pressure drop estimation. In human subjects, acquisitions would be at an increased penetration depth, thus reducing the imaging frame rate, with an increased level of attenuation. In addition, experiments in human valves would exhibit more physiological and/or pathological variation. The results, therefore, would likely be different if data were obtained in vivo.
Although qualitative evidence ( Fig. 5 and Additional File 1) and quantitative evidence (Fig. 6) demonstrate plausible haemodynamic behaviour, it is important to consider that similarity between the mechanical properties of the silicone valve and those of a human valve is not demonstrated beyond the reasonable confidence reported previously [12,13].
In these experiments, the velocity profile and resultant pressure drop were controlled by modifying the pump flow rate. Changing the valve type or orifice area would be the ideal workbench for this experiment as it changes the velocity profile under the same flow conditions. However, this was not performed in order to avoid damaging the valves during the changeover procedure, and to maximise the reproducibility of the pressure drops created by a fixed valve. Additional experiments with different pulsatile conditions, using different frequencies and duty cycles, would allow for a better understanding of transient effects during the upstroke/downstroke of the acceleration of the jet, and thus secondary effects of the impact of the temporal resolution of data. The lack of these results is a limitation of the study.
In this work, pressure drop estimations were made using a single peak velocity value acquired by BSI. The reported limitations of current BSI technology preclude the use of pressure estimations made using the full velocity profile at this stage, as underestimation will exist across the region of interest where higher flow velocities are found. Future technological advances and resultant improvements in temporal resolution may improve the ability of BSI to track higher flow velocities and therefore estimate greater pressure drops more accurately, allowing for two advantages: correctly accounting for the physics of advection by capturing the full velocity profile [3], and overcoming the angle-dependence and aliasing limitations of Doppler echocardiography.
Given the small sample size, the results of this pilot study should be considered as preliminary. Future in vivo studies are required before BSI can be used in the clinical setting.
Conclusions
BSI accurately estimated pressure drops up to 10.5 mmHg in controlled and reproducible phantom conditions of low stenotic burden, which may be useful for estimations of trans-mitral valve and intra-cardiac pressure drops. BSI underestimated all of the greater pressure drops tested, likely due to an inability of the algorithm to track higher flow velocities and speckle decorrelation. Doppler overestimated pressure drop values of clinical significance, in line with the published literature. Although BSI offers a number of theoretical advantages to conventional Doppler echocardiography, further refinements and clinical studies are required with BSI before it can be used to improve transvalvular pressure drop estimation in the clinical evaluation of aortic stenosis. | 2022-01-28T16:45:20.637Z | 2022-01-25T00:00:00.000 | {
"year": 2022,
"sha1": "8dd9931dc1e2ff52b91f55d969226c623e0f58a4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8660d1f55fcf9c7399dd8d8638a8923765847780",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199469205 | pes2o/s2orc | v3-fos-license | Low-temperature catalyst based Hydrothermal liquefaction of harmful Macroalgal blooms, and aqueous phase nutrient recycling by microalgae
The present study investigates the hydrothermal liquefaction (HTL) of harmful green macroalgal blooms at a temperature of 270 °C, with and without a catalyst, with a holding time of 45 min. The effect of different catalysts on the HTL product yield was also studied. Two separation methods were used for recovering the biocrude oil yield from the solid phase. In comparison with the other catalysts, Na2CO3 was found to produce the highest yield of bio-oil. The total bio-oil yield was 20.10% with Na2CO3, 18.74% with TiO2, 17.37% with CaO, and 14.6% without a catalyst. The aqueous phase was analyzed for TOC, COD, TN, and TP to determine the nutrient enrichment of the water phase for microalgae cultivation. Growth of four microalgae strains, viz. Chlorella minutissima, Chlorella sorokiniana UUIND6, Chlorella singularis UUIND5 and Scenedesmus abundans, in the aqueous phase was studied and compared with a standard growth medium. The results indicate that harmful macroalgal blooms are a suitable feedstock for HTL, and its aqueous phase offers a promising nutrient source for microalgae.
The appearance of a dense mat of macroalgae on water bodies is a widespread phenomenon. An algal bloom is the result of accumulating algal biomass in a slow-moving lake, pond or river. Macroalgal blooms are largely filamentous, unattached forms, and are mainly green algal species found in nutrient-rich, temperate waters. Increased inputs of nitrogen and other micropollutants into water bodies are linked to increases in macroalgal blooms worldwide 1.
Accumulation of macroalgae in rivers and ponds leads to microbial decomposition that may reduce dissolved oxygen in the water bodies through algal respiration 2. Low dissolved oxygen content in water bodies leads to a change in biodiversity and species composition 3. Macroalgal blooms lead to a decline in the growth of non-blooming algae and also affect the diversity of plankton and zooplankton in water bodies 4. These algal blooms affect the ecosystem by changing the quality of water and light penetration and by altering the food chain and food web 5. Various initiatives have been undertaken to eradicate the problems related to algal blooms contaminating freshwater as well as marine water 5. Many studies have reported producing biofuels from microalgae 6. However, only a limited number of studies relate to the use of macroalgae for biofuel production. Toxic algal blooms can be used as a cheap raw material for microalgae cultivation because their density has increased extensively worldwide in recent years due to wastewater being discharged into water bodies 7.
Hydrothermal liquefaction (HTL) is a process in which wet algal feedstocks at 200-400 °C and 10-15 MPa pressure are converted into four products: (1) bio-crude oil, (2) gas, (3) a solid phase and (4) an aqueous phase rich in organics and nutrients 8. During the HTL process, proteins and carbohydrates as well as lipids are converted into biocrude oil; therefore, the yield of oil is higher using HTL 8,9. Thus, HTL is well suited to a variety of biomass, including bacteria, wastewater sludge, and algal biomass that is fast-growing but has low lipid content. This process provides energetic advantages, through the use of wet algal biomass and the efficient separation of products, over alternative techniques such as lipid isolation and transesterification, pyrolysis, etc. 9. HTL is ideal for the conversion of high-moisture biomass into biocrude oil because water acts as the reaction medium, thus avoiding the costly step of biomass drying 9.
The use of a catalyst was found to be more suitable for HTL of macroalgae as compared to microalgae because it increases the conversion of carbohydrates into biocrude oil 10. Previous studies have shown that most catalysts lead to a significant increase in biocrude oil. The HTL process will not be economically viable if these catalysts cannot be recycled properly 11,12. Na2CO3 increases the HTL biocrude oil yield of terrestrial plants and microalgae by promoting hydrolysis 12.
The quality of the biocrude oil depends on the properties of the algal biomass. Biomass with high carbon and hydrogen content and low ash, nitrogen, sulfur, and oxygen content is considered ideal for HTL 13. HTL bio-crude is a dark, highly viscous liquid, with a viscosity 10-10,000 times higher than that of conventional fuel, and has a smoke-like smell 14. A high nitrogen content in algal biomass results in a higher nitrogen content in HTL biocrude oil; this causes emission of toxic NOx, which can be removed by the refining process. A high carbon content of algal biomass results in a high biocrude yield from the HTL process 10. The composition of biocrude oil is mainly carbon (71-73%), hydrogen (7-8%), oxygen (10-11%), nitrogen (6-7%) and sulfur (0-1%), which leads to lower greenhouse gas emissions compared with conventional biofuel and bioethanol 10. Biocrude can be upgraded for the production of gasoline, jet fuel, and other fuels by using a suitable catalyst that removes oxygen, nitrogen and double bonds 15. However, HTL shows a negative energy balance, which is the principal disadvantage of this process. Fast heating and cooling also increase the yield of biocrude oil because of better conversion of protein, lipid, and carbohydrates into liquid fuel rather than solid or gaseous fuel 10.
According to the energy-efficiency ratio, the use of the HTL aqueous phase for microalgae cultivation shows a positive energy balance for biofuel production 16. Recycling nutrients from wastewater could potentially fulfil the nutrient requirements for microalgae cultivation and offers scope to integrate biofuel production with wastewater treatment 17. The post-hydrothermal liquefaction aqueous phase can accumulate approximately 80% of the nutrients and some organics, which provides an excellent opportunity for nutrient and carbon recycling 8. Nutrient and carbon recycling using the aqueous phase of HTL has been investigated in some recent studies 18. These studies show that nutrients and carbon in the aqueous phase from hydrothermal liquefaction can be used for microalgae cultivation at different dilution factors (50-500 times) 19. Several studies have been reported in the literature on nutrient cycling of HTL wastewater for microalgae cultivation, but the effect of different catalysts on the growth of microalgae has not been reported yet. In one study, Jain et al. 7 reported that freshwater toxic algal blooms are a promising feedstock for microalgae cultivation.
The present study specifically investigates a novel integrated method of using harmful algal blooms as biomass for energy production that synergistically combines algal bloom biofuel production with the HTL process, as given in Fig. 1. This study aims to experimentally confirm the feasibility of harmful green algal blooms for biocrude production in four steps: (1) utilization of harmful macroalgal blooms in the HTL process; (2) increasing the yield of biocrude oil using different separation methods; (3) studying the effect of different catalysts on biocrude oil yield; and (4) use of the aqueous phase of catalytic HTL processes for the cultivation of microalgae.
Results
Analysis of macroalgal blooms. The FTIR spectrum of the macroalgal blooms (Fig. 4) reflects three central regions: a lipid band (around 1630 cm−1), an amide band (1401 cm−1) and the carbohydrate region (1100-874 cm−1).

Product yields by HTL. In this study, yields of bio-crude oil by the two separation methods were investigated. In the first separation method (without Soxhlet), 10.03% of biocrude oil was obtained. The results indicated that the oil yield was higher (14.06%) in the second separation method (with Soxhlet). The Soxhlet extraction method was therefore selected and further evaluated for bio-crude oil yield with different catalysts for 45 min at 270 °C, as given in Fig. 1. The total bio-oil was 20.10% with Na2CO3, 18.74% with TiO2, 17.37% with CaO, and 14.6% without a catalyst. The biochar yield was 30.12%, 30.05%, 32.09% and 35.84% for Na2CO3, TiO2, CaO, and without catalyst, respectively. The total biocrude oil yield was measured by combining bio-oil-1 and bio-oil-2. The major portion of the biocrude oil (bio-oil-1) was obtained from the acetone extraction of the solid phase.
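The yields quoted above are conventionally expressed on a dry-feedstock mass basis; the short sketch below shows the arithmetic with hypothetical product masses (the individual masses are not reported in the text).

```python
# Sketch: HTL product yields on a dry-feedstock basis. All masses are
# hypothetical; only the yield definition follows the text above.
dry_feedstock_g = 10.0
products_g = {"bio-oil-1": 1.3, "bio-oil-2": 0.7, "biochar": 3.0}

yields = {k: 100.0 * m / dry_feedstock_g for k, m in products_g.items()}
total_oil = yields["bio-oil-1"] + yields["bio-oil-2"]
print(yields)                      # per-product yield, wt%
print(f"total biocrude yield = {total_oil:.1f} wt%")
```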
Analysis of the biocrude oil obtained by HTL. The biocrude oil obtained from the macroalgal blooms was analyzed by gas chromatography (GC), and the NIST library was used for the identification of compounds (Table 2). Nine main compounds were identified, based on a retention area % >1, during direct HTL without catalysts at 270 °C and 45 min of reaction time. Compounds such as amide derivatives, palmitic acid, phenolic compounds, ketone derivatives, alkane and alkene derivatives and some furans are considered central components of biocrude oil obtained by HTL of algal biomass 20. Most of the components identified in the biocrude oil extracted from the macroalgal blooms were similar to those obtained from microalgae. However, differences in some of the compounds (Pyrrolo[1,2-a]pyrazine-1,4-dione hexahydro-3-(2-methylpropyl) and Phytol) were also recorded during the study. This may be due to differences in algal species and composition of the macroalgal blooms, and also to the different GC-MS analysis procedures implemented.
HHV and ER of algal bloom biocrude oil. The primary elements present in the biocrude oils obtained by the catalytic and non-catalytic reactions are given in Table 3. The biocrude oil extracted in the absence of a catalyst displayed a higher carbon content (70.31%) than the macroalgal blooms.
Element enrichment percentage was higher in biocrude oil obtained using Na2CO3 as a catalyst. In the biocrude oil, the nitrogen concentration was lower than in the green algae. This may be a result of nitrogenous compounds that end up in the liquid phase of the HTL process; the amount of nitrogen in the biomass of macroalgal blooms is higher than that of the bio-crude oil because nitrogenous compounds generated during the HTL process partition into the aqueous phase 21. In contrast to our study, Ross et al. 12 reported an increase in nitrogen content using Na2CO3, whereas Jena et al. 20 observed a decrease in the nitrogen content of biocrude oil with the Na2CO3 catalyst. The sulfur content was recorded below one wt.% for the Na2CO3- and TiO2-based reactions, whereas for petroleum crude oil it varies between zero and three wt.% 22.
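One common way to relate an elemental composition like that in Table 3 to HHV and energy recovery (ER) is a Dulong-type correlation; the sketch below is a generic illustration in which the correlation coefficients, the oil's H/O/S contents and the feedstock HHV are all assumptions (the paper does not state which correlation it used).

```python
# Sketch: Dulong-type HHV estimate from elemental composition, and the
# energy recovery (ER) of the biocrude. The coefficients below are one
# common variant; the H/O/S values and feedstock HHV are assumed,
# illustrative numbers, not data from Table 3.
def hhv_dulong(c: float, h: float, o: float, s: float) -> float:
    """HHV in MJ/kg from wt% C, H, O, S."""
    return 0.3383 * c + 1.442 * (h - o / 8.0) + 0.0942 * s

hhv_oil = hhv_dulong(c=70.31, h=8.0, o=14.0, s=0.7)   # C from the text
yield_oil = 0.201            # 20.10% biocrude yield with Na2CO3
hhv_feedstock = 18.0         # assumed HHV of the dry macroalgal blooms, MJ/kg

er = hhv_oil * yield_oil / hhv_feedstock * 100.0
print(f"HHV(oil) ~ {hhv_oil:.1f} MJ/kg, ER ~ {er:.1f}%")
```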
Analysis of HTL water phase.
The water phase obtained by the HTL of harmful macroalgal blooms had a very foul smell and was dark brown. The pH of the catalytic reactions (7.8-8.3) was found to be higher than that of the non-catalytic reaction (7.6) (Table 4). The same pattern of pH was also observed in the aqueous phase of other algal biomass after HTL when Na2CO3 was used as the catalyst 12,22.
The high amount of TOC present in the liquid phase of HTL is due to organic matter from the feedstock dissolved in water. The nitrogen content in the HTL liquid phase increases in the catalyzed reactions because algal proteins get converted into water-soluble amino acids and ammonia 23.
Effect of catalyst stress on microalgal growth, biomass and lipid productivity. Microalgal growth in the aqueous phase of each catalyst differed; this may be due to removal and detoxification of the catalyst by adsorption onto the cell surface or to the intracellular metabolism of the microalgae in response to the catalysts. In the aqueous phases of Na2CO3 and TiO2, the log phase lasted from day 6 to day 24, while in the aqueous phase of CaO the log phase lasted from day 5 to day 20 (Fig. 2). Overall, microalgal biomass productivity (g/l) after the stationary phase was found to be lower in the control medium (BBM) than in the HTL aqueous phase. As shown in Fig. 2, microalgae cultivated in the HTL aqueous phase grew slowly but in a linear fashion after four days. Each catalyst affected biomass and lipid productivity differently. The highest lipid yield and productivity, 32 ± 6.1% and 147 ± 1.3 mg L−1 d−1, were recorded for Scenedesmus abundans grown in the aqueous phase containing the TiO2 catalyst, followed by CaO (Chlorella minutissima, 26.4 ± 1.7%; 157 ± 1.2 mg L−1 d−1), Na2CO3 (Scenedesmus abundans, 24.2 ± 2.1%; 110 ± 1.5 mg L−1 d−1), and control (Chlorella minutissima, 22.16 ± 1.6%; 134 ± 2.1 mg L−1 d−1) (Tables 5 and 6).
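Volumetric biomass and lipid productivities of the kind reported above are typically computed from the biomass gain over the cultivation period and the lipid fraction of the harvested cells; the sketch below illustrates this with hypothetical culture data.

```python
# Sketch: how volumetric biomass and lipid productivities are typically
# computed. The culture data below are hypothetical, not from the study.
x0_g_per_l = 0.10        # biomass at inoculation (g/L)
xf_g_per_l = 1.55        # biomass at harvest (g/L)
days = 10.0
lipid_fraction = 0.32    # lipid content as a fraction of dry weight

biomass_prod = (xf_g_per_l - x0_g_per_l) / days * 1000.0   # mg L^-1 d^-1
lipid_prod = biomass_prod * lipid_fraction                  # mg L^-1 d^-1
print(f"biomass productivity = {biomass_prod:.0f} mg/L/d")
print(f"lipid productivity   = {lipid_prod:.0f} mg/L/d")
```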
Discussion
The FTIR spectrum of the harmful macroalgal blooms reflects three central regions: a lipid band (around 1630 cm−1), an amide band (1401 cm−1) and the carbohydrate region (1100-874 cm−1). Peaks between 2516 and 1401 cm−1 represent the lipid and phenolic content of the algal blooms 24. Higher phenolic content has been reported in marine macroalgae 25. The sharp peak at 1639 cm−1 is the C=O amide stretch of proteins present in algae 26.
In the present study, for the first time, harmful macroalgal blooms were used in the HTL process for biocrude oil production. High-temperature HTL processes (300-500 °C with holding times of 30-60 min) for the conversion of biomass to bio-crude oil have been reported in previous studies 27,28.

[Table 4. Chemical characteristics of HTL aqueous phase.]

However, the low-temperature (250-290 °C) HTL process is still in the trial stage. In this study a maximum of 20.1 ± 0.3% oil was obtained in the presence of the catalyst Na2CO3 at 270 °C for 45 min. The presence of catalysts increased the oil yield and decreased biochar formation 29. Catalysts improve the quality of biocrude oil in two ways: (a) introduction of catalysts at the time of the HTL process, and (b) upgrading of the bio-oil quality after HTL. Catalysts from renewable resources are attracting attention because they reduce the production cost of biofuels 30. In this study, eggshells were used to produce the CaO catalyst. Use of Na2CO3 as a catalyst in HTL led to an increase in the yield of biocrude oil from algal blooms. The bio-oil produced with Na2CO3 also had a high heating value. Yeh et al. 31 reported that Na2CO3 increases the yield of biocrude oil during the HTL of algal biomass containing a high content of carbohydrates by converting them into oil.
In this study, the maximum yield of crude oil (20.10%) was obtained with the Na 2 CO 3 catalyst. The Na 2 CO 3 catalyst mainly favors the conversion of carbohydrates into biocrude oil in macroalgal and plant-based feedstocks 12,23 . However, Na 2 CO 3 is unfavorable for the conversion of algal lipids to biocrude oil and is not suitable for algal species with high lipid contents 23 . Shakya et al. 22 reported that microalgal species with higher carbohydrate contents gave increased bio-oil yields when Na 2 CO 3 was used as the catalyst. In this study, the HHV and ER of the algal bloom biocrude oil obtained with the Na 2 CO 3 catalyst were 25.59 MJ/kg and 27.50%, respectively. At a similar pressure, Alhassan et al. 32 reported an HHV of 21.15 ± 0.82 MJ/kg and an ER of 41.48% for biocrude oil from Jatropha curcas cake at 250 °C. The energy efficiency of HTL biocrude oil depends on the HHV 33 .
Many catalysts, such as KOH, NaOH, CH 3 COOH, and H 2 SO 4 , have been used by various researchers in the HTL of microalgae, and the catalyst-containing aqueous phase can also be reused as a growth medium for microalgae 12,21 . Wang et al. 27 reported that using TiO 2 in the HTL of microalgae at 300 °C gave the highest bio-oil yield and the maximum liquefaction conversion.
The higher organic carbon content of macroalgae was responsible for the higher yield of bio-oil 34 . The crude oil yield obtained in this study (14-20% dw) is similar to those reported for other green macroalgal species: Enteromorpha prolifera, 23.0% 16 ; Oedogonium, 26.2%; and Ulva, 18.7% of dry weight 34 .
During HTL, algal biomass protein is first converted into amino acids and finally into amines and amides 12 . The ketones and phenols produced during HTL originate from carbohydrates via hydrolysis and dehydration, while the lipids are responsible for the generation of alkenes 16 . The change in nitrogen concentration of the HTL aqueous phase depends on the temperature, catalyst type, and algal strain 12,22 . In this study, high nitrogen content was observed with Na 2 CO 3 as the catalyst. Similar results were observed by Shakya et al. 22 when Na 2 CO 3 was used as the catalyst in low-temperature HTL of Isochrysis and Pavlova, because Na 2 CO 3 increases the hydrolysis of protein into water-soluble compounds.
The aqueous phase of HTL contained high amounts of soluble organic compounds, carbon, and nitrogen. Kumar et al. 35 reported that CaCl 2 increases microalgal biomass productivity. Higher lipid content was observed in the catalyst-containing aqueous phases than in the control. These results are in agreement with other studies in which an increase in lipid content on exposure to heavy metals was reported 36 . Oxidative stress results in degradation of the photosynthetic machinery, while the protective mechanism of algal cells leads to the accumulation of unsaturated fatty acids 37 .
This experimental work confirmed that harmful macroalgal blooms are a suitable feedstock for HTL and that the aqueous phase can be reused as a nutrient source for the cultivation of microalgal biomass.
Methods
Materials. The microalgal strains Chlorella minutissima (MCC-27), Scenedesmus abundans (NCIM 2897),
Chlorella singulari UUIND, and Chlorella sorokiniana UUIND6 were used in the present study. All the chemicals, including TiO 2 and Na 2 CO 3 , and the solvents used were of HPLC grade and acquired from Himedia, India.
Proximate analysis of green macroalgal blooms. Green macroalgal blooms were collected locally during the winter season (December-February 2017) from a water pond and a freshwater river near Uttaranchal University, Uttarakhand, India. The ash content of the algal bloom biomass was determined according to the NREL analytical procedure 38 . The elemental composition of the algal blooms was determined with an elemental analyzer (Thermo Fisher, USA). Proximate analysis of the sample was carried out using the standard methods of the Association of Official Analytical Chemists (AOAC). FTIR analysis of the algal bloom biomass was performed (FTIR 6700, NICOLET) over the frequency range 4000-450 cm −1 .
Experimental procedure of hydrothermal liquefaction (HTL). The hydrothermal liquefaction of the harmful macroalgal blooms was carried out in a 100 ml high-pressure autoclave (Parr reactor) operated in batch mode. Catalysts of various types and concentrations have been used in the HTL processes reported in the literature, but a 10:1 (feedstock:catalyst) ratio has been reported to give the maximum conversion of feedstock to crude oil by HTL 27,39 . The reactor was loaded with wet algal blooms, with or without catalyst, at a 10:1 ratio. Three catalysts were used in this study: TiO 2 , Na 2 CO 3 , and CaO. The CaO catalyst was prepared from eggshells according to the protocol given by Niju et al. 40 . The reactor was pressurized to 4.5 MPa with He gas and heated to 270 °C at a rate of 5 °C/min, with a holding time of 45 min. After completion of the reaction, the reactor was immediately cooled and opened; the gas was vented off, and the water phase and solid mixture were separated by vacuum filtration. The filtrate, containing dissolved organic compounds, was marked as the aqueous phase. Two separation methods were used for recovering these products, as described below.
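Before the separation details, a rough numeric illustration of the loading and heating schedule above may be helpful. The feedstock mass in this sketch is hypothetical; the 10:1 ratio, 5 °C/min ramp, 270 °C set point, and 45 min hold are those stated in the text.

```python
# Back-of-the-envelope check of the HTL loading and heating schedule.
# feedstock_g is a hypothetical example value, not from the study.

feedstock_g = 20.0                 # hypothetical wet algal bloom loading
catalyst_g = feedstock_g / 10      # 10:1 feedstock:catalyst ratio
ramp_min = (270 - 25) / 5          # minutes to heat from ~25 °C to 270 °C at 5 °C/min
total_min = ramp_min + 45          # ramp time plus the 45 min holding time

print(f"Catalyst required: {catalyst_g:.1f} g")
print(f"Heating ramp: {ramp_min:.0f} min; total run: {total_min:.0f} min")
```

For a 20 g charge this gives 2 g of catalyst and roughly 49 min of ramping before the 45 min hold.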
In the first separation method, the solid phase was treated three times with acetone to recover the oil phase. The acetone was evaporated at 60 °C, and the resulting oil phase was weighed and marked as biocrude oil-1. The water phase was treated with diethyl ether; the upper phase was aspirated out, and the diethyl ether was evaporated in a rotary evaporator. The biocrude oil thus obtained was measured gravimetrically and marked as biocrude oil-2.
In the second separation, the method of Karagoz et al. 29 was used with some modifications. Briefly, the solid phase was acidified to pH 1-2 with H 2 SO 4 (1.3 M) overnight and dried the next day. Biocrude oil was extracted from the solid mixture by Soxhlet extraction with acetone as the solvent until the solvent in the thimble became colorless. The extract was allowed to stand for three hours, and the upper phase was separated out; the lower phase (pellets) contained the catalyst or catalyst residues. Acetone was recovered from the upper phase at 60 °C, and the extracted oil phase was weighed and marked as biocrude oil-1. Diethyl ether was added to the liquid phase; the upper phase was aspirated out, the diethyl ether was evaporated in a rotary evaporator, and the remaining fraction was measured gravimetrically and marked as biocrude oil-2.
Biocrude oil-1 was combined with biocrude oil-2 for the calculation of the total biocrude oil as wt%. Analysis of the biocrude samples was done using an elemental analyzer (Thermo Fisher, USA).
HHV and energy recovery (ER). The essential properties of the HTL biocrude oil, such as the biocrude oil yield percentage, HHV, element enrichment (%), and ER, were calculated using standard empirical formulas 27 .
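A minimal sketch of these calculations is given below, assuming the standard yield and energy-recovery definitions and a Dulong-type correlation for the HHV; the exact correlations of ref. 27 may differ, and all input masses and compositions here are hypothetical.

```python
# Sketch of biocrude yield, HHV (Dulong-type correlation), and energy recovery (ER).
# All numeric inputs below are hypothetical examples, not measured data.

def biocrude_yield_pct(biocrude_g: float, dry_feedstock_g: float) -> float:
    """Biocrude yield as wt% of dry feedstock."""
    return biocrude_g / dry_feedstock_g * 100

def hhv_dulong(c: float, h: float, o: float) -> float:
    """Higher heating value (MJ/kg) from a Dulong-type correlation; c, h, o in wt%."""
    return 0.3383 * c + 1.422 * (h - o / 8)

def energy_recovery_pct(hhv_oil: float, yield_pct: float, hhv_feedstock: float) -> float:
    """ER (%) = energy in the biocrude relative to energy in the feedstock."""
    return hhv_oil * (yield_pct / 100) / hhv_feedstock * 100

yield_pct = biocrude_yield_pct(biocrude_g=2.01, dry_feedstock_g=10.0)  # ~20.1%
hhv_oil = hhv_dulong(c=62.0, h=7.5, o=24.0)                            # hypothetical C/H/O
er = energy_recovery_pct(hhv_oil, yield_pct, hhv_feedstock=18.0)       # hypothetical feedstock HHV
print(f"yield {yield_pct:.1f}%  HHV {hhv_oil:.2f} MJ/kg  ER {er:.1f}%")
```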
Cultivation of microalgae in the HTL aqueous phase. Microalgal strains were cultivated in 1 L shake flasks containing different dilutions of the aqueous phase from the catalytic and non-catalytic reactions, with Bold's Basal Medium (BBM) as a control, for 14 days under a 16 h light:8 h dark photoperiod, irradiated with LED tubes (200 μmol m −2 s −1 ). BBM was prepared according to the protocol developed by Guarnieri et al. 42 .
Three dilutions of the aqueous phase of the non-catalytic reaction, 200×, 400×, and 600×, were prepared with distilled water (Table 1). The effect on the growth rate of the microalgal strains was monitored by measuring the absorbance at 750 nm with a UV-Vis spectrophotometer (Shimadzu, model no. 1700). The 400× dilution supported the maximum biomass and was therefore evaluated further for biomass production alongside 1% BBM and 400× dilutions of the aqueous phases of the catalytic reactions.
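As a simple illustration, the volumes implied by these dilution factors for a 1 L culture can be computed as below; the actual prepared volumes are not stated in the text, so this is a sketch only.

```python
# Volume of HTL aqueous phase per litre of medium implied by each dilution factor.
culture_ml = 1000.0
for factor in (200, 400, 600):
    aqueous_ml = culture_ml / factor    # e.g. 400x -> 2.5 mL per litre
    water_ml = culture_ml - aqueous_ml  # made up to volume with distilled water
    print(f"{factor}x: {aqueous_ml:.2f} mL aqueous phase + {water_ml:.2f} mL water")
```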
Determination of total lipid content (%, w/w) and lipid productivity of the cultivated microalgae.
For lipid extraction, samples of the culture broth were first dried. The microalgal cells were then broken down with liquid nitrogen using a mortar and pestle. From the resulting fine powder, lipids were extracted with chloroform:methanol (2:1), kept overnight at room temperature with constant shaking 35 . The extract was treated with 0.034% MgCl 2 and centrifuged at 3500 rpm for 5 min. The supernatant was washed two to three times with 1 ml of 2 N KCl/methanol (4:1 v/v), and 5 ml of chloroform/methanol/water (3:48:47, v/v/v) was then added. The bottom chloroform layer was transferred to a new test tube, and the lipid yield was measured gravimetrically. Lipid yield and lipid productivity were calculated by the following equations 35 :
Lipid yield (%) = Lipid content (g) / Dry algal biomass (g) × 100
Lipid productivity = Biomass productivity × Lipid yield (%) / 100
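A minimal sketch of these two equations follows; the input values are hypothetical examples chosen only to illustrate the arithmetic, not measured data from the study.

```python
# Sketch of the lipid yield and lipid productivity equations above.
# All input values are hypothetical examples.

def lipid_yield_pct(lipid_g: float, dry_biomass_g: float) -> float:
    """Lipid yield (%) = lipid content (g) / dry algal biomass (g) * 100."""
    return lipid_g / dry_biomass_g * 100

def lipid_productivity(biomass_prod_mg_l_d: float, yield_pct: float) -> float:
    """Lipid productivity = biomass productivity * lipid yield (%) / 100."""
    return biomass_prod_mg_l_d * yield_pct / 100

y = lipid_yield_pct(lipid_g=0.32, dry_biomass_g=1.0)                 # 32% lipid yield
p = lipid_productivity(biomass_prod_mg_l_d=460.0, yield_pct=y)       # hypothetical biomass productivity
print(f"lipid yield {y:.1f}%, lipid productivity {p:.0f} mg L^-1 d^-1")
```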
"year": 2019,
"sha1": "3f0b94bfec09fae305e8449053d7482fa1da5530",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-47664-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f0b94bfec09fae305e8449053d7482fa1da5530",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |