Synthetic Artificial Apoptosis‐Inducing Receptor for On‐Demand Deactivation of Engineered Cells
Abstract The design of a fully synthetic, chemical "apoptosis-inducing receptor" (AIR) molecule is reported that is anchored into the lipid bilayer of cells, is activated by an incoming biological input, and responds with the release of a secondary messenger, a highly potent toxin for cell killing. The AIR molecule has four elements, namely, an exofacial trigger group, a bilayer anchor, a toxin as a secondary messenger, and a self-immolative scaffold as a mechanism for signal transduction. Receptor installation into cells is established via a robust protocol with minimal cell handling. The synthetic receptor remains dormant in the engineered cells, but is effectively triggered externally by the addition of an activating biomolecule (enzyme) or, in a mixed cell population, through interaction with the surrounding cells. In 3D cell culture (spheroids), receptor activation is accessible for at least 5 days, which compares favorably with other state-of-the-art receptor designs.
Introduction
Receptors are an exquisite tool of molecular and cell biology. [1] Biological functions of receptors are pivotal for each cell and include recognition of solutes, controlled adhesion events, receptor-mediated endocytosis, and transmembrane signaling. The design of synthetic artificial receptors is highly challenging but at the same time highly appealing to a broad scientific audience. [2][3][4][5][6][7][8][9][10][11] Among the diverse functions of receptors, modulation of adhesion is accomplished well using artificial receptors, [6,12] which are created by applying the well-established arsenal of tools from colloidal and surface science to the cell surface. In turn, metabolic engineering has proven to be a highly powerful technique to install artificial chemistry into the cellular membrane, and this has been successful in the context of designing artificial receptors for drug targeting. [10,[13][14][15] Artificial "receptor-mediated endocytosis" has also been engineered using bilayer-anchored ligands and antibody targeting of synthetic receptors. [5,9] Recently, artificial receptors have emerged as a unique orthogonal recognition ligand to engineer highly specific "chimeric antigen receptor T cells" (CAR T) [16][17][18] and, independently, to boost in vivo CAR T expansion. [19] These examples illustrate the great potential of artificial receptors for the development of cell-based therapies and broader biomedical engineering. Nevertheless, while cell-surface sensing and artificial endocytosis are well mimicked by synthetic tools, [2] transmembrane signaling using synthetic chemistry remains the greatest challenge. Even the most prominent examples to date remain simplified compared to their natural counterparts and only perform their function in model lipid bilayers (e.g., liposomes), [11,20] whereas their performance in live cells has yet to be demonstrated.
Herein, we present the development of a novel approach to externally control cell fate, using a membrane-anchored, synthetic, chemical "apoptosis-inducing receptor" (AIR). Viability switch mechanisms of this kind are highly warranted in the context of cell-based therapies, for example, to make it possible to discontinue treatment in the event of severe side effects. [21] Apoptosis-inducing receptors may also prove useful in the context of cancer treatment, where remotely addressable cells act as "Trojan horses" and mediate the killing of surrounding (cancerous) cells. [22][23][24] Artificial receptor mechanisms based on biological molecules (chimeric receptors) are highly powerful but are also highly complex to engineer, and for patient-derived cells, they require lengthy procedures with extensive cell handling. [3,4,19] We hypothesized that the desired artificial receptor can be designed using synthetic chemistry. Thus, we propose an artificial chemical receptor that bears similarity with natural and chimeric counterparts by featuring an exofacial component for receptor activation, a mechanism of signal transduction, and a secondary messenger molecule that exerts intracellular effects (Scheme 1). The AIR molecule designed in this work consists of four fragments: i) an exofacial trigger group that can be cleaved using an externally added enzyme, ii) a membrane anchor, iii) a self-immolative mechanism for traceless release of the secondary messenger, and iv) a potent liposoluble toxin (Scheme 1B). We demonstrate that receptor installation into cells is a rapid procedure with minimal cell handling, thus making it highly advantageous compared to its chimeric counterparts. Receptor activation can be achieved by an externally added biological stimulus (enzyme), or through interaction with surrounding cells.
Scheme 1. A) Schematic illustration of the proposed artificial apoptosis-inducing receptor (AIR) design based on a membrane-bound prodrug with external (exofacial) activation and a secondary messenger molecule; B) chemical formula of the bilayer-anchored AIR, which consists of the glucuronic acid trigger, a C18 lipophilic anchor, a p-hydroxybenzyl alcohol self-immolative linker for drug release, and MMAE as a releasable secondary messenger to exert an intracellular response.
Results and Discussion
For receptor activation, we chose to use enzymatic activity and focused on β-glucuronidase (GUS), an enzyme with a unique, predominantly intracellular distribution in the human body. [25][26][27] This aspect is pivotal to minimize nonspecific activation of the engineered cells. Indeed, activation of the corresponding substrates, glucuronides, in the human body is only associated with diseased tissues such as cancer, and not with normal healthy tissues. [25][26][27] Glucuronides are also highly advantageous for receptor design because they are highly polar: [28,29] the use of glucuronic acid as a removable masking group is poised to ensure exofacial localization and accessibility of the trigger to enzymatic activation. As an effector molecule, we used a highly potent intracellular toxin, monomethyl auristatin E (MMAE), which has excellent lipophilicity and cell permeability properties (calculated log P = 3.5). Finally, the mechanism of triggered drug release is engineered using a self-immolative p-hydroxybenzyl alcohol scaffold, which conveniently offers vacant ortho-positions to install a membrane-anchoring element, in our case a C18 aliphatic tail that mimics the lipid bilayer constituents. Self-immolative linkers (SILs) are highly useful in medicinal chemistry, specifically in the design of prodrugs. [29] The innovative aspect of this work lies in the fact that we use the SIL as a tool for transduction of a chemical stimulus across the sealed biomolecular membrane, mimicking the performance of natural receptors.
The synthesis of the proposed AIR molecule was carried out in 10 steps, starting from the protected glucuronic acid 1 (Figure 1A). Glucuronidation of 4-hydroxy-3-nitrobenzaldehyde was conducted under Königs-Knorr conditions, yielding excellent β-selectivity. Subsequently, the aldehyde was reduced to the corresponding benzyl alcohol, and the nitro group was reduced to the aniline 6. The aniline functionality provided the opportunity to install a membrane anchor, which was chosen to be a C18 chain 8. In turn, the benzyl alcohol provided a drug conjugation site, through activation with p-nitrophenyl chloroformate followed by coupling to the toxin, MMAE. Finally, the glucuronic acid was deprotected under Zemplén deacetylation conditions followed by saponification of the methyl ester to afford the desired compound 11 (Glu-C18-MMAE). The AIR molecule revealed a critical aggregation concentration of 122 × 10⁻⁶ M; it was stable in solution and exhibited nondetectable spontaneous drug release. Upon addition of GUS, the prodrug underwent a rapid bioconversion and quantitatively released the incorporated drug, MMAE (Figure 1B).
To investigate lipid bilayer anchoring of AIR, we employed sum frequency scattering (SFS) spectroscopy as a tool that can probe molecular vibrations at vesicle surfaces in situ and report on (bio)molecule-membrane interactions. The technique employs narrowband visible laser pulses overlapped in space and time with broadband infrared laser pulses; nonlinear optical frequency mixing generates sum-frequency photons, which carry a vibrational spectrum of the particle surface and provide information about the order and alignment of interfacial species. [30][31][32] For neat vesicles prepared using deuterated lipids, the C=O stretching region of the spectrum (the lipid head groups at the vesicle interface) and the C-D region of the spectrum (resonances near 2075, 2125, and 2225 cm⁻¹, assigned to the symmetric CD3 stretch, the CD3 Fermi resonance, and the antisymmetric CD3 stretch, respectively) reveal resonances related to ordered lipid molecules. The intensity of these modes was drastically reduced upon exposure to Glu-C18-MMAE (Figure 1C), whereas dynamic light scattering profiles of the vesicles remained unchanged (Figure S3, Supporting Information). These data illustrate a severe reduction of lipid order within otherwise intact vesicles in the presence of Glu-C18-MMAE, which is indicative of AIR inserting into the lipid bilayer membrane.
Next, we labeled the AIR molecule at the glucuronic acid carboxyl group, using an amine-containing derivative of fluorescein (for synthesis and characterization, see the Supporting Information). Fluorescein can also be used as an exofacial ligand, [33] which is important to maintain the overall amphiphilic character of the AIR molecule. Confocal laser scanning microscopy (CLSM) reveals that upon administration onto giant unilamellar vesicles (GUVs), the AIR molecule spontaneously incorporates into the lipid bilayer, as evidenced by a strong fluorescent signal corresponding to fluorescein (Figure 1D). This behavior was not observed for the glucuronide of MMAE devoid of the C18 lipid anchor (Glu-MMAE). Together, SFS spectroscopy and CLSM validate spontaneous lipid bilayer insertion of the AIR molecule, which is pivotal for its application as an artificial receptor. For cell culture evaluation, we used a GUS-knockout derivative of the HAP-1 human myeloid leukemia-derived cell line (GUS Neg) as well as its parent, GUS-competent counterpart. Dose-response curves were obtained for the synthesized AIR molecule (Glu-C18-MMAE) and its simplified analogue devoid of the C18 anchor, Glu-MMAE. Representative dose-response curves are shown in Figure 1E,F; IC50 values derived therefrom are listed in Table 1.
The designed compounds act as prodrugs for MMAE: the subnanomolar toxicity of MMAE is masked to a micromolar level by the glucuronides, and is restored to the nanomolar range in the presence of the GUS enzyme (Figure 1E and Table 1), as is well documented. [28,34] These results demonstrate that glucuronides are highly effective as prodrugs and the fold-change in toxicity-related IC50 values (defined as QIC50) exceeds 100, which provides a highly favorable safety window. However, the synthesized glucuronides, regardless of the presence of the C18 anchor, exhibited only minor if any anchoring into the cell membranes when administered onto cells in the presence of serum. Indeed, a simple washing step implemented after a 2 h (pro)drug incubation with cells, but prior to addition of the activating enzyme, removes 99+% of the (pro)drug, as evidenced by an ≈100-1000-fold increase in the apparent IC50 values (Figure 1F, cf. "no wash" and "+FBS, wash"). This is observed for each of the three solutes, including pristine MMAE and Glu-C18-MMAE. In other words, under these conditions, the glucuronides act as prodrugs and not as membrane-bound receptors. Cell membrane anchoring of Glu-C18-MMAE was successfully achieved through administration and brief incubation of cells with AIR in serum-free media, that is, conditions that are routinely employed for cell isolation and sorting during, e.g., engineering of CAR T. Subsequent cell culture was conducted in full serum, that is, standard cell culture conditions. CLSM reveals strong fluorescence of cells upon exposure to the fluorescently labeled derivative of AIR, which is not observed for pristine cells or the fluorescently labeled glucuronide Glu-MMAE (Figure 1G). Cell culture evaluation revealed that with this mode of administration, the IC50 values for toxicity were 2.7 × 10⁻⁹ M for MMAE and micromolar for the Glu-MMAE prodrug (no bilayer anchoring). The IC50 value for Glu-C18-MMAE was 23 × 10⁻⁹ M, which is only tenfold higher than for MMAE under these cell culture conditions. This IC50 value is also only threefold higher than the value observed through activation of the total administered prodrug payload in solution (the "no wash" conditions), which suggests that ≈30% of the administered prodrug was anchored into the lipid bilayer.
Most importantly, the anchored Glu-C18-MMAE remains dormant in GUS Neg cells. In the absence of the enzyme, spontaneous drug release affords an IC50 value of over 3 × 10⁻⁶ M, which is ≈130-fold higher than in the presence of the enzyme (24 × 10⁻⁹ vs 3100 × 10⁻⁹ M in GUS Neg cells, in the presence and absence of added enzyme, respectively). This offers a highly favorable safety window for remotely triggered deactivation of cells equipped with the dormant AIR receptor molecule using an externally added enzyme. From a different perspective, under the same anchorage conditions, Glu-C18-MMAE in GUS-competent cells, with no external addition of the GUS enzyme, exhibited an IC50 value of ≈59 × 10⁻⁹ M. This illustrates a 50-fold selectivity window for differential toxicity in GUS Neg and GUS-competent cells (3100 × 10⁻⁹ vs 59 × 10⁻⁹ M in GUS Neg vs GUS-competent cells, respectively), for potential cell-based drug delivery applications.
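For clarity, the fold-change arithmetic behind these safety and selectivity windows can be reproduced directly from the IC50 values quoted above; the short sketch below is illustrative only (the variable names are ours, and the ≈30% anchored fraction follows from the approximately threefold IC50 difference discussed earlier).

```python
# Fold-change ratios computed from the IC50 values quoted in the text
# (all values in molar units; variable names are illustrative only).
ic50_dormant   = 3100e-9   # anchored Glu-C18-MMAE in GUS Neg cells, no enzyme added
ic50_triggered = 24e-9     # anchored Glu-C18-MMAE in GUS Neg cells, enzyme added
ic50_gus_pos   = 59e-9     # anchored Glu-C18-MMAE in GUS-competent cells, no enzyme added

safety_window      = ic50_dormant / ic50_triggered   # ~130-fold, triggered vs dormant
selectivity_window = ic50_dormant / ic50_gus_pos     # ~50-fold, GUS Neg vs GUS-competent

# The anchored fraction is estimated from the ~3-fold difference between the
# anchored-receptor IC50 and the "no wash" (total payload in solution) IC50.
anchored_fraction = 1 / 3

print(f"safety window ≈ {safety_window:.0f}-fold, "
      f"selectivity ≈ {selectivity_window:.0f}-fold, "
      f"anchored ≈ {anchored_fraction:.0%}")
```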
Taken together, the results of cell culture evaluation for the synthesized AIR molecule indicate that in GUS Neg cells, AIR is dormant over a wide range of concentrations and can be activated to release the incorporated toxin, with a 130-fold therapeutic index over nontriggered drug release and a 50-fold therapeutic index over the GUS-competent cells. To capitalize on these opportunities, we investigated triggered deactivation of engineered cells in 3D cell culture, using cell spheroids (Figure 2A). GUS Neg cells engineered to contain AIR formed well-defined spheroids that could be grown for extended times (at least 7 days). At an AIR feed content of up to 8 × 10⁻⁶ M, we observed no spontaneous toxicity to cells, as evidenced by fluorescence microscopy observation (Figure 2B) and fluorescence quantification (Figure 2C). Dormant AIR could be activated by addition of the GUS enzyme, which resulted in pronounced toxicity to cells, observed at receptor feed concentrations as low as 1 × 10⁻⁶ M. This illustrates successful engineering of mammalian cells to contain a dormant, artificial apoptosis-inducing receptor that can be activated to achieve on-demand cell killing.
Dormant AIR could be activated on demand for at least 5 days of spheroid incubation (Figure 2D). Over this time, AIR is possibly diluted through cell division, through redistribution into the solution phase (due to association with serum proteins), and through redistribution into intracellular compartments. Prior reports on cell surface engineering (design of an artificial glycocalyx) revealed a fast loss of exofacial membrane-bound ligands, with kinetics of loss measured in hours. [6] Thus, the synthetic receptor molecule engineered in this work exhibits membrane persistence well exceeding that of the lipids, which is highly advantageous. The most membrane-persistent anchor reported previously is cholesterylamine, for which exofacial presence is due to continuous recycling from the cell surface into the intracellular compartments and back to the cell surface, over multiple days. [5,6] Yet even in this case, the artificial glycocalyx has been shown to perform the nominated function over a time frame of 27 h (in vivo, in zebrafish). [6] Compared to this, 5 days of accessibility of AIR in 3D cell culture presents a considerable advance. In part, this is due to the design of the receptor, based on the C18 aliphatic lipid bilayer anchor and MMAE, which is a lipophilic toxin and contributes to membrane anchoring. However, to a greater extent, we believe this is due to the chosen 3D (spheroid) cell culture format, while in classical 2D cell culture, we observed a significantly shorter duration of accessibility of the receptor (not exceeding 2 days). Finally, we note that it may be possible to extend the on-demand activation opportunities beyond 5 days through the use of cholesterylamine, for receptor recycling via natural cell biology mechanisms; this is the subject of ongoing research.
Figure 2. A) Schematic illustration of the 3D cell culture approach that consists of engineering cells using the AIR molecule, cell organization into spheroids, and subsequent cell culture and receptor activation. B) Fluorescence microscopy images of GUS Neg cell spheroids engineered using the apoptosis-inducing receptor at varied receptor feed concentrations, with or without receptor activation; C) cross-section intensity profile analyses for the fluorescence images shown in panel B; D) fluorescence microscopy images illustrating viability of the AIR-containing GUS Neg cells within spheroids with receptor activation at day 1, day 3, and day 5 (receptor feed concentration during cell engineering 1 × 10⁻⁶ M); E) Live (green)/Dead (red) fluorescence microscopy images of 3 day old spheroids grown using GUS-competent HAP-1 cells with or without the AIR molecule: receptor feed concentration 1 × 10⁻⁶ M; no external enzyme added for receptor activation; scale bars 500 µm; F) schematic illustration of the assembly of the mixed cell spheroids; G) fluorescence microscopy images illustrating cell viability within mixed cell spheroids as a function of receptor feed concentration during cell engineering, for a "Trojan horse"-type receptor activation (trigger "off") and receptor activation using externally added GUS enzyme (trigger "on"). For full details, see the Supporting Information.
From a different perspective, we also considered that engineered cells can act as "Trojan horses" for delivery of the dormant AIR molecule to diseased tissues (e.g., for cancer treatment). Indeed, white blood cells such as monocytes and lymphocytes are known to infiltrate tissues and specifically the disease-affected areas. Immune cells gain access through even the most tightly guarded barriers, such as the blood-brain barrier. [24] Cell-based therapies are gaining momentum, and engineered cells equipped with AIR may be envisioned as efficient cell-based drug carriers. [24,35] We considered that engineered cells can exert toxicity through at least three alternative mechanisms. First, toxicity exerted by the engineered cells can be due to receptor "sharing," from the infiltrating engineered cells to the surrounding cells comprising the target tissue. Where the infiltrating donor cells are GUS Neg, the acceptor cells are GUS-competent and will therefore be capable of activating AIR (Figure 2E; also cf. IC50 values, Table 1). Second, cancerous and inflamed tissues are characterized by the presence of extracellular GUS. [25][26][27] Elsewhere in the body GUS is confined to the intracellular compartments, and extracellular GUS can be used as a marker for detection and quantification of cancer tissue growth. [25] Extracellular GUS can therefore activate the artificial receptor designed in this work, in a manner much similar to that illustrated in Figure 1. Finally, receptor activation can be achieved remotely, via external addition of the GUS enzyme. Worthy of note, a receptor design featuring MMAE is beneficial in that this toxin is characterized by a strongly pronounced bystander effect (owing to the ease of membrane permeation of MMAE). [36] Indeed, we observed successful cell killing in 3D cell spheroids assembled using a mixed population of cells (GUS Neg and GUS-competent). "Trojan horse" toxicity was achieved within spheroids that contained 10% engineered cells, without external addition of the GUS enzyme (Figure 2F,G). For this, engineering of the donor cells was performed using an 8 × 10⁻⁶ M feed concentration of the receptor. At this concentration, spheroids assembled using only GUS Neg cells showed no signs of cell death (Figure 2B). In contrast, in a mixed cell spheroid, with no externally added enzyme, we observed significant cell killing (Figure 2G). It was further possible to trigger the AIR using externally added enzyme. On-demand toxicity to the mixed cell spheroid was achieved using GUS Neg cells engineered with as low as a 2 × 10⁻⁶ M feed concentration of the receptor (Figure 2G).
Conclusions
This study presents the design of a molecular switch, a synthetic apoptosis-inducing receptor, for on-demand deactivation of cells. This receptor contains i) an exofacial trigger for enzymatic activation of drug release, ii) a highly potent liposoluble toxin, MMAE, iii) an additional membrane anchor (C18 aliphatic side group), and iv) a self-immolative scaffold as a signal transduction mechanism. Compared to chimeric (protein-based) artificial receptors, the synthetic chemical receptor designed in this work is installed into mammalian cells within minutes, with minimal cell handling, which is highly appealing for practical applications of engineered cells. We demonstrated on-demand deactivation of cells in 2D and in 3D cell culture, in monoculture and in mixed cell coculture. We believe our data open up diverse opportunities for the use of engineered cells in biotechnology and biomedicine, specifically communication with the engineered cells via a dedicated route and cell-based drug delivery.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Extended Line Map-Based Precise Vehicle Localization Using 3D LIDAR
An Extended Line Map (ELM)-based precise vehicle localization method is proposed in this paper and is implemented using 3D Light Detection and Ranging (LIDAR). A binary occupancy grid map, in which grids for road markings or vertical structures have a value of 1 and the rest have a value of 0, was created using the reflectivity and distance data of the 3D LIDAR. From the map, lines were detected using a Hough transform. After the detected lines were converted into node and link form, they were stored as a map. This map is called an extended line map, whose data size is extremely small (134 KB/km). The ELM-based localization is performed through correlation matching. The ELM is converted back into an occupancy grid map and matched to the map generated using the current 3D LIDAR. In this instance, a Fast Fourier Transform (FFT) was applied as the correlation matching method, and the matching time was approximately 78 ms (based on MATLAB). The experiment was carried out in the Gangnam area of Seoul, South Korea. The traveling distance was approximately 4.2 km, and the maximum traveling speed was approximately 80 km/h. As a result of localization, the root mean square (RMS) position errors in the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively.
Introduction
Currently, precise vehicle localization is being recognized as a key technology for autonomous driving. Although there are no standards for the localization accuracy required for autonomous driving, the report in [1] requires a localization accuracy of less than 0.7 m in the lateral direction at a 95% confidence level. Recently, a localization accuracy of less than 0.5 m in the lateral direction and less than 1 m in the longitudinal direction has been generally required at a 95% confidence level. In general, the accuracy of the Real-Time Kinematic (RTK) Global Positioning System (GPS) meets such requirements. In urban areas, however, even the costly RTK GPS/Inertial Measurement Unit (IMU) integrated system cannot meet the localization accuracy requirements for autonomous driving.
To address this problem, many studies are being conducted to improve localization accuracy through the convergence of various sensors mounted on autonomous vehicles. In particular, 3D Light Detection and Ranging (LIDAR) is being used as a key sensor for precise vehicle localization. As LIDAR can provide accurate distances and reflectivity information of surrounding objects, it can also estimate the relative vehicle position for surrounding objects in a very accurate manner. Therefore, precise vehicle localization is possible using 3D LIDAR if precision maps for the surroundings are available.
Precision maps are essential for precise vehicle localization. In general, precision maps for LIDAR-based precise vehicle localization are divided into point maps, line maps, 2D/2.5D plane maps, and 3D maps.
First, there are the most basic localization methods based on point maps. In the papers of [2] and [3], the line components of a building wall were extracted from LIDAR data, and both end-points of the lines or corners where two lines met were detected and used for localization. In these papers, however, the quantitative localization accuracy was not derived, and only the point landmark detection performance was analyzed. The study in [4] used columns such as street trees and lamps as point landmarks based on their characteristics perpendicular to the ground. However, the localization produced position errors higher than 0.5 m in many areas. The study in [5] detected the vertical corners of buildings and used them for precise vehicle localization. As such, vertical corners are perpendicular to the ground and can be expressed as point landmarks on a 2D horizontal plane. For this reason, the vertical corner map had a very small data size (28 KB). However, vertical corners were not detected in some areas, and in such areas, an increase in the vehicle position error is inevitable. Furthermore, this method cannot be used in areas that have no buildings.
Secondly, there are localization methods that use line detection. In the papers of [6,7], the horizontal line components of a hallway were detected using a LIDAR mounted on an indoor traveling robot, and localization was performed through the distance and angle information between the lines and the robot. In an indoor hallway, it is possible to easily detect the line components of the wall. Outdoors, however, line detection is not easy as the surrounding buildings and structures have extremely complicated shapes and are hidden by obstacles, such as street trees. For autonomous vehicle localization, road markings such as lanes are typically used. Such road markings can be detected using the reflectivity information of LIDAR and are used to create a line map [8]. In addition, it is possible to correct the position error of a vehicle by matching the detected lane and the line map [9]. For the study in [9], the curb was extracted using LIDAR. From the curb, the area of interest was set and lanes were detected. As a result of localization, the average errors in the lateral and longitudinal directions were 0.14 and 0.20 m, respectively. However, position errors higher than 0.5 m occurred in many sections.
Thirdly, there are localization methods that use plane maps. Hybrid maps were generated by integrating the lane and landmark information with 2D occupancy grid maps, and localization was performed using the hybrid maps [10][11][12]. In the paper of [13], a multilevel surface map was used for localization in a multi-floor parking lot. Studies in which 2D LIDAR data were accumulated on a plane and applied to localization were also conducted [14,15]. In the papers of [16,17], a road reflectivity map was generated and used to perform localization. However, the road reflectivity map was significantly affected by changes in weather and illumination, as well as the presence of vehicles.
To deal with such shortcomings, methods using vertical structures were researched [18,19]. A multiresolution Gaussian mixture map in which the height information of vertical structures is stored in a 2D grid map was proposed. The localization using this method produced RMS position errors of 10 and 13 cm in the lateral and longitudinal directions, respectively. Despite the excellent localization performance, the multiresolution Gaussian mixture map had a very large data size (44.3 MB/km). Most of the localization methods using plane maps require extremely large calculation amounts for map matching.
Finally, there are localization methods based on 3D maps. Recently, localization methods using Normal Distribution Transformation (NDT) scan matching have been researched [20,21]. The NDT scan matching exhibits relatively accurate results with a standard deviation of the horizontal position error of less than approximately 30 cm. However, the calculation time is very long (approximately 2.5 s).
In general, localization accuracy and reliability increase with the amount of information in the map. On the other hand, the data file size and the calculation amount for map matching also increase. For real-time vehicle localization, the data file size and calculation amount must be small. Therefore, it is important to ensure the highest localization accuracy and reliability using a map with a small amount of information.
As lanes must exist on roads where vehicles travel, map production companies are producing lane maps most preferentially. Furthermore, such lane maps are stored in line form to minimize the data file size. However, methods with high localization accuracy and reliability among the existing localization methods using LIDAR mostly use maps with very large data sizes. Therefore, research to ensure high localization accuracy using actual produced line maps is necessary. Consequently, considering the compatibility with actual produced maps as well as the data size, it is deemed most effective that maps for LIDAR-based localization also have a line form.
In this paper, a road reflectivity map and an occupancy grid map for surrounding vertical structures were generated, and line components were extracted from each map. The extracted lines were stored in a map in node and link form. This map is called an extended line map (ELM) in this paper. As an ELM includes all information for road markings and vertical structures, it can mutually supplement the shortcomings of both, and thus can ensure high localization accuracy, reliability, and availability.
Although the road marking information is stored in the ELM in line form, the type of road marking to which each line belongs is not expressed. Likewise, for vertical structures, it is not expressed whether each line belongs to a building or a traffic sign. In the case of ELM-based localization, the ELM is converted into an occupancy grid map and is used for correlation matching. For this reason, whether the lines included in the ELM are actual lines is not important, but it is important that certain road markings or vertical structures including lines are present in the area. As a result, as no lines can be detected from areas where nothing exists, it is not necessary to verify whether the lines were properly detected when generating an ELM map. Figure 1 describes the generation process of an ELM (refer to Section 2).
When localization is performed using an ELM generated through the process shown in Figure 1, the ELM is converted back into an occupancy grid map, and correlation matching with the occupancy grid map generated from the currently acquired LIDAR data is performed. This matching result is used as the measurement of a Kalman Filter (KF). Figure 2 describes the localization process using an ELM (refer to Section 3).
The 3D LIDAR sensor used in this paper was a Velodyne HDL-32E, which was installed on top of a vehicle, as shown in Figure 3. In addition, layers 1 through 16 were used to generate an occupancy grid map for vertical structures, and layers 17 through 32 were used to generate a road reflectivity map.
The proposed ELM has the following benefits:
• It includes all information for road markings and vertical structures.
• It has a very small data file size (approximately 134 KB/km).
• It can be generated through a map generation algorithm, and no verification procedure is required.
• It is compatible with actual produced line maps for lanes.
In addition, the proposed ELM-based localization method has the following benefits:
• It meets the localization accuracy requirements for autonomous driving.
• As it is used after being converted into an occupancy grid map, line detection and data association processes are not required.
• A Fast Fourier Transform (FFT) can be applied to the correlation matching of the binary occupancy grid map, and the correlation matching time is very short (78 ms on average).
Section 2 describes how to generate an ELM, and Section 3 explains the ELM-based localization method. Section 4 analyzes the ELM-based localization performance, and Section 5 concludes this paper.
How to Generate an ELM
As described in the paper of [17], when vehicle localization is performed through a road reflectivity map and 2D plane matching, it is not necessary to know what sign a certain road marking represents. In other words, it is not necessary that the map has all of the shape information. This means that road markings can be modeled in simple forms and applied to localization when the 2D plane-matching method is used.
All roads where vehicles travel basically have lanes drawn, and also include many additional road markings, such as stop lines, crosswalks, and arrows. Maps of roads which are currently being produced necessarily include information on lanes, which is typically in the form of lines. Such lane maps have already been applied to vehicle localization [8,9]. However, as most road markings other than lanes also have similar forms to lines, they can all just be expressed as a set of lines. Therefore, line maps with the same form as existing lane maps can be generated. They can be converted into 2D plane maps and applied to vehicle localization.
It is difficult to use road markings in areas with traffic congestion. In urban areas, many tall buildings are present around roads, and they can always be scanned regardless of traffic congestion. Therefore, it is necessary to use buildings to increase localization availability. Furthermore, the outer walls of urban buildings are mostly composed of planes and are perpendicular to the ground. For this reason, such outer walls are mostly expressed as lines when a 2D occupancy grid map is generated using 3D LIDAR. Such lines can be expressed in the same form as the lines extracted from road markings. As shown in Figure 1, lines are extracted from the reflectivity map for the road surface and the occupancy grid map for buildings, and the extracted lines are converted into node and link form. Finally, the position of each node is stored in the ELM. Next, the method for ELM generation is explained.
Vehicle Trajectory Optimization
An experiment was carried out in the Gangnam area of Seoul, South Korea. Figure 4 shows the vehicle trajectory and the street view at the four intersections.
As shown in Figure 4, the experimental environment is a dense urban area surrounded by tall buildings, and two laps were driven from the starting point to the finishing point. The traveling distance was approximately 4.2 km, and the maximum traveling speed and average speed were approximately 80 km/h and 32 km/h, respectively. The position of the vehicle was acquired by using the integrated system of the RTK GPS and Inertial Navigation System (INS) (NovAtel RTK/SPAN system). In an environment where there are many tall buildings, the position error of the RTK/INS is about 1-2 m. Therefore, the vehicle trajectory must be corrected to generate the precision map. The vehicle trajectory was optimized by using the GraphSLAM method. Figure 5 shows the graph optimization result of the vehicle trajectory.
As shown at the top left of Figure 5, the road reflectivity maps for the two laps do not match. On the right of Figure 5, the red points represent the corrected vehicle trajectory after the graph optimization. As shown at the bottom left of Figure 5, the road reflectivity map matches exactly.
Here, the incremental pose information outputted from the Iterative Closest Point (ICP) algorithm was used as an edge measurement of the graph. The theory and principles for the GraphSLAM method are well described in [22,23]. Thus, obtaining the optimized vehicle trajectory is possible by using the GraphSLAM. In this paper, this optimized vehicle trajectory was considered as the ground truth.
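As an illustration of this step (not the authors' implementation), the following minimal sketch optimizes a toy 2D pose graph whose edges play the role of ICP-derived relative-pose measurements; the pose count, edge values, and solver choice are assumptions made purely for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy pose graph: poses are (x, y, yaw); each edge stores the relative transform
# of pose j seen from pose i, as an ICP scan match would report it.
edges = [
    (0, 1, 1.0, 0.0, np.pi / 2),
    (1, 2, 1.1, 0.0, np.pi / 2),   # slightly drifted odometry edge
    (2, 3, 1.0, 0.0, np.pi / 2),
    (3, 0, 1.0, 0.0, np.pi / 2),   # loop-closure edge back to the start
]

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                        # anchor the first pose at the origin
    for i, j, dx, dy, dyaw in edges:
        xi, yi, ti = poses[i]
        xj, yj, tj = poses[j]
        c, s = np.cos(ti), np.sin(ti)
        # predicted relative transform of pose j in the frame of pose i
        pred = np.array([c * (xj - xi) + s * (yj - yi),
                         -s * (xj - xi) + c * (yj - yi),
                         wrap(tj - ti)])
        err = pred - np.array([dx, dy, dyaw])
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Rough initial guess: dead-reckon along the odometry edges (ignore the loop closure).
init = [np.zeros(3)]
for i, j, dx, dy, dyaw in edges[:-1]:
    x, y, t = init[-1]
    init.append(np.array([x + np.cos(t) * dx - np.sin(t) * dy,
                          y + np.sin(t) * dx + np.cos(t) * dy,
                          t + dyaw]))
x0 = np.concatenate(init)

sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 3))                 # optimized (x, y, yaw) for the four poses
```

The loop-closure edge pulls the drifted chain back toward a consistent trajectory, which is the same role the second-lap reflectivity-map overlap plays in Figure 5.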
Line Extraction from Road Reflectivity Map
First, a road reflectivity map with a grid size of 15 cm was generated using the vehicle position optimized in Section 2.1. Figure 6 shows the generated road reflectivity map.
The reflectivity map of Figure 6 was generated using layers 17 through 32 of the 3D LIDAR. As seen in the figure, the reflectivity map includes the reflected parts of sidewalks or nearby vehicles. These parts must be eliminated as they may degrade the localization performance. Plane extraction was performed to extract only the LIDAR points reflected from the road. Figure 7 shows the road reflectivity map generated after plane extraction.
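The paper does not spell out the plane extraction algorithm; the sketch below shows one common way such a step can be done, a RANSAC plane fit over the point cloud, with the iteration count, distance threshold, and function names chosen purely for illustration.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.15, rng=None):
    """Fit a plane a*x + b*y + c*z + d = 0 to an (N, 3) point cloud by RANSAC
    and return the boolean inlier mask (points close to the road surface)."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Example with a synthetic cloud: a flat road near z = 0 plus scattered clutter.
road = np.c_[np.random.uniform(-20, 20, (5000, 2)), np.random.normal(0, 0.02, 5000)]
clutter = np.random.uniform([-20, -20, 0.3], [20, 20, 5.0], (1000, 3))
cloud = np.vstack([road, clutter])
inliers = ransac_ground_plane(cloud)
print(f"{inliers.sum()} / {len(cloud)} points kept as road surface")
```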
In Figure 7, the sidewalks have not been completely eliminated, but most of the unnecessary parts have been removed. Now, lines must be extracted from the reflectivity map, as shown in Figure 7. In the reflectivity map, however, areas except for road markings are filled with certain reflectivity values. As lines can be extracted from these areas, it is necessary to eliminate the values of such areas. In general, the reflectivity value differs greatly between road markings and other areas. Therefore, the values of such areas can be eliminated in a simple manner through binarization. In this paper, binarization was performed using the Otsu thresholding method [24]. Figure 8 shows the results of applying binarization to the reflectivity map of Figure 7.
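For reference, Otsu's method picks the cut that maximizes the between-class variance of the reflectivity histogram; a compact NumPy version is sketched below (the grid values are a random placeholder, and the helper name is ours, not the paper's).

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                          # class-0 weight for each cut
    w1 = hist.sum() - w0                          # class-1 weight
    cum = np.cumsum(hist * centers)
    m0 = cum / np.maximum(w0, 1e-12)              # class-0 mean
    m1 = (cum[-1] - cum) / np.maximum(w1, 1e-12)  # class-1 mean
    between = w0 * w1 * (m0 - m1) ** 2            # between-class variance
    return centers[np.argmax(between[:-1])]       # skip the last (empty class-1) cut

# Binarize a reflectivity grid map: keep only cells brighter than the Otsu cut,
# evaluating the threshold over observed (nonzero) cells only.
reflectivity = np.random.rand(500, 500)           # placeholder for the real grid map
observed = reflectivity[reflectivity > 0]
t = otsu_threshold(observed)
binary_map = (reflectivity > t).astype(np.uint8)
```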
As shown in Figure 8, some parts of the sidewalks and curbs have not been eliminated, but most of the road markings remain. Now, lines are extracted from the binary map shown in Figure 8. A Hough transform was used as the line extraction algorithm. Figure 9 shows the line extraction results for the road markings.
In Figure 9, it can be seen that lines have been extracted from most of the road markings. As can be seen from the right-hand figure, multiple lines have been extracted from thick road markings. Therefore, the actual information on the shape can be retained as much as possible. However, as the parts marked with blue circles show, lines have been extracted from parts that are not road markings. It is difficult to eliminate these parts because they have the same reflectivity characteristics as road markings even though they are not road markings. However, the incorrectly detected lines do not significantly affect the correlation matching results because many clear road markings are present nearby.
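A minimal sketch of this extraction step using OpenCV's probabilistic Hough transform is shown below; the vote threshold, minimum segment length, and gap parameters are illustrative guesses rather than the values used in the paper.

```python
import numpy as np
import cv2

# binary_map: uint8 grid where road-marking cells are 1 (e.g., from the Otsu step above).
binary_map = np.zeros((500, 500), dtype=np.uint8)
cv2.line(binary_map, (50, 400), (450, 400), 1, 2)   # synthetic "stop line"
cv2.line(binary_map, (100, 50), (120, 450), 1, 2)   # synthetic "lane marking"

segments = cv2.HoughLinesP(
    binary_map * 255,       # HoughLinesP expects an 8-bit single-channel image
    rho=1,                  # 1-cell distance resolution (0.15 m per cell here)
    theta=np.pi / 180,      # 1-degree angular resolution
    threshold=40,           # minimum accumulator votes
    minLineLength=20,       # discard very short segments
    maxLineGap=5,           # bridge small gaps in dashed markings
)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        print(f"segment ({x1},{y1}) -> ({x2},{y2})")   # endpoints become ELM nodes
```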
Line Extraction from Occupancy Grid Map
This paper uses the fact that the outer walls of buildings are expressed as lines on the 2D horizontal plane. To extract highly reliable line information, a 2D probabilistic occupancy grid map for vertical structures was generated, and lines were extracted from the map. Figure 10 shows the generated 2D probabilistic occupancy grid map. In Figure 9, it can be seen that lines have been extracted from most of the road markings. As can be seen from the right-hand figure, multiple lines have been extracted from thick road markings. Therefore, the actual information on the shape can be retained as much as possible. However, as the parts marked with blue circles show, lines have been extracted from the parts that are not road markings. It is difficult to eliminate these parts because they have the same reflectivity characteristics as road markings even though they are not road markings. However, the incorrectly detected lines do not significantly affect the correlation matching results because many clear road markings are present nearby.
Line Extraction from Occupancy Grid Map
This paper uses the fact that the outer walls of buildings are expressed as lines on the 2D horizontal plane. To extract highly reliable line information, a 2D probabilistic occupancy grid map for vertical structures was generated, and lines were extracted from the map. Figure 10 shows the generated 2D probabilistic occupancy grid map.
As can be seen from Figure 10, the outer walls of buildings appear as lines on the 2D plane map. However, the occupancy probability for the outer walls of the buildings is low owing to the influence of the street trees or building forms, and many street trees around the road remain on the map. As only lines are required on the map to be used for localization, it is necessary to eliminate unnecessary parts as much as possible. For this, line components are extracted from the LIDAR point cloud, and a probabilistic occupancy grid map is generated using only the extracted points. In this case, street trees can be effectively removed from the probabilistic occupancy grid map, and the occupancy probability for the building outer walls can be increased.
There are several methods to extract lines from the LIDAR point cloud [2,[25][26][27]. Among the methods, the Iterative-End-Point-Fit (IEPF) algorithm exhibits the best performance in terms of accuracy and calculation time [28]. Figure 11 shows the results of line extraction using the IEPF algorithm.
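As a sketch of the IEPF idea (split an ordered scan at the point farthest from the chord between its endpoints, and recurse until every segment is line-like within a tolerance), the following toy implementation may help; the distance threshold and the synthetic scan are assumptions, not the paper's settings.

```python
import numpy as np

def iepf_split(points, dist_thresh=0.1):
    """Recursively split an ordered 2D scan into near-linear segments
    (Iterative-End-Point-Fit style). Returns a list of (start, end) index pairs."""
    def point_line_dist(pts, a, b):
        ab = b - a
        # perpendicular distance of each point to the line through a and b
        cross = ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0])
        return np.abs(cross) / (np.linalg.norm(ab) + 1e-12)

    def split(lo, hi):
        if hi - lo < 2:
            return [(lo, hi)]
        d = point_line_dist(points[lo + 1:hi], points[lo], points[hi])
        k = int(np.argmax(d)) + lo + 1
        if d[k - lo - 1] < dist_thresh:
            return [(lo, hi)]                 # segment is already line-like
        return split(lo, k) + split(k, hi)    # split at the farthest point

    return split(0, len(points) - 1)

# Toy ordered scan: an L-shaped wall corner.
scan = np.vstack([np.c_[np.linspace(0, 5, 50), np.zeros(50)],
                  np.c_[np.full(50, 5.0), np.linspace(0, 3, 50)]])
print(iepf_split(scan))   # expect two segments meeting near the corner index
```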
In Figure 11, most of these line segments are a set of scan points from the leaves of the roadside trees. Therefore, the laser data reflected by the leaves of roadside trees must be removed. Figure 12 shows the LIDAR points reflected by the leaves and the outer wall of the building.
As shown in Figure 12, the features of the LIDAR points that are reflected by the two types of objects are clearly distinguished. For roadside trees, the variance of the distance errors between the extracted line and each point is very large. On the other hand, for the outer wall of a building, the variance of the distance errors is very small. Figure 13 shows the pseudocode for outlier removal.
By using the pseudocode, outliers such as roadside trees can be removed. Figure 14 shows the line extraction result after the outlier removal.
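The pseudocode of Figure 13 is not reproduced in this excerpt; the sketch below implements the criterion described in the text (keep a segment only when the variance of the point-to-line distances is small), with a threshold value chosen only for illustration.

```python
import numpy as np

def keep_wall_like_segments(points, segments, var_thresh=0.01):
    """Filter IEPF segments: building walls give a small variance of point-to-line
    distances, foliage gives a large one. The threshold is an illustrative guess."""
    kept = []
    for lo, hi in segments:
        a, b = points[lo], points[hi]
        ab = b - a
        pts = points[lo:hi + 1]
        # perpendicular distance of every segment point to the chord a-b
        dist = np.abs(ab[0] * (pts[:, 1] - a[1]) - ab[1] * (pts[:, 0] - a[0]))
        dist /= np.linalg.norm(ab) + 1e-12
        if np.var(dist) < var_thresh:
            kept.append((lo, hi))
    return kept

# Toy usage (segments as produced by an IEPF-style split): the noisy "foliage"
# segment is rejected, the straight "wall" segment is kept.
wall = np.c_[np.linspace(0, 5, 60), np.random.normal(0, 0.01, 60)]
foliage = np.c_[np.linspace(6, 8, 60), np.random.normal(0, 0.5, 60)]
points = np.vstack([wall, foliage])
segments = [(0, 59), (60, 119)]
print(keep_wall_like_segments(points, segments))   # expect only the wall segment
```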
In Figure 14, the green points represent the line segments that are finally extracted. The probabilistic occupancy grid map is generated again using only the LIDAR points corresponding to the extracted lines. Figure 15 shows the probabilistic occupancy grid map generated by applying the IEPF algorithm.
As can be seen from Figure 15, many street trees have been eliminated, but some of them remain. Furthermore, the vertical structures reflecting only some layers have low probability values. However, it can be seen that the probability values for the building outer walls are higher in Figure 15 than in Figure 10. This is because only lines with relatively high reliability have been mapped to the map through the IEPF algorithm. Binarization is performed to remove grids with low probabilities. A value between 0 and 1 is stored in each grid. This paper assumes that grids with values equal to or higher than 0.5 were occupied. Figure 16 shows the final generated occupancy grid map after binarization.
Figure 16 shows that most street trees have been removed and only the outer walls of buildings or traffic signs remain. Next, lines were extracted from this occupancy grid map. As in Section 2.2, a Hough transform is used as the line extraction algorithm. Figure 17 shows the final line extraction result.
In Figure 17, lines have been extracted from parts other than buildings, in the areas marked with a blue circle. However, as with the case of road markings, incorrectly detected lines do not significantly affect correlation matching because many other vertical structures are present nearby. In this way, lines for vertical structures can be extracted.
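The last two steps of this pipeline, binarization of the probabilistic grid at 0.5 and Hough-based line extraction, can be sketched roughly as follows with NumPy and OpenCV; the Hough parameters are placeholders chosen for illustration, not values from the paper.

```python
import numpy as np
import cv2

def extract_wall_lines(prob_grid, occ_threshold=0.5):
    """Binarize a probabilistic occupancy grid and extract lines with a Hough transform.

    prob_grid: 2D float array of occupancy probabilities in [0, 1].
    Returns an array of line segments (x1, y1, x2, y2) in grid coordinates.
    """
    binary = (prob_grid >= occ_threshold).astype(np.uint8) * 255
    lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=30,
                            minLineLength=20, maxLineGap=5)
    return np.empty((0, 4), int) if lines is None else lines[:, 0, :]
```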
Generation of ELM
The lines extracted in Sections 2.2 and 2.3 were converted into nodes and links, and stored in a map. Table 1 presents an example of the ELM. As shown in Table 1, the position information of each line was stored in a text file. The data size of the created ELM was approximately 134 KB/km, which is significantly smaller than that of other maps used for 3D LIDAR-based localization. Figure 18 shows the ELM information displayed on a synthetic map of the 2D occupancy grid and road reflectivity maps. In Figure 18, the yellow star and the red star represent the positions of node 1 and node 2, respectively, and the green line connects node 1 and node 2. In this study, localization was performed using an ELM created in this way.
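Table 1 is not reproduced here, but the idea of storing each extracted line as a pair of nodes plus a link can be sketched as follows; the record layout (node id, easting, northing, attribute label) is a hypothetical example, not the authors' exact file format. Storing only two endpoints and an attribute per line is consistent with the very small map size reported (approximately 134 KB/km).

```python
def write_elm(lines, path):
    """Write line segments to a simple text-based ELM file.

    lines: iterable of (x1, y1, x2, y2, attribute) tuples in map coordinates (m),
           where attribute is e.g. 'road_marking' or 'vertical_structure'.
    """
    with open(path, "w") as f:
        node_id = 0
        for x1, y1, x2, y2, attr in lines:
            # two nodes per line, followed by one link referencing them
            f.write(f"NODE {node_id} {x1:.3f} {y1:.3f}\n")
            f.write(f"NODE {node_id + 1} {x2:.3f} {y2:.3f}\n")
            f.write(f"LINK {node_id} {node_id + 1} {attr}\n")
            node_id += 2
```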
Correlation-Based Matching Using ELM
As shown in Figure 2, localization was performed through correlation matching between the occupancy grid map converted from the ELM and the occupancy grid map generated using the current LIDAR data. First, nodes present within a certain distance from the current vehicle position were selected from the ELM; the current position information is acquired from a commercial GPS/Dead Reckoning (DR) sensor. Figure 19 shows the node and link information of the ELM present in the area of interest on the 2D plane. Furthermore, the lines in Figure 19 can be mapped to the grid map. Figure 20 shows the result of converting the line map into an occupancy grid map.
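One way to picture the conversion shown in Figure 20 is to rasterize every ELM link into a 15 cm grid. The sketch below uses OpenCV's line drawing for this purpose; the helper names and frame conventions are assumptions made for illustration.

```python
import numpy as np
import cv2

def elm_to_grid(links, origin, size_m=160.0, cell_m=0.15):
    """Rasterize ELM line segments into a binary occupancy grid.

    links: iterable of (x1, y1, x2, y2) endpoints in map coordinates (m).
    origin: (x0, y0) map coordinates of the grid's lower-left corner.
    """
    n = int(round(size_m / cell_m))               # number of cells per side
    grid = np.zeros((n, n), dtype=np.uint8)
    for x1, y1, x2, y2 in links:
        c1 = (int((x1 - origin[0]) / cell_m), int((y1 - origin[1]) / cell_m))
        c2 = (int((x2 - origin[0]) / cell_m), int((y2 - origin[1]) / cell_m))
        cv2.line(grid, c1, c2, color=1, thickness=1)  # mark traversed cells as occupied
    return grid
```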
Next, an occupancy grid map with the same size as the map of Figure 20 was generated using the current LIDAR data. First, a road reflectivity map was generated and binarized in the same way as in the ELM generation. Then, a probabilistic occupancy grid map for vertical structures was generated and converted into an occupancy grid map through binarization. As the two generated maps consist only of 0s and 1s, they were integrated to complete a single binary occupancy grid map. Figure 21 shows the generated binary occupancy grid map.
Subsequently, correlation matching was performed for the two occupancy grid maps of Figures 20 and 21. In this paper, an area of 160 m × 160 m was set as the area of interest, and the grid size was 15 cm, so the maps used for matching became 1081 × 1081 matrices. It was therefore difficult to use a general serial-search correlation matching method. However, because the two occupancy grid maps contain only 0s and 1s, the matching of the two maps can be calculated simply by multiplying the two matrices, which makes it possible to apply a Fast Fourier Transform (FFT). When correlation matching is performed using the FFT, the calculation time is approximately 78 ms (based on MATLAB). The times required for coordinate transformation of the 3D LIDAR point cloud and for generation of the binary occupancy grid map are approximately 94 ms and 142 ms (based on MATLAB), respectively. Therefore, the localization execution time is less than 350 ms. In this paper, localization was performed by post-processing to verify the performance using all LIDAR data (10 Hz). Figure 22 shows the localization execution process.
As shown in Figure 22, experiments were performed in a downtown area where many other vehicles were present. In the bottom center of the figure, the light green dots represent LIDAR scan data to be matched with the map. After these dots and the ELM in the area of interest were converted into binary occupancy grid maps, the results of the correlation matching (FFT) between the two maps were used for the measurements of the KF. In the bottom right of the figure, the red rectangle represents the currently estimated vehicle position, while the black rectangle represents the actual position (ground truth). The blue and light blue rectangles represent the positions of GPS/DR and RTK/INS, respectively. The fact that the red and black rectangles almost overlap indicates that the position of the vehicle was estimated accurately.
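The FFT-based matching described above is a 2D cross-correlation of the two binary grids, and the offset of the correlation peak provides the position correction (Δx, Δy) that feeds the Kalman filter below. A minimal sketch with SciPy is given here (the paper's timings are MATLAB-based); the grid size and sign conventions are assumptions for illustration.

```python
import numpy as np
from scipy.signal import correlate

def match_grids(map_grid, scan_grid, cell_m=0.15):
    """FFT-based cross-correlation of two equally sized binary occupancy grids.

    Returns the peak offset (dx, dy) converted to metres; the sign convention
    (which grid is considered shifted) depends on how the grids are built.
    """
    corr = correlate(map_grid.astype(float), scan_grid.astype(float),
                     mode="same", method="fft")
    peak_row, peak_col = np.unravel_index(np.argmax(corr), corr.shape)
    zero_row, zero_col = np.array(corr.shape) // 2   # zero-lag position in 'same' mode
    return (peak_col - zero_col) * cell_m, (peak_row - zero_row) * cell_m
```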
Kalman Filter Configuration
In this paper, the position error of the GPS/DR sensor was estimated using the map-matching results. For the GPS/DR sensor, a CruizCore DS6200 from Microinfinity (Suwon, South Korea) was used, and its azimuth accuracy was within 5° in open space. Figures 23 and 24 show the position error and the attitude error of the GPS/DR sensor, respectively. The position error was calculated based on the ground truth mentioned in Section 2.1, and the attitude error was calculated based on the roll, pitch, and yaw output values of the NovAtel RTK/SPAN System.
Figure 23 shows that the position error of GPS/DR is up to 10 m. In addition, sections where the position error changes rapidly are mostly sections where the vehicle rotates; such rapid changes in the error may lead to large position errors during vehicle position estimation. As the position error of GPS/DR is very large in a downtown area, error correction through map matching is essential. In Figure 24, the roll and pitch errors were mostly within 2°, and the yaw error was mostly within 0.5°, indicating comparatively accurate results. Therefore, for the coordinate transformation of the 3D LIDAR data, the attitude information of GPS/DR was used as it was, and the attitude of the vehicle was not separately estimated.
The state variable of the filter can be represented as Equation (1):

x = [δx, δy]^T,    (1)

where δx and δy refer to the 2D horizontal position errors of the vehicle in the East-North-Up (ENU) coordinate system. The time update is conducted as presented in Equation (2):

x̂_k|k−1 = x̂_{k−1|k−1},    (2)

that is, the error of GPS/DR is assumed not to change over a short period of time. The measurement equation is represented as Equation (3):

z_k = [Δx, Δy]^T = x_k + v_k,    (3)

where Δx and Δy refer to the result values of the correlation-based matching and v_k is the measurement noise. The measurement update is conducted as presented in Equation (4):

x̂_k|k = x̂_k|k−1 + K (z_k − x̂_k|k−1),    (4)

where K refers to the Kalman gain.
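A minimal sketch of this two-state filter is given below; the process and measurement noise covariances (Q, R) and the initial covariance are placeholders, not values from the paper.

```python
import numpy as np

class PositionErrorKF:
    """Kalman filter for the 2D GPS/DR position error [dx_err, dy_err]."""

    def __init__(self, q=0.01, r=0.04):
        self.x = np.zeros(2)          # estimated position error (m)
        self.P = np.eye(2) * 10.0     # error covariance
        self.Q = np.eye(2) * q        # process noise (error assumed ~constant)
        self.R = np.eye(2) * r        # measurement noise of the map matching

    def predict(self):
        # Equation (2): the GPS/DR error is assumed constant over a short period.
        self.P = self.P + self.Q

    def update(self, z):
        # z = np.array([dx, dy]) from the correlation matching, Equations (3)-(4).
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(2) - K) @ self.P
```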
Experimental Results
In this paper, localization was performed for three cases: a case in which only road markings are used, a case in which only vertical structures are used, and a case in which both road markings and vertical structures are used (ELM). Figures 25 and 26 show the localization results of the case in which only road markings were used. Figure 25 shows the lateral position error. The RMS lateral position error is 0.143 m. Figure 26 shows the longitudinal position error. The RMS longitudinal position error is 0.389 m. When only road markings were used, the longitudinal position error was much larger than the lateral position error. As a road must have lanes, the lateral accuracy is high. To correct the longitudinal position error, certain road markings, such as crosswalks or stop lines, are necessary. On a number of roads, however, road markings other than lanes do not exist. Therefore, the longitudinal position error is somewhat higher. Figures 27 and 28 show the localization results when only vertical structures were used. Figure 27 shows the lateral position error. The RMS lateral position error is 0.163 m. Figure 28 shows the longitudinal position error. The RMS longitudinal position error is 0.233 m. When only vertical structures were used, it was found that the longitudinal position error significantly decreased. This is because building walls and traffic signs perpendicular to the vehicle traveling direction can provide measurements for the longitudinal direction.
Likewise, the lateral position accuracy is high because there are buildings on both sides of the road. Vertical structures provide highly reliable measurements in a downtown area. However, as they are typically tens of meters away from the vehicle, the localization accuracy is significantly affected by the roll and pitch errors. As road markings are available closer to the vehicle, the localization accuracy in an area with a large number of road markings will be higher when the road markings are used than when vertical structures are used. Actually, as shown in Figure 25, although large position errors occasionally occur, the RMS position error is smaller than that when only vertical structures are used.
It is difficult to use road markings for localization in areas with traffic congestion. Figure 29 shows the results of localization using road markings in an area with traffic congestion.
As can be seen from the camera image of Figure 29, most road markings are hidden by surrounding vehicles. In the bottom center of the figure, few road markings (light green dots) that could be matched to the map were detected. As shown at the top of the figure, lateral matching is performed well owing to some lanes and curbs, but longitudinal matching is not performed well. As a result, in the bottom-right hand figure, the estimated vehicle position (red) has a large error in the longitudinal direction compared to the ground truth (black). On the other hand, in the same area, vertical structures can be scanned without the influence of surrounding vehicles. Figure 30 shows the results of localization using vertical structures in an area with traffic congestion. As can be seen from Figure 30, longitudinal matching is performed well even in an area with traffic congestion. As a result, the bottom-right hand figure shows that the red rectangle overlaps the black one.
In contrast to the area with traffic congestion, in an area with a small number of nearby buildings, the performance of localization using vertical structures can be degraded. Figure 31 shows the results of localization using vertical structures in an area with a small number of nearby buildings.
As can be seen from the camera image of Figure 31, there are not many buildings on the left side of the road. In this area, multiple peaks appear in the correlation shape for the lateral direction. Furthermore, the side peak is larger than the main peak. As a result, as shown in the bottom-right hand figure, the red rectangle has a lateral position error compared to the black one. On the other hand, if road markings are used in the same area, the lateral position error can be reduced. Figure 32 shows the results of localization using road markings in an area with a small number of nearby buildings. As shown in Figure 32, the main peak can be clearly detected in the correlation shape for the lateral direction. As a result, it can be seen that the estimated vehicle position has almost no lateral error.
As seen, road markings and vertical structures can supplement each other. Therefore, the use of an ELM, which includes both attributes, can exhibit further improved localization accuracy. Figures 33 and 34 show the results of localization using an ELM (road markings + vertical structures). Figure 33 shows the lateral position error; the RMS lateral position error is 0.136 m. Figure 34 shows the longitudinal position error; the RMS longitudinal position error is 0.223 m. Figures 33 and 34 show that both the lateral and longitudinal position errors were reduced. In particular, the number of sections with large position errors was significantly reduced. Table 2 lists the localization performances of the three maps (road markings, vertical structures, and ELM). Table 2 shows that the use of an ELM exhibited the smallest RMS position errors in both the lateral and longitudinal directions. At a 95% confidence level, road markings caused the smallest lateral position error, and the ELM led to the smallest longitudinal position error. At a 99% confidence level, the ELM exhibited the smallest lateral and longitudinal position errors. As seen, ELM-based localization shows higher position accuracy than the use of either road markings or vertical structures alone. This is because the amount of map information available at the time of localization is increased owing to the mutual complement of road markings and vertical structures.
The localization accuracy improves as the amount of map information increases. When the ELM is converted into an occupancy grid map, this map is filled with only 0s and 1s. In this case, if the values 0 and 1 are regarded as the elevation of the terrain, the concept of terrain roughness can be used to quantify the amount of map information [29–32]. The terrain roughness can be expressed using the two indices sigma-T and sigma-Z, as follows:

σ_T = std(H_1, …, H_N),  σ_Z = std(H_2 − H_1, …, H_N − H_{N−1}).    (5)

In Equation (5), H_i is the elevation of the i-th grid, and N is the number of grids in the map. Sigma-T is the standard deviation of the elevation, and sigma-Z is the standard deviation of the elevation difference between neighboring grids. In general, the localization accuracy is higher as the values of sigma-T and sigma-Z become higher.
The ELM is expressed as a 2D occupancy grid map; therefore, sigma-T and sigma-Z must be calculated on the 2D plane, separately for the lateral and longitudinal directions. Here, N is the map size of the area of interest centered on the current vehicle position, so that the map is an N × N matrix. For the localization performance analysis, this map is rotated using the current vehicle azimuth information so that the longitudinal direction faces north. Accordingly, H_ij denotes the elevation of the grid located at the i-th position in the longitudinal direction and the j-th position in the lateral direction on the rotated map. Figures 35 and 36 show sigma-T for the lateral and longitudinal directions, respectively, and Figures 37 and 38 show sigma-Z for the lateral and longitudinal directions, respectively. Sigma-T is related to the magnitude of the correlation peak, while sigma-Z is related to the sharpness of the main peak. For this reason, both sigma-T and sigma-Z must be large to find the correlation peak more accurately. The above figures show that the roughness of the ELM is the highest. Furthermore, the lateral roughness generally has higher values than the longitudinal roughness. As a result, the lateral accuracy is higher than the longitudinal accuracy, as shown in Table 2. In addition, when the RMS position errors for road markings, vertical structures, and the ELM were compared with the roughness values, it was found that as the roughness increased, the RMS position error decreased. In Figures 34, 36, and 38, the points at which the longitudinal position error was 0.5 m or more also show low longitudinal roughness values.
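A sketch of how such directional roughness values could be computed from the rotated N × N binary grid (treating each cell value as an elevation) is given below; the exact directional averaging used in the paper may differ, so this is illustrative only.

```python
import numpy as np

def roughness(grid):
    """Return (lateral, longitudinal) roughness indices for a binary grid.

    grid: N x N occupancy map with rows = longitudinal and cols = lateral positions.
    sigma_T: std of cell values along a direction (averaged over the other axis);
    sigma_Z: std of differences between neighbouring cells along that direction.
    """
    h = grid.astype(float)
    lateral = {
        "sigma_T": np.std(h, axis=1).mean(),                   # variation across columns
        "sigma_Z": np.std(np.diff(h, axis=1), axis=1).mean(),  # lateral neighbour differences
    }
    longitudinal = {
        "sigma_T": np.std(h, axis=0).mean(),
        "sigma_Z": np.std(np.diff(h, axis=0), axis=0).mean(),
    }
    return lateral, longitudinal
```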
As seen, the ELM has the highest roughness as well as the best localization performance. Although the ELM has a very small data file size (approximately 134 KB/km), it contains sufficient information for accurate localization. As it contains both road marking and vertical structure information, localization can be performed continuously even in areas with traffic congestion or in areas without surrounding buildings. Furthermore, the ELM-based localization method sufficiently meets the localization accuracy requirements for autonomous driving.
Conclusions
This paper proposed an ELM-based precise vehicle localization method using 3D LIDAR. The proposed ELM has a very small data file size (approximately 134 KB/km). Furthermore, as it contains both road marking and vertical structure information, the ELM-based localization method exhibits better performance in terms of accuracy, reliability, and availability than methods that use either road markings or vertical structures alone. In addition, the ELM can be generated automatically by the proposed generation algorithm, without requiring verification of the detected lines. Finally, because the ELM has a form very similar to existing line maps of lanes, it is easily compatible with such maps.
As a result of ELM-based localization, the RMS position errors in the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively. These results sufficiently met the localization accuracy requirements for autonomous driving. In addition, line detection and data association processes are not required because the correlation matching method was used. Furthermore, correlation matching can be performed using an FFT because it involves only simple multiplications of 0s and 1s; the matching time using the FFT is approximately 78 ms.
As seen, the ELM-based localization method can ensure high position accuracy using a map with a small data size. In the future, research on map generation and localization in more areas is required. | 15,915.2 | 2018-09-20T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
Manipulating Interword and Interletter Spacing in Cursive Script: An Eye Movements Investigation of Reading Persian
Persian is an Indo-Iranian language that features a derivation of Arabic cursive script, where most letters within words are connectable to adjacent letters with ligatures. Two experiments are reported where the properties of Persian script were utilized to investigate the effects of reducing interword spacing and increasing the interletter distance (ligature) within a word. Experiment 1 revealed that decreasing interword spacing while extending interletter ligature by the same amount was detrimental to reading speed. Experiment 2 largely replicated these findings. The experiments show that providing the readers with inaccurate word boundary information is detrimental to reading rate. This was achieved by reducing the interword space that follows letters that do not connect to the next letter in Experiment 1, and replacing the interword space with ligature that connected the words in Experiment 2. In both experiments, readers were able to comprehend the text read, despite the considerable costs to reading rates in the experimental conditions.
Introduction
Persian is an Indo-Iranian language, a subdivision of Indo-European languages (Windfuhr, 1987). It is estimated to be spoken by about 110 million people worldwide (https://en.wikipedia.org/wiki/Persian_language). The modern Persian alphabet, a derivation of the Arabic script, features thirty-two letters. Similar to Arabic script, Persian script is written from right to left. It is also a cursive script, with the majority of the letters connected with ligatures within words. As is the case with the majority of Arabic letters, most Persian letters change form depending on where they occur in a word: beginning (connected to subsequent letters), middle (connected to letters on both sides), or end (connected only to previous letters); for example, the letter س /s/ appears as ﺳـ at the beginning, ـﺴـ in the middle, or ـﺲ at the end of a word. When letters are connected, this connection is established by adding a flat ligature that is typically extendable (e.g., ordinary-size ligatures ﺳـ, ـﺴـ, or ـﺲ, compared with ﺳـــ, ـــﺴـــ, or ـــﺲ when each ligature is extended by a factor of three). An important implication of this is that the distance between letters within a word in this script is mostly demarcated by a horizontal black line, not white space (e.g., the trigram ﺑﺴﯿـ /bsi/, whose letters move apart when the connecting ligatures are expanded). Only seven Persian letters (ا, د, ذ, ر, ز, ژ, and و) do not connect to the following letter. Thus, whereas there is always white space to demarcate word boundaries in Persian, that is, interword spacing, the distance between the letters in a word, or interletter spacing, may be composed of a combination of ligatures, if the letters are connectable, and white spaces (e.g., the word ﺑﺴﯿﺎر features white space between the last two letters, ـﺎر, but the preceding letters are 'separated' by ligature). Thus, the allographic nature of Persian letters and cursive script, similar to Arabic, makes it an interesting script to use in studying eye movement control during reading. In the reported experiments, the effects of manipulating word spacing and the distance between letters were investigated in reading Persian sentences.
Interword spacing, or the space between words in text, has been shown to play an important role in facilitating reading. This space allows readers to segment the text and perform word identification. Previous findings showed that removing or filling this space resulted in delaying word identification in sentence reading and thus slowing reading rate considerably (an estimated 30-50% decrement, e.g., Drieghe, Fitzsimmons, & Liversedge, 2017; Morris, Rayner, & Pollatsek, 1990; Perea & Acha, 2009; Rayner, Fischer, & Pollatsek, 1998; Rayner, Yang, Schuett, & Slattery, 2013; Sheridan, Rayner, & Reingold, 2013; Sheridan, Reichle, & Reingold, 2016; Veldre, Drieghe, & Andrews, 2017; Yang & McConkie, 2001). The interword spacing also allows readers to benefit from the important information conveyed by each word's first and last letters, which play an important role in word identification (Davis, 2010; Gomez, Ratcliff, & Perea, 2008; Jordan, 1990, 1995). It is thus not surprising that moderate increases in interword spacing were found to facilitate reading in numerous studies (Drieghe, Brysbaert, & Desmet, 2005; Inhoff, Radach, & Heller, 2000; Paterson & Jordan, 2010). Increasing interword spacing was suggested to aid in text segmentation and word identification, and this improves reading performance given that words are the main units of linguistic processing during reading. Furthermore, when considering the readers' eye movements during text reading, the presence of space between words provides the necessary spatial frequency information needed in saccade targeting so that fixations may land on the optimal spot for uptake of visual information (i.e., the preferred viewing location, PVL, Rayner, 1979). Removing these spaces results in significant changes to the saccadic targeting system, with readers' fixations landing in suboptimal locations, closer to word beginning, as well as shortening saccade amplitude (e.g., Paterson & Jordan, 2010; Perea & Acha, 2009; Rayner et al., 1998; Sheridan et al., 2013; Yang & McConkie, 2001).
Beyond the superficial visual processing of text, removing the space between words was found to disrupt the core linguistic processes of word identification. Several studies showed evidence of this by including in the sentences target words of high and low frequency. Word frequency effects are typically considered an indicator of the time course of lexical processing, with higher frequency words being identified earlier (i.e., faster) than lower frequency words (Rayner, 1998;Reingold, Reichle, Glaholt, & Sheridan, 2012). Indeed, several investigations showed that in the absence of interword spacing, frequency effects are amplified (e.g., Paterson & Jordan, 2010;Perea & Acha, 2009;Rayner et al., 1998;Sheridan et al., 2013;Sheridan et al., 2016). Furthermore, and more indicative of the disruption to word identification when interword spacing was removed, analyses of distributions of fixation times showed that the onset of word frequency effects was delayed, relative to when the interword spaces were preserved (Sheridan et al., 2013;Sheridan et al., 2016). There are languages that do not feature interword spacing (e.g., Chinese and Thai), however, segmentation and word boundary identification is of equal importance in these languages (e.g., Bai, Yan, Liversedge, Zang, & Rayner, 2008;Hsu & Huang, 2000a;2000b;Li, Rayner, & Cave, 2009;Winskel, Radach, & Luksaneeyanawin, 2009).
Interletter spacing, or the space between the letters in a word also plays a role in word identification, albeit a role that requires further clarification. Reducing this space and making letters appear closer to each other increases visual crowding, or the phenomenon that a middle letter would be slower to identify if flanked by two close outer letters (e.g., Bouma, 1970;1973;Chung, Levi, & Legge, 2001). Increasing interletter spacing and reducing crowding results in increased perception of letter size (Skottun & Freeman, 1983). Subtle increases in this space were reported to facilitate lexical decision (Perea, Moret-Tatay, & Gomez, 2011;Perea & Gomez, 2012). In sentence reading, subtle increases in letter spacing (+0.5 and +1.0 pixel conditions) was associated with reduction in average fixation duration, but an increase in the total number of fixations, relative to normal, unaltered, letter spacing, with the latter condition resulting in the shortest total sentence reading time (Slattery & Rayner, 2013, Experiment 1). On the other hand, Slattery and Rayner found that decreasing letter spacing by 0.5 pixel resulted in higher average fixation durations, increased number of fixations, and longer total sentence reading time relative to the unaltered interletter spacing condition. Interestingly, Slattery and Rayner found that manipulating interletter spacing had no significant effect on the rate of target word skipping (the word is not fixated at all during first pass reading). Similarly, this letter spacing manipulation had no significant effect on the location of the initial fixation this target word received, with these first fixations always landing at the optimal position between the word beginning and center. Slattery and Rayner concluded that the saccade targeting system rapidly adjusts to the spacing manipulation and continues to optimally serve the process of reading.
Other investigations revealed that any benefits of increasing interletter spacing asymptote, and even reverse, after a certain point in word identification tasks (e.g., Chung, 2002; McLeish, 2007; Paterson & Jordan, 2010; Pelli et al., 2007; Risko, Lanthier, & Besner, 2011; Slattery, Yates, & Angele, 2016), with sizable disruptions reported when interletter space extends beyond 2-3 character spaces. Clearly, as interletter spacing increases, more characters are pushed out of foveal vision, and less information becomes available parafoveally as more and more characters are pushed further from the fixation location. This has a detrimental effect on sentence reading, which depends on the availability of foveal (fixated) and parafoveal (upcoming) information (see Rayner, 1998, 2009; Schotter, Angele, & Rayner, 2012). Some investigations revealed that readers compensate for increased interletter spacing by making more fixations, of shorter duration, relative to when reading normally spaced texts, thus producing largely comparable overall sentence reading times in both conditions (e.g., Drieghe et al., 2005; McGowan, White, & Paterson, 2015; Perea, Giner, Marcet, & Gomez, 2016; Rayner et al., 1998). Notable inconsistencies in the reported results from interletter spacing manipulations were attributed to the different fonts used in the different investigations, given the natural, and sizable, differences in letter spacing across fonts. For instance, monospaced fonts that render all characters, including spaces, at equal width (e.g., Courier New) feature larger letter spacing than proportional fonts that allow character widths to vary naturally (e.g., Times New Roman, where the character I naturally occupies a narrower space than the character W; see relevant discussions in Hermena, Liversedge, & Drieghe, 2017, for fonts used in Arabic script; Perea et al., 2011; Slattery, 2016; Slattery et al., 2016; van den Boer & Hakvoort, 2015).
The two experiments reported here investigate the effects of reducing interword spacing, and increasing the distance (ligature) between Persian letters within words on eye movement behavior, and on sentence comprehension when reading Persian sentences. These experiments are part of an on-going series of investigations at our labs of interword and interletter spacing in Arabic and Persian as examples of cursive scripts.
Experiment 1
In this experiment the readers were presented with two conditions: a baseline condition with no manipulation of word or letter spacing or distance, and an experimental condition where interword spacing was reduced such that the words were almost touching (referred to as the Pixel-Spaced condition; see Figure 1 for an example). The reduction of interword spacing in the Pixel-Spaced condition was accompanied by an interletter compensation such that the space before each word was added to the word itself in the form of extended ligature, thus increasing the distance between letters within the word. Consider, for example, a word followed by the word ﺑﺴﯿﺎر: with the space between them removed and added to ﺑﺴﯿﺎر, the letters ﺑـ, ـﺴـ, and ـﯿـ are in effect pushed away from each other by the same amount of space that was removed from between the words (see also Figure 1). In a sense, this manipulation is the opposite of one of the manipulations of Slattery and Rayner (2013). In their second experiment, Slattery and Rayner found that the combination of reducing interletter spacing and increasing interword spacing resulted in facilitation in sentence reading (reduced fixation durations). As such, the opposite effects (i.e., a processing cost) may be expected in this condition, given that the Pixel-Spaced manipulation increased the distance (ligature) between the word's letters and reduced interword spacing.
However, there is an alternative scenario. Namely, increasing the distance between the letters within the words in the Pixel-Spaced condition may equate to reducing crowding and lateral inhibition, and thus some benefit may be observed. Importantly, the stimuli sentences in this experiment featured words that ended with letters that cannot be connected to the following letter (i.e., one of the letters ا, د, ذ, ر, ز, ژ, and و), that is, letters that naturally insert white space between letter strings and thus may effectively serve as word-end markers. One of the aims of this experiment is thus to determine whether readers use these letters as word-boundary markers. If the presence of these letters at the end of the word, combined with increasing the distance (ligature) between the letters within words, facilitates word identification, it would be possible to offset, at least to some extent, the costs expected for dramatically decreasing interword spacing in the Pixel-Spaced condition.
Participants
The same set of participants took part in both experiments. Twenty-eight participants (six men) took part in the experiments. All participants were native Persian speakers living in the UAE, and all indicated that they regularly read Persian, on daily or weekly basis. The participants' mean age was 35.7 years (SD = 8.7, range = 18 -50). All participants had normal or corrected to normal vision as determined by the Bailey-Lovie chart (Bailey & Lovie, 1980).
Materials
Forty simple Persian sentences were used as stimuli, and were presented to the participants either normally spaced (the baseline, Spaced condition) or with the space between the words (interword spacing) reduced significantly (the Pixel-Spaced condition). An example of the sentences used is available in Figure 1. With the exception of the last word in each sentence, all words ended with letters that cannot be connected to the following letter. The sentences comprised, on average, 10.4 words (SD = 2.1, range = 6-15 words), or about 48.8 characters per sentence (including interword spaces in the Spaced condition, or the within-word ligatures that replaced this space in the Pixel-Spaced condition; SD = 8.4, range = 30-64 characters). The amount of physical space the sentences occupied in the Spaced and Pixel-Spaced conditions was thus identical. The sentences were all rendered in Arial, font size 14. Arial is a proportional font that allows characters to naturally vary in the amount of physical space they occupy, and it is a widely known and used font in Persian print. An additional four sentences of similar complexity and length were used as practice items for the participants.
Stimuli norming. To assess the grammaticality and structural correctness of all stimuli sentences in both experiments, an additional five native readers of Persian were asked to rate the sentences on these variables on a 1-5 scale (1 = poor grammar/structure, 5 = perfect grammar/structure), thus providing five ratings per sentence. These five raters did not take part in the eye tracking procedure. All sentences for both experiments were rated as grammatically sound, with an average rating of 4.5 (SD = 0.2, range = 3-5).
Apparatus
A tower-mounted EyeLink 1000+ eye tracker was used to sample readers' eye movements during reading. Viewing was binocular, but eye movements were recorded from the right eye only. The eye tracker sampling rate was set to 1000Hz. The eye tracker was interfaced with a Silverstone computer, and with a 24-inch BenQ monitor. Monitor resolution was set at 1920 × 1080 pixels, with the maximum vertical refresh rate (144Hz). The participants leaned on a headrest to minimize head movements. The sentences were displayed as a single line, in black on a white background. The participants viewed the screen from 78 cm, and at this distance, on average, 4.3 characters equaled 1° of visual angle.
Design
The spacing manipulation was the within-participants independent variable. The order of sentence presentation was randomized, and the presentation of the two spacing conditions was counterbalanced such that each participant saw each sentence only once, in either the Spaced (baseline) or the Pixel-Spaced conditions.
Procedure
The study was approved by the university's ethics review board. At the beginning of the testing session, the participants were given the consent form package (including information sheet). Consenting participants took part in a vision acuity test before the start of the eye tracking procedure.
The eye tracker was calibrated using a horizontal 3-point calibration at the beginning of the experiment, and the calibration was validated. Calibration accuracy was always ≤ 0.25°; otherwise, calibration and validation were repeated. Prior to the onset of the target sentence, a circular fixation target (diameter = 1°) appeared on the screen in the location of the first character of the sentence. When the tracker registered a stable fixation on the circle, the sentence was displayed.
The participants were told to read silently and press a button on the button box when finished reading each sentence. Additionally, they would be required to use the button box to provide a yes/no answer to the comprehension questions that followed around 40% of the sentences. Before being exposed to the experimental sentences, the participants read 4 practice sentences (also followed by yes/no questions) to become acquainted with the procedure.
In total, the participants read 104 sentences (4 practice sentences + 40 sentences in Experiment 1 + 60 sentences in Experiment 2). The participants were allowed to take breaks followed by re-calibration of the tracker. The testing session lasted around 45-50 minutes, depending on how many breaks a participant took.
Results and Discussion
Global eye movement measures that index sentence processing are reported. These are: the average duration of fixations made during sentence reading, average number of fixations made, total sentence reading time (from the onset of the sentence until the participant pressed the button to change the display), and average amplitude (length) of saccades made during sentence reading (reported in visual angle). In addition, the average sentence comprehension score is also reported as an indicator of whether readers' comprehension performance was affected by the spacing manipulation. Table 1 shows the descriptive statistics for these dependent measures for both spacing conditions.
The lme4 package (version 1.1-26, Bates, Mächler, Bolker, & Walker, 2015) was used within the R environment for statistical computing (R-Core Development Team, 2016) to analyze all dependent measures by fitting generalized linear mixed-effects models (GLMMs), with Gamma-distribution assumed for the fixation duration measures (Average Fixation Duration and Total Reading Time). Using GLMMs to analyze raw positively-skewed response times, including fixation durations, maintains the transparency of the reported analyses while satisfying the necessary normality assumptions, without the need to transform data (Lo & Andrews, 2015). For the sentence comprehension measure, logistic GLMM was used to account for the binary nature of this variable. In these models the spacing condition was the fixed variable, and subjects and items were the random variables. Models with maximal random structure were always the start point (Barr, Levy, Scheepers, & Tily, 2013). Model trimming was carried out when failure to converge occurred, or when singular boundaries (suggesting overparameterization) were identified. All findings reported here are from successfully converging models. For each measure the beta values (b), standard error (SE), t statistic, and the associated p value are reported in Table 2.
As Tables 1 and 2 show, reducing the interword spacing and increasing the interletter distance in the Pixel-Spaced condition resulted in significant increases in average fixation duration, average fixation count, and total sentence reading time. By contrast, saccade amplitude was significantly reduced in the Pixel-Spaced condition. Readers' comprehension scores, however, indicated that they were still able to successfully comprehend the sentences they were reading in both conditions, albeit with longer sentence reading times in the Pixel-Spaced condition.
The obtained results replicate previous findings where reducing interword spacing (the space between words) was detrimental to reading speed (see literature review above). Importantly, the results suggest that there was no clear benefit from increasing the distance (ligature) between the letters within a word. Furthermore, if readers used the Persian letters that do not connect to the next letter as word-boundary markers, the results show that dramatically reducing the white space that follows these letters results in similar reduction to reading speed as is the case in other non-cursive scripts. In other words, for Persian readers, word segmentation is more dependent on having normal-sized white space between the words, rather than relying on any cues from the letters that do not connect to the next letter. This is perhaps not surprising since such letters do regularly occur in the middle of words, as well as at word ends. These findings will be discussed in more detail in the General Discussion.
Experiment 2
This experiment aimed to replicate and expand on the findings of Experiment 1. The main difference between the two experiments was that in the current experiment the sentences used words that ended with letters that can be connected to the next letter. Experiment 2 thus featured Spaced (baseline) and Pixel-Spaced conditions, same as in Experiment 1. In addition, there was a third condition where the space between the words was replaced by ligature that connected the words. This condition will be referred to as the Connected condition. As opposed to extending the interletter space (ligature) within words in the Pixel-Spaced condition, in the Connected condition the interword spacing was completely replaced by between-word ligatures, without affecting the interletter distance within words (see Figure 2).
Completely filling the white interword space in previous investigations resulted in significant disruption to reading, as discussed above (e.g., Rayner et al., 1998, replacing the spaces with the character x; Sheridan et al., 2013, replacing the space with random numbers; etc.). However, none of these investigations involved cursive, connected-by-ligatures text, and none of them changed the appearance of the final and first letters of the words being connected, as happens when replacing the space between Persian words with connecting ligatures (see Figure 2). Importantly, connecting words by ligatures can be considered a very strong manipulation that compromises word segmentation cues in a way not possible in non-cursive scripts (e.g., European languages). In addition to significantly altering the appearance of the words' first and last letters, this manipulation provides inaccurate information about word boundaries, which can be expected to slow readers down. Thus, in addition to expecting to replicate the disruption to reading in the Pixel-Spaced condition, it is plausible to expect even greater disruption to reading in the Connected condition, given the loss of the white interword spacing and the profound change to the first and last letters of the words when connected. As detailed above, these letters play an important role in word identification.
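Although the authors do not describe how the stimuli were constructed, ligature-based manipulations of this kind can be illustrated with the Arabic tatweel (kashida) character U+0640, which extends the connection between joined letters. The snippet below is only an illustration of the Connected-style manipulation under that assumption, not the authors' procedure, and it only produces a visible connection where the preceding letter connects forward, which the Experiment 2 stimuli were designed to guarantee.

```python
TATWEEL = "\u0640"  # Arabic tatweel / kashida: extends the ligature between joined letters

def connect_words(sentence: str) -> str:
    """Illustrative Connected-style manipulation: replace each interword space
    with a tatweel so that adjacent connectable letters join across the word
    boundary, removing the white-space cue to word segmentation."""
    return sentence.replace(" ", TATWEEL)

# Example with an arbitrary Persian sentence (not a stimulus from the paper):
print(connect_words("این جمله نمونه است"))
```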
Methods
The participants, apparatus, and procedure in this experiment were identical to Experiment 1.
Stimuli
Sixty simple Persian sentences were used as stimuli, and were presented to the participants in the Spaced (baseline) condition, the Pixel-Spaced condition, or the Connected condition, that is, with the interword spacing replaced by ligatures, as explained above. An example of the sentences used is available in Figure 2. The sentences comprised, on average, 8.9 words (SD = 1.5, range = 7-13 words), or about 47.8 characters per sentence (including interword spaces in the Spaced condition, the within-word ligatures that replaced this space in the Pixel-Spaced condition, or the between-word ligatures that replaced this space in the Connected condition; SD = 6.5, range = 35-62 characters). The amount of physical space the sentences occupied in all three conditions was thus identical. The sentences were also rendered in Arial, font size 14.
Design
The spacing manipulation was the within-participants independent variable. The order of sentence presentation was randomized, and the presentation of the three spacing conditions was counterbalanced such that each participant saw each sentence only once, in either the Spaced (baseline), the Pixel-Spaced, or the Connected condition.
Results and Discussion
The same dependent measures reported in Experiment 1 are reported in Experiment 2. Table 3 provides the descriptive statistics for these dependent measures for all three spacing conditions. The same inferential analyses described in Experiment 1 were used in Experiment 2, using GLMMs within the R environment. Two contrast matrices were prespecified for the GLMM models. In the first matrix, the Spaced condition was treated as the baseline against which the Pixel-Spaced and Connected conditions were contrasted. In the second matrix, the Pixel-Spaced and Connected conditions were contrasted. The full output of these analyses is reported in Table 4.
Spaced vs. Pixel-Spaced Contrast. As Tables 3 and 4 show, the results obtained largely replicate those reported in Experiment 1, albeit with a reduced magnitude of effects for the Pixel-Spaced condition in Experiment 2. The small (and non-significant) increases in average fixation duration and average fixation count in the Pixel-Spaced condition translated into a significant increase in total sentence reading time in this condition. As in Experiment 1, saccade amplitude was significantly reduced in the Pixel-Spaced condition, and there was no significant difference in reading comprehension between the two conditions.
Spaced vs. Connected Contrast. There were sizable and significant costs for filling the interword spacing with ligatures in the Connected condition, whereby significant increases in average fixation duration, average fixation count, and total sentence reading time were observed. Saccade amplitude was also significantly shorter in the Connected condition. The difference between the two conditions in sentence comprehension was, however, not statistically reliable, with readers still scoring 85% accuracy in the Connected condition.
Pixel-Spaced vs. Connected Contrast. Once again, there were sizable and significant costs for filling the interword spacing with ligature in the Connected condition relative to the Pixel-Spaced condition. Significant increases in average fixation duration, average fixation count, and total sentence reading time were observed. Saccade amplitude was also significantly shorter in the Connected condition. And the two conditions did not differ significantly in sentence comprehension.
In addition to largely replicating the sentence reading disruption observed in the Pixel-Spaced condition in Experiment 1, the results clearly show a massive disruption to reading in the Connected condition. Replacing the interword spacing with ligatures, thereby connecting the words and altering the form of their first and last letters, proved detrimental to reading speed, as predicted. Participants' performance on reading comprehension was, however, still comparable in all conditions. These findings will be discussed in more detail below in the General Discussion.
General Discussion
The reported experiments aimed to use the properties of Persian script to explore how eye movement behavior and reading performance were affected by: (a) reducing interword (between words) spacing, while increasing the interletter distance (ligature) within words (the Pixel-Spaced conditions in both experiments), and (b) replacing the interword space with connecting ligature (the Connected condition in Experiment 2). In addition, Experiment 1 aimed to explore whether readers use Persian letters that do not connect to the following letter as word boundary markers.
The results obtained were unequivocal. With regard to reading rate, the severe reduction in interword spacing combined with increasing the interletter distance (ligature) within words in the Pixel-Spaced condition resulted in significant reading disruption in Experiment 1, and this was largely replicated in Experiment 2. Among the mechanisms that can account for the reported results is that decreasing the interword white space in the Pixel-Spaced conditions may have resulted in the words' first and last letters suffering increased crowding effects and lateral masking (e.g., Bouma, 1970;1973;Townsend, Taylor, & Brown, 1971). The significant disruption to reading rate observed in the Connected condition indicates that not only the distance between characters, but also the preservation of their physical allographic form, is important for successful text segmentation and word identification. It will be recalled that connecting the first and last letters by ligature resulted in altering the physical forms of these letters.
With regard to the question of whether readers use letters that do not connect to the next letter as word boundary markers, contrasting the results from both experiments was informative. Specifically, the results showed that reducing the white space that followed these letters (Experiment 1) yielded more sizable effects (e.g., on the measures of average fixation duration and count, and total reading time) than reducing the space that followed letters that can be connected to the next letter (Experiment 2). One explanation may have to do with the fact that the letters that do not connect to the next letter regularly fall in the middle of words, and when they do, they are followed by a very small white space that separates them from the next letter within the word. As such, reducing the space that followed these letters in Experiment 1 potentially provided inaccurate word segmentation information to the readers (i.e., making interword spaces look like interletter spaces). Reading was thus significantly disrupted in the Pixel-Spaced condition in Experiment 1. By contrast, in Experiment 2, the fact that letters that can be connected to the next letter by ligatures remained unconnected in the Pixel-Spaced condition may have provided a valuable word boundary cue that somewhat attenuated the effect of reducing the interword space (see illustration in Figure 3). We can thus conclude that preserving or violating word segmentation cues is more important for reading than particular letter properties, such as the possibility of connecting to the next letter, per se. This is in line with findings showing that text spacing that violates word boundaries is particularly detrimental to reading rate (e.g., Bai et al., 2008, see also Epelboim et al., 1994;1996;Morris et al., 1990;Pollatsek & Rayner, 1982;Rayner et al., 1998;Sheridan et al., 2013;Slattery et al., 2016). (Figure 3 caption: The black arrows indicate where reducing the interword space following the letters that do not connect to the next letter may have resulted in inaccurate word boundary cues in Experiment 1; the grey arrows indicate where the words' final letters in Experiment 2 could have been connected to the next letter but were not, thus perhaps providing word segmentation cues to the readers.)
A sizable disruption to reading was observed when the interword spaces were replaced by word-connecting ligatures that also altered the form of the words' first and final letters. As outlined above, connecting words by ligatures is a strong manipulation that compromises word segmentation cues in a way not possible in non-cursive scripts (e.g., European languages). These results replicate the findings discussed above with regard to the costs of reducing interword spacing (e.g., Drieghe et al., 2017;Morris et al., 1990;Perea & Acha, 2009;Rayner et al., 1998;Rayner et al., 2013;etc.), and the importance of the first and last letters in word identification (e.g., Davis, 2010;Gomez et al., 2008;Jordan, 1990;1995). The reported results also add further support to the suggestion that text segmentation and word identification are vital for smooth reading. Furthermore, in the Connected condition (Exp. 2), reading may have been disrupted because unsegmented text is an unfamiliar visual format for Persian readers. Bai et al. (2008) suggested that an unfamiliar visual text format may disrupt reading and result in longer reading times relative to a more familiar format. However, as Sheridan et al. (2013) pointed out, visual familiarity cannot solely account for the observed disruption to reading rate. Indeed, in English, with its relatively less complex visual characteristics (e.g., letters do not change shape depending on their location in the word, save some instances of initial capitalization), lengthy training and familiarization of participants to read unspaced texts did not result in reading facilitation (e.g., Malt & Seamon, 1978). Future investigations should further explore the psychological reality of visual familiarity in allographic scripts (e.g., Arabic and Persian), relative to the scripts of European languages.
Thus far the main focus has been on the findings concerning how reading rate was affected by the reported experimental manipulations. The reported sentence comprehension scores in both experiments, even in the Connected condition in Experiment 2, replicated previous findings that readers are still able to comprehend unspaced text (e.g., Epelboim, Booth, & Steinman, 1994;1996), albeit with sizable decreases in reading rate and efficiency (e.g., Rayner et al., 1998). It is plausible to suggest that, whereas the Pixel-Spaced condition posed significant difficulty, the difficulty readers encountered in the Connected condition in Experiment 2 made the task more akin to solving a visual puzzle than to natural sentence reading. The increases in readers' average fixation duration, fixation counts, and total reading time suggest that eye movement behavior was guided by attempts to segment the text and identify words (and to test hypotheses about where this should be done) in order to extract meaning from the visual stimuli. That readers were able to obtain such high comprehension scores indicates the resilience of the linguistic processing system, and its role in guiding eye movements.
Finally, the obtained results may be considered to lend further support to models of eye movement control that postulate serial processing and word identification in reading (e.g., the E-Z Reader model: Pollatsek, Reichle, & Rayner, 2006;Rayner, Ashby, Pollatsek, & Reichle, 2004;Reichle, 2011;Reichle, Pollatsek, Fisher, & Rayner, 1998;Reichle, Pollatsek, & Rayner, 2012, see also Reichle, 2020), rather than models that postulate a distributed attentional gradient and parallel processing of multiple words (e.g., the SWIFT model: Engbert, Nuthmann, Richter, & Kliegl, 2005;Laubrock, Kliegl, & Engbert, 2006;Richter, Engbert, & Kliegl, 2006). Specifically, in the reported experiments, disrupting word identification by increasing the interletter distance (ligature) and bringing the upcoming word closer by reducing the interword spacing (the Pixel-Spaced conditions) did not result in any facilitation of reading; rather, the opposite. However, this may not be a completely fair assessment, since bringing the parafoveal word closer was achieved by reducing the interword spacing, and thus dramatically reducing the ability to segment words. Presumably, both models would predict some sort of cost associated with this. For interletter spacing, both the E-Z Reader and SWIFT models would predict costs for increasing interletter spacing, given that increasing this space would place letters further from the point of fixation (e.g., see Bricolo, Salvi, Martelli, Arduino, & Daini, 2015;Pelli et al., 2007 for discussion of interletter spacing and crowding effects). As neither model has yet simulated the effects of interword or interletter spacing manipulations, making model-derived predictions about the effects of such manipulations is not possible (e.g., Perea et al., 2016;also Reichle, 2020). Further modeling work is thus necessary to obtain further clarity.
Future investigations should further utilize the properties of cursive scripts (e.g., Arabic and Persian) to formally expand and update current theories and models to accommodate the characteristics of non-European scripts. In this regard, developing corpora that provide letter positional probabilities (particularly the probability of letters occurring at word beginnings or ends, see e.g., Yen, Radach, Tzeng, & Tsai, 2012) can further elucidate the extent to which readers may use such properties and rely on certain letters (more than others) as markers of word boundaries.
Ethics and Conflict of Interest
The author declares that the contents of the article are in agreement with the ethics described in http://biblio.unibe.ch/portale/elibrary/BOP/jemr/ethics.html and that there is no conflict of interest regarding the publication of this paper. | 8,001 | 2021-05-31T00:00:00.000 | [
"Linguistics"
] |
Assessment of carbon emission potential of polyvinyl chloride plastics
Plastic pollution has become a global concern, and research has shown that carbon emissions over the lifecycle of plastics are rapidly consuming the global carbon budget. This study focuses on the effective assessment of carbon emissions from polyvinyl chloride (PVC) plastics during the production and recycling stages using a life cycle assessment (LCA) method. The greenhouse gas emission potential is evaluated using 1 kg of PVC plastic as the functional unit. The total carbon emissions during the production stage of PVC plastic are 7.83 kg CO2-eq. The carbon emissions attributable to hydrochloric acid, acetylene, electricity, and water vapor during the production stage are 2.340 kg CO2-eq, 4.900 kg CO2-eq, 0.117 kg CO2-eq, and 0.468 kg CO2-eq, respectively. During the recycling phase, the carbon emissions from power consumption are 0.184 kg CO2-eq, followed by 0.156 kg CO2-eq from natural gas. The results show that fossil raw materials contribute the largest carbon emissions during the production stage of PVC plastics. Therefore, effectively reducing the use of fossil fuels or seeking alternative raw materials can effectively reduce carbon emissions.
Introduction
Plastics are multifunctional, durable, and cost-effective materials used in a wide range of strategic fields, including packaging, construction, automotive manufacturing, electronics, and agricultural production [1]. Over the past 70 years, plastic production has continued to grow, from 1.5 million tons in the 1950s to 359 million tons in 2018 [2]. Because plastics are difficult to decompose naturally, they have accumulated on land, in freshwater, and in the oceans for decades. In 2010 alone, an estimated 4 million to 12 million tons of plastic waste generated on land entered the marine environment [3]. There are also increasing reports of pollution in freshwater systems and terrestrial habitats, as well as pollution of the environment by synthetic fibers [4]. After physical, chemical, and/or biological interactions, large pieces of plastic are decomposed into microplastics (MPs) (1-5 mm) or nanoplastics (NPs) (< 1000 nm) [5].
At present, more than 300 types of plastics are produced, of which more than 60 are commonly used; according to their uses they can be divided into ordinary plastics and engineering plastics. Polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), polystyrene (PS), polyurethane (PU), and phenolic resin are the main general-purpose plastics, among which PP and PE are the most commonly used polymers in daily plastic products. In Europe, 40% of plastic is used for packaging. The large-scale production of plastics inevitably leads to a large amount of waste generation. Currently, the main disposal methods include incineration, landfill, and recycling, among which recycling can minimize environmental pollution. At present, the biggest difficulty faced by plastic recycling is that the cost of collection and treatment is higher than the value of the secondary materials; unfortunately, the economic benefits of plastic recycling are therefore poor. Landfill is the lowest-cost method of all treatments, and most waste plastics directly enter landfill sites.
When microplastics enter the marine environment, marine organisms ingest a certain amount of them, and these particles then move up to higher-level organisms along the food chain, posing unpredictable hazards. The harm caused by plastics to aquatic organisms mainly involves plastic additives, physical blockages caused by ingestion, and other issues. Plastic additives and associated pollutants can cause behavioral changes, disrupt metabolic processes, and cause endocrine disruption. The types of harm caused by ingestion include internal damage to aquatic organisms, suffocation and entanglement, reduced growth and photosynthesis of primary producers in the food chain such as algae, and impacts on the reproduction and development of crustaceans.
Plastic not only exists in our environment, but also accumulates in people's bodies as it moves along the food chain. Amidst this growing concern, there is another largely hidden aspect of the plastic crisis: the impact of plastics on global greenhouse gas emissions and climate change. At the current level, greenhouse gas emissions from the plastic lifecycle threaten the goal of keeping the global temperature increase below 1.5 °C. As the petrochemical and plastic industries plan to expand production on a large scale, this problem may become even more severe. Almost every piece of plastic starts with fossil fuels and emits greenhouse gases at every stage of its lifecycle: 1) the extraction and transportation of fossil fuels, 2) the refining and manufacturing of plastics, and 3) the management of plastic waste. If the production and use of plastics continue to grow according to current plans, these emissions could reach 1.34 billion tons of CO2 per year by 2030, equivalent to over 295 new 500 MW coal-fired power plants. By 2050, the cumulative greenhouse gas emissions from these plastics may exceed 56 billion tons, accounting for 10-13% of the entire remaining carbon budget [6]. Among all plastics, polyvinyl chloride (PVC) is one of the most widely consumed, and the production stage is the main stage of greenhouse gas emissions from plastics.
All plastics currently used, including resins, fibers, and additives, are processed from fossil fuels. The molecules or monomers used to manufacture plastics, such as ethylene and propylene, also come from fossil fuels. Plastics require a large amount of resource and energy input during the production stage, and these inputs can only be used in plastic production after being processed; the energy and material consumed in this processing further increase the carbon emissions of the plastic industry during the production stage. Research has shown that the primary production stage of synthetic resins contributes the highest carbon emissions, accounting for 69% to 86% of the total carbon emissions during their lifecycle. The carbon emissions generated vary significantly with the fossil raw material used, with the coal-to-olefins route producing the highest emissions. It is therefore necessary to identify the key carbon emission sources within the production stage. Recycling is currently considered the most environmentally friendly disposal option, but greenhouse gas emissions at this stage have received little attention. Therefore, it is necessary to explore these two stages.
Assessment instrument
The life cycle assessment (LCA) of PVC plastics in this study was carried out with SimaPro, a professional LCA software package developed by PRé Consultants in the Netherlands. The software includes: (a) a lifecycle unit process database, and (b) an impact assessment database [7]. Users can establish different types of lifecycle units and system processes within the software as needed, and adjust the process allocation ratio according to actual needs. LCA is an assessment method for preventive environmental protection that identifies and quantifies the environmental impacts of energy and material consumption and pollutant release throughout the entire lifecycle. The concept of LCA originated in the 1960s, when environmental degradation, and especially limited access to resources, began to become a concern. After a low point in the 1970s and 1980s, the concept of LCA was formally proposed by the Society of Environmental Toxicology and Chemistry (SETAC) in 1990, and subsequent academic discussions were held to explore the theory and methods of LCA.
Evaluation method
The LCA method includes four parts: goal and scope definition, system boundaries, data sources and inventory analysis, and results and interpretation. (a) The goal and scope are the starting point for determining how the LCA is conducted; (b) the system boundary covers the production and recycling of 1 kg of PVC plastic, consistent with the current mainstream domestic processes; (c) the data sources and inventory analysis include the input-output material flows and energy demand of each unit process; (d) in the results and interpretation stage, the results of the earlier LCA stages are analyzed to determine the most important issues for the environment, and specific conclusions or recommendations are given based on the results obtained.
Due to the serious consequences of global warming, the International Organization for Standardization (ISO) has long established a series of standards for the accounting of product carbon footprints. Because many different emissions contribute to the greenhouse effect, the IPCC relates the radiative forcing of each emission to that of an equal amount of CO2, resulting in a coefficient, the Global Warming Potential (GWP). For the carbon footprint calculation at the different stages, the basic equation provided by the IPCC is generally used:

GWP = [ ∫_0^T RE · e^(-t/τ) dt ] / AGWP_CO2    (1)

In the formula, RE represents the radiative efficiency of the target gas, i.e., the energy absorbed by the gas per unit area per unit time in the atmosphere, in W/(m^2 · kg); T is the time horizon of the integral, usually taken as 20, 50, or 100 years (100 years in this study); τ is the atmospheric lifetime of the gas, in years; and AGWP_CO2 is the absolute global warming potential of CO2 over 100 years.
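As a small numerical illustration of Equation (1) (this snippet is not part of the study and uses placeholder values for RE, τ, and AGWP_CO2), the integral has a closed form for a gas whose atmospheric pulse decays exponentially:

```python
import math

def gwp(re_gas, tau, agwp_co2, horizon=100.0):
    """Evaluate Eq. (1): AGWP_gas = RE * tau * (1 - exp(-T/tau)), normalized by AGWP_CO2(T)."""
    agwp_gas = re_gas * tau * (1.0 - math.exp(-horizon / tau))  # W yr / (m^2 kg)
    return agwp_gas / agwp_co2

# Illustrative placeholder inputs only, not values taken from this study:
print(round(gwp(re_gas=2.0e-13, tau=12.0, agwp_co2=9.2e-14), 1))
```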
Results and Interpretation
Interpretation is the stage of life cycle assessment in which the results of the other stages are comprehensively considered and analyzed in light of the uncertainty of the applied data and the assumptions made and recorded throughout the research process. Based on the results obtained, specific conclusions or suggestions should be given, (1) respecting the intention of the goal definition and the limitations imposed on the research through the scope definition, and (2) considering the appropriateness of the functional unit and system boundaries. The interpretation should present the conclusions of the LCA in an understandable manner. The purpose of the first element of life cycle interpretation is to analyze the results of the earlier stages of the LCA in order to determine the most important issues for the environment, namely those that may change the final outcome of the LCA.
These important issues may concern method choices and assumptions, the inventory data of important lifecycle processes, or the characterization, normalization, or weighting factors used in the impact assessment. Practitioners are encouraged to prepare a list of such choices during the actual implementation of the LCA (goal and scope definition, product system modeling, and impact assessment) and to provide reliable reports and recommendations. In this study, by determining the functional unit, system boundaries, and inventory, and by calculating the midpoint values over the entire life cycle, the specific greenhouse gas emissions brought by plastics and the contributions of the different stages and substances are obtained. At the same time, the specific environmental impacts of different plastics can be compared, and based on these results, specific recommendations and reports on how to reduce environmental impacts can be provided to practitioners.
Assessment of carbon emission potential during the production stage of polyvinyl chloride plastics
The molecules or monomers used in the manufacturing of plastics, such as ethylene and propylene, all come from fossil hydrocarbons [4]. Research shows that the production stage accounts for about 70% of the total greenhouse gas emissions of plastics, and that the primary production stage of synthetic resins contributes the highest greenhouse gas emissions, accounting for 69-86% of the emissions throughout their lifecycle [8]. This study uses the current mainstream PVC production process to calculate the greenhouse gas emissions caused by materials and energy during the production stage and to identify the key carbon emission sources. The total greenhouse gas emissions of PVC plastic during the production stage are 7.83 kg CO2-eq. The greenhouse gas emissions attributable to hydrochloric acid, acetylene, electricity, and water vapor during the production stage are 2.34 kg CO2-eq, 4.90 kg CO2-eq, 0.117 kg CO2-eq, and 0.468 kg CO2-eq, respectively. Acetylene contributes the largest greenhouse gas emissions during the production stage of PVC plastics, followed by hydrochloric acid, while the consumption of electric energy contributes very little. Recycling of waste plastic is regarded as an environmentally friendly approach: it not only ensures the secondary utilization of waste plastics, but also reduces the consumption of fossil resources and greenhouse gas emissions, which is particularly valuable in regions where fossil resources are scarce.
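As a quick consistency check (ours, not part of the original analysis), the reported production-stage contributions can be summed and compared with the stated total of 7.83 kg CO2-eq:

```python
# Reported production-stage contributions per 1 kg PVC (kg CO2-eq)
production = {
    "hydrochloric acid": 2.340,
    "acetylene": 4.900,
    "electricity": 0.117,
    "water vapor": 0.468,
}
total = sum(production.values())
print(f"production total = {total:.3f} kg CO2-eq")  # 7.825, i.e. ~7.83 as reported
for source, value in sorted(production.items(), key=lambda kv: -kv[1]):
    print(f"{source:>17}: {value:.3f} kg CO2-eq ({100 * value / total:.1f}%)")
```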
Acetylene, a building block of various chemical industry products, has received little attention with respect to the greenhouse gas emissions caused by its processing and use. In addition, hydrochloric acid, another important product of the chemical industry, not only contributes significantly to greenhouse gas emissions in its production and processing, but also brings a certain amount of environmental pollution. These two raw materials, as essential inputs for polyvinyl chloride production, make a relatively large contribution to greenhouse gas emissions. Developing new production processes or seeking alternatives is therefore an important way to reduce greenhouse gas emissions.
Assessment of carbon emission potential in PVC plastic recycling stage
The greenhouse gas emissions in the recycling stage of waste PVC plastic are 0.345 kg CO2-eq, of which the carbon emissions from deionized water, electricity, and natural gas are 0.004 kg CO2-eq, 0.184 kg CO2-eq, and 0.156 kg CO2-eq, respectively. The main source of greenhouse gas emissions during the recycling phase is electricity consumption, followed by natural gas; energy consumption is thus the main source of greenhouse gas emissions in the plastic recycling stage. The power consumed in the recycling phase is mainly used for the melting and extrusion of waste plastics, which require a large amount of heat and therefore create a high demand for electricity and natural gas. At present, domestic power generation is still dominated by coal. Therefore, switching to clean energy can further reduce greenhouse gas emissions and contribute to reaching carbon peaking and carbon neutrality as soon as possible.
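The same kind of check (again purely illustrative) can be applied to the recycling stage and used to relate the two stages:

```python
recycling = {"deionized water": 0.004, "electricity": 0.184, "natural gas": 0.156}
recycling_total = sum(recycling.values())
production_total = 7.83  # reported production-stage total, kg CO2-eq per 1 kg PVC
print(f"recycling total = {recycling_total:.3f} kg CO2-eq")   # 0.344, i.e. ~0.345 as reported
print(f"recycling / production = {recycling_total / production_total:.1%}")  # only a few percent
```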
Plastic recycling is the best way to dispose of plastic waste. It enables the reuse of plastic waste and reduces the consumption of virgin plastic and fossil energy, which brings better environmental benefits at a certain level. For this reason, more attention should be paid to plastic recycling in order to realize the closed-loop recycling of plastics.
Conclusions
Life cycle assessment shows that the carbon emissions during the production of polyvinyl chloride plastic follow the order acetylene > hydrochloric acid > water vapor > electricity. The order of carbon emissions during the recycling phase is electricity > natural gas > deionized water. The results show that acetylene contributes the most to the carbon emissions of PVC plastics during the production process. The production stage mainly generates greenhouse gas emissions through the consumption of energy and fossil fuels. The carbon emissions in the recycling phase are low. In the recycling stage, plastic is remanufactured into plastic pellets, which reduces the production of virgin plastic; the use of fossil materials and energy is therefore also reduced accordingly. In a sense, this is also a currently available means of reducing greenhouse gas emissions. In addition, recycled plastic can replace virgin plastic in production, which can achieve a certain carbon emission reduction effect.
"Environmental Science",
"Materials Science"
] |
Complex Question Answering on knowledge graphs using machine translation and multi-task learning
Question answering (QA) over a knowledge graph (KG) is a task of answering a natural language (NL) query using the information stored in KG. In a real-world industrial setting, this involves addressing multiple challenges including entity linking, multi-hop reasoning over KG, etc. Traditional approaches handle these challenges in a modularized sequential manner where errors in one module lead to the accumulation of errors in downstream modules. Often these challenges are inter-related and the solutions to them can reinforce each other when handled simultaneously in an end-to-end learning setup. To this end, we propose a multi-task BERT based Neural Machine Translation (NMT) model to address these challenges. Through experimental analysis, we demonstrate the efficacy of our proposed approach on one publicly available and one proprietary dataset.
Introduction
Question answering on knowledge graphs (KGQA) has mainly been attempted on publicly available KGs such as Freebase Bollacker et al. (2008), DBpedia Lehmann et al. (2015), Yago Suchanek et al. (2007), etc. There is also a demand for question answering on proprietary KGs created by large enterprises. For example, KGQA a) on a KG that contains information related to retail products can help customers choose the right product for their needs, b) on a KG containing document catalogs (best practices, white papers, research papers) can help a knowledge worker find a specific piece of information, or c) on a KG that stores profiles of various companies can be used to do preliminary analysis before giving them a loan, etc. Our motivating use-case comes from an enterprise system (referred to as LOCA) that is expected to answer users' questions about the R&D division of an enterprise. (Figure 1 caption: Example queries from the real-world dataset LOCA. Column 6 (is SP?) indicates whether the queries can be answered via the shortest path or not; all other columns are self-explanatory.)
Sample questions from the LOCA dataset are shown in Figure 1. The schema of the corresponding KG is shown in Figure 2. Answering such questions often requires a traversal of the KG along multiple relations which may not form a directed chain graph and may follow a more complex topology, as shown for questions 5, 7, and 8 in Figure 1. It can also be observed that, most often, the words of the natural language question (NLQ) and the corresponding relations have only a weak correlation. Most of the proposed approaches to the KGQA task Bollacker et al. (2008) parse the NLQ, convert it into a structured query, and then execute the structured query on the KG to retrieve the factoid answers. Such conversion involves multiple sub-tasks: a) linking the mentioned entity with the corresponding entity node in the KG Blanco et al. (2015); Pappu et al. (2017), b) identification of the type of the answer entity Ziegler et al. (2017), c) identification of relations Dubey et al. (2018); Hakkani-Tür et al. (2014). These tasks are most often performed in sequence Both et al. (2016); Dubey et al. (2016); Singh et al. (2018), or in parallel Veyseh (2016); Xu et al. (2014); Park et al. (2015), which results in an accumulation of errors Dubey et al. (2018). Further, most KGQA datasets are not as complex as LOCA. For example, a) all questions of SimpleQA Bordes et al. (2015) can be answered using a single triple, b) the NLQs of most datasets (e.g., SimpleQA, MetaQA) contain only one mentioned entity, and c) even when multiple relations are required for answer entity retrieval, they are organized in a sequence, i.e., a chain.
Our motivating example contains specific types of questions that pose many challenges with respect to each of the aforementioned tasks. Moreover, some of the questions can only be answered via a model that attempts more than one sub-tasks together. For example, the first two questions of Figure 1 mention the same words, i.e., "deep learning" but they get associated with two different entity nodes of the KG. Additionally, the prior work could detect the set of relations when the schema sub-graph follows a specific topology, however, in our example, most of the questions follow a different topology. We demonstrate in Section 5 that most of the prior art approaches fail to solve such challenges. We provide a summary of such challenges in Section 2.
In this paper, we propose CQA-NMT, a novel transformer-based NMT (neural machine translation) model that addresses the aforementioned challenges by performing four tasks jointly using a single model: i) detection of mentioned entities, ii) prediction of the entity types of answer nodes, iii) prediction of the topology and relations involved, and iv) question type classification such as 'Factoid', 'Count', etc. CQA-NMT not only performs the four sub-tasks but also helps the downstream tasks of mentioned entity disambiguation and subsequent answer retrieval from the KG. The key contributions of this paper are: (i) We propose a multi-task model that performs all tasks for parsing a natural language question together, rather than the traditional approach of performing these tasks in a sequential manner, which also involves candidate generation based on an upstream task and then short-listing the candidates to make the final prediction. We also demonstrate that using such an approach newer types of challenges of the KGQA task can be solved, which have not been attempted by prior work so far.
(ii) We propose the use of neural machine translation based approach to retrieve the variable number of relations involved in answering a complex NLQ against a KG.
(iii) We also demonstrate that every sub-task of parsing an NLQ is complementary to other tasks and helps the model in performing better towards the final goal of KGQA. In Table 3, we have demonstrated that via joint training on more than one task, the accuracy of individual tasks improves as compared to training them separately. For example, when trained separately, the best F1-score for detecting mentioned entity(s) was 83.3, and the best accuracy for the prediction of entity types of answer nodes was 75.7. When trained jointly, we get the corresponding metrics as 87.1 and 76.3. When trained jointly for all tasks, the results improve even further.
(iv) CQA-NMT predicts the relations involved in a sub-graph of KG and also helps to predict the topology of the sub-graph, resulting in compositional reasoning via a neural network on the KG. However, the prior work predicts the relations for a specific topology only 1 .
(v) We also demonstrate that our approach outperforms the state-of-the-art approaches on the MetaQA dataset, and therefore we present a new baseline on this dataset. Our approach also performs better than standard approaches 1 Topology is a specific arrangement of how the mentioned entities, and answer entities are connected to each other via the predicted relations. Sample topologies are given in Figure 1. Our approach can be used to answer questions of any topology, if adequate number of samples are included in the training data. The prior works have not attempted a dataset such as LOCA which contains many different topologies. To the best of our efforts we could not find another such dataset, which has led to our aforementioned belief. as applicable to our dataset and helps us solve most of the real-world industrial challenges.
KGQA Problem and Challenges
For answering natural language questions (NLQ), we assume that the background knowledge is stored in a knowledge graph G, comprising of a set of nodes V (G), and edges E(G). Here, nodes represent entities, and edges represent the relationship between a pair of entities or connect an entity to one of its properties. An NLQ (q) is a sequence of words w i of a natural language (e.g., English), i.e., q = {w 1 , w 2 , ..., w N }. We also assume that the NLQs can mention zero, one, or more entities present in G and enquire about another entity of G, which is connected with the mentioned entity(s). We pose the KGQA problem as a supervised learning problem and next, describe the labels assumed to be available for every question in the training data and that need to be predicted for every question in the test data.
Entity Linking Annotation Some of the ngrams (η i ) in an NLQ refer to entity(s) of KG. Such n-grams have been underlined in Figure 1. The entity-id (as shown in the third column of Figure 1) of the mentioned entity is also assumed to be available as part of label annotation for every question.
Answer Entity Type Annotation (AET), τ : We assume that every NLQ has an entity type (t i ) for the answer entities. These are shown in the middle column of Figure 1. We refer τ as a set of all entity types in the knowledge graph G.
Relation Sequence and Topology Annotation (path) Sequence of relations connecting the linked entities to the answer entities can be considered paths (path i ), each of which can contain one or more relations. These paths are connected to form a topology, as shown in Figure 1. This topology of the paths and relations are also assumed to be available for an NLQ in training data. These paths need not be the shortest paths between the linked entities and the answer entities. For example, the last three columns of Figure 1 indicate a) the set of paths separated by a semicolon (;), b) whether this is the shortest path, and c) topology of the paths connecting the linked entities to the answer entities.
Question Type Annotation (q type ) Some NLQs can be answered by a single triple of the knowledge graph('Simple'), while some of them require traversal along with more complex topology as indicated earlier('Factoid'), some questions require an aggregate operation such as count ('Count', see question 6, in Figure 1), and finally, some questions perform existence check ('Boolean', see question 8, in Figure 1). Such information is also assumed to be available for every NLQ in training data.
We now describe the challenges that need to be addressed while performing the KGQA task. To the best of our efforts we could not find any prior work that covers all these challenges together.
1. Incomplete Entity Mention: In the NLQ users often do not mention the complete name of the intended entity Huang et al. (2019), e.g., only the first name of a person, short name of a group, etc., e.g., question 8 in Figure 1.
2. Co-occurrence disambiguation: For situations when a mentioned entity should be linked to KG entity with help of another mentioned entity in the question, e.g., in question 7 of Figure 1, there can be many people who have the same first name ('Libby') but there is only one of them who works on NLP, the models needs to use this information to conclusively resolve the mentioned entities Mohammed et al. (2017).
3. Avoid un-intended match: Some of the words in a sentence coincidently match with an entity name but are not an intended mention of an entity, e.g., the word 'vision' may get matched with 'Computer Vision' which is not intended in question 9 of Figure 1.
Duplicate KG Entity
The intended entity names may be different from the words used in the NLQ, and there can be more than one entity in the KG that has the same name Shen et al. (2019), for example, "Life Sciences" is the name of a research area, as well as a keyword (see KG schema given in Figure 2). The model needs to link the entity using other words, similar to how it is shown in questions 1 and 2 of Figure 1. 6. Implicit Relations Indication: Sometimes words of the NLQ do not even make any mention of the relations involved, however, they need to be inferred Zhang et al. (2018). For example, in question 4 of Figure 1, some of the relations are not mentioned in the question.
Problem Definition: The objective of the proposed approach is to output 1) the mentioned entity(s) (s_i) in the query, 2) the answer entity type, 3) the path or set of predicates, P_q = {p_q^1, p_q^2, p_q^3, ..., p_q^N}, where each p^i ∈ E(G), and 4) the question type. The set P_q is a sequence of predicates such that, if we traverse along these edges from the mentioned entity node(s), we arrive at the answer entity node(s). The final answer is then retrieved from the KG and post-processed according to the outputs of the question type and answer entity type modules. We assume that we have N training samples, each consisting of an NLQ together with the annotations described above.
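To make the annotation scheme concrete, the following is a minimal sketch (our own illustration, not code from the paper) of how one training sample and its labels could be represented; all field names are hypothetical, and the example values are taken from the worked query discussed later in the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KGQASample:
    """One annotated NLQ with the labels described above (illustrative only)."""
    question: str                     # the natural language question
    mentions: List[str]               # n-grams referring to KG entities
    linked_entity_ids: List[str]      # KG entity ids, one per mention
    answer_entity_type: str           # AET, e.g. "researcher.name"
    paths: List[List[str]] = field(default_factory=list)  # one relation sequence per mention
    question_type: str = "Factoid"    # Factoid / Count / Boolean / Simple

sample = KGQASample(
    question="Who is working in automated regulatory compliance and has published a paper in NLP?",
    mentions=["automated regulatory compliance", "NLP"],
    linked_entity_ids=["e5", "e6"],
    answer_entity_type="researcher.name",
    paths=[["key person"], ["has paper", "author"]],
)
print(sample.question_type, sample.paths)
```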
Related Work
In this section, we first present a view of prior work on the KGQA problem as an NLP task, and then on the set of techniques used for this task. Luo et al. (2018) proposed an approach to perform KGQA by mapping a query to its logical form and then converting it to a formal query to extract answers. However, this does not involve joint learning of tasks as proposed in our work.
Multi-Task based approaches: Similar to us, many works like Lukovnikov et al. (2019) rely on jointly learning multiple sub-tasks of the KGQA problem. However, all these approaches focus on single-hop relations only, and therefore we cannot take such approaches as a baseline for our model. In a more complex setting, Shen et al. (2019) proposed a joint learning task for entity linking, path prediction (chain topologies only), and question type. However, their model does not predict the answer entity type. We do not compare our approach with Shen et al. (2019) because they focus on the implicit mention of entities in previous sentences of a dialogue, and also because they do not attempt to predict non-chain topologies or the answer entity type.
Non-Chain Multi-Hop Relations: Agarwal et al. (2019) proposed an embedding-based approach for non-chain multi-hop relation prediction (for a fixed and small set of topologies). However, they perform only the single task of relationship prediction.
Techniques used for KGQA
Transformers and Machine Translation: The Transformer Vaswani et al. (2017) has proved to be one of the most exciting approaches in NLP research, showing dominant results in Vaswani et al. (2017); Devlin et al. (2018), etc. The paper by Lukovnikov et al. (2019) closely resembles our approach, as they proposed a joint-learning based multi-task model using a Transformer. However, they handle only 1-hop questions and consider relation prediction as a classification task; in its current form it cannot be used to solve the variable-length path prediction task required by our motivating example. In an extension of the work on using logical forms for KGQA, Dong and Lapata (2016) proposed the usage of an attention-based seq2seq model to generate the logical form of an input utterance. However, they use an LSTM model and not a Transformer.
Graph-Based Approaches: GraftNet Sun et al. (2018) and PullNet Sun et al. (2019) are graph-based approaches to KGQA. Saxena et al. (2020) presented an approach, EmbedKGQA, for joint learning, again using KG embeddings, in the context of multi-hop relations. However, their approach is not truly a joint model, as they perform answer candidate selection separately, i.e., they arrive at the candidates before executing the model.
Our proposed approach has outperformed Pull-Net and EmbedKGQA on the MetaQA dataset, as shown in Section 5.
Proposed Architecture
In this section, we describe our proposed joint model (CQA-NMT), which is an encoder-decoder architecture; Figure 3 illustrates a high-level view of the proposed model. Joint Model for KGQA: In this paper, we extend BERT to generate paths (or inference chains) and to perform sequence labeling and classification jointly. Details of each module are described next.
Entity Mention Detection Module:
To extract the mentioned entity(s) from an NL query, we perform a sequence labeling task using BERT's hidden states (Figure 4). Sequence labeling is a seq2seq task that tags the input word sequence x = (w_1, w_2, ..., w_T) with the output label sequence y_seq = (y_1, y_2, ..., y_T). In this paper, we augment CQA-NMT to jointly infer the type of the mentioned entity(s) along with its (their) span. We feed the final hidden states of the tokens h_2, h_3, ..., h_{T-1} into a softmax layer to generate the output sequence. We ignore h_1 and h_T, i.e., the [CLS] and [SEP] tokens, as they can never be part of an entity and are only required as a preprocessing step of BERT. Since BERT uses WordPiece tokenization, we assign to each sub-token the same label as its first sub-token. For example, the output of BERT's WordPiece tokenizer for the input 'Jim Henson' is 'Jim Hen ##son'; we assign the labels 'B-Per I-Per I-Per', i.e., the second sub-word '##son' is given the same label as the first sub-word 'Hen'. The output of the softmax layer is y_i^etype = softmax(W_etype · h_i + b_etype) (1), where h_i is the hidden state corresponding to the i-th token.
2. Entity Linking: The output of the Entity Mention Detection Module is a sequence of tokens along with its type (t_i) for a candidate entity. These mentioned entities still need to be linked to a KG node for traversal. In our work, we do not use any neural network for the linking process. Instead, we rely on an ensemble of string-matching algorithms (we used the Levenshtein Distance and SequenceMatcher packages available in Python) and PageRank Page et al. (1999) to break ties between candidate entities. The Entity Mention Detection Module outputs as many entities as are provided in a query, along with their associated types (t_i). To link a mentioned entity in the NL query, we extract the candidates of type t_i from V(G). We then apply three string-matching algorithms, similar to (Mohammed et al., 2017), and take a majority vote to further break ties. Finally, we apply the PageRank algorithm to link the mentioned entity with a KG entity. One way to understand the usefulness of PageRank is to consider the notion of popularity. For example, if a user queries 'Where was Obama born?', the user is more likely referring to the famous Barack Obama than to any other. A detailed description of the entity mention detection and entity linking procedure is shown in Figure 4.
3. Path Prediction Module: To generate the sequence of predicates for an input query, we augment our architecture with a Transformer-based Vaswani et al. (2017) decoder, which is often used in Neural Machine Translation (NMT) tasks. We define y_path = {p_1, p_2, ..., p_N}, where each p_i ∈ E(G). In our work, we do not constrain the number of predicates (hops) required to extract the final answer. Hence, an obvious choice was a decoder module that stops generating predicates once it has predicted the end-of-sentence ([EOS]) token (Figure 4).
4. Question Type and Answer Entity Type Prediction Module: We formulate the determination of the question type and the AET as classification tasks, since we have a discrete label set for both q_type and the Answer Entity Types.
Using the hidden states of the first special token from BERT, i.e., [CLS], we predict y_qtype and y_τ. To jointly model all the tasks using a single architecture, we define our training objective as:
p(y|x) = p(y_etype, y_path, y_qtype, y_τ | x)    (4)
p(y|x) = p(y_qtype | x) · p(y_τ | x) · p(y_etype | x) · p(y_path | x)
The path component of CQA-NMT is defined as
p(y_path | x) = ∏_{t=1}^{T} p(p_t | p_1, p_2, ..., p_{t-1}, x)    (7)
where
y_qtype ∈ {factoid, count, boolean, simple}    (8)
y_τ ∈ {entity types in the KG}    (9)
For training, we maximize the conditional probability p(y_etype, y_path, y_qtype, y_τ | x). The model is fine-tuned end-to-end by minimizing the cross-entropy loss.
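As a rough sketch of how the four jointly trained components and the cross-entropy objective described above could be wired together (this is our own illustration in PyTorch, not the authors' released code; all dimensions, label-set sizes, and the equal weighting of the loss terms are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskKGQAHeads(nn.Module):
    """Illustrative heads over a shared encoder (e.g., BERT final hidden states)."""
    def __init__(self, hidden=768, n_tag_labels=9, n_aet=20, n_qtype=4, n_relations=50):
        super().__init__()
        self.tagger = nn.Linear(hidden, n_tag_labels)       # entity mention detection (per token)
        self.aet_clf = nn.Linear(hidden, n_aet)             # answer entity type (from [CLS])
        self.qtype_clf = nn.Linear(hidden, n_qtype)         # question type (from [CLS])
        self.rel_embed = nn.Embedding(n_relations, hidden)  # relation tokens for the decoder
        layer = nn.TransformerDecoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.rel_out = nn.Linear(hidden, n_relations)       # next-relation distribution

    def forward(self, enc_states, prev_relations):
        # enc_states: (B, T, hidden); index 0 is the [CLS] token
        tag_logits = self.tagger(enc_states)                          # (B, T, n_tag_labels)
        cls = enc_states[:, 0]
        aet_logits, qtype_logits = self.aet_clf(cls), self.qtype_clf(cls)
        tgt = self.rel_embed(prev_relations)                           # teacher forcing
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        dec = self.decoder(tgt=tgt, memory=enc_states, tgt_mask=causal)
        path_logits = self.rel_out(dec)                                # (B, L, n_relations)
        return tag_logits, aet_logits, qtype_logits, path_logits

def joint_loss(tag_logits, tags, aet_logits, aet, qtype_logits, qtype,
               path_logits, path, pad_id=0):
    """Sum of cross-entropy terms, one per sub-task (equal weights assumed)."""
    return (F.cross_entropy(tag_logits.flatten(0, 1), tags.flatten())
            + F.cross_entropy(aet_logits, aet)
            + F.cross_entropy(qtype_logits, qtype)
            + F.cross_entropy(path_logits.flatten(0, 1), path.flatten(),
                              ignore_index=pad_id))

# Toy forward/backward pass with random tensors standing in for encoder outputs
B, T, L = 2, 12, 3
heads = MultiTaskKGQAHeads()
outs = heads(torch.randn(B, T, 768), torch.randint(1, 50, (B, L)))
loss = joint_loss(outs[0], torch.randint(0, 9, (B, T)), outs[1], torch.randint(0, 20, (B,)),
                  outs[2], torch.randint(0, 4, (B,)), outs[3], torch.randint(1, 50, (B, L)))
loss.backward()
print(float(loss))
```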
Experiments and System details
In this section, we first introduce the datasets used for our experiments. We pre-process all NLQs (of all datasets) by downcasing and tokenizing.
Datasets, Metrices, and Baselines
LOCA Dataset: We introduce a new challenging dataset, LOCA, which consists of 5010 entities, 42 unique predicates, and a total of 45,869 facts. The dataset has 3,275 one- or multi-hop questions that have 0, 1, or more entities mentioned in the questions. It contains multiple question types such as count, factoid, and boolean. For questions with multiple entities, we used the operator ";" as a delimiter to separate the paths corresponding to each entity (Figure 1, queries 5, 7, and 8). For the scope of this paper, we considered queries involving only intersection, which can be replaced with other operators like union, set-difference, etc., without loss of generality. The operator ";" helps us detect and predict the different topologies involved in an NLQ. MetaQA: The dataset proposed in Zhang et al. (2018) consists of 3 different datasets, namely Vanilla, NTM, and Audio Data. All the datasets contain single and multi-hop (maximum 3-hop) questions from the movie domain. For our experiments, we used the Vanilla and NTM versions of the dataset and the KB as provided in Zhang et al. (2018). Since both versions of MetaQA do not consider the AET and question type, we assigned a default label to both tasks.
Metrics: We used different metrics for the different subtasks. Since a query can contain partially mentioned entities, we used the F-score to evaluate the mention and type detection module. For inference chain (path) prediction, question type, and answer entity type prediction, we use accuracy. In Table 2, similar to prior work, we use Hits@1 to evaluate query-answer accuracy.
Baselines
Training Details
All the baselines and the proposed approach were trained on a DGX 32GB NVIDIA GPU using the TensorFlow Abadi et al. (2015) and Texar Hu et al. (2018) libraries. For CQA-NMT, we used the small uncased version of the pre-trained BERT Devlin et al. (2018) model. The Adam Kingma and Ba (2014) optimizer was employed with a learning rate of 2e-5 for BERT and the default learning rate for the other parameters. The training objective of each model was optimized using the cross-entropy loss, and the best models were selected using the validation loss. Dropout values were set to 0.5 and were optimized as described in Srivastava et al. (2014). For BERT, we used 10% of the total training data for the warmup phase Vaswani et al. (2017). Finally, for the division of the data into train, test, and dev sets, we used the same split as provided by Zhang et al. (2018) for the MetaQA dataset and a ratio of 80-10-10 for the LOCA dataset.
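A hedged sketch of the differential learning-rate setup described above (illustrative only; `bert` and `heads` are placeholder modules, and the 1e-3 value is simply Adam's documented default rate, which the text refers to as "default"):

```python
import torch

# Hypothetical modules: a pre-trained encoder and the task-specific heads
bert = torch.nn.Linear(10, 10)    # stand-in for the BERT encoder
heads = torch.nn.Linear(10, 4)    # stand-in for the multi-task heads

optimizer = torch.optim.Adam([
    {"params": bert.parameters(), "lr": 2e-5},  # fine-tune BERT gently
    {"params": heads.parameters()},             # heads fall back to the default lr below
], lr=1e-3)  # Adam's default learning rate for groups without their own lr
```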
Main Results
In this section, we report the results of the experiments on the MetaQA and the LOCA dataset. Next, we provide insights into the model outputs and results of error-analysis performed on LOCA dataset.
LOCA
The experimental results for the LOCA dataset are shown in the last row of Table 2. The results affirm that the proposed approach outperforms the baselines. We observed that the baselines' inability to handle duplicate KG entities (challenge 4 in Section 2) limits their performance. Additionally, the ability of the NMT Bahdanau et al. (2014) model to effectively handle complex and unknown topologies helped us retrieve answers with better accuracy for variable-hop (v-hop) queries.
MetaQA
The experimental results for MetaQA are shown in table 2. For Vanilla MetaQA, we achieved better answer accuracy on 1-hop and 3-hop settings. However, in a 2-hop setting, we were able to achieve comparable results to the state-of-the-art. An increment of about 2% and 4.9% Hits@1 can be seen in the 1-hop and 3-hop settings.
To obtain the performance of each baseline on v-hop (variable-hop) dataset, we re-use the existing models for 1-hop, 2-hop, and 3-hop and assume that there is an oracle which can redirect query to the correct model. Thus estimated accuracy of various approaches is shown in the 4 th row of Table 2, while the actual results on v-hop dataset are shown in the 5 th row. It is evident that CQA-NMT outperforms all the baselines on MetaQA dataset in variable hop setting.
To gauge the effectiveness and robustness of our model, we used the same models trained on the vanilla MetaQA dataset and evaluated their performance on NTM MetaQA, i.e., in a zero-shot setting. Here we achieved better results on the 1-hop and 3-hop settings. The worse performance of CQA-NMT on MetaQA-NTM (2-hop) can be attributed to the zero-shot setting: unlike VRN, we did not train CQA-NMT on the MetaQA-NTM dataset, but only on the MetaQA vanilla dataset. (Table 3 caption: Effects of reducing the supervision from our approach. The numbers in italics are obtained without any supervision.)
Further Results and Analysis
Advantage of Transformers: The LSTM-based implementation of mentioned entity detection could not detect different entity types for the same phrase "deep learning" in queries 1 and 2 of Figure 1, whereas the BERT-based approach was able to. We therefore infer that this could be due to key features of BERT such as multi-head attention, WordPiece embeddings, positional embeddings, and/or segment embeddings. Moreover, in a different context, it was able to assign different types to entities with the same mentions (queries 1 and 2 in Figure 1). Effects of using less annotation: To study the importance of annotation in our approach, we removed several components from our proposed approach and studied the effects (Table 3). We first studied CQA-NMT after removing all the supervision and used heuristics-based approaches for AET and mention detection (both approaches were taken from Mohammed et al. (2017)). The shortest path, similar to Sun et al. (2018, 2019), between the linked KG entity and the AET was then taken to retrieve the answers. This setting (row 1) results in the worst performance. In rows 2, 3, and 4 of Table 3, we kept only one component of CQA-NMT as supervised and applied the heuristics mentioned above for the others. As evident from these rows, mention detection plays a crucial role in extracting the correct answer (a jump in the range of 2%-5% in answer accuracy). A similar analysis can be found in Dong and Lapata (2016). From Table 3, we infer that joint training not only improves the scores of the individual components (in the range of 15%-20%) but also the overall answer accuracy. We observed that challenges 5 and 6 from Section 2 were handled significantly better after jointly training CQA-NMT for AET and mention detection (row 5). Motivation for PageRank: When we have more than one candidate entity for a mentioned entity, we want to choose the one with higher popularity (Section 4). One of the most well-established measures of the popularity of nodes in graphs is PageRank, and therefore we have used it. Further, when more than one entity is mentioned in an NLQ, there can be more than one candidate entity for each of them; the graph-based approach also helps us choose candidates that are well connected. We also experimented with other measures such as the in-degree and out-degree of nodes; however, for the LOCA dataset, we achieved an increment of 22% on the entity linking task using PageRank, as compared to the in-degree and out-degree measures. PageRank also helped in reducing challenges 1 and 2 from Section 2.
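To illustrate the PageRank tie-breaking idea (our own sketch; the toy graph and candidate names are made up), popularity scores over the KG can decide between candidates that survive string matching:

```python
import networkx as nx

# Toy directed KG in which two nodes could match the surface form "Obama"
kg = nx.DiGraph()
kg.add_edges_from([
    ("USA", "Barack Obama"), ("Nobel Prize", "Barack Obama"),
    ("Michelle Obama", "Barack Obama"), ("Smallville", "Obama (village)"),
])
scores = nx.pagerank(kg)  # popularity proxy over the whole KG

candidates = ["Barack Obama", "Obama (village)"]  # e.g., survivors of string matching
best = max(candidates, key=lambda node: scores[node])
print(best)  # the better-connected candidate wins the tie-break
```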
Retrieval of answer(s) from KG
The final objective of a KG-QA system is to retrieve the correct answer from the KG for a query q. To this end, we use the outputs of the different components of CQA-NMT to complete pre-written SPARQL sketches. We defined a set of rules for different question types and used simple mapping rules to map the queries to the sketches. For example, consider the query q = "Who is working in automated regulatory compliance and has published a paper in NLP?". The output of CQA-NMT contains all the information that is required to form a structured query such as SPARQL. The outputs of CQA-NMT are: 1. Linked Entities: {e5: automated regulatory compliance (sub-area), e6: NLP (keyword)}; 2. Inference Chain: key person; has paper, author; 3. Answer Entity Type (AET): researcher.name; 4. Question Type (q_type): Factoid. Using the q_type information, we fill a sketch with the other outputs. The generated SPARQL query is: SELECT DISTINCT ?uri WHERE {<e5> <key person> <?uri> . <e6> <has paper> <?x> . <?x> <author> <?uri>}. Here, e5 and e6 are the unique identifiers assigned to 'automated regulatory compliance' (of type sub-area) and 'NLP' (of type keyword).
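As an illustration of this sketch-filling step, here is a minimal, hypothetical mapping from the CQA-NMT outputs to the factoid SPARQL sketch; the sketch string and helper function are assumptions made for exposition and simply reproduce the example query above.

# Hypothetical illustration of filling a pre-written SPARQL sketch for a factoid question.
FACTOID_SKETCH = (
    "SELECT DISTINCT ?uri WHERE {{"
    "<{e1}> <{r1}> <?uri> . "
    "<{e2}> <{r2}> <?x> . "
    "<?x> <{r3}> <?uri>}}"
)

def fill_factoid_sketch(linked_entities, inference_chain):
    # linked_entities: KG entity ids, e.g. ["e5", "e6"]
    # inference_chain: relations, e.g. ["key person", "has paper", "author"]
    e1, e2 = linked_entities
    r1, r2, r3 = inference_chain
    return FACTOID_SKETCH.format(e1=e1, r1=r1, e2=e2, r2=r2, r3=r3)

print(fill_factoid_sketch(["e5", "e6"], ["key person", "has paper", "author"]))
# -> SELECT DISTINCT ?uri WHERE {<e5> <key person> <?uri> . <e6> <has paper> <?x> . <?x> <author> <?uri>}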
Conclusion
We presented a complex version of the KGQA problem, which involves the mention of multiple entities in the question. Multiple sequences of relationships, combined in complex topologies, are required to answer such questions. Such questions, although they must be answered in real-world industrial settings, cannot be answered using prior approaches. We propose a novel CQA-NMT model to answer such questions and have performed a detailed comparison of our approach with prior art on the MetaQA and LOCA datasets. We have shown that CQA-NMT not only solves a more complex task, but also performs better on the MetaQA dataset than the baseline approaches. | 6,836.2 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Measurement of the low-energy antideuteron inelastic cross section
In this Letter, we report the first measurement of the inelastic cross section for antideuteron-nucleus interactions at low particle momenta, covering a range of $0.3 \leq p<4$ GeV/$c$. The measurement is carried out using p-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of $\sqrt{s_{\rm{NN}}}$ = 5.02 TeV, recorded with the ALICE detector at the CERN LHC and utilizing the detector material as an absorber for antideuterons and antiprotons. The extracted raw primary antiparticle-to-particle ratios are compared to the results from detailed ALICE simulations based on the GEANT4 toolkit for the propagation of antiparticles through the detector material. The analysis of the raw primary (anti)proton spectra serves as a benchmark for this study, since their hadronic interaction cross sections are well constrained experimentally. The first measurement of the inelastic cross section for antideuteron-nucleus interactions averaged over the ALICE detector material with atomic mass numbers $\langle A \rangle$ = 17.4 and 31.8 is obtained. The measured inelastic cross section points to a possible excess with respect to the Glauber model parameterization used in GEANT4 in the lowest momentum interval of $0.3 \leq p<0.47$ GeV/$c$ up to a factor 2.1. This result is relevant for the understanding of antimatter propagation and the contributions to antinuclei production from cosmic ray interactions within the interstellar medium. In addition, the momentum range covered by this measurement is of particular importance to evaluate signal predictions for indirect dark-matter searches.
The possible presence of antinuclei in the Milky Way could be explained either by reactions of high-energy cosmic rays with the interstellar medium or by more exotic sources, such as dark-matter annihilation [1]. Some dark-matter models [2][3][4][5][6] predict that low-energy antideuterons are a promising probe for indirect dark-matter searches since the contributions from cosmic-ray interactions in the energy range below 1-2 GeV per nucleon [7][8][9] are expected to be rather small. For this reason, the search for antinuclei has been intensified in recent years with new satellite and balloon-borne experiments such as AMS-02 [10] and GAPS [11]. So far, only antiprotons have been detected in space [12] and no clear evidence of heavier antinuclei production has been found yet [13,14], but dedicated analyses searching for antideuteron and antihelium are currently ongoing [3,15].
In order to get a reliable baseline for antideuteron production at low energies, realistic models of cosmic-ray transport are necessary. In addition, the predicted flux of antinuclei from dark-matter annihilation depends on the production mechanism and on the antinuclei transport properties within the interstellar medium. There are three main relevant mechanisms that determine the signal and background rates: i) the antideuteron production, either in p-A or A-A reactions between cosmic rays and the interstellar medium depending on the element abundance, or in dark-matter annihilation processes, ii) the antideuteron propagation in the galaxy, the heliosphere and the Earth's atmosphere, and iii) inelastic processes such as nuclear breakup, charge exchange or annihilation that occur during propagation and inside the detectors of the experiments. These three mechanisms must be measured as precisely as possible to interpret correctly any future measurement in satellite and balloon-borne experiments. While the propagation has been constrained by measuring different nuclei from primary and secondary cosmic rays [16][17][18][19], accelerator experiments can be used to study the production and the inelastic scattering cross sections.
Antimatter is copiously produced in high-energy collisions of protons and heavy ions [20,21]. This environment is hence well suited to study antinuclei properties. At RHIC, the STAR and PHENIX Collaborations have measured p, d, 3He and 4He yields [22-25] employing Au-Au collisions at center-of-mass energies per nucleon-nucleon pair of √s_NN = 130 GeV and √s_NN = 200 GeV. At the LHC, the ALICE Collaboration has studied p, d, 3He and 4He production in pp, p-Pb and Pb-Pb collisions at center-of-mass energies per nucleon pair from 0.9 to 13 TeV [26][27][28][29][30][31][32], and the yields obtained for A ≥ 2 have been interpreted by means of coalescence or statistical hadronization models [33][34][35][36]. The LHC measurements combined with different coalescence models have been employed to estimate the antideuteron and antihelium flux from cosmic-ray interactions measurable by the AMS-02 and GAPS experiments [15,[37][38][39]]. Since the inelastic cross sections for antinuclei-nuclei interactions are measured precisely only for p̄ but are barely known for heavier antinuclei, all the available calculations rely on poorly constrained parameterizations. For antideuterons, the inelastic cross sections have been measured on several materials only for two momentum values, p = 13.3 GeV/c [40] and p = 25 GeV/c [41]. However, the low-momentum range accessible to ALICE (p ≤ 5 GeV/c) remains unexplored. For antihelium, no measurement of inelastic cross sections is available.
In this Letter we present a method to evaluate the inelastic cross section of antinuclei based on the measurement of raw reconstructed antiparticle-to-particle ratios. Using ratios instead of individual particle yields makes it possible to extract the antideuteron and antiproton inelastic cross sections independently of their production cross sections and over a broad momentum range. We report the first measurement of the inelastic cross section for antideuteron-nucleus interactions in the momentum range 0.3 ≤ p < 4 GeV/c. The results presented are based on data collected during the 2016 p-Pb LHC run at √s_NN = 5.02 TeV. The performance of the ALICE detector and the description of its subsystems can be found in [42,43]. Collision events are selected using the information from the V0 detector, which consists of two plastic scintillator arrays located on either side of the interaction point at forward and backward pseudorapidities. A simultaneous signal in both arrays was used as a minimum-bias (MB) trigger. In total, about 600 million MB events are selected for further analysis, corresponding to an integrated luminosity of L_int^MB = 287 µb^-1 with a relative uncertainty of 3.7% [44]. The charged-particle tracks are reconstructed in the ALICE central barrel with the Inner Tracking System (ITS) and the Time Projection Chamber (TPC), which are located within a solenoid that provides a homogeneous magnetic field of 0.5 T along the beam axis. The ITS consists of six cylindrical layers of silicon detectors located at radial distances from the beam axis between 3.9 cm and 43 cm. The TPC extends radially from r = 85 cm to r = 247 cm, is 5 m long, and was filled with an Ar-CO2 gas mixture during the 2016 data-taking period. These two subsystems provide full azimuthal coverage for charged-particle trajectories in the pseudorapidity range |η_lab| < 0.8. The selected tracks must fulfill basic quality criteria established in antinuclei analyses in p-Pb collisions [31]. These criteria guarantee a resolution of about 2% on the momentum reconstructed at the primary vertex (p_primary) in this analysis.
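The logic behind using a raw ratio can be summarized with a simplified absorption argument (a schematic sketch of the idea, not the exact formula used in the analysis): the production cross sections cancel in the ratio, while absorption in the detector material does not,

$$\left(\frac{\bar d}{d}\right)_{\mathrm{raw}}(p) \;\simeq\; \left(\frac{\bar d}{d}\right)_{\mathrm{prim}} \times \frac{\exp\!\left[-\int n(\ell)\,\sigma_{\mathrm{inel}}^{\bar d A}(p)\,\mathrm{d}\ell\right]}{\exp\!\left[-\int n(\ell)\,\sigma_{\mathrm{inel}}^{d A}(p)\,\mathrm{d}\ell\right]},$$

where n(ℓ) is the number density of nuclei along the trajectory through the detector material. A variation of the inelastic cross section of the antiparticle therefore maps directly onto a variation of the raw ratio.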
The TPC is also used for the particle identification (PID) of (anti)protons and (anti)deuterons via their specific energy loss dE/dx in the gas volume, with a resolution of about 5% [45]. The n(σ_i^TPC) variable represents the PID response in the TPC, expressed as the deviation between the measured and expected dE/dx for a particle species i, normalized by the detector resolution σ. The expected dE/dx is computed with a parameterized Bethe-Bloch curve. The PID purity in all momentum intervals is found to be higher than 88% and 47% for the (anti)proton and (anti)deuteron samples, respectively. The background is subtracted from the squared-mass spectra with a two-component fit [31].
The determination of the inelastic cross section requires precise knowledge of the ALICE detector material. The MC parameterization of the ALICE material budget up to the outer TPC vessel was validated in dedicated studies.
The selected (anti)proton and (anti)deuteron candidates include a substantial amount of background from secondary (anti)particles that originate from weak decays of hyperons or from spallation reactions in the detector material. Following the procedure described in [26, 46, 47], the contribution from secondary (anti)particles is subtracted by fitting the distribution of the measured distance of closest approach (DCA) of the track candidates to the primary vertex with templates from Monte Carlo (MC) simulations. In contrast to secondary particles, primary particles point back to the primary vertex; hence, a distinct structure peaked at zero in the DCA distribution characterizes the primary particles, whereas secondary particles correspond to a flat DCA distribution, and their contribution can therefore be separated [26, 28]. The fraction of secondary (anti)protons is found to be around 20% in the lowest momentum interval analysed (0.3 ≤ p_primary < 0.4 GeV/c) and decreases monotonically down to ∼1.5% at high momenta. The main contribution of secondary (anti)protons stems from weak decays. For deuterons, the dominant contribution of secondary particles comes from spallation processes in the detector material that lead to the ejection of fragments such as protons, neutrons or deuterons. The fraction of secondary deuterons is found to be 23.5% in the lowest momentum interval (0.5 ≤ p_primary < 0.6 GeV/c) and decreases exponentially to negligible values at p_primary ∼ 1.4 GeV/c. For antiprotons and antideuterons the contribution from spallation processes is absent. The feed-down from weak decays of hyperons and hypernuclei has a negligible impact on the measured ratios [31,46,48]. Hence, the antideuteron sample is composed entirely of primaries. The total number of selected candidates amounts to 7.57 × 10^7 protons, 6.52 × 10^7 antiprotons, 2.52 × 10^5 deuterons and 1.98 × 10^5 antideuterons. The momentum spectra are corrected for the background from secondary particles but not for the detector efficiency or for losses of (anti)particles in the detector material, so they are referred to as raw primary spectra. Figure 1 shows the p̄/p and d̄/d ratios as a function of p_primary. The systematic uncertainties due to tracking, particle identification and the contribution from secondaries are considered, and the total uncertainty is obtained as the quadratic sum of the individual contributions. It increases from 1% (2%) at low momentum up to 2% (6%) in the high-momentum region for p̄/p (d̄/d). The uncertainty on the primordial antimatter-to-matter ratio produced in the collisions is considered as a global uncertainty. The primordial p̄/p ratio of 0.984 ± 0.015 is extrapolated from available measurements [46,47] and, under the assumption that the (anti)deuteron yield is proportional to the squared yield of (anti)protons [49,50], the primary d̄/d ratio amounts to 0.968 ± 0.030. These values are used as input for detailed MC simulations based on the GEANT4 toolkit for the propagation of (anti)particles through the detector material [51]. For the description of antinucleus-nucleus inelastic cross sections, GEANT4 relies on a Glauber calculation convoluted with a MC averaging method [52]. Figure 1 shows that the GEANT4-based simulations are able to describe the p̄/p ratio and are in qualitative agreement with the data for the d̄/d ratio.
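A minimal sketch of the template-fit idea used to separate primary and secondary particles is given below; the binning, variable names, and the use of numpy/scipy are illustrative assumptions, not the collaboration's actual fitting code.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical two-template fit of a DCA distribution:
# data ≈ f_prim * primary_template + (1 - f_prim) * secondary_template
def primary_fraction(dca_data, primary_mc, secondary_mc, bins):
    data_h, _ = np.histogram(dca_data, bins=bins)
    prim_h, _ = np.histogram(primary_mc, bins=bins)
    sec_h, _ = np.histogram(secondary_mc, bins=bins)
    prim_h = prim_h / prim_h.sum()          # normalize templates to unit integral
    sec_h = sec_h / sec_h.sum()
    n_tot = data_h.sum()

    def chi2(f_prim):
        model = n_tot * (f_prim * prim_h + (1.0 - f_prim) * sec_h)
        err2 = np.maximum(data_h, 1.0)      # crude Poisson-like errors
        return np.sum((data_h - model) ** 2 / err2)

    result = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
    return result.x                          # fitted fraction of primary particles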
The sensitivity of the antiparticle-to-particle ratios to modifications of the elastic and inelastic cross sections was benchmarked with the p̄/p measurement. The (anti)proton cross sections have been measured by various experiments [53][54][55][56][57][58][59], and the results are well described by the GEANT4 parameterization. The blue boxes in Fig. 2 indicate the ±1σ limits for the measured p̄/p ratio, where 1σ corresponds to the quadratic sum of the statistical, systematic and global uncertainties. The green and magenta bands show the simulated ratios with a variation of ±25% of the inelastic antiproton cross section, along with the simulations using the default cross section (gray band). Only a variation of the total inelastic cross section has been carried out. The widths of the bands correspond to the quadratic sum of the contributions from two additional variations: i) the elastic cross sections of protons and antiprotons are changed independently by ±20%, which leads to a 1.5% modification of the ratio, and ii) the inelastic proton-nucleus cross section is varied by 3.5%, which is the uncertainty of the GEANT4 parameterizations obtained from fits of the experimental data for this cross section; this variation yields a modification of about 0.5% in the ratio. These systematic checks demonstrate that the antiparticle-to-particle ratio is mainly sensitive to the variation of the inelastic cross section and can therefore be used to measure the antideuteron inelastic cross section.
Extending this recipe, an iterative and momentum-dependent variation of σ_inel(p̄) within the GEANT4 simulations was carried out to obtain p̄/p ratios that correspond to the ±1σ and ±2σ experimental limits. The resulting ±1σ and ±2σ limits for σ_inel(p̄) are presented in panels a) and b) of Fig. 3 together with the standard GEANT4 parameterizations. Panel a) refers to the ITS+TPC analysis and hence corresponds to the inelastic interaction with nuclei that have average charge and mass numbers ⟨Z⟩ = 8.5 and ⟨A⟩ = 17.4; panel b) refers to the analysis additionally employing the TOF and corresponds to ⟨Z⟩ = 14.8 and ⟨A⟩ = 31.8. The inelastic cross sections shown in Fig. 3 are estimated as a function of the momentum p at which the inelastic interaction occurs. Due to the continuous energy loss of the particle inside the detector material, this momentum is lower than the momentum p_primary reconstructed at the primary vertex.
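The iterative variation can be pictured as a per-momentum-interval search for the scale factor applied to the inelastic cross section; the simulate_ratio callable below is a stand-in (hypothetical) for a full rerun of the GEANT4-based simulation chain.

# Hypothetical sketch of the iterative cross-section variation. The raw
# antiparticle-to-particle ratio decreases monotonically as the inelastic
# cross section (and hence the absorption) grows, so a bisection search works.
def find_scale_factor(target_ratio, simulate_ratio, lo=0.2, hi=5.0, tol=1e-3):
    # simulate_ratio(scale): rerun the transport simulation with
    # sigma_inel multiplied by `scale` and return the simulated raw ratio.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if simulate_ratio(mid) > target_ratio:
            lo = mid      # too little absorption -> need a larger cross section
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

Repeating this search for the ratios corresponding to the ±1σ and ±2σ experimental limits in each momentum interval yields the corresponding limits on the inelastic cross section.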
The corresponding correction is estimated using MC simulations by looking at the average values of the annihilation momentum distribution in each p_primary interval. The RMS of these distributions is then propagated to the uncertainty of the cross-section measurement. The minimum momentum reconstructed at the primary vertex amounts to p_primary = 0.3 GeV/c for antiprotons and to p_primary = 0.5 GeV/c for antideuterons, and the energy-loss correction transforms these values to p = 0.18 GeV/c and p = 0.3 GeV/c, respectively. For momenta p > 0.7 GeV/c, the antiproton inelastic cross section is found to be in good agreement with the GEANT4 parameterizations, which in turn describe the existing experimental data well [52]. These results thus validate the analysis procedure, which can then be applied to (anti)deuterons.
In contrast to antideuterons, the deuteron inelastic cross section has been measured on several materials at various momenta [60,61], and the data are well described by the GEANT4 parameterizations. The antideuteron inelastic cross section can therefore be constrained via the comparison of the experimental d̄/d ratio with GEANT4-based MC simulations in which σ_inel(d̄) is varied in a similar way as for antiprotons. For this purpose, the same uncertainties are considered: i) the variation of the elastic cross sections of (anti)deuterons by ±20%, which results in a 2% deviation of the ratio, ii) the variation of the inelastic deuteron cross section by 7%, which corresponds to the precision of the GEANT4 parameterizations (1% uncertainty), and iii) the uncertainty from the primordial d̄/d ratio (3.0%).
The resulting upper and lower limits on σ_inel(d̄) for targets with ⟨Z⟩ = 8.5, ⟨A⟩ = 17.4 and ⟨Z⟩ = 14.8, ⟨A⟩ = 31.8 are shown in panels c) and d) of Fig. 3, respectively. The extracted inelastic cross sections presented here include all inelastic antideuteron processes where the antideuteron is destroyed and represent the first measurement in this low-momentum range.
While the measured σ_inel(d̄) is found to be in agreement with the GEANT4 implementation within the 0.9 ≤ p < 4.0 GeV/c momentum range, it rises faster than the simulated parameterization in the momentum range 0.3 ≤ p < 0.9 GeV/c, reaching a maximal discrepancy of a factor of 2.1 in the interval 0.3 ≤ p < 0.47 GeV/c. These measurements can now help to better understand antideuteron inelastic processes at low momenta and to improve the parameterization of the inelastic cross section used in GEANT4. Additionally, these results are now available for models of the propagation of antideuterons within the interstellar medium [3,7,38] and will impact the flux expectations at low momentum near Earth.
In summary, we have shown how the ALICE detector can be used as an absorber to study the antinuclei inelastic scattering cross section on detector material. The antiparticle-to-particle ratios method was validated using (anti)protons and the sensitivity of the ratio to the variation of the inelastic cross section was demonstrated. In this way, the first measurement of the inelastic scattering cross section of antideuterons was performed on an effective target with mean charge number Z = 8.5 and mass number A = 17.4 in the momentum range 0.3 ≤ p < 0.9 GeV/c, and with Z = 14.8 and A = 31.8 in 0.9 ≤ p < 4.0 GeV/c. These cross sections can now be used in propagation models of antideuterons within the interstellar medium for dark-matter searches. Future studies of high-statistics pp, p-Pb and Pb-Pb data collected during the second (2015-2018) and third (scheduled to start in 2021) LHC run campaigns should allow the measurement of inelastic cross sections of heavier antinuclei such as 3 He and 4 He in a similar way and the improvement of the current antideuteron results.
Acknowledgements
The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex.
[34] J. I. Kapusta, "Mechanisms for deuteron production in relativistic nuclear collisions", Phys. Rev. C 21 (1980). | 3,967.6 | 2020-05-22T00:00:00.000 | [
"Physics"
] |
The association between Korean employed workers’ on-call work and health problems, injuries
Background On-call work is a form of work that requires the person to work at any time during the on-call period. Thus, on-call work is often regarded as one of the most severe stress factors. This study investigates the associations between on-call work and health problems and injuries. Methods This study was based on the 3rd Korean Working Conditions Survey. A total of 29,246 employed workers who had been working for at least 1 year were included. Logistic regression analysis was performed to investigate the association between on-call work and health problems and injuries. Results The odds ratios for on-call workers in terms of physical health problems, psychological health problems, and injuries were 1.33 (95% confidence interval [CI] 1.22-1.44), 1.31 (95% CI 1.08-1.60), and 2.76 (95% CI 2.26-3.37), respectively. Analysis of the detailed symptoms revealed odds ratios in on-call workers of 2.06 for hearing problems (95% CI 1.63-2.62); 1.71 for skin problems (95% CI 1.38-2.12); 1.22 for back pain (95% CI 1.08-1.38); 1.23 for muscular pain in the upper limbs (95% CI 1.12-1.34); 1.27 for muscular pain in the lower limbs (95% CI 1.15-1.40); 1.46 for headache or eye fatigue (95% CI 1.32-1.60); 1.37 for abdominal pain (95% CI 1.02-1.85); 1.43 for depression or anxiety disorders (95% CI 1.07-1.93); 1.36 for fatigue (95% CI 1.24-1.49); and 1.41 for insomnia and general sleep difficulties (95% CI 1.13-1.76). Conclusions The present study found that on-call work was associated with an increased risk of health problems and injuries. This study analysed a broad range of the job spectrum among Korean employed workers; future studies are necessary to determine the effects of on-call work in various job groups.
Background
On-call work, by definition, is a form of work that requires the person to work at any time during the on-call period. Therefore, a person often has to work during both daytime and nighttime hours, leading to stressful situations for the worker [1]. Thus, on-call work is often regarded as one of the most severe stress factors [2][3][4]. This is reflected in the European Working Conditions Survey (EWCS), performed to establish the foundation for creating a healthy work environment; an on-call work assessment was first introduced in the 5th EWCS performed in 2010. Similarly, in Korea, assessment of on-call work was first included in the 2nd Korean Working Conditions Survey in 2010, which defined on-call work as "immediately providing work or service if contacted or called". The present study also used this definition.
Previous studies have reported various effects of on-call work on workers. The health-related effects of on-call work include stress-related symptoms such as exhaustion, irritation, sleep disorders, memory disorders, and headache [4]. A previous study on nurses reported that on-call work was associated with musculoskeletal symptoms including pain in the back and the muscles of the upper limbs [5]. The neuropsychological effects of on-call work include depressed mood [6], sleep deprivation and insomnia [7], daytime sleepiness [7,8], reduced cognitive function [9,10], and difficulty concentrating [10]. Furthermore, a study on residents showed that long on-call work periods were associated with occupational injuries and malpractice [11].
On-call work may also contribute to socio-psychological stress. A previous study of Korean physicians showed an association between the frequency of on-call duty during the week and work-related stress [12]. Other studies showed relationships between on-call work and job dissatisfaction [2,13] and between on-call work and turnover intention [14], suggesting that job dissatisfaction from on-call work can affect turnover intention. In the US, the annual loss of operating budget due to job changes is an estimated 5% [15]. Therefore, increased turnover of employees due to on-call work may also become a social cost problem. Moreover, irregular working hours due to on-call work are associated with work-life imbalance [16], and a study of on-call workers and their spouses demonstrated the negative effects of on-call work on family life, including constraints on their lives, forced sacrifices by spouses, and communication issues among family members [17]. These results clearly show that on-call work affects not only work life but also life outside of work, and that it negatively affects other family members as well. Thus, on-call work not only has negative physical and psychological effects but may also result in socio-psychological stress, eventually affecting the quality of life of workers and their family members in addition to the social costs.
However, there are very few studies about on-call work in Korea. Moreover, previous studies have generally focused on specific occupations (e.g., nurses and physicians), and no previous study on the effects of on-call work on health and other aspects has included a broad range of the job spectrum. This study aimed to investigate the possible associations between on-call work and negative effects on health and injuries by utilizing data on on-call work-related personal and occupational characteristics and work environment from the 3rd Korean Working Conditions Survey, performed in 2011 on a large sample of workers representative of Korean workers nationwide.
Study population
This study was based on the 3rd Korean Working Conditions Survey performed by the Korean Occupational Safety and Health Research Institute in 2011. The survey was conducted on a representative sample of workers (≥15 years). A worker was defined as "a person who during the reference week did any work for pay or profit". A total of 50,032 subjects were included in the survey, and in-person interviews were performed by professional interviewers from professional survey services after obtaining informed consent. This study focused on the employed workers among these 50,032 subjects. An employed worker was defined as "a person who signed either an expressed or an implied employment contract with an individual, family, or business and is receiving a salary, a daily wage, or spot goods in return for their labor". Furthermore, since we aimed to investigate the effect of on-call work on workers, only employed workers who had been working for at least 1 year were included. Eventually, a total of 29,246 subjects were included in this study. Although the age limit for the Working Conditions Survey was 15 years, since the subjects included in this study had been working for at least 1 year, the subjects in this study were at least 16 years of age.
On-call work
The subject was defined as working on-call if they responded "Yes" to the question "Do you work on-call (immediately providing work or service if contacted or called)?"
Health problems and injuries
The subject was defined as having a health problem or injury if they answered "Yes" to the question "Over the last 12 months, did you suffer from any of the following health problems?". The questionnaire contained sub-categories for health problems and injuries including hearing problems, skin problems, back pain, muscular pain in the upper limbs, muscular pain in the lower limbs, headaches or eyestrain, abdominal pain, respiratory difficulties, cardiovascular diseases, injuries, depression or anxiety, fatigue, and insomnia or general sleep difficulties. Any report of hearing problems, skin problems, backache, muscular pain in the upper limbs, muscular pain in the lower limbs, headaches or eyestrain, abdominal pain, respiratory difficulties, cardiovascular diseases, or fatigue was defined as a physical health problem. Similarly, if the subject reported depression or anxiety, insomnia, or general sleep difficulties, the subject was defined as having a psychological health problem.
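As a minimal sketch of how the dichotomous outcome variables can be derived from the questionnaire items (the column names and the use of pandas are our own illustrative assumptions, not the survey's actual variable names):

import pandas as pd

PHYSICAL_ITEMS = ["hearing", "skin", "back_pain", "upper_limb_pain",
                  "lower_limb_pain", "headache_eyestrain", "abdominal_pain",
                  "respiratory", "cardiovascular", "fatigue"]
PSYCH_ITEMS = ["depression_anxiety", "insomnia_sleep_difficulty"]

def add_outcomes(df: pd.DataFrame) -> pd.DataFrame:
    # Each item column holds 1 for a "Yes" answer and 0 otherwise.
    df = df.copy()
    df["physical_problem"] = df[PHYSICAL_ITEMS].max(axis=1)      # any physical symptom
    df["psychological_problem"] = df[PSYCH_ITEMS].max(axis=1)    # any psychological symptom
    return df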
General and occupational characteristics
The general characteristics of the subjects investigated in this study included gender, age, education level, income level, alcohol consumption, smoking, obesity, and hypertension. For obesity and hypertension, the subject was defined as having a chronic condition if they answered "Yes" to the question "Were you ever diagnosed with chronic obesity or hypertension by your physician?". However, for obesity, if the subject answered "Not obese" to the additional question "What is your current status in terms of obesity?", the subject was not considered obese.
The occupational characteristics included job type, employment type, working hours, and shift work. Two methods were used to categorize the job types. First, the subjects were asked, "What is the job type that best describes your current occupation?". Based on their responses, professional workers and senior managers were defined as "Professional", general office workers as "Office", sales workers and service providers as "Service", and skilled/semi-skilled/unskilled occupations and agricultural/forestry/fishery workers as "Physical". Second, based on the middle categorization of the Korean Employment Classification of Occupation (KECO), the subjects' occupations were categorized into 24 job groups. Employment type was defined as regular or temporary, and working hours were categorized as ≤40 h, 41-60 h, or >60 h per week.
Work environment and work-related stress
Work environment was assessed in terms of exposure to physical, chemical, and ergonomic factors. If the subject answered "All of the time", "Almost all of the time", "Around 3/4 of the time", "Around half of the time" or "Around 1/4 of the time" to the question "Are you exposed at work to ...?", the subject was considered to be exposed to the factor. If the subject answered "Almost never" or "Never" to the same question, the subject was not considered to be exposed to the factor. The physical factors included vibrations, noise, high temperatures, and low temperatures. The chemical factors included breathing in smoke, fumes, powder or dust, breathing in vapors such as solvents and thinners, handling or being in skin contact with chemical products or substances, and tobacco smoke from other people. Lastly, the ergonomic factors included tiring or painful positions, lifting or moving people, carrying or moving heavy loads, standing, and repetitive hand or arm movements.
For work-related stress, if the subject answered "Always", "Most of the time", or "Sometimes" to the question "For each of the following statements, please select the response which best describes your work situation -You experience stress in your work.", the subject was defined as having work-related stress. If the answer was "Rarely" or "Never", the subject was defined as being free of work-related stress.
Data analysis
This study utilized IBM SPSS Statistics for Windows version 19.0 (IBM Corp: Armonk, NY, USA) for analysis of the data after applying weighting adjustments. Personal and occupational characteristics, work environment, and work-related stress were analysed using frequency analysis. Chi-square analysis was used to determine the associations between on-call work and characteristics of the subject and on-call work and health problems and injuries. In order to determine the risk of health problems and injuries due to on-call work, bivariate logistic regression analysis was performed to calculate the odds ratios. Model I was the analysis after adjusting for personal characteristics, while Model II was additionally adjusted for occupational characteristics including job type, employment type, working hours, and shift work. Model III further adjusted for physical, chemical, and ergonomic factors of the work environment, while Model IV had additional adjustment for work-related stress in addition to all controls from Model III.
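The progressive adjustment of Models I-IV can be sketched as nested logistic regressions. The original analysis was carried out in SPSS with weighting adjustments, so the statsmodels formulas and variable names below are illustrative assumptions only:

import numpy as np
import statsmodels.formula.api as smf

PERSONAL = "sex + age_group + education + income + smoking + alcohol + obesity + hypertension"
OCCUPATIONAL = "job_type + employment_type + working_hours + shift_work"
ENVIRONMENT = "physical_exposure + chemical_exposure + ergonomic_exposure"

def fit_models(df, outcome="injury"):
    # Nested models: each adds one block of covariates to the previous one.
    # on_call is assumed to be coded as a 0/1 numeric variable.
    formulas = {
        "Model I":   f"{outcome} ~ on_call + {PERSONAL}",
        "Model II":  f"{outcome} ~ on_call + {PERSONAL} + {OCCUPATIONAL}",
        "Model III": f"{outcome} ~ on_call + {PERSONAL} + {OCCUPATIONAL} + {ENVIRONMENT}",
        "Model IV":  f"{outcome} ~ on_call + {PERSONAL} + {OCCUPATIONAL} + {ENVIRONMENT} + work_stress",
    }
    results = {}
    for name, formula in formulas.items():
        fit = smf.logit(formula, data=df).fit(disp=False)
        odds_ratio = float(np.exp(fit.params["on_call"]))
        ci_low, ci_high = np.exp(fit.conf_int().loc["on_call"])
        results[name] = (odds_ratio, (float(ci_low), float(ci_high)))
    return results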
Results
General and occupational characteristics of the study participants
A total of 29,246 subjects were included in this study. Frequency analysis and descriptive statistics were used to analyze the personal and occupational characteristics, work environment, and work-related stress. The personal characteristics of the subjects were as follows: 60.9% of the subjects were male; 59.1% were between 30 and 49 years of age; 55.7% had education levels higher than college graduation; 53.5% were non-smokers; 51.5% drank alcohol once a week or less; 98.2% were not obese; and 95.3% had not been diagnosed with hypertension. The largest group of subjects were "physical" workers (33.6%), and only 8.7% were "professional" workers. Moreover, a majority of the subjects were regular employees, and 51.5% worked between 41 and 60 h per week, while 39.6% worked 40 h or less per week. Most of the subjects were not working on a rotating basis (i.e., no shift work). More than half of the subjects responded that they did not have exposure to physical, chemical, or ergonomic factors in their work environment, but 72.1% reported that they suffered from work-related stress.
Difference of general and occupational characteristics according to on-call work
A total of 2,722 subjects (9.3%) were on-call workers; the remaining 26,524 (90.7%) were not. Chi-square analysis was used to analyze the differences in each factor according to on-call work status. The results revealed significant differences between on-call workers and non-on-call workers in all factors except hypertension and working hours. Assessment of the general characteristics revealed a higher rate of on-call workers in the following groups: male, ≥50 years, high school graduates, monthly income ≥2 million won, smokers, and obese subjects. Among job types, there were more on-call workers in the "physical" group, while the "professional" group had fewer on-call workers. The rate of on-call workers was also higher for temporary and shift workers. Assessment of the work environment revealed a higher rate of on-call workers exposed to physical, chemical, or ergonomic factors, as well as to work-related stress (Table 1).
The subjects were divided into 24 job classifications based on the KECO categorization and were analyzed for on-call work. The highest rate of on-call workers (31.7%) was found in the law, police, firefighting, and prison-related job group. The job classifications with higher rates of on-call workers than healthcare and medical-related jobs (12.8%) included soldiers (30.2%), driving and transportation-related jobs (14.6%), information and communications-related jobs (13.4%), and construction-related jobs (12.9%). Moreover, although the proportions of on-call workers were low for management, accounting, and office-related jobs (8.7%) and sales jobs (8.1%), the absolute numbers of on-call workers in these job classifications were quite high, at 21.2% (578 subjects) and 7.8% (214 subjects) of all on-call workers, respectively (Table 2).
On-call work and its association with health problems and injuries
Chi-square analysis was performed to determine the relationship between on-call work and health problems and injuries. The results indicated that all health problems except for cardiovascular disease had statistically significant higher prevalence among on-call workers (Table 3).
Bivariate logistic regression analysis was performed to determine the effect of on-call work on the risk of health problems and injuries. The risks were higher for on-call workers than for non-on-call workers for the majority of health problems and injuries. This outcome did not change even after adjusting for personal and occupational characteristics, work environment, and work-related stress. The odds ratios for on-call workers in terms of physical health problems, psychological health problems, and injuries were 1.33 (95% CI 1.22-1.44), 1.31 (95% CI 1.08-1.60), and 2.76 (95% CI 2.26-3.37), respectively.
Discussion
This study utilized data from the 3rd Korean Working Conditions Survey on employed workers and analysed the relationships between their characteristics and health problems and injuries, as well as the impact of on-call work on health problems and injuries. Analysis of the subjects' characteristics according to on-call work showed differences in both personal characteristics (gender, age, education, income, smoking, alcohol consumption, and obesity) and occupational characteristics (job type, employment type, and shift work). Notably, for employment type, temporary workers were more often on-call workers; the unstable employment status of these temporary workers likely forces them to cope with stressful work environments. Moreover, previous studies reported that on-call workers have extended working hours (both daytime and nighttime) [1] and are exposed to both shift and night-time work [14], which may explain why on-call workers feel more fatigued in general. In fact, the current analysis revealed a higher rate of on-call workers among shift workers; however, there was no statistically significant difference in the rate of on-call workers based on working hours. On-call workers had increased exposure to all of the analysed physical, chemical, and ergonomic factors, suggesting that these workers tend to work in worse work environments. The rate of on-call workers was also higher in the group that experienced work-related stress, further supporting the previous finding that on-call work is a key factor in work-related stress [2][3][4]. The rates of on-call workers were higher in law, police, firefighting, and prison-related jobs; soldiers; driving and transportation-related jobs; information and communications-related jobs; and construction-related jobs than in healthcare and medical-related jobs, which have traditionally been thought of as a job category with a high rate of on-call workers. This result indicates that, in addition to traditional studies of on-call workers focusing on healthcare and medical-related jobs, studies of other job groups are necessary.
Analysis of the effects of on-call work on health problems and injuries showed higher odds ratios for the majority of health-related issues and injuries in on-call workers compared to non-on-call workers, with the exceptions of respiratory difficulties and cardiovascular diseases. These results did not change even after adjusting for personal and occupational characteristics, work environment, and work-related stress. To our knowledge, no previous study has investigated the relationship between on-call work and hearing problems, skin problems, or abdominal pain. One potential explanation for the association of on-call work with these problems is circadian disruption. In a study of sudden sensorineural hearing loss patients, 61.8% of the subjects reported insomnia before suffering from hearing loss, and circadian clock gene expression was reduced compared to the control group [18]. Circadian disruption can be caused by the irregular sleeping patterns that may occur during on-call work, which may in turn cause hearing problems. Previous studies have shown that exposure to light at night causes circadian disruption and decreased melatonin synthesis [19]. Melatonin may protect against psoriasis because it regulates the inflammatory response and antioxidant activity [20]. A previous study on shift workers has also shown that circadian disruption and decreased melatonin are associated with psoriasis [21]. In addition, in a mouse-based experiment, it was possible to regulate the circadian clock to control psoriasis-like skin inflammation by controlling IL-23 expression in T cells [22], suggesting that circadian disruption may be related to inflammatory skin problems. Our study shows that the risk of skin problems is higher for on-call workers even after adjusting for shift work. This may be because on-call workers also have a disrupted circadian rhythm, which may be caused by their irregular working hours and light exposure at night. Previous studies have demonstrated the relationship between on-call work and indigestion [4] and the possible induction of gastrointestinal tract diseases by an imbalanced lifestyle with irregular eating habits and lack of sleep [23]. Thus, on-call workers, who have irregular eating habits since they are expected to work whenever needed, were expected to have various gastrointestinal tract-related issues and diseases. However, as with hearing problems, additional studies are necessary in order to identify the potential mediating factors of skin problems and abdominal pain. Circadian disruption is only one potential explanation for the association of on-call work with hearing problems, skin problems, and abdominal pain, so further studies are required.
A previous study reported that on-call work is associated with musculoskeletal pain (i.e., back and shoulder pain) [5], a result in agreement with our findings that on-call work is associated with back and upper/lower limb muscle pain. Furthermore, one possible explanation for these musculoskeletal symptoms was a lack of a rest and recovery period [24]. On-call workers have a shorter time for rest and recovery due to irregular working hours, which could lead to musculoskeletal symptoms in these workers.
A previous study of Finnish anesthesiologists reported that on-call work is associated with exhaustion, frustration, sleep disorders, memory disorders, and headache. Greater work-related burdens from on-call work resulted in an increased severity of these symptoms, and these symptoms disappeared during vacation [4]. These results indicate a strong relationship between on-call work and symptoms such as headache or fatigue. From our study of Korean workers, on-call work was associated with headache, eye fatigue, and general fatigue.
A previous study discussed depressed mood in on-call workers [6], and another suggested a possible relationship between on-call work and negative emotions [25]. Furthermore, on-call work was associated with daytime sleepiness, insomnia, and sleep deprivation [7,8], and possibly with dysthymic disorder from continued sleep deprivation [26]. These results suggest that on-call work may be a cause of sleep deprivation and sleep disorders, as well as of negative emotions in on-call workers who are constantly sleep deprived due to the nature of their work. Moreover, previous studies showed an association between on-call work and work dissatisfaction [2,13], and the results of a study on the relationship between work dissatisfaction and general mental health (i.e., depression and anxiety) [27] indicate that work dissatisfaction from on-call work might cause depression or anxiety. These results are further supported by the findings of the present study.
A previous study suggested an association between on-call work and work-related injuries [11]; the present study also found this association in Korean workers. On-call work is associated with reduced cognitive function and concentration [9,10], and studies have shown that sleep deprivation can cause reduced cognitive function [28,29]. Therefore, reduced cognitive function and concentration from on-call work can increase the risk of injuries.
This study has several strengths. To our knowledge, it is the first Korean study to determine the association between on-call work and health problems and injuries. Furthermore, unlike traditional studies that focused on a specific job group, this study included a wide range of job groups in order to investigate the relationship between on-call work and health problems and injuries. In the present study, higher rates of on-call workers were found in other job groups than in the healthcare-related job group, which was the target job group of previous studies; this finding suggests the need for future studies focusing on other job groups. In addition, this study utilized data from the Korean Working Conditions Survey, and the 29,246 subjects included in this study were representative of Korean employed workers nationwide. Finally, the use of trained investigators to perform the survey minimized arbitrary interpretation of the survey responses.
This study had several limitations. First, the cross-sectional study design based on data from the Korean Working Conditions Survey made it difficult to identify clear causal relationships between on-call work and health problems and injuries; future studies are needed to identify such causal relationships. Second, the only measurement tools for the variables were the responses to the Korean Working Conditions Survey. More specifically, for the work environment, exposures were determined by dichotomous measures of the survey responses rather than actual assessments of the work environments. Furthermore, instead of objective measurements such as physician interviews, blood tests, or medical imaging, self-reporting was used to assess health problems. Third, there may be misclassification of job types: because only four job types were used, jobs of a different character may belong to the same category. Lastly, the assessed medical history of the subjects included only the presence of obesity and hypertension.
Conclusion
This is one of the first studies to identify a significant relationship between on-call work and health problems and injuries in Korean workers. Moreover, this study included a wide range of job groups in order to investigate the relationship between on-call work and health problems and injuries. Future studies are required to identify clear causal relationships; similarly, additional discussion is needed on how to reduce the adverse effects of on-call work.
Abbreviations EWCS: European working conditions survey; KECO: Korean employment classification of occupation | 5,332.6 | 2018-03-20T00:00:00.000 | [
"Physics"
] |
Filtrations on Springer fiber cohomology and Kostka polynomials
We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.
Introduction
Let g be a semisimple complex Lie algebra, N ⊆ g* the nilpotent cone (of elements whose coadjoint orbit is stable under dilations), W the Weyl group, and G a simply connected complex Lie group with Lie G = g. The Springer correspondence associates to every irreducible representation χ of W a pair of a nilpotent coadjoint orbit O_χ ⊆ g* and a local system L_χ on O_χ. Let B be the flag variety and ρ : T*B → N the Springer resolution. Then the cohomology of T*B, or equivalently of B, is endowed by the Springer correspondence with a W-action. The graded multiplicity space of each irreducible representation χ of W has Hilbert series given by the generalized Kostka polynomial K_{g,χ}(t), which in the case g = sl_n is an ordinary one-variable Kostka polynomial. Precisely, we set

K_{g,χ}(t) := Σ_{i ≥ 0} t^i dim Hom_W(χ, H^{2 dim B − 2i}(B, C)),

where dim always refers to the complex dimension. Note that, as a graded W-module, H^*(B, C) ≅ Sym h/((Sym h)^W_+), putting h in degree two, with ((Sym h)^W_+) the ideal generated by the positive-degree W-invariant elements of Sym h.
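For orientation, here is a small worked example of this definition (our own illustration, not taken from the paper), for g = sl_2, where B = P^1 and W = S_2; we use the convention that the top degree of the coinvariant algebra carries the sign representation. With $H^*(\mathbb{P}^1,\mathbb{C}) \cong \mathbb{C}[h]/(h^2)$ ($h$ in degree $2$), the degree-$0$ part is the trivial representation of $S_2$ and the degree-$2$ part is the sign representation, so
$$K_{\mathfrak{sl}_2,\,\mathrm{triv}}(t) = \sum_{i \ge 0} t^i \dim\operatorname{Hom}_{S_2}\!\big(\mathrm{triv},\, H^{2-2i}(\mathbb{P}^1)\big) = t, \qquad K_{\mathfrak{sl}_2,\,\mathrm{sign}}(t) = 1,$$
and the sum $\sum_{\chi} K_{\mathfrak{g},\chi}(x^2)\,K_{\mathfrak{g},\chi}(y^{-2})$ appearing in (1.1) below equals $1 + x^2 y^{-2}$.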
By a theorem of [6], which was conjectured earlier (see [4,5] for details), the Poisson-de Rham homology HP^{DR}_*(N) is isomorphic, as a graded vector space, to the cohomology of T*B. In the case of the nilpotent cone, the Poisson-de Rham homology does not see the W-action, since W does not act on N, unlike on the cohomology of T*B. On the other hand, N has a dilation action which endows HP^{DR}_*(N) with a second grading, which is not seen in H^*(T*B). It is interesting to compute this grading. Moreover, this difference makes it clear that the isomorphism of [5] cannot be canonical, and it is interesting to correct this deficiency. Lusztig suggested that the bigraded Hilbert series of HP^{DR}_*(N) should be

Σ_{χ ∈ Irrep(W)} K_{g,χ}(x^2) K_{g,χ}(y^{−2}).   (1.1)

In this paper we prove this conjecture, in the following stronger form, as a simple application of a theorem of Hotta and Kashiwara. Let σ denote the sign representation of W. Theorem 1.1 gives a canonical bigraded isomorphism between HP^{DR}_*(N) and a space of W-homomorphisms built out of σ, Sym h/((Sym h)^W_+), and the cohomology of T*B (see Remark 1.2 for a simplified form). Here the weight grading on the LHS corresponds to the grading on Sym h/((Sym h)^W_+) on the RHS (with h in degree two), and the second grading is by the asterisk *. This isomorphism accomplishes our goal of producing a canonical isomorphism.
Remark 1.2 Using the homotopy equivalence T*B ≃ B together with Poincaré duality for B, we can rewrite the theorem more simply as HP^{DR}_*(N) ≅ Hom_W(Sym h/((Sym h)^W_+), H^*(B)), but the way it is written is more natural; for example, the aforementioned general conjecture states HP^{DR}_*(X) ≅ H^{dim X̃ − *}(X̃) for symplectic resolutions X̃ → X.
We go further and produce canonical filtrations on the cohomology of the flag variety whose Hilbert series is given in (1.1):

Theorem 1.3 For every element λ ∈ h*_reg, there is a canonical associated filtration F_λ on H^{2 dim B − *}(T*B) whose associated graded vector space is HP^{DR}_*(N). This is W-equivariant: F_{w(λ)} = w(F_λ).
The filtration is compatible with the cohomological grading; hence, the associated graded vector space is bigraded. As a result we obtain canonical filtrations on irreducible representations of Weyl groups.
Corollary 1.4
To every λ ∈ h*_reg, there is associated a canonical filtration on every irreducible representation χ of W whose associated graded vector space has Hilbert series K_{g,χ}(y^{−2}).
As we observe, the construction of Corollary 1.4 actually generalizes from Weyl groups to arbitrary complex reflection groups. We will study the resulting filtrations in detail in future work.
We deduce many consequences and extensions of the above results to Slodowy slices, W -algebras, and Springer fibers. In more detail, let φ ∈ N be any point. Then one can consider the Slodowy slice S φ ∩ N in N to the coadjoint orbit O φ := G · φ ⊆ N . (We recall its construction in Sect. 2). The ring of functions O(S φ ∩ N ) is also called a (centrally reduced) classical W-algebra.
The above results allow us to deduce the grading on the zeroth Poisson homology, as well as the filtration on the quantizations of S φ ∩ N , which are (centrally reduced) quantum W-algebras. Geometrically, these naturally assign to the top cohomology of each Springer fiber ρ −1 (φ) a h * reg -family of filtrations whose Hilbert series we compute.
As a consequence, when g = sl n , and hence Y = S φ ∩ N is symplectically dual to a corresponding coadjoint orbit Y ! ⊆ N , we prove a case of a conjecture of Proudfoot, which states (for general symplectic dual cones Y and Y ! ) that HP 0 (O(Y )) ∼ = IH * (Y ! ) as graded vector spaces, with IH * (Y ! ) the intersection cohomology of Y ! .
We also give formulas for the higher Poisson-de Rham homology of Slodowy slices and for the zeroth Hochschild homology of their quantizations by finite W-algebras.

Remark 1.6 Note that (1.1) implies that the weight grading on HP^{DR}_*(N) is nonpositive. This is somewhat unusual; for example, whenever the zeroth Poisson homology of a conical Poisson variety is at least two-dimensional it will have (some) positive weights, as will happen for many Slodowy slices in the nilpotent cone, cf. Corollary 2.5. Note that, for varieties admitting a symplectic resolution for which [8, Conjecture 1.3.(b)] holds, this condition that the zeroth Poisson homology has dimension at least two is equivalent to the statement that the fiber over the vertex in the symplectic resolution has multiple Lagrangian components. (In particular, it is not irreducible.)

The proofs, given in Sect. 4, involve a study of the Harish-Chandra D-module on N, following Hotta and Kashiwara in [12]. In Sect. 6.2, we give a generalization of our main result to the setting of mirabolic D-modules, which computes the weakly equivariant structure of the mirabolic Harish-Chandra D-module on gl_n × C^n, defined in [9].
We begin the body of the paper in Sect. 2 with a detailed statement of our results on the grading associated with the cohomology of Springer fibers as well as to the Poisson and Hochschild homology of W-algebras. The application to Proudfoot's conjecture on symplectic duality is then given in Sect. 3. In the remaining sections, we prove our results using D-modules, recalling first some of the necessary background. In the last section, we explain an alternative proof of Lusztig's formula using Hamiltonian reduction. We then use this to generalize this result to the mirabolic setting, i.e., the setting of SL n -equivariant D-modules on sl n × C n .
Springer fibers and W -algebras
Let φ ∈ N . We may then consider the Springer fiber ρ −1 (φ) ⊆ T * B, which reduces to B itself in the case φ = 0.
There is a beautiful construction of a transverse slice to O_φ in g*, called the Kostant-Slodowy slice and denoted S_φ, which is an affine linear space defined as follows. Let ⟨−, −⟩ be a nondegenerate invariant bilinear form on g (e.g., the Killing form) and Φ : g → g* the resulting isomorphism. Then e := Φ^{−1}(φ) is ad-nilpotent. The Jacobson-Morozov theorem states that the element e can be extended (non-uniquely) to a so-called sl_2-triple (e, h, f) of elements of g satisfying the relations [h, e] = 2e, [h, f] = −2f, and [e, f] = h. Then S_φ can be defined as Φ(e + ker(ad f)). Moreover, Kazhdan defined a canonical contracting C^× action on S_φ to φ, by λ · ψ = λ^{2 − ad(h)^*}(ψ); the induced grading on O(S_φ) is called the Kazhdan grading. Using this action, the resolution ρ restricts to a C^×-equivariant symplectic resolution ρ : ρ^{−1}(S_φ ∩ N) → S_φ ∩ N, and the resulting isomorphism HP^{DR}_*(S_φ ∩ N) ≅ H^{2 dim ρ^{−1}(φ) − *}(ρ^{−1}(φ)) was proved in [6, Theorem 1.13]. Since the Poisson-de Rham homology is bigraded by the homological and Kazhdan gradings, this yields a bigrading on the cohomology of the Springer fiber H^*(ρ^{−1}(φ)), which was not studied in [6]. Our goal is to compute this grading. For now, we describe the grading in top degree, H^{2 dim ρ^{−1}(φ)}(ρ^{−1}(φ)) (see Corollary 2.5 for the general formula in terms of intersection cohomology). This has an explicit algebraic interpretation in terms of W-algebras. Namely, the finite W-algebra W_φ is the coordinate ring of S_φ, and its central reduction W^0_φ is the coordinate ring of S_φ ∩ N. Let us recall their explicit algebraic description, along with the Poisson structure, following [10] and [13].
Since ad(h) is semisimple, we get a decomposition g = ⊕_i g_i, where g_i is the eigenspace of ad(h) of eigenvalue i. Equip g with the skew-symmetric form ω_φ(x, y) := φ([x, y]). This restricts to a nondegenerate pairing on g_{−1}. Fix a Lagrangian subspace l ⊆ g_{−1}. Then we define the nilpotent Lie subalgebra m_φ := l ⊕ ⊕_{i ≤ −2} g_i, on which φ restricts to a character. We also define the shift of m_φ by this character, and write m_φ Sym g for the ideal of Sym g generated by the elements x − φ(x) for x ∈ m_φ. The W-algebra W_φ is then defined by W_φ := (Sym g / m_φ Sym g)^{ad m_φ}. In other words, this is the Hamiltonian reduction of g* with respect to the Lie algebra m_φ and its character φ. By construction, W_φ is a Poisson algebra (with respect to the Poisson bracket induced from Sym g). In more detail, if x + m_φ Sym g ∈ W_φ, then {x, m_φ} ⊆ m_φ Sym g, and hence {x, m_φ Sym g} ⊆ m_φ Sym g. Thus, the Poisson bracket on Sym g induces one on W_φ. Given any central character η : Z(Sym g) = (Sym g)^g → C, we can form the central reduction W^η_φ := W_φ / ker(η)W_φ. Here Z(Sym g) denotes the Poisson center of Sym g. Then W_φ and W^0_φ are the coordinate rings of the Kostant-Slodowy slices: W_φ ≅ O(S_φ) and W^0_φ ≅ O(S_φ ∩ N). Note that, for general η, Spec W^η_φ is a deformation of S_φ ∩ N, namely the Kostant-Slodowy slice S_φ intersected with the closure of the regular coadjoint orbit on which every f ∈ (Sym g)^g restricts to the constant function η(f). Moreover, W^η_φ is a filtered algebra with gr W^η_φ = W^0_φ. Recall that the zeroth Poisson homology of a Poisson algebra A is HP_0(A) := A/{A, A}. Then, as a corollary of our main theorem, we compute the graded structure of the zeroth Poisson homology of W_φ and W^0_φ, as well as the filtered structure of W^η_φ. We will need to use the Springer correspondence, which assigns (injectively) to each irreducible representation χ ∈ Irrep(W) the pair of a nilpotent coadjoint orbit O_χ ⊆ N and an irreducible local system L_χ on O_χ [see the beginning of Sect. 4 for an explicit definition of (O_χ, L_χ)]. Finally, following [19, (8.2)], define the polynomial P_φ(y) (2.4) appearing in Corollary 2.1 below.
Corollary 2.1
The Hilbert series of HP_0(W^0_φ), as well as of gr HP_0(W^η_φ) for all η, is P_φ(y). Moreover, HP_0(W_φ) is a free graded module over (Sym g)^g, generated by a graded vector space of the same Hilbert series.
Note that W_φ, and hence HP_0(W_φ), inherit actions of Stab_G(e, h, f) which commute with the dilation action and hence preserve degrees; on HP_0(W_φ) this action factors through the finite group π_0 Stab_G(e, h, f), since the Lie algebra of Stab_G(e, h, f) ⊆ G acts trivially. As observed in [6], this group can be identified with π_0 Stab_G(φ). Thus, using the π_0 Stab_G(φ) action, we can refine the corollary to yield the following. Let V^*_χ be a graded vector space with Hilbert series K_{g,χ}(y^{−2}).
Finally, for W φ itself, Corollary 2.1 yields where r is the semisimple rank of g and d 1 , . . . , d r are the degrees of the fundamental invariants (i.e., one-half the polynomial degrees of generators of (Sym g) g ∼ = (Sym h) W , for h ⊆ g a Cartan subalgebra with Weyl group W ). Similarly, Corollary 2.2 implies we can write, as graded representations of π 0 Stab G (φ),
Hochschild cohomology of quantum W-algebras
Parallel to the previous corollaries, we can consider the quantum analogue of W φ , defined as: where η is a character of Z (U g) ∼ = (Sym g) g . By [6, Theorem 1.10.(i)], gr HH 0 (W q,η φ ) ∼ = HP 0 (W 0 φ ), and it follows that gr HH 0 (W q φ ) ∼ = HP 0 (W φ ). Thus, the following is an immediate consequence of Corollary 2.1 and its proof is omitted: is a free filtered module over Z (U g) generated by a filtered vector space whose associated graded vector space has Hilbert series P φ (y).
Similarly, the associated graded vector space of HH 0 (W q φ ) has Hilbert series (2.8)
Higher cohomology of the Springer fiber
The next result describes the bigrading on the (full) cohomology of the Springer fiber, which is analogous to the associated graded vector space of H * (T * B) appearing in Theorem 1.3. That is, we compute the Poisson-de Rham homology of the Slodowy slices to the nilpotent cone. We do not attempt here to construct actual filtrations on the Springer fiber cohomology.
When O_χ ⊇ O_φ, we will consider the varieties S_{χ,φ} := S_φ ∩ \overline{O_χ}, called S3 varieties (after Slodowy, Spaltenstein, and Springer). For example, in the case g = sl_n, the L_χ are all trivial, and by [16, Theorem 2] the answer is expressed through K_{λμ}(x), where λ and μ are the partitions of n corresponding to χ and φ, respectively, and K_{λμ}(x) is the ordinary one-variable Kostka polynomial.
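For readers less familiar with these polynomials, here are two tiny standard values, included purely for illustration.

```latex
% Kostka--Foulkes polynomials for n = 2 and n = 3 (standard small examples):
K_{(2),(1,1)}(x) = x, \qquad K_{(1,1),(1,1)}(x) = 1, \qquad
K_{(2,1),(1,1,1)}(x) = x + x^{2}.
% Each reduces to the ordinary Kostka number at x = 1:
K_{(2),(1,1)}(1) = 1, \qquad K_{(2,1),(1,1,1)}(1) = 2 .
```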
Corollary 2.5 The bigraded Hilbert series of HP
where the sum is taken over all In the case g = sl n , we obtain (in slightly rewritten form) a special case of a statement proved modulo Proudfoot's conjecture in [19, Proposition 6.1]. (We show in the next subsection that the relevant case of Proudfoot's conjecture also follows from our result.) Let X λ,μ := S χ,φ where λ is the partition of n corresponding to χ and μ is the partition of n corresponding to φ. Then X (n)μ = S φ ∩ N .
Corollary 2.6 The bigraded Hilbert series of HP
Here n_μ = Σ_i (i − 1)μ_i is the partition statistic of μ, which equals ½ dim O_φ, and ≤ is the dominance ordering on partitions.
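As a quick numerical illustration of the statistic (the partition is chosen arbitrarily here):

```latex
% For the partition \mu = (3,2,1):
n_{\mu} \;=\; \sum_i (i-1)\,\mu_i \;=\; 0\cdot 3 \;+\; 1\cdot 2 \;+\; 2\cdot 1 \;=\; 4 .
```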
Proudfoot's conjecture on symplectic duality
In [18, 3.4], Proudfoot conjectured that, in the case that X and X ! are symplectic dual cones in the sense of [3, 10.15] (with Poisson brackets of degree two), then HP 0 (O(X )) ∼ = IH * (X ! ) as graded vector spaces. We deduce this now in a special case. Let g = sl r and let σ be the sign representation of the symmetric group W = S r . Let χ be an irreducible representation of W given by some partition λ of n. Here and following we denote the dual partition of λ by λ t . Let φ and φ be nilpotent elements whose Jordan blocks are given by the parts of λ and λ t , respectively. Let X = S φ ∩ N and X ! = O φ . The varieties X and X ! are symplectically dual (cf., e.g., [3, §10.2.2]).
Proof This follows from Corollary 2.1 by the proof of [19, Proposition 8.9]. Here is an outline for the reader's convenience. By (4.6) (noting that in this case all of the local systems L_χ are trivial),
We thus need to establish the identity of [19, (8.3)]. (It follows from palindromicity of K_{g,χ}(t), Poincaré duality for H^*(B), and the dimension formula for O_{g,χ}.)
Remark 3.2 By Corollary 2.2, we can also write a formula similar to the one above which holds in arbitrary type: as graded π_0 Stab_G(φ) = π_1(O_φ) representations,
Recollections on D-modules on N
We consider two weakly C×-equivariant D-modules on N, studied in [12]. The first is the pushforward ρ_* Ω_{T*B}, which is actually strongly equivariant (since ρ is equivariant). In [12], it is explained that this D-module has a canonical W-action, and we obtain a decomposition of W-equivariant D-modules, ρ_* Ω_{T*B} ≅ ⊕_{χ ∈ Irrep(W)} χ ⊗ IC(O_χ, L_χ), where (O_χ, L_χ) is the pair of a nilpotent coadjoint orbit O_χ and an irreducible local system L_χ on O_χ; this is one way to define the Springer correspondence χ → (O_χ, L_χ).
The second weakly C × -equivariant D-module on N is defined using the embedding N ⊆ g * . By definition (following Kashiwara), D-modules on N are canonically identified with right D-modules on g * supported on N , via the maps M → i * M and N → i ! N . Since g * is smooth and affine, right D-modules there can be defined as right modules over the ring D(g * ) of differential operators with polynomial coefficients. We have the map ad : g → D(g * ), such that ad(x) is the vector field acting by the adjoint action: precisely, ad( which extends uniquely to a derivation. Then, we consider the D-module M(N ) := i ! (ad(g) + I (N )) · D(g * )\D(g * ) . This is weakly C × -equivariant with respect to the square of the dilation action on g * ⊇ N , since ad(g) and I (N ) are spanned by homogeneous elements. It is not strongly equivariant in general, since the Euler vector field need not be contained in D(g * ) · (ad(g) + I (N )). (In fact, one can see that it is never strongly equivariant, for example using our main result below.) For the convenience of the reader, we recall the definition of the latter and the reason why it is the same as the D-module we define (although we will not actually need it for the arguments of this paper); for a more detailed treatment see [4,5]. In the case X = N and V = g * , actually V itself is Poisson. To identify M(N ) as in [5] with (ad(g) + I (N )) · D(g * )\D(g * ), one can argue as follows. In this case, N → g * is a Poisson embedding, i.e., the Poisson bracket on g * preserves the ideal of N and induces the bracket on N . In particular, H (N ) = H (g * )| N . Thus, it suffices to show that (ad(g) + I (N )) · D(g * ) = (H (g * ) + I (N )) · D(g * ). In fact, more is true: ad(g) · D(g * ) = H (g * ) · D(g * ), and this holds for an arbitrary finite-dimensional Lie algebra g. To see this, first observe that ad(g) ⊆ H (g * ), so we only need to show that H (g * ) ⊆ ad(g) · D(g * ). Next, for every f ∈ O(g * ), we have Thus, H (g * ) ⊆ ad(g) · D(g * ), as desired.
Hotta and Kashiwara's theorem
We recall the following result of Hotta and Kashiwara (the case λ = 0 of [12, Theorem 6.1]). Let Har(h*) ⊆ O(h*) = Sym h be the subspace of harmonic polynomials, i.e., Har(h*) := { P ∈ O(h*) : Q(∂_x)P = 0 for all Q ∈ (Sym h*)^W_+ }, where we denote the embedding Sym h* → D(h*) as constant-coefficient differential operators by P ↦ P(∂_x), (Sym h*)^W is the W-invariant subalgebra, and (Sym h*)^W_+ is the augmentation ideal (of operators whose constant term is zero). Let σ be the sign representation of W, placed in degree zero.
Theorem 4.2 (Hotta and Kashiwara) There is a canonical isomorphism of weakly equivariant D-modules,
The above result follows from Theorem 5.2, Proposition 6.3.1, and Theorem 6.1 of [12], and the canonical isomorphism is originally constructed as where the superscript F denotes the Fourier transform. (There also the D-modules are on g and h rather than g * and h * , so Har(h * ) appears.) But, for a weakly C × -equivariant D-module M and graded vector space V , the fact that F(Eu g * ) = − Eu g * − dim g implies that (M ⊗ V ) F ∼ = M F ⊗ V * . We recover the statement of the theorem.
Proof of Theorem 1.1
Theorem 1.1 is an immediate application of Hotta and Kashiwara's theorem. Namely, pushing forward to a point, and (T * B)).
Lemma 4.3 The sheaf π_* M is a finite rank C×-equivariant vector bundle on h*/W.
We can furthermore consider an alternative description of N from [12]: Let ι 1 : O(h * ) W → O(g * ) G and ι 2 : (Sym h * ) W → (Sym g * ) G be the Chevalley isomorphisms (considering Sym h * ⊆ D(h * ) the constant coefficient operators and similarly for g * ). Define This is the right corresponding to the left D(g * × h * )-module denoted N in [12,Theorem 4.2]. By [12,Theorem 4.2], we know that N is a simple holonomic D(h * × g * )-module, and moreover, where f = π × pr 2 : g * → h * × g * . By definition, there is a surjection N → N , and since N is simple, N ∼ = N . Now letting π : h * × g * → h * be the first projection, we have π * N ∼ = π * Ω g * . Decomposing π as the composition g * → B × h * → h * , we obtain that the right D-module H −i π * N corresponds (via the right-left D-module correspondence followed by the Riemann-Hilbert correspondence) to the trivial local system on h * with fibers Finally, by definition, we have inducing a Morita equivalence between the two algebras. In particular, The latter is a graded finitely generated projective O(h * )-module and hence also a finitely generated projective O(h * ) W -module. Thus, so is π * M. In other words, π * M is a C × -equivariant finite rank vector bundle over h * /W . This completes the proof of the lemma.
Observe that, since the map q is flat and C × -equivariant, Lemma 4.3 implies that q * π * M is a finite rank C × -equivariant vector bundle on h * . For every λ ∈ h * reg , we can restrict q * π * M to the line C · λ, and we obtain a C × -equivariant vector bundle on the line A 1 , L λ := (q * π * M)| C·λ . But finite rank C × -equivariant vector bundles on the line are well known to be the same thing as finite-dimensional filtered vector spaces: given a finite-dimensional filtered vector space F · V , one takes the associated C[t]-module L = i∈Z F ≤i V · t i , and the opposite direction is given by setting V := L| t=1 and taking the filtration by the image of weight spaces j≤i L j . Moreover, the fiber L| 0 at zero is the associated graded vector space gr V . Therefore, (q * π * M)| C·λ is nothing but a collection of filtrations on the underlying cohomologies H −i (q * π * M)| λ ∼ = H 2 dim B−i (B). This produces the desired filtrations on the cohomology of the flag variety. The associated graded vector spaces are On the other hand, to obtain a filtration on the flag variety cohomology, we identify (q * π * M)| h * reg with the Gauss-Manin system of π −1 (h * reg ) → h * reg . The latter is W -equivariant, and the composition H * ( π −1 (λ)) ∼ = H * (B) ∼ = H * ( π −1 (w(λ))) ∼ = H * ( π −1 (λ)), of applying the Gauss-Manin connection twice followed by the isomorphism π −1 (w(λ)) ∼ = χ −1 (λ) ∼ = π −1 (λ), is the action of w.
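The correspondence between filtrations and C×-equivariant bundles on the line that is used here can be summarized in display form; this is only a restatement of the construction in the paragraph above.

```latex
% Filtered vector space  <-->  C^x-equivariant sheaf on A^1 (Rees construction):
F_{\bullet}V \;\longmapsto\;
L \;=\; \bigoplus_{i \in \mathbb{Z}} F_{\le i}V \cdot t^{i}
\ \subseteq\ V[t, t^{-1}],
\\[4pt]
L \;\longmapsto\; V := L|_{t=1}, \qquad
F_{\le i}V := \operatorname{image}\Bigl(\textstyle\bigoplus_{j \le i} L_j \to V\Bigr),
\qquad
L|_{0} \;\cong\; \operatorname{gr} V .
```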
Proof of Corollary 1.4
The proof consists of studying the relationship between the right D(h * )-module π * N and the left D(h * ) W -module π * M. Applying the Morita equivalence between D(h * ) W and D(h * ) W defined by D(h * ) to the expression (4.3) for π * N implies that More generally, we can consider the functor from weakly C × × W -equivariant right D(h * )-modules to (weakly) C × -equivariant left D(h * ) W -modules: For every irreducible representation χ of W , we can form the strongly C × × Wequivariant right D(h * )-module, F χ := Ω h * ⊗ χ (which under the Riemann-Hilbert correspondence is the trivial local system χ on h * equipped with the W -linearization given by the representation). We obtain the formula: This is a C × -equivariant vector bundle on h * /W . For any λ ∈ h * reg , we can restrict T (F χ ) to the line C ·λ and get a C × -equivariant vector bundle on the line, i.e., a finite-dimensional filtered vector space T (F χ )|λ. There is a canonical isomorphism of vector spaces obtained from the composition by applying the restriction of the source toλ and the target to λ. This is an isomorphism because, for every W -equivariant vector bundle F on h * and every λ ∈ h * reg , the fiber of F W atλ equals the fiber of F at λ.
Put together, for every λ ∈ h * reg , we obtain a canonical filtration on the fiber F χ | λ . Next, note that every weakly C × × W -equivariant O h * -coherent right D h * -module is a direct sum of shifts F χ (k), where the notation indicates a grading shift: M(k) := M ⊗ C C −k , where k ∈ Z and C k is the representation of C × in which γ ∈ C × acts by γ k · Id. In the strongly equivariant case, k = 0 for every summand, i.e., these modules are direct sums of copies of F χ .
Returning to π * N , we can write π * N as a direct sum of such F χ (with homological and weight shifts). Let us determine the weight shifts. Recall from (4.2) and the preceding that N is a simple holonomic right D h * ×g * -module isomorphic to f * Ω g * . Equip the latter with the canonical strong C × -equivariant structure coming from this structure on Ω g * . Then we can see from the proof of [12,Theorem 4.2] that the isomorphism N → f * Ω g * is W -equivariant and sends the generator [1] ∈ N to an element of degree 2 dim B. Thus, if we put [1] ∈ N in degree zero, we obtain that N ∼ = f * Ω g * (2 dim B) as a weakly C × × W -equivariant D-module. Alternatively (but which amounts to the same proof), we observe that, since N is simple, there is only a single value of the shift, and then it must be 2 dim B in order to agree with Theorem 1.1. Pushing forward, all weight shifts of the F χ appearing in π * N are by 2 dim B.
It follows that the filtration on each cohomology of the fiber at λ, , is a direct sum of copies of a single filtered vector space for each irreducible representation χ of W , and the associated graded vector space gr F χ | λ of the latter is isomorphic to (4.4) where the second equality is due to Poincaré duality for H * (B)
The structure of M(N )
The following statement was conjectured in [19,Conjecture 8.1]. As in Sect. 2, let V * χ be a weight-graded vector space with Hilbert series K g,χ (y −2 ). Since ρ * Ω T * B is strongly C × -equivariant and IC(O χ , L χ ) is a summand for all χ ∈ Irrep(W ), it follows that IC(O χ , L χ ) admits the structure of a strongly C × -equivariant D-module. Let us equip it with this structure.
Theorem 4.4 There is an isomorphism of weakly equivariant D-modules, M(N) ≅ ⊕_{χ ∈ Irrep(W)} V^*_χ ⊗ IC(O_χ, L_χ).
Proof The proof follows from Hotta and Kashiwara's Theorem 4.2. We need to observe that Har(h * ) is canonically isomorphic (as a graded W -representation) to Sym h/((Sym h) W + ) and thus to H * (B) as a graded W -representation. So we get , and the Hilbert series of the latter is K g,χ (t 2 ), so its dual has Hilbert series K g,χ (y −2 ). Thus, the RHS is isomorphic to χ V * χ ⊗ IC(O χ , L χ ), as desired.
Proof of Lusztig's formula
Pushing ρ * Ω T * B ∼ = χ ∈Irrep(W ) χ ⊗ IC(O χ , L χ ) to a point, we get Hence, as explained in [6, Theorem 1.10.(iii)] and its proof, HP 0 (W φ ) is a free graded module over O(g) g , so the assertion follows for HP 0 (W φ ) as well. The same argument implies the refined statement, Corollary 2.2.
Proof of Corollary 2.5
For Corollary 2.5, we need to compute M(S φ ∩ N ) as a weakly C × -equivariant Dmodule, with respect to its dilation action with fixed point φ. We will use the notation N (k) for the shift of the weak C × -equivariant structure on N as defined in Sect. 4.5.
Proposition 5.1 As weakly C × -equivariant D-modules,
Proof Completing at φ, we have, by the Darboux-Weinstein decomposition theorem, as formal Poisson schemes,N ∼ = S φ ∩ N ×Ô φ . But O φ is smooth soÔ φ is the completion of a symplectic vector space at the origin. So, disregarding the equivariant structure, As a result, M(S φ ∩ N ) is a direct sum of intermediate extensions of local systems on its leaves, which are S φ ∩ O χ ; the local systems which appear are N ) is the inclusion. This is even true together with the equivariant structure. By [19,Theorem 5 is the weakly equivariant local system described in [5, § 4.3] canonically given by attaching, to each x ∈ S φ ∩ O χ , the fiber HP 0 (O(S x )) where S x is the slice to Precisely, the summand of K S φ ∩O χ which is weakly equivariant with respect to the character m − dim(S φ ∩ O χ ) of C × is the local system attaching to each x the weight m subspace of HP 0 (O(S x )).
Note that S x is isomorphic to the same slice to O χ in N at x, so this is compatible with our previous notation. Passing back to N , we know again that M (N ) Applying the two paragraphs above, we conclude that Summing over all χ yields the proposition. Now, pushing forward M(S φ ∩ N ) to a point, we obtain Corollary 2.5.
An alternative Proof of Theorem 4.4
We sketch an alternative way to complete the proof of Theorem 4.4, using the functor of Hamiltonian reduction. The details, which are easily checked, are left to the interested reader. Let G be a connected complex Lie or algebraic group with g = Lie G. The Harish-Chandra homomorphism is a surjective morphism δ : This descends to an isomorphism (ad(g)D(g * )) G \D(g * ) G ∼ −→ D(h * ) W . This allows one to define the functor of Hamiltonian reduction H : mod-(D(g * ), G) → mod-D(h * ) W , H(M) = M G , from the category of finitely generated, strongly G-equivariant right D(g * )-modules to the category of finitely generated right D(h * ) W -modules.
We are interested in weakly C × -equivariant modules for the squares of the dilation actions on g * and h * . Precisely, in the former case we consider weakly C × -equivariant, strongly G-equivariant right D-modules on g * , and in the latter case we consider graded right D(h * ) W -modules. For brevity we will call these weakly equivariant modules on g * or h * . Let Eu g * and Eu h * denote the Euler vector fields on g * and h * , so that the square of the dilation action in question is generated by twice the Euler vector field.
One calculates that δ(Eu_{g*}) = Eu_{h*} − N, where N = |R_+| and R_+ is a choice of positive roots for W. Therefore, the functor H induces a functor between weakly equivariant modules. We have that Har(h*) is, up to a sign twist and a degree shift, isomorphic as a graded W-module to its dual. This implies that V_χ ⊗ σ(2N) ≅ V^*_χ as graded vector spaces, and hence h(U_χ; y^2) = h(V^*_χ; y^2) = K_{g,χ}(y^{−2}).
The mirabolic case
In this section only, let g = gl(V) for some n-dimensional vector space V and G = GL(V). Then G acts diagonally on g* × V. The group C× also acts by dilations along the g* factor, i.e., α · (X, v) = (α^{−2} X, v). Fix c ∈ C, thought of as a character X ↦ c Tr(X) of g. The D-module M_c(N × V) is then defined in terms of μ_c(g), where μ_c(g) = {μ(X) − c Tr(X) | X ∈ g}. This D-module can also be viewed as the natural module associated with the c-twisted action of g on N × V, as in [7, Remark 2.17]. The space N × V consists of finitely many G-orbits. The orbits with fundamental group Z are naturally labeled by partitions λ ∈ P_n of n. Each of these orbits O_λ admits a unique irreducible (one-dimensional) (G, c)-monodromic local system L_{λ,c}. For each λ ∈ P_n, set c_λ := c(n_{λ^t} − n_λ), where λ^t is the dual (transpose) of λ; recall that n_λ = Σ_i (i − 1)λ_i is the partition statistic.
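A quick check of the twist statistic on two small partitions, computed directly from the definition above (the partitions are chosen only for illustration):

```latex
\lambda = (3):\quad \lambda^{t} = (1,1,1),\quad n_{(3)} = 0,\quad
n_{(1,1,1)} = 0 + 1 + 2 = 3
\;\Longrightarrow\; c_{(3)} = 3c;
\\[2pt]
\lambda = (2,1):\quad \lambda^{t} = (2,1),\quad n_{(2,1)} = 0\cdot 2 + 1\cdot 1 = 1
\;\Longrightarrow\; c_{(2,1)} = 0 .
```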
Theorem 6.2 Assume c is generic. There exists a permutation τ of P n such that c τ (λ) = c λ and an isomorphism of (G, c)-monodromic, weakly C × -equivariant Dmodules where V * λ is a weight-graded vector space with Hilbert series K g,λ (y −2 ).
The basic idea behind the proof of this theorem is essentially the same as the one outlined in Sect. 6.1. Again, there is a functor of Hamiltonian reduction, H c : C c → O −c , where C c is the category of (G, c)-monodromic D-modules supported on N × V and O c denotes category O for the rational Cherednik algebra H c (S n ); see [11] for details. In the case where c is generic, category O c for the rational Cherednik algebra is semisimple and the functor of Hamiltonian reduction induces an equivalence between C c and category O −c (see [1,Proposition 9.13]). The argument of Sect. 6.1 is applicable in this setting, though it is more involved since the D-modules IC(O λ , L λ,c ) are (C × , c λ )-monodromic, unlike the classical setting where they can be endowed with a C × -equivariant structure. Moreover, the reason for the occurrence of the permutation τ is that the analogue of Proposition 6.1 is missing in this context. This is because the key to the proof of Proposition 6.1 is the geometric construction of the simple modules N χ . Since c is assumed to be generic, there is no analogous construction for the corresponding simple mirabolic modules. It is an interesting question if τ is the identity, and if not, it would be interesting to compute it. (There would not seem to be an obvious nontrivial permutation satisfying c τ (λ) = c λ .) As shown in [2], the case where c is not generic is much more interesting. (In particular, there the category of mirabolic sheaves need not be semisimple, and we expect M c (N × V ) not to be semisimple when the category is not.) We will return to this in future work, where details of the proof of Theorem 6.2 will also be given. | 9,185.4 | 2015-09-08T00:00:00.000 | [
"Mathematics"
] |
SUBSTANTIATION OF THE ENVIRONMENTAL AND ENERGY APPROACH OF IMPROVEMENT OF TECHNOLOGICAL REGULATIONS OF WATER TREATMENT SYSTEMS
The object of research is the environmental safety of wastewater treatment facilities while minimizing resource costs for the implementation of technological processes for removing pollutants from effluents. There are factors that together create the preconditions for ineffective maintenance of the environmental safety of wastewater treatment systems and, accordingly, for the complexity of their technical regulation. These factors include:
- the lack of complete real-time information on a specific combined water treatment process and the difficulty of studying it adequately even under laboratory conditions;
- the lack and/or low accuracy and speed of modern technical means for measuring the composition of aqueous solutions, especially under industrial conditions.
The influence of these negative factors is eliminated by improving the scientific and theoretical foundations for creating technological regulations of wastewater treatment facilities while increasing the environmental safety of industrial facilities, taking into account the requirements for reducing resource costs in accordance with the concept of synthesizing environmental management systems. An environmental-energy criterion for assessing the operation of wastewater treatment facilities has been substantiated and derived analytically. Analysis of the results of industrial implementation showed that the environmental-energy criterion, which expresses the specific energy consumption required to ensure environmentally safe water treatment, is acceptable for tuning industrial water treatment systems and creating their technological regulations. During a month of industrial tests, the values of the environmental-energy criterion deviated from the set value by ±3.4 %, which is a technologically acceptable figure. The improved concept of setting integrated goals for achieving environmentally safe water disposal in accordance with international systems for assessing the quality of enterprise management, based on the environmental-energy criterion, creates the preconditions for obtaining an ISO 14001 certificate. The implementation of environmental management systems will ensure:
- a reduction in financial costs due to savings in natural resources and reduced penalties;
- profit growth due to the potential implementation of water reuse.
Keywords: wastewater treatment, pollutant removal, environmentally safe water disposal, environmental management. Shtepa V., Plyatsuk L., Ablieieva I., Hurets L., Sherstiuk M., Ponomarenko R.
Introduction
The operation of water treatment equipment is based on the implementation of the technological regulation (TR), a regulatory document for internal use [1,2]. The TR belongs to the technological documentation of the Unified System for Technical Documentation (USTD), which, in turn, is part of the Unified System for Technological Production Preparation (USTPP) [3].
Technological regulations should ensure that processes run at the planned quality with minimum consumption of resources. Moreover, they should contribute to achieving optimal technical and economic indicators of production and regulate the conditions of production processes and the operation of production as a whole [3].
The mandatory availability of such a document at wastewater treatment facilities, which includes water purification equipment, is provided for by the current Order of the State Committee of Ukraine for Housing and Communal Services No. 05 dated July 5, 1995. In the context of technical regulation of water purification systems, technological regulations are a prerequisite for the uninterrupted functioning of the complex of water purification equipment with the mandatory fulfillment of environmental safety conditions while minimizing resource costs. At the same time, there are factors at production facilities that systematically negatively affect the fulfillment of these conditions: insufficient nomenclature of measuring devices, potential effects of unpredictable factors of a natural and technogenic nature.
At the same time, insufficiently object-oriented TRs can lead to poor-quality functioning of treatment facilities and, consequently, to environmental pollution.
That is why it is relevant to create and implement new approaches that improve both the development of, and compliance with, the TR of treatment facilities.
The object of research and its technological audit
The object of research is the environmental safety of wastewater treatment plants while minimizing resource costs for the implementation of technological processes for the removal of pollutants from effluents.
According to the Law of Ukraine «On Technical Regulations and Conformity Assessment» [4], a specific requirement of an object is a stated need or expectation fixed in technical regulations, standards, technical specifications or in another way. The object of conformity assessment may be a specific material, product, installation, process, service or system, and wastewater treatment plants (WWTP) also fall under this definition. Therefore, in relation to them it is necessary to carry out tests (determining the characteristics of the object of assessment) and to assess their compliance with the relevant regulatory documents (the process of proving that the requirements for products, processes, services and systems have been met).
Mandatory components of the TR with which the current WWTP must comply are:
- characteristics and features of the treatment facilities;
- quality control of effluents at the inlet to the equipment and of treated wastewater at the discharge;
- information on the discharge volume and the consumption of electricity and other energy carriers used to ensure stable operation of the system for removing pollutants from effluents.
At the same time, there are factors that together create the prerequisites for the inefficiency of metrological activity to ensure the uniformity of measurements in the conformity assessment of WWTP and, accordingly, the complexity of technical regulation [5] based on the TR:
- uncontrollable and unpredictable emergency situations of natural and man-made origin;
- the lack of real-time information on a specific combined water treatment process and the complexity of adequately studying it, even under laboratory conditions;
- the lack and/or low accuracy and speed of modern technical means for measuring the composition of aqueous solutions, especially under industrial conditions.
The aim and objectives of research
The aim of the research is to improve the scientific and theoretical foundations for creating technological regulations for wastewater treatment plants while increasing the environmental safety of industrial facilities, taking into account the requirements for reducing resource costs in accordance with the concept of implementing environmental management systems.
To achieve this aim, it is necessary to complete the following tasks: 1. Theoretically substantiate the environmental and energy criterion for water treatment.
2. Verify the environmental and energy criterion for the functioning of wastewater treatment plants under production conditions. 3. Improve the methodology for creating technological regulations for wastewater treatment plants based on the environmental and energy criterion in accordance with the concept of environmental management systems (EMS).
Research of existing solutions of the problem
Analyzing the structure of various technological regulations, the following types of work can be singled out as part of such regulations [6]:
- verification and adjustment of the components of the water treatment complex [7];
- diagnostics of the automation node(s);
- rapid analysis of the liquid at the inlet and outlet of the water treatment plant (or appropriate studies in the laboratory);
- diagnostics of measuring instruments for water quality and of the state of technological equipment;
- regulation of pumping equipment for pressure and flow rates;
- diagnostics of individual functional and technological units of water treatment (filters, electrolyzers, aeration tanks, sand traps, etc.);
- final testing of the complex with full control of all nodes;
- examination of adjacent nodes for integrity;
- formation of an official opinion on the status of the equipment.
Analyzing the composition of technological regulations and the features of the functioning of water treatment equipment, it can be concluded that the key and most complex tasks in fulfilling the technological regulations directly at the plant are [8]:
- control of technological processes at the sampling points for wastewater and sludge established at the design stage, given the characteristics of the existing monitoring instruments of the treatment facilities [9];
- technological analysis of the equipment according to production operational indicators, resource costs and cleaning efficiency, in accordance with the established criteria and indicators [10].
Moreover, the more difficult the water purification task, the more cumbersome and less reliable (efficient) the control over compliance with regulatory requirements. For example, when implementing the technological scheme of the chemical method for removing contaminants from the international concern Siemens, it is necessary to simultaneously monitor more than 40 technological quantities according to the manufacturer's requirements, while only a small number of reliable sensors are actually available [11].
At the same time, effluents (domestic, industrial and atmospheric) usually contain a large number of inorganic and organic components [12], their exact composition, even in qualitative terms, can't always be predicted in advance -in the vast majority of cases this can't be done. For example, even with simple mixing of effluents from various shops of the enterprise, chemical reactions occur between the components of these effluents, leading to the formation of new pollutants.
At the same time, the development of similar European regulatory documents is more object-oriented and is based on a system of discharge permits [13]:
- taking into account the characteristics of the best practically applicable technology (best available technology, BAT) [14];
- taking into account the need to ensure compliance with environmental quality standards (EQSs), which is part of the goal of ensuring the quality of the water intake [15].
At the same time, the classic shortcomings of the methodology for developing TR for water treatment systems [16], including foreign regulatory solutions, are:
- the development of technological regulations does not take into account the effect of emergencies of anthropogenic and natural origin on water treatment processes; only the «after-action» is calculated, i.e., minimization of consequences after an accident;
- the requirements of energy efficiency and the financial components of operating water treatment plants are not comprehensively taken into account.
Moreover, there is in fact no single algorithm for writing a TR for a combined WWTP that combines various methods of acting on pollutants, which causes significant practical problems in creating effective, environmentally safe systems capable of operating for a long time.
Summing up the above analysis of studies by other authors, it should be noted that there is a lack of unified approaches to creating technological regulations for WWTP and that existing approaches do not take into account combined environmental and energy requirements. This emphasizes the value of a study aimed at developing a methodology for implementing environmental management systems in relation to water treatment.
Methods of research
The methodological support for further research is based on the elimination of the identified key shortcomings of the existing TRs according to the requirements of resource conservation for WWTP ( Table 1).
It is advisable to substantiate the environmental and energy criterion on the basis of the provisions of DSTU ISO 50001:2014 «Energy Management Systems»:
- paragraph 3.8: energy efficiency is the ratio (coefficient) or other quantitative relationship between the result obtained (output indicator), that is, the work performed, services, goods or energy, and the input indicator;
- paragraph 3.12: energy performance comprises the measured results on energy efficiency, energy use and energy consumption;
- paragraph 4.6.1: the organization shall ensure periodic monitoring, measurement and analysis of the key characteristics of its operations that determine its energy performance; the key characteristics should cover, at a minimum, the significant energy uses and other outputs of the energy analysis.
Therefore, taking into account experimental tests and theoretical developments [16,17], it is necessary to develop a universal criterion for evaluating the efficiency of treatment of multicomponent wastewater that takes into account the cost of treatment in addition to the quality of the treatment process.
When creating such a criterion, the expression for calculating the technical effectiveness of water treatment is taken as the basic one: E = (C_ent − C_aft)/C_ent, where C_ent is the concentration of pollutants entering the treatment, g/dm³, and C_aft is the concentration of pollutants after purification, g/dm³. At the same time, energy consumption is taken as the main resource consumption, since WWTP use electrical technologies to implement pollutant removal processes; the share of energy in the resource consumption of WWTP processes is more than 50 % [16].
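The exact analytical form of the environmental-energy criterion (2) is not reproduced in the text, so the following Python sketch is only a plausible illustration of the idea described (specific energy consumption related to the achieved treatment effectiveness); the variable names and the ratio-style combination are assumptions, not the authors' published formula.

```python
def treatment_effectiveness(c_ent: float, c_aft: float) -> float:
    """Technical effectiveness of pollutant removal, E = (C_ent - C_aft) / C_ent."""
    if c_ent <= 0:
        raise ValueError("Inlet concentration must be positive")
    return (c_ent - c_aft) / c_ent


def eco_energy_criterion(energy_kwh: float, volume_m3: float,
                         c_ent: float, c_aft: float) -> float:
    """Illustrative specific-energy criterion: kWh spent per m^3 of treated water
    per unit of achieved treatment effectiveness (assumed form)."""
    effectiveness = treatment_effectiveness(c_ent, c_aft)
    if effectiveness <= 0:
        raise ValueError("No removal achieved; criterion undefined")
    return energy_kwh / (volume_m3 * effectiveness)


# Example: 120 kWh used to treat 500 m^3, inlet 0.8 g/dm^3, outlet 0.1 g/dm^3.
print(eco_energy_criterion(120.0, 500.0, 0.8, 0.1))  # ~0.274 kWh per m^3 per unit effectiveness
```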
At the same time, one of the most common methods in Ukraine for assessing the quality of surface waters, including those formed by the discharge of industrial wastewater, is the method of pollution multiplicity scores. For each ingredient, on the basis of the actual concentrations, the score of exceedance of the maximum permissible concentration (MPC), K_i, and the frequency of occurrence of exceedance cases, H_i, are calculated, as well as the total estimated score of water pollution, B_i, where C_i is the concentration of the i-th ingredient in water; MPC_i is the maximum permissible concentration of the i-th ingredient; R_MPCi is the number of cases of exceeding the MPC for the i-th ingredient; and R_i is the total number of measurements of the i-th ingredient. Ingredients for which the total estimated score is greater than or equal to 11 are identified as limiting pollution indicators (LPI). The class of water pollution is established by the value of the combinatorial pollution index, which is calculated as the sum of the total rating scores of all the ingredients.
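The defining formulas for K_i, H_i and B_i are not shown above; the sketch below assumes a simplified version of the standard relations used in this family of methods (K_i as the mean multiplicity of exceedance over the exceedance cases, H_i as the share of exceedance cases in per cent, and B_i = K_i · H_i / 100), so it should be read as an illustration of the bookkeeping rather than the exact regulation.

```python
from typing import List

def exceedance_score(concentrations: List[float], mpc: float) -> dict:
    """Simplified scoring for one ingredient (assumed, illustrative formulas)."""
    exceed = [c / mpc for c in concentrations if c > mpc]
    r_total = len(concentrations)
    r_exceed = len(exceed)
    k = sum(exceed) / r_exceed if r_exceed else 0.0        # mean exceedance multiplicity
    h = 100.0 * r_exceed / r_total if r_total else 0.0     # frequency of exceedance, %
    b = k * h / 100.0                                      # total estimated score
    return {"K": k, "H": h, "B": b, "is_LPI": b >= 11}

# Combinatorial pollution index: sum of the total scores of all ingredients.
ingredients = {
    "ammonium nitrogen": ([0.9, 1.4, 2.1, 0.7], 0.5),   # (measurements, MPC), hypothetical values
    "oil products":      ([0.04, 0.08, 0.02, 0.06], 0.05),
}
scores = {name: exceedance_score(vals, mpc) for name, (vals, mpc) in ingredients.items()}
combinatorial_index = sum(s["B"] for s in scores.values())
print(scores, combinatorial_index)
```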
Hence, the relationship between the environmental indicator (the scores of the multiplicity of exceeding the MPC) and the environmental and energy criterion is expressed by (3): an increase in the value of the environmental-energy criterion corresponds to an increase in the sum of the scores of the multiplicity of exceeding the MPC.
It is also established that the overall assessment score is directly proportional to the environmental and energy criterion for water treatment technologies. At the same time, the proposed ecological and energy criterion makes it possible to eliminate an important drawback of purely environmental criteria (4)-(6): they focus on achieving the environmental goals of water treatment without taking into account the efficiency of use of raw materials, consumables and energy. Yet at real facilities there is always not only the creation of an environmental hazard but also the overexpenditure of resources for water treatment (Fig. 1).
So, it can be stated that such an ecological and energy criterion (2), which shows the specific energy consumption required to ensure technical efficiency, is suitable for adjusting the functional parameters of real water treatment systems, while imitating, if necessary, the effect of emergencies and the required reactions to them. Removal of synthetic surface-active substances (SAS) will also ensure the removal of other pollutants (ammonium nitrogen, oil products): the implementation of the dominant dynamic pollutant method [16].
It is determined that emergencies can be caused by unpredictable contaminants entering the sewage; other situations (for example, the simultaneous use of all showers) are generally taken into account at the design stage. Such pollutants include toxic lead, which can get onto workers' clothing near technological units and then be flushed into the sewers. That is why the electrotechnological system includes an electrocoagulator with a pH correction function for alkaline solutions, followed by neutralization of the effluents. The WWTP also integrates a sorption filter, a deaerator with electrolytic destruction, and hydrodynamic intensifiers.
Setting the equipment to maintain the environmental and energy criterion (2) makes it possible to fulfill the requirements for water treatment quality while minimizing resource costs: within a month, criterion (2) deviated from the set value by no more than ±3.4 % (Fig. 2).
The results of the industrial use of the improved approaches to synthesizing technological regulations for industrial water treatment systems at a small metallurgical enterprise made it possible to fulfill the environmental requirements for the quality of the enterprise's wastewater, and also to introduce resource-saving measures under the operating conditions of the WWTP, improving its technical regulation.
Thus, the prerequisites have been created for improving the methodology for synthesizing TRs using the environmental and energy approach and for introducing EMS at enterprises.
6.3. Improving the methodology for introducing the concepts of environmental management systems for wastewater treatment plants based on the environmental and energy criterion.
Based on the developed method for constructing technological regulations for water treatment systems, the concept of iterative integrated management of water resources by enterprises on the basis of IWRM (Integrated Water Resources Management) is refined and an improved methodology for WWTP TR synthesis is proposed (Fig. 3).
At the first stage of creating an EMS, it is planned to develop a sustainable development safety scheme based on an enterprise's water technological passport (WTP) with the creation of a conceptual model of water resource flows. Moreover, technological solutions for the construction of a new enterprise or the reconstruction of an old one should not cause environmental imbalance, regardless of the industry sector.
The second stage in the implementation of EMS is testing (optimizing) the model created at the first stage, taking into account the potential impact of anthropogenic and natural emergency situations: separate models of EMS elements are studied (based on conceptual decomposition), and the specified (target) parameters are fixed. The third stage of the implementation of EMS is pre-design, when a business plan is compiled on the basis of the data obtained, with a mandatory comprehensive assessment of both economic (for example, through a profitability index) and technological (energy efficiency) criteria of the project's prospects.
At the same time, the use of new and improved scientific and theoretical foundations of the regulatory framework of industrial water supply systems makes it possible to implement the concept of integrated goals for achieving resource -efficient water supply in accordance with international systems for assessing the quality of enterprise management, taking into account environmental and energy efficiency requirements.
Strengths. A key advantage over analogues is the unified methodological support for the creation of object-oriented technological regulations of WWTP, which ensures that environmental safety requirements are met while resource costs are minimized.
Weaknesses. The weaknesses of the proposed approaches include: -need for preliminary laboratory and experimental studies; -lack of complex mathematical and software modeling of water treatment processes.
Opportunities. The prospect for developing the environmental and energy approach is the synthesis of mathematical models and software that would allow rapid diagnosis and prediction of the environmental safety and resource costs of specific WWTPs based on the enterprise's data on effluent quality. This would allow an estimated saving of about 20 % of the financial costs of using existing equipment.
Threats. The threat to the implementation of the proposed solution lies in the lack of the necessary range of reliable sensing elements that can work in real time (clearly less than 30 % of the needs). This at certain facilities may make it impossible to quickly calculate the environmental and energy parameters of WWTP.
Conclusions
1. The substantiated environmental and energy criterion makes it possible to eliminate an important drawback of purely environmental criteria for the effectiveness of water treatment, since the latter are oriented only toward achieving the environmental goals of water treatment without taking into account the efficiency of use of raw materials, consumables and energy. The proposed criterion is aimed at ensuring the environmental safety of WWTP in an integrated way and at eliminating resource overruns in water treatment.
2. An analysis of the results of industrial implementation makes it possible to state that the created environmental and energy approach is acceptable for setting the parameters of industrial water treatment systems and creating their technological regulations. Within a month, the values of the environmental and energy criterion remained at a technologically acceptable level, deviating from the set value by no more than ±3.4 %.
3. The improved concept for setting integrated goals to achieve environmentally safe water disposal in accordance with international systems for assessing the quality of enterprise management, based on the environmental and energy criterion, creates the prerequisites for:
- obtaining an ISO 14001 certificate (ensuring compliance with the relevant requirements throughout the entire life cycle of the WWTP);
- reducing financial costs by saving resources and reducing penalties;
- profit growth due to the potential implementation of water reuse schemes.
Introduction
In the case of man-made intervention in the subsoil, the interaction of natural and technical systems that ensure the geomechanical balance of the masses in the area of subsoil development becomes a general issue in the development of ore deposits. At the same time, it should be possible to monitor the stress-strain state (SSS) of the | 4,508 | 2020-02-28T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Forecasting and Modelling the Uncertainty of Low Voltage Network Demand and the Effect of Renewable Energy Sources
: More and more households are using renewable energy sources, and this will continue as the world moves towards a clean energy future and new patterns in demands for electricity. This creates significant novel challenges for Distribution Network Operators (DNOs) such as volatile net demand behavior and predicting Low Voltage (LV) demand. There is a lack of understanding of modern LV networks’ demand and renewable energy sources behavior. This article starts with an investigation into the unique characteristics of householder demand behavior in Jordan, connected to Photovoltaics (PV) systems. Previous studies have focused mostly on forecasting LV level demand without considering renewable energy sources, disaggregation demand and the weather conditions at the LV level. In this study, we provide detailed LV demand analysis and a variety of forecasting methods in terms of a probabilistic, new optimization learning algorithm called the Golden Ratio Optimization Method (GROM) for an Artificial Neural Network (ANN) model for rolling and point forecasting. Short-term forecasting models have been designed and developed to generate future scenarios for different disaggregation demand levels from households, small cities, net demands and PV system output. The results show that the volatile behavior of LV networks connected to the PV system creates substantial forecasting challenges. The mean absolute percentage error (MAPE) for the ANN-GROM model improved by 41.2% for household demand forecast compared to the traditional ANN model. LV of
Introduction
Load forecasting is a significant tool used to estimate current power consumption and future energy demand [1,2]. One of the fundamentals for guaranteeing a secure power system and reducing the operational costs of power networks is to accurately forecast power demand when employing different energy sources. Moreover, accurate forecasts offer a practical advantage in energy management, for example, for peak demand reduction, load shedding and the development of electrical infrastructure, by providing the information required to make proper decisions. Generation and DNO companies seek to obtain the best market decisions and competitive prices, especially in the industrial electric power sector, through accurate forecasting models that cover both load demand and the corresponding price [3]. Electrical load forecasting is quite complex owing to the instability and the large number of factors that impact the accuracy of the forecast model. Typically, load forecasting models are built on major factors, for example, economic circumstances, weather factors (humidity, temperature, and
Literature Review
Recently, both ANN and Autoregressive Integrated Moving Average with explanatory variables (ARIMAX) forecasting approaches have been broadly applied in various applications with highly stochastic load behavior, for example, demand for electric vehicles and buildings, and electricity price forecasting [7][8][9]. The ARIMAX approach is widely validated and implemented for predicting LV demand because of its simplicity compared to other methods that use a nonlinear model [10,11]. Unlike the ARIMAX method, the ANN is highly efficient for complex nonlinear problems such as renewable energy operation issues and the complex relationships between electrical demand and weather conditions. In the ANN model, there is no need for explicit functional relationships between the explanatory variables and LV demand [12]. However, the ARIMAX and traditional ANN models face many challenges in handling the high uncertainty of household demand and PV generation outputs at LV scale; therefore this paper proposes a novel forecast technique based on a hybrid of different models.
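As an illustration of the ARIMAX family of models discussed here, the following Python sketch fits a seasonal ARIMA model with an exogenous temperature regressor using statsmodels; the column names, model orders, file name and hourly resolution are assumptions for demonstration and are not taken from the article.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly data with columns "demand_kw" and "temperature_c".
df = pd.read_csv("household_demand.csv", parse_dates=["timestamp"], index_col="timestamp")

train, test = df.iloc[:-24], df.iloc[-24:]          # hold out the last day

# ARIMAX with a daily (24 h) seasonal component and temperature as the exogenous input.
model = SARIMAX(train["demand_kw"],
                exog=train[["temperature_c"]],
                order=(1, 0, 1),
                seasonal_order=(1, 1, 1, 24))
fitted = model.fit(disp=False)

# One-day-ahead forecast driven by the temperatures of the held-out day.
forecast = fitted.forecast(steps=24, exog=test[["temperature_c"]])
errors = abs(forecast.values - test["demand_kw"].values) / test["demand_kw"].values
print(f"MAPE over the held-out day: {errors.mean() * 100:.1f}%")
```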
Accordingly, different studies have adopted these forecast models, e.g., ARIMAX, ANN and ARIMA, on the LV network in order to anticipate renewable energy generation and energy prices. Moreover, these models are used to examine the benefits of anticipating renewable energy sources, which enables more functional management systems. For example, Yuan et al. [13] developed ARIMA algorithms to create a wind speed profile over one hour on a rolling horizon basis. Nevertheless, the forecasting model in the summer season showed lower performance, with 11% Mean Absolute Percentage Error (MAPE), compared to the rest of the year (a reduction of 6% in MAPE). This highlights the importance of analyzing seasonality to find patterns in LV demand that can enhance forecast performance [2,4]. As an example, the ARX model in [14] adopted day/year as an external variable, whereas an ANN model in [15] used a seasonal input parameter, with daytime/day type as external variables. Moreover, the study in [13] does not include an external predictor (weather conditions or temperature) that might help diminish forecast error and increase energy savings. However, these studies do not consider the volatile behavior of household demand and PV outputs compared to large-scale demand. This has a significant impact on the LV grid in terms of increasing energy savings, which can be achieved via renewable energy forecasting. Accordingly, renewable energy sources are largely driven by weather conditions, which increases the challenge of predicting LV demand with renewable energy sources. One of the important factors in achieving optimal operation with economic dispatch is an accurate forecast model. In another study [16], the forecast models were sorted based on further exogenous variables to enhance the performance of the forecast model. In [16], the author utilized a simpler training approach that does not require iterative tuning, which reduces training time compared to a gradient-descent training algorithm. Furthermore, for a more efficient energy management system it is important to take into account the effect of load disaggregation. Recently, different intelligent methods such as Recurrent Neural Networks (RNN) have been used to estimate the power and energy demand of low voltage applications with load disaggregation [21]. The results in [20,21] show the significant potential of new optimization models such as the Golden Ratio Optimization Method (GROM) for achieving accurate forecast models in challenging forecast tasks, such as those for renewable energy.
Existing research has only discussed and investigated aggregated demand in Jordan at high voltage [23] or national level [24][25][26][27], and to the best of the authors' knowledge there are no studies discussing low voltage or household demand. In Jordan, the peak demand at high voltage level shows significant seasonal variations with a two-peak pattern, where the peak demand mainly occurs during hot summer and cold winter days due to the increased use of air conditioning and electrical heaters [23]. In [25][26][27], yearly forecast models for Jordan's national demand are presented using, for example, the Least Squares Method [25], ANN [26] and ARX [27]. However, these studies did not estimate hourly demand, PV output or LV demand, and did not investigate relationships between demand and the different exogenous variables or calendar terms specific to Jordan. Overall, choosing external variables that improve forecast performance depends on the model's targets and on data accessibility. Note that, in most of the literature, insufficient detail is given on how external variables affect the accuracy of renewable energy and household demand forecasts. Nevertheless, these studies revealed that the input features (external variables) are at least as crucial as the selected model. Typically, this behavior can create challenges in obtaining an accurate model.
Contributions
Typically, in the literature these two factors are chosen on the basis of the needs of the particular study and data accessibility in order to select suitable forecast model parameters for LV demand. This enhances the forecast model's performance and reduces the forecast error under various assumptions. For low voltage applications, in particular for buildings, researchers have presented both external features and forecast model parameters as an important means of lessening errors and uncertainty in forecast performance. Thus, this paper aims to present the following further contributions:
• A new ANN forecast model optimized using the Golden Ratio Optimization Method (GROM) technique to examine household and small-city demand incorporating highly volatile renewable energy sources.
• A realistic stochastic prediction model, namely a hybrid forecast model consisting of probabilistic and ARIMAX models. This hybrid forecast model and different rolling and point forecast models are developed in this paper to treat the stochasticity of LV and PV load profiles, taking into account the impact of uncertainty intervals on forecasting confidence bounds.
• Load forecasting for households and small cities using different forecasting methods. Smart meter data for ten households and PV systems were collected and used to predict individual household demand, as presented in Appendix A. This work has developed forecast models to produce a potential demand profile for households and the PV system separately, in addition to the net demand, for up to one day ahead. In addition, this research provides an analysis of typical household demand and PV system output in Jordan over a real time period, supporting attempts to bridge the gap caused by the absence of comprehensive demand behaviour data, especially in Middle Eastern countries like Jordan.
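To make the kind of ANN baseline discussed in these contributions concrete, the sketch below trains a small feed-forward network on lagged demand and temperature features with scikit-learn; it only illustrates the baseline ANN workflow, not the authors' GROM-optimized model, and the feature choices and file name are assumptions.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("household_demand.csv", parse_dates=["timestamp"], index_col="timestamp")

# Assumed feature set: demand 1 h and 24 h earlier, ambient temperature, hour of day.
df["lag_1h"] = df["demand_kw"].shift(1)
df["lag_24h"] = df["demand_kw"].shift(24)
df["hour"] = df.index.hour
df = df.dropna()

features = ["lag_1h", "lag_24h", "temperature_c", "hour"]
split = int(len(df) * 0.8)                      # chronological split, no shuffling
X_train, y_train = df[features].iloc[:split], df["demand_kw"].iloc[:split]
X_test, y_test = df[features].iloc[split:], df["demand_kw"].iloc[split:]

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
ann.fit(X_train, y_train)

pred = ann.predict(X_test)
mape = (abs(pred - y_test.values) / y_test.values).mean() * 100
print(f"Baseline ANN MAPE: {mape:.1f}%")
```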
Outline of Paper
The remainder of the article is organized as follows: in Section 2, the household and PV model topology are introduced and the collected data from the proposed models are analyzed in Section 3. Section 4 describes the methodology of the proposed forecast models. Section 5 presents and discusses the forecast models' results. Finally, conclusions and potential future work are presented in Section 6.
Household and PV System Model Topology
In the case of LV applications, a precise forecast model is needed, focusing on understanding electrical demand behaviour and examining the interrelations between external variables and demand. For household energy demand and PV behaviour, this section analyses and reviews the data that will be used to develop and evaluate the forecast models. In addition, this section investigates the connections between household electrical demand in Jordan and various external variables, for instance, demand seasonality and temperature. The main outcomes are used in the next section of this study to establish and determine the best parameters for creating a precise forecast model. In this work, the main concern is individual LV demand; therefore household demand with PV has been considered. The measured data were collected at ten individual houses located in Al-Zarqa, Jordan. The houses lie within a 2 km diameter around 32°04′27.9″ N 36°02′58.9″ E, as shown in Figure 1. The houses in this area are typical and they are connected to the same size of PV system. The area of each house is approximately 170 square metres, and it consists of five rooms, one kitchen, two bathrooms, and a balcony. Furthermore, the electrical system is single phase and the main electrical loads are three air conditioners, a fridge, an electrical water heater, a washing machine, lights and two televisions.
PV System
In order to reduce the electricity bill, each of the ten houses is connected to a PV system, as shown in Figure 2. The size of each PV system is 4 kW peak, which is the maximum capacity allowed by the government for household PV systems; the main parameters of the PV system are detailed in Table 1. The size of the PV system was determined based on the monthly electricity demand during 2019, as shown in Table 2.
Data Analysis
Designing a prediction model does not normally happen in a single pass. It is usually necessary to revisit earlier steps, checking the model during training and verifying its parameters and variables. Thus, it is important to divide the data into three sets: training, validation and testing. The training set is used to fit the model parameters and locate the required patterns, while the validation set is used to select the best model. A trade-off between obtaining precise model parameters and preventing overfitting is needed to guarantee a suitable size for each set. The smart meter data for the ten households and PV systems were collected over the period from the 1st of January 2019 to the 30th of November 2020. The household demand was gathered at a one hour resolution, describing the real daily demand and behaviour of each house, while the PV system output was recorded at a 15 min resolution. A further data set was collected from the National Electric Power Grid Co (NEPCO) over a five year period up to the end of November 2020 for a small city in Jordan (Madaba). The main reason for including this data set is to evaluate the forecast models over different levels of electricity consumption. The first 65% of the collected data is employed to develop and train the forecast models as a training data set, 15% is used to validate the forecast models, and the last 20% is used to assess the forecast models' performance [28][29][30][31].
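A minimal sketch of the chronological 65/15/20 split described above is given below, assuming the smart meter readings are available as a pandas time series; the file and column names are illustrative only.

```python
import pandas as pd

# Hypothetical file: hourly household demand indexed by timestamp.
demand = pd.read_csv("household_demand.csv", index_col="timestamp",
                     parse_dates=True)["demand_kwh"]

n = len(demand)
train_end = int(n * 0.65)             # first 65% for training
val_end = int(n * 0.80)               # next 15% for validation

train = demand.iloc[:train_end]       # fit model parameters and locate patterns
val = demand.iloc[train_end:val_end]  # select the best model / guard against overfitting
test = demand.iloc[val_end:]          # final performance assessment

print(len(train), len(val), len(test))
```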
PV System Data Analysis
In this section, the training data set of PV output is used to understand the PV system's behaviour by employing time series analysis to investigate whether there are any important patterns or seasonality in the data. This is required in the next section, which concentrates on identifying patterns (cycles) in the PV output time series. The PV system data contain strong weekly and daily periodicity during sunny days. Figure 3 highlights that all PV output curves within one week (23rd to 29th of August) show a high degree of daily regularity. Figure 4 presents the ten houses' PV output curves for a typical sunny day; in general, they show convergent behaviour. The deviation between the PV curves in Figure 4 is mainly related to differences in panel efficiency, panel cleanliness and PV degradation. This deviation between the household PV output curves increases the uncertainty and the difficulty of creating an accurate forecast model. On the other hand, Figure 5 shows the PV output profile for more than one week during the winter season in Jordan. The daily PV profiles differ from day to day depending on the weather conditions. For instance, the maximum power output on the 28th of January 2019 was 2.8 kW, but only 1.8 kW on the 30th of January 2019. Moreover, there is no clear indication of the peak output occurring at a consistent time of day: the peak PV output on the 27th of January was 2.8 kW at 12:00 p.m., but was 1.7 kW and 2.3 kW at the same time on the 24th and 30th of January, as illustrated by Figure 5. These findings support the observation that the PV output is extremely volatile, with no weekly/daily patterns during periods of unclear sky.
The preceding analysis shows that, in contrast to the clear daily pattern on sunny days, there is no daily or weekly seasonality under unclear sky conditions. This section therefore aims to identify whether any daily or weekly behaviour can be extracted from the PV output. The time series points are investigated to find links (patterns) between them using the Partial Autocorrelation Function (PACF) over 200 time lags, as illustrated in Figure 6.

The purpose of calculating the PACF is to find any links that occur repeatedly. As illustrated in Figure 6, the PACF plot shows the correlations within the PV power output time series P(t) for up to 200 fifteen-minute lags. In general, the PACF helps to find direct links between two points in the series, irrespective of the influence of the intermediate lags [32][33][34]. A cut-off is evident after lag 3, with a further negative effect between lags 10 and 20. For unclear sky days, the PACF plot shows no obvious pattern or seasonality in the distribution of lags, especially when compared with sunny days, which usually exhibit considerable lags around 48 or 96. The considerable lags in Figure 6 are likely due to random effects and may be related to the continuity of sunshine rather than to a single time step. The time series examination indicates that the PV power output does not show clear daily or weekly seasonality, which makes forecasting the PV output more challenging as a result of the non-smooth behaviour of the power curves. This behaviour is mostly related to weather conditions; therefore, a further consideration should be to understand the volatility of the real data.
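As a rough illustration of the lag analysis described above, the PACF of the 15 min PV output series can be computed with statsmodels; the file and column names are assumptions, and the paper's own preprocessing may differ.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_pacf

# Hypothetical file: 15 min PV output for one house, indexed by timestamp.
pv = pd.read_csv("pv_output.csv", index_col="timestamp",
                 parse_dates=True)["pv_kw"]

# Partial autocorrelation up to 200 fifteen-minute lags, as in Figure 6.
fig, ax = plt.subplots(figsize=(10, 3))
plot_pacf(pv.dropna(), lags=200, method="ywm", ax=ax)
ax.set_xlabel("Lag (15 min steps)")
plt.show()
```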
Weather Data
Weather variables such as temperature and wind are usually considered within load forecasting models [35][36][37]. However, it is not obvious whether weather conditions play a significant role in forecasting renewable energy sources or LV demand. In this paper, hourly temperature data were collected over the training and testing periods. In order to minimize the impact of the non-smooth behaviour of the power curve on the forecast model, especially during unclear sky conditions, this section focuses on the relationship between weather variables, household demand and PV power output. Figure 7 displays 2D histograms of the weather variables, household demand and PV power output data sets over one week. Each histogram bin shows the joint distribution of the data sets. Figure 7 shows a strong correlation between temperature, demand and PV power output. In Figure 7a, the highest frequency for household demand occurred between 0.5-1 kWh at temperatures of 12.5-20 °C. In addition, the highest number of observations for hourly PV power output was 0-0.25 kW when the temperature was 12.5-20 °C. For the PV system, the highest power output (2-2.5 kW) occurred when the temperature was 20-25 °C. This was expected, as the rated (design) power output of a PV panel is generated at a temperature of 25 °C.
The relationship between the hourly demand and temperature (°C) for Madaba, Jordan, is visualized through a scatter plot in Figure 8. It can be seen that demand increases both below and above 20 °C, and that the rate of increase is slower below 20 °C than above it. Figure 7 also shows evidence of annual demand seasonality and of correlation between the demand and temperature time series. The demand takes high values at both low and high temperatures, i.e., during the winter and summer seasons, because of the use of electrical heating and air conditioning. It is clear, then, that the temperature and demand series are correlated. Table 3 presents the R-squared values for the linear relationship between the hourly temperature and wind speed and the PV system output. The R² analysis shows a high, directly proportional correlation between temperature and the PV system output, with R² equal to 0.94. For wind speed, the R² value is 0.39, which shows that wind speed explains less of the PV output variability than temperature. However, wind speed acts as a natural cooling system for the PV panels and helps to increase PV output, which explains the positive linear relationship between them.
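A minimal sketch of the R² calculation reported in Table 3 is shown below, assuming an aligned hourly data frame; the file and column names ("temperature", "wind_speed", "pv_kw") are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical aligned hourly series: temperature (°C), wind speed (m/s), PV output (kW).
df = pd.read_csv("weather_pv.csv", parse_dates=["timestamp"]).dropna()

for var in ["temperature", "wind_speed"]:
    res = stats.linregress(df[var], df["pv_kw"])
    # R-squared of the simple linear fit, as reported in Table 3.
    print(f"{var}: R^2 = {res.rvalue ** 2:.2f}, slope = {res.slope:.3f}")
```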
Load Data Analysis
In order to provide an overview of the demand data, the ten households' data are summarized in Table 4, showing demand statistics comprising the average demand, µ, and the standard deviation, σ. Furthermore, to show the extent of variability at hourly and daily resolutions relative to the mean, the coefficient of variation (CV), i.e., the relative standard deviation, is also presented in Table 4. The standard deviation (σ) of domestic demand is 1.4 kWh for hourly demand and 15.1 kWh for daily demand, corresponding to approximately 87.2% (hourly) and 38.3% (daily) of the mean value, a substantial indication of greatly fluctuating and erratic domestic demand. Moreover, Figure 9 provides an alternative visualisation of the distribution of domestic demand data, in which the average hourly demand can be broadly classified into four groups: (1) from 0 to 0.5 kWh as low demand, (2) from 0.5 to 2 kWh as normal demand, (3) from 2 to 3.5 kWh as high demand, and (4) over 3.5 kWh as high peak demand. The share of time spent at each level is 20% low, 19% high and 11% high peak, as observed in Figure 9, while the remaining 50% of the time corresponds to the normal (average) demand consumed by households. The ten houses' demand curves for the same day (a working day) are presented in Figure 10. In general, the household demand curves for the ten houses show similar behaviour, with two main peaks in the morning and evening, which is typical for household demand [17,23]. However, there is a wide deviation between the demand curves at the same time, as shown in Figure 10. For example, house (5) reached a morning peak demand of 3 kWh compared to 1.9 kWh for house (10) at 8:00 and 2.7 kWh for house (2) at 10:00. This deviation is mainly related to differences in householders' behaviour in consuming electrical energy. Such deviation at the individual energy user level increases the uncertainty and the difficulty of creating an accurate forecast model.
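The statistics and demand classification above can be reproduced along the following lines; this is an illustrative sketch with a hypothetical file name, not the authors' exact processing.

```python
import pandas as pd

# Hypothetical hourly household demand series (kWh).
demand = pd.read_csv("household_demand.csv", index_col="timestamp",
                     parse_dates=True)["demand_kwh"]

# Coefficient of variation (relative standard deviation) at hourly and daily resolution.
cv_hourly = demand.std() / demand.mean() * 100
cv_daily = demand.resample("D").sum().pipe(lambda d: d.std() / d.mean() * 100)
print(f"CV hourly: {cv_hourly:.1f}%, CV daily: {cv_daily:.1f}%")

# The four demand groups used in Figure 9.
bins = [0, 0.5, 2.0, 3.5, float("inf")]
labels = ["low", "normal", "high", "high peak"]
groups = pd.cut(demand, bins=bins, labels=labels, include_lowest=True)
print(groups.value_counts(normalize=True).round(2))  # share of hours in each group
```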
On the other hand, for aggregated demand profiles, such as the data collected from Madaba city, the load profile is usually smoother and more predictable, with an annual seasonality pattern [17,23]. A detailed analysis of demand at this level of aggregation is presented and discussed in [23]. Therefore, the following analysis investigates the daily and hourly cycles or patterns that were not discussed in [23]. Figure 11 presents the total demand by day of the week for Madaba city. The total daily demand percentage is similar over all weekdays except Sunday, which has the highest share of 17.1% of the total weekly demand. In Jordan and the Middle East, Sunday is the first working day of the week, and the weekend (non-working days) falls on Friday and Saturday. In general, there is no obvious pattern in the daily distribution over the week, and the total daily demand values are similar.

Table 5 presents the R-squared values for the relationship between the current demand L(t) and the lagged demand L(t − i). The highest R² value was 0.89, which shows a high correlation between the current and previous hour's demand. This correlation can be used as a main input to the forecast model; however, it requires the measurements to be updated at every time step. The R² increased gradually from 0.22 to 0.89 as the lag value (i) decreased. This means a linear model is less able to explain demand variability when it relies on a high lag value (i), and such a correlation is not an effective relationship for forecasting load. However, the R² value for the previous day's demand at the same hour shows a strong positive correlation of 0.45.

Table 5. R-squared values for the relationship between current and lagged demand at Madaba city.
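A short sketch of the lagged-demand correlation analysis behind Table 5 is given below; the file and column names are assumptions, and the chosen lag values are illustrative.

```python
import pandas as pd
from scipy import stats

# Hypothetical hourly demand series for Madaba city.
load = pd.read_csv("madaba_demand.csv", index_col="timestamp",
                   parse_dates=True)["demand_kwh"]

# R^2 between current demand L(t) and lagged demand L(t - i), as in Table 5.
for i in [1, 2, 3, 6, 12, 24]:
    pair = pd.concat([load, load.shift(i)], axis=1, keys=["L_t", "L_lag"]).dropna()
    r2 = stats.linregress(pair["L_lag"], pair["L_t"]).rvalue ** 2
    print(f"lag {i:>2} h: R^2 = {r2:.2f}")
```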
Time Series Analysis
Time series analysis typically shows that MV network demand exhibits substantial weekly/daily seasonality [38,39]. This section examines in detail the energy usage of a single household over the training data period to determine whether the demand curves follow any pattern or significant seasonality. The previous section provided a high-level examination of demand, which concluded that demand values follow an irregular distribution. The following factors are taken into consideration in determining the type of cycles or patterns in household demand using time series analysis: • Analysis of daily and weekly patterns, to examine hour-to-hour, day-to-day and week-to-week demand and any formation of cycles.
• Analysis of autocorrelation and hourly energy consumption to investigate if there are any seasonal patterns, especially those not in day/week cycles.
Firstly, the energy consumption profiles were examined for weekly and daily patterns. A general analysis of the distribution of hourly demand within week/day patterns is given in Figures 12 and 13. As an example, the hourly demand over six weeks is explored in Figure 12, where the box plots summarize the demand distribution for each week. The weekly medians range from 0.8 kWh to 1.75 kWh over the six-week period, and the maximum median is 118.7% higher than the minimum. Additionally, the Interquartile Range (IQR) within one week also varies greatly. By way of illustration, the first week has an IQR from 0.9 to 3.2 kWh, against an IQR from 0.1 to 1.2 kWh for the second week, with medians of 0.8 kWh and 1.75 kWh, respectively. This indicates irregular demand behaviour, with no apparent weekly seasonality or week-to-week uniformity.
As seen in Figures 12 and 13, the weekday patterns can be examined by plotting the hourly demand distribution by type of day. The hourly data set falls mainly into two categories, as addressed in Section 3.3: around 70% of values are below 2 kWh, while around 12% are over 3.5 kWh. Similar demand distributions are observed for every type of day. Nevertheless, the number of low demand observations (0 to 0.5 kWh) is greater on the other six days than on Friday. Furthermore, the demand analysis by type of day indicates that there is no specific day with an obviously highest or lowest demand value; every day shows a broad spectrum of demand records. There is no obvious pattern in the daily distribution, and the highest and lowest demand values cannot be attributed to particular days. However, low demand values occur frequently between 10:00 and 15:00 throughout the week, except on Saturday and Sunday. This is due to the fact that a single household is normally highly volatile compared to aggregated demand profiles for LV feeders or MV demand [29,40], where any small activity in the household can change the load profile behaviour.
Secondly, the unsteady and erratic behaviour of household demand, compared with aggregated LV or MV demand, makes it challenging to identify seasonality. Therefore, this section examines the correlations over the training data set period. The PACF was calculated over two weeks of lags (336 time lags) in order to locate any links or patterns between the time series points, as shown in Figure 14. From the PACF plot, there is no obvious pattern or seasonality in the distribution of the significant lags, in contrast to aggregated LV demand, which in most cases exhibits remarkable lags at 24 and its multiples. The early significant lags are randomly distributed, without an obvious autocorrelation structure in the demand time series.

In general, this section has introduced and investigated the characteristics of household demand and PV power output, which are important for understanding the behaviour of the data profiles and for improving the load forecast models developed in this paper. The essential contribution of this section is to address the absence in most of the literature of a foundation for energy demand behaviour at two levels: (1) the single household and (2) the PV system. This is crucial for developing the load forecasting algorithms in Section 4. Time series analysis was applied to the household demand and PV system data to check for any trends/patterns and for correlation with external parameters. The PV system output and weather conditions showed a high correlation, while no obvious trends or patterns were found within the data profiles. Furthermore, the analysis of both cross-correlation and time series, as presented in this section, has a direct influence on the improvement of the forecast models. To obtain an adequate load forecast for household demand and the PV system, the ideal variables must be determined; for instance, time series analysis and a PACF plot are needed to specify and select the best orders of the ARIMA parameters (p, d, q).
Load Forecasting Models
In general, load forecasting models are utilized to anticipate fluctuating demand and may assist in achieving better performance for low voltage applications [1][2][3][4][5][10][11][12]. This section develops several ANN and time series forecast models. As illustrated in Section 3, given the fluctuating behaviour of household demand, Madaba city demand and PV system output compared to aggregated low voltage or medium voltage demand, the prediction task addressed here is particularly difficult and complex. In this section, forecast models are developed to predict the domestic demand L̂(t) and the PV system output power P̂(t) for the next hours, that is, at t + 1 up to t + 24, where t represents the time step. Throughout this paper, forecast quantities are distinguished from historical records by the hat (ˆ) notation. Figure 15 illustrates a general diagram of the suggested load prediction procedures. In the subsequent sections, various ARIMAX and ANN models are developed, using probabilistic and new optimization approaches, respectively, as presented in Sections 4.1 and 4.2.
Probabilistic ARIMAX Forecast Model
In general, the ARIMAX approach is a statistical time series method that uses historical data as a function of time to estimate a specified future value. The Auto Regressive Integrated Moving Average (ARIMA) model is a simple, linear approach that is easy to implement and requires no information other than the historical time series itself; it is broadly employed in predicting electrical load demand. To include external variables, ARIMA is extended to the ARIMAX version, which incorporates exogenous (external) variables. Typically, the merit of an external variable is that it provides an additional parameter that assists in reducing prediction errors and makes better use of the accessible data. Both ARIMA and ARIMAX models are commonly used for the prediction of LV demand [8,32]. A non-seasonal ARIMAX model with (p, d, q) orders, taking household demand as an example, is given by Equation (1); the differencing component can be applied repeatedly to make the series stationary [33,34]:

L^{(d)}(t) = C + \sum_{i=1}^{p} \varphi_i L^{(d)}(t-i) + \sum_{i=1}^{q} \theta_i Z(t-i) + \sum_{j=1}^{h} \phi_j X_j(t) + Z(t)    (1)

Here L^{(d)}(t) is the d-times differenced demand at time t, with L^{(0)} = L; the differencing is specified by Equation (2),

L^{(d)}(t) = L^{(d-1)}(t) - L^{(d-1)}(t-1)    (2)

where L^{(d-1)}(t) is the previously differenced demand at time t. The term \sum_{i=1}^{p} \varphi_i L^{(d)}(t-i) is the pth-order autoregressive polynomial lag (the AR(p) model); \sum_{i=1}^{q} \theta_i Z(t-i) is the qth-order moving average polynomial lag (MA(q)); \sum_{j=1}^{h} \phi_j X_j(t) is the term for the h exogenous variables; \phi_j, \varphi_i and \theta_i are the parameters of the external variables and of the AR(p) and MA(q) relations, respectively; Z(t) is the prediction error, which is assumed to be normally distributed, and C represents a constant value. To investigate the link between the current demand and any external variables, it is important to estimate the external variables within the ARIMAX model [29,32-34]. As previously discussed, parameters computed in this way are only used if they decrease the prediction error [7][8][9]. In Section 3.2, the data analysis showed a high correlation between temperature and both the PV output and household demand, as well as a strong positive correlation between PV output and wind speed. Therefore, the weather conditions are used as the external variables: X_1(t) is the hourly temperature and X_2(t) is the hourly wind speed. In general, seven steps must be completed iteratively in order to develop the ARIMAX model; Figure 16 illustrates and outlines this general approach.
Implementation of the ARIMAX forecast models: Note that the ARIMA (p, d, q) model can be extended to ARIMAX by including external variables; here X_1(t) and X_2(t) represent the external variables of the suggested ARIMAX model. The differencing term (d) in the ARIMAX model helps to stabilize the mean of the time series by eliminating trend and seasonality. In this work, only the first difference was required to obtain stationary data, so d = 1 in all models. A BIC matrix was computed for values of p between 1 and 48, q between 0 and 48, and d between 0 and 3, to assist parameter selection for the ARIMAX model. The most preferable parameters for the ARIMAX model are those with the minimum BIC value. The BIC results show that, for the available household and Madaba city demand data, the lowest BIC is obtained with (p, d, q) = (2,1,2), and with (1,1,2) for PV power. The ARIMA model can be derived by removing the external-variables term from the ARIMAX model.
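A minimal sketch of fitting an ARIMAX-type model with exogenous weather variables and selecting the order by BIC is shown below, using the statsmodels SARIMAX class. The file names, column names and split date are hypothetical, and the grid is deliberately much smaller than the p up to 48 and q up to 48 searched in the paper.

```python
import itertools
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical aligned hourly series: demand target and exogenous weather variables.
y = pd.read_csv("household_demand.csv", index_col="timestamp", parse_dates=True)["demand_kwh"]
X = pd.read_csv("weather.csv", index_col="timestamp", parse_dates=True)[["temperature", "wind_speed"]]
y_train, X_train = y[:"2020-05-31"], X[:"2020-05-31"]  # illustrative training cut-off

# Small BIC grid search over (p, q) with d = 1.
best = None
for p, q in itertools.product(range(0, 4), range(0, 4)):
    try:
        res = SARIMAX(y_train, exog=X_train, order=(p, 1, q)).fit(disp=False)
    except Exception:
        continue
    if best is None or res.bic < best[0]:
        best = (res.bic, (p, 1, q), res)

print("selected order:", best[1])
# 24-step-ahead forecast, which requires future values of the exogenous variables.
future_X = X["2020-06-01":"2020-06-01 23:00"]
forecast = best[2].forecast(steps=len(future_X), exog=future_X)
```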
Probabilistic ARIMAX Model
The previous method produces a point forecast, with a single estimated output for each time step [41][42][43][44][45]. However, a point forecast is limited in describing the data distribution and the degree of uncertainty in the data. Therefore, a forecast model that can give a detailed picture of future demand under different degrees of uncertainty is desirable. A probabilistic estimation approach is a model which gives future demand scenarios based on the distribution of the data [41,45]. In this paper, an ensemble (multivariate) forecast model using Monte Carlo sampling is developed to generate future scenarios of the household and Madaba city demand L̂(t + i) and the PV system output power P̂(t + i). The main advantage of developing the ensemble forecast is that it takes into account the inter-dependencies and uncertainty in the data. To represent the volatile and uncertain household demand and PV system output power, the ARIMAX forecast model of Section 4.1 has been modified to generate potential future scenarios by using a Monte Carlo sampling method. Here, we sample the household demand L̂(t + i) and PV system output power P̂(t + i) from the joint probability distribution with temperature and time, as presented in Figures 6 and 9 using 2D histograms. Then, the ARIMAX model presented in Section 4.1 is used to obtain the forecast scenarios. The basic steps of the proposed probabilistic method, combining Monte Carlo sampling with the ARIMAX model, are summarised as follows [29]: (i) sample the demand and PV output values from their joint probability distribution with temperature and time (the 2D histograms); (ii) apply the ARIMAX model of Section 4.1 to each sample to obtain an ensemble of forecast scenarios.
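As a rough sketch of the scenario-generation idea, and not the authors' exact histogram-sampling procedure, the snippet below draws exogenous values from the historical distribution observed at each hour of day and feeds every sampled path to the fitted ARIMAX model; it assumes the `best` results object and exogenous history `X` from the previous sketch.

```python
import numpy as np
import pandas as pd

# Assumes `best` (fitted ARIMAX results) and exogenous history `X` from the previous sketch.
rng = np.random.default_rng(0)
hours = pd.date_range("2020-06-01", periods=24, freq="H")
n_scenarios = 100

scenarios = []
for _ in range(n_scenarios):
    # For each forecast hour, draw temperature/wind from the historical values observed
    # at that hour of day (a crude stand-in for sampling from the 2D histograms).
    sampled = pd.DataFrame(
        {col: [rng.choice(X[col][X.index.hour == h.hour].values) for h in hours]
         for col in X.columns},
        index=hours,
    )
    scenarios.append(best[2].forecast(steps=24, exog=sampled))

ensemble = pd.concat(scenarios, axis=1)
mean_forecast = ensemble.mean(axis=1)  # average scenario, used here for MAPE/RMSE scoring
```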
Figure 16. Methodology of the Autoregressive Integrated Moving Average (ARIMA) and ARIMAX forecasting models. (The flowchart covers: collecting and pre-processing the data, plotting it and identifying outliers; checking stationarity; splitting the data into training and testing sets; differencing the data until it becomes stationary; identifying the ARIMA (p, q) parameters using PACF/ACF plots and AIC/BIC calculations; selecting the exogenous variables that decrease the forecast error to extend ARIMA (p, d, q) to ARIMAX (p, d, q); training the ARIMAX model on the training data set; generating forecasts for the testing data set; and checking whether the forecast error is white noise.)
ANN Forecast Model Optimized by Using Golden Ratio Optimization (GROM)
In general, the prediction of energy demand is a difficult and complex problem that involves many non-linear relationships, such as those with temperature and wind speed in renewable energy applications. A range of artificial intelligence techniques is used for energy forecasting because of their flexibility and their ability to manage complex non-linear relationships when creating accurate prediction models. The ANN is one of the most popular artificial intelligence approaches; it is a mathematical model with a variety of applications, including prediction and control systems [12,40]. The design of artificial neural networks was inspired by the biological neural networks of the central nervous system, with the research goal of discovering how learning operates [40,41]. The mathematical model consists of artificial neurons connected by synaptic weights W_ij, where X_j denotes an individual neuron and X_i a neuron in the following layer [12,41]. Figure 17 illustrates the standard organization of an individual artificial neuron, in which the incoming signals, i.e., the previous layer's outputs multiplied by the synaptic weights, are gathered at a summation point and passed through an activation function [41]. Typically, the activation function in the hidden units is employed to create an output that acts as the input to the following layer [6,41]. Two widely used activation functions are the hyperbolic tangent (tanh) and the sigmoid [41,42]. The objective of this scalar-to-scalar activation function is to model non-linearity in intricate behaviour and to bound the output of the neuron [41].
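A tiny illustrative sketch of the single artificial neuron described above (weighted sum plus a squashing activation) is given below; the input values and weights are arbitrary placeholders.

```python
import numpy as np

def neuron(x, w, b, activation="tanh"):
    """Weighted sum of the inputs followed by a squashing activation function."""
    z = np.dot(w, x) + b
    if activation == "tanh":
        return np.tanh(z)
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid

# Example: three inputs (e.g., temperature, hour of day, previous-hour demand), arbitrary weights.
x = np.array([0.4, 0.7, 0.2])
w = np.array([0.1, -0.3, 0.8])
print(neuron(x, w, b=0.05))
```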
Implementation of traditional ANN forecast models:
The traditional feedforward ANN model aims to forecast the future household and Madaba city demand L̂(t + i) and PV system output power P̂(t + i), where t represents the current time step and i = 1, 2, . . . , 24. Figure 18 illustrates and summarizes the steps of the ANN model and introduces the standard method for ANN development [6,40-42]. The steps in Figure 18 were followed in order to choose appropriate model parameters, as listed below.
1-Variable selection:
• Output variables: the principal targets of this paper, the future demand L̂(t + i) and PV system output power P̂(t + i). • Input variables: initially, the external variables (temperature and wind speed) were chosen as key input variables because of their strong correlation with the selected output variables. Furthermore, a trial-and-error method was employed to choose additional input variables based on the historical and current profiles of household demand and PV system output power. The results and analysis of this trial-and-error parameter check are provided in step 4.
2-Data collection and pre-processing: the measured data are presented in Section 3. This step includes checking all data to avoid data loss, as well as examining the data to reduce noise, discern trends and find any important relationships. 3-Dividing the data set: the collected data sets are separated into training, validation and testing data sets, as discussed in Section 3. 4-ANN model parameter selection: parameter functions are used in this case study because of their ability to capture complex correlations while keeping the computation manageable, and the numbers of hidden layers and neurons are identified through a trial-and-error approach.
• Input variables: in general, to improve the expected performance, suitable external variables should be selected based on the objectives of the model and the availability of data. In Section 3.2, the data analysis showed a high correlation between temperature and both the PV output and household demand, as well as a strong positive correlation between the PV output and wind speed. Therefore, the weather conditions are recommended as external variables: X_1(t) is the hourly temperature and X_2(t) is the hourly wind speed. In Section 3.3, the previous hour's demand and the previous day's demand at the same hour showed a strong positive correlation with the current demand at Madaba; therefore, these two variables, together with the hour of the day, are recommended as external variables X_3 to X_5. In order to verify the impact of the proposed external variables on the forecast model accuracy, Section 5.3 presents a statistical analysis of the ANN forecast models with different external variables.
The following exogenous variables are used in the PV power forecast model: X_1: temperature, X_2: wind speed, X_3: hour of the day, X_4: former hour data and X_5: former day data at the same hour. For the household and Madaba city demand forecast models, the exogenous variables are: X_1: temperature, X_2: average of the previous two hours' demand, X_3: hour of the day, X_4: former hour data and X_5: former day data at the same hour. • Number of hidden layers: two hidden layers. • Number of hidden neurons: ten neurons in each hidden layer.
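A minimal sketch of an ANN with the architecture listed above (five exogenous inputs, two hidden layers of ten neurons, tanh activation) is shown below using scikit-learn's MLPRegressor as a stand-in; the file and column names are hypothetical, and the paper's exact training setup may differ.

```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature frame with the five exogenous inputs listed above and the demand target.
df = pd.read_csv("madaba_features.csv", parse_dates=["timestamp"]).dropna()
X_cols = ["temperature", "avg_prev_2h_demand", "hour_of_day",
          "prev_hour_demand", "prev_day_same_hour_demand"]
X, y = df[X_cols], df["demand_kwh"]

split = int(len(df) * 0.65)  # chronological training portion, as in Section 3
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10, 10),  # two hidden layers, ten neurons each
                 activation="tanh", max_iter=2000, random_state=0),
)
model.fit(X.iloc[:split], y.iloc[:split])
pred = model.predict(X.iloc[split:])
```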
ANN-GROM Forecast Model
In the traditional ANN forecast model, optimization techniques such as steepest descent and the Gauss-Newton method have been used in the literature [12][13][14][15][16] to solve the learning problem and achieve the best ANN performance. These traditional optimization techniques find locally optimal ANN parameters and require the objective function to be smooth, continuous and differentiable. However, they cannot be used efficiently to optimize an ANN forecast model for electrical demand with a high level of uncertainty. Therefore, it is important to explore alternative optimization methods; to the best of the authors' knowledge, this is the first work on load forecasting optimized using the Golden Ratio Optimization Method (GROM).
The previous section presented traditional ANN load forecasting for household demand and for city demand connected to renewable energy systems. In reality, however, the output of renewable energy systems and LV demand are naturally non-smooth due to the volatile behaviour of weather conditions. A new optimization technique is therefore required to efficiently achieve the best ANN performance and minimize the forecast error while dealing with the uncertainties in renewable energy and LV demand profiles. In this paper, the Golden Ratio Optimization Method (GROM) is used to achieve the best ANN performance and optimal parameters. GROM, as a new optimization-training algorithm, improves the training process by reducing the tuning time and increasing the speed of reaching a global solution compared to traditional methods such as the gradient descent training algorithm. GROM is an optimization solver inspired by natural growth patterns, such as those of plants [43]. Its search pattern is based on the golden ratio, which is linked to the Fibonacci sequence. The golden ratio is used to determine the growth (search) angle of the model, which helps to improve the search and reach an optimal solution [43]. The golden ratio updates the searching process and finds the optimal solution in two phases. Firstly, the mean value of all candidate solutions for training the ANN (the population) is calculated; the mean solution is then compared, in terms of fitness, with the worst solution. If the mean solution achieves a better fitness value, it replaces the worst solution. This step speeds up the algorithm and its convergence. Secondly, to determine the direction of the search (the searching angle), a random solution is selected and compared to the mean solution to investigate its impact on the search movement. This helps to determine the optimal ANN model parameters and avoids choosing additional parameters that could mislead the forecast model. In this paper, GROM is developed to optimize the ANN forecasting model using the following steps: • Firstly, a number of random learning parameter vectors for the ANN forecast model are created as the initial population, and the mean value of the population is calculated. • Secondly, the fitness of each parameter vector is evaluated using the ANN learning cost function. The fitness of the mean population solution is then compared to that of the worst solution; if the mean solution has a better fitness, it replaces the worst solution. This step in GROM aims to speed up convergence. • Thirdly, a random solution vector is created in the population to determine and specify the direction and size of the next step. The fitness of the new random solution and of the selected population member is compared to that of the mean solution. The random solution creates a random movement towards the next step and provides the ability to search the whole space of the cost function. The size and direction of the movement towards the new solution are selected using the Fibonacci formula (golden ratio), as in [43]. The best parameter solution is the one with the minimum objective function value, and in GROM the parameter solutions are updated and moved towards the best solution in the population [43].
In general, the proposed GROM optimization technique is free from any tuning steps, which helps to simplify the model and to reduce the convergence time and the computational cost. In this work, the optimization model parameters were evaluated over a wide range of values, as in [43], and the best parameter solution was used to obtain the results.
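The following is a loose, illustrative sketch of a golden-ratio-guided population search of the kind summarised above, applied to a generic cost function standing in for the ANN learning cost. It is based only on the steps described in this section; the exact update rules of GROM in [43] may differ, and all names here are placeholders.

```python
import numpy as np

def grom_like_search(cost, dim, pop_size=30, iters=200, seed=0):
    """Illustrative population search guided by the golden ratio (not the exact GROM of [43])."""
    rng = np.random.default_rng(seed)
    phi = (1 + np.sqrt(5)) / 2                       # golden ratio
    pop = rng.uniform(-1, 1, size=(pop_size, dim))   # random initial parameter vectors
    fit = np.array([cost(p) for p in pop])

    for _ in range(iters):
        mean = pop.mean(axis=0)
        # Replace the worst solution by the population mean if the mean is fitter.
        worst = fit.argmax()
        if cost(mean) < fit[worst]:
            pop[worst], fit[worst] = mean, cost(mean)
        best = pop[fit.argmin()]
        # Move each solution towards the best one with a golden-ratio step size,
        # plus a random component involving the mean to keep exploring the space.
        rand = pop[rng.integers(pop_size)]
        pop = pop + (best - pop) / phi + rng.uniform(-0.1, 0.1, pop.shape) * (mean - rand)
        fit = np.array([cost(p) for p in pop])

    return pop[fit.argmin()]

# Toy usage: minimise a simple quadratic standing in for the ANN learning cost.
best_params = grom_like_search(lambda p: np.sum((p - 0.5) ** 2), dim=5)
```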
Results and Discussion
In this section, the forecast models' results are introduced and discussed. Firstly, to evaluate the performance of the proposed forecasting models over a specific time series, it is necessary to define the forecast evaluation method. The accuracy of forecasting models can be determined using different metrics, such as the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE) [1][2][3][4][5][6][7], as shown in Equations (3) and (4):

MAPE = \frac{100\%}{T} \sum_{t=1}^{T} \left| \frac{L(t) - \hat{L}(t)}{L(t)} \right|    (3)

RMSE = \sqrt{\frac{1}{T} \sum_{t=1}^{T} \left( L(t) - \hat{L}(t) \right)^{2}}    (4)
where L(t) is the actual data, for example the household demand; L̂(t) is the predicted data; t is the current time step, and T is the total number of time steps (observations). MAPE and RMSE are the most common evaluation methods for forecast models. MAPE is scale-independent, which makes it easy to interpret as a percentage [41]. However, if an actual data reading is zero, MAPE cannot be used because it generates undefined values; the RMSE is therefore also used in this paper to avoid this problem when evaluating the forecast models. Nevertheless, RMSE, MAPE and other such metrics focus on the mean value of the error and do not show the forecast model's performance at every time step. For example, in some cases the actual and forecast demand profiles have similar magnitudes but are shifted in time, which leads to extremely high error values. In future work, the evaluation method for LV demand applications will be extended by using an energy score model. Throughout this section, the performance of the forecast models is compared by: • Comparing the forecast models' performance over different data profiles: household demand, Madaba city demand, PV energy output and the net curve at the household, which is the difference between household demand and PV system output.
•
Evaluating the impact of exogenous variables (weather conditions) on the prediction models.
•
Evaluating the importance of designing a rolling load forecast model compared to a fixed forecast model, especially for volatile data profiles such as LV household demand.
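As referenced above, Equations (3) and (4) are not reproduced here, but MAPE and RMSE have standard definitions. The following Python sketch assumes those standard forms; it is illustrative only and not the paper's code.

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error (standard definition, assumed to match
    Equation (3)); undefined when any actual value is zero."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    """Root Mean Square Error (standard definition, assumed to match Equation (4))."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Example with hourly demand in watts:
# mape([500, 620, 480], [520, 600, 470])  -> roughly 3.1 %
```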
Overall Comparisons
The MAPE and RMSE were calculated over the testing period for each day, as presented in Table 6. The MAPE and RMSE scores of the Probabilistic-ARIMAX forecast model are based on the average demand scenario. Furthermore, the MAPE and RMSE for the household demand application are calculated as the average of the ten houses' results, which show no significant deviation. In general, the mean-value approach is one of the most common ways of handling stochastic problems [29]. In terms of overall performance, the ANN-GROM forecast models provided the highest prediction accuracy for all data profiles over the testing period. First, the traditional ANN and ARIMAX model profiles were generated for the three types of data sets over the testing period and compared with the actual data. A specific example of the actual household and net demand together with the prediction model profiles is illustrated for one day in Figures 19 and 20. The ARIMAX model misses a significant peak at 8:00 and tends to underestimate the household demand, as shown in Figure 19. In addition, the ARIMAX and Probabilistic-ARIMAX models tend to underestimate compared with the traditional ANN and ANN-GROM models, as presented in Figures 19 and 20. For all three types of data sets, the ANN-GROM and Probabilistic-ARIMAX models outperformed the traditional ANN and ARIMAX models, as presented in Table 6. The MAPE of the ANN-GROM model improved by 41.2%, 22.1%, 30.1% and 27.9% for the household, PV output, net demand and Madaba city demand profiles, respectively, compared with the traditional ANN models. In addition, Table 6 shows that the Probabilistic-ARIMAX models outperformed ARIMAX by providing minimum RMSE values of 28.1 W, 31.9 W, 40.8 W and 845 kW for the household, PV, net demand and Madaba city demand profiles, respectively. ARIMAX generated the highest RMSE value when forecasting the net demand curve. Moreover, all forecast models show a lower prediction performance when forecasting the net demand curve compared with the PV and household profiles. This is mainly because the exogenous variables for both forecast techniques were chosen based on the correlation between weather conditions and the PV and household profiles, without taking the net demand curve into account. Section 5.2 presents the effect of the choice of exogenous variables on the accuracy of both forecast models in more detail. ARIMA and ARIMAX are point-forecast models, in which only a single estimated value is generated at each time step. A point forecast (ARIMAX) is limited in capturing the demand behaviour, particularly for data with a large degree of uncertainty. Instead, the Probabilistic-ARIMAX gives a more detailed picture of demand by generating a number of future demand scenarios, which helps to capture all possible scenarios, including the worst case, based on the historical data. Therefore, the mean value of the Probabilistic-ARIMAX showed more accurate forecast results than ARIMAX, as shown in Table 6. However, the Probabilistic-ARIMAX model is limited by the number of generated scenarios and the size of the available historical data; in general, increasing the number of generated demand scenarios increases the computational cost. Table 6 presents the overall performance of all prediction models.
Forecast Error Analysis
In this section, the percentage forecast error of ARIMAX, taken as an example, over one week of PV system data has been analysed by plotting the histogram of the prediction error in Figure 21. First, the forecast error values are distributed over a wide range (−0.6 to 0.6). Second, a large share of the forecast error percentages is clustered around 0%, with many of the errors lying between −0.2% and 0.2%. A normal distribution therefore appears to describe the ARIMAX model error well, showing no bias in the distribution. This also suggests that it may be difficult to improve the performance of the forecast models much further, as the error is centred around zero. As previously discussed, the household demand and PV system profiles are volatile and less predictable than aggregated LV demands or MV demands. Nevertheless, the forecast models in this paper are accurate compared with examples presented in the literature. For instance, an ANN forecast model was presented by Bi et al. [46] to predict the power output of a PV system; the results show MAPE forecast errors of 10.06% and 18.9% on sunny and rainy days, respectively. The high MAPE was mainly related to the type of exogenous variables used in the model. In [46], the high, low and average temperature values of similar days were used to generate the forecast profile. The average daily temperature shows less correlation with the current demand than the hourly temperature, since the temperature normally changes from morning to midday to evening. The differences between the actual temperature and the average temperature are reflected in the demand consumption and PV output, as presented in Section 3. In this paper, the hourly temperature and the historical data correlation were used to predict the PV profile.
Effect of Exogenous Variables on Forecast Models
In order to improve the performance of the forecast models and minimise the high error peaks, exogenous variables such as weather conditions have been used. In this paper, the impact of the exogenous variables in the ANN and ARIMAX models has been evaluated by dividing the forecast models into sub-models as follows:
• Model NN2: ANN model without the exogenous variables related to weather conditions; it includes only the time-related variables (hour of the day, previous-hour data, previous-day data at the same hour).
• Model NN3: ANN model without the variables related to the time series and seasonality; it includes only the variables related to weather conditions (temperature, wind speed).
In this section, the prediction models defined above have been tested for predicting the PV power output (single household system) over the testing period. Table 7 shows significant improvements in the MAPE and RMSE for all ARIMAX and ANN forecast models using the exogenous variables (weather conditions) compared with the ARIMA and ANN models that rely only on the time-series correlation. The MAPE of the NN1 model decreased by 5.6% compared with NN2, and the RMSE of Model A1 decreased by 21.6 W compared with Model A4. Overall, forecast models with exogenous variables improve the prediction accuracy and reduce the large error peaks. For the current PV system data set, this indicates that the exogenous variables, together with the historical data, are recommended as inputs for the forecast model. Model A2, which uses only the temperature as exogenous variable, and Model A1 (with two exogenous variables: wind speed and temperature) performed similarly, with differences in accuracy of less than 1.3%. Furthermore, Table 5 shows that Model A2 is slightly more accurate than Model A3, which uses the wind speed as exogenous variable. This indicates that temperature information has a greater impact on the prediction performance than wind speed for the PV system output forecast. Based on the analysis of the PV data set and the performance of the forecast models, the prediction models benefit from using both wind speed and temperature as exogenous variables; however, even one of them as exogenous variable helps to reduce the error peaks and the impact of outlier values. The results of Model A4 and Model NN2 show high forecast errors compared with all other models. This is mainly because, for low-voltage demand with a high level of uncertainty, the correlation between the current demand and the external (weather-related) variables is stronger than the time-series autocorrelation. Weather conditions (as external variables) increase the ability to capture the changing demand behaviour at low-voltage level, in combination with the seasonality and time-series autocorrelation represented by Model A1 and Model NN1. However, Model NN3 employed only the weather conditions (as external variables), without any time-series autocorrelation or variables such as the time of day or the previous load, which led to a high forecast error of 19.7%, since the weather conditions are normally similar across different hours of the day. The forecast models NN1 and NN2, as relatively simple forecast models, outperformed the deep-learning ANN model [47] and the Long Short-Term Memory (LSTM) model [47]; the results of [47] reported in Table 7 are the average of the best results for ten householders.
Evaluating the Importance of Designing a Rolling Load Forecast
In this paper, the proposed forecast models are extended to create a rolling demand forecast. The rolling forecast first predicts the hourly household demand one day ahead, and the forecast model is then updated after each time step. This procedure recalculates and updates the forecast profile for the following 24 h using the new real-time measurements and the forecast error, with the aim of minimising the forecast error compared with a fixed forecast model over one day. A minimal sketch of this update loop is given below.
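The following Python sketch illustrates the rolling (receding-horizon) update described above. The forecaster interface (`fit_model` returning an object with a `forecast` method) is a hypothetical placeholder for the trained ANN or ARIMAX model; the data handling shown here is an assumption, not the paper's implementation.

```python
def rolling_forecast(history, new_measurements, fit_model, horizon=24):
    """Re-fit/update the forecaster after every new measurement and
    re-issue a 24-hour-ahead profile, as described for the rolling model.

    `fit_model(history)` is a hypothetical helper returning an object with a
    `.forecast(horizon)` method (e.g. a wrapper around the trained ANN or ARIMAX).
    """
    history = list(history)
    profiles = []
    for measurement in new_measurements:
        model = fit_model(history)                 # update with the latest real data
        profiles.append(model.forecast(horizon))   # next 24-hour profile
        history.append(measurement)                # roll the window forward one step
    return profiles
```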
To assess the rolling forecast accuracy of the proposed forecast models, the overall daily MAPE is presented in Table 8. A comparison of the rolling model (updated at each time step) and the fixed forecast model (updated on a daily basis) for a single household demand over 7 days, as shown in Table 8 and Figure 22, shows that the prediction performance of the rolling model improves significantly compared with the fixed model. For example, on Day 4 the daily MAPE decreased from 7.3% to 5.2% for the ANN and from 8.6% to 7.1% for ARIMAX. The minimum and maximum daily MAPE improvements of the rolling ANN-GROM forecast model were 18% on Day 5 and 35.2% on Day 3, respectively. In addition, the average daily MAPE over the testing period for the rolling ANN with different update schedules is presented in Figure 22. Hourly updating (rolling forecast) improves the overall daily MAPE by 28% compared with 12-hourly updating, and the MAPE improves only slightly (by less than 3%) beyond the 12-time-step update. This indicates that the updated measurements help to increase the prediction accuracy; however, the rolling process increases the computational cost compared with a fixed forecast.
Evaluating the Impact of Demand Disaggregation
In Section 1, the load forecasting literature was reviewed, focusing on the high and medium voltage levels and, for the low voltage level, on feeder demand (aggregations of smart meter data). In general, the low voltage demand of individual users is much more stochastic and non-smooth than high and medium voltage demand or aggregated low voltage demand, because of the high uncertainty in the demand profile. Nowadays, smart grid and microgrid systems concentrate on using distributed generation and individual user needs to build more efficient energy management models and networks. Therefore, new intelligent methods and probabilistic forecasts are required to account for the level of uncertainty associated with the level of aggregation of smart meters [21]. For example, the authors in [21] used a Recurrent Neural Network (RNN) to estimate the power and energy demand of low voltage applications as load disaggregation in order to achieve a more efficient energy management system. Table 9 presents the forecast models' results for three different levels of aggregation: a single household, the aggregation of ten households' demand (LV demand feeder), and a small city (medium voltage). All forecast models performed more accurately on aggregated demand than on single household demand. This is mainly because the time-series autocorrelations and the correlation between the current demand and the external variables are stronger and more pronounced for aggregated demands, such as feeder and medium voltage level demands. In other words, larger demands (high and medium voltage levels and aggregated low voltage), which consist of aggregations of larger numbers of individual households, show more prominent regularities in their daily, weekly and seasonal behaviour.
Evaluation of the Probabilistic Forecast
The ensemble forecast model is a common technique for creating future power-load scenarios and feeding stochastic controllers with different input scenarios [31,38]. However, point-forecasting results such as those of ARIMAX and ANN are difficult to compare with ensemble forecast scenarios, as these techniques are not directly comparable. The analysis of the ensemble forecast results in this section aims to show the value of using different forecasting techniques when uncertainty must be handled in different engineering problems. In general, the forecast process can be repeated to generate 1000 to 10,000 scenarios; the more scenarios created, the higher the computational cost, but also the greater the diversity of power-load behaviour that is captured. Figure 23 shows an example of the simulated ensemble forecast. The scenarios of the future single household demand, L̂(t), are shown as red lines deviating closely around the actual demand values. However, the forecast errors become wider as the horizon length of the forecast model increases, due to the accumulation of forecast errors at each step, which also reflects the high uncertainty at the end of the prediction horizon.
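As an illustration of how an ensemble of future demand scenarios might be generated around a point forecast, the sketch below samples forecast residuals whose spread grows with the lead time. The Gaussian residual model and the per-step standard deviations are assumptions for illustration; the paper's exact scenario-generation procedure is not reproduced here.

```python
import numpy as np

def simulate_scenarios(point_forecast, residual_std, n_scenarios=1000, seed=0):
    """Generate demand scenarios around a point forecast by adding sampled
    noise whose spread grows with the horizon (assumed Gaussian residuals)."""
    rng = np.random.default_rng(seed)
    point_forecast = np.asarray(point_forecast, float)   # shape (T,)
    residual_std = np.asarray(residual_std, float)        # one std per lead time, shape (T,)
    noise = rng.normal(0.0, 1.0, size=(n_scenarios, point_forecast.size))
    return point_forecast + noise * residual_std          # shape (n_scenarios, T)

# With residual_std increasing over the horizon, the scenario fan widens towards
# the end of the prediction window, matching the behaviour described for Figure 23.
```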
Conclusions
The non-smooth and stochastic nature of household demand and PV power output, with no clear time-series patterns compared with aggregated demand such as MV, increases the uncertainty and the difficulty of prediction for LV applications. Therefore, an advanced prediction technique is required to minimise the impact of non-smooth demand behaviour and to reduce the forecast error. In this paper, Probabilistic-ARIMAX and ANN-GROM forecast models have been developed and applied to predict different LV applications, and the performance of the prediction models has been improved by using exogenous variables, a new optimisation method and a rolling forecast technique. The proposed forecast models have been trained and tested using real-time power grid data. The results show that the proposed prediction models with exogenous variables and a rolling forecast technique are effective at reducing the forecast error. In particular, the ANN-GROM achieves favourable results for the given household demand data and outperforms the traditional ANN, ARIMAX and Probabilistic-ARIMAX models. For example, the MAPE of the ANN-GROM model improved by 41.2% for the household demand forecast compared with the traditional ANN model, and the model showed a high ability to capture the changing behaviour of disaggregated demands. Alongside the reduction in forecast error, the household demand and PV data analysis and this forecast model can also help DNOs to better understand LV application demand and to gain considerable technical and economic benefits. In addition, using different optimisation methods, such as ELM, to train the ANN forecast model will form part of our future work. | 21,361.2 | 2021-04-12T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
A Fast Firefly Algorithm for Function Optimization: Application to the Control of BLDC Motor
Firefly Algorithm (FA) is a recent swarm intelligence algorithm, first introduced by X.S. Yang in 2008. It has been widely used to solve several optimization problems, and many research works have since presented modified versions intended to improve the performance of the standard algorithm. This article presents an accelerated variant of the original algorithm. Through the resolution of several benchmark functions, the obtained results demonstrate the superiority of the suggested alternative, the so-called Fast Firefly Algorithm (FFA), over the standard FA in terms of convergence speed to the global solution at a comparable precision. Additionally, a successful application to the control of a brushless direct current (BLDC) motor by optimization of the Proportional Integral (PI) regulator parameters is given. These parameters are optimized by the FFA, FA, GA, PSO and ABC algorithms using the IAE, ISE, ITAE and ISTE performance criteria.
Introduction
Optimization is one of the methods that seek to solve complex problems in engineering and other fields. The objective of optimization is to locate the optimal value of a cost function in a well-defined search space under different constraints [1]. Among the techniques used in optimization are swarm intelligence algorithms, which are nature-inspired algorithms that have spread over the past two decades [2]. The significant performance of swarm intelligence algorithms compared with conventional optimization methods continues to motivate researchers to exploit them in several complex optimization problems across different fields [3]. These algorithms operate on two different search properties, exploitation and exploration: exploration scans the entire search space and prevents the algorithm from falling into local optima, while exploitation ensures the efficiency of the search and the convergence of the algorithm towards the optimal solution [4]. Since the appearance of the Genetic Algorithm [5], many optimization algorithms have been proposed, such as Ant Colony Optimization [6,7], Artificial Bee Colony (ABC) [8,9], Particle Swarm Optimization (PSO) [10,11], Modified Particle Swarm Optimization [12], Cuckoo Search (CS) [13,14], Bat Algorithm (BA) [15,16], Gray Wolf Optimizer (GWO) [17,18], Firefly Algorithm [19,20] and so on.
The Firefly Algorithm, introduced in 2008, is one of the best-known swarm intelligence algorithms for optimization problems. Due to its ease of design, simple implementation and flexibility, it has become popular in the field of optimization and has been widely applied to diverse engineering optimization problems, such as those in [21,22]. Despite these advantages, it has drawbacks, such as getting trapped in local minima and an inability to guarantee a balance between exploration and exploitation [2,23]. Therefore, several improved algorithms have been proposed to overcome these drawbacks, which has broadened their successful application in engineering, for example to optimizing Proportional Integral Derivative (PID) parameters in machine control [24][25][26][27][28].
The PID controller and its variants are mainly used in process control to obtain better dynamic performance of the controlled systems. Optimal values of the controller parameters are therefore needed, and in this context the choice of controller gains becomes an optimization problem [29]. FA and rival algorithms have been successfully applied to the optimization of PID parameters, mainly in electrical engineering and other fields [30]. One prominent application in electrical engineering is the control of a BLDC motor driven by a tuned and optimized PID. A BLDC motor is developed on the basis of the brushed DC motor and is a special type of synchronous electrical motor: it is driven by a DC voltage, but the current commutation is performed by solid-state switches, with the commutation instants fixed by the rotor position detected by a Hall-effect position sensor [31].
It is noticeable that the BLDC motor has several advantages: high efficiency, long operating life, low noise, small size and good speed-torque characteristics. In general, it has seen wide adoption in automotive, aerospace and other engineering industries, and its use is therefore exposed to many types of load disturbances. Conventional control methods cannot withstand these alterations and lose their precision, so it has been necessary to implement advanced control techniques to solve this problem, especially those based on artificial intelligence, such as fuzzy control [32,33], neural control [34,35], Genetic Algorithm (GA) control [36,37], PSO control [38], BAT control [31] and, recently, FA control and the Improved Firefly Algorithm (IFA) or Modified Firefly Algorithm (MFA) [24][25][26][27][28]. These methods are essentially based on the optimization of the parameters of the PID controller and its variants to obtain optimal performance.
In this paper, we propose an improved version of the FA for function optimization, based on reducing the search effort. We apply this method to several benchmark problems and also to the design of a controller for a BLDC motor. The paper contains two experimental parts: the first concerns the search for the global optimum of several benchmark functions with the FA and FFA algorithms, followed by a comparative study. To consolidate its efficiency, a second application, the optimization of the PI parameters for BLDC motor control, is carried out through simulation on the MATLAB platform. This application uses the FFA, FA, GA, PSO and ABC algorithms with the IAE, ISE, ITAE and ISTE performance criteria to test the competitiveness of the FFA algorithm. Finally, a comparison of the obtained results shows that the performance of the FFA is better than that of the other algorithms, and it can be concluded that this new algorithm is a valid competing meta-heuristic optimization method.
The paper is organized as follows. Section 2 introduces the mathematical background of the standard FA and the suggested FFA. In Section 3, the two algorithms are compared through the search for the optimum of several standard test functions. The mathematical model of the BLDC motor and the PI controller, together with a description of the experimental results, are presented in Section 4. Finally, conclusions summarizing the achieved work are given in Section 5.
Standard Firefly Algorithm
Firefly Algorithm is inspired by the natural behavior of fireflies, which use their own luminosity to approach each other in the dark. Three assumptions were suggested by Yang to describe the behavior of fireflies [19,20]. First, all fireflies are unisex, so each firefly can be attracted to any other firefly regardless of sex. Second, the attractiveness is linked to the light intensity, which is a function of the distance between the firefly concerned and the other fireflies; the attractiveness decreases as the distance increases. Finally, the luminosity (light intensity) of a firefly is given by the value of the cost function of the problem posed. Mathematically, the FA algorithm can be described by the following equations [19].
The light intensity of a firefly is given by Equation (1), where γ is the absorption coefficient and I0 is the initial value at r = 0. The attractiveness is expressed by Equation (2), where β0 is the initial value at r = 0. Equation (3) evaluates the distance between two fireflies i and j, at positions xi and xj, respectively, defined as the Cartesian distance, where xik is the kth element of the spatial coordinate xi of the ith firefly and D denotes the dimensionality of the problem [19].
The motion of the ith firefly towards the jth one is determined by Equation (4), where xi(t + 1) is the position of firefly i at iteration t + 1. The first term on the right-hand side of Equation (4) is the position of firefly i at iteration t, the second term corresponds to the attractiveness, and the last one is the randomization term (blind flight if there is no light), where α is the random walk parameter, α ∈ [0, 1) [19]. The FA procedure (Algorithm 1) is given as follows [19]:
Algorithm 1. Firefly Algorithm
Initialize the FA parameters (population size n, α, β0, γ and the number of iterations).
Define the light intensity by the cost function f(xi), where xi (i = 1, …, n).
While (iter < Max Generation)
  for i = 1:n (all n fireflies)
    for j = 1:n (all n fireflies)
      if f(xi) < f(xj), move firefly i towards j, end if
      Update attractiveness β with distance r.
      Evaluate the new solution and update f(xi) according to Equation (4).
    end for j
  end for i
  Rank the solutions and find the current global best.
end while
Show the results.
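Since Equations (1) to (4) are only described in words above, the following Python sketch shows one standard firefly move using the usual exponential-decay attractiveness; the exact expressions in the paper may differ in detail, and the parameter values here are illustrative assumptions.

```python
import numpy as np

def fa_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """One standard firefly move of x_i towards a brighter firefly x_j
    (x_i and x_j are assumed to be numpy arrays of equal length):
    attractiveness decays exponentially with squared distance, plus a small
    random walk (common textbook form, assumed here)."""
    r2 = np.sum((x_i - x_j) ** 2)          # squared Cartesian distance, Eq. (3)
    beta = beta0 * np.exp(-gamma * r2)     # attractiveness at that distance, Eq. (2)
    step = alpha * (np.random.rand(x_i.size) - 0.5)   # random-walk term
    return x_i + beta * (x_j - x_i) + step             # move of Eq. (4)
```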
Fast Firefly Algorithm
It is worth noting that the original algorithm of Xin-She Yang performs (Max Generation · n · n) evaluations, whereas in the proposed version only (K · n) evaluations are performed per generation, where K is an integer. This means the conventional algorithm is hugely time consuming compared with the suggested one. The proposed Algorithm 2 is summarized as follows:
Algorithm 2. Fast Firefly Algorithm
While (iter < Max Generation)
  for k = 1:K·n  // first modification: only K·n pairs are evaluated
    i = rand(n)  // second modification: firefly i is chosen at random
    j = rand(n)  // third modification: firefly j is chosen at random
    if f(xi) < f(xj), move firefly i towards j, end if
    Update attractiveness β with distance r.
    Evaluate the new solution and update f(xi) according to Equation (4).
    Modify the new position obtained by Equation (4) according to Equation (5).
  end for k
  Rank the solutions and find the current global best.
end while
Show the results.
As mentioned above, the new position obtained by Equation (4) is modified according to Equation (5). It should be noted that, in the original version, the values of α and γ are set empirically for each test function and β0 is equal to unity. In the FFA, on the other hand, α is chosen so that convergence is reached easily, and γ is still set equal to 1. The randomization parameter α is reduced exponentially from a maximum value to a minimum value over successive iterations instead of being kept constant; with this injected artifice, the search balance between exploitation and exploration of the proposed algorithm is maintained, and it can give better results than its rival FA [4].
In the original version of the FA, the technique for updating the motion of the fireflies can be improved to be faster. It is beneficial for each firefly in the swarm to find a promising region by reorienting its motion in order to reach the overall optimum more easily. Consequently, the update term is redirected to obtain better exploration and exploitation of the algorithm, and the speed of its convergence is thus guaranteed [1,39].
The essence of the proposed method is the reduction of the search effort (exploration) while keeping the search efficiency satisfactory for reaching the optimal solution. In other words, (K·n) evaluated tests were found to be clearly sufficient to obtain the optimal solution for a large number of benchmark functions and other applications [40].
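The sketch below illustrates the FFA generation loop described above: K·n randomly selected firefly pairs per generation instead of the full n·n comparison, with α decaying exponentially across generations. The decay schedule, the value of K and the move convention (the worse firefly moves towards the better one) are assumptions for illustration, not the authors' exact Equation (5).

```python
import numpy as np

def ffa_minimize(cost, dim, n=25, K=2, max_gen=100,
                 alpha_max=0.5, alpha_min=1e-3, gamma=1.0, bounds=(-5.0, 5.0)):
    """Sketch of the Fast Firefly Algorithm loop: only K*n randomly chosen
    firefly pairs are evaluated per generation, and alpha decays
    exponentially from alpha_max to alpha_min (assumed schedule)."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, size=(n, dim))
    fit = np.array([cost(x) for x in pop])
    for gen in range(max_gen):
        # exponential decay of the randomization parameter (assumed form)
        alpha = alpha_max * (alpha_min / alpha_max) ** (gen / max(max_gen - 1, 1))
        for _ in range(K * n):
            i, j = np.random.randint(n), np.random.randint(n)
            if fit[j] < fit[i]:                     # j is brighter: move i towards j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = np.exp(-gamma * r2)          # beta0 = 1, as in the paper
                cand = pop[i] + beta * (pop[j] - pop[i]) \
                       + alpha * (np.random.rand(dim) - 0.5)
                pop[i] = np.clip(cand, lo, hi)
                fit[i] = cost(pop[i])
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: ffa_minimize(lambda x: np.sum(x ** 2), dim=10) approaches the
# global minimum of the sphere function at the origin.
```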
Benchmark Functions
Standard test functions are essential for evaluating and comparing the characteristics of optimization algorithms. The most important evaluation criteria are the convergence speed and the precision. Hence, 12 different test functions are used to compare the performance of the original FA and the proposed FFA according to the previously mentioned criteria. The test functions used are listed in Table 1, together with their variables, ranges and global optimum values [41,42].
Parameter Settings
The parameter settings of FA and FFA are showed in Table 2.
Functions' Experimental Results
The two algorithms are applied to minimize a set of test functions of dimensions 2D, 10D, 20D and 30D, respectively. The experiments were run in MATLAB R2017a on a PC with 6 GB of RAM. To compare their performance, the minimum, mean, standard deviation and computational time are taken over 10 runs; for each function, the two algorithms operate independently. The optimization results are summarized in Table 3. In terms of precision of convergence towards the global optimum, Table 3 shows that the mean and the standard deviation of the reached optimum over the 10 runs of each test function are better for FFA than for FA in all dimensions.
Concerning the speed of convergence to the global optimum, extensive simulation tests clearly show that the proposed method outperforms the original one and is significantly faster (see Table 3). Accordingly, the average speed-up ratio over the 12 test functions is 12:1, which confirms the effectiveness of the suggested technique.
It is worth noting that the speed-up ratio is defined as the ratio tFA/tFFA, where tFA is the execution time of the original algorithm FA and tFFA is the execution time of the proposed FFA. As shown in Figures 1-4, the proposed algorithm reaches the solutions of all test functions with high precision, outperforming those obtained with the standard one.
As can be seen from Table 3, the proposed algorithm is less biased (the statistical expected value of the cost function obtained with FFA is closer to the theoretical value than with FA) and more consistent (the standard deviation of the cost function obtained with FFA is closer to 0 than with FA). These remarks hold for all twelve test functions, as previously shown in Table 3, for dimensions 2D, 10D, 20D and 30D, respectively. To be more convincing, the robustness and stability of FFA in higher dimensions are evaluated using the test functions F13, F14 and F15 for dimensions 50D, 100D, 150D and 200D, respectively. Table 4 gives the results of these tests, with 10 runs for each test function. It can be concluded that the stability of FFA is not affected by significantly increasing the dimensions (high precision is still obtained); the graphs of Figure 5 reflect these results.
Description
BLDC motor is a permanent magnet synchronous motor that has trapezoidal Back-EMF and an almost rectangular current. It uses position detectors and an inverter to control the armature currents. It becomes popular for industrial applications because of its high efficiency, reliability, noiseless operation, low maintenance and an optimized volume. BLDC motors are available in several different configurations, but three-phase is the most common type due to its high speed and low torque ripple [43].
The drive model of a BLDC motor is shown in Figure 6. It is divided into two blocks: the first is the inverter and the second is the BLDC motor. The BLDC motor is powered by a six-switch inverter where, in each control step, two phases conduct simultaneously while the third is disconnected. Note that the Hall-effect position sensor signals (Ha, Hb, Hc), shifted by 120° electrical, govern these switches by generating the gating pulses (S1, …, S6) every 60° of electrical angle [43][44][45].
Mathematical Modeling of a BLDC Motor
By consideration of the symmetry of the phases, it is assumed that the three phase resistances are identical, as are the inductances. Consequently, the equations describing the equivalent-circuit model of the motor are given in [43][44][45]. The line voltage equations can then be obtained by subtracting the phase voltage equations. The relationship between the phase currents is given by Equation (14); since each voltage is a linear combination of the other two voltages, two equations are sufficient, and using relation (14), Equations (11) and (12) become the reduced form given in [44]. The equation of the mechanical part is as follows, where Te and TL are the electromagnetic torque and the load torque [Nm], J is the rotor inertia, kf is a friction constant and ωm is the rotor speed [rad/s].
The Back-EMF and the electromagnetic torque can be expressed in terms of the rotor position, where ke is the Back-EMF constant, θe is the electrical rotor angle (θe = p·θm/2), θm is the mechanical angle, p is the number of pole pairs and F(θe) is the trapezoidal waveform of the Back-EMFs.
Thus, the torque equation can be written in terms of kt, the torque constant.
Therefore, the function F(θe) is a function of the rotor position, which gives the trapezoidal waveform of the Back-EMF, and one period of the function can be written piecewise. For illustration, Figure 7 shows the Back-EMF, the Hall-effect sensor signals and the currents of the three phases. In a trapezoidal motor, the Back-EMF induced in the stator has a trapezoidal shape, and the phases must be supplied with quasi-square currents for ripple-free torque operation [44,46].
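The piecewise expression for one period of F(θe) is not reproduced above. The following Python sketch uses a commonly adopted trapezoidal shape (flat over 120-degree intervals with linear transitions); this is an assumption and the paper's exact piecewise limits may differ.

```python
import numpy as np

def f_trap(theta_e):
    """Common piecewise trapezoidal shape function F(theta_e) for a BLDC
    back-EMF: +1 and -1 plateaus over 120-degree electrical intervals, with
    linear transitions (assumed form; scalar angle in radians)."""
    t = float(np.mod(theta_e, 2 * np.pi))
    if t < 2 * np.pi / 3:
        return 1.0
    if t < np.pi:
        return 1.0 - (6.0 / np.pi) * (t - 2 * np.pi / 3)
    if t < 5 * np.pi / 3:
        return -1.0
    return -1.0 + (6.0 / np.pi) * (t - 5 * np.pi / 3)

# Phase a back-EMF is proportional to ke * omega_m * F(theta_e); phases b and c
# would use F shifted by -2*pi/3 and +2*pi/3, respectively (assumed convention).
```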
Finally, Equations (15)-(18) can be converted to a state-space form. The resulting complete model is given with eab = ea − eb and ebc = eb − ec.
Hall Effect Sensor and Transistor Switching Sequence
According to the evolution of the angular rotor position between 0° and 360°, the position produced by the Hall-effect sensors is described in Table 5 below. Each Hall-effect sensor responds to the passage of the poles based on rising and falling edges: a rising edge for the north pole and a falling edge for the south pole, so the sensor indicates 1 or 0, respectively. Following this switching logic of the Hall-effect sensors, the switching sequence of the inverter is given in Table 5, where the sequence corresponds to clockwise shaft rotation [45,47].
According to the circuit in Figure 6, the three-phase voltages are calculated with the following formulas [45], where vd is the DC supply voltage.
Speed Control of Brushless DC Motor
The principle diagram for speed control of the three-phase BLDC motor is shown in Figure 8. At the regulator input, the reference speed is compared to the actual speed of the motor to generate a control voltage at its output. The signals of the switching sequences are obtained from the position of the motor shaft. The motor stator is excited by the three-phase currents [45].
PI Controller
The PI controller is a variant of the PID controller. It has been extensively used in industrial applications due to its simplicity, robustness, reliability and easily tuned gains in simple control loops [21].
The equation of the PI controller is specified in the time domain, and the corresponding Laplace transfer function is expressed in terms of kp (the proportional gain), ki (the integral gain) and s (the Laplace operator).
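The standard PI law, u = kp·e + ki·∫e dt, with transfer function kp + ki/s, follows from the gains listed above. The sketch below shows a discrete-time implementation; the sampling period and the saturation limits are assumptions for illustration and not part of the paper.

```python
class PIController:
    """Discrete PI speed controller: u = kp*e + ki*integral(e) dt,
    i.e. kp + ki/s in continuous time."""

    def __init__(self, kp, ki, dt, u_min=None, u_max=None):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def step(self, ref, meas):
        e = ref - meas                       # speed error at this sample
        self.integral += e * self.dt         # discrete integral of the error
        u = self.kp * e + self.ki * self.integral
        # optional output saturation (assumed; limits depend on the drive)
        if self.u_max is not None:
            u = min(u, self.u_max)
        if self.u_min is not None:
            u = max(u, self.u_min)
        return u

# Example: PIController(kp=18.19, ki=2468, dt=1e-4) uses the FFA-tuned gains
# reported later in the text; dt is an assumed sampling period.
```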
Simulation Results and Discussion
To ensure efficient performance of the controlled system, the performance criteria defined by Equations (31)-(34) are used. The objective functions are chosen to minimize the time-response characteristics, given the dependency of the error on time [27]. The problem can be represented as: minimize J subject to the gain constraints, where ωref is the reference speed and ωm the actual one. A sketch of how these criteria can be computed is given after this paragraph. Figure 9 shows the PI controller block of the control scheme. In this problem, the values of the overshoot, rise time and settling time are controlled indirectly: these parameters are directly linked to the objective function, so they are optimized implicitly [27]. The model of the BLDC motor drive is simulated in MATLAB, with the motor parameters reported in Table 6. To control the BLDC motor, a conventional PI controller is used; however, it is not easy to adjust its parameters to obtain efficient control. Therefore, the FFA_PI controller is used and compared with other algorithms to evaluate its competitiveness. The simulation considers the well-known algorithms GA, PSO, ABC and the standard FA, and is run with 100 iterations and a population size of 10. Figure 10 shows the evolution of the different performance criteria with the different algorithms; the results of FFA are better than those of the other algorithms for all criteria. Figure 11 also presents the cost functions IAE, ISE, ITAE and ISTE obtained with the FFA algorithm. The values of the PI controller gains obtained in the different simulations with the five algorithms and the different criteria are shown in Table 7. With the chosen cost functions, the values of the overshoot, the rise time and the settling time are controlled indirectly: based on their optimization, the cost functions force the values of the other parameters towards their optimum [27].
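IAE, ISE and ITAE have standard integral definitions; since Equations (31)-(34) are not reproduced above, the ISTE expression used below (time-squared-weighted ISE) is an assumption. The sketch shows discrete approximations of these criteria from a simulated speed-error trace.

```python
import numpy as np

def control_criteria(t, error):
    """Discrete approximations of the error-integral criteria used to tune the
    PI gains. IAE, ISE and ITAE follow their standard definitions; the ISTE
    definition here is one common variant and is an assumption."""
    t = np.asarray(t, float)
    e = np.asarray(error, float)
    iae  = np.trapz(np.abs(e), t)            # integral of absolute error
    ise  = np.trapz(e ** 2, t)               # integral of squared error
    itae = np.trapz(t * np.abs(e), t)        # time-weighted absolute error
    iste = np.trapz((t ** 2) * e ** 2, t)    # assumed: time-squared-weighted ISE
    return {"IAE": iae, "ISE": ise, "ITAE": itae, "ISTE": iste}
```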
Table 8 shows the values for the different controllers used in this simulation: the rise time, settling time, peak time, peak and overshoot. The timing results are better for the FFA algorithm, while the peaks and overshoots alternate between the algorithms. Moreover, a comparison of the execution time of the different controllers is given in Table 9; the computation time using the FFA_PI is shorter than with the FA_PI, GA_PI, PSO_PI and ABC_PI when using 50 or 100 iterations. According to the criteria used, Figures 12-15 show the BLDC motor speeds obtained with the different optimized controllers; the figures are given for comparison and support the values in Table 8. From these numerical results and the step responses, it can be concluded that the FFA-optimized PI controller shows a better capacity to compete with its FA counterpart and with its rivals GA, PSO and ABC: it provides the fastest rise and response times in addition to the minimum peak time.
Figure 16 shows the simulation results for the various variables of the BLDC motor using the FFA_PI controller (ki = 2468, kp = 18.19). Figure 16a presents the speed of the BLDC motor, where the reference speed ωref is chosen as a ramp in order to dampen the current at start-up and to avoid peaks, likewise for the electromagnetic torque in Figure 16b. At 0.125 s, a load torque TL = 4 Nm is applied and a good rejection by the control is observed; the effect of the load is clearly visible in the speed, torque, voltage and current plots.
In each figure presented there are three phases, and the first phase is zoomed-in to visualize the behavior of the signals clearly. Figure 16c,d show the phase voltages and the phase-to-phase voltage simultaneously. The trapezoidal Back-EMF shape is well illustrated in Figure 16e. Finally, the shape of the currents of the three stator phases is given in Figure 16f. As can be seen, there is a distortion in the torque signal, which is due to the trapezoidal shape of the Back-EMF and to the harmonics contained in the currents. Finally, Figure 17 gives the evolution, until convergence, of the parameters of the FFA_PI and FA_PI in the control technique. Figure 17. Evolution of parameters of FFA_PI until convergence.
Conclusions
A fast FA algorithm, the so-called FFA, is presented and compared with the standard FA through the search for the global optimum of different standard benchmark functions in a first application. The simulation results were compared taking into consideration the precision and the speed of convergence of the two algorithms, and the results obtained with FFA prove to be better than those of FA. A second application, concerning the optimization of the gains of a PI controller for a BLDC motor, is carried out using the IAE, ISE, ITAE and ISTE performance criteria. The obtained results show the robustness of the two algorithms, with a clear advantage for FFA. The acceleration of the proposed algorithm is due to the reduction of the search effort achieved by randomly selecting a significantly smaller set of moving fireflies while the whole search space remains covered. It should be noted that the acceleration in the function optimization application is on average 12:1 with respect to FA. Additionally, for the more complex problem (BLDC motor control), the acceleration of the modified algorithm FFA over the FA, GA, PSO and ABC algorithms is clearly noticeable. Overall, the suggested FFA algorithm can be considered comparable to state-of-the-art metaheuristic algorithms such as FA, GA, PSO and ABC, and shows superior speed against all reported optimizers.
Furthermore, a modification of the α parameter is introduced, which guarantees robustness and precision through the enhancement of the search directions towards the global optimal solution. | 7,547.4 | 2021-08-01T00:00:00.000 | [
"Computer Science"
] |
Composite Hydrogels Based on Cross-Linked Chitosan and Low Molecular Weight Hyaluronic Acid for Tissue Engineering
The objectives of the study were as follows: (1) to develop two methods for the preparation of macroporous composite chitosan/hyaluronic acid (Ch/HA) hydrogels based on covalently cross-linked Ch and low molecular weight (Mw) HA (5 and 30 kDa); (2) to investigate some properties (swelling and in vitro degradation) and structures of the hydrogels; (3) to evaluate the hydrogels in vitro as potential biodegradable matrices for tissue engineering. Chitosan was cross-linked with either genipin (Gen) or glutaraldehyde (GA). Method 1 allowed the distribution of HA macromolecules within the hydrogel (bulk modification). In Method 2, hyaluronic acid formed a polyelectrolyte complex with Ch over the hydrogel surface (surface modification). By varying compositions of the Ch/HA hydrogels, highly porous interconnected structures (with mean pore sizes of 50–450 μm) were fabricated and studied using confocal laser scanning microscopy (CLSM). Mouse fibroblasts (L929) were cultured in the hydrogels for 7 days. Cell growth and proliferation within the hydrogel samples were studied via MTT-assay. The entrapment of low molecular weight HA was found to result in an enhancement of cell growth in the Ch/HA hydrogels compared to that in the Ch matrices. The Ch/HA hydrogels after bulk modification promoted better cell adhesion, growth and proliferation than the samples prepared by using Method 2 (surface modification).
Introduction
Polysaccharide hydrogels are of great interest for tissue engineering. Their resemblance to living tissues mimics the natural three-dimensional extracellular matrix (ECM) and promotes cell attachment, proliferation and stem cell differentiation [1]. Chitosan is a biocompatible and biodegradable polymer of natural origin with antimicrobial and biologically adhesive properties [2]. Chitosan is widely employed for tissue engineering, as evidenced by an ever-increasing number of publications [3]. For instance, it is applied in the fields of skin [4], bone [5] and cartilage [6] tissue engineering. However, the rather low mechanical strength of covalently non-cross-linked chitosan matrices could lead to
Preparation of the Macroporous Composite Chitosan/Hyaluronic Acid Hydrogels
In this study, two methods for HA entrapment into the Ch hydrogel were used, namely before (Method 1) and after (Method 2) cross-linking chitosan with genipin or glutaraldehyde. Method 1 was used to provide bulk modification of the Ch hydrogel, and Method 2 allowed us to get the surface modification of the hydrogel.
Surface Modification (Method 2)
A Gen (0.12% w/v, 0.9 mL) or GA solution (0.0525% w/v, 0.9 mL) was added dropwise to a Ch solution (2.5% w/v, 30 mL) and stirred (1000 rpm), and the obtained solution was incubated at room temperature for 2 h and then frozen and freeze-dried. The obtained macroporous cross-linked hydrogel samples were incubated in a 2% (w/v) HA solution for 2 h. Then, they were washed twice with PBS (pH 7.4) and lyophilized again.
Fourier Transform Infrared Spectroscopy
FTIR spectroscopy of the initial polysaccharides and the fabricated hydrogels was carried out using a Spectrum Two FT-IR Spectrometer (PerkinElmer, Waltham, MA, USA) as described previously [26]. All spectra were initially collected in attenuated total reflectance mode and converted into transmittance mode. The spectra were normalized using the intensity of the C-O stretching vibration band of the pyranose cycle (1081 cm⁻¹) as the internal standard.
Confocal Laser Scanning Microscopy
The structures of the swollen hydrogel samples were analyzed via confocal laser scanning microscopy using a Nikon TE-2000 inverted microscope equipped with an EZ-C1 confocal laser (Nikon, Tokyo, Japan). The hydrogel samples were stained with Fluorescamine (0.3 µg/mL in acetone) to provide amino-specific staining. The excitation wavelength was 408 nm, and fluorescence signals were collected at 515 ± 30 nm. Image analysis software (ImageJ, National Institutes of Health, Bethesda, Maryland, USA) was used for 3D reconstruction of the hydrogel structure. To study the morphology of the obtained macroporous hydrogels, a quantitative evaluation of micrographs was carried out by calculating an effective pore diameter (d) using Equation (1), where L is the pore long-axis length and S is the pore short-axis length. The mean pore size was determined by randomly measuring at least 100 pores for each hydrogel sample.
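Equation (1) itself is not reproduced in this excerpt; a minimal sketch of the pore-size statistics, assuming the effective diameter is taken as the mean of the long (L) and short (S) pore axes, could look as follows (the measurement values are placeholders):

```python
import numpy as np

def effective_pore_diameter(long_axis_um, short_axis_um):
    # Assumed form of Equation (1): effective diameter taken as the mean
    # of the measured long (L) and short (S) pore axes.
    return (long_axis_um + short_axis_um) / 2.0

# Mean pore size from >= 100 randomly measured pores per sample.
L = np.random.uniform(40, 120, size=120)   # placeholder measurements, um
S = np.random.uniform(30, 90, size=120)
d = effective_pore_diameter(L, S)
print(f"mean pore size: {d.mean():.0f} +/- {d.std(ddof=1):.0f} um")
```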
Hydrogel Equilibrium Swelling Degree Measurements
The swelling degree of the obtained hydrogels was studied using a gravimetric method. For this purpose, the samples (5 × 5 × 2 mm) were incubated in DMEM at 37 °C for 24 h. The weight of the swollen hydrogel was determined as the difference between the hydrogel weight and the liquid weight on the balance plate after hydrogel removal. The swelling ratio (Sw) of the hydrogels was calculated using Equation (2), where ρ is the density of the solution, Mw is the weight of the sample after immersion in the medium and Md is the weight of the dried sample.
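Equation (2) is likewise not reproduced in this excerpt; a sketch assuming the common volumetric form Sw = (Mw − Md)/(ρ·Md), which is consistent with the mL/g values reported later, is:

```python
def swelling_ratio_ml_per_g(m_wet_g, m_dry_g, density_g_per_ml=1.0):
    # Assumed form of Equation (2): volume of medium taken up per gram of
    # dry hydrogel, Sw = (Mw - Md) / (rho * Md), matching the mL/g units
    # reported for the swelling degrees.
    return (m_wet_g - m_dry_g) / (density_g_per_ml * m_dry_g)

print(swelling_ratio_ml_per_g(m_wet_g=0.55, m_dry_g=0.025))  # placeholder weights
```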
Study of Enzymatic Degradation of the Hydrogels In Vitro
The degradation of the hydrogel samples was carried out in PBS (pH 7.4) containing 2 mg/mL lysozyme at 37 °C for 7, 14 and 21 days. The samples in PBS (pH 7.4) without lysozyme were used as controls. After 7, 14 and 21 days, the hydrogels were removed from the solution, washed with Milli-Q water, dried at 50 °C to constant weight and weighed. The weight loss (Wl) was calculated using Equation (3),
where Mi is the initial weight of the hydrogel sample and Mt is the weight of the dried hydrogel sample.
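Equation (3) is not shown in this excerpt; assuming the usual percentage weight loss relative to the initial dry weight, a sketch is:

```python
def weight_loss_percent(m_initial_g, m_final_dry_g):
    # Assumed form of Equation (3): relative weight loss of the dried sample,
    # expressed as a percentage of the initial dry weight.
    return 100.0 * (m_initial_g - m_final_dry_g) / m_initial_g

print(weight_loss_percent(m_initial_g=0.100, m_final_dry_g=0.071))  # ~29 % loss
```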
Cell Cultivation in the Hydrogels
In the current study, mouse fibroblasts (L929) from the Collection of Vertebrate Cell Cultures (Institute of Cytology, Russian Academy of Sciences) were used. The L929 cells were cultured in DMEM supplemented with 10% FBS and containing 2 mM L-glutamine, 1 mM sodium pyruvate, 50 µM 2-mercaptoethanol, 100 µg/mL streptomycin and 100 U/mL penicillin. The cells were cultured in a 5% CO2 humidified atmosphere at 37 °C (CO2 incubator Heraeus B5060 EK/CO2, Hanau, Germany).
Hydrogel Sterilization
The hydrogel samples were sterilized via incubation with 70% ethanol for 1 h. After sterilization, the samples were washed 3 times with PBS (pH 7.4).
In Vitro Cytotoxicity Study
The cytotoxicity of the hydrogel samples was studied via an extract test using L929 fibroblasts as model cells. For this purpose, the previously sterilized hydrogel samples were incubated with the culture medium (25 mg per 1 mL of medium) at 37 °C, and supernatants (extracts) were collected after 24 h. Then, the cells were added to a 96-well plate (10⁴ cells per well) and incubated in a CO2 incubator (37 °C, 5% CO2). The medium in each well was replaced with 100 µL of the extracts after 24 h of incubation. The cells cultivated in the medium without the extracts were used as a control. Cell viability was determined via MTT assay. For this purpose, the extracts were replaced with 100 µL of a MTT solution (0.5 mg/mL in DMEM) and then incubated at 37 °C for 1 h. Formazan crystals formed in the living cells were dissolved after adding DMSO (100 µL per well), and optical density was measured at 540/690 nm using a Titertek Multiskan MCC/340 plate reader (Flow Laboratories, McLean, VA, USA). Relative cell viability (V) was calculated according to Equation (4), where ODt is the optical density in the test wells and ODc is the optical density in the control wells. Results are expressed as the mean ± standard deviation for three replicates.
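Equation (4) is not reproduced either; assuming the standard ratio of test to control optical density expressed as a percentage, the calculation is simply:

```python
def relative_viability_percent(od_test, od_control):
    # Assumed form of Equation (4): viability of treated cells relative to
    # the untreated control, V = ODt / ODc * 100 %.
    return 100.0 * od_test / od_control

print(relative_viability_percent(od_test=0.42, od_control=0.47))
```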
Study of Cell Proliferation in the Hydrogels
Before cell seeding, the sterile hydrogel samples were previously incubated in the culture medium at 37 °C for 24 h. Then, cells were seeded by dropping cell suspension directly onto the hydrogel samples (2 × 10⁴ cells/sample). Cell viability was evaluated with an MTT assay after 7 days. For this purpose, the hydrogel samples with the cells were transferred to a fresh 96-well plate, 100 µL of the MTT solution in DMEM (0.5 mg/mL) was added to each well, and the plate was then incubated at 37 °C for 2 h. Then, formazan crystals were dissolved after adding DMSO (200 µL per well) to each well, and 100 µL aliquots were taken to measure optical density at 540/690 nm. In this study, the chitosan hydrogel samples cross-linked either with Gen or GA were used as negative controls, whereas the cell monolayer culture was taken as a positive control.
In order to take into consideration the impact of each hydrogel sample on the results of the MTT assay, an additional experiment was carried out. For this purpose, culture medium with FBS was added to the previously sterilized blank hydrogel samples (without cells), and the samples were placed in a CO2 incubator for 7 days. Then, cell suspensions were added into the 96-well plate (cell numbers ranging from 5 to 20 × 10⁴ cells/well), and the plate was transferred to the CO2 incubator for 3 h. Finally, the pre-incubated hydrogel samples were added to the previously attached cells, and the MTT assay was carried out for both the cells cultivated in the presence of the hydrogel samples and the cells without them. For each sample, a calibration curve was plotted that shows the optical density for the cells cultivated in the presence of each hydrogel sample (abscissa X) versus the optical density for the cells without the hydrogel sample (ordinate Y). Based on the obtained curve, the optical densities were determined for all hydrogel samples.
The relative cell viability (V) for each sample was calculated according to Equation (4). Results are expressed as the mean ± standard deviation for three replicates.
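The text does not spell out how the calibration curve is applied; one plausible reading, used here purely as an illustration with hypothetical numbers, is a linear fit mapping the OD measured in the presence of a blank hydrogel onto the hydrogel-free OD scale before Equation (4) is evaluated:

```python
import numpy as np

# Hypothetical calibration for one hydrogel type: OD measured with the blank
# hydrogel present (x) versus OD of the same cell numbers without it (y).
od_with_gel = np.array([0.18, 0.29, 0.41, 0.55])
od_without_gel = np.array([0.15, 0.27, 0.40, 0.54])

slope, intercept = np.polyfit(od_with_gel, od_without_gel, deg=1)

def corrected_od(od_measured_on_seeded_gel):
    # Map an OD read on a cell-seeded hydrogel onto the hydrogel-free scale
    # before computing relative viability.
    return slope * od_measured_on_seeded_gel + intercept

print(corrected_od(0.36))
```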
Study of Cell Morphology
The hydrogel samples for confocal microscopy were prepared as described previously (see Section 2.4.3). After 7 days of cell cultivation, the cells were stained with Calcein AM vital dye and the DNA fluorescent dye DAPI. For this purpose, a mixture of Calcein AM (5 µg/mL) and DAPI (10 µg/mL) in DMEM was added to the hydrogel samples, and the samples were incubated at 37 °C for 30 min. Then, the supernatants were replaced with the fresh culture medium, and the samples were observed using a confocal laser microscope (Nikon TE-2000, Tokyo, Japan). The excitation wavelengths were 360 nm for DAPI and 488 nm for Calcein AM, and fluorescence signals were collected in the range of 380-460 nm for DAPI and 500-530 nm for Calcein AM.
Statistics
The data were analyzed using GraphPad Prism 5.0 software (Graph-Pad Software, San Diego, CA, USA). All values are expressed as mean ± standard error of at least three parallel replicates, and they were compared using one-way analysis of variance (ANOVA) with Dunnett's Multiple Comparison Test as a post hoc test. Values of p < 0.05 are considered significant.
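As an illustration of the reported analysis (one-way ANOVA with Dunnett's post hoc comparison against a control), a sketch using SciPy is given below; it assumes SciPy ≥ 1.11 for scipy.stats.dunnett, and the viability values are placeholders rather than the study's data:

```python
import numpy as np
from scipy import stats

control = np.array([100.0, 97.5, 102.1])            # e.g. monolayer culture, %
samples = {
    "Ch":        np.array([48.0, 52.3, 50.1]),       # placeholder viability data
    "Ch/HA-30v": np.array([78.2, 80.5, 75.9]),
}

groups = list(samples.values())
f_stat, p_anova = stats.f_oneway(control, *groups)
dunnett = stats.dunnett(*groups, control=control)   # compares each group vs control
for name, p in zip(samples, dunnett.pvalue):
    print(f"{name}: p = {p:.3f} vs control (ANOVA p = {p_anova:.3f})")
```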
Results and Discussion
In the current study, the hydrogels based on Ch cross-linked with Gen or GA and modified with low molecular weight HA (MW 30 kDa) or oligo-HA (MW 5 kDa) were obtained and characterized in terms of their structures, biocompatibility and their ability to support cell growth and proliferation.
Preparation of the Macroporous Ch/HA Hydrogels
The macroporous matrices were prepared via the lyophilization of hydrogels from Ch, which was cross-linked with Gen or GA. As is well-known, this approach allows one to obtain non-soluble Ch hydrogels, which demonstrate rather high swelling behavior. The conditions for preparation of the Ch/HA samples were chosen based on polyelectrolyte complex formation mechanisms described earlier [27], whereas chitosan gelation from Ch solution using Gen or GA was also reported by us previously [28]. The conditions for crosslinking Ch hydrogels, namely pH and Gen/NH 2 ratio, were selected based on our previous results, in particular the dependence curves of gelation time on Gen concentration [29]. The results of the change in the elasticity modulus of the chitosan hydrogels are shown in the Supplementary Materials ( Figure S1). As a result of chitosan cross-linking with GA, more rigid hydrogels were formed than those in the case of Gen. Thus, equilibrium values of the modulus of elasticity measured with single-wall compression were found to be twice as high for the chitosan hydrogels cross-linked with GA than for Gen cross-linked hydrogels, even with lower cross-linker content in the case of GA. We also took into account that the polymer system should be liquid for at least 1.5 h, which is needed for the degassing and casting of the polymer solution in special forms for freezing.
In the current study, two approaches to the preparation of the macroporous Ch/HA hydrogels with cross-linked Ch were developed ( Figure 1). These approaches differ by way of the HA entrapment and its distribution in/on the Ch hydrogel.
In Method 1, the cross-linker was added to the mixtures of the Ch and HA solutions. As a result, one could suggest that HA molecules were distributed more or less evenly within the Ch hydrogel and formed polyelectrolyte complexes with Ch macromolecules (bulk modification). In Method 2, the chitosan macromolecules were first cross-linked with Gen or GA to get Ch cross-linked hydrogels, and after that, the HA solution was added, providing polyelectrolyte Ch/HA complex formation mostly on the surface of the Ch hydrogel. Thus, we obtained composite hydrogels, which differed in their structure due to HA macromolecule being distributed either mostly within the hydrogel volume (see Method 1) or over the hydrogel surface (see Method 2).
The hydrogel samples modified with HA over the surface (surface modification), hereafter referred to as Ch/HA-5s and Ch/HA-30s, differed only by HA molecular weight (5 and 30 kDa, respectively). These samples were additionally washed after the modification step and then freeze-dried. It should be noted that when using the freeze-drying technique, the number of freeze-drying cycles could affect the hydrogel structure. In order to take this effect into account, an additional set of the bulk-modified hydrogels was prepared and evaluated in the current study. For this purpose, the bulk-modified Ch/HA-5v and Ch/HA-30v hydrogels as well as the Ch hydrogels without HA (a control) were also washed with PBS (pH 7.4) and then lyophilized.
Thus, as seen in Table 1, two sets of the samples were prepared: (1) Initial samples. These samples were obtained using Method 1 (bulk modification) for Ch/HA-5v, Ch/HA-30v and the non-modified Ch hydrogels in which Ch was crosslinked with Gen or GA.
(2) Washed samples. This set of samples can be divided into two parts, the first part being a subset of samples from the initial samples (1) but additionally washed with PBS (pH 7.4) after preparation and lyophilized (see Chw, Ch/HA-5w and Ch/HA-30w). The second subset of samples was prepared using Method 2 (surface modification) (see Ch/HA-5s and Ch/HA-30s), in which Ch was first cross-linked with Gen or GA, and then the hydrogels were incubated in the HA solution and finally washed with PBS (pH 7.4).
Characterization of the Hydrogels
The FTIR spectra of the initial polysaccharides and the fabricated hydrogels are presented in Figure 2. The spectra of chitosan and hyaluronan show all well-resolved characteristic bands. The intense group of bands that extends from 1500 to 1700 cm⁻¹ appears for all hydrogel samples. This group is the superposition of amide I and II bands and C=O and COO⁻ bands. The main changes, which could be expected due to chitosan cross-linking and Ch/HA polyelectrolyte complex formation, overlap with the carboxylate ion stretching vibrations (about 1580 cm⁻¹).
Figure 2. FTIR spectra of the initial polysaccharides and macroporous chitosan/hyaluronic acid hydrogels with cross-linked chitosan.
Study of the Hydrogel Structures
To provide rather large specific surfaces for cell attachment and growth, hydrogels should have macroporous structures with open, interconnected geometry [30]. An interconnected, porous structure with pores of optimal size is known to stimulate cell growth, provide uniform cell distribution and spreading and promote neovascularization. In addition, these parameters are crucial in terms of effective mass and gas exchanges, which allows cells to be supplied with nutrients and oxygen [31].
In this study, the macroporous structures of the hydrogels were obtained via freezedrying. As is well-known, the properties of the system to be frozen have a great influence on the formation of the macroporous structures. Varying the composition of a polymer system for hydrogel preparation allows for the formation of matrices that can differ in structure (morphology, average pore size, pore size distribution, etc.). As a result, the obtained structures can affect cell localization and distribution within the hydrogels.
The structures of the swollen hydrogels were studied using CLSM. 3D reconstructions of these hydrogel samples are shown in Figures 3 and 4. Some differences in the swollen hydrogels' structures as a function of their composition, type of cross-linking agent and preparation method were observed.
The mean pore sizes for all hydrogel samples are shown in Figure 5. The pore sizes of the GA cross-linked hydrogels were smaller than those of the samples cross-linked with Gen, as GA is a more reactive cross-linking agent [32]. Therefore, in the case of GA, a formation of smaller ice crystals at freezing occurred, and as a result, an arrangement of denser structures was observed. For the most compact Ch hydrogel cross-linked with GA, the average pore diameter was 50 ± 9 µm. After an additional freezing cycle and washing of the sample, the pore size increased up to 375 ± 48 µm. The pore size increased because of the repeated swelling and subsequent freezing, and novel pores formed due to a partial destruction of the hydrogel structure as a result of growing ice crystals.
An entrapment of HA macromolecules into the composite hydrogel in which Ch was cross-linked with GA led to an increase in average pore size. A formation of rather big pores simultaneously with small pores was observed in the case of the initial Ch/HA hydrogels prepared via bulk modification (Method 1). Thus, the mean pore size of 50 µm for the Ch hydrogel increased up to 94 ± 6 µm for the Ch/HA-5v sample. Moreover, an additional freezing cycle of these hydrogels resulted in an enhancement of the pore size up to 256 ± 31 µm for the Ch/HA-5w sample. The pore sizes of the hydrogels prepared via surface modification (Method 2) were 340 ± 33 µm and 311 ± 29 µm for the Ch/HA-5s and Ch/HA-30s samples, respectively, in which Ch was cross-linked with GA, whereas in the case of the Ch/HA samples based on Ch cross-linked with Gen, the average sizes were 298 ± 15 µm (for Ch/HA-5s) and 319 ± 22 µm (for Ch/HA-30s). This could be explained by the washing of the samples after surface modification.
In the case of cross-linking with Gen, both Ch and Ch/HA hydrogels prepared via bulk modification were found to have higher mean pore sizes (within a range of 230-320 µm) than those in the hydrogels in which Ch was cross-linked with GA (94 ± 6 and 89 ± 6 µm for Ch/HA-5v and Ch/HA-30v samples, respectively). One can also see that additional washing resulted in the increase of the mean pore sizes of the non-modified and bulk-modified hydrogels cross-linked with Gen up to 387 ± 14 µm (for the Ch/HA-30w sample) and 400 ± 43 µm (for the Ch/HA-5w sample). The biggest pores (452 ± 27 µm) were obtained for the Chw hydrogel in which Ch was cross-linked with Gen after additional hydrogel washing followed by the freeze-drying step.
Pore size is one of the key parameters for cell cultivation within matrices. On the one hand, this is a parameter that depends upon the composition of the hydrogel used. On the other hand, different kinds of cells could prefer matrices that differ in mean pore sizes. To provide diffusion of nutrients and metabolites at cell cultivation, matrices with an average pore size of >50 µm are desirable [33]. Thus, we could suggest that our hydrogels with the mean pore sizes mentioned previously were suitable to support cell growth and proliferation. However, it should also be mentioned that vascularization within the hydrogel is also dependent upon its pore size. For instance, the depth and rate of vessel formation were higher for matrices with mean pore sizes of 50-150 µm than those for hydrogels with smaller pores within a range of 25-70 µm [34,35]. As for the matrices with pores >200 µm, the development of bigger vessels was revealed, which was not the case for the matrices with smaller pores [36]. Cell proliferation and/or differentiation are known to depend upon the porosity of the hydrogel, in particular mean pore sizes. Thus, matrices with pores within a range of 70-120 µm, in contrast to those with pores ranging from 10 to 70 µm, were shown to better support chondrocyte proliferation as well as accumulation of type II collagen and glycosaminoglycans [35]. As for the differentiation of mesenchymal stromal cells into chondrocytes and the repair of cartilage defects, they were more effective in poly(ε-caprolactone) matrices with pores of 400 µm than in those with pores of 100-200 µm [37]. It has also been reported that matrices with a pore size within a range of 380-405 µm demonstrated chondrocyte growth, whereas matrices with pore sizes from 186 to 200 µm promoted fibroblast proliferation [38].
Thus, in our study, hydrogels with pore sizes in the range of 50-450 µm were prepared. Therefore, we expected that the matrices with these pore sizes were suitable for the cultivation of cells.
Study of the Hydrogel Swelling
As is well-known, the swelling behavior of the hydrogels is of great importance, as it allows one to estimate cells' ability to survive within the matrix. In addition, swelling properties could affect degradation rate. Earlier, the degradation rate was shown to increase along with swelling degree enhancement [39,40]. Moreover, the mechanical properties of wet hydrogels were found to be significantly reduced [9,22], which could negatively affect cell adhesion, morphology and proliferation. Regulation of the hydrophilic-hydrophobic balance of the hydrogels is of great importance in order to promote cell adhesion [41]. Because hyaluronic acid is hydrophilic, its entrapment could result in changing the swelling properties of the Ch hydrogels after modification with HA. Moreover, it was shown that introduction of high molecular weight HA into Ch hydrogels resulted in an enhancement of pore size, swelling ratio and degradation rate [22].
The total swelling of the hydrogel samples is considered to be a sum of two parameters, namely polymer swelling, which is related to the swelling capacity of the hydrogel walls, and structural swelling, which characterizes the amount of water retained in the pores. As for our study, the hydrogels' swelling capacity measurements are shown in Figure 6. It can be seen that the modification of the chitosan hydrogels with HA as well as additional washing and freeze-drying markedly affected the equilibrium swelling degree of the samples (Figure 6a,b). Minimal swelling degrees of 21.6 ± 1.4 and 17.5 ± 1.8 mL/g were found for the initial non-modified Ch hydrogels cross-linked with Gen and GA, respectively. The bulk modification with HA increased these values (27.4 ± 2.1 and 24 ± 2.3 mL/g for the Ch/HA-30v samples in which Ch was cross-linked with Gen or GA, respectively). For the hydrogels modified with oligo-HA (Mw 5 kDa), the swelling degree values either did not change or increased slightly (see Figure 6a,b). However, after the additional washing cycle, the swelling degree values of both the non-modified Ch hydrogels and the Ch/HA samples after bulk modification increased. Thus, for the washed Ch hydrogels, the enhancement was up to 29.9 ± 2.2 and 27.6 ± 5.5 mL/g for Ch hydrogels cross-linked with Gen or GA, respectively.
In the case of surface modification with HA (Method 2), swelling increased compared to the swelling values obtained for both the initial samples and the washed hydrogels after bulk modification (Method 1). Moreover, modification with HA (Mw 30 kDa) led to a significant increase in total swelling. The maximum equilibrium swelling degree values were 34.3 ± 3.1 mL/g and 33.3 ± 3.0 mL/g for the two Ch/HA-30s samples in which Ch was cross-linked with Gen or GA, respectively. This increase could be explained by the impact of pore walls swelling on polymer swelling (see Figure 6c,d). This can be attributed to partial damage to Ch/HA polyelectrolyte complexes as a result of interaction with various ions in the cultivation medium (DMEM). The entrapment of oligo-HA (Mw 5 kDa) into the hydrogel composition led to a less pronounced increase in the swelling degree of the Ch/HA samples.
It should be noted that additional washing and freeze-drying did not affect the polymer swelling behavior of both non-modified Ch and Ch/HA samples after bulk modification (see Figure 6c,d). An increased equilibrium swelling of these hydrogels after washing could be explained by changes in the hydrogel structures and a retention of water within the hydrogel pores due to enhanced porosity as a result of re-freezing (see Figures 4 and 5). Thus, an increase of mean pore sizes and porosity in the samples in which Ch was cross-linked with Gen or GA because of repeated swelling was observed.
These results are consistent with those reported earlier [22]. Correia et al. showed that HA entrapment leads to an increase in the swelling degree of the Ch/HA hydrogels compared to the swelling degree of the Ch hydrogel.
Study of Enzymatic Degradation of the Hydrogels
The study of matrix degradation behavior is of great importance, as it allows estimation of the time needed for growing cells to fill the pores (cavities) of the hydrogel and in parallel to synthesize an extracellular matrix, which should replace our polymer matrix. In addition, the degradation rate of the polymer matrix should be well-correlated with the rate of novel tissue formation. In order to provide an optimal tissue regeneration rate, the polymer matrix should decompose no faster than ECM is deposited. As is well-known, there are different mechanisms of hydrogel destruction, for instance, resorption and degradation under water and CO2 action, or degradation as a result of enzyme hydrolysis. Here, we studied biodegradation of Ch and Ch/HA hydrogel samples using a solution of lysozyme in PBS (pH 7.4) (Figure 7). Lysozyme is known to cleave chitosan macromolecules. Hydrogel degradation in PBS (pH 7.4) without lysozyme was used as a control. As seen in Figure 7, the degradation of all hydrogel samples in the lysozyme solution (in PBS) was faster than that in PBS (pH 7.4), whereas trends in the behavior of all the samples were preserved.
The composition of the hydrogels was found to influence hydrogel degradation behavior. The Ch/HA-5s and Ch/HA-30s samples were the weakest, whereas the non-modified Ch hydrogels cross-linked either with Gen or GA were the most stable. The Ch hydrogel samples cross-linked with GA were slightly more stable than those cross-linked with Gen. It can be assumed that covalent cross-linking hampered the cleavage of Ch macromolecules by lysozyme due to steric hindrance. Moreover, hydrogel structure could also affect this process. For example, the Ch hydrogel cross-linked with GA had the densest structure, which could limit diffusion. Therefore, biodegradation proceeded more slowly, and weight loss was less than 2% after incubation for 21 days. Additional washing and lyophilization of the Ch samples led to a slight enhancement of the weight loss rate. The most pronounced effect was found for the GA-cross-linked Ch hydrogel, as the most drastic change in the hydrogel structure was observed for this sample (see Figures 4 and 5).
As for the Ch/HA hydrogels, the entrapment of HA led to an increase in the weight loss of the samples. For hydrogels after bulk modification, HA molecular weight as well as washing did not markedly influence hydrogel degradation. Thus, the weight losses of these hydrogels were more or less similar; in particular, they were 15-19% after 21 days of incubation in the lysozyme solution.
As for the Ch/HA samples after surface modification, we observed faster weight losses than those for the Ch hydrogels. Moreover, the samples with oligo-HA (Mw 5 kDa) degraded faster than those with Mw of 30 kDa. Thus, the most pronounced effect (41%) was revealed for the Ch/HA-5s hydrogel sample in which Ch was cross-linked with Gen. The composition of the hydrogels was found to influence hydrogel degradation behavior. The Ch/HA-5s and Ch/HA-30s samples were the weakest, whereas the non-modified Ch hydrogels cross-linked either with Gen or GA were the most stable. The Ch hydrogel samples cross-linked with GA were a bit more stable than those cross-linked with Gen. It can be assumed that that covalent cross-linking hampered the cleavage of Ch macromolecules via lysozymes due to steric hindrance. Moreover, hydrogel structure could also affect this process. For example, the Ch hydrogel cross-linked with GA had the densest structure, which could limit diffusion. Therefore, biodegradation proceeded more slowly, and weight loss was less than 2% after incubation for 21 days. Additional washing and lyophilization of Ch samples led to a slight enhancement of the weight loss rate. The most pronounced effect was found for GA-cross-linked Ch hydrogel, as the most drastic change in the hydrogel structure was observed for this sample (see Figures 4 and 5).
As for the Ch/HA hydrogels, the entrapment of HA led to an increase in the weight loss of the samples. For hydrogels after bulk modification, HA molecular weight as well as washing did not markedly influence hydrogel degradation. Thus, the weight losses of these hydrogels were more or less similar; in particular, they were 15-19% after 21 days of incubation in the lysozyme solution.
As for the Ch/HA samples after surface modification, we observed faster weight losses than those for the Ch hydrogels. Moreover, the samples with oligo-HA (Mw 5 kDa) degraded faster than those with Mw of 30 kDa. Thus, the most pronounced effect (41%) was revealed for the Ch/HA-5s hydrogel sample in which Ch was cross-linked with Gen. As for the Ch/HA hydrogels in which Ch was cross-linked with GA, they degraded faster than the Ch samples as well, but they degraded more slowly compared to the Ch/HA hydrogels in which Ch was cross-linked with Gen. For instance, the weight loss of the Ch/HA-5s hydrogel with GA was 29% after 21 days of incubation in the lysozyme solution.
Cytotoxicity Study of the Hydrogels
Because GA is rather toxic [42], there is increasing interest in using genipin as a cross-linker, which would impart stability and rigidity to biocompatible hydrogels. Genipin is 5-10 thousandfold less cytotoxic than glutaraldehyde [43]. The limiting factor for genipin's widespread use is its rather high cost. Recently, a new method for genipin preparation from geniposide using Fusarium solani was reported [10]. In this context, Gen is a promising alternative to GA to improve the mechanical properties of Ch-based matrices [44]. In this study, we used Gen along with GA to prepare cross-linked hydrogels. Therefore, it was of great importance to evaluate the possible cytotoxic effects of both of these compounds.
The cytotoxicity of the hydrogels was studied using an extract test (Figure 8). This technique allows estimation of the cytotoxic effects of the compounds released from the matrix after incubating the hydrogel samples in DMEM (10% FBS) for 24 h. Cell viability was measured via MTT assay after cell cultivation in these extracts for 24 h. As seen in Figure 8, there was a 90% decrease of cell viability for the extracts of the Ch/HA hydrogels prepared via surface modification. This could be attributed to an acidic environment as a result of the partial destruction of a Ch/HA polyelectrolyte complex. For other hydrogels, we did not observe any decrease in viable cell numbers after cell cultivation in these extracts for 24 h compared to the control (monolayer cell culture in DMEM + 10% FBS).
Figure 8. Viability of L929 mouse fibroblasts after 24 h incubation with the extracts of the hydrogels in which Ch was cross-linked with genipin (a) and glutaraldehyde (b). Results of MTT assay. Monolayer cell culture was taken as a control (100%). Data are expressed as the mean ± SD. Asterisk indicates significant difference versus control (* p < 0.05). Three parallel replicates were carried out for each sample.
Growth of Cells in the Hydrogel Samples In Vitro
Hyaluronic acid as one of the key components of the ECM provides many specific interactions with growth factors, adhesive proteins and receptors. Therefore, HA entrapment into the chitosan hydrogels could alter the bioactivity of these matrices. To estimate the effects of the hydrogel properties on cell behavior, particularly cell adhesion, spreading and proliferation, mouse fibroblasts L929 were cultivated in the hydrogels for 7 days. Cell morphology was observed via CLSM, and cell proliferation was evaluated by using the MTT assay.
Morphology of Cells in the Hydrogels
As seen in Figure 9, the L929 cells were distributed evenly over the matrix surface in both cases of initial (non-modified) Ch hydrogels and Ch/HA samples after bulk modification. After 7 days of cultivation, the cells in these matrices were found to attach, spread well and form monolayers on the hydrogels' surfaces. In contrast, in the Ch/HA hydrogels after surface modification, the cells were distributed less evenly, were not well-spread and formed multicellular aggregates ( Figure 10). This could be explained by a negatively charged HA surface, which causes an electrostatic repulsion of negatively charged cell membranes [45]. As a result, the cells were spherical in shape and did not spread. As for the Ch hydrogels and the Ch/HA hydrogels (bulk modification) after washing, the cells in these samples were distributed evenly over the surface. However, the cellular aggregates in these hydrogels were also revealed (see Figure 9H,J).
Thus, it can be concluded that the surfaces of the Ch/HA hydrogels after bulk modification were better for L929 fibroblast adhesion, spreading and growth than those after surface modification.
Cell Proliferation in the Hydrogels
The growth and proliferation of cells within the hydrogel samples were studied via MTT assay (Figure 11). The number of viable L929 fibroblasts in the hydrogels was found to depend upon the hydrogel type. As can be seen in Figure 11, cell numbers for all hydrogels in which Ch was cross-linked with Gen were higher than those for all hydrogels in which Ch was cross-linked with GA. This fact can be explained by the smaller average pore sizes of the hydrogels from Ch cross-linked with GA (50-100 µm) compared to those of the samples in which Ch was cross-linked with Gen (>250 µm). The structures with smaller pores could have led to limited cell migration and reduced cell proliferation. It is worth noting that in the case of the washed samples, increased cell growth was revealed. After additional hydrogel washing and lyophilization, the numbers of viable cells were higher for the Ch hydrogels, and especially for those that were cross-linked with GA, than the same values for the initial hydrogels. This could also be attributed to changes in the samples' structure, as for the GA-cross-linked hydrogels, these changes were more pronounced. These results could also be explained by an enhancement of the specific surface of the samples due to their increased porosity. As a result, these changed hydrogel structures could contribute to the observed improved cell adhesion and growth. Thus, additional washing and lyophilization of the matrices affected the hydrogel structures by increasing their pore sizes and porosity, which in turn led to enhanced cell migration within the matrices. Therefore, the structures of the washed hydrogels were more favorable for cell growth and proliferation.
Modification of the Ch hydrogels by HA entrapment in both methods led to cell growth enhancement in the Ch/HA matrices. The bulk HA modification of the hydrogels led to increased numbers of viable cells. Moreover, between the initial bulk-modified samples and samples modified on the surface, their relative cell viability values were comparable. For instance, in the case of the hydrogels in which Ch was cross-linked with Gen, cell viability rates in the Ch/HA-5v, Ch/HA-30v, Ch/HA-5s and Ch/HA-30s hydrogels were 77 ± 8%, 78 ± 10%, 80 ± 15% and 76 ± 12%, respectively. In the case of the hydrogels with GA, cell viability rates in Ch/HA-5v, Ch/HA-30v, Ch/HA-5s and Ch/HA-30s were 63 ± 12%, 55 ± 6%, 55 ± 6% and 62 ± 10%, respectively. Maximum cell viability values were found for the washed samples. Thus, as seen in Figure 11, the maximum number of living cells (104 ± 13%) was revealed for the washed Ch/HA-30w sample in which Ch was cross-linked with Gen. It should also be noted that we did not find any significant differences in cell viability for the Ch/HA samples that differed in the molecular weight of hyaluronic acid used.
Thus, both methods for the modification of cross-linked Ch hydrogels via entrapment of hyaluronic acid allowed us to enhance cell growth and proliferation.
Figure 11. Viability of L929 mouse fibroblasts cultivated in the hydrogels in which Ch was cross-linked with genipin (a) and glutaraldehyde (b) for 7 days. Results of MTT assay. The monolayer cell culture (without the hydrogel sample) was taken as a control (100%). Data are expressed as the mean ± SD. Asterisk indicates significant difference versus control (initial Ch hydrogels) (*** p < 0.001; * p < 0.05). Three parallel replicates were carried out for each sample.
Conclusions
In this study, two different methods are proposed for the fabrication of cross-linked chitosan hydrogels modified via the entrapment of hyaluronic acid (Mw 5 kDa or 30 kDa) as a bioactive compound. In order to prepare the macroporous composite Ch/HA hydrogels based on polyelectrolyte complexes, hyaluronic acid was entrapped in the Ch hydrogels either via bulk modification (Method 1) or surface modification (Method 2). The chitosan macromolecules were cross-linked with GA or Gen.
All hydrogels were characterized in terms of their FTIR spectra, swelling behavior, structure, in vitro enzymatic degradation and their ability to support cell adhesion and growth. The effects of HA on the Ch/HA hydrogel properties mentioned previously were evaluated as a function of the method for HA entrapment, the molecular weight of the HA and the cross-linker (Gen or GA) used for Ch cross-linking. The swelling degree and degradation were found to depend on the method used and the composition of the Ch/HA hydrogel samples. Thus, HA entrapment into the Ch hydrogels led to an increase in the swelling degree as well as an enhancement of the degradation of the Ch/HA samples. Moreover, HA entrapment via surface modification (Method 2) resulted in bigger changes in these parameters than in the samples prepared using Method 1. None of the hydrogels were toxic, which was confirmed in the extract test using the L929 mouse fibroblasts. The 3D cell growth and proliferation in the hydrogels were studied. Cell morphology and viability in the hydrogels were shown to depend on hydrogel composition and the preparation method used. The Ch/HA hydrogels after bulk modification promoted better cell adhesion and spreading as well as cell growth and proliferation compared to the samples prepared using Method 2 (surface modification). Moreover, additional washing and freeze-drying provided better cell adhesion and proliferation, whereas HA introduction into the hydrogels resulted in enhanced cell growth compared to the Ch samples.
Thus, by varying the Ch-based hydrogel composition and fabrication technique, macroporous composite Ch/HA hydrogels with highly porous interconnected structures were developed. A chitosan component of these hydrogels provided rather good cell adhesion, whereas a combination of Ch with HA enhanced cell growth and proliferation. The cross-linked chitosan hydrogels modified with hyaluronic acid could be promising for tissue engineering.
"Biology",
"Materials Science",
"Engineering"
] |
Making Holes in the Second Symmetric Product of a Cyclicly Connected Graph
A continuum is a connected compact metric space. The second symmetric product of a continuum X, F2(X), is the hyperspace of all nonempty subsets of X having at most two elements. An element A of F2(X) is said to make a hole with respect to multicoherence degree in F2(X) if the multicoherence degree of F2(X)− {A} is greater than the multicoherence degree of F2(X). In this paper, we characterize those elements A ∈ F2(X) such that A makes a hole with respect to multicoherence degree in F2(X) when X is a cyclicly connected graph.
Introduction
A continuum is a connected compact metric space. Let X be a continuum. For each positive integer n, let Fn(X) = {A ⊂ X : A has at most n elements and A ≠ ∅}. The hyperspace Fn(X) is called the n-th symmetric product of X. It is known that each hyperspace Fn(X) is a continuum (see Borsuk & Ulam, 1931, pp. 876, 877) and (Michael, 1951, Theorem 4.10, p. 165).
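For instance, for an arc the second symmetric product has a simple concrete model (a standard fact, included here only as an illustration): assigning to each element {a, b} the pair of its minimum and maximum gives a homeomorphism

```latex
F_2([0,1]) \;\cong\; \Delta = \{(u,v) \in [0,1]^2 : u \le v\},
\qquad \{a,b\} \mapsto \bigl(\min\{a,b\},\, \max\{a,b\}\bigr).
```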
If Z is any topological space, let b0(Z) denote the number of components of Z minus one if this number is finite, and b0(Z) = ∞ otherwise. Given a connected topological space Y, the multicoherence degree of Y is defined by r(Y) = sup{b0(K ∩ L) : K and L are closed connected subsets of Y and Y = K ∪ L}. The space Y is said to be unicoherent if r(Y) = 0. Let y ∈ Y be such that Y − {y} is connected; we say that y makes a hole with respect to multicoherence degree in Y if r(Y − {y}) > r(Y). This is a generalization of the notion of making a hole in a unicoherent topological space defined in (Anaya, 2007, p. 2000).
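As a simple illustration of these notions (not taken from the paper): an arc is unicoherent, r([0, 1]) = 0, while a simple closed curve is not, since writing the circle as the union of two closed half-circles gives an intersection with two components,

```latex
S^1 = K \cup L, \qquad K \cap L = \{(1,0),(-1,0)\}, \qquad b_0(K \cap L) = 1,
```

and no decomposition of S¹ into two closed connected sets produces more components in the intersection, so r(S¹) = 1.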
In this paper, we are interested in the following problem.
Problem. Let H(X) be a hyperspace of a continuum X. For which elements A ∈ H(X) does A make a hole with respect to multicoherence degree in H(X)?
In the current paper, we present the solution to this problem when X is a cyclicly connected graph and H(X) = F2(X).
Preliminaries
Given a positive integer m, define λ(m) = {1, 2, . . . , m}. A map is a continuous function. The identity map for a topological space Z is denoted by idZ. An arc is any space homeomorphic to [0, 1]. A simple closed curve is a space which is homeomorphic to the unit circle S¹ in the Euclidean plane R². A theta curve is a space which is homeomorphic to S¹ ∪ ([−1, 1] × {0}) in R². The symbol [0, 1]² denotes the space [0, 1] × [0, 1]. The set {(u, v) ∈ [0, 1]² : u ≤ v} is denoted by Δ. A graph is a continuum which can be written as the union of finitely many arcs, any two of which are either disjoint or intersect only in one or both of their end points. A point y in a connected topological space Y is called a cut point (non-cut point) if Y − {y} is not connected (connected). A space W is said to be cyclicly connected provided that every two points of W belong to some simple closed curve in W (see (Whyburn, 1942, p. 77)). A graph X is a cyclicly connected graph if X is a cyclicly connected space.
Given a topological space Y, a subspace Z of Y is said to be: (a) a retract of Y if there exists a map f : Y → Z such that f|_Z = id_Z; (b) a deformation retract of Y if there exist a retraction f : Y → Z and a map g : Y × [0, 1] → Y such that g(y, 0) = y and g(y, 1) = f(y) for every y ∈ Y; (c) a strong deformation retract of Y if there exist f and g as in (b) with the additional property that g(z, t) = z for every (z, t) ∈ Z × [0, 1]. Let y ∈ Y and let β be a cardinal number. We say that y is of order less than or equal to β in Y, written ord(y, Y) ≤ β, provided that for each open subset U of Y containing y, there exists an open subset V of Y such that y ∈ V ⊂ U and the cardinality of the boundary of V is less than or equal to β. We say that y is of order β in Y, written ord(y, Y) = β, provided that ord(y, Y) ≤ β and ord(y, Y) ≰ α for any cardinal number α < β.
Auxiliary Results
Lemma 2.1 If X is a cyclicly connected graph different from a simple closed curve, then the following conditions hold: (1) for each simple closed curve S in X, S ∩ R(X) has at least two points; (2) X = I(X); (3) the set I(X) is finite; (4) for each p ∈ X, M(p, X) is a nondegenerate subcontinuum of X.
Proof. In order to prove (1), let S be a simple closed curve in X. Since S ≠ X, there exists a simple closed curve S_1 ≠ S in X such that S ∩ S_1 ≠ ∅. So, using (Nadler, Jr., 1992, Proposition 9.5, p. 142), R(S ∪ S_1) ∩ S ∩ S_1 ≠ ∅. Thus, by (Kuratowski, 1968, Theorem 3, p. 278), R(X) ∩ S ∩ S_1 ≠ ∅. Now, assume that R(X) ∩ S ∩ S_1 consists of precisely one point. Then, there exists a simple closed curve S_2 ≠ S in X such that S_2 ∩ (S − S_1) ≠ ∅. Applying the previous argument to S ∪ S_2, we have R(X) ∩ (S − S_1) ∩ S_2 ≠ ∅. Hence, S ∩ R(X) has at least two points.
Finally, to check (4), let p ∈ X. By (2), there exists I ∈ I(X) such that p ∈ I. So, since I ⊂ M(p, X), M(p, X) is a nondegenerate set. On the other hand, clearly, M(p, X) is connected. By (3), M(p, X) is closed in X.
Lemma 2.2 Let X be a cyclicly connected graph and let p ∈ X. If N(p, X) ≠ ∅, then N(p, X) is a subcontinuum of X.
Proof. First, by (3) of Lemma 2.1, N(p, X) is closed in X. We shall prove the connectedness of N(p, X). By (Whyburn, 1942, (9.3)) … Since N(p, X) ∩ M(p, X) = F and by the definition of f, f is well defined. Clearly, f is surjective. The continuity of f follows from the continuity of f and the fact that N(p, X) and M(p, X) − {p} are closed subsets of X − {p}. This finishes the proof that N(p, X) is connected.
Lemma 2.3 Let X be a cyclicly connected graph and let p, q be different points in X. If X − {p, q} is not connected, there exist a simple closed curve S in X containing p and q and a retract f : X → S. Given I ∈ I(X), let f_I : I → S be a one-to-one map. Define f : X → S as follows: for each x ∈ X, take I ∈ I(X) such that x ∈ I and let f(x) = f_I(x). Hence, f is well defined. The continuity of f follows from the fact that each f_I is continuous and from (2) and (3) of Lemma 2.1. It is easy to see that f|_S = id_S. Thus, f is a retraction.
From the fact that p ≠ q, we have that f^{−1}(p) = {p} and f^{−1}(q) = {q}.
Lemma 2.4 Let X be a cyclicly connected graph different from a simple closed curve and let p, q be different points in X. If X − {p, q} is connected, there exist a theta curve Y in X containing p and q and a retract f : X → Y. Proof. By the definition of cyclic connectedness, there exists a simple closed curve S in X such that p, q ∈ S. Since X − {p, q} is connected, there exists an arc J in X such that S − {p, q} … Define f : X → Y as follows: for each x ∈ X, take I ∈ I(X) such that x ∈ I and let f(x) = f_I(x). From the fact that f|_{R(X)} = f_0, it follows that f is well defined. Since X = I(X) and I(X) is finite (see (2) and (3) of Lemma 2.1), f is continuous. From the fact that f|_Y = id_Y, it follows that f is a retraction.
We will prove that f
Proposition 2.5 Let X be a continuum and let K and L be connected subsets (subcontinua) of X. Then ⟨K, L⟩ is a connected subset (subcontinuum) of F_2(X), and it does not have cut points when K and L are nondegenerate sets.
In order to prove the second part of this proposition, let {p, q} ∈ ⟨K, L⟩. Using the fact that K and L are nondegenerate sets and the arguments in (Kuratowski, 1968, Theorem 11, p. 137), it can be shown that {p, q} is not a cut point of ⟨K, L⟩.
Proof.
It is easy to verify that f and g have the required properties.
Finally, let h: [0, 1] → I be a homeomorphism such that h([0, … It can be proved that h is a homeomorphism such that h(Γ_0) = ⟨H, I⟩ ∪ ⟨J, I⟩. Therefore, ⟨H, I⟩ ∪ ⟨J, I⟩ is a strong deformation retract of F_2(I) − {{p}}.
Lemma 2.7 If X is a graph containing a simple closed curve, then X is not unicoherent.
Proof. We shall prove that there exist subcontinua K and L of X such that b_0(K ∩ L) > 0 and X = K ∪ L. Let S be a simple closed curve in X. By (Nadler, Jr., 1992, Theorem 9.10, p. 144), there exists x ∈ S such that ord(x, X) = 2. Now, using (Nadler, Jr., 1992, Theorem 9.7, p. 143), it can be proved that there exists an arc J in S which is a neighborhood of x in X. Then, J − E(J) is an open connected subset of X. Now, by (Nadler, Jr., 1992, 9.44, (a), p. 160), S − (J − E(J)) is connected. Hence, X − (J − E(J)) is a subcontinuum of X. So, K = J and L = X − (J − E(J)) satisfy the required properties.
Making Holes in the Second Symmetric Product of a Cyclicly Connected Graph
Theorem 3.1 Let X be a graph and let p ∈ O(X). Then {p} does not make a hole with respect to multicoherence degree in F_2(X).
Since p ∈ O(X), using (Nadler, Jr., 1992, Lemma 9.7, p. 143), it can be shown that there exists an arc I in X such that I is a neighborhood of p in X. So, clearly, p ∈ I − E(I). Let H and J be nondegenerate subcontinua of I such that H ∪ J ⊂ I − {p} and each one of them contains a different end point of I. Put Z = (X − I) ∪ H ∪ J and 𝒵 = ⟨X, Z⟩. Clearly, F_2(X) = 𝒵 ∪ F_2(I). Now, by Lemma 2.6, there exist a retraction f : F_2(I) − {{p}} → ⟨H, I⟩ ∪ ⟨J, I⟩ and a map g: (F_2(I) − {{p}}) × [0, 1] → F_2(I) − {{p}} such that g(A, 0) = A and g(A, 1) = f(A) for each A ∈ F_2(I) − {{p}}, and g(B, t) = B for each (B, t) ∈ (⟨H, I⟩ ∪ ⟨J, I⟩) × [0, 1]. To check that f̄ and ḡ are well defined, notice that 𝒵 ∩ F_2(I) ⊂ ⟨H, I⟩ ∪ ⟨J, I⟩. Now, the continuity of f̄ and ḡ follows from the continuity of the maps f and g and the fact that 𝒵 and F_2(I) − {{p}} are closed in F_2(X) − {{p}}. It is easy to verify that f̄ and ḡ have the required properties. Thus, 𝒵 is a deformation retract of F_2(X) − {{p}}.
Finally, to check that r(𝒵) = r(F_2(X)), we shall show that 𝒵 is homeomorphic to F_2(X). It can be shown that there exists a homeomorphism h. It is easy to see that h is a homeomorphism. Hence, r(F_2(X)) = r(𝒵).
This finishes the proof that {p} does not make a hole with respect to multicoherence degree in F 2 (X).
Theorem 3.2 Let X be a cyclicly connected graph and p ∈ R(X). Then {p} makes a hole with respect to multicoherence degree in F_2(X).
Proof. Since r(F_2(X)) = 1 (see Theorem 2.8), we shall show that r(F_2(X) − {{p}}) ≥ 2. So, it suffices to prove that there exist two closed connected subsets K and L of F_2(X) − {{p}} such that F_2(X) − {{p}} = K ∪ L and b_0(K ∩ L) ≥ 2. Since ϕ_k and ϕ_j are one-to-one, ψ(k, j) is well defined. Using the fact that ϕ_k and ϕ_j are surjective, it is easy to prove that ψ(k, j) is surjective. Clearly, for each k, j ∈ λ(m) with k ≠ j, … Consider the following cases.
We are ready to prove that K ∩ L = {C_k : k ∈ λ(m)}. From the fact that Σ = Λ ∩ Γ and ii), we have that … This case can be proved using arguments similar to those in the proof of Case A by considering Y = {ϕ_1(1)}.
Theorem 3.3 Let X be a simple closed curve and let p, q ∈ X such that p ≠ q. Then {p, q} makes a hole with respect to multicoherence degree in F_2(X).
We are ready to prove that {p, q} makes a hole with respect to multicoherence degree in F_2(X). Since X is a simple closed curve, there exists a homeomorphism h: S^1 → X such that h(A) = {p, q}. Consider the induced mapping h_2 : F_2(S^1) → F_2(X) defined by h_2(B) = h(B) for each B ∈ F_2(S^1). By (Higuera & Illanes, 2011, Theorem 3.1, p. 369), h_2 is a homeomorphism. Then, since A makes a hole with respect to multicoherence degree in F_2(S^1) and h_2(A) = {p, q}, {p, q} makes a hole with respect to multicoherence degree in F_2(X).
Theorem 3.4 Let X be a theta curve and let p, q ∈ X such that ord(p, X) = ord(q, X) = 2 and X − {p, q} is connected. Then {p, q} makes a hole with respect to multicoherence degree in F_2(X).
Lemma 2.6 Let I be an arc and let p ∈ I − E(I). If H and J are subcontinua of I such that H ∪ J ⊂ I − {p} and each one of them contains a different end point of I, then ⟨H, I⟩ ∪ ⟨J, I⟩ is a strong deformation retract of F_2(I) − {{p}}. | 3,428.6 | 2014-08-04T00:00:00.000 | [
"Mathematics"
] |
ARGprofiler—a pipeline for large-scale analysis of antimicrobial resistance genes and their flanking regions in metagenomic datasets
Abstract Motivation Analyzing metagenomic data can be highly valuable for understanding the function and distribution of antimicrobial resistance genes (ARGs). However, there is a need for standardized and reproducible workflows to ensure the comparability of studies, as the current options involve various tools and reference databases, each designed with a specific purpose in mind. Results In this work, we have created the workflow ARGprofiler to process large amounts of raw sequencing reads for studying the composition, distribution, and function of ARGs. ARGprofiler tackles the challenge of deciding which reference database to use by providing the PanRes database of 14 078 unique ARGs that combines several existing collections into one. Our pipeline is designed to not only produce abundance tables of genes and microbes but also to reconstruct the flanking regions of ARGs with ARGextender. ARGextender is a bioinformatic approach combining KMA and SPAdes to recruit reads for a targeted de novo assembly. While our aim is on ARGs, the pipeline also creates Mash sketches for fast searching and comparisons of sequencing runs. Availability and implementation The ARGprofiler pipeline is a Snakemake workflow that supports the reuse of metagenomic sequencing data and is easily installable and maintained at https://github.com/genomicepidemiology/ARGprofiler.
Introduction
Investigating the resistome of metagenomic datasets, including the abundances of the different antimicrobial resistance genes (ARGs) and the gene synteny (gene flanking regions), has become a major research area in recent years (Holmes et al. 2016, Bengtsson-Palme et al. 2018, Hendriksen et al. 2019, Anthony et al. 2021, Zhang et al. 2021, Martiny et al. 2022b, Munk et al. 2022).In many cases, research investigation, especially on a large scale, has been limited to research groups that are technologically and financially able to combine large-scale data generation with advanced bioinformatic and modeling expertise.However, because of the good datasharing practices of next-generation sequencing efforts, there are today a large number of sequencing datasets available in public repositories.We have recently provided a curated dataset of acquired ARG abundance estimates in more than 214 000 publicly available metagenomic datasets (Martiny et al. 2022a).
Processing these datasets in a uniform approach calls for optimized, standardized methods to support the broader scientific community in utilizing these datasets.The practice of sharing bioinformatic workflows, or pipelines, has not historically been part of the academic publishing process.
However, with the growing volumes of biological sequencing data available, researchers have begun to publish their workflows.Recent examples include pangolin for tracing SARS-CoV2 lineages (O'Toole et al. 2021), RASflow for RNA sequencing data (Zhang and Jonassen 2020), and ATLAS for metagenomic sequencing data (Kieser et al. 2020).
Here, we present ARGprofiler, a newly developed pipeline designed to analyze read dissimilarities, abundances, and genomic flanking regions of ARGs in metagenomic sequencing data (Fig. 1). ARGprofiler has been adapted to work for short-read sequencing reads, where we have carefully evaluated each step in our metagenomic workflow. ARGprofiler includes the PanRes database, a combined collection of current ARG databases, and ARGextender, an assembly tool for producing targeted de novo assemblies. The pipeline is an easily usable and scalable workflow implemented in Snakemake (Köster and Rahmann 2012, Mölder et al. 2021), which allows any user to process sequencing data to perform epidemiological analyses of ARGs globally. ARGprofiler is another step toward enabling the reuse of metagenomic sequences, and while we have targeted antimicrobial resistance (AMR), the pipeline can be repurposed for other tasks.
Implementation of ARGprofiler
ARGprofiler is a Snakemake workflow (Köster and Rahmann 2012) that is organized into five different parts: (i) download of metagenomic datasets, (ii) trimming and quality check of sequencing reads, (iii) mapping and alignment of reads against reference sequences, including ARGs and bacteria, (iv) building of flanking regions around genes of interest, and (v) creation of Mash sketches (Fig. 1). Mapping to bacteria is done to compare the ARG content to the bacterial composition, and Mash sketches allow for searching for samples with similar or different compositions.
ARGprofiler can handle both short single-and paired-end reads and combines existing and newly established tools and reference databases to produce a single comprehensive analysis pipeline.
Retrieval of metagenomic datasets
To manage the download of sequencing read data from ENA, ARGprofiler utilizes fastq-dl 2.0.4 (https://github.com/rpetit3/fastq-dl) to retrieve and download the reads by matching the given run_accessions in a JSON input file provided by the user.ARGprofiler is also capable of handling reads stored in a local folder as an alternative to downloading ENA read sets.
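The exact structure of the JSON input file is defined by the ARGprofiler pipeline and should be taken from its documentation; the sketch below only illustrates the kind of bookkeeping involved, and the keys used ("type", "run_accession") as well as the accessions are illustrative assumptions rather than the pipeline's actual schema.

```python
import json

# Hypothetical input: a mapping from sample names to ENA run accessions.
example = {
    "sample_A": {"type": "PAIRED", "run_accession": "ERR0000001"},
    "sample_B": {"type": "SINGLE", "run_accession": "ERR0000002"},
}
with open("input.json", "w") as fh:
    json.dump(example, fh, indent=2)

with open("input.json") as fh:
    samples = json.load(fh)

for name, info in samples.items():
    # In the real workflow, a Snakemake rule passes each accession to fastq-dl
    # (or points to a local folder of reads); here we only show the bookkeeping.
    print(f"{name}: run {info['run_accession']} ({info['type']}) to be fetched with fastq-dl")
```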
Preprocessing of sequencing reads
The first step in a sequencing workflow is the quality checking and trimming of the raw sequencing reads. Historically, FASTQC (Andrews 2010) has been used for quality checking and BBduk (Bushnell 2014) to remove adaptors and low-quality sequences (Martiny et al. 2022a, Munk et al. 2022). However, new and faster tools have appeared, such as the widely adopted tool fastp (Chen 2023). Therefore, we decided to compare the performance of fastp 0.23.2 with the combination of FASTQC and BBduk to ensure an efficient preprocessing of the raw reads.
Mapping and alignment of reads against reference databases
To quantify ARGs and microorganisms, KMA 1.4.12a (Clausen et al. 2018) was used to map and align the trimmed reads to different databases. KMA uses k-mer seeding to increase mapping speed and is specifically made for mapping reads against redundant databases. We designed the alignment procedure to use two reference sequence databases: the PanRes collection of ARGs and the mOTUs3 database (Ruscheweyh et al. 2022) for microbiome profiling. Details on the choice and design of reference databases are described in later sections.
Metagenome representation using Mash sketches
We used Mash 2.3 (Ondov et al. 2016) to enable comparison between large sets of metagenomes for subsequent selection and analysis by creating MinHash sketches as representatives for individual metagenomes. This allows an unbiased comparison of samples with a low constant memory footprint and a short turnaround time, which can be used for subsequent clustering and identification of closely related metagenomes (Ondov et al. 2016). We identified appropriate sketch and k-mer sizes using a selection of 72 sewage metagenomes: 36 from Copenhagen sewage (Brinch et al. 2020), 18 from various sites in the world (Munk et al. 2022), and the remaining 18 were technical replicates of a single sewage sample taken in Copenhagen, Denmark (PRJEB63576). Sketches were created for all samples using sketch sizes of 10^3, 10^4, 10^5, and 10^6, with k-mer sizes of 16, 21, 27, and 31. Mash distances were calculated to find the parameters resulting in lower within-sample distances than between-sample distances. Distances were also clustered using Dynamic Neighbor-Joining with CCPhylo 0.8.3 (Clausen 2023) to verify the appropriate sub-clustering of samples.
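To illustrate why the sketch size and k-mer size control discriminatory power, the following toy MinHash comparison may help. It is a rough, self-contained illustration of the idea behind Mash, not a reimplementation of Mash itself, and the synthetic sequences are purely illustrative; in practice the Mash program itself should be used.

```python
import hashlib
import random

random.seed(0)

def random_seq(n):
    return "".join(random.choice("ACGT") for _ in range(n))

def sketch(seq, k=21, size=1000):
    """Keep the `size` smallest 64-bit hashes of all k-mers (a toy bottom-k MinHash sketch)."""
    hashes = {int.from_bytes(hashlib.blake2b(seq[i:i + k].encode(), digest_size=8).digest(), "big")
              for i in range(len(seq) - k + 1)}
    return set(sorted(hashes)[:size])

def jaccard(a, b):
    # Rough similarity estimate between two sketches.
    return len(a & b) / len(a | b)

base = random_seq(20_000)
similar = base[:19_000] + random_seq(1_000)   # shares most k-mers with `base`
unrelated = random_seq(20_000)

for k in (16, 31):
    for size in (100, 1000):
        s_base, s_sim, s_unrel = (sketch(s, k=k, size=size) for s in (base, similar, unrelated))
        print(f"k={k:2d} size={size:5d}  base~similar={jaccard(s_base, s_sim):.2f}  "
              f"base~unrelated={jaccard(s_base, s_unrel):.2f}")
```

Larger sketches and longer k-mers make the similar/unrelated contrast sharper, which mirrors the parameter evaluation described above.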
Building flanking regions around genes of interest with ARGextender
To examine the genomic content surrounding ARGs, we created ARGextender to build genomic flanking regions around identified ARGs.ARGextender recruits reads using a recursive approach with KMA and SPAdes (Prjibelski et al. 2020), which produce comparable results and are faster than full metagenome de novo assembly with SPAdes.A more detailed description of ARGextender can be found in a later section.Because KMA will also assign reads to low abundant sequences that are unlikely to form contigs, we created a filtering step using the KMA mapstat files so that only samples fulfilling the following criteria would be assembled: > 90% query identity, > 90% global consensus identity, and a mean read depth > 6.
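The filtering step described above can be expressed in a few lines of table manipulation. The column names used below ("query_identity", "consensus_identity", "mean_depth") are placeholders standing in for the fields of the KMA mapstat file, whose actual header depends on the KMA version and should be checked; the toy rows are invented for illustration.

```python
import pandas as pd

# Toy stand-in for a parsed KMA mapstat table (real files contain more columns).
mapstat = pd.DataFrame({
    "template":           ["blaTEM-1", "tet(M)", "sul1", "aph(3')-III"],
    "query_identity":     [99.2, 91.5, 88.0, 95.3],
    "consensus_identity": [98.7, 90.4, 92.1, 96.0],
    "mean_depth":         [12.4, 7.1, 15.0, 4.2],
})

# ARGextender's filtering criteria as described in the text:
# >90% query identity, >90% consensus identity, mean read depth > 6.
keep = mapstat[(mapstat["query_identity"] > 90.0)
               & (mapstat["consensus_identity"] > 90.0)
               & (mapstat["mean_depth"] > 6.0)]

print(keep["template"].tolist())   # -> ['blaTEM-1', 'tet(M)']
```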
The PanRes database
Bacterial genes that encode resistance to antibiotic drugs, heavy metals, and biocides have been previously identified and compiled into several databases (Alcock et al. 2023, Bonin et al. 2023, Bortolaia et al. 2020, Feldgarden et al. 2021, Gupta et al. 2014, Gschwind et al. 2023). We sought to collect these genes of interest into a single unique collection that we named PanRes, short for pan-resistance; having a single, although highly redundant, collection is computationally more efficient to search through.
From the CARD database, the genes based on the protein homolog model were included as they are acquired and do not rely on mutations for resistance.In the MegaRes gene collection, sequences with the "RequiresSNPConfirmation" tag were excluded from consideration, as these represent mutated versions of housekeeping genes, regulators, repressors, and promoter sequences.From the AMRFinderPlus sequences of the "AMR" type, only those satisfying the "AMR" subtype were used.For the genes of the "STRESS" type, just the "BIOCIDE" and "METAL" subtypes were retained.
As heavy metals often co-select for antibiotic resistance (Baker-Austin et al. 2006), we screened for metal resistance genes. The BacMet v1.1 collection of experimentally verified resistance proteins was used as a starting point (Pal et al. 2014), where the BacMet GenBank accessions were used to extract the coding sequences (NCBI Resource Coordinators 2018). We then manually curated the collection of metal resistance genes with sequences identified in the published literature, with a special focus on acquired cobalt, zinc, copper, arsenite, mercury, cadmium, lead, or silver resistance genes retrieved using the NCBI nucleotide database (NCBI Resource Coordinators 2018). The final collection of metal resistance genes, which we refer to as the MetalResistance database, is deposited at: https://doi.org/10.5281/zenodo.8108201.
All retrieved sequences were clustered using Usearch 11.0.667 (Edgar 2010) with the fastx_uniques algorithm to identify unique sequences to include in PanRes. These unique ARGs were clustered based on 90% identity and 90% coverage with the cluster_fast algorithm. GeneAssimiliator (https://github.com/genomicepidemiology/gene_assimilator) was used to perform this iterative approach of recruiting, clustering, and refining gene collections of various sources into one.
ARGextender
Despite the value provided through metagenomic de novo assemblies, the computational demands are often too high to be considered for routine use (Martiny et al. 2022b).To enable a shorter turnaround time of metagenomic assemblies with lower computational demands, we developed ARGextender to perform de novo assemblies around target sequences of interest.
ARGextender recursively applies KMA 1.4.12a (Clausen et al. 2018) and SPAdes 3.15.5 (Nurk et al. 2017, Prjibelski et al. 2020), where KMA is used to identify the target sequences in the sample, followed by a de novo assembly of the reads matching the target(s) using SPAdes. After each de novo assembly, scaffolds containing target sequences are extracted using KMA and set as the new target. This recursion is repeated until no more reads are included in the de novo assembly or the user-defined maximum number of recursions has been met (unlimited by default). When the targeted de novo assembly has saturated, the scaffolds and assembly graph are saved, along with a table containing the information about which target sequences are found within each scaffold. These include target sequences with an alignment score within 70% of the best-scoring target sequences.
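The control flow of this recursion can be sketched as follows. The functions run_kma, run_spades, and extract_target_scaffolds are placeholders standing in for calls to the external tools, not real APIs of KMA or SPAdes; the toy string-based implementations exist only so the sketch runs end to end.

```python
def targeted_assembly(reads, targets, max_rounds=None):
    """ARGextender-style recursion: recruit reads matching the targets, assemble them,
    promote target-containing scaffolds to new targets, and repeat until the recruited
    read set stops growing (or an optional round limit is reached)."""
    recruited, scaffolds, round_no = set(), [], 0
    while max_rounds is None or round_no < max_rounds:
        newly_recruited = run_kma(reads, targets) - recruited      # placeholder call
        if not newly_recruited:
            break                                                  # assembly has saturated
        recruited |= newly_recruited
        scaffolds = run_spades(recruited)                          # placeholder call
        targets = extract_target_scaffolds(scaffolds, targets)     # placeholder call
        round_no += 1
    return scaffolds

# Toy stand-ins so the sketch is runnable; the real tool shells out to KMA and SPAdes.
def run_kma(reads, targets):
    return {r for r in reads if any(t in r for t in targets)}

def run_spades(read_set):
    return ["".join(sorted(read_set))] if read_set else []

def extract_target_scaffolds(scaffolds, targets):
    return [s for s in scaffolds if any(t in s for t in targets)] or targets

print(targeted_assembly({"xxblaxx", "yyblayy", "zzzz"}, {"bla"}))
```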
We evaluated the ARGextender tool by comparing the output of ARGextender to full de novo metagenomic assemblies of 951 urban sewage samples published by (Munk et al. 2022).We compared the resulting scaffolds by matching the sequences of the ResFinder database to the scaffolds using KMA 1.4.12a with the "-ont" parameter.The surrounding flanks were then extracted and compared.
Evaluating tools for profiling microbiomes
We compared the performance of KMA (Clausen et al. 2018) with several microbial reference databases and profiling tools. Using the in silico data generated for the profiling test in the Critical Assessment of Metagenome Interpretation (CAMI) challenge (Meyer et al. 2022), we tested mOTUs 3.0.3 (Ruscheweyh et al. 2022) together with the other tools and reference databases compared in the Results. The performance of each tool and reference database was evaluated similarly to the CAMI challenge using the OPAL tool (Meyer et al. 2019). Results with low abundances were filtered away ("-f 1"). We used three binary classification metrics on each taxonomic level, from superkingdom to species, to determine performance and the sum of abundances, specifically purity, completeness, and F1 scores. Purity and completeness consider the performance of correctly identifying taxons without considering relative abundances, where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives: Purity = TP/(TP + FP) and Completeness = TP/(TP + FN). The F1 score measures the overall performance of taxon identification and is defined as the harmonic mean of purity and completeness.
The MetalResistance gene database is available on Zenodo at https://doi.org/10.5281/zenodo.8108201, and the first version of the PanRes collection is available at https://doi.org/10.5281/zenodo.8055115. Output files of benchmarking microbial profilers are available at https://doi.org/10.5281/zenodo.7923774. The full de novo assemblies of urban sewage samples are available on ENA under project accessions PRJEB40798, PRJEB40816, PRJEB40815, PRJEB27621, and ERP015409. The Copenhagen sewage collection is under PRJEB34633, and the repeated resequencing of a single Copenhagen sewage sample is under PRJEB63576.
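These metrics reduce to a few lines of code; the small self-contained sketch below computes purity, completeness, and the F1 score from TP/FP/FN counts exactly as defined above (the example counts are invented).

```python
def binary_metrics(tp: int, fp: int, fn: int) -> dict:
    """Purity = TP/(TP+FP), Completeness = TP/(TP+FN), F1 = harmonic mean of the two."""
    purity = tp / (tp + fp) if tp + fp else 0.0
    completeness = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * purity * completeness / (purity + completeness)
          if purity + completeness else 0.0)
    return {"purity": purity, "completeness": completeness, "f1": f1}

# Example: a profiler predicts 40 genera, 30 of which are truly present, and misses 10.
print(binary_metrics(tp=30, fp=10, fn=10))
# {'purity': 0.75, 'completeness': 0.75, 'f1': 0.75}
```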
Results
With ARGprofiler, we wanted to focus on creating a pipeline that produces three main outputs suitable for analyzing metagenomic datasets: the abundance of reads aligned to different genes of two suitable reference databases (mOTUs and PanRes), targeted de novo assemblies with ARGextender, and MinHash sketches with Mash (Fig. 1). Our pipeline has been designed to be suitable for large volumes of sequencing reads by using the workflow manager Snakemake and by carefully selecting appropriate tools.
The PanRes collection
PanRes was created to compile several existing ARG reference databases into one, as there are both overlaps and discrepancies between the different ones (Fig. 2a).Out of 30 400 genes, we identified a set of 14 078 unique sequences that were included in PanRes.Grouping these genes based on 90% identity and 90% coverage produced 5280 centroids, which ranged in lengths between 93 and 5972 bp, with a median of 762 bp (Fig. 2b).
Assembling flanks around genes with ARGextender
To validate the output of ARGextender, we compared the scaffolds with those produced by SPAdes on a set of 951 urban sewage samples (Fig. 3).On average, ARGextender built 101 scaffolds (range: 1-350), whereas SPAdes reported 95 ARG-scaffolds on average (range: 1-348).The number of distinct ARGs detected in the scaffolds was on average 57 for ARGextender (range 1-141) and 58 for SPAdes (range: 1-153) (Fig. 3a).Most of the flanks extracted around ARGs were between 100 and 5000 bp, although many had no flanking regions (Fig. 3b).Excluding scaffolds with zero flanks, we can see that ARGextender and SPAdes can assemble flanks around a similar number of ARGs (Fig. 3c).Overall, ARGextender was capable of assembling the same amount of flanks as SPAdes with lower computational requirements (Supplementary Material Appendix B), although SPAdes produced longer flanks for some samples.
Choosing a microbial profiler and reference database
Since there are multiple different tools and reference databases available to profile the microbial content of a sample, we compared the performances of each method with various databases (Supplementary Material Appendix A, Fig. 4). KMA reported more matched reference sequences, regardless of the database used, where most were false positives (Supplementary Material Fig. A1). After removing hits with low abundances, KMA was comparable with MetaPhlAn and mOTUs, sometimes identifying more taxons than the other tools (Supplementary Material Fig. A2). We observed a decrease in completeness and purity in taxon identification for the non-human environments for all tools and databases (Supplementary Material Figs A3 and A4). Despite this decrease, KMA had higher completeness than MetaPhlAn, mOTUs, and Bracken on the plant-associated samples (Supplementary Material Fig. A4a). KMA with the mOTUs sequences (KMA-mOTUs) outperformed KMA with genomic and Silva databases across all six sampling groups regarding the binary metrics (Supplementary Material Appendix A). While KMA-mOTUs abundance results were generally lower, the F1 scores were on par with the MetaPhlAn and mOTUs profilers (Supplementary Material Figs A5 and A6), which we believe is due to a difference in how abundance is calculated in the other tools. We decided on combining KMA with the mOTUs sequences for two reasons. First, we plan to apply our pipeline as an environment-agnostic passive surveillance tool encompassing many One Health settings important for ARG ecology. Therefore, we deemed performance outside just human microbiomes important, where KMA-mOTUs outperformed all other tools on the plant-associated samples and was on par with the other tools on marine samples (Fig. 4). Second, since we also use KMA for the ARG quantification, this choice limited the overall pipeline complexity.
Clustering and representing metagenomes using Mash sketches
Most of the output of ARGprofiler relies on known reference sequences. To enable unbiased metagenome-wise comparison and clustering of sequence runs, we include the creation of sketches using Mash. We tested the discriminatory power of different sketch and k-mer sizes on sewage samples, some of which were technical sequence replicates of the same sample. Both very small sketch sizes and short k-mers failed to clearly distinguish between technical replicates of the same sample and sequence runs of other sewage samples (Supplementary Material Fig. B5 in appendix). Technical replicates were efficiently separated from the remaining samples with a k-mer size of 31 and a sketch size of ≥ 10^4 (Supplementary Material Fig. B6 in appendix). As smaller sketch sizes require less computational resources, a sketch size of 10^4 was included as the default in ARGprofiler.
Reducing computational time and memory usage
Analyzing the large quantities of metagenomic data currently available at ENA and future data is not computationally trivial, and choosing efficient workflows will seriously impact the associated time, costs, and energy expenditure. We benchmarked each rule using a set of metagenomic datasets from a variety of sampling origins (details in Supplementary Material Appendix B), where we observed that with our final parameter settings, the ARGprofiler pipeline processed 1.21 gigabasepairs/h (Gbps/h) with a median processing performance of 0.36 Gbps/h. Most steps had a sample-average peak memory footprint below 1 GB and required less than a CPU hour, except for KMA-mOTUs and ARGextender (Table 1). In Table 1, the average (μ) CPU time in hours (h) and the peak memory in megabytes (MB) are reported together with standard deviations (σ). Note that the CPU hours and peak memory for ARGextender do not include the two samples that were not completed within our limit of 48 h. ARG: antimicrobial resistance genes.
Discussion
There are currently terabytes of metagenomic sequencing data available in public databases, and producing standardized and consistent results is necessary for downstream analyses.Therefore, we have designed ARGprofiler to allow efficient ARG-monitoring and quantification in vast amounts of sequencing data, determine flanking regions around ARGs for downstream epidemiological investigation, and k-merbased comparison of sequence runs.Each output aligns with our overall goal of reanalyzing public sequencing datasets for the characteristics of ARGs in a global microbial and environmental context.One of the unique features of ARGprofiler is the addition of the PanRes database.The motivation behind PanRes was to eliminate the inefficiency associated with searching for the same gene in multiple collections and the additional overhead of spawning extra compute jobs for each collection.We, therefore, sought to collect the unique sequences of ARGs from a wide spectrum of existing databases.It is our hope that this will help facilitate fewer but larger and more efficient monitoring runs of public metagenomes, followed by data sharing and the individual AMR researchers then filtering results to their specific focus.
An important feature of ARGprofiler is the creation of targeted assemblies around genes with ARGextender.ARGextender uses KMA and SPAdes to create targeted de novo assemblies by identifying if the targeted ARGs are present in a sample and then recruiting reads to the surrounding regions.The pairwise comparison of ARG-carrying scaffolds produced with ARGextender and SPAdes in sewage samples showed that ARGextender could extract comparable flanking regions to SPAdes but in a much shorter time frame.This approach also avoids running the more expensive algorithm if none of the target genes is identified.
However, there are still a few points we need to address in the way that ARGprofiler currently works.First, ARGprofiler is designed to only work with short-read sequencing data, thus not utilizing the advantages of long-read sequencing technologies.We are planning to extend the input options to include long-read datasets.Second, a significant aspect of ARGprofiler is the choice of reference databases.PanRes is a one-size-fits-all approach for ARGs.However, as it combines different sources and scopes, it will be up to the individual research questions, which subsets are appropriate for consideration.ARGprofiler also profiles the microbiome of each sequencing dataset by mapping the reads against the microbial reference sequence database mOTUs.This step is included to compare the abundance of ARGs to the microbial content (Martiny et al. 2022b, Munk et al. 2022, Johansson et al. 2023).We chose mOTUs as this collection performed best across different environments with KMA.Our choice was based on the in silico datasets, and while real data might contain many more unknowns, it appeared to be the best choice for our pipeline.We chose to incorporate a sub-workflow of creating Mash sketches with optimized parameters to allow the user to compare and cluster to determine similar sequencing datasets and those of poor quality.The sketches also make it possible to query the read sets against pre-sketched genomes, thus allowing the user to re-use the data without rerunning the whole pipeline.
In conclusion, we have implemented and evaluated ARGprofiler to be a robust bioinformatic pipeline that provides other researchers the opportunity to analyze large collections of metagenomic sequencing runs against a collection of ARGs or other genes of interest. The ARGprofiler code is publicly available under the Apache-2.0 license at https://github.com/genomicepidemiology/ARGprofiler.
Figure 1. The ARGprofiler pipeline. This schematic illustrates the components of the pipeline: (a) download of sequencing reads from ENA, (b) preprocessing of retrieved reads with fastp, (c) procedure of aligning reads against chosen reference sequence database(s) with KMA, (d) assembly of targeted flanking regions of antimicrobial resistance genes with ARGextender for reads that pass the requirements in a filtering step, and (e) sketching sequencing reads with Mash.
Figure 2. Overview of the sequences included in PanRes. (a) A comparison of overlaps between the different databases in the PanRes collection. (b) Distribution of antimicrobial resistance gene lengths.
Figure 3. A comparison between the flanks extracted around ARGs found in scaffolds produced with either SPAdes or ARGextender. (a) Number of scaffolds containing at least one ARG compared with the number of ARGs detected across all scaffolds in a sample. (b) Distribution of flanking content (bp: basepairs) in sewage samples, including the number of scaffolds without a flanking region in the top right corner. (c) Overlap between the tools regarding which ARGs had flanks, excluding the scaffolds with zero flank regions. Only ARGs with a minimum of 95% breadth of coverage were included in this figure. ARG: antimicrobial resistance genes.
Figure 4. Performance of microbial profilers on the in silico CAMI data as measured by the F1 score at genus rank. The harmonic mean is reported as the circle, and the error bars are the standard deviation. CAMI: critical assessment of metagenome interpretation.
Table 1. Measured times and memory usage for each step of the ARGprofiler pipeline. | 5,154.8 | 2024-02-20T00:00:00.000 | [
"Environmental Science",
"Biology",
"Computer Science",
"Medicine"
] |
STUDENTS’ PERCEPTION OF BIOLOGY LEARNING AT KEPANJEN ISLAMIC SENIOR HIGH SCHOOL
Biology is one of the subjects in the science specialization. Students often state that biology is the easiest subject among the science specialization subjects. However, some students have difficulties studying biology for a variety of reasons, including the teacher's teaching style, the student's learning style, the students' unfavorable opinion of the course, and a lack of learning materials. This study aims to find out students' perception of biology learning at Kepanjen Islamic senior high school. This study uses a descriptive quantitative approach. The sample in this study was class X students, with the sampling carried out by proportionate stratified random sampling. Data collection techniques used in this study were questionnaires and interviews. The scores from the questionnaire data were calculated with percentage statistics. The findings reveal that the perception of class X students of the implementation of the biology learning process as a whole is in the 'enough' category, with a percentage of 79.17%. Of the five activities in the learning process, two, namely gathering information and communicating, are in the 'good' category, while observing, asking questions, and associating or processing information are in the 'enough' category.
INTRODUCTION
Education and guiding are two approaches to raising knowledge of aims in a systematic and targeted manner in order to influence behavior toward student maturity. Teaching is a technique that serves to teach students in life, particularly in guiding themselves to develop in accordance with the developmental obligations that students must fulfill. The primary goal of education is to educate students in making changes in their intellectual, moral, and social behavior so that they can remain independent as persons and social beings. Efforts that can be made to accomplish this ambition include guiding processes that allow students to interact with the learning environment that is governed by the teacher (Sadirman, 2012).
The teacher is a human factor in the guiding and learning process who performs a role in an endeavor to build capable human resources in the field of development (Sadirman, 2012). It is critical to develop teacher competency in order to achieve a positive student perception of the teacher. Whereas perception is a person's process of knowing, understanding, and evaluating other people about their nature, quality, and other conditions that exist within the perceived self. If students have faith in the teacher, it will lead to favorable acceptance of both the teacher and the subject matter being taught, and vice versa (Anggraini, 2015).
Biology is one of the subjects in the science specialization. Students often state that biology is the easiest subject among the science specialization subjects. The results of research related to student perceptions of science subjects were also revealed by Prokop et al. (2007), where science subjects are boring for many students, difficult, irrelevant to human life, and less interesting for students in higher grades, although this opinion cannot be applied to all branches of science. Students' opinions on physics and biology differ: students have a more negative attitude toward physics than toward biology. Male students, on the other hand, have a greater interest in physics, while female students have a greater interest in biology.
Nugraini (2015) stated that many students struggle with biology and believe that the subject is solely about memorizing. Certain concepts, such as cell division and metabolism, are extremely difficult for students to grasp. Students have difficulties studying biology for a variety of reasons, including the teacher's teaching style, the student's learning style, the students' unfavorable opinion of the course, and a lack of learning materials. Students struggle to understand biology and lose interest in the subject because they believe the material is irrelevant to daily life.
Research on student perceptions of the implementation of learning in several high schools has been carried out by Marina (2016), Rahma (2015), and Sewasa & Har (2015), revealing a relationship between students' perceptions and the implementation of biology learning. In addition, research by Anggraini (2015) states that there is a positive relationship between students' perceptions of teachers' pedagogic competence and biology learning outcomes. Based on the results of observations, this study aims to determine students' opinions on biology learning at Kepanjen Islamic Senior High School.
LITERATURE REVIEW 2.1. Perception
According to Morgan in (Marina, 2016), perception describes how we see, hear, feel, taste, and smell the world around us; in other words, it can be described as everything that a person experiences. Because perception is relative, a teacher can form a good picture of his students for the next lesson, since the teacher already knows the perceptions that students have formed previously (Slameto, 1988).
Students' perception of learning can be interpreted as the organization and interpretation of stimuli in the learning environment. The components assessed include the subject, the teacher, the material, the methods, and everything else related to the learning process itself, and these assessments can be positive or negative. For students' perception of science subjects, the science subject itself and all the activities that take place in science learning are the objects that students assess (Maaruf et al., 2013).
Teacher Learning Implementation
According to Diaz in (Budiana et al., 2022), learning is an accumulation of the concepts of teaching and of studying. The emphasis lies on the combination of the two, in particular on the development of student activity. Learning is a system, so that within
this system there are components, including students, objectives, material for achieving the objectives, facilities and methods, as well as the tools or media that must be prepared.
Teacher Workload Standards in the Implementation of Learning
1) Planning Lessons. The Learning Implementation Plan (hereinafter referred to as RPP) is a plan that describes the procedures and methods of learning to achieve the competencies set out in the Content Standards and elaborated in the syllabus. The scope of the RPP covers at least one basic competency, consisting of several indicators, for one or more meetings (Kunandar & Si, 2014).
2) Learning Implementation. According to Barnawi in (Marina, 2016), the second task of the teacher is to carry out learning. Learning activities are activities in which an educational interaction takes place between students and the teacher; this is a real face-to-face activity.
3) Assessing Lesson Results
The third task of the teacher is to assess the results of the lesson. Assessing learning outcomes is a series of activities to obtain, analyze, and interpret data about the process and learning outcomes of students which are carried out systematically so that it becomes meaningful information for assessing students and in making other decisions (Barnawi in (Marina, 2016)).
4) Guiding and Coaching Students
According to Barnawi in (Marina, 2016), the last task of the teacher is to guide and coach students, in particular guiding or coaching individuals in learning, intracurricular, and extracurricular activities. Furthermore, according to (Sanjaya, 2006), in order for the teacher to act as a true mentor, several things are required: the teacher must know the students being mentored and must be professional in planning, both in planning the goals and competencies to be achieved and in planning how the learning will be carried out.
RESEARCH METHOD
This study uses a descriptive quantitative approach. This descriptive research is exploratory in nature, aiming to describe the state/status of the phenomenon, and is also qualitative in that it seeks to determine students' perceptions of Biology lessons at Kepanjen Islamic Senior High School. The sample in this study was class X students, with the sampling carried out by proportionate stratified random sampling. Data collection techniques used in this study were questionnaires and interviews. The scores from the questionnaire data were calculated with percentage statistics. Students' perceptions of biology learning at Kepanjen Islamic Senior High School were obtained from a questionnaire consisting of 40 statements.
Based on the research data in table 2, there are two categories of student perceptions, namely the enough and good categories. The good category is in the range of 80–89%; two activities fall into this category, namely collecting information at 80.79% and communicating at 80.70%. The enough category is in the range of 65–79%; three activities fall into this category, namely observing at 77.32%, asking at 77.89%, and associating at 79.17%. Of the two activities in the good category, the highest percentage was for information-gathering activities, at 80.79%. This is because in this activity students feel satisfied, as they can collect information from various sources, alone or in groups with friends, so that through collecting information students train themselves to develop a thorough, honest, and polite attitude and to value the opinions of friends. This information-gathering activity can also encourage students to think critically and become more independent in learning. The process supports collaborative learning.
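The category boundaries quoted above can be applied to the questionnaire percentages directly. The small sketch below reflects only the two ranges stated in the text (80–89% as 'good', 65–79% as 'enough', read here as half-open intervals so that 79.17% falls into 'enough'); the boundaries of any remaining categories are not given and are therefore left out.

```python
def category(percentage: float) -> str:
    """Map a questionnaire percentage to the categories quoted in the text."""
    if 80.0 <= percentage < 90.0:
        return "good"
    if 65.0 <= percentage < 80.0:
        return "enough"
    return "other"  # boundaries of the remaining categories are not stated in the text

activities = {
    "observing": 77.32,
    "asking questions": 77.89,
    "associating": 79.17,
    "gathering information": 80.79,
    "communicating": 80.70,
}
for name, pct in activities.items():
    print(f"{name}: {pct:.2f}% -> {category(pct)}")
```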
Learning with the Think Pair Square (TPSq) type overcomes the passive nature of students in learning because it requires students to think for themselves, share with their partners, and work in groups. The TPSq learning model combined with problem solving gives higher results than the lecture method (Masrudi, Sudirman, and Ramses, 2016), because the TPSq learning model motivates students to learn biology (Prayitno et al., 2017); collaborative learning integrated with individual work and other learning models is recommended for use in science education to improve academic performance. Of the three activities that fall into the enough category, the lowest percentage is for observing, at 77.32%. This is because students are less serious in seeing and paying attention to the material provided by the teacher during the learning process.
According to Regulation of the Minister of Education and Culture (Permendikbud) No. 81A of 2013, observing can train students in integrity, thoroughness, and information seeking. In observation activities, the teacher opens various opportunities for
students to make observations through seeing, listening, and reading activities that are formulated in the scenario of the learning process (Yusa & Maniam, 2013).
Based on the data analysis that has been carried out, the perceptions of class X Mathematics and Natural Sciences students of the implementation of the biology learning process based on the 2013 curriculum at Kepanjen Islamic Senior High School can, in general, be described as falling into the sufficient category, with a percentage of 79.17%. This is because the biology learning process in class X of Mathematics and Natural Sciences at Kepanjen Islamic Senior High School has been implemented based on the 2013 curriculum, which consists of observing, asking questions, gathering information, associating, and communicating. Based on the results of the study, it can be concluded that the perception of class X Mathematics and Natural Sciences students of the implementation of the biology learning process based on the 2013 curriculum at Kepanjen Islamic Senior High School as a whole is sufficient, with a percentage of 79.17%. Of the five activities in the learning process based on the 2013 curriculum, there are two categories, namely good and sufficient.
As a professional educator, the role and function of the media is very important to be applied in learning. Media is the integration of the learning system as the basis for policies in the selection, development and utilization. Learning media can lead to good student perceptions by utilizing their senses. Therefore, students can assess and can give their respective arguments about what they feel by using the media when learning.
As revealed by Zacharia & Barton (2004), students' interest in science depends on how a science topic is presented. If science is taught by involving students, with hands-on experience, and in interesting situations, this helps to arouse a passion for science (Howe & Jones, 1993). The same circumstance is shown by the results of this study, where class X students showed a favorable perception of the implementation of the biology learning process.
CONCLUSION
Based on the results of the study, it can be concluded that the perception of class X students of the implementation of the biology learning process as a whole is in the 'enough' category, with a percentage of 79.17%. Of the five activities in the learning process, two, namely gathering information and communicating, are in the 'good' category, while observing, asking questions, and associating or processing information are in the 'enough' category. | 2,959.6 | 2022-01-25T00:00:00.000 | [
"Biology",
"Education"
] |
Threshold-induced correlations in the Random Field Ising Model
We present a numerical study of the correlations in the occurrence times of consecutive crackling noise events in the nonequilibrium zero-temperature Random Field Ising model in three dimensions. The critical behavior of the system is portrayed by the intermittent bursts of activity known as avalanches with scale-invariant properties which are power-law distributed. Our findings, based on the scaling analysis and collapse of data collected in extensive simulations show that the observed correlations emerge upon applying a finite threshold to the pertaining signals when defining events of interest. Such events are called subavalanches and are obtained by separation of original avalanches in the thresholding process. The correlations are evidenced by power law distributed waiting times and are present in the system even when the original avalanche triggerings are described by a random uncorrelated process.
When a threshold is imposed on a signal, the events of interest are connected bursts of activity above the threshold. Each such burst is a subavalanche of some underlying avalanche, comprising the entire activity of the system at the current stage of its evolution. A subavalanche, selected by thresholding, begins at the moment of time when the signal exceeds the threshold, and ends when the signal falls below it. The difference between these two moments is taken as the duration T, and the area between the imposed threshold and the portion of the signal above it as the size of the subavalanche, S = ∫_0^T [V(t_s + t) − V_th] dt, see Fig. 1a, where t is the time measured from the start t_s of the subavalanche.
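For a sampled signal, this thresholding procedure is straightforward to implement. The sketch below is a minimal discrete-time simplification (durations count samples and sizes are sums rather than integrals) that extracts the excursions above a threshold together with the quiet intervals between them, i.e. the waiting times discussed below; the toy signal is purely illustrative.

```python
import numpy as np

def subavalanches(signal, v_th):
    """Extract excursions of `signal` above `v_th`: returns a list of
    (start_index, duration, size) tuples and the list of quiet intervals
    (waiting times) separating consecutive excursions."""
    above = (signal > v_th).astype(int)
    edges = np.diff(np.concatenate(([0], above, [0])))   # +1 at upward, -1 at downward crossings
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    events = [(int(s), int(e - s), float((signal[s:e] - v_th).sum()))
              for s, e in zip(starts, ends)]
    waits = [int(s2 - e1) for e1, s2 in zip(ends[:-1], starts[1:])]
    return events, waits

# Toy signal: two bumps of activity separated by a quiet interval.
t = np.arange(200)
signal = np.exp(-((t - 50) / 10.0) ** 2) + 0.6 * np.exp(-((t - 150) / 15.0) ** 2)
events, waits = subavalanches(signal, v_th=0.3)
for start, duration, size in events:
    print(f"start={start}  T={duration}  S={size:.2f}")
print("waiting times between excursions:", waits)
```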
Once a threshold is applied, a portion of the signal will remain below it. This implies an introduction of the concept of waiting time T w , describing the time interval between two consecutive subavalanches, selected by the imposed threshold. Provided the start and end of each avalanche are known, like in simulations or in experiments after some minimal threshold is imposed, one can differentiate two kinds of waiting times. The internal waiting time is the waiting time T int between two consecutive subavalanches thresholded from the same avalanche (avalanche j in Fig. 1a), while the external waiting time T ext is the time between two consecutive subavalanches thresholded from two different avalanches (avalanches i and j in Fig. 1a).
A further distinction can be made among different types of contribution to the external waiting time T ext (i, j; V th ); thus, see Fig. 1a, where T end (i; V th ) is the time taken by the avalanche i to end (i.e. fall from V th to zero), T mid (i, j; V th ) is the time spent by a whole sequence of consecutive avalanches that lie between the avalanches i and j and remain below V th (note that this sequence may be empty), and T ini (j; V th ) is the time taken by the avalanche j to rise from zero to V th . In the main panel of Fig. 1b Thresholding of RFIM signal. The signal is obtained in simulations of 3D system with size L = 1024 and 40 random field configurations for each disorder R. (a) For a (blue) part of a train of avalanches (shown in bottom, and zoomed in top panel) and the imposed threshold V th (red line), we illustrate: the determination of size S and duration T of a subavalanche (starting at the moment t s and ending at t e = t s + T) taken out of the avalanche i, the internal waiting time T int between two subavalanches of avalanche j, and the contributions T end , T mid , and T ini to the external waiting time T ext between avalanches i and j, see Eq. (1). (b) Distributions D(T) of duration (main panel), and distributions D(S) of size (inset) of subavalanches selected by thresholds from a wide range shown in legend. (c) 〈S〉 T shown against T for the thresholds in legend, where 〈S〉 T is the average size of subavalanches with duration T; variation of exponent γ S/T with V th is shown in inset. (d) γ S/T vs V th data, obtained for various disorders R (see legend), collapse onto a same curve when presented against V th r, where the reduced disorder r = (R − R c )/R measures a distance to the critical disorder R c of the model. Inset shows how γ S T / (0) (i.e. the exponent γ S/T taken for V th = 0) depends on the reduced disorder r. c eff for the given system size (see Methods for more details). The distributions are collected for the subavalanches extracted above threshold V th from the avalanches triggered in a zero-centered window of external magnetic field in which the response signal can be considered as stochastically stationary. We found that in a wide range of thresholds, both types of distribution follow power-laws , terminated by the cutoff scaling functions g T (x) and g S (x) for duration and size, respectively. The cutoff time T 0 , and the cutoff size S 0 , decrease when the threshold V th increases. The values of exponents, pertaining to these distributions are: τ T = 1.64 ± 0.02 for duration, and τ s = 1.38 ± 0.03 for size of subavalanches. These values are obtained using 0 0 S , where σ T and σ S are the cutoff exponents whose values are close to 1 for all the distributions being analysed.
The average size ⟨S⟩_T of subavalanches with duration T is shown against T in the main panel of Fig. 1c for a family of curves corresponding to different values of the threshold V_th. The graph demonstrates that ⟨S⟩_T ∼ T^{γ_S/T} in a broad range of thresholds, and that the power-law exponent γ_S/T varies with threshold. As can be seen in the inset, when the threshold increases, the exponent γ_S/T decreases (from the value 1.77 for very low thresholds to the value of 1.44 for very high threshold levels), forming some sort of plateau, like in the crack-line propagation model 30.
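An exponent such as γ_S/T is commonly estimated by fitting a straight line in log-log coordinates over the scaling region. The sketch below is a generic illustration on synthetic data, not the authors' fitting procedure; in practice the fit would be restricted to durations below the cutoff.

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.unique(np.round(np.logspace(0.3, 3.0, 60)))          # a set of durations
gamma_true = 1.49
S_mean = T ** gamma_true * rng.lognormal(mean=0.0, sigma=0.05, size=T.size)  # noisy <S>_T

slope, intercept = np.polyfit(np.log(T), np.log(S_mean), 1)  # straight line in log-log scale
print(f"estimated gamma_S/T = {slope:.3f} (value used to generate the data: {gamma_true})")
```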
In order to gain a more complete insight into the variation of γ_S/T with V_th, we show in the main panel of Fig. 1d its values against V_th·r, i.e. the threshold multiplied by r, where r is the reduced disorder r ≡ (R − R_c)/R, measuring the distance to the critical disorder R_c of the model. The data obtained for different disorders collapse onto the same curve, suggesting that a joint plateau is formed at the value γ_S/T^(pl) = 1.49 ± 0.02. The existence of the plateau may be considered an important feature of the model, because the plateau value γ_S/T^(pl) remains stable under variation of both threshold and disorder, unlike, for instance, the value γ_S/T(0) of the exponent γ_S/T for zero threshold, which (seemingly linearly) changes with disorder (see the inset of Fig. 1d).
The distributions D(T_w) of waiting times, shown in the main panel of Fig. 2a, follow a power law with an exponent τ_w for the waiting time, terminated by the cutoff scaling function g_w(x), taken for x = T_w/T_w,0, where T_w,0 is the cutoff waiting time. In contrast to the cutoff time T_0, which decreases with threshold, the cutoff waiting time T_w,0 increases with V_th. This is shown in the bottom inset of Fig. 2a, where one can see that the relation T_w,0 ∼ V_th^δ is satisfied with δ ≈ 1.30 ± 0.02. Regarding the shape of the distributions D(T_w), one can see that the power-law part appears with the increase of threshold, indicating the onset of correlations due to avalanches that are partially hidden below the detection threshold. Opposite to that, one can notice that the power law gradually vanishes for very low threshold levels due to the very small value of the cutoff waiting time T_w,0. This is illustrated in the top inset of Fig. 2a, where the power law reduces to the (approximately) exponential cutoff scaling function g_w(T_w/T_w,0). The distribution of waiting times D(T_w) for zero threshold should be given by a delta-function limit of the exponential distribution with vanishing cutoff waiting time T_w,0, due to the specific pattern of driving explained in Methods, giving no time separation between consecutive avalanches.
Finally, in Fig. 2 panel b, we present the cutoff time T_0 and the cutoff waiting time T_w,0 against threshold V_th for a family of curves obtained for different disorders R. For small thresholds, the cutoff waiting time grows as a power law of the threshold, and more rapidly than that for larger V_th. This leads to the conclusion that if the threshold is so high that only the avalanches from the cutoff of the "true" distribution are "observable", then a very large separation between the time scales of the waiting times and avalanche durations may ensue 32. The pertaining scaling collapse of the cutoff waiting times, corresponding to different disorders, is obtained by multiplying the threshold axis by r, as is shown in the inset.
Scaling theory of avalanche correlations. Scaling properties of the threshold induced correlations can
be predicted for signals emitted by a wide range of systems that respond to stationary external driving in stochastically stationary trains of avalanches. To this end, let V(t) be a response signal that continuously varies with (continuous) time t, and let V(t) > 0 at any moment of time when the system is active, while V(t) = 0 otherwise. Such signal is a sequence of avalanches, separated by intervals of time when the system is quiet, each avalanche being a continuous burst of values V(t) > 0 taken at any t s < t < t e between the moments t s and t e when the avalanche starts/ends, and therefore V(t s ) = V(t e ) = 0. Essentially the same can be said for discrete signals sampled at discrete moments of time, provided that they are continuously (e.g. linearly) extrapolated to all moments of time throughout the signal duration, like it is done here with the RFIM signals.
The avalanches can be classified into types such that all avalanches of the same type have the same profile f(t′) ≡ V(t_s + t′) with respect to the time t′ measured from their start. Using this equivalence relation, one can obtain the set of all avalanche types I and introduce a one-parameter family of scaling transformations, labelled by a scale factor b, where x is an exponent specified by the type of the involved system 5. For this family of transformations one can prove scaling relations for the duration T_i and for the following types of waiting times, all defined for the subavalanches extracted above the threshold V_th out of an avalanche of type i, and for the corresponding quantities of the scaled avalanche Ŝ_i^b, extracted above the scaled threshold b^x V_th. Furthermore, if one scales the whole portion of the response signal starting with an avalanche (of type i) that surpasses the threshold V_th and ending with the successive avalanche (of type j) above the same threshold, then an analogous expression holds for the waiting time T_mid(i, j; V_th), spent by the avalanches that lie between these two and remain below V_th; combined with Eqs (1) and (5), it gives the same type of scaling. Next, let dp(i; λ) be the elementary probability of obtaining an avalanche of type i in the response signal under observation conditions specified by some appropriate multiparameter λ = (λ_1, λ_2, …, λ_n), like λ = (h′, r, 1/L) for the RFIM signals (L is the system size, r is the reduced disorder, and h′ is the reduced magnetic field, see Methods). Having this probability at our disposal, one can express the distribution D_T(T; V_th, λ) of subavalanches that are obtained under conditions λ and have duration T above the threshold V_th. In the resulting expression, w is a probability exponent, and the multiexponent ζ = (ζ_1, …, ζ_n) specifies the scaled conditions under which the type Ŝ_i^b is observed. Starting from this expression, which is in fact a generalized scaling hypothesis, one can obtain the scaling laws for the thresholded distributions. The foregoing general predictions can be tested in the case of any response signal for which the assumptions used in their derivation can be expected to be reasonably satisfied. The first step towards that in the case of the RFIM signal is to specify the observation conditions λ, and next to express the generic exponents x, y, w, and ζ in terms of the standard RFIM exponents. Here, as already mentioned, the observation conditions are λ = (h′, r, 1/L), while the exponents x and w and the multiexponent ζ = (ζ_h, ζ_r, ζ_L) can be expressed through σ, ν, z, α, β and δ, the standard RFIM exponents 3,9,10. In Fig. 3 we present the collapsing of the distributions of duration and of various types of waiting times, all for the subavalanches above thresholds. The subavalanches are taken from a family of response signals observed under conditions aligned according to the collapsing requirements, and are shown together with the corresponding collapsing predictions. Thus, in panel a, the data are scaled in agreement with the corresponding collapsing prediction. The power-law parts of the waiting-time distributions gradually vanish as the value of the threshold decreases, down to the lowest threshold, at which the distribution of waiting times becomes approximately exponential. This implies that even though the avalanches are triggered by a random process, applying a finite threshold implicitly introduces underlying temporal correlations in a given signal. On the other hand, the distributions of the subavalanche durations and sizes follow power laws with cutoffs that decrease as the threshold level increases.
Thus the present paper verifies that the scaling predictions obtained for the previously studied crack-line propagation model 30,32 also hold in the case of RFIM signals, suggesting their general validity in accordance with the scaling hypothesis and the derived scaling forms proposed in the previous section. In our analysis we have also identified different contributions of waiting times (T_int, T_ext, T_ini, T_end, T_mid), which all follow the same scaling form Eq. (20). This form predicts that the rescaled distributions of each type of waiting time, obtained for system parameters adjusted according to the collapsing requirements, all collapse onto a single curve. As the proposed general scaling theory predicts, we anticipate that the derived scaling forms of the threshold induced correlations, provided that the requirements of the theory are fulfilled, should hold for any response signal originating from a system that, as a response to slowly changing external conditions, relaxes in an avalanche-like intermittent way. For such response signals, one can say that, in general, the external waiting times are affected by two mechanisms: (i) implicit or explicit application of a finite threshold, resulting in low levels of avalanche activity not being detected, and (ii) effects due to details of the implementation of the external driving. While we focus here on the first mechanism, a more detailed study of the interplay between the two, considering the joint effects of finite thresholds and driving rates, would be an interesting avenue of future work.
Additionally, we have found that the exponent γ_S/T, obtained from the scaling of the average avalanche size with duration, is affected by the level of the applied threshold, with the theoretically expected value of 1.77 recovered only in the limit of very low thresholds. As the threshold grows, the effective value of γ_S/T initially decreases and then reaches a plateau, where it remains stable over a wide range of threshold and disorder values. This result is of special importance for the analysis of experimental data, where setting a finite detection threshold is an inevitable step in processing the recorded signal.
Qualitatively speaking, our results agree very well with those obtained for the previously studied crack-line propagation model 30,32. Given that our numerically generated data are noise-free, we anticipate that real experiments may reveal additional effects that the presence of noise imposes on the thresholded signal 28. Thus, our work calls for a further, more detailed investigation of experimental data in order to unravel the true nature of the underlying mechanisms causing the onset of temporal correlations in these systems.
Methods
Simulations of the Random Field Ising Model. In order to investigate the threshold induced temporal correlations in the Random Field Ising Model (RFIM), we performed numerical simulations of its athermal (T = 0) variant in the nonequilibrium adiabatic regime. The athermal RFIM describes a system of N ferromagnetically coupled classical Ising spins S_i = ±1, located at the sites i of some underlying lattice. The spins are influenced by a homogeneous external magnetic field H and by a local magnetic field h, whose values h_i vary randomly from site to site due to the random distribution of quenched impurities that generate that field. Therefore, each instance of an RFIM system is specified by the configuration of values {h_i} that the quenched random field h takes at the sites i of that system, and these values remain frozen throughout any evolution of the system.
In the basic RFIM case presented here, the ferromagnetic coupling between spins extends only to the nearest neighbors, so the effective magnetic field acting on spin S_i is h_i^eff = J Σ_j S_j + H + h_i, where the sum runs over the nearest neighbors S_j of S_i and J > 0 is the ferromagnetic coupling constant. Hence, the system Hamiltonian reads ℋ = −J Σ_⟨i,j⟩ S_i S_j − H Σ_i S_i − Σ_i h_i S_i; in this expression the first sum runs only over nearest-neighbor spins S_i and S_j, the second term describes the coupling of the spins with the external field H, while the last term gives the coupling with the quenched random field h. At any site i, the value h_i is taken randomly from a zero-centered distribution, the same for all sites, and for any two different sites i ≠ j the values h_i and h_j are chosen independently, so that the expected value of their product is ⟨h_i h_j⟩ = 0. For generating the values of the quenched random field, here we use a Gaussian distribution and take its standard deviation R = √⟨h²⟩ as the measure of disorder in the system. In the nonequilibrium athermal RFIM, the system evolves according to the following flipping rule: each spin S_i remains stable while its sign equals the sign of the effective field h_i^eff at its site; otherwise, S_i becomes unstable and flips at the next moment t + 1 of discrete time. Thus, the flipping of each spin changes the effective field of all of its nearest neighbors. All neighbors that thereby become unstable flip at the next moment of time, which in turn may cause the flipping of their neighbors, producing an avalanche that lasts until all spins become stable.
Once all spins become stable, the only way to trigger a new avalanche is to change the external field H, and in this way the system is driven by a sequence of H-increments forming a driving pattern that is set in advance. Typically, the changes between two consecutive moments of time are small, resembling the usual real-world situation with two well separated time scales: a fast one for spin flipping and a slow one for the external field. In the limiting regime of infinitely slow (i.e. adiabatic) driving, the external field is kept constant during any avalanche. After the avalanche dies, H continues to change (i.e. increase or decrease, following the current direction of the driving pattern) until it reaches exactly the value that triggers only the least stable spin. Note, however, that because all spins remain stable, and therefore unaltered, during this change, the overall change of H can be performed in a single jump, which is exploited in computer simulations for better efficiency. The consequence of such a driving pattern is that the next avalanche is triggered immediately after the previous one has ended.
Together with the driving pattern, one also needs to specify the initial and final conditions. Here, we take that initially H = −∞ and all spins are −1, and then we gradually increase H until all spins become +1. At each moment of time t, we register the number of spins V(t) flipped at that moment, and in this way collect the system's response along the whole rising part of the saturation hysteresis loop. Note that if one repeats the run with the same sample (i.e. the same configuration of random field), in the same driving regime, and with the same initial conditions, the system response will be identical because the flipping rule is deterministic. Therefore, reliable avalanche statistics are collected by repeating the whole procedure many times using different random field configurations (quenched or sample averaging).
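The compact Python sketch below illustrates the simulation procedure just described (adiabatic driving along the rising branch, parallel spin updates, V(t) recorded as the number of spins flipped per time step). It is a pedagogical toy with a small lattice and brute-force neighbour sums, not the production code used for the L = 1024 systems; the parameter values and function names are illustrative assumptions.

```python
import numpy as np

def rfim_rising_branch(L=16, R=2.7, J=1.0, seed=0):
    """Toy athermal RFIM on an L x L x L lattice with closed boundaries; returns the signal V(t)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, R, size=(L, L, L))        # quenched Gaussian random fields, std = R
    S = -np.ones((L, L, L))                       # start from H = -infinity: all spins down

    def neighbour_sum(S):
        P = np.pad(S, 1)                          # closed (non-periodic) boundaries: pad with zeros
        return (P[:-2, 1:-1, 1:-1] + P[2:, 1:-1, 1:-1] +
                P[1:-1, :-2, 1:-1] + P[1:-1, 2:, 1:-1] +
                P[1:-1, 1:-1, :-2] + P[1:-1, 1:-1, 2:])

    signal = []                                   # V(t): number of spins flipped per time step;
    while np.any(S < 0):                          # consecutive avalanches follow with no waiting time
        # adiabatic driving: raise H in a single jump, just enough to destabilise the least stable down spin
        local = J * neighbour_sum(S) + h
        H = -np.max(local[S < 0])
        # propagate the avalanche at fixed H, one parallel update per discrete time step
        while True:
            unstable = (S < 0) & (J * neighbour_sum(S) + h + H >= 0)
            n_flip = np.count_nonzero(unstable)
            if n_flip == 0:
                break
            S[unstable] = 1.0
            signal.append(n_flip)
    return np.array(signal)
```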
Our RFIM simulations are done in the nonequilibrium adiabatic regime with J = 1 and with closed boundary conditions on 3D lattices L × L × L of linear size L = 1024. For the analysis of the effects of the imposed threshold V_th, we have used the parts of the signal where it can be considered approximately stationary. This is fulfilled in a narrow window of the external field H, taken around the coercive value (i.e. the value at which the system magnetization M = Σ_{i=1}^{N} S_i is zero). One fragment of such a signal is shown in the bottom panel of Fig. 1a. The number of spins flipped at a given moment of time t is taken as the signal value V(t) at that moment, and the time is measured from the window start. The statistics, collected by quenched averaging for a given disorder R, are described using the scaling variables: the reduced disorder r = (R − R_c)/R and the reduced magnetic field h′ = H − H_c − b_r·r, where R_c is the critical disorder, H_c is the critical value of the magnetic field H, and b_r is the rotational parameter accounting for how the effective critical value H_c^eff(r) of the external field (i.e. the value of H at which the maximum of susceptibility occurs) shifts with the reduced disorder r 3,10. In this paper we have confined our study to disorders that are above the effective critical disorder R_c^eff pertaining to the underlying lattices 33,34. For disorders below this value, precisely defined in the quoted references, spanning avalanches are likely to occur. A spanning avalanche is an avalanche that spans the finite system along at least one of its dimensions and therefore plays the role of the infinite avalanche that causes the jump of magnetization in infinite systems below the critical disorder. In three-dimensional RFIM systems, these avalanches have different distributions than the remaining (i.e. non-spanning) avalanches 35, and violate the scaling assumptions given in section Scaling theory of avalanche correlations. Data availability. The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.
Uncertainty evaluation on the absolute phase error of digitizers
In many engineering applications the phase angle of a signal is a key parameter. Especially when measuring small angles, the measurement accuracy is of vital importance. Often, the absolute phase error of a digitizer, defined as the phase displacement between the digitized output and the input analog waveform and representing a systematic measurement error, is neglected. Therefore, in this paper, a new measurement technique for the evaluation of this absolute phase error is discussed, along with an in-depth theoretical analysis of the uncertainty sources and how to handle them. The measurement technique is validated through a high accuracy experimental setup. Experimental tests demonstrate that even high accuracy digitizers can show non-linear behavior in their absolute phase errors.
Introduction
The signal phase angle is an important piece of information required in many engineering applications, from telecommunications to power systems (Pawula et al., 1982; Phadke, 1993). Most modern electronic instruments that perform accurate measurements of this quantity are based on data acquisition systems, namely digitizers. The digital samples are then used by a measurement algorithm that returns the measurement result. However, it must be taken into account that every digitizer has its own phase frequency response, which introduces a systematic phase deviation between the analog input and the corresponding digital output samples. This deviation can be defined as the absolute phase error of the digitizer, and it depends on the characteristics of the digitizer input circuitry and on the digitization architecture. In applications with low frequency signals, this phase deviation can be considered negligible; in particular, when the phase displacement translates into a time delay that is very short with respect to the period of the considered signals. In addition, for measurements in which only the relative phase delay between signals is important (e.g. power, energy, impedance measurements), this effect has a reduced impact because phase subtraction compensates this systematic effect. Nevertheless, the issues related to the measurement of the relative phase delay between two channels of the same digitizer, or two channels of two different digitizers with synchronized sampling clocks, are addressed in a number of scientific papers (Bosco et al., 2011; Crotti et al., 2017; Trinchera et al., 2017).
However, there are issues in special applications not yet addressed in the scientific literature, such as the measurement of phase angles timestamped against absolute time, used, for example, by Phasor Measurement Units (PMU) in medium voltage grid applications (Braun et al., 2016; Georgakopoulos and Quigg, 2017; Luiso et al., 2018; Sánchez-Ayala et al., 2013; Tang et al., 2013), or the measurement of phase differences with microradian accuracy, necessary, for example, for the calibration of Low Power Instrument Transformers (LPIT) with digital output (DLPIT) or Stand Alone Merging Units (SAMU) (Collin et al., 2018; Crotti et al., 2018a, 2018b; Del Prete et al., 2018; Djokic and So, 2005; Houtzager et al., 2016; Juvik, 2000; Mohns et al., 2017), carried out by comparison with a reference device with analog output. In these applications the absolute phase deviation of the single channel of the used digitizer may be comparable to or higher than the required accuracy.
In Crotti et al. (2019), the same authors presented a measurement procedure for the evaluation of the absolute phase errors of a digitizer, with an experimental validation. Here, the attention is focused on aspects not fully addressed by Crotti et al. (2019). In particular, this paper deepens the metrological characterization, explaining in detail the evaluation of the uncertainty contributions, especially the one due to the non-linearity of the comparator shown in Crotti et al. (2019). Moreover, in Crotti et al. (2019) only the dependence of the absolute phase error on the signal frequency and the sampling frequency was shown, whereas here the dependence on the signal amplitude and on the temperature is also shown. The structure of the paper is as follows. Section 2 presents the measurement technique and Section 3 discusses the implementation of the experimental setup. In Section 4, a thorough theoretical analysis of the uncertainty sources and how to evaluate them is presented. Section 5 describes the experimental tests on a high accuracy digitizer and, finally, Section 6 draws the conclusions.
Measurement method
In order to evaluate the absolute phase error of a digitizer, the phase delay between a digital quantity, composed of the output samples of the digitizer, and an analog quantity (very often a voltage), which is the digitized signal, must be evaluated. To the best of the authors' knowledge, direct measurement methods able to quantify this phase error are not available. Therefore, the authors propose an indirect measurement method, which has been fully described in Crotti et al. (2019); here, it is only briefly summarized. It is based on the introduction of a Phase Reference Signal (PRS), that is, a square wave having the same frequency as the input signal. The digitizer to be characterized (Digitizer Under Test, DUT) is supplied with a sinusoidal signal s_g generated by an Arbitrary Waveform Generator (AWG), which also provides a signal that acts as the PRS (see Figure 1). Assuming the rising edge of the PRS as the time reference (t = 0), the initial phase of the generated sine wave should be zero; however, due to the phase frequency response of the AWG and its internal time delay, the sine wave is delayed by a phase u_g. Thus, the DUT input can be written as s_g(t) = sin(2π f_0 t − u_g), where f_0 is the signal frequency and, for the sake of simplicity, a unitary amplitude is considered. Let us now consider the DUT sampling clock; the PRS is used as a trigger to start the sampling clock. In the ideal case, the first sampling command coincides with the rising edge of the PRS; however, since there could be a propagation delay in the clock paths, in actual cases it has a time delay equal to t_c, as shown in Figure 1. Under the hypothesis of short term stability of the sampling clock, choosing a sampling period T_s results in equally spaced sampling commands, so the k-th sampling command is delayed by the quantity kT_s + t_c. The latter delay t_c must be taken into account and not confused with the phase error of the DUT. There is also another phenomenon to be considered, that is, the delay between the DUT sampling command and the actual sample acquisition: samples obtained with instantaneous sampling are represented as circles in Figure 1; of course, the acquisition is not instantaneous and the actual samples are delayed (crosses in Figure 1). The phase shift due to this time delay is the quantity of interest, that is, the absolute phase error of the digitizer, and it is due to two contributions: the phase shift introduced by the analog input circuitry (see the generated and delayed waveforms in Figure 1) and the further internal delay on the sample command, that is, the time needed by the digital circuits to sample the input signal (see the sampling command and acquired command in Figure 1).
Therefore, the samples at the output of the DUT can be expressed as a sampled sine wave, where u_DUT(f_0) is the phase deviation introduced by the DUT at frequency f_0 (the gain deviation has been neglected) and u_TOT is the overall phase angle of the samples of the DUT.
The quantity u_g(f_0) can be measured with a phase comparator (COMP) (Crotti et al., 2017; Trinchera et al., 2017) and the quantity t_c can be measured with a frequency counter. The phase angle u_TOT of the DUT samples at frequency f_0 can be evaluated by performing the Discrete Fourier Transform (DFT) and expressed as ∠F[s_DUT(kT_s)]|_f0; thus, the DUT phase error at frequency f_0 can be calculated from equation (3) by combining u_TOT with u_g(f_0) and the clock-delay phase 2π f_0 t_c. Usually, a COMP is used to measure the phase shift between two sinusoidal signals. Here, instead, the quantity u_g(f_0) is the phase shift between the input sinusoidal signal and the PRS, which is a square wave. For an ideal square wave, the fundamental component presents its zero crossing with positive slope in correspondence with the rising edge of the square wave. Therefore, in order to obtain the quantity u_g, a convenient solution is the extraction of the PRS fundamental spectral tone and the subsequent comparison of its phase angle with that of the sine wave s_g(t). Since these two signals (the sine wave and the PRS) are stationary, the execution of the DFT on the two signals allows the measurement, in the frequency domain, of the desired phase delay. Possible measurement errors could arise from the use of a finite sample rate to measure the spectral content of the square wave (namely aliasing), since the square wave has an infinite frequency spectrum. An analog antialiasing filter applied to the square wave can solve the problem, but it introduces other issues, such as offsets, non-linearity, temperature drifts, and so forth. Therefore, in order to overcome this issue, the oversampling technique, along with a digital antialiasing filter and sample decimation, has been used here. This filter is integrated in the COMP and is applied both to the sine wave and to the PRS.
Thus, if the sampling frequency is sufficiently higher than the frequency of the PRS (here the minimum sampling frequency used is ten times greater), the aliasing issue is solved and, since the Nyquist criterion is respected, the PRS fundamental phase angle can be measured accurately.
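The short Python sketch below illustrates the idea numerically: the phase of the fundamental tone of the sine wave and of the PRS is extracted from the DFT bin at f0, and their difference estimates u_g. All signal parameters, the coherent-sampling assumption and the neglect of residual aliasing of the high square-wave harmonics are simplifying assumptions of this example, not details of the actual setup.

```python
import numpy as np

def fundamental_phase(x, fs, f0):
    """Phase (rad) of the spectral component of x at frequency f0 (coherent sampling assumed)."""
    npts = len(x)
    k = int(round(f0 * npts / fs))          # DFT bin of the fundamental
    return np.angle(np.fft.rfft(x)[k])

fs, f0, npts = 204_800.0, 50.0, 204_800     # 1 s record, heavily oversampled with respect to f0
t = np.arange(npts) / fs
u_g_true = 0.003                            # example phase delay of the generated sine wave (rad)
sine = np.sin(2 * np.pi * f0 * t - u_g_true)
prs = np.sign(np.sin(2 * np.pi * f0 * t))   # idealized PRS: rising edge at t = 0

# For a square wave, the positive-slope zero crossing of its fundamental coincides with the rising
# edge, so the phase difference between the two fundamentals estimates u_g (residual aliasing of
# very high harmonics is ignored in this idealized sketch; the paper uses digital filtering).
u_g_est = fundamental_phase(prs, fs, f0) - fundamental_phase(sine, fs, f0)
print(u_g_est)                              # approximately 0.003 rad
```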
Measurement setup
A high accuracy measurement setup, shown in Figure 2, has been built in order to give an experimental validation of the proposed technique. A NI PXI (National Instruments PCI eXtension for Instrumentation) platform is at the base of the setup, which also makes use of a GPS-disciplined Rubidium atomic clock (Fluke 910R) and the external universal frequency counter Agilent 53230A (350 MHz, 20 ps). The multifunction I/O module NI PXIe-6124 (±10 V, 16 bit, maximum sampling rate of 4 MHz) has been used as the DUT. The module NI PXI-5422 (±12 V, programmable gain, 16 bit, maximum sampling rate of 200 MHz) has been used as the AWG.
The digitizer used as phase comparator (COMP) is, instead, the module NI PXI 4462 (±10 V, 24 bit, maximum sampling rate of 204.8 kHz). All the instruments of the test bench operate synchronously, since the clock source from the Fluke 910R is provided to the whole PXI backplane and to the frequency counter as external timebase. Clock signals (with frequency different from 10 MHz) and trigger signals are generated by the NI PXI-6683H synchronization board.
In particular, the sampling frequency of the AWG is 5 MHz, while the sampling clock of the DUT is made variable up to 1 MHz. The PRS is generated by the NI PXI-6683H, too. A digital storage oscilloscope (Tektronix TDS 2014B) is only used to control the correct operation of the setup and it is not involved in the measurement of the absolute phase error. The signal C DUT is the DUT sampling clock and the signal C AWG is the sampling clock of the AWG. The sine wave is connected to both the DUT and the COMP. The COMP measures the phase difference between the sine wave and the PRS. The frequency counter receives a 10 MHz clock as external timebase and measures the time delay between the PRS and the DUT sampling clock.
All the clock and signal paths are symmetric in order to avoid different propagation delays. Since the two input channels of COMP and of the counter could have inter-channel time (or phase) delay, in order to compensate for these systematic errors, two measurements are performed, interchanging the signals between the two channels, both for the COMP and the counter (Crotti et al., 2017, 2019; Trinchera et al., 2017).
Measurement software is developed in LabVIEW. For each test point, the amplitude and frequency of the test signal of the DUT can be chosen, and 30 repeated measurements of ∠F[s_DUT(kT_s)]|_f0, t_c and u_g(f_0) are performed.
Uncertainty evaluation
As follows from equation (3), the absolute phase error of the DUT is obtained by combining three different quantities: the phase angle of the samples acquired by the DUT, u_T, the phase delay between the PRS and the DUT sampling clock, u_c, and the phase angle of the generated signal referred to the PRS, u_g. Therefore, in order to evaluate the uncertainty on the measurement of u_DUT, each of these terms is analyzed in the following to point out its uncertainty contributions. In addition to these main contributions, the repeatability and stability of the measurement setup should be taken into account too.
It is possible to identify three sources of uncertainty in the measurement: (1) the uncertainty u(u_g) on the phase due to comparator non-linearity, (2) the uncertainty u(u_c) on the clock delay and (3) the uncertainty u(u_T) on the sampling event.
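To make the combination explicit, the toy Python snippet below shows how the three measured quantities could be merged into u_DUT and how their standard uncertainties would combine if treated as independent. The numerical values and the sign convention assumed for equation (3) are placeholders for illustration, not results from the paper.

```python
import numpy as np

f0 = 50.0                 # signal frequency (Hz)
u_T = -0.520              # phase of the DUT samples from the DFT (rad), illustrative value
u_g = -0.515              # phase of the generated sine wave w.r.t. the PRS (rad), illustrative value
t_c = 12e-9               # PRS-to-sampling-clock delay measured by the counter (s), illustrative value

u_c = 2 * np.pi * f0 * t_c            # clock-delay phase
u_DUT = u_T - u_g - u_c               # assumed sign convention for equation (3)

# If the three contributions are treated as independent, their standard uncertainties
# (placeholder numbers below) combine in quadrature.
u_std = np.sqrt(3e-6**2 + 1e-6**2 + 2e-6**2)
print(u_DUT, u_std)
```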
Let us now consider, in more detail, the causes of these uncertainty contributions and how they have been evaluated.
As explained in Section 2, the phase angle of the generated sine wave, with respect to the PRS that defines the reference time instant (t = 0), is evaluated by means of the COMP, which measures the relative phase deviation between the sine wave and the fundamental component of the PRS. Therefore, the two input channels of the COMP are stimulated with waveforms with different characteristics, namely, a sine wave and a square wave. In particular, the square wave could stimulate a residual non-linear behavior of the channel, which is not stimulated by the sine wave. This aspect must be taken into account in the uncertainty evaluation.
Generally speaking, the comparator introduces a systematic phase error when measuring the relative phase between the two signals, due to the different time delays of its internal paths. This systematic error can easily be highlighted by supplying the same signal to both inputs (Channel 1 and Channel 2) through symmetrical paths. In this situation, the measured phase delay should theoretically be exactly zero, so a measured value different from zero is due to the systematic error introduced by the comparator, which can be modelled as a differential phase displacement. This effect can be compensated in a simple way when the input signals are sinusoidal. In fact, considering two sinusoidal signals (WaveA and WaveB) at the same frequency, with a relative phase delay Δu_d, and connecting WaveA to Channel 1 and WaveB to Channel 2, the measured phase delay Δu_m1 can be expressed as Δu_m1 = Δu_d + Δu_12, where Δu_1 and Δu_2 are the phase displacements introduced at the signal frequency by Channel 1 and Channel 2, respectively, and Δu_12 = Δu_1 − Δu_2 is the resulting systematic error introduced by the comparator on relative phase measurements. So, the measurement result is the combination of the systematic error of the comparator with the actual phase displacement Δu_d. Nevertheless, this systematic error can be compensated by inverting the waveform connections at the comparator inputs (WaveB connected to Channel 1 and WaveA to Channel 2) and measuring the phase displacement again. With this configuration the phase delay between the signals changes sign while the systematic error remains the same, so the second measurement of the phase displacement results in Δu_m2 = −Δu_d + Δu_12. Now, the correct value of the phase displacement can easily be obtained as in equation (6), Δu_d = (Δu_m1 − Δu_m2)/2. This kind of analysis should be conducted even when the input waveforms are not sinusoidal and are even different from each other (i.e. a sinusoid and a square wave as in the considered application). In fact, we have already pointed out that the comparator can be used to evaluate the phase displacement between the fundamental components for those signals for which the zero crossing (or rising edge) of the waveform coincides with the zero crossing of the fundamental component (as for a square wave). Under the hypothesis of two perfect input channels (perfectly linear behavior and an ideal frequency response without attenuation/amplification and phase modifications in the frequency range of interest), the presence of harmonics does not affect the measurement and the correction can be performed as in equation (6). On the contrary, in real cases the correction is not so simple. In fact, the non-ideal frequency response modifies the amplitudes and phases of the harmonic components, and this reflects in a modification of the zero crossing and thus of the phase of the fundamental component. In addition, a possible non-linear behavior even changes the shape of the waveform. Both phenomena obviously affect the measurement of the phase displacement, introducing a further systematic deviation. In this situation, the relations for the results obtained with the two measurements performed by inverting the input waveforms, that is, the two expressions for Δu_m1 and Δu_m2 above, can be rewritten with two additional systematic deviations, Δu_1,NL and Δu_2,NL, due to the non-sinusoidal waveforms considered. In this case, the two contributions should in general be considered different because they depend on the input waveforms and on the different input channel characteristics.
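A tiny numerical check of the swap-and-average compensation for the sinusoidal case described above is sketched below; the phase values are arbitrary and only serve to show that the channel-dependent systematic term drops out of the half-difference.

```python
# Toy check (assumed values) of the compensation by interchanging the comparator inputs.
du_d = 0.0125      # true relative phase delay between WaveA and WaveB (rad), arbitrary
du_12 = -0.0040    # systematic inter-channel error of the comparator (rad), arbitrary

du_m1 = du_d + du_12        # WaveA on Channel 1, WaveB on Channel 2
du_m2 = -du_d + du_12       # connections interchanged: the delay flips sign, the systematic error does not
print((du_m1 - du_m2) / 2)  # recovers du_d = 0.0125 independently of du_12
```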
It is apparent that the more complex the input waveform (i.e. the greater the number of harmonics that must be considered), the more difficult the evaluation of the non-linearity contribution to the systematic error. In this situation, the average of the measured values obtained by inverting the signals at the inputs of the comparator retains a residual term due to these additional deviations, so, generally speaking, the deviation due to non-linearity cannot be exactly eliminated. Nevertheless, as the circuits of the comparator inputs are built in an identical way, under the hypothesis of identical waveforms (in amplitude and shape) the two contributions can reasonably be expected to be identical or at least very similar. Therefore, with the averaging, it is possible to expect a cancellation of these contributions, or at least a great reduction of their impact on the results, so that they can be neglected, and even in this case the average of the two measurements leads to a correct measurement result. This assumption is not straightforward for the considered application, because the two waveforms are very different (a sine wave and a square wave) and have different amplitudes, so that further considerations are necessary. First of all, it is important to underline that the on-board anti-aliasing filter, which removes all the harmonic components above the Nyquist frequency, smooths the square wave shape, thereby reducing the non-linear phenomena. At first, some analyses were performed to quantify the difference in non-linearity of the two considered channels. To this aim, the two input channels were supplied with the same signal through symmetrical paths, and two types of waveform were considered: a pure sinusoidal signal and a square wave (see Figure 3). In both situations, the input waveforms were synchronously acquired from the two channels and the spectra of the acquired signals were analyzed by measuring the difference between the corresponding tones of the two spectra. The comparison was performed only for the harmonic components with magnitudes above the noise floor. These analyses were performed continuously for a period of about 10 hours, in order to account also for warming effects. The maximum measured difference was obtained with the square wave, and this value, normalized with respect to the amplitude of the signal, was equal to 0.3 mV/V. This low value shows that the non-linear behaviors of the two acquisition channels are almost equal.
Then, in order to evaluate the residual uncertainty associated with this cancellation, we measured the systematic error introduced by the comparator under different conditions: (1) Δu_SQ, with two square waves of equal amplitude, 3.3 V; (2) Δu_S,1, Δu_S,2, Δu_S,5, Δu_S,10, with two sine waves of the same amplitude, equal to 1 V, 2 V, 5 V and 10 V (the same amplitudes at which the DUT absolute phase error has been measured).
Then, the standard uncertainty related to the non-linearity cancellation has been estimated from the difference between Δu_SQ and Δu_S,X, where the subscript X indicates a sine-wave amplitude of X volt.
In practice, we assumed that the maximum phase error due to the non-linearity of the comparator is the difference between the systematic phase errors measured when the inputs are two square waves and when they are two sine waves. Then, assuming a uniform probability distribution, the corresponding standard deviation is taken as the standard uncertainty. For each frequency, the maximum uncertainty value among all the tested sine-wave amplitudes has been taken as the uncertainty due to non-linearity.
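A possible implementation of this estimate is sketched below; the systematic-error values are placeholders (not the measured ones), the variable names are assumptions, and the division by √3 reflects the uniform-distribution assumption stated above.

```python
import numpy as np

du_sq = 42e-6                                         # systematic error with two square waves (rad), placeholder
du_sine = {1: 40e-6, 2: 41e-6, 5: 39e-6, 10: 43e-6}   # with two sine waves of amplitude X volt (rad), placeholders

# Maximum square-wave/sine-wave difference treated as the half-width of a uniform distribution.
u_nl = max(abs(du_sq - v) for v in du_sine.values()) / np.sqrt(3)
print(u_nl)   # standard uncertainty due to comparator non-linearity at this frequency
```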
It is worth underlining that the soundness of the uncertainty evaluation is supported by the comparison of the proposed method with another, classical method, shown in Crotti et al. (2019): the results are always compatible.
The second contribution to the uncertainty comes from the measurement of the time delay between the DUT sampling clock and the PRS by means of the frequency counter. The uncertainty u(u_c) in the measurement of this parameter is due to the finite slew rate of the electronic devices that generate the two considered signals; it prevents an instantaneous change between the logical levels (low and high) and thus an exact definition of the time instants. In fact, when the transition should occur, the considered signals change their amplitude almost linearly from the initial value to the final value (see Figure 4), taking a certain time to complete the commutation. The parameter that characterizes the quality of the commutation is the rise time, defined as the time needed for the signal to rise from 10% to 90% of the final value. Therefore, if we measure this rise time, we can evaluate the uncertainty on the quantity 2πf_0 t_c as a function of the frequency f of the input sine wave and the rise time Δt shown in Figure 4. The estimated uncertainty contribution due to the PRS rise time is lower than 0.2 mrad at 50 Hz and 68 mrad at 20 kHz. The actual time instant at which the DUT performs the sampling, after the recognition of a pulse of the sampling clock, represents another source of uncertainty. The sampling command is recognized by the DUT at a level of 2.2 V, with positive slope, of the sampling clock. Thus, since the sampling clock (due to the finite analog bandwidth of the digital circuitry) has a rising edge that is not vertical, and the DUT does not recognize the level of 2.2 V in a perfect way, an uncertainty contribution arises from these two effects. With the frequency counter we measured the time interval between the instants at which the sampling clock crosses the levels of 2.1 V and 2.3 V (Δt in Figure 5). Its uncertainty contribution has been quantified, with an equation similar to (11), to be lower than 10 nrad at 50 Hz and 4 mrad at 20 kHz.
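The snippet below illustrates how a rise-time-type contribution of this kind scales with the signal frequency; the rise time used is an assumed figure, not the one measured in the paper, and the simple product 2πfΔt is only meant to show the order of magnitude and the linear growth with f.

```python
import numpy as np

def phase_uncertainty(f, delta_t):
    """Order-of-magnitude phase uncertainty associated with a transition time delta_t at frequency f."""
    return 2 * np.pi * f * delta_t

delta_t = 5e-9                                # assumed rise time (s), illustrative only
for f in (50.0, 20e3):
    print(f, phase_uncertainty(f, delta_t))   # grows linearly with the signal frequency
```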
All the standard uncertainty contributions, at 50 Hz and 20 kHz, are shown in Table 1.
Experimental results
This section refers to the relative phase error measurement method, illustrated in Trinchera et al. (2017) and Crotti et al. (2017), as the COMP method. In this section, further experimental results with respect to Crotti et al. (2019) are presented. Various experimental tests have been performed on the DUT. The input signals used had four amplitudes, 1 V, 2 V, 5 V and 10 V (these values correspond to the four full scale ranges of the DUT), and various frequencies between 1 Hz and 20 kHz. The sampling frequency was kept constant at 1 MHz for this set of tests. Then, the signal frequency was kept constant at 50 Hz and the sampling frequency was varied in the range between 1 kHz and 1 MHz. The absolute phase errors of two channels of the DUT (CH0 and CH1) were measured.
From the measurement of the absolute errors of the two DUT channels, their relative phase error is obtained by computing the difference. In this way, it is possible to compare the obtained results with those of the more conventional COMP method on the same two channels. Some results have already been shown in Crotti et al. (2019); further results are shown in Figures 6-8. Figure 6(a) shows the absolute error of CH0 of the DUT; it has been obtained with a 2 V input signal amplitude, in the frequency range from 1 Hz to 20 kHz, with a constant sampling frequency of 1 MHz. Figure 6(b) shows, instead, the relative phase error between CH0 and CH1 of the DUT, evaluated by computing the difference between the absolute phase errors (solid line) and by performing the COMP method (dashed line). The error bars represent the expanded uncertainty (level of confidence of 95%). It is worth noting that the results are always compatible and thus in good agreement.
Figure 6. (a) CH0 absolute phase error at 2 V with constant sampling frequency; (b) relative phase errors between CH0 and CH1, obtained with two different methods.
Another phenomenon to be observed is the dependence of the absolute phase error of the DUT channels on the input signal amplitude, which implies a non-linear behavior of the channel. This phenomenon is shown in Figure 7(a), which depicts the absolute phase error of CH0 when the input signal frequency is 50 Hz and the sampling frequency is 1 MHz. Moreover, Figure 7(b) shows the relative phase error between CH0 and CH1 of the DUT, evaluated by computing the difference between the absolute phase errors (solid line) and by performing the COMP method (dashed line). The error bars represent the expanded uncertainty (level of confidence of 95%). Unlike the absolute phase error, the relative phase error between the channels is practically independent of the input signal amplitude: this means that, even though each channel has a non-linear behavior (the absolute phase error depends on the input amplitude), the two non-linear behaviors are approximately the same and thus cancel when the relative phase error is evaluated.
The same quantities shown in Figure 7(a) and Figure 7(b) are also shown in Figure 8(a) and Figure 8(b), but with an input signal frequency of 20 kHz. Looking at Figure 8(a), we can see that the non-linear behavior of the absolute phase error, already observed in Figure 7(a), is now amplified. Moreover, in Figure 8(b) we can also observe that the relative phase error now shows a non-linear behavior, although it was not observed at 50 Hz (Figure 7(b)).
A final consideration concerns the temperature dependence of the absolute phase error. It is worth noting that all the experimental tests have been performed in the following way. First of all, the DUT was warmed up, setting the operating conditions as near as possible to the test conditions and monitoring the temperature of all the instrumentation involved, in particular that of the DUT. Then, multiple subsequent measurements were conducted for each test, verifying that the first and last measurement results still agree within the uncertainty bound. This means, of course, that the measurand did not change significantly during the tests. This is easily achieved when the thermal regime is reached and the temperature is almost constant. This approach avoids a further uncertainty contribution due to the temperature of the measurement instrumentation. All the results obtained without following this procedure presented high variability, and the comparison with the COMP method did not always show compatible results.
In order to highlight this phenomenon, the dependence of the CH0 absolute phase error on the DUT temperature is shown in Figure 9, where the input signal has an amplitude of 5 V and a frequency of 20 kHz, and the DUT has a sampling frequency of 1 MHz. All the temperature measurements are performed using the internal temperature sensor of the digitizer. Even if the values of the absolute phase error measured at temperatures differing by less than 1 K are compatible within the measurement uncertainty (the bars show a level of confidence of 95%), it is possible to observe a variation of the absolute phase error with temperature. However, further investigation of the temperature dependence of the DUT absolute phase error is still in progress.
Conclusion
This paper deepens the evaluation of the uncertainty in the measurement of the absolute phase error of a digitizer. This method has been already presented by Crotti et al. (2019) and here it is briefly reviewed. Particular attention has been devoted to the theoretical analysis of the compensation of the systematic errors present in the measurement method and to the uncertainty contributions due to such compensations. The method for the estimation of the non-linearity of the used phase comparator has been presented.
New conclusions have been drawn about the absolute phase error of a digitizer: even high accuracy digitizers can suffer from non-linear behavior regarding the absolute phase error and, moreover, different channels of the same digitizer can have different non-linear behaviors in such a way that also the relative phase error between these channels can have non-linear behavior.
Another aspect that is highlighted in this paper is the temperature dependence of the absolute phase error: in order to avoid the temperature contribution to the total measurement uncertainty, a specific test method is proposed. Further investigations about the temperature dependence are still in progress.
Declaration of conflicting interests
The author(s) declared no potential conflict of interests with respect to the research, authorship and/or publication of this article.
Better Higgs-CP Tests Through Information Geometry
Measuring the CP symmetry in the Higgs sector is one of the key tasks of the LHC and a crucial ingredient for precision studies, for example in the language of effective Lagrangians. We systematically analyze which LHC signatures offer dedicated CP measurements in the Higgs-gauge sector, and discuss the nature of the information they provide. Based on the Fisher information measure, we compare the maximal reach for CP-violating effects in weak boson fusion, associated ZH production, and Higgs decays into four leptons. We find a subtle balance between more theory-independent approaches and more powerful analysis channels, indicating that rigorous evidence for CP violation in the Higgs-gauge sector will likely require a multi-step process.
I. INTRODUCTION
Since the experimental observation of the Higgs boson at the Large Hadron Collider (LHC) [1,2], detailed studies of its properties have become one of the most important laboratories to search for physics beyond the Standard Model. With the measurement of the Higgs mass, the last remaining parameter of the Standard Model has been determined. This implies that further Higgs measurements can be viewed as consistency checks on the validity of the Standard Model description. In particular, deviations from the Standard Model expectations induced by heavy new particles can be described by a continuous and high-dimensional parameter space of Wilson coefficients in the Lagrangian of an effective field theory (EFT) [3][4][5][6]. EFT descriptions have the advantage that they are well-defined quantum field theories and allow us to predict and include kinematic distributions in the analysis [7,8].
The key assumptions defining any effective Lagrangian are the particle content and the symmetry structure. Once these two initial assumptions are agreed upon, the Lagrangian is defined as a power series in the heavy new physics scale Λ. First, the general consensus is that the particle content of Higgs analyses is given by the Standard Model particles [9]. However, on the symmetry side, the situation is less clear. To begin with, one can embed the Higgs scalar in a SM-like SU (2) L doublet or add a scalar field unrelated to the Goldstone modes. In this paper we realize the electroweak gauge symmetry linearly and include a complex Higgs-Goldstone doublet. A remaining question concerns the charge conjugation (C) and parity (P ) symmetries of the Higgs boson and its interactions. In the Standard Model, after the CKM rotations which diagonalize the fermion masses, the Higgs boson has C and P preserving interactions at tree level. Any deviation from this prediction would be a striking manifestation of physics beyond the Standard Model, and it is experimentally exigent to determine whether there are new sources of CP violation in the Higgs sector.
A common approach addresses this question by simplistically combining CP -even and CP -odd operators into one effective Lagrangian and fitting them to a combination of arbitrary observables. Because of many caveats affecting global dimension-six EFT analyses, the results of such an analysis do not say much about the CP nature of the Higgs boson. Instead, we propose to carefully disentangle three questions [10,11]: 1. Which LHC observables are sensitive to the CP nature of the Higgs boson? 2. What are the assumptions linking these observables to CP ? 3. How well can we quantitatively test the Higgs' CP properties based on these observables?
Once such a dedicated analysis establishes that CP is not a good symmetry of the Higgs sector, we will expand the effective Lagrangian to include CP -violating operators to better discern the nature of the CP violation. The first two questions have straightforward answers [12]. In fact, there exists a wealth of individual LHC studies for this kind of measurement in the Higgs-gauge [13][14][15][16][17][18][19][20][21][22] and in the Yukawa sectors [23][24][25], as well as through global analyses [26]. Our focus therefore lies on a unified theory framework which allows us to systematically compare and combine the leading three Higgs-gauge LHC production/decay channels.
Progress on the third question requires the use of modern analysis techniques and tools. The LHC experiments have come to rely on high-level statistical discriminants, including hypothesis tests based on multivariate analysis with machine learning or the matrix element method [27,28]. These tools are able to tease out features that defy simpler cut-and-count analyses based on one-dimensional or two-dimensional kinematic distributions. We apply the new MadFisher approach [29] based on information geometry [30] to systematically study the sensitivity of different Higgs processes to different scenarios of CP violation. Through the Cramér-Rao bound, the Fisher information determines the maximum knowledge about model parameters that can be derived from a given experiment [31]. It allows us to define and to compute the best possible outcome of any multivariate black-box analysis [28,32] as well as the expected outcome based on a more limited set of kinematic observables. In this way, we determine not only which Higgs production and decay processes are best suited to test its CP properties, but also identify which kinematic variables carry the relevant information.
We begin with a brief review of CP -sensitive observables at the LHC in Sec. I B, CP violation in the Higgs-gauge sector in Sec. I C, and our Fisher information approach in Sec. I D. We study the three leading LHC signatures, Higgs production in weak boson fusion (WBF) in Sec. II, associated ZH production in Sec. III, and Higgs decays to four leptons in Sec. IV. For each of these signatures we discuss the possible CP -sensitive observables and briefly describe the advantages and challenges of the corresponding LHC analysis. In Sec. V, we compare all three channels.
A. CP vs naive time reversal
As is well known, the three discrete symmetries consistent with Lorentz invariance and a Hermitian Hamiltonian [12] are charge conjugation (C), parity (P), and time reversal (T). These three operators act on a complex scalar field φ(t, x) as C φ(t, x) C⁻¹ = η_C φ†(t, x), P φ(t, x) P⁻¹ = η_P φ(t, −x), and T φ(t, x) T⁻¹ = η_T φ(−t, x), where the phases η_j define the intrinsic symmetry properties of φ. C and P are unitary transformations, while T is anti-unitary, implying that the phase η_T is not measurable and can be chosen to be η_T = 1. Acting with parity on a single-particle state with 4-momentum p and spin s produces P |φ(p, s)⟩ = η_φ |φ(−p, s)⟩, where η_φ is the intrinsic parity of the field. Time reversal transforms incoming states into outgoing states, so it is convenient to define a 'naive time reversal' [10,12,33] T̂ |φ(p, s)⟩ = |φ(−p, −s)⟩, which explicitly omits exchanging initial and final states.
Observables can be chosen to reflect the C, P, or T transformation properties of the underlying transition amplitude. We are interested in a real-valued observable O that can be measured in a process |i⟩ → |f⟩. Interesting observables at the LHC are functions of the 4-momenta, spins, flavors, and charges of the initial and final states. First, we define a U-odd or U-even observable through its behavior under the U transformation of the initial and final states, where the upper (lower) sign refers to U-odd (U-even).
For the purpose of testing the properties of the underlying theory, a genuine U-odd observable is defined as having a vanishing expectation value in a U-symmetric theory (for which L = U L U⁻¹). In case the initial state is a U eigenstate, or the probability distribution of the initial states p(|i⟩) is U-symmetric, the second definition is slightly weaker. One can show that under this condition any U-odd observable is also genuinely U-odd.
Figure 1. Feynman diagrams describing the three processes considered in this paper: WBF Higgs production, associated ZH production, and H → 4ℓ decays.
In particular, any observable that compares the probabilities of two conjugated processes is obviously genuinely U -odd.
We can gain additional insights on CP from the T̂ transformation properties. Based on the definition in Eq. (5), at tree level, a finite expectation value of a genuinely T̂-odd observable O indicates a CP-violating theory [34]. In addition to CPT invariance this argument requires:
• the phase space is T̂-symmetric;
• the initial state is a T̂-eigenstate, or its distribution is invariant under T̂; and
• there cannot be re-scattering effects.
The latter correspond to absorptive, complex-valued loop contributions, for instance an imaginary part in the propagator of an intermediate on-shell particle. To illustrate this point, consider the transition amplitude T defined via S = 1 + iT. The matrix elements then satisfy a chain of relations, with the T̂-transformed states defined as in Eq. (2): the first step follows from CPT invariance, and the second from the optical theorem in the absence of re-scattering. Indeed, in a CP-symmetric theory and in the absence of re-scattering, the matrix element squared is T̂-invariant.
In practice, this argument means that where genuine CP observables cannot be constructed, we can analyze genuine T̂ observables instead. A non-zero expectation value here is evidence for CP violation under the additional assumption of no or negligible re-scattering.
B. CP violation in LHC processes
We evaluate the effect of CP -odd operators on the three most promising LHC Higgs signatures: WBF Higgs production in Sec. II, associated ZH production in Sec. III, and Higgs decays to four leptons in Sec. IV. From Fig. 1, it is clear that these three processes are governed by the same hard process, with different initial and final state assignments, and the W and Z couplings related by custodial symmetry.
Since it is not realistically possible to determine the spins in the initial or final states in these processes, all observables must be constructed as functions of the 4-momenta. Ideally, we can reconstruct four independent external 4-momenta for each process shown in Fig. 1 and combine them into ten scalar products of the type (p_i · p_j). Four of them correspond to the masses of the initial- and final-state particles, and the remaining six specify the kinematics. Scalar products are P-even and, following Eq. (3), also T̂-even. As we will see in Sec. II A, two of them are C-odd, while the remaining four are C-even. In addition to the scalar products, there is one P-odd and T̂-odd observable constructed from four independent 4-momenta, ε_{μνρσ} k_1^μ k_2^ν q_1^ρ q_2^σ [15,16]. Altogether, there are:
• four scalar products corresponding to the masses of the external particles;
• four C-even, P-even, and T̂-even scalar products;
• two C-odd, P-even, and T̂-even scalar products;
• one C-even, P-odd, and T̂-odd observable constructed from the Levi-Civita tensor,
for all three processes illustrated in Fig. 1. More details, including some analytic results, are given in the appendix. Thus, at most three observables are CP-odd, with the main difference between the processes coming from what can be measured for each of the four fermion lines.
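For illustration, the Levi-Civita observable can be evaluated as a 4 × 4 determinant of the four momenta, as in the Python sketch below; the momentum values are arbitrary and the overall sign depends on the ε convention, which is left unspecified here.

```python
import numpy as np

def epsilon_observable(k1, k2, q1, q2):
    """eps_{mu nu rho sigma} k1^mu k2^nu q1^rho q2^sigma, equal (up to the sign convention for eps)
    to the determinant of the matrix whose rows are the four 4-momenta in (E, px, py, pz) order."""
    return np.linalg.det(np.array([k1, k2, q1, q2], dtype=float))

# Arbitrary example momenta (GeV), purely illustrative.
momenta = np.array([[45.0,  10.0,  20.0,  38.0],
                    [60.0, -15.0,   5.0, -57.0],
                    [80.0,  30.0, -40.0,  60.0],
                    [55.0, -25.0,  15.0, -44.0]])

val = epsilon_observable(*momenta)

# Under parity all spatial components flip sign, and the observable changes sign (P-odd).
flipped = momenta * np.array([1.0, -1.0, -1.0, -1.0])
print(val, epsilon_observable(*flipped))   # the second value is minus the first
```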
In cases where the initial state is guaranteed to be CP-even and T̂-even, or can be boosted into such a frame, we can distinguish two types of CP-odd observables:
1. CP-odd and T̂-odd: for the qq initial state this implies that the observable is also genuinely CP-odd and genuinely T̂-odd (see Eq. (6)). In a CP-symmetric theory its expectation value vanishes, implying that a non-zero expectation value requires CP violation regardless of the presence of re-scattering. The different cases are illustrated in the upper half of Table I.
2. CP-odd and T̂-even: for the qq initial state the observable is also genuinely CP-odd, so in a CP-symmetric theory its expectation value vanishes. In the lower half of Table I we show the different scenarios: if the theory is CP-violating, the corresponding expectation value does not vanish. If we ignore re-scattering, the theory also appears T̂-violating, but the expectation value of the T̂-even observable combined with an anti-symmetric amplitude will still vanish. However, in the presence of re-scattering or another complex phase, this unwanted condition from the T̂ symmetry disappears, and the expectation value of O matches the symmetry of the theory.
This implies that for a (statistically) CP-symmetric initial state, one can arrive at a meaningful statement about the CP symmetry of the underlying theory either through a CP-odd and T̂-odd observable, without any assumption about complex phases, or through a CP-odd and T̂-even observable in the presence of a complex phase.
C. CP violation in the Higgs-gauge sector
Typical tests of the C, P, or T symmetries of the Higgs sector do not probe the symmetry nature of the actual Higgs field, but rather the transformation properties of the action through its influence on S-matrix elements. We focus on the transformation properties of observables and explore how they reflect the symmetry structure of the Higgs Lagrangian. To this end, we evaluate the effect of CP-violating as opposed to CP-conserving Higgs couplings to weak bosons or heavy fermions. For an effective Higgs-gauge Lagrangian truncated at mass dimension six, our CP-even reference scenario consists of the renormalizable Standard Model Lagrangian combined with the five CP-even dimension-six operators in the HISZ basis [6,7,35], given in Eq. (10). At the same mass dimension, CP-odd couplings are described by the operators of Eq. (11). Built with the Levi-Civita tensor, these operators are C-conserving and P-violating.
While the effective Lagrangians in Eqs. (10) and (11) demand real coefficients f_WW and f̃_WW, it is also interesting to observe what happens when they are taken to be complex. Strictly speaking, this does not occur in an EFT obtained by integrating out massive degrees of freedom of a well-defined UV theory. However, absorptive complex phases can appear through light degrees of freedom. Such cases are not technically described by a local EFT and could lead to different momentum dependences, so we leave a more refined treatment of this case for future work. Instead, we consider coefficients such as f_WW and f̃_WW to be complex in order to illustrate how such cases complicate the determination of the CP nature of the Higgs interactions. Such complex phases already occur in the Standard Model, for instance from electroweak corrections or in Higgs production with a hard jet [36]. Such loop-induced contributions to the expectation value of CP-odd observables must be taken into account in precision measurements.
Combining the different pieces, we arrive at thirteen model parameters of interest, collected in Eq. (12), where the factor v² ensures that the model parameters are dimensionless. The first seven entries represent the usual Wilson coefficients in the EFT. The last six entries allow for absorptive contributions. We will use this full vector of model parameters to analyze the sensitivity of different processes to the CP properties of the Higgs-gauge sector.
D. Information geometry and Cramér-Rao bound
We briefly review the basics of information geometry applied to Higgs physics at the LHC, as introduced in Ref. [29]. The LHC measurements are represented by a set of events with kinematic observables x. Their distribution depends on a vector of model parameters, for example Higgs couplings, with unknown true values g. An analysis leads to an estimator ĝ, designed to follow a probability distribution around the true values. For an unbiased estimator the corresponding expectation values are equal to the true values, ḡ_i ≡ E[ĝ_i | g] = g_i. The typical error of the measurement is described by the covariance matrix which, as a generalization of the variance in one dimension, gives the precision of the measurement: the smaller C_ij, the better one can measure the combination of couplings g_i and g_j. The second object of interest is the Fisher information matrix, the leading non-trivial term in a Taylor expansion of the log-likelihood around its maximum, which measures the sensitivity of the likelihood of experimental outcomes x to the model parameters g. The Fisher information matrix can be computed from the probability distribution f(x|g) for a specific phase-space configuration given a model, as in Eq. (14). A large entry in the Fisher matrix implies that the measurement is particularly sensitive to a given combination of model parameters g_i and g_j. Conversely, an eigenvector of the Fisher matrix with zero eigenvalue indicates a blind direction, corresponding to a combination of measurements with no expected impact.
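In this notation, the standard textbook forms of the two objects (which should coincide with the covariance matrix and the Fisher information referred to above, although the original displayed equations are not reproduced here) read

C_{ij} = \mathrm{E}\left[(\hat g_i - \bar g_i)(\hat g_j - \bar g_j)\,\big|\,g\right],
\qquad
I_{ij}(g) = \mathrm{E}\left[\frac{\partial \log f(x|g)}{\partial g_i}\,\frac{\partial \log f(x|g)}{\partial g_j}\,\bigg|\,g\right],

where the expectation values are taken over the observables x (and the estimator ĝ) at fixed model parameters g.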
The Cramér-Rao bound [31] links these two tracers of the sensitivity of a measurement: the Fisher information tells us how much information a given experiment can optimally extract about a set of model parameters, while the covariance matrix gives the actual uncertainty of the measurements and is bounded from below by the inverse Fisher information. The Fisher information is invariant under a reparametrization of the observables x, and transforms covariantly under a reparametrization of the model parameters g. After removing blind directions, the Fisher information is a symmetric and positive definite rank-two tensor and defines a metric on the model space [30]. The corresponding model-space distance measure gives contours of constant distance as optimal error ellipsoids. Strictly speaking, it is defined in the tangent space at g_a, but can easily be extended to distances calculated along geodesics on the theory manifold [29]. Such local or global distances track how (un-)likely it is to measure ĝ = g_b given g = g_a. In the Gaussian limit the distance value is measured in standard deviations.
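Explicitly, assuming the usual matrix form of the bound and the local (tangent-space) version of the distance measure, these statements read

C \;\succeq\; I^{-1}(g) \quad\text{(i.e., } C - I^{-1} \text{ is positive semidefinite)},
\qquad
d^2(g_a, g_b) \;=\; \sum_{i,j} (g_b - g_a)_i \, I_{ij}(g_a)\,(g_b - g_a)_j ,

with the second expression presumably corresponding to the local limit of the distance measure used below, before the geodesic generalization mentioned in the text.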
The distributions f(x|g) entering Eq. (14) can be computed for any model from Monte-Carlo simulations combined with a detector simulation. The corresponding measurement consists of n observed events distributed over phase-space positions x. For a total cross section σ(g) and an integrated luminosity L, the full probability distribution in Eq. (14) factorizes [28,32] into a Poisson term for the total number of events and a product over per-event densities, where f⁽¹⁾(x|g) is the normalized probability distribution for a single event populating x and can be computed by standard event generators. The correspondingly factorized, total Fisher information can be calculated from Monte-Carlo simulations. It defines the best possible precision with which the parameters g can be measured based on the full observable space, independent of the (multivariate) analysis strategy. It also intrinsically includes all directions in theory space and all correlations between different parameters and does not require any discretization of the parameter space.
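A minimal numerical sketch of this construction is given below, assuming one has weighted Monte-Carlo events together with the derivatives of the event weights with respect to the model parameters; all function and variable names are illustrative and are not taken from the MadFisher/MadMax codes.

import numpy as np

def fisher_information(dsigma, dsigma_grad, lumi):
    """Estimate the total Fisher information from a weighted Monte-Carlo sample.

    dsigma      : numpy array, shape (n_events,)          per-event weights, proportional to the
                                                           differential cross section (e.g. in pb)
    dsigma_grad : numpy array, shape (n_events, n_params)  derivatives of those weights with respect
                                                           to the model parameters at the reference point
    lumi        : integrated luminosity (inverse of the cross-section unit)
    """
    sigma = dsigma.sum()                      # total cross section
    sigma_grad = dsigma_grad.sum(axis=0)      # gradient of the total cross section

    # Rate (Poisson) term: (L / sigma) * d_i sigma * d_j sigma
    info_rate = lumi / sigma * np.outer(sigma_grad, sigma_grad)

    # Shape term: L * sigma * E[ d_i log f1 * d_j log f1 ], with the per-event
    # score of the normalized single-event density f1(x|g)
    score = dsigma_grad / dsigma[:, None] - sigma_grad[None, :] / sigma
    weights = dsigma / sigma                  # normalized event weights
    info_shape = lumi * sigma * np.einsum("e,ei,ej->ij", weights, score, score)

    return info_rate + info_shape

The split into the two terms is the standard factorization of a Poisson-times-density likelihood; in practice the per-event derivatives can be obtained, for example, from finite differences of reweighted samples around the Standard Model point.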
Given the discussion in Sec. I B, an interesting question is how much of the full information is included in particular kinematic distributions. To answer it, we alternatively calculate the information in one-dimensional or two-dimensional histograms of kinematic observables. This gives the maximum precision with which parameters can be measured by analyzing a given set of observables. Comparing this reduced Fisher information to the total information based on the full phase space lets us quantitatively analyze whether the clearer theory interpretation of well-defined CP observables is worth the loss in sensitivity compared to a multivariate approach.
We evaluate the resulting Fisher information matrices in three ways. First, we calculate curves of constant distances given by Eq. (16) in the space of dimension-six Wilson coefficients, corresponding to optimal expected exclusion limits of an analysis. This allows us to study correlations between different Wilson coefficients, for example between CP -violating and CP -conserving operators.
Second, we can rotate the symmetric Fisher information matrix I_ij into its diagonal form, defining eigenvectors as superpositions of model parameters with their corresponding information eigenvalues. In the diagonal form, the Cramér-Rao bound conveniently defines the reach of a given analysis in each eigenvector direction. Numerically, this reach can be expressed in terms of the Wilson coefficients Λ/√f, as defined in Eq. (9) [29]. We discuss this analysis in terms of model-space eigenvectors and their reach in Sec. V.
Finally, if we are especially interested in a subset of parameters, we can compute the corresponding Fisher information either setting all operators to zero, or by profiling over all other operators, as discussed in detail in the appendix of Ref. [29]. In Sec. V we use this procedure to analyze the robustness of signatures from CP -violating operators to other scenarios of new physics.
II. HIGGS PRODUCTION IN WEAK BOSON FUSION
To construct appropriate kinematic observables for WBF Higgs production, we can in principle make use of three final-state momenta and two initial-state momenta, where one momentum is linearly dependent on the other four due to energy-momentum conservation. We assume the Higgs decay H → τ τ , which allows us to approximately reconstruct the Higgs momentum, though this specific choice of decay mode is expected to have little if any impact on the final results [15]. Throughout the discussion, we rely on the reconstruction of the Higgs momentum to reconstruct the missing information about the initial parton momenta.
A. CP observables
The partonic qq initial state of weak boson fusion is not a C eigenstate. The discussion in Sec. I B thus implies that one cannot construct a production-side genuine CP -sensitive observable in WBF Higgs production. On the other hand, Eq. (3) states that in the absence of spin information the transformation properties under P and T are the same, and thus one can construct exactly one genuine T -odd observable based on the Levi-Civita tensor in the center-of-mass frame, making use of the fact that the initial-state probability distribution is T -symmetric in proton-proton collisions. In the absence of large re-scattering effects, it probes CP violation in the Higgs-gauge sector. This observable can be naively defined as [15,16]

O = \epsilon_{\mu\nu\rho\sigma}\, k_1^\mu k_2^\nu q_1^\rho q_2^\sigma ,

where the two incoming parton momenta are k 1,2 and the two outgoing tagging jet momenta are q 1,2. However, this definition suffers from the feature that it changes sign under exchange of the two tagging jet momenta q 1 ↔ q 2. We remove this ambiguity through a modified definition with an explicit sign factor that orders the tagging jets [10]. Defining k + and k − to be the initial-state momenta in the lab frame pointing along the positive and negative beam axis (z-direction), q + and q − are designated 'forward' and 'backward' such that k + and q + point into the same hemisphere [16]. This implies that (q +) z > (q −) z in the center-of-mass frame. In the laboratory frame, k ± = (E ±, 0, 0, ±E ±). This assignment for q ± implies that the sign factor is always unity, which reduces O to a triple product: in terms of q x,± = q T,± cos φ ± and q y,± = q T,± sin φ ±, the observable is proportional to E + E − q T,+ q T,− sin ∆φ jj, where ∆φ jj = φ + − φ − is the signed azimuthal angle difference between the two tagging jets.
The main weakness of the observable O is that it depends on the (usually) poorly determined energies of the initial-state partons E ±. Rather than relying on the reconstruction of the Higgs momentum via its decay products to provide this information, we replace the full observable by the signed angle ∆φ jj itself, which retains the CP sensitivity through the well-defined T transformation of the full set of observable, matrix element, and initial state. The primary difference between the Lorentz-invariant observable O and ∆φ jj is that O is more sensitive to the magnitude of the tagging jet momenta. This can be advantageous in some instances, since the dimension-six operators in the EFT lead to modifications which grow with momentum transfer. However, the same effect can be achieved by supplementing ∆φ jj with a virtuality measure such as the transverse momentum of the harder jet.

Figure 2. Distribution of the signed angle ∆φ jj in WBF Higgs production after the cuts in Eqs. (27) and (29).
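For concreteness, the two quantities just discussed can be evaluated from reconstructed four-momenta as in the following plain-numpy sketch, which uses the hemisphere-based jet ordering described above; this is an illustration, not code from the original analysis.

import numpy as np

def signed_delta_phi_jj(jet1, jet2):
    """Signed azimuthal angle difference between the two tagging jets.
    jet1, jet2 are lab-frame four-momenta (E, px, py, pz); the jet with the larger
    pz is treated as the 'forward' jet q_+, mimicking the ordering described above."""
    q_plus, q_minus = (jet1, jet2) if jet1[3] > jet2[3] else (jet2, jet1)
    dphi = np.arctan2(q_plus[2], q_plus[1]) - np.arctan2(q_minus[2], q_minus[1])
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi   # map into [-pi, pi)

def epsilon_observable(k_plus, k_minus, q_plus, q_minus):
    """Contraction eps_{mu nu rho sigma} k+^mu k-^nu q+^rho q-^sigma, evaluated as the
    determinant of the stacked four-momenta (the overall sign depends on the chosen
    Levi-Civita convention)."""
    return np.linalg.det(np.stack([k_plus, k_minus, q_plus, q_minus]))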
We simulate the WBF process with the MadMax [38,39] setup of MadGraph [40]. We compute the ∆φ jj distribution (with the same event selection as described below) predicted by the Standard Model as well as for the Standard Model augmented by the representative operators O WW and O WW̃ defined in Eqs. (10) and (11). The resulting distributions are shown in Fig. 2. As expected, the Standard Model, even when supplemented by a CP -even operator such as O WW, results in a distribution that is symmetric under ∆φ jj → −∆φ jj. Similar results would be obtained for the other CP -even operators of Eq. (10) such as O W. In contrast, the CP -odd operator O WW̃ leads to a distribution with a clear preference for ∆φ jj < 0.
As is evident from Fig. 2, an imaginary Wilson coefficient f W W also leads to an asymmetry in the ∆φ jj distribution. Clearly, absorptive phases can mimic the signatures from CP -violating scenarios in this non-genuine CP observable, and thus potentially complicate the interpretation of such a signature.
B. LHC reach
Based on our simulations, we determine the expected LHC sensitivity to O WW̃ through WBF production followed by the H → τ τ decay. The dominant backgrounds are QCD and electroweak Zjj production followed by the decay Z → τ τ , and Higgs production in gluon fusion with H → τ τ . Our analysis is based on the tagging jet kinematics [41][42][43]. We simulate the WBF signal following Ref. [29] by generating the hard process, multiplying the rates with the branching ratio for the semi-leptonic di-tau mode, and assuming the di-tau system to be reconstructed with a realistic resolution for m τ τ . This means that, as the leading detector effect, the m τ τ distribution is smeared by a Gaussian [32,38,39] (with width 17 GeV) for Higgs production and a double Gaussian (where the dominant component has a width of 13 GeV) for Z production, as estimated from Fig. 1a of Ref. [44]. The double Gaussian ensures an accurate description of the high-mass tail of the Z peak around m τ τ = m H [45]. Event selection proceeds first with loose cuts (Eq. (27)) to retain as much phase-space information as possible. One can improve the discrimination of the WBF signal from the electroweak and QCD background processes based on their different radiation patterns [41]. These selections are simulated by applying central jet veto (CJV) survival probabilities [37]. Provided the hard phase space does not include any jets beyond the two tagging jets, the results are not expected, to first approximation, to be sensitive to the details of the central jet veto. For simplicity, we assume the reconstruction and identification of the leptonic τ to be fully efficient and assume a constant overall efficiency of 0.6 for the hadronic tau. These efficiencies do not affect the signal-to-background ratio. As a second way to suppress backgrounds, we apply a likelihood-based event selection [29],

\frac{\Delta\sigma^{\mathrm{SM}}_{\mathrm{WBF}}(x)}{\Delta\sigma_{\mathrm{backgrounds}}(x)} > 1 , \qquad (29)

retaining only phase-space points x with an expected signal-to-background ratio of at least unity.
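As an illustration of this detector parametrization, a minimal smearing function could look as follows; only the 17 GeV and 13 GeV widths are taken from the text, while the second component of the double Gaussian (tail width and fraction) is a purely illustrative placeholder.

import numpy as np

rng = np.random.default_rng(0)

def smear_mtautau(m_true, process):
    """Parametrized di-tau mass resolution: a Gaussian of width 17 GeV for Higgs events
    and a double Gaussian with a dominant 13 GeV core for Z events. The 30 GeV tail
    width and the 20% tail fraction are NOT quoted in the text; they are placeholders."""
    m_true = np.asarray(m_true, dtype=float)
    if process == "higgs":
        return m_true + rng.normal(0.0, 17.0, size=m_true.shape)
    core = rng.normal(0.0, 13.0, size=m_true.shape)
    tail = rng.normal(0.0, 30.0, size=m_true.shape)      # hypothetical tail width
    use_tail = rng.random(m_true.shape) < 0.2             # hypothetical tail fraction
    return m_true + np.where(use_tail, tail, core)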
For an integrated luminosity of L = 100 fb −1 , after all efficiencies and the event selection of Eqs. (27) and (29), we expect a WBF Higgs signal of 1349 events in the Standard Model, together with a total expected background of 388 events. It is worth noting that these numbers are optimistic and do not include the full suite of detector effects, fake backgrounds, etc.
We analyze how well WBF production can extract information about CP violation in the dimension-six EFT defined in Eqs. (10) and (11). The model parameters of interest are given in Eq. (12). For these directions in the EFT parameter space we use the MadFisher tools [29] to evaluate the Fisher information at the Standard Model after L = 100 fb −1 .

Figure 3. Optimal 1σ contours for WBF Higgs production with H → τ τ (solid black). Also shown are the results based on different subsets of the ∆φ jj distribution, including its absolute value (purple), its asymmetry (orange), its full distribution (red), its combination with the leading jet p T distribution (blue); as well as the observable O as defined in Eq. (23) (green). In grey we show bounds based on a simple rate measurement. In each panel, the parameters not shown are set to zero.
In the resulting Fisher information matrix, red entries correspond to the CP -odd coefficients, and we explicitly label the rows and columns with the corresponding Wilson coefficients. Fig. 3 shows the corresponding optimal error contours for representative pairs of Wilson coefficients, with those not shown on the axes set to zero, assuming that the data follow the Standard Model expectation. We assume the cuts of Eqs. (27) and (29) and an integrated luminosity of 100 fb −1 . Since these two-dimensional combinations of Wilson coefficients may not correspond to realistic UV scenarios, the projections should be interpreted with care.
In addition to the full phase-space information, which obviously results in the best reach, we show the expected constraints from observables based on subsets of the information contained in the ∆φ jj distribution. First, we find that all of the observables are sensitive to various CP -even operators. Second, the signed ∆φ jj distribution contains approximately as much information about O WW̃ as about O WW. In contrast, the distribution of its absolute value |∆φ jj| is only sensitive to the CP -even operators. In Fig. 3 we confirm that in the top-left panel the full ∆φ jj results are identical to those from the absolute value |∆φ jj|, while in the two bottom panels, which involve the imaginary parts, they are identical to those from the asymmetry of the signed distribution. By definition, this asymmetry is not sensitive to a CP -even modification with a real Wilson coefficient. This confirms that any asymmetry in ∆φ jj is a clear indicator of CP violation as long as we neglect absorptive phases. The observable O is insensitive to the CP -even operator O WW, because the information in the absolute value |∆φ jj| is washed out by the residual momentum dependence. The same momentum dependence, on the other hand, results in a slightly enhanced reach for CP -violating physics compared to ∆φ jj. This is consistent with the observation that supplementing ∆φ jj with the leading jet p T,j and analyzing their joint distribution also significantly improves the reach. This enhancement is not per se related to CP violation, but rather reflects the well-known fact that dimension-six operators lead to effects which are enhanced at higher momentum transfer. These two distributions, ∆φ jj and the leading jet p T, cover the majority of the information after the selection cuts in Eqs. (27) and (29), with only modest improvements obtained by including additional phase-space information.
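A plausible explicit form of this asymmetry, assuming the standard differential definition (the exact normalization used in the original analysis may differ), is

a(\Delta\phi_{jj}) \;=\; \frac{\mathrm{d}\sigma(\Delta\phi_{jj}) - \mathrm{d}\sigma(-\Delta\phi_{jj})}{\mathrm{d}\sigma(\Delta\phi_{jj}) + \mathrm{d}\sigma(-\Delta\phi_{jj})} ,

which by construction vanishes for any distribution that is symmetric under ∆φ jj → −∆φ jj.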
The case of absorptive physics, represented by a complex phase in the Wilson coefficient, is shown in the lower two panels of Fig. 3. While there is some sensitivity to the imaginary parts of f WW and f WW̃, the reach is typically much weaker for the imaginary parts than for the real parts. Crucially, the lower left panel of Fig. 3 demonstrates that once we allow for such an absorptive phase, an almost blind direction in parameter space arises: none of the observables can unequivocally prove CP violation.
To summarize, CP -violating scenarios can lead to large asymmetries in the signed ∆φ jj distribution in WBF, giving an impressive new physics reach of the LHC in these signatures. But this genuine T -odd observable can only be interpreted as a sign of CP violation under the additional assumption that re-scattering is negligible. As a side remark, an essentially equivalent measurement is possible for Higgs plus two jets production in gluon fusion, testing the CP nature of the effective Higgs interaction with gluons [15].
III. ZH PRODUCTION
At the amplitude level and assuming custodial symmetry, the ZH signature is sensitive to the same EFT vertices as WBF production [10]. However, its qq̄ initial state is CP -even in the center-of-mass frame and at leading order in QCD. Following Sec. I B, this implies that one can construct a genuine CP -odd observable [11]. It thus eschews the need for additional theory assumptions concerning absorptive phases in the Wilson coefficients.
We focus on the case with a leptonic Z decay and the Higgs decaying into bottom quarks, which allows us to reconstruct the final state with great precision, including the electric charges of the two leptons; this opens the door to C-sensitive observables. The specific Higgs decay H → bb has a large branching ratio, but will not play an important role in our analysis aside from providing information about the initial-state momenta.
A. CP observables
Once again, the lack of access to the spins of any of the participants implies that all realistic observables are constructed from 4-momenta. Following Sec. I C, they have the same transformation properties under P and T , so a CP -odd observable is either T -odd, P -odd, and C-even, or it is T -even, P -even, and C-odd. There are two types of CP observables, distinguished by their transformation under T :
1. CP -odd and T -odd: as discussed in Sec. I B, there is one P -odd, C-even observable based on the four independent 4-momenta, constructed from the Levi-Civita tensor as in the WBF case, where k 1,2 are the initial parton momenta and q ℓ+ and q ℓ− are the outgoing ℓ+ and ℓ− momenta. As before, the sign factor ensures that the observable is independent of the parton momentum assignment and is C-even. As in Eq. (21) and Eq. (23), O 1 can be related to the signed azimuthal angle difference ∆φ ℓℓ between the leptons, for which a sign convention imposes an ordering according to the lepton momenta in the center-of-mass frame.
2. CP -odd and T -even: the two C-odd observables are constructed from scalar products between a C-even and a C-odd 4-vector. The C-eigenstate 4-vectors are the sums and differences of the 4-momenta, k 1 ± k 2 for the initial state and q ℓ+ ± q ℓ− for the leptons, with the sums C-even and the differences C-odd. Because (k 1 + k 2)·(k 1 − k 2) = (q ℓ+ + q ℓ−)·(q ℓ+ − q ℓ−) = 0 for massless fermions, there are two C-odd scalar products, (q ℓ+ − q ℓ−)·(k 1 + k 2) and (k 1 − k 2)·(q ℓ+ + q ℓ−), and the remaining four scalar products are C-even. The first C-odd scalar product maps onto the energy difference between the leptons, ∆E ℓℓ, where in the center-of-mass frame k 1,2 = (E, 0, 0, ±E), q ℓ± = (E ±, q T,±, q z,±), and s = 4E². The observable (k 1 − k 2)·(q ℓ+ + q ℓ−) is a challenge at the LHC, because there is no practical way to identify the initial-state quarks and anti-quarks on an event-by-event basis. However, a C-odd combination accesses its information while only depending on the transverse momentum difference of the leptons, ∆p T,ℓℓ [11], and the center-of-mass energy, which can be determined once the Higgs momentum is reconstructed from its decay products.
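Assuming the C-even and C-odd combinations are the plain sums and differences quoted above, the first C-odd scalar product reduces, in the partonic center-of-mass frame, to

(k_1 + k_2)\cdot(q_{\ell^+} - q_{\ell^-}) \;=\; \sqrt{s}\,\left(E_{\ell^+} - E_{\ell^-}\right) \;=\; \sqrt{s}\;\Delta E_{\ell\ell} ,

since k_1 + k_2 = (\sqrt{s}, 0, 0, 0) in that frame, which makes the identification with the lepton energy difference explicit.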
Following the discussion in Sec. I A, only the T -odd observable O 1, or equivalently ∆φ ℓℓ, probes the CP nature of the Higgs-gauge sector. We illustrate this in the left panel of Fig. 4, which shows the ∆φ ℓℓ distribution after the cuts of Eqs. (38) and (40) for the Standard Model signal (solid black) and for the interference between different dimension-six amplitudes and the SM signal (colored): only the CP -odd contribution produces a distribution that is not symmetric under ∆φ ℓℓ → −∆φ ℓℓ. Unlike in WBF, this genuine signature of CP violation cannot be generated from an absorptive phase in CP -even physics. As discussed in Sec. I A, the T -even observables O 2 and O 3, or equivalently ∆E ℓℓ and ∆p T,ℓℓ, will have a non-zero expectation value only in the presence of CP violation and re-scattering. The right panel of Fig. 4 shows the distribution of ∆E ℓℓ, demonstrating that an asymmetry in this observable requires both CP violation and a source of a complex phase.
B. LHC reach
The signature consists of two b-tagged jets and two opposite-sign, same-flavor leptons. We simulate it as in Sec. II, with the b-jet momenta smeared appropriately for the reconstruction in the H → bb decay mode, using a Gaussian of width σ bb = 12.5 GeV [39]. The basic acceptance cuts of Eq. (38) include a narrow invariant mass window for the two leptons to effectively reject background processes without an on-shell Z → ℓℓ decay. After the leptonic invariant-mass cut, the main background is the irreducible bbZ production, where the two b-jets are produced as hadronic radiation. The acceptance cuts of Eq. (38) reduce its rate to 629 fb (before b-tagging), to be compared to the SM ZH signal rate of 14 fb.
We require two b-tags. This helps with fake backgrounds (as explained below), but differs with regard to some of the current experimental strategies grappling with limited statistics, a challenge that is much less of a concern with 100 fb −1 . We assume a double b-tagging rate of 0.7² for the signal and the primary background. Through mis-tagging, the fake QCD background gg → ccZ will also contribute. Its rate after the acceptance cuts is 423 fb, and as long as the rate to mis-tag a charm as a b remains below 20%, it is small enough to be ignored. There is also a contribution from mis-tagged light-flavor jets. Starting from a jjZ rate of 17.2 pb after acceptance cuts and applying a mis-tag probability below 1%, it turns out to be negligible.
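As a rough numerical illustration of these statements, combining the quoted rates with the quoted tagging efficiency and the upper bounds on the mis-tag probabilities gives

\sigma_{c\bar cZ}^{\text{2 tags}} \lesssim 423~\text{fb}\times(0.2)^2 \approx 17~\text{fb}, \qquad
\sigma_{jjZ}^{\text{2 tags}} \lesssim 17.2~\text{pb}\times(0.01)^2 \approx 1.7~\text{fb}, \qquad
\sigma_{ZH}^{\text{2 tags}} \approx 14~\text{fb}\times 0.7^2 \approx 6.9~\text{fb},

to be compared with the double-tagged bbZ background of roughly 629 fb × 0.7² ≈ 308 fb; these are order-of-magnitude estimates only.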
Pairs of top quarks lead to a final state bb ℓℓ νν, which is primarily distinguished from the signal by the presence of significant missing transverse energy /E T . Supplementing the acceptance cuts with the requirement [11,46]

/E T < 20 GeV    (39)

results in a rate of 13 fb before b-tagging. A multi-variate analysis of the multi-particle final state will further suppress it to a level where it does not affect the measurement.
After the acceptance cuts, requiring two b-tags, and the /E T cut, the only relevant background is therefore bbZ production. As in the WBF analysis, we improve the signal extraction through a likelihood-based event selection [29], given in Eq. (40). The lower cut-off choice relative to WBF is dictated by the larger background rates for the ZH case in the relevant phase-space regions.
With L = 100 fb −1 of data and after the event selection of Eqs. (38), (39), (40), and all efficiencies, we expect a ZH signal of 208 events in the Standard Model and a total expected background of 1035 events. Our idealized treatment of the detector response and omission of subleading backgrounds mean that these numbers are certainly optimistic.
In the basis of Eq. (12), the Fisher information matrix evaluated at the Standard Model with an integrated luminosity of L = 100 fb −1 is obtained in the same way as for WBF, with the CP -odd components again highlighted in red. This is translated into optimal error contours in Fig. 5, where each panel shows a pair of Wilson coefficients, with the remaining ones set to zero. In addition to the bounds based on the full kinematic information, we also show the optimal constraints based on individual observables. Once again, the best constraints on the CP -even operators come from a combination of angular observables like ∆φ ℓℓ and momentum-sensitive observables like m ZH. The ZH production process turns out to offer much tighter constraints on f W than on f WW.
The distribution of ∆φ ℓℓ is sensitive to both CP -even and CP -violating operators. Unsurprisingly, the information on CP -even operators is entirely contained in the absolute value |∆φ ℓℓ|. In contrast, the differential asymmetry carries all of the information concerning CP violation. Unlike ∆φ jj in WBF, it is now a genuine CP -odd observable, so this asymmetry is never generated from real or imaginary Wilson coefficients of CP -even operators, and the lower left panel of Fig. 5 does not suffer from blind directions. As expected from the discussion in Sec. I B, the distributions of ∆p T,ℓℓ and ∆E ℓℓ can only exhibit asymmetries if both CP violation and absorptive phases are present. As a result, these distributions are only sensitive to the imaginary part of the coefficient of O WW̃, as is visible in the bottom right panel of Fig. 5.
Altogether, we find that ZH production with its CP -even initial state provides us with genuine CP -odd observables that do not rely on any further theory assumptions. In particular, the signed azimuthal angle difference ∆φ ℓℓ provides a clean probe of the CP nature of the Higgs-gauge sector. Unfortunately, the small rate and large backgrounds limit the new physics reach of the LHC in this channel.
IV. H → 4 LEPTONS
The final, classic [13,14] process we consider is the Higgs decaying into four leptons, which offers full reconstruction of the final state, including all of the electric charges. On the other hand, in this process the Higgs is almost always on-shell, limiting the momentum flow through the HZZ vertex. In addition, the fact that one of the Z bosons is typically on-shell results in one less independent degree of freedom in the favored region of kinematics. Nevertheless, the four leptons in the final state can be reconstructed with exquisite precision, which might help compensate for the smaller lever arm in energy.
The four leptons are organized into two same-flavor, opposite-sign pairs, whose momenta are labeled q 11, q 12 for the first pair and q 21, q 22 for the second pair, where ℓ 1,2 = e, µ are restricted to electrons and muons, which can be reconstructed very precisely. Even combined with the relatively featureless gluon-fusion Higgs production mode, this decay mode has essentially no backgrounds and is largely statistics limited.
A. CP observables
Once again, the lack of spin information dictates that all observables are constructed from the 4-momenta and transform the same way under both P and T . Thus, as for ZH production, any CP -odd observable is either T -odd, C-even, and P -odd, or T -even, C-odd, and P -even. The initial state, at leading order, is CP -symmetric and T -symmetric in the Higgs rest or center-of-mass frames. We combine the lepton 4-momenta into the C-eigenstates q 1± = q 11 ± q 12 and q 2± = q 21 ± q 22.
Figure 5. Optimal 1σ contours for ZH production (solid black). The colored lines show the reach contained in the ∆φ ℓℓ distribution, including its absolute value (orange), asymmetry (green), full distribution (red), combination with the m ZH distribution (blue); based on the distribution of ∆E ℓℓ (purple); for the distribution of ∆p T,ℓℓ (turquoise); and based on a simple rate measurement (grey). In each panel, the parameters not shown are set to zero.

Similarly to the discussion in Secs. I B and II A, there are two classes of observables:
1. CP -odd and T -odd: there is exactly one observable in H → 4ℓ decays that is P -odd and C-even; it is constructed from the Levi-Civita contraction of the lepton momenta. Unlike in Eq. (20), there is no need for an explicit sign factor to compensate for unobservable permutations. It is convenient to work in the Higgs rest frame with both Z-boson 3-momenta along the z-axis, implying q i+ = (E i , 0, 0, q z,i) with E 1 + E 2 = m H and q z,1 + q z,2 = 0. In this frame, the observable reduces to a triple product of lepton 3-momenta. We can relate it to the Z-decay plane correlation angle Φ in Eq. (2) of Ref. [19] by introducing the decay-plane normals n i = q i1 × q i2 and making use of a standard vector identity. Note that in Ref. [19] the definition of the angle between the two Z-decay planes is slightly more complicated: they obtain the absolute value of Φ from cos Φ = (n 1 · n 2)/(|n 1||n 2|) and extract the sign of the angle from sign(Φ) = q 1+ · (n 1 × n 2)/|q 1+ · (n 1 × n 2)|. Since only the latter is sensitive to P violation, its information is equivalent to O a.
2. CP -odd and T -even: as before, we construct two scalar-product-based CP -odd observables by combining C-even and C-odd 4-vectors: (q 2+ · q 1−) and (q 1+ · q 2−). In the rest frame of q 1+ the first of them defines a decay angle, the same angle θ 1 as in Ref. [19]; similarly, the second defines the corresponding angle in the q 2+ rest frame. The relation between these decay angles and the tagging jet correlation in WBF is well known [22]. Because the effects of dimension-six operators are enhanced at higher momentum transfer, selections on the invariant masses q² 1+ and q² 2+ can enhance the sensitivity to CP -violating operators, even though these variables themselves are not sensitive to CP violation.
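A compact sketch of the decay-plane construction quoted above from Ref. [19] is given below; the lepton 3-momenta are assumed to have already been boosted to the Higgs rest frame, and the code is an illustration rather than the original analysis implementation.

import numpy as np

def decay_plane_angle(q11, q12, q21, q22):
    """Signed angle Phi between the two Z-decay planes in H -> 4 leptons.
    Inputs are lepton 3-momenta (length-3 numpy arrays) in the Higgs rest frame;
    q11/q12 and q21/q22 form the two same-flavor, opposite-sign pairs."""
    n1 = np.cross(q11, q12)                       # normal of the first decay plane
    n2 = np.cross(q21, q22)                       # normal of the second decay plane
    cos_phi = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    q1_plus = np.asarray(q11) + np.asarray(q12)   # 3-momentum of the first Z candidate
    sign = np.sign(np.dot(q1_plus, np.cross(n1, n2)))
    return sign * np.arccos(np.clip(cos_phi, -1.0, 1.0))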
After these cuts, there is a small background from continuum ZZ production, which we include with an appropriate smearing of the m 4ℓ invariant mass. As before, we assume an integrated luminosity of 100 fb −1 and neglect the detector efficiencies for the four leptons. For the Wilson coefficients given in Eq. (12), we evaluate the Fisher information matrix at the Standard Model, with the CP -odd components again marked in red. Optimal exclusion limits for representative pairs of Wilson coefficients are shown in Fig. 6. The sensitivity is about ten times worse for H → 4ℓ than for ZH or WBF production, indicating that the enhanced precision in measuring the lepton momenta does not overcome the limitations of the restricted momentum transfer and the largely constrained kinematics.
V. COMPARISON AND SUMMARY
The presence of new sources of CP violation in the Higgs sector is of fundamental importance, and according to some common lore may shed light on mysteries such as the baryon asymmetry of the Universe. It is crucial to establish the symmetry structure through well-defined observables as an ingredient to a global analysis, for example a dimension-six effective field theory containing both CP -conserving and CP -violating operators.
We have examined CP -sensitive observables in WBF Higgs production, ZH production, and Higgs decays into four leptons. While the underlying hard processes, and hence the sensitivity to the CP properties of the Higgs-gauge sector, are essentially identical for the three processes, the different initial and final state assignments define distinct signatures: 1. For WBF the initial state is not a CP eigenstate and one cannot measure the charges of initial-state or final-state quarks. In this situation we can use the naive time reversal to test the underlying CP properties, but only under the assumption of no re-scattering effects. On the other hand, the momentum flow through the Higgs vertex can be large.
2. In ZH production, the initial state is a CP eigenstate at leading order and one can easily identify the lepton charges in the final state. We can construct a genuine CP -odd observable, which directly reflects the CP symmetry of the underlying Lagrangian without any assumptions and without any additional complex phases. The momentum flow through the Higgs vertex can be enhanced by kinematic cuts. 3. Finally, for H → e + e − µ + µ − , one has full control over the kinematics of the process, allowing for a straightforward construction of CP -sensitive observables. However, the momentum flow through the relevant Higgs vertex is restricted by the Higgs mass and one of the Z-bosons is on-shell, limiting the kinematic coverage of the process.
In a next step, we have analyzed the new physics reach of the processes and observables in terms of thirteen Wilson coefficients. By calculating the Fisher information in the different signatures, we determined the optimal possible exclusion limits at the LHC, including through any multivariate analysis and taking into account all correlations between different operators.
The results of this comprehensive comparison are summarized in Fig. 7. We compare the optimal sensitivity of the three analyzed channels when either performing a fully multivariate analysis or a histogram-based analysis of either one or a combination of two kinematic distributions, assuming an integrated luminosity of 100 fb −1 . In the top panel we show the eigenvalues and eigenvectors of the Fisher information matrices. The colors denote the decomposition of the corresponding eigenvectors, defining the direction of a given eigenvector in model parameter space. The right axis translates the corresponding Fisher information into the new physics reach along this direction in model parameter space.
In general, the CP -even operator O φ,2, which rescales all Higgs couplings, dominates the most sensitive directions for all three processes [29], typically followed by a combination of O W and O WW. Of the two CP -odd operators O WW̃ and O BB̃, only the former can be meaningfully constrained in these processes. In WBF Higgs production, the sensitivity to this operator is best isolated in the asymmetry a ∆φ, which is not sensitive to any real CP -even Wilson coefficients. But the corresponding Fisher information still shows an admixture of the imaginary Wilson coefficient of the CP -even operator O WW, once again demonstrating that additional theory assumptions are necessary to measure CP violation in this channel. In contrast, the genuine CP -odd asymmetry a ∆φ in ZH production is solely sensitive to CP -violating operators, albeit at a reduced new physics reach. As expected, the asymmetry a ∆E is sensitive only to the combination of CP violation and absorptive physics, modelled by an imaginary coefficient for the operator O WW̃. In both WBF and ZH production, adding observables that measure the momentum transfer significantly increases the information on all operators, but at the cost of obfuscating the CP interpretation of the results. Finally, the Higgs decay is only really sensitive to the combination of mostly O φ,2 and O W that affects the total rate in this channel, and the physics reach in any other direction in model space is severely hampered by the limited momentum flow.
In the bottom panel we focus on the Fisher information on the CP -violating Wilson coefficient f WW̃. The grey bars show the sensitivity assuming that all other considered operators are zero, translated into the new physics reach on the right axis. A combination of the two leading WBF observables, p T,j1 and ∆φ jj [15], captures almost the entire phase-space information on f WW̃. When we profile over arbitrary values of all of the CP -even parameters, including the absorptive imaginary parts, this feature gets washed out, motivating a multi-variate WBF analysis. The theoretically better-controlled ZH production channel has a significantly smaller reach than the WBF signature, but its reach for f WW̃ is essentially unaffected by other operators, thanks to the genuine CP -odd observable. For the Higgs decay, not only is there very little information distributed over phase space, but the genuine CP -odd asymmetry a Φ also has an extremely limited reach.
Altogether, we find that a CP measurement in WBF production provides the best reach, but its interpretation is theoretically not very clean. A CP measurement in ZH production is less model-dependent and more stable in terms of correlations, because we can construct an appropriate genuine CP -odd observable. In both cases, variables constructed to be sensitive to CP can be combined with information pertaining to the momentum transfer, which enhances the effect of dimension-six operators compared to the Standard Model amplitude. Finally, the Higgs decay is easily reconstructed and analyzed, but has a very limited reach because of its limited momentum transfer. Between the three processes we studied, there is no unequivocally best signature to determine the CP properties of the Higgs-gauge sector, but there is clearly a worst.

APPENDIX

For each process we combine the external fermion momenta into the 4-vectors p e i and p o i , where the p e i correspond to the vector boson momenta. In the cases of ZH production and H → 4ℓ decay, the momenta p o i will be odd under C-conjugation, while the p e i are C-even. From these momenta, we construct eleven different Lorentz-invariant observables describing the kinematics, where we have used the fact that the fermions are approximately massless. The squared matrix element for the process ud → duH, averaged over initial spins and summed over final spins, takes the form

|\mathcal{M}|^2 = \frac{g^4 |V_{ud}|^4}{1024}\,\frac{N}{(p_1^2 - m_W^2)^2\,(p_2^2 - m_W^2)^2}

with

N = |a|^2 f_a + |b|^2 f_b + |\beta|^2 f_\beta + 2\,\mathrm{Re}(ab^*) f^R_{ab} + 2\,\mathrm{Im}(ab^*) f^I_{ab} + 2\,\mathrm{Re}(a\beta^*) f^R_{a\beta} + 2\,\mathrm{Im}(a\beta^*) f^I_{a\beta} + 2\,\mathrm{Re}(b\beta^*) f^R_{b\beta} + 2\,\mathrm{Im}(b\beta^*) f^I_{b\beta} .   (A6)

The individual contributions f_a through f^I_{bβ} are functions of these kinematic invariants. For a non-vanishing expectation value of the P -odd observable P 1, the squared matrix element must contain a term linear in P 1. Such terms are generated either in the presence of CP -violating new physics, when Re(aβ*) ≠ 0 or Re(bβ*) ≠ 0, or in the presence of the absorptive phase Im(ab*) ≠ 0.
In the processes involving a Z boson, the process may also have a well-defined transformation under charge conjugation. In this case, a non-vanishing expectation value for the C-odd observables C 1,2 requires the squared matrix element to contain a term linear in C 1,2. Such terms are generated only via re-scattering effects, when Im(ab*), Im(aβ*), or Im(bβ*) ≠ 0. The measurement of the C i sometimes requires the identification of fermion charges, and is therefore not possible for all processes. Note that f^I_ab is both C-odd and P -odd and therefore CP -even. Thus, it contributes to a non-vanishing expectation value of neither P 1 nor C i if the initial state is CP -symmetric, verifying the observation that absorptive phases can induce an asymmetry in P 1 in WBF Higgs production, but not in ZH production or H → 4ℓ decay.
Towards Advancing Translators’ Guidance for Organisations Tackling Innovation Challenges in Manufacturing within an Industry 5.0 Context
Abstract
Following the vision of the European Commission, organisations and workers establishing Industry 5.0 approaches aspire to more future-proof, resilient, sustainable, and human-centred European industries. In this contribution, we explore how technological innovations that contribute to a “win–win” interaction with involved stakeholders may be advanced in a human-centred and transparent proceeding supported by impartial expert translators who provide information or knowledge-based guidance for decision-makers, initiators and implementers in manufacturing innovation driven by sustainability. We elaborate a stepwise procedure for agreeing on milestones and conjointly treading the path towards solving innovation challenges during a translation process. We exemplify the technological aspects of such a process using an innovation case aiming at identifying parameters for enhancements in a vacuum-bagging process applied to the manufacturing of composite parts from prepregs based on condensation-curing matrix resins made from renewable resources. In detail, we present a straightforward design of an experimental approach varying the dwelling temperature and the temperature ramps during the curing of stacked prepregs. In this way, we demonstrate that for cured composites comprising a poly(furfuryl alcohol)-based matrix, the porosity and connected mechanical properties achieved with autoclave-free curing processes sensitively depend on these process parameters. Applying the resulting data-based model is shown to support decision-making for sustainable composite manufacture.
Introduction
In 2021, the European Commission launched the report "Industry 5.0 Towards a sustainable, human-centric and resilient European industry" [1], highlighting the need for broadening value propositions from a sole shareholder value towards value for all involved stakeholders, and they identified research and innovation as drivers for a transition in this direction. We highlight that all involved stakeholders are human beings, and in recent years, the sustainability incentive and climate change increased citizens' interest in products' origins and whether they are fair-trade and/or have a low environmental footprint. Materials Science and Manufacturing are put on the spot to consider just the right materials and deliver in good time just the right products to fulfil these citizens' demands. Ideally, we would like to envisage inexpensive, up-to-date products that provide both a full spectrum of efficient, personalised functionalities and a full life cycle from cradle to cradle that allows for the protection of natural resources. However, disruptive innovations are both fast and strategically planned so that achievements and major challenges in material development may remain unchanged for periods that exceed human lifespans. A good example is the ongoing research in life sciences to develop synthetic materials facilitating targeted drug delivery, which has not yet exceeded approaches tested in clinical practice for more than a century [2]. Materials Science tends to go beyond the boundaries of a human body and finds its match in global system boundaries. Sustainability has recently attracted political alertness when, in 2015, the UN agreement on Agenda 2030 set seventeen timebound Sustainable Development Goals (SDGs) [3]. For material manufacturing, SDG 9, i.e., Sustainability in Industry, Innovation, and Infrastructure, is of relevance, as is SDG 12, i.e., Responsible Consumption and Production.
In this paper, we will discuss what can be done to make existing products and processes more sustainable and contribute solutions that can be implemented comparably fast. We will also provide evidence that the processes of innovation and knowledge finding can be made sustainable at the same time. Hence, we envision networking teams of experts who, based on mutual agreements, guide the interaction and knowledge exchange between actors in customer-focused industrial research and development (R&D) and experts from information technology (IT) or academia, e.g., philosophy and social, engineering, or natural sciences. We call members of these teams Translators, and we are confident that tailored digital tools and platforms will strongly support them both in gathering and exchanging relevant knowledge. In the H2020 project OntoTrans [4], an Open Translation Environment (OTE) has been developed. The art nouveau of inventing (and/or improving existing materials or processes) is expected to comprise an interplay between users of materials, inventors, or manufacturers. Under the premise of Industry 5.0, society and inventors or developers are supposed to co-create new materials or new versions of mature materials products that meet particular requirements (Figure 1).
In customer-focused innovation, society has the incentive to become a set of well-informed and consenting users by demanding certain attributes of the new products they wish to use. Inventors then have the new challenge of convincing both society and manufacturers with readily accessible new or changed processes, business models, sustainability strategies, and resulting measures. They may be requested to achieve an efficient use of resources and contribute to developing products manufactured using technologies and processes that help reduce energy consumption and water usage and lower ecosystem disruption [5]. Transnational initiatives may be launched to promote advances like the so-called twin green and digital transitions, and ingesting them will require new technologies and well-thought-out approaches for achieving a match between investment and innovation [1]. Breque et al.
[1] expect technological innovations to support "win–win" interactions between stakeholders in industry and society. Think tanks comprising several industry sectors may elaborate guidance for decision-makers in investment and innovation who are expected to balance ecological and economic objectives when assessing multi-sectorial and interdisciplinary challenges [6]. Cooperation in networks is considered pathbreaking for exploiting comprehensive innovation potentials [7]. Still, at the end of the day, the entrepreneurial risk and responsibility, as well as the product stewardship [8], are with individual manufacturers, who profit from making well-informed decisions based on pondering between different graspable options and considering multiple decision criteria. Thus, to encourage change, organisations (be it with relevance to internal processes or processes involving external partners) use strategising as a premise for strategic decision-making [9,10]. During strategising, on the one hand, the problem to be solved is formulated, and on the other hand, a set of different conceivable solutions is developed. The first of these two processes may conceptualise the organisations' larger strategic context and (innovation) challenges, and the second one, which embraces alternative strategies, may rely on modelling, analysing, and presenting the optional innovation cases to decision-makers [9].
Following our concept, Translators' expertise and toolbox will allow them to support both problem formulation and problem-solving. Accordingly, we recently lined up human-centric approaches establishing roles for well-trained expert translators who perform translation processes in materials modelling [11], as advanced by the FORCE [12] and OntoTrans [4] projects, or in knowledge management [13,14], as developed in the OntoCommons [15] project. These translators shall guide their academic or industrial clients in generating insights by following a multi-step translation process leading from an expressed need to a not-yet-known solution, denoted as a triple (need, translation, solution). In this way, Translation is performed at the interface between a need (provided by an information source) and a solution (information target). We would like to indicate that translators would be involved in group work on a conceptual level, contributing to cooperatively specifying a problem statement and/or to elaborating competing or even conflictive options for decision-making, as sketched in Figure 2. In organisations, processes like the depicted ones are often managed as a system of processes [16]. To achieve consistent and predictable results more effectively and efficiently, the implementation of quality management systems following DIN EN ISO 9000 allows activities to be understood and managed as interrelated processes functioning as a coherent system [17]. Processes implemented in quality assurance during production or repair and the data acquired thereby contribute not only to the characterisation of material products but also to further stages of the product's life cycle [18,19]. In this way, cradle-to-gate approaches are complemented by gate-to-cradle advancements, which allow manufacturers to involve further stakeholders. Iterative and cyclic proceedings may be achieved for products that range from the development of materials and material products to covering their maintenance, repair, and provision for the subsequent life cycle initiated by product users.

Figure 2. An iterative group interaction cycle revealing, from an organisation's/enterprise's point of view, the tasks for which they may benefit on a strategic level from involving translators; sketched by the authors, inspired by the Group Task Circumplex developed by McGrath [20].
Exemplary approaches for tackling challenges when advancing sustainability in materials and process R&D were addressed in a recent study on circular economy and adhesive bonding technology [21] and a recent review by Brinken et al. [22] that focuses on identifying decarbonisation measures for supply chains. With respect to sustainability, these authors elaborate that using a triple bottom line (TBL) comprising ecological, social, and economic perspectives is generally accepted. They summarise archetypal sustainability strategies, i.e., sufficiency, consistency, and efficiency (as presented in Figure 1), and develop a new systematisation of sustainability measures that are intended to support decision-making with respect to the prioritisation and, thus, the selection of decarbonisation measures in supply chains for material products. In addition, these authors recommend considering sections of the supply chain or company divisions on an organisational level. They identify three concepts related to sustainability measures that may become addressed on different time scales, namely the concepts of "Process" and "Product", representing measures that can be implemented in the short term, and "System", requiring a long-term perspective. They suggest finally assessing the effects of such measures by using simulated models, in this case, for supply chains. For achieving sustainability targets, Mayer et al. [21] suggested linking the three concepts of "Process", "Material" (or material product), and "Life" related to the technosphere and biosphere, respectively, with procedures for knowledge generation and presented this approach in the context of consistency strategies related to establishing a circular economy in adhesive bonding technology. Presently, approaches for fostering circular economy in the frame of consistency strategies to advance sustainability [23] tend to be centred around forceful process optimisations [24] as potentially low-hanging fruits, e.g., decarbonising the production of energy [3] or establishing digital tools for advancing processes like production or transport within the supply chain.
We share the idea that optimisations that arise from human creativity and can be implemented in the short term are essential steps on a path towards achieving an increased overall sustainability. We will also demonstrate that to tackle innovation challenges in manufacturing, two concepts named "process" and "(materials) product" have been established and readily used to describe and solve problems, be it in the short term or following a long-term perspective. In addition, we refer to the concept of translation processes and highlight the impact that establishing a translator role in product innovation might have on an approach within an Industry 5.0 context. We present the concept of translator, who could be a mouthpiece for a manufacturer that opens communication channels and access to relevant Knowledge Providers by using a tonality that they can readily sense and understand. We exemplarily line up activities that were successfully accomplished in the short term, e.g., within a period of one month, by a team formed by experts of an organisation manufacturing prepregs and a translator performing a stepwise translation procedure. We focus on one exemplary and realistic innovation case in the field of composite manufacturing involving substantially available material. An efficiency-based approach is applied, and a data-based model is used for optimising parameters and controlling tool operations during the curing of component parts that are manufactured from pre-impregnated composite materials (prepregs) comprising carbon fibres and a partially cured polymer matrix. This case aims at minimising voids when hardening laid-up, stacked prepregs with a poly(furfuryl alcohol) (PFA)-based polycondensation-curing matrix bio-resin and, thus, builds on an up-to-date challenge in the sustainable manufacturing of fibre-reinforced polymers.
Due to lightweight advantages, which are strongly related to economic and environmental demands, the use of composite materials has been steadily increasing over the years. Composites combine the properties of at least two materials: the matrix (e.g., a cured polymer-based resin) and the reinforcing fibres (e.g., carbon, glass) [25]. Several manufacturing methods are available for composites, for instance, hand lay-up, vacuum infusion, resin transfer moulding, filament winding, compression moulding, and autoclave curing. They might differ in cost, scalability, laminate dimensional tolerance, and laminate mechanical properties [26]. Autoclave moulding is well-established as a manufacturing technology, which is practical for generating high-end geometrical and mechanical properties, especially when combined with pre-impregnated semi-cured composite layers (prepregs). Depending on the case, autoclave moulding is associated with high energy consumption and costly tooling [27]. For this reason, out-of-autoclave (OoA) technologies have been gaining prominence as an industrial solution for the cost-effective, scalable, energy-efficient, and sustainable production of composite structures. In this regard, one of the main challenges faced by OoA-based manufacturing is the relatively lower pressure involved during the curing of the composite laminates, which leads to a less effective mitigation of gas (e.g., from air, volatile organic compounds (VOCs), or water [28] entrapment) and, consequently, a higher risk of void (i.e., porosity) formation [29].
Such types of manufacturing challenges related to porosity mitigation can be even more pronounced in composite laminates relying on matrix systems based on polycondensation resins. Among polycondensation-curing resins, PFA has emerged as a more sustainable and bio-based alternative to phenolic resins [30][31][32]. While Ipakchi et al. [31] introduced a curing catalyst into the PFA formulation and applied a complex curing scenario for roughly two days with 24 h of isothermal curing at 80 °C and 0.13 MPa pressure, followed by 24 h of isothermal post-curing at 100 °C, Sangregorio et al. [32] used compression moulding at 150 °C for 30 or 90 min at a pressure of 0.2 MPa and did not observe voids in the composite macrostructure. Guigo et al. [30] performed a curing scenario by pressing the PFA resin at 1.2 MPa and applying a curing temperature as high as 160 °C for 2 h to obtain their final crosslinked PFA matrix. Therefore, one of the main technologically relevant research questions to be addressed in this context is how to manufacture high-quality composite laminates (in terms of dimensional tolerance and mechanical strength) based on polycondensation curing (being a potential trigger for porosity), which are cost-effective like OoA and do not require a high-pressure manufacturing device, like an autoclave.
Finally, we shortly discuss approaches that may allow manufacturers to assess similar innovation cases within a few days by re-using already established translation approaches and profiting from making their knowledge FAIR, i.e., findable, accessible, interoperable, and re-usable, exemplarily by using information technologies and ontologies that comprise pre-implemented conceptualisations, which express the meaning of concepts in a persistent form and support sustainable material modelling. We aim at identifying overarching concepts and procedural sequences that we recommend for application on an "abstract" level [22], thereby exceeding a specific product, specific branch or domain, or domain-specific conceptualisation approaches. Presently, OntoTrans [4] has developed an ecosystem [33] in which an innovation challenge can be translated into something machine-readable using a semantic framework that is made available within an innovation ecosystem applicable to innovation challenges.
Materials and Methods
In this section, we describe the materials and methods that we used and applied to address the innovation challenges we identified in composite manufacturing. We round off our methodical demonstration by detailing our approach for the translation process, applied both to identifying a challenge in manufacturing and to specifying an optimal solution that can be used as a base for decision-making.
The assessment of the materials and processes profiting from the methods described subsequently is part of an exemplified and instantiated translation case that aims at minimising voids when hardening laid-up, stacked prepregs with a poly(furfuryl alcohol) (PFA)-based polycondensation-curing matrix bio-resin and, thus, is built on an up-to-date challenge in the sustainable manufacturing of fibre-reinforced polymers.
Prepreg
Prepregs consist of a reinforced fabric pre-impregnated with a resin system [34]. The prepregs for the manufacturing of composite laminates were made with poly(furfuryl alcohol) (PFA) bio-based resin reinforced with glass fibre (areal weight of 300 g/m²). A thermoset resin made from PFA is an eco-friendly, fire-retardant alternative to phenolic resins. The composite lay-up was ((0/90)₆)ₛ.
Composite Lay-Up
The target geometry of the composite laminate, i.e., the final dimensions, was 317 × 200 × 3 mm³. Since each prepreg sheet had a nominal thickness of 0.25 mm, a total of twelve layers were stacked by hand over a flat aluminium mould (treated with a release layer) in order to obtain the 3.0 mm nominal thickness of the laminate. After lay-up, the stacked prepreg was covered with a breather material and a top plastic cover. The curing took place using either vacuum bagging or autoclave curing.
Autoclave Curing
The autoclave curing conditions were characterised by a pressure of 0.5 MPa prevailing throughout the cycle and, as shown in Table 1, a heat ramp of 2 K/min until reaching 90 °C, with 0.5 min of isothermal dwelling at 90 °C, followed by a heat ramp of 2 K/min until finally reaching a maximum temperature of 145 °C that was maintained for 75 min.
Vacuum Bagging
Atmospheric pressure of 0.1 MPa prevailed during vacuum bagging, as characterised by the factorial settings presented in Table 1. For the vacuum bagging, two options for application-relevant process variations were considered: the optional use of an isothermal dwelling at 90 °C and choosing the heat ramp to increase the temperature by either 1 K/min or 2 K/min. After reaching the final temperature of 145 °C, the temperature was maintained for 75 min. The measured temperature profiles according to the respective process and composite material batch are graphically visualised in Figure 3.
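To make the cure cycles easier to compare, the following minimal Python sketch reconstructs the nominal time-temperature profiles from the ramp, dwell, and hold values given above. It is an illustration only, not part of the study's tooling; the dwell levels of 0 and 60 min and the labels of the two unnamed VB runs are assumptions inferred from the process designations VB-1-60 and VB-2-0, and pressure is not modelled.

```python
# Minimal sketch (not from the paper): nominal cure-cycle temperature profiles.
# Assumed dwell levels (0 / 60 min) and run labels are inferred from the text.

def cure_profile(ramp_k_per_min, dwell_90_min, t_start=20.0,
                 t_dwell=90.0, t_final=145.0, hold_final_min=75.0):
    """Return (time [min], temperature [degC]) breakpoints of a cure cycle."""
    points = [(0.0, t_start)]
    t = (t_dwell - t_start) / ramp_k_per_min           # ramp to 90 degC
    points.append((t, t_dwell))
    t += dwell_90_min                                   # optional isothermal dwell
    points.append((t, t_dwell))
    t += (t_final - t_dwell) / ramp_k_per_min           # ramp to 145 degC
    points.append((t, t_final))
    t += hold_final_min                                 # final hold at 145 degC
    points.append((t, t_final))
    return points

# The autoclave reference and the four VB variants of the 2^2 design
# (temperature programme only; the applied pressure differs between AC and VB).
profiles = {
    "AC-2-0":  cure_profile(2, 0),
    "VB-1-0":  cure_profile(1, 0),
    "VB-1-60": cure_profile(1, 60),
    "VB-2-0":  cure_profile(2, 0),
    "VB-2-60": cure_profile(2, 60),
}
for name, pts in profiles.items():
    print(name, [(round(t, 1), temp) for t, temp in pts])
```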
Thermo-Gravimetric Analysis (TGA)
TGA measurements were carried out in a thermo-gravimetric analysis device (Q5000 IR TGA device, TA Instruments Inc., New Castle, DE, USA) using either of the heat ramps and dwelling times specified in Table 1 for the VB processes VB-1-60 and VB-2-0.
Preparations of Cross-Sections
After either vacuum bagging or autoclave curing, the cured composite laminate had a nominal geometry with lateral and vertical dimensions of 317 × 200 × 3 mm³. For the mechanical characterisation, ten ILSS-testing samples (20 × 10 × 3 mm³) were prepared from the composite laminate using a wet abrasive cutting machine.
Scanning Electron Microscopy (SEM)
After depositing a thin electrically conductive carbon layer, selected cross-sections through a composite laminate sheet that previously had been characterised by light microscopy were investigated in more detail with Scanning Electron Microscopy (SEM) using a field emission device (FESEM), type FEI Helios 600 (ThermoFisher Scientific, Eindhoven, The Netherlands), allowing for the tilting of the analyte specimen.The images of the sample surface were obtained at acceleration voltages of 5 or 10 kV and by using detectors for backscattered or secondary electrons.
Density Measurement
The density of the composite laminate was obtained using ILSS-testing samples. Per composite laminate, a total of five samples were measured. The dimensions (length, width, thickness) were obtained based on the light microscopic characterisation. The mass of the samples was determined with a laboratory scale with a precision of 0.1 µg. The density in g/mm³ of each sample was then determined by dividing the mass by the volume calculated from the geometrical dimensions.
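As a minimal sketch of the calculation described above, the following snippet derives the density of one sample from its nominal geometry and mass; the dimension and mass values are hypothetical placeholders, not measured data from this study.

```python
# Minimal sketch (illustrative values only): density of an ILSS sample
# from its measured dimensions and mass, as described above.

def sample_density(length_mm, width_mm, thickness_mm, mass_g):
    """Density in g/mm^3 from nominal dimensions and mass."""
    volume_mm3 = length_mm * width_mm * thickness_mm
    return mass_g / volume_mm3

# Hypothetical sample close to the nominal 20 x 10 x 3 mm^3 geometry:
rho = sample_density(20.1, 10.0, 3.05, 1.13)
print(f"density = {rho:.5f} g/mm^3 = {rho * 1000:.3f} g/cm^3")
```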
Mechanical Testing
The mechanical characterisation of composite laminates was carried out using three-point bending flexural testing (3PB), which provided the interlaminar shear strength (ILSS, in MPa). The 3PB set-up was utilised according to ISO 14130 [35] using a Zwick Z100 (Ulm, Germany) machine with a load cell of 10 kN. A testing speed of 1 mm/min was considered, and a supporting span of 14 mm, supporting pins with a radius of 1 mm, and a loading pin with a radius of 5 mm were used.
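For a short-beam three-point bending specimen, the apparent interlaminar shear strength is commonly evaluated as τ = 3F/(4bh), with F the failure load, b the width, and h the thickness. The sketch below applies this expression to a hypothetical load value, purely to illustrate the calculation, not to report a measured result.

```python
# Minimal sketch: apparent interlaminar shear strength from a short-beam
# three-point bending test, using the common expression tau = 3*F / (4*b*h).
# The failure load below is a hypothetical placeholder.

def apparent_ilss(max_force_n, width_mm, thickness_mm):
    """Apparent ILSS in MPa (N/mm^2) from the failure load and cross-section."""
    return 3.0 * max_force_n / (4.0 * width_mm * thickness_mm)

# Hypothetical failure load of 1.4 kN on a 10 x 3 mm^2 cross-section:
print(f"ILSS = {apparent_ilss(1400.0, 10.0, 3.0):.1f} MPa")
```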
Design of Experiment (DoE)
We employed the design of experiments (DoE) to establish a data-based model that relates material properties as responses to the process factors applied by an operator. DoE is a current systematic approach for solving engineering problems, and it especially provides guidance for data collection with a minimum expenditure of engineering time [36]. DoE may be applied for functionally modelling a process based on a mathematical function with a significant predictive power, which is achieved since the approach provides estimates for the coefficients in that function with maximal accuracy [36]. MODDE® DoE software (version 12.1, Sartorius Stedim Data Analytics AB, Umesoft GmbH, Eschborn, Germany) was used to perform data analytics and plot the findings obtained for one response, namely the porosity in cross-sections through composite laminates. The response values were measured fivefold and then averaged. As shown in Table 1, a basic full factorial 2² DoE design was used for the VB processes, and the two quantitative factors, namely the heat ramp of the stacked prepreg and the dwelling time at 90 °C, were varied on two levels each.
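As an illustration of how such a 2² factorial design can be evaluated outside a dedicated DoE package, the following sketch fits the standard two-factor model with interaction by least squares. The factor coding follows the two levels described above, while the porosity responses are invented placeholders rather than the study's data.

```python
# Minimal sketch: evaluating a 2^2 full factorial design (two factors, two
# levels) with numpy least squares. Response values are hypothetical.
import numpy as np

# Coded factor levels (-1/+1): x1 = heat ramp (1 or 2 K/min),
# x2 = dwelling time at 90 degC (assumed levels 0 or 60 min).
runs = np.array([
    # x1, x2
    [-1, -1],   # VB-1-0
    [+1, -1],   # VB-2-0
    [-1, +1],   # VB-1-60
    [+1, +1],   # VB-2-60
])
porosity = np.array([3.1, 4.6, 1.8, 2.9])   # invented vol-% responses

# Model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([np.ones(4), runs[:, 0], runs[:, 1], runs[:, 0] * runs[:, 1]])
coeffs, *_ = np.linalg.lstsq(X, porosity, rcond=None)
b0, b1, b2, b12 = coeffs
print(f"intercept={b0:.2f}, ramp effect={b1:.2f}, "
      f"dwell effect={b2:.2f}, interaction={b12:.2f}")
```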
Translation
Subsequently, we will describe how translation guided us in mapping out the usage of the abovementioned materials and methods to address the thus instantiated innovation case related to the identified innovation challenge in manufacturing. For this purpose, we provide a survey of several processes that are commonly referred to by different concepts named "translation". In this way, we will reveal three key aspects. Firstly, these concepts share the triplicate structure that we highlighted in the introduction section by introducing a triple (need, translation, solution) when outlining how translators may guide organisations in tackling innovation challenges in manufacturing. Secondly, in a more generally expressed triple (source, translation, target), the source and the target are the same type of entity, e.g., a vector, relation, meaning, or content. Thirdly, the effort of capturing the scope of a source entity is not necessarily smaller than the effort of performing the translation process towards a target entity. In a human-centric approach following Industry 5.0 that builds upon achievements in digitalisation and Industry 4.0, both the conceptualisation used for the translation of expressions in natural language and the conceptualisation used in data management may need to be considered when assessing the triple (need, translation, solution).
In mathematics, namely in geometry and linear algebra, a translation is understood to be a geometric transformation characterised by the displacement of a vector by the addition of a nonzero vector [37], e.g., when displacing a source vector towards a target vector by using a translation vector. Vectors are well-defined, and both the source and the target are of the type "vector". When applied in the context of data or knowledge management or modelling, the concept of translation may be found in adaptations when it comes to interpreting relations, e.g., the one relation with name label that exists between the entities head and tail, written in the form (head, label, tail) and often denoted (h, l, t) [38]. Exemplarily, in information retrieval, (h, l, t) triplets may be used as basic units of a knowledge graph and represent the two nodes h and t and a relation forming an edge from head to tail [39]. Challenges often faced in this context are that the data underlying the information are multi-relational [38] and that h and t in such triplets may themselves be representations of the underlying entities [40]. For instance, whenever user-dependent representations are used for entities, then individual users might tend to simplify and apply a few selected representational aspects that appear most relevant or that are best known to them. An effect, thus, may be that low-dimensional vector representations of the entities are chosen. Still, the relationships between these representations may be numerous and complex, e.g., reflexive, one-to-many, many-to-one, and many-to-many relationships [41]. As a consequence, we may understand a translation operation as a process that aims at interpreting and modelling (the most) relevant relations between representations of entities.
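The "translation" reading of (h, l, t) triples can be made concrete with a small numerical sketch: in translation-based embedding approaches (TransE-style models), the relation vector l is expected to translate the head embedding h approximately onto the tail embedding t, so the residual norm can serve as a plausibility score. The vectors below are arbitrary toy values, not data from the cited works.

```python
# Minimal sketch: "translation" in the geometric and knowledge-graph senses.
# A relation vector l translates a head embedding h towards a tail embedding t;
# the residual ||h + l - t|| scores the plausibility of the triple (h, l, t).
import numpy as np

h = np.array([0.2, 0.5, -0.1])   # head entity embedding (toy values)
l = np.array([0.3, -0.2, 0.4])   # relation ("label") embedding
t = np.array([0.5, 0.3, 0.3])    # tail entity embedding

score = np.linalg.norm(h + l - t)   # small score => triple considered plausible
print(f"||h + l - t|| = {score:.3f}")
```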
In linguistics or in the context of natural languages, translation refers to the restatement of the forms of one language in another and may be considered the chief means of exchanging information between different language communities [42]. Following ISO 17100 [43], translation refers to a set of processes for rendering source language content into target language content in written form. With relevance for translation service providers (TSPs), this normative document highlights the specifications of requirements for all aspects that are relevant to the quality and provision of translation services. Exemplarily, based on a recorded client-TSP agreement between the client and the TSP, the translator is expected to perform translation strictly in accordance with the purpose of the translation project. Clearly, the translator is expected to respect both the linguistic conventions of the target language and relevant project specifications. Exemplarily, the translation service will need to (i) comply with specific domain and client terminology or other material provided as reference, (ii) safeguard the semantic accuracy of content (in the target language), (iii) respect orthographical conventions, like syntax, (iv) provide lexical cohesion, (v) comply with relevant style guides, (vi) respect relevant standards, (vii) ensure formatting, and (viii) be aware of the target audience and the purpose of the translated content. We highlight that the processes, be they related to pre-production, production, or post-production, are managed in the frame of a project and comprise several steps and tasks. We consider all these aspects relevant for the interfacing between translators and clients who are organisations tackling innovation challenges in manufacturing. Therefore, we are aware that the translators involved will need to manage multi-stage processes and capture syntactical and semantical aspects, as well as the purpose not only of the translation project but also of the target language content. With respect to translators capturing the content of the statements and information provided by their clients in a client-specific source language, we consider a schematic representation of the set of tasks faced by a translator helpful. Inspired by a representation of one exemplary model for a translation process among the ones reported by Albir and Alvez [44], we depict in Figure 4 a protocol-based sequence of activities followed by a translator in manufacturing innovation. We refer to some key concepts introduced by Bell [45], who pointed out that three conceptualisations may be considered: translating may refer to a process and an activity rather than a tangible object, whereas translation may be given either of two further meanings, namely (i) the product of the process of translating, i.e., the translated text, or (ii) the abstract concept encompassing both the process of translating and its product. We did not focus, as Shannon did in his mathematical theory of communication and his diagram of a general communication system, on the inherent engineering problem related to a transmitter that operates on the message in a certain way to produce a signal that is suitable for transmission over a channel, e.g., considering potential noise in the channel. In contrast, we focused on messages that have meaning and semantic aspects of communication, which, following Shannon, "refer to or are correlated according to some system with certain physical or conceptual entities" [46]. Still, in addition to their frequent semantic tasks, translators are expected to be aware of transmission
characteristics in communication, i.e., the message that they receive is one that they select from a set of possibly sent messages. Therefore, we consider human-to-human interaction essential in conjointly documenting and minuting messages. Based on an understanding of monolingual communication between one sender and one receiver, as shown in Figure 4a, we were inspired by Bell [45], who presented a nine-step sequence of tasks that are performed during translation when a translator is involved in the communication between sender and receiver. As detailed in Figure 4b, the translator (1) receives signal α, sent by a sender and including the source message, (2) recognises code α, (3) decodes the encoded message, (4) retrieves the message comprising content, (5) protocols the message, (6) chooses code β, (7) encodes the message using code β, (8) chooses a channel, and (9) transmits signal β including the target message. Moreover, we sketched a simplified model of the translation process (Figure 4c), inspired by Bell [45], who highlighted that the translator would first analyse the source language text by considering syntactic, semantic, and pragmatic aspects. In detail, the semantic analysis follows the syntactic analysis in a parsing, i.e., stepwise and partitioned, approach. Then, the translator will perform a semantic representation and, finally, synthesise the target language text for relevant parts of a protocol by considering pragmatic, semantic, and syntactic aspects. We infer that a translator translating for clients who tackle innovation challenges in manufacturing is required to master both the source language text describing the innovation challenge and aspects of data or knowledge management related to information that is relevant to an innovation case. We highlight that for translators translating for clients interested in gathering information by using materials modelling, a six-step procedure was set up according to the EMMC translators guide by Klein et al. [11]. This approach is sketched in Figure 5 and starts with an identified and formulated problem as one type of input into the translation process, expressed by a business case framing an industrial case. Moreover, data already available within the translator's Client will provide further input: they shall be analysed by the translator and may be curated.
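Purely as a schematic reading aid, the nine-step sequence can also be written down as a chain of placeholder operations. The sketch below performs no real language processing; all names and strings in it are hypothetical and serve only to mirror the numbered steps.

```python
# Purely schematic sketch of the nine-step sequence after Bell, written as a
# chain of placeholder operations; no real language processing is performed.

def translate_signal(signal_alpha: str) -> str:
    # (1) receive signal alpha including the source message (function argument)
    code_alpha = "client-specific source language"        # (2) recognise code alpha
    message = signal_alpha.strip()                        # (3)+(4) decode and retrieve content
    protocol = f"[minuted, from {code_alpha}] {message}"  # (5) protocol the message
    code_beta = "target language agreed with receiver"    # (6) choose code beta
    signal_beta = f"({code_beta}) {protocol}"             # (7) encode message using code beta
    channel = "written report"                            # (8) choose a channel
    return f"{channel}: {signal_beta}"                    # (9) transmit signal beta

print(translate_signal("  We need lower porosity in vacuum-bagged parts.  "))
```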
Results
Innovation in manufacturing industries is driven by enterprises mastering their processes and current data related to their resources and materials products.As translation as a process is not yet formally established in manufacturing industries, we first tailor the line of action for translators and then shed light on an exemplary stepwise translation to achieve a process optimisation driven by updated customer requirements in the manufacturing of fibre-reinforced composites.
Developing an Iterative Cycle for Translation in Materials Innovation
In an organisation involving translators in materials innovation processes, a relevant aspect often is that the translation process can be managed in a similar way to how they manage their tackling of innovation challenges. Especially in enterprises following a process approach, their business is managed as a system of processes [16], and their activities and processes aim at delivering value through fulfilling the needs of relevant interested parties, e.g., their customers, exemplarily by focusing on quality and following DIN EN ISO 9000 and 9001 [16,17]. In this way, organisations may increase their performance and build a sustainable competitive advantage [47]. Process management assesses both the interactions between processes and their required inputs and outputs. Managing processes in a systematic way may be achieved by applying a cyclic approach involving tasks that permit the performance of successive Plan-Do-Check-Act (PDCA) procedures. From this point of view, the translation process is expected by an enterprise to interact with other processes and to aim at continually improving the processes' performance. Therefore, for translation guiding innovation in materials development, we suggest a cyclic approach that allows the translator's Client to re-use a once-worked-out translation process. The outcomes of a previous (and solved) translation problem could facilitate an enterprise to modify its problem formulation by giving it a re-shaped contour, thus calling for another round of problem-solving, as displayed in Figure 2. In Figure 6, we show an iterative innovation translation cycle comprising the context, meaning, and data concepts relevant to knowledge management and generation, as presented by Goldbeck et al. [13]. The translator performs a mapping from the context-enriched data #1-1 to dataset #1-2 and, respectively, from #1-3 to dataset #1-4, and the knowledge provider contributes custom-fit insights to close the knowledge gap when enhancing dataset #1-2 and preparing dataset #1-3. By cooperatively providing feedback to each other, the client, translator, and knowledge provider contribute to a translation cycle that dynamically translates a pointer towards a need into a pointer towards a solution and, thus, contributes to closing a knowledge gap.
The iterative innovation translation cycle shown in Figure 6 involves three roles-the client, the translator, and the knowledge provider-that may be taken on by teams.We suppose that within real-world teams, there may be a notable heterogeneity of contextual, experiential, and motivational factors, as, e.g., highlighted by Grudin and Poltrock [48].Therefore, both translators in materials modelling [11] and knowledge management translators for Innovation [13] are expected to be skilled not only in technological procedures but also in business, communication, and networking practices.In this way, we effectively express a process of ongoing adjustment by sharing relevant intersubjective knowledge that does not only involve the client and a case-specific translator but also the translator and a case-specific knowledge provider.While involving case-specifically experienced human actors, this characteristic feature of all the involved communication processes holds true in the frame of (i) a first innovation case #1 and (ii) subsequent innovation cases contributing to solving a superordinate innovation challenge.
In the course of their dialogue, the translator and the client mutually take care that they both understand the conception and agree on its conclusion. Based on their shared relevant knowledge, the translator and client will need to identify the client's knowledge gaps that need to be filled in order to solve the challenge. In the course of this challenge-oriented communication, the translator proposes an approach for acquiring the missing knowledge, be it by finding available knowledge ("knowledge harvesting") or by generating new knowledge based on gathering information-based evidence, e.g., using materials modelling [11] or characterisation. With the knowledge or data gap having been filled, the translator proposes a possible solution to the client that addresses the client's need and can be realised by the client. Especially after having understood and documented the knowledge gap, the translator and the client may agree on a strategy that might be based on a step-by-step translation approach for obtaining case-by-case solutions, together with knowledge providers offering the respective eligible expertise. Often, jointly identifying a first innovation case that can be solved swiftly with a straightforward expense of resources may be an efficient and effective first step for gaining confidence and harvesting low-hanging fruits. We infer that a translator will profit from digital tools for estimating the time and effort and for managing the integration of potentially several knowledge providers.
Thus, when assessing the innovation challenge, the client and a challenge-specifically composed expert translator team may aim at achieving an understanding that they may continually concert by dissipating the divergences in their perception of the (relevant part of the) world [49] in the given challenge-specific context. We infer that a shared perspective is a prerequisite for successful cooperation. In the next step, they may identify a first innovation case, the solution of which then contributes a set of data to solve the more complex challenge. Finally, after assessing a set of subsequent innovation cases, the client and a set of challenge-specifically composed translator teams may finally close a complex knowledge gap. Exemplarily, the client and the translator may manage the activities of interacting translation processes involving several knowledge providers by following an overarching approach: for solving challenges in engineering, some signification is to be attributed to the ideas in engineering in a way that allows these ideas to be organised such that they can be extended by new facts. This may be achieved by activities for pragmatically reconstructing or explaining the meanings of the concepts under discussion [50]. Effectively, the overarching proposition of a solution answering a client's need is based on a joint and iterative examination of potential future consequences that can result when the outcomes of collaborative and convergent thinking are implemented in actions [50]. From our point of view, comprehensive joining and integrative approaches are outstandingly important on several levels when human needs are to be understood and actions for solving them are planned in a global context. This holds especially true when it comes to understanding sustainability as a three-dimensional concept, combining economic, ecological, and social (or societal) objectives. With respect to benefitting society, Odum pointed out that particular applications of ecology "must combine holism with reductionism" [51]. With high significance for human-centric (scientific) approaches, he highlighted that a human being "is not only a hierarchal system composed of organs, cells, enzyme systems, and genes as subsystems, but is also a component of supra-individual hierarchal systems such as populations, cultural systems, and ecosystems". Understandably, talking about the world, Gavalas suggested a feedback system that he called the "Reductive-Holistic Cycle" [52], and he involved categories that help to classify information, objects, and properties based on (mathematical) category theory to assess the dynamic conceptual schemata that organise knowledge. From these examples, we infer that it will be part of the translators' highly dynamic job to balance, on the one hand, reductionism and the analytical method for studying smaller and smaller components in detail and, on the other hand, holism and the synthetic or systemic method for assessing functional wholes. Moreover, translators are expected to transparently perform concept interpretations and data provisions to bridge between a sender's and a receiver's reference domains and to use semantics to make concepts achievable and understandable in the counterpart's world. We are aware that individual senders or receivers may use concepts without a clear definition and, thus, may leave them open to uncertain or changing interpretations. Exemplarily, Jörgenfelt and Partington suggested an updated definition of holism in line with current biological system theories and neurological
research [53], based on analysing the original description of holism [54] as a dynamic concept included in a creative, evolutionary process relating the fundamental concepts of matter, life, and mind. In this respect, we would like to highlight that persistently linking their conceptualisation of an innovation case to an ontology supports both translators and their clients in preserving their agreed approach. Exemplarily, using the European Multiperspective Material Ontology (EMMO) [55] facilitates the integration of representations based on holistic and reductionistic perspectives.
As successfully implementing a comprehensive set of knowledge providers is crucial for facilitating the ongoing exchange of information between the translator and the client, we will further detail our perception of this role.In Figure 6, we presented a sketch elaborated based on Bell's model of translation that represents process steps when expressing in a target language what has been expressed in a source language, preserving semantic and stylistic equivalences [56].Following this sketch, the translator shall be an expert in capturing and re-formulating the messages contained in the transmitted, i.e., received and sent, signals.The messages that the translator receives from the client may be text written in natural language or be expressed in a graphical sketch, a table-type list, a photo, or a video.Similarly, knowledge providers are often not represented by persons speaking in text messages; rather, knowledge may also be made available from databases, ISO standards, patent specifications, or wikis.Such knowledge providers may be accessed in the early stages of the communication to help the translator understand what the client intends to express.Effectively, during knowledge sharing, the client expects the translator to capture, document, and use information from different sign systems, which results in an even more complex translation challenge than the one faced when translating a source text.In particular, clients will often be required to consider FAIR data principles.We infer that translators may profit from mastering exploratory search systems (ESS) [33].Moreover, in the frame of ongoing communication with the client, the translator may and shall actively ask the client and somewhat guide the translation process.From this point of view, the client team is an essential knowledge provider, and in this regard, the human-centric interaction is more evident than during a process step characterised by the translator comprehending information from a patent specification, i.e., a legal document [57] containing text and figures (indirectly and impersonally) provided by inventors.
These considerations also shed light on the client's role. If we assume that the client of the translator is a team of project managers within the client's organisation, then the knowledge provider could be situated inside or outside the same organisation. Correspondingly, the translator will provide some internal or external translation services, as recently discussed by Goldbeck et al. for translation in knowledge management [13]. We infer that data management skills will be very helpful for a translator and that establishing FAIR data principles within a client's organisation will greatly support the translator in finding, gathering, and (digitally) documenting relevant internal knowledge in order to close knowledge gaps. Notably, to establish the scaffold of their communication and understanding, the translator and the client may agree to comprise concepts and benchmarks that allow them to follow a sustainability-focused and human-centric approach that may guide their reasoning about long-term, i.e., future, consequences of how their decisions, actions, and products can impact the world in a global context exceeding the interaction with other humans, e.g., striving for harmony and concord [58,59]. Effectively, the translator may use Peircean semiosis as a key analytic method to describe engineering systems and pragmatically consider their interactions with their environment. Such an approach has been reported for assessing biological systems [60]. For every wording (in Greek λόγος [61]) used by the client, the translator may need to find an expression that catches its context-relevant meaning in a commonly understandable conception of its signification [62].
With reference to the sketch shown in Figure 7 based on a study by Calabrese and Costa [63], such a joint approach shall be connected to facts and, thus, become a powerful procedure that supports strategising and mental processes as the cognitive foundation of business and materials innovation.A translator and their client may profit from performing translation in a way to represent the recursive structure of analogical abductive reasoning, which may be drawn on Peirce's theory of abduction.We indicate that the required connection to facts in a manufacturing context will call for referring to data since following DIN EN ISO 9000 [17], data provide facts about a perceivable or conceivable entity, be it a (material) product, a service, a process, a person, an organisation, a system, or a resource.The importance of involving data is highlighted in steps 3 through 6, shown in Figure 5, for the translation in material modelling.An iterative innovation translation cycle comprises the human-centric aspects based on knowledge, reasoning, and judgements provided by persons who are experts in translation, i.e., translators.Concerning the overarching scenario in which the translator and their client act in the frame of our manuscript, we are aware that the mere fact that they interact and form some shared understanding of an innovation case means that a frame for this interaction related to technical aspects was already set, as indicated in Figure 2. So, at this stage, the translator and their client already have some established relationship and may have created mutual awareness of the effects of, e.g., regulations and aspects related to compliance or intellectual property [11,12].
Translating an Innovation Case in Composite Manufacturing
The translation that we will describe in a fictive scenario sets in after an organisation represented by an enterprise manufacturing a set of material products identified an innovation challenge that they face and formulated a problem.In the next steps, the contemplated Translator will need to understand the relevant aspects of a formulated problem, elaborate suggestions for solving the problem, and present these suggestions to the organisation.The innovation challenge involves a set of similarly interested customers of the contemplated enterprise.
At this point, we would like to take the reader through what these materials are and how they are used, i.e., shaped, cured, and applied, in our exemplary innovation case from the field of composite manufacturing. Above, we highlighted that the innovation challenge involves a set of similarly interested customers of the contemplated enterprise. During the communication between the contemplated enterprise and the Translator, the enterprise turns out to be a manufacturer of prepregs, and potential customers are manufacturers of composite parts. The enterprise and the translator may shortly refer to the probable interested customers as EMC-VB, i.e., Enterprise(s) Manufacturing Composites by Vacuum Bagging (VB)-based processes for curing parts formed from stacked prepregs. Moreover, the Translator may shortly refer to the innovation challenge as IC-EMP-C-PP-CEMC-VB-CS (Innovation Challenge, Enterprise Manufacturing Prepregs, Translators' Client, Prepreg, Curing at the site of an EMC using VB-based process(es), Customer Support). For the sake of the readability of the manuscript, we will refrain from catenating abbreviations. Still, the Translator will need to be aware of concepts, relations, and context. Exemplarily, both EMP and the Translator may highlight the IC's granularity with respect to human-centric interactions in an enterprise (i.e., organisation) context. The Translator will label EMP's role in the translation process in order to detail their relationship and document their perspectives, EMP's material of interest, EMP's process of interest, and EMP's intended management target in relation to interested EMCs. So, we highlight that the involved materials, processes, and humans playing roles share manifold relationships. Although they are highly relevant to an understanding of the needs, it may be complex to detail and document them in text-based documents. That is one reason why, in the OntoTrans project [4], the use of ontologies and digital tools that facilitate composing detailed and complex representations is pertinent.
In a nutshell, a prepreg is a composite material made from pre-impregnated fibres and a partially cured polymer matrix (resin). A typical scenario of how a prepreg reaches an end user is shown in Figure 8. The prepreg manufacturer keeps the material in a low-temperature environment, and their customers are component part producers, i.e., composite manufacturers. Those customers have autoclaves or vacuum bags wherein they mould the prepreg to its final shape and then deliver it to engineers who mount these parts into, e.g., an aircraft or a vehicle. The challenge faced by the prepreg manufacturer EMP is around a mature and well-tried composite prepreg product based on a poly(furfuryl alcohol) (PFA) bio-resin manufactured by EMP. The need underlying the elaborate innovation case is expressed to EMP by an individual composite manufacturer, EMC01, who may be a potential customer of EMP. EMP considers this enterprise representative of several further composite manufacturers, and from EMP's perspective, this innovation case is exemplary of a superordinate innovation challenge. EMP supposes that several composite manufacturers of EMC may have in common that they run established manufacturing processes based on autoclaves that provide voluminous internal space in which stacked prepregs can be exposed to elevated pressures and temperatures (as compared to ambient conditions). Some of them may already have good experiences with manufacturing composites based on polycondensation-curing bio-resins; some of them may even have good experiences with prepregs manufactured by the very EMP. An EMP perceives a market trend towards composite manufacturers' being interested in an out-of-autoclave (OoA) prepreg material and in performing vacuum-bag-only (VBO) curing while still profiting from the properties and performance provided by prepregs based on polycondensation-curing bio-resins. EMPs and their (potential) customers, the Translator, and further experts in autoclave-based composite manufacturing know that void-free composites may be manufactured from polycondensation-curing resins and bio-resins, and the EMP knows that they have in their portfolio several prepreg products that fulfil the requirement of resulting void-free when cured in an autoclave. Exemplarily, Figure 9 shows a light microscopical image of a cross-section through a composite that was cured following the process AC-2-0 detailed in Table 1. We would like to highlight several aspects
here. Firstly, the translation process guided by the translator shall be conducted in a way that the translator succeeds in finding, for example, this knowledge and the respective data during the communication with (human) representatives of EMP. Secondly, the translator may be expected to know, or at least to find, information from publicly accessible sources, and they may be aware that achieving low porosities in composite manufacturing is less challenging in autoclave-based processes than in processes applying atmospheric pressure during curing. Exemplarily, Centea et al. [64] highlighted, in a review, that manufacturers face challenges in producing composite parts that are both void-free and cured using OoA methods. They report on currently available OoA/VBO prepreg resin systems based on commercially available epoxy, cyanate ester, bismaleimide, or benzoxazine resins. In substantial contrast, Pupin et al. [28] studied polycondensation-curing phenolic resins and indicated that the porosity found in the cured matrix is due to the saturation of the solid polymer by water released during curing. Thirdly, the Translator shall be aware that the elaborate innovation case is about a new version of a mature material product rather than about tailoring a novel prepreg material with EMP or identifying the one product already available in the market that might be best suited for a certain manufacturing environment. Therefore, both EMP and the translator will scrutinise the requirements expressed by EMC01.
Jointly Identifying a Practical Translation Approach
Subsequently, we would like to provide some further insight for the reader on how the Translator and their Client EMP might agree on a joint proceeding for translation that is transparent, available (or even established), and practical (e.g., convenient and promising). Effectively, such insight may be based on the information gathered by EMP and the Translator during the introductory stage of the translation process, which is targeted at focusing an innovation challenge on an innovation case. Ultimately, we will reveal what made them decide to follow a procedure characteristic of translation in materials modelling.
So, the elaborate innovation case is framed by an exemplary manufacturer of composite parts, EMC01, who expressed their expectations and indicated requirements in a telephone call to EMP. Following the memo taken by a representative of EMP, it became clear that EMC01 had certain intentions and had already gathered substantial insights related to a special product available from EMP. Clearly, as the Translator's Client is EMP, they profit from such information to gain an understanding of EMP's daily business and its relevance to the elaborate case.
•
EMC01 is interested in continuously reinforcing sustainability approaches in the design and manufacture of their products; • EMC01's management is interested in widening their product portfolio by performing small batch production using a prepreg based on a bio-resin that they would like to consider not only for AC but also for VB, following their updated innovation strategy; • Effectively, EMC01 already has at its disposal an internally approved qualification for applying EMP's prepreg EMP-PP01 in an autoclave-based process, and it expressed that it has quite some knowledge and good experience available, e.g., with storing or handling EMP-PP01. In detail, following an earlier consultation by EMP, it used AC-2-0 as its established curing process and reported that it achieved satisfactory findings for its cured composite parts, which are similar to the ones presented in Figure 9; • Based on this experience, EMC01 had internally already agreed to perform a first hands-on attempt, and it gave the autoclave-free vacuum-bagging process VB-2-0 a trial for curing the prepreg EMP-PP01, applying the same time and temperature settings as established in AC-2-0 but lowering the pressure as compared to the autoclave-based process. It performed a visual inspection and, in the aforementioned telephone call to EMP, estimated the outcome of their first attempt to be a "fast process but with a porosity more than twice as high as required, so that we did not yet test ILSS"; • Concerning the requirements indicated by EMC01, EMP and the Translator understand that when performing the process VB-2-0, EMC01 obtained some manufactured composite that showed a (visually assessed and estimated) porosity that exceeds the required porosity threshold value t01 by at least a factor of two. Moreover, they conclude from the statement "fast process" that EMC01 might accept longer station times than required for VB-2-0. Finally, they presume that EMC01 needs to achieve at least some minimum interlaminar shear strength (ILSS) values; • Eventually, EMC01 would like to know if EMP-PP01 can be cured in its vacuum bagging set-up in a way to achieve its requirements, which, so far, they evidently only rather qualitatively communicated to EMP.
So, EMP may contextualise the conversation with EMC01 in the frame of its customer support and consider EMC01's qualitative requirements when shaping an overarching profile of requirements that comprises information provided, e.g., by other composite manufacturers.
They know that their organisation is also interested in continuously reinforcing sustainability approaches in the design, manufacture, and usage of their products; • EMP has considerable knowledge with respect to curing their prepreg EMP-PP01 in an autoclave-based process for achieving porosities clearly below 0.2 volume-% while varying time and temperature settings; • EMP understands that EMC01's management is going to make a material-related decision with some longer-term relevance. Moreover, EMP knows that there is considerable market demand for the autoclave-free curing of stacked prepregs. Moreover, it knows that achieving porosities below 2 volume-% and an ILSS of at least 30 MPa will satisfy further Clients and even allow them to gain new customers; • EMP understands that it might deliver value through fulfilling, on the one hand, EMC01's expressed needs with respect to advancing its portfolio of sustainably manufactured composites while, on the other hand, still respecting other interested parties' needs; • EMP understands that to promote the efficiency of its quality objectives for vacuum-bagging processes, EMC01 expects it to provide guidance for reducing the porosity in the frame of its screening approach; • Experts inside EMP agree that they will profit from gathering knowledge about composite parts manufactured with the VB curing of EMP-PP01, especially considering the properties of porosity and ILSS; • So, EMP decides in the first instance to elaborate some more generally applicable material model for EMP-PP01 cured in a VB process, especially comprising VB-2-0 as a referential curing scenario, while still providing the potential to additionally comprise both the curing scenarios of other potential Clients and a solution for EMC01's needs. EMP decides to go for a model covering a range of manufacturing processes and allowing the capture of the porosity, especially comprising quantitative evidence for the porosity achieved with VB-2-0. In detail, for further communication between EMP and EMC01, it considers it advantageous to scale the modelling outcomes such that this porosity (achieved with VB-2-0) will result slightly higher than twice the threshold porosity t01 that it understood to be relevant to EMC01. Therefore, it involves a Translator in (material) modelling to whom they leave it open to oversee subcontracting, as an external expert, to a contract manufacturer for composite parts who is experienced in curing prepregs using vacuum bagging; • Consequently, EMP has appointed a Translator who is an expert with competence that is relevant to go for an approach involving materials modelling. So, the translator is expected to be a domain specialist to meet the prepreg producer eye-to-eye and to see the given constraints and sustainability demands with a can-do attitude.
As a consequence, we expect the following from the Translator: • On the one hand, the translator may not even be aware of any values expressed by EMC01 because there was not any direct communication between EMC01 and the Translator; on the other hand, the translator is aware of further potentially interested parties' needs, e.g., gaining the benefits of following the purpose of the concept Industry 5.0 expressed by the European Commission [1]; • The translator will ask EMP for the increased value that they plan to deliver to their Clients; • The translator will agree with EMP on the milestones of their cooperation aimed at achieving increased process efficiency; • The translator will follow the six steps of translation, as described in [11] and depicted in Figure 5, and incorporate them in an iterative innovation translation cycle (Figure 6) so that a translation process results that is coherently interrelated with the material development process aspiring to sustainability; • The translator will implement and document the translation process in a FAIR way, thus allowing the EMP (and, if EMP wishes so, their customers) to incorporate translation in their systematic innovation approach. Therefore, the Translator is keen to provide their service in a way that can be continued or complemented by other agents in the overarching innovation ecosystem.
Performing Translation in Materials Modelling
Subsequently, we will shortly highlight to the reader some of the activities performed and, e.g., graphical tools used by a translator who follows the six translation steps to present the outcome of the translation with the aim of leading to a sustainable prepreg with marginal porosity and appropriate mechanical features.The translator may be guided by the CEN Workshop Agreements, as published in the documents related to CWA 17284 and CWA 17815, but they do not yet come with digital tools like the ones that are presently being developed in the frame of the OntoTrans [4] project.We will exemplarily highlight some implicit or tacit knowledge that will be used in the communication between domain experts.
So far, an agreement may have been elaborated between EMP and the translator to follow an approach involving material modelling in order to address EMP's innovation challenge in one exemplary case and find one solution that may be suggested to EMC01 in order to support its decision-making.Henceforth, EMP and the translator may agree to mark their progress within the stepwise procedure by agreeing on milestones at which they mutually (and formally) conclude their alignment before going to the next milestone.Exemplarily, such milestones may be set when (i) an understanding of both EMP's business and industrial case is achieved, (ii) data available in EMP are analysed, and a workflow for closing gaps is elaborated, and (iii) when modelling is performed by a knowledge provider and understandable information for filling the gap is provided to EMP by the translator.In this way, along with the procedure sketched in Figure 5, the subsequent sections are arranged.
Good Understanding of the Business Case and a Good Understanding of the Industrial Case
A mutually agreed good understanding of the business environment and the industrial constellation may be based on the following information:
• EMP informs the translator that it is interested in advancing its knowledge about the curing and mechanical behaviour of composites manufactured from their prepregs in vacuum-bagging processes because prospective customers, who are interested parties [17], may require the small-batch manufacturing of composites based on prepregs with bio-resins. Special interest has been focused on EMP's prepreg material EMP-PP01;
• Both EMP and its customers are experienced in establishing and running autoclave-curing processes providing high-quality and high-performance composites. Often, EMP's customers decide to assess the material properties that may be expected when they change from the autoclave curing of a stacked prepreg in a voluminous and massive oven with a considerable heat capacity to a vacuum-bagging process performed using a narrow bag, so that the heat capacity of the composite part predominates. Both of the often-performed characterisations, namely the inspection of cross-sections and ILSS testing, are destructive testing approaches;
• The chemical curing of the thus contemplated matrix bio-resin in EMP-PP01 is based on condensation reactions resulting in the formation of water molecules from functional groups present in the mixture of raw materials. In detail, the material combination used for the proprietary bio-resin formulation in EMP-PP01 is complex.
• In polymeric materials operated at temperatures clearly below the boiling point of water, water may be found in a molecularly dispersed state characterised by water/polymer interactions or in a condensed state characterised by aggregates, with other water molecules being the nearest neighbours of a water molecule;
• Tacitly, both the translator and EMP know that the boiling point of water is 100 °C within an environment at an atmospheric pressure of 1013 hPa;
• Both the translator and EMP know that a typical composite laminate manufacturing process starts with resin preparation (mixture of base resin with additives, catalysts, and curing agents). The next step is prepregging, which involves the impregnation of resin into the textile fabric (fibres). The final product of the second step is a roll of prepreg, which can be cut into the final desired dimension;
• Both the translator and EMP know that essentially void-free composites may be manufactured in autoclaves by applying external pressure clearly exceeding the water vapour pressure and, thus, suppressing the formation of water-filled voids in the polymer phase;
• Both the translator and EMP know that, according to ISO 15901-2:2022 [65], porosity is a term used to indicate the porous nature of solid materials and is more precisely defined as the ratio of the volume of accessible pores and voids to the total volume occupied by a given amount of the solid. In a composite, a void is the space that is not occupied by resin or fibre. When individual voids become large, they are no longer treated at a continuum level but as discrete objects with a size, shape, and specific location within the material;
• Both the translator and EMP know that primary aerospace structures, for instance, have a requirement that the porosity in finished parts be less than 2% in volume;
• EMP informs the translator that some of their customers report that, for EMP-PP01 cured following scenario VB-2-0, significantly higher porosities are found than with scenario AC-2-0. Based on their mutual best guess, they may agree that such a difference could be due to the smaller pressure used in vacuum bagging;
• In more material-related detail, the translator and EMP may agree that an issue occurring at elevated temperatures when performing the OoA curing of EMP-PP01 may be related to the formation of voids that are filled with water steam. They may already know, or find out when shaping their mutual understanding, that individual voids may be mechanically entrapped between fibre layers. Such voids between fibre tows are often labelled meso-pores, in contrast to micro-pores between individual fibres [66]. Similar considerations concerning the root cause of meso-pores, e.g., for water-bearing resins, were reported by Pupin et al. [28] for the curing of phenolic resins. For resin transfer moulding (RTM) processes, these authors recommend applying, prior to resin gelation, a consolidation pressure that is above the water vapour pressure in order to avoid porosity due to water boiling;
• An objective and quantitative assessment of composite part (material) properties based on FAIR data is strategically required by EMP, so EMP and the translator plan to establish a predictive model that supports the optimisation of vacuum-bagging processes.
Considering the outcome of their abductive reasoning, starting from their best guess, the translator suggests that the prepreg manufacturer establish a data-based model. They agree that an experienced service provider who will perform vacuum bagging with stacked prepreg specimens of a straightforwardly manageable size may be identified. Looking ahead, the translator may suggest following a design of experiment (DoE) approach, which will permit systematically increasing the database and, thus, the application range of the expandable model. The initial full factorial design shall allow for setting the curing parameters with variation on two levels: the heat ramp and the dwelling time at a temperature clearly below 100 °C. The composite property to be measured shall be the porosity, and the geometrical dimensions of the cured specimens shall allow assessing the interlaminar shear strength (ILSS) on demand. They agree that these properties may be assessed in a similar way as appointed by EMP, e.g., based on experience or (internally or internationally) standardised procedures. In this way, a variation of manufacturing conditions is set, a subcontracted expert in vacuum bagging may produce different composites using their machines, and EMP (or a subcontracted service provider) may characterise the obtained composites with respect to their porosity and ILSS. Like that, the translator may guide EMP to become supplied with a concise set of information, allowing them to establish curing conditions that comply with their not yet finally established requirements related to station times. Providing one solution to EMP's problem will provide "hard, fact-based, logical information" [67] that is sought by strategic thinkers because "strategic thinking is a mix of rationality and insight" [63]. So, we infer that insight shall result from evidence based on facts (as shown in Figure 7) and shall support the strategising processes performed by, inter alia, real-world managers (as displayed in Figure 2). We highlight that Calabrese and Costa [63] suggested that strategic thinking may be assessed based on Peirce's theory of abductive reasoning.
In conclusion, from a manufacturing point of view, a textual expression of the identified knowledge gap is contained in Table 2. Exemplarily, the quantitative assessment of the porosities and the ILSS will reveal whether a porosity below 2 volume-% and an ILSS above 30 MPa are feasible when using a suitable vacuum-bagging process window denoted as VB-[?_min, ?_max]-[?_min, ?_max], with the question marks representing a complex knowledge gap, e.g., some property-specific interval boundary confining the process windows.
Table 2. Textual expression of the identified knowledge gap (represented by "?") and the customer needs (represented by "!" as expressed by EMC01) for curing stacked prepregs.
Curing Process | Pressure | Process Knowledge Gap | Material Characterisation | Material Characterisation Gap
AC-2-0 | 0.5 MPa | known by customer, !: to be changed | |
Analysis of Data Available within the Client and Translation to Modelling Workflows
So far, the translator and EMP have agreed on a material-agnostic DoE performing planned variations of the curing scenario. The most relevant settings can be applied by a multitude of (potential) material users, like EMC01, with their own equipment. With the plethora of data resulting from this DoE, a material-specific process model will be built. To make it relevant to EMP's needs, it should be versatile enough so that a newly found material user, EMC02, can be well advised, too. At this stage, the translator and EMP (and open-minded material users) build upon their vast domain-specific knowledge. Using the platform OTE, which was developed by OntoTrans [4], will allow EMC to establish a growing knowledge base and to capture processes with advanced data analytics in order to optimise established processes.
Effectively, the translator and their client EMP will now cooperatively elaborate and set the (quantitative) boundaries of the data gap framed by the understood knowledge gap. In an approach that is similar to the iterative group interaction cycle on a strategical level, as presented in Figure 2, the translator is now actively involved in the problem formulation on a data level, as shown in Figure 10. In detail, we designed the sketch presented here by integrating into Figure 2 an aspect of the fact-based recursive structure involving abductive thinking for strategising in business innovation, as shown in Figure 7 and discussed in an ISO 9000 and ISO 9001 context [16,17].
When analysing data that are available in EMP and relevant to the reactive formation of water, EMP informs the translator that the polycondensation-curing matrix bio-resin used in their prepreg EMP-PP01 shows a thermogravimetrically measured weight loss amounting to approximately 3% of the resin mass in a temperature interval between 90 °C and 130 °C when cured at a pressure of 0.1 MPa using a heat ramp of 2 K/min (without applying an isothermal dwelling at 90 °C) and a maximum temperature of 145 °C (Figure 11). By performing data curation, the translator plots the available data considering the (negative) first derivative of the mass change upon temperature variation. The maximum weight loss is observed around 110 °C, which is quite in the centre of this temperature interval, and it occurs preferentially above a temperature of 100 °C, i.e., the boiling point of water at a pressure of 0.1 MPa. Moreover, based on their experience and responding to EMC's need, EMP already established that the thermogravimetrically measured weight loss in the (same) temperature interval between 90 °C and 130 °C amounts to (merely) 1 mass-% for this matrix resin when curing is performed at a pressure of 0.1 MPa using a heat ramp of 1 K/min and applying an isothermal dwelling at 90 °C before finally reaching a maximum temperature of 145 °C (Figure 11). Based on this available information, the translator and the prepreg manufacturer EMP agree that the porosity that results upon curing the stacked prepreg under investigation in a vacuum-bagging process at a pressure of 0.1 MPa may be governed by the heat ramp and the dwelling time at 90 °C, i.e., a temperature that is below 100 °C.
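To illustrate the data curation step described above, the following Python sketch computes and plots the (negative) first derivative of a TGA mass curve with respect to temperature. The mass curve used here is synthetic and only mimics the reported roughly 3% weight loss centred near 110 °C; real instrument exports, file names, and column layouts would differ.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative TGA curve: temperature (in °C) and residual mass (% of initial mass).
# Real data would be loaded from the instrument export instead of being generated here.
temperature = np.linspace(30.0, 145.0, 500)
# Synthetic mass-loss step of ~3 % centred near 110 °C (sigmoid shape), mimicking the
# weight loss reported for the 2 K/min ramp without dwelling.
mass = 100.0 - 3.0 / (1.0 + np.exp(-(temperature - 110.0) / 5.0))

# Data curation step: (negative) first derivative of mass with respect to temperature (DTG).
dtg = -np.gradient(mass, temperature)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(temperature, mass)
ax1.set_ylabel("mass [%]")
ax2.plot(temperature, dtg)
ax2.set_ylabel("-d(mass)/dT [%/K]")
ax2.set_xlabel("temperature [°C]")
ax2.axvline(temperature[np.argmax(dtg)], linestyle="--")  # temperature of maximum mass-loss rate
plt.tight_layout()
plt.show()
```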
In the next step, the translator and EMP join their expertise and experience to establish a workflow based on gaining objective evidence with respect to material properties and process characteristics. Exemplarily, they may agree on activities for gathering and documenting characterisation data (CHADA) following an approach described in CWA 17815 [68]. In this way, properties that "identify the type, manufacturing/process history, and the state of the material" are ascertained and then measured. Based on their joint high-level perception of observations and needs expressed by several exemplary EMCs, they decided to involve two key characterisation methods, one of them for identifying the nature (e.g., structure and microstructure) of the material and one for evaluating the material behaviour or performance, along with CWA 17815. Therefore, based on the agreed curing scenarios, light microscopy is agreed to be used to assess the porosity, and the interlaminar shear strength (ILSS) is agreed as the representation of the mechanical performance. Concerning the two identified key characteristics of the curing process, they agree that for the heat ramp, level 2 will be assigned 2 K/min (as used by EMC01 in its screening test), while level 1 will be 1 K/min. Moreover, for the dwelling, a temperature of 90 °C is agreed, and level 1 of the dwelling time will be 0 min (as used by EMC01 in its screening test), while for level 2, a dwelling time of 60 min will be used. The heat ramp will be controlled using a thermoelement-type sensor connected to the curing specimen, and both the dwelling time and the heat ramp will be established using a PID controller and documented in a .csv file. The ILSS will be measured as described above in the Materials and Methods section. In detail, a set of composite specimens will be manufactured following the parameter set reported by EMC01. In this way, EMP may assess whether findings similar to those reported by EMC01 are obtained when the geometrically simplified composite specimens are manufactured by the service provider to become involved. Moreover, measuring the ILSS for a specimen that was manufactured by vacuum bagging and revealed a porosity below 2 volume-% is expected to highlight how considerable the effect of such porosity on the mechanical property ILSS may become.
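As a minimal sketch of the agreed two-level factor settings, the snippet below assembles the 2 × 2 full factorial run plan with the physical levels stated above and the corresponding coded (scaled and centred) factors. The column names and the use of pandas are illustrative choices only, and the response columns stay empty until characterisation data become available.

```python
import itertools
import pandas as pd

# Factor levels taken from the case description; coding follows the usual -1/+1 convention
# for two-level factorial designs.
levels = {
    "heat_ramp_K_per_min": [1.0, 2.0],    # level 1 / level 2
    "dwell_time_min_at_90C": [0.0, 60.0]  # level 1 / level 2
}

runs = []
for ramp, dwell in itertools.product(*levels.values()):
    runs.append({
        "heat_ramp_K_per_min": ramp,
        "dwell_time_min_at_90C": dwell,
        # coded (scaled and centred) factors used later in the regression model
        "x1": -1.0 if ramp == min(levels["heat_ramp_K_per_min"]) else 1.0,
        "x2": -1.0 if dwell == min(levels["dwell_time_min_at_90C"]) else 1.0,
        # responses to be filled in after manufacturing and characterisation
        "porosity_vol_percent": None,
        "ILSS_MPa": None,
    })

run_plan = pd.DataFrame(runs)
print(run_plan)
```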
The translator and EMP agree that their overall DoE approach is supposed to provide a surrogate model that starts with a cloud of data points gathered by their materials characterisation approach. They aspire to establish relationships between descriptors or predictor variables and the response variable established from experimental data. Such relations may collectively be called data-based models [69], and they may be represented in a modelling data (MODA) approach. Still, data-based models are not included in the formalised definitions of fundamental terms for the field of materials modelling and simulation provided in CWA 17284 [70].
After a series of meetings between EMP and the translator, the first modelling workflow was defined. The input will be composed of new process parameters (temperature ramp, dwelling time) of a vacuum-bagging process, and the output will comprise the expected porosity. As light microscopy is performed using commonly available instruments and allows for assessing material surfaces, they agree that this porosity will be assessed based on the light microscopic characterisation of void area densities in cross-sections through composite specimens. Additionally, they agree to evaluate how this porosity interdepends with the density of the composite samples because some of the EMCs known to EMP would, rather than using light microscopy, gravimetrically assess the mass of accurately cut and geometrically characterised specimens. Mechanical properties (as represented by ILSS values) will be assessed for composite specimens that show a porosity that is expected to be acceptable in a business context for a considerable number of potential EMCs, among them EMC01.
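As a small worked example of the gravimetric route mentioned above, the following function estimates porosity from a measured composite density and a void-free reference density, assuming the mass of the voids is negligible. The numerical values are placeholders, not measured data from this case.

```python
def porosity_from_density(rho_measured, rho_void_free):
    """Gravimetric porosity estimate in vol.-%, assuming voids contribute negligible mass."""
    return 100.0 * (1.0 - rho_measured / rho_void_free)

# Illustrative numbers only (not measured values): void-free laminate density 1.40 g/cm^3,
# measured specimen density 1.37 g/cm^3 gives roughly 2.1 vol.-% porosity.
print(porosity_from_density(rho_measured=1.37, rho_void_free=1.40))
```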
Modelling Execution, Model Validation, and Modelling Results
To establish a data-based model that describes relevant vacuum-bagging processes and provides the porosity as a key property of the process outputs, the translator, EMP, and potentially some sub-contractors experienced in vacuum bagging contributed to manufacturing composites from prepregs and to characterising representative sample specimens using light microscopy and an image post-processing procedure. In Figure 12a,b, representative microscopical findings are presented to visualise voids as obtained by light microscopy in a top-view arrangement and by scanning electron microscopy, with the sample specimen being tilted by 45° with respect to the direction of the incident electron beam. Predominantly, the appearance of the cross-cut regions reveals a void-free cured matrix resin, as shown in Figure 12c. Referring to light microscopy images, the translator classifies matrix-free regions with lateral dimensions between 100 µm and 500 µm as relevant voids. Such voids extend in height between two neighbouring fibre layers and characteristically are wider than high, i.e., they may show some kind of disk shape. The translator and the operator of the microscope know that, for example, an individual disk-shaped void may have been cut along any segment, and further voids may not have been cut at all. Therefore, averaging over a sufficiently substantial set of cross-sections is required in order to reveal the effective porosity. When interpreting the microscopic findings, the translator hypothesises that the fibre layers appear to be a barrier for the flow of water steam bubbles (which continuously grow during resin curing) since the predominant portion of the void perimeter or interphase is constituted by fibrous material. Thus, the translator labels these voids between fibre tows as meso-pores [67]. Moreover, the translator may interpret apparently bare fibres in the cross-cut surface region to be effects of the very cutting procedure and, thus, denote them as artefacts from sample preparation rather than micro-pores between individual fibres. Accordingly, the translator may exclude the corresponding areas from the image post-processing targeted at assessing voids resulting from the curing process.
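The following Python sketch indicates how a void area fraction could be estimated from a cross-section micrograph by simple thresholding with scikit-image. It is only an illustration of the kind of image post-processing discussed above: the file name and parameter values are hypothetical, and the exclusion of preparation artefacts is mimicked merely by discarding small segmented regions.

```python
import numpy as np
from skimage import io, color, filters, morphology

def void_area_fraction(image_path, min_void_area_px=500):
    """Estimate the void area fraction (in %) of a polished cross-section micrograph.

    Simplified sketch: voids are assumed to appear darker than matrix and fibres, an
    Otsu threshold separates them, and very small segmented regions are discarded as
    noise. A real workflow would additionally mask sample-preparation artefacts.
    """
    image = io.imread(image_path)
    if image.ndim == 3:                          # convert colour micrographs to grayscale
        image = color.rgb2gray(image[..., :3])
    threshold = filters.threshold_otsu(image)
    voids = image < threshold                    # dark regions taken as candidate voids
    voids = morphology.remove_small_objects(voids, min_size=min_void_area_px)
    return 100.0 * voids.sum() / voids.size

# Example call (hypothetical file name):
# print(void_area_fraction("crosscut_VB_2_0_sample01.png"))
```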
The DoE contour plot [71] shown in Figure 13 reveals the values for the response porosity y_por_LM (as measured using light microscopy) when independently varying the settings for the two factors heat ramp x_1 and dwelling time x_2 (at 90 °C). The respective units for these assessed quantities are volume-%, K/min, and min. In detail, the translator applied a linear DoE contour plot and assumed the model expression displayed in Equation (1):

y_por_LM = y_m_LM + c_1 x_1 + c_2 x_2 + c_12 x_1 x_2, (1)

where y_m_LM is the overall mean of the porosity (as assessed by light microscopic characterisation), and c_1, c_2, and c_12 are the regression coefficients related to the heat ramp x_1, the dwelling time x_2, and a mixed term representing the interaction between x_1 and x_2. The values of y_m_LM, c_1, c_2, and c_12 were established using regression analysis and least squares estimation. When scaling and centring the factor levels so that the respective low value of the factorial quantities x_1 and x_2 is attributed to the number −1, the centre level becomes 0, and the high value is attributed to the number +1, the numbers for the regression coefficients and respective confidence intervals displayed in Table 3 were obtained.

Table 3. Numbers of coefficient values and confidence intervals for the estimated model parameters following the DoE approach for the response porosity y_por_LM, as obtained by regression analysis and using scaled and centred factors for heat ramp x_1 and dwelling time x_2.

In this case, the translator may apply the taxonomy of the International System of Units, stating that the value of a quantity is generally expressed as the product of a number and a unit [72]. They may conclude significant contributions for y_m_LM, c_1, c_2, and c_12 when assessing the porosity. The algebraic signs of the regression coefficients indicate that, within the assessed process window, the bias towards void formation can be counteracted by lowering x_1 (i.e., the heat ramp) or by increasing x_2 (i.e., the dwelling time), whilst considering some interactions between x_1 and x_2. Even more than the algebraic signs, the values of the regression coefficients are expected to be material-specific, with their combination being characteristic of the prepreg EMP-PP01 cured in a vacuum-bagging process.
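A minimal sketch of the regression and contour evaluation is given below. It assumes coded factors and illustrative placeholder porosity values (not the measured data behind Table 3), fits Equation (1) by least squares with NumPy, and plots the fitted surface in the spirit of Figure 13.

```python
import numpy as np
import matplotlib.pyplot as plt

# Coded factor settings of the 2x2 design (x1: heat ramp, x2: dwelling time at 90 °C)
x1 = np.array([-1.0, 1.0, -1.0, 1.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0])
# Illustrative porosity responses in volume-% (placeholders, NOT the measured values)
y = np.array([1.2, 4.5, 0.6, 1.8])

# Design matrix for Equation (1): y = ym + c1*x1 + c2*x2 + c12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
ym, c1, c2, c12 = coeffs
print(f"ym={ym:.2f}, c1={c1:.2f}, c2={c2:.2f}, c12={c12:.2f}")

# Contour plot of the fitted surface over the coded factor window
g1, g2 = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101))
porosity = ym + c1 * g1 + c2 * g2 + c12 * g1 * g2
plt.contourf(g1, g2, porosity, levels=20)
plt.colorbar(label="predicted porosity [vol.-%]")
plt.xlabel("x1 (coded heat ramp)")
plt.ylabel("x2 (coded dwelling time at 90 °C)")
plt.show()
```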
Discussions of the Modelling Results in the Context of the Innovation Challenge
In this contribution, we highlighted the value-guided approaches within the ISO-consistent concept of innovation [17] and, more recently, within Industry 5.0, which complements the existing Industry 4.0 paradigm [1] and co-exists with it [73]. We interpreted that both green and digital enhancements in material development and production processes are essential for Industry 5.0 approaches. The meaning of a concept like "green transition" and the target (or toolset) represented by the attribute "green" appears to be subject to some interpretation by decision-makers. When reporting the results of a workshop with Europe's technology leaders, the European Commission highlighted that requirements for a successful transition are the acceptance, trust, and commitment of the public and that people and societal needs must be at the centre of respective strategies for industrial modernisation [74]. We infer that Industry 5.0 is largely not about additional technologies relative to Industry 4.0 but rather about a paradigm shift and about adding a value dimension in a systemic approach, since the workshop report highlights that its concept is not based on technologies but rather is centred around values like "human-centricity, ecological or social benefits". As the very decision-making in teams may vary from team to team, we suggest that facilitating the transparent documentation of relevant aspects in a team's decision-making may be a prior-ranking objective as compared to an a priori harsh confinement of the creative leeway in interpreting and conceptually implementing strategic targets intended to make manufacturing more future-proof. We expect that future-proof approaches will need to facilitate customers' and consumers' well-informed decision-making on technologies, material products, and services they use. So, we propose advancing semantic data management and involving trained experts, whom we call Translators, for the FAIR solution of innovation challenges. We expect that in this way, not only the performance of products but also relevant aspects of their backstory (from gathering their raw materials onwards) will become transparently documented along the product's life cycle. Making connections [75] and joining information from different sources will require skills and extensive education and training, and we project that harmonised respective approaches may be as successful as they have proven to be for establishing shared expertise in material-joining technologies advanced by specially trained welding or adhesive engineers [21] following procedures framed by the European Federation for Welding, Joining and Cutting. We reckon that a major challenge will be comprehensibly involving social or societal requirements and needs in materials development and translation, and we suggest cooperation with, e.g., academic institutions. In our opinion, communication and data gathering based on agreed "bridge" concepts and guided by impartial experts can be an essential toehold for bridging material, environmental, and social sciences following their respective perspectives. We may indicate that in the ongoing OntoTrans project, consumer needs, experiences, and preferences are readily considered both in translation and in industrial innovation challenges. In this way, we aspire for OntoTrans to provide innovations in semantic technologies that contribute to supporting "win-win" interactions between industry and society, profiting from guidance by trained Translators in an Industry 5.0 context. In upcoming contributions based on the comprehensive
translation approach described here, we will highlight the benefits achievable by a translator applying the technology stack tailored to the OntoTrans project.
In this contribution, the translator team and their client profited from following a procedure characteristic of translation in materials modelling. In detail, we established, in a transparent way, a data-based model and used the porosity as a key property of composite laminate products resulting from vacuum-bagging processes performed with FRP comprising a polycondensation-curing matrix. Starting from a prepreg that contains a PFA-based bio-resin designed to wet the involved fibres and to avoid micro-pores between individual fibres, we prepared composite laminates and showed that the void content is essentially constituted by meso-pores between fibre tows. We revealed that the porosity achieved by curing processes with vacuum bagging sensitively depends on adjusting two process parameters, namely the dwelling temperature and the temperature ramps applied during the curing of stacked prepregs. In enterprises manufacturing composites, the respective adjustment is assessed during production planning, and we provided a data-based model that may support decision-making in this phase. When compared to curing scenarios reported by Ipakchi et al. [31] for PFA-based FRP matrices, our underlying scenario is shorter and less complex. Moreover, it does not involve the application of pressures above atmospheric pressure or final curing temperatures higher than 145 °C, as suggested by Guigo et al. [30] or Sangregorio et al. [32] for natural fibre composites with PFA-based resins. We infer that the energy efficiency of our vacuum-bagging process may be superior compared to those of these reported curing scenarios. Efforts to grasp and quantitatively compare the CO2 footprint of our curing scenario with the reported ones are ongoing.
Concerning the occurrence of voids in autoclave-free curing processes, we suppose that, in the case of polycondensation-curing prepregs with a low content of VOCs and entrapped air, the formation of porosity is essentially caused by the reactions resulting in chemical network formation. For PFA polycondensation, Sadler et al. [76] highlighted that increasing the degree of prepolymerisation may minimise voids that result from trapped water molecules. Similarly, for polycondensation-curing phenolic resins used in RTM manufacturing, Pupin et al. [28] reported that the residual porosity measured in cured samples may be decreased when the specimens are degassed at higher levels of chemical conversion. Dominguez and Madsen [77] stressed that a high water content in a PFA resin may be a significant challenge for the manufacturing of materials with low porosity because, during processing, water is likely to be trapped in the form of voids inside the materials. They suggest using a double-vacuum-bag (DVB) technique implemented in a vacuum oven, with superior performance compared to a single-vacuum-bag (SVB) technique, because it allows composite manufacturers to control the pressure difference between the two vacuum bags during processing. They reported that the PFA matrix resin in their FRP was fully cured after finally applying a maximum temperature of 90 °C for 30 min. They detailed porosity values V_p in the range between 0.03 and 0.14 and indicated that the strength of their composite specimens showed a rather large scatter, which they attributed to the inherent brittleness of the PFA matrix and their large porosity. With our SVB scenario, we achieved a porosity below 2 volume-%. Still, we consider a further reduction of the CO2 footprint related to our curing scenario possible by maintaining the low porosity while balancing a reduction of the maximum curing temperature with the achieved mechanical properties.
We foresee that Translators in material innovation will profit from tools allowing for transparent multi-criteria optimisation and, indeed, in OntoTrans, FAIR access to respective semantic technologies is being facilitated.
Conclusions and Outlook
In the current work, we contributed advances to three aspects of innovation in manufacturing sustainable products by introducing a human-centred, cooperative, and systematic approach that facilitates involving essential stakeholders in industry and society. Our first aspect addressed the establishment of a mediating role taken by human expert Translators who provide information- or knowledge-based guidance for decision-makers, initiators, and implementers in manufacturing innovation. In detail, we suggested an overarching iterative innovation translation cycle for materials and process innovation by combining the recently introduced contributions by Translators in knowledge management and Translators in material modelling. Secondly, we showcased the relevance of modelling manufacturing processes for enhancing sustainability based on an efficiency strategy. In detail, we presented a data-based modelling approach profiting from the design of experiment (DoE) to identify factorial settings that allow manufacturing, in an autoclave-free vacuum-bagging process, a fibre-reinforced composite from a prepreg material based on eco-friendly and bio-based poly(furfuryl alcohol) (PFA). The material-specific process window for the dwelling temperature and the temperature ramp applied during the curing of the stacked prepreg was framed by assessing and controlling the area density of meso-pores that were measured with commonly available digital light microscopy of cross-cuts through composite specimens. Thirdly, we elaborated a stepwise procedure guided by competent Translators that allows industrial decision-makers to embrace their innovation challenge by linking concepts and relevant datasets instantiated by innovation cases that are specific to distinct combinations of materials and processes. An interested human stakeholder S_1 may express a specific need N_1(S_1) requiring materials innovation and raise the attention of several manufacturers. As perceived from inside an organisation O_1 in manufacturing, S_1 might be focussed on as a potential customer, e.g., an end-user, a consumer, or a client. Any organisation in manufacturing O_k that understands the need N_1(S_1) may address it, embed it in a business context, and address the innovation case and challenge in a way that is O_k-specific. Exemplarily and efficiently, a manufacturer O_1 capable of sustainably managing the activities required for the specified product innovation may realise customer focus by their involvement in the specification phase, the concept development, and the prototyping [78]. For example, the three-tier approach presented by Lytras and Garcia [79], based on semantics, ontologies, and business logic, might be realised.
Following the suggested iterative innovation translation cycle shown in Figure 6, we envisage that any innovation-ready organisation O_k may readily conceptualise and represent N_1(S_1) and their N_1(S_1)-specific challenges by utilising ontologies and semantic-web-based data management to support and interact with human actors in their sustainable product development. As highlighted in Figure 15 and outlined in the OntoTrans project [4,79], the instances that are relevant for the user and handled in the use case are linked to concepts and data that can be attributed to classes in the terminology-related TBox and the assertion-based ABox, respectively, of semantic knowledge bases. Such an ontological and semantic data representation reduces ambiguity and misunderstanding during translation. It supports transparency and traceability, and it greatly facilitates reacting to changes in needs whenever formerly subordinated optional solutions may readily be assessed as potential future alternative solutions. In OntoTrans [4], this target is inspired by an approach and the respective processes starting from an innovation case, proceeding via conceptualisation, and applying the multi-perspective material ontology EMMO [55] within an Open Translation Environment. For example, a key perspective on the innovation case involves all relevant processes and objects; in the case presented, these are the different materials processes (like prepregging), as well as the formulated resin or the prepreg. The relations between the process steps and the objects are described and documented as well in terms of their mereology and causality, for example, describing overlaps, inputs and outputs, or next steps. Furthermore, properties of processes and objects can be assigned by means of a semiotic process involving an entity, an interpreter, and a "sign" given to the entity, which may be a model or a property based on characterisation or observation. Also, the roles of entities in an overall system (whole) can be described (in EMMO's holistic perspective). Typically, an object (or process) can also be identified by its role, for example, the leg of a chair or an actor in a play.
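As a loose illustration of linking TBox classes and ABox instances for such a case, the sketch below uses rdflib with a purely hypothetical namespace and class names; it is not based on the actual EMMO IRIs or on the OntoTrans Open Translation Environment.

```python
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/prepreg-case#")  # hypothetical namespace, not EMMO IRIs
g = Graph()
g.bind("ex", EX)

# TBox: terminology (classes and relations used to describe the case)
for cls in ("Material", "Process", "Property"):
    g.add((EX[cls], RDF.type, RDFS.Class))
g.add((EX.hasOutput, RDF.type, RDF.Property))
g.add((EX.hasProperty, RDF.type, RDF.Property))

# ABox: assertions about the concrete innovation case
g.add((EX["EMP-PP01"], RDF.type, EX.Material))
g.add((EX.Prepregging, RDF.type, EX.Process))
g.add((EX.Prepregging, EX.hasOutput, EX["EMP-PP01"]))
g.add((EX.Porosity, RDF.type, EX.Property))
g.add((EX["EMP-PP01"], EX.hasProperty, EX.Porosity))
g.add((EX.Porosity, RDFS.comment, Literal("Assessed by light microscopy of cross-cuts")))

print(g.serialize(format="turtle"))
```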
An example of process and object conceptualisation in composite prepreg manufacturing is depicted in Figure 16 [80]. We suggest that such expression, understanding, and guidance may sustainably be achieved by using FAIR semantic approaches that allow for transparently protecting proprietary knowledge. Moreover, we infer that the FAIR accessibility of several solutions, exemplarily labelled solutions #1 and #2 in Figure 15, will contribute not only to well-informed decision-making by revealing optional solutions but also to a resilient approach. Exemplarily, for a composite specimen that was manufactured by using a heat ramp of 2 K/min and a 0.5 min dwelling time at 90 °C, a porosity amounting to 2.26-fold of *t_01 was found based on light microscopical inspection of a cross-cut region.
Table A1. List of the settings for descriptor variables and experimentally determined responses for the DoE approach that was performed based on Table 1 and that provided the findings presented in Figure 13 and Table 3.
Figure 1. Material innovation with changed or new processes; demands from society leading to sustainable products.
Figure 2. An iterative group interaction cycle to reveal, from an organisation's/enterprise's point of view, which are the tasks for which they may benefit on a strategic level from involving translators. It was sketched by the authors, inspired by the Group Task Circumplex developed by McGrath [20].
Figure 3. Measured temperature profiles according to the composite material batch number and process-related curing identifier (ID) for vacuum bagging (VB) or autoclave curing (AC) of stacked prepregs.
Figure 4. Representation of translator activities in manufacturing innovation when logging information received in monolingual communication (a) and when translating logged information from a source language to a protocol in the target language (b), as well as a simplified model of a protocol-based translation process (c). It was sketched by the authors, inspired by more comprehensive descriptions of translation activities developed by Bell [45].
Figure 5. Translation steps in materials modelling starting with an identified and formulated problem expressed by a business and an industrial case. The steps follow the EMMC Translators Guide, and the sketch shown here was reprinted by the authors from a presentation used by Klein et al. [11].
Figure 6. Iterative innovation translation cycle involving a translator, a client expressing a need #1 in the frame of an innovation case #1, and a knowledge provider contributing to fill identified knowledge gaps.
Figure 7. Representation of the iterative and recursive interaction between the translator and their client involving abductive thinking for materials innovation. It was sketched by the authors, inspired by a model for representing leaders' strategising developed by Calabrese and Costa [63].
Figure 8. Section of a value chain sketched from a prepreg manufacturer via a component part producer to an end user.
Figure 9. Light microscopical image of a cross-section through a composite after autoclave curing following process AC-2-0.
Figure 10. An iterative group interaction cycle for revealing, from a translator's point of view, which are the tasks on an operational data level for which the translator's client will benefit from involving the translator. The sketch builds on Figure 2.
Figure 11. Thermogravimetric (TGA) findings for the mass loss of a modified poly(furfuryl alcohol) (PFA) matrix resin during curing with a maximum temperature of 145 °C at a pressure of 0.1 MPa, applying (i) a heat ramp of 2 K/min without performing isothermal dwelling at 90 °C, and (ii) a heat ramp of 1 K/min with 75 min of isothermal dwelling at 90 °C.
Figure 12. Microscopy images of two types of regions in a cross-cut through a stacked composite prepreg after curing (heat ramp: 2 K/min; without applying an isothermal dwelling at 90 °C; maximum temperature of 145 °C applied for 75 min) when performing vacuum bagging at a pressure of 0.1 MPa: (a,b) contrasting light microscopy and SEM images, respectively, revealing a void; (c) higher-resolution SEM image showing the prevailing texture dominated by void-free matrix resin and fibre layers.
Figure 13. DoE contour plot showing a false colour representation (as a gradual heat map) of the established functional relation between the porosity from light microscopical assessment of cross-cuts through composite specimens prepared in a vacuum-bagging process from prepregs EMP-PP01, using the heat ramp x_1 and the dwelling time x_2 applied at a temperature of 90 °C. The labels of contour lines provide the manifolds of *t_01 as explained in the text.
Figure 14. Assessment of a linear correlation between the porosity of fibre-reinforced composite specimens, as obtained from light microscopy images of cross-cuts, and the density of composite samples, revealing a coefficient of determination R² = 0.95. The ordinate labels provide the manifolds of *t_01 as explained in the text.
Figure 15. Iterative innovation translation cycle involving a customer (i.e., a stakeholder, e.g., an end-user, a consumer, or a client) who expresses a need requiring innovation and an organisation that understands the need and involves translators for capturing the aspects of the innovation challenge related to manageable changes in business, manufacturing, and R&D, respectively, on a conceptual level (topmost cycle) and for having knowledge-based optional solutions provided in the frame of jointly identified industrial cases, thus allowing for filling knowledge gaps and facilitating fact-based decision-making (bottom).
Figure 16. Process and object conceptualisation in composite prepreg manufacturing, with material objects represented by rectangular boxes and process steps by boxes with rounded corners.
Table 1. Differentiation between the five curing conditions applied during four optional vacuum bagging (VB) and one autoclave curing (AC) processes, characterised by the heat ramps applied between room temperature (RT) and 90 °C and between 90 °C and 145 °C, by the isothermal dwelling time at 90 °C, and finally by maintaining 145 °C for 75 min.

Curing ID | Composite Material | VB Process | Heat Ramp RT-90 °C, 90-145 °C | Dwelling Time at 90 °C
is presented. The performance of such a composite is featured by competitive mechanical properties, exemplified by an ILSS of 36.8 ± 2.0 MPa. | 26,719 | 2024-04-22T00:00:00.000 | [
"Engineering",
"Business",
"Environmental Science"
] |
Targeting the CCL2-CCR4 axis suppresses cell migration of head and neck squamous cell carcinoma
For head and neck squamous cell carcinoma (HNSCC), local invasion and distant metastasis represent the predominant causes of mortality. Targeted inhibition of chemokines and their receptors is an ongoing antitumor strategy built on the crucial roles of chemokines in cancer invasion and metastasis. Herein, we showed that C-C motif chemokine ligand 2 (CCL2)-C-C motif chemokine receptor 4 (CCR4) signaling, but not the CCL2-C-C motif chemokine receptor 2 (CCR2) axis, induces the formation of the vav guanine nucleotide exchange factor 2 (Vav2)-Rac family small GTPase 1 (Rac1) complex to activate the phosphorylation of myosin light chain (MLC), which is involved in the regulation of cell motility and cancer metastasis. We identified that targeting CCR4 could effectively interrupt the activation of HNSCC invasion and metastasis induced by CCL2, without promoting the cancer relapse observed during the subsequent withdrawal period. All current findings suggest that CCL2-CCR4-Vav2-Rac1-p-MLC signaling plays an essential role in cell migration and cancer metastasis of HNSCC, and that CCR4 may serve as a new potential molecular target for HNSCC therapy.
INTRODUCTION
Cell migration is a crucial process in the invasion and metastasis of cancer [1,2]. An increasing body of evidence has revealed that targeted drugs that disrupt cell migration have led to significant improvements in the five-year survival rates of prostate and breast cancer patients with metastasis [3]. Notably, chemokines and their receptors are essential coordinators of the directed migration of cancer cells and of cell-cell interactions, and they significantly impact tumor development. Therefore, targeted inhibition of chemokines or their receptors represents a persistent focus for optimizing antitumor strategies [4].
CCL2, also known as monocyte chemotactic protein-1 (MCP-1), has been shown to play critical roles in regulating tumor development and progression [5][6][7][8][9][10][11]. Although some encouraging results have been achieved in targeting CCL2 or its receptors as an antitumoral strategy, three clinical trials targeting the CCL2-CCR2 axis with a humanized neutralizing anti-CCR2 mAb (MLN1202) and a humanized monoclonal CCL2-neutralizing antibody (CNTO 888) were unsuccessful in suppressing tumor growth and metastasis in solid tumors [12][13][14]. More importantly, a recent study indicated that directly targeting CCL2 may provoke unexpected adverse effects, showing that cessation of CCL2 inhibition leads to a rebound in the number of circulating monocytes, increases angiogenesis, promotes metastases, and accelerates death in a breast cancer model [15]. Therefore, avoiding the adverse effects arising during the direct intervention of CCL2 represents an essential challenge of current CCL2-CCR2 axis-targeted antitumor therapy.
CCR4, another important receptor of CCL2, is overexpressed in many solid tumors and hematologic malignancies. A humanized anti-CCR4 antibody, Mogamulizumab, has been applied to treat relapsed/refractory adult T-cell leukemia (ATL) and cutaneous T-cell lymphoma (CTL) in Japan [16,17]. In solid tumors, an increasing number of studies have shown that CCL2, CCL17, and CCL22, together with CCR4 expression, induce cancer cell migration, EMT, and metastasis [18][19][20][21][22]. Given that CCR4 deficiency does not affect the infiltration and migration of monocytes, CCR4 inhibition to block the CCL2/CCR4 axis could be a promising novel antitumoral strategy to reduce the risk of rapid tumor recurrence and metastasis during the cessation of CCL2 inhibition treatment.
Local invasion and distant metastasis remain the primary causes of mortality in patients with HNSCC [23]. In the present study, we demonstrated that the CCL2/CCR4 interaction, but not the CCL2/CCR2 interaction, promoted HNSCC cell migration and invasion by inducing the formation of the Vav2-Rac1 complex to upregulate active Rac1 levels. Targeting CCR4 could be a promising migrastatic strategy to reduce cancer cell motility and metastasis in HNSCC without promoting the tumor relapse observed during the interruption of CCL2 inhibition.
revealed that patients with low CCL2 protein levels had longer disease-specific survival (DSS) than patients with high CCL2 protein levels (Fig. 1C). This indicated that high expression of CCL2 in HNSCC predicts a relatively shorter survival period, which may be associated with tumor progression promoted by excessive CCL2.
To verify the above speculation, we further determined the expression level of CCL2 in HNSCC tissues and additional HNSCC cell lines. As expected, IHC staining revealed that CCL2 expression in HNSCC tissues was significantly higher than that in adjacent noncancerous tissue (Fig. 1D and Supplementary Table 2). Next, we confirmed the overexpression of CCL2 in HNSCC cells by ELISA (Fig. 1E) and Western blot (Fig. 1F). Moreover, CCL2 overexpression in HNSCC cells appeared to be inherent, because high CCL2 expression remained unaffected by hypoxia (1% oxygen) and serum-free conditions in vitro (Fig. 1E, F).
As a chemokine, CCL2 must bind to its receptors to exert its molecular functions. Therefore, we focused on the two major receptors of CCL2, CCR2 and CCR4, and assessed the correlation between the expression levels of these two receptors and the DSS of HNSCC patients by Kaplan-Meier analysis. Unexpectedly, the expression level of CCR2, the widely studied classical receptor for CCL2, was positively correlated with DSS in HNSCC, whereas higher expression of CCR4 was correlated with poor prognosis in HNSCC patients (Fig. 1G). This indicated that the CCL2-CCR4 axis may play a more important role than the CCL2-CCR2 axis in the progression of HNSCC. We determined the protein level of CCR4 in HOK and HNSCC cells under normal culture conditions; the expression level of CCR4 in HNSCC cells was higher than that in HOK cells, although the CCR4 levels of the HNSCC cell lines examined in this study were relatively close to one another (Fig. 1H). We therefore selected the HSC6 and SCC15 cell lines for subsequent in vitro experiments on the basis of their higher expression of CCL2.
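For readers who wish to reproduce this type of survival comparison, a minimal sketch using the Python `lifelines` package is shown below; the input file `dss.csv`, its column names, and the high/low grouping are hypothetical placeholders, not data or code from this study.

```python
# Minimal sketch: Kaplan-Meier curves and a log-rank (Mantel-Cox) test for DSS
# stratified by CCR4 expression. File and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("dss.csv")  # columns: time_months, event, ccr4_group ("high"/"low")
high = df[df["ccr4_group"] == "high"]
low = df[df["ccr4_group"] == "low"]

kmf = KaplanMeierFitter()
ax = kmf.fit(high["time_months"], high["event"], label="CCR4 high").plot_survival_function()
kmf.fit(low["time_months"], low["event"], label="CCR4 low").plot_survival_function(ax=ax)

# Log-rank test comparing the two expression groups
result = logrank_test(high["time_months"], low["time_months"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.4f}")
```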
CCL2 promotes the motility of HNSCC cells through CCR4 in vitro and in vivo
According to our preliminary research, CCL2 treatment had no significant effect on the proliferation of HNSCC cells in vitro. However, we found that HNSCC cells treated with exogenous CCL2 (100 ng/ml) exhibited markedly enhanced motility compared with controls (Fig. 2A). To confirm the Transwell migration data, we performed wound healing assays to assess the capacity of the cells to migrate and repair the wound. As illustrated in Fig. 2B, both HSC6 and SCC15 cells became more motile after treatment with CCL2. Notably, cytoskeleton remodeling and filopodia are crucial for cell motility. We observed that the surface of tumor cells exhibited more numerous and longer filopodia after treatment with CCL2 for 2 h (Fig. 2C). Next, we performed immunofluorescence staining of β-Actin to assess the morphologic changes in HNSCC cells treated with CCL2. The results showed more spike-like protrusions extending in CCL2-treated cells compared with controls (Fig. 2D). These newly formed protrusions at the leading edge facilitated the efficient migration of HNSCC cells (Fig. 2D).
To explore the function of CCL2 in the development of HNSCC in vivo, a zebrafish model was used. As depicted in Fig. 2E, GFP-labeled SCC15 cells survived and remained visible for 3 days post-injection. CCL2 did not appear to significantly promote the proliferation of HNSCC cells, but it promoted the dissemination of the injected cells, which extravasated and engrafted mainly in the perivascular milieu of the caudal hematopoietic tissue, revealing a migratory phenotype. More importantly, siRNA-mediated knockdown of CCR4 significantly inhibited the CCL2-induced migration of these cells in the zebrafish model.
To decipher the molecular mechanism by which CCL2 enhances HNSCC cell migration, we blocked CCR2 and CCR4, both of which have been reported to be associated with cancer progression. We used a CCR2-specific antagonist (20 μM, RS102895 hydrochloride, MedChemExpress, Monmouth Junction, NJ), a CCR4-specific antagonist (100 nM, AZD2098, MedChemExpress, Monmouth Junction, NJ), or silencing short interfering RNA to block CCR2 and CCR4 in HSC6 and SCC15 cells. The results revealed that suppression of CCR4, but not CCR2, reversed CCL2-promoted HNSCC cell migration in both the Transwell assay and the wound healing assay (Fig. 2F-I). The sequences and knockdown efficiency of each siRNA in HNSCC cells are presented in Supplementary Table 3 and Supplementary Fig. 1. Moreover, ELISA and qRT-PCR demonstrated that CCL2 did not upregulate the levels of other functional ligands of CCR4, including CCL17 and CCL22, in HNSCC (Supplementary Fig. 2). Primers used for qRT-PCR are listed in Supplementary Table 4.
CCL2 enhances HNSCC cell motility by promoting Rac1-phosphorylated MLC (p-MLC) activation
Given that Rac1-p-MLC signaling has been reported to play an important role in cell motility by promoting the contractile motion of the myosin light chain, we blocked Rac1 with a specific inhibitor (100 μM, NSC 23766, MedChemExpress, Monmouth Junction, NJ) and found that cell migration was suppressed in CCL2-treated HNSCC cells (Fig. 3A, B). We then performed a pulldown assay to assess Rac1 activation and found that CCL2 treatment increased GTP-bound Rac1, but not total Rac1, in HNSCC cells (Fig. 3C). Interestingly, the increase of GTP-bound Rac1 induced by CCL2 could be blocked with siCCR4, but not with siCCR2, in HNSCC cells (Fig. 3D and Supplementary Fig. 3). Next, we used a specific siRNA to knock down MLC and found that cell migration was not enhanced in HNSCC cells treated with CCL2 (Fig. 3E, F), although the level of GTP-bound Rac1 remained upregulated (Fig. 3G). Moreover, the CCL2-induced phosphorylation of MLC was reversed by the Rac1 inhibitor in a CCR4-dependent manner in HNSCC cells (Fig. 3H and Supplementary Fig. 4). Taken together, these results identified Rac1-p-MLC signaling as the downstream arm of the CCL2-CCR4 pathway that promotes HNSCC cell migration. The sequences and knockdown efficiency of each siRNA in HNSCC cells are presented in Supplementary Table 3.
Fig. 1 Overexpression of CCL2 was inherent and predicted poor prognosis in HNSCC. A Representative images of chip detection. The RayBio human inflammatory cytokine antibody array was used to screen the expression profiles of chemokines and cytokines in cell culture supernatants of HNSCC cells grown in serum-free medium for 24 h. B Representative heatmaps of differentially expressed cytokines in HNSCC cells compared with HOK cells (Green: downregulated, Red: upregulated). CCL2 (MCP-1) is one of the overexpressed chemokines in the supernatants of HNSCC cells compared with HOK cells. C Kaplan-Meier survival curves of HNSCC patients with low and high CCL2 expression. Kaplan-Meier curves for disease-specific survival (DSS) in 518 HNSCC patients, classified by the relative (high or low) immune signal for CCL2 protein levels. The log-rank (Mantel-Cox) test p value reflects the significance of the correlation between lower CCL2 expression and longer survival outcomes. D Overexpression of CCL2 in HNSCC tissue. IHC staining revealed that CCL2 expression in HNSCC tissues (n = 180) was significantly higher than that in adjacent noncancerous tissue (n = 27) (Magnification, ×400; Bar: 50 μm; **P < 0.01). E High CCL2 expression in HNSCC cells was inherent and not attributable to the deprivation of serum or oxygen. ELISA assays revealed that the CCL2 level in cultured supernatants of HNSCC cells was significantly higher than that of HOK cells and not significantly related to the amount of fetal bovine serum or oxygen supply (**P < 0.01). F Inherent high CCL2 expression in HNSCC cells was confirmed by Western blot (*P < 0.05). G Kaplan-Meier survival curves of HNSCC patients with low and high CCR2 or CCR4 expression. Kaplan-Meier curves for disease-specific survival (DSS) in 518 HNSCC patients, classified by the relative (high or low) immune signal for CCR2 or CCR4 protein levels, respectively. The log-rank (Mantel-Cox) test p value reflects the significance of the correlation of higher CCR2 or lower CCR4 expression with longer survival outcomes. H High CCR4 expression in HNSCC cells was confirmed by Western blot (**P < 0.01).
Vav2 is required for Rac1 activation induced by CCL2-CCR4 signaling
We first detected the mRNA expression of Prex1, Prex2, Vav2, Vav3, and ECT2 in HNSCC cells by qRT-PCR (Supplementary Fig. 5).
To determine which GEF might be involved in Rac1 activation downstream of CCL2 stimulation in HNSCC cells, we performed a Co-IP assay. We found that CCL2 treatment for 2 h increased the association of Vav2 with Rac1 (Fig. 4A). In contrast, CCL2 treatment did not increase the association of Prex1 or ECT2 with Rac1 in HNSCC cells (Supplementary Fig. 6). We then examined the colocalization of Vav2 and Rac1 in the cytoplasm after CCL2 treatment. The results indicated that CCL2 stimulated the colocalization of Vav2 and Rac1 in the cytoplasm in a CCR4-dependent manner, as silencing of CCR4 with RNAi perturbed the CCL2-stimulated binding of Vav2 with Rac1 to form the Vav2-Rac1 complex (Fig. 4B).
To further study the role of Vav2 in CCL2-induced Rac1 activation, we used siRNA against Vav2 to downregulate Vav2 expression in HNSCC cells. Following knockdown of Vav2, Rac1 activation in response to CCL2 was impaired in HNSCC cells, as demonstrated by the downregulation of GTP-bound Rac1 (Fig. 4C). Moreover, we found that the CCL2-induced tyrosine phosphorylation of Vav2 was partly reversed by two PI3K-AKT pathway inhibitors, Wortmannin (2 μM, MedChemExpress, Monmouth Junction, NJ) and LY294002 (50 μM, MedChemExpress, Monmouth Junction, NJ), but was not affected by the Src inhibitor PP2 (1 μM, MedChemExpress, Monmouth Junction, NJ) in HNSCC cells (Fig. 4D, E and Supplementary Fig. 7). Therefore, CCL2 promoted the formation of the Vav2-Rac1 complex by inducing Vav2 phosphorylation through activation of the PI3K-AKT pathway. H-score analysis also revealed that the expression levels of CCR4, p-Vav2, and p-MLC in HNSCC tissues were significantly higher than those in normal tissues (Fig. 4F), which further supported the above results. Details of the human tissue microarray OR208 containing patient samples are given in Supplementary Table 2.
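The H-score analysis mentioned above is conventionally computed as the sum of each staining intensity (0-3) multiplied by the percentage of cells at that intensity, giving a 0-300 scale; the exact scoring scheme used here is not detailed in this excerpt, so the sketch below follows the conventional formula with purely illustrative percentages.

```python
# Conventional H-score: sum over staining intensities (0-3) of
# intensity x percentage of cells at that intensity (0-300 scale).
# The percentages below are illustrative, not measured values.
def h_score(pct_by_intensity):
    """pct_by_intensity maps intensity (0, 1, 2, 3) -> % of cells at that intensity."""
    assert abs(sum(pct_by_intensity.values()) - 100) < 1e-6, "percentages must sum to 100"
    return sum(intensity * pct for intensity, pct in pct_by_intensity.items())

example = {0: 10, 1: 20, 2: 40, 3: 30}   # hypothetical tumor core
print(h_score(example))                   # 0*10 + 1*20 + 2*40 + 3*30 = 190
```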
CCR4 antagonist inhibited CCL2-mediated HNSCC cell migration and invasion in an in vivo xenograft model
To confirm CCL2-induced HNSCC cell migration and invasion in vivo, we transduced SCC15 cells with a lentiviral-luciferase plasmid and selectively expanded the stably transduced cells. The mice were randomly divided into four groups as indicated in Fig. 5A. Tumor progression was monitored in live animals using an in vivo imaging system every week. The results revealed that mice implanted with SCC15 cells and treated with CCL2 neutralizing antibody, the CCR4 antagonist Mogamulizumab, or the combination exhibited lower luminescence signals at local sites compared with normal saline (NS) (Fig. 5B, C). As expected, we also found that treatment with the CCR4 antagonist or the combination inhibited growth rates to a similar extent as the CCL2 neutralizing antibody during treatment (Fig. 5B, C).
To assess tumor relapse and metastasis, treatment was then withheld for 3 weeks. The results revealed that cessation of the CCL2 neutralizing antibody caused a more rapid recurrence of HNSCC than cessation of the CCR4 antagonist (Fig. 5B-D). Moreover, there was no statistical difference in tumor size between the CCR4 antagonist and combination groups after cessation of treatment for 3 weeks (Fig. 5B-D). Notably, CCL2 and the CCR4 antagonist did not significantly change the cell cycle or proliferation of HNSCC cells (Supplementary Fig. 8).
With regard to metastasis, the CCR4 antagonist reduced lymph node metastasis more effectively than the CCL2 neutralizing antibody (Fig. 5E). As anticipated, H&E and IHC staining of the metastatic foci confirmed micrometastases of HNSCC cells (Fig. 5E). At the same time, IHC staining of the transplanted tumors indicated that the CCR4 antagonist effectively downregulated the phosphorylation of Vav2 and MLC in cancer cells, whereas the CCL2 neutralizing antibody did not (Fig. 5F). Since the expression levels of mouse CCL2 (mCCL2) in the transplanted tumors and mouse lymph nodes did not differ statistically among the groups, mCCL2 expression was considered unlikely to affect the motility of HNSCC cells in this xenograft model (Fig. 5F).
Together, these results suggested that CCL2 promotes the formation of the Vav2-Rac1 complex to activate Rac1 and thereby induce cell migration via CCR4. A proposed model for the regulatory role of CCL2 in HNSCC cell motility is schematically summarized in Fig. 6.
DISCUSSION
Although an increasing number of studies have shown that the CCL2-CCR2/CCR4 axis plays a crucial role in the progression and metastasis of multiple solid tumors [5], the exact molecular mechanisms of this axis in HNSCC and its clinical implications remain elusive. In the present study, we demonstrated that the CCL2-CCR4 axis, not CCL2-CCR2 signaling, mediates the CCL2-induced migration and invasion of human HNSCC cells, and that targeted inhibition of CCR4 significantly inhibited the invasion and metastasis of HNSCC xenografts in nude mice without causing the relapse observed after cessation of CCL2 antibody therapy.
Previous studies have reported that overexpression of CCL2 and its receptors CCR2 and CCR4 in human cancers promotes lung cancer, breast cancer, and HNSCC development via regulation of angiogenesis, cell proliferation, and migration [5, 7-9, 24, 25]. All these findings suggested that CCL2 is involved in carcinogenesis and tumor progression. However, which receptor, CCR2 or CCR4, plays the critical role in activating the downstream signaling molecules of CCL2 remained obscure. Our study identified that CCL2 promoted the migration of HNSCC cells via CCR4, not CCR2. Furthermore, we found that CCL2 and CCR4 were highly expressed in HNSCC cancer cells, indicating that CCL2 secreted by cancer cells may stimulate the cells themselves through autocrine or paracrine function. Together, these results suggested that CCL2-CCR4 signaling plays an essential role in the local invasion and metastasis of HNSCC, and that CCR4 might serve as a potential therapeutic molecular target for inhibiting tumor invasion and metastasis [26].
Fig. 2 (legend continued): (Fig. 2B, Magnification, ×50; Bar: 400 μm; *P < 0.05; **P < 0.01). C SEM was used to detect the pseudopodia in HNSCC cells treated with CCL2 (100 ng/mL) for 2 h. Data on percent pseudopodia area were analyzed using ImageJ (Magnification, ×3000; Bar: 5 μm; **P < 0.01). D Confocal images showed the increased cytoskeleton remodeling in HNSCC cells induced by CCL2 (100 ng/mL) for 2 h (Original magnification, ×630; Bar: 20 μm; red arrows; stained with green fluorescence for β-actin and blue fluorescence for DAPI). E CCL2 promoted cell motility and distant metastasis of HNSCC cells in zebrafish, but the effect was abolished with siCCR4. HNSCC cells were labeled with GFP and implanted into zebrafish embryos. After implantation, the embryos were monitored using fluorescence microscopy and LSCM for 3 days to follow the migration of the implanted cells (Original magnification, ×50; Bar: 1000 μm). F-I Transwell migration assays (F, G) and wound healing assays (H, I) were performed in HNSCC cells cultured with CCR4 inhibitor (100 nM) and CCR2 inhibitor (20 μM), or siCCR4 and siCCR2, respectively (Fig. 2F, G, Magnification, ×100; Bar: 200 μm; Fig. 2H, I, Magnification, ×50; Bar: 400 μm; *P < 0.05; **P < 0.01; NS no statistical significance).
Rac1, a member of the Rac subfamily of Rho-GTPases, is a pleiotropic regulator of multiple cellular processes, including cell motility [27][28][29]. Vav2 has been reported to promote cell motility by driving the cycling of Rac1 from an inactive GDP-bound state to an active GTP-bound state in many cancer types [30][31][32]. However, whether Vav2 and Rac1 exert their pleiotropic effects as a complex in cancer cells treated with CCL2 remained unclear. We found that pharmacological inhibition and genetic suppression of Vav2 inhibited the CCL2-mediated activation of Rac1 in human HNSCC cells in a CCR4-dependent manner, indicating that Vav2 plays a critical role in Rac1 activation in CCL2-treated HNSCC cells. Moreover, Co-IP and confocal microscopy results revealed that the binding of Vav2 to Rac1 was enhanced in cancer cells following treatment with CCL2, indicating that CCL2 induces the functional coupling of Vav2 and Rac1 and the formation of the Vav2-Rac1 molecular complex.
Next, we investigated the downstream targets of Rac1 that regulate cell motility. As previously reported, remodeling of the cytoskeleton and myosin contraction are crucial for cell movement [33]. Studies have shown that several factors, including Rac1, can promote the phosphorylation of MLC [28,34,35]. However, whether CCR4 promotes MLC phosphorylation through the Vav2/Rac1 signaling pathway and thereby induces HNSCC migration had not been reported. In this study, we observed that the CCL2-CCR4 axis effectively induced the activation of Rac1 and increased MLC phosphorylation in HNSCC cells. Moreover, our results indicated that a Rac1 inhibitor effectively suppressed CCL2-CCR4-induced Rac1 activation and MLC phosphorylation. More importantly, we observed that MLC inhibition could not block the activation of Rac1 induced by CCL2-CCR4, whereas it inhibited the phosphorylation of MLC and suppressed cell migration in HNSCC cells. These findings collectively indicated that MLC is the main downstream target of Vav2-Rac1 signaling and plays a critical role in the migration of CCL2-treated HNSCC cells. The present study thus shows that CCR4-Vav2-Rac1-MLC signaling participates in cell migration in HNSCC.
To verify the therapeutic potential of targeting CCL2-CCR4 signaling in a solid tumor, we used Mogamulizumab, a humanized anti-CCR4 monoclonal antibody and a promising agent for CCR4-positive T-cell lymphomas [36], to treat subcutaneously implanted HNSCC tumors in nude mice. Our results showed that Mogamulizumab effectively inhibited local invasion and distant lymph node metastasis by blocking CCL2-CCR4-Vav2-Rac1-MLC signaling in the HNSCC xenograft model. Because a previous study reported that suspension of CCL2 neutralizing antibody therapy can lead to rapid tumor recurrence owing to monocyte release from the bone marrow and blood vessel formation [15], we also discontinued Mogamulizumab therapy to investigate tumor recurrence. We observed tumor recurrence after cessation of both Mogamulizumab and CCL2 neutralizing antibody therapy; however, we did not observe abundant monocyte infiltration or highly active angiogenesis in tumor lesions after discontinuation of Mogamulizumab therapy. It therefore appears reasonable that the recurrence of tumors after suspension of Mogamulizumab therapy was slower than that after discontinuation of CCL2 neutralizing antibody therapy. Therefore, CCR4 represents an effective target for inhibiting local invasion and distant metastasis in HNSCC while circumventing the rebound phenomenon that can occur during the cessation or interruption of CCL2 neutralizing antibody therapy.
In summary, the present study has identified and confirmed that CCL2 enhances HNSCC cell metastasis via activation of the CCR4-Vav2-Rac1-MLC signaling axis. In addition, targeting CCR4 to block this newly identified signaling may help in the development of alternative strategies for suppressing HNSCC metastasis.
Human cytokine antibody array
Culture supernatant was used to analyze cytokine profiles with the RayBiotech® Human Cytokine Antibody Array C Series 4000 (RayBiotech, Norcross, GA, USA) following the manufacturer's instructions. A total of 291 cytokines were evaluated, as listed in Supplementary Table 1. The detection was carried out by Aksomics Inc. (Shanghai, CN). Briefly, HOK and SCC15 cells were cultured in serum-free medium, and supernatants of the cell cultures were collected after 24 h. The array slides were blocked with blocking buffer, and the antibody arrays were incubated with culture supernatant overnight at 4°C, in triplicate. The slides were then extensively washed and incubated with biotin-conjugated primary antibodies for 2 h. After adequate washing, the slides were incubated with streptavidin-conjugated secondary antibodies for 1 h. Finally, data were normalized to internal positive and negative controls and are presented in Supplementary Table 1.
Fig. 3 The crucial role of Rac1-p-MLC activation in CCL2-CCR4 axis-induced HNSCC cell migration. A, B The Rac1 inhibitor restrained the migration induced by CCL2 in HNSCC cells. Transwell migration assays (A) and wound healing assays (B) were performed in HNSCC cells cultured with or without the Rac1 inhibitor (100 μM). The migration of HNSCC cells treated with the Rac1 inhibitor was significantly decreased compared with controls (Fig. 3A, Magnification, ×100; Bar: 200 μm; Fig. 3B, Magnification, ×50; Bar: 400 μm; *P < 0.05; **P < 0.01; NS no statistical significance). C CCL2 upregulated the activation of Rac1. The Thermo Scientific Active Rac1 Pull-Down and Detection Kit was used to detect GTP-bound Rac1, and the results indicated that the amount of GTP-bound Rac1 was significantly increased in HNSCC cells treated with CCL2 (*P < 0.05). D siCCR4 inhibited the activation of Rac1 induced by CCL2. The amount of GTP-bound Rac1 in HNSCC cells transfected with siCCR4 was detected and compared with that of the NC group. The results revealed that CCR4 inhibition abolished the upregulation of GTP-bound Rac1 induced by CCL2 (*P < 0.05). E, F MLC was essential for CCL2-induced HNSCC cell migration. Transwell migration assays (E) and wound healing assays (F) were performed in HNSCC cells transfected with siMLC and compared with the NC group. The migration of HNSCC cells transfected with siMLC was significantly reduced compared with the NC group (Fig. 3E, Magnification, ×100; Bar: 200 μm; Fig. 3F, Magnification, ×50; Bar: 400 μm; *P < 0.05; **P < 0.01; NS no statistical significance). G MLC is a downstream target of Rac1. The amount of GTP-bound Rac1 in HNSCC cells transfected with siMLC was detected and compared with that of the NC group. There was no statistically significant difference in the amount of GTP-bound Rac1 between the siMLC group and the NC group, suggesting that MLC is a downstream target of Rac1 (NS no statistical significance). H CCL2-CCR4 signaling induced the upregulation of p-MLC through activation of Rac1. The levels of p-MLC in HNSCC cells with siCCR2 or siCCR4 were detected and compared after culture with or without the Rac1 inhibitor (100 μM). The results indicated that CCR4 (but not CCR2) inhibition significantly suppressed the CCL2-induced upregulation of p-MLC in HNSCC cells. Moreover, the results confirmed the crucial role of Rac1 in the upregulation of p-MLC induced by CCL2-CCR4 signaling (**P < 0.01; NS no statistical significance).
Enzyme-linked immunosorbent assay (ELISA)
CCL2 levels were measured in cell culture supernatants collected after 48 h under the respective conditions and quantified using a Human CCL2 ELISA Kit (Telenbiotech, Guangzhou, GD, CN) according to the manufacturer's instructions. CCL17 and CCL22 levels were measured in cell culture supernatants collected from HSC6 and SCC15 cells cultured with or without CCL2 (100 ng/mL) for 24 h and analyzed with Human CCL17 and CCL22 ELISA Kits (Telenbiotech, Guangzhou, GD, CN) according to the manufacturer's instructions, respectively. Briefly, 50 μL of standard or sample was added to each well, followed by the addition of 100 μL of enzyme conjugate to the standard and sample wells (except the blank well), and incubated for 60 min at 37°C. The microtiter plates were washed five times and then incubated with freshly mixed substrate solution for 15 min at 37°C in the dark, and the reaction was stopped by adding 2 N H2SO4. The absorbance was measured at 450 nm with a microplate reader. The concentration of CCL2, CCL17, or CCL22 was calculated from a standard curve using the absorbance values.
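Concentration read-off from the standard curve can be done with a simple curve fit; the sketch below assumes a four-parameter logistic (4PL) model, which is a common choice for ELISA data, although the kit's own software or a different fit may have been used. All standard concentrations and absorbances are illustrative.

```python
# Minimal sketch: interpolating a sample concentration from an ELISA
# standard curve using a four-parameter logistic (4PL) fit.
# Standard concentrations/absorbances are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration, d: response at infinite concentration,
    # c: inflection point, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000])   # pg/mL (hypothetical)
std_od = np.array([0.12, 0.21, 0.38, 0.70, 1.25, 2.05])   # A450 (hypothetical)
params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 250.0, 2.5], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    # Invert the 4PL to recover concentration from a sample absorbance
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

print(od_to_conc(0.55, *params))   # estimated concentration, pg/mL
```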
Human tissue microarrays
Human tissue microarray OR208 was purchased from Alenabio (Xi'an, SN, CN) for this study. Each tissue microarray contains 60 cases of human oral squamous cell carcinoma and nine cases of normal oral epithelial tissue (three cores were taken per case, and a few samples lacked epithelial components). Details of the human tissue microarrays containing patient samples are shown in Supplementary Table 2.
Immunohistochemistry staining (IHC)
For IHC, the tissue sections or tissue microarrays were deparaffinized in xylene twice and rehydrated through a graded ethanol series (100, 95, 85, and 75% ethanol) and phosphate-buffered saline (PBS, pH 7.4) three times. Antigen retrieval was performed by heating the tissue sections at 60°C in 0.01 M sodium citrate buffer (pH 6.0) in a microwave oven for 20 min, followed by natural cooling to room temperature. Endogenous peroxidase activity was blocked with 3% hydrogen peroxide for 10 min. After rinsing three times with PBS, the tissue sections or tissue microarrays were incubated with normal goat serum for 10 min at room temperature to prevent nonspecific antibody binding. Subsequently, the sections were incubated with different primary antibodies at 4°C overnight. The sections or tissue microarrays were then washed in PBS and incubated with goat anti-rabbit IgG (Dako, Glostrup, Denmark) secondary antibody for 30 min at room temperature. The tissue sections or tissue microarrays were then stained with diaminobenzidine (DAB) as the substrate chromogen and counterstained with hematoxylin. Deltopectoral lymph nodes were used as positive controls, and negative controls were performed by substituting the primary antibody with primary antibody diluent. The specimens were then mounted and observed under a light microscope at ×100 and ×400 magnification. The antibodies used in this experiment are shown in Table 1.
Cell migration assay
To assess cell migration potential, a Transwell migration assay (8-μm pore size; Corning, Corning, NY, USA) was performed. Briefly, 5 × 10⁴ HSC6 cells/well or 1.2 × 10⁵ SCC15 cells/well were resuspended in 200 μL of serum-free medium and seeded into the upper chamber (Transwell insert), and 700 μL of medium supplemented with 10% FBS was added to the lower chamber as a chemoattractant. After 24 h of incubation, cells remaining in the upper inserts were carefully removed, and migrated cells were fixed in 4% paraformaldehyde for 10 min and stained with 0.5% crystal violet (Beyotime Institute of Biotechnology, Shanghai, China) for 15 min. The upper surface of the membranes was gently wiped, and cells that had migrated to the bottom surface of the membrane were quantified under the microscope at ×100 magnification. Experiments were performed in triplicate, and a minimum of three random fields per filter was counted using ImageJ. In addition, wound healing assays were performed. HSC6 and SCC15 cells were seeded separately into six-well plates at a density of 5 × 10⁵ cells/well and cultured as a monolayer for 24 h. A sterile 200-μL pipette tip was then held vertically to scratch across each well, and the wells were washed with PBS. The adherent cells were subsequently cultured in serum-free medium to allow wound healing. At 0 and 24 h, cellular migration toward the scratched area was monitored and imaged using a phase-contrast microscope at ×50 magnification. The scratch area from three independent experiments was measured with ImageJ.
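Quantification of the wound healing assay typically reduces to the percent closure of the scratch area between 0 and 24 h; a minimal sketch of that calculation is given below, with illustrative ImageJ area values rather than measured data.

```python
# Minimal sketch: percent wound closure from scratch areas measured in
# ImageJ at 0 h and 24 h. Numbers are illustrative, not study data.
def wound_closure_pct(area_0h, area_24h):
    """Closure (%) = (A0 - A24) / A0 * 100 for one scratch field."""
    return (area_0h - area_24h) / area_0h * 100.0

# Three independent scratches (areas in arbitrary ImageJ units)
replicates = [(1.00e6, 0.42e6), (0.95e6, 0.40e6), (1.05e6, 0.47e6)]
closures = [wound_closure_pct(a0, a24) for a0, a24 in replicates]
mean_closure = sum(closures) / len(closures)
print(f"mean closure: {mean_closure:.1f}%")
```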
Scanning electron microscopy (SEM)
Cells on coverslips were fixed with 2.5% glutaraldehyde overnight at 4°C. The coverslips were then dehydrated through a graded alcohol series (30, 50, 70, 80, 95, and 100% ethanol). After dehydration, the samples were dried with a critical point drier. The dried samples were sputter-coated with gold and viewed under a Hitachi S-3400N scanning electron microscope (Hitachi, Japan) at ×3000 magnification.
Immunofluorescence staining
Cells were seeded into observation dishes. After reaching 60-80% confluence, cells were fixed in 4% paraformaldehyde for 30 min at room temperature. After fixation, the cells were washed with PBS and permeabilized with PBS containing 0.5% Triton X-100 for 15 min at room temperature. Cells were then blocked with 3% BSA for 30 min at room temperature and subsequently probed with different primary antibodies overnight at 4°C. The cells were washed three times in PBS and incubated with secondary antibodies for 2 h in the dark at room temperature. The cells were then counterstained with 0.5 μg/mL DAPI (#4083s, CST, Danvers, MA, USA) for 5 min at room temperature and washed with PBS three times. The samples were analyzed using a laser scanning confocal microscope at the specified magnifications. The antibodies used in this experiment are shown in Table 1.
Transient transfection of siRNA
To investigate the function of the target proteins in HNSCC cells, corresponding small interfering RNA (siRNA) oligonucleotide duplexes (Ribobio, Guangzhou, GD, CN) were used. The siRNA oligonucleotides were transiently transfected into the target cells using Lipofectamine™ RNAiMAX Transfection Reagent (13778150, Invitrogen, Carlsbad, CA) according to the manufacturer's instructions. For each target gene, we designed three different siRNA sequences, and their transfection efficiency was quantified by qRT-PCR and Western blot, respectively (Supplementary Fig. 1). We selected the siRNA with the best inhibition efficiency against the corresponding target gene for the subsequent experiments; the siRNA sequences are presented in Supplementary Table 3.
Fig. 4 Vav2 is essential for CCL2-CCR4 signaling-induced Rac1 activation and cell migration. A CCL2 stimulated the formation of the Vav2-Rac1 complex. HNSCC cells were treated with CCL2 for 2 h. The cell lysates were subjected to immunoprecipitation (IP) with Rac1 or Vav2 antibodies, respectively, and the immunoprecipitates were analyzed by Western blot with the indicated antibodies. The results revealed that CCL2 did not upregulate the expression of Vav2 or Rac1 but significantly induced Vav2-Rac1 complex formation in HNSCC cells treated with CCL2 (**P < 0.01). B CCL2 stimulated the colocalization of Vav2 and Rac1 via CCR4. HNSCC cells were treated with or without exogenous CCL2 (100 ng/mL) for 2 h and stained with green fluorescence (for Rac1), red fluorescence (for Vav2), and DAPI (nuclear stain). Confocal images showed that CCL2 promoted the colocalization of Vav2 and Rac1 in the cytoplasm (red arrows); however, this was abolished with siCCR4 (Original magnification, ×400; Bar: 5 μm). C Vav2-silencing short interfering RNA (siVav2) inhibited the activation of Rac1 induced by CCL2. HNSCC cells were either untreated or treated with negative control siRNA or siVav2 for 48 h and then treated with exogenous CCL2 (100 ng/mL) for 2 h. The amount of GTP-bound Rac1 was significantly reduced in HNSCC cells transfected with siVav2 compared with the NC group (*P < 0.05). D CCL2 induced the upregulation of p-Vav2 through PI3K signaling. HNSCC cells were incubated in culture medium with or without PI3K inhibitors (LY294002, 50 μM; Wortmannin, 2 μM) for 24 h and then treated with exogenous CCL2 (100 ng/mL) for 2 h. The cell lysates were analyzed by Western blot with antibodies against p-Vav2 and total Vav2. The results revealed that PI3K inhibition suppressed the upregulation of p-Vav2 induced by CCL2 (*P < 0.05). E CCL2 induced the formation of the Vav2-Rac1 complex in a PI3K-dependent manner. HNSCC cells were incubated in culture medium with or without the PI3K inhibitor (Wortmannin, 2 μM) for 24 h and then treated with exogenous CCL2 (100 ng/mL) for 2 h. HNSCC cell lysates were subjected to immunoprecipitation (IP) with Rac1 or Vav2 antibodies, respectively, and the immunoprecipitates were analyzed by Western blot with the indicated antibodies. The results revealed that CCL2 upregulated the phosphorylation level of Vav2 and significantly induced Vav2-Rac1 complex formation in HNSCC cells, and that the PI3K inhibitor abolished the CCL2-induced enhancement of Vav2-Rac1 complex formation and Vav2 phosphorylation (**P < 0.01). F Differential expression of CCR4, p-Vav2, and p-MLC between normal tissue and HNSCC tissue. IHC staining was used to detect CCR4, p-Vav2, and p-MLC expression in normal and HNSCC tissue. The results revealed that the expression of CCR4, p-Vav2, and p-MLC was significantly higher in HNSCC tissues than in normal tissues (Magnification, ×400; Bar: 50 μm; *P < 0.05; **P < 0.01).
RNA extraction and quantitative RT-PCR analysis
Total RNA was extracted from cell samples using TRIzol™ Reagent (15596026, Invitrogen, Carlsbad, CA) according to the manufacturer's protocol. RNA purity was assessed using an ND-1000 NanoDrop (NanoDrop Technologies, Wilmington, DE). High-fidelity cDNA was synthesized from purified total RNA using the PrimeScript™ RT Master Mix (RR036A, TaKaRa, Kusatsu, JPN). cDNAs were amplified using LightCycler® 480 SYBR Green I Master mix (04707516001, Roche, Basel, CH) on a Roche LightCycler® 96 Instrument according to the manufacturer's instructions. The amplification reaction included an initial denaturation at 95°C for 5 min, followed by 45 cycles of denaturation at 95°C for 10 s, annealing at 60°C for 20 s, and extension at 72°C for 30 s. The expression level of each target mRNA was normalized to the internal control GAPDH, using the mean value of three replicates. Relative gene expression was determined using the 2^−ΔΔCT method. Primers used for qRT-PCR are listed in Supplementary Table 4.
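The 2^−ΔΔCT calculation referenced above can be written out explicitly; the sketch below uses hypothetical Ct values (e.g., for CCR4 after siCCR4 transfection versus the negative control) purely to illustrate the arithmetic.

```python
# Minimal sketch of the 2^-ΔΔCT calculation used for relative expression,
# with GAPDH as the internal control. Ct values below are illustrative.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Return fold change of the target gene versus the reference (control) sample."""
    delta_ct = ct_target - ct_gapdh              # normalize to GAPDH in the test sample
    delta_ct_ref = ct_target_ref - ct_gapdh_ref  # normalize to GAPDH in the control sample
    delta_delta_ct = delta_ct - delta_ct_ref
    return 2.0 ** (-delta_delta_ct)

# e.g., a target gene in siRNA-transfected cells vs. negative-control cells
print(relative_expression(ct_target=27.8, ct_gapdh=18.1,
                          ct_target_ref=25.2, ct_gapdh_ref=18.0))  # ~0.18 (~82% knockdown)
```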
Lentivirus production and cell transfection
The luciferase fusion plasmids and GFP fusion plasmids were purchased from GeneCopoeia™ (EX-NEG-Lv217, GeneCopoeia™, Rockville, MD). Lentiviral vector production was performed using the Lenti-Pac™ HIV Packaging Kit (GeneCopoeia™, Rockville, MD) according to the manufacturer's protocols. HEK-293 cells were transfected with the packaging mixture and a lentiviral luciferase or GFP plasmid vector, respectively. Conditioned medium from the transfected HEK-293 cells was used to infect HNSCC cell lines. After transduction, HNSCC cells were incubated in 2.5 μg/mL puromycin (58-58-2, BioFroxx, Einhausen, GER) to select the transduced cells.
Larval zebrafish transplantation
Adult zebrafish were acquired from the Laboratory Animal Center, Sun Yat-sen University, and 15-20 fish were reared in a circulating tank system at 28.5°C with a 14:10 h light/dark cycle. 1-Phenyl-2-thiourea (PTU, 0.1 mM) was used to inhibit melanogenesis and generate transparent zebrafish. At 48 h post-fertilization, approximately 100 GFP-labeled HNSCC cells resuspended in 10 nL of serum-free medium were directly transplanted into the circulation via the Duct of Cuvier. The fish were imaged immediately after transplantation, and any fish that did not receive the proper number of cells was discarded. Fish were imaged again at 3 days post-transplant.
Active GTPase pulldown
To detect GTPase activity, levels of active GTP-bound Rac1 were assessed with an Active Rac1 Pull-Down and Detection Kit (16118, Thermo Fisher Scientific, Waltham, MA, USA). Briefly, samples were lysed in GST lysis buffer containing 1% protease inhibitor cocktail, and cell lysates containing equal amounts of protein were incubated overnight with GST-human Pak1-PBD, which contains the binding domain of the Rac1 effector PAK1. GDP- and GTPγS-loaded lysates were used as negative and positive pulldown controls, respectively. Pulldown samples were eluted in 2X SDS sample buffer and detected by Western blot for Rac1. The level of GTP-bound Rac1 in each sample was normalized to its total protein and compared across conditions.
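Normalization of the pulldown signal is a simple ratio of the GTP-bound Rac1 band to the total Rac1 band; the sketch below illustrates this with hypothetical densitometry values and expresses the result as fold activation over control.

```python
# Minimal sketch: normalizing GTP-bound Rac1 band intensity to total Rac1
# for each condition, then expressing fold activation versus control.
# Densitometry values are illustrative, not measured data.
def rac1_activation(gtp_band, total_band):
    """Active-fraction proxy: GTP-bound Rac1 signal / total Rac1 signal."""
    return gtp_band / total_band

control = rac1_activation(gtp_band=1200.0, total_band=9800.0)
ccl2 = rac1_activation(gtp_band=3100.0, total_band=10100.0)
print(f"fold activation (CCL2 vs. control): {ccl2 / control:.2f}")
```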
Western blot
Cells were harvested, and total protein was extracted by lysis in RIPA buffer (P0013B, Beyotime, Shanghai, CN) supplemented with 1% protease inhibitor cocktail (CW2200s, CWbio, Beijing, CN) and 1% phosphatase inhibitor cocktail (CW2383S, CWbio, Beijing, CN) on ice for 30 min. Protein concentration was quantified using a BCA Protein Assay Kit (CW0014, CWbio, Beijing, CN) according to the manufacturer's protocol. Equal amounts of protein were separated on 10% sodium dodecyl sulfate-polyacrylamide gels (SDS-PAGE), and the proteins were then electrophoretically transferred onto polyvinylidene difluoride (PVDF) membranes with 0.22 μm pores (Millipore, MA, USA). Membranes were blocked with 5% BSA for 1 h at room temperature. Subsequently, the membranes were incubated with specific primary antibodies overnight at 4°C, followed by incubation with the corresponding HRP-conjugated secondary antibodies for 1 h at room temperature. The target bands were visualized using an enhanced chemiluminescence (ECL) detection system (Millipore, MA, USA). The antibodies used in this experiment are shown in Table 1.
Immunoprecipitation (IP)
Cells were lysed in IP lysis buffer (NP-40 lysis buffer, P0013F, Beyotime, Shanghai, CN) supplemented with 1% protease inhibitor cocktail and 1% phosphatase inhibitor cocktail. Cell samples were washed with ice-cold PBS three times and lysed in IP lysis buffer for 30 min on ice. Samples were then centrifuged at 15,000 × g for 15 min at 4°C. The supernatant was collected and incubated with the corresponding primary antibody overnight at 4°C. Protein A/G Magnetic Beads (HY-K0202, MedChemExpress, Monmouth Junction, NJ, USA) were added and incubated for a further 2 h at 4°C. The immunoprecipitated beads were then washed three times with cold PBS and boiled for 10 min in 2X SDS loading buffer, and the eluates were subjected to Western blot analysis.
Mouse model
The mouse models used in this study were BALB/c nude mice. Mice were cared for in accordance with the Regulations of Guangdong Province on the Administration of Experimental Animals. BALB/c nude mice were acquired from the Laboratory Animal Center, Sun Yat-sen University. Mice were housed under specific pathogen-free conditions with a 12-h light/dark cycle and ad libitum access to tap water and food. We transduced SCC15 cells with a lentiviral-luciferase plasmid and selectively expanded the stably transduced cells. Each nude mouse was injected subcutaneously in the axilla with 2.5 million luciferase-labeled SCC15 cells for subcutaneous tumor formation. After tumor implantation, the mice were randomly divided into four groups: normal saline (NS), CCL2 neutralizing antibody, CCR4 antagonist (Mogamulizumab), and combination (CCL2 neutralizing antibody combined with CCR4 antagonist), given by intraperitoneal injection every 3 days. We monitored tumor progression in live animals using an in vivo imaging system every week during the treatment and withdrawal periods.
Fig. 5 CCR4 monoclonal antibody blocks the activation of Vav2-Rac1-MLC signaling induced by CCL2 and inhibits HNSCC growth and invasion in vivo. A Experimental design of the nude mouse experiments. Twenty-four nude mice were randomly divided into four groups, with six mice in each group. Each nude mouse was injected subcutaneously in the axilla with 2.5 million luciferase-labeled SCC15 cells for subcutaneous tumor formation (week 0). In weeks 1-3, the four groups of nude mice were treated as indicated in the flow chart. All treatments were terminated during weeks 3-6, and all mice were euthanized at the 7th week. The axillary lymph nodes and tumors were collected for IHC staining and weight measurement. B, C Bioluminescence analysis of nude mice. Bioluminescence images were obtained once a week from the 1st to the 6th week to analyze tumor growth and local invasion. There was no significant difference in tumor size among the four groups before treatment (1st week). After 2 weeks of treatment, tumor sizes in the three treatment groups were significantly smaller than in the control group (3rd week). With the suspension of treatment for 3 weeks, tumors in the CCL2 neutralizing antibody group progressed particularly rapidly, even approaching those of the control group. However, tumor progression in the CCR4 monoclonal antibody group and combination group plateaued and was significantly slower than in the control group (6th week) (*P < 0.05; NS no statistical significance). D CCR4 monoclonal antibody inhibited HNSCC progression without rapid relapse after treatment withdrawal. All mice were euthanized at the 7th week, and tumor weights were measured. Consistent with the bioluminescence analysis, the tumor weight in the CCL2 neutralizing antibody group showed no significant difference from that of the control group, whereas the tumor weights in both the CCR4 monoclonal antibody group and the combination group were lower than in the control group (**P < 0.01; NS no statistical significance). E CCR4 monoclonal antibody inhibited the lymph node metastasis of HNSCC. Representative images of HE and luciferase staining (Luc IHC) in the axillary lymph nodes. The number of metastatic lymph nodes (positive LN) in both the CCR4 monoclonal antibody group and the combination group was lower than in the control group, whereas the CCL2 neutralizing antibody group was an exception (Original magnification, ×400; Bar: 20 μm; *P < 0.05; NS no statistical significance). F CCR4 monoclonal antibody inhibited the activation of Vav2-Rac1-MLC signaling in HNSCC. Representative images of CCR4, p-Vav2, and p-MLC staining in the xenograft tumors. Although there was no significant difference in CCR4 staining among the groups, the p-Vav2 and p-MLC levels in both the CCR4 monoclonal antibody group and the combination group were weaker than in the CCL2 neutralizing antibody group and the control group. In addition, the expression levels of mouse CCL2 (mCCL2) in the transplanted tumors and mouse lymph nodes (LN) of each group were not statistically different, indicating that mCCL2 expression did not affect the motility of HNSCC cells in this xenograft model (Magnification, ×400; Bar: 50 μm; *P < 0.05; NS no statistical significance).
In vivo imaging system
Xenograft tumors were generated by subcutaneous injection of luciferase-transduced HNSCC cells. Mice then underwent in vivo imaging using the In Vivo Imaging System (IVIS) Spectrum (PerkinElmer) with Living Image software (version 4.4). For bioluminescence studies, animals received an intraperitoneal injection of 200 μL of D-Luciferin (Promega, 1 g dissolved in 66.667 mL DPBS), and anesthesia was induced with 3% isoflurane (RWD) in O2 at a flow rate of 2 L/min for 8-10 min. Luciferase-positive regions were then captured, and the signal intensity was evaluated at the indicated time points post-injection. Mice were imaged once a week and monitored until 3 weeks after drug withdrawal.
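The luciferin preparation described above (1 g in 66.667 mL DPBS, 200 μL injected per mouse) implies a fixed dose per animal; the arithmetic below works this out, assuming a nominal 20 g mouse for the per-kilogram figure, which is not stated in the text.

```python
# Dose implied by the luciferin preparation above: 1 g in 66.667 mL DPBS,
# 200 µL injected per mouse. The 20 g body weight is an assumption for
# illustration only; it is not stated in the text.
stock_mg_per_ml = 1000.0 / 66.667   # ≈ 15 mg/mL D-luciferin stock
dose_mg = stock_mg_per_ml * 0.200   # 200 µL injection ≈ 3 mg luciferin per mouse
dose_mg_per_kg = dose_mg / 0.020    # assuming a 20 g mouse -> ≈ 150 mg/kg
print(stock_mg_per_ml, dose_mg, dose_mg_per_kg)
```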
Flow cytometry
The effect of CCL2 and the CCR4 antagonist (Mogamulizumab) on the cell cycle of HNSCC cells was monitored using flow cytometry. Briefly, HSC6 and SCC15 cells were seeded in 6-well plates at a density of 5000 cells/well. HNSCC cells were cultured with CCL2 (100 ng/ml) combined with or without the CCR4 monoclonal antibody (1 μg/ml) and compared with an untreated group. After 48 h, the cell cycle of HSC6 and SCC15 cells was analyzed with a cell cycle detection kit (CCS012, MULTI SCIENCES). SCC15 and HSC6 cells were incubated with DNA staining solution and permeabilization reagent for 30 min, according to the manufacturer's instructions. The proportions of cells in the G1, S, and G2/M phases in each group were analyzed by flow cytometry (CytoFLEX).
Fig. 6 The proposed mechanism by which CCL2 enhances the cell motility of HNSCC. The proposed model describes the effect of CCL2 on cell migration and tumor metastasis in HNSCC. From a macroscopic perspective, HNSCC can metastasize to distant sites through lymphatics and blood vessels, among which cervical lymph node metastasis is the most frequent. From a microscopic perspective, HNSCC cells secrete CCL2, which binds to the membrane receptor CCR4 in an autocrine or paracrine manner. The PI3K pathway then transduces the extracellular signal into the cell and activates the Vav2-Rac1-MLC signaling axis. Finally, CCL2 promotes HNSCC cytoskeletal remodeling and pseudopodia formation, enhances the motility of HNSCC cells, and promotes the metastasis of HNSCC. | 10,642.8 | 2022-02-01T00:00:00.000 | [
"Biology",
"Medicine",
"Chemistry"
] |
Mesozoic micropalaeontology of exploration well Elf 55/30–1 from the Fastnet Basin, offshore southwest Ireland
The geology, biostratigraphy and palaeoecology of exploration well Elf 55/30–1 in the Fastnet Basin are summarised. The biostratigraphical and ecological distribution of the foraminifera and Ostracoda from the late Triassic, the Lower Jurassic and the Lower Cretaceous are reviewed with reference to microfaunas elsewhere in Europe. Selected microfossil taxa are illustrated.
INTRODUCTION
This paper presents a biostratigraphical and palaeoecological description of the foraminifera and Ostracoda from part of the late Triassic, the Lower Jurassic and Lower Cretaceous of exploration well Elf 55/30-1. Other studies of the Mesozoic microfaunas within the Celtic Sea area include that of Colin et al. (1981), describing the Cretaceous and late Jurassic microfaunas from the Esso-Marathon wells within quadrants 47, 48, 56 and 57; and a study discussing the 12 microfaunal and microfloral associations recognised in the Lower Cretaceous in the Fastnet Basin (Ainsworth et al., 1985).
GEOLOGY
Elf 55/30-1 was the first of ten wells drilled within the Fastnet Basin, which is situated at the southwestern end of the North Celtic Sea Graben, approximately 140 km south of Ireland (Fig. 1). The fault-bounded basin is elongate, measuring 110 km long and 40 km wide, and trends northeast to southwest (Naylor & Shannon, 1982). The well was spudded on 18th April 1976 and was plugged and abandoned on 28th June 1976. It was drilled in 130 m of water and reached a total depth of 2800 m, terminating in Devonian rocks.
The geology of the Fastnet Basin has been described by Robinson et al. (1981) and by Naylor & Shannon (1982). Early Triassic continental red beds unconformably overlie the Devonian red beds and tuffs. These sediments are succeeded conformably by the Liassic Limestone sequence characteristic of the Lower Jurassic marine transgression. In the basal part of this limestone unit both littoral and non-marine microfaunas occur, indicating a transgressive/regressive shoreline in this area during the Rhaetian and early Hettangian. A shallow marine environment became established in the late Hettangian and earliest Sinemurian. A marly shale of Sinemurian age overlies this limestone, above which is a Sinemurian sandstone sequence thought to represent a delta front. These sandstones pass into low energy outer-shelf calcareous shales of Sinemurian-Pliensbachian age. The early Toarcian is represented by shallow marine shales with much lignite and gypsum, suggesting a close proximity to land.
A major unconformity, the result of Cimmerian movements, occurs between the Cretaceous and the Lower Jurassic. The lowermost Cretaceous rocks consist of non-marine sandstones and shales, devoid of in situ microfaunas, typical of the 'Wealden' facies. These pass into the Barremian marine clays, sandstones and lignites containing moderately abundant marine fossils. This marine incursion continued to deepen until the earliest Albian, after which a regressive Greensand phase of Albian to early Cenomanian age is represented initially by a marginal marine facies and later by an inner shelf facies. This is followed by the late Cretaceous Chalk marine transgression spanning the Cenomanian to Campanian Stages.
The Tertiary sediments unconformably overlie the Cretaceous and consist of a thick inner shelf limestone succession of Middle Eocene to Oligocene age. This passes into open marine Miocene to Pliocene clays with a topmost unit of arenaceous sediments.
BIOSTRATIGRAPHY AND PALAEOECOLOGY
One hundred and eleven cuttings samples were examined, 14 from the Lower Cretaceous and lowermost Upper Cretaceous, and 97 from the Lower Jurassic and part of the late Triassic. The samples were taken at either 5 m or 10 m intervals, which in some cases has led to assemblages from different stages becoming mixed, especially in the condensed Lower Cretaceous sequence.
The more important foraminiferal and ostracod taxa from the Lower Cretaceous, the Lower Jurassic and late Triassic are listed below. The first occurrences mentioned represent the topmost occurrence of each microfossil in the well (Figs. 2, 3). The monospecific Orbitolina fauna at the top of the Greensand appears to belong to the same form group (equivalent to form group IV of Hofker, 1963, range late Albian to early Cenomanian) as those described from the Upper Greensand of Britain and France, which Carter & Hart (1977) have shown to be of early Cenomanian age. This widespread fauna reflects the northward movement of the Tethyan province due to climatic amelioration and the associated transgression in the late Albian and Cenomanian (Price, 1967).
Ostracoda: Cytherella sp. Only fragmentary evidence of Ostracoda was seen in the early Cenomanian cuttings.
Hechticythere derooi, Neocythere (C.) gottisi
The overall assemblage resembles that described from the early Albian of Rumania by Neagu (1965), and is also similar to the fauna recorded from the Gault Clay of Britain (Chapman, 1891-1898) but is more Tethyan in aspect. The main characteristic of this fauna is the lack of planktonic foraminifera and the moderate diversity. This is probably a reflection of deposition in inner shelf conditions. Ostracoda: Bairdoppilata pseudoseptentrionalis Mertens, 1956, Cytherella gr. C. ovata (Roemer, 1841), Cytheropteron argutum Kaye, 1965 (Triebel, 1940), Rehacythereis sp. 1, Schuleridea sp. 1, Veenia? compressa Kaye, 1965b, V. florentinensis Damotte, 1961. Many of these species have been recovered by Colin et al. (1981) from the Albian and Cenomanian of the North Celtic Sea. The first definite Albian index species which occurred in this well was Veenia? compressa, which was recovered immediately below an interval without returns (1020 m-1045 m). This fauna has also been described by Kaye (1965a) in southern England, Germany (Gründel, 1966) and northern France (Damotte, 1971).
The Aptian and Barremian sequences are extremely condensed. As the samples examined were taken only every 10 m, there has been considerable mixing of assemblages, particularly in the sample from 1080-1090 m, which contains the Barremian index fossil Epistomina hechti with ostracods more typical of the Aptian to late Barremian (see below).
As mentioned above, due to the considerable mixing of the fauna, many of these species extend downwards into the latest Barremian intervals; however, all of these species do make their first downhole occurrences in the Aptian.
'Wealden' facies (1090 m-1242 m)
As in the rest of the Fastnet Basin, the sediments of the Wealden facies were devoid of in situ microfauna. This contrasts with the Celtic Sea Basin which, in the authors' experience, contains non-marine Ostracoda sufficient to zone the continental section of the Lower Cretaceous and the uppermost Jurassic. The facies in the two basins are similar, and there is no obvious explanation for these differences.
Lower Jurassic and late Triassic
Below the barren clays and sandstones of the 'Wealden' facies, there is an abrupt change to the grey shales of the Lias. Much of the following fauna persisted through several stages. This may be partly due to caving, so only the first downhole occurrences and subsequent downhole abundances are mentioned here.
The foraminiferal faunas are similar to those described by Copestake & Johnson (in press) from the Mochras Borehole in North Wales. The zonation given below is based on the ranges of index species established for the Lower Jurassic of Britain (Copestake & Johnson, 1981, 1984) and Europe (Bartenstein & Brand, 1937; Norling, 1972; Bate & Coleman, 1975). The fauna recovered was very sparse, with only a single specimen of the above index microfossil being recorded. The paucity of fauna is probably due to the marginal marine conditions; close proximity to land is indicated by the occurrence of gypsum and lignite within the grey/black shales. Bartenstein, 1937, Textularia aeroplecta Tappan, 1955, Trochammina cf. T. canningensis Tappan, 1955. The late Pliensbachian yielded a rich fauna of typical Upper Lias foraminifera which is very similar to that described by Copestake (1974) and Johnson (1975) from the Mochras Borehole and also to that described from the British mainland (Barnard, 1956, 1957, 1959) and northwest Germany (Bartenstein & Brand, 1937).
The fauna is diverse and numerous at the top and bottom of this interval, but there is a section between 1400 m-1450 m in which microfossils are rare or absent. This appears to correspond with the sparse foraminiferal faunas encountered in the early Pliensbachian of Britain (Copestake & Johnson, 1981). Sedimentation in this interval consists of dark grey shales with pyrite, suggesting basin stagnation with anoxic bottom conditions. Ostracoda: Bairdia aff. B. carinata Drexler, 1958, Isobythocypris aff. I. elongata (Tate & Blake, 1876), Pseudomacrocypris sp. 1, Indet. Gen.
Conversely, Reinholdella margarita only ranges up to the early Sinemurian in Britain (Copestake & Johnson, 1981) and its first downhole occurrence at 1600 m may therefore represent the top of the early Sinemurian. However, its range in France extends into the late Sinemurian and, until its range for this area is better established, it cannot be used as an early Sinemurian marker.
The fauna is abundant and diverse and includes a variety of long-ranging species not mentioned above, such as Eoguttulina spp. and Lenticulina spp. ex gp. L. muensteri.
While abundance is low through this interval there is moderate diversity, with numerous new taxa. These include undescribed species of Kinkelinella (Ektyphocythere) and some undescribed genera. Many specimens of Kinkelinella recovered from 55/30-1 show marked similarity to K. (E.) triebeli (Klinger & Neuweiler, 1959) recorded from the late Sinemurian of northern Europe.
The top of this interval is marked by the flood occurrence of Reinholdella spp. and Oberhausella mesotriassica. A similar flood is reported from the Middle Hettangian (liasicus zone) in the Mochras Borehole (Copestake & Johnson, 1981) and it is probable that the two are equivalent. Their age is supported by the occurrence of Planularia nucleata, which Barnard (1949) recorded from the Hettangian in his study of the Lower Lias of Byfield, although its total range is uncertain.
Below 1820 m, the fauna is extremely sparse, with occasional incursions of a fauna of broadly Lower Lias aspect and a few simple agglutinated species at 1930 m. Both Ogmoconcha hagenowi and Ogmoconchella ellipsoidea were recovered in flood abundance in the interval 1800-1810 m; however, both species abruptly disappeared below 1820 m (the top of the limestone). No microfauna was recorded until 1880 m, where a new fauna occurred, consisting mainly of Darwinula spp. and Limnocythere sp., often in abundance, indicating brackish environments (Kozur & Oravecz-Schoffer, 1972). Although many species of Limnocythere recorded in Europe occur within the Rhaetic, the present authors' species may range into the earliest Hettangian.
Below this non-marine fauna, between 2045-2080 m, a marine ostracod fauna was recovered from a clay interval, consisting of abundant Ogmoconchella aff. O. ellipsoidea. The presence of this fauna within the otherwise non-marine limestones indicates an unstable marginal environment at the commencement of the Rhaetian marine transgression (Robinson et al., 1981; Naylor & Shannon, 1982). This fauna disappeared below 2080 m, and beneath this no microfaunas were recovered.
CONCLUSIONS
The marginal marine and marine sediments encountered in Elf 55/30-1 contained microfaunas which were used to provide a good biostratigraphical and palaeoecological zonation of the Lower Cretaceous, the Lower Jurassic and part of the late Triassic. The non-marine 'Wealden' interval could not be subdivided due to the absence of in situ faunas.
The facies and microfaunas occurring in this well were closely comparable with those seen in the Mesozoic of Great Britain and France, although ranges of some of the microfossils are different. | 2,682.4 | 1986-04-01T00:00:00.000 | [
"Geography",
"Environmental Science",
"Geology"
] |
Protein Phosphorylation and Redox Modification in Stomatal Guard Cells
Post-translational modification (PTM) is recognized as a major process accounting for protein structural variation, functional diversity, and the dynamics and complexity of the proteome. Since PTMs can change the structure and function of proteins, they are essential to coordinate signaling networks and to regulate important physiological processes in eukaryotes. Plants are constantly challenged by both biotic and abiotic stresses that reduce productivity, causing economic losses in crops. The plant responses involve complex physiological, cellular, and molecular processes, with stomatal movement as one of the earliest responses. In order to activate such a rapid response, stomatal guard cells employ cellular PTMs of key protein players in the signaling pathways to regulate the opening and closure of the stomatal pores. Here we discuss two major types of PTMs, protein phosphorylation and redox modification that play essential roles in stomatal movement under stress conditions. We present an overview of PTMs that occur in stomatal guard cells, especially the methods and technologies, and their applications in PTM identification and quantification. Our focus is on PTMs that modify molecular components in guard cell signaling at the stages of signal perception, second messenger production, as well as downstream signaling events and output. Improved understanding of guard cell signaling will enable generation of crops with enhanced stress tolerance, and increased yield and bioenergy through biotechnology and molecular breeding.
INTRODUCTION
Stomata are composed of a pair of specialized epidermal cells termed guard cells, which are responsible for regulating gas exchange and water loss by changing the size of the stomatal pores. The opening and closing of stomatal pores are affected by numerous factors, such as humidity, CO2, temperature, light, hormones, and pathogens. Stomatal movement requires corresponding changes in the turgor and volume of guard cells, which are controlled by complex signaling networks (Azoulay-Shemer et al., 2015).
Abscisic acid (ABA) plays important roles in a broad range of plant physiological processes (e.g., seed germination and seedling growth) and plant responses to abiotic and biotic stresses (Lee and Luan, 2012). Under high salinity and drought conditions, the increased levels of ABA are perceived by the guard cells to promote stomatal closure and to inhibit stomatal opening (Assmann, 2003). The mechanisms underlying ABA signaling in guard cells have been extensively studied (Pei et al., 1997; Schroeder et al., 2001; Assmann, 2003; Acharya et al., 2013; Zhang et al., 2015), and they involve the binding of ABA to its receptors, activation of protein kinases, production of second messengers such as reactive oxygen species (ROS) and nitric oxide (NO), regulation of membrane ion channels, and eventually the decrease in turgor and stomatal closure (Schroeder et al., 2001; Zhang et al., 2015). In addition to abiotic stress, guard cells play an important role in limiting pathogen entrance to the plant body. The guard cell response to bacteria is triggered by the recognition of pathogen-associated molecular patterns (PAMPs) by pattern recognition receptors (PRRs) on the plasma membrane. Upon PAMP recognition, one of the earliest responses is the change in ion fluxes across the membrane, leading to a rapid and transient extracellular alkalization and an increase of Ca2+ in the cytosol (Boller and Felix, 2009). Ca2+ functions as a second messenger, activating downstream signaling players such as calcium-dependent protein kinases (CDPKs) to promote stomatal immunity responses. In addition, the apoplastic production of ROS by NADPH oxidase (Boller and Felix, 2009) is a hallmark of successful recognition of plant pathogens. Subsequent plant immune responses include transcriptional reprogramming, which involves the regulation of ROS homeostasis and activation of other protein kinases such as mitogen-activated protein kinases (MAPKs) (Boudsocq et al., 2010).
Stomatal studies are technically challenging because guard cells are small and of low abundance in leaves (Tallman, 2006). Methods for isolating guard cell protoplasts with relatively high purity have been reported over the past 30 years (Outlaw et al., 1981; Gotow et al., 1982, 1984; Obulareddy et al., 2013; Zhu et al., 2014). They have contributed considerably to the understanding of guard cell signaling. However, these methods are usually laborious and the yield is relatively low. The general principle of guard cell isolation is to release the guard cells from epidermal peels in a two-step process. In the first step the pavement and mesophyll cells are removed, and in the second step the guard cell wall is digested to facilitate the release of the guard cell protoplasts. It is important to note that the procedures vary considerably among different plant species (Zhu et al., 2016).
Stomatal movement in response to abiotic and biotic stresses is a fast process, which requires an efficient molecular regulation mechanism to relay the signals. Phosphorylation and redox control of the key players during both the signal perception and transduction in plant responses to abiotic and biotic stresses have demonstrated the high efficiency of protein PTMs in cell signaling (Grennan, 2007;Waszczak et al., 2015;Zhang et al., 2015). As the relevance of PTMs in plant stress responses has been demonstrated by independent studies over the years (Kodama et al., 2009;Lindermayr et al., 2010;Stecker et al., 2014;Kim et al., 2015;Yang et al., 2015), there is a growing interest to understand how specific PTMs control various aspects of stomatal guard cell functions. In this review, the frequently used approaches and methods in identification and quantification of PTMs are described. The main objective is to focus on the phosphorylation and redox events, and the recently identified proteins that undergo PTMs in guard cells in response to phytohormone and stress signals. We also discuss the different types of PTMs in the regulation of stomatal movement, and the challenges and perspectives of PTM proteomics.
Significance of PTMs in Biological Processes
PTMs include chemical modifications of specific amino acid residues of a protein and/or cleavage of the translated sequence. They greatly increase the structural and functional diversity of proteins in a proteome. Currently, more than 300 different types of PTMs have been identified (Zhao and Jensen, 2009), including phosphorylation, glycosylation, acetylation, nitrosylation, ubiquitination, and proteolytic cleavage. These modifications affect the properties of the proteins (e.g., charge status and conformation), resulting in changes of activity, binding affinity, localization as well as stability. Most PTMs are highly controlled in the cells, and they often serve as rapid, specific, and reversible molecular switches to regulate biochemical and physiological processes. Different PTMs have also been shown to crosstalk in the modulation of molecular interactions between proteins or regulation within the same protein through multiple site modification, e.g., the histone code (Bannister and Kouzarides, 2011). Therefore, identification and functional characterization of PTMs are critical toward deciphering their roles in cellular processes in many different areas of biology and biomedical research.
Qualitative Analysis of PTMs
In the past, PTMs were often studied at the level of a specific amino acid residue of a particular protein using molecular and biochemical approaches (Zhu et al., 2000; Reimer et al., 2002). Nowadays, advances in biological mass spectrometry (MS) allow accurate identification and quantification of PTMs at the proteome scale. Two-dimensional gel electrophoresis (2-DE) was widely used in the early years of proteomics to identify PTMs, such as phosphorylation, nitrosylation, acetylation, and glycosylation (Llop et al., 2007; Roux et al., 2008; Scheving et al., 2012). Because PTMs can alter the isoelectric point and/or molecular weight, they may be detected when a change of spot location on the gel is observed between different samples. Different PTM protein stains have been developed to reveal specific PTMs, such as ProQ Diamond and ProQ Emerald to detect phosphoproteins and glycoproteins in gels, respectively (Steinberg et al., 2001; Schulenberg et al., 2003; Ge et al., 2004). A major challenge has been to identify the PTM peptides and map the sites of modification, owing to the low abundance of the modified protein species.
To overcome the challenge of capturing the relatively low abundance of PTM proteins compared with unmodified proteins, fractionation and/or enrichment strategies have been employed during sample preparation (Lenman et al., 2008; Guo et al., 2014a; Aryal et al., 2015). MS-based proteomics coupled with PTM enrichment typically has four steps. First, samples containing the total protein of interest are digested by a protease, such as trypsin. Second, the resulting peptides are subjected to enrichment in order to separate the PTM peptides of interest from the often abundant non-modified peptides. Third, the isolated PTM peptides are analyzed by liquid chromatography (LC)-MS/MS for peptide identification and PTM site mapping. Finally, the MS spectra of the peptides are analyzed using different software algorithms and/or evaluated manually to ensure the accuracy and statistical significance of the data.
Among the different fractionation and enrichment strategies, affinity-based approaches are commonly used to enrich PTM proteins/peptides (Blagoev et al., 2004; Rush et al., 2005; Zhang et al., 2005; Fíla and Honys, 2012; Wang et al., 2015b). Affinity-based enrichment has the advantage of relatively high specificity and a significant reduction of sample complexity for downstream LC-MS/MS analyses. For example, anti-phosphotyrosine antibodies were successfully used to enrich for peptides with phosphotyrosine residues (Blagoev et al., 2004; Rush et al., 2005; Zhang et al., 2005). However, the antibody-based method is often limited by the availability and quality of the antibodies for the specific PTM of interest. To overcome this limitation, several non-antibody-based strategies have been developed. For instance, immobilized metal affinity chromatography (IMAC) utilizes a metal chelating agent to bind trivalent metal cations, such as Fe3+ or Ga3+ (Thingholm and Jensen, 2009). The charged resin is used to bind phosphoproteins or phosphopeptides. Although this strategy is widely used, it has the following shortcomings: (1) if multiply phosphorylated peptides are present in high abundance, they may saturate the IMAC resin, resulting in retention of few singly and doubly phosphorylated species (Thingholm et al., 2008); (2) acidic peptides will be enriched along with the phosphopeptides (Thingholm et al., 2008). To overcome this issue, the incubation buffer needs to be acidified to pH 2-2.5. At this pH, most acidic amino acids are protonated, which masks the negative charge of the carboxyl groups and prevents acidic peptides from binding onto the column. In contrast, at this pH most of the phosphate moieties are deprotonated and will bind to the column (Fíla and Honys, 2012). Another approach is to use titanium dioxide (TiO2) as a substitute for the metal chelating resin. The use of TiO2 resin under acidic conditions also prevents the retention of acidic peptides (Fíla and Honys, 2012). Interestingly, these two approaches are complementary in that IMAC has higher affinity for multiply phosphorylated peptides, while TiO2 preferentially binds singly phosphorylated peptides (Silva-Sanchez et al., 2015). Therefore, application of both approaches in a single experiment leads to high coverage of the phosphoproteome.
For cysteine redox modifications, such as S-nitrosylation, the classic biotin-switch method developed by Jaffrey et al. (2001) has often been used. Free cysteines of proteins are first blocked by a thiol-reactive reagent through alkylation. The S-nitrosylated cysteines are then reduced using ascorbate, a mild reducing agent that allows specific reduction of the S-NO bonds. After chemical substitution with a biotin-containing affinity molecule, N-[6-(biotinamido)hexyl]-3′-(2′-pyridyldithio) propionamide (biotin-HPDP), the biotinylated proteins/peptides can be enriched by avidin chromatography. Although the classic biotin-switch method has been widely used, and over 300 proteins have been reported to be S-nitrosylated using this method (Lefièvre et al., 2007; Forrester et al., 2009), there are some technical issues inherent to this approach. The disulfide bonds in the proteins may decrease the efficiency of trypsin digestion and subsequent peptide identification (Imai and Yau, 2013). Furthermore, the decomposition of biotin-HPDP may lead to a side reaction with free thiols, which can introduce false-positive signals through disulfide interchange (Forrester et al., 2007). Alternatively, a Thiopropyl Sepharose 6B (TPS6b) enrichment method was developed. The free thiols are alkylated during protein extraction. The proteins are then digested and further reduced prior to enrichment. TPS6b captures reduced thiols via disulfide exchange. TPS6b was initially used to increase the depth of proteome coverage in discovery experiments (Tambor et al., 2012). To date, it has been applied in several redox proteomics studies using cyanobacteria (Guo et al., 2014b) and rat myocardium (Paulech et al., 2013), with enrichment efficiencies >95%. Recently, a six-plex iodoTMT technology has been developed to identify and quantify redox-modified cysteines, including S-nitrosylation. Similar to the biotin switch, free thiols are labeled with iodoTMT, and the TMT-labeled proteins or peptides can be enriched using an anti-TMT resin. This technology allows analysis of up to six samples simultaneously, thus increasing throughput and reproducibility.
Quantitative Analysis of PTMs
Multiple proteomics tools are available to quantify the absolute or relative abundances of proteins and their specific PTMs. The quantification of PTMs is crucial, since simple identification of a modification may not provide adequate information for determining its functional importance. In vivo and in vitro labeling methods have been developed to couple with MS in order to identify, map, and quantify PTMs (Gygi et al., 1999; Goodlett et al., 2001; Ong et al., 2002; Ross et al., 2004; Balmant et al., 2015; Glibert et al., 2015; Parker et al., 2015). Stable isotopes can be used to label proteins in vivo via metabolic incorporation. In this approach, one set of samples is grown in a natural nitrogen source (14N) and the other set is grown in a substituted isotopic nitrogen source (15N), supplied either as an amino acid (stable isotope labeling of amino acids in cell culture, SILAC) or as an inorganic nitrogen source (K15NO3) (Thelen and Peck, 2007; Stecker et al., 2014; Minkoff et al., 2015). In SILAC, since the isotopes are introduced as a specific amino acid, the mass differences between the heavy and light peptides in the MS scan can be predicted, making the quantification straightforward. However, this approach is challenging in plant studies, since plants can synthesize amino acids from inorganic nitrogen. For example, the labeling efficiency achieved using exogenous amino acids in Arabidopsis cell cultures has been reported to be only 70-80% (Gruhler et al., 2005). In contrast, metabolic labeling with 15N as an inorganic source has been shown to achieve 98% incorporation in both intact plants (Ippel et al., 2004) and cell cultures (Engelsberger et al., 2006). However, the mass difference between differentially labeled samples cannot be easily predicted, and sophisticated software is needed to perform the quantitative analysis, which can be challenging when working with highly complex samples (Thelen and Peck, 2007).
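To make the "predictable mass difference" point concrete, the short Python sketch below computes the expected heavy-light m/z shift of a tryptic peptide under a common SILAC labeling scheme. The peptide sequence, charge state, and the choice of 13C6,15N2-lysine / 13C6,15N4-arginine labels are illustrative assumptions, not details taken from the studies cited above.

```python
# Minimal sketch (not from the reviewed studies): predicting the heavy-light
# m/z shift of a tryptic peptide under a common SILAC scheme.
# Assumed mass increments: 13C6,15N2-lysine = +8.0142 Da; 13C6,15N4-arginine = +10.0083 Da.

HEAVY_DELTA = {"K": 8.0142, "R": 10.0083}  # Da added per labeled residue

def silac_mz_shift(peptide: str, charge: int) -> float:
    """Return the expected m/z difference between heavy and light peptide forms."""
    delta_mass = sum(HEAVY_DELTA.get(aa, 0.0) for aa in peptide.upper())
    return delta_mass / charge

# Example: a doubly charged tryptic peptide ending in K shifts by ~4.007 m/z units.
print(round(silac_mz_shift("AVDLSHFLK", charge=2), 4))
```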
Alternatively, isotope labeling can be performed on extracted proteins/peptides in vitro through several different approaches, e.g., isotope-coded affinity tag (ICAT), isobaric tag for relative and absolute quantification (iTRAQ), tandem mass tag (TMT), and iodoTMT. Except for ICAT, the relative quantification of peptides between samples is obtained by comparing the ion intensities of the different tags in the MS/MS spectra. The use of stable isotope labeling for absolute quantification requires internal standards, which are pre-selected synthetic peptides with isotopic amino acids from a protein of interest. Absolute quantification of a PTM can be achieved by measuring the abundances of the modified and unmodified peptides and comparing them with the known amount of the isotope standard used (Xie et al., 2011). Recently, the use of label-free approaches to quantify PTMs has shown promise. Label-free analysis allows direct comparison of MS signals between any number of samples, which makes it applicable to any type of sample and avoids the cost of isotope reagents. One label-free approach is spectral counting, where the level of a modified form of a protein can be estimated by counting the number of MS/MS spectra assigned to the modified peptide from that protein. It has been noted that the number of assigned MS/MS spectra correlates directly with protein amount (Cooper et al., 2010; Olinares et al., 2011). Although spectral counting is fairly reliable in the measurement of large changes, its accuracy decreases considerably when measuring small changes in protein abundance (Jurisica et al., 2007). For this reason, label-free approaches based on peptide precursor peak alignment and peak area have become more popular owing to their accuracy and robustness (Zhang et al., 2010; Lin et al., 2014b).
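As a toy illustration of spectral counting, the sketch below tallies the MS/MS spectra assigned to a modified peptide in two conditions and reports a count ratio. The site identifier, counts, and pseudocount are invented for the example, and, as noted above, intensity-based label-free quantification would be preferred when the expected changes are small.

```python
# Toy sketch of spectral counting for a label-free comparison of one modified
# peptide between two conditions; identifiers and counts are illustrative only.
from collections import Counter

# Each entry: (condition, site) for one assigned MS/MS spectrum.
psms = [
    ("control", "siteA"), ("control", "siteA"),
    ("treated", "siteA"), ("treated", "siteA"),
    ("treated", "siteA"), ("treated", "siteA"),
]

counts = Counter(psms)

def fold_change(site: str, treated="treated", control="control", pseudo=0.5) -> float:
    """Spectral-count ratio with a small pseudocount to avoid division by zero."""
    return (counts[(treated, site)] + pseudo) / (counts[(control, site)] + pseudo)

print(fold_change("siteA"))  # ~1.8 with these toy counts
```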
It is important to note that although all the approaches mentioned above have found utility in the identification and quantification of PTMs, they do not often address the issue of protein turnover in the course of the experiment. Overlooking this important issue may lead to misleading results (Muthuramalingam et al., 2013; Go et al., 2014). In order to account for global changes in protein level, which could lead to false positive or false negative results, researchers have started to acquire PTM proteomics and total protein proteomics results from parallel or separate studies (Rose et al., 2012; Zhu et al., 2014). However, the success of this strategy is often low because some proteins identified in the PTM proteomics experiments are either absent or not quantified with confidence in the total proteomics experiments (and vice versa), due to experimental variation and stochastic MS2 sampling (Chong et al., 2006; Lee and Koh, 2011). To overcome this problem, Parker et al. (2015) developed a double-labeling strategy, called cysTMTRAQ, in which the isobaric tags iTRAQ and cysTMT are employed in a single experiment for the simultaneous determination of cysteine redox changes and protein level changes. This notion of normalizing against total protein turnover can certainly be applied to studies of other PTMs. PTMs exist in many different forms and are highly dynamic and important in the rapid adjustment of protein functions as molecular switches (Lothrop et al., 2013). The aforementioned approaches and the development of new tools are expected to advance PTM studies in many areas of biology.
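The normalization idea behind double-labeling strategies such as cysTMTRAQ can be sketched in a few lines: divide the measured PTM-level ratio by the corresponding protein-level ratio so that a change in protein abundance is not mistaken for a change in modification status. The numbers below are made up for illustration and are not taken from the cited study.

```python
# Sketch of the normalization idea behind double-labeling approaches:
# correct the PTM-level ratio by the protein-level ratio so that protein
# abundance changes are not misread as modification changes.

def normalized_ptm_change(ptm_ratio: float, protein_ratio: float) -> float:
    """PTM-level ratio corrected for the underlying protein-level change."""
    return ptm_ratio / protein_ratio

# A 2-fold apparent increase of a labeled cysteine peptide on a protein that
# itself doubled in abundance reflects no net modification change (ratio ~1.0).
print(normalized_ptm_change(ptm_ratio=2.0, protein_ratio=2.0))
```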
PROTEIN PHOSPHORYLATION IN STOMATAL FUNCTIONS
Protein phosphorylation provides a rapid and versatile mechanism that allows guard cells to respond to different environmental changes and adjust stomatal aperture accordingly (Zhang et al., 2015; Zou et al., 2015). Although the involvement of protein kinases and phosphorylation in stomatal movement has been known for decades, detailed molecular mechanisms connecting the key components have only emerged during the past 5 years. For instance, blue light-triggered stomatal opening features phosphorylation and activation of the plasma membrane H+-ATPase by Blue Light Signaling 1 (BLUS1) (Takemiya et al., 2013). Here we focus on recent progress on the functions of protein phosphorylation in stomatal movement under abiotic and biotic stresses.
Evidence also indicates there are protein kinases that function in parallel to OST1. For example, CPK6 (Brandt et al., 2012b), CPK21/23 (Geiger et al., 2010), and Guard cell Hydrogen peroxide-Resistant 1 (GHR1) phosphorylate SLAC1 to activate the anion channels upon ABA treatment, forming a redundant signaling pathway (Table 1). However, detailed characterizations using loss- and gain-of-function approaches imply that OST1 is still the central node and limiting factor in ABA guard cell signaling (Acharya et al., 2013). In addition to drought stress, protein phosphorylation may also play a role in stomatal movement in response to other abiotic stresses. For example, mutants of MPK9 and MPK12 are partially impaired in cold-induced stomatal closure, suggesting that the two kinases may function in a cold signaling pathway (Jammes et al., 2009).

FIGURE 1 | ABA signaling in guard cells. In the presence of ABA, ABA binds to its receptor PYR/PYL/RCAR, which further binds and inhibits PP2C, releasing and activating OST1. Activated OST1 phosphorylates an array of substrates, including RBOH F and SLAC1. Phosphorylated and active RBOH F promotes the ROS burst. Later, ROS can activate Ca2+ spikes in the cytosol, which can be further transduced by CDPKs and CIPKs via phosphorylation of downstream target proteins. In addition, ROS can modify OST1 and RBOH to inhibit their activities as a feedback mechanism to tune down ABA signaling (red arrows). ABA, abscisic acid; PP2C, protein phosphatase 2C; OST1, Open Stomata 1; PYR, pyrabactin resistance; PYL, PYR like; RCAR, regulatory components of ABA receptors; SLAC1, slow anion channel-associated 1; ROS, reactive oxygen species; NO, nitric oxide; RBOH F, respiratory burst oxidase protein F; CDPK, calcium dependent protein kinase; CIPK, CBL (Calcineurin B-like)-interacting protein kinase.
Protein Phosphorylation in Guard Cells under Biotic Stress
Stomatal pores, as the major gate of pathogen entry, constitute the first line of defense to prevent infection of the plant body by efficient stomatal closure. This process is initiated with the detection of the conserved PAMPs by various immune receptors.
One of the best characterized interactions is that between the flagellin N-terminal 22-amino-acid peptide (flg22) and the PRR Flagellin-Sensitive 2 (FLS2) with its co-receptor Brassinosteroid insensitive 1-Associated Kinase 1 (BAK1) (Chinchilla et al., 2007; Sun et al., 2013). Using genetic and biochemical approaches, Schulze et al. (2010) showed that phosphorylation of FLS2 and BAK1 was detected within 15 s of flg22 treatment of Arabidopsis plants, and that the kinase activity of BAK1 was required for flg22 perception (Table 1). Although there is no guard cell-specific study showing that FLS2 and BAK1 are phosphorylated after flg22 perception, the same events are likely to occur in guard cells.
It is known that FLS2 plays an important role in flg22-induced stomatal closure, since stomatal closure in the Arabidopsis fls2 mutant is completely impaired in response to the flg22-carrying pathogen Pst DC3000. Genetic and biochemical approaches showed that activation of FLS2 and BAK1 in Arabidopsis plants promotes formation of the receptor complex with Botrytis-induced kinase 1 (BIK1, Table 1). BIK1 phosphorylates RBOH D (Table 1), which directly modulates stomatal closure in response to flg22, as the rboh D mutant and Arabidopsis plants carrying RBOH D S39A,S343A,S347A exhibited completely impaired stomatal closure under flg22 treatment (Li et al., 2014). Interestingly, Arabidopsis RBOH D was also shown to be phosphorylated by CPK5 upon flg22 treatment (Dubiella et al., 2013, Table 1). In addition to RBOH D activation, the flg22-induced FLS2 receptor complex also activates MPK3 and MPK6 to induce stomatal closure (Montillet et al., 2013). Thus, phosphorylation is an essential and common mechanism in pattern-triggered immunity (PTI) responses. Downstream PTI signaling includes regulation of K+ channels, turgor decrease in guard cells, and closure of the stomatal pores to prevent pathogen entry (Zhang et al., 2008). Successful pathogens deliver effector proteins into the plant cells to overcome PTI, and the effectors trigger the second layer of plant immunity called effector-triggered immunity (ETI). For example, the bacterial effector AvrB can be recognized by the plant immune receptor Resistance to Pseudomonas syringae pv Maculicola 1 (RPM1). Recognition of AvrB by RPM1 causes phosphorylation of RPM1-Interacting Protein 4 (RIN4) by RPM1-Induced Protein Kinase (RIPK, Table 1). Recently, Lee et al. (2015) showed that RIN4 T21D/S160D/T166D, a mutant with three phosphorylation sites changed to phosphorylation-mimic aspartate residues, caused Arabidopsis plants to exhibit large stomatal apertures and decreased resistance to P. syringae. This exemplifies how an effector protein facilitates pathogen infection by modulating host cell protein phosphorylation events.
Current Questions in Guard Cell Protein Phosphorylation Research
As more aspects of phosphorylation in stomatal movement have been revealed, more questions have also been raised. The identification of OST1 as a central player in the core ABA pathway opens the door to questions such as how the activity of this key modulator is controlled. Is it activated by autophosphorylation or by an upstream kinase? How is OST1 dephosphorylated? Recently, Casein Kinase 2 (CK2) has been shown to act as a negative regulator of OST1 by increasing the binding of CK2-phosphorylated OST1 to PP2C (Vilela et al., 2015). With many key kinases identified in guard cells, including SnRKs, CPKs, and MPKs, how do these kinase pathways crosstalk to minimize redundancy, and how is signal specificity determined? What are the target proteins involved in stomatal movement? With the development of kinase substrate screening (Umezawa et al., 2013; Wang et al., 2013) and techniques for live-cell phosphorylation detection (Hayashi et al., 2011), more studies are forthcoming toward a better understanding of phosphorylation-mediated stomatal movement at high spatial and temporal resolution. In addition, since phosphorylation is essential in both ABA- and flg22-triggered stomatal closure, what are the convergent nodes and edges? This question is still under debate. One study showed that the flg22 response was independent of ABA signaling (Montillet et al., 2013), while another study indicated that flg22-induced stomatal closure was impaired in the ost1 mutant (Guzel Deger et al., 2015). It should be noted that in the first study 10 times more flg22 was used to cause stomatal movement in the ost1 mutant. Therefore, it is likely that both ABA-dependent and -independent pathways are functional. In addition, different protein kinases may be involved in different pathways. For example, MPK3 and MPK6 were shown to be important players in flg22-triggered stomatal closure (Montillet et al., 2013), while MPK9 and MPK12 play critical roles in guard cell ABA and cold stress signaling (Jammes et al., 2009), as well as yeast elicitor signaling (Salam et al., 2013). Moreover, both protein kinases and phosphatases control the dynamics of protein phosphorylation in guard cell signaling. However, only a few phosphatases have been identified in guard cells (Tseng and Briggs, 2010; Sun et al., 2012; Takemiya et al., 2013), and their interactions with key signaling proteins remain largely elusive.
REDOX-DEPENDENT PTMS IN STOMATAL FUNCTIONS
As with protein phosphorylation and other PTMs, redox-dependent PTMs may function as molecular switches to turn signaling processes on or off in plant responses to abiotic and biotic stresses. The thiol group is a nucleophile that, when exposed to oxidative stress, undergoes reversible inter- and intra-molecular disulfide bond formation, nitrosylation, glutathionylation, sulfenic acid and sulfinic acid modification, and irreversible sulfonic acid modification. Additionally, the low pKa values of reactive protein cysteines make these residues highly responsive to small redox perturbations (Spoel and Loake, 2011). The production of ROS and NO is a common event during stomatal closure (Xie et al., 2014). ROS and NO can serve as signaling molecules by modifying reactive protein thiol groups. Here we focus on recent progress on the roles of redox-dependent cysteine PTMs in stomatal movement under abiotic and biotic stresses.
Redox PTMs in Guard Cells under Abiotic Stress
As described in the previous section, under drought stress ABA-induced stomatal closure is associated with an increase in NO and ROS production in guard cells (Zhang et al., 2001; Neill et al., 2008). ROS production is catalyzed mainly by two types of enzymes, the plasma membrane NADPH oxidases and the cell wall peroxidases (Sharma et al., 2012). Other ROS-generating enzymes, such as apoplastic amine oxidases and oxalate oxidases, may also be involved in ROS production leading to stomatal closure (Tripathy and Oelmüller, 2012). The NADPH oxidases are regulated by direct binding of Ca2+ (Kadota et al., 2015), phosphatidic acid (Zhang et al., 2009), and Rac GTPases, and via phosphorylation by OST1 (Sirichandra et al., 2009), CDPKs (Kadota et al., 2015), and BIK1 (Kadota et al., 2014). Consequently, NADPH oxidase may integrate multiple upstream signaling events to promote stomatal closure. NO is produced by the nitrite-dependent nitrate reductase pathway (Desikan et al., 2002) and a nitric oxide associated 1 (NOA1) protein-dependent pathway (Lozano-Juste and León, 2010). It is important to note that NOA1 is not an NO synthase (Moreau et al., 2008).
Although the essential function of ROS and NO in stomatal closure has been widely accepted, little is known about the underlying molecular mechanisms by which they achieve PTM regulation in guard cells. Thus, direct evidence for thiol-based redox regulation under stress conditions, and a link between protein redox regulation and stomatal movement, need to be established. A recent study showed that NO resulting from ABA signaling caused S-nitrosylation of OST1 at a cysteine residue (Cys137) close to the kinase catalytic site (Table 1), and this PTM abolished the kinase activity (Figure 1B). This represents an interesting negative feedback mechanism by which ABA-induced NO helps to desensitize ABA signaling. Additionally, the authors showed that Cys137 is evolutionarily conserved in some AMPK/SNF1-related kinases and glycogen synthase kinase 3/SHAGGY-like kinases (SKs) in plants, yeast, and mammals, and the S-nitrosylation-mediated inhibition may be a general regulatory mechanism (Wang et al., 2015a). This example also highlights how redox changes regulate protein kinase phosphorylation and signaling cascades in stomatal movement.
In a redox-proteomics study, Zhu et al. (2014) identified 65 and 118 potential redox responsive proteins in ABA and MeJA treated Brassica napus guard cells, respectively. The authors demonstrated that most of the proteins belong to functional groups such as energy, stress and defense, and metabolism. In addition, osmotic stress-activated protein kinase (BnSnRK2) and isopropylmalate dehydrogenase (IPMDH) were confirmed to be redox regulated and involved in stomatal movement ( Table 1). These findings demonstrate the utility of redox-proteomics in discovering uncharacterized redox proteins and their roles in stomatal movement. Although some proteins have been identified to be redox regulated, their functions in regulating stomatal movement are still to be fully characterized.
Redox PTMs in Guard Cells under Biotic Stress
Pathogen perception initiates a signal transduction cascade including ROS and NO production, increased Ca2+ influx, alkalization of the extracellular space, activation of MAPKs and CDPKs, activation of the salicylic acid (SA) pathway, and synthesis of ethylene (Arnaud and Hwang, 2015). The ROS and NO generated under biotic stresses are known to act as antimicrobial compounds. ROS are also known to be involved in cell wall cross-linking and blockage of pathogen infection (Torres et al., 2006). Furthermore, they play important signaling roles, e.g., in redox PTMs of essential proteins in plant defense (Agurla et al., 2014). Methionine and cysteine residues of certain proteins are sensitive to H2O2 and NO (Hoshi and Heinemann, 2001). The sensitivity of these residues depends on the protein structure, neighboring residues, and solvent accessibility (Roos et al., 2013). H2O2 can react with a cysteine thiolate, forming intra- or inter-molecular disulfide bonds, sulfenic acid (-SOH), sulfinic acid (-SO2H), and sulfonic acid (-SO3H) (Dalle-Donne et al., 2006). NO can covalently bind to a cysteine thiol through S-nitrosylation.
Although redox-dependent PTM in biotic stress is an emerging field, there are some examples showing redox regulation of proteins in guard cells. In plant defense, Nonexpresser of PR gene 1 (NPR1) is one of a limited number of examples of protein redox regulation. NPR1 was detected primarily in the cytoplasm and nuclei of guard cells (Kinkema et al., 2000). Under normal conditions, NPR1 is retained in the cytoplasm as inactive disulfide-bonded oligomers, which is promoted by S-nitrosylation at cysteine 156 (Table 1). In the presence of pathogen, an increase in SA mediates cellular redox changes, leading to thioredoxin-mediated reduction of the NPR1 oligomer to monomeric forms, which are then transported into the nucleus to activate plant immune processes (Mou et al., 2003; Waszczak et al., 2015). In the nucleus, SA-mediated redox change causes de-nitrosylation and reduction of disulfide bonds in TGA transcription factors (Table 1) so that they can form an active transcriptional complex with NPR1 to turn on pathogenesis-related (PR) genes (Lindermayr et al., 2010, Figure 2), and NPR1 is then phosphorylated and ubiquitinylated for degradation (Waszczak et al., 2015). Although protein redox regulation is not well studied in plant innate immunity, it is clear from the above example that modification of cysteine thiols can alter protein activity, function, and redox crosstalk with other modifications. Yun et al. (2011) demonstrated a biphasic control by NO in pathogen-triggered cell death. At the initial stage of pathogen infection, S-nitrosothiol (SNO) accumulation leads to accelerated cell death. Conversely, constitutively high SNO levels promote decreased cell death through S-nitrosylation of RBOH D (Table 1), leading to a reduction in its activity and in oxidative stress. This differential regulation seems important in fine-tuning the extent of cell death under conditions of abiotic and biotic stresses, since both cause increases of NO levels. At a certain NO concentration, the signaling components of stomatal movement and plant response may be unresponsive or irreversibly regulated, with detrimental effects on stress acclimation. During the NO burst, NO also promotes S-nitrosylation of the Arabidopsis SA-binding protein 3 (AtSABP3) at Cys280 (Table 1). This S-nitrosylation suppresses both SA binding and its chloroplast carbonic anhydrase activity. Interestingly, in tobacco SABP3 showed antioxidant activity and plays a role in the hypersensitive defense response (Slaymaker et al., 2002).

FIGURE 2 | Redox regulation of NPR1 and TGA1. Under normal conditions, NPR1 is retained in the cytosol as an oligomer. S-nitrosylation of NPR1 is known to promote NPR1 oligomerization. In the presence of pathogen, production of SA promotes cellular redox changes, which contribute to reduction of the NPR1 oligomer to the monomeric form. The monomeric form of NPR1 moves to the nucleus and binds to TGA1 that was nitrosylated due to cellular redox changes mediated by SA. The NPR1-TGA1 complex turns on the transcription of PR genes. Although this mechanism was not directly elucidated in guard cells, it is likely to be the case since NPR1 was detected primarily in the cytosol and nucleus of guard cells (Kinkema et al., 2000). SA, salicylic acid; NPR1, nonexpresser of PR gene 1; TGA1, teosine glume architecture 1; SNO, S-nitrosylation; S-GS, S-glutathionylation.
Although the role of SABP3 nitrosylation in stomatal closure in response to biotic stresses has not been studied, it may play a role in stomatal movement signaling, as SA is known to promote stomatal closure (Khokon et al., 2011). The examples above demonstrate the great potential of redox regulation in stomatal movement in response to biotic stresses. The development of redox proteomics technologies such as cysTMTRAQ, together with the application of genetics, biochemistry, metabolism, and bioinformatics tools, will accelerate the discovery and characterization of redox-dependent PTMs of proteins and their roles in stomatal signaling and plant immunity.
CONCLUDING REMARKS
Regulation of the size of the stomatal aperture is an essential mechanism in plants for optimizing the efficiency of water use and photosynthesis. Stomatal movement through dynamic changes in guard cell turgor represents the output of the integration of environmental signals with cellular signal transduction networks. Perception of abiotic and/or biotic stress signals triggers activation of signal transduction cascades, leading to rapid guard cell responses, which are known to be regulated by PTMs (e.g., protein phosphorylation and redox modification) of key players in the complex guard cell signaling networks. Over the past years, improvement and development of new tools in proteomics and MS have enabled the identification of PTMs of proteins involved in stomatal movement. In fact, LC-MS/MS-based PTMomics technologies have become indispensable for the identification and mapping of novel protein phosphorylation and redox modification sites. Additional sample preparation techniques, such as PTM enrichment and specific isotope labeling, have greatly helped the detection and quantification of protein phosphorylation and redox changes, and thereby the understanding of PTM-controlled signaling pathways. The past decade has seen exciting discoveries in ABA- and bacterial pathogen-triggered PTMs, especially phosphorylation and redox modification. Despite current progress, guard cell PTMomics is still in its infancy and many aspects of protein-level regulation remain elusive. For example, the crosstalk among different PTMs, and the PTMs involved in regulating stomatal movement in response to other environmental factors, are largely unknown. The fast advancement of proteomics technologies, together with genetics, molecular biology, biochemistry, and bioinformatics tools, will accelerate the discovery and characterization of novel PTMs, and provide new insights into the complex protein phosphorylation and redox regulatory networks in guard cell signal transduction.
AUTHOR CONTRIBUTIONS
KB drafted the manuscript with assistance from TZ. KB drew the figures. TZ focused on phosphorylation sections. SC provided guidance, edited and finalized the manuscript. | 8,169 | 2016-02-05T00:00:00.000 | [
"Biology"
] |
Pansharpening of WorldView-2 Data via Graph Regularized Sparse Coding and Adaptive Coupled Dictionary
The spectral mismatch between a multispectral (MS) image and its corresponding panchromatic (PAN) image affects the pansharpening quality, especially for WorldView-2 data. To handle this problem, a pansharpening method based on graph regularized sparse coding (GRSC) and adaptive coupled dictionary is proposed in this paper. Firstly, the pansharpening process is divided into three tasks according to the degree of correlation among the MS and PAN channels and the relative spectral response of WorldView-2 sensor. Then, for each task, the image patch set from the MS channels is clustered into several subsets, and the sparse representation of each subset is estimated through the GRSC algorithm. Besides, an adaptive coupled dictionary pair for each task is constructed to effectively represent the subsets. Finally, the high-resolution image subsets for each task are obtained by multiplying the estimated sparse coefficient matrix by the corresponding dictionary. A variety of experiments are conducted on the WorldView-2 data, and the experimental results demonstrate that the proposed method achieves better performance than the existing pansharpening algorithms in both subjective analysis and objective evaluation.
Introduction
For several decades, the huge volume of remote sensing images provided by optical satellites has played a crucial role in many application tasks. With the increasing demand for very high-resolution (HR) products, high-performance acquisition devices are being developed rapidly. Nevertheless, due to physical constraints, a single acquisition device cannot provide both very fine spatial and spectral resolutions [1]. Normally, optical satellites are equipped with two types of imaging devices: multispectral (MS) and panchromatic (PAN). The MS image is composed of several spectral channels and has rich color information. However, its spatial resolution does not satisfy the requirements of some remote sensing applications, such as classification and object detection. The PAN image, with only one spectral channel, can supply high spatial resolution. Thus, the pansharpening (PS) technique, which fuses the MS image and the PAN image, was developed to acquire HR MS images [2].
Nowadays, the existing PS approaches can be classified into three categories: component substitution (CS), multiresolution analysis (MRA), and variational optimization (VO)-based methods [3]. The CS-based methods, also known as spectral approaches, project the MS image onto a specific space and substitute the component that contains the main spatial information with the histogram-matched PAN image. This category of methods includes intensity-hue-saturation (IHS) [4], Gram-Schmidt (GS) spectral sharpening [5], and principal component analysis (PCA) [6]. Due to the obvious spectral distortion caused by the classical CS-based methods, some improved methods belonging to this category were presented, which can be found in the literature [7][8][9][10]. Several WorldView-2 MS bands (coastal, NIR1, and NIR2) are almost outside the wavelength range covered by the PAN image. Hence, an obvious difference exists in the spectral response for the WorldView-2 data. The spectral mismatch problem makes most PS methods suffer from spectral and spatial distortions. For example, the VO-based methods usually adopt the linear combination model as the spatial enhancement term under the assumption that the spectral range of the PAN image almost covers that of all the MS channels. Hence, these methods are not suitable for pansharpening the WorldView-2 data. The sparse coding-based methods are based on the assumption that the LR and HR image patches have the same sparse representations over the dictionary pair learned from the PAN image and its degraded version. In our earlier work [56], we first introduced the graph regularized sparse coding (GRSC) [57] algorithm into pansharpening. That method only considered four-band MS images; for the eight-band MS image, due to the spectral mismatch, the dictionary learned from the PAN image may not be adequate to sparsely represent the MS image patches. To reduce the influence of spectral mismatch, this paper proposes a PS method to sharpen the WorldView-2 data via graph regularized sparse coding and adaptive coupled dictionary (GRSC-ACD). Our contributions are as follows. (1) Considering the degree of correlation among the MS channels and the PAN channel, the PS process of the WorldView-2 data is regarded as a multitask problem. The first task is to process the adjacent MS channels, i.e., green, yellow, red, and red edge, which have high correlation to the PAN band and lie within the wavelength range well covered by the PAN image. The second task is to process a single MS channel, i.e., the blue band, which is partially outside the wavelength range covered by the PAN image and has low correlation to the PAN image. The third task is to process the MS channels, i.e., coastal, NIR1, and NIR2, that lie outside the wavelength range covered by the PAN image.
(2) To acquire precise sparse representations of the MS image patches, the GRSC algorithm is used in the GRSC-ACD method by exploiting the local manifold structure that describes the spatial similarity of the image patches. In each task, the LR MS channels are tiled into image patches, which make up an image patch set. Then, the image patch set is clustered into several subsets using the K-means algorithm so that the structural similarities of the image patches are further strengthened. Finally, each subset is sparsely represented by the GRSC algorithm. The accurate sparse representations contribute to a high-quality reconstruction of the HR MS image. (3) Adaptive coupled dictionary is constructed for different PS tasks. For the first task, a coupled dictionary learned from the PAN image and its degraded version is used to sparsely represent the MS image patches. For the second task, to effectively represent the single blue band, the PAN image and the reconstructed green band that has high correlation to the blue band are selected as the image dataset to train the coupled dictionary. For the third PS task, the reconstructed blue band with high correlation to the coastal band is selected as the image dataset to learn the adaptive coupled dictionary for the coastal band. Meanwhile, the reconstructed red edge band is taken as the image dataset to learn the adaptive coupled dictionary for sharpening the NIR 1 and NIR 2 bands.
The rest of this article is organized as follows: Section 2 briefly introduces the SR-based PS methods, the SR theory, and the GRSC algorithm; the proposed GRSC-ACD method is presented in Section 3; Section 4 compares and analyzes the experimental results on degraded and real remote sensing data, and finally, Section 5 concludes this article.
Related Works
In this section, the background materials that our work is based on are briefly reviewed, including the SR-based PS methods, SR theory, and GRSC.
SR-Based PS Methods
During the last ten years, as an important branch of the VO-based methods, the SR theory achieved remarkable results in solving the PS problem. The first impressive work based on SR was proposed by Li et al., which assumes that the HR MS image patches have a sparse representation in a dictionary constructed from image patches randomly sampled from HR MS images acquired by "comparable" sensors [33]. Although this method achieves superior performance compared with the aforementioned methods, the dictionary construction limits its applicability because the ideal HR MS images are not available. To overcome this problem, several learning-based methods for dictionary construction were proposed [35][36][37][38]. In [34], Zhu and Bamler proposed SparseFI, a sparse coding-based PS method where a dictionary was learned from the PAN image and its LR version. This method opened up a new direction in PS, and it is based on the assumption that the LR patches and the HR patches share the same sparse representations. In [39], an extension of SparseFI, named J-SparseFI, was proposed by exploiting the possible signal structure correlations among the MS channels. To reduce spectral distortion, a two-step sparse coding method with patch normalization (PN-TSSC) was proposed [40]. In [41], a PS method featuring online coupled dictionary learning was proposed, in which a superposition strategy was applied to construct the coupled dictionaries. Inspired by the MRA-based methods, Vicinanza et al. [42] proposed an SR-based PS method to estimate the missing details to be injected into the MS image by exploiting the self-similarity of details across scales. In [43], Tian et al. proposed a VO-based method based on gradient sparse representation. It assumes that the gradients of corresponding LR MS and HR PAN images share similar sparse coefficients under certain specific dictionaries. Then, Tian et al. [44] proposed a VO-based PS method exploiting cartoon-texture similarities, in which a reweighted total variation term using gradient sparsity is used to describe cartoon similarity and a group low-rank constraint is used to describe texture similarity. However, patch-based SR PS methods suffer from two disadvantages: limited ability to preserve details and high sensitivity to misregistration. To overcome this problem, Fei et al. improved the above PS methods by replacing the traditional SR model with a convolutional SR model [45]. Other similar PS methods were presented in [46][47][48][49]. These methods have good ability to preserve spatial details and reduce spectral distortion.
Sparse Representation
Recently, sparse representation has become an effective technique for image processing applications [58]. It indicates that natural signals, such as images, are inherently sparse over a dictionary composed of certain appropriate bases. Let $x \in \mathbb{R}^n$ be a $\sqrt{n} \times \sqrt{n}$ image patch ordered lexicographically as a column vector. It can be represented as a linear sparse combination of basis atoms with respect to a dictionary $D \in \mathbb{R}^{n \times N}$ ($n < N$), which is defined as $x = D\alpha$, where $\alpha \in \mathbb{R}^{N \times 1}$ is the sparse coefficient vector with the fewest nonzero elements. The sparsest $\alpha$ can be estimated by solving the following optimization problem:

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0 \quad \text{s.t.} \quad \|x - D\alpha\|_2 \leq \varepsilon, \tag{1}$$

where $\|\cdot\|_0$ is the $\ell_0$ norm that counts the nonzero elements in the sparse vector $\alpha$, and $\|\cdot\|_2$ is the $\ell_2$ norm. However, the optimization problem in Equation (1) is nondeterministic polynomial-time hard (NP-hard). Hence, this optimization problem can be alternatively solved with the $\ell_1$ norm formulation, which can be represented as

$$\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|x - D\alpha\|_2 \leq \varepsilon, \tag{2}$$

where $\|\cdot\|_1$ is the $\ell_1$ norm, and $\varepsilon$ is the error tolerance.
Thanks to the Lagrange multiplier, (2) can be rewritten as

$$\hat{\alpha} = \arg\min_{\alpha} \|x - D\alpha\|_2^2 + \lambda \|\alpha\|_1, \tag{3}$$

where $\lambda$ is a regularization parameter that trades off reconstruction fidelity against sparsity. Equation (3) can be efficiently solved by basis pursuit and greedy pursuit algorithms, e.g., orthogonal matching pursuit (OMP).
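As a minimal, hedged illustration of how a problem of this kind is typically handled in practice, the Python sketch below builds a random unit-norm dictionary, synthesizes a k-sparse signal, and recovers the coefficients with scikit-learn's greedy OMP solver. The sizes and the use of scikit-learn are assumptions made for the example and are not part of the method described in this paper.

```python
# Minimal sketch (not the paper's implementation): represent a signal x as a
# sparse combination of dictionary atoms via greedy orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, N, k = 64, 256, 5                      # patch length, dictionary size, sparsity

D = rng.standard_normal((n, N))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

alpha_true = np.zeros(N)
alpha_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
x = D @ alpha_true                        # synthetic k-sparse signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, x)
alpha_hat = omp.coef_
print("reconstruction error:", np.linalg.norm(x - D @ alpha_hat))
```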
GRSC
Motivated by recent progress in sparse coding and manifold learning, the GRSC algorithm is an efficient signal processing technique that explicitly considers the local geometrical structure of the data. To encode this geometrical information, the GRSC algorithm builds a k-nearest-neighbor graph over the data points. The graph Laplacian from spectral graph theory can then be used as a smoothing operator to preserve the local manifold structure, and it is incorporated into the sparse coding objective function as a regularizer.
Let $X = [x_1, x_2, \ldots, x_m] \in \mathbb{R}^{n \times m}$ be a data matrix containing $m$ image patch vectors. The objective function of traditional sparse coding can be formulated as follows:

$$\min_{D, A} \|X - DA\|_F^2 + \lambda \sum_{i=1}^{m} \|\alpha_i\|_1, \tag{4}$$

where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $A = [\alpha_1, \ldots, \alpha_m]$ is the sparse coefficient matrix. The GRSC algorithm is based on the manifold assumption that if two data points $x_i$ and $x_j$ are close in the intrinsic geometry of the data distribution, their representations $\alpha_i$ and $\alpha_j$ with respect to the dictionary $D$ should also be close to each other. For a set of given data points $x_1, x_2, \ldots, x_m$, we can construct a nearest neighbor graph $G$ with $m$ vertices representing the data points. Suppose that $W$ is the weight matrix of the graph $G$. If the data point $x_j$ is among the $k$ nearest neighbors of the data point $x_i$, or $x_i$ is among the $k$ nearest neighbors of $x_j$, we set $W_{ij} = 1$; otherwise, $W_{ij} = 0$. Based on this, the degree of $x_i$ is defined as $h_i = \sum_{j} W_{ij}$, and $H = \mathrm{diag}(h_1, \ldots, h_m)$. Considering the problem of mapping the graph $G$ to the sparse representation $A$, a good map can be obtained by minimizing the following objective function:

$$\frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} \|\alpha_i - \alpha_j\|^2 W_{ij} = \mathrm{Tr}(A L A^{\mathsf{T}}), \tag{5}$$

where $L = H - W$ denotes the Laplacian matrix. Hence, the following objective function of the GRSC algorithm is obtained by incorporating the Laplacian regularizer (5) into (4):

$$\min_{D, A} \|X - DA\|_F^2 + \beta\, \mathrm{Tr}(A L A^{\mathsf{T}}) + \lambda \sum_{i=1}^{m} \|\alpha_i\|_1, \tag{6}$$

where $\beta$ is the regularization parameter. The optimization problem (6) can be solved following the method proposed in [57].
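The graph construction used by GRSC can be illustrated with a short sketch: build the symmetrized k-nearest-neighbor weight matrix W, form the degree matrix H and the Laplacian L = H - W, and evaluate the regularizer Tr(A L A^T) for a given coefficient matrix. The data, sizes, and use of scikit-learn's kneighbors_graph helper are illustrative assumptions, not part of the original implementation.

```python
# Sketch of the GRSC graph machinery: k-NN weight matrix W, degree matrix H,
# Laplacian L = H - W, and the value of the regularizer Tr(A L A^T).
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 40))         # 40 patch vectors of length 64 (columns = data points)

# Binary, symmetrized k-NN graph over the data points (columns of X).
W = kneighbors_graph(X.T, n_neighbors=5, mode="connectivity").toarray()
W = np.maximum(W, W.T)                    # W_ij = 1 if i is a neighbor of j or vice versa

H = np.diag(W.sum(axis=1))                # degree matrix
L = H - W                                 # graph Laplacian

A = rng.standard_normal((128, 40))        # some sparse-coefficient matrix (atoms x points)
graph_penalty = np.trace(A @ L @ A.T)     # the Laplacian regularizer in (5)/(6)
print(graph_penalty)
```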
Multitask Pansharpening Method: GRSC-ACD
In this section, we introduce the proposed multitask PS method GRSC-ACD for the WorldView-2 data. Figure 2 shows the scheme of the proposed method. The detailed algorithm steps of the proposed method are described as follows.
Description of Multitask Pansharpening
The first step of the proposed method is to divide the PS process into three tasks according to the degree of correlation among the MS channels and the PAN channel and the relative spectral response curves of different channels. The WorldView-2 data used in this paper is exhibited in Figure 3. Figure 3a shows the MS image with eight spectral bands with the size of 1150 × 1151, and Figure 3b shows the PAN image with the size of 4600 × 4604. Then, the degraded PAN image is obtained by blurring and downsampling the PAN image, which has the same spatial resolution and scale as the original MS image. The correlation coefficient matrix among the MS channels and the PAN channel is computed, which is listed in Table 1. According to the correlation coefficients of different channels and the relative spectral response curves among different channels as shown in Figure 1, the PS process of WorldView-2 data is divided into three tasks. (1) First task: The correlation coefficients between the MS channels including green, yellow, red and red edge, and the PAN channel are listed in Table 1, which are highlighted in red. The green, yellow, red and red edge bands have high correlation to the PAN image; also, these bands are almost within the wavelength range covered by the PAN image. Hence, in the first task, these MS channels will be sharpened together. For this task, the HR PAN image and its degraded version are used to learn the coupled dictionary pair. (2) Second task: In Figure 1, the blue band is mostly within the wavelength range covered by the PAN image. However, it has low correlation to the PAN image. Hence, the second task specially deals with the blue band. From the correlation coefficient labeled with blue color, the blue band and the green band have high correlation. Hence, the PAN image and the reconstructed green band are used as the dataset to learn the adaptive coupled dictionary for this task. (3) Third task: The remaining MS channels, i.e., coastal, NIR1, and NIR2, are almost outside the wavelength range covered by the PAN image shown in Figure 1. In this task, three MS channels are divided into two groups: (1) coastal band; (2) NIR1 and NIR2. For these two groups, different reconstructed HR MS bands are chosen to learn the adaptive coupled dictionaries. From the correlation coefficient labeled with purple color, it can be concluded that the coastal band is highly related to the blue band. Hence, the reconstructed blue band is used to learn the coupled dictionary for sharpening the coastal band. The correlation coefficients labeled with green color show the high degree of correlation among red edge, NIR1, and NIR2. Hence, for sharpening the NIR1 and NIR2 bands, we use the reconstructed red edge band to train the coupled dictionary.
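A hedged sketch of the correlation computation that motivates this task split is given below: it assumes an already co-registered 8-band LR MS cube and a degraded PAN image, computes each band's correlation coefficient with PAN, and groups bands by a simple threshold. The array names, threshold, and synthetic data are assumptions made for illustration; the actual grouping in this paper also takes the relative spectral response curves into account.

```python
# Sketch (assumed inputs: `ms` of shape (8, H, W) and `pan_lr` of shape (H, W),
# already co-registered): per-band correlation with PAN and a threshold grouping.
import numpy as np

band_names = ["coastal", "blue", "green", "yellow",
              "red", "red_edge", "nir1", "nir2"]

def band_pan_correlation(ms: np.ndarray, pan_lr: np.ndarray) -> dict:
    p = pan_lr.ravel()
    return {name: float(np.corrcoef(ms[i].ravel(), p)[0, 1])
            for i, name in enumerate(band_names)}

def group_bands(corr: dict, thresh: float = 0.9) -> dict:
    """Illustrative grouping rule: high-correlation bands vs. the rest."""
    high = [b for b, c in corr.items() if c >= thresh]
    low = [b for b, c in corr.items() if c < thresh]
    return {"task1_candidates": high, "task2_3_candidates": low}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ms = rng.random((8, 32, 32))                 # synthetic stand-in data
    pan_lr = ms[2:6].mean(axis=0)                # synthetic PAN ~ mean of green..red edge
    print(group_bands(band_pan_correlation(ms, pan_lr)))
```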
Pansharpening Algorithm via GRSC for Each Task
The purpose of PS is to generate an HR MS image $X^H$ from an LR MS image $X^L$ and an HR PAN image $P^H$. For each task, the MS channels have high correlation to each other. Hence, the image patches from these MS channels have the same or similar manifold structures. Let $X^L_{p,t}$ be the $p$th band of the LR MS bands for the $t$th task, where $p = 1, \ldots, P$ and $t = 1, \ldots, T$. Then, all the LR MS bands are tiled into image patches with a patch size of $r \times r$ and an overlap of $s \times s$. Each image patch is arranged as a column vector, and all the column vectors form an image patch set denoted as $\Omega$. The PS process consists of three main steps, which are described as follows.
(1) Constructing image patch sets with similar geometrical structure: To acquire precise sparse representations of the image patches, the set $\Omega$ is first separated into several subsets with the K-means clustering algorithm (a small sketch of the patch tiling, clustering, and reconstruction steps is given after step (3)). Let $\Omega_b$ be the subset of each class, where $b = 1, 2, \ldots, B$, and $B$ is the total number of subsets. All the image patches in a subset share the same or similar local geometrical structures.
(2) Sparse coding of the subsets via GRSC: The proposed method is based on the assumption that an LR MS image patch and its corresponding HR MS image patch share the same sparse representation over the coupled dictionary pair. Let D^L and D^H be the LR dictionary and the HR dictionary, respectively; the dictionary construction method is introduced in the following subsection. Considering graph regularized sparse coding for image representation, we first construct the weighted graph matrix W_b and the degree matrix H_b for the subset Ω_b. The Laplacian matrix is then defined as L_b = H_b − W_b. The sparse representation of the subset Ω_b can be estimated by solving the following objective function:

min_{A_b} ||Ω_b − D^L A_b||_F^2 + β Σ_v ||α_{b,v}||_1 + γ Tr(A_b L_b A_b^T),   (7)

where A_b is the sparse coefficient matrix for the subset Ω_b, α_{b,v} is the sparse vector of the vth image patch in the subset, and β and γ are the regularization parameters that balance the contribution of the two regularization terms. To solve the objective function by optimizing over each α_{b,v}, the trace term in (7) can be rewritten in vector form as Tr(A_b L_b A_b^T) = Σ_{u,v} L_{b,uv} α_{b,u}^T α_{b,v}, so that (7) decomposes into a sequence of l1-regularized quadratic subproblems, one per image patch, referred to below as problem (9). Based on the feature-sign search algorithm proposed in [59], problem (9) can be solved effectively to acquire the optimal sparse coefficient matrix Â_b (a small numerical sketch of this step and the reconstruction is given after this list). (3) Reconstructing the HR MS channels for each task: The estimated sparse coefficient matrix Â_b for the subset Ω_b is obtained by solving problem (9). Then, the HR MS image patch subset Ω^H_b corresponding to Ω_b can be calculated as

Ω^H_b = D^H Â_b.   (10)
After all the HR MS image patch subsets are obtained, the MS bands for each task can be reconstructed by averaging the overlapped image patches.
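A minimal numerical sketch of the GRSC step for one patch subset is given below. The paper solves each column of A_b with the feature-sign search algorithm of [59]; the ISTA-style proximal-gradient loop used here is only a stand-in for that solver, and the Gaussian patch-similarity graph and the toy dictionaries are assumptions for illustration (β = 3 and γ = 250 follow the tuned values reported later in the paper).

```python
import numpy as np

def graph_laplacian(patches, sigma=1.0):
    """Weighted graph W from pairwise patch similarity, degree H, Laplacian L = H - W."""
    d2 = np.sum((patches[:, :, None] - patches[:, None, :]) ** 2, axis=0)
    W = np.exp(-d2 / (2 * sigma ** 2))
    H = np.diag(W.sum(axis=1))
    return H - W

def grsc(Omega, D_L, beta=3.0, gamma=250.0, n_iter=200, step=None):
    """Minimize ||Omega - D_L A||_F^2 + beta*sum|A| + gamma*Tr(A L A^T) over A, as in (7)."""
    L = graph_laplacian(Omega)
    n_atoms, n_patches = D_L.shape[1], Omega.shape[1]
    A = np.zeros((n_atoms, n_patches))
    if step is None:  # crude step size from a Lipschitz bound of the smooth part
        step = 1.0 / (2 * (np.linalg.norm(D_L, 2) ** 2 + gamma * np.linalg.norm(L, 2)))
    for _ in range(n_iter):
        grad = 2 * D_L.T @ (D_L @ A - Omega) + 2 * gamma * (A @ L)
        A = A - step * grad
        A = np.sign(A) * np.maximum(np.abs(A) - step * beta, 0.0)  # soft threshold (l1 prox)
    return A

# Toy usage: HR patches are reconstructed from the shared codes, Omega_H = D_H @ A, as in (10).
D_L, D_H = np.random.randn(49, 120), np.random.randn(49 * 16, 120)
Omega = D_L @ (np.random.randn(120, 30) * 0.1)
A = grsc(Omega, D_L)
Omega_H = D_H @ A
```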
Dictionary Learning
Dictionary learning is an essential step in the proposed GRSC-ACD method. For different tasks, different coupled dictionary pairs need to be learned according to the characteristics of the MS channels. In our method, the images used to learn the coupled dictionary should be updated according to the characteristics of the tasks. The detailed descriptions are as follows.
(1) First task: This task processes the MS channels: green, red, yellow, and red edge.
These MS bands are within the wavelength range covered by the PAN image and show high correlation to the PAN image. Hence, the HR PAN image and its degraded version are suitable to learn the coupled dictionary pair for the first task. (2) Second task: This task only processes the blue band, which is partially outside the wavelength band covered by the PAN image, and has low correlation to the PAN image. Thus, only using the PAN image to learn the coupled dictionary is not suitable for this task. To effectively represent the image patches subsets, the PAN image and the reconstructed HR green band with high correlation to the blue band are selected to learn the coupled dictionary. (3) Third task: This task sharpens the MS channels that are almost outside the wavelength range covered by the PAN image, i.e., coastal, NIR1, and NIR2. As shown in Table 1, the coastal band has very low correlation to the NIR1 and NIR2 bands. Hence, this task is divided into two subtasks. One subtask processes the coastal spectral band. For this subtask, the reconstructed blue band is used to learn the coupled dictionary. Another subtask processes the NIR1 and NIR2 bands. For this subtask, the reconstructed red edge band is used to learn the coupled dictionary.
Then, the dictionary construction method for each subset Ω_b is introduced. Let Y^H_{k,b}, k = 1, 2, . . . , K, be the high-resolution images for dictionary learning. The HR images are blurred and downsampled to obtain the corresponding LR images Y^L_{k,b}, k = 1, 2, . . . , K. Then, the HR and LR image pairs are tiled into image patches. The patch size for the LR images is r × r with an overlapping size of s × s, while the patch size for the HR images is F_DS r × F_DS r with an overlapping size of F_DS s × F_DS s, where F_DS is the scale factor between the MS and PAN images. The image patches are arranged into vectors; hence, the coupled dictionary is constructed from the raw LR and HR image patch vectors, which are denoted as D^L_b and D^H_b, respectively. A minimal sketch of this construction is given below, after which our algorithm is summarized in Algorithm 1.
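The sketch below builds one raw-patch coupled dictionary pair from a single HR training image: the HR image is degraded by a block mean (the exact blur kernel is an assumption), both images are tiled into co-located patches, and the raw patch vectors are stacked column-wise as D^L and D^H. The image size and the random test data are placeholders.

```python
import numpy as np

def tile_patches(img, patch, step):
    """Collect overlapping patch column-vectors from a 2-D image."""
    h, w = img.shape
    cols = []
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            cols.append(img[i:i + patch, j:j + patch].ravel())
    return np.stack(cols, axis=1)

def build_coupled_dictionary(hr_img, r=7, s=3, f_ds=4):
    """Return (D_L, D_H) from one HR training image and its degraded version."""
    k = f_ds  # simple box blur + decimation as the degradation model (an assumption here)
    lr_img = hr_img.reshape(hr_img.shape[0] // k, k, hr_img.shape[1] // k, k).mean(axis=(1, 3))
    step_lr = r - s                                        # LR patches: r x r with overlap s
    D_L = tile_patches(lr_img, r, step_lr)
    D_H = tile_patches(hr_img, f_ds * r, f_ds * step_lr)   # HR patches: (F_DS r) x (F_DS r)
    return D_L, D_H

hr = np.random.rand(256, 256)    # e.g. the HR PAN image for the first task
D_L, D_H = build_coupled_dictionary(hr)
print(D_L.shape, D_H.shape)      # same number of columns, so LR and HR atoms stay coupled
```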
Algorithm 1. The GRSC-ACD Pansharpening Method.
Input: LR MS image X^L, PAN image P^H
Initialization: set the parameters β, γ, r, s, and B
1: Split the PS process into multiple tasks according to the relative spectral response shown in Figure 1 and the channel correlation matrix listed in Table 1
2: for t ← 1, 2, . . . , T do
3: Separate all the MS bands X^L_{p,t}, p = 1, . . . , P, into image patches and form an image patch set Ω
4: Generate each subset Ω_b, b = 1, 2, . . . , B, using the K-means clustering algorithm
5: for b ← 1, 2, . . . , B do
6: Learn the LR dictionary D^L_b and the HR dictionary D^H_b
7: Compute the sparse coefficient matrix Â_b according to (7)
8: Reconstruct the HR image patch subset Ω^H_b according to (10)
9: end for
10: Reconstruct the HR MS bands of the tth task by averaging the overlapped image patches
11: end for
Output: HR MS image X^H

To verify the fusion performance of the proposed method, ten PS methods are taken for performance comparison. These methods include the GS algorithm [5], the high-pass filter (HPF) algorithm [60], the partial replacement adaptive component substitution (PRACS) algorithm [8], the MTF-GLP with high-pass modulation (MTF-GLP-HPM) algorithm [18], the band-dependent spatial-detail (BDSD) algorithm [61], the proportional additive wavelet to the luminance component with haze correction (AWLPH) algorithm [62], the robust BDSD (RBDSD) algorithm [63], the PN-TSSC algorithm [40], the OCDL algorithm [41], and the GRSC algorithm [56]. The key parameters of these methods are set as in the corresponding articles. In addition, a resampled MS image is also included in the comparison and is referred to as EXP.
Quality Assessment Indexes
To quantitatively evaluate the fusion performance, various quality indexes are used. Six quality indexes are considered in the simulated experiments: the root-mean-square error (RMSE), the spectral angle mapper (SAM) [64], the erreur relative globale adimensionnelle de synthese (ERGAS) [65], Q [66], the structural similarity index (SSIM) [67], and Q2n [68]. The ideal values of RMSE, SAM, ERGAS, Q, SSIM, and Q2n are 0, 0, 0, 1, 1, and 1, respectively. The "quality with no reference" (QNR) index [69] is used in the real experiments to assess the fusion performance. The QNR index consists of the spectral distortion index D_λ and the spatial distortion index D_s. The best values of D_λ and D_s are both 0, while the best value of QNR is 1.
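For reference, the sketch below gives hedged NumPy implementations of three of the listed full-reference indexes (RMSE, SAM, and ERGAS) for an H × W × B fused image against a reference image; the resolution ratio of 1/4 in ERGAS and the toy data are assumptions for WorldView-2-like imagery, and the remaining indexes (Q, SSIM, Q2n, QNR) are not reproduced here.

```python
import numpy as np

def rmse(ref, fused):
    return np.sqrt(np.mean((ref - fused) ** 2))

def sam(ref, fused, eps=1e-12):
    """Spectral angle mapper, averaged over pixels, in degrees."""
    r = ref.reshape(-1, ref.shape[-1])
    f = fused.reshape(-1, fused.shape[-1])
    cos = np.sum(r * f, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, fused, ratio=1.0 / 4.0):
    """Erreur relative globale adimensionnelle de synthese."""
    band_terms = []
    for b in range(ref.shape[-1]):
        band_rmse = np.sqrt(np.mean((ref[..., b] - fused[..., b]) ** 2))
        band_terms.append((band_rmse / (np.mean(ref[..., b]) + 1e-12)) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(band_terms))

ref = np.random.rand(128, 128, 8)
fused = ref + 0.01 * np.random.randn(128, 128, 8)
print(rmse(ref, fused), sam(ref, fused), ergas(ref, fused))
```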
The Choice of Tuning Parameters
For our method, the performance is affected by several tuning parameters, i.e., the regularization parameters β and γ, the patch size, and the overlapping size. To optimize these parameters for better performance, experiments with different parameter settings are conducted on the degraded and real data, respectively.
Regularization Parameters
In this section, the effects of regularization parameters β and γ on the fusion performance are explored. For the degraded data, the patch size is first set to 7 × 7, and the overlapping size is set to 3. For the regularization parameter β from 1-5 at an interval of 1, and the regularization parameter γ from 50-400 at an interval of 50, their influence on the performance of the proposed method is studied. Six quality indexes are calculated, where the average RMSE of eight bands is presented. In addition, all the values of the quality indexes are normalized to the range of [0, 1]. The normalized results with respect to different parameters are plotted in Figure 4, where the X axis, Y axis, and Z axis stand for the regularization parameter β, the regularization parameter γ, and the normalized results, respectively. The smaller the RMSE, ERGAS, and SAM values, the better the fused results. The larger the Q, SSIM, and Q2n values, the better the fused results. In Figure 4, the proposed method achieves better performance for the degraded data when the regularization parameter β is set to 3 and the regularization parameter γ is set to 250. For the real data, the influence of the regularization parameters on the performance of the proposed method is also discussed. In the experiment, the real MS image has the size of 100 × 100 and the real PAN image has the size of 400 × 400. The patch size is set to 25 × 25, and the overlapping size is set to 1/4 of patch size. For the regularization parameter β from 1-5 at an interval of 1, and the regularization parameter γ from 50-400 at an interval of 50, their influence on the performance of the proposed method is studied. Three indexes including D λ , D s , and QNR are used to evaluate the quality of the pansharpened results. All the quality indexes are normalized to the range of [0, 1]. The normalized results with respect to different parameters are plotted in Figure 5, where the X axis, Y axis, and Z axis stand for the regularization parameter β, the regularization parameter γ, and the normalized results, respectively. The smaller the values of D s and D λ , the better the fused image. The larger the value of QNR, the better the fused image. Figure 5 shows that the proposed method achieves better performance for the real data when the regularization parameter β is set to 3 and the regularization parameter γ is set to 250.
Patch Size and Overlapping Size
In this section, the effects of the patch size and the overlapping size are investigated. For the degraded data, the regularization parameter β is set to 3, and the regularization parameter γ is set to 250. Five patch sizes for the LR MS image, including 5 × 5, 7 × 7, 9 × 9, 11 × 11, and 13 × 13, and three overlapping sizes, including 2, 3, and 4, are considered together. The performance surface of the proposed method under different patch sizes and overlapping sizes is exhibited in Figure 6, where the X axis, Y axis, and Z axis indicate the patch sizes, the overlapping sizes, and the normalized results, respectively. In Figure 6, the proposed method provides the optimal RMSE, ERGAS, SAM, Q, SSIM, and Q2n values when the patch size is set to 7 × 7 and the overlapping size is set to 2. However, our proposed method with a smaller overlapping size needs more computational time. Hence, considering the tradeoff between the pansharpening quality and running time, the patch size and overlapping size are respectively set to 7 × 7 and 3 in the following experiments. For the real data, the effect of the patch size on the performance of the proposed method is discussed and analyzed. In the experiment, the regularization parameter β is set to 3, and the regularization parameter γ is set to 250. For the patch size varying from 21 to 33 at an interval of 2, the quality curves of the proposed method under different patch sizes are plotted in Figure 7. The green, blue, and red lines represent three quality indexes, i.e., D_s, D_λ, and QNR, respectively. The proposed method obtains the best QNR value when the patch size is 25 × 25; hence, in the following experiments, the patch size for real data is set to 25 × 25.
Experimental Results on Degraded Images
In this section, the proposed method is evaluated on two pairs of degraded WorldView-2 images. The input images and the pansharpened results of different PS methods are shown in Figure 8. A local region is magnified and put at the bottom-right of each figure. Figure 8n is the reference image. In terms of visual analysis, the fused images of the GS, HPF, MTF-GLP-HPM, PRACS, BDSD, RBDSD, AWLPH, OCDL, PN-TSSC, and GRSC methods suffer from slight spectral distortion, especially in vegetation areas. From the magnified region, the fused images of the OCDL and PN-TSSC methods, as shown in Figure 8j,k, exhibit slight blurring effects. The fused images of the BDSD and RBDSD methods, as shown in Figure 8g,h, show an oversharpening effect in spatial detail preservation. Figure 8m shows the fused image of the proposed GRSC-ACD method. Compared with the reference image and the fused images of the other methods, the proposed GRSC-ACD method achieves better spatial and spectral qualities in the fused image. Table 2 lists the quantitative evaluation results of the fused images of different methods shown in Figure 8, where the best value of each index is highlighted in bold and the second best value of each index is underlined. Table 2 shows that the proposed GRSC-ACD method obtains the best RMSE, ERGAS, Q, SSIM, and Q2n values. However, the proposed method is inferior to the MTF-GLP-HPM method in terms of SAM.
Table 2. Quantitative evaluation results of fused images of different methods shown in Figure 8.
Figure 9 illustrates the pansharpened results of the second pair of degraded images. A magnified region is put at the bottom-right of each figure. Figure 9a,b show the resampled MS image obtained by the EXP method and the PAN image, respectively. The fused image of the EXP method has poor spatial quality. The reference image is shown in Figure 9n. Figure 9c shows the fused image of the GS method, which exhibits a slight spectral distortion as compared with the reference image. Figure 9d-l illustrate the fused images of the HPF, MTF-GLP-HPM, PRACS, BDSD, RBDSD, AWLPH, OCDL, PN-TSSC, and GRSC methods. The fused images of these methods are comparable to the reference image in preserving the spectral information. From the magnified region, the HPF, MTF-GLP-HPM, PRACS, BDSD, RBDSD, AWLPH, and GRSC methods are capable of effectively preserving the spatial details. Compared with the reference image, the pansharpened images produced by the PN-TSSC and OCDL methods suffer from a slight spatial detail distortion. The fused image of the proposed GRSC-ACD method is shown in Figure 9m, which shows good spectral and spatial qualities. Table 3 lists the quantitative evaluation results of the pansharpened images shown in Figure 9, where the best values are labeled in bold, and the second best values are underlined. The proposed GRSC-ACD method obtains the best values in terms of the RMSE, SAM, Q, SSIM, and Q2n indexes. Regarding the ERGAS index, the PN-TSSC method obtains the best value, and the proposed method obtains the second best value. In general, the proposed method achieves better fusion performance for the degraded datasets based on the subjective and objective assessments.
Table 3. Quantitative evaluation results of fused images of different methods shown in Figure 9.
Analysis of Difference Images
The above section only gives global assessments of the fused results. To better understand where the reconstruction errors are localized, the difference images between the pansharpened images and the reference image for the two pairs of degraded images are calculated and analyzed. Figures 10 and 11 show the false color difference images of the fused images shown in Figures 8 and 9, respectively. The RGB channels are composed of bands 7 (NIR1), 4 (yellow), and 1 (coastal). In Figures 10 and 11, black color means zero difference, while intense red, blue, and green colors mean obvious errors in the NIR1, yellow, and coastal channels, respectively. In [43], the abrupt color jumps between black colors are regarded as resolution loss. From this point of view, if the abrupt changes have a wider transition region, the resolution loss is more severe. In terms of spectral distortion, the EXP and GS methods perform the worst, because an obvious dominating color appears. For the other PS methods, the intense red, blue, and green colors mainly occur at the boundaries of the objects. This indicates that the boundaries of the objects have severe spectral distortion, which may be associated with the resolution loss. In general, the proposed GRSC-ACD method and the GRSC method outperform the other methods in terms of preserving the spectral information. A small sketch of how these difference composites are formed is given below.
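The sketch below composes such a false-color difference image: per-band absolute differences between a fused image and the reference are mapped to an RGB composite with NIR1 in red, yellow in green, and coastal in blue, so black means zero error. The contrast gain and the random toy data are assumptions for illustration (band indexing is 1-based in the text and 0-based here).

```python
import numpy as np

def difference_composite(ref, fused, bands=(6, 3, 0), gain=5.0):
    """RGB composite of |fused - ref| for bands 7 (NIR1), 4 (yellow), 1 (coastal)."""
    diff = np.abs(fused - ref)
    rgb = np.stack([diff[..., b] for b in bands], axis=-1)
    return np.clip(gain * rgb, 0.0, 1.0)   # stretch small errors so they stay visible

ref = np.random.rand(128, 128, 8)
fused = ref + 0.02 * np.random.randn(128, 128, 8)
img = difference_composite(ref, fused)     # near-black where the reconstruction error is small
```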
Experimental Results of Real Images
To further demonstrate the effectiveness of the proposed method, the proposed method is performed on three pairs of real images. The first pair of real images contains buildings. The second pair of real images contains buildings and vegetation. The third pair of real images also contains buildings and vegetation. The fused images of different methods are shown in Figures 12-14, respectively. For better visual comparison, we extract a local magnified region from each figure and put it at the bottom right of each figure. Figure 12a,b shows the resampled MS image and the PAN image, respectively. The pansharpened images of different PS methods on the first pair of real images are shown in Figure 12c-m. The fused image of EXP still has poor spatial resolution and good spectral quality. All the pansharpened methods show good ability to preserve the spectral information as compared with that of the fused image of EXP. From the magnified region, the fused images of the GS, MTF-GLP-HPM, PRACS, RBDSD, AWLPH, OCDL, GRSC, and GRSC-ACD methods exhibit good spatial qualities, and the fused images of the HPF, BDSD, and PN-TSSC methods suffer from blurring effects and spatial distortions. Figure 13 shows the pansharpened images of different PS methods on the second pair of real images. Figure 13a is the fused image of EXP, which shows poor spatial resolution. The fused image of the GS method, as shown in Figure 13c, suffers from spectral distortion in the vegetation area. The fused images of the other PS methods, as shown in Figure 13d-m, exhibit natural colors as compared with the resampled MS image (EXP). From the magnified region, the fused images of the HPF and PN-TSSC methods exhibit slight blurring effects and artifacts. The fused images of the GS, MTF-GLP-HPM, PRACS, BDSD, RBDSD, AWLPH, OCDL, GRSC, and GRSC-ACD methods have good spatial qualities. Table 4 lists the associated quantitative results of different methods on the first and second pairs of real datasets, where the best values are labeled in bold, and the second best values are underlined. For the first pair of real data, the GRSC-ACD method provides the second best D_λ value and the best QNR value. The AWLPH method obtains the best D_s value. The OCDL method obtains the second best values in terms of D_s and QNR. For the second pair of real data, the GS method obtains the second best D_λ value. Besides, the AWLPH method obtains the best D_s value, and the proposed GRSC-ACD method obtains the best QNR value. The GRSC method obtains the second best value in terms of D_s, and the OCDL method obtains the second best value in terms of QNR. The pansharpened results of different methods on the third pair of real images are shown in Figure 14. Figure 14a shows the resampled MS image, which has poor spatial resolution and good spectral quality. The fused image of the proposed GRSC-ACD method, shown in Figure 14m, gives impressive spectral and spatial qualities. Table 5 lists the associated quantitative results of Figure 14, where the best values are labeled in bold, and the second best values are underlined. The proposed method accomplishes the second best D_λ and D_s values and the optimal QNR value. In short, the proposed GRSC-ACD method has better fusion performance than the other methods on the real data.
Algorithm Execution Time Analysis
In this section, we compare the computational time of the proposed method with that of the other PS methods. Tables 2-5 list the computational times for the five datasets. All the algorithms are implemented in MATLAB R2016a on a personal computer with 32 GB RAM and an Intel Xeon W-2125 CPU @ 4.00 GHz. From Tables 2-5, the EXP, GS, HPF, MTF-GLP-HPM, PRACS, BDSD, RBDSD, and AWLPH methods take computational times of less than 1 s. The computational time of the OCDL, PN-TSSC, GRSC, and GRSC-ACD algorithms is higher than that of the above methods because these methods adopt sparse representation techniques. A parallel processing strategy can be applied to alleviate this problem. Although our method takes the highest computational time, it has superior performance for sharpening the WorldView-2 data.
Conclusions
A multitask pansharpening method for the WorldView-2 data via graph regularized sparse coding and adaptive coupled dictionaries is proposed in this paper. We fully consider the spectral and correlation characteristics of the MS and PAN images and separate the pansharpening process into three tasks. The first task processes the MS channels that are fully overlapped by the PAN band. The second task processes the blue channel that is partially outside the wavelength range covered by the PAN band. The third task processes the channels that are almost outside the wavelength range covered by the PAN band. For each subtask, the interband and intraband correlations among image patches are considered. For different subtasks, suitable coupled dictionary pairs are designed to efficiently represent the image patch subsets. A variety of experiments are conducted, and the experimental results demonstrate that the proposed method achieves better performance for sharpening the WorldView-2 data. | 8,960 | 2021-05-21T00:00:00.000 | [
"Computer Science"
] |
Towards an efficient compression of 3D coordinates of macromolecular structures
The size and complexity of 3D macromolecular structures available in the Protein Data Bank is constantly growing. Current tools and file formats have reached limits of scalability. New compression approaches are required to support the visualization of large molecular complexes and enable new and scalable means for data analysis. We evaluated a series of compression techniques for coordinates of 3D macromolecular structures and identified the best performing approaches. By balancing compression efficiency in terms of the decompression speed and compression ratio, and code complexity, our results provide the foundation for a novel standard to represent macromolecular coordinates in a compact and useful file format.
Introduction
The Protein Data Bank (PDB) [1], the archive for 3D structures of biological macromolecules, has rapidly grown over the last few years. Developments in the major experimental techniques enable high-throughput structure determination and the number of deposited structures now exceeds 124,000 entries, increasing by about 10,000 entries per year. The PDB is not only growing in numbers, but newly released PDB entries are also growing in complexity. New integrative methods that combine multiple modelling and experimental techniques, most notably Electron Microscopy, now determine structures of up to the megadalton (MDa) range at atomic resolution [2][3][4].
Such large complexes bring major challenges to analysis and visualization tools since transfer and processing of the structural data is slow. Limitations in network bandwidth and client side memory further reduce the visualization performance on the web and mobile devices. These bottlenecks are caused by inefficiencies of the file formats currently used by the PDB to store macromolecular structures. The PDBx/mmCIF file format [5], the archival text-based format for the PDB, is flexible, extensible and verbose with rich metadata that can represent structures of any size but is not optimized for fast loading and parsing of structural data. The legacy PDB format [6] is a less verbose and more compact textual format, but only supports structures with less than 100,000 atoms. However, the largest structure in the PDB, the HIV-1 virus capsid (PDB ID: 3J3Q) [7], contains more than 2.4 million atoms and takes up 254 MB of disk space when stored as an uncompressed PDBx/mmCIF file. As we expect the deposition of even bigger structures in the future, the development of a novel compact representation of macromolecular structures is necessary. Our goal is a companion format to the PDBx/mmCIF archival format that is designed for the needs of structural data visualization, analysis and transmission over the Internet. Given limited network bandwidth, the reduction in file size is necessary for efficient data transmission. Hence we propose to store the structural data in the encoded and compressed form. The goal of this paper is to evaluate strategies to compress macromolecular coordinates, since a large fraction of the data in structure files are the atomic coordinates. The encoding and compression strategies described here form the basis for the MacroMolecular Transmission Format (http://mmtf.rcsb.org), a file format for the efficient transmission of structural data for interactive visualization and analysis applications, especially for large molecular complexes.
Macromolecules have structural redundancy: they have repeated or similar structural elements and predictable local geometry. Hence we propose to compress macromolecular coordinates employing strategies that consider bespoke structural features. This offers an opportunity for more efficient compression than general-purpose compression algorithms, such as GZIP.
A previous effort to compress the coordinates of individual PDB structures was made by [8]. In their approach, size reduction was achieved by reducing the precision of coordinates to two decimal places, making it a lossy strategy. Other efforts in macromolecule coordinate compression have appeared within the context of the molecular dynamics field, where simulations can produce terabytes of coordinate trajectories. The applied methods there have achieved high compression ratios using various inter-frame encoding schemes, e.g., delta coding, prediction with polynomials, or space curves [9][10][11]. As these methods focus on compressing coordinate trajectories they are of limited value for compressing coordinates of individual structures in the PDB archive.
In the following, we systematically investigate compression strategies applied to the 3D atomic coordinates of macromolecules that target the structural redundancy and spatial adjacency rather than syntactic redundancy. We explore lossy as well as lossless compression approaches, investigate intramolecular as well as intermolecular compression achieved using different encoding strategies. Each of the compression methods is evaluated based on key performance metrics. The results provide a strategy for compressing macromolecular coordinates that balances compression efficiency and implementation complexity.
Materials and methods
In this article, we focus on the compression of 3D coordinates of macromolecules as they are challenging for general purpose compression techniques. General compression tools such as GZIP are efficient when the redundancy in data is high, like the redundancy of the language in a text (e.g. repetitive words). For example, GZIP locates repetitive strings within a text file and replaces those strings temporarily with shorter codes to make the overall file size smaller. However, the coordinates coming from experimental data generally do not exhibit such syntactic redundancy. The proposed approaches use the knowledge about structural features of biological macromolecules to create a compact representation of their atomic coordinates. Specifically, we developed two types of strategies: (i) intramolecular compression that operates on the sequence of atoms within a polymer chain; and (ii) intermolecular compression designed for the compression of special cases of multiple chains with identical atoms, such as NMR models and structures with repeated identical subunits.
Intramolecular compression
Intramolecular compression operates on the coordinates of individual polymer chains. This method exploits the spatial adjacency and connectivity of atoms consecutively linked to form a polymer chain. The dataset used to test intramolecular compression strategies consists of the three-dimensional structures present in the PDB (http://www.pdb.org) as of August 2, 2016. The total number of structures was 121,407 with a total size of the gzipped mmCIF files of 29.7 GB.
Intermolecular compression
Intermolecular compression methods operate on identical molecules, i.e., those that contain the same number of identical atoms: (i) ensembles with multiple models; (ii) structures with identical subunits related by non-crystallographic symmetry; (iii) asymmetric structures with repeated identical subunits (e.g., the HIV capsid). In contrast to intramolecular strategies, intermolecular approaches compare corresponding identical atoms across molecules instead of consecutive atoms within each molecule. The order in which the molecules are compared is defined by the traversal strategy. The general idea is that the information can be stored for a single molecule and all other molecules can be referenced to a representative molecule.
Ensembles with multiple models. The first dataset we used to evaluate intermolecular compression included 8108 NMR structures and 250 X-ray structures that contain multiple models. Such models reflect the dynamic nature of biopolymers. They represent a single structure as an ensemble of conformations that satisfy experimental restraints.
Structural ensembles have varying degrees of flexibility (Fig 1) and structures are typically aligned by the authors before the deposition. In few cases, the structures are not aligned ( Fig 1D). The total number of structures with multiple homogeneous models is 8,358, which accounts for 6.9% of the total number of structures in the PDB archive. In terms of size, this dataset occupies 4.6 GB as GZIP compressed mmCIF files, which constitutes 15.5% of the PDB archive size.
Structures with identical subunits related by non-crystallographic symmetry. The second dataset we used to test intermolecular compression includes oligomeric complexes composed of identical protein subunits. Those subunits have the same amino acid composition and very similar 3D structure (Fig 2). The proteins with identical subunits account for 45,477 structures (37.5% of total number of structures in the PDB archive), of which 15,179 structures are homogeneous (15.5% of a total number of structures in the PDB archive), by which we mean that identical subunits have the same number of atoms.
Traversal strategies. Unlike intramolecular compression methods that follow the linear order of atoms to encode their coordinates, for the intermolecular compression methods there are N! permutations to compare N coordinate sets. We have implemented three traversal strategies that define the order in which the molecules are compared (a molecule can be an entire model in a multi-model structure or a single subunit). These strategies include: (i) reference first, e.g., the first molecule is used as a reference for every other molecule ( Fig 3A); (ii) waterfall, e.g., the molecules are traversed subsequently, such as the second molecule goes after the first, the third after the second and so on ( Fig 3B); (iii) Minimum Spanning Tree (MST), e.g., the molecules are represented as an undirected connected weighted graph, from which an MST is built (Fig 3C).
Each vertex of the MST graph is a molecule and vertices are connected by weighted edges. The weight between two connected vertices results from the comparison of two molecules. We implemented two edge weight metrics: 1. RMSD: the average distance between the identical atoms of two molecules, calculated as RMSD = sqrt((1/N) Σ_{i=1}^{N} ||a_i − b_i||^2), where a_i and b_i are the coordinates of the ith pair of identical atoms and N is the number of atoms. 2. GZIP: the distances between the coordinates of identical atoms of two molecules are calculated and the result is compressed using the GZIP algorithm; the size of the compressed result is used as the edge weight.
We used Prim's algorithm to build the MST [12], implemented in the JGraphT package (http://jgrapht.org). The MST is an undirected graph with the least total sum of all weights. To define the traversal order for each graph, we calculate the root as the longest path that connects the two most distant vertices, and branches as shorter sequences that originate from the root nodes. The root is found by traversing the MST twice using the breadth-first algorithm. For example, the traversal order for four molecules can be as shown in Fig 3C, where the root connects the 1st, 2nd, and 3rd molecules and the branch connects the 2nd and 4th molecules. A compact sketch of the edge weighting and traversal is given below.
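The sketch below illustrates the RMSD edge weights, a small Prim's-algorithm MST over the complete graph of molecules, and the double breadth-first search used to find the ends of the root path. The paper's implementation uses JGraphT in Java; this self-contained Python version and the random toy coordinates are assumptions for illustration only.

```python
import numpy as np
from collections import deque

def rmsd(a, b):
    """Root-mean-square distance between identical atoms of two (N, 3) coordinate sets."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def prim_mst(weights):
    """Return MST edges of a complete graph given a symmetric weight matrix."""
    n = len(weights)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weights[e[0]][e[1]])
        in_tree.add(v)
        edges.append((u, v))
    return edges

def farthest_from(start, adj):
    """BFS helper: a vertex with the longest (unweighted) path from `start`."""
    seen, queue, last = {start}, deque([start]), start
    while queue:
        last = queue.popleft()
        for nxt in adj[last]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return last

molecules = [np.random.randn(50, 3) for _ in range(5)]
W = np.array([[rmsd(a, b) for b in molecules] for a in molecules])
edges = prim_mst(W)
adj = {i: [] for i in range(len(molecules))}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)
root_end = farthest_from(farthest_from(0, adj), adj)   # one end of the root path (double BFS)
```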
Superposition to improve intermolecular encoding. For intermolecular compression methods, we superpose each pair of molecules before encoding. The superposition is useful for multi-model structures if the models are not aligned by the authors. For the structures with identical subunits, superposition is required to make the intermolecular encoding beneficial. We used a least squares fit algorithm, a conventional superposition method that minimizes the sum of squared residuals between atomic coordinates.
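One conventional least-squares fit is the Kabsch algorithm based on the singular value decomposition; whether the paper's implementation follows exactly this route is an assumption, and the sketch below only illustrates the superposition step and the rotation/translation that must later be stored.

```python
import numpy as np

def superpose(mobile, target):
    """Rotate/translate `mobile` (N, 3) onto `target` (N, 3); return fitted coords, R, t."""
    mc, tc = mobile.mean(axis=0), target.mean(axis=0)
    H = (mobile - mc).T @ (target - tc)          # 3 x 3 covariance of centered coordinates
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against an improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ mc
    return (R @ mobile.T).T + t, R, t

# Toy check: a rigidly rotated and shifted copy superposes back exactly.
target = np.random.randn(100, 3)
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
mobile = target @ R_true.T + np.array([1.0, -2.0, 0.5])
fitted, R, t = superpose(mobile, target)
print(np.allclose(fitted, target, atol=1e-8))    # True for an exact rigid motion
```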
To restore the original coordinates the transformation must be stored for each superposition. Each transformation requires 24 additional bytes to store the translation and 32 bytes to store a rotation in form of quaternion. This number is much smaller than the size of coordinates and was not included in the calculation of the compressed size.
Full versus reduced representation of PDB structures
We analyzed the performance of compression algorithms on the PDB structures in two representations named full (F) and reduced (R). The most detailed full representation includes all atoms from the macromolecular structure that belong to the polypeptide or polynucleotide chains. However, the level of detail in full representation is not always necessary. The reduced representation considers only positions of alpha-carbon atoms for polypeptide chains and central phosphorus atoms of phosphate groups for polynucleotide chains. Using a lightweight reduced representation speeds up file transfer and rendering significantly, providing enough information for analysis. One example of a reduced representation application is the visualization of large macromolecules, where the visualization of surfaces or ribbon diagrams is often preferred as an atomic representation becomes overcrowded. The reduced representation has also been shown to be useful for computational comparison of two protein structures, structure prediction, and structure modeling [13,14].
Lossless versus lossy compression
Data compression generally comes in two flavors, lossy (LS) and lossless (LL). Lossless compression restores the original data perfectly (bitwise identical) from the compressed data, in contrast to lossy compression where the full precision of the original data is irreversibly lost after compression, but higher compression ratios can be obtained. We achieve lossy compression by reducing the precision of coordinates to one decimal point, i.e., 0.1 Å. In the visualization of macromolecular structures the precision of 3D coordinates can be sacrificed if lossless and lossy representations look very similar. Visual inspection shows that a lossy representation still maintains reasonably "ideal" geometries for both full and reduced representations (Fig 4).
General compression scheme
The general compression scheme refers to two algorithms: a compression algorithm that takes as an input the original data sequence S o and reduces it to S c that requires fewer bits, and a decompression algorithm that recovers S o from S c . As a general scheme, our compression approaches are based on the following subsequent steps: (i) the encoding step, coordinates are transformed from floating point numbers to a more compact integer representation; (ii) at the packing step 32-bit integers are encoded as 16-bit integers; (iii) entropy compression removes syntactic redundancy in the encoded data ( Fig 5). Below we describe algorithms associated with each step.
Encoding. Encoding refers to the transformation of atomic coordinates to a representation better suited for compression. Here the encodings are fixed-width, e.g., this representation uses the same number of bits to store the encoded value as to store the original value. The transformation aims to reduce the dynamic range of values, so a smaller number of bits can be used to store the values. In conjunction with the packing and entropy compression methods, described later in the article, such representation yields a higher compression rate. The following encoding strategies were considered for this study.
Integer encoding: Macromolecular coordinates are captured as real numbers in Ångstrom (Å, 0.1 nm) with a limit on precision, i.e., to within 0.001 Å. Thus, we can represent the atomic coordinates as integers without loss of accuracy by multiplying the coordinate values by a factor of 1000. We also used a smaller multiplication factor of 10 to achieve lossy compression. Using a smaller multiplication factor introduces a loss of accuracy of the original coordinates. However, loss of coordinate accuracy does not necessarily lead to the loss of experimental data. Experimental measurements determine the atomic position with a degree of uncertainty. The experimental accuracy of the macromolecular coordinates is usually much lower than 3 decimal places. For crystallographic structures the B-factor (in units of Å²) describes indeterminable thermal noise and is related to the mean square displacement of the atomic position. In proteins, B-factors typically range from 5 to 60 Å², corresponding to a positional uncertainty of greater than 0.2 Å [15,16]. NMR structure ensembles do not provide a statistically meaningful description of the true accuracy of coordinates given the experimental uncertainties in deriving distance restraints [17]. Other methods such as EM produce lower resolution structures than X-ray crystallography. This allows us to exploit lossy compression that stores the coordinates up to a tenth of an Å (multiplication factor of 10), which is generally sufficient to preserve the essential structural information provided by the lossless representation.
Delta encoding: Delta encoding stores the differences between coordinates instead of absolute values. As the atoms in macromolecular structures are within a certain distance range determined by the length of their chemical bonds and spatial adjacency of amino acids, the distance between two consecutive atoms will be typically smaller than the absolute values of their coordinates. For example, a typical carbon-carbon (C-C) covalent bond has a bond length of 1.54 Å [18].
First, the coordinates are encoded as integers and stored in a single array C = {x_0 . . . x_n, y_0 . . . y_n, z_0 . . . z_n}, where n is the total number of atoms in the structure. The algorithm stores the first encoded value as s_0 = c_0; all the consecutive values are encoded as s_i = c_i − c_{i−1}, where c_i ∈ C. We can visualize the effect of encoding by plotting the kernel density for the encoded values, which shows the shape of the data distribution (Fig 6). After encoding, there is a higher probability of smaller values centered around zero. A minimal sketch of the integer and delta encoding steps is given below.
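The sketch below combines the integer encoding (factor 1000 for lossless, 10 for lossy) with the delta encoding described above and shows the round trip; the array layout and the random toy coordinates are assumptions for illustration.

```python
import numpy as np

def delta_encode(coords, factor=1000):
    """coords: (n, 3) float Å -> flat int32 array of deltas over x..., y..., z...."""
    c = np.rint(coords * factor).astype(np.int64)
    flat = np.concatenate([c[:, 0], c[:, 1], c[:, 2]])
    deltas = np.empty_like(flat)
    deltas[0] = flat[0]
    deltas[1:] = np.diff(flat)
    return deltas.astype(np.int32)

def delta_decode(deltas, n_atoms, factor=1000):
    flat = np.cumsum(deltas.astype(np.int64))
    xyz = flat.reshape(3, n_atoms).T
    return xyz.astype(np.float64) / factor

coords = np.round(np.random.randn(1000, 3) * 20, 3)   # toy coordinates at 0.001 Å precision
enc = delta_encode(coords)
dec = delta_decode(enc, len(coords))
print(np.allclose(dec, coords))                        # True: round trip at 0.001 Å precision
```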
Predictive encoding: Similar to delta encoding, predictive encoding works with the sequence of integer encoded coordinates C = {x_0 . . . x_n, y_0 . . . y_n, z_0 . . . z_n}, where n is the total number of atoms in the structure. The history of points is used to predict the position of the next point, and the errors between the predicted and original values are stored. Here, the position of the next atom is predicted based on the distance between the preceding pair of atoms. The algorithm stores the first encoded value as an original coordinate, e.g. s_0 = c_0. The second encoded value is s_1 = c_1 − c_0. Then at each step the algorithm calculates and stores the error e_i = c_i − p_i, where p_i is a predicted value calculated as p_i = 2c_{i−1} − c_{i−2}. In this form, this coincides with delta-delta encoding. The effect of the encoding is shown in Fig 6.
Wavelet-based encoding: Discrete Wavelet Transforms have made their way into compression as an efficient image compression technique. For example, the biorthogonal CDF 5/3 wavelet transform, also called the Le Gall 5/3 wavelet, which performs an integer-to-integer wavelet transform, is used by the JPEG2000 format for lossless image compression [19]. Here we applied this algorithm for lossless compression of macromolecular coordinates. Wavelet encoding uses the wavelet function to transform the sequence of coordinate values into a sequence of wavelet coefficients. The effect of encoding is shown in Fig 6.
Encoding based on unit vector compression: Unit vector compression represents the atomic coordinates of a molecule as a set of vectors between every pair of consecutive atoms. The encoding algorithm compresses these vectors as follows: (i) represent each vector by its direction (a unit vector) and length (a scalar value); (ii) compress every unit vector to a single integer value using the unit vector coding technique; (iii) reconstruct the original vector from the decompressed unit vector and original length. In step (i), the length of each vector is subtracted from the average vector length calculated for a given molecule. The average length is stored once and the difference between the actual and average length is stored for every vector. In step (ii), the three coordinates of the unit vector, 32 bits each (96 bits in total), are compressed to a single 32- or 16-bit signed integer number. The effect of the 16-bit and 32-bit encodings is shown in Fig 7. The kernel density for the encoded values shows a higher probability for smaller values with respect to the integer encoded values. However, the values for the compressed unit vector span the entire 16- and 32-bit range of integer values.
The compression algorithm is described in "Compressed Unit Vectors" by David Eberly (https://www.geometrictools.com). Due to the quantization at the step (ii), decompressed vectors contain a rounding error. In the step (iii), we calculate and store the difference between coordinates of original and decompressed vectors to reconstruct the coordinates losslessly.
Packing. The packing algorithm allows a compact memory representation of data. As the values of encoded coordinates are within a smaller dynamic range with respect to original values, fewer bits can be used to store and transmit the data. Below we describe the packing strategies that we explored for this work.
Recursive indexing: Recursive indexing encodes values such that the encoded values lie within the interval between a minimum (min) and maximum (max) value [20]. This allows 32-bit integers to be represented more efficiently when most of the values fit into 16-bit (or 8-bit) integers. Recursive indexing works as follows: each value that lies within the open interval (min, max) represents itself; otherwise, the max (or min if the number is negative) interval endpoint is stored and subtracted from the input value. This process of storing and subtracting is repeated recursively until the remainder lies within the interval.
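A minimal sketch of recursive indexing into a 16-bit stream, together with the corresponding unpacking, is given below; the function names and the toy input values are illustrative.

```python
import numpy as np

I16_MIN, I16_MAX = -32768, 32767

def recursive_index_pack(values):
    """Emit each 32-bit value as one or more 16-bit values (endpoints mark continuation)."""
    out = []
    for v in values:
        v = int(v)
        while v >= I16_MAX or v <= I16_MIN:
            out.append(I16_MAX if v > 0 else I16_MIN)
            v -= I16_MAX if v > 0 else I16_MIN
        out.append(v)
    return np.array(out, dtype=np.int16)

def recursive_index_unpack(packed):
    """Accumulate values until a non-endpoint value terminates the run."""
    out, acc = [], 0
    for v in packed:
        acc += int(v)
        if v != I16_MAX and v != I16_MIN:
            out.append(acc)
            acc = 0
    return np.array(out, dtype=np.int32)

vals = np.array([100, -40000, 32767, 70000, -5], dtype=np.int32)
packed = recursive_index_pack(vals)
print(recursive_index_unpack(packed).tolist() == vals.tolist())   # True
```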
Variable-length quantity: The idea behind variable-length quantity encoding is to use a variable number of bytes to represent an arbitrary integer. The encoding splits each integer into a sequence of bytes in which a continuation bit indicates whether further bytes follow, so that small values occupy fewer bytes.
Entropy compression. It is possible to further reduce the size of the encoded and packed coordinates through entropy compression. To meet the requirements of efficient data transmission over the Internet, the method should offer fast decompression and be supported in widely used web browsers. In the following we evaluated the performance of two entropy compression methods that are optimized towards the abovementioned requirements: GZIP (https://www.ietf.org/rfc/rfc1952.txt), a lossless compression technique that uses two interplaying algorithms, LZ77, a dictionary-based encoding [21], and Huffman coding [22], which uses fewer bits to encode frequently occurring bytes; and Brotli (https://github.com/google/brotli), a more recent lossless algorithm that combines an LZ77 variant with Huffman coding and context modeling.
Performance metrics
We used different metrics to evaluate the compression efficiency. These include Shannon entropy [23], compression ratio, and compressed size. However, the most objective metric for this study is compressed size, i.e. the amount of space required to store the compressed data. We only report the compressed size here, since the size of coordinates after encoding and packing may differ for different encoding algorithms.
Results and discussion
In this paper, we explored various compression methods for macromolecular structures, describing the main ideas behind each technique. We analyzed intra- and intermolecular, lossy and lossless compression approaches based on different encoding algorithms. Lossy compression can be used in applications that tolerate data loss without noticeable loss of performance, for example molecular visualization. On the other hand, methods such as structure refinement or molecular force field applications may be sensitive to small changes in coordinates. Therefore, lossless compression is usually a preferred choice while compressing scientific data and we centered our analysis on the lossless compression algorithms. In the following, we compare the performance of the presented compression approaches and discuss the combination of methods that yield best compression.
(Caption of Fig 7: The white region is a kernel density plot representing the distribution of encoded values. The kernel density shows the shape of the data distribution. The wider section of the violin plot represents a higher probability that members of the set will take on the given value; the skinnier section represents a lower probability. https://doi.org/10.1371/journal.pone.0174846.g007)
Distribution of encoded values
We analyzed the distribution of encoded values for all the structures in the PDB to understand how encoding reduces the amount of space required to store the data (Fig 8). The results indicate that most of encoded values fit into 16-bit integers for lossless and to 8-bit integers for lossy compression. The unit vector (UV) encodings have the following effects: (i) size of encoded data increases by 40% with respect to original size; (ii) 60-80% of encoded values fit into 8-bit integers; (iii) at least 20% of values require 16-bit integers for UV 16-bit (or 32-bit integers for UV 32-bit).
Packing algorithms performance
The packing algorithms are aimed at creating a more compact memory representation of the data. We evaluated the performance of the recursive indexing and variable-length quantity encodings applied to the results of the intramolecular encoding strategies. Recursive indexing outperforms variable-length quantity for all encoding strategies except the UV (32-bit) encoding. For example, recursive indexing achieves 58% better compression than variable-length quantity for the intramolecular delta encoding.
Entropy compression algorithms performance
We evaluated the performance of Brotli and GZIP compression algorithms by comparing the size of encoded coordinates after compression. Brotli can achieve 3.5% higher compression on average for all encoding algorithms. Though Brotli offers a slightly better compression ratio, GZIP remains a preferred tool for entropy compression due to its wider availability in all browsers and operating systems.
Intramolecular compression results
In order to obtain a minimum compressed size, we implemented the following intramolecular compression strategies that are built up from a combination of algorithms mentioned above: (i) Delta is a combination of integer encoding, delta encoding, recursive indexing, and GZIP compression; (ii) Predictive is a combination of integer encoding, followed by predictive encoding, recursive indexing, and GZIP compression; (iii) Wavelet runs integer encoding, delta encoding, wavelet encoding, recursive indexing, and GZIP compression; (iv) UV (16-bit) is based on the unit vector compression to 16-bits integers, followed by recursive indexing and GZIP compression; (v) UV (32-bit) is encoding based on the unit vector compression to 32-bit integer followed by GZIP compression. The recursive indexing step is omitted for UV (32-bit) encoding. We also report the GZIP compressed size of original coordinates when they are represented in memory as Floating point numbers and Integer encoded values to give a baseline for comparison.
Our results suggest that delta encoding performs best for lossless compression (Table 1), with UV(16-bit) gaining a small advantage over the delta encoding with lossy compression. Delta encoding, however, provides a good trade-off between the compression ratio and code complexity as well as compression/decompression speed. The efficiency of intramolecular delta encoding is mainly due to the regular patterns in spatial adjacency and connectivity of atoms.
Intermolecular compression results
Further we investigated intermolecular compression strategies for the "special cases" of macromolecular structures, such as structures with multiple models and structures with non-crystallographic symmetry or repeated identical subunits. To determine if better compression can be achieved, we compared the results of the proposed intermolecular compression strategies with intramolecular delta compression as we have demonstrated above that the delta compression outperforms other intramolecular compression methods.
We implemented the following tree traversal strategies: reference, waterfall, and Minimum Spanning Tree (MST) and two different metrics to build a weighted graph needed to construct the MST (GZIP, and RMSD). We used delta and predictive algorithms for encoding. To select the best combination of the abovementioned strategies, we broke down the analysis into three steps. First, we evaluated the best metric to construct the MST. Second, we selected the best traversal strategy. Finally, we evaluated the encoding algorithm that yielded better compression. The results of this analysis are summarized in the Table 2. The results suggest no significant difference in compression ratio obtained using different metrics in the construction of the MST. RMSD metric has been chosen for further analysis, which involved comparison of different traversal strategies (Table 3).
The results indicate slightly better performance for a waterfall strategy, which has been chosen for further evaluation. At the next step the intermolecular delta and predictive encodings were compared with intramolecular delta encoding strategies ( Table 4).
The results suggest that intermolecular algorithms can compress 7.2% more for lossless compression and 39.7% for lossy compression on the dataset with multi-models structures with respect to intramolecular delta compression. For structures that contain repeated identical subunits, intermolecular compression saves 20.6% for lossless compression and 34.2% for lossy compression. However, the contribution of intermolecular approaches to the lossless compression of the entire PDB is only about 7% with respect to lossless delta compression.
Having considered the different combinations listed above, the overall optimal performance regarding compression and simplicity can be obtained by the combination of integer encoding, intramolecular delta encoding, recursive indexing and GZIP compression. Lossy compression makes it possible to obtain a 10-fold reduction in size by reducing the precision of atomic coordinates from three decimal places to one.
(Caption of Table 4: This table shows the comparison of different intermolecular (IR) encoding strategies using waterfall traversing (W) compared to intramolecular (IA) delta compression. The reported size is the total size of coordinates retrieved from the datasets containing the multi-model structures (M) and the structures with repeated subunits (S). Both lossless (LL) and lossy (LS) compression methods were analyzed.)
Conclusions
We investigated compression approaches for 3D coordinates of macromolecular structures. The coordinates data contain a high level of entropy and are therefore poorly compressed by the general-purpose compression tools. To achieve better compression, we applied bespoke encoding methods to create a more compact representation of the atomic coordinates. The performance of compression methods was evaluated against benchmark data from the PDB. We demonstrated that the intramolecular compression based on the combination of integer & delta encoding, recursive indexing packing and GZIP entropy compression is very efficient for compressing atomic coordinates of macromolecules with lossless and lossy schemes. Intermolecular compression approaches can attain additional data reduction compared to intramolecular approaches. However, at the scale of the entire PDB archive the contribution of intermolecular compression is not significant, since it is only applicable to a small fraction of the archive. Therefore, the simple intramolecular delta encoding is the preferable choice for efficient compression of macromolecular structures.
The compression approaches investigated in this paper are the foundation for the MacroMolecular Transmission Format for 3D structures (http://mmtf.rcsb.org). This format allows a compact representation and interactive visualization of the largest macromolecular complexes that are currently in the PDB [24]. By overcoming I/O bottlenecks such as network transfer, reading and parsing, the entire PDB archive can be loaded into memory within minutes, which opens new possibilities for building scalable analytic tools allowing, for example, interactive structural queries.
The simplicity of the selected compression methods allows for the development of lightweight and fast software libraries for de-/compression. While higher compression ratios can be obtained with more complex algorithms, too little is gained to justify the additional burden on implementation and maintenance. In conclusion, we believe that the compression strategies reported in this article offer important building blocks to face the growing size and complexity of the macromolecular 3D structures. | 6,783 | 2017-03-31T00:00:00.000 | [
"Biology",
"Computer Science"
] |
Classification feasibility test on multi-lead electrocardiography signals generated from single-lead electrocardiography signals
Nowadays, Electrocardiogram (ECG) signals can be measured using wearable devices, such as smart watches. Most wearable devices provide only a few details; however, they have the advantage of recording data in real time. In this study, 12-lead ECG signals were generated from lead I and their feasibility was tested to obtain more details. The 12-lead ECG signals were generated using a U-net-based generative adversarial network (GAN) that was trained on ECG data obtained from the Asan Medical Center. Subsequently, unseen PTB-XL PhysioNet data were used to provide real 12-lead ECG signals for classification. The generated and real 12-lead ECG signals were then compared using a ResNet classification model, and the normal, atrial fibrillation (A-fib), left bundle branch block (LBBB), right bundle branch block (RBBB), left ventricular hypertrophy (LVH), and right ventricular hypertrophy (RVH) classes were classified. The mean precision, recall, and f1-score for the real 12-lead ECG signals are 0.70, 0.72, and 0.70, and those for the generated 12-lead ECG signals are 0.82, 0.80, and 0.81, respectively. According to these results, the generated 12-lead ECG signals performed better than the real 12-lead ECG signals in our study.
The GAN used is an unconditional GAN; therefore, the generated data do not represent other lead vectors. Moreover, precordial leads were not considered. Lan et al. 19 used the short-time Fourier transform and a GAN to classify and augment data. However, related classification studies mainly focused on augmentation to increase the amount of data. These methods have been verified to increase the performance accuracy; however, the augmented data were not verified.
There have been several studies on lead conversion. Sohn et al. proposed a method for reconstructing a 12-lead ECG from a 3-lead patch device employing an LSTM network 20 . However, their work requires a device measuring at least three leads, whereas our work only needs lead I; therefore, their method is limited in its application to single-lead measurement devices. Afrin et al. proposed a handheld ECG device measuring single-lead ECG, which could measure lead I, lead II, and lead III 21 . The three different leads are measured asynchronously. Based on the previously measured ECG history, 12-lead ECGs are synchronously reconstructed. Therefore, the proposed method additionally needs previously measured ECGs for reconstruction. Huang et al. proposed an ECG system reconstruction method from temporally asynchronous bipolar ECG recordings 22 . Their reconstruction algorithm is based on multiple ECGs recorded asynchronously from different sites. However, when deriving the optimal weight coefficient, only 11 subjects were recorded, and external data were not tested. SynSigGAN is typically used to generate biomedical signals 23 and implements discrete wavelet transformation and Bidirectional Long Short-Term Memory (Bi-LSTM) 24 layers for the generation model. However, its inputs are treated as latent variables; therefore, its use is limited to data augmentation. Shin et al. 25 proposed a method for generating a photoplethysmography (PPG) signal from an ECG signal using an LSTM and a CNN. Lee et al. 26 proposed the R-peak alignment and time sequence embedding method to transform a one-dimensional time series into a two-dimensional time series for enhancing the performance of GANs on two-dimensional time series. Chest leads were also converted from limb leads with high accuracy. However, the R-peak, which was used as the median value, resulted in more than one beat during data preprocessing, and the model inputs were lead II. Existing methods related to lead conversion performed well. However, the generated ECG signals in these studies were not evaluated based on a classification comparison with real ECG signals. Therefore, the Fréchet distance (FD) and mean squared error (MSE) scores should be used as evaluation scores, and a comparison with real ECG signals should be considered alongside a Turing test to establish the usefulness of the generated ECG.
In this study, all 12-lead ECG signals were generated using lead I, and their feasibility for use was determined through a classification performance test. Our study's novelty lies in the analysis of ECG signals generated from lead I through classification tests, demonstrating their feasibility. This study presents the possibility of generating ECG signals for diagnostic use, which mitigates the limitations of single-lead ECG measurement devices. The proposed method can be applied in out-of-hospital ECG monitoring care without the use of multiple-lead measurement devices.
Methods
In this study, the ECG generation model was based on that of our previous study 27. The pix2pix GAN model was trained using MUSE data from patients who had visited the Seoul Asan Medical Center Hospital between January 01, 2001, and February 28, 2022. For classification, the PTB-XL database was used as external data. As illustrated in Fig. 2, six classes were extracted from the PTB-XL database for the classification test. Evaluation was based on the F1-score, precision, recall, and accuracy. An overview of this study is illustrated in Fig. 1.
Datasets and preprocessing
The 12-lead ECG data used in this study were obtained from the MUSE and PTB-XL databases 28. The PTB-XL dataset contains 21,837 records obtained from 18,885 patients, and the MUSE database comprises 4 million records obtained from the Asan Medical Center Hospital. The experimental protocols were approved by the Institutional Review Board (IRB) at the Asan Medical Center Hospital under approval number IRB No. 2022-0781. All methods were carried out in accordance with relevant guidelines and regulations, and informed consent was obtained from all subjects and/or their legal guardian(s).
The duration of the records in both datasets was 10 s and the sampling rate was 500 Hz, so each record contains 5,000 sampling points. For the generation model, lead I ECG signals obtained from the MUSE database were used as the input signals, and the remaining leads as reference signals. Additionally, all 12-lead ECG signals from the PTB-XL dataset were used in the classification model. The data descriptions are listed in Supplementary Tables S1, S2 and S3. To generate and classify raw ECG signals with our model, none of the conventional preprocessing methods, such as filtering or baseline adjustment, were applied. Figure 2 illustrates the data preprocessing and exclusion criteria. For the MUSE database, records of patients under 18 years old, unconfirmed data, and records with sampling rates under 500 Hz were excluded. Furthermore, the 10 s records were segmented into 2.5 s intervals, each containing 1,250 sampling points.
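As a concrete illustration of this segmentation step, the sketch below (not the authors' code) splits a record stored as a (leads, samples) NumPy array into 2.5 s windows; the array layout is an assumption.

```python
import numpy as np

def segment_ecg(record, fs=500, window_s=2.5):
    """Split an ECG record of shape (n_leads, n_samples) into
    non-overlapping windows of window_s seconds.

    A 10 s, 500 Hz record (5,000 points) yields four 2.5 s segments
    of 1,250 points per lead, as in the preprocessing described above.
    """
    record = np.atleast_2d(record)          # (n_leads, n_samples)
    win = int(fs * window_s)                # 1,250 samples per window
    n_win = record.shape[1] // win          # drop any trailing remainder
    trimmed = record[:, : n_win * win]
    # -> (n_win, n_leads, win): each window becomes one training example
    return trimmed.reshape(record.shape[0], n_win, win).transpose(1, 0, 2)

segments = segment_ecg(np.zeros((12, 5000)))
print(segments.shape)   # (4, 12, 1250)
```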
GAN architecture
A GAN consists of two main networks: a generator and a discriminator 11. The basis of a GAN is a minimax game between the generator and the discriminator. In this study, the generator takes lead I as the input and synthesizes the remaining leads, while the discriminator distinguishes the generated signals from the real ones. Figure 1 depicts the overall architecture of the proposed model. The proposed model follows the main objective of the conditional GAN 29, which learns a mapping from an input signal x and a random noise vector z to the target signal y 14 and can be expressed as

L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))],   (1)

where G tries to minimize this objective against an adversarial D that tries to maximize it. Moreover, an L1 loss between the generated and reference signals was added; the final objective of the GAN is thus

G* = arg min_G max_D L_cGAN(G, D) + λ E_{x,y,z}[‖y − G(x, z)‖_1].   (2)
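A minimal PyTorch sketch of this objective is given below, assuming a generator G and a discriminator D are already defined (see the architecture sketch in the next section); the L1 weight `lam` is not specified in the text and is set to the common pix2pix default purely for illustration.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, real_lead, fake_lead):
    # The discriminator is trained to score real target leads as 1 and
    # generated leads as 0 (the max-player of the minimax game in Eq. (1)).
    d_real = D(real_lead)
    d_fake = D(fake_lead.detach())
    return (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
            F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))

def generator_loss(D, real_lead, fake_lead, lam=100.0):
    # The generator is trained to fool the discriminator (adversarial term)
    # while staying close to the reference lead (L1 term of Eq. (2)).
    # lam is an assumed weight, not a value given in the text.
    d_fake = D(fake_lead)
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    return adv + lam * F.l1_loss(fake_lead, real_lead)
```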
Generator and discriminator networks
The generator in this study is a U-net-based encoder-decoder. The U-net generator is depicted in Supplementary Figure S6. The encoder consists of seven convolution layers, with batch normalization and Leaky ReLU applied in all except the first layer. The decoder is composed of seven up-convolution layers. For all Leaky ReLU functions, the slope was set to 0.2; the kernel size and stride length were 4 and 2, respectively. The discriminator, depicted in Supplementary Figure S7, contains five convolution layers with batch normalization and Leaky ReLU. A convolution layer is added after the last layer to map to a one-dimensional output, followed by a sigmoid function. The slope of all Leaky ReLU functions is 0.2, with a kernel size of 4 and a stride length of 2. The learning rate is set to 0.0005 for the generator and 0.0001 for the discriminator. Additionally, Adam is employed as the optimizer, and the batch size is set to 32. A total of 11 models were trained to generate the 11 remaining leads. As mentioned in the 'Datasets and preprocessing' section, the input and output dimensions of the generator, and the input of the discriminator, are all (batch size, 1, 1250).
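The sketch below illustrates the kind of 1-D U-net generator and convolutional discriminator described above. It is a simplified reconstruction, not the authors' implementation: it uses four encoder/decoder levels instead of seven (for brevity), interpolates the skip connections to guard against odd signal lengths, and omits a final output activation, none of which is specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def down(c_in, c_out, norm=True):
    # Stride-2 convolution block: kernel 4, stride 2, padding 1, LeakyReLU(0.2)
    layers = [nn.Conv1d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
    if norm:
        layers.append(nn.BatchNorm1d(c_out))
    layers.append(nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

def up(c_in, c_out):
    # Stride-2 transposed convolution block used in the decoder
    return nn.Sequential(
        nn.ConvTranspose1d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm1d(c_out),
        nn.LeakyReLU(0.2),
    )

class UNetGenerator(nn.Module):
    """Compact 1-D U-net sketch (4 levels instead of the 7 described above)
    mapping a (batch, 1, 1250) lead-I segment to one target lead."""
    def __init__(self):
        super().__init__()
        self.e1 = down(1, 64, norm=False)    # 1250 -> 625
        self.e2 = down(64, 128)              # 625  -> 312
        self.e3 = down(128, 256)             # 312  -> 156
        self.e4 = down(256, 512)             # 156  -> 78
        self.d1, self.d2, self.d3 = up(512, 256), up(512, 128), up(256, 64)
        self.out = nn.ConvTranspose1d(128, 1, kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        s1 = self.e1(x)
        s2 = self.e2(s1)
        s3 = self.e3(s2)
        b = self.e4(s3)
        def cat(decoded, skip):
            # Resample before concatenating to guard against odd-length rounding
            return torch.cat([F.interpolate(decoded, size=skip.shape[-1]), skip], dim=1)
        y = cat(self.d1(b), s3)
        y = cat(self.d2(y), s2)
        y = cat(self.d3(y), s1)
        return F.interpolate(self.out(y), size=x.shape[-1])

class Discriminator(nn.Module):
    """Five stride-2 convolutions with batch norm and LeakyReLU(0.2), followed
    by a 1x1 convolution and a sigmoid; scores are averaged along the signal
    axis to give one value per example (an assumption of this sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            down(1, 64), down(64, 128), down(128, 256),
            down(256, 512), down(512, 512))
        self.head = nn.Sequential(nn.Conv1d(512, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x)).mean(dim=-1)

fake = UNetGenerator()(torch.zeros(2, 1, 1250))   # -> (2, 1, 1250)
score = Discriminator()(fake)                     # -> (2, 1)
```

Training such networks would then use the losses of Eqs. (1)-(2) with Adam (learning rates 0.0005 and 0.0001 for the generator and discriminator, respectively) and a batch size of 32, as stated above.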
Evaluation method
The classification of the generated 12-lead ECG signals was performed using a ResNet model. Normal ECG, RBBB, LBBB, LVH, RVH, and A-fib were used as the classification classes. A-fib and normal ECG were included because most out-of-hospital wearable devices are used to detect AF, and both normal ECG and AF can be classified from a single-lead ECG measurement 9. By contrast, RBBB, LBBB, LVH, and RVH are diagnosed using the precordial leads (V1, V2, V3, V4, V5, and V6). To test the feasibility of the 12-lead ECG generated from lead I, five different methods were compared. First, the classification results of the generated 12-lead ECG and real lead-I ECG signals were compared; this comparison verifies the disadvantages of single-lead measurement. Second, the classification results of the generated 12-lead and real 12-lead ECG signals were compared. Third, as an ablation study, the Einthoven triangle formula 30 (Eqs. 4-7) was applied to the generated leads. Two different limb leads are required to apply the Einthoven triangle formula; therefore, three different groups of leads were used in the experiment: input lead I with generated lead II, input lead I with generated lead III, and input lead I with generated leads II and III. The groups were evaluated separately to determine the best outcome and the effect of the number of generated leads.
All five sets of methods were evaluated based on their precision, recall, and F1-score values (Eqs. 10-12), and the PTB-XL external data were used to train and evaluate each classification method.
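The per-class and mean scores can be computed as sketched below with scikit-learn; the label encoding and the placeholder arrays are illustrative only and are not the study's data.

```python
import numpy as np
from sklearn.metrics import classification_report

classes = ["NORM", "AFIB", "LBBB", "RBBB", "LVH", "RVH"]

# y_true: reference PTB-XL labels; y_pred: ResNet predictions for one of the
# five input configurations (e.g. real 12-lead, generated 12-lead, real lead I).
y_true = np.array([0, 1, 2, 3, 4, 5, 0, 1])   # placeholder labels
y_pred = np.array([0, 1, 2, 3, 4, 0, 0, 1])   # placeholder predictions

# Per-class precision, recall and F1, plus averages, as reported in Tables 1-2.
print(classification_report(y_true, y_pred, target_names=classes, zero_division=0))
```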
Results
In this section, the signals generated from the PTB-XL database and their evaluation scores are presented and compared. Figure 3 illustrates the generation and classification processes using lead I. The performance of the generation model was evaluated in our previous study 27. The evaluation scores for all five methods are shown in Tables 1 and 2.
The precision, recall, and F1-score values of the generated 12-lead ECG signals and the classification performance of real lead I are shown in Table 1, where the best results are highlighted in bold. The generated 12-lead ECG signals exhibited the best results, followed by the generated lead II, showing that multi-lead ECG classification is more accurate. In particular, the classification results for the abnormal ECG signals that are typically diagnosed from the precordial leads show a significant difference. The classification performance of all real 12 leads is shown in Table 2, where the real leads exhibited poorer results for both the 12-lead ECG and single lead I signals.
The confusion matrix results are depicted in Fig. 4. Additionally, the results for the three different lead groups are listed in Table 1, and the corresponding confusion matrix is presented in Fig. 5. No significant differences in the results are observed when the Einthoven formula is used to calculate the other limb leads. Moreover, the ROC and AUC results illustrated in Supplementary Figure S1 exhibit no significant difference. Therefore, generating only lead II or lead III and calculating the remaining limb leads using the Einthoven formula reduces both model complexity and computation time. The generated 12-lead ECG and reference ECG are illustrated in Supplementary Figure S2. The diagnostic capability of the generated ECG signals was also tested against the classification results of the real ECG signals.
Discussion
This study demonstrates that generated ECG signals are capable of supporting CVD diagnosis. Table 3 lists previous studies on ECG generation with GANs that evaluated their work through classification performance, as compiled in the review by Laurenz Berger 15. However, as shown in Table 3, most previous work focused mainly on solving imbalanced-data problems. Moreover, their input data were noise vectors or simulator outputs, whereas our study focuses on lead-to-lead conversion. Previous and related studies have only addressed data augmentation, and the generated signals were not derived from single-lead ECG. A detailed example of the generated ECG signals is shown in Supplementary Figure S1.
Single-lead ECG signals can be better classified for CVDs by implementing the proposed method, which mitigates the disadvantages of single-lead ECG signals. The method enables the real-time analysis of ECG signals through single-lead ECG measurement, thereby allowing the use of single-lead ECG measurement devices, such as smart watches, for both patients and the general public. Therefore, the proposed method can be used to alert users and patients to potential danger. Additionally, single-lead measurement, which is more comfortable, could be adopted in hospitals instead of standard 12-lead ECG measurement.
The performance for the RVH class was lower than that of the other classes; its F1-score was up to 0.49 lower than that of the normal class, mainly owing to the small quantity of RVH data in the PTB-XL database used to train the classification model compared to the other classes. However, its performance was still higher than that of real 12-lead ECG signal classification. The 12-lead ECG signals were generated from lead I rather than lead II because typical single-lead ECG devices are mainly smart watches, which measure lead I.
The classification experiments were performed using an external dataset that was not used to train the generation model. A comparison of the classification of the generated 12-lead and real lead I signals was also performed to determine whether the generated ECG signals have better classification performance than real lead I. The differences were most pronounced for precordial-lead-based CVD diagnosis. This result shows that single-lead ECG measurement devices are not capable of diagnosing the various CVD types; however, applying the proposed method improves their classification capability.
Standard ECG signals are acquired by 12-lead ECG measurement, in which multiple electrodes are attached to the surface of the patient's body, making it hard to obtain ECG signals over the long term. However, various devices have been developed with the growth of the single-lead ECG device market. Owing to their capability for ECG measurement in daily life, they are used to detect cardiac diseases such as A-fib. However, these single-lead measurement devices generally detect or diagnose cardiac diseases based on rhythmic features; therefore, it is nearly impossible to detect diseases that are diagnosed based on amplitude or via comparison with other leads.
This study demonstrated the feasibility of generated ECG signals for use in diagnosis. The obtained results were better than those of real ECG signals, and the approach can be implemented with single-lead devices. The precision, recall, and F1-scores of the generated 12-lead ECG are shown in Tables 1 and 2: the normal class values are 0.89, 0.92, and 0.91; the A-fib class values are 0.96, 0.76, and 0.84; the LBBB values are 1.00, 0.96, and 0.98; the RBBB values are 0.87, 0.77, and 0.82; the LVH values are 0.82, 0.94, and 0.87; and the RVH values are 0.38, 0.47, and 0.42, respectively. Among the six classes, A-fib was the only class not in sinus rhythm, which contributed to its higher performance.
The proposed method can also provide insights into the features used in various pathological cardiac diagnoses. This would allow the monitoring of personalized ECG signals during in- and out-of-hospital care, with the cardiologist keeping patient records over a long period. Moreover, further assessment can be made by the cardiologist when a notable CVD is detected during the patient's daily life.
Most of all, the novelty of our study is threefold: 1) a large dataset of over 400 million records is used to train the generative model; 2) no other study has investigated the use of generated ECG signals for diagnosis; and 3) generated ECG classification exhibits better performance than reference single-lead ECG classification, indicating that the information obtained from the precordial leads is crucial.
As shown in Tables 1 and 2, the proposed method produces better performance than real ECG classification. The question of why the generated ECG signal classification results show higher performance needs to be addressed. Our proposed model was trained on a dataset of 4 million samples and is capable of generating ECG signals that closely resemble real ones. The primary difference between the generated ECG signals and the corresponding reference ECG signals is that the generated ones can fill in missing data and reduce baseline-wandering problems, as shown in Supplementary Figures S4 and S5. The crucial outcome of the study, however, is that CVD diagnosis using the entire 12-lead ECG performs better when employing our proposed method.
However, a few limitations exist in this study. First, six CVD types involving both precordial and limb leads were classified. Nonetheless, there are various other types of CVDs, such as acute MI (AMI), that are life-threatening. Certain MIs, such as ST-elevation MI, are classified fairly well using deep learning (DL) [43][44][45]. However, very few AMI records are available owing to its high mortality rate. In the future, more focus should be placed on critical CVDs that may require out-of-hospital care. Second, the lead I ECG signals used as input in the proposed method were extracted from standard 12-lead ECG records; no open dataset provides recordings measured with both a single-lead device and the standard 12-lead ECG. However, a few single-lead ECG signals recorded with smart watches were used to generate 12-lead signals, as depicted in Supplementary Figure S3, and classified to demonstrate the concept of our method. This proof of concept shows that ECG signals obtained from single-lead devices can be used to generate 12-lead ECG signals and to detect CVDs.
Conclusion
This study presents a method for generating 12-lead ECG signals that can be used to classify CVDs using DL. ECG data obtained from the Asan Medical Center, containing 400 million records, were used. External data from the PTB-XL database were used to classify six types of cardiac disease that manifest in the limb and precordial leads. Additionally, the classification performance was compared between real and generated ECGs. The proposed method exhibited outstanding classification results, which can be applied in real-life ECG monitoring. Single-lead ECG devices are simple and comfortable to wear; however, owing to the lack of lead information, rhythm features are mainly used to detect abnormal ECG. The proposed approach can be used to overcome the disadvantages of single-lead ECG devices, thereby helping out-of-hospital CVD detection, which is a crucial step towards personalized medicine.
Figure 1. Overview of the proposed method. Generated ECG and real ECG signals are equally preprocessed, trained, and classified using the same ResNet model. The output of the classification model is normal, A-fib, CLBBB, CRBBB, LVH, or RVH.
Figure 2. Data exclusion and preprocessing. For the MUSE data, records of patients under 18 years old, unconfirmed data, and records with sampling rates under 500 Hz were excluded. Both databases were segmented into 2.5 s segments for training.
Lead III = Lead II − Lead I   (4)
Lead aVR = −(Lead I + Lead II)/2   (5)
Lead aVL = Lead I − (Lead II)/2   (6)
Lead aVF = Lead II − (Lead I)/2   (7)
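For illustration, the ablation study's limb-lead derivation from Eqs. (4)-(7) can be written as a short function (a sketch, assuming leads are stored as NumPy arrays):

```python
import numpy as np

def derive_limb_leads(lead_i: np.ndarray, lead_ii: np.ndarray) -> dict:
    """Derive the remaining limb leads from leads I and II using
    the Einthoven/Goldberger relations of Eqs. (4)-(7)."""
    return {
        "III": lead_ii - lead_i,             # Eq. (4)
        "aVR": -(lead_i + lead_ii) / 2.0,    # Eq. (5)
        "aVL": lead_i - lead_ii / 2.0,       # Eq. (6)
        "aVF": lead_ii - lead_i / 2.0,       # Eq. (7)
    }

# Example: lead I is the measured input, lead II is generated by the GAN.
leads = derive_limb_leads(np.zeros(1250), np.zeros(1250))
```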
Figure 3. Overview of the model training method. Generator models were trained on MUSE data, and the optimized model was used to generate 12-lead ECG signals for training. Additionally, the ResNet model was used for classification.
Figure 4. Confusion matrices for the real ECG: (a) 12-lead ECG signal results and (b) classification results using lead I.
Figure 5. Confusion matrices for the generated lead groups. In (b), only lead II was generated using the GAN and the remaining limb leads were calculated using the Einthoven formula; in (c), only lead III was generated in this way; in (d), leads II and III were generated using the GAN and the remaining limb leads were calculated using the Einthoven formula.
Table 1. Evaluation of the performance scores of the generated ECG signals. Significant values are in bold.
Table 2. Evaluation of the performance scores of the real ECG signals.
Table 3. Comparison with previous ECG generation model studies.
"Medicine",
"Engineering",
"Computer Science"
] |
The gluon and charm content of the deuteron
We evaluate the frame-independent gluon and charm parton-distribution functions (PDFs) of the deuteron utilizing light-front quantization and the impulse approximation. We use a nuclear wave function obtained from solving the nonrelativistic Schroedinger equation with the realistic Argonne v18 nuclear force, which we fold with the proton PDF. The predicted gluon distribution in the deuteron (per nucleon) is a few percent smaller than that of the proton in the domain x_{bj} = Q^2 / (2 p_N \cdot q) \sim 0.4, whereas it is strongly enhanced for x_{bj} larger than 0.6. We discuss the applicability of our analysis and comment on how to extend it to the kinematic limit x_{bj} \to 2. We also analyze the charm distribution of the deuteron within the same approach by considering both the perturbatively and non-perturbatively generated (intrinsic) charm contributions. In particular, we note that the intrinsic-charm content in the deuteron will be enhanced due to 6-quark "hidden-color" QCD configurations.
Introduction
A primary challenge in nuclear physics is to study the structure and dynamics of nuclei from first principles in terms of the fundamental quark and gluon degrees of freedom of quantum chromodynamics (QCD). The conventional description of nuclear many-body systems, where nucleons are treated as elementary particles with phenomenological potentials, can be justified in the nonrelativistic domain [1][2][3][4][5][6]. However, in the short-distance, high-momentum-transfer region, quark and gluon fields play an essential role in describing nuclear systems, and non-nucleonic phenomena, such as QCD "hidden-color" degrees of freedom [7][8][9][10], become relevant. For example, the six-quark Fock state of the deuteron has five different SU(3) color-singlet contributions, only one of which projects onto the standard proton and neutron three-quark clusters. The leading-twist shadowing [11][12][13][14][15] of nuclear parton distributions at small x_bj in the Gribov-Glauber theory is due to the destructive interference of two-step and one-step amplitudes, where the two-step amplitude depends on diffractive deep inelastic scattering (DDIS) ℓN → ℓ′N′X, leaving the struck nucleon intact. The study of the quark and gluon structure of nuclei thus illuminates the intersection between nuclear and particle physics.
The quark and gluon distributions of nuclei also play an important role in high-energy astrophysics [16,17], and an accurate knowledge of nuclear parton distributions is essential in many fields of physics [18]. For example, the gluonic content of light nuclei is important for understanding the production of antiprotons in interstellar reactions. The charm-quark distribution in nuclei at high x_bj can significantly change the predictions of the spectrum of cosmic neutrinos and is thus important for interpreting the background of ultra-high-energy neutrinos which contribute to the IceCube experimental data [19,20] in the high-x_F domain [21][22][23]. Furthermore, the parton-distribution function (PDF) of nuclei is the initial condition controlling the dynamics of the possible formation and thermalization of the quark-gluon plasma (see e.g. [24]).
Collider experiments typically probe proton and nuclear PDFs in the region of small x_bj = Q²/(2p_N·q) (see [25][26][27][28] for recent works showing the relevance of LHC heavy-flavor data for determining the gluon content of nuclei at small x_bj). In contrast, fixed-target experiments can unveil the PDF over the full range of x_bj up to unity by taking advantage of the asymmetry of the experimental apparatus and the kinematics. New fixed-target experiments using the beams of the LHC are currently being investigated (see the works of the AFTER@LHC study group [29][30][31][32][33]) following the very positive outcome of the data taking of the SMOG@LHCb system [34,35]. In fixed-target experiments, one also has the advantage that the parton distributions of a large variety of nuclei, both polarized and unpolarized, can be measured. It is thus an important theoretical task to predict the gluon and heavy-quark distributions of nuclei.
We will focus on the deuteron, which is the simplest many-nucleon system and can thus be evaluated with high accuracy in nuclear physics. It is therefore an excellent system in which nuclear effects [7,9] can be studied. In addition, a careful study of the structure of the deuteron may provide accurate information on the quark and gluon structure of the neutron [61][62][63]; in particular, the gluon PDF of the neutron is of interest. The PDF of the deuteron near the maximal fraction x_bj = 2 (we use this definition in this work) can be constrained by perturbative QCD, since it is the dual of the deuteron form factor at high momentum transfer Q² [64,65]. In this work, we will mostly be interested in the region of x_bj ∼ 1, a domain which AFTER@LHC can access.
As a first study, we have calculated the gluon PDF in the deuteron within the impulse approximation, which gives the leading contribution at x_bj < 1. To do so, we have solved the Schrödinger equation of the two-nucleon system with a phenomenological nuclear potential [1] using the Gaussian expansion method [66]. We have then derived the boost-invariant light-front wave function [67,68] of the nucleus and convoluted it with the gluon distribution of the nucleon in order to obtain the gluon distribution of the deuteron. The complications of boosting an instant-form nucleon wave function to nonzero momentum are discussed in Ref. [69]. This paper is organized as follows. In the next section, we calculate the gluon PDF of the deuteron through the procedure mentioned above. In Section 3, we discuss the applicability of the impulse approximation and show our results. We also extend our discussion to the intrinsic heavy-quark contribution to the deuteron charm-quark distribution (Section 3.2). A summary is presented in the final section.
Deuteron wave function
Let us now explain how we convolute the gluon PDF of the nucleon with the deuteron wave function in the impulse approximation [see Fig. 1(a)]. The impulse approximation is the leading contribution in chiral effective field theory (χEFT) [48,58]; we will show later that the two-nucleon contribution [Fig. 1(b)] is subleading in the nucleon velocity expansion. These arguments lead us to consider a nonrelativistic framework. We first calculate the wave function of the deuteron, given by the bound-state solution of the nonrelativistic two-nucleon Schrödinger equation with the Argonne v18 potential [1] as the nuclear force. To solve the equation, we use the Gaussian expansion method [66], where an accurate solution is provided as a superposition of Gaussians with a geometric series of ranges. The Gaussian basis is given by

φ_nlm(r) = N_nl r^l exp(−ν_n r²) Y_lm(r̂),

where N_nl is the normalization constant of the Gaussian basis, r̂ the unit vector of the relative coordinate r, and ν_n = 1/r_n² with r_n = r_1 a^{n−1} (n = 1, ..., n_max). We have taken n_max = 12 Gaussians with r_1 = 0.1 fm and the common ratio a chosen such that r_12 = 10 fm. Note that the nuclear force has a strong tensor component which may change the orbital angular momentum by two units, so both the S-wave and D-wave states are relevant, and the deuteron state is given by a superposition of S-wave and D-wave components expanded on this basis. To solve the Schrödinger equation, we diagonalize the Hamiltonian matrix together with the norm matrix, which involves the overlaps between the Gaussian basis functions; this is a generalized eigenvalue problem (for details, see Section 2.1 of Ref. [66]). By diagonalizing the Hamiltonian, we obtain the wave function shown in Fig. 2, which has a dominant S-wave component and a D-wave component representing 6% of the total probability. Since the wave function is given as a superposition of Gaussians, further transformations can be performed analytically. We then Fourier transform the wave function and project it onto the z-axis, which yields the wave function ψ(p_z) of the unpolarized deuteron expressed in terms of the longitudinal momentum p_z. The corresponding probability distribution is shown in Fig. 3. The distribution of the nucleon momentum is centered at p_z = 0, and the standard deviation is close to 50 MeV. This reflects the kinetic energy of the nucleon (about 20 MeV), which is the bound-state effect of the nuclear force. Figure 3 also displays the contribution from the S-wave, which is nearly identical to the total result. For comparison, Fig. 3 also shows the momentum distribution of the nucleon inside a typical heavy nucleus with Fermi energy ε_F = 33 MeV (labeled "Heavy nucleus"), whose smearing is given by the parametrization of Ref. [43] with γ_F = p_F²/5. One sees that the momentum distribution of the deuteron is narrower than that of a typical heavy nucleus.
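For illustration, the geometric progression of Gaussian ranges used in the expansion can be set up as follows (a minimal sketch of the range parameters only, not of the full variational calculation):

```python
import numpy as np

def gaussian_ranges(r1=0.1, rmax=10.0, nmax=12):
    """Geometric progression of Gaussian ranges r_n = r1 * a**(n-1) (fm),
    with the common ratio a fixed by r_nmax = rmax, and the corresponding
    size parameters nu_n = 1 / r_n**2 (fm^-2)."""
    a = (rmax / r1) ** (1.0 / (nmax - 1))
    r_n = r1 * a ** np.arange(nmax)
    return r_n, 1.0 / r_n**2

r_n, nu_n = gaussian_ranges()
print(r_n[0], r_n[-1])   # 0.1 fm ... 10.0 fm
```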
Light-front momentum fraction
We now calculate the light-front momentum distribution of the nucleon in the deuteron. Note that the procedure to obtain a light-front wave function from the instant-form one is not unique. In this work, we follow the recipe of Ref. [70] (see also [43,[71][72][73]]) to construct the wave function in the light-front frame. The momentum fraction of the nucleon in the deuteron, z, is defined in the interval 0 ≤ z ≤ 2. It can consistently be derived from z = A p_N^+/p_A^+, where p_N^+ and p_A^+ are the light-front momenta of the nucleon and of the nucleus, respectively, and A is the nucleon number of the nucleus (A = 2 for the deuteron), so that z ≤ A. The masses of the nucleon and of the nucleus are labeled m_N and m_A, respectively. A first estimate is obtained by nonrelativistically reducing the nuclear binding effect; this can however be improved by taking into account the shift of the energy of the moving nucleon inside the deuteron, which yields the momentum fraction as a function of the longitudinal momentum p_z (we still neglect p_⊥). Solving this relation for p_z gives the nucleon longitudinal momentum inside the deuteron as a function of z. We think this manipulation is more suitable for light-front dynamics than the approximation used in Ref. [43]. We then apply this variable change to the previously obtained z-axis momentum distribution P(p_z) ≡ |ψ(p_z)|², i.e. N_{N/A}(z) = P(p_z(z)) |dp_z/dz|, which agrees with the recipe of Ref. [70]. This yields the light-front distribution plotted in Fig. 4, where one sees that the momentum fraction of the nucleon is broader in the deuteron than in a typical heavy nucleus, as expected from the importance of the Fermi motion.
Gluon distribution
Now that we have the light-front distribution of the nucleon in the deuteron, we can derive the gluon PDF in the deuteron using the impulse approximation, by folding the gluon PDF of the nucleon [74][75][76][77][78][79][80][81] with N_{N/A}(z). Since we are interested in the high-x behavior of the gluon PDF, we need a gluon PDF that is well behaved up to x = 1; for this reason, we use GRV98 [82].
The gluon PDF in the deuteron is obtained by folding the gluon PDF of the proton G_p(x) with the light-front distribution of the nucleon inside the deuteron N_{N/A}(z):

G_d(x, μ_F) = 2 ∫ dz N_{N/A}(z) G_p(x/z, μ_F),   (11)

where μ_F is the factorization scale. We note that the effect of the scale evolution is contained in G_p(x/z, μ_F). This operation corresponds to the contribution depicted in the diagram of Fig. 1(a). In our computation, we of course assume that the proton and the neutron have the same gluon PDF, hence the factor of two in Eq. (11).
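A minimal numerical sketch of this folding is given below. The Gaussian N_{N/A}(z) and the toy proton gluon PDF are placeholders, not the Argonne v18 and GRV98 distributions used in this work, and the exact form of Eq. (11) (isoscalar factor of two, integration limits) is our reading of the text.

```python
import numpy as np
from scipy.integrate import quad

def gluon_pdf_deuteron(x, N_of_z, gluon_pdf_proton, A=2.0):
    """Impulse-approximation folding, G_d(x) = 2 * int_x^A dz N_{N/A}(z) G_p(x/z),
    with N_of_z the light-front nucleon distribution (normalised to one per
    nucleon) and gluon_pdf_proton a callable G_p(x) at fixed mu_F."""
    integrand = lambda z: N_of_z(z) * gluon_pdf_proton(x / z)
    value, _ = quad(integrand, max(x, 1e-6), A, points=[1.0])
    return 2.0 * value

# Placeholder inputs for illustration only:
N_of_z = lambda z: np.exp(-0.5 * ((z - 1.0) / 0.04) ** 2) / (0.04 * np.sqrt(2 * np.pi))
G_p = lambda x: 3.0 * (1.0 - x) ** 5 / x if 0.0 < x < 1.0 else 0.0

print(gluon_pdf_deuteron(0.4, N_of_z, G_p))
```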
Domain of applicability
Before presenting our results, let us discuss the domain of applicability of our calculation. Indeed, we assumed that the nucleon inside the deuteron is not modified from the on-shell one. The nucleons in the deuteron can be considered almost on-shell when the invariant mass of the nucleon pair M_pn has a virtuality that is small compared to the binding of the deuteron (here m_d and ε_d denote the deuteron mass and binding energy, respectively). This virtuality condition can be converted into a constraint on the nucleon velocity v = p_z/m_N by using Eq. (9) [or Eq. (7)]. This gives v < 0.004, which is obviously nonrelativistic. From this inequality, we can then derive the corresponding region of the momentum fraction of the gluon in the deuteron by computing the average z as a function of x. This yields a conservative limit, 0 < x < 0.7, outside which the off-shell corrections may be relevant.
Let us now inspect what such off-shell effects may be, starting with the two-nucleon effects [see Fig. 1(b)]. The nth moment of the PDF can indeed be expanded in terms of the velocity of the nucleus v_A as [48]

⟨x^n⟩_{g|A} = v_{A,μ_0} ··· v_{A,μ_n} ⟨A| O_g^{μ_0···μ_n} |A⟩,

where O_g^{μ_0···μ_n} is the gluon density operator. We note that the nuclear velocity is equal to the nucleon velocity v, up to small corrections due to the nuclear binding. On the other hand, ⟨x^n⟩_{g|A} can be expressed in terms of nonrelativistic nucleon operators, schematically

⟨x^n⟩_{g|A} = ⟨x^n⟩_g [ ⟨A| N†N |A⟩ + ⟨A| α_n (N†N)² |A⟩ + ... ],

where ⟨x^n⟩_g is the nth moment of the gluon PDF of the nucleon. The first term, A, is the nucleon number, obtained from the one-nucleon operator ⟨A| N†N |A⟩ = A. The nuclear matrix element ⟨A| α_n (N†N)² |A⟩ provides the nuclear modification effect, and depends on the renormalization scale but not on the momentum fraction. The coefficient α_n is proportional to the nth moment of the nuclear modification of the PDF, i.e. the residual piece of the nuclear PDF after subtracting the gluon PDF of free nucleons.
The zeroth moment α_0 vanishes due to charge conservation, and the first moment α_1 is known from experiment to be small [83]. At the hadron level, the leading off-shell correction is the pion exchange current [48,84,85], but these contributions are N³LO in χEFT and thus small. This means that the nuclear modification effect is expected to be small in the nonrelativistic regime. The first off-shell effect therefore starts at order v², which means that the constraint discussed above, v < 0.004, is probably too conservative.
Let us now see the range of velocities in which our framework holds. In Fig. 5, we plot the averaged squared velocity of the nucleon, ⟨v²⟩, as a function of the gluon momentum fraction x (computed with GRV98 at μ_F = 1 GeV). We of course exclude the region v² > 1, which is unphysical. We note that ⟨v²⟩ is still small at x = 1.1, ⟨v²⟩ ≈ 0.3, and we therefore consider the domain of applicability of our framework to be 0 < x < 1.1, where the off-shell effects are likely small. According to the above discussion, we show the result of our calculation of the gluon PDF in the deuteron up to x ≃ 1.1 in Fig. 6. The gluon PDF of the deuteron G_d(x, μ_F) shows a monotonic decrease. In the region 0 < x < 0.6, G_d(x, μ_F) ≈ 2G_p(x, μ_F) within 5%, as expected. It is also notable that the ratio G_d/G_p is larger than unity for 0 < x < 0.2, and that it shows a minimum near x = 0.4. Above x ∼ 0.6, the ratio G_d/G_p grows rapidly due to the falloff of the PDF of the proton. This is due to the Fermi motion, which pushes the momentum of the nucleon in the deuteron to the high-momentum region, in a similar way as for the quark PDF.
Charm distribution of the deuteron
Another interesting point to discuss is the charm-quark distribution, which can be analyzed in the same way as that of the gluon. The charm-quark distribution of the deuteron can equally be calculated in the domain of applicability of our framework discussed in Sec. 3.1 (0 < x < 1.1).
The charm quarks in a nucleon are virtually created by gluon splitting (see Fig. 7) at leading order. The distribution of the charm quarks generated by this subprocess inherits the gluon distribution and decreases monotonically in x. We have calculated this contribution by using the charm PDF of CTEQ-JLAB 15 [86], which we fold with the N_{N/A}(z) discussed in Section 2.3. The result of our calculation is shown in Fig. 8. The behavior of the charm PDF of the deuteron due to gluon splitting is similar to that of the gluon. The ratio of the charm PDF of the deuteron (per nucleon) to that of the proton is unity within 5% for x < 0.4, and it deviates from unity for x > 0.4 due to Fermi motion, as expected from the impulse approximation. The distribution of charm quarks in the nucleon, however, receives additional non-perturbative contributions from charm quark-antiquark pair creation multi-connected by two or more gluons coupling to different valence quarks (see Fig. 9). This intrinsic-charm contribution, although suppressed since it is higher order in α_s, is favored by a higher probability due to the sharing of momenta among different valence quarks, in contrast to the gluon-splitting contribution, where the charm and anticharm quarks couple to a single valence quark. In the limit of heavy quarks Q, the intrinsic heavy-quark distribution in a hadron is suppressed as m_Q^{-2}, as can be derived from the operator product expansion [87][88][89]. A model for the charm distribution in the nucleon based on kinematical constraints is given in Refs. [90,91], with the normalization N phenomenologically determined to be N ∼ 0.01 [91]. This distribution peaks at x ∼ 0. We plot in Fig. 8 the intrinsic-charm distribution of the deuteron calculated in our framework. As in the case of gluon splitting, the Fermi motion alters the ratio of the deuteron PDF (per nucleon) to that of the proton from unity for x > 0.6. We also observe that this ratio, although consistent with unity within 5%, varies more than that of the gluon PDFs in the region 0 < x < 0.6.
We can also derive an intrinsic-charm distribution of the deuteron by considering a six-valence-parton configuration (see Fig. 10). It can be calculated by rescaling the endpoint of Eq. (15) from x = 1 to x = 2. The normalization of the intrinsic-charm contribution to the deuteron is currently not known (we plot it in Fig. 8 with N = 10⁻⁴). There are, however, arguments suggesting that this contribution is sizable. Indeed, besides the momentum-fraction sharing by several valence particles, which enhances the intrinsic-charm content at high x, there is another enhancement from combinatoric factors in the deuteron case. For gluon splitting, we obviously have a factor of 6, whereas for the intrinsic charm generated by the radiation of two gluons from two distinct quarks, we have a factor of 15 [see Fig. 10(a)]. The enhancement may be even larger for the intrinsic charm created by three-gluon emission, although it is even higher order in α_s, since we have a combinatoric factor of 20 [see Fig. 10(b)]. Note that this combinatoric enhancement is absent in the case of the nucleon. The intrinsic-charm contribution generated off three-gluon emission may also be kinematically more advantageous than the two-gluon case, since the momenta of the valence quarks can stay closer to the valence configuration after the gluon radiation. It would thus be interesting to perform measurements sensitive to the charm content of the deuteron at x ∼ 1. Fixed-target experiments at the LHC with the LHCb or ALICE detector provide an ideal setup for such measurements.
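For orientation, the sketch below evaluates a BHPS-type intrinsic-charm shape for the nucleon and a naive rescaling of its endpoint from x = 1 to x = 2 for the deuteron. The functional form and the rescaling prescription are assumptions on our part (Eq. (15) is not reproduced here); only the rough normalizations N ∼ 0.01 and N = 10⁻⁴ are taken from the text.

```python
import numpy as np

def intrinsic_charm_bhps(x, N=0.01):
    """Sketch of a BHPS-type intrinsic-charm distribution in the nucleon
    (kinematical five-quark model of Refs. [90,91]).  The shape below is
    the commonly quoted BHPS form and is an assumption; the curve is
    normalised numerically so that its integral equals N."""
    shape = lambda y: y**2 * ((1 - y) * (1 + 10 * y + y**2)
                              + 6 * y * (1 + y) * np.log(y))
    grid = np.linspace(1e-6, 1 - 1e-6, 4000)
    norm = np.sum(shape(grid)) * (grid[1] - grid[0])
    x = np.atleast_1d(np.asarray(x, dtype=float))
    inside = (x > 0) & (x < 1)
    out = np.zeros_like(x)
    out[inside] = shape(x[inside])
    return N * out / norm

def intrinsic_charm_deuteron(x, N=1e-4):
    """Naive six-valence-parton analogue obtained by stretching the endpoint
    from x = 1 to x = 2 (one possible reading of the rescaling described in
    the text), keeping the integrated probability equal to N."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return 0.5 * intrinsic_charm_bhps(x / 2.0, N)

x = np.linspace(0.0, 2.0, 201)
c_d = intrinsic_charm_deuteron(x)
```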
Summary
In this work, we have calculated the gluon and charm PDFs of the deuteron in light-front quantization. We used the impulse approximation, where the input nuclear wave function is obtained by solving the nonrelativistic Schrödinger equation with the phenomenological Argonne v18 nuclear potential. Although we only analyzed the nonrelativistic regime, the range of applicability of our computation is estimated to extend up to x ∼ 1.1.
We have found that the gluon and charm PDFs of the deuteron (per nucleon) at low x only differ by a few percent from those of the proton, as expected for nonrelativistic nucleons in the nucleus. However, as x approaches unity, their distributions deviate significantly from those of the nucleon due to Fermi motion. This should be taken into account when extracting the gluon PDF of the neutron from this system.
We also discussed the charm PDF of the deuteron, which is potentially very interesting at x ∼ 1 due to the intrinsic-charm contribution. The intrinsic charm of the deuteron is enhanced by the combinatoric factors characteristic of gluon emission and by the sharing of the momentum among valence partons, although the overall normalization is somewhat uncertain. We expect the charm distribution in the deuteron to be studied in the region 0 < x < 1.1 by future experiments, particularly future fixed-target experiments using the LHC beams, in order to determine the normalization of the intrinsic-charm and hidden-color states.
In the limit of high momentum scale Q² → ∞ for exclusive scattering, other structures with the same quantum numbers as the |NN⟩ state, such as ∆∆ states or hidden-color configurations [7][8][9][10], in which the quarks are not arranged into two color-singlet baryons, become relevant Fock states. Indeed, in the short-distance limit, 80% of the deuteron is composed of hidden-color states. This state should be continuously related to the almost maximal |NN⟩ state at low resolution via the renormalization group equation. The composition at intermediate momentum scales also involves higher Fock states with a valence gluon [92], such as |uuudddg⟩. We note that the composition at intermediate distances can only be calculated if the normalization of the Fock state at some scale is known, as is the case for the renormalization group equation analysis. For now, the implications of these states for inclusive reactions at finite x (away from 2 in the deuteron case), and thus for the PDFs, remain to be studied and are beyond the scope of our exploratory study.
At the endpoint (x ∼ 2), where a single gluon carries almost the entire momentum of the deuteron, the behavior of the gluon PDF is however related to the form factor of the system at short distances [36,93,94] and is known analytically: the counting rules predict G_d(x) ∝ (2 − x)^{11} [36,68,94,95]. Since the partons are maximally virtual in this limit, the deuteron has to be expressed in terms of quarks and gluons, and this regime therefore cannot be described within our framework. Extending our nonrelativistic results to this limiting case is also left for future work, especially since it seems difficult to access experimentally in the near future.
Our framework could be extended to the gluon and charm PDFs of heavier nuclei, such as 4He, which is one of the main ingredients of the interstellar matter, and 14N and 16O, which are the main components of the atmosphere. Such analyses would be important to reduce the theoretical uncertainty on the cross sections of reactions between primary cosmic rays and the interstellar matter, as well as to predict the ultra-high-energy neutrino background in terrestrial experiments such as IceCube [19][20][21][22][23]. A better knowledge of the gluon PDFs of light nuclei, e.g. 3He and 4He, is therefore crucial for high-energy astrophysics, and they could be measured in the near future in LHC fixed-target experiments.
"Physics"
] |
Complex fault system revealed by 3-D seismic reflection data with deep learning and fault network analysis
Understanding where normal faults are located is critical for an accurate assessment of seismic hazard; the successful exploration for, and production of, natural (including low-carbon) resources; and the safe subsurface storage of CO2. Our current knowledge of normal fault systems is largely derived from seismic reflection data imaging intra-continental rifts and continental margins. However, exploitation of these data sets is limited by interpretation biases, data coverage and resolution, restricting our understanding of fault systems. Applying supervised deep learning to one of the largest offshore 3-D seismic reflection data sets from the northern North Sea allows us to image the complexity of the rift-related fault system. The derived fault score volume allows us to extract almost 8000 individual normal faults of different geometries, which together form an intricate network characterised by a multitude of splays, junctions and intersections. Combining tools from deep learning, computer vision and network analysis allows us to map and analyse the fault system in great detail and in a fraction of the time required by conventional seismic interpretation methods. As such, this study shows how we can efficiently identify and analyse complex fault systems in large 3-D seismic reflection data sets.
Introduction
Understanding the geometry and growth of normal fault systems is critical when assessing seismic hazard, when identifying suitable sites for subsurface CO2 storage and when exploring for natural resources (traditional and low-carbon). For example, whereas probabilistic seismic hazard analyses based on seismic event catalogues are extremely useful when trying to forecast earthquake likelihood and location, high-resolution fault mapping, preferably in 3-D, can help us constrain the slip tendency of faults where seismic catalogues are discontinuous and/or incomplete (e.g. Morris et al., 1996; Moeck et al., 2009; Yukutake et al., 2015). Moreover, faults can facilitate (or impede) fluid and gas migration to the Earth's surface; thus determining their geometry and connectivity, as well as their hydraulic properties, is key for assessing their role in the long-term subsurface storage of CO2 (Bissell et al., 2011; Kampman et al., 2014). In both of these examples, we need robust predictions of 3-D fault geometries over large areas and across a wide range of scales (tens of metres to hundreds of kilometres).
Accurately mapping fault systems in 2-D and 3-D seismic reflection data typically requires expertise and time (e.g. Bond, 2015). While we can map fault systems in great detail over small areas using 3-D seismic reflection data (e.g. Lohr et al., 2008; Wrona et al., 2017; Claringbould et al., 2020), we lack an understanding of the character of 3-D fault populations at the scale of entire rift systems, as regional studies are often limited to sparse 2-D seismic sections (e.g. Clerc et al., 2015; Fazlikhani et al., 2017; Phillips et al., 2019). Three-dimensional numerical models are now capable of simulating fault networks at the rift scale; however, there are few observational data sets of the same scale to test the predictions of these models and, therefore, help refine them (e.g. Naliboff et al., 2020; Pan et al., 2021).
Supervised deep learning allows us to map faults in seismic reflection data (e.g. Wu et al., 2019; Mosser et al., 2020; Wrona et al., 2021b), but up until now, many studies have laid the foundation by focusing on detecting faults rather than studying their geometries. In this study, by applying supervised deep learning to newly acquired broadband 3-D seismic reflection data imaging much of the northern North Sea rift (161 km wide E-W, 266 km long N-S, 0-20 km deep), we map the fault network associated with a continental rift basin at an unprecedented level of detail. Using manually labelled data (< 0.1 % of the data volume), we train a deep convolutional neural network (U-Net) to predict faults in our data set. The predicted score ranges from 0 (no fault) to 1 (fault). Based on this score, which is available across the entire 3-D seismic volume, we employ a second workflow to extract the normal fault system as a network (a set of nodes and edges), allowing us to investigate the architecture and growth of this extremely complex system consisting of thousands of intersecting faults.
Geological setting
The study area is located in the northern North Sea (Fig. 1), where the continental crust consists of 10-30 km thick crystalline basement overlain by as much as 12 km of sedimentary strata deposited during, after and possibly even before periods of rifting in the late Permian-Early Triassic (rift phase 1) and Middle Jurassic-Early Cretaceous (rift phase 2) (e.g. Whipp et al., 2014; Bell et al., 2014; Maystrenko et al., 2017). The extension direction of these two phases has long been debated. Whereas most studies agree that the late Permian-Early Triassic rifting was driven by E-W extension (Faerseth et al., 1997; Torsvik et al., 1997), Middle Jurassic-Early Cretaceous rifting has been associated with both E-W (e.g. Bartholomew et al., 1993; Brun and Tron, 1993) and NW-SE extension (e.g. Faerseth, 1996; Doré et al., 1997; Faerseth et al., 1997) (Fig. 1b). This debate is further complicated by the fact that some of the largest normal faults on the Horda Platform developed during rift phase 1 but were subsequently reactivated during rift phase 2 (e.g. Whipp et al., 2014; Bell et al., 2014). The crystalline basement underlying the sedimentary strata formed by terrane accretion during the Sveconorwegian (1140-900 Ma) and Caledonian (460-400 Ma) orogenies (Bingen et al., 2008). Several studies argue that this structural template, in particular the ductile shear zones, controlled the location, strike and overall pattern of rift-related faulting in the overlying sedimentary successions, with the shear zones being reactivated as normal faults and limiting the along-strike propagation of faults (e.g. Fazlikhani et al., 2017; Phillips et al., 2019; Osagiede et al., 2020; Wiest et al., 2020).
3 Data and methods
3-D seismic reflection data
In this study, we use one of the largest offshore 3-D seismic data sets ever acquired, which images a large part of the northern North Sea rift across an area of 35 410 km² with excellent depth imaging down to 22 km (i.e. the middle to lower crust) (Figs. 1, 2a and 3). The data set was acquired using eight streamers that were up to 8 km long and were towed ∼ 40 m below the water's surface. The BroadSeis technology used for recording covers a wide range of frequencies (2.5-155 Hz), providing high-resolution depth imaging. The data were binned at 12.5 × 18.75 m, with a vertical sample rate of 4 ms, and were 3-D true-amplitude prestack depth migrated. The seismic volume was zero-phase processed with SEG (Society of Exploration Geophysicists) normal polarity; i.e. a positive reflection (white) corresponds to an acoustic impedance (density × velocity) increase with depth. More details on data acquisition and pre-processing are provided by Wrona et al. (2019, 2021b).
Deep learning
Deep learning describes a set of algorithms and models which learn to perform a specific task (e.g. fault interpretation) on a given data set without requiring explicit feature engineering (e.g. the calculation and calibration of seismic attributes, such as coherence or variance). Deep learning allows the derivation of a fault score volume that highlights normal faults within the entire 3-D seismic volume. This approach requires a large number of examples of faults and unfaulted strata to be labelled in the training seismic data. We extract 80 000 such examples (2-D squares of 128 × 128 pixels) from 22 interpreted seismic sections oriented perpendicular to the N-S-trending rift (Figs. 1a and 2); note that these seismic sections constitute < 0.1 % of the entire 3-D seismic volume. Next, we split these examples into three groups: one set for training (80 %), one set for validation (10 %) and one set for testing (10 %). We use the first of these groups to train a deep convolutional neural network (U-Net) designed to perform image segmentation tasks (Ronneberger et al., 2015). Using the validation set, we track the accuracy and loss of the model during training and stop once the validation loss no longer decreases, resulting in a final binary accuracy of 0.83 and an F1 score of 0.76 (see Wrona et al., 2021b). Finally, we apply the model to the entire 3-D seismic volume to derive a fault score volume (Figs. 3 and 4), an attribute that ranges from 0 (no fault) to 1 (fault). All details of the workflow and the code are provided by Wrona et al. (2021a).
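The patch-splitting step can be sketched as follows (illustrative only; the array sizes are placeholders rather than the 80 000 patches used in the study):

```python
import numpy as np

def split_patches(patches, masks, seed=0):
    """Randomly split labelled patches into training (80 %),
    validation (10 %) and test (10 %) sets, as described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    n_train, n_val = int(0.8 * len(idx)), int(0.1 * len(idx))
    split = {"train": idx[:n_train],
             "val": idx[n_train:n_train + n_val],
             "test": idx[n_train + n_val:]}
    return {name: (patches[i], masks[i]) for name, i in split.items()}

# Placeholder arrays standing in for labelled 128 x 128 patches
# (seismic amplitudes) and their binary fault masks.
patches = np.zeros((800, 128, 128), dtype=np.float32)
masks = np.zeros((800, 128, 128), dtype=np.uint8)
sets = split_patches(patches, masks)
print({k: v[0].shape[0] for k, v in sets.items()})  # {'train': 640, 'val': 80, 'test': 80}
```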
Automated fault network extraction and analysis
Extracting a fault network from the 3-D volume allows us to perform a comprehensive geometric analysis of the fault system using our fault analysis toolbox, fatbox (Wrona et al., 2022). The basic idea is to describe a fault system in 2-D as a network (or graph), i.e. sets of nodes and edges (Fig. 5).
Each node marks a location along the fault and each edge connects two nodes.All nodes connected to one another by edges are labelled as a (connected) component.
Our fault extraction workflow consists of the following nine steps: (1) extracting a horizon, (2) Gaussian blur filtering, (3) thresholding, (4) cleaning, (5) skeletonisation, (6) connecting components, (7) adding nodes to the graph, (8) adding edges to the graph and (9) splitting junctions. Applying it to our North Sea target region, we first attempt to capture as many faults as possible by extracting the fault score along a horizon 500 m below the Base Cretaceous Unconformity (BCU) (Fig. 1c). Here, we observe a large number of faults, which either formed in the second rift phase or formed in the first rift phase and were reactivated in the second rift phase (Figs. 4 and 6a). Second, we apply a Gaussian blur filter to increase lateral fault continuity (Fig. 6b), which allows us to extract long, geologically plausible faults; using a small filter (σ = 2) results in local smoothing without connecting distant faults with one another. Third, we apply a threshold of 0.35 to separate the faults from the background in the fault score (Fig. 6c). This threshold is a trade-off, which balances capturing as many faults as possible (lower values) against identifying only clearly resolvable faults (higher values). Fourth, we clean the resulting image by removing areas smaller than 25 pixels (Fig. 6d). Fifth, we collapse these faults to 1-pixel-wide lines using skeletonisation (Guo and Hall, 1992) (Fig. 6e). Sixth, we label adjacent pixels in the image as connected components (Wu et al., 2009) (Fig. 6f); each component consists of pixels which are connected to each other and represents a fault in our network. At this point, we can build our graph from the connected components of the image (Fig. 6f): each pixel that belongs to a component becomes a node, and edges are created between neighbouring nodes (Fig. 6g-i). This process results in a number of faults with splays, junctions or intersections being grouped into one connected component (Fig. 7a). To correct this, we split up junctions (nodes with three edges) based on the similarity of strike, i.e. the aligned branches remain connected (Fig. 7b and c). This final network is compared with the base late Jurassic horizon (Fig. 8). Additionally, we perform the exact same workflow on 10 slices through the fault score volume (1-10 km depth) to capture 3-D fault geometries with depth (Fig. 9). A sketch of the core extraction steps is given below.

After extracting the fault system, we calculate a series of typical fault properties using our fault analysis toolbox, fatbox (Wrona et al., 2022) (Fig. 10). First, we calculate the fault length as the sum of the edge lengths of each component (Fig. 10b). Second, we calculate the strike along the fault from neighbouring nodes (Fig. 10c). If we were to calculate the overall fault strike, we would overlook along-strike variations in the strike; if we were to calculate the strike as the orientation of each edge, we would only obtain values of 0, 45 or 90°, because the nodes are closely spaced. Instead, we calculate the strike from the third degree of neighbouring nodes (i.e. neighbours of neighbours of neighbours). This selection ensures a robust, high-resolution fault strike calculation. Combining the fault length and strike, we can generate a length-weighted rose diagram (Fig. 10c). Finally, we calculate the fault density as the fault length per area (Fig. 10d).
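The sketch below (a simplified stand-in for fatbox, using scipy, scikit-image and networkx) illustrates steps 2-8 of the workflow on a single 2-D fault-score slice; the parameter values follow the text, but the implementation details are ours, and junction splitting (step 9) is omitted.

```python
import numpy as np
import networkx as nx
from scipy.ndimage import gaussian_filter
from skimage.morphology import remove_small_objects, skeletonize
from skimage.measure import label

def extract_fault_network(score_slice, sigma=2.0, threshold=0.35, min_size=25):
    """Gaussian smoothing, thresholding, cleaning, skeletonisation,
    connected-component labelling and graph construction for one slice."""
    smoothed = gaussian_filter(score_slice, sigma)              # step 2
    binary = smoothed > threshold                               # step 3
    cleaned = remove_small_objects(binary, min_size=min_size)   # step 4
    skeleton = skeletonize(cleaned)                             # step 5
    components = label(skeleton, connectivity=2)                # step 6

    G = nx.Graph()
    ys, xs = (idx.tolist() for idx in np.nonzero(skeleton))
    for y, x in zip(ys, xs):                                    # step 7: nodes
        G.add_node((y, x), component=int(components[y, x]))
    for y, x in zip(ys, xs):                                    # step 8: edges
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy, dx) != (0, 0) and (y + dy, x + dx) in G:
                    G.add_edge((y, x), (y + dy, x + dx))
    return G

# Toy fault-score slice; in the study this is a horizon or depth slice
# extracted from the 3-D fault score volume.
score = np.random.rand(200, 300).astype(np.float32)
network = extract_fault_network(score)
faults = [network.subgraph(c) for c in nx.connected_components(network)]
# Fault length as the sum of edge lengths (pixel spacing taken as 1 here)
lengths = [sum(np.hypot(a[0] - b[0], a[1] - b[1]) for a, b in f.edges) for f in faults]
```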
Comparison to conventional seismic interpretation
We can ask ourselves, "how good are our results compared to a state-of-the-art fault interpretation from the same data set using conventional fault mapping techniques?" (Fig. 8). Tillmans et al. (2021) map the base late Jurassic horizon (base of the syn-rift sediments associated with rift phase 2) on the eastern flank of the North Viking Graben (see Figs. 1a and 4 for location), using a combination of manual picking and autotracking on the same seismic data set. This horizon is calibrated with 40 exploration wells, which provide direct constraints on the depth of the surface. Tillmans et al. (2021) highlight the fault system by computing the variance attribute (Chopra and Marfurt, 2007) along the horizon (Fig. 8a). On top of the horizon, we plot the fault network that was mapped from the fault score extracted 500 m below the easily mappable Base Cretaceous Unconformity (BCU) (Fig. 8b). This visual comparison shows that, while we miss a few faults in the southwest of the map, we are able to identify and accurately represent most of the faults identified by Tillmans et al. (2021). The missing faults are either overlooked by our model (i.e. false negatives) or result from the difference between the horizons that we compare: Base Cretaceous Unconformity (our study) versus base late Jurassic (Tillmans et al., 2021).
Observations
Our fault extraction allows us to map a complex network consisting of 7983 individual faults across an area approximately 161 km wide and 266 km long, covering 35 410 km² of the northern North Sea rift (Fig. 7c).
Fault length
Faults vary in length by 3 orders of magnitude, from 50 m to 75.9 km, with some of the longest faults (> 30 km) extending from the Stord Basin and Bjørgvin Arch in the south to the Uer and Lomre terraces in the north (Fig. 10b). In cross section, these faults have up to several kilometres of displacement and bound rotated half-graben (e.g. Whipp et al., 2014; Bell et al., 2014) (Fig. 3b and c). While we observe some long (up to 20 km) faults in the Viking Graben and on the Tampen Spur, most faults (> 90 %) are closely spaced (< 5 km) and relatively short (< 10 km long) (Fig. 10b).
Fault strikes
In map view, we observe a complex network consisting of a large number of variably trending faults that display a broad range of intersection styles (e.g. oblique and perpendicular). These faults show a large range of strikes, varying from NW-SE to NE-SW (Figs. 9 and 10c). The length-weighted rose plot shows that most faults strike NW-SE (light blue) or NNE-SSW (light orange), with a large number of faults showing intervening strike directions (Fig. 10c). This general divide occurs between predominantly NW-SE-striking faults along the eastern part of the rift and NE-SW-striking faults in the central and northwestern parts of the rift. The divide becomes most evident when comparing the faults on the Lomre Terrace (NE-SW) with the faults on the adjacent Bjørgvin Arch (NW-SE), at least at the structural level of the Base Cretaceous Unconformity (Fig. 10c).
Fault density
In map view, we observe large variations in fault density 500 m below the BCU (Fig. 10d). While dense networks of intersecting faults result in high-density areas (e.g. the Lomre Terrace and Bjørgvin Arch), we observe low densities in the Viking and Sogn grabens, where faults occur at greater depths (e.g. Fig. 9c).
Vertical continuity
The faults extracted at different depths are variable in their vertical continuity (i.e. fault height; Fig. 9). Whereas some faults, in particular in the Stord Basin, the Tampen Spur and the Magnus Basin, show parallel fault traces from 1 to 10 km depth (Fig. 9a), we also observe a large number of faults that occur only at shallower (1-5 km) or greater depths (6-10 km) (Fig. 9b and c). Upon closer inspection, we observe that the faults which occur continuously from 1-10 km depth (e.g. in the eastern Stord Basin and the Bjørgvin Arch) are typically large-displacement normal faults with tens of kilometres of spacing (e.g. Fig. 3b and c), whereas the other faults, which only occur from 6-10 km depth (e.g. the northwestern Stord Basin), are usually shorter and more closely spaced (a few kilometres) (e.g. Fig. 9c).
Advantages of deep-learning-based fault interpretation
When comparing our results to conventional interpretation methods, we can ask ourselves, "what value does deep learning add?" Here, we highlight the advantages of the supervised deep-learning-based fault interpretation workflow that we present in this study. First, we can predict faults in a seismic section in a fraction of the time (5 s) required by expert interpreters (∼ 10 min). These differences accumulate, in particular, when interpreting such a large data set with > 22 000 inlines. A conventional fault interpretation of such a large data set can take several months, whereas a trained convolutional neural network can identify faults across the entire volume within a day on a single GPU (GeForce GTX 1080 Ti). Note that this comparison does not include the time required to label the training data (∼ 2 d), train the initial model (∼ 4 h), and fine-tune and select the final model (days-months). Second, after faults have been identified in the data, they still need to be mapped before the relevant fault analysis can be performed. Here, we map the fault network using a series of tools from computer vision and network analysis compiled in our fault analysis toolbox, fatbox (Wrona et al., 2022) (Figs. 6 and 7). Our automated workflow extracts the fault network in less than 5 min, compared with the several weeks to months that would have been required to manually map the faults in this large data set. Furthermore, once extracted, we can immediately conduct a number of typical fault analyses using predefined functions implemented in fatbox (Wrona et al., 2022) (e.g. Fig. 10). Third, conventional fault interpretations are often binary (fault vs. no fault), but deep learning delivers a score ranging from 0 (no fault) to 1 (fault). Although this score is not a true fault probability (see the discussion by Mosser and Zabihi Naeini, 2022), the fault score nevertheless correlates with the visibility of faults (i.e. faults that are well resolved by the data are associated with higher fault scores). This correlation allows users to qualitatively select the faults that they want to analyse by using a threshold (as done herein). This selection is particularly useful for assessing the sealing potential of certain layers for CO2 storage and for predicting fluid flow during geothermal exploration. Fourth, seismic interpreters typically focus on the largest faults, whereas our model performs the same prediction across the entire data set, irrespective of the size of the faults encountered. Fifth, given the same data, labels, model and training, our model and results are fully reproducible, which is not the case for conventional fault interpretations, where the interpreter has to make a myriad of decisions in the process of mapping a fault network.
Complex fault system in the northern North Sea
Our study shows how to reveal the complex geometry of normal fault systems in 3-D seismic reflection data using a combination of deep learning and automated fault extraction. We were able to map an intricate network consisting of almost 8000 individual faults that cover an area approximately 161 km wide and 266 km long (e.g. Figs. 4, 6 and 10). This fault network shows large variations in fault length, strike and density, with extremely complex splays, junctions and intersections between these faults. As such, our work goes far beyond the typical seismic interpretations in previous case studies, which covered only a fraction of the rift (e.g. Duffy et al., 2015; Deng et al., 2017; Tillmans et al., 2021), or regional studies that mapped < 100 of the largest faults, using primarily sparse, 2-D seismic sections (e.g. Fig. 1b; Fazlikhani et al., 2017; Phillips et al., 2019).
Uncertainties during fault mapping
While there are several advantages to our approach, it is worth remembering the uncertainties associated with mapping faults in seismic reflection data. First, seismic reflection data can only image faults with displacement above the seismic resolution (and level of noise) of the data set. The seismic resolution of our data set decreases from 15 m (vertical) and 30 m (lateral) around 3 km depth down to 180 m (vertical) around 20 km depth (see Wrona et al., 2019; Tillmans et al., 2021). Second, the labels we use to train our model are derived from 22 interpreted seismic sections, which, like any seismic interpretation, contain the expertise and biases of the interpreter (e.g. Bond et al., 2007; Bond, 2015). Third, our current model has not been trained to distinguish between different fault types (normal, reverse and strike-slip) and is thus unable to do so. We labelled all major faults in the training data, which are predominantly normal faults (probably > 99 %). A handful of these normal faults may show evidence of minor inversion, but they all remain in net extension; i.e. the hanging wall has moved down relative to the footwall. While strike-slip faults are notoriously difficult to resolve in seismic reflection data, as they show little to no vertical offset of reflectors, normal and reverse faults show differing offsets, which neural networks could learn to recognise by correlating reflectors across the fault. Machine learning models could thus be able to distinguish fault types based on their seismic signature in the future. Fourth, the convolutional neural network that we trained achieves an accuracy of 83 %, implying that 17 % of the data are misclassified (see Wrona et al., 2021b). A closer inspection reveals that 36 % are false positives (i.e. predicted faults that do not correspond to labelled faults) and 5 % are false negatives (i.e. faults that were overlooked) (see Wrona et al., 2021b). Despite these limitations, the robustness of our approach is evident when considering along-strike fault continuity across a large number of different seismic lines (Figs. 10 and 11).
Future research on automated fault mapping
Based on our work, we can identify four related areas for future research. First, conventional neural networks predict a fault score from 0 to 1, which seems to correspond to the visibility of the fault in the data set. Bayesian neural networks, on the other hand, allow the prediction of true fault probabilities (e.g. Mosser et al., 2020). Predicting fault probabilities in regional seismic data sets could significantly accelerate the screening for, and risk assessment of, potential CO2 storage sites (see Wrona and Pan, 2021). Second, we currently map faults on seismic in- and crosslines, which may contain redundant information regarding the faults. In the future, it may be advantageous to maximise the diversity of the training set (i.e. different fault types or levels of noise) using uncertainty estimates and active learning. Third, in addition to predicting where faults occur, we can explore the prediction of other fault properties, such as displacement, fault zone permeability or even the time when they were active. This would allow us to study the spatial and temporal evolution of fault systems in high resolution at a regional scale. Fourth, while our fault extraction workflow currently focuses on mapping fault networks in a series of 2-D slices or horizons, freely available methods to generate 3-D fault surfaces that allow for complex fault splays, junctions and intersections are needed, as they could be applied to large 3-D seismic data sets as well as to analogue and numerical models.
Conclusions
This study shows that the combination of deep learning and network analysis applied to 3-D seismic reflection data allows us to map almost 8000 normal faults across the entire northern North Sea rift for the first time. These faults form an intricate network with complex relationships (e.g. splays, junctions and intersections), including large variations in fault length (50 m to 75.9 km) and strike (NW-SE to NE-SW). As such, this work goes far beyond previous seismic studies by providing high-resolution fault maps at a regional scale in a fraction of the time required by conventional interpretation methods.
Figure 1 .
Figure 1.(a) Structural overview map of the northern North Sea basin system (from Tillmans et al., 2021, after Faerseth, 1996).The bright blue rectangle marks the outline of the seismic survey in this study.ESB is the East Shetland Basin, B-S is the Brent-Statfjord fault, G-V is the Gullfaks-Visund fault, MS is the Måløy slope and HP is the Horda Platform.(b) The base rift surface (base Permo-Triassic rifting) timestructure map in the northern North Sea rift (from Fazlikhani et al., 2017) and the geology of southwestern Norway, showing the general onshore and offshore structural configuration in the study area.The bold black lines highlight major rift-related normal faults displacing the base rift surface where all units older than upper Permian are considered basement.The black lines in the background show some of the 2-D seismic reflection surveys used by Fazlikhani et al. (2017).NSDZ, Nordfjord-Sogn Detachment Zone; BASZ, Bergen Arc Shear Zone; WGR, Western Gneiss Region; ØC, Øygarden Complex (gneiss); ØFS, Øygarden Fault System; HSZ, KSZ and SSZ: Hardangerfjord, Karmøy and Stavanger shear zones respectively.(c) Regional interpretation of the structure of the northern North Sea after Faerseth (1996).
Figure 2 .
Figure 2. (a) Example of a seismic section across the northern North Sea.Amplitudes are scaled for machine learning.(b) Example of fault interpretation of the section used to train a deep convolutional neural network for fault prediction.
Figure 3 .
Figure 3. Examples of seismic sections extracted from fault score volume of the 3-D seismic data set.Note that these sections were not part of the training data but are actually 6.25 km away from the closest interpreted seismic section (see Fig.1a).To show the correspondence between seismic data and fault score, we needed to define a cutoff value (0.5) below which the fault score becomes transparent and the seismic data become visible.
Figure 4 .
Figure 4. Surface capturing tectonic faults extracted from fault score volume.The surface was extracted 500 m below the Base Cretaceous Unconformity, where we observe a large number of faults, which were either formed or reactivated in the second rift phase.The white rectangle shows the area used for validation (Fig. 8) and the red rectangle indicates the area where we demonstrate our fault network extraction workflow (Fig. 6).Note that this figure shows a whole range of values of the fault score [0, 1].
Figure 5 .
Figure 5. Schematic illustration of fault network (or graph) with nodes, edges and components.Each node marks a location along the fault.Each edge connects two nodes and each (connected) component indicates all nodes connected to one another by edges.
Figure 6 .
Figure 6.Fault network extraction workflow showing the following: (a) fault score extracted along the surface (500 m below BCU), (b) Gaussian blur filter (σ = 2) of surface, (c) threshold (0.35) of filter, (d) cleaned threshold where small patches are removed, (e) skeleton of cleaned threshold, (f) connected components of skeleton, (g) network nodes based on components, (h) network edges based on components, and (i) network nodes and edges combined.Note that the colours in (f), (g) and (i) indicate connected components (i.e.individual faults), before splitting (see Fig. 6).
Figure 7 .
Figure 7. (a) Fault network extracted from BCU (Fig. 4d).Note the large areas with the same colours resulting from multiple faults being grouped into one connected component.(b) Fault network after removal of noise (i.e.small components).(c) Fault network after splitting junctions that previously connected splaying and intersecting faults.Note that large connected components are split up and individual faults are highlighted by different colours.
Figure 8 .Figure 9 .
Figure 8.Comparison of panel (a) base late Jurassic time-structure map interpreted by Tillmans et al. (2021) and panel (b) automatically extracted fault network 500 m below Base Cretaceous Unconformity, using the same seismic data set.Faults are distinguished by colour.
Figure 10
Figure 10. (a) Structural elements of the northern North Sea Rift (NPD, 2022). (b) Fault lengths (500 m below BCU) on top of structural elements. (c) Fault strikes (500 m below BCU) on top of structural elements with length-weighted rose diagram. (d) Fault density on top of structural elements. Note that fault density was measured as fault length per square area. These squares have an edge length of 3.6 km, a value chosen for visual purposes. https://doi.org/10.5194/se-14-1181-2023, Solid Earth, 14, 1181-1195, 2023 | 6,455.4 | 2023-11-21T00:00:00.000 | [
"Geology",
"Environmental Science",
"Engineering",
"Computer Science"
] |
An expanded optimal control policy for a coupled tanks system with random disturbance
In this paper, an expanded optimal control policy is proposed to study the coupled tanks system, where a random disturbance is added into the system. Since the dynamics of the coupled tanks system can be formulated as a nonlinear system, determination of the optimal water level in the tanks is useful for operation decisions. From this point of view, the coupled tanks system dynamics is usually linearized to give the steady-state operating height. In our approach, a model-based optimal control problem, to which adjusted parameters are added, is considered in order to obtain the true operating height of the real coupled tanks system. During the computation procedure, the differences between the real plant and the model used are measured repeatedly, and the optimal solution of the model used is updated accordingly. On this basis, system optimization and parameter estimation are integrated. At the end of the iterations, the iterative solution approximates the correct optimal solution of the original optimal control problem, in spite of model-reality differences. In conclusion, the efficiency of the proposed approach is demonstrated.
Introduction
A coupled tanks system, which consists of two tanks joined together through pipes in order to store water at an operating height level, is an important subject of study in control engineering and the process industries [1]. Applications of the coupled tanks system are widespread in real-world processes, for example, petrochemical production, waste-water treatment, and purification [2] [3]. Essentially, coupled tanks modelling provides a configurable process control experiment for engineers and researchers, such that a wide array of modelling and control-related laboratory work on liquid level control can be performed in advance [1].
Theoretically, the dynamics of the coupled tanks system is formulated as a system of differential equations. The inflow and the outflow of the coupled tanks system are monitored in simulation, in which the balance between the rate of change of the water height level and the water flow in and out can be measured precisely. In addition, the water flow rate shall be controlled such that a steady-state equilibrium at the operating height level is established. In the literature, many studies on this steady state of the water level have been carried out; see [4] [5] [6] for more detail.
In practice, experimental work on the coupled tanks system, which covers the inflow and the outflow, is affected by disturbances such as inaccuracy of apparatus, man-made error, and unfamiliarity with the equipment, and these can give inappropriate results [7]. For these reasons, the steady state of the operating height level in the coupled tanks system is not easy to determine [8]. Therefore, approximating the operating height level in the coupled tanks system as accurately as possible attracts the interest of engineers and researchers. In this paper, an expanded optimal control policy, which takes into account different structures and parameters [9] [10] [11], is proposed for determining the steady state of the operating height level of a coupled tanks system with random disturbance. In our approach, adjusted parameters are added to the model used. Accordingly, the expanded optimal control model is further defined [12] [13] [14]. In particular, the optimal control policy, which is known as the expanded optimal control policy, is designed to solve the expanded optimal control model iteratively. On this basis, an illustrative example of the coupled tanks system, which is disturbed by random noise, is presented. As a result, the optimal operating height level of the coupled tanks system is determined.
Hence, the efficiency of the approach proposed is highly recommended.
The structure of the paper is organized as follows. In Section 2, the problem of the coupled tanks system is described, and the related mathematical model is formulated. In Section 3, the expanded optimal control model, in which the adjusted parameters are added to the model used, is introduced. From the iterative calculation, the expanded optimal control policy, which determines the optimal operating height level in the coupled tanks system, is obtained. In Section 4, the illustrative example of the coupled tanks system is discussed. The result shows the applicability of the proposed approach. Finally, some concluding remarks are made.
Problem Statement
Consider two tanks that are joined to form the coupled tanks system [3]. Figure 1 shows the system plant, which is determined by relating the flow $Q_i$ into the tanks to the flow $Q_c$ leaving the valve at the tank bottom.
The flow balance equation for Tank 1 is given by

$$A_1 \frac{\mathrm{d}H_1}{\mathrm{d}t} = Q_i - Q_b, \qquad (1)$$

where $A_1$ is the cross-sectional area of Tank 1, $H_1$ is the water level in Tank 1, and $Q_b$ is the flow rate of water from Tank 1 to Tank 2 through Valve B. For Tank 2, the flow balance equation is given by

$$A_2 \frac{\mathrm{d}H_2}{\mathrm{d}t} = Q_b - Q_c, \qquad (2)$$

where $A_2$ is the cross-sectional area of Tank 2, $H_2$ is the water level in Tank 2, and $Q_c$ is the flow rate of water out of Tank 2 through Valve C. The system plant comes from these two flow balances and the nonlinear equations for the flows through the valves.
With the assumption that the valves are ideal orifices, the flows through the valves are related to the water levels in the tanks by the following expressions:

$$Q_b = C_{db}\, a_b \sqrt{2 g (H_1 - H_2)}, \qquad Q_c = C_{dc}\, a_c \sqrt{2 g H_2}, \qquad (3)$$

where $a_b$ and $a_c$ are, respectively, the cross-sectional areas of the orifices at Valves B and C, and $C_{db}$ and $C_{dc}$ are the discharge coefficients of Valves B and C, respectively. These coefficients take into account all fluid characteristics, losses and irregularities in the system that make the two sides of (1) and (2) balance. The gravitational constant is given by $g = 9.80\ \mathrm{m/s^2}$. In addition, substituting these expressions into the two flow balances for ideal valves given by (1) and (2) yields the following. Figure 1. A coupled tanks system.
$$\frac{\mathrm{d}H_1}{\mathrm{d}t} = \frac{1}{A_1}\left(Q_i - C_{db}\, a_b \sqrt{2 g (H_1 - H_2)}\right), \qquad (4)$$

$$\frac{\mathrm{d}H_2}{\mathrm{d}t} = \frac{1}{A_2}\left(C_{db}\, a_b \sqrt{2 g (H_1 - H_2)} - C_{dc}\, a_c \sqrt{2 g H_2}\right). \qquad (5)$$

Here, (4) and (5) describe the coupled tanks system dynamics in a nonlinear manner with ideal equations for the valves. In practice, the cross-sectional area is given by the dimensions of the valve and the flow channel, which could be more complicated.
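As a rough illustration of the nonlinear dynamics in (4) and (5), the following Python sketch integrates the two flow balances for a constant inflow. All numerical parameter values here (tank and orifice areas, discharge coefficients, inflow rate) are assumptions chosen only for illustration; the actual values used in this paper are those listed in Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

g = 9.80                      # gravitational constant [m/s^2]
A1, A2 = 32e-4, 32e-4         # tank cross-sectional areas [m^2]        (assumed)
ab, ac = 0.4e-4, 0.4e-4       # orifice cross-sectional areas [m^2]     (assumed)
Cdb, Cdc = 0.6, 0.6           # discharge coefficients of Valves B, C   (assumed)


def tanks(t, h, Qi):
    """Right-hand side of the coupled-tanks equations (4)-(5)."""
    H1, H2 = max(h[0], 0.0), max(h[1], 0.0)
    Qb = Cdb * ab * np.sqrt(2 * g * max(H1 - H2, 0.0))   # flow through Valve B
    Qc = Cdc * ac * np.sqrt(2 * g * H2)                   # flow through Valve C
    return [(Qi - Qb) / A1, (Qb - Qc) / A2]


# Constant inflow Qi = 5e-5 m^3/s (assumed); integrate from empty tanks.
sol = solve_ivp(tanks, (0.0, 300.0), [0.0, 0.0], args=(5e-5,), max_step=0.5)
print("final levels H1, H2 [m]:", sol.y[0, -1], sol.y[1, -1])
```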
Denote $x_1 = H_1$ and $x_2 = H_2$ as the heights of the water level in the tanks. In the presence of random disturbances, the system plant dynamics in (4) and (5) is rewritten as (7) and (8), where $\omega$ is a Gaussian random variable with zero mean and covariance $Q_\omega$. Hence, the problem of controlling the water level in Tank 2 can be formulated as an optimal control problem, referred to as Problem (P), given below [11] [12]: Problem (P): Find the optimal control input $u$, which is the flow rate, to minimize the cost function subject to the system dynamics (7) and (8) with the output measurement, where $x_{2s}$ and $u_s$ are the steady-state values, $r$ is the positive weight coefficient, $t_0$ is the initial time and $t_1$ is the fixed terminal time.
Notice that the structure of Problem (P) is complex and nonlinear. Solving Problem (P) directly would be computationally demanding. However, the optimal solution of Problem (P) can be obtained by solving its simplified model, referred to as Problem (M), in which $x_{2s}$ and $u_s$ are the steady-state values and $\alpha$ is the adjustable parameter. Referring to this simplified model, it is highlighted that the aim of adding the adjustable parameter into the model used in Problem (M) is to measure the differences between the system plant and the linear model used repeatedly. By virtue of this, the optimal solution of the model used can be updated in order to approximate the correct optimal solution of Problem (P), in spite of model-reality differences [9]- [14].
System Optimization with Parameter Estimation
Now, referring to the cost function in (9) or (10), the weighting coefficient matrices Q and R, the state transition matrix A and the control coefficient matrix B are set accordingly. Let f(·) represent the system dynamics function of the coupled tanks system. Let us define an expanded optimal control problem, referred to as Problem (E), in which additional weighting terms are introduced to improve the convexity and to facilitate the convergence of the resulting iterative algorithm. It is important to note that the algorithm is designed in such a way that the matching constraints are satisfied upon termination of the iterations, assuming convergence is achieved. The state constraint z(t) and the control constraint v(t) are used for the computation of the parameter estimation and the matching scheme, while the corresponding state constraint x(t) and control constraint u(t) are reserved for optimizing the linear model-based optimal control problem. Hence, system optimization and parameter estimation are mutually interactive.
Necessary Conditions
Define the Hamiltonian function by where ( ) p t ∈ ℜ is the Lagrange multiplier.Then, the augmented cost function for the cost function in (15) becomes ( ) ( ) with ( ) ( ) and ( ) 0 t θ = .
Modified Optimal Control Problem
From the necessary conditions ( 18)-( 22), a modified optimal control problem, which is referred to as Problem (MM), is defined by
Optimal Control Law
The optimal control law for Problem (MM), which is known as the expanded optimal control policy, is a feedback control [9]- [14].This control law is stated in the following theorem.Theorem 1 (Expanded optimal control policy): Assume that the expanded optimal control policy exists.Then, this optimal control law is the feedback control law for Problem (MM), given by where with the boundary conditions ( ) Proof: From the necessary condition (18), the optimal control is written by Applying the sweep method [15] [16], ( ) ( ) ( ) ( ) into (33), after some algebraic manipulations, the feedback control law (28) is Step 1: Compute the parameter ( ) . This is called the parameter estimation step.
Step 2: Compute the multipliers ( ) Step 3: Using ( ) ( ) ( ) ( ) , , , and ( ) i z t , solve Problem (MM) de- fined in (27) by using the result that is presented in Theorem 1.This is called the system optimization step.
1) Solve (32) forward to obtain ( ) i s t and solve (29) to obtain 2) Use (28) to obtain the new control ( ) 3) Use (38) to obtain the new state ( ) Step 4: Test the convergence and update the optimal solution of Problem (P).
In order to provide a mechanism for regulating convergence, a simple relaxation method is employed: if the solution has converged within a given tolerance, stop; else set $i = i + 1$ and repeat the procedure starting from Step 1.
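To illustrate the structure of this iteration (parameter estimation, model-based optimization, relaxation), the following toy Python sketch applies the same idea to a static scalar problem. The cost functions, gains and tolerance here are invented purely for illustration and are not the paper's equations; the point is that matching the gradients of the simplified model and the real plant at each iterate drives the converged point to satisfy the optimality condition of the real problem despite the model-reality difference.

```python
def real_cost_gradient(u):                 # "reality": gradient of J*(u) = (u - 3)^2 + 0.1*u^4
    return 2.0 * (u - 3.0) + 0.4 * u**3


def solve_model(alpha):                    # model-based optimization: argmin of (u - 3)^2 + alpha*u
    return 3.0 - alpha / 2.0


v, k_v = 0.0, 0.5                          # nominal solution and relaxation gain (assumed)
u = v
for i in range(200):
    alpha = real_cost_gradient(v) - 2.0 * (v - 3.0)   # parameter estimation: match gradients at v
    u = solve_model(alpha)                              # system optimization with the adjusted model
    if abs(u - v) < 1e-9:                               # convergence test
        break
    v = v + k_v * (u - v)                               # relaxation (matching) update

print("operating point:", v, "gradient of the real cost:", real_cost_gradient(v))
```

At convergence u = v, and the stationarity condition of the adjusted model coincides with that of the real cost, so the printed gradient is (numerically) zero even though the model itself never matches the plant exactly.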
Remarks:
a) The nominal solution can be the optimal solution that is obtained from the standard linear quadratic regulator (LQR) optimal control problem.
b) The off-line computation for solving (30) and (31) is done at Step 0, before the iteration begins, with the corresponding initial values assumed to be zero. c) A numerical scheme for solving the ordinary differential equations of S(t) and s(t) can be used.
d) The relaxation method given in (40) establishes a matching scheme for the updating of the iterative solution.
Result and Discussion
For the numerical illustration, the physical parameters of the coupled tanks system are shown in Table 1 [4], and the steady-state values are set accordingly. Following from this, the proposed algorithm is applied to obtain the optimal operating height level in the coupled tanks system. For this task, the algorithm is implemented in the MATLAB 2016 R1 environment in Windows 8.1 Pro with a 2.10 GHz processor and a 64-bit operating system. Referring to Table 1, the system parameters used are calculated as follows: 0.019442, 0.019442, 0.019442, 0.024302. The simulation result is shown in Table 2. The final cost is smaller than the original cost, saving about 0.039 percent of the original cost spent. In addition, the water in Tank 2 is controlled at the steady-state value. Figure 5 shows the stationary condition of the optimality. It verifies that the proposed iterative algorithm is efficient and that the final solution is the optimal solution. As a result, the proposed iterative algorithm is applicable to making decisions on the operating height of the coupled tanks system.
Concluding Remarks
The use of the expanded optimal control policy in determining the operating height level in the coupled tanks system was discussed in this paper. The special feature of this expanded optimal control policy is that the model used, which is a linear model, has a different structure from the original problem, which is a nonlinear model. By adding the adjusted parameters into the model used, the differences between the model used and the original model can be measured iteratively. As a result, the flow rate of the coupled tanks system is controlled and the operating height level is achieved, in spite of model-reality differences. In conclusion, the efficiency of the expanded optimal control policy is clearly demonstrated. Nonetheless, the optimal operating height level obtained by the proposed algorithm is the expected solution of the coupled tanks system, where the system is disturbed by random noise. Apparently, this expected solution approximates the steady-state value. In fact, the optimal solution of the coupled tanks system with random disturbance could be further improved by using filtering techniques. Therefore, it is suggested to determine the optimal filtering solution of the coupled tanks system with random disturbance in future research.
4) Use (34) to obtain the new costate p^(i)(t). 5) Use (39) to obtain the new output y^(i)(t).
Figure 2
Figure 2 shows the trajectory of the control input for the coupled tanks. The control input reduces its value dramatically from 3.33 units and then increases gradually after 0.2 seconds. It takes 0.6 seconds to converge to the steady-state value $u_s = 3.322$. This behavior of the control input indicates that the flow rate, which is pumped into Tank 1 for the first 0.2 seconds, is moved into Tank 2 and reaches the steady state after 0.8 seconds.
in Figure 3 and Figure 4. It can be seen that the original state trajectory is disturbed by the random disturbances. Nonetheless, the expected state trajectory, which is measured from the origin, increases in such a way that the steady-state value of the water level in the second tank is controlled.
The system states are the water level H1 in Tank 1 and the water level H2 in Tank 2. The control input is the pump flow rate $Q_i$, and the variable to be controlled is the second state, which is the water level H2, with disturbances that are caused by variations in the rate of flow out of the system by Valve B or by changes in Valve C. Hence, a mathematical model shall be built for each of the tank water levels.
Table 1 .
Physical parameters of coupled tanks system. | 3,547.6 | 2019-04-11T00:00:00.000 | [
"Engineering"
] |
Digital Architectures for UWB Beamforming Using 2D IIR Spatio-Temporal Frequency-Planar Filters
Introduction
Radio-frequency (RF) two-dimensional (2D) infinite impulse response (IIR) space-time (ST) plane-wave frequencyplanar beam filters [1] have potential applications in ultrawideband (UWB) directional filtering of propagating electromagnetic far-field plane-waves.Such plane-wave filters achieve highly directional beamforming for aperture array applications.The proposed beam filters are designed using the concept of frequency-planar resonant 2D inductorcapacitor (LC) ladder network prototypes having resistive terminations [2].For example, UWB beam filters can be employed in radar [3], wireless communications [4], radio astronomy [5], and electromagnetic imaging, and sensing [6].Furthermore, new applications have been proposed in cognitive radio towards enhanced access to radio spectrum (EARS) [7] which requires sensitive spectrum sensing in both space and time domains [8], in turn leading to a strong need for low-complexity directional filters capable of real-time RF operation [9].
High attenuation in the stop-band region as well as a sharp transition from filter passband to stop-band is greatly desired in high-performance beamforming systems because such characteristics are important for achieving a better approximation to the ideal "brick-wall" type transition between the main beam and stop-band null of the array pattern. The sharpness of the transition from passband (main beam) to stop-band (null region) of the aperture array factor directly depends on the order of the transfer function of the spatio-temporal filter that is employed for UWB beamforming. The primary objective of this work is to explore the real-time hardware architectures that are necessary for realizing frequency-planar beam plane-wave filters corresponding to 2nd- and 3rd-order 2D passive LC ladder low-pass networks. The proposed architectures are an extension of the elementary 1st-order hardware architecture described in [10]. Here, we propose novel massively parallel digital architectures with detailed design equations and complexity studies for beamforming networks based on LC-ladder prototypes of order 2 and 3, having significantly better UWB directionality compared to available 1st-order realizations [10].
The 2D frequency-planar beam filters are both practical bounded-input and bounded-output (practical-BIBO) stable [11] and structurally stable under zero initial conditions (ZICs) and can be designed for low computational complexity.We propose massively parallel systolic-array VLSI architectures containing identical and locally interconnected parallel processing core modules (PPCMs) for 2nd-order and 3rd-order beam filters.The proposed architectures are finegrain pipelined using both inter-PPCM and intra-PPCM registers in order to reduce the critical path delays (CPDs) for achieving maximum clock frequency and temporal bandwidth.The main reason to pipeline the filters is to achieve higher throughputs required for real-time filtering of 2D RF signals derived from time synchronously sampled UWB uniform linear arrays (ULAs) of antenna elements.The proposed systolic-array architectures achieve real-time RF plane-wave filtering at a throughput of one-frame-per-clockcycle (OFPCC).The frame sampling rate of the beamformer is equal to the clock frequency F clock ≤ 1/ΔT cpd Hz.
An approximately frequency-independent beam shape over an UWB frequency range is obtained by employing a fan filter bank configuration where each subband of the fan filter bank consists of a temporally bandpass frequencyplanar beam filter [12].For the best real-time throughput, each subband of the fan filter bank [12] may be realized using dedicated massively parallel systolic-array processors.However, this technique results in high circuit complexity due to the high degree of parallelism.In this work, we trade throughput for lower circuit complexity by employing a folded architecture for the realization of the fan filter bank.The hardware design approach of folding leads to the time interleaving of multiple 2D filters.Time multiplexing of filter coefficients in folded hardware allows different filters [13] to share arithmetic hardware thereby reducing the number of multipliers and adders.K-times folding results in K-fold utilization of hardware at the cost of a K-fold loss in ULA linear frame-rate [14].
In digital beamforming, an array of time synchronous A/D converters [72] leads to the discrete-time signals. Such algorithms are based on either time domain delay-and-sum (DAS) or frequency domain phased array feed (PAF) techniques [16,19,73]. In delay-and-sum-based systems, element signals are delayed and added coherently to form a beam. True time delays for each antenna are found for a particular beam direction and inter-antenna spacing and are realized as tapped delay lines [74,75]. In finite impulse response (FIR) beamforming [60,61,76,77], the time delays are implemented as FIR digital filters. This method has high computational intensity compared to the proposed IIR technique. In frequency domain PAF beamforming [57,58,65], the digitized DAA signals are converted to the frequency domain by evaluating the temporal fast Fourier transform (FFT) [78,79]. The FFT bins are subsequently multiplied by a complex weight and coherently summed to achieve frequency domain beamforming.
Principle of 2D IIR Beamforming.
Figure 1 shows the overview of the proposed UWB digital beamforming system.The digital input signals of the proposed massively parallel systolic-array architectures are obtained by amplifying and low-pass filtering the continuous-time (i.e., analog) signals from UWB low-noise antennas.Typical choices for the RF sensing antenna are Vivaldi or BAVA antennas; however, other types of broadband antennas such as biconical antennas may also be employed.
UWB beamforming is implemented here using an array of antennas placed at uniform distance along x-axis.The principle of UWB beam filtering [2,80] is to enhance the spectrum W pass ( jω 1 , jω 2 , ψ) of spatio-temporal plane wave w pass (Δxn 1 , cΔT s n 2 , ψ) propagating with a desired direction of arrival (DOA) of ψ while simultaneously attenuating all undesired signals w stop (Δxn 1 , cΔT s n 2 , ψ) with spectrum W stop ( jω 1 , jω 2 , ψ) which lie outside the passband of the filters.
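As an aside, the sampled space-time frame that such a beam filter operates on can be sketched in Python as follows: each antenna of the ULA sees the same waveform delayed by n1·Δx·sin(θ)/c, with Δx = cΔTs as noted below. The Gaussian-modulated cosine source, the 3 GS/s sampling rate and the 30° direction of arrival are all assumed values chosen only for illustration.

```python
import numpy as np

c = 3e8                          # speed of light [m/s]
dT = 1.0 / 3e9                   # temporal sampling period (assumed 3 GS/s)
dx = c * dT                      # antenna spacing, dx = c*dT (Nyquist condition)
N1, N2 = 32, 256                 # number of antennas, number of time samples
theta = np.deg2rad(30.0)         # assumed direction of arrival from broadside


def source(t):                   # assumed UWB source: Gaussian-modulated cosine
    return np.exp(-(((t - 40 * dT) / (8 * dT)) ** 2)) * np.cos(2 * np.pi * 0.6e9 * t)


n1 = np.arange(N1)[:, None]      # spatial index (antenna)
n2 = np.arange(N2)[None, :]      # temporal index (sample)
w = source(n2 * dT - n1 * dx * np.sin(theta) / c)   # 2-D frame w(dx*n1, c*dT*n2)
print(w.shape)                   # (N1, N2): one space-time frame for the beam filter
```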
From Nyquist sampling theorem, Δx = cΔT s , where ΔT s is the temporal sampling period and c ≈ 3 • 10 8 ms −1 is the speed of light in air [81].The plane-wave signal of interest is denoted by w pass (•) while interference signals are denoted by w stop (•), respectively.The 2D sampled input signal is of the form where (W, D) . . .from the broadside direction of the ULA and is denoted by θ where 0 ≤ θ ≤ 90 • and ψ is the corresponding space-time DOA, 0 ≤ ψ ≤ 45 • , 2.3.Review of 2D Plane-Wave Beam Filters.The 2D IIR plane-wave beam filters can be synthesized [83] using 2D LC network prototypes.The Laplace transfer function derived from the network is found and subsequently converted to the 2D z-domain using the complex map of bilinear transformation.The resulting z-domain transfer function leads to a computable 2D difference equation of the filter enabling
International Journal of Antennas and Propagation
Figure 6: Architecture of H 1,m (n 1 , z 2 ) of 2D 2nd-order beam filter with 3-stage scattered look-ahead pipelining.
real-time digital VLSI realization using RF rate systolicarray processors.Examples of a 1st-order beam filter were previously investigated in [10].An example of 1st-order 3D IIR cone filter-bank was first proposed in [12].We here propose a 2nd-order plane-wave filter hardware having useful applications as a building block for achieving fan filter banks.A sharper transition required for aperture arrays can be obtained with higher order filters.Further, we propose 3rd-order plane-wave filter hardware and estimate both complexity, quantization noise level, and performance.The sharper transition (roll-off) for 3rd-order filter in comparison to the 2nd-order filter is demonstrated.
Let the 1D input-output Laplace transfer function of a classical resistively-terminated Nth-order LC ladder low-pass network shown in Figure 2 be given by The above equation can be converted to a 2D Laplace equation by applying frequency-planar transformation, s = s 1 cos ψ + s 2 sin ψ to (3) to obtain T(s 1 , s 2 ): T(s 1 , s 2 ) is mapped to the 2D z-domain by applying bilinear transformation to get (5) Note that the above design equations are limited to filter passbands that exist in the second and fourth quadrants of the 2D frequency space.Given that filter stability requires all components to be nonnegative [2], for beamforming in quadrants one and three (i.e., −π/2 ≤ θ ≤ 0), we use nonnegative values in each branch impedance and shunt admittance of the prototype while mirroring the input array signal spatially because w(−x, ct) ⇐⇒ W(− jω 1 , jω 2 ) such that the passband spectra now fall within quadrants two and four of the frequency space [2].
To obtain 2D difference equations, we apply the inverse Z-transform to the above function under ZICs, leading to Nth-order plane-wave beam filter realizations having general form [2] The frequency response of the Laplace domain prototype is obtained by evaluating T( jω 1 , jω 2 ).The frequency response of the digital realization is obtained in closed form by evaluating H(e jω1 , e jω2 ) [2].That is, on the 2D unit bicircle The frequency response may be verified by computing the 2D discrete Fourier transform (DFT) of the unit impulse response h(n 1 , n 2 ) and comparing with the closed-form response H(e jω1 , e jω2 ).
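The effect of the frequency-planar substitution s = s1·cos ψ + s2·sin ψ can be checked numerically. The sketch below uses a generic first-order low-pass prototype T(s) = 1/(1 + s) as a stand-in for the resistively terminated LC ladders (an assumption, chosen only because of its simple form); the magnitude stays close to unity along the plane ω1·cos ψ + ω2·sin ψ = 0, i.e. along the spectral signature of a plane wave with space-time DOA ψ, and rolls off away from it.

```python
import numpy as np

psi = np.deg2rad(30.0)                              # assumed space-time DOA of the beam
w1, w2 = np.meshgrid(np.linspace(-np.pi, np.pi, 201),
                     np.linspace(-np.pi, np.pi, 201))


def T(s):                                           # assumed 1-D low-pass prototype
    return 1.0 / (1.0 + s)


beam = np.abs(T(1j * (w1 * np.cos(psi) + w2 * np.sin(psi))))
on_beam = np.abs(T(1j * 0.0))                       # response on the passband plane
print(on_beam, beam.min())                          # 1.0 versus the stop-band floor
```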
Systolic-Array Architecture of 2D IIR Filters
3.1.Difference Equations.In our example design for 2ndorder plane-wave filter shown in Figure 3, we employ parameters R s = 1, L 1 = 1.4142,C 2 = 1.4142 [83].For our example of 3rd-order beam filter shown in Figure 4, the design parameters are 1 gives the feedback of 2D 3rd-order beam filter with 3-stage scattered look-ahead pipelining.
coefficients of the filter for a 1st-order, 2nd-order, and 3rdorder plane-wave beam filters [84].
Partially Separable Signal Flow Graphs.
In order to reduce digital implementation complexity, we first separate the 2D z-domain transfer function of the beam filters into separable and nonseparable subfilters.These subfilters correspond to spatial, temporal, and spatio-temporal prototype networks.Therefore, the transfer functions are cascaded to form the final 2D filter as shown in Figure 5.Following [10] to higher-order filters, we propose that the nonseparable function H 1 (z 1 , z 2 ) be realized employing a systolic-array consisting of N 1 similar parallel processing core modules (PPCMs) which are interconnected to each other such that spatio-temporal feed-forward and feedback paths required for recursive computation of the filter are realized using 2D difference equations.
The input RF waves received by each antenna in the ULA are passed through LNAs, low-pass filtered, sampled, and quantized within each ADC. The sampled digital signals are connected to the PPCM input ports. The corresponding outputs are sent through H 2 (z 1 ) and H 3 (z 2 ) as shown in Figure 1. The function H 2 (z 1 ) is implemented as a filter with spatial delays and H 3 (z 2 ) is implemented using temporal delays. Figure 9: Architecture of H 2 (z 1 ) of 2D 2nd-order beam filter.
Zero-initial conditions (ZICs) of the filter along both discrete space and time dimensions are defined by Figure 11: Architecture of H 2 (z 1 ) of 2D 3rd-order beam filter.
The temporal ZICs [10,11] are provided to the design by preloading the input values with zeros.Spatial ZICs are provided by connecting constant value zero as previous state inputs to the first PPCM.The function H 1 (z 1 , z 2 ) for an Nthorder plane-wave beam filter given in ( 7) is implemented as for a 2nd-order filter is shown in Figure 6 and for 3rd-order filter as shown in Figure 7. ).The size of each digital signal representation after each multiplier and adder has been marked at every point in the realization of H 1,m (n 1 , z 2 ).The output signal of a given PPCM is provided as one of the inputs to the next PPCM; this requires that the size of the output signal of each PPCM be the same as the size of the input port in the neighbouring PPCM.
Beam Sensitivity to Filter Coefficient Precision.
First-order sensitivity gives a measure of the error associated with perturbations in the coefficients of the filter. Figure 13: Overview of a K-time multiplexed filter bank architecture [12]. The sensitivity of
the 2D beam transfer function of the filter to changes in the values of the coefficients resulting from quantization is studied next. We consider the 1st-order sensitivity function to find the magnitude error of the design [85]. The sensitivity function for the 2D beam function H(z 1 , z 2 ) is defined accordingly. Sensitivity helps to find the relative error in the transfer function with respect to a coefficient. If b i j is a coefficient of the filter, then the relative error in the transfer function of the filter due to perturbations in b i j follows from this sensitivity. The gain sensitivity in |H(e jω1 , e jω2 )| is computed accordingly. The gain sensitivity in |H(e jω1 , e jω2 )| due to fixed-point errors in all the coefficients of the filter is given by the summation of the gain sensitivities with respect to each coefficient [85]. We consider nonuniform error in the coefficients. The relative error in H(z 1 , z 2 ) with respect to different sizes of coefficients for the 2nd-order and 3rd-order filters is shown in Figure 8. The relative error [85] in |H(z 1 , z 2 )| is calculated for i + j ≠ 0. From Figure 8, for the usable frequency range ω 2 ≤ π/2, we observe that, for 2nd-order beamformers, W c = 15-bit precision in the filter coefficients leads to beam accuracy within 3 %, which improves to better than 0.1 % for W c = 20-bit precision. For 3rd-order beamformers, W c = 19-bit precision in the filter coefficients leads to 3 % accuracy, which improves to better than 0.1 % accuracy when the filter coefficient precision is increased to W c = 23 bits.
Internal
Register/Word Sizes.The word length of the signal at the feedback loop needs to be truncated.To maintain a minimum error, the size of the input signal has been decided based on the maximum value of the output signal.The design has been simulated for various word sizes and compared to the 64-bit precision Matlab outputs.For this experiment, we provided an impulsive UWB signal as an input to the filter because the operation and accuracy of the filter can be tested from the impulse response of the filter.
Tables 2 and 3 show the variation of quantization noise with the word size of the coefficients of the filter and the input size.In general, increased precision in the recursive spatio-temporal feedback sections leads to larger VLSI area and higher power consumption with lower speed due to larger CPDs.A compromise must be found depending on the needs of the target application that balances power, speed, accuracy, and chip area.
Look-Ahead Speed
Optimization.The speed of the designs is maximized using fine-grain pipelining.Multilevel pipelining, that is, both inter-PPCM pipelining and intrapipelining, is used for maximum performance.
Figure 16: Time-multiplexed PPCM of 2nd-order beam filter bank with 3-stage scattered look-ahead pipelining for K-time-multiplexed beam filters.Intrapipelining refers to internal optimization within the PPCMs.The feedback loop of an IIR filter inside the PPCM can be pipelined using look-ahead pipelining.Inter-PPCM pipelining refers to the pipelining of signal paths outside the PPCMs.The application of look-ahead pipelining to nonseparable 2D IIR digital filters was first proposed in [10].
Here, we pipeline each feedback loop of the proposed filters using stable scattered look-ahead pipelining (SLA) [11,86,87]. In SLA, additional cancelling pole-zero pairs, located at the same radial distance from the origin as the original poles but at equally spaced angles, are introduced into the transfer function, enabling the CPD of the filter to be reduced.
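The algebra behind scattered look-ahead is easy to check numerically for the standard first-order case: multiplying the numerator and denominator of 1/(1 - a z^-1) by (1 + a z^-1 + a^2 z^-2) leaves the transfer function unchanged but places three delays in the feedback loop. The pole value a = 0.8 below is an arbitrary assumption; the beam filters above apply the same idea to their spatio-temporal feedback loops.

```python
import numpy as np
from scipy.signal import lfilter

a = 0.8                                            # assumed pole of the recursion
x = np.zeros(64)
x[0] = 1.0                                         # unit impulse

y_orig = lfilter([1.0], [1.0, -a], x)              # H(z) = 1 / (1 - a z^-1)
y_sla = lfilter([1.0, a, a**2],                    # numerator (1 + a z^-1 + a^2 z^-2)
                [1.0, 0.0, 0.0, -a**3], x)         # denominator (1 - a^3 z^-3)

print(np.allclose(y_orig, y_sla))                  # True: identical response, 3-delay feedback loop
```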
If the denominator of the transfer function is in the form of D(z), The general equation of M-stage SLA pipelining [86] is given by The feedback loop for 2nd-order beam filter as shown in the following equation is pipelined for 3 stages in our design example: Figure 17: Time-multiplexed PPCM of 3rd-order beam filter bank with 3-stage scattered look-ahead pipelining for K-time-multiplexed beam filters.
Transfer function for 3-stage look-ahead pipelining of feedback loop is given by where ( For the 3rd order, the feedback loop is described using the following equation: The transfer function for 3-stage look-ahead pipelining of feedback loop is given by ( Figure 9 shows the realization of H 2 (z 1 ) of 2nd-order beam filter.H 2 (z 1 ) is a spatial function.Hence, it is realized as sum of three consecutive 1D outputs given by the PPCMs of H 1 (z 1 , z 2 ).H 3 (z 2 ) is temporal function and it is realized as consecutive 1D filters as shown in Figure 10.Similar are the realizations for the 3rd-order filter, as shown in Figure 11 and described by Figure 12.
3.7. Time Multiplexing for Folded Architectures. Time-multiplexed systolic-array designs are useful for the design of fan filters [88] whose transfer function is given in the following. Figure 13 shows how fan filters are designed using time-multiplexed filters and are described using [12], where T FIR,k (z 2 ) are subbands of a perfect reconstruction FIR bandpass filter bank. Time multiplexing of K filters needs the inputs to be upsampled by K ∈ Z + with copying [88,89]. The input signal consists of K samples pertaining to each original sample and is passed through the time-multiplexed filter. In Figure 14, we provide an overview of the signal flow in the time-multiplexed design of the K beam filters. These filters give K outputs, which are to be demultiplexed before being applied to the fan filter bank perfect reconstruction FIR bandpass filters [12]. The time multiplexing of the folded architecture causes each unit delay to be increased, where K is the number of inputs (K = 4 in the example provided) and ΔT clk is the clock delay. The signal flow inside the time-multiplexed PPCM is such that the coefficients of the feedback terms of the K filters are given to a two-input multiplier through a commutating multiplexer. The input signal w(Δxn 1 , cΔT s n 2 ) is given to the other input of the multiplier. A counter is used to select the feedback coefficient given to the multiplexer. We have designed the filter bank with four filters. Hence, the counter runs from 0 to 3 and restarts. Therefore, when there exist K filter coefficients, the required counter must continuously count from 0 to K − 1. The critical-path delay reduces as the architecture is fine-grain pipelined with SLA pipelining for the feedback path. Figure 15 shows each fully multiplexed multiplier circuit (FMMC) design for beam filters. The 2nd-order and 3rd-order time-multiplexed designs are shown in Figures 16 and 17, respectively, with ΔT as in (22).
Simulation and Implementation
4.1. The 2D Frequency Response. The obtained 2D magnitude frequency response of a 2nd-order frequency-planar beam plane-wave filter matches the ideal frequency response shown in Figure 18. Figure 19 shows the magnitude frequency response of the 3rd-order plane-wave filter. We observe an unavoidable warping effect in discrete domain systems due to the use of the 2D bilinear transform. Warping effects can be avoided by temporally oversampling the signal such that the spectrum lies totally within the straight-line region of the beam response. Figure 20 shows contours of the magnitude response of the filters in log scale. Plots for infinite precision, highest fixed-point precision, as well as lowest fixed-point precision are provided. It can be seen that the transition (roll-off) of the 3rd-order plane-wave filter is sharper compared to that of the 2nd-order filter. The performance of the designed finite-precision digital systolic-array circuit is measured by calculating the L2 error energy due to quantization effects. The output obtained from the difference equation (from Matlab, at 64-bit precision) is considered the ideal output, and the error is calculated as the difference between the ideal output and the fixed-point measurement from the FPGA device. The L2 energy of the error, E error , is calculated as follows:

$$E_{\mathrm{error}} = \sum \left| y_{\mathrm{ideal}}(e^{j\omega_1}, e^{j\omega_2}) - y(e^{j\omega_1}, e^{j\omega_2}) \right|^2, \qquad (23)$$

where y ideal (e jω1 , e jω2 ) is the fast Fourier transform of the ideal output obtained using the difference equation from Matlab and y(e jω1 , e jω2 ) is the fast Fourier transform of the output obtained from the hardware design. We have tabulated unnormalized L2-error energies as a metric for quantization noise levels for different word sizes for the 2nd-order and 3rd-order plane-wave filters in Tables 2 and 3, respectively.
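The error metric of (23) amounts to summing the squared magnitude differences of the 2-D spectra of the reference and hardware outputs, e.g. as in the short sketch below. The random reference array and the 12-bit rounding used here are stand-ins (assumptions) for the Matlab reference and the FPGA measurement.

```python
import numpy as np


def l2_error_energy(y_ideal, y_hw):
    """Unnormalised L2 error energy between the 2-D spectra of two outputs, as in (23)."""
    return float(np.sum(np.abs(np.fft.fft2(y_ideal) - np.fft.fft2(y_hw)) ** 2))


rng = np.random.default_rng(0)
y_ref = rng.standard_normal((64, 64))              # stand-in for the 64-bit Matlab reference
y_fix = np.round(y_ref * 2**12) / 2**12            # stand-in for a fixed-point hardware output
print(l2_error_energy(y_ref, y_fix))
```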
Broadband Signal Filtering.
We demonstrate the directional selectivity of the proposed frequency-planar plane-wave beam filters by employing a Gaussian impulsive ultra-wideband signal [90] as input to the filter. The input to the filters is shown in Figure 21. The directional attenuation of the signals was calculated using 20 log 10 |A| dB, where A is the magnitude of the attenuated signal. Attenuation in signal energy was calculated by correlation. The directional enhancement of the energy of the desired signal (or attenuation of the undesired signal) was calculated using 10 log 10 |A| dB. The attenuation in energy of the signal with ψ 0 = −40° is 18.78 dB for the 2nd-order plane-wave filter and 26.2 dB for the 3rd-order plane-wave filter. The energy of the signal with ψ 2 = 41° is attenuated by 14.26 dB by the 2nd-order plane-wave filter and 26.69 dB by the 3rd-order plane-wave filter. We observe that the attenuation of the undesired signals improves for the higher-order plane-wave filter, which confirms that the directional selectivity of a beamformer improves with the order of the filter. Figure 22(a) shows the 2D spectrum of the input Gaussian broadband signal. In Figure 22(b), we show the output of the 2nd-order beam filter, followed by Figure 22(c), which shows the output of the 3rd-order beam filter for broadband Gaussian-modulated cosine waves.
Beam Patterns (Array Factor
). Directional enhancement of the plane-wave beam filters can be observed through the beam patterns of the filter at different frequencies. Figure 23 shows a comparison between the beam patterns of the 1st-order, 2nd-order, and 3rd-order filters for the frequencies π/4, π/2, and 2π/3 radians, respectively. The polar plot of an Nth-order filter can be obtained using the magnitude of the transfer function, where i + j ≠ 0 for the feedback loop, by substituting for ω 1 and z, in order to observe the relative improvement in performance. Tables 4, 5, 6, and 7 show the FPGA hardware resources such as slice registers, look-up tables (LUTs), flip-flops (FFs), CPD, and maximum clock frequency. We observe a reduction in the speed of operation of the 3rd-order plane-wave filter due to increased complexity at the same level of pipelining as the 2nd-order plane-wave filter. Time-multiplexed filters having corresponding folded digital architectures were physically implemented on the same board, and resource consumptions are tabulated in Tables 8 and 9, respectively. Folding and time multiplexing were applied to both designs having the SLA pipelining. At this stage, we use FPGA prototypes for verification of operation at an order of magnitude lower clock frequency than the final expected RF digital realization.
Computational Complexity.
Table 10 shows comparison of computational complexity and throughput for one PPCM between 2nd-order and 3rd-order beam filters for different stages of SLA.VLSI metrics area time (AT 2n , 0 ≤ n ≤ 1) [91] is calculated as a measurement of complexity to find the main constraints in designing the filters.In VLSI systems, n = 1 is used for cases where low chip area is more important than clock speed.Similarly, n = 2 is used for cases where clock speed is the driving factor.Table 11 gives measures of both AT and AT 2 for 2nd-order pipelined designs, both with and without LA, for different levels of fixed-point precision.Furthermore, Table 12 gives the same metrics for the 3rdorder filter architecture.
Conclusion
Highly directional steerable ultra-wideband digital antenna aperture arrays have useful applications in wireless communications, radar, radio astronomy, spectrum sensing, RF imaging and remote sensing. We propose novel massively parallel systolic-array architectures for the real-time digital realization of beamforming 2D IIR frequency-planar beam plane-wave filters based on resistively terminated LC ladder networks. The proposed architectures are aimed at 2nd- and 3rd-order beam filters and are based on the recently proposed 1st-order beam filter architecture [10]. These beamformers offer both low computational complexity and UWB performance. The proposed architectures enable RF throughputs when eventually realized using high-speed CMOS technology. The proposed architectures are evaluated for correct operation, area, time, and complexity metrics as well as beam sensitivity and noise levels as a function of finite precision. Extensive prototype FPGA realizations, simulations, and FPGA-based emulations of beamforming performance are used here to validate the architectures and design procedures for high-performance digital UWB beamforming applications.
As part of the proposed optimized designs, fine-grain pipelined systolic-array 2D IIR plane-wave beam filters of orders 2 and 3 have been designed and optimized using scattered look-ahead to reduce the CPD of the digital circuits. Low CPD results in high real-time throughput with a corresponding increase in clock frequency. The architectures were evaluated using CPD (T), area (A), as well as AT and AT². The proposed 2nd-order filter architecture is verified using a realistic Gaussian-modulated cosine-wave input signal consisting of several plane waves. The filter is designed to enhance the plane wave having space-time DOA = 10°, while attenuating the undesired signal by up to 40.11 dB for space-time DOA = −40° and 37.22 dB for space-time DOA = 41°. Similarly, the 3rd-order filter provided up to 50.305 dB of stop-band rejection for space-time DOA = −40° and 52.2 dB of stop-band rejection for space-time DOA = 41° for a broadband Gaussian-modulated cosine-wave signal. The corresponding hardware architectures were verified on an FPGA chip using measured unit impulse responses obtained using stepped FPGA hardware cosimulation. The 2D frequency response of the measured impulse response from the FPGA chip is compared with a reference response obtained from the difference equation by calculating the L2 error energy for different finite precision sizes.
Figure 1 :
Figure 1: RF front-end for a single array element showing antenna, LNA, low-pass filter, and A/D converters.The smart antenna array consists of several such mixed-signal RF circuits for electromagnetic signal sampling at real-time RF rates.
Figure 8 panel titles: relative error in H of the 2nd-order filter for W c = 20 and W c = 15, and of the 3rd-order filter for W c = 23 and W c = 19 (each plotted for ψ = 10°).
Figure 8 :
Figure 8: Relative error in H(z 1 , z 2 ) with respect to the respective coefficients of 2nd-order and 3rd-order plane-wave filters.
3.3. Fixed-Point Arithmetic and Quantization Effects. Signed fractional numbers are quantized in a finite-precision digital representation, which in this case is based on the two's complement format. Hence, we quantize the coefficients of the 2D digital IIR filters using signed fixed-point arithmetic. The size of the registers is designed such that (W, D) is the size of the input signal, where W is the total size of the two's complement number and D is the binary point location counted from the rightmost position. Also, we use (W c , D c ) as the size of the coefficients of the filter. The fixed-point finite register size of the multipliers has been designed at (W + W c , D + D c ).
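A small helper illustrating the (W, D) two's-complement format described above is sketched below: values are rounded to D fractional bits and limited to the W-bit representable range. The saturating behaviour is an assumption made for the sketch; the actual hardware registers may instead be sized to avoid overflow.

```python
import numpy as np


def quantize(x, W, D):
    """Round x to a W-bit two's-complement fixed-point value with D fractional bits."""
    scale = 2.0 ** D
    lo, hi = -(2 ** (W - 1)), 2 ** (W - 1) - 1     # representable integer range
    q = np.clip(np.round(np.asarray(x) * scale), lo, hi)
    return q / scale


coeff = 0.7071                                     # example coefficient value
print(quantize(coeff, W=15, D=13), quantize(coeff, W=20, D=18))
```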
Figure 14: Overview of time multiplexing K-beam filters.
Figure 18: Magnitude frequency response of the 2D 2nd-order beam filter for an impulse input signal.
Figure 21: Input and output Gaussian signals in the time domain.
Figure 22: Input and output Gaussian signals in the frequency domain.
Table 1: Filter design equations showing algebraically defined filter coefficients as functions of LC ladder prototype network parameters and steered angle tan ψ = sin θ. Here, • refers to the dot product between vectors.
Table 2: Calculation of error energy for unit impulse input for different output sizes of the 2nd-order beam filter.
Table 3: Calculation of unnormalized L2-error energy for unit impulse input for different output sizes of the 3rd-order beam filter.
Table 4: Hardware resource consumption for the pipelined 2nd-order beam filter with 3-stage SLA for different word lengths for 20 PPCMs.
Table 5: Hardware resource consumption for the pipelined 2nd-order beam filter without LA for different word lengths for 20 PPCMs.
Table 6: Hardware resource consumption for the pipelined 3rd-order beam filter with SLA for different word lengths for 10 PPCMs.
Table 7: Hardware resource consumption for the pipelined 3rd-order beam filter without LA for different word lengths for 10 PPCMs.
Table 8: Hardware resource consumption for the time-multiplexed 2nd-order beam filter for different word lengths for 10 PPCMs.
Table 9: Hardware resource consumption for the time-multiplexed 3rd-order beam filter for different word lengths for 6 PPCMs.
Table 10: Computational complexity and throughput of 2nd-order and 3rd-order beam filters of N PPCMs.
Table 11: Area-time constraints for the 2nd-order filter.
Table 12: Area-time constraints for the 3rd-order filter. | 6,640.2 | 2012-09-17T00:00:00.000 | [
"Computer Science"
] |
Free Radicals and ROS Induce Protein Denaturation by UV Photostability Assay
Oxidative stress, photo-oxidation, and photosensitizers are activated by UV irradiation and affect the photo-stability of proteins. Understanding the mechanisms that govern protein photo-stability is essential for controlling it, whether to enhance or to reduce it. Currently, two major mechanisms for protein denaturation induced by UV irradiation have been proposed: one based on the local heating of water molecules bound to the proteins and the other on the formation of reactive free radicals. To discriminate which is the likely or dominant mechanism, we have studied the effects of thermal and UV denaturation of aqueous protein solutions with and without DHR-123 as a fluorogenic probe, using circular dichroism (CD), synchrotron radiation circular dichroism (SRCD), and fluorescence spectroscopies. The results indicated that the mechanism of protein denaturation induced by VUV and far-UV irradiation was mediated by the formation of reactive free radicals (FR) and reactive oxygen species (ROS). A novel protein UV photo-stability assay based on consecutive repeated CD measurements in the far-UV (180–250 nm) region, developed at the Diamond B23 beamline for SRCD, has been successfully used to assess and characterize the photo-stability of protein formulations and ligand binding interactions, in particular for ligand molecules devoid of significant UV absorption.
Introduction
Biotherapeutics are becoming the mainstream of new medicinal agents, of which monoclonal antibodies and peptides are of great pharmaceutical interest. The development of biopharmaceuticals is often hampered by reduced stability, or a lack of stability, during ageing under a variety of environmental factors such as temperature, light, and oxidation; this is manifested by a loss of ordered structure or protein misfolding, mirrored by the loss or change of function. Photo-stability can be an issue in the development and formulation of biopharmaceuticals [1].
Circular dichroism (CD) spectroscopy is the ideal technique to characterize and monitor the folding of proteins in solution as a function of environmental factors such as temperature, pH, solvent polarity, salts, detergents, lipids, and ligand interactions [2][3][4]. Unlike macromolecular crystallography (MX) and NMR, where detailed structural models of the proteins at atomic resolution can be achieved, CD spectroscopy provides a fast method to analyze the conformational behavior in solution, confirming that environmental conditions more appropriate for MX and NMR measurements do not affect the protein folding observed under physiological conditions. There are many techniques that can be used to determine binding interactions, such as fluorescence, isothermal titration calorimetry (ITC), and surface plasmon resonance (SPR); however, CD spectroscopy is the only technique that can reveal directly what type of protein conformational change might be induced upon ligand addition. Is the protein folding retained, or does it increase the content of one element of secondary structure such as α-helix, β-strand, β-turn, or unordered at the expense of others? Certainly, this question cannot be addressed directly with fluorescence, ITC, or SPR techniques.
Although bespoke bench-top CD spectropolarimeters using a Xe lamp as the light source can operate in the far-UV-Visible spectral region (175-700 nm), the use of synchrotron light sources extends measurements to lower wavelengths in the vacuum UV (VUV) region (130-200 nm) for thin films in the solid state [5]. In the vacuum and far-UV spectral regions (125-250 nm), the photon flux and brilliance of synchrotron light sources are substantially higher than those of Xenon lamps [6,7], which means the signal-to-noise ratio of synchrotron radiation circular dichroism (SRCD) spectra is also increased.
The high photon flux and brilliance in the vacuum UV (VUV) and far-UV region of the Diamond B23 beamline can induce protein denaturation on scanning repeated consecutive SRCD spectra. This can be eliminated, however, by reducing the slit width from 0.500 mm to 0.200 mm, corresponding to bandwidths of 1.2 nm and 0.5 nm, respectively [8], or by rotating the cuvette cell about the incident light propagation axis using a motorized rotating cylindrical cell holder (Figure 1). However, when UV denaturation is produced, a significant decrease of secondary structure is observed for proteins with α-helical and/or β-sheet conformations. Upon scanning repeated consecutive SRCD spectra in the 185-250 nm region, the rate of the conformational changes induced by the high photon flux of the light source has been found to depend on the protein primary sequence as well as on the solution environment, such as solvent composition, pH, temperature, protein concentration, and chemical agents [7,[9][10][11]. This has led to the development at the Diamond B23 beamline of a protein UV denaturation assay, conducted by simply scanning consecutive repeated CD measurements in the 180-250 nm region, to be used as a method to assess the relative photo-stability of proteins as a function of their formulations (Figure 2). As the rate of UV denaturation is very sensitive to the radiation power of the irradiating light source and the dose of irradiation, it is recommended to determine the number of repeated consecutive scans with the equipment using human serum albumin essentially fatty acid and immunoglobulin free (HSAff) as the control (Figure 2C). Depending on the radiation power, for example with B23, 20 scans are sufficient, whereas with a Chirascan CD instrument with 4 nm bandwidth at least 50 repeated scans are required to induce a significant HSAff denaturation that can be used as a parameter to investigate the proteins of interest. This can be used to grade the protein photo-stability with respect to albumin under similar environmental conditions. Interestingly, the relative rate of UV denaturation was found to be significantly affected by ligand binding interactions [7][8][9][10][11] (Figure 2A-C). This qualitative assay turned out to be very useful to determine binding interactions of molecules with negligible UV absorption, such as drugs devoid of aromatic or π-conjugated moieties, saturated lipids, and sugars, which are otherwise challenging systems to study by other methods and techniques (Figure 2A-C).
Similar to thermal denaturation, UV denaturation varies from protein to protein, showing different conformational changes that may be correlated with the degree of protein stability ( Figure 2E). Different mechanisms to explain this phenomenon have been proposed, involving the generation of oxygen free radicals and other non-radical reactive oxygen species from the aqueous solution medium [12], the heating of internal bound water molecules implicated in maintaining the protein native structure [13], or both.
The aim of this study is to determine whether the protein UV denaturation is due to the local heating from irradiated protein-bound water molecules or from the formation of free radical products. The understanding of the origin of the protein denaturation by irradiating the sample in the far-UV region with high photon flux light sources can also be used as a UV protein denaturation assay to assess and characterize the photo-stability of protein formulations and ligand binding interactions, in particular for ligand molecules devoid of or with negligible UV absorption in the far-UV region. [14]. (C) Rate of UV protein unfolding (denaturation) of human serum albumin fatty acid free (HSAff) in H2O with and without ligands such as fatty acid (octanoic acid), diazepam, and tolbutamide plotted at 190 nm for 100 repeated consecutive SRCD spectra. Redrawn from [15]. (D) Rate of UV protein denaturation of antibody Mab-1 in different formulations assessed with 30 repeated consecutive SRCD spectra. Redrawn from [11]. The rates reported in percentage of protein folding were calculated by dividing the CD intensity at a fixed wavelength of the protein-ligand complex by that of the protein alone at the same concentration as that of the complex and multiplied by 100. (E) Plot of SRCD signal at 209 nm as a function of UV-exposure time, which is equivalent to light irradiation when scanning repeated consecutive SRCD spectra, for free human serum albumin HSA (red circles for experimental data and blue line for fitting) and for HSA bound to AuNP (black squares for experimental data and orange line for fitting). Redrawn from [16]. (F) Plot of repeated consecutive SRCD signal at 190 nm of three related vasoactive intestinal peptides (VIP) in MES buffer: a wild type (peptide 1, red), one mutated W25S (peptide 2, blue) and one mutated W25S with palmitoylated K20 (peptide 3, green). Redrawn from [11].
Assessment of the Hypothesis That Protein Denaturation Induced by UV Irradiation Originates from the Heating of Protein-Bound Water Molecules
The heating hypothesis was assessed by comparing the effects of heating from 5 °C to 90 °C against those of far-UV irradiation at 23 °C, produced by scanning 50 repeated consecutive SRCD spectra, on two aqueous solutions of human serum albumin fatty acid and essentially globulin free (HSAff) at 5 µM and 10 µM concentration, respectively, with each denaturation type repeated for both solutions.
In Figure 3, the results of thermal (Figure 3A-F) and UV denaturation (Figure 3G-I) of 5 µM and 10 µM HSA aqueous solutions by CD spectroscopy are compared head-to-head. The melting temperature of both HSA solutions remained unchanged (Figure 3C,D), with identical first derivatives at 60 °C (Figure 3E,F), unlike the rates of UV denaturation (Figure 3I), which clearly showed significant differences that were quantified by the rates of change obtained using the exponential fitting equation. For each CD spectrum, the secondary structure estimation (SSE) calculated using the CONTINLL algorithm [17] showed an increased content of β-strand and β-turns at the expense of the α-helix content (Figure 4). Both thermal denaturation assays were partially reversible, with 86% recovery of the α-helix content when cooled back to room temperature (Table 1).
For the 5 µM and 10 µM HSAff aqueous solutions, the UV irradiation experiment conducted by scanning 50 repeated consecutive SRCD spectra at a constant 20 °C using 2 nm bandwidth was found to be concentration dependent (Figure 3G,H), revealing different rates of spectral changes (Figure 3I). Similar to the thermal denaturation analysis, the SSE calculated for each SRCD spectrum was displayed for each element of the secondary structure (Figure 4). The fitting of the CD data at 191 nm using the exponential decay equation described in the Materials and Methods section showed a rate of protein denaturation about 20 times greater for the diluted 5 µM HSAff solution (k = 0.0855 s−1) than for the 10 µM solution (k = 0.0041 s−1). This implies that the relative difference in protein denaturation rates induced by UV irradiation, being concentration sensitive, is diffusion dependent. For both denatured states at the 50th spectrum, there is an increased viscosity compared to the corresponding first spectrum (Figure 3I), which is consistent with the observations reported by Davies et al. [18] for protein aqueous solutions irradiated with UV light. Another important difference between the two types of experiments was that the thermal denaturation was partially reversible (Figure 5 and Table 1) whereas the UV photo-denaturation was completely non-reversible.
If the local heating hypothesis of bound water to the protein proposed by Wien et al. (2005) [13] were correct, the effects of UV irradiation on both HSAff solutions of 5 µM and 10 µM should be partially reversible. As this was not the case, the heating hypothesis was not the mechanism for protein denaturation when UV irradiated.
Although both protein thermal and UV denaturation showed an increase in β-strand content mainly at the expense of the decreased α-helical content, this occurred in different pathways ( Figure 4 and Table 1).
Figure 4: SRCD data of 5 µM and 10 µM HSAff for UV irradiation (1st and 50th spectra at 20 °C) and from CD data measured at 20 °C, 90 °C and cooled back to 20 °C. SSE was calculated using CONTINLL [17] with the SP37 data set of 37 soluble proteins.
Assessment of the Hypothesis That the Protein Denaturation Induced by UV Irradiation Originates from the Formation of Free Radicals
Electron spin resonance or electron paramagnetic resonance (ESR or EPR) are techniques that can detect specifically and directly the presence of free radicals (FRs). The use of a spin-trapping probe can compensate for their relatively low sensitivity with aqueous solutions at room temperature. However, because they require a specialized and expensive ESR spectrometer, alternative methods have been developed for FR detection with more readily available equipment, based on the detection of FR reaction products using a variety of probe molecules (for exhaustive reviews, see Refs. [19,20]). The best and simplest probes are molecules with optical properties that change after reacting with FRs. In particular, fluorescent probes permit the detection of FRs with higher sensitivity compared to other spectroscopic probes. Among the available fluorescent probes, especially useful are the "positive" fluorogenic probes. These probes are non-fluorescent or very weakly fluorescent molecules that become substantially fluorescent upon reaction with FRs. In this study, we used dihydrorhodamine 123 (DHR-123), a non-fluorescent molecule in its initial state, which is converted to the highly fluorescent form Rhodamine 123 (Rh-123) upon reaction with free radicals (Scheme S1 in Supplementary Materials) [21].
The capability of the high photon flux of the B23 beamline in the far-UV region to generate free radicals in aqueous solutions was assessed by fluorescence spectroscopy using a fluorescent sensor. Initially, the generation of FRs was evaluated by fluorescence spectroscopy after irradiation at 254 nm with the UV-C lamp of a BioLink 254 photo-reactor, whilst the effect of UV exposure on the protein conformation was monitored using the benchtop Chirascan Plus CD instrument with 1 nm bandwidth, which did not denature the investigated protein as a result of UV irradiation on scanning the CD spectra (Figure 6). For this assay, an aqueous solution of DHR-123 was kept under UV-C exposure at 254 nm at room temperature for different amounts of time, from 0 to 60 s. The corresponding fluorescence emission spectra in the 510-700 nm range when excited at 505 nm were measured for the UV-C irradiated and non-irradiated solutions as control. In Figure S1 in the Supplementary Materials section, the fluorescence of DHR-123 in aqueous PBS buffer solution was found to increase as a function of UV-C irradiation. The rate of fluorescence change (Figure S1, insert, black line) represented the rate of conversion of the non-fluorescent DHR-123 into the fluorescent Rh-123 (Scheme S1), which reflected the rate of free radical formation.
Since the non-fluorescent DHR-123 can be oxidized via photosensitization, a parallel experiment was conducted in the presence of 0.1 mM ascorbate, a known free radical scavenger [22]. Under this condition, no significant fluorescence emission was detected after UV-C irradiation (Figure S1, insert, blue line). These results indicated that the transformation of DHR-123 into the fluorescent Rh-123 was due to the action of free radicals generated by the UV irradiation of the aqueous buffer solution and not by a photo-oxidation process. At micromolar solute concentrations, free radicals are predominantly produced from water molecules due to their much higher concentration (55 M) [20], involving many different reactive oxygen species (ROS), including HO•, HO2•, and O2•−. These free radicals have limited lifetimes and limited diffusion ranges, for example a few nanometers for the most abundant hydroxyl free radical [23,24]; hence, they are considered to be localized at the site of irradiation.
As a control, the fluorescence emission of DHR-123 was also measured as a function of temperature. Ramping between 5 and 85 °C did not produce any significant increase in the fluorescence emission, as shown in Figure S2. This result suggests that the increase in the DHR-123 maximum fluorescence emission at 524 nm observed when the sample is irradiated with the consecutive repeated SRCD spectra is related to the generation of ROS in the aqueous media and is attributable only to the exposure to UV light and not to the heating of water molecules, whether bound to the protein or free.
DHR-123 in the presence of ovalbumin (OVA) showed an enhanced fluorescence at 524 nm (Figure 6), consistent with an increased production of free radicals. The initial rate of change rose from 0.245 s−1 in the absence of the protein to 0.302 s−1 in its presence. This may be due to the injection of electrons from the side chains of UV-excited aromatic residues (Trp, Tyr and Phe), which can be captured both by O2 present in the aqueous solution and by disulphide bridges, leading to the formation of HO2• or O2• and disulphide electron adduct radicals, respectively [19]. As the addition of protein increases the viscosity of the solution, the motion of molecules decreases, favoring the interaction of photons with water molecules as well as that of ROS with the non-fluorescent probe DHR-123.
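One plausible way to compute such initial rates (a sketch with hypothetical readings; the exact normalization used by the authors is not stated here) is a straight-line fit to the first few points of the plateau-normalized fluorescence rise:

```python
import numpy as np

def initial_rate(t, F, n_points=5):
    """Initial rate of fluorescence change, normalized to the final plateau,
    from a straight-line fit to the first n_points measurements."""
    slope, _ = np.polyfit(t[:n_points], F[:n_points] / F[-1], 1)
    return slope  # per second, if t is given in seconds

# Hypothetical irradiation times (s) and DHR-123 emission at 524 nm (a.u.).
t = np.array([0, 2, 4, 6, 8, 10, 20, 40, 60], dtype=float)
F_no_protein = np.array([1.0, 1.5, 2.0, 2.4, 2.9, 3.3, 5.0, 7.2, 8.0])
F_with_ova   = np.array([1.0, 1.6, 2.2, 2.8, 3.3, 3.8, 5.8, 8.3, 9.0])
print(initial_rate(t, F_no_protein), initial_rate(t, F_with_ova))
```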
The conformational effect of UV irradiation on the secondary structure of ovalbumin was evaluated by CD spectroscopy using the same protein concentration and cuvette pathlength as in the fluorescence experiments. The far-UV CD spectra of ovalbumin as a function of UV irradiation up to 900 s were qualitatively diagnostic of protein denaturation with loss of α-helical content.
These results indicated that the mechanism of UV photo-denaturation of a protein was due to the formation of ROS from the aqueous medium, which denatured the protein.
The reaction of proteins with ROS may occur via hydrogen abstraction from saturated carbons or hydroxyl addition to unsaturated double bonds or aromatic rings [25]. The H-abstraction depends on the single-bond strength (the bond energies for C-H, N-H, O-H, and S-H are 411, 386, 459, and 363 kJ/mol, respectively, at 25 °C) and is influenced by the electronic properties of the substituent: an electron-donating substituent increases the reactivity, while an electron-withdrawing one decreases it. The S-H bond, having the lowest bond energy, makes the cysteine residue one of the most reactive moieties for H-abstraction. At neutral pH, the amine group, which is positively charged and thus electron-deficient, would not be subjected to a direct attack of free radicals. Neighboring atoms or groups able to stabilize the nascent radicals will make radical attacks more likely. Hydroxyl radicals attack preferentially the side chains of solvent-exposed amino acid residues due to their higher accessibility compared to the buried or less exposed backbone chain. This may oxidize the amino acid side chains of key residues and promote a loss of ordered structure without the formation of protein fragments. An extensive description of these two processes is available in Neves-Petersen et al. [19,20]. This is consistent with the observation that gel electrophoresis (PAGE) of irradiated and non-irradiated proteins showed no detectable protein backbone cleavage upon far-UV irradiation [13], similar to the experiments discussed here.
The fact that only four consecutive repeated SRCD spectra in the far-UV region (185-260 nm) at a constant 23 °C were sufficient to induce a change in the fluorescence spectrum of DHR-123 (Figure S3) indeed demonstrated the production of ROS when the sample was irradiated with the powerful B23 light source.
Fluorescence experiments were carried out using a Chirascan-Plus CD spectrometer with a fluorescence attachment (Applied Photophysics Ltd., Leatherhead, UK). Briefly, 1.5 µL of DHR-123 in DMSO (2 mg/mL) were added to 3000 µL of 20 mM PBS, pH 7.4, in a Suprasil fluorescence cell with 1.0 cm path length and irradiated using a BioLink 254 photo-reactor (Vilber, Eberhardzell, Germany). The fluorescence emission spectrum of the irradiated DHR-123 solution in the 510-700 nm range (Ex 505 nm, slit 4 nm) was recorded at different irradiation times. The emission spectra of the non-irradiated solution were recorded over a temperature range between 5 °C and 85 °C at 10 °C increments with 8 min incubation time for equilibration.
The protein denaturation by heating was monitored by CD spectroscopy using a Chirascan Plus (Applied Photophysics Ltd., Leatherhead, UK) with 1 nm bandwidth, which did not promote any protein UV denaturation in each of the repeated scans, at temperatures increasing from 5 °C to 90 °C in 5 °C increments with 5 min incubation time.
The UV irradiation was conducted at the Diamond Light Source synchrotron (Harwell Science and Innovation Campus, Didcot, UK) using beamline B23 module B for SRCD at a constant 23 °C, and the data were analyzed with the CDApps suite of programs [26]. It is important for the investigated protein systems to keep the same number of repeated scans over the same wavelength range at the same scan speed. For example, 50 repeated consecutive spectra in the 178-255 nm wavelength range took 150 min to complete, which was similar to the duration of the thermal denaturation.
The rates of denaturation from CD and SRCD data at 191 nm were calculated by fitting the exponential decay equation y = y0 + A·e^(−x/t) (ExpDec1 fit of Origin, OriginLab), where A is the amplitude, t the time constant, and y0 the offset. The rate of denaturation was then obtained as k = 1/t (s−1).
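A minimal sketch of this fitting step, using hypothetical data and scipy's curve_fit in place of Origin's ExpDec1 routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_dec1(x, y0, A, t):
    """ExpDec1-style model: y = y0 + A * exp(-x / t)."""
    return y0 + A * np.exp(-x / t)

# Hypothetical exposure times (s) and CD intensities at 191 nm (mdeg).
x = np.linspace(0, 9000, 50)
y = 2.0 + 8.0 * np.exp(-x / 1200.0) + np.random.default_rng(1).normal(0, 0.05, x.size)

popt, _ = curve_fit(exp_dec1, x, y, p0=(y[-1], y[0] - y[-1], 1000.0))
y0, A, t = popt
k = 1.0 / t                                  # denaturation rate (s^-1)
print(f"y0={y0:.3f}, A={A:.3f}, t={t:.1f} s, k={k:.2e} s^-1")
```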
Conclusions
Peptides and proteins as biotherapeutics are mainstream new medicinal agents, whose development is often hampered by a lack of, or reduced, stability during ageing under a variety of environmental factors such as temperature, light, and oxidation.
The protein UV denaturation assay developed at the Diamond B23 beamline for SRCD provides a facile, accurate, and fast assessment of the relative protein photo-stability as a function of the environment, such as solvent/buffer composition, pH, redox conditions, and surfactants, allowing the conditions that enhance protein photo-stability to be screened. It can be used to qualitatively assess the protein binding interactions of UV-transparent ligands, or ligands with negligible absorption, such as lipids, sugars, and metal ions.
The aim of this study was to demonstrate that the UV protein denaturation was not due to thermal effects but to free radicals. To unambiguously determine the origin of protein denaturation, UV-irradiation experiments to induce the denaturation of albumin and ovalbumin proteins by heating and UV irradiation at constant room temperature were performed.
We have demonstrated that the hypothesis that protein denaturation induced by UV irradiation originates from the local heating of the water molecules bound to the protein, as proposed by Wien et al. [13], is not correct. At first glance, the comparison of the two methods of denaturation, one by heating from 5 to 90 °C and the other by scanning with the B23 beamline 50 repeated consecutive spectra in the 178-255 nm region, both performed within the same length of time, about 150 min, revealed apparent spectral similarity. However, several major differences were observed between the two methods. The thermal denaturation was found to have a degree of reversibility and was protein concentration independent, whereas the UV denaturation was irreversible and protein concentration dependent, with substantial differences in the rates of denaturation. Finally, the protein unfolding induced by UV irradiation and by heating occurred along distinct paths, with different amounts of secondary structure estimated from the CD data. Indeed, for a given protein, in this case HSA, the detailed analysis of the thermal and UV denaturation experiments showed unambiguously substantial differences, as illustrated in Figure 3.
We determined that the origin of protein denaturation by UV irradiation was solely due to free radical and ROS formation, revealed by fluorescence spectroscopy using the temperature-insensitive fluorogenic probe DHR-123, as shown in Figure 6. The fact that DHR-123 became fluorescent, when converted into Rh-123, only when irradiated at a single wavelength (254 nm) using UV lamps or when scanning four consecutive repeated SRCD spectra in the far-UV region using the B23 beamline, and not by heating (Figure S2), was unambiguously indicative that the denaturation of aqueous proteins was indeed promoted by the reactive FR-ROS species and not by the heating effect on the water molecules bound to the protein when UV irradiated.
The scan speed of the measurement dictates the irradiation time for the consecutive repeated scans. In this manner the irradiation time can be quantified. Of course, the SRCD measurement induces a protein conformational change while monitoring it due to the high photon flux. However, as the irradiation time, the spectral range, and scan speed are known, the rate of denaturation is quantifiable.
This manuscript does not present a comparison of the UV photo-stability of different proteins but rather an assay to assess the relative photo-stability of a protein under a variety of environmental conditions, such as solvent polarity, ionic strength, and ligand binding interactions.
In summary, protein denaturation induced by FRs and ROS promoted by irradiation at 254 nm using UV lamps or by repeated consecutive SRCD spectra is a facile, accurate and fast method to assess the protein conformation stability and qualitatively the binding interaction of transparent ligands that is distinct and complementary to the thermal denaturation method. | 6,974.8 | 2021-06-01T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Finite-time stabilization of switched nonlinear singular systems with asynchronous switching
This paper is concerned with the finite-time stabilization of a class of switched nonlinear singular systems under asynchronous control. Asynchronism here refers to the delays in switching between the controller and the subsystem. First, the dynamic decomposition technique is used to prove that such a switched singular system is regular and impulse-free. Secondly, based on the state solutions of the closed-loop system in the matched and mismatched time periods, instead of constructing a Lyapunov function, sufficient conditions for the finite-time stability of the asynchronous switched singular system are given; no requirement is imposed that the subsystems be stable. Then, the mode-dependent state feedback controller that makes the original system stable is derived in the form of strict linear matrix inequalities. Finally, numerical examples are given to verify the feasibility and validity of the results.
Introduction
A switched system is a class of hybrid systems consisting of several continuous or discrete dynamic subsystems and a given switching rule. When simulating complex models, switched systems often have an advantage over a single system, so they are widely used in many fields such as switching power converters, aircraft and air-traffic control, see [1][2][3][4][5]. In recent years, many studies on switched systems have emerged, see [6] and [7,8] and references therein. Most studies on switched systems are concerned with Lyapunov global asymptotic stability. However, in practice, we need the system to be stable within a finite-time interval instead of an infinite interval. The finite-time stability problem of a switched system has been discussed in [9][10][11][12]. Therefore, it is more valuable in some situations to study the transient performance of the system in a finite-time interval than Lyapunov asymptotic stability. The difference between the concept of finite-time stability and Lyapunov stability is mainly manifested in two aspects: one is that finite-time stability analyzes the system within a limited time interval; the other is that finite-time stability requires preset boundaries on the system variables. A switched singular system means that the system contains at least one singular subsystem. These systems widely exist in power systems, networked control systems, robotics and other practical systems [13][14][15]. Therefore, the study of switched singular systems has attracted the attention of many workers and has achieved rich research results [16][17][18]. Compared with general switched systems, the stability analysis and controller design of switched singular systems are more complicated due to the problems of regularity, uniform initial state and impulse-mode cancelation. When more detailed and precise models are pursued, models of nonlinear rather than linear singular systems are established. It is inevitable that switching signals will take a certain amount of time in the transmission process, as even modern technology cannot completely eliminate the time delay. Like the butterfly effect, even a small delay of the controller may have a great influence on the system. Thus, in order to simulate a more realistic real system, many workers focus their research on meaningful asynchronous controllers [19][20][21][22].
In the previous paper on switched singular systems [23], based on the equivalent dynamics-decomposition form, the exact description of the state jump is characterized at the moment of system switching. On the one hand, this state jump comes from the switching law of piecewise-constant values, and on the other hand, it comes from the constraint of algebraic equations. On the basis of the refined description for state jumps proposed above, the finite-time stabilization problem of switched linear singular systems has been considered in [24] without considering the occurrence of asynchronism. Some conditions to ensure that the state remains in a bounded region have been derived via the Lyapunov approach. The finite-time stability problem and finite-time bounded problem of switched singular systems with unstable subsystems have been presented by the authors in [25]. With the help of illustrative examples, the criterion given in [25] provides less conservative results than the approach given in [24]. For the vast majority of methods used to solve the finite-time stability of switched systems, Lyapunov methods have been proven to be one of the most efficient approaches [5,26,27]. Moreover, the Lyapunov function method is also a very effective tool when studying fractional-order systems, see [28][29][30][31]. The efficiency of those methods, however, depends crucially on appropriate construction of the Lyapunov-Krasovskii (L-K) functions. Since there is no uniform method to construct L-K functions, it is not easy to construct suitable L-K functions for different systems. Hence, we are curious about one thing: can we solve the problem of finite-time stability of switched singular systems under asynchronous control without using the Lyapunov function method? This is the first motivation of this research.
In fact, the solution of the state equation of the system is an intuitive and useful tool in studying the stability of the system, yet few workers use it. There are two main reasons for this phenomenon. On the one hand, the structure and state equation of the switching system are complex, the switching signals are constantly changing. Meanwhile, the subsystems are alternating so it is difficult to obtain the state solution of the system. On the other hand, even if the state solution is obtained, it is difficult to find effective analysis tools and methods. Thus, starting from the original solution of the system and combining the model with the mode-dependent average dwell time to study the asynchronous problem of a switched singular system has not been given enough attention, which is the second motivation of this paper.
The objective of this paper is twofold. The first is to find an appropriate switching law that makes the system stable in finite time. The other is to find a specific, solvable form of asynchronous controller. Based on the problems raised above, the contributions of this paper are as follows.
(i) The regular and impulse-free properties of switched singular systems is proved based on the dynamic decomposition technique and there is no requirement that all subsystems must be stable. Then, the finite-time stability (FTS) problem of a switched singular system is transformed into the FTS problem of reduced-order switched systems.
(ii) In contrast to [24,25,32], we do not construct any Lyapunov functions in our research. Starting with the state-equation solution of the switched system with nonlinear disturbance and taking the switching time point as the boundary, the operation time period of each switched system is analyzed, and the state solutions of the closed-loop system in the matched time period and the mismatched time period are given, and the state solutions of the whole time period are obtained by alternating iterative derivation.
(iii) Based on the mathematical derivation and analysis of the state solution, and combined with the average dwell time method, the sufficient conditions for the FTS of the closed-loop switched singular system are obtained. Then, sufficient conditions for the system to be FTS are given in the form of strict linear matrix inequality and the gain matrix form of the controller is presented. Compared with [25], sufficient conditions with less conservatism can be obtained to determine the FTS of a switched singular system. The rest of this paper is organized as follows. In Sect. 2, definitions and lemmas useful for the proof of theorems in this paper are listed. Section 3 presents the main results. Based on the decomposition transformation of the original system and taking the asynchronous controller into account, sufficient conditions for finite-time stability of switched singular systems are given. The proof process is concise and to the point. Two specific examples along with numerical and simulation results are provided in Sect. 4. Section 5 gives the conclusion of the work of this paper.
Notations: The notations used in this paper are fairly standard. R^n denotes the n-dimensional Euclidean space over the reals, and R^(m×n) is the set of all m × n real matrices. N+ represents the set of all positive integers. "*" stands for the symmetric term in a symmetric matrix. Re(A) represents the real parts of the eigenvalues of matrix A. P > 0 (P < 0) means that P is real symmetric and positive-definite (negative-definite). Matrix P > Q (P ≥ Q) is equivalent to P − Q > 0 (P − Q ≥ 0). λmax(P) (λmin(P)) denotes the maximum (minimum) eigenvalue of P, and ‖·‖ is the Euclidean norm.
Problem statement and preliminaries
Consider a class of nonlinear switched singular systems described by equation (1), where x(t) ∈ R^n and u(t) ∈ R^m are the state vector and control input, and the index σ(t) : [0, ∞) → N = {1, 2, . . . , N} is a piecewise right-continuous function of time t or of x(t), with N ∈ N+ the number of subsystems. The switching sequence satisfies t0 < t1 < t2 < · · · ; when t ∈ [ti, ti+1) and σ(t) = li ∈ N, we say that subsystem li is activated. For all σ(t), E_σ(t) is a singular matrix satisfying rank E_σ(t) = r < n. f_σ(t)(t, x(t)) is a continuously differentiable nonlinear perturbation function of x(t) with f_σ(t)(t, 0) = 0, and it satisfies the quadratic constraint (2). In practical engineering applications, because identifying the active subsystem and engaging the corresponding controller take some time, there will be a switching time delay in the controller, which results in switching asynchrony between them. Therefore, in this paper, a controller of the form (3) is considered, where τ(t) is the switching delay of the controller relative to the subsystem and is bounded. Here, without loss of generality [33,34], the upper bound of the switching delay is assumed to be known in advance. By substituting this expression into formula (1), we obtain the closed-loop system (4). The purpose here is to design a state feedback controller (3) such that the closed-loop system (4) is admissible. The switching time series of the controller is t0 < t1 + τ(t1) < · · · < ti + τ(ti) < · · · ; meanwhile, t̃i is defined as ti + τ(ti). Further, system (4) can be written in the form (5). For simplicity, we use the subscripts li and li−1 to substitute for σ(ti) and σ(ti−1), and the resulting formula can be abbreviated as (6). In order to prove the theorems, we need some definitions and lemmas.
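As a hedged sketch (the displayed equations are not reproduced above, and the exact forms and constraint in the source may differ), the open-loop system (1), the asynchronous state feedback (3), and the closed-loop system (4) referred to in the preceding paragraph are consistent with:

```latex
E_{\sigma(t)}\dot{x}(t) = A_{\sigma(t)}x(t) + B_{\sigma(t)}u(t) + f_{\sigma(t)}(t, x(t)), \qquad (1)
u(t) = K_{\sigma(t-\tau(t))}\,x(t), \qquad (3)
E_{\sigma(t)}\dot{x}(t) = \bigl(A_{\sigma(t)} + B_{\sigma(t)}K_{\sigma(t-\tau(t))}\bigr)x(t) + f_{\sigma(t)}(t, x(t)). \qquad (4)
```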
Definition 2.1 ([25])
For the switching signal σ(t) of system (1) and any t2 > t1 ≥ 0, let Nσli(t1, t2) denote the number of activations of the li-th subsystem over (t1, t2) and Tli(t1, t2) the total running time of the li-th mode over (t1, t2). If Nσli(t1, t2) ≤ N0li + Tli(t1, t2)/τali holds for τali > 0 and N0li ≥ 0, then τali is called the mode-dependent average dwell time and N0li is called a chatter bound of the switching signal σ(t).
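As an illustration of this definition (a sketch, not from the paper), the function below checks the mode-dependent average dwell time bound Nσli(t1, t2) ≤ N0li + Tli(t1, t2)/τali for a given mode on a piecewise-constant switching signal:

```python
def mdadt_satisfied(switch_times, modes, l, t1, t2, tau_al, N0l):
    """Check the mode-dependent average dwell time condition for mode l over (t1, t2).

    switch_times: strictly increasing switching instants t_0 < t_1 < ...
    modes:        mode active on [t_i, t_{i+1}); len(modes) == len(switch_times)
    """
    horizon = switch_times + [t2]
    N_l = 0      # number of activations of mode l intersecting (t1, t2)
    T_l = 0.0    # total running time of mode l inside (t1, t2)
    for start, stop, m in zip(horizon[:-1], horizon[1:], modes):
        lo, hi = max(start, t1), min(stop, t2)
        if m == l and hi > lo:
            N_l += 1
            T_l += hi - lo
    return N_l <= N0l + T_l / tau_al

# Example: two modes alternating every 1.5 s, checked for mode 1 with tau_a1 = 1.2.
times = [0.0, 1.5, 3.0, 4.5, 6.0]
modes = [1, 2, 1, 2, 1]
print(mdadt_satisfied(times, modes, l=1, t1=0.0, t2=7.5, tau_al=1.2, N0l=1))
```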
Definition 2.3 ([24]
) For three given positive numbers c1, c2, T, with c1 < c2, a positive-definite matrix R > 0 and a given switching signal σ(t) ∈ N, the switched nonlinear singular system (1) is said to be finite-time stabilized under an appropriate control input u(t) with respect to (c1, c2, T, R, σ), where A, B, C and D are any given real matrices with appropriate dimensions.
Lemma 2.2 ([36]) Let u, v and w be nonnegative piecewise-continuous functions on [a, ∞) for which the integral inequality of [36] holds, where a and c are nonnegative constants. Then the corresponding Gronwall-Bellman-type bound on u(t) holds for all t ≥ a and all r > 0.
Main results
In this section, the decomposition technique and the average dwell-time method are combined to investigate the finite-time stabilization problem for the closed-loop system (6). Since rank E_li = r < n, there exist two invertible matrices M_li and N_li that bring E_li into the block form diag(I_r, 0), with the transformed state partitioned as x̃1(t) ∈ R^r and x̃2(t) ∈ R^(n−r). Equation (6) can then be converted into a differential-algebraic pair; supposing Ã_σ22 is nonsingular, the algebraic part can be solved for x̃2(t) and system (6) can be rewritten in the reduced form (11). This article follows the previous definition, where x̃1(t) is called the slow subsystem variable and x̃2(t) is called the fast subsystem variable.
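One way to construct such matrices numerically is via the SVD; the sketch below is illustrative (the decomposition is not unique, and the paper's particular choice of M_li and N_li may differ):

```python
import numpy as np

def decompose_singular(E, tol=1e-10):
    """Return invertible M, N and rank r such that M @ E @ N = diag(I_r, 0)."""
    U, s, Vt = np.linalg.svd(E)
    r = int(np.sum(s > tol))
    # Scale the first r rows of U.T so that the nonzero block becomes I_r.
    M = np.diag(np.concatenate([1.0 / s[:r], np.ones(E.shape[0] - r)])) @ U.T
    N = Vt.T
    return M, N, r

E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])      # rank E = r = 2 < n = 3
M, N, r = decompose_singular(E)
print(r)
print(np.round(M @ E @ N, 8))        # diag(1, 1, 0)
```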
Remark 3.1 It should be noted that the dynamics-decomposition form is not unique because the choice of matrices M l i , N l i is not unique. According to the proof of Theorem 3.1 in reference [37], it can be seen that the properties of the system solution remain unchanged after the coefficient matrix of the system is transformed. Therefore, the regular and impulse-free nature of the solutions of (1) and (11) can be derived from each other. Some similar definitions about the pair (E l i , A l i ) appear in Theorem 1 in [35] and Definition 1 in [38].
Remark 3.2 As stated in [39], finite-time stability and Lyapunov stability are two independent concepts. The former describes the local properties of the system state, and the latter describes the global asymptotic behavior of the system solution. These two properties cannot be deduced from each other. The upper bound T of the system running time is determined according to the specific situation in a practical application. Therefore, in this study, T is a known value given in advance. At the same time, the average dwell time should be as small as possible to reduce conservatism.
The proof will be divided into two steps. Let us start with the observation that system (6) is regular and impulse free.
where the (1,1) block is 11li = Āli^T Pli + Pli^T Āli − 2αli Eli^T Pli and γ = ω^(−2) hold, then the pair (Eli, Ali) in system (6) is regular and impulse free, and system (6) has a unique solution in the neighborhood of an equilibrium point.
Proof From condition (13), we have 0 0 , we can obtain P l i 11 > 0, P l i 12 = 0. We can conclude from (14) that Substituting (9) into the above formula, we have and finally that according to Lemma 2.1, it follows that and A T l i 22 P l i 22 is nonsingular. Therefore, A l i 22 is nonsingular, by [35] and Definition 2.2, system (6) is regular and impulse free. In the neighborhood of an equilibrium point x(t) = 0, f σ (t) (t, x(t)) can be written as f σ (t) (t, x(t)) = W σ (t)0 (t)x(t) + l i (t, x(t)). Thus, system (6) can be rewritten as E σ (t)ẋ (t) = (Ā σ (t) + W σ (t)0 (t))x(t) + l i (t, x(t)). Then, from [38], we can obtain that From (14), we have According to (16) and (17), we can obtain 11l i + P T l i P l i + W T l i 0 (t)W l i 0 (t) < 0. Further, it can be obtained that Therefore, from the proof process of the first half, the approximation system E σ (t)ẋ (t) = (Ā σ (t) + W σ (t)0 (t))x(t) is regular and impulse free. The rest of the proof is the same as in reference [38], it can be concluded that system (6) has a unique solution in the neighborhood of an equilibrium point. (6), given constants 0 < c 1 < c 2 , α l i > 0, α l i l i-1 > 0, T > 0, δ > 0, and matrix R > 0, if there exist nonsingular matrices P l i , ∀l i ∈ N such that (13), (14) and
Theorem 3.2 Consider the switched singular system
where 11l i l i-1 =Ā T l i l i-1 P l i + P T l iĀ l i l i-1 -2α l i l i-1 E T l i P l i hold, then the average dwell time of the switching signal that guarantees the regular, impulse-free nature and stability of system (6) in finite time satisfies the following formula where η = i k=0 (α l k (t k+1t k ) + (α l k l k-1α l k )T l k l k-1 (0, t)).
Proof It remains to prove that system (1) is finite-time stabilized. By virtue of (15) and repeating the previous argument and using (19) leads to By the definition of a matrix eigenvalue, it can be shown that there exist invertible matrices S l i and S l i l i-1 such that where J(Ā l i 1 ), J(Ā l i l i-1 1 ) are the Jordan forms ofĀ l i 1 andĀ l i l i-1 1 , respectively, λ l i 1 , λ l i 2 , . . . , λ l i n are the eigenvalues of the matrixĀ l i 1 , λ l i l i-1 1 , λ l i l i-1 2 , . . . , λ l i l i-1 n are the eigenvalues of the matrixĀ l i l i-1 1 .
Combining (22) with (23), we deduce that eĀ l i 1 t ≤ e θ l i + 1 2 α l i t , eĀ l i l i-1 1 t ≤ e θ l i l i-1 + 1 2 α l i l i-1 t , where θ l i = ln[λ max (S l i )/λ min (S l i )], θ l i l i-1 = ln[λ max (S l i l i-1 )/λ min (S l i l i-1 )], use λ max (S l i ) to represent the maximum eigenvalue of matrix S l i , λ max (S l i l i-1 ) denotes the maximum of all eigenvalues of matrix S l i l i-1 .
Then, it can be deduced from Lemma 2.2 that That is, Using the expressions of Noting that x(t) = N σ (t)x (t), we can show that Proof In order to obtain the controller gain, we denote D T l i = P -T l i , D l i = P -1 l i . From (5), we haveĀ l i = A l i + B l i K l i ,Ā l i l i-1 = A l i + B l i K l i-1 . Pre-and postmultiplying (14) and (19) by diag{D T l i , I, I} and its transpose, respectively, and using the definition of G l i = K l i D l i , it follows that (44) and (45), respectively, and denoting G l i = K l i l i , (41)((42)) is equivalent to (44)((45)).
Remark 3.5 The proof of the theorem does not require the stability of the subsystems, and the parameter α can take different values αli for different subsystems, so the result is less conservative. Compared with Theorem 3.1 in [25], the constraint conditions of equations (17) and (18) are discarded. The subsystems and the corresponding controllers are in one-to-one correspondence; therefore, the design of the controller is only related to the subscript li and does not depend on li−1.
When the switching delay is not considered, that is, when the operation of the controller and the corresponding subsystem is synchronous, we can obtain the following corollary. It is worth noting that the controller of the system then becomes the synchronous state feedback (46). Corollary 3.1 Consider the switched singular system (6) with control input (46), given constants 0 < c1 < c2, α > 0, T > 0, δ > 0, matrix R > 0 and a full column rank matrix, and suppose (41) holds. Then, the average dwell time of the switching signal that guarantees the finite-time stabilization of system (6) with respect to (c1, c2, T, R, σ) satisfies the stated bound, where α = max{αli, αli li−1} over li, li−1 ∈ N. Moreover, the controller gain is given by (43).
Numerical example
In this section, two numerical examples are provided to demonstrate the validity and feasibility of the above results.
Example 1 Consider the switched nonlinear singular system (1) with two subsystems and matrix parameters as follows: Subsystem1: Subsystem2: Choosing two sets of matrices as follows that can transform matrices E 1 and E 2 into a unit matrix, respectively. Selecting X 1 , X 2 that satisfy equation EX 1 = EX 2 = 0 as follows Suppose W 1 = W 2 = [1 1 1], ω = 1, α 1 = 0.47, α 2 = 0.14, c 1 = 0.01, c 2 = 1. It can be calculated that τ a = 1.2. The switching signals of the subsystems and the controllers are plotted in Fig. 1, respectively. Choose the initial state response as x 0 = [1.2, -0.8, 0.6], then the state response of switched singular system (1) under the action of asynchronous controller (3) is depicted in Fig. 2. From the curve in this figure, it can be seen that the three state variables of the system tend to be stable in a finite-time interval under the action of the switching signal designed by Theorem 3.2.
Example 2 Consider a set of 2-dimensional switched singular systems selected from the numerical simulation in reference [25], from which the corresponding matrix coefficients are taken. It is easy to verify that subsystem 1 is a stable system and subsystem 2 is an unstable system. The authors of [25] investigated the finite-time stabilization of switched singular linear systems via the Lyapunov approach. In this paper, we study the finite-time stability of linear switched singular systems based on the form of the initial solution of the equation. We choose M1 = M2 = I2, N1 = [1 −1; 0 1], N2 = [1 −1; 1 0].
Let α 1 = 1.5, α 2 = 5, c 1 = 1, c 2 = 30. The average residence time was calculated to be 0.3, i.e., less than the time given in [25] of 0.38. The corresponding state responses of the 2dimensional linear switched singular system are illustrated in Fig. 3. See Table 1 for more comparison of calculation results. It can be seen that tighter dwell bounds are obtained as long as we choose appropriate parameters.
Conclusion
Finite-time stabilization problems for a class of switched nonlinear singular systems have been discussed in this paper. A controller describing the asynchronism has been presented and considered in the analysis. By decomposing the system, the regular and impulse-free nature of the switched system is proved. Without the help of the Lyapunov method, by combining the average dwell-time method with differential equation theory, sufficient conditions for finite-time stabilization of the systems are given in the form of linear matrix inequalities. In addition, the conditions for solving the parameters of the controller have been obtained. Finally, two numerical examples have been given to verify the effectiveness and correctness of the method presented in this paper. The extension of the derived results to the finite-time stabilization problem of fractional-order switched systems will be our future investigation. | 5,502 | 2021-12-01T00:00:00.000 | [
"Mathematics"
] |
The Implementation and Advantages of a Discrete Fourier Transform-Based Digital Eddy Current Testing Instrument †
An eddy current testing instrument is the core equipment for non-destructive testing (NDT) in nuclear power plants, and its performance is of great significance to ensure the safety of nuclear power units throughout their life cycle. At present, mainstream eddy current instruments use analog circuits for signal processing, whose structure is complex and which suffer from shortcomings such as large noise and weak anti-interference ability. To improve the performance of eddy current instruments, this paper proposes a digital signal processing method. In this method, ARM+FPGA is used as the core of signal processing, and a DFT digital signal processing algorithm is used instead of traditional hardware detection circuits to complete the processing of eddy current signals. A parallel DFT operation is realized in the algorithm, and up to 10 superimposed signals of different frequencies can be processed simultaneously, which further improves the detection efficiency of the instrument. The measured results show that the digital instrument designed in this paper greatly simplifies the hardware circuit, reduces the overall electronic noise level, and improves the signal-to-noise ratio and detection efficiency. The instrument supports BOBBIN, MRPC and ARRAY detection technologies, which fully meets the application needs of NDT in nuclear power plants.
Introduction
Eddy current testing technology is a non-destructive testing (NDT) method based on the principle of electromagnetic induction [1]. If a defect in a conductor interferes with the trajectory of the eddy currents, the equilibrium state will be changed, and the defect information can be obtained by detecting the change of the eddy current magnetic field [2]. Figure 1a shows the trajectory of eddy currents in a defect-free conductor when the excitation coil is applied. Figure 1b depicts the changes when there is a crack in the conductor.
Eddy current testing is essentially a magnetic field disturbance problem that can be calculated using the Maxwell equations. When the excitation signal varies time-harmonically, its mathematical model can be regarded as a derivation from the time-harmonic electromagnetic field to the disturbance electromagnetic field generated by the defect. Taking the harmonic factor $e^{j\omega t}$, $\omega > 0$, the Maxwell equations can be written as Equation (1) [3]:

$$\nabla \times \mathbf{H} = \mathbf{J}_S + j\omega \mathbf{D}, \quad \nabla \times \mathbf{E} = -j\omega \mathbf{B}, \quad \nabla \cdot \mathbf{B} = 0, \quad \nabla \cdot \mathbf{D} = \rho \tag{1}$$

where H is the magnetic field strength, J_S is the current density on the conductor surface, D is the electric displacement, B is the magnetic induction, E is the electric field intensity, and ρ is the bulk density of free charge. The solution of this equation is complex and not suitable for engineering applications.

Further research shows that changes in various factors of the conductor cause changes in the impedance of the detection coil [1]. Eddy current detection can therefore be abstracted into monitoring the impedance of the sensing coil with the following functional formula:

$$Z = F(\rho, \mu, x, f, r, h) \tag{2}$$

where Z represents the detection coil impedance, ρ the conductivity, µ the magnetic permeability, x the material defect, f the excitation coil frequency, r the probe radius, and h the distance between the test piece and the probe. In engineering applications, ρ, µ, f, r, and h are kept unchanged, so that the correspondence between the sensor coil impedance Z and the material defect x can be established. This makes eddy current testing easier to implement.
To facilitate defect analysis, changes in coil impedance are usually converted into changes in the real and imaginary parts of the signal [1]. Figure 2b is an impedance plane plot showing the trajectory of the impedance change of the test coil. Strip charts are formed on the basis of the impedance plane diagram: Figure 2a shows the strip chart in the horizontal direction, representing the real part of the test coil signal, and Figure 2c shows the strip chart in the vertical direction, representing the imaginary part of the coil signal.
Implementation of Digital Eddy Current Testing Instrument
The eddy current instrument designed in this paper is mainly used for the NDT of core components in nuclear power plants. To eliminate the influence of strong interference signals generated by adjacent support plates, multi-frequency eddy current inspection technology is required [4]. Multi-frequency eddy current testing refers to technology that can inspect at two or more operating frequencies simultaneously. The mixing channel superimposes the response signals of different frequencies to eliminate the response signal of the support plate and extract the defect signal. In the eddy current test of the heat transfer tubes of the steam generator, five frequencies are generally used at the same time [4]. This section focuses on how to implement a digital multi-frequency eddy current instrument.

Figure 3 is the schematic diagram of the digital eddy current signal processing method, the main functions of which are implemented by ARM+FPGA. The ARM is used for interaction with the host computer, receiving configuration information, and uploading detection data. The FPGA is mainly used to control the generation of excitation signals and the extraction of detection signals. The specific implementation process is described as follows:

1. Digitization of excitation signals.

Depending on the characteristics of the object to be inspected, different combinations of frequencies are set. Figure 3 shows the flow when configuring five different frequencies. Each frequency can be individually configured for its frequency, phase, and amplitude. The sinusoidal signals of different frequencies are converted into digital sine waves through Direct Digital Frequency Synthesis (DDS). DDS is based on sampling theory: the signal waveform is sampled at very small phase intervals, and the amplitude corresponding to each phase is calculated to form a phase-amplitude table for generating the desired waveform [5]. The resulting excitation signal has the advantages of high resolution and fast conversion speed, its stability and accuracy match those of the reference frequency, and fine frequency adjustment can be performed over a wide range.
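As a rough illustration of the phase-amplitude table idea behind DDS, the following Python sketch builds a lookup table and steps a phase accumulator to synthesize a sine wave. The table size, accumulator width, and clock rate are illustrative assumptions, not the parameters of the instrument described here.

```python
import numpy as np

# Minimal DDS sketch (illustrative parameters, not the instrument's actual design).
TABLE_BITS = 12                      # phase-amplitude table with 2**12 entries
ACC_BITS = 32                        # width of the phase accumulator
F_CLK = 50e6                         # assumed DDS clock frequency in Hz

# Pre-computed phase-amplitude table: one sine period sampled at fine phase steps.
table = np.sin(2 * np.pi * np.arange(2**TABLE_BITS) / 2**TABLE_BITS)

def dds_wave(f_out, n_samples, phase_offset=0.0):
    """Generate n_samples of a sine at f_out Hz using a phase accumulator."""
    # Frequency tuning word: how far the accumulator advances per clock tick.
    ftw = int(round(f_out / F_CLK * 2**ACC_BITS))
    acc = int((phase_offset / (2 * np.pi)) * 2**ACC_BITS)
    out = np.empty(n_samples)
    for i in range(n_samples):
        # The top TABLE_BITS of the accumulator index the lookup table.
        out[i] = table[acc >> (ACC_BITS - TABLE_BITS)]
        acc = (acc + ftw) & (2**ACC_BITS - 1)   # wrap around one full period
    return out

# Example: a 100 kHz excitation tone sampled at the DDS clock rate.
tone = dds_wave(100e3, 2000)
```

Changing the frequency tuning word changes the output frequency with the fine resolution the text describes, since the step size is a fraction of the accumulator range rather than of the table size.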
As shown in Figure 4a, the excitation signal of an eddy current instrument usually uses continuous sine waves, which are easy to implement. In the application, a continuous signal of a specific length is intercepted according to the set eddy current signal sampling rate (f_s) and used for subsequent calculations; this specific length is called the timeslot (T), and it is easy to obtain T = 1/f_s. It is difficult for T to be exactly an integer multiple of the excitation signal period, which results in inconsistency between the intercepted signals and affects the detection results. As shown in Figure 4b, in this design DDS is used to generate stable, repetitive signals so that the excitation signal is identical in each timeslot and the ADC sampling values are identical under the same defect, enhancing the repeatability of the instrument's response to the same defect.
2. Excitation signal and detection signal processing.

The digitized sinusoidal signals are superimposed by calculating $\sum_i A_i \cos(2\pi \omega_i t + \phi_i)$; when superimposing, the phases of the different frequencies need to be adjusted to avoid the signal peaks adding up and causing the amplitude to go out of range. Then, the digital signal is converted to analog by a DAC; at this point the signal has no drive capability and needs to be amplified by a power amplifier to drive the excitation probe.

The induced signal generated on the detection coil contains a lot of high-frequency noise that needs to be filtered out by a low-pass filter. The amplitude of the detection signal is generally only a few millivolts and is prone to attenuation; when attenuated to a certain extent, it becomes an invalid signal. Therefore, this method adds an amplifier to the detection circuit to further improve the signal quality and anti-interference ability.

3. Discrete Fourier Transform (DFT).

Using the DFT to complete signal parsing is the core of this method and is detailed below. The amplified detection signal is converted into a digital signal by AD conversion. This digital signal is a multi-frequency superimposed signal containing the defect information of the inspected object, and the real and imaginary parts corresponding to each frequency component need to be calculated to complete the signal analysis. The analytical method adopted in this paper is to take the DFT of the multi-frequency detection signal at the set frequency points; the signal is thereby transformed from the time domain to the frequency domain, the spectral components of the different frequencies are separated, and the real and imaginary parts of each frequency signal are calculated at the same time.

The detection signal obtained by the ADC is a discrete time-domain signal and, based on signal processing principles, can be expressed in the form of Equation (3).
The DFT is calculated using the correlation-based method, with formulas as follows [6]:

$$\mathrm{Re}X[k] = \sum_{n=0}^{N-1} x[n]\cos(2\pi k n / N) \tag{4}$$

$$\mathrm{Im}X[k] = -\sum_{n=0}^{N-1} x[n]\sin(2\pi k n / N) \tag{5}$$

From Equations (4) and (5), it can be seen that the DFT can extract the real and imaginary parts corresponding to the different frequency components of the detection signal, and (ReX[k], ImX[k]) can be obtained, thereby completing the analysis of the detection signal.
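A minimal Python sketch of this correlation-based extraction is shown below. The sampling rate, bin indices, amplitudes, and phases are invented example values, chosen only so that every tone falls exactly on a DFT bin; they are not parameters of the instrument.

```python
import numpy as np

def extract_re_im(x, k):
    """Correlation-based DFT at bin k: returns (ReX[k], ImX[k]) as in Eqs. (4)-(5)."""
    n = np.arange(len(x))
    re = np.sum(x * np.cos(2 * np.pi * k * n / len(x)))
    im = -np.sum(x * np.sin(2 * np.pi * k * n / len(x)))
    return re, im

# Toy multi-frequency detection signal (made-up amplitudes and phases).
N = 1000                      # samples per timeslot
fs = 1.0e6                    # assumed sampling rate; bin k corresponds to k * fs / N Hz
bins = [50, 130, 270]         # integer bins, so each tone lands exactly on a DFT bin
x = sum(a * np.cos(2 * np.pi * k * np.arange(N) / N + p)
        for a, k, p in zip([1.0, 0.5, 0.2], bins, [0.3, 1.1, 2.0]))

for k in bins:
    re, im = extract_re_im(x, k)
    # Scaling by 2/N recovers the amplitude of each cosine component.
    print(k, np.hypot(re, im) * 2 / N)
```

Because the correlation at one bin is insensitive to tones at other integer bins, the three components are recovered independently, which is the separation property the text relies on.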
Improper use of the DFT method will lead to spectrum leakage; that is, the spectral lines in the signal spectrum affect each other, so that the measurement results deviate from the actual values and false spectra with smaller amplitudes appear at frequency points on both sides of the spectral line [7]. From the time-domain perspective, the DFT treats signals as infinitely long periodic signals; the signal therefore needs to be extended, and a non-periodic signal must also be extended into a periodic one. During splicing, if the repeated fragments can be joined so as to coincide exactly with the original signal, this is called perfect stitching. If not, there are sudden changes at the splicing points, other frequency components are generated, and the surrounding frequencies draw energy away from the frequencies in the original signal, resulting in inaccurate frequency amplitudes and spectral leakage [8,9].

In order to avoid spectrum leakage, in-depth research was conducted on the DFT algorithm, and it was found that when the relationship of Equation (6) is strictly satisfied there is no spectrum leakage at all:

$$\frac{M}{N} = \frac{F_{in}}{F_s} \tag{6}$$

where M is the number of signal periods in the time domain, N is the number of sampling points, F_s is the sampling frequency, and F_in is the signal frequency. When this relationship is satisfied, the repeated periodic signal can be spliced so as to coincide exactly with the original signal, thus avoiding spectral leakage. Figure 5a shows the impedance plane when spectral leakage occurs, and Figure 5b shows the impedance plane when relationship (6) is satisfied.

Then, the parsed (ReX[k], ImX[k]) values are transmitted to the host computer for professional analysts to complete the analysis and evaluation of the eddy current detection results.
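The condition in Equation (6) can be checked, or a requested frequency can be snapped to the nearest value that satisfies it, with a few lines of Python. The sampling rate and timeslot length below are hypothetical numbers for illustration only.

```python
def is_coherent(f_in, f_s, n):
    """Check the no-leakage condition M/N = F_in/F_s for an integer number of periods M."""
    m = f_in * n / f_s
    return abs(m - round(m)) < 1e-9, round(m)

def nearest_coherent_freq(f_target, f_s, n):
    """Snap a requested excitation frequency to the nearest bin so Eq. (6) holds exactly."""
    m = max(1, round(f_target * n / f_s))
    return m * f_s / n

# Hypothetical numbers: 1 MHz sampling, 1000-point timeslot.
f_s, n = 1.0e6, 1000
for f in [100e3, 123.4e3]:
    ok, m = is_coherent(f, f_s, n)
    print(f, ok, m, "->", nearest_coherent_freq(f, f_s, n))
```

In this sketch 100 kHz satisfies the condition (exactly 100 periods per timeslot), while 123.4 kHz does not and would be adjusted to 123 kHz before being loaded into the DDS.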
Advantages of Digital Eddy Current Instrument

Compared with the eddy current instrument using analog circuits for signal processing, the digital instrument designed in this paper mainly has the following advantages:

1. Higher detection efficiency.
Due to the limitations of the implementation mechanism, the analog eddy current instrument uses hardware multipliers to extract the detection signals. The hardware detection circuit needs to complete the extraction of the different frequency signals in sequence, which is inefficient, and it can only set up to five different detection frequencies simultaneously.
The digital eddy current meter can use the computing power of the FPGA to extract the real and imaginary parts of different frequency signals in parallel, which greatly improves the detection efficiency, and this method can support up to 10 signals of different frequencies at the same time, expanding the application scenarios of the instrument [10].
2. Higher signal-to-noise ratio.

It can be seen from the principle of the DFT that, when an N-point DFT is performed on the signal component at a given frequency, the signal is superimposed in phase, so sampling N times increases its amplitude in the frequency domain by a factor of N and its power by a factor of N². The white noise present in the detection signal, by contrast, superimposes non-coherently in the DFT: the noise amplitude in the frequency domain increases by a factor of √N, so the noise power increases by a factor of N. Therefore, the signal-to-noise ratio (signal power divided by noise power) of the digital instrument increases by a factor of N.

Analog instruments, on the other hand, extract the effective signals through hardware circuits without improving the signal-to-noise ratio. When the noise floor is large, there is a risk in engineering applications that the measured signal will be drowned out by strong noise. Therefore, the digital instrument designed in this paper has a higher signal-to-noise ratio.
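The processing gain described above can be reproduced with a toy simulation: the extracted tone amplitude grows like N while white-noise bins grow like √N, so the power SNR at the tone bin improves roughly in proportion to N. All numbers here are made up for illustration and have nothing to do with the instrument's measured noise figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_at_bin(n, k=10, amp=1.0, noise_std=1.0):
    """SNR (dB) at the tone bin of an n-point DFT of a noisy single tone."""
    t = np.arange(n)
    x = amp * np.cos(2 * np.pi * k * t / n) + rng.normal(0, noise_std, n)
    X = np.fft.rfft(x)
    signal_power = np.abs(X[k]) ** 2
    noise_bins = np.delete(np.abs(X) ** 2, k)
    return 10 * np.log10(signal_power / noise_bins.mean())

for n in [256, 1024, 4096]:
    print(n, round(snr_at_bin(n), 1), "dB")   # expect roughly +6 dB per 4x increase in N
```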
3. Greater dynamic range.

Figure 6a shows the circuit block diagram of an analog eddy current instrument, which has a complex circuit structure. The digital instrument, on the other hand, uses a high-performance 24-bit ADC with a signal-to-noise ratio of up to 100 dB, enabling a large dynamic range in the digital domain and greatly improving the ability to acquire tiny induced signals. In addition, the multi-stage amplification circuit and program-controlled circuit at the front end of the analog chain are simplified, the influence of the analog circuitry on the induced signal is reduced, and the performance of the eddy current instrument is improved.
Application Testing

The eddy current instruments designed in this paper have been successfully applied to non-destructive testing in nuclear power plants with excellent results. Wear damage in the heat transfer tubes of nuclear power plants is often difficult to measure; Figure 7 shows the results obtained by testing the wear damage of the same heat transfer tube with the digital eddy current instrument designed in this article and with a traditional analog instrument. It can be seen that the digital instrument has a higher signal-to-noise ratio and its result is much clearer, which is conducive to analysis.

To better present the results, 3D imaging techniques were used in this design. Figure 8 shows the 3D image obtained when performing a heat transfer tube eddy current inspection with an array probe. The "+" sign in the figure marks the position of the cursor, and the signal of the coil indicated by this cursor is shown on the left side of the figure. This allows the location and size of the various flaws in the tube to be clearly seen, so that the analysis can be completed more accurately.
Conclusions
Through scientific research on eddy current detection, this paper creatively puts forward a design scheme for a digital eddy current instrument, solving a series of problems such as architecture design, signal-to-noise ratio improvement, anti-electromagnetic interference, high-speed data processing, and three-dimensional data imaging, and finally realizing the successful research and development of a high-end eddy current instrument. The research and development results presented in this paper have been successfully applied to eddy current testing in many nuclear power plants, providing a guarantee for the safe and stable operation of nuclear power plants.
Figure 1. (a) Distribution of eddy currents when there are no defects in the conductor; (b) distribution of eddy currents when there is a crack in the conductor.

Figure 2. (a) Strip chart in horizontal direction, showing changes in the real part of coil impedance; (b) impedance plane, characterizing the trajectory of coil impedance changes; (c) strip chart in vertical direction, showing changes in the imaginary part of coil impedance.

Figure 6. (a) Circuit diagram of analog instrument; (b) circuit diagram of digital instrument.

Figure 7. (a) Test result of digital instrument; (b) test result of analog instrument. | 6,294.4 | 2023-11-15T00:00:00.000 | [
"Engineering",
"Physics"
] |
New Records of Neobenedenia girellae (Hargis, 1955) (Monogenea: Capsalidae) in Marine Ornamental Fish Imported to Yucatan, Mexico
ABSTRACT: We detected Neobenedenia girellae infections in 40 species belonging to 12 families of imported marine ornamental fish from a public aquarium in the Mexican state of Yucatan. A total of 348 fish specimens were examined for monogeneans from January 2018 to December 2020. Monogeneans were corroborated morphologically and molecularly with a partial sequence of the 28S (region D1–D3) ribosomal DNA and analyzed in a molecular phylogenetic context in combination with other N. girellae sequences available in GenBank. The phylogenetic tree revealed that the specimens found consistently belonged to the N. girellae clade. High infection parameters of N. girellae were detected in most hosts. This identification is relevant to aquarists and aquaculturists in the Gulf of Mexico because N. girellae is considered highly pathogenic in confined fish. This work demonstrates that the importation of ornamental fish, coupled with deficient sanitary measures (lack of quarantine areas in distribution centers), contributes to the spread of parasites and their establishment within Mexico.
MATERIALS AND METHODS
The marine ornamental fish examined in this study were donated by a commercial aquarium in Merida, Yucatan, Mexico, between January 2018 and December 2020. A total sample of 348 ornamental fish was collected (Table 1). Most of the fish were originally captured from the natural environment of the Indo-Pacific region, although the exact capture locations were not available to the importer. Upon arrival in Mexico, the imported fish are inspected by the Agricultural Health Inspection Office (OISA), which issues a fish health certificate. Subsequently, the ornamental fish are distributed to several regions of Mexico (ultimately to their point of sale, e.g., Merida, Yucatan) or transferred to the Morelos market in Mexico City, Mexico, which represents one of the main commercialization and distribution centers for ornamental fish in Mexico.
The imported fish were transported in isolated plastic bags with artificial aeration. Once at their point of sale (i.e., the aquarium in Merida), the dead or dying fish were separated and kept in coolers, subsequently donated, and transported to the Aquatic Pathology Laboratory at CINVESTAV-IPN Unidad Mérida for parasitological examination. Once at the laboratory, fish were measured to obtain total length, standard length, and total weight. The surface of the skin and eyes, gills, scales from the lateral line, and fins were examined under a stereomicroscope (Stemi 305, Carl Zeiss) for ectoparasites. Whenever parasites were found, they were counted, preliminarily identified to the genus level, and fixed depending on the taxonomic group (Whittington, 2004). Capsalid monogeneans were isolated, counted in situ, cleaned with physiological saline, and preserved in labeled vials with 4% formalin or 96% alcohol for subsequent morphological or molecular studies, respectively (Brazenor, Bertozzi, et al., 2018; Brazenor, Saunders, et al., 2018). Monogeneans were removed with fine paintbrushes, stained with ammonium picrate, and identified to the species level according to suitable literature (e.g., Whittington and Kearn, 1993; Hargis, 1995; Ogawa et al., 2006). Infection parameters such as prevalence, mean abundance, and mean intensity were those proposed by Bush et al. (1997). Standard measurements were made with an Olympus BX50 compound microscope (Olympus, Tokyo, Japan) and ImageJ software (Wayne Rasband Scientific Software, Kensington, Maryland, U.S.A.). Drawings were prepared with Adobe Illustrator software (Adobe Inc., San Jose, California, U.S.A.). A full-body view of N. girellae, as well as ventral views of the accessory sclerite, anterior hamulus, posterior hamuli, and marginal hooks, were illustrated (Figs. 1, 2). The following features were measured for the morphological and morphometric description: body, length and width; pair of anterior attachment organs, length by width; haptor, length; anterior hamuli, length; posterior hamuli, length; accessory sclerites, length; pair of testes, length by width; ovary, length by width; egg, length by width (Whittington and Kearn, 1993; Whittington, 2004) (Table 2). All measurements are given in mm, with the range followed by the mean in parentheses (Table 2).
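As a quick illustration of the infection parameters of Bush et al. (1997) used here, the following Python sketch computes prevalence, mean intensity, and mean abundance from per-fish parasite counts. The counts themselves are invented example values, not data from this study.

```python
def infection_parameters(counts):
    """Prevalence, mean intensity, and mean abundance from per-host parasite counts."""
    infected = [c for c in counts if c > 0]
    prevalence = 100.0 * len(infected) / len(counts)         # % of hosts infected
    mean_intensity = sum(infected) / len(infected) if infected else 0.0
    mean_abundance = sum(counts) / len(counts)               # averaged over all hosts
    return prevalence, mean_intensity, mean_abundance

# Invented counts for 8 examined fish of one hypothetical host species.
example_counts = [0, 3, 12, 0, 7, 0, 25, 1]
print(infection_parameters(example_counts))   # (62.5, 9.6, 6.0)
```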
DNA amplification, sequencing, and phylogenetic analyses
Genomic DNA was extracted from each specimen of Neobenedenia with a DNeasy TM Blood & Tissue Kit (Qiagen, Hilden, Germany) following the standard manufacturer's protocol. Specimens of different host species were chosen for extraction. Given that the 28S ribosomal gene has been used in other studies to identify species of Neobenedenia (Brazenor, Bertozzi, et al., 2018; Brazenor, Saunders, et al., 2018), we also amplified the D1, D2, and D3 regions of this gene. The amplification was carried out with the primers 391 (Nadler and Hudspeth, 1998) and 536 (García-Varela and Nadler, 2005), and the conditions of the polymerase chain reaction amplification were: 94°C for 5 min; 35 cycles at 94°C for 1 min, 50°C for 1 min, and 72°C for 1 min; and a postamplification extension at 72°C for 10 min. For sequencing, the 2 amplification primers plus 503 (Stock et al., 2001) and 504 (García-Varela and Nadler, 2005) were used. Sequencing was carried out by GENEWIZ (South Plainfield, New Jersey, U.S.A.). The sequences obtained from each primer were read, edited, and assembled into a consensus sequence for each extracted specimen in Geneious Pro 4.8.4® (Biomatters Ltd.). The new sequences were submitted to GenBank for publication and public access. For phylogenetic analyses, the new sequences were aligned with other 28S sequences from Neobenedenia available in GenBank. The alignment was performed with ClustalW (Thompson et al., 1994), implemented with the "SLOW/ACCURATE" and "CLUSTALW (for DNA)" options (Kyoto University Bioinformatics Center, 2019). The nucleotide evolution model was estimated in jModelTest v.2 (Darriba et al., 2012). A maximum likelihood (ML) analysis was performed to obtain the phylogenetic tree with RAxML v.7.0.4 (Stamatakis, 2006), and 1,000 bootstrap repetitions (bt) were implemented. The ML tree was visualized in FigTree v.1.4.3 (Rambaut, 2000). The genetic distances for the 28S gene were calculated as uncorrected p-distances in MEGA v.6.0 (Tamura et al., 2013).
RESULTS
A total of 348 fish specimens of 40 species representing 12 families were examined for monogeneans (Table 1). A total of 213 fish were infected, and 803 N. girellae were collected, infecting the skin, the surface of the eyes, the gills, or a combination of these sites in host species of all the families mentioned (Table 1). Macroscopic external lesions and signs were observed in parasitized fish: epidermal damage associated with the site of haptor attachment, mild hemorrhages on the skin and eyes, exophthalmia, dyspnea, and anorexia. All monogeneans were identified with the morphological characteristics of the genus Neobenedenia described by Whittington and Kearn (1993) and Whittington (2004): a flattened, leaflike body shape, the absence of haptoral septa and of a vagina, and the presence of accessory sclerites, haptoral hamuli, paired anterior circular discs, and 2 juxtaposed testes, a combination unique to species of this genus (Table 2). Monogeneans were collected from species of all the families, therefore representing new host and geographical records. Prevalence ranged from 8 to 100% in the hosts, with mean abundance and intensity ranging from 0.13 ± 2.05 to 32 ± 27.78 and 32 ± 27.78 to 48 ± 42.85, respectively (Table 1). The morphological measurements of the specimens from each host species are presented in Table 2.
Phylogenetic analyses
Only 6 new sequences were successfully obtained from all the specimens collected, corresponding to 6 different host species, with lengths of 1,123 to 1,203 base pairs (bp). The length of the final alignment was 1,249 bp. The estimated substitution model was GTR + GAMMA, and the nucleotide frequencies were 0.259 (A), 0.178 (C), and 0.300 (T). The ML value was -3209.140011. The phylogenetic tree showed a major clade with a high support value (bt = 86), in which N. girellae was grouped with 2 other species of the genus that were only recognized as Neobenedenia sp. (Fig. 3). In particular, within the clade of N. girellae (bt = 73), most of the specimens identified as this species were grouped, including the specimens of our study. The exception was the specimen MH843692, which despite being named N. girellae was grouped in a different, independent clade of the tree. The genetic distance was null among the specimens collected in this study. The intraspecific genetic distance within the N. girellae clade ranged from 0 to 0.9%. The distance between the specimens of the N. girellae clade and the specimen grouped in the different independent clade ranged from 6.8 to 7.2%. Finally, the genetic distance between N. girellae and the other species of the genus represented in our phylogenetic analysis ranged from 1.1 to 11.2%.
DISCUSSION
Presented here are the first confirmed molecular and morphological data of N. girellae in Yucatan, Mexico. This monogenean represents both new host and new geographical records and shows the wide range of aquarium fish that this parasite can infect (see Table 1). We consider our findings relevant for aquaculturists and pet shop owners in the Gulf of Mexico because N. girellae is an emerging parasitic infection and a potential threat to the trade of ornamental fish. Although this monogenean species is well established in Mexico (Bravo-Hollis and Deloya, 1973), our findings indicate that constant reintroductions of the parasite occur in different regions, possibly following market routes.
We suggest that the parasite has at least 2 possible origins, although neither is conclusive: the movement of imported infected fish and the possible acquisition of infections within reservoir centers (e.g., Morelos markets), where the fish are kept in confinement without adequate sanitary measures before being distributed to various regions of Mexico. The importation of ornamental fish is one contribution to the introduction of parasites and their dispersal and establishment within Mexico.
In this study, high prevalences were found in most hosts. Elevated infection rates are commonly observed in aquarium fish owing to high stocking density and sometimes inadequate water quality maintenance (Magalhães-Cardoso et al., 2019). On the other hand, the stress associated with the capture, handling, and transport of ornamental fish from their origin, coupled with deficient sanitary measures (lack of quarantine areas in distribution centers) and mishandling, facilitates the dispersal of these parasites with high infection parameters. During transport, the fish are handled in excess, being placed in overcrowded plastic bags with low oxygen levels and increasing amounts of excreted nitrogenous waste (ammonium). These deteriorating conditions pave the way for the establishment of this monogenean. Putri et al. (2020) reported a prevalence of 60% of N. girellae in Rachycentron canadum (Linnaeus, 1766) from Indonesia, and Gaida and Frost (1991) reported a prevalence of 75% in Medialuna californiensis (Steindachner, 1876) from California. The life cycle of N. girellae is short, with a smaller body size needed to attain maturity. Bondad-Reantaso et al. (1995) identified the rapid development of N. girellae in Japanese flounder, with sexual maturity reached in 10-11 days at 25°C from oncomiracidia; egg to maturation took 15-17 days. In the present study, this parasite was particularly abundant on the eyes, causing corneal opacity and skin irritation. Neobenedenia girellae harms fish by mechanical attachment of the haptor; Ogawa et al. (2006) found particular histological damage in the cornea of infected fish, displaying hyperplasia of squamous epithelial cells and mucous cells. The N. girellae ectoparasite is well adapted to tropical regions, so successful establishment in wild native fauna and cultured fish in Yucatan can be foreseen if they reach open environments. Brazenor, Bertozzi, et al. (2018) and Brazenor, Saunders, et al. (2018) found that this parasite completed its life cycle almost twice as quickly in warm, high-salinity conditions compared with cooler temperatures (i.e., oncomiracidial longevity is significantly lower in salinities below 22% than in higher-salinity conditions of 35-40%). Moreover, at 20-25°C, the parasite attained sexual maturity and produced eggs more slowly than at 30°C. In this sense, the Yucatan marine environment provides suitable habitats for this parasite's establishment and reproduction, given its high temperatures, high salinity, and multiple reef spots.
Unfortunately, in Mexico few regulations exist for the importation and introduction of ornamental species to the market, allowing practically any aquatic organism of this sort to be introduced with limited sanitary control (Contreras et al., 1998;Cedillo et al., 2001). A health certificate declaring that imported fish into Mexico are free from World Organisation for Animal Health (OIE) listed diseases is compulsory; otherwise, the entry of such goods is denied. Although N. girellae is a dangerous pathogen, it is not currently included on the list of diseases. Furthermore, even though health authorities conduct physical inspections at some border points, marine ornamentals, as a valuable commodity, must be transferred swiftly; therefore, fish carrying parasites or disease are practically unnoticed.
We consider it equally important that sanitary agents be trained to recognize significant pathogens besides the listed diseases that can be problematic for aquaculture and the aquarium industry. Containment measures such as quarantine may be worth reviewing in terms of their effectiveness in preventing parasite detection, with the aim of reducing the spread of disease.
ACKNOWLEDGMENTS
This work was supported by the Consejo Nacional de Ciencia y Tecnología (CONACYT) under postdoctoral grant 253392. We are grateful to the Instituto de Ecología, Pesquerías, y Oceanografía del Golfo de México (EPOMEX) for support in the final stage of this research. Thanks are also extended to Dr. Rodolfo del Río. Sincere and grateful thanks are extended to all at the Laboratorio de Patología Acuática, CINVESTAV-IPN Unidad Merida. Thanks are also extended to Andrea Selina Caamal Pool, who helped with field work, and Dr. Eduardo Garza Gisholt for donating the ornamental fish. | 3,068.8 | 2021-12-17T00:00:00.000 | [
"Biology"
] |
A modified relay-race algorithm for floorplanning in PCB and IC design
Floorplanning is a fundamental design step in the physical design of printed circuit boards (PCBs) and integrated circuits (ICs), as it handles the complexity of layout design. From a computational point of view, the floorplanning problem is NP-hard, and the size of the search space grows exponentially with the number of modules. Thus, the algorithm used is an essential factor for the speed and quality of the floorplanning process. Although polynomial-time floorplanning algorithms can be implemented when the solution space is limited to slicing floorplans, optimal solutions often exist only in the nonslicing floorplan search space. Various stochastic algorithms such as simulated annealing (SA), the genetic algorithm (GA), and the relay race algorithm (RRA) can be used with nonslicing floorplans. In this paper, a modified relay race algorithm (MRRA) is proposed. Based on the experimental results utilizing MCNC benchmarks, the MRRA improved both solution quality and run time for area optimization when compared with SA, the GA, and the RRA.
Introduction
The number of components in a circuit and the interconnections between these components increase rapidly as technology improves over time [1]. Floorplanning optimizes the relative locations of the components in the layout to reduce the layout area and the wire length of the interconnections, which significantly affect the subsequent routing quality and the overall physical design process [2]. The representation method affects the floorplanning process, because it determines the scope of the search space and the complexity of the transformation between the floorplanning representation and its corresponding floorplan. Researchers have proposed many representation methods such as Polish notation, bounded slicing grid (BSG), transitive closure graph (TCG), B*Tree, and sequence pairs [3].
The most important factor that determines the time cost and solution quality of the floorplanning process is the algorithm used. Various floorplanning algorithms have been proposed by researchers, including simulated annealing (SA), genetic algorithms (GAs), and the relay race algorithm (RRA). SA was originally proposed as an optimization approach for placement and routing [4], but was later utilized by Wong and Liu to optimize the area of a floorplan [5]. Rebandengo and Reorda used the GA as an evolutionary algorithm [6]. Sheng et al. designed the RRA to overcome the shortcomings of SA and GAs [7]. In this paper, a modified relay-race algorithm (MRRA) has been proposed in order to improve the solution quality and time cost of RRA. Section 2 formulates the problem. Section 3 briefly summarizes the existing approaches for floorplanning. Section 4 proposes the MRRA approach and Section 5 presents experimental results. Section 6 concludes the paper.
Problem formulation
Floorplanning is the determination of relative module positions while considering objectives such as area and wire length minimization. The main inputs for floorplanning are a module set M = {m 1 , m 2 , m 3 , . . . , m n } where m i are rectangular blocks with height h i and width w i , and a net set N = {n 1 , n 2 , n 3 , . . . , n k } where n j are the interconnects between modules. Each net n i , 1 ≤ i ≤ k , has a length l i , which can be computed between the centers of modules that are being connected, unless the pin locations of modules are provided as an additional input to the floorplanning.
There are two types of floorplans, which are called slicing and nonslicing floorplans [8]. Slicing floorplans can be represented by a binary tree, which is called the slicing tree, where the entire layout area is bisected repetitively in horizontal and vertical directions until each part includes only one module. In the slicing tree, the leaves represent the modules, vertices marked as H represent the horizontal bisections, and vertices marked as V represent the vertical bisections. On the other hand, nonslicing floorplans cannot be obtained by bisecting the layout area repetitively; therefore, slicing trees cannot be used to represent them. The constraint graph pair (CGP) method, which consists of a horizontal constraint graph (HCG) and vertical constraint graph (VCG), can be used to model these floorplans. HCG and VCG define the horizontal and vertical relations among the modules, respectively. Figures 1a and 1b depict instances of slicing and nonslicing floorplans, respectively. Slicing floorplans are easier to manipulate, and polynomial time algorithms are available for finding optimum floorplan solutions when restricted to slicing structures only. On the other hand, only nonslicing floorplans have a solution space that is P-admissible, which is guaranteed to contain an optimal solution [9]; therefore, an optimum floorplan solution for all problem instances is only possible with this floorplan type. The objective of floorplanning is to optimize a layout according to a predefined cost function [8]. The most common consideration in this function is the area covered by the rectangular bounding box enclosing all modules. This requires minimization of the dead space, which is called white space. White space is the empty space that is not covered by any module in the floorplan. Another important consideration in this function is the total wire length, which has several types of evaluations such as minimum chain, Steiner tree, and half perimeter wire length (HPWL) methods. Steiner tree estimation is the most accurate but also most computationally expensive method, while HPWL is the most efficient and can still be used to compare the relative wire lengths of different solutions with respect to each other in an optimization engine. HPWL is obtained by dividing the perimeter of the rectangular bounding box that surrounds all the pins of a net by two [10]. A commonly used cost function in floorplanning is the weighted sum of area and wire length as given by Equation 1, where C a , C w , and α represent the area cost, the wire length cost, and the weight factor, respectively. The weight factor α is associated with each objective and is user-defined.
Floorplan representation must be chosen according to the floorplan type, and this choice determines the complexity of the transformation and the scope of the search space [1]. Researchers have proposed several representation schemes in the last couple decades. Polish notation, bounded slicing grid, transitive closure graph, B*Tree, and sequence pairs are the most commonly used representation schemes [3]. In Table 1, the comparison of different floorplan representation schemes is represented. This comparison contains information about the flexibility and the computational complexity of these floorplan representation schemes. As shown in Table 1, Polish notation and B*Tree have better computational complexity and bounded slicing grid, transitive closure graph, and sequence pair approaches have better flexibility. Polish notation is an efficient representation scheme for slicing floorplans, but it cannot handle other floorplan types. The expression of Polish notation is the postfix ordering of a binary tree, which can be reached from the postorder traversal on a binary tree. Bounded slicing grid is a flexible representation scheme, but it cannot handle nonslicing floorplans, either. In a bounded slicing grid, n blocks are placed in a special n by n grid. The transitive closure graph method runs faster using less memory, but it cannot deal with slicing floorplans [3]. B*Tree representation is based on ordered binary trees and can model compacted floorplan structures. It is also an efficient representation scheme with smaller encoding cost. However, it is less flexible than bounded slicing grid, transitive closure graph, and sequence pair. Sequence pair is the most flexible representation scheme and it can handle all types of floorplans, but it has high encoding cost. Sequence pair is utilized as the representation method in this paper because of its flexibility advantage.
Sequence Pair representation is suitable for both slicing and nonslicing floorplans. A sequence pair (Γ+, Γ−) is a pair of sequences of the n modules in a floorplan, where modules can be placed into different orders in each pair [9]. Horizontal and vertical constraints between each pair of modules can be inferred from the sequence pair to be used in generating the constraint graph pair (HCG and VCG). For instance, (Γ+, Γ−) = (bacde, cabde) can be the sequence pair representation for one of the solutions of a floorplan that includes the module set a,b,c,d,e. For this sequence pair, d is placed after a in both Γ+ and Γ−, i.e.
< . . . a . . . d · · · >; thus, d has to be located to the right of a . Also in the same sequence pair, a is placed after b in Γ+, i.e., < . . . b . . . a · · · >, and a is placed before b in Γ−, i.e. < . . . a . . . b · · · >; thus, a has to be located below b. Replacing the orders of (a, b) or (a, d) in the sequences described above reverses the relative positions of these module pairs with respect to each other.
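The following Python sketch, using the example sequence pair above, derives the pairwise horizontal and vertical relations described in this section. It is only a minimal illustration of the rule (same order in both sequences implies a left/right relation, opposite order implies an above/below relation), not the packing algorithm used later.

```python
def sp_relations(gamma_plus, gamma_minus):
    """For each module pair, report the relation implied by a sequence pair."""
    pos_p = {m: i for i, m in enumerate(gamma_plus)}
    pos_m = {m: i for i, m in enumerate(gamma_minus)}
    relations = {}
    mods = list(gamma_plus)
    for i, a in enumerate(mods):
        for b in mods[i + 1:]:
            if pos_p[a] < pos_p[b] and pos_m[a] < pos_m[b]:
                relations[(a, b)] = f"{a} left of {b}"    # same order in both sequences
            elif pos_p[a] > pos_p[b] and pos_m[a] > pos_m[b]:
                relations[(a, b)] = f"{a} right of {b}"
            elif pos_p[a] < pos_p[b] and pos_m[a] > pos_m[b]:
                relations[(a, b)] = f"{a} above {b}"       # before in G+, after in G-
            else:
                relations[(a, b)] = f"{a} below {b}"
    return relations

rels = sp_relations("bacde", "cabde")
print(rels[("a", "d")])   # a left of d  -> d lies to the right of a
print(rels[("b", "a")])   # b above a    -> a lies below b
```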
Existing floorplanning approaches
Developments in the optimization field have led numerous researchers to utilize modern optimization methods. Simulated annealing was the first modern optimization algorithm used to optimize the floorplanning area. Population-based metaheuristic algorithms that imitate the social behavior of species and biological evolution were then utilized. The GA, ant colony optimization, particle swarm optimization, and differential evolution are in this category, and they have been collectively named evolutionary algorithms [3].
Simulated annealing
Simulated annealing (SA) has been proposed based on statistical mechanics theory and the analogy between solid annealing and optimization problems [4]. The utilization of the SA algorithm to solve the floorplanning problem was first introduced by Otten in 1983 [11]. SA resembles the cooling procedure of molten metal through annealing. In the cooling process of molten metal, the atoms have the highest mobility at high temperatures. As the temperature drops, the movement ability of the atoms is also reduced. Then the atoms are gradually organized to form crystals with the minimum energy state possible.
In SA, each state of the solid structure corresponds to an applicable solution of the problem. The energy of the state is the value of the cost function to assess the solution. The state of the minimum energy represents the optimal solution with the best value of the cost function. SA is a stochastic algorithm with iterative improvements. Each repetitive step includes an alteration of the current solution to a new solution. This action is called movement to a neighborhood. The current temperature of the state determines the acceptance probability of new solutions. Temperature updates are scheduled from the highest temperature to the lowest temperature, where the acceptance probability at higher temperatures is higher than the acceptance probability at lower temperatures. If the temperature is decreased rapidly, it is known as simulated quenching instead of simulated annealing. The main difference between SA and simulated quenching is the parameter used for temperature scheduling. In SA, the temperature needs to be decreased at a slower rate in order to reach the absolute minimum energy state.
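A minimal sketch of the acceptance rule that gives SA its temperature-dependent behavior is shown below. The cost values, cooling rate, and neighbor function are placeholders rather than the floorplanning setup used in the experiments.

```python
import math
import random

def accept(delta_cost, temperature):
    """Metropolis criterion: always accept improvements; accept worsenings with
    probability exp(-delta/T), which shrinks as the temperature drops."""
    if delta_cost <= 0:
        return True
    return random.random() < math.exp(-delta_cost / temperature)

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.95, steps=200):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        if accept(cost(cand) - cost(current), t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling          # slow (geometric) cooling schedule
    return best

# Toy usage on a 1-D cost landscape (purely illustrative).
print(simulated_annealing(10.0, cost=lambda x: (x - 3) ** 2,
                          neighbor=lambda x: x + random.uniform(-1, 1)))
```

A faster cooling schedule (smaller `cooling` factor) would correspond to the simulated quenching mentioned above, at the price of a higher risk of freezing in a poor local optimum.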
Genetic algorithm
The GA has been utilized as a floorplanning algorithm after SA by researchers. Rebandego and Reorda were the first researchers to use the GA to solve the floorplanning problem [6]. They used the GA with Polish notation in 1996. Afterwards, Nakaya et al. and Lin et al. also presented GAs using Polish notation for the floorplanning problem [12,13]. Gwee and Lim proposed a GA with heuristic-based decoder for IC floorplaning in 1999 [14].
This approach was able to achieve an efficient solution to the multiobjective area and wire length optimization problem of floorplanning. In 2006 Drakidis et al. and in 2007 Chatterjee and Manikas presented GA-based floorplanning approaches using sequence pair representation [15].
The utilization of the GA for floorplanning optimization starts with the randomly generated population of solutions. These solutions have random placement of modules along a defined rectangle of the circuit. Then the solutions are evaluated for their fitness values based on the predefined fitness function. The objective of this fitness function can be area or wire length optimization or optimization for both criteria. The modules correspond to genes in chromosomes. After the creation of the initial population, the algorithm follows the mentioned mechanisms repetitively until the specified number of generations is reached. In the crossover operation, two floorplan solutions are taken and they are used to generate a new floorplan arrangement as a new solution. These new solutions are called the offspring of the selected solutions that the crossover operation is performed on. Afterwards, a mutation operation with a small probability is applied by flipping any module of the solution. Finally, the new population is evaluated and the solutions with the lowest fitness values are eliminated.
Relay race algorithm
Sheng et al. proposed the RRA for floorplanning problems to approach a global optimal solution by exploring similar local optimal solutions more efficiently within shorter computation times [7]. Sheng et al. stated that the RRA was designed to overcome the shortcomings of SA, which does not use the experience of past moves, and the GA, which selects the next generation according to a ranking function that has a high time cost despite it not being always necessary.
The RRA contains the three basic parts shown in Figure 2: focusing search, rough search, and relay. An algorithmic flowchart of the RRA and the details of searches are depicted in Figures 3a and 3b, respectively. The aim of the rough search is to pass over little hills in the search space and approach a local optimum as quickly as possible. The focusing search tries to reach as close to the local optimum as possible. The relay works for both running away from the local optimum with a single operation and maintaining the search continuity. Rough search begins with method selection. Three types of move methods are utilized in rough searches: group insertion, group exchange, and group rotation. In group insertion, the order of randomly selected modules in one sequence is changed. Group exchange is the exchange of randomly selected modules. Group rotation rotates randomly selected modules. The number of modules in the group is set to 10 to ensure that the rough moves affect more modules than the focusing moves.
Focusing search starts upon the termination of the rough search. After the rough search is completed, the local optimal solution is transferred to the focusing search. For the focusing search, three focusing move methods are utilized: insertion, exchange, and rotation. In an insertion move, the order of a single module is changed in one sequence. A rotation move alters the orientation of a single module. An exchange move exchanges the order of two modules in both sequences Γ+ and Γ−. In the relay operation, the first part of the new solution is inherited from the current solution, and the second part of the new solution is randomly generated. The ratio of the solution's randomly generated part is defined as the parameter R_e. To find the number of modules affected by the relay operation in a circuit with N_m modules, the product R_e · N_m is rounded to the nearest integer. The modules that will be in the randomly generated part are selected randomly. Figure 4 illustrates the behavior of the RRA in the solution space. Only solutions with improvement are accepted in both the rough and focusing runs. The differences between the rough run and the focusing run lie in the move methods and in the termination condition. As Figure 4 indicates, the rough run gets over small hills and the focusing search reaches a local optimal solution. On the other hand, the relay escapes from the local optimum solution and arrives near another local optimum solution. This process is repeated as many times as the number of runners on the team, N_t, in order to find the global optimum solution.
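The sketch below illustrates, in Python, the kind of focusing moves described above operating on a sequence-pair encoding. The representation details (strings of module ids plus an orientation map) are simplifications for illustration, not the exact data structures of the RRA implementation.

```python
import random

def insertion_move(seq_pair, orientations):
    """Move one randomly chosen module to a new position in one sequence."""
    gp, gm = list(seq_pair[0]), list(seq_pair[1])
    seq = gp if random.random() < 0.5 else gm
    m = seq.pop(random.randrange(len(seq)))
    seq.insert(random.randrange(len(seq) + 1), m)
    return ("".join(gp), "".join(gm)), orientations

def exchange_move(seq_pair, orientations):
    """Swap the positions of two modules in both sequences."""
    gp, gm = list(seq_pair[0]), list(seq_pair[1])
    a, b = random.sample(gp, 2)
    for seq in (gp, gm):
        i, j = seq.index(a), seq.index(b)
        seq[i], seq[j] = seq[j], seq[i]
    return ("".join(gp), "".join(gm)), orientations

def rotation_move(seq_pair, orientations):
    """Rotate a single module by flagging that its width and height are swapped."""
    m = random.choice(list(orientations))
    orientations = dict(orientations)
    orientations[m] = not orientations[m]
    return seq_pair, orientations

sp, rot = ("bacde", "cabde"), {m: False for m in "abcde"}
print(exchange_move(sp, rot)[0])
```

The rough moves described earlier would apply the same three operations to a randomly selected group of modules instead of a single one.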
Modified relay race algorithm
Although the RRA was proposed to overcome the shortcomings of the SA and GA algorithms, the RRA has shortcomings of its own. The MRRA is proposed here to improve the RRA by revisiting the following algorithmic choices. In the MRRA, the search is performed on a dual path to increase the chance of discovering better local optimum solutions. Moreover, the maximum number of iterations without improvement during rough search, N r , and the maximum number of consecutive iterations without improvement during focusing search, N f , are not fixed for all problems; instead, their values are determined by multiplying the circuit size N m by different coefficients. After extensive trial experiments to determine the best values of N r , N f , and N t , the best empirical parameter set was determined as 3N m , 3N r , and 20, respectively. The MRRA starts by taking an initial solution that can be either user-defined or randomly generated. Then rough search and focusing search are applied to the initial solution. Afterwards, the current solution enters the dual path search. Figure 5a illustrates the flowchart of the MRRA, which includes two inner loops. While the first inner loop corresponds to the dual path search of the MRRA, the second inner loop corresponds to the single path search of the original RRA. The single path search phase continues until the total number of runners in the team for the relay, N t , is reached. In the single path search, the value of the parameter R e is chosen to be 0.1. As a result of the dual path search, the probability of exploring better local optimum solutions in distant regions is increased. Figure 5b depicts a more detailed description of the "Search Path" step in Figure 5a.
During dual path search, both paths implement the same operations. The only difference between these two paths is the R e value, which is the ratio of the randomly generated part of the solution in the relay operation. The implementation of the searching process with two different paths increases the likelihood of achieving a better local optimal solution as the next solution. The value of the parameter R e used in the first path process is chosen to be 0.1, as in the original algorithm. The value of the parameter R e used in the second path process is chosen to be 0.2, which is larger than the R e value used in the first path; therefore, the second path makes it possible to search in farther regions of the solution space. Since R e corresponds to the mutation rate as mentioned in the previous section, the current solution is mutated at a rate of 0.1 and 0.2 in the first and second paths, respectively. Thus, there is an increased chance for exploring better local optimum solutions.
After both first and second paths complete their search operations, the best solutions of these paths are compared and only the better solution is kept as the next solution. However, if dual path search is applied until the algorithm is terminated, the computation time will increase too much. For this reason, there is also a termination condition for dual path search. When two consecutive solutions of the first path ( R e = 0.1) are better than the solutions of the second path ( R e = 0.2), the dual path search is terminated and the algorithm continues with the single path search afterwards.
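A compact sketch of this dual path phase is given below, assuming a helper `search_path(solution, r_e)` that performs one rough search, focusing search, and relay with the given relay ratio; the function names and loop structure are illustrative rather than the authors' implementation.

```python
def dual_path_phase(solution, cost, search_path):
    """Dual path phase sketch: run the relay-race search with two relay ratios
    and keep the better outcome as the next solution.  The phase ends when the
    R_e = 0.1 path beats the R_e = 0.2 path in two consecutive rounds; the
    algorithm then continues with single path search (R_e = 0.1)."""
    consecutive_path1_wins = 0
    while consecutive_path1_wins < 2:
        candidate1 = search_path(solution, r_e=0.1)  # conservative relay
        candidate2 = search_path(solution, r_e=0.2)  # more exploratory relay
        if cost(candidate1) <= cost(candidate2):
            consecutive_path1_wins += 1
            solution = candidate1
        else:
            consecutive_path1_wins = 0
            solution = candidate2
    return solution
```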
Move methods
The MRRA utilizes three rough and three focusing move methods. Rough move methods are group rotation, group exchange, and group insertion. Focusing move methods are rotation, exchange, and insertion. These move methods are exactly the same as the move methods used in the RRA. Methods were kept the same as in the original algorithm in order to compare the differences in the results of the RRA and MRRA that are caused by modifying the approach.
The corresponding placement of a sample initial SP (Γ+, Γ−) = (32415, 12534) is shown in Figure 6a. The insertion move places a randomly selected module in one sequence into a random position. Figure 6b shows the placement of modules after the insertion move is applied to the module m 5 . The exchange method changes the order of a randomly selected pair of modules in both the positive sequence Γ+ and the negative sequence Γ−. The placement of modules after the exchange move is applied to the modules m 3 and m 5 is displayed in Figure 6c. The rotation move changes the orientation of a randomly selected module. Figure 6d illustrates the placement of modules after a rotation move is applied to module m 4 . A group insertion move inserts one randomly selected set of modules into a randomly selected set of positions in one sequence. A group exchange move exchanges randomly selected pairs of modules in both sequences. A group rotation move rotates a randomly selected set of modules. Unlike in the RRA, the number of modules in the group varies based on the total number of modules in the circuit, N m . After experimenting with 0.3N m , 0.4N m , and 0.5N m , 0.4N m has been decided as the group size in the MRRA rough moves.
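For concreteness, the three single-module focusing moves might be sketched as follows on a sequence-pair encoding; tracking orientations in a separate dictionary is an illustrative choice, not a detail taken from the paper.

```python
import random

def insertion(seq):
    """Move one randomly selected module to a random position in one sequence."""
    seq = seq[:]
    i = random.randrange(len(seq))
    module = seq.pop(i)
    seq.insert(random.randrange(len(seq) + 1), module)
    return seq

def exchange(gamma_plus, gamma_minus):
    """Swap the order of two randomly selected modules in both sequences."""
    gamma_plus, gamma_minus = gamma_plus[:], gamma_minus[:]
    a, b = random.sample(gamma_plus, 2)
    for seq in (gamma_plus, gamma_minus):
        i, j = seq.index(a), seq.index(b)
        seq[i], seq[j] = seq[j], seq[i]
    return gamma_plus, gamma_minus

def rotation(orientations):
    """Toggle the orientation (0 or 1, i.e., 0°/90°) of one random module."""
    orientations = dict(orientations)
    module = random.choice(list(orientations))
    orientations[module] ^= 1
    return orientations
```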
The move method is selected according to the probabilities of the move methods [7]. The probability of any method p k+1 (i) is evaluated from the old probability of the method p k (i) and the short-term improvement s k (i), where i indexes the different move methods, a k (i) is the relative amplitude of improvement, and f k (i) is the frequency of improvement. In detail, a k (i) is the average relative −∆C and f k (i) is the ratio of improved trials in the last t trials. For rough move methods and focusing move methods, t is chosen as 30 and 100, respectively. If the evaluation of the solution satisfies the condition −∆C > 0, a k (i) is calculated and updated; otherwise, the probabilities of the methods are not updated. The new probability p k+1 (i) is obtained by normalizing p ′ k+1 (i) over all move methods so that the total probability remains 100%, where p ′ k+1 (i) = (p k (i) + s k (i))/2.
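The update can be sketched as below; because the exact definition of the short-term improvement s k (i) is not given here, taking s k (i) as the product a k (i)·f k (i) is an assumption made only for illustration.

```python
def update_move_probabilities(p, a, f):
    """Sketch of the adaptive move-method selection probabilities.

    p, a, f are dicts keyed by move method: p[i] is the current probability,
    a[i] the relative amplitude of improvement, and f[i] the ratio of improved
    trials in the last t trials.  s[i] = a[i] * f[i] is an assumed form of the
    short-term improvement.
    """
    s = {i: a[i] * f[i] for i in p}
    p_prime = {i: (p[i] + s[i]) / 2.0 for i in p}
    total = sum(p_prime.values())
    # Normalize so the probabilities of all move methods sum to 1 (100%).
    return {i: p_prime[i] / total for i in p}
```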
Cost function
The cost function of the MRRA has three components: area, wire length, and overlap costs. The area cost is the size of the smallest rectangular bounding box that contains all modules. The wire length cost is an approximation of the sum of the lengths of all interconnects between the modules. The overlap cost is the amount of overlap between the paths of the same and different interconnects. Area and wire length are the most common objectives in a typical floorplanning cost function. The overlap objective is added to the cost function to account for the interference between different signals. The cost function of the MRRA, given by Equation 2, is a weighted combination of these three components with coefficients α, β, and γ.
C t , C a , C w , and C o represent the total cost function and the cost functions of area, wire length, and overlap, respectively. The area cost function C a estimates the area of the minimum bounding rectangle that includes all modules; this area is calculated by multiplying the total width W by the total height H . The wire length cost function C w estimates the total wire length used to connect all pins in the circuit; the half-perimeter wire length method is utilized to obtain an approximation of the wire length of each net. The overlap cost function C o estimates the total overlap cost between nets; the overlap cost is calculated according to the overlap coefficients.
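Assuming Equation 2 takes the simple weighted-sum form C t = α·C a + β·C w + γ·C o (any normalization factors are omitted), the cost evaluation can be sketched as below; the placement and net data structures are purely illustrative.

```python
def total_cost(placement, nets, alpha=1.0, beta=0.0, gamma=0.0, overlap_cost=0.0):
    """Sketch of the MRRA cost function C_t = alpha*C_a + beta*C_w + gamma*C_o.

    `placement` maps each module to (x, y, w, h); `nets` is a list of nets,
    each a list of (x, y) pin coordinates.  The overlap term is passed in
    precomputed because its coefficient-based calculation is not detailed here.
    """
    # C_a: area of the minimum bounding rectangle enclosing all modules.
    xs = [x for x, y, w, h in placement.values()] + \
         [x + w for x, y, w, h in placement.values()]
    ys = [y for x, y, w, h in placement.values()] + \
         [y + h for x, y, w, h in placement.values()]
    c_area = (max(xs) - min(xs)) * (max(ys) - min(ys))

    # C_w: half-perimeter wire length (HPWL) summed over all nets.
    c_wire = 0.0
    for pins in nets:
        px = [x for x, y in pins]
        py = [y for x, y in pins]
        c_wire += (max(px) - min(px)) + (max(py) - min(py))

    return alpha * c_area + beta * c_wire + gamma * overlap_cost
```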
Experiments and results
Parameter selection directly affects the performance and efficiency of the floorplanning algorithm. Therefore, the best empirical values of the parameters N f , N r , and R e were investigated by trial floorplanning experiments on the ami33 circuit from the Microelectronics Center of North Carolina (MCNC) benchmark suite. The MCNC benchmark is the most commonly used benchmark for comparing floorplanning algorithms; therefore, it is also used in this paper for comparison with other approaches. The MCNC benchmark suite consists of five circuits: apte, xerox, hp, ami33, and ami49. The details of the MCNC benchmark suite are shown in Table 2. Several combinations of different parameters were also tried so that intermediate parameter values were implemented as well to obtain better results. The investigation of better parameter values also aims at speeding up the algorithm by limiting the time increase caused by the dual path search structure used in the MRRA. The values of the parameters N f and N r were not kept constant as in the RRA, but were automatically scaled with N m , the number of modules in the circuit. The cost function coefficient α is set to 1 while β and γ are set to 0 so that the area optimization results can be compared with previous works.
The parameter values were adjusted according to the results of trial experiments. In particular, the final cost, run time, and product of these values that have been obtained as results of trial experiments were used as the most important values in determining the parameters. For trial experiments, 100 initial solutions were generated. Each trial experiment was conducted with this set of initial solutions for enforcing the same initial conditions in all tests.
In the first stage of trial experiments, the best combination of parameters N f and N t was investigated.
For these experiments, the value of N r was set to three times the N m value, which is approximately the same value used in the RRA for the ami33 circuit. In the second stage of trial experiments, the best combination of N r , N f , and N t was investigated. In these trial experiments, the aim was to increase the value of the N t parameter while decreasing the values of the N r and N f parameters in order to allow more runners to try more solutions. In the third stage of trial experiments, N r was chosen to be N m and the value of N t was increased while the value of N f was decreased. It is seen that there is a trade-off between the final cost and the run time as a result of increasing N t and decreasing N f . After all trial experiments conducted to determine the best values of N r , N f , and N t , the best empirical parameter set was determined as (N r , N f , N t ) = (3N m , 3N r , 20). This parameter set was utilized for further experiments with the other circuits in the MCNC benchmark suite. After the best empirical parameter set was determined, the area optimization results of SA, GA, RRA, and MRRA were compared. For a fair comparison between the algorithms, all algorithms were implemented utilizing the SP representation scheme, following the published details as closely as possible. These algorithms were applied to all circuits in the MCNC benchmark suite in the Java environment on a 2.40 GHz PC with 8.00 GB memory. Since all these algorithms are stochastic, each algorithm was run 10 times for each circuit, and each algorithm was started from the same randomly generated initial solutions.
The parameter values used in the algorithms were chosen so that the run times of the algorithms would be comparable. For the implementation of the RRA, N r and N f were selected as 100 and 1000, respectively, while N r = 3N m and N f = 3N r were chosen for the implementation of the MRRA.
On the other hand, different values of N t were used to provide close run times. The parameters of SA were set as follows: the initial temperature was determined by considering the average cost difference of the move methods, and the number of trials per temperature and the cooling rate were selected as 500 and 0.99, respectively. For the implementation of the GA, the population size and the number of generations were set to 100 and 1000, respectively. In addition, the crossover rate and mutation rate were chosen as 0.8 and 0.3, respectively.
The average area cost and average run time comparisons are based on 10 trials for each algorithm, and they are shown in Tables 3 and 4, respectively. The RRA and MRRA have better results than SA and GA for all circuits in both categories. The MRRA also has the best results for all benchmarks among these four algorithms. The improvement of the MRRA compared to the RRA is between 0.03% and 1.75% for the average area costs. The MRRA also shows a considerable improvement of between 9.4% and 24.9% for the average run times.
Conclusion
In this paper, a heuristic approach named the MRRA is proposed to solve the floorplanning problem. The MRRA was designed to improve the speed and solution quality of the RRA by overcoming its shortcomings. In the MRRA, a dual path search is designed to increase the probability of exploring a better local optimal solution as the next solution. In both search paths, rough and focusing runs are implemented, and the only difference between the two paths is the ratio of the randomly generated part of the solution, R e , in the relay operation. The dual path search has its own termination condition so as not to increase the run time. Moreover, the parameters N r and N f are determined according to a detailed analysis that also considers the number of modules in the circuit, improving the efficiency of the algorithm. The efficiency of the MRRA is demonstrated by applying it to the floorplanning problem in physical design optimization. Based on the comparisons of the experimental results on the MCNC benchmark suite, the MRRA is better than SA, GA, and RRA in terms of both average cost and average run time for area optimization. The MRRA reduced the average run time by an average of 17.5% compared with the RRA. In light of these comparison results, the proposed MRRA has the potential to be applied to other NP-hard problems.
As shown in the comparisons in Section 5, the improvement of the MRRA varies across the MCNC benchmark circuits. The difference in the number of modules of the circuits may be the cause of this variation. Although the parameter values used in the MRRA were determined as a result of a detailed analysis, they may need to be changed according to the region of the search space being explored. Searching along more than one path, as evidenced by the MRRA, increases efficiency. However, the most suitable number of initial paths and their termination conditions could be investigated to increase the efficiency gains further. In addition, these multiple paths can be run on different cores to further decrease the computation time.
"Engineering",
"Computer Science"
] |
Novel Microcrystal Formulations of Sorafenib Facilitate a Long-Acting Antitumor Effect and Relieve Treatment Side Effects as Observed With Fundus Microcirculation Imaging
The tyrosine kinase inhibitors (TKIs), including sorafenib, remain a first-line antitumor treatment strategy for advanced hepatocellular carcinoma (HCC). However, many problems exist with the currently orally administered TKIs, creating a heavy medical burden and causing severe side effects. In this work, we prepared a novel microcrystalline formulation of sorafenib that not only achieved sustained release and long action in HCC tumors but also relieved side effects, as demonstrated by fundus microcirculation imaging. The larger the particle size of the sorafenib microcrystalline formulation, the slower the release rate of sorafenib from the tumor tissues. The microcrystalline formulation of sorafenib with the largest particle size was named Sor-MS. One intratumor injection (a single administration) of Sor-MS, but not of Sor-Sol (the solution formulation of sorafenib used as a control), slowed the release of sorafenib in HCC tumor tissues and in turn inhibited the in vivo proliferation of HCC cells and the expression of EMT/pro-survival–related factors in a long-acting manner. Moreover, compared with oral administration, one intratumor injection of Sor-MS not only facilitated a long-acting antitumor effect but also relieved the side effects of sorafenib, avoiding damage to the capillary network of the eye fundus, as evidenced by fundus microcirculation imaging. Therefore, preparing sorafenib as a novel microcrystal formulation could facilitate a long-acting antitumor effect and relieve drug-related side effects.
INTRODUCTION
Currently, hepatocellular carcinoma (HCC) remains one of the most important threats to the public health system of China because of the high infection rates of hepatitis viruses [e.g., hepatitis B virus (HBV) or hepatitis C virus (HCV)], and many patients present with advanced stages of HCC at the initial diagnosis (1)(2)(3)(4)(5). The use of tyrosine kinase inhibitors (TKIs), that is, the oral administration of molecularly targeted agents represented by sorafenib (sorafenib tosylate tablets), can prolong the overall or progression-free survival of patients (6)(7)(8). However, problems associated with TKIs include the following: (1) Gastrointestinal-digestive function injury in compromised patients often attenuates the absorption of TKIs (9,10). (2) The current strategy of daily oral administration of TKI tablets induces the systemic distribution of TKIs throughout the entire body, leading to insufficient local concentrations of TKIs in HCC lesions (11,12). (3) The high daily dose (>800 mg every day) of TKIs such as sorafenib can induce not only a heavy financial burden but also side effects (13). Therefore, research into more effective therapeutic strategies to enhance the antitumor effect of TKIs and reduce their side effects is warranted.
How to improve the effects of molecularly targeted drugs such as sorafenib while alleviating their side effects is of great significance. Long-term use of sorafenib can cause skin rashes, diarrhea, increased blood pressure, and skin swelling (11,14). The inhibitory effect of sorafenib on VEGFR (vascular endothelial growth factor receptor) and other RTKs (receptor tyrosine protein kinases) (6)(7)(8)(11) is the foremost mechanism causing these side effects. Existing animal models for sorafenib toxicity studies have many shortcomings: experimental animals cannot accurately reflect the various pathological changes in the human body, and patients with advanced HCC often have different degrees of liver fibrosis and cirrhosis, which are difficult to replicate at the animal level (15,16). At the same time, many difficulties exist in tissue microcirculation-related research: the resolution of contrast-enhanced ultrasound can reflect the blood supply to a certain extent, but its ability to detect microcirculation changes is limited (17,18), and pathological analyses, such as hematoxylin and eosin (H&E) staining, cannot reflect the state of tissue microcirculation throughout the body in animals (19,20).
Among the TKIs that treat HCC, sorafenib has been used widely and for a long time; thus, sorafenib is the best understood treatment and a logical choice for research (14,21,22). Analysis of the chemical features of sorafenib shows that it is insoluble in water; the current formulation strategy provides sorafenib as sorafenib tosylate tablets (23)(24)(25). A microcrystal formulation is a pharmaceutical formulation that converts drug powder into microcrystals with diameters of 30-50 μm (26-28). Previously, microcrystals have been used to improve the absorption of insoluble drugs administered orally because, unlike drug powders, they contact and mix with digestive fluid much more easily (29,30). Since sorafenib is insoluble in water, it can be prepared as a microcrystal formulation. A microcrystalline preparation of sorafenib injected directly into the tumor tissue can stay in the tumor tissue for a long time; the larger the particle size of the microcrystalline formulation, the longer it remains inside the tissue. In tumor tissues, through the erosion of the sorafenib microcrystals by tumor tissue cells, sorafenib is gradually released and kills tumor cells. At the same time, because the sorafenib microcrystals are injected directly into the tumor tissue, normal tissues are protected from damage. In this work, we prepared the pure powder of sorafenib as a novel microcrystal formulation. This approach could overcome the insolubility of sorafenib powder and concentrate the drug in the tumor without affecting the surrounding tissue. We also used retinal/fundus imaging in small animals to examine whether a single administration of the microcrystal formulation of sorafenib could achieve a long-acting antitumor effect and relieve the side effects associated with sorafenib.
Cell Culture and Preparation of Sorafenib Formulations
MHCC97-H cells (a highly aggressive HCC cell line) purchased from the Type Culture Collection of the Chinese Academy of Sciences were cultured in DMEM with 10% FBS at 37°C with 5% CO2. The pure-powder formulation of sorafenib (purity > 99% by high performance liquid chromatography) was a gift from Dr. Xi He in the Fifth Medical Center, General Hospital of Chinese People's Liberation Army of China (PLA). To make the sorafenib solution (Sor-Sol) formulation, sorafenib was first dissolved with sodium dodecyl sulphate, DMSO (dimethyl sulfoxide), PEG400, or Tween 80 (all purchased from Sigma Aldrich Corporation, St. Louis, MO, USA) and then diluted with physiological saline under ultrasonic or stirring conditions (final concentrations of DMSO, PEG400, or Tween 80: 1%, 4%, or 4%, respectively) (31)(32)(33). To prepare a microcrystal formulation of sorafenib, the pure-powder formulation was dispersed in an aqueous solution with 6.25% Tween 80 (27). Next, the systems were mixed using magnetic stirring, and the microcrystal formulation of sorafenib was prepared with a MiniZeta machine (NETZSCH Machinery and Instruments Corporation, Germany) equipped with yttrium-stabilized zirconium oxide grinding beads (0.6 mm in diameter), forming a coarse suspension of sorafenib. Then, the coarse suspension was transferred into the milling bowl, and the individual particle diameter of sorafenib in the microcrystal formulation was controlled by the agitator speed (500 rpm for a large individual particle diameter, 1500 rpm for a medium diameter, and 3000 rpm for a small diameter) (27). The sorafenib concentration in the solution formulation was almost 2 mg/mL; conversely, the sorafenib concentration in the microcrystal formulation could reach 30 mg/mL, according to LC-MS/MS (liquid chromatography–tandem mass spectrometry) (34). To perform a comparison experiment between Sor-Sol and Sor-MS, Sor-MS needed to be diluted to a matching sorafenib content of 2 mg/mL. The sorafenib microcrystal formulations were observed with an optical microscope and a conventional transmission electron microscope according to the methods of Yuan et al. in 2021 (35) and Quan et al. in 2020 (36). The particle size distribution charts were obtained as described in our previous publication (27).
Next, the microcrystalline formulations of sorafenib were analyzed for particle size. About 10 mL of the sample was measured with a pipette, diluted with 500 mL of physiological saline, and thoroughly mixed. Particle size was analyzed with a Mastersizer particle size analyzer (model Hydro 2000MU, Malvern) using the dynamic light scattering (DLS) module-method, and the measurements were used to obtain the particle size distribution data and the particle size distribution diagrams of the formulations.
Subcutaneous Tumor Model in Nude Mice
The experimental design and the protocol of the animal-related experiments, which were performed in accordance with the U.K. Animals (Scientific Procedures) Act 1986 guidelines, were reviewed and approved by the Institutional Animal Care and Usage Committee of the Fifth Medical Center of the General Hospital of Chinese PLA. For the subcutaneous tumor experiments, MHCC97-H cells were cultured and prepared as a single-cell suspension for subcutaneous injection into nude mice (5 × 10⁶ cells injected into every nude mouse) (37)(38)(39). Nude mice (BALB/c mice lacking a thymus/T cells) aged 4-5 weeks were purchased from the Si-Bei-Fu Corporation (Beijing City, China) and reared under specific pathogen-free conditions. After 2-3 weeks, in preparation for the next step (experiments on the in vivo sustaining ability of sorafenib formulations), the volumes of the subcutaneous tumors reached almost 1200 mm³. For the intrahepatic tumor model in immunodeficient rats (40), the MHCC97-H cells were cultured and injected into nude mice to form subcutaneous tumor tissues. When the tumors had formed, the tumor tissues were separated and prepared as tissue micro-blocks for the next experiments.
Release of Sorafenib From Formulations In Vitro or In Vivo
The rate of sorafenib release from the different formulations was examined by in vitro and in vivo methods. For the in vitro testing, Sor-MS was mixed with 10 mL of physiological saline (0.9% NaCl solution) containing 0.1% Tween 80 and kept under vortex-shaking conditions (27). A 1-mL volume of solution was removed at the indicated time points, and after each removal, physiological saline was added to maintain a total volume of 10 mL (27). For the in vivo experiments, Sor-Sol (as the control) or Sor-MS was directly injected into the subcutaneous tumors formed by MHCC97-H cells (percutaneous puncture), and tumor tissues were harvested at each time point. The physiological saline samples containing sorafenib or the tumor samples containing sorafenib obtained from these experiments were mixed with acetonitrile, and sorafenib was extracted from the samples. The amount of sorafenib released into physiological saline or the amount of sorafenib retained in the tumor tissues at the indicated time points was quantified by LC-MS/MS according to the methods described in a previous publication (38). The half-life (t 1/2 ) values of sorafenib were calculated according to the methods described by Wang et al. in 2020 (41). The expression of cellular proliferation, pro-survival/antiapoptosis factors, and epithelial-mesenchymal transition (EMT)-related factors in the subcutaneous tumor tissues was examined by qPCR (quantitative polymerase chain reaction) according to the methods of Ma et al. (42), and the primers used in the qPCR were also from Ma et al. (42). The heatmap of the qPCR results was generated according to the methods of Zhou et al. (43).
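As an illustration only (the cited half-life calculation method of Wang et al. is not reproduced here), a t 1/2 estimate under an assumed mono-exponential clearance model can be sketched as below; the example time points and amounts are hypothetical.

```python
import numpy as np

def half_life(times_h, amounts):
    """Estimate t1/2 from sorafenib amounts measured at the given time points,
    assuming mono-exponential decay A(t) = A0 * exp(-k * t).  This is an
    illustrative sketch, not the calculation method cited in the text."""
    times_h = np.asarray(times_h, dtype=float)
    amounts = np.asarray(amounts, dtype=float)
    # Linear least-squares fit of ln(amount) versus time gives slope -k.
    slope, _ = np.polyfit(times_h, np.log(amounts), 1)
    return np.log(2) / (-slope)

# Hypothetical example: a slowly releasing formulation retains drug for days.
print(half_life([0, 24, 72, 168, 240], [100, 96, 89, 76, 67]))
```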
For the intrahepatic tumor model, Sor-MS was injected directly into the intrahepatic lesions formed by MHCC97-H cells (the rats were injected directly into the tumor tissue after the abdomen was opened). The in vivo release of sorafenib from HCC tissues injected with the sorafenib formulations was measured from the concentration of sorafenib in the blood of the nude mice or immunodeficient rats.
Intrahepatic Tumor Model in Immunodeficiency Rats
To produce an intrahepatic tumor model (a liver in situ tumor model) in immunodeficient rats, the HCC cells (MHCC97-H cells) were injected subcutaneously into nude mice to form tumor tissues. Then, micro-blocks of tumor were separated from the subcutaneous tumors formed by MHCC97-H cells and directly inoculated into the livers of the immunodeficient rats (44)(45)(46). The weights of the micro-blocks are shown in Supplemental Table 1. After 3-4 weeks of growth, the MHCC97-H cells formed intrahepatic lesions/nodules in the rat livers, and the sorafenib formulations were directly injected into the intrahepatic lesions. At the same time, another batch of rats received oral administration of sorafenib once every 2 days. The livers were collected, and photographs were obtained and analyzed with ImageJ software (Version 1.51j8; National Institutes of Health, Bethesda, Maryland, USA) (47). Next, the intrahepatic lesions/nodules were confirmed by pathological analysis with Masson staining (48).
Side Effects of Sorafenib on Animals
The side effects of sorafenib in animals were identified by examining the injury to the fundus capillary network induced by sorafenib treatment. The fundus capillary network was examined by microcirculation imaging using the Retinal Imaging System (OPTO-RIS, Optoprobe, Canada). Immunodeficient rats were intraperitoneally injected with a solution of 1% pentobarbital sodium (0.3 mL/100 g) plus sumianxin (0.05 mL, 100% concentration). After general anesthesia, compound tropicamide eye drops (with ocular surface anesthesia using oxybuprocaine hydrochloride eye drops) were used to induce mydriasis. The images of the fundus and retina of the rats were obtained and quantitatively analyzed with ImageJ (47). Moreover, the body weight, hematological parameters, and mass of the main organs of the animals (nude mice or the immunodeficient rats) were examined according to the methods described by Huo et al. (32).
Statistical Analysis
All statistical significance analyses were performed using SPSS 9.0 statistical software (IBM Corporation, Armonk, New York, USA). The half-life values of the release from sorafenib formulations in vitro and in vivo were calculated with Origin 6.0 software (OriginLab, USA). Statistical significance was analyzed by two-way analysis of variance with Bonferroni correction for the group comparisons. Paired samples were tested by paired-sample t tests.
Preparation of the Sorafenib Formulations
First, the microcrystal and solution sorafenib formulations were prepared ( Figure 1). The microcrystal formulations of sorafenib contained irregularly shaped crystals with varied particle diameters. According to the size of the individual particle diameter (large, medium, or small) of the sorafenib crystals, three kinds of formulations were obtained ( Figure 1). The results were visualized as optical microscope images (Figures 1A-C) or transmission electron microscope images (Figures 1B-D) as well as particle-size distribution images ( Figures 1E-G). Examination of the concentration of sorafenib in formulation showed that the concentration of the microcrystal formulation reached more than 30 mg/mL ( Table 1).
Next, the formulations of sorafenib were filtered through a 0.1-μm pore-size filter to confirm the size of the individual particle diameter of the sorafenib crystals. As shown in Table 1, multiple filtrations through the 0.1-μm apertures significantly decreased the concentration of sorafenib in the microcrystal formulations but not in the sorafenib solution. Moreover, there were no significant differences between the concentrations of the microcrystal formulations after filtration (Table 1). Multiple filtrations did not affect the concentration of Sor-Sol. Thus, the microcrystal formulations of sorafenib were successfully prepared (Table 1).
Release of Sorafenib Formulations In Vitro or In Vivo
LC-MS/MS was used to examine whether the prepared sorafenib formulations could achieve long-sustained delivery of sorafenib, and the in vitro and in vivo release of sorafenib from the formulations was characterized by the t 1/2 values shown in Table 2. Among the in vitro values, the formulation with the small particle diameter had a t 1/2 of 51.33 ± 10.42 h, and the formulation with the large particle diameter released sorafenib in vitro most slowly among the three formulations; it was therefore named Sor-MS.
The in vivo release rates of sorafenib from the formulations were also examined by LC-MS/MS; the t 1/2 values for the different formulations were as follows: 26.71 ± 7.44 h for the sorafenib solution formulation (Sor-Sol), 410.36 ± 17.93 h for the sorafenib microcrystal formulation with the large particle diameter, 240.55 ± 10.40 h for the formulation with the medium particle diameter, and 78.67 ± 9.82 h for the formulation with the small particle diameter. These results demonstrated that the microcrystal formulations of sorafenib could achieve sustained release both in vitro and in vivo.
Single Administration of Sor-MS, But Not Sor-Sol, Significantly Inhibited In Vivo Growth of MHCC97-H Cells
The results in Section 3.2 showed that preparing sorafenib as a microcrystal formulation could achieve sustained release/long retention of sorafenib in tumor tissues. Whether preparing sorafenib as a microcrystal formulation could also achieve long-acting antitumor activity was examined in the subcutaneous tumor model. As shown in Figure 2, a one-time intratumor injection of Sor-MS, but not of Sor-Sol, could significantly inhibit the subcutaneous growth of MHCC97-H cells in nude mice. Moreover, the tumor tissues were collected and analyzed by qPCR, which showed that a single administration of Sor-MS, but not of Sor-Sol, could inhibit the EMT process of HCC cells in the subcutaneous tumor tissues (Figure 3).
Sor-MS Alleviated the Side Effects of Sorafenib in Animals
The side effects of sorafenib were examined in the rats with the intrahepatic tumor tissues. As shown in Figure 4, one-time administration of Sor-MS, but not Sor-Sol, could significantly inhibit the intrahepatic growth of MHCC97-H cells in the livers of immunodeficient rats. Oral administration of sorafenib (mimicking the long cycle of clinical sorafenib treatment) could also inhibit the intrahepatic growth of MHCC97-H cells.
Next, a small-animal fundus imager was used to examine the side effects of the sorafenib formulations on the microcirculation of the rats. As shown in Figure 4, oral administration of sorafenib significantly disrupted the microcirculation and decreased the retinal thickness of the rats compared with the untreated group. Conversely, a single administration of Sor-MS did not disrupt the microcirculation or decrease the retinal thickness of the rats compared with the untreated group or the group that received sorafenib orally. The effects of the sorafenib formulations on the body weight, hematological parameters, and mass of the main organs of the animals were also examined to further reveal the adverse effects induced by sorafenib. As shown in Tables 3 and 4, oral administration of sorafenib (Sor-Sol), but not intratumor injection of Sor-Sol or Sor-MS, significantly decreased the hematological parameters (leukocyte, red blood cell, hemoglobin, and platelet counts), the body weight, and the weights of the major organs (heart, liver, lung, kidney, and spleen) of the nude mice mentioned in Figure 2. Moreover, it is worth noting that long-term oral administration of sorafenib seriously injured the hematological parameters, body weight, and major-organ weights of the immunodeficient rats mentioned in Figures 3 and 4 (Tables 5 and 6). A single intratumor injection of Sor-MS not only exerted antitumor activity against the intrahepatic growth of HCC cells but also did not affect the hematological parameters, body weight, or major-organ weights of the immunodeficient rats mentioned in Figures 3 and 4 (Tables 5 and 6). Therefore, the Sor-MS preparation of sorafenib could improve the side effect profile of sorafenib.
The Blood Concentration of Sorafenib Released From the Sorafenib Formulations
FIGURE 4 | Fundus intravital imaging of immunodeficient rats with intrahepatic lesions that received sorafenib formulations. MHCC97-H cells were cultured, and intrahepatic lesions of hepatocellular carcinoma (HCC) were established in the livers of immunodeficient rats. Rats received one intratumor injection (50 μl) of the sorafenib microcrystal formulation with the largest particle size (Sor-MS, 30 mg/mL) or sorafenib via oral administration (2 mg/kg, repeatedly over a long period of time). Results are shown as images of the rat fundus retinal capillary network (A), images of the rat fundus retinal thickness (B), and a quantitative analysis of the images of the rat fundus retinal capillary network (C) or rat fundus retinal thickness (D). *P < 0.05. The white arrows indicate the capillaries and the retina.
Although calculating the half-life values in tumor tissue could reflect the metabolism and clearance rate of sorafenib, it is still insufficient. Therefore, the concentration of sorafenib in the blood of animals after intratumor injection of the sorafenib formulations was further examined by LC-MS/MS. As shown in Figure 5A, after injection of Sor-Sol in nude mice, sorafenib was rapidly cleared from the subcutaneous tumor tissues, and its blood concentration peaked within 24 h. However, after intratumor injection of Sor-MS, the clearance of sorafenib from the tumor tissues was much slower than with Sor-Sol, and the concentration of sorafenib in the blood of nude mice remained constantly low and could still be detected at the 240 h time point after injection. Similar results were obtained after intratumor injection of Sor-MS into the intrahepatic lesions of immunodeficient rats (Figure 5B). These results further confirmed the long-sustaining in vivo release feature of Sor-MS.
DISCUSSION
The molecularly targeted agents represented by sorafenib remain the first-line choice to treat advanced HCC (49)(50)(51). Although some clinical trials have shown that the oral administration of sorafenib (as NATCO) could improve the survival of patients, the side effects in these trials cannot be ignored (52). As research has progressed, some new molecularly targeted drugs, including regorafenib (53), lenvatinib (54,55), and cabozantinib (56), have been approved to treat advanced HCC. These drugs have better therapeutic effects than sorafenib in advanced HCC (53)(54)(55)(56). Nevertheless, these drugs are similar in structure to sorafenib and have the general structural formula of 1-(4-(pyridin-4-yloxy) phenyl)urea. Thus, these drugs may not be able to completely overcome many of the shortcomings of sorafenib. Improvements in the pharmaceutical preparation process for sorafenib will help achieve better therapeutic effects and use a different strategy than pure compound structure modification (57). To overcome the challenges associated with sorafenib administration/application, we prepared a novel formulation of sorafenib based on its insoluble features that could be easily administered into a tumor and that offered sustained-release of sorafenib in HCC tissues. One-time administration of Sor-MS achieved antitumor activation of sorafenib. This work extended our knowledge about sorafenib, and injection of Sor-MS into HCC tissues of patients, guided by computed tomography or digital subtraction angiography, would be a promising strategy for advanced HCC treatment.
Interventional therapy and molecularly targeted therapy are both treatment strategies for advanced HCC (10,58). The existing combined strategy of interventional therapy and molecularly targeted therapy mainly involves patients receiving interventional therapy, such as RFA (radiofrequency ablation) or TACE (transcatheter arterial chemoembolization), while taking molecularly targeted drugs, such as sorafenib, at the same time (59)(60)(61)(62)(63)(64). Although existing research shows that molecularly targeted drugs combined with interventional therapy can significantly improve outcomes in patients, the current treatment strategy still fails to fully utilize the synergistic advantages of the two approaches (59)(60)(61)(62)(63)(64). Interventional therapy is an ideal strategy for the comprehensive treatment of advanced HCC: (1) via TACE, drugs can enter the HCC tumor tissue directly and avoid affecting the surrounding normal liver tissue (10,58-62); (2) RFA can directly damage the HCC tumor tissue while avoiding damage to the surrounding tissues as much as possible (59)(60)(61)(62)(63)(64). These advantages make interventional therapy useful for precision drug delivery to HCC tumors, but many shortcomings in the related research remain; only a few antitumor drugs, such as doxorubicin, are widely used (65,66). Therefore, the results of this study are of great significance: sorafenib has not only been developed into a new pharmaceutical preparation suitable for TACE but can also provide more options for safer and more effective treatment of HCC in the future.
Sorafenib and other molecularly targeted drugs have side effects, and the core mechanism of these effects is the destruction of the microcirculation (i.e., human normal vascular endothelial cells). However, there are many difficulties in related research. Experimental animals and their tissues with developed microcirculation, including the intestinal mucosa, spleen, and alveoli, can be used for side effect research. Ultrasound may be included to determine the blood supply of these organs, and H&E staining can detect the tissue microenvironment and the microstructure of the mucous membranes.
This study explored the side effects of sorafenib, and it has many advantages compared with previous research methods. In this study, a new microcrystal formulation of sorafenib was developed that simulates interventional therapy by direct injection into the tumor tissue while providing long-term sorafenib treatment. Sor-MS was injected directly into the tumor tissue, and a single injection had long-term antitumor activity. At the same time, the slow-release characteristics of Sor-MS ensured that sorafenib was mainly distributed in the tumor tissues and had minimal impact on normal organs. In the control group (oral gavage of sorafenib), sorafenib was distributed throughout the animal, and the long-term effect of this distribution could include damage to normal organs.
As the only transparent organ of the human body, the eyeball can be directly imaged and observed. Fundus imaging can not only take pictures of the vascular network but also detect the thickness of the retina. The results of this study show that a single injection of Sor-MS into the tumor tissue will not affect the fundus microcirculation and retina of experimental animals, whereas long-term oral administration of sorafenib to animals can destroy the fundus microcirculation and retina. Therefore, this study not only expands our understanding of sorafenib-related toxicology but also provides new insights about imaging of live small animals.
Moreover, in recent years, some particle carriers for targeted drugs have been developed. For example, Shi et al. prepared "Apatinib-loaded CalliSpheres Beads" for embolization and examined the pharmacokinetics and tumor response in a rabbit VX2 liver tumor model (67). The strategy of the present study is fundamentally different from these studies: those studies must use microspheres made of polymer materials (such as CalliSpheres Beads) to physically adsorb the molecularly targeted drugs, the chemical properties of the molecularly targeted drugs affect the amount of drug carried by the microspheres, and the drug-loaded product obtained with those strategies is mainly the microspheres themselves, so the drug content is limited. The microcrystalline preparation prepared in this study does not contain polymer materials, so it can achieve a dosage of more than 30 mg/mL. At the same time, the particle size of the obtained Sor-MS can be controlled by adjusting the process, so as to achieve embolization of the blood vessel with the molecularly targeted drug itself.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee of fifth medical center of Chinese PLA. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. The animal study was reviewed and approved by Animal Ethics Committee of Fifth medical center of Chinese PLA.
AUTHOR CONTRIBUTIONS
HX, XY, and JW designed research. JW, RL, and YZ performed the experiments. ZM, ZS, and ZW participated in the preparation of the manuscript. HX and XY wrote the manuscript with contributions from all authors. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by grants from the National Natural Science Foundation of China (No. 81971720).
"Chemistry",
"Biology",
"Medicine"
] |
Assessment of the Practices for Early Mathematics Thinking in Preschools of Pasaje City, Ecuador
Preschool education is fundamental to shape children's aptitudes and skills in early life. Ecuador is following a global education trend of starting up mathematical thinking at earlier developmental ages, but this is only reflected in vehement curricular changes that are not properly supported. As a result, the safeguarding of a good education for children appears to be lessened. This work aimed to evaluate the mathematics thinking practices in preschools of Pasaje city, Ecuador. The investigation employed a descriptive approach; hence data were collected from 65 teachers and 810 parents from public and private preschools by means of interviews and questionnaires, in an attempt to define some causes (teachers' education, children's socio-economic and family circumstances) that impair the initiation of numerical, spatial, metric, and geometric aptitudes in children. The results showed that not only preschool practices but also the home environment, linked to socio-economic status, appear to have positive or unfavorable influences on children's education.
Introduction
The dictionary definition of the word "kindergarten" denotes a preschool playgroup and also captures its main attribute, "a place to play", where game activities are essential practices. Preschool instruction is fundamental to shape children's aptitudes and skills, offering the elementary tools to access future educational levels (Espinoza, 2018). In Scandinavian countries, the views and attitudes of preschool teachers follow a basic common answer: "mathematics is everywhere and is something we use every day" (Benz, 2012; Lee & Ginsburg, 2007). Mathematically oriented activities play important roles as a formative instrument at preschool (Initial Education, 2005); they also help children improve the rational thinking needed to access learning information.
As a rule of thumb, curriculum developers include lessons focusing on rational awareness in preschool education, but in the Ecuadorian context, something that should be a means became a mandatory end. Nevertheless, due to the poor results of the rational and mathematical thinking components of the preschool curriculum, these topics have lost importance in preschool practices. In this context, the Ministry for Education has engaged in a permanent search for strategies to educate the population on the transfer of good capabilities so they can live in a learning technology society ([Ecuadorian Ministry of Education] MEE, 2014). In Ecuador, during the so-called "Citizens' Revolution", the preschool curricula changed drastically (but were not properly strengthened), and as a consequence, the official claims of safeguarding a good education from childhood to adulthood (higher education) were weakened. Several incongruences still surround the praxis in Ecuadorian preschools, where the accelerated changes in the curricula do not correspond with the socio-economic situations of teachers and parents; these include inconsistencies and problems that are consequences of the reform design, its implementation, and structural and historical circumstances (Guayasamin, 2016), among which are the acquisition of skills by teachers ("teachers do not understand the importance of mathematical thinking or do not understand what it really is"; Katagiri, 2004, chapter 2) and a lack of understanding by society of the reasons for such changes. Authors suggest that Correa's reforms from 2006 to 2017 were a negative example of top-down reform lacking the construction of an ample coalition of unions with strong social support (Schneider, Cevallos & Burns, 2017). The incoherence begins with the very scientific grounds. For example, the cognitive development of young children has scarcely been studied in Ecuador and is practically absent from reports, as are the socio-economic and family circumstances (poor or wealthy; rural or urban). Similarly, socio-pedagogical research that combines theory with practice in the formation and development of mathematical thinking in preschool is rare. These combined circumstances form an enormous contradiction in educational policies that pretend that effective mathematical thinking in preschool will construct the better citizens that the country needs. The failures are evident; for instance, undeveloped mathematical thinking still occurs in older children in basic education schools (Espinoza, 2018), typified by failures in mathematical language, unfamiliarity with metrics, and deficiencies in numerical thinking (counting), spatial awareness, and geometric systems, among other circumstances.
Theoretical framework
Preschool education is the start-up for children's cognitive growth and requires unique pedagogical foundations suited to their young age, as well as solid didactic methods from teachers. Authors such as Fernandez (2003), Ortiz (2002), Ruesga (2011), Siegler and Svetina (2002), Tobon (2012), Valencia and Galeano (2005), and Villegas (2011) made contributions on various factors involving intellectual, motoric, and artistic development, child games, and children's thinking.
With regard to mental growth, it has been determined that the development of logical thinking requires the acquisition of rising reasoning levels, interpretation, argumentation, and the ability to plan proposals and alternative solutions to any type of difficulty or conflict that the child confronts every day (Valencia & Galeano, 2005). Similarly, Bishop (1999), Fernandez (2003), and Ruesga (2011) examined in depth the singularities of mathematical thinking at early ages. The comprehension of mathematical actions by children depends on their level of cognitive development, and it can be stimulated by mathematical-logical activity as well as by analytical capacity and assertiveness, such as confidence in their own abilities, perseverance in the search for solutions, and the pleasure of learning. By developing mathematical reasoning, children can develop their normal and abstract thinking along a coherent pathway. The associations established by children within different life atmospheres permit them to acquire life experiences for future school life (Villegas, 2011).
It can be stated that mathematical thinking is a unique quality in each child and is built from their own infant experiences. These conditions represent a challenge for teachers, who must meet the cognitive needs of every child according to their individual learning styles, but the combined child-teacher actions can trigger meaningful experiences. According to Tobon (2012), "starting-up from simple things such as buying, playing, measuring, singing, selecting, the child will be able to develop logical thinking skills" (p. 13).
There are different theoretical frameworks for addressing the development of children's mathematical thinking, such as the Japanese scholastic approach (Katagiri, 2004) or the cultural-historical point of view used in the Netherlands (based on Vygotsky's theory), which has generated educational approaches based on the play perspective and imitative participation (Van Oers, 2010). In the Ecuadorian context, the belief in a hidden mathematical talent in every child must not be applied arbitrarily as a constructivist assumption (Cobb, 1994) or pseudoconcept; on the contrary, it can only be encouraged (if it is socially relevant and accepted), and in an amusing way according to natural infant behavior and the social context.
In Latin American countries, the direct importation (copy-paste) of foreign teaching stereotypes is a common bureaucratic practice; a sort of 'pedagogical cloning' that attempts to replicate innate geniuses (chess players, violinists, numerologists, or scientists), pretending that filling up a country with mathematicians will improve the economy and general welfare. There are examples of mismatches in copy-paste practices that cannot meet the real context of a developing country, such as the use of drastic mathematical concepts forced to fit into the preschool curriculum; for instance, the case of Valencia and Galeano (2005), who recommend elusive components to address the initiation of mathematical thinking.
Numerical thinking: the intuitive concept of numbers, thinking and counting tasks, and organization and sequencing with different elements.
Geometric spatial thinking: to provide the ability for examining and analyzing the properties of two dimensional and three-dimensional spaces, as well as the shapes and figures present inside, by spontaneous and amusing activity.
Metric thinking and measurement system: to approach the process of measuring physical elements and comparing them; which one is longer or shorter? etc.
The same authors (Valencia & Galeano, 2005) suggest that the following operations be included for the starting-up of mathematical thinking in preschool: (1) Classification, which lies in learning to group objects according to one or more conditions or qualities, for instance, classifying objects present in the environment by color and making subgroups having the same qualities; (2) Seriation, which consists of ordering objects according to a pattern, for instance, ordering from highest to lowest; (3) The number, which is the acceptance that the number is a property of groups; (4) Representation, a sort of inner mirroring of the outside world based on Piaget's principle of conservation, i.e., that objects exist despite not being present at any given time. Preschool children can exercise representation through: (i) Imitation of an act of supposition; (ii) Serial representation: sorting objects according to some of their parts; (iii) Allegorical representation: sorting bi-dimensional objects by drawing; (iv) Codification: arbitrary classification shared by society through the word, number, or graph.
Other forms of logical teaching practices for starting up mathematical thinking are the awareness of space and the understanding of time. Regarding space, the child constructs notions, relationships, and structures of the objects that surround him or her. The understanding of time is related to the physical and social knowledge of the child at the moment in which he or she constructs events and attends to a logical and chronological sequence of events. Valencia and Galeano (2005, p. 250) suggest the following preschool activities: (a) Perform some operations between groups; (b) Recognize, analyze, and symbolize certain relationships between elements in a numerical group from 1 to 99; (c) Use addition and subtraction of numbers from 0 to 99; (d) Differentiate problems that pose an additive situation and give a solution.
Luckily, on the other hand, there exist natural methods for triggering mathematical thinking that come from the child's own behavior, such as the liking for play, which is an inherent activity in childhood and also the best way to get in contact with objects and other infants, to stimulate children's learning, and to delineate their character (Claro, 2013). It has been shown that play is one of the six cultural activities motivating the development of mathematical ideas; the other five are counting, measuring, locating, designing, and explaining. In addition, play promotes communication skills, sets up challenges, generates situations of doubt, and develops reasoning (Bishop, 1999; Fernandes et al., 2017). Children also use games to define and respect rules and to become predisposed to discipline. Therefore, it is indispensable to introduce game activities to support the most basic and natural aspects (counting, measuring, and shaping) of the child's reasoning in preschool. For example, this can be done by asking simple questions such as where there is more or where there is less (Garcia & Perez, 2011). Through playing games, the boy and the girl can interact with the surrounding world to shape their own learning ([National Ministry of Education] MEN, 1998). Parents and teachers can use game activity to guide the first steps in mathematical learning (De Souza, Concentino, Bazan & Luccas, 2017); a positive teacher-child association between game and learning (which remains to be developed) could be a useful tool for the development of mathematical thinking in Ecuadorian preschools.
Research goal
The objective of this research was to evaluate the mathematical thinking practices in preschools of Pasaje city, Ecuador.
The researchers expected to gain insights into the socio-economic factors influencing teacher attitudes (low salaries) and home circumstances (good or poor welfare). Various concerns were identified from participants' responses, exposing the obstacles that teachers and children encounter during the starting-up of children's mathematical thinking.
Participants
The total participant population was obtained from preschool records or by direct counting.
The participants were 104 teachers and 986 parents belonging to the 54 preschools (including the respective groups of children aged 3-5 years) in Pasaje city. From this population, a stratified working sample of 65 teachers and 810 parents was randomly selected to guarantee the representativeness of each school, with at least one teacher and 15 parents per school. In this way, 357 parents corresponded to private schools and 453 to public schools; a minimal sketch of how such a stratified selection could be reproduced is given below.
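As a rough illustration of the sampling step described above, the following Python sketch draws at least one teacher and 15 parents per school from hypothetical participant lists; the data structure and school names are invented for the example and are not taken from the study.

```python
import random

def stratified_sample(schools, teachers_per_school=1, parents_per_school=15, seed=0):
    """Pick at least one teacher and a fixed number of parents from every school."""
    rng = random.Random(seed)
    sampled_teachers, sampled_parents = [], []
    for school in schools:
        # 'teachers' and 'parents' are lists of participant IDs for this school
        sampled_teachers += rng.sample(school["teachers"],
                                       min(teachers_per_school, len(school["teachers"])))
        sampled_parents += rng.sample(school["parents"],
                                      min(parents_per_school, len(school["parents"])))
    return sampled_teachers, sampled_parents

# Hypothetical example with two schools
schools = [
    {"name": "school A", "teachers": ["tA1", "tA2"], "parents": [f"pA{i}" for i in range(20)]},
    {"name": "school B", "teachers": ["tB1"], "parents": [f"pB{i}" for i in range(18)]},
]
teachers, parents = stratified_sample(schools)
print(len(teachers), len(parents))
```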
Data Collection
The research data were obtained through interviews and questionnaires. A simple protocol was used to collect, process and analyze the data, interviews and opinions from teachers and parents. The regular teaching practices in local preschools, and the components and practices involved, were identified by means of systematic observation. Interviews with parents and teachers were conducted to determine whether both were qualified for children's tuition during this stage. The interview questions presented to teachers were a modification of those proposed by Acosta de la Cueva (2010).
Procedure
In order to avoid gathering biased data, the interviews with parents and teachers and the classroom observations were performed by 8 social workers trained specifically for this interview, none of whom worked in Pasaje city.
In the interviews with the parents, the examiners verified (using impartial questioning) the parents' literacy level and their aptitude for didactic interventions in support of the start-up of mathematical thinking. Direct observation of teaching activities in the classroom allowed the start-up of mathematical thinking in children to be diagnosed as adequate or inadequate. The teaching practices in the classroom were evaluated by impartial social workers with groups of 22 to 30 children (from private and public schools, respectively) under the supervision of the preschool principal.
The observers recorded dichotomous responses, adequate or inadequate, according to the following mathematical cases. Numerical thinking: the children performed operations between simple groups, including recognizing, analyzing and representing some relationships between the elements of the numerical group from 1 to 99, and using addition and subtraction in the group of numbers from 0 to 99 (distinguishing problems and solving them).
Metric thinking: the activities took into consideration (1) establishing comparison relationships between elements and concluding which one is longer or shorter; (2) measuring the length of some elements with unconventional tools and concluding which one is longer or shorter.
Geometric-spatial thinking: evaluated by the children's performance of some topological spatial relationships, recognizing and classifying solids and flat surfaces, and beginning to recognize the concept of symmetry from certain regularities present in bodies and figures.
The parents' interviews focused on their literacy to support the start-up of mathematical awareness in their children. The parameters evaluated were (I) parents with good literacy; (II) parents who understand how to stimulate their children's mathematical thinking; (III) parents who assist the children's study activities and attend to their concerns; and (IV) the quality of life in affective terms, i.e., the child feels respected, loved and protected in the home environment.
Analysis of Data
The research adopted a simple descriptive approach. The data collected were processed with standard spreadsheets for basic descriptive statistics and presented in tables and graphs; a small illustration of this kind of summary is sketched below.
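The summaries reported in the tables could, for instance, be reproduced with a short script such as the following; the column names and values are hypothetical and only illustrate the kind of frequency/percentage tabulation a spreadsheet would produce.

```python
import pandas as pd

# Hypothetical dichotomous observation records (1 = adequate, 0 = inadequate)
records = pd.DataFrame({
    "school_type": ["public", "public", "private", "private"],
    "numerical":   [1, 0, 1, 1],
    "metric":      [0, 0, 1, 1],
    "geometric":   [1, 0, 1, 0],
})

# Frequencies and proportions per school type, the kind of summary shown in the tables
summary = records.groupby("school_type").agg(["mean", "sum", "count"])
print(summary)
```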
Results
The results are presented and analyzed below. As Table 1 clearly shows, the game-based and playful practices adopted by teachers to prompt mathematical responsiveness were scarcely systematized, which in turn caused children to reject mathematical topics. Teachers did not apply innovative practices in class to stimulate mathematical thinking. Secondly, according to the evaluators, the preschool textbook was used little or used erroneously.
Teachers' consensus on mathematics in preschool. Interestingly, within the teacher community there was a consensus in favor of initiating mathematical reasoning and thinking in preschool children, as Figure 1 shows.
The interviews conducted by the trained observers with the 810 parents revealed the information presented in Table 2 (parents' literacy as support for the start-up of mathematical awareness in their children). There was a correspondence between the parents' literacy and the good study performance of their children, including in mathematics-related classes. The diagnosis of children's mathematical awareness by school type (private or public) is given in Table 3 (parents' literacy and school type). The information showed different effects on children's mathematical responsiveness depending on the home atmosphere and on private versus public schooling, suggesting a correlation between preschool quality and the cognitive development of preschool children. Figure 3 shows low teaching performance in pedagogical practices for promoting mathematical operations among the preschool students. The number concept received a negative appreciation, and consequently the seriation and classification components of mathematical thinking were negatively affected. The same occurred with the representation component; if children lack the ability to enumerate, measure or classify, mathematical games may also be perceived as less amusing.
Discussion
The diagnosis of preschool teachers' perceptions and practices in mathematics combines aspects ranging from classroom tasks to classroom interactions. The preschool teachers' consensus in favor of the start-up of mathematical thinking in preschool partially agreed with Acosta de la Cueva (2010, p. 34), who recognized the teacher's importance in stimulating and guiding the children in this class. However, the consensus was contrary to Barnett (2004), who emphasized the importance of teacher training for better preschool practices; here the Pasaje teachers evidently fell short of the theoretical and practical basis needed to achieve this goal. The proficiency of preschool teachers had already been perceived as deficient in Ecuador (Fabara, 2013). However, this fact cannot be the only cause of the deficiencies occurring in Pasaje preschools; for example, in Japan the teaching of mathematical thinking was for 45 years by no means sufficient and far from suitable in reality (Katagiri, 2004). Concerning the children's home context, Avalos et al. (2018) also considered that a good family context is indispensable for the integral formation of the child. Consequently, family aspects such as affective, cognitive, psychological and social factors must be taken into account: the cognitive part requires concrete actions to enhance the children's attitudes, while the affective area deals with emotional control to manage love or anger (p. 2). The parents claimed to offer a good home environment (respect and protection) for their children, but the results suggest constraints in the home environment that frustrate good support for starting up the children's mathematical thinking. On the other hand, preschool quality and good cognitive development of children appear to be related, as occurs in other countries (Tomar & Kumari, 2017).
The observations of teachers conducting mathematical game activities showed overprotective and incorrect guidance, with children given no freedom from the teacher's control when performing mathematical tasks and games. This contrasts with the finding by Wing and Beal (2004) that preschool children were successful at similar tasks when prompted by teachers. The influence of inter-subjectivity when children are required to think freely or solve classroom tasks has been reported in other countries (Nurlaily, Soegiyanto & Usodo, 2019); overprotection could be an undetermined factor delaying the typical development of mathematical thinking in preschool children.
McMullen, Hannula-Sormunen, and Lehtinen (2013) showed a significant increase in the use of quantitative relations with age as children acquire a sense of and command over numbers. The lack of teacher prompting could constitute a difference in experimental settings between the current study and previous studies of children's reasoning about numbers, metrics and forms, which may have caused inconsistencies in the use of mathematical thinking. Assessing children's mathematical thinking is a complex task in terms of teaching; it requires understandings and theories about young children, learning and teaching. The evaluators were trained to avoid inter-subjectivity; however, it cannot be said whether or not the children felt comfortable expressing their learning in the presence of strangers (the evaluators) and authorities (the school principal).
Conclusion
This work was a preliminary diagnosis of the pedagogical practices and parents' tuition aptitudes for promoting the start-up of mathematical thinking in preschool children, and it found the following barriers and deficiencies:
- Insufficient teacher training for including game practices.
- Erroneous use of the official textbook.
- Limited methodological resources available to the teacher.
- Differences in family welfare, parent literacy and parental tuition.
These social, psychological and pedagogical deficiencies could diminish the acquisition of initial mathematical skills and block continuity in later basic school education.
On the other hand, the need for mathematics training programs for preschool teachers, and the role played by the socio-economic circumstances of families, remain to be examined.
"Mathematics",
"Education"
] |
Variation in IgE binding potencies of seven Artemisia species depending on content of major allergens
Background Artemisia weed pollen allergy is important in the northern hemisphere. While over 350 species of this genus have been recorded, there has been no full investigation into whether different species may affect the allergen diagnosis and treatment. This study aimed to evaluate the variations in amino acid sequences and the content of major allergens, and how these affect specific IgE binding capacity in representative Artemisia species. Methods Six representative Artemisia species from China and Artemisia vulgaris from Europe were used to determine allergen amino acid sequences by transcriptome, gene sequencing and mass spectrometry of the purified allergen component proteins. Sandwich ELISAs were developed and applied for Art v 1, Art v 2 and Art v 3 allergen quantification in different species. Aqueous pollen extracts and purified allergen components were used to assess IgE binding by ELISA and ImmunoCAP with mugwort allergic patient serum pools and individual sera from five areas in China. Results The Art v 1 and Art v 2 homologous allergen sequences in the seven Artemisia species were highly conserved. Art v 3 type allergens in A. annua and A. sieversiana were more divergent compared to A. argyi and A. vulgaris. The allergen content of Art v 1 group in the seven extracts ranged from 3.4% to 7.1%, that of Art v 2 from 1.0% to 3.6%, and Art v 3 from 0.3% to 10.5%. The highest IgE binding potency for most Chinese Artemisia allergy patients was with A. annua pollen extract, followed by A. vulgaris and A. argyi, with A. sieversiana significantly lower. Natural Art v 1-3 isoallergens from different species have almost equivalent IgE binding capacity in Artemisia allergic patients from China. Conclusion and clinical relevance There was high sequence similarity but different content of the three group allergens from different Artemisia species. Choice of Artemisia annua and A. argyi pollen source for diagnosis and immunotherapy is recommended in China.
Artemisia pollen is an important cause of autumn seasonal allergic respiratory disease, especially along the Asia-Europe silk road and in the north-western United States [1][2][3][4][5][6]. Between 350 and 500 Artemisia species have been recorded in the plant kingdom worldwide [1,7], 187 of them in China [8]. The phylogeny of the Artemisia genus, updated by molecular marker analysis [7,9], has reached a consensus of six sections: Artemisia, Abrotanum, Dracunculus, Absinthium, Seriphidium and Tridentata. Most Artemisia species belong to the first four sections and are distributed in temperate climate regions, where the majority of mugwort pollen allergic patients live. The few species belonging to Seriphidium and Tridentata are distributed in semi-desert to steppe environments [10]. Some Artemisia species are dominant in natural vegetation, contributing to the geographic differences in pollen allergy [5]. Artemisia vulgaris is the best studied species, distributed mainly in north-western and central Europe. Five major species have been listed in China (A. annua, A. argyi, A. sieversiana, A. capillaris, A. lavandulifolia) in a national pollen survey [11], and there is preliminary clinical and immunological evidence of the potential IgE binding potency of the first three [12,13]. A few species, such as A. annua, have invaded Europe and America, becoming potentially severe allergenic sources [14]. Artemisia pollen allergy is directly related to the distribution and density of Artemisia spp., climate [6] and risk factors [15]. Currently, commercial mugwort pollen allergen extract CAPs are from A. absinthium (w5) and A. vulgaris (w6), the latter being the most commonly used in diagnosis.
Molecular characterization of Artemisia vulgaris and Artemisia annua has revealed seven allergens, with the clinical data and reference DNA and protein sequences published [16,17]. Art v 1 and Art v 3 have been shown to be major allergens worldwide, and a newly identified group, Art an 7 also seems to be important, although its IgE values are usually much lower [3,[18][19][20]. By sequence cloning of a single species of Artemisia vulgaris pollen, seven Art v 1 isoforms have been identified, with only slight variation in the C-terminal and very similar IgE reactivity [21]. Five Art v 3 isoforms have also been identified, one a partial sequence by N-terminal sequencing [22] and the other four by gene cloning [23]. Diversity of group 7 allergen sequences of seven Artemisia species has recently been reported, where two isoforms for each species have been found with over 95% identical sequence [17].
The current commercial mugwort pollen extract used for skin prick testing and immunotherapy in China is mainly from A. sieversiana, even though A. annua was recognized as an important allergen source in the 1980s [5], and a recent report states that a mixture of pollens from three species (A. argyi, A. annua, A. sieversiana) would be better for immunotherapy (Bai et al., China Patent CN102512673B). With a serum pool from the USA, high levels of cross-reactivity have been found by ELISA inhibition among nine Artemisia species, with two local sage species being the strongest inhibitors [1]. Very recently, using immunoblots, similar IgE binding patterns have been found for seven Artemisia species, with some degree of difference in three major allergen bands [17].
Cross-reactivity has been found in different Artemisia species [1], but whether different species in China have an impact on the allergen diagnosis and treatment has not been fully investigated. This study aims to provide a comprehensive analysis of sequence variation of different isoforms and variants, content of allergens Art v 1, Art v 2 and Art v 3, and their impact on IgE binding of six representative Artemisia species in China.
Materials and methods
A graphic research design is presented in Fig. 1, with detailed information given in the following sections.
Artemisia species and protein extract
We used pollens of seven Artemisia species: six collected in China (A. annua, A. argyi, A. sieversiana, A. capillaris, A. lavandulifolia and A. gmelinii) and A. vulgaris from Europe [24]. Aqueous protein extracts of pollen were prepared by resuspending 0.2 g pollen grains in 3.5 ml PBS or 2 g in 35 ml PBS buffer (0.14 M NaCl, 2.7 mM KCl, 7.8 mM Na2HPO4, 1.5 mM KH2PO4) and shaking for 12 h at 4 °C. Extracts were centrifuged at 10,000 g for 10 min at 4 °C and filtered through 0.22 µm filters (Millipore), and the protein concentration of the extracts was determined using the BCA protein assay kit (Takara Bio, Japan). Three independent extracts from different pollen samples of each species from China, and one A. vulgaris sample, were prepared and used for whole-protein and individual allergen quantification. The rArt v 1.0101 and rArt v 3.0201 standards were from previous studies [21,23].
RNA extraction and Transcriptome
Total RNA was extracted from pollens of the six Artemisia spp. collected from China using the RNAprep pure kit (Tiangen, China) [17], and sequenced by BGI-Shenzhen and Hangzhou One Gene Ltd using an Illumina HiSeq 2000 (San Diego, CA, USA). De novo transcriptome assembly was performed with the Trinity software package with a minimum K-mer of 3 and a minimum contig size of 100 bp. After extraction of allergenic protein sequences, blastx was used for alignment (e-value 1e−5) between the Unigenes and protein databases from the Artemisia annua genome [25].
Cloning of Art v 1, Art v 2 and Art v 3 homologues
Pollen cDNA was prepared with the PrimeScript RT reagent kit with gDNA Eraser (Takara Bio, Japan) using mRNA fragments as templates. The full-length Art v 1, Art v 2 and Art v 3 homologues were obtained by PCR using primers based on the corresponding Art v sequences. At least eight clones were selected for sequencing. Nucleotide sequences and deduced amino acid sequences from the different pollens have been deposited in GenBank. Isoallergens and variants were named following the nomenclature and the updated official list of the WHO/IUIS Allergen Nomenclature Sub-committee [26] and have been approved.
Natural allergen purification and protein identity
Monoclonal antibodies (mAb) used in this study were from previous research [27]. A7-G4-E6 specific to Art v 1, C9-C1 to Art v 2, and A2-B8 to Art v 3, were used to purify three groups of allergens from six selected Artemisia spp. from China as described previously [27]. LC-MS/MS (Thermo Scientific Q Exactive) was used for identity-matching of the purified proteins to deduced allergens from each species. The purity of natural allergens was estimated by SDS-PAGE.
Quantification of three components by ELISA
A. argyi extract was used to immunize two New Zealand rabbits to produce polyclonal antibodies (pAb), injecting 500 μg protein in incomplete Freund's adjuvant followed by three subcutaneous boosters of 250 μg protein at intervals of 7-14 days. The antibody quality was checked by both Western blot and ELISA. The antibodies were produced by Hua An Biotech Ltd., Hangzhou, China. An mAb (A7-G4-E6) and rabbit pAbs were used for quantification of Art v 1 homologous proteins, and an mAb (C9-C1) and rabbit pAbs for quantification of Art v 2 homologous proteins. A selected mAb pair (A2-B8 and biotinylated A9-G10) ELISA assay was used to quantify Art v 3 homologous proteins with different recombinant or natural allergen standards. ELISA plates (Corning, USA) were coated with 0.3 μg capture antibodies (A7-G4-E6, C9-C1 or A2-B8) at 4 °C overnight. After blocking with 100 μL 5% skimmed milk at 37 °C for 1 h, 100 μL serially diluted allergen standards and pollen extracts were added and incubated at 37 °C for 1 h. After washing, the wells were incubated with 0.3 μg biotinylated detection antibodies at 37 °C for 1 h, followed by incubation with 100 μL HRP-conjugated streptavidin (1:5000 dilution) at 37 °C for 1 h. Finally, 100 μL TMB (3,3′,5,5′-tetramethylbenzidine) was added as colorimetric substrate, and after incubation in the dark for 10 min the reaction was stopped by adding 50 μL 2 M HCl. The optical density was measured at 450/620 nm (Multiskan FC, Thermo Fisher, USA). For each species, the allergen content was measured using three independent extracts with duplicate wells.
Patients
A total of 150 patients (Additional file 1: Table S1) allergic to mugwort were recruited from Datong-Shanxi (111); Taiyuan-Shanxi (11); Beijing (10); Yantai-Shandong (10), and Qvjing-Yunnan (8) in China based on a convincing case history and positive IgE reactivity to mugwort extracts determined by ImmunoCAP (Thermo Fisher Scientific, Uppsala, Sweden). Eighty-two of these patients have been reported previously [17,20,28]. Specific IgE to the major mugwort allergen components, Art v 1 and Art v 3 was determined by ImmunoCAP. Individual sera and serum pools from the five areas were used to assess IgE binding capacity. The sera of five non-atopic individuals were pooled and used as a negative control. Written consent was obtained from all participants (or their representatives) and the study was approved by the local ethics committee.
ELISA binding and inhibition analyses
Pollen extracts from the seven species were used to analyze IgE binding by ELISA, with serum pools of patients from the cities of Datong and Taiyuan in Shanxi province, Beijing, Yantai-Shandong and Qvjing-Yunnan. Further IgE reactivity of each pollen extract of the six Chinese Artemisia species was assessed by ELISA, using sera of 142 individual mugwort-allergic patients. ELISA plates (Corning, USA) were coated with 0.5 μg/well pollen extracts in PBS buffer (pH 8.3). After blocking with 100 μl 5% skimmed milk, 100 μl serum pool was added, with a negative serum pool as control. After washing, 100 μl goat anti-human IgE coupled with HRP (1:3000 in PBS buffer) was added and bound IgE was detected using TMB. The ELISA was quantified using the colorimetric reaction at 450/620 nm. We also compared the IgE binding values with rArt v 1.0101 and rArt v 3.0201 allergens tested by ELISA and the values of Art v 1 and Art v 3 sIgE tested by ImmunoCAP. For the patients who were positive in ImmunoCAP with A. vulgaris but negative in ELISA with the extracts of six Chinese Artemisia spp., IgE binding capacity was further tested using a mixture of pollen extracts of A. annua, A. argyi, and natural purified Art an 3 and Art ar 3 in a mass ratio of 4:4:1:1 (total of 0.5 μg/well mixture). Inhibition curves were obtained using inhibitors with serial dilutions of pollen extracts and recombinant A. vulgaris allergens in competition with a solid phase coated with rArt v 1.0101 and rArt v 3.0201 for IgE binding, using the serum pools from Shanxi and Shandong. ImmunoCAP inhibition on commercial mugwort (A. vulgaris) extract was with serial dilutions of pollen extracts from three species (A. annua, A. sieversiana, A. vulgaris) against individual serum from four groups of different sensitization patterns (Art v 1 and Art v 3 IgE positive or negative).
ImmunoCAP tests
According to the sequence diversity, different natural purified allergens from the three groups were selected for IgE testing by ImmunoCAP. Allergens were biotinylated and coupled to streptavidin-conjugated ImmunoCAPs (Thermo Fisher Scientific, Uppsala, Sweden) at 37 °C for 30 min and then tested with sera of 18 individual mugwort-allergic patients.
Statistical analyses
Data were analyzed with SPSS 21.0, with P < 0.05 considered significantly different. Graphs were drawn with GraphPad Prism 6.0. The ANOVA model with Tukey's post hoc test was used to analyze differences in protein content between the seven Artemisia spp. Differences in IgE reactivity were analyzed with the Friedman test and Dunn's multiple comparison test. The Kruskal-Wallis test with Dunn's test was used for the quantitative variables of the three allergen components, and Spearman's correlation coefficient analysis to evaluate correlations between ImmunoCAP scores and ELISA values. Four-parameter dose-response curve models were used to build the ELISA standard curves.
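As an illustration of the last step, the sketch below fits a four-parameter logistic (4PL) standard curve and inverts it to read sample concentrations from their OD; the standard concentrations, OD values and starting parameters are hypothetical, and this is only one possible implementation, not the exact software routine used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = lower asymptote, d = upper asymptote,
    c = inflection point (EC50), b = slope."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard-curve data: allergen standard (ng/ml) vs. OD450/620
conc = np.array([0.8, 1.6, 3.1, 6.3, 12.5, 25.0, 50.0, 100.0])
od   = np.array([0.09, 0.15, 0.27, 0.48, 0.82, 1.25, 1.70, 2.05])

params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 10.0, 2.2], maxfev=10000)
a, b, c, d = params

def od_to_conc(y):
    """Invert the fitted curve to read a sample concentration from its OD."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(od_to_conc(0.6))  # concentration of a hypothetical diluted extract well
```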
Sequence variation of Art v 1, Art v 2 and Art v 3 homologous proteins
Three types of allergens in the seven Artemisia spp. were identified by a joint analysis of pollen transcriptome assembly, PCR cloning and sequencing (Additional file 1: Table S2). The natural allergens purified by mAb (Additional file 1: Figure S2) were matched to the target allergen sequences, and no other allergens were found by mass spectrometry (Additional file 1: Figure S3). This gave six newly deduced defensin-like (Art v 1 type) proteins in the six Artemisia spp. from China, in addition to the six in the IUIS database from the reference A. vulgaris and the other five species (Fig. 2a). They are highly conserved at the N-terminus, with seven variable amino acids (Fig. 2a). We identified a unique amino acid at position 13; [...] and Art la 2.0101 were identical, as were Art gm 2.0101 and Art an 2.0101, and Art si 2.0101 had an isoform with two extra amino acids (Fig. 2b). The current reference Art v 2.0101 in IUIS was deduced from AM279693; it was not confirmed in this study. More sequence variations were observed in the lipid transfer proteins (Art v 3 type), with a total of nine isoforms or variants and up to 38 amino acid differences (Fig. 2c). Identical isoforms were found in different species. Two isoforms, Art an 3.0102 and Art si 3.0101, had a few specific amino acids (Fig. 2c). Most isoforms from the six Chinese Artemisia spp. were verified by mass spectrometry after immuno-affinity purification of the targeted allergens. Art v 1.0101, Art v 2.0101, Art v 3.0201, Art v 3.0202 and Art v 3.0301 were confirmed in the reference A. vulgaris, while the Art v 3.0101 partial sequence was not; rather, it appeared in Art si 3.0101, because a unique peptide QGGEVPADCCAGVK was found.
Quantification of pollen extracts and three components
Total extracted protein per gram of pollen from the seven Artemisia spp. ranged from 90 mg in A. gmelinii to 172 mg in A. sieversiana (Fig. 3a). Standard ELISA quantification curves were established for the different allergens and isoforms (Additional file 1: Figure S4), giving the following ranges of homologous allergen content in the protein extracts from the seven species: Art v 1 ranged from 3.4% in A. lavandulifolia to 7.1% in A. annua; Art v 2 from 1.0% in A. capillaris to 3.6% in A. lavandulifolia; and Art v 3 from 0.3% in A. sieversiana to 10.5% in A. argyi (Fig. 3b). The yield of natural allergens purified by mAb was approximately in accordance with the ELISA quantification (Additional file 1: Table S3), although the productivity was significantly lower than expected because of losses during purification aimed at the highest purity.
IgE binding comparison
Using the serum pools from the five areas in China, we compared the IgE binding of six Chinese Artemisia spp. with the reference extract A. vulgaris. We demonstrated that the IgE binding capacity of A. annua and A. vulgaris was significantly higher than that of A. gmelinii, A. lavandulifolia and A. sieversiana. The IgE binding potency of A. capillaris varied in the five areas: highest in Datong-Shanxi and Beijing (Fig. 4a, c), and significantly lower in Shandong and Yunnan compared to A. annua, A. argyi and A. vulgaris (Fig. 4d, e). The IgE binding of 142 individual sera to pollen extracts from six Chinese mugwort species again demonstrated higher IgE reactivity to A. annua than to the other Artemisia spp., with A. lavandulifolia and A. sieversiana the lowest (Fig. 4f ).
Of the 142 mugwort allergic patients, 39 showed negative IgE reactivity to all six Chinese mugwort pollen extracts in ELISA, while in one patient only A. capillaris was recognized and in another only A. sieversiana. These 41 patients had significantly lower IgE reactivity to mugwort extract and Art v 1, and slightly higher IgE reactivity to Art v 3, in ImmunoCAP. After testing with the mixture of extracts spiked with Art an 3 and Art ar 3, 30 of these 41 patients showed positive IgE binding, especially the Art v 3 positive patients (IgE reactivity to the mixture was positive in 17/19). The remaining 11 patients were still negative to the mixture (Fig. 5); these patients were negative to Art v 1, and their IgE reactivities to mugwort extract (w6 range: 0.46-5.8) and Art v 3 (w233 range: 0-2.7) were low.
The IgE binding strength to mugwort extract of the 142 individual patients, measured as ELISA OD values, was closely related to the nArt v 1 IgE ImmunoCAP score, but not to the nArt v 3 score (Fig. 6a). However, when rArt v 1.0101 and rArt v 3.0201 were coated in the ELISA assay, there was good correlation for both components (Fig. 6b).
By testing the IgE reactivity of natural Art v 1, Art v 2 and Art v 3 homologous allergens in the ImmunoCAP system, we found that the IgE positive rates and values were quite similar for allergens with high sequence identity, such as the Art v 1 homologues (Fig. 7a). For the Art v 2 type, Art ar 2 and Art ca 2 were slightly higher than Art si 2 and Art an 2 (Fig. 7b). The Art v 3 homologues were more variable: Art ca 3 was significantly lower, and the positive rates and IgE values of Art an 3, Art ar 3, Art gm 3, Art la 3 and Art si 3 were higher than those of Art v 3 and Art ca 3 (Fig. 7c).
IgE inhibition using ImmunoCAP
Mugwort ImmunoCAP inhibition assays with A. annua, A. vulgaris and A. sieversiana extracts on 16 patients from the four groups (Art v 1 and Art v 3 positive or negative) confirmed that in the Art v 3 positive sera group the IgE-inhibiting capacity was higher with the A. annua extract and lower with A. sieversiana, especially when Art v 1 was negative (Fig. 8a, c); this was not the case in the Art v 3 negative sera group (Fig. 8b, d).
In patient DT22 (with a component profile of high Art an 7 IgE and positive Art ar 2), IgE inhibition was even higher with A. sieversiana. These results indicate that the IgE binding potency depends on the presence of specific allergen molecules in the extract. Using ELISA to test inhibition of rArt v 1.0101 and rArt v 3.0201 with the seven pollen extracts and the serum pools from Shanxi and Shandong, a large difference was again found: for both allergen molecules and in both areas, inhibition was highest with the A. argyi extract and lowest with A. sieversiana compared to the other species (Additional file 1: Figure S5). In general, there was cross-reactivity in ELISA assays coated with the different Artemisia spp. extracts, except for A. sieversiana (Additional file 1: Figure S6). This suggests that A. sieversiana pollen is not the primary sensitizing source.
Discussion
Here we present a comprehensive analysis of three allergen groups, covering amino acid sequence, quantity and IgE binding strength of pollen extracts, from seven Artemisia spp. These species are representative of four sections of the botanical classification and of the distribution in China. The degree of allergen sequence variation among different Artemisia spp. is related to the phylogenic classification, being similar within the same section, such as A. vulgaris, A. argyi and A. lavandulifolia (Fig. 2). Both Art v 1 and Art v 2 homologous allergen sequences in the seven Artemisia spp. were highly conserved, with only a few amino acid changes, indicating general cross-reactivity across all species of this genus (Fig. 2a, b).
The Art v 3 type is more variable: 26 new amino acid differences were found, mainly in A. annua and A. sieversiana. Including the Art an 7 type sequences investigated in a previous study [17], the amino acid sequences of the four allergens in the seven Artemisia spp. reflect the phylogenic relationship and fit the four botanically classified sections of this genus: Artemisia, Abrotanum, Dracunculus and Absinthium. A recent report on Art v 1 group allergen sequences from American mugworts (A. ludoviciana, A. californica, A. frigida and A. tridentata, which belong to the Tridentata section) showed additional amino acid variations, 81T and 85T, in the proline domain (Fig. 2a) [29]. Previous sequencing of cDNAs from A. vulgaris (pollen source assumed to be a single species) identified seven Art v 1 isoforms and four Art v 3 isoforms [21,23], while we deduced one or two variants/isoforms from each species by gene cloning and transcript assembly, verified to isoform level by proteomic mass spectrometry. Mass spectrometry of natural Art v 1 from A. vulgaris purified by mAb could be matched to Art v 1.0101, while natural Art v 3 from A. vulgaris purified by mAb could be matched to the isoallergens Art v 3.0201 and Art v 3.0301, and the first partial Art v 3.0101 peptide (37 aa) to A. sieversiana. Since all species except A. lavandulifolia are diploid [30], there are putatively two variants for each species. We suspect the pollen sources used in previous research were not from a single species, A. vulgaris, but were mixed with A. sieversiana, which is commonly distributed in Europe. In this study, comprehensive transcript analysis, gene-specific cloning and identification of the allergen proteins by mass spectrometry guaranteed reliability.
The first evaluation of in vitro cross-reactivity among nine Artemisia spp. was done in the USA [1]. It showed that the inhibitory capacity of two local Artemisia spp. (A. biennis, A. tridentata) was greater than that of A. annua and A. vulgaris, and that A. ludoviciana was the least potent. There was no difference in IgE binding capacity between E. coli-expressed recombinant Art v 1 isoforms or Art v 3 isoforms within A. vulgaris because the sequences were identical [21,23]. From our results on sequence diversity, we expect little difference among the Art v 1 homologous isoforms, with possibly greater differences among the Art v 2 and Art v 3 homologous isoforms in species such as A. annua and A. sieversiana. When rArt v 1.0101 and rArt v 3.0201 were used as coating antigens, ELISA inhibitions with the different Artemisia species did not agree with the quantification of Art v 1 and Art v 3 homologues, but were related to the sequence similarity to the coated isoform (Additional file 1: Figure S5 and Fig. 2), indicating the potential impact of isoforms on IgE binding.
The IgE binding strength of a pollen extract depends largely on the quantity of major allergens in the extract and on the sensitization profile of a patient's serum to the single components. The concentration of the pollen extract influences the sensitivity and specificity of diagnosis [31]. Here we found that the IgE reactivity of the six Chinese Artemisia spp. measured by ELISA was mainly related to the content of Art v 1 homologues in the extracts: A. argyi and A. lavandulifolia pollens have almost identical sequences in the four allergen groups, but their Art v 1 content differs, causing significantly lower IgE binding of A. lavandulifolia. Natural pollen extract is not sufficient to measure all component IgEs, especially for the Art v 3 type (Figs. 6a and 8c), where the content is low and there are other interfering factors, such as IgG antibodies [32]. Moreover, of the 41 mugwort allergic patients who gave negative IgE reactivities to the Chinese mugwort pollen extracts by ELISA, 30 gave positive results when the plates were coated with a mixture of extracts spiked with mAb-purified nArt an 3 and nArt ar 3 (Fig. 5). This again indicated that pollen extracts alone are not suitable for in vitro IgE diagnosis because of the low content of some major and minor allergen molecules; in addition, for patients sensitized only to minor allergens, using extracts for immunotherapy may not succeed or may even be harmful [33].
The commercial diagnostic from European mugwort, A. vulgaris, was quite similar to Chinese silver mugwort, A. argyi, in allergen sequence and IgE binding potency. Two Chinese mugwort species are worthy of attention, A. annua and A. sieversiana, both with more sequence variability than the reference A. vulgaris. The IgE binding capacity of A. annua was equivalent to or slightly higher than that of A. vulgaris, while that of A. sieversiana was significantly lower. We consider that the IgE binding capacity is determined by the quantity of major allergens, especially Art v 1, in a given pollen extract. Sequence variations at critical locations are very important, as illustrated by the Amb a 1 isoforms with distinct immunological features [34]. In our research, the IgE values were almost the same for allergens with high sequence identity (Fig. 7), while for the Art v 3 type the positive rates and IgE reactivity in the five Chinese species other than A. capillaris were higher than for A. vulgaris: it is probable that A. vulgaris was not the primary sensitizer for Chinese patients. Recombinant isoallergens with large amino acid variations from different species need to be evaluated with a large number of representative sera from different geographic areas to obtain a more comprehensive view. In different geographic regions there are different dominant Artemisia species with varying flowering times; pollen peaks and Art v 1 content have been reported to be higher during the flowering of A. campestris than during that of A. vulgaris [35]. Choosing the most relevant species in a specific area could improve the accuracy and efficiency of diagnosis. The three allergen quantification methods established in this study could be applied to monitor Artemisia pollen allergen exposure and to analyze its association with allergy symptoms.
Conclusions
The commercial European mugwort ImmunoCAP (A. vulgaris) extract has entered the Chinese diagnostics market, and this research indicates its general suitability as an in vitro test in China. Our study demonstrated that A. sieversiana, the current laboratory-based mugwort pollen extract used for diagnosis in China, is not sufficient because of the low concentration of the major allergen Art v 3 type in the extract, especially for patients sensitized to Art v 3 homologous allergens. A. annua and A. argyi pollens are potentially suitable sources for both diagnosis and immunotherapy; the former extract has already been chosen for a sublingual immunotherapy product for seasonal allergic rhinitis [36]. There is high sequence identity of the major mugwort allergens in the seven mugwort species common in China, and the differences in IgE binding capacity among their pollen extracts were mainly due to variations in the quantity of the major allergens. We therefore consider that purified mugwort pollen allergen components from A. annua and A. argyi are better suited for diagnosis and treatment than crude pollen extracts, which show considerable variation in IgE binding capacity and major allergen content.
Additional file 1: Table S1. Clinical and demographic data of 150 mugwort pollen-allergic individuals; sIgE against mugwort extract (w6), Art v 1 (w231) and Art v 3 (w233) determined by ImmunoCAP; ND, not determined. AS, asthma; AR, allergic rhinitis; C, conjunctivitis; E, eczema. I-1, I-2, I-3, I-4 indicate the patient sera used in the ImmunoCAP inhibition assay belonging to four groups of different sensitization patterns (1, Art v 1 and Art v 3 positive; 2, Art v 1 positive, Art v 3 negative; 3, Art v 1 negative, Art v 3 positive; 4, Art v 1 and Art v 3 negative). The 82 patients reported in previous studies [17,20,28] are indicated by an asterisk. Table S2. GenBank accession numbers for three allergen groups in seven Artemisia species. Table S3. Productivity of the three group allergens purified by specific mAb. Table S4. ImmunoCAP IgE characterization of serum pools from five areas. Figure S1. Six Artemisia species collected from China. Figure S2. SDS-PAGE of natural purified Art v 1, Art v 2 and Art v 3 homologous allergens from six Chinese Artemisia species. a, natural Art v 1 homologues purified by specific mAb A7-G4-E6; b, natural Art v 2 homologues purified by specific mAb C9-C1 shown in six different gels; c, natural Art v 3 homologues purified by specific mAb A2-B8. Figure S3. Mass spectra of natural purified Art v 1 (a), Art v 2 (b) and Art v 3 (c) homologues. The peptides verified by LC-MS/MS are shown in red and highlighted. Figure S4. ELISA quantification of three allergen components in Artemisia spp. pollen. a, Chinese silver mugwort (A. argyi) pollen extract (ArE) in SDS gel and reaction to polyclonal antibodies (pAb) by Western blot; b, ELISA standard curve for the Art v 1 allergen (mAb A7-G4-E6 and rabbit pAbs); c, ELISA standard curve for the Art v 2 homologous allergen (mAb C9-C1 with rabbit pAbs); d, ELISA standard curve for the Art v 3 homologous allergen with two mAbs (A2-B8 and A9-G10) with representative different isoforms. Figure S5. Inhibition of IgE binding to Art v 1 and Art v 3 with seven pollen extracts using two serum pools. a, inhibition ELISA coated with rArt v 1.0101, serum pool from Datong, Shanxi (nArt v 1: 11 kUA/l); b, inhibition ELISA coated with rArt v 1.0101, serum pool from Yantai, Shandong (nArt v 1: 6.51 kUA/l); c, inhibition ELISA coated with rArt v 3.0201, serum pool from Datong, Shanxi (CAP nArt v 3: 11.27 kUA/l); d, inhibition ELISA coated with rArt v 3.0201, serum pool from Yantai, Shandong (CAP nArt v 3: 9.65 kUA/l). Figure S6. Inhibition of the serum pool from Shanxi with extracts from different species at 100 μg/ml in ELISA coated with 10 μg/ml of different pollen extracts and their mixture. 6-mix, mixture of the six Artemisia spp. extracts in the same proportions.
"Biology",
"Environmental Science",
"Medicine"
] |
Automatic Detection of Cortical Bone's Haversian Osteonal Boundaries
This work aims to automatically detect cement lines in decalcified cortical bone sections stained with H&E. Employed is a methodology developed previously by the authors and proven to successfully count and disambiguate the micro-architectural features (namely Haversian canals, canaliculi, and osteocyte lacunae) present in the secondary osteons/Haversian systems of cortical bone. This methodology combines methods typically considered separately, namely pulse coupled neural networks (PCNN), particle swarm optimization (PSO), and adaptive threshold (AT). In lieu of human bone, slides (at 20× magnification) from bovid cortical bone are used in this study as a proxy. Having been characterized, features with the same orientation are used to detect the cement line, viewed as the next coaxial layer adjacent to the outermost lamella of the osteon. Employed for this purpose are three attributes for each and every micro-sized feature identified in the osteon lamellar system: (1) orientation, (2) size (ellipse perimeter), and (3) Euler number (a topological measure). From a training image, automated parameters for the PCNN network are obtained by forming fitness functions extracted from these attributes. It is found that a 3-way combination of these feature attributes yields good representations of the overall osteon boundary (cement line). Near-unity values of classical quality metrics (precision, sensitivity, specificity, accuracy, and Dice) suggest that the segments obtained automatically by the optimized artificial intelligence methodology are of high fidelity as compared with manual tracing. For benchmarking, cement lines segmented by k-means did not fare as well. An analysis based on the modified Hausdorff distance (MHD) of the segmented cement lines also testified to the quality of the detected cement lines vis-à-vis the k-means method.
Introduction
Medical microscopy continues to produce increasingly high-resolution images, presenting opportunities to observe ever more detailed microscopic pathologies. Compared with images handled manually, automatic segmentation of such information-rich images adds value through increased speed of data collection and reduced observer subjectivity. Segmentation of high-definition medical digital images facilitates the manipulation and visualization of data, and automated methods can provide clinical data about ongoing processes that might otherwise go unnoticed [1]. Reported applications in image segmentation include segmentation of features of interest [2], quantitative measurement of image features [3], and delineation of contours and surfaces [4]. Computer image segmentation may facilitate the visualization of entities of interest for automated morphometry when there is no direct correspondence between pixel properties and tissue type, e.g., detecting cancerous cells [5,6]. Specific to bone imaging, segmentation has been used in numerous applications, such as calculating bone mineralization of extracted bone features [7,8], determining bone mineral density [9] and bone density using CT [10], investigating tarsal bone kinematics [11], detecting early osteoporosis [12], and correlating bone features with age and osteoporosis [13,14].
At the micro-scale, cortical bone is made up mostly of concentric osteon units. Figure 1 illustrates the salient micro-features present in the osteon architecture. At the center of each osteon lies the Haversian canal. These canals are surrounded by concentric lamellae that are punctuated by micro-sized osteocyte lacunae, which are in turn connected to each other via capillary channels called canaliculi, leading to what is known as the lacuna-canalicular network (LCN). Consisting of remnants of osteons created by remodeling, so-called interstitial bone fills the space among osteons. Segregating osteons requires proper identification of their boundaries, also known as cement lines. Relatively few studies (e.g., [15,16]) report on segmenting bone microscopic images. While manual segregation of cortical bone into its basic Haversian osteonal units remains the gold standard (e.g., [17]), few studies have reported segmentation via automated methods [2,12]. For example, k-means clustering was used [18] to segment microradiographic bone image features such as Haversian canals and osteons. Computer-aided manual segmentation and volumetric 3D rendering were employed to visualize osteons in [19]. Segmentation of the LCN was performed [20] through variational region-growing based on an energy functional combining grey-level information from the original image and shape information extracted via a 3D tube-enhancement filter based on the eigenvalues of the Hessian.
In histology slides of cortical bone, cement lines are boundaries that surround osteons and demarcate them from the adjacent, shapeless interstitial matrix. In decalcified sections, these boundaries are difficult to detect because the decalcification process removes the mineralization of these highly mineralized lines. To the authors' best knowledge, no work exists that reports fully automatic segmentation of cement lines to reveal Haversian osteons. This study aims to automatically identify osteon systems via computer-based segmentation in slides of femur cortical bone from a bovid (as a proxy for human bone). Previous work by Hage and Hamade [21] reported automatic segmentation of cortical bone's LCN, where micro-features (lacunae, canaliculi, Haversian canals) were segmented via color thresholding of bone images using the k-means clustering method. Later, the authors utilized an AI-based methodology that combined PCNN (pulse coupled neural networks), PSO (particle swarm optimization), and AT (adaptive threshold), where the PSO optimization fitness functions were constructed based on entropy and energy [22] or on the micro-sized features' geometric attributes such as size and shape [23]. Both approaches yielded high-fidelity segmentation of said salient micro-features. The algorithm comprises several self-adapting parameters that are determined using particle swarm optimization (PSO) as a parameter optimizer. Yet another variation utilized in this methodology is adaptive threshold (AT), where the PCNN algorithm is repeated until the best threshold, T, is found, corresponding to the maximum variance between the two segmented regions. In [24,25], Hage and Hamade utilized the resultant segments for the purpose of replicating the microstructure of cortical bone and conducted micro-FEM simulations of bone cutting.
In this work, the authors use a PCNN-PSO-AT methodology similar to that used in [22,23] but employ novel optimization functions that are found to be better suited to the specific purpose of detecting cement lines. The authors first tried functions based on only one of the micro-feature attributes, orientation or Euler number. While these individual functions yielded good results, better results were obtained when using a PSO fitness function that combines these two attributes with one additional geometric attribute, namely feature size. The PCNN-PSO-AT network parameters obtained from a single training image are applied to several test images of bone slices, resulting in high-fidelity segmented images of cement-line demarcations identifying individual osteons. To establish a baseline against a well-known segmentation method, the k-means method was also used to try to detect cement lines. For both methods, the quality of the detected cement lines was compared against manually traced osteons. Quality is established using quantitative metrics such as precision, sensitivity, specificity, accuracy, and Dice. Furthermore, the modified Hausdorff distance (MHD) method was used to verify the quality of the cement lines segmented using our methodology vis-à-vis those detected by the k-means method.
Preparation of slides
Two sections were cut in the transverse direction from the mid-diaphysis of a femur (2-year-old bovine). They were kept in a saline solution for blood and bone marrow removal in order to obtain sections as seen in Figure 2a. These sections were cut into pieces (of about 1 cm × 1 cm) and immersed in a formalin solution for 3 days for softening and fixation purposes. EDTA solution was used to decalcify the specimens for 3 days. After softening, 2 mm-thick sections of cortex were cut. This was followed by tissue processing on a Leica machine, where slides were infiltrated with a sequence of different solvents followed, after dehydration, by molten paraffin wax. Slides were then placed in glass covers. Sections 3-5 µm thin were cut on a rotary microtome (model 340 E, Microm). Finally, slides were stained in H&E solution and covered with a glass cover slip, as shown in Figure 2b.
Imaging
Images of slices were acquired using a BX-41M LED optical Olympus microscope at 20× magnification using Olympus SC30 digital microscope camera (based on 3.3 megapixel CCD chip with CMOS color sensor).One such image is shown in Figure 2c.
Pulse coupled neural networks, PCNN
PCNN is widely used for image segmentation. In order to properly identify and self-adapt the PCNN parameters, enhancements to its algorithm are sometimes applied by coupling it to other techniques such as threshold adaptation and optimization. Mathematical formulations of the PCNN technique [26] are listed in equations (1)-(5):
1) Input part:

Feeding input:
$F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + S_{ij}$ (1)

Linking input:
$L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1]$ (2)

2) Linking part:
$U_{ij}[n] = F_{ij}[n]\,(1 + \beta L_{ij}[n])$ (3)

3) Pulse generator:

Output:
$Y_{ij}[n] = 1$ if $U_{ij}[n] > \theta_{ij}[n]$, otherwise $0$ (4)

Threshold:
$\theta_{ij}[n] = e^{-\alpha_\theta} \theta_{ij}[n-1] + V_\theta Y_{ij}[n]$ (5)

The feeding (F) receives an external stimulus (S) as well as a local stimulus (Y). The linking (L) receives the local stimulus. These compartments are linked via a linking coefficient β to create a voltage U that is then compared to a local dynamic threshold θ, and an output (0 or 1) is extracted. The dynamic threshold value increases via the potential coefficients V_L, V_F, V_θ but then decreases with the decaying coefficients α_L, α_F, α_θ until the neuron fires again and a binary pulse image is created (where n is the iteration step) [26]. The PCNN algorithm is summarized in Figure 3
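A minimal sketch of the PCNN iteration of Eqs. (1)-(5) is given below; the 3×3 kernel shared by the feeding and linking fields and the parameter values are assumptions for illustration, not the values reported in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(S, n_iter=20, beta=0.2, VF=0.01, VL=1.0, Vt=20.0, aF=0.1, aL=0.3, at=0.2):
    """Minimal PCNN sketch following Eqs. (1)-(5); S is a grey image scaled to [0, 1]."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])        # assumed neighbourhood kernel for M and W
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); theta = np.ones_like(S)
    pulses = []
    for _ in range(n_iter):
        link = convolve(Y, K, mode="constant")
        F = np.exp(-aF) * F + VF * link + S      # feeding, Eq. (1)
        L = np.exp(-aL) * L + VL * link          # linking, Eq. (2)
        U = F * (1.0 + beta * L)                 # internal activity, Eq. (3)
        Y = (U > theta).astype(float)            # binary pulse output, Eq. (4)
        theta = np.exp(-at) * theta + Vt * Y     # dynamic threshold, Eq. (5)
        pulses.append(Y.copy())
    return pulses

# Example on a random stand-in image
binary_pulses = pcnn(np.random.rand(64, 64))
```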
Particle swarm optimization, PSO
Particle swarm optimization (PSO) simulates a population of particles assigned randomized velocities. Each particle flies through an n-dimensional search space and maintains its current position and velocity as well as its particle-specific best position; the swarm as a whole tracks the global best position found so far [28]. PSO parameters are: the fitness function, the particle dimension, the population size, the inertia factor, and the terminal condition of the algorithm. The PSO algorithm is schematically summarized in Figure 4.
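The following is a minimal, generic PSO sketch of the kind that could be used to tune the PCNN parameters; the inertia and acceleration constants and the toy fitness function are placeholders, not the settings used by the authors.

```python
import numpy as np

def pso(fitness, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (maximization); bounds is a list of (low, high)
    pairs, one per parameter to be tuned."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))   # positions
    v = np.zeros_like(x)                                       # velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()

# Toy fitness: prefer a 7-dimensional parameter vector close to 0.5 in every entry
best, val = pso(lambda p: -np.sum((p - 0.5) ** 2), bounds=[(0, 1)] * 7)
```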
Adaptive threshold, AT
Adaptive thresholding is a threshold-selection method built on a least-squares analysis [29]. The grey scale is divided into two categories, C0: {0, 1, …, t} and C1: {t+1, t+2, …, L−1}. The probability and mean grey level of categories C0 and C1 are calculated in order to obtain the variances, the within-class variance and, finally, the between-class variance. The best threshold T is the one for which the variance between the two segmented regions is maximal [30]. The AT algorithm is outlined in Figure 5.
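A compact sketch of this between-class-variance (Otsu-style) threshold search is shown below, assuming an 8-bit grey image; it illustrates the criterion rather than reproducing the authors' exact implementation.

```python
import numpy as np

def best_threshold(gray, levels=256):
    """Pick T maximizing the between-class variance of the two regions
    C0 = {0..t} and C1 = {t+1..L-1}."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, levels - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(0, t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * p[t + 1:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2            # between-class variance
        if between > best_var:
            best_t, best_var = t, between
    return best_t

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in grey image
T = best_threshold(img)
```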
The combined PCNN-PSO-AT methodology
In this paper, the combined methodology is used in a fashion similar to that developed in [22,23]. Initially, for the purpose of osteon detection, PSO fitness functions were utilized based solely on feature orientation (explained in Section 4.1.1). It was found, however, that fitness functions combining the three attributes orientation, Euler number (Section 4.1.2), and size (Section 4.1.3) yield higher-fidelity segmentations of cement lines. After building the fitness functions, the algorithm is run according to the steps illustrated in the flowchart (Figure 6). The attribute generator for image properties is the "regionprops" function in MATLAB®, which provides the orientation attribute and calculates the Euler number and perimeter attributes.
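For readers without MATLAB, the same three attributes can be obtained with scikit-image's regionprops, as in the sketch below; the synthetic binary feature is only an example and not a real pulse image.

```python
import numpy as np
from skimage import measure

def feature_attributes(binary):
    """Orientation, Euler number, and fitted-ellipse axes for every connected
    feature in a binary pulse image (a scikit-image analogue of MATLAB regionprops)."""
    labels = measure.label(binary)
    rows = []
    for r in measure.regionprops(labels):
        rows.append({
            "orientation_deg": np.degrees(r.orientation),
            "euler_number": r.euler_number,
            "major_axis": r.major_axis_length,
            "minor_axis": r.minor_axis_length,
        })
    return rows

binary = np.zeros((64, 64), dtype=bool)
binary[10:20, 10:40] = True          # one elongated hypothetical feature
print(feature_attributes(binary))
```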
Identification of Osteon Boundaries (cement lines)
In this section, demonstrations of cement-line segmentation are presented based on the PCNN-PSO-AT methodology. Segmentation utilizes fitness functions that employ three feature attributes: orientation, Euler number, and size (perimeter of the fitted ellipse), as presented in Section 4.1.
Feature attributes
4.1.1. Orientation (feature vector direction). Micro-sized LCN features are observed to be disposed in concentric bands surrounding the Haversian canal, with the exterior band being the cement line. These micro-features share similar orientations along specific bands, a fact exploited in using the orientation attribute to formulate fitness functions for segmenting osteons. The orientation of each object in the image is calculated using "regionprops" in MATLAB®. The fitness function is built to maximize the number of identified features sharing the same orientation. Thus a ratio parameter is defined by dividing the number of objects belonging to orientation groups with at least 2 members by the total number of objects identified in the image. Driving the ratio toward unity means that the PCNN has identified multiple features with associated orientations, i.e., identified osteons, and excluded features with single random orientations. The fitness function is given the target ratio $r = n/N$ (6), where n is the number of objects belonging to orientation groups with at least 2 members and N stands for the total number of objects in the image.
The code for orientation finds the number of objects in orientation groups with at least 2 members as well as the total number of objects in the image; based on these two quantities, the ratio r is calculated. Figure 7 represents a hypothetical simulation of this method. Considered are the cases of: 6 objects with the same orientation 0° and 2 at 90°; 2 objects with the same orientation 0° and 3 objects at 90°; and 2 objects at the same orientation 30° and 1 object at 90° (excluded is 1 object with orientation −30°). In this figure, the total number of objects belonging to orientation groups with at least 2 members is 16: 8 objects at 0°, 6 objects at 90°, and 2 objects at 30°. With a total of 17 objects, the ratio is calculated as r = 16/17 ≈ 0.94. The algorithm aims to maximize r toward unity so that all objects with single orientations are excluded, retaining only groups consisting of multiple object orientations that represent the osteon system. This ratio instructs the fitness function to find and extract the orientation groups having at least 2 members. The target is limited to a minimum of 2 since an osteon in the first stage of growth has at least one Haversian canal and one lacuna. The algorithm attempts to exclude regions with single orientations outside of the osteon. A sketch of this ratio computation is given below.
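A possible implementation of the ratio of Eq. (6) is sketched here; the 5° orientation bin width is an assumption, since the paper does not state the tolerance used to decide that two orientations are "the same".

```python
from collections import Counter

def orientation_ratio(orientations_deg, bin_width=5.0, min_members=2):
    """r = n / N from Eq. (6): n counts features whose (binned) orientation is shared
    by at least `min_members` features, N is the total number of features."""
    if not orientations_deg:
        return 0.0
    bins = [round(o / bin_width) for o in orientations_deg]
    counts = Counter(bins)
    n = sum(c for c in counts.values() if c >= min_members)
    return n / len(orientations_deg)

# Recreation of the worked example: 16 of 17 features share orientations
angles = [0] * 8 + [90] * 6 + [30] * 2 + [-30]
print(orientation_ratio(angles))   # 16/17, about 0.94
```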
Euler number
The Euler number is a scalar that specifies the number of objects in a region minus the number of holes. Given cortical bone's porosity-dominated topology, the Euler number is an appropriate attribute employed here to detect cement lines. MATLAB's "regionprops" was used for this purpose.
Size (of the perimeter of fitted ellipses)
Cement lines have the largest perimeter among features in the bone images since they surround the largest concentric layer of lamellae. The size attribute is based on automated approximate fits of the cement-line shapes to ellipses. The perimeter length is therefore estimated, following [23], as $P \approx \pi\sqrt{2(a^2 + b^2)}$ (7), where a and b are the major and minor axes, respectively, of the ellipse, the values of which are estimated using the 'major/minor axis length' property of the "regionprops" function in the image processing toolbox of MATLAB®.

4.2. Combination of attributes: orientation, Euler number, and size

In order to obtain enhanced pulses for detecting the cement line, introduced and tested here is a method that uses a combination of the three fitness functions based on the feature attributes discussed above: orientation, Euler number, and geometric size. The combination of the three fitness functions aims to find the largest perimeter of concentric lamellae, i.e., the cement line. An H&E-stained image is manually segmented to reveal the cement lines and their corresponding mean perimeter, orientation and Euler number; this image is used for training as the target of the PSO fitness function. The PCNN-PSO-AT algorithm attempts to identify the 7 PCNN parameters that best match the combined target of the PSO fitness function. The found parameters are then tested on other images. A sketch of the perimeter estimate and of a possible combined fitness is given after this paragraph.
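The sketch below estimates the ellipse perimeter of Eq. (7) and shows one possible way to combine the three attributes into a single fitness value; halving the regionprops axis lengths to obtain semi-axes and the simple normalized-error weighting are assumptions, not the authors' exact formulation.

```python
import math

def ellipse_perimeter(major_axis, minor_axis):
    """Eq. (7): P ~ pi * sqrt(2 (a^2 + b^2)); here a, b are taken as semi-axes by
    halving the regionprops axis lengths (an assumption)."""
    a, b = major_axis / 2.0, minor_axis / 2.0
    return math.pi * math.sqrt(2.0 * (a * a + b * b))

def combined_fitness(features, ratio, target_perimeter, target_euler, target_ratio):
    """Hedged 3-attribute fitness: reward pulse images whose mean perimeter, mean Euler
    number and orientation ratio approach the training-image targets."""
    if not features:
        return float("-inf")
    mean_p = sum(ellipse_perimeter(f["major_axis"], f["minor_axis"]) for f in features) / len(features)
    mean_e = sum(f["euler_number"] for f in features) / len(features)
    err = (abs(mean_p - target_perimeter) / max(target_perimeter, 1e-9)
           + abs(mean_e - target_euler) / max(abs(target_euler), 1e-9)
           + abs(ratio - target_ratio))
    return -err   # PSO maximizes, so minimize the error by negating it

feats = [{"major_axis": 120.0, "minor_axis": 80.0, "euler_number": 1}]
print(combined_fitness(feats, ratio=0.85, target_perimeter=320.0, target_euler=1, target_ratio=0.9))
```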
PCNN-PSO-AT cement line segmentation results
Results for the combined fitness functions are summarized in Figure 8 and their corresponding PCNN parameters are listed in Table 1. The first and third columns in Figure 8 show the original images to be segmented, where image 1 in the first row is the training image and the other 7 images on the remaining rows are test images. The second and last columns show the cement-line segments obtained using the PCNN with the parameters assigned in Table 1. Cement lines play an important role in separating the osteon system from the interstitial regions (cement lines separate newly formed osteons from older interstitial regions). Section 4.2.2 below explains how the cement lines of the images in Figure 8 are isolated by extracting their boundaries and widening these boundaries using the "imdilate" function of the MATLAB® image processing toolbox, which thickens bright objects in binary images using a specified structuring element that determines the shape of the pixel neighborhood over which the maximum is taken [31]. The resultant segments in Figure 8 show these demarcating lines as brighter than their surroundings. This is due to a fundamental characteristic of the PCNN, namely that its output pulse images retain a great deal of information about the original image, such as texture [32].
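A rough scikit-image analogue of the boundary extraction and "imdilate" thickening step described above (the structuring-element radius is an assumption, since the paper does not state the one used):

```python
from skimage.measure import label
from skimage.morphology import dilation, disk
from skimage.segmentation import find_boundaries

def extract_cement_line_boundary(cement_mask, radius=3):
    """Trace the outline of a segmented cement line and thicken it,
    analogous to boundary extraction followed by MATLAB's imdilate."""
    boundary = find_boundaries(label(cement_mask), mode="outer")
    return dilation(boundary, disk(radius))  # disk radius chosen arbitrarily here
```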
Benchmarking of PCNN-PSO-AT methodology against k-means method
A baseline segmentation method is used to benchmark the methodology results of Section 4.2.1 against those obtained with a particular implementation of the k-means clustering method, following a criterion proposed by Deng et al. [33] in which pixel assignment is derived from color quantization. The color information in each image region is assumed to follow a uniformly distributed color-texture pattern represented by a few quantized colors that are distinguishable between any two regions. In order to capture the cement lines in white, the number of class labels is set to 5, as can be seen from the 5 colors in the segmentation maps in Figure 9. The figure shows the resultant segments obtained using the k-means method (numbers 1-8 correspond to the image numbers in Figure 8). The first and third columns show the segmentation maps obtained using the k-means method, while the second and last columns show the resultant cement lines segmented (in white). Visually contrasting the images in Figures 8 and 9 reveals that k-means did not reveal the osteons' cement lines as clearly as the PCNN-PSO-AT methodology. Figure 10 indicates that the extracted cement lines are quite comparable to those segmented manually. Nevertheless, a quantitative evaluation is performed as proof of this similarity, as is done in the following section.
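The sketch below shows only the color-quantization step of such a k-means baseline with 5 clusters; the full criterion of Deng et al. [33] involves additional spatial processing of the quantized map that is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_quantize(rgb_image, n_colors=5, seed=0):
    """Quantize an H x W x 3 image into n_colors labels (the segmentation map)."""
    h, w, c = rgb_image.shape
    pixels = rgb_image.reshape(-1, c).astype(float)
    labels = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)
```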
Quality Assessment of Identified Osteon Boundaries
The main goal of segmentation algorithms is to capture the features of interest, here cement lines, as accurately as possible. In Section 5.1, classical quality metrics widely used for evaluating image segmentation, namely precision rate, sensitivity, specificity, accuracy, and the Dice coefficient [35][36][37][38], are used to quantify the efficacy of the proposed methodology. The closer the values of these metrics are to unity, the closer the segmentation method is to the ground truth (taken here as the manual segmentation). The results of another quality evaluation, the modified Hausdorff distance (MHD) method, are presented in Section 5.2.
Classical quality metrics
In order to calculate these metrics, a pixel-by-pixel comparison between the resulting segmentation images and the ground-truth (manually segmented) images is performed by counting the true positive (TP), false positive (FP), false negative (FN), and true negative (TN) pixels (see [23] for more details).
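For reference, the five metrics can be computed from a predicted mask and a ground-truth mask as follows (standard definitions; the paper's equations 8-12 are not reproduced here):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, sensitivity, specificity, accuracy and Dice from two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    return {
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "dice":        2 * tp / (2 * tp + fp + fn),
    }
```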
In order to evaluate the performance of the combination method, the evaluation is conducted against manually segmented cement-line images. Similarly, the k-means results are compared to the manually segmented results. In addition, an evaluation of the extracted cement-line boundaries in Figure 10 against the manually drawn cement-line segments (second column in Figure 10) is presented, with the results reported in Table 2.
Table 2 summarizes the quality results, where it can be seen that larger metric values are reached for the combined PCNN-PSO-AT methodology than for the k-means method. Operating the PCNN-PSO-AT methodology on image 4, for example, yielded values for precision, sensitivity, specificity, accuracy, and Dice of 0.7678, 0.5647, 0.7573, 0.7721, and 0.6053, respectively, compared with k-means values of 0.337, 0.315, 0.313, 0.3778, and 0.3821. For test image 8, the values obtained for PCNN-PSO-AT were 0.7124, 0.7684, 0.5905, 0.7849, and 0.6113, compared with k-means values of 0.282, 0.3576, 0.368, 0.4108, and 0.4334, respectively. For all 8 images, the mean values reveal that the PCNN-PSO-AT methodology outperforms the k-means method in detecting cement lines on all 5 quality metrics. Furthermore, even higher metric values, attesting to the efficacy of the methodology, are found for the cement-line boundaries extracted through PCNN-PSO-AT. For image 1, for example, the values obtained for precision, sensitivity, specificity, accuracy, and Dice were 0.8262, 0.8858, 0.8592, 0.9637, and 0.8472, respectively. Comparable results were found for the other images. The mean values over all images for the precision, sensitivity, specificity, accuracy, and Dice metrics approach unity, at 0.8862, 0.8905, 0.8828, 0.9080, and 0.9016, respectively.
The modified Hausdorff distance (MHD) method
Another supervised evaluation method is employed to quantify the accuracy of the proposed segmentation methodology relative to manually traced lines. Unlike the classical quality measures of equations 8-12, the modified Hausdorff distance (MHD) method [39,40] measures the distance between two segmentations. The distance provides a normalized metric of the maximum separation between the points in the segmented images and the points in the ground-truth images. The lower the MHD value, the better the match with the ground-truth image, and thus the better the segmentation. It has been determined [39] that an object with an MHD value lower than 3 is similar enough to the ground truth.
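A direct implementation of the modified Hausdorff distance over the foreground pixel coordinates of two binary masks might look as follows (a brute-force version for illustration; for large images a k-d tree search would be preferable):

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(mask_a, mask_b):
    """MHD = max( mean_a min_b d(a,b), mean_b min_a d(b,a) ) over foreground pixels."""
    pts_a = np.argwhere(mask_a.astype(bool))
    pts_b = np.argwhere(mask_b.astype(bool))
    d = cdist(pts_a, pts_b)          # pairwise pixel distances
    d_ab = d.min(axis=1).mean()      # directed distance A -> B
    d_ba = d.min(axis=0).mean()      # directed distance B -> A
    return max(d_ab, d_ba)
```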
In Figure 11, MHD values are reported comparing the ground truth (manually segmented images) against the cement lines obtained by k-means, by PCNN-PSO-AT, and for the cement-line boundaries extracted using PCNN-PSO-AT. It can be clearly inferred that the MHD values for the PCNN-PSO-AT methodology are superior to those of k-means. The MHD values obtained for the cement-line boundaries extracted using the PCNN-PSO-AT methodology are well below 3, a testament to the accuracy of the methodology advanced in this work in segregating cement lines.
Summary
In cortical bone, cement lines are ill-defined boundaries at the outermost and largest perimeters of secondary osteons, demarcating osteons from the adjacent interstitial matrix. This study aims to automatically detect cement lines by employing an AI-based methodology that takes advantage of characteristic attributes of the lacuna-canalicular network (LCN) features present in the osteon, namely 1) feature orientation, 2) Euler number, and 3) feature size (perimeter of the largest fitted ellipse). The methodology combines fitness functions from three methods typically used separately in image segmentation: pulse coupled neural networks (PCNN), particle swarm optimization (PSO), and adaptive threshold (AT). The methodology was demonstrated to achieve high-fidelity, automated segmentation of cement lines (Figure 8; with enhanced visualization in Figure 10). In contrast with the popular k-means method, higher-fidelity segmented cement lines are obtained with the PCNN-PSO-AT methodology advanced in this work, as evidenced by larger values of the quality metrics (Table 2) and smaller values of the modified Hausdorff distance (Figure 11).
Figure 7. Simulation representing four hypothetical cases of micro-structural features with various orientation schemes.
Figure 8. Resultant pulses for the PCNN-PSO-AT fitness function using combinations of orientation, Euler number, and size: the first and third columns show the original images; the second and last columns show the segmented resultant images.
Figure 11. Modified Hausdorff distance (MHD) values for evaluating the quality of the extracted cement lines for the k-means method, the PCNN-PSO-AT methodology, and the cement-line boundaries extracted using PCNN-PSO-AT, as compared to the ground truth (manual segmentation).
"Computer Science"
] |
Phoretic association between larvae of Rheotanytarsus (Diptera: Chironomidae) and genera of Odonata in a first-order stream in an area of Atlantic Forest in southeastern Brazil
In this note, the occurrence of phoresy between larvae of Rheotanytarsus sp. (Diptera: Chironomidae) and larvae of Heteragrion sp. (Odonata: Megapodagrionidae) and of unidentified genera of Calopterygidae (Odonata) collected in a first-order stream in an area of Atlantic Forest in southeastern Brazil is reported. During the dry season of 2007 and the rainy season of 2008, with the aid of a Surber sampler, 15 samples of each of the following mesohabitats were collected: litter from riffle areas, litter from pool areas and sediment in pool areas. Eighty-five Odonata larvae were obtained, 10 (11.76%) with cases of phoresy by Rheotanytarsus sp. These chironomids were associated with only one specimen of Megapodagrionidae, whereas the other larvae were recorded in association with Calopterygidae. Most of the Odonata with cases of phoresy by Rheotanytarsus sp. were recorded in the dry season. In the present study, the absence of the phoretic association with other potential hosts for Rheotanytarsus sp. found in the samples indicates a possible preference of these larvae for Odonata, which accounted for only 2.42% of the collected macroinvertebrates in litter and sediment.
Phoresy, a relationship in which an organism lives on the body of another organism and is thus carried (TOKESHI 1993), occurs among various organisms from aquatic environments. It has been described as a commensal interaction for several invertebrate taxa, especially chironomid insects, which is the group with the greatest number of records of phoretic association with different genera and species of invertebrates (SEGURA et al. 2007). The commensal hosts for this family are mostly members of the Plecoptera (EPLER 1986), Ephemeroptera (CALLISTO & GOULART 2000), Megaloptera (PENNUTO 2003), Odonata (FERREIRA-PERUQUETTI & TRIVINHO-STRIXINO 2003), and Trichoptera (ROQUE et al. 2004).
Most works on chironomid larvae in phoretic association with other organisms are concentrated in North America and Europe (TOKESHI 1993). In Brazil, this relation was previously recorded in lotic environments from the states of São Paulo (FERREIRA-PERUQUETTI & TRIVINHO-STRIXINO 2003), Rio de Janeiro (DORVILLÉ et al. 2000), Mato Grosso do Sul (ROQUE et al. 2004), and Minas Gerais (CALLISTO et al. 2006). The present study records the occurrence of phoresy between larvae of Rheotanytarsus Thienemann & Bause, 1913 (Chironomidae) and larvae of Odonata (Calopterygidae and Megapodagrionidae) in a first-order stream in an area of Atlantic Forest in southeastern Brazil.
The stream is located in a secondary forest, which comprises an area of biological conservation called Reserva Biológica Municipal Poço D'Anta (21°45'S, 43°20'W; altitude varying from 800 to 1040 m). This reserve is located in the municipality of Juiz de Fora, state of Minas Gerais, Brazil. The stream is a shallow environment (5.63 ± 1.43 cm), whose bed is mostly constituted of sand and patches of substrate of stones and leaf litter. The water is transparent and well oxygenated (10.03 ± 0.42 mg/l), with electric conductivity and pH varying around 17.75 ± 2.06 µS/cm and 6.38 ± 0.41, respectively.
During the dry season of 2007 (July to September) and the rainy season of 2008 (January to March), 15 samples of each of the following mesohabitats were collected with the aid of a Surber sampler (250 µm mesh): litter from riffle areas, litter from pool areas and sediment from pool areas. In each month, patches from the four mesohabitats, located in a stretch of 300 m of the stream, were individually sampled for 30 seconds. The samples were fixed in 4% formaldehyde solution and washed.
In order to verify whether there was a significant difference between the numbers of larvae collected with or without larval cases in the two sampling seasons, the Mann-Whitney test was performed. This statistical test was also used to verify whether there was a significant difference in the mean water velocity and the outflow between the dry and the rainy seasons. The computer program Past, version 1.49 (HAMMER et al. 2001), was employed for performing the statistical tests.
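For illustration only (the authors used the Past package; the counts below are placeholders, not the study's data), the same comparison could be run with SciPy:

```python
from scipy.stats import mannwhitneyu

# hypothetical per-sample counts of Odonata larvae carrying Rheotanytarsus cases
dry_season = [2, 1, 0, 3, 1, 2]    # placeholder values
rainy_season = [0, 0, 1, 0, 0, 0]  # placeholder values

stat, p_value = mannwhitneyu(dry_season, rainy_season, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
```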
Most of the larval cases (83.33%) were attached to the sternal portion of the Odonata specimens, with the head capsule pointing towards the posterior portion of the host's body (Figs 1 and 2). Only two hosts presented more than one case of phoresy by Rheotanytarsus sp.: in Heteragrion sp., one larval case with a larva and another without were found, both on the sternum, while in one individual of Calopterygidae two larval cases with larvae were observed, one located on the sternum and the other on the prothoracic leg. Empty larval cases were recorded in 40% of the hosts (Tab. I).
The number of larvae without larval cases was significantly higher than that of larvae with larval cases (p < 0.05) in both seasons of the analysis (Fig. 3). In the rainy season, only a single Odonata specimen with a case of phoresy by Chironomidae was found. The mean velocity of the water and the outflow were significantly higher (p < 0.05) in the rainy season, possibly increasing the carrying of vegetal debris and their associated organisms, and thus making the phoretic association more difficult during this period. However, CALLISTO & GOULART (2000) recorded a higher number of Nanocladius sp. larvae (Chironomidae) in phoresy with nymphs of Ephemeroptera in the rainy season, relating this fact to the dispersion of vegetal debris in the studied stream.
Rheotanytarsus sp. has already been recorded from Brazil in association with Odonata larvae of Aeshnidae and Coenagrionidae (FERREIRA-PERUQUETTI & TRIVINHO-STRIXINO 2003, ROQUE et al. 2004). The phoretic association of this chironomid with the Calopterygidae is herein recorded from Brazil for the first time. The phoretic association was observed in the mesohabitats of litter in riffle areas and sediment in pool areas, possibly because the accumulation of leaves in these areas allows the maintenance of a greater abundance of invertebrates. Such abundance is related to the stability of the substrate and to the significant amount of debris (HYNES 1970). These conditions result in a greater availability of hosts for Rheotanytarsus sp. larvae, making the establishment of the phoretic association possible (TOKESHI 1993).
Rheotanytarsus sp. is a filter-feeding organism (COFFMAN & FERRINGTON 1984), which usually builds its tubes in lotic waters (SANSEVERINO & NESSIMIAN 2001). The individuals find in Odonata larvae an appropriate body surface for attaching their tubes and carrying out their physiological and behavioral activities (SANSEVERINO et al. 1998).
Calopterygidae larvae and Heteragrion sp. larvae live in lotic habitats and can be found in riffle areas and pool areas (CARVALHO & NESSIMIAN 1998) with abundant marginal vegetation (FERREIRA-PERUQUETTI & DE MARCO 2002, COSTA et al. 2004, DE MARCO & PEIXOTO 2004). The establishment of the phoretic association between Chironomidae and Odonata reported in the present work may be related to the co-occurrence of these organisms in habitats with similar physical characteristics.
The results obtained in the present study suggest, even with the relatively low percentage of the phoretic relation observed, that Odonata larvae represent steady substrates for Rheotanytarsus sp., increasing the mobility of the chironomid and its capacity to explore the environment (TOKESHI 1993, DOSDALL & PARKER 1998). Moreover, although Rheotanytarsus sp. is considered a commensalist without preference for a specific host (TOKESHI 1993), the results of this study indicate a possible preference for larvae of Odonata as hosts, since the latter were found in the stream at a lower percentage relative to other potential hosts.
Figure 3. Number of Odonata larvae with and without cases of phoresy by Rheotanytarsus sp. collected in the dry season of 2007 and in the rainy season of 2008 in a first-order stream in southeastern Brazil.
Table I. Rheotanytarsus sp. associated with Odonata larvae found in a first-order stream in an area of Atlantic Forest, state of Minas Gerais, southeastern Brazil. It was not possible to distinguish the genera Hetaerina Hagen in Selys, 1853 and Mnesarete Cowley, 1934 (Calopterygidae). See text for details.
"Biology",
"Environmental Science"
] |
A short-term mouse model that reproduces the immunopathological features of rhinovirus-induced exacerbation of COPD
Viral exacerbations of chronic obstructive pulmonary disease (COPD), commonly caused by rhinovirus (RV) infections, are poorly controlled by current therapies. This is due to a lack of understanding of the underlying immunopathological mechanisms. Human studies have identified a number of key immune responses that are associated with RV-induced exacerbations including neutrophilic inflammation, expression of inflammatory cytokines and deficiencies in innate anti-viral interferon. Animal models of COPD exacerbation are required to determine the contribution of these responses to disease pathogenesis. We aimed to develop a short-term mouse model that reproduced the hallmark features of RV-induced exacerbation of COPD. Evaluation of complex protocols involving multiple dose elastase and lipopolysaccharide (LPS) administration combined with RV1B infection showed suppression rather than enhancement of inflammatory parameters compared with control mice infected with RV1B alone. Therefore, these approaches did not accurately model the enhanced inflammation associated with RV infection in patients with COPD compared with healthy subjects. In contrast, a single elastase treatment followed by RV infection led to heightened airway neutrophilic and lymphocytic inflammation, increased expression of tumour necrosis factor (TNF)-α, C-X-C motif chemokine 10 (CXCL10)/IP-10 (interferon γ-induced protein 10) and CCL5 [chemokine (C-C motif) ligand 5]/RANTES (regulated on activation, normal T-cell expressed and secreted), mucus hypersecretion and preliminary evidence for increased airway hyper-responsiveness compared with mice treated with elastase or RV infection alone. In summary, we have developed a new mouse model of RV-induced COPD exacerbation that mimics many of the inflammatory features of human disease. This model, in conjunction with human models of disease, will provide an essential tool for studying disease mechanisms and allow testing of novel therapies with potential to be translated into clinical practice.
Inflammatory responses in the airways during virus-induced exacerbations of COPD are poorly understood. Some insight has been gained from naturally occurring COPD exacerbation studies, but these studies are limited by variability in factors such as time between virus infection and presentation and treatments initiated prior to sampling. To address these issues, we have developed a model of experimental RV-induced COPD exacerbation in humans that allows sequential measurement of a range of clinical and inflammatory parameters and has provided a clearer understanding of the relationship between virus infection, inflammatory responses and biological and physiological markers [7]. Key features of exacerbation in comparison with stable-state COPD reported in this and other human studies include increased neutrophilic [7][8][9][10][11][12] and lymphocytic [7,9,11,12] cellular airways inflammation, enhanced production of cytokines such as tumour necrosis factor (TNF)-α [7,13], CXCL10 (C-X-C motif chemokine 10)/IP-10 (interferon γ-induced protein 10) [14] and CCL5 [chemokine (C-C motif) ligand 5]/RANTES (regulated on activation, normal T-cell expressed and secreted) [9,10] in the airways, deficient type I interferon responses to RV infection, increased virus load and enhanced airway mucus production [7]. Additionally, RV infection in patients with COPD has been shown to be associated with enhanced airway neutrophilia and lymphocytosis and increased neutrophil chemokine CXCL8/IL-8 expression compared with RV infection in healthy smokers [7,15,16].
Animal models of chronic respiratory diseases have historically played important roles in broadening our understanding of disease mechanisms, including development of the proteinase/anti-proteinase imbalance hypothesis in COPD [17]. A mouse model of RV-induced COPD exacerbation that can mimic what is known of human disease could therefore provide further critical insight into disease mechanisms and be used to test novel therapies. However, this presents a considerable challenge due to a limited understanding of the mechanisms driving underlying COPD and of the distinct clinical phenotypes in humans.
Previously described animal models of COPD have used one of three main approaches: inhalation of noxious stimuli (most commonly cigarette smoke), instillation of tissue-degrading proteinases such as elastase, or genetic manipulation [18,28,52]. Cigarette smoke administration models require at least 2 months' exposure before some of the pathological features of COPD are evident [18]. Models that use instillation of elastase produce a rapid onset of emphysematous destruction of the lungs with mucin induction and may be considered the best short-term method for modelling severe disease. A number of studies have described elastase-induced models of COPD with exacerbation precipitated by bacteria and, more recently, RV infection [19][20][21]. These models have used various protocols, including single [19,20] or multiple [21,22] doses of intranasal elastase, differing intervals between elastase dosing and infection [19,20,23] and the addition of lipopolysaccharide (LPS) to model chronic bacterial colonization [21,22]. Given this array of approaches, the optimal protocol for recreating the features of virus-induced COPD exacerbation that have been identified in humans is unclear.
In the present study, we describe a 10-day mouse model consisting of a single dose of elastase administration to establish severe emphysematous lung disease, followed by RV infection, that recreates many of the inflammatory features of human RV-induced COPD exacerbation.
Animals
All studies were performed in 8-10-week-old, wild-type, female C57BL/6 mice, purchased from Charles River Laboratories and housed in individually ventilated cages under specific pathogen-free conditions. During all experiments, animal welfare was monitored at least twice daily.
COPD models
Isoflurane-anaesthetized mice were intranasally challenged with 1.2 units of porcine pancreatic elastase (Merck) on day 1 and with 70 endotoxin units of LPS from Escherichia coli 026:B6 (Sigma-Aldrich) on day 4 of the week, for up to 4 consecutive weeks, as previously described [21]. In some experiments, mice were alternatively treated with a single dose of 1.2 units of elastase alone. Mice treated with intranasal PBS instead of elastase or LPS were used as controls.
RV infection
RV serotype 1B was obtained from the A.T.C.C. and propagated in Ohio HeLa cells, as described previously [24]. Mice were infected intranasally under light isoflurane anaesthesia with 2.5 × 10⁶ tissue culture infectious dose (TCID50) of RV1B or UV-inactivated RV control, either 7 days after the final LPS challenge in the case of the combined elastase and LPS models or 10 days after elastase challenge in the single-dose elastase model.
Cytospin assay
Bronchoalveolar lavage (BAL) was performed as previously described [24]. Cells were pelleted by centrifugation, resuspended in ammonium-chloride-potassium (ACK) buffer to lyse red blood cells, washed with PBS and resuspended in RPMI 1640 medium with 10% FBS. Cells were then spun on to slides and stained with Quik-Diff (Reagena) for differential counts. Counts were performed blinded to experimental conditions.
ELISA
Cytokine and chemokine protein levels in BAL were measured using commercial duoset ELISA kits (R&D Systems), according to the manufacturer's instructions.
Myeloperoxidase assay
To indirectly assess neutrophil activation, the chlorination activity of released myeloperoxidase (MPO) was measured in BAL using the EnzChek MPO activity assay kit (Invitrogen), according to the manufacturer's instructions.
Histopathological analysis
Following BAL, lungs were perfused with PBS via the heart and inflated with 4% paraformaldehyde (PFA), then immersion-fixed in 4% PFA for 24 h. Fixed lung samples were embedded in paraffin wax, and 5-μm-thick histological sections were cut and stained with haematoxylin and eosin (H&E) or periodic acid-Schiff (PAS). Mean linear intercept was determined by measuring the diameter of air spaces in ten random fields per slide using Zeiss Axiovision software v4.8.3.0. PAS staining was scored using a system described previously [26]. Ten to twenty airways were counted per section. All counting was performed blind to experimental conditions.
Assessment of lung function
Lung function was assessed as previously described [18]. Mice were anaesthetized with ketamine (125 mg/kg) and xylazine (16 mg/kg) and were then cannulated (tracheostomy with ligation). Work of breathing, functional residual capacity (FRC), total lung capacity (TLC) and dynamic lung compliance were measured using a forced pulmonary manoeuvre system (Buxco). An average breathing frequency of 200 breaths/minute was applied to anaesthetized animals. Each manoeuvre was performed a minimum of three times and the average was calculated. Dynamic compliance readings were taken every 2 s for 2 min and the average was calculated. The FlexiVent FX1 apparatus (SCIREQ) was used to assess hysteresis and tissue damping. Maximal pressure/volume (PV) loops were used to calculate hysteresis. For all perturbations, a coefficient of determination of 0.95 was the minimum allowable for an acceptable measurement. Each perturbation was conducted three times per animal and the average was calculated, with a minimum ventilation period of 20 s allowed between each perturbation.
Assessment of airways hyper-responsiveness
Airway hyper-responsiveness (AHR) was measured as enhanced pause (PenH) in response to nebulized challenge with methacholine, using an unrestrained whole-body plethysmography system (Electromedsystems), as previously described [26]. PenH is displayed as the average value over a 5 min logging period post-methacholine challenge.
Statistical analyses
Mice were studied in groups of four or five and data are presented as means ± S.E.M., representative of or comprising at least two independent experiments. Data were analysed by ANOVA and Bonferroni's multiple comparison test. All statistics were calculated using Prism 4.2 software (GraphPad).
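A rough SciPy equivalent of the analysis described above (the authors used Prism; the group values below are placeholders and the pairwise-comparison scheme with a simple Bonferroni adjustment is an assumption):

```python
from itertools import combinations
from scipy.stats import f_oneway, ttest_ind

groups = {  # placeholder BAL cell counts per treatment group (not study data)
    "PBS+UV": [1.0, 1.2, 0.9, 1.1],
    "PBS+RV": [3.1, 2.8, 3.5, 3.0],
    "elastase+RV": [5.2, 4.8, 5.6, 5.1],
}

f_stat, p_anova = f_oneway(*groups.values())
print("one-way ANOVA p =", p_anova)

n_comparisons = len(list(combinations(groups, 2)))
for a, b in combinations(groups, 2):
    t, p = ttest_ind(groups[a], groups[b])
    print(a, "vs", b, "Bonferroni-adjusted p =", min(p * n_comparisons, 1.0))
```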
Study approval
All animal work was completed in accordance with U.K. Home Office guidelines (U.K. project licence PPL 70/7234).
Multiple doses of elastase and LPS in combination with RV infection do not accurately model COPD exacerbation
We initially attempted to reproduce a previously reported mouse model of RV-induced COPD exacerbation [21] using exactly the same dosing protocol of once-weekly intranasal elastase and LPS administration for 4 weeks followed by RV infection (Supplementary Figure S1a). Consistent with the previous report of this model, we found that induction of IFN-β and IFN-λ mRNAs in lung tissue in vivo was reduced with 4 weeks of elastase/LPS administration followed by RV infection (elastase/LPS + RV) compared with treatment with PBS and infection with RV (PBS + RV; modelling RV-infected healthy subjects) (Figures S1b and S1c). Lung tissue IL-13 mRNA was also increased in elastase/LPS + RV-treated mice compared with either treatment alone (Figure S1d), as previously reported [21].
However, in contrast with the original report of this model, we found that elastase/LPS treatment followed by RV infection led to reduced rather than increased lung virus loads compared with non-COPD mice infected with RV (Figure S1e), reduced rather than increased expression of TNF-α, and no difference in MUC5AC mRNA levels in lung tissue compared with PBS + RV-treated mice (Supplementary Figures S1f and S1g). AHR was increased in mice treated with elastase/LPS + RV compared with PBS + RV but was reduced compared with mice treated with elastase/LPS + UV (Supplementary Figure S1h).
We also measured a number of other inflammatory endpoints associated with human disease that were not originally reported [21]. BAL neutrophil numbers on day 1 and BAL lymphocyte numbers on day 4 post-challenge were increased with RV infection as previously shown [24] (PBS + RV compared with PBS + UV, Supplementary Figures S2a and S2b). BAL neutrophil numbers were increased in elastase/LPS + RV-treated compared with elastase/LPS + UV-treated mice, but were decreased compared with PBS + RV treatment at day 1 post-challenge (Supplementary Figure S2a). BAL lymphocyte numbers were no different in elastase/LPS + RV-treated compared with elastase/LPS + UV-treated mice, but were increased on day 1 post-challenge compared with PBS + RV treatment (Supplementary Figure S2b). Levels of the virus-inducible chemokines CXCL10/IP-10, CCL5/RANTES and CXCL2/macrophage inflammatory protein 2 (MIP-2) in BAL were increased by RV infection compared with uninfected controls (PBS + RV compared with PBS + UV treatment), but were not increased in elastase/LPS + RV-treated compared with PBS + RV-treated mice, and CXCL10/IP-10 was reduced in elastase/LPS + RV compared with PBS + RV administration at day 4 post-challenge (Supplementary Figures S2c-S2e). MUC5AC protein levels in BAL on day 4 post-challenge were increased with RV infection alone (PBS + RV compared with PBS + UV; Supplementary Figure S2f) and were also increased in elastase/LPS + RV-treated compared with PBS + RV-treated mice on day 1 post-infection, but significantly decreased compared with elastase/LPS + UV treatment at the same time point (Supplementary Figure S2f).
Comparison of single- with multiple-dose elastase and LPS to model COPD
In our hands, RV infection in the 4-week elastase/LPS COPD model failed to produce most of the inflammatory features of human COPD exacerbation. We speculated that inducing very severe lung damage with multiple doses of elastase interfered with virus infection and associated inflammatory responses, as previously reported [19]. We therefore investigated whether reducing the number of doses of elastase/LPS could still induce significant alveolar destruction with less severe lung damage. Initial comparisons of one, two, three and four weekly doses of elastase and LPS indicated a dose-dependent increase in emphysematous lung changes, apparent both visually in H&E-stained lung sections (Figures 1a-1e) and when quantified by measuring mean linear intercept (Figure 1f). A single dose of elastase and LPS was sufficient to induce emphysematous lung changes as defined by significantly increased mean linear intercept compared with control PBS-treated mice (Figure 1f). Despite the histological changes induced by intranasal elastase with or without LPS administration, none of the animals studied showed any outward signs of illness or respiratory compromise, regardless of the dosing protocol used.
To determine whether reducing elastase/LPS-induced lung damage increased responses to infection, we compared a single dose with up to four doses of elastase and LPS followed by RV infection. Regardless of the number of doses administered, elastase/LPS failed to enhance RV-induced airway inflammation. We observed reduced viral RNA levels in lung tissue (Figure 2a) and reduced or no difference in BAL neutrophilia, BAL lymphocytosis (except for the four-dose protocol) and BAL CXCL10/IP-10, CCL5/RANTES and IL-6 in elastase/LPS + RV-treated compared with PBS + RV-treated mice (Figures 2d-2f). The number of doses of elastase and LPS therefore had little effect on the efficacy of this model when comparing elastase/LPS + RV treatment to RV infection alone. However, a number of inflammatory endpoints, including BAL neutrophilia (one-dose elastase/LPS protocol), BAL lymphocytosis (one- and two-dose protocols) and protein levels of CXCL10/IP-10 (one-, two- and three-dose protocols), CCL5/RANTES (one- and two-dose protocols) and IL-6 (one-, two- and four-dose protocols) in BAL, were increased in elastase/LPS + RV-treated mice compared with elastase/LPS + UV-treated mice (Figures 2b-2f).
Single-dose elastase in combination with RV infection more accurately models COPD exacerbation
Alternative mouse models of COPD have successfully used single-dose elastase administration protocols and demonstrated enhanced inflammatory responses to bacterial challenge [19,20]. Since the combination of elastase and LPS with RV did not produce a phenotype that we considered to be consistent with human COPD exacerbation, regardless of the number of doses administered, we reasoned that LPS may be activating innate immunity and thus directly interfering with RV infection. We therefore determined whether removal of the LPS component from the protocol would lead to a more representative disease model (Figure 3a). Similarly to combined elastase/LPS, single-dose elastase induced emphysematous lung changes (Figures 3b-3d), and elastase-treated mice infected with RV showed increased BAL neutrophil numbers and MPO activity compared with either treatment alone (Figures 3e and 3i). Lymphocytes in BAL were greater on day 1 post-challenge in mice treated with elastase + RV compared with PBS + RV treatment, and on day 4 post-challenge compared with elastase + UV treatment (Figure 3f). Total cell and macrophage numbers in BAL were increased in elastase + RV-treated compared with both elastase + UV- and PBS + RV-treated mice at day 4 post-challenge (Figures 3g and 3h).
We also observed significant increases in BAL protein levels of CXCL10/IP-10 and CCL5/RANTES (day 1 post-challenge) and in lung tissue TNF-α mRNA expression (day 4 post-challenge) in elastase + RV-treated mice compared with either PBS + RV or elastase + UV treatments (Figures 4a, 4b and 4d). In addition, BAL protein levels of CXCL2/MIP-2 were increased in elastase + RV-treated compared with elastase + UV-treated mice at day 1 post-challenge (Figure 4c). Lung tissue gene expression of IL-13 was significantly lower in elastase + RV-treated compared with PBS + RV-treated mice (Figure 4e).
Increased mucus production and mucus plugging of the airways is a recognized feature of COPD and has been shown to be further increased by RV infection [27]. Staining of lung sections with PAS revealed abundant PAS-positive mucus-producing cells in the airways of elastase + RV-treated mice 4 days after RV challenge and, to a significantly lesser extent, in the airways of elastase + UV-treated mice (Figures 5a, 5b and 5e). No PAS-positive cells were visible in the airways of mice receiving PBS in combination with either RV or UV-inactivated virus (Figures 5c, 5d and 5e). We also assessed airway mucin gene and protein levels. On day 4 after virus infection, lung tissue MUC5AC mRNA levels were increased in elastase + RV-treated compared with PBS + RV- and elastase + UV-treated mice (Figure 5f). Lung tissue MUC5AC mRNA levels were similarly increased compared with PBS + RV treatment, but not compared with elastase + UV treatment, at day 1 (Figure 5f). Lung MUC5B mRNA levels were increased at day 4 post-challenge in elastase + RV-treated compared with PBS + RV-treated mice (Figure 5g). BAL MUC5AC protein levels were also increased in elastase + RV-treated compared with PBS + RV-treated mice at both time points and compared with elastase + UV-treated mice at day 1 post-challenge (Figure 5h). BAL MUC5B protein was increased in elastase + RV-treated compared with PBS + RV-treated mice on day 4 post-challenge (Figure 5i). Assessment of lung function parameters in the single-dose elastase model showed abnormalities consistent with human COPD, including increased FRC, TLC and increased dynamic lung compliance associated with elastase administration (elastase + UV compared with PBS + UV-treated mice; Figures 6a-6c). We did not observe any additional effect of RV infection on these abnormal parameters at day 1 post-challenge, with no increases in FRC, TLC or dynamic compliance observed in elastase + RV-treated compared with elastase + UV-treated mice (Figures 6a-6c). There were no significant effects of elastase treatment and/or RV infection on tissue damping or lung hysteresis (Figures 6d and 6e). We also assessed AHR, measured as PenH using whole-body plethysmography, at 24 h post-RV challenge. Neither RV infection nor elastase treatment alone caused increased AHR compared with PBS + UV-treated controls. However, mice exposed to single-dose elastase followed by RV infection had significantly increased PenH at the highest dose of methacholine compared with PBS + RV- or elastase + UV-treated mice (Figure 6f).
In our human model of RV-induced COPD exacerbation, there was evidence of a deficiency in type I interferon responses to RV [7]. We therefore assessed innate anti-viral immune responses and virus loads in the single-dose elastase-induced COPD model. Lung tissue IFN-λ levels were reduced in elastase + RV-treated compared with PBS + RV-treated mice on day 1 post-infection (Figure 7a). There was no significant difference in lung IFN-β mRNA levels (Figure 7b) and no significant effect of elastase treatment on lung tissue RV RNA levels on either day 1 or day 4 post-infection (Figure 7c).
DISCUSSION
Respiratory viral infections, especially with RVs, are associated with a large proportion of COPD exacerbations [6,8], but understanding of the mechanisms by which viral infection enhances disease is severely lacking. The development of mouse models of COPD exacerbation, in parallel with the existing human experimental model [7], will allow insight into disease mechanisms and testing of potential therapies. In the present study, we report a new mouse model of RV-induced COPD exacerbation. Our model is simple in comparison with the other existing animal model of RV-induced COPD exacerbation [21], comprising just a single intranasal administration of porcine pancreatic elastase, followed by infection with minor group RV1B. We found that our model mimics many of the key pathological features reported in human experimental and naturally occurring disease, including enhanced neutrophilic and lymphocytic airways inflammation, exaggerated inflammatory cytokine production and increased airways mucus production.
A variety of mouse models of COPD have previously been described, including various transgenic strains (e.g. overexpression of matrix metalloproteinase-1 [28] or IL-13 [29]) and cigarette smoke exposure [18]. Our base model of COPD comprises administration of porcine pancreatic elastase to induce emphysematous lung damage. A criticism of this model is that it does not employ the primary disease-causing agent, unlike models based on cigarette smoke administration. However, smoke-exposure models are acknowledged to be complex to set up, require prolonged exposure and do not induce significant emphysematous changes or lung function abnormalities consistent with advanced disease. It is also notable that only 15-20% of smokers develop COPD [30], thereby suggesting that cigarette smoke exposure alone is insufficient to generate disease. Additionally, protease dysregulation can also cause COPD in humans (in the case of patients with α-1 anti-trypsin deficiency), thereby providing further rationale for use of elastase to induce features of COPD in mice. Furthermore, acute exacerbations of disease become more frequent as the disease progresses [31] and, therefore, elastase models may be more appropriate when studying pathophysiological mechanisms involved in exacerbations. Some previous studies have combined cigarette smoke exposure with influenza or respiratory syncytial virus infection to model COPD exacerbation in mice [32,33]. These studies have reported various effects of cigarette smoke, including increased [33] or reduced [34] virus loads and enhanced [32,33] or suppressed [35] airway inflammation. However, other disease-relevant parameters such as mucus hypersecretion and lung function impairments have not been assessed in these models and, to date, no study has combined cigarette smoke exposure with RV infection in mice.
Airway inflammation is known to be a key underlying pathological process in COPD, and neutrophilic inflammation is a recognized characteristic of COPD both in the stable state and during exacerbations [7,36,37]. In our initial efforts to reproduce a published model [21] and then to optimize this model, we found that multiple doses of elastase and LPS led to suppression or no change rather than enhancement of RV-induced airways neutrophilia and levels of inflammatory cytokines such as TNF-α, CXCL10/IP-10, CCL5/RANTES and CXCL2/MIP-2 compared with control PBS-dosed and RV-infected mice, the equivalent of an RV-infected healthy control patient. This effect on neutrophilia in particular could be due to the LPS component of the model, because a previous comparison showed attenuated BAL neutrophilia with chronic compared with acute LPS challenge, which was believed to be due to the resolution phase of acute inflammation preventing further neutrophil recruitment [38]. Additionally, a recent in vitro study demonstrated that LPS administration attenuates RV-induced neutrophil chemokine expression [39]. More generally, the lack of enhancement of airway inflammation is also in keeping with a previous study in which a very high dose of elastase was administered to mice (12 units, 10-fold higher than in the present study), leading to severe lung damage and impairment of subsequent inflammatory responses to Streptococcus pneumoniae [19]. It was speculated that this could be a consequence of airway epithelial damage or perhaps altered alveolar macrophage function [19]. Therefore, given this finding that severe lung damage can suppress the inflammatory response to pathogens, and the fact that chronic LPS challenge in itself also causes emphysematous lung damage [38,40], it is perhaps not surprising that chronic challenge with both of these agents led to suppression of inflammatory responses to RV. In contrast, our model of single-dose elastase led to significantly increased neutrophil numbers in the BAL compared with naive mice, with further significant increases in neutrophilia at days 1 and 4 post-infection when elastase was combined with RV infection compared with either treatment alone. In addition to increased neutrophil numbers in elastase-treated mice infected with RV, we also observed concomitant increased activation of MPO, a protein that is released from primary neutrophil granules following activation [41]. It is known that neutrophil activation markers are increased in sputum of patients with COPD compared with healthy controls [42,43], and previous studies have also reported increased MPO activity in sputum [13] or exhaled breath condensate [44] of patients with COPD during exacerbations.
We also observed increased BAL lymphocytosis in mice receiving single-dose elastase followed by RV compared with control mice treated with PBS and RV or mice treated with elastase and UV-inactivated virus. This finding is also in keeping with our human model of COPD RV exacerbation, where increased lymphocytes in BAL were seen at 7 days after RV infection in patients with COPD compared with healthy controls [7], with a predominance of CD8+ T-cells [16]. Whether this represents an appropriate or exaggerated response to RV infection and/or contributes to lung parenchymal damage in COPD is unclear [16]. Further consistent with human studies, we observed increases in airway inflammatory cytokines in single-dose elastase and RV-treated mice compared with either treatment alone, including CXCL10/IP-10, CCL5/RANTES and TNF-α, which have all been shown to be up-regulated during naturally occurring COPD exacerbations in comparison with the stable state [7,9,10,13,45].
Mucus hypersecretion and plugging of the airways is another cardinal feature of COPD, and increased MUC5AC and MUC5B production has been demonstrated in histopathological specimens from patients with COPD [46]. Furthermore, RV has been shown to increase airway mucins in vitro [27,47] and in vivo [24,48], and increased sputum production is a key symptom described during experimental exacerbations of disease [7]. In our model, we found increases in lung tissue gene expression and BAL protein levels of the major respiratory mucins MUC5AC and MUC5B in mice treated with elastase followed by RV compared with control mice receiving PBS followed by RV. There is considerable interest in selective therapeutic targeting of mucin production in COPD, and our mouse model provides an in vivo system that may facilitate mechanistic dissection of the pathways involved to aid development of therapeutic targets.
Acute exacerbations of COPD are associated with increased airway obstruction, which is believed to be secondary to inflammation and mucus hypersecretion [49]. In our human model of disease, we observed significant reductions in post-bronchodilator peak expiratory flow in patients with COPD infected with RV [7]. Assessment of airway resistance by whole-body plethysmography in our single-dose elastase mouse model did not show any baseline differences between mice treated with elastase compared with mice treated with PBS, but we did observe increased AHR to methacholine challenge in mice exposed to elastase and RV compared with treatment with elastase or infection with RV alone. AHR is considered to be a hallmark feature of asthma, but is increasingly being recognized as a feature in COPD [50]. However, it should be noted that the applicability of non-invasive measurements of lung function such as whole-body plethysmography may be questionable, as the technique does not provide a direct assessment of lung mechanics and thus may not be the optimum method for measuring lung function changes associated with chronic obstructive lung disorders such as COPD. We therefore additionally used invasive techniques to directly measure lung function in our model and found that single-dose elastase induced abnormalities consistent with human COPD, including increased TLC and FRC and increased pulmonary compliance. Similar findings have been reported in previous studies that have utilized single-dose elastase mouse models of COPD [51,52]. In contrast with whole-body plethysmography, we did not observe additional worsening of these parameters when RV infection was combined with elastase treatment.
Our model of elastase-induced COPD did not, however, recreate all features of human RV-induced COPD exacerbation that have been reported. In our human model of experimental COPD exacerbation, we observed that deficient RV induction of IFN-β in stable COPD ex vivo was followed by increased virus load following subsequent RV infection in vivo [7]. However, all of the experimental protocols we assessed in mice, including single-dose elastase and up to four doses of elastase and LPS, led to similar or lower lung RV RNA levels compared with control PBS + RV-treated mice. These lower virus loads were accompanied by the expected lower levels of IFN-β and IFN-λ in lung tissues taken at the same time points in vivo. The lower virus loads and accompanying lower levels of interferon induction in vivo might, in part, be explained by the fact that the intranasal elastase mouse model is associated with mucus hypersecretion in the large airways (as shown by PAS-positive staining in the airway lining). This may theoretically impair efficient binding of RV to the bronchial epithelium and thereby lead to a reduction in virus loads, as demonstrated by a previous study which reported reduced virus loads following influenza virus challenge in MUC5AC-overexpressing mice [53]. We are unable to explain the difference between our results in mice (lower virus loads accompanied by the expected lower levels of IFN-β and IFN-λ in lung tissues taken at the same time points in vivo) and the findings in the previous mouse model study employing four doses of elastase and LPS [21], which reported the surprising findings of greater virus loads accompanied by absent induction of IFN-α and IFN-β in lung tissues taken at the same time point in vivo. We also cannot explain the differences between our results reporting deficient RV induction of IFN-β in BAL cells from stable COPD subjects ex vivo [7] and work from the same group in air/liquid interface-cultured bronchial cells from patients with moderate-to-severe COPD, which demonstrated enhanced virus replication but increased rather than decreased interferon induction at the same time points [54]. There may be subtleties in design that can explain these apparently contradictory findings, but relationships between interferon responses to RV infection and virus replication in vitro and in vivo in COPD clearly require further study in both humans and mice. It is also notable that, despite type I and III interferon responses being unchanged or reduced in our single-dose elastase + RV model, BAL protein levels of the interferon-stimulated gene CXCL10/IP-10 were actually enhanced. However, RV may induce certain interferon-stimulated genes independently of type I interferon signalling [55], and other mediators such as TNF-α, which was enhanced in our model, have been shown to up-regulate CXCL10/IP-10 in vitro [56].

In summary, we report a mouse model of RV infection in COPD that mimics a number of inflammatory features of human disease. This model, in conjunction with our human model, will provide a useful tool for studying disease mechanisms and will allow testing of novel therapies with potential to be translated into clinical practice.
CLINICAL PERSPECTIVES
- RV infections commonly trigger exacerbations in patients with COPD and are a major cause of morbidity and mortality. There is a lack of understanding of the underlying immunopathological mechanisms involved in virus-induced exacerbations and no available effective therapies.
- The aim of the present study was to establish a mouse model that reproduces the hallmark features of RV-induced exacerbation of COPD.
- A single elastase treatment followed by RV infection in mice mimicked a number of hallmark inflammatory features of human disease, including enhanced cellular airways inflammation, increased inflammatory cytokine expression and mucus hypersecretion. This model will provide a useful tool for studying disease mechanisms and allow future testing of novel therapies with potential to be translated into clinical practice.
Figure 1
Figure 1 Single elastase/LPS treatment is sufficient to induce emphysema Mice were challenged intranasally with elastase on day 1 and LPS on day 4 of each week or PBS as control for 1, 2, 3 or 4 weeks.At day 7, following final LPS or PBS challenge, lungs were formalin-fixed, paraffin-embedded and stained with H&E.Representative images of mice treated with (a) PBS, (b) single dose of elastase and LPS, (c) two doses of elastase and LPS, (d) three doses of elastase and LPS, and (e) four doses of elastase and LPS.Scale bars: 50 μm.Magnification ×100 (f) The diameter of air spaces were measured in at least ten random fields per slide and were averaged to determine mean linear intercept.n=4 mice/group.Data were analysed by ANOVA and Bonferroni post-hoc test.*P < 0.05; ***P < 0.001.
Figure 2
Figure 2 Effect of differing elastase and LPS dosing protocols on RV load and RV-induced airway inflammation Mice were challenged intranasally with elastase on day 1 and LPS on day 4 of each week or PBS as control for 1, 2, 3 or 4 weeks.At day 7 following final LPS or PBS challenge, mice were additionally challenged with RV1B or UV-inactivated RV1B 7 days after final LPS challenge.(a) RV RNA copies in lung tissue were measured by Taqman quantitative PCR at 24 h post-infection.(b) Neutrophil numbers at 24 h post-infection and (c) lymphocyte numbers at day 4 post-infection were enumerated in BAL by cytospin assay.(d) CCL5/RANTES, (e) CXCL10/IP-10 and (f) IL-6 proteins at 24 h post-infection were measured in BAL by ELISA.n=5 mice/group.Data were analysed by two-way ANOVA and Bonferroni post-hoc test.*P < 0.05; **P < 0.01; ***P < 0.001.
Figure 3
Figure 3 Single-dose elastase treatment induces histological emphysema and enhances pulmonary inflammation in RV-infected mice (a) Mice were challenged intranasally with a single dose of elastase or PBS as control and at day 10 post-challenge, lungs were formalin-fixed, paraffin-embedded and stained with H&E.Representative images of mice treated with (b) elastase and (c) PBS.Scale bars: 50μm.Magnification ×100 (d).The diameter of air spaces were measured in at least ten random fields per slide and averaged to determine mean linear intercept.On day 10 after elastase or PBS challenge, mice were additionally challenged intranasally with RV1B or UV-inactivated RV1B (UV).(e) Neutrophil, (f) lymphocyte, (g) macrophage and (h) total cell numbers in BAL were enumerated by cytospin assay.(i) MPO activity was measured indirectly by assessment of chlorination of 3 -(p-aminophenyl fluorescein) in BAL.n=5 mice/group.Data were analysed by two-way ANOVA and Bonferroni post-hoc test.*P < 0.05; **P < 0.01; ***P < 0.001.
Figure 4
Figure 4 Single-dose elastase treatment enhances inflammatory chemokine and cytokine production in RV-infected mice (a) Mice were challenged intranasally with a single dose of elastase or PBS as control.On day 10 after elastase or PBS challenge, mice were additionally challenged intranasally with RV1B or UV-inactivated RV1B (UV).(a) CXCL10/IP-10, (b) CCL5/RANTES and (c) CXCL2/MIP-2 proteins were measured in BAL by ELISA.(d) TNF-α and (e) IL-13 mRNA in lung tissue was measured by Taqman quantitative PCR.n=5 mice/group.Data were analysed by two-way ANOVA and Bonferroni post-hoc test.*P < 0.05; **P < 0.01; ***P < 0.001.
Figure 5 RV
Figure 5 RV infection enhances mucus production in a single-dose elastase COPD model Mice were challenged intranasally with a single dose of elastase or PBS as control.Ten days later, mice were infected intranasally with RV1B or UV-inactivated RV1B (UV).At day 4 after RV challenge, lungs were formalin-fixed, paraffin-embedded and stained with PAS.Representative images of mice treated with (a) elastase + RV1B, (b) elastase + UV, (c) PBS + RV1B and (d) PBS + UV.Scale bars: 50 μm.Magnification ×400 (e) Scoring for PAS-positive mucus-producing cells.(f) MUC5AC and (g) MUC5B mRNA in lung tissue was measured by Taqman quantitative PCR.(h) MUC5AC and (i) MUC5B proteins were measured in BAL by ELISA.n=5 mice/group.Data were analysed by two-way ANOVA and Bonferroni post-hoc test.*P < 0.05; **P < 0.01; ***P < 0.001.
Figure 6. Single-dose elastase treatment induces lung function changes. Mice were challenged intranasally with a single dose of elastase or PBS as control. Ten days later, mice were infected intranasally with RV1B or UV-inactivated RV1B (UV). At day 1 after RV challenge, forced manoeuvre techniques and Flexivent were used to assess lung function parameters including (a) FRC, (b) TLC, (c) dynamic compliance, (d) tissue damping and (e) lung hysteresis. (f) AHR was measured by whole-body plethysmography at day 1 post-infection. (a-e) n = 10 mice/group, two independent experiments combined; data analysed by one-way ANOVA and Bonferroni post-hoc test. (f) n = 8 mice/group, two independent experiments combined; data analysed by two-way ANOVA and Bonferroni post-hoc test. *P < 0.05; **/ψψP < 0.01; ***P < 0.001. In (f), * indicates statistical comparison between elastase + RV and PBS + RV groups and ψ indicates comparison between elastase + RV and elastase + UV groups.
Figure 7. Deficient IFN-λ production in RV-infected mice with elastase-induced COPD. Mice were challenged intranasally with single-dose elastase or PBS as control. On day 10 after elastase or PBS challenge, mice were additionally challenged intranasally with RV1B or UV-inactivated RV1B (UV). (a) IFN-λ mRNA, (b) IFN-β mRNA and (c) RV RNA in lung tissue was measured by Taqman quantitative PCR. n = 5 mice/group. Data were analysed by two-way ANOVA and Bonferroni post-hoc test. ***P < 0.001. | 8,043 | 2015-03-18T00:00:00.000 | [
"Biology",
"Medicine"
] |
Analysis of the effect of water activity on ice formation using a new thermodynamic framework
In this work a new thermodynamic framework is developed and used to investigate the effect of water activity on the formation of ice within supercooled droplets. The new framework is based on a novel concept where the interface is assumed to be made of liquid molecules “trapped” by the solid matrix. It also accounts for the change in the composition of the liquid phase upon nucleation. Using this framework, new expressions are developed for the critical ice germ size and the nucleation work with explicit dependencies on temperature and water activity. However unlike previous approaches, the new model does not depend on the interfacial tension between liquid and ice. The thermodynamic framework is introduced within classical nucleation theory to study the effect of water activity on the ice nucleation rate. Comparison against experimental results shows that the new approach is able to reproduce the observed effect of water activity on the nucleation rate and the freezing temperature. It allows for the first time a phenomenological derivation of the constant shift in water activity between melting and nucleation. The new framework offers a consistent thermodynamic view of ice nucleation, simple enough to be applied in atmospheric models of cloud formation.
Introduction
Ice formation by the freezing of supercooled droplets is an important natural and technological process. In the atmosphere it leads to the formation of cirrus and determines the freezing level of convective clouds (Pruppacher and Klett, 1997). At temperatures below 238 K and in the absence of ice forming nuclei, freezing proceeds by homogeneous nucleation. A significant fraction of cirrus in the upper troposphere form by this mechanism (Gettelman et al., 2012; Barahona et al., 2013). Cirrus clouds impact the radiative balance of the upper troposphere (Fu, 1996) and play a role in the transport of water vapor to the lower stratosphere (e.g., Barahona and Nenes, 2011; Jensen and Pfister, 2004; Hartmann et al., 2001). Correct parameterization of ice formation is therefore crucial for reliable climate and weather prediction (Lohmann and Feichter, 2005). Many experimental and theoretical studies have been devoted to the study of homogeneous nucleation (e.g., Kashchiev, 2000; Murray et al., 2010b; Wu et al., 2004, and references therein). Yet the role and meaning of the interfacial tension at the microscopic scale and the properties of the ice germ during the first stages of nucleation remain unclear and make the theoretical prediction of ice nucleation rates difficult.
Molecular dynamics (MD) simulations have advanced the fundamental understanding of homogeneous nucleation (e.g., Matsumoto et al., 2002; Moore and Molinero, 2011; Brukhno et al., 2008; Errington et al., 2002; Bauerecker et al., 2008). Density functional theory and direct kinetic models have also been employed (e.g., Laaksonen et al., 1995). Matsumoto et al. (2002) showed that ice nucleates when long-lived hydrogen bonds accumulate to form a compact initial nucleus. Errington et al. (2002) suggested that the formation of the initial nucleus is cooperative and only occurs when molecules accrete into clusters forming low density (LD) regions. The enthalpy of water molecules in such regions tends to resemble that of the liquid. It has been shown that the formation of LD regions within supercooled water is associated with an increase in the fraction of four-coordinated molecules (Moore and Molinero, 2011).
MD and other detailed approaches offer a unique look at the microscopic mechanism of ice nucleation. However, for climate simulations and other large-scale applications, simplified and efficient descriptions of ice nucleation are required. Thus, in atmospheric modeling the theoretical study of homogeneous ice nucleation has been historically approached using the classical nucleation theory (CNT) (e.g., Khvorostyanov and Curry, 2004; Dufour and Defay, 1963; Pruppacher and Klett, 1997) and used to generate ice cloud formation parameterizations (Khvorostyanov and Curry, 2004, 2009).
CNT is often criticized due to the usage of the so-called capillary approximation, i.e., the assumption that the properties of ice clusters at nucleation are the same as those of the bulk (Kashchiev, 2000). This assumption is critical when considering the ice-liquid interfacial tension (also called specific surface energy), σ_iw, as CNT calculations are very sensitive to σ_iw. Direct measurement of σ_iw is typically difficult and surrounded with large uncertainty (Pruppacher and Klett, 1997; Digilov, 2004). Challenges to the measurement of σ_iw are related to difficulties in maintaining equilibrium between a growing ice crystal and the liquid phase at supercooled temperatures. The presence of impurities and crystal defects and the large temperature gradients near the ice-liquid interface also pose a challenge to the experimental determination of σ_iw (Jones, 1974). Factors like crystal shape, type and size, and the characteristics of the ice-liquid interface may also affect the determination of σ_iw (Wu et al., 2004; MacKenzie, 1997; Kashchiev, 2000).
Using independent estimates of σ_iw within CNT, as for example those obtained from contact angle measurements, typically leads to large discrepancies between CNT predictions and nucleation rate measurements (MacKenzie, 1997). Thus, σ_iw is often found by fitting CNT predictions to experimental measurements of the nucleation rate (e.g., Murray et al., 2010a; Khvorostyanov and Curry, 2004; MacKenzie, 1997). However, σ_iw obtained by this method often differs significantly from independent estimates (MacKenzie, 1997). Moreover, CNT introduces several assumptions to calculate the work of nucleation (e.g., a negligible excess of solute at the interface, a spherical ice germ, and capillarity; Kashchiev, 2000) that cannot be independently tested by obtaining σ_iw from nucleation rate measurements. More fundamentally, finding σ_iw by fitting CNT to measured nucleation rates unties σ_iw from its theoretical meaning. This may lead to inconsistencies within the theory, as it is not clear what σ_iw actually represents within CNT and whether it is accessible by independent methods.
Empirical correlations are most often used to describe homogeneous freezing in atmospheric models (e.g., Barahona et al., 2010; Kärcher and Lohmann, 2002; Koop et al., 2000). Experimental studies generally agree on the freezing temperature of pure water, with typical variation of the order of 1 K (which however may represent about 2 orders of magnitude variation in nucleation rate) (Murray et al., 2010a; Pruppacher and Klett, 1997; Riechers et al., 2013). For aqueous solutions, empirical correlations were often developed based on (NH4)2SO4 and H2SO4 model solutions (e.g., Tabazadeh et al., 1997; Jensen et al., 1991). However, Koop et al. (2000) demonstrated that when parameterized in terms of the water activity, a_w, freezing temperatures become independent of the nature of the solute. Furthermore, the authors showed that when plotted in a T-a_w diagram, the melting and nucleation curves can be translated by a constant shift in water activity. This particular behavior has been confirmed in several independent studies (e.g., Zobrist et al., 2008; Knopf and Rigg, 2011; Alpert et al., 2011) and has been referred to as the "water activity criteria". The Koop et al. (2000) (hereafter K00) parameterization has been incorporated in several global atmospheric models (e.g., Barahona et al., 2010; Liu et al., 2007; Lohmann and Kärcher, 2002).
The empirical model of Koop et al. (2000) suggests that a general thermodynamic formulation of ice nucleation in supercooled solutions, independent of the nature of the solute, can be achieved. Yet, such a theory has been elusive. Current formulations of CNT carry a dependency on a_w, and it has been suggested that CNT can explain the water activity criteria (e.g., Khvorostyanov and Curry, 2004). However, by adjusting the parameters of CNT to reproduce observed nucleation rates, CNT by design reproduces the observed water activity dependency of J_hom. Thus CNT cannot independently explain the water activity criteria. In fact, Koop et al. (2000) suggested that CNT and K00 can be empirically reconciled if σ_iw is allowed to vary with a_w (also shown by Alpert et al., 2011). Baker and Baker (2004) followed an alternative approach and showed that the freezing temperatures measured by K00 were consistent with the point of maximum compressibility of water. The authors derived an empirical relation between a_w and the osmotic pressure, which was then used to determine freezing temperatures. The work of Baker and Baker (2004) demonstrated that the water activity criteria can be understood in terms of the compressibility of water as long as certain empirical criteria are met. Recently, Bullock and Molinero (2013) assumed that low density regions in supercooled water are in equilibrium with bulk water and developed an expression for the freezing temperature of water solutions as a function of a_w that roughly agrees with the results of Koop et al. (2000). Their parameterization, however, depends on the enthalpy difference between the hypothetical four-coordinated liquid and pure water, which is semiempirically treated and found by fitting their MD results.
In this work a new thermodynamic framework is proposed to describe ice formation by homogeneous nucleation. The new model relies on a novel picture of the solid-liquid transition placing emphasis on entropy changes across the interface. The new thermodynamic framework is introduced within CNT to study the effect of water activity on the ice nucleation rate.
Figure 1. Scheme of the formation of an ice germ from a liquid phase. Subscripts 1 and 2 represent the state of the system before and after germ formation, respectively. N_w and N_y represent the total molecular concentration of water and solute in the system, respectively. The subscripts ls and s refer to the liquid-solid interface and solid phases, respectively.
Theory
Consider the system depicted in Fig. 1. The liquid droplet is assumed to be large enough so that nucleation is more likely to occur within the bulk of the liquid than at the droplet surface. The liquid is assumed to be homogeneously mixed and its cluster distribution in steady state. For simplicity it is assumed that only two components are present in solution, water (subscript "w") and a solute (subscript "y"), although this assumption can be easily relaxed if more than one solute is present. The Gibbs free energy of the system in stage 1 (before the formation of the ice germ) is given by G_1 = N_w µ_w,1 + N_y µ_y,1 (1), where N_w and N_y are the total number of water and solute molecules present in the liquid phase, respectively, and µ_w,1 and µ_y,1 their respective chemical potentials.
After the formation of the germ (stage 2, Fig. 1) it is advantageous to consider the solid-liquid interface as a phase distinct from the bulk (Gibbs, 1957). It is assumed that no atoms of y are present in the bulk of the solid phase, although they may be present at the interface. However, the dividing surface is selected so that the molecular excess of solute at the interface is zero. This leads to a molecular excess of solvent at the interface and is further analyzed in Sect. 2.1. The assumption of a solute-free solid is justified on molecular dynamics simulations showing a rejection of ions into an unfrozen layer of brine away from the germ (Bauerecker et al., 2008). With this, the Gibbs free energy of the system in stage 2 is given by Eq. (2), where n_s and n_ls are the number of atoms in the bulk of the germ and in the interface, respectively, and µ_w,s and µ_w,ls their chemical potentials. Equation (2) can be reorganized as Eq. (3). Using Eqs. (1) and (3), the work of germ formation ΔG = G_2 − G_1 can be written as Eq. (4), where ΔG_sln is the change in the Gibbs free energy of the bulk solution caused by the appearance of the germ, i.e., ΔG_sln = N_w (µ_w,2 − µ_w,1) + N_y (µ_y,2 − µ_y,1). (5) Equation (4) indicates that the work of germ formation originates from (i) changes in the composition of the liquid phase, (ii) the formation of the interface and (iii) the formation of the bulk of the solid. Using the equilibrium between ice and the liquid solution as reference state, the latter can be written in the form of Eq. (6) (Kashchiev, 2000), where k_B is the Boltzmann constant, a_w,eq is the equilibrium water activity between bulk liquid and ice, and a_w is the activity of water in stage 2. The term ΔG_sln in Eq. (5) arises because the solute must be "unmixed" (Black, 2007) to form a solute-free germ. This causes a change in the molar composition of the liquid phase and an entropic cost to the system (Bourne and Davey, 1976). Thus, ΔG_sln is proportional to the mixing entropy of the system (Eq. 7), where n = n_s + n_ls is the total number of molecules in the germ, and a_w,1 and a_y,1 are the activities of water and solute in the initial stage (Fig. 1), respectively. If the droplet size is much larger than the ice germ, which is almost always the case for ice nucleation, then a_w ≈ a_w,1 and a_y ≈ a_y,1, and Eq. (7) reduces, to a good approximation, to Eq. (8). The term ΔG_sln is usually neglected on the basis that the liquid phase is much larger than the ice germ (i.e., the liquid phase is considered semi-infinite with respect to the solid). However, Eq. (8) shows that although ΔG_sln is typically small for dilute solutions, it may become comparable to ΔG for a_w < 1.
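The explicit forms of Eqs. (6)-(8) are not reproduced in this excerpt. As a hedged illustration of the bulk driving-force term discussed above, the sketch below evaluates the standard reference-state expression Δµ = −k_B T ln(a_w/a_w,eq) per molecule; this particular form is an assumption made here for illustration, following the general nucleation literature cited in the text, and the numerical inputs are arbitrary.

```python
# Hedged illustration (not Eq. 6 itself, which is not reproduced in this excerpt):
# the standard reference-state form of the per-molecule driving force for ice
# formation, Delta_mu = -k_B * T * ln(a_w / a_w_eq).  Inputs are arbitrary examples.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def bulk_driving_force(T, a_w, a_w_eq):
    """Chemical-potential drop (J per molecule) for moving water into bulk ice."""
    return -K_B * T * math.log(a_w / a_w_eq)

dmu = bulk_driving_force(T=235.0, a_w=1.0, a_w_eq=0.80)
print(f"Delta_mu ~ {dmu:.2e} J/molecule ({dmu / (K_B * 235.0):.2f} kT)")
```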
Energy of formation of the interface
To further develop Eq. (4) it is necessary to introduce a model of the solid-liquid interface. Theoretical models suggest that the solid-liquid interface is characterized by the organization of randomly moving liquid molecules into positions determined by the solid matrix (Spaepen, 1975; Karim and Haymet, 1988; Haymet and Oxtoby, 1981). Associated with this increased order is a decrease in the partial molar entropy of the liquid molecules. Since the solid determines the positions of the molecules at the interface, the partial molar entropy at the interface must approximate the bulk entropy of the solid. However, the interface molecules are liquid-like, and their enthalpy remains close to the bulk enthalpy of the liquid (Black, 2007). This is in line with the work of Reinhardt et al. (2012), who consider the molecules in the bulk ice as those with at least three connections, whereas those at the surface of the solid have only two connections but at least one neighbor with three connections. This picture implies that the system must pay the maximum entropic cost during the formation of the germ (Spaepen, 1975; Black, 2007). The entropic nature of the thermodynamic barrier for nucleation has been confirmed by molecular dynamics simulations (Reinhardt and Doye, 2013). Following the conceptual picture described above, the interface is assumed to be made of liquid molecules "trapped" by the solid matrix. The outermost layer of the solid along with the adjacent liquid are considered part of the interface. In reality the interface may resemble a continuous transition between solid and liquid, characterized by increasing order on the solid side (Karim and Haymet, 1988). Assuming the interface as a distinct phase creates molecular excesses of solute and solvent, which must be explicitly accounted for. This conceptual model is used below to develop an expression for the energy of formation of the interface.
The change in the partial molar free energy of water associated with the formation of the interface is given by Eq. (9), where s_w,ls is the entropy of the interface molecules. Assuming that the entropy of the molecules at the interface approximates the entropy of the bulk solid, i.e., s_w,ls ≈ s_w,s, Eq. (9) can be written as Eq. (10). Taking into account that µ_w,s = h_w,s − T s_w,s, and using Eq. (6) into Eq. (10), we obtain Eq. (11), where Δh_w,ls = h_w,ls − h_w,s is the excess enthalpy of the water molecules at the interface.
If no solute is present, the enthalpy of the molecules at the interface approximates the enthalpy of water in the bulk, i.e., Δh_w,ls ≈ h_f, h_f being the latent heat of fusion of water. However, the adsorption of solute and solvent at the interface affects Δh_w,ls. Following Gibbs (1957), the effect of the molecular excess of solute and solvent on Δh_w,ls can be written in the form of Eq. (12) (Hiemenz and Rajagopalan, 1997; Gibbs, 1957), where Γ_w and Γ_y are the surface excess of water and solute, respectively, and represent the ratio of the number of molecules in the interface to the number of molecules at the dividing surface. Γ_w and Γ_y depend on the position of the dividing surface (Gibbs, 1957), which is arbitrary but typically chosen so that the surface excess of solvent is zero (i.e., Γ_w = 0) (Kashchiev, 2000). However, since a_w is typically a control variable in ice nucleation, it is convenient to choose the dividing surface as equimolecular with respect to the solute (i.e., Γ_y = 0), making the surface excess a function of a_w, but not of a_y. Thus, with Γ_y = 0, Eq. (12) becomes Eq. (13). Equation (13) suggests that Δh_w,ls must be independent of the nature of the solute. This can be explained as follows. Considered as a separate phase, the interface obeys the Gibbs-Duhem equation (Schay, 1976). Therefore the chemical potential of the solute, and its molecular excess at the interface, can be written in terms of the chemical potential of water, hence a_w. In other words, the Gibbs-Duhem equation guarantees that the interface energy can be expressed in terms of water activity only. It follows that the dependency of Δh_w,ls on a_w must be independent of the nature of the solute. Since Δh_w,ls determines to a great extent the nucleation rate, the dependency of J_hom on a_w will to first order be independent of the nature of the solute.
To complete the model of the ice-liquid interface, an expression for the interface thickness, hence n_ls and Γ_w, must be derived. The number of molecules at the outermost layer of the solid is given by s n^2/3, where s is a geometric constant depending on the crystal lattice (1.12 for hcp crystals and 1.09 for bcc crystals; Jian et al., 2002), and n is the total number of atoms in the germ. Notice that in this approximation the ice germ is allowed to have any shape, as long as it has a defined lattice structure. However, the interface is likely to extend beyond the outermost layer of the solid as the solid matrix imprints some order to the adjacent liquid (Spaepen, 1975; Haymet and Oxtoby, 1981). To account for this "coverage" by the liquid on the solid, the model proposed by Spaepen (1975) is used. This model results from the explicit construction of the interface following the rules: (i) maximize the density, (ii) disallow octahedral holes and (iii) preference for tetrahedral holes (Spaepen, 1975). Using this model, Spaepen (1975) showed that there are about 1.46 molecules at the interface for each molecule in the outer layer of the solid matrix, that is, Γ_w = 1.46 and n_ls = Γ_w s n^2/3. Spaepen's classic model has been confirmed by experimental observations and molecular simulations (Asta et al., 2009, and references therein). The sensitivity of J_hom to the values of Γ_w and s is analyzed in Sect. 3.5.
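The interface population described above is simple enough to evaluate directly. The sketch below computes n_ls = Γ_w s n^(2/3) with Γ_w = 1.46 and s between 1.09 and 1.12 depending on the assumed lattice (1.105 is the intermediate value quoted later in the text); the germ sizes used in the example loop are arbitrary.

```python
# Direct evaluation of the interface population described above:
# n_ls = Gamma_w * s * n**(2/3), with Gamma_w = 1.46 (Spaepen, 1975) and s between
# 1.09 and 1.12 depending on the assumed lattice (1.105 used here).
GAMMA_W = 1.46

def interface_molecules(n, s=1.105, gamma_w=GAMMA_W):
    """Number of 'trapped' liquid molecules at the interface of an n-molecule germ."""
    return gamma_w * s * n ** (2.0 / 3.0)

for n in (50, 260, 1000):
    n_ls = interface_molecules(n)
    print(f"n = {n:4d}: n_ls ~ {n_ls:6.1f} ({n_ls / n:.0%} of the germ molecules)")
```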
Introducing Eq. (13) into Eq. (11) we obtain Eq. (14), which expresses the energy cost associated with the formation of the interface accounting for solute effects. Since it results from the consideration of the entropy reduction across the interface (i.e., negentropy production; Spaepen, 1994), this model will be referred to as the negentropic nucleation framework (NNF).
The germ size at nucleation, n*, and the nucleation work, ΔG_nuc, are obtained by applying the condition of mechanical equilibrium to Eq. (15), which yields Eq. (16). Solving Eq. (16) for n* and rearranging gives Eq. (17). The nucleation work is obtained by replacing Eq. (17) into Eq. (15); after rearranging we obtain Eq. (18). The nucleation rate, J_hom, is given by Eq. (19), where J_0 is a T-dependent pre-exponential factor. As in CNT, it is assumed that J_0 results from the kinetics of aggregation of single water molecules to the ice germ from an equilibrium cluster population (Kashchiev, 2000), leading to Eq. (20), where N_c is the number of atoms in contact with the ice germ, ρ_w and ρ_i are the bulk liquid water and ice densities, respectively, g is the germ surface area, and ΔG_act is the activation energy of the water molecules in the bulk of the liquid. ΔG_act represents the energy required for the water molecules to move from their equilibrium positions in the bulk to a new equilibrium position at the solid-liquid interface, and is closely related to the self-diffusion coefficient of water (Pruppacher and Klett, 1997). Z is the Zeldovich factor, given by Eq. (21).
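Because Eqs. (15)-(21) are not reproduced in this excerpt, only the generic structure of the rate expression, J_hom = J_0(T, a_w) exp(−ΔG_nuc/k_B T), can be illustrated. In the sketch below both the pre-exponential factor and the nucleation work enter as user-supplied callables, and the placeholder functions are order-of-magnitude toys rather than the NNF expressions.

```python
# Skeleton of the rate expression referenced above (Eq. 19):
# J_hom = J_0(T, a_w) * exp(-DeltaG_nuc / (k_B T)).  The explicit NNF forms of
# DeltaG_nuc and J_0 (Eqs. 18, 20, 21) are not reproduced in this excerpt, so
# both enter as user-supplied callables; the placeholders are toys only.
import math

K_B = 1.380649e-23  # J/K

def homogeneous_rate(T, a_w, nucleation_work, prefactor):
    """J_hom (m^-3 s^-1) from callables nucleation_work(T, a_w) and prefactor(T, a_w)."""
    return prefactor(T, a_w) * math.exp(-nucleation_work(T, a_w) / (K_B * T))

toy_prefactor = lambda T, a_w: 1.0e41      # m^-3 s^-1, rough magnitude only
toy_work = lambda T, a_w: 55.0 * K_B * T   # i.e. a barrier of 55 kT, illustrative

print(f"J_hom ~ {homogeneous_rate(236.0, 1.0, toy_work, toy_prefactor):.1e} m^-3 s^-1")
```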
Classical nucleation theory
CNT is commonly used to describe homogeneous ice nucleation (e.g., Khvorostyanov and Curry, 2004), and it is therefore important to compare the NNF model against CNT predictions. According to CNT, the work of nucleation, ΔG_CNT, is given by Eq. (22) (Pruppacher and Klett, 1997), where S_i = a_w (p_s,w/p_s,i) is the saturation ratio with respect to the ice phase. The critical germ size is given by Eq. (23). The nucleation rate for CNT, Eq. (24), is obtained by replacing Eq. (23) into Eq. (19), where J_0 is defined as in Eq. (20).
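For orientation, the sketch below evaluates the textbook capillarity expressions for ΔG_CNT and the critical germ size (cf. Pruppacher and Klett, 1997); since Eqs. (22)-(24) are not reproduced in this excerpt, the exact forms used by the author may differ in detail, and the numerical inputs (σ_iw, S_i, molecular volume) are rough illustrative values.

```python
# Hedged sketch of textbook capillarity (CNT) expressions for the nucleation work
# and critical germ size.  The inputs below are rough illustrative values, not
# the parameters used in the paper.
import math

K_B = 1.380649e-23   # J/K
V_ICE = 3.3e-29      # volume per water molecule in ice, m^3 (approximate)

def cnt_work_and_germ(T, S_i, sigma_iw):
    """Return (DeltaG_CNT in J, n* in molecules) for ice saturation ratio S_i > 1."""
    kt_ln_s = K_B * T * math.log(S_i)
    work = 16.0 * math.pi * sigma_iw ** 3 * V_ICE ** 2 / (3.0 * kt_ln_s ** 2)
    r_star = 2.0 * sigma_iw * V_ICE / kt_ln_s
    n_star = (4.0 / 3.0) * math.pi * r_star ** 3 / V_ICE
    return work, n_star

work, n_star = cnt_work_and_germ(T=236.0, S_i=1.4, sigma_iw=0.022)
print(f"DeltaG_CNT ~ {work / (K_B * 236.0):.0f} kT, n* ~ {n_star:.0f} molecules")
```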
Interfacial tension
The usage of Eq. (24) requires knowledge of σ_iw, which is typically found by fitting J_CNT to experimental measurements (e.g., Murray et al., 2010a; Khvorostyanov and Curry, 2004). Several empirical expressions for σ_iw have been developed using this approach (e.g., Pruppacher and Klett, 1997; Dufour and Defay, 1963). Here, instead, two new general expressions, one empirical and one theoretical, are derived to express σ_iw. Attempts to derive general expressions for σ_iw are often based on the approach of Turnbull (1950), who noticed that for a large number of compounds σ_iw is approximated by the relation in Eq. (25), where k_T is an empirical constant equal to 0.32 for water. Equation (25) is mostly valid at low supercooling, although it has been applied in the analysis of ice nucleation (MacKenzie, 1997). The model presented in Sect. 2, as well as the results of Koop et al. (2000), indicate that besides T, σ_iw must also depend on a_w, which is not captured by Eq. (25).
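A common way of writing the Turnbull relation is σ_iw ≈ k_T Δh_f/(N_A^(1/3) V_m^(2/3)), with Δh_f the molar heat of fusion and V_m the molar volume of ice; since Eq. (25) is not reproduced here, this exact form is an assumption, but with k_T = 0.32 it gives roughly 31 mJ m^-2 for ice near 273 K, in the range discussed in the text.

```python
# Hedged numerical check of the Turnbull-type scaling: one common form is
# sigma_iw ~ k_T * DH_fus / (N_A**(1/3) * V_m**(2/3)); the exact form of Eq. (25)
# is not reproduced in this excerpt, so this is an assumption for illustration.
N_A = 6.02214076e23   # molecules per mol
M_W = 0.018015        # kg/mol
RHO_ICE = 916.8       # kg/m^3 near 273 K
DH_FUS = 6010.0       # J/mol near 273 K

def turnbull_sigma(k_t=0.32, dh_fus=DH_FUS, v_m=M_W / RHO_ICE):
    """Estimate of the ice-liquid interfacial energy in J m^-2."""
    return k_t * dh_fus / (N_A ** (1.0 / 3.0) * v_m ** (2.0 / 3.0))

print(f"sigma_iw ~ {turnbull_sigma() * 1e3:.1f} mJ m^-2")  # ~31 mJ m^-2
```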
An independent estimate of σ_iw, not obtained from nucleation rate measurements, can be derived from the NNF model as follows. Taking into account that the energy of formation of the interface in CNT is given by σ_iw g, and using Eq. (13), we can write Eq. (26). Assuming a spherical ice germ and using n_ls = Γ_w s n^2/3, Eq. (26) can be solved for σ_iw in the form of Eq. (27). Equation (27) provides an independent, first-principles estimate of σ_iw, obtained without the usage of nucleation rate data. It incorporates the dependency of σ_iw on both T and a_w. For a_w = 1, Eq. (27) has the same form as the Turnbull (1950) expression (Eq. 25). Comparing Eqs. (27) and (25) and rearranging, we obtain Eq. (28) for a_w = 1. The surface area parameter, s, is set to 1.105 mol^2/3, that is, the ice germ structure is assumed to lie somewhere between a bcc (s = 1.12 mol^2/3) and a hcp (s = 1.09 mol^2/3) crystal (Jian et al., 2002), justified on experimental studies showing that ice forms as a stacked disordered structure (Malkin et al., 2012). From the model of Spaepen (1975), Γ_w = 1.46 (Sect. 2.1). Using these values in Eq. (28) gives k_T = 0.33, which is very close to reported values around 0.32 to 0.34 (Turnbull, 1950; Digilov, 2004). Thus, Eq. (28) helps to elucidate the meaning of k_T in the empirical expression of Turnbull (1950): it is a measure of the thickness of the interface between the liquid and the solid.
To explain the dependency of the interfacial tension on a_w one must consider the Gibbs model of the interface. By introducing the arbitrary dividing surface, an excess number of molecules is created around the interface between the liquid and the solid (Hiemenz and Rajagopalan, 1997). This is typically dealt with by selecting the so-called equimolecular dividing surface (EDS), in which the interface has energy but its net molecular excess is zero (Kashchiev, 2000; Schay, 1976). However, the EDS cannot be defined simultaneously for the solute and the solvent. In fact, using the EDS with respect to the solvent results in a molecular excess of solute at the interface. In Sect. 2.1 it was shown that it is advantageous to define the EDS with respect to the solute, and account explicitly for the excess of water molecules at the interface. Thus the consistency between the choice of the dividing surface and the molecular excess at the interface is explicit in NNF.
A final approach to parameterize σ_iw takes advantage of the water activity criteria to derive expressions for σ_iw by fitting CNT to K00. Although these expressions may depend on the specific assumptions made in implementing CNT, they would in principle be more general than other empirical approaches, since the water activity criteria apply to a large number of solutes. Alpert et al. (2011) derived values for σ_iw by fitting CNT to K00 and using a simplified form of the Zeldovich factor and customized expressions for ΔG_act (Fig. 2). Here a similar approach is followed, although based on Eq. (24), which uses a more rigorous form of Z. Also, linear dependencies of σ_iw on T and a_w are assumed to extrapolate σ_iw outside of the interval where K00 is applicable. With this, a correlation for σ_iw was obtained by fitting J_CNT (Eq. 24) to K00, in the form of Eq. (29), valid for 180 K < T < 273 K and 0.75 < a_w < 1.0. The linear dependency of σ_iw on T and a_w is consistent with theoretical studies (Spaepen, 1994; Schay, 1976). In agreement with experimental measurements (Ketcham and Hobbs, 1969), Eq. (29) predicts σ_iw = 33.9 mJ m^-2 for T = 273 K and a_w = 1 (Fig. 2). Equations (25), (27) and (29) are selected to parameterize σ_iw because they represent a progression towards incorporating additional effects of a_w within σ_iw. That is, Eq. (25) depends only on temperature, whereas Eq. (27) corrects for the effect of the excess of solute at the interface, making σ_iw a function of a_w. As will be discussed in Sect. 3, the empirically derived σ_iw (Eq. 29) implicitly incorporates additional effects neglected in CNT, accounting for the change in the composition of the liquid phase upon nucleation (i.e., the "unmixing" energy). However, it must be emphasized that despite this progression, Eqs. (25), (27), and (29) are completely independent.
Interfacial tension
The different parameterizations of σ_iw presented in Sect. 2.4 are depicted in Fig. 2. As expected, σ_iw obtained from the empirical correlation derived from K00 (EMP, Eq. 29) and the data reported by Alpert et al. (2011) are in good agreement, with σ_iw from the latter being slightly higher. Since the same data are used in deriving both expressions (i.e., the K00 parameterization), differences between the values of σ_iw of Alpert et al. (2011) and Eq. (29) only result from differences in the implementation of CNT, that is, the different values of ΔG_act and Z used in each case. The empirical correlation presented here (Eq. 29) represents the best fit between CNT and K00, with CNT as described in Sect. 2.3.
For a_w = 1 there is good agreement in σ_iw from all the models presented in Sect. 2.4. This is remarkable given that they are completely independent, derived from different nucleation rate data, or, in the case of NNF, completely theoretical. Still, σ_iw differs by about 2 mJ m^-2, which may represent up to 3 orders of magnitude difference in J_hom (Sect. 3.2).
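The sensitivity quoted above can be checked with a back-of-the-envelope estimate: in a capillarity-type model the nucleation work scales as σ_iw^3, so ln J_hom shifts by roughly −(ΔG_nuc/k_B T)·3·(δσ/σ). The working-point values in the sketch below (ΔG_nuc/k_B T ≈ 35, σ_iw ≈ 30 mJ m^-2) are assumed for illustration only, not taken from the paper.

```python
# Back-of-the-envelope check: in a capillarity-type model the nucleation work
# scales as sigma_iw**3, so ln(J_hom) shifts by roughly
# -(DeltaG_nuc/kT) * 3 * (d_sigma/sigma).  Working point assumed for illustration.
import math

def decades_in_j(dg_over_kt, sigma, d_sigma):
    """Approximate change in log10(J_hom) for a small change in sigma_iw."""
    return -dg_over_kt * 3.0 * (d_sigma / sigma) / math.log(10.0)

print(f"~{abs(decades_in_j(35.0, 30e-3, 2e-3)):.1f} orders of magnitude for a 2 mJ m^-2 shift")
```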
The NNF model predicts slightly higher σ_iw than the value found by application of Eq. (25). This is because the constant k_T implied by the NNF model is slightly higher (0.33) than the value of 0.32 used by Turnbull (1950). Still, since Eq. (25) depends only on T, the difference between the NNF and the TUR curves for a_w < 1 (Fig. 2) represents the effect of a_w on σ_iw.
For a_w = 1 the K00 and the NNF curves in Fig. 2 are in good agreement. However, for a_w < 1, σ_iw increases less steeply for the NNF-derived σ_iw than suggested by the empirical correlation, Eq. (29). This difference, however, does not result from additional surface effects, but from an empirical correction to the assumption of a negligible change in the composition of the liquid phase upon nucleation in CNT. This can be explained as follows. Introducing the NNF-derived σ_iw (Eq. 27) into Eq. (22) does not make the nucleation work by NNF and CNT equal, due to the quadratic dependency on a_w in the denominator of Eq. (18), which results from the additional term, ΔG_sln, in the NNF model (Eq. 4). Removing ΔG_sln from NNF would make the nucleation work by NNF and CNT equal when σ_iw derived from NNF is used. Since the empirical σ_iw correlation (Eq. 29) is obtained by constraining CNT to K00, and, as will be shown in Sect. 3.2, J_hom from NNF is close to K00, it follows that the empirical σ_iw fit not only parameterizes the effect of a_w on σ_iw but also corrects for the assumption of a negligible ΔG_sln in CNT. This explains the higher sensitivity of σ_iw to a_w in the empirical correlation (EMP, Fig. 2) than in the NNF-derived expression.
Nucleation rate
Figure 3 shows the nucleation rate calculated from K00, NNF and CNT. The values used for the parameters of Eqs. (18) to (24) are listed in Table A1. The experimental results of Murray et al. (2010a) (M10) and Riechers et al. (2013) (R13) are also included in Fig. 3. Murray et al. (2010a) compared experimentally determined nucleation rates from several sources and found about a factor of 10 variation in J_hom for pure water. Riechers et al. (2013) recently developed a new experimental technique based on microfluidics to measure J_hom. Although these correlations are only applicable around 236 K, they are included as reference for the limiting case of a_w = 1.
The "freezing temperature", T_f, is defined as the solution to Eq. (30), where t is the experimental timescale and v_d the droplet volume. T_f represents the temperature for which about 63% of droplets in a monodisperse droplet size distribution are frozen (or 50% in a lognormal distribution; Barahona, 2012). Defining T_f as in Eq. (30) minimizes the impact of droplet volume dispersion on T_f (Barahona, 2012). T_f is calculated by numerical iteration, assuming t = 10 s and a mean droplet diameter of 10 µm, selected to match the conditions used by Koop et al. (2000).
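The definition of T_f above is straightforward to implement numerically. The sketch below solves J_hom(T_f)·v_d·t = 1 by bisection for a 10 µm droplet and t = 10 s; the rate law supplied in the example is an arbitrary exponential stand-in, not the NNF or K00 parameterization.

```python
# Sketch of the freezing-temperature definition quoted above: T_f is the root of
# J_hom(T_f) * v_d * t = 1, i.e. the temperature at which ~63% of identical
# droplets have frozen.  The toy rate law is an arbitrary stand-in only.
import math

def freezing_temperature(j_hom, d_p=10e-6, t_obs=10.0, t_cold=180.0, t_warm=273.0):
    """Bisection solution of J_hom(T) * v_d * t_obs = 1 (J_hom rises as T drops)."""
    v_d = math.pi / 6.0 * d_p ** 3               # droplet volume, m^3
    f = lambda temp: math.log(j_hom(temp) * v_d * t_obs)
    for _ in range(100):
        t_mid = 0.5 * (t_cold + t_warm)
        if f(t_mid) > 0.0:
            t_cold = t_mid   # nucleation already faster than 1/(v_d t): root is warmer
        else:
            t_warm = t_mid
    return 0.5 * (t_cold + t_warm)

# Toy J_hom: 1e16 m^-3 s^-1 at 236 K, increasing ~2 decades per K of cooling.
toy_j = lambda temp: 1.0e16 * math.exp(4.6 * (236.0 - temp))
print(f"T_f ~ {freezing_temperature(toy_j):.1f} K")
```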
There is overlap between all the curves of Fig. 3 for T around 236 K, that is, near the homogeneous freezing temperature of pure water (a_w = 1), with the correlation of Riechers et al. (2013) being slightly lower than the other curves (although likely within the range of uncertainty of J_hom, Sect. 3.5). For J_hom > 10^20 m^-3 s^-1, CNT-TUR predicts about two orders of magnitude higher J_hom than CNT-NNF. Such high J_hom is, however, rarely encountered at atmospheric conditions. The agreement between CNT-EMP
and K00 is by design, since K00 data were used to develop Eq. (29); however, for J_hom > 10^15 m^-3 s^-1, CNT-EMP tends to predict lower J_hom than K00 and NNF, which results from the linear extrapolation assumed in σ_iw (Sect. 2.4).
There is in general good agreement in J_hom predicted by the NNF and the K00 models (Fig. 3). Since no data from K00 (or any other nucleation rate measurements) were used in the development of NNF, comparison against K00 constitutes an independent test of the NNF model and shows its capacity to explain observed nucleation rates. For a_w < 1, NNF and K00 agree within the typical scatter of experimentally determined J_hom (e.g., Murray et al., 2010b; Alpert et al., 2011). However, for a_w ≈ 0.8, NNF seems to underpredict J_hom by about 3 orders of magnitude, particularly for J_hom < 10^10 m^-3 s^-1.
CNT and NNF show an initial increase in J_hom as T decreases; however, this tendency reverses at low T, i.e., they predict a maximum in J_hom when measured at constant a_w. This behavior is caused by an increase in ΔG_act as T decreases, as the role of activation of water molecules becomes increasingly more significant at low T, limiting J_hom (Sect. 3.4). For a_w > 0.9, J_hom peaks at values greater than 10^20 m^-3 s^-1. Such high J_hom may be difficult to measure experimentally. However, for a_w ≈ 0.8, J_hom peaks around 10^15 m^-3 s^-1, typically found in small droplets at low T, and may be more accessible to experiment. The existence of a maximum in J_hom also implies that around its peak value J_hom is relatively insensitive to T. Thus, around the maximum J_hom, measured freezing temperatures would be very sensitive to small changes in droplet size and cooling rate. The existence of a maximum in J_hom is, however, a theoretical result and more research may be needed to elucidate its nature.
The expressions used for σ_iw within CNT progressively account for additional effects of a_w on J_hom (Sect. 2.4). Thus the impact of a_w on J_hom through surface excess effects is represented by the difference between the CNT-NNF and the CNT-TUR curves in Fig. 3 (middle and right panels). Similarly, the difference between the CNT-EMP and the CNT-NNF curves corresponds to the additional empirical correction required in σ_iw to account for the energy cost of making a solute-free germ, neglected in CNT (Eq. 8). Both effects imply an additional burden to ΔG_nuc and dramatically decrease J_hom. As a_w decreases, mixing effects tend to be more significant, representing a decrease of more than 10 orders of magnitude in J_hom.
Figure 4 shows that there is a wide variation in ∂J_hom/∂a_w at constant T between CNT, NNF and K00 around the freezing line (defined as in Eq. 30), even at a_w = 1, where Fig. 3 (left panel) shows relatively good agreement in J_hom. This is significant since ∂J_hom/∂a_w determines to a great extent the germ size (Sect. 3.3). J_hom from the NNF model seems to decrease slightly more steeply with a_w than K00, although the agreement is within the models' uncertainty. Again, this represents an independent test of the validity of the NNF model. The agreement between CNT-EMP and K00 is by design, with some deviation beyond the range of applicability of K00. J_hom is much less sensitive to a_w for the CNT-TUR and CNT-NNF curves than for the other models, particularly at low T, indicating the strong impact of solute surface excess and mixing effects on J_hom.
Critical germ size
Figure 5 shows the critical germ size in terms of the number of water molecules in the germ, calculated using NNF, CNT, and derived from the K00 expression. For the latter, the nucleation theorem (Kashchiev, 2000) allows n* to be determined directly from experimental measurements in the form of Eq. (31), in terms of the energy of formation of the interface and Δµ_w = −k_B T ln(a_w/a_w,eq). Equation (31) can be rewritten as Eq. (32) (Kashchiev, 2000).
Figure 5. Critical germ size, n*, calculated at T_f with D_p = 10 µm and t = 10 s. Lines labeled as empirical were obtained using the K00 correlation and a form of the nucleation theorem (Kashchiev, 2000). CNT results were obtained using σ_iw from the Turnbull (1950) correlation (CNT-TUR, Eq. 25), an empirical correlation derived from fitting CNT to the K00 parameterization (CNT-EMP, Eq. 29), and a theoretical expression derived from the NNF model (CNT-NNF, Eq. 27). Results using the NNF model (Eq. 17) are also shown.
Equation (32) is typically used assuming that the energy of formation of the interface does not depend on a_w (Ford, 2001; Kashchiev, 2000), i.e., in the simplified form of Eq. (33). Using Eq. (33) along with the K00 parameterization results in n* between 400 and 600 molecules for T between 190 and 236 K (Fig. 5). On the other hand, using CNT with σ_iw derived from a fit to K00 (Eq. 29) results in n* between 100 and 250 (Fig. 5, CNT-EMP). A similar discrepancy between K00 and CNT was found by Ford (2001), who ascribed it to limitations of CNT in describing the surface energy excess. Ford (2001), however, did not account for the dependency of σ_iw on a_w. From Sects. 2.4 and 3.2 it is clear that the energy of formation of the interface is not independent of a_w and may affect n*. Using the assumption of CNT that the energy of formation of the interface equals σ_iw g, and introducing Eq. (29) into Eq. (32), we obtain Eq. (34) for a spherical ice germ. Solving Eq. (34) iteratively results in n* around 200 for T between 180 and 240 K (Fig. 5). This value is much lower than implied by Eq. (33) and in better agreement with CNT-EMP. Thus most of the discrepancy in n* between CNT and Eq. (33) results from neglecting the dependency of the interface formation energy on a_w. This implies that its derivative with respect to Δµ_w is not negligible, and Eq. (32), instead of Eq. (33), must be used in the analysis of ice nucleation data.
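Equation (33) is not reproduced in this excerpt; a commonly used simplified statement of the nucleation theorem is n* ≈ ∂ln J_hom/∂ln a_w at constant T (to within a small additive term), which, as argued above, neglects the a_w dependence of the interface energy. The sketch below estimates this derivative by finite differences from any rate parameterization; the toy power-law rate is built to have a slope of exactly 500, so the output can be checked by eye.

```python
# Finite-difference illustration of a simplified nucleation-theorem estimate,
# n* ~ d(ln J_hom)/d(ln a_w) at constant T (an assumed common form; Eq. 33 is not
# reproduced here).  It neglects the a_w dependence of the interface energy.
import math

def germ_size_from_rates(j_hom, T, a_w, delta=1e-3):
    """Central finite difference of ln(J_hom) with respect to ln(a_w) at fixed T."""
    lo, hi = a_w * (1.0 - delta), a_w * (1.0 + delta)
    return (math.log(j_hom(T, hi)) - math.log(j_hom(T, lo))) / (math.log(hi) - math.log(lo))

# Toy power-law rate with a built-in slope of 500; any J_hom(T, a_w), e.g. K00,
# could be used instead.
toy_j = lambda T, a_w: 1.0e10 * a_w ** 500.0
print(f"n* ~ {germ_size_from_rates(toy_j, 236.0, 0.99):.0f} molecules")
```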
The NNF model (Eq. 17) predicts n* around 260 for T between 180 and 240 K (Fig. 5, line NNF). This value is slightly higher than that obtained using Eq. (34). However, the empirical correlation derived for σ_iw (Eq. 29) used in Eq. (34) does not only account for surface effects but also corrects for neglecting ΔG_sln in CNT. Thus it is likely that Eq. (34) overestimates the derivative of the interface formation energy with respect to ln a_w, even though J_hom predicted by CNT-EMP is in agreement with K00. The slight increase in n* as temperature decreases predicted by NNF results from a faster decrease in the interfacial term than in the thermodynamic term of Eq. (18).
It must be noticed that n* shown in Fig. 5 is calculated at T = T_f, which implies that a_w is not constant but varies with T_f. For T_f > 210 K the CNT-NNF and NNF curves in Fig. 5 remain close. However, due to the lower sensitivity of σ_iw to a_w in CNT-NNF than in NNF, T_f remains above 210 K in the former (Fig. 6). CNT-TUR shows a strong increase in n*
as T_f decreases, similar to the behavior observed by Ford (2001).
It is also important to test whether the picture presented in Sect. 2.1 is physically plausible. The pressure change across the interface can be calculated using the generalized Laplace equation, Eq. (35) (Kashchiev, 2000), where the solid is assumed incompressible. Direct application of Eq. (35) is somewhat difficult because n* is not independent of a_w. However, for a_w = 1, n* can be approximated as dependent only on T. Thus, setting the energy of formation of the interface equal to (µ_w,ls − µ_w,2) n_ls and replacing Eq. (14) into Eq. (35), we obtain Eq. (36) for a_w = 1. Using the parameters of Table A1, ΔP = 336 bar for n* = 260. This value is below the compressibility limit of water (Baker and Baker, 2004). Thus, for atmospheric conditions the increased pressure at the interface will not result in destabilization of the water structure. This indicates that the picture of the interface presented here is physically plausible.
ΔP is of the same order as the osmotic pressure defined by Baker and Baker (2004); however, the relation between ΔP and the osmotic pressure is not clear.
Freezing temperature
In this section we investigate whether the model presented in Sect. 2 is able to explain the water activity criteria of Koop et al. (2000), that is, whether the NNF model is able to independently predict a constant difference between a_w and a_w,eq when calculated at T_f. Figure 6 shows T_f (Eq. 30), calculated using K00, CNT and NNF. Results using the correlation of Bullock and Molinero (2013) (hereafter BM13), derived from MD simulations, are also included. The gray area in Fig. 6 represents experimental uncertainty and was obtained by setting Δa_w = a_w − a_w,eq = 0.313 ± 0.025, which is the typical range of Δa_w found in experimental observations (Koop and Zobrist, 2009; Alpert et al., 2011; Knopf and Rigg, 2011).
Using K00 directly in Eq. (30) and finding a_w and T_f iteratively results in an average Δa_w of about 0.302 for 238 K > T_f > 180 K. The slightly lower Δa_w than reported by Koop and Zobrist (2009) (Δa_w = 0.313) results from using a fixed droplet size of 10 µm, whereas in Koop et al. (2000) D_p varied between 1 and 10 µm. Carrying out the same exercise with J_hom derived from the NNF model results in overlap of T_f between K00 and NNF down to 190 K (Fig. 6). This shows that the NNF model is able to reproduce the water activity criteria and constitutes an independent theoretical derivation of the results of Koop et al. (2000).
BM13 agrees with K00 and NNF within experimental uncertainty for T_f between 200 and 233 K, but it tends to overpredict T_f at lower temperatures. This overprediction was also observed by Bullock and Molinero (2013) and was ascribed to the temperature dependency of the water activity coefficient.
Figure 6 also shows T_f calculated with CNT using the different approximations to σ_iw presented in Sect. 2.4. The CNT-EMP line has been omitted as, by design, it overlaps with the K00 line. As discussed in Sect. 3.2, the difference between the CNT-NNF and CNT-TUR curves represents the effect of the surface excess of solute on J_hom, hence T_f. This effect results in about 10 K lower T_f than when σ_iw is assumed independent of a_w (curve CNT-TUR). Mixing effects, represented by the difference between the CNT-NNF and the K00 curves, become increasingly significant at low T and represent about a 20 K decrease in T_f for a_w ≈ 0.8.
The NNF model allows further exploration of the origin of the constant shift in water activity observed by Koop et al. (2000). Using Eq. (19) in Eq. (30) and rearranging gives Eq. (37). Since solutions of Eq. (37) are also solutions to Eq. (30), Eq. (37) determines T_f and Δa_w. Because of this, the left-hand side of Eq. (37) is termed the characteristic freezing function.
Inspection of Eq. (37) shows that the characteristic freezing function depends only on T, where Δa_w acts as a parameter defining its roots. By exploring the parameter space of Eq. (37) we can determine what values of Δa_w allow for real solutions to Eq. (37). This is shown in Fig. 7, where T_f is defined at the intersection between the characteristic freezing function and the horizontal axis. Figure 7 shows that Eq. (37) only has real solutions over a very narrow set of values of Δa_w, i.e., 0.298 < Δa_w < 0.306. In other words, for T_f to exist, Δa_w must be almost constant between 180 and 240 K. This explains the water activity criteria, since the variation in Δa_w shown in Fig. 7 is well within experimental uncertainty (Fig. 6). An interesting feature of the characteristic freezing function is that it produces similar T-a_w curves for different Δa_w values. This means that the multiple roots of Eq. (37) are located at similar T_f for different values of Δa_w, and always fall on the same curve (Fig. 6). The oscillating behavior of the freezing function results from the relative variation in the temperature derivative of the interfacial and thermodynamic terms defining the nucleation work (Eq. 18).
Figure 7 shows that Eq. (37) constitutes a theoretical derivation of the water activity criteria. Δa_w can be obtained by numerically solving Eq. (37). However, for a_w = 1, Eq. (37) simplifies and Δa_w can be found by direct analytical solution, in the form of Eq. (38), where T* = 236.03 K is the freezing temperature at a_w = 1. The value of Δa_w in Eq. (38) was obtained using the parameters of Table A1 calculated at T*. Δa_w is very close to the experimental value of 0.302 found by application of K00 (Fig. 6) and within experimental uncertainty of reported values (e.g., Koop and Zobrist, 2009; Alpert et al., 2011). For T > 190 K, Δa_w calculated from Eq. (37) is fairly constant (being 0.300 at T = 190 K). For T < 190 K there is a slight increase in Δa_w, reaching about 0.31 at T = 180 K. This increase is due to the increase in ΔG_act at low T.
From the agreement of BM13 with K00 (Fig. 6), Bullock and Molinero (2013) concluded that the formation of four-coordinated water controls T_f, which implies a kinetic control for nucleation. This view can be reconciled with the thermodynamic framework presented here by taking into account the role of ΔG_act in determining J_hom. The product of the remaining factors of the pre-exponential term in Eq. (20) is almost constant between 180 and 236 K. Therefore the flux of molecules to the germ is controlled by ΔG_act. In fact, introducing Eq. (18) into Eq. (19) and then into Eq. (30), we obtain Eq. (39) after rearranging. Equation (39) implies that an increase in ΔG_act is balanced by a decrease in ΔG_nuc, i.e., the increase in the driving force for nucleation at low T balances the decrease in the mobility of water molecules. One can hypothesize that the formation of low density patches of water within a supercooled droplet becomes less frequent at low a_w (hence low T_f), which translates into a larger ΔG_act. Hence ΔG_act exerts a kinetic control on T_f and ΔG_nuc responds accordingly (Eq. 39). In other words, a kinetic constraint to nucleation implies a thermodynamic one (and vice versa), and T_f represents the temperature at which they balance. ΔG_act is closely related to the self-diffusivity of water (Pruppacher and Klett, 1997) and it follows that diffusivity must play a critical role in determining J_hom at low T. Since ΔG_nuc can be defined on a purely thermodynamic basis (Sect. 2), Eq. (39) suggests that ΔG_act may also admit a thermodynamic description.
Sources of uncertainty
Besides the physical properties of water, the NNF model depends on two constants: the surface coverage, Γ_w, and the geometric constant defining the crystal lattice, s. It is clear that variation in physical properties, particularly the heat of fusion, will affect J_hom. The parameterization of ΔG_act, here assumed to be that of pure water, would also have an effect on nucleation rates, particularly at low T (Pruppacher and Klett, 1997). The physical properties of water can be obtained by independent methods and it is out of the scope of this work to evaluate their accuracy. Since they are elevated to the third power in the work of nucleation, J_hom is very sensitive to Γ_w and s. In principle their variation would have a similar effect on the nucleation rate as variation in σ_iw in CNT. However, Γ_w and s can be constrained independently without using nucleation rate measurements. Furthermore, their plausible range of variation is well constrained by the underlying physics. Variation in Γ_w may originate from crystal defects in the germ, and from significant order beyond the second interfacial layer. The former may be rare since defects will be energetically unfavored. The latter is more difficult to assess; however, the percentage of molecules that would display order beyond the second layer is expected to be small. From Spaepen's (1975) model, Γ_w is expected to be close to 1.46 since order is rapidly lost when moving from the interface into the bulk of the liquid. Assuming that 10 % of the third layer molecules belong to the interface (which is likely an upper limit of variability) would increase Γ_w to 1.51. The factor s is 1.09 for hcp crystals and 1.12 for bcc crystals (Jian et al., 2002) and it is not likely that s would be outside of this range. Figure 8 shows the expected variation in J_hom from variation in Γ_w and s within these intervals. It represents between 1 and 3 orders of magnitude variation in J_hom, and about 2 K variability in freezing temperatures.
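A rough numerical version of the sensitivity argument above: if the nucleation work scales as (Γ_w s)^3, then log10 J_hom shifts by about −(ΔG_nuc/k_B T)·[(Γ_w′s′/Γ_w s)^3 − 1]/ln 10. The baseline barrier ΔG_nuc/k_B T = 40 assumed in the sketch below is an illustrative working point, not a value taken from the paper.

```python
# Rough sensitivity check: if the nucleation work scales as (Gamma_w * s)**3,
# log10(J_hom) shifts by about -(DeltaG_nuc/kT)*[(Gamma_w' s'/Gamma_w s)**3 - 1]/ln(10).
# The baseline barrier of 40 kT is an assumed illustrative working point.
import math

def decades_shift(gamma_w, s, gamma_w0=1.46, s0=1.105, dg_over_kt=40.0):
    """Approximate shift in log10(J_hom) relative to the baseline (gamma_w0, s0)."""
    ratio = (gamma_w * s) / (gamma_w0 * s0)
    return -dg_over_kt * (ratio ** 3 - 1.0) / math.log(10.0)

for gamma_w, s in [(1.46, 1.09), (1.46, 1.12), (1.51, 1.12)]:
    print(f"Gamma_w = {gamma_w}, s = {s}: {decades_shift(gamma_w, s):+.1f} decades in J_hom")
```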
Conclusions
The model presented here constitutes a new thermodynamic framework for nucleation that does not use the interfacial tension as a defining parameter. It is therefore free from bias induced by uncertainties in the parameterization of σ_iw. Instead, an expression for the interfacial energy was developed from first principles using thermodynamic arguments. The new framework is based on a conceptual model in which the interface is considered to be made of "water molecules trapped by the solid matrix". It also accounts for the finite droplet size, leading to changes in the composition of the liquid phase upon nucleation. The proposed framework is fundamentally different from classical nucleation theory in that it does not consider the curvature of the germ as the determinant of nucleation but rather emphasizes the entropic changes across the interface. Since it places emphasis on the increase in order and the reduction in entropy across the interface, the new model has been termed the Negentropic Nucleation Framework, NNF. Comparison against experimental results showed that the new framework is able to reproduce measured nucleation rates and is capable of explaining the observed constant shift in water activity between melting and nucleation (Koop et al., 2000). The constant water activity shift originates because the freezing temperature only exists for a very narrow range of Δa_w (Eq. 37), and represents a balance between kinetic and thermodynamic constraints to nucleation. NNF shows that the effect of water activity on nucleation is a manifestation of the entropic barrier to the formation of the germ. A theoretical expression for Δa_w was derived and was shown to agree well with experimental values (Koop et al., 2000; Koop and Zobrist, 2009). This constitutes the first phenomenological derivation of the water activity criteria found by Koop et al. (2000).
The new framework shows that the interfacial energy depends strongly on a_w. This dependency originates from the excess concentration of either solute or solvent when the dividing surface is defined. Such an excess is present even if the EDS is defined with respect to the solvent. Since a_w is a control variable in nucleation, it is advantageous to define the EDS with respect to the solute and explicitly calculate the solvent surface excess. By application of this procedure it was shown that the interfacial energy is a function of water activity only and independent of the nature of the solute.
The origin of the dependency of J_hom on a_w was elucidated by applying several independent expressions for the interfacial tension within the framework of CNT. It was shown that a_w alters J_hom by modification of the surface excess affecting σ_iw and by increasing the energy of "unmixing" required to create a solute-free ice germ. Two new expressions were derived to parameterize σ_iw. The first one uses the NNF model and accounts explicitly for surface excess. By using this expression it was shown that the constant in the classical Turnbull (1950) approximation to σ_iw (Eq. 25) can be interpreted as a measure of the thickness of the interfacial layer around the ice germ. The second expression for σ_iw was empirically derived by fitting CNT to K00. It was inferred that σ_iw derived in this way does not only account for surface effects but also acts as a correction factor for the assumption of negligible mixing effects in CNT. Since in CNT σ_iw represents surface effects only, it is not clear whether empirically derived expressions for σ_iw are consistent with the assumptions of CNT.
Analysis of the new framework suggested that the temperature dependency of ΔG_nuc and ΔG_act plays a significant role in defining J_hom and T_f. It was shown that around T_f the increase in ΔG_act as T decreases is balanced by a decrease in ΔG_nuc. Thus an increased driving force for nucleation compensates for the slower molecular diffusion at low T. Such coupling between kinetics and thermodynamics during nucleation suggests that a thermodynamic description of the pre-exponential factor (Eq. 19) may be possible.
The model presented here emphasizes the entropic nature of homogeneous nucleation. Molecular simulations may shed further light on the role of entropy changes across the interface in ice nucleation. Measurements of the interface thickness would also help elucidate the role of the ice crystal lattice structure and the thickness of the interfacial layer (represented by the constants s and Γ_w, respectively) in determining J_hom.
The framework introduced here reconciles theoretical and experimental results. Since it obviates the usage of σ_iw as a defining parameter, it may help reduce the uncertainty in J_hom associated with the parameterization of σ_iw in theoretical models. The new framework offers for the first time a thermodynamically consistent explanation of the effect of water activity on ice nucleation. Its relative simplicity makes it suitable to describe ice nucleation in cloud models, and may lead to a better understanding of the formation of ice in the atmosphere.
Table A1 footnote: from the data of Johari et al. (1994) the following fit was obtained: h_f = 7.50856 × 10^-7 T^5 − 8.40025 × 10^-4 T^4 + 0.367171 T^3 − 78.1467 T^2 + 8117.02 T − 3.29032 × 10^5 (J mol^-1) for T between 180 and 273 K.
Fig. 4. Homogeneous nucleation rate. K00 and NNF correspond to J_hom obtained using the correlations of Koop et al. (2000) and the NNF model (Eq. 19), respectively.
Fig. 8. Estimated range of variability in T_f (D_p = 10 µm and Δt = 10 s) and J_hom for the NNF model.
Appendix A
Table A1. List of symbols.
a_w, a_y: Activity of water and solute, respectively
a_w,eq: Equilibrium a_w between bulk liquid and ice (Koop and Zobrist, 2009)
h: Planck's constant
h_w,s, h_w,ls: Partial molar enthalpy of water in bulk ice and in the interface, respectively
n: Total number of molecules in the solid germ
n*: Critical germ size
n_s, n_ls: Number of molecules in the bulk of the solid and in the interface, respectively
N_c: Number of atoms in contact with the ice germ, 5.85 × 10^18 m^-2 (Pruppacher and Klett, 1997)
N_w, N_y: Total number of water and solute molecules, respectively
p_s,w, p_s,i: Liquid water and ice saturation vapor pressure, respectively (Murphy and Koop, 2005)
s: Geometric constant relating n and n_ls, 1.105 mol^2/3
S_i: Saturation ratio with respect to ice
s_w,s, s_w,ls: Partial molar entropy of water in bulk ice and in the interface, respectively
T: Temperature
Γ_y: Molecular surface excess of solute
µ_w, µ_y: Chemical potential of water and solute, respectively
µ_w,ls: Chemical potential of water at the interface
µ_w,s: Chemical potential of bulk ice
ρ_w, ρ_i: Bulk density of liquid water and ice, respectively (Pruppacher and Klett, 1997)
σ_iw: Ice-liquid interfacial energy
g: Ice germ surface area | 13,419 | 2014-07-30T00:00:00.000 |
"Environmental Science",
"Physics"
] |
Constraining the physical properties of large-scale jets from black hole X-ray binaries and their impact on the local environment with blast-wave dynamical models
Relativistic discrete ejecta launched by black hole X-ray binaries (BH XRBs) can be observed to propagate up to parsec scales from the central object. Observing the final deceleration phase of these jets is crucial to estimate their physical parameters and to reconstruct their full trajectory, with implications for the jet powering mechanism, composition and formation. In this paper we present the results of the modelling of the motion of the ejecta from three BH XRBs: MAXI J1820+070, MAXI J1535$-$571 and XTE J1752$-$223, for which high-resolution radio and X-ray observations of jets propagating up to $\sim$15 arcsec ($\sim$0.6 pc at 3 kpc) from the core have been published in recent years. For each jet, we modeled its entire motion with a dynamical blast-wave model, inferring robust values for the jet Lorentz factor, inclination angle and ejection time. Under several assumptions associated with the ejection duration, the jet opening angle and the available accretion power, we are able to derive stringent constraints on the maximum jet kinetic energy for each source (between $10^{43}$ and $10^{44}$ erg, also including H1743$-$322), as well as to place interesting upper limits on the density of the ISM through which the jets are propagating (from $n_{\rm ISM} \lesssim 0.4$ cm$^{-3}$ down to $n_{\rm ISM} \lesssim 10^{-4}$ cm$^{-3}$). Overall, our results highlight the potential of applying models derived from gamma-ray bursts to the physics of jets from BH XRBs and support the emerging picture of these sources as preferentially embedded in low-density environments.
INTRODUCTION
Relativistic jets appear as a ubiquitous feature among accreting black holes (BH) in the Universe, from supermassive BHs in active galactic nuclei (AGN) to stellar-mass BHs in galactic X-ray binaries (XRBs). Highly relativistic jets are also produced in energetic transient events, such as gamma-ray bursts (GRBs) and tidal disruption events (TDEs), a large fraction of which are believed to be powered by accreting BHs. The short timescales of evolution (days to weeks) and the relative proximity of BH XRBs make them ideal targets on which to study the properties of relativistic jets (Fender 2006; Romero et al. 2017), some of which appear to also be scale-invariant, and thus valid for all accreting BHs (Körding & Falcke 2005). In BH XRBs, different jets are produced in different phases of the outburst (Corbel et al. 2004; Fender et al. 2004). Compact jets, causally connected to the accretion flow, are observed during the hard spectral state (see Remillard & McClintock 2006 and Homan & Belloni 2005 for a review on spectral states), as they emit self-absorbed synchrotron radiation, which dominates in the radio through near-infrared (Corbel et al. 2000; Fender 2001; Markoff et al. 2001; Corbel & Fender 2002; Russell et al. 2013). On the other hand, discrete ejecta are observed to be launched during transitions between the hard and the soft states, producing strong multi-wavelength flares during which the synchrotron emission is initially self-absorbed and then optically thin at radio wavelengths (e.g. Tetarenko et al. 2017). These components consist of bipolar blobs of plasma that travel away from the core, often at apparently superluminal speeds, and might be considered as less-relativistic analogs of what is observed in AGN (e.g. Marscher et al. 2002; Gómez et al. 2008). As of today, spatially resolved discrete ejecta have been observed with radio-interferometric observations in 15 sources: GRS 1915+105 (Mirabel & Rodríguez 1994), GRO J1655-40 (Hjellming & Rupen 1995; Tingay et al. 1995), Cyg X-3 (Mioduszewski et al. 2001), GX 339-4 (Gallo et al. 2004), XTE J1550-564 (Hannikainen et al. 2001; Corbel et al. 2002), XTE J1752-223 (Yang et al. 2010; Miller-Jones et al. 2011), H1743-322 (Corbel et al. 2005; Miller-Jones et al. 2012), XTE J1859+226 (Rushton et al. 2017), MAXI J1535-571 (Russell et al. 2019), V404 Cyg (Miller-Jones et al. 2019), MAXI J1820+070 (Bright et al. 2020; Espinasse et al. 2020), MAXI J1348-630 (Carotenuto et al. 2021), EXO J1846-031 (Williams et al. 2022), MAXI J1803-298 (Wood et al. 2023) and MAXI J1848-105 (Bahramian et al. 2023). This sample represents around 20% of the current population of confirmed and candidate BH XRBs, which are, however, all believed to produce jets (Tetarenko et al. 2016; Corral-Santana et al. 2016). In the case of a non-detection, this is likely due to the source being too far away, having an unfavorable inclination angle (due to the effect of Doppler boosting), or undergoing a failed transition from the hard to the soft state, which was found to happen, approximately, for a third of the observed outbursts (Alabarta et al.
2021). For sources that do display hard-to-soft state transitions, optically thin radio flares can always be detected with adequate radio monitoring. In some cases, discrete ejecta can propagate up to parsec scales from the core, displaying re-brightenings and deceleration phases likely due to the interaction with the interstellar medium (ISM), which also result in the production of broadband synchrotron radiation (radio to X-rays) from in-situ particle acceleration, up to TeV energies (Corbel et al. 2002, 2005; Migliori et al. 2017; Espinasse et al. 2020; Carotenuto et al. 2021).
Despite the wealth of multi-wavelength observations collected over recent years, multiple aspects related to the formation, evolution and overall physics of these jets remain unclear. For instance, the powering mechanism of the jets is still an open problem, as jets could be powered by the extraction of energy from a spinning BH (Blandford & Znajek 1977) or from its accretion disk (Blandford & Payne 1982), or by a combination of the two, as suggested by general relativistic magnetohydrodynamic (GRMHD) simulations (McKinney 2006) and recent Event Horizon Telescope observations probing a possible light spine vs. massive sheath jet structure (Janssen et al. 2021). The plasma composition, either baryonic or purely leptonic, is also unknown, and it is notoriously difficult to constrain, as most jets display only a simple featureless synchrotron spectrum (Fender 2006). Moreover, while radio/infrared timing techniques are opening a new window on the physical parameters of compact jets (Casella et al. 2010; Tetarenko et al. 2019, 2021; Zdziarski et al. 2022), and recent results found evidence for a luminosity dependence of their properties (Prabu et al. 2023), we still lack precise constraints on the physical parameters of discrete ejecta, such as their mass, speed, energy and volume. In particular, a key open problem is the quantification of their total energy content. Measuring the jet's energy is of prime importance not only to estimate the balance between inflows and outflows in BH XRBs, but also because of the implications for the jet composition, powering mechanism and impact on the surrounding environment (e.g. Fender & Muñoz-Darias 2016).
The total energy (internal plus kinetic) of discrete ejecta can be estimated with different approaches. First, given the jet synchrotron emission, it is possible to infer the internal energy of the plasma that is required to produce the observed radiation by relying on the knowledge of the size of the emitting region and of the source distance, while assuming equipartition conditions (Longair 2011). The size of the emitting region can be most easily estimated by directly resolving the plasmon with radio or X-ray observations (e.g. Rushton et al. 2017). When this is not possible, the synchrotron-emitting region size can be estimated through the detection of the radio spectral peak due to synchrotron self-absorption (e.g. Fender & Bright 2019 for BH XRBs), or it can be computed assuming a jet expansion speed and an ejection timescale (usually the duration of the rise of the radio flare at the jet's launch), although this may largely underestimate the jet's internal energy (Bright et al. 2020; Carotenuto et al. 2022). An additional way of measuring the jet size relies on simultaneous radio-interferometric observations of the ejecta at the same frequency, but with different angular resolutions probing different spatial scales. By measuring the percentage of flux resolved out between the observations, it is possible to infer the size of the emitting region and subsequently the jet internal energy (Bright et al. 2020). Alternatively, it is possible to identify jet-produced structures in the ISM and then use them as calorimeters to measure the mechanical power, and consequently the kinetic energy, that the jets need to deposit in those structures in order to create and sustain them (Gallo et al. 2005; Russell et al. 2007; Tetarenko et al. 2018, 2020).
Independently, focusing on the kinematics of these jets and covering their full trajectory allows us to obtain a complete dataset (angular separation vs. time) of their evolution, which can later be used to test physical models for the jet propagation in the ISM.The application of these models, which are mostly derived from the physics of GRBs (Wang et al. 2003), can yield important constraints on multiple physical parameters of the ejecta, such as their Lorentz factor, mass, inclination angle, ejection time and kinetic energy (Wang et al. 2003;Hao & Zhang 2009;Steiner & McClintock 2012).In particular, covering the jet deceleration phase with dense monitoring campaigns can significantly improve the constraints from these models (Carotenuto et al. 2022).Furthermore, modelling the jet motion is also of prime importance to precisely constrain the time of ejection, which is fundamental to put the jet launch in context with other multi-wavelength observational signatures, such as radio flares, X-ray spectral changes and X-ray quasi-periodic oscillations (QPOs, e.g.Ingram & Motta 2019).This allows us to ultimately obtain a comprehensive view of the source evolution during the state transition, with a special focus on the hot corona of electrons that surrounds the BH, which is responsible for the hard X-ray emission and it is thought to be intimately connected to the jet (e.g.Rodriguez et al. 2003;Markoff et al. 2005;Kara et al. 2019;Méndez et al. 2022;Ingram et al. 2023).
Tracing the jet motion has also turned out to be especially useful for exploiting the jets as probes of the environment surrounding BH XRBs. In fact, different works considering the propagation of ejecta in the ISM have provided strong evidence that BH XRBs are generally located in environments that appear 2-4 orders of magnitude less dense than the canonical Galactic ISM density of 1 particle per cm^-3 (at least in the direction of the jet propagation), unless these jets are very narrowly collimated, with opening angles ≪ 1°, or extremely energetic, with kinetic energies above 10^46 erg (Heinz 2002; Hao & Zhang 2009; Carotenuto et al. 2022; Zdziarski et al. 2023).
The wealth of information that can be extracted from this type of modelling was first shown with the application to the large-scale decelerating jets from the BH XRBs XTE J1550-564 (Hao & Zhang 2009; Steiner & McClintock 2012), H1743-322 (Steiner et al. 2012) and, more recently, MAXI J1348-630 (Carotenuto et al. 2022; Zdziarski et al. 2023). In this paper, as a continuation of the work started in Carotenuto et al. (2022), we expand the sample of sources that displayed unambiguously decelerating discrete ejecta, and for which such modelling has been applied, to include the jets from the BH XRBs MAXI J1820+070, MAXI J1535-571 and XTE J1752-223. These sources displayed resolved, large-scale decelerating jets, observed between 2010 and 2018 to propagate up to ∼15 arcsec from the core (Yang et al. 2010; Miller-Jones et al. 2011; Yang et al. 2011; Russell et al. 2019; Bright et al. 2020; Espinasse et al. 2020). However, the jet motion in these sources has only been described with basic phenomenological models, mostly applied in order to constrain the ejection date in relation to the simultaneous X-ray activity of the core. Since the quality of the jet angular separation data justifies the application of a physical model to describe the entire jet evolution, we performed such modelling and we present the detailed results in this paper. In particular, we present and discuss new constraints on the jet Lorentz factor, inclination angle and ejection time, as well as upper limits on the maximum energy available to the jets, on the density of the ISM that surrounds the systems and on the mass of the ejecta. For the last part, we also consider the ejecta launched in 2003 by H1743-322, for which similar modelling has already been published by Steiner et al. (2012).
In Section 2 we present the sources and the observational data considered for the modelling work, while in Section 3 we discuss in detail the dynamical model that we adopted.Then, in Section 4 we present the results of the application of such model to our data and we discuss our findings in relation to the current understanding of jets from XRBs in Section 5. Finally, we summarize our conclusions in Section 6.
DATA
The data on the ejecta launched by the three BH XRBs considered in this paper have been already published and are therefore available in the literature.In the following sections, we present the sources and discuss the data used for this work.
MAXI J1820+070
The BH XRB MAXI J1820+070 was discovered by the Monitor of All-sky X-ray Image on board the International Space Station (Matsuoka et al. 2009) in March 2018 (Kawamuro et al. 2018), and it was subsequently identified with the optical transient ASASSN-18ey (Denisenko 2018). MAXI J1820+070 is one of the most well-observed and well-studied BH XRBs of recent years. It harbors a 6.75 +0.64 −0.46 M⊙ BH accreting from a 0.5 ± 0.1 M⊙ companion star (Torres et al. 2019, 2020; Mikołajewska et al. 2022). Due to its impressive brightness, primarily in the X-rays (Fabian et al. 2020), it has been the subject of numerous multi-wavelength observing campaigns across the entire electromagnetic spectrum (e.g. Shidatsu et al. 2018; Hoang et al. 2019; Bright et al. 2020; Tetarenko et al. 2021; Abe et al. 2022; Cangemi et al. 2023; Echiburú-Trujillo et al. 2024), yielding a dataset of extremely high quality for a BH XRB in outburst. A model-independent measurement of the distance of 2.96 ± 0.33 kpc is available thanks to Very Long Baseline Interferometry (VLBI) radio parallax observations (Atri et al. 2020).
Bipolar relativistic discrete ejecta from MAXI J1820+070 have been detected and monitored at radio wavelengths for almost one year, with observations at different angular resolutions with the Multi-Element Radio Linked Interferometer Network (eMERLIN), the Very Long Baseline Array (VLBA), the Arcminute Microkelvin Imager Large Array (AMI-LA), the Karl G. Jansky Very Large Array (VLA) and the MeerKAT radio interferometer, showing the jets to propagate out to ∼10 arcsec from the core of the system with a high proper motion (Bright et al. 2020;Wood et al. 2021).Notably, these jets have also been detected at large scales, up to 12 arcsec, in the X-rays with five Chandra X-ray telescope exposures between the end of 2018 and 2019 (Espinasse et al. 2020).These X-ray detections are particularly important because they cover the deceleration phase, not immediately evident from the radio data alone.In this work, we use both the radio and X-ray coordinates of the jets to model their motion, and we also take into account the updated jet coordinates from Wood et al. (2021), obtained with the application of the new dynamic phase centre tracking technique to the VLBA data (see also Wood et al. 2023).
MAXI J1535-571
The BH XRB MAXI J1535-571 was discovered by MAXI in September 2017 (Negoro et al. 2017) when it entered into outburst, and it was subsequently monitored at all wavelengths between radio and the hard X-rays during its 1-year long outburst (e.g.Tao et al. 2018;Huang et al. 2018;Russell et al. 2019;Parikh et al. 2019;Bhargava et al. 2019;Baglio et al. 2018).In particular, the full outburst evolution with the associated state transitions is discussed in Tao et al. (2018) and Nakahira et al. (2018).The source is located at a distance D = 4.1 +0.6 −0.5 kpc (Chauhan et al. 2019), determined from observations of Hi absorption carried out with the Australian Square Kilometre Array Pathfinder (ASKAP).
The radio monitoring campaign presented in Russell et al. (2019) covered the evolution of the jets from MAXI J1535-571 throughout the whole outburst, with Australia Telescope Compact Array (ATCA) and MeerKAT observations. Compact jets were detected during an initial brightening in the first hard state, and they were subsequently observed to quench as the source transitioned to the intermediate state, displaying intense flaring activity (see also Russell et al. 2020). During the hard-to-soft state transition, MAXI J1535-571 launched a fast single-sided discrete jet that was detected and monitored with MeerKAT and ATCA for almost one year. The relativistic component was observed to propagate and decelerate up to an angular distance of ∼15 arcsec (Russell et al. 2019). Its monitoring allowed the authors to place model-independent constraints on the jet speed, inclination angle and ejection date, which we take into account in the modelling presented in this work.
XTE J1752-223
XTE J1752-223 is a BH XRB discovered by the Rossi X-ray Timing Explorer in 2009 (Markwardt et al. 2009) that remained active for almost one year in outburst and that has been the subject of dense multi-wavelength observing campaigns, mostly focused in the radio and X-ray bands (Shaposhnikov et al. 2010;Ratti et al. 2012;Brocksopp et al. 2013).A recent estimation based on the Bayesian analysis of the soft spectral state and the hard-to-soft state transition yields the following constraint on the source distance: D = 7.11 +0.27 −0.25 kpc (Abdulghani et al. 2024), which is notably more than twice the first distance estimation of 3.5 kpc from Shaposhnikov et al. (2010), and it is consistent with another recent estimation of D = 6 ± 2 kpc based on Gaia DR3 (Fortin et al. 2024).We note that Abdulghani et al. (2024) also provide a first BH mass estimation of 12 ± 1M⊙, based on the same method.In this paper, we adopt 7.1 kpc as the source distance, noting that adopting the 6 kpc value would not substantially change the main conclusions of this work.Also, we do not use the 3.5 kpc distance, as it is obtained with an uncertain X-ray spectral-timing correlation scaling technique based on the evolution of the photon index and the QPO frequency during the outburst (Shaposhnikov & Titarchuk 2009).
Notably, during the outburst and after the first hard state, the source performed a standard transition to the intermediate and then soft state, but then displayed multiple short-lived returns to the intermediate state accompanied by strong radio flaring activity observed with ATCA, implying the production and launch of multiple ejecta (Brocksopp et al. 2013).These ejecta, at least three separated approaching components, were eventually imaged with the European VLBI Network (EVN) and with VLBA (Yang et al. 2010;Yang et al. 2011;Miller-Jones et al. 2011), appearing to propagate only at small scales, i.e. less than one arcsec.At least one component (labeled "A" in Yang et al. 2010) displayed evidence of deceleration (Miller-Jones et al. 2011), while no receding component was detected and no ejecta were detected at larger distances from the core with interferometers probing larger angular scales (such as ATCA).In this paper we will only focus on this decelerating component, as not enough data are available for the other ejecta.
THE DYNAMICAL MODEL
We adopt a numerical blast-wave dynamical model to describe the propagation of the jets in the ISM. The model was originally developed to describe GRB afterglows (Piran 1999; Huang et al. 1999), and was applied to describe the evolution of mildly relativistic ejecta in BH XRBs for the first time in Wang et al. (2003), including the transition from relativistic to non-relativistic motion. In particular, we consider the same implementation as Carotenuto et al. (2022).
The model considers a symmetric pair of confined conical ejecta launched simultaneously in opposite directions, at an inclination angle θ with respect to the line of sight. The ejecta start to move away from the core with an initial Lorentz factor Γ0 and kinetic energy E0, and expand with a constant half-opening angle ϕ inside an ambient medium with constant density nISM. Matter in the ambient medium is entrained by the jets, which therefore continuously decelerate, and during this process their kinetic energy is converted into internal energy of the swept-up ISM through external shocks, in a similar fashion to GRB afterglows (Wang et al. 2003). In particular, a forward shock develops at the contact discontinuity between the jet and the ISM. The shock continuously heats the encountered ISM and randomly accelerates particles. In this context, the radiated energy is assumed to be negligible and the jet expansion is considered adiabatic throughout its whole evolution (Chiang & Dermer 1999; Huang et al. 1999; Wang et al. 2003; Hao & Zhang 2009), an assumption that has been proven to be robust in recent works (Steiner & McClintock 2012; Bright et al. 2020; Carotenuto et al. 2022; Zdziarski et al. 2023). Considering one of the two ejected components, it is possible to write the energy conservation equation

E0 = (Γ − 1) M0 c^2 + σ (Γsh − 1) msw c^2 ,    (1)

where the two terms on the right-hand side are, respectively, the instantaneous kinetic energy of the ejecta and the internal energy of the swept-up ISM. In more detail, Γ is the instantaneous jet bulk Lorentz factor, M0 is the mass of the ejecta and Γsh is the Lorentz factor of the shocked ISM. Here, σ is a numerical factor equal to 6/17 for ultrarelativistic shocks and ∼0.73 for non-relativistic shocks (Blandford & McKee 1976), and it is possible to interpolate between the two regimes with a numerical scaling in the jet speed (Huang et al. 1999; Wang et al. 2003; Steiner & McClintock 2012), where β = (1 − Γ^-2)^(1/2) is the intrinsic jet speed in units of c. The mass of the entrained material msw can be written as

msw = (π/3) ϕ^2 R^3 n mp ,    (3)

where R is the distance from the compact object, n is the ambient density and mp is the proton mass. The Lorentz factor of the shocked ISM can be expressed as a function of the jet bulk Lorentz factor by using the jump conditions for an arbitrary shock (Blandford & McKee 1976; Steiner & McClintock 2012), which involve the adiabatic index γ; this index varies between 4/3 for ultrarelativistic shocks and 5/3 for non-relativistic shocks, and, as the jet decelerates, we interpolate between the two limits. The relativistic kinematic equations for the approaching and receding components are (Rees 1966; Blandford et al. 1977; Mirabel & Rodríguez 1994)

dRapp,rec/dt = β c / (1 ∓ β cos θ) ,    (6)

where the ∓ refers, respectively, to the approaching and receding jet, and t is the arrival time of the photons at the observer.
The measurable projected angular separation from the core is

αapp,rec = Rapp,rec sin θ / D ,    (7)

where D is the source distance.
In total, the model depends on 7 parameters: the jet initial kinetic energy E0 and Lorentz factor Γ0, the inclination angle of the jet axis θ and the jet half-opening angle ϕ, the source distance D, the external ISM density nISM and the ejection time tej. It is crucial to note that a degeneracy exists in this model between the three parameters E0, ϕ and nISM, as they appear as a single term in Equation 1 (taking into account the expression for msw in Equation 3). Hence, only the factor E0/(nISM ϕ^2) can be independently constrained by the application of this model. Similarly to Steiner & McClintock (2012), we refer to this factor as the "effective energy", which here we define as

Ẽ0 = E0 / (nISM ϕ^2) .    (8)

Therefore, in order to obtain a reliable estimate of the jet kinetic energy, one needs to independently measure the two parameters ϕ and nISM, or to assume reasonable values for them (see Section 5.4).
For every set of parameters that compose the model, it is possible to obtain the proper motion curve of the jet on the plane of the sky and then to compare it with the data. This can be done by integrating Equation 6 starting at a time tej from an assumed distance R0 = 10^8 cm (on which the results depend only weakly), and then numerically solving Equation 1 at every time step for the instantaneous jet Lorentz factor. The information on the instantaneous speed is then used to update the distance traveled by the jet, which is converted to the angular separation α (Equation 7), which can be directly compared to the observational data.
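To make the numerical scheme concrete, the sketch below implements a simplified version of this integration in Python (using NumPy and SciPy, as referenced for the fits in Section 4.1). It is not the authors' code: the swept-up mass uses the small-angle conical volume, the σ(β) interpolation is an assumed linear scaling between the two quoted limits, and the shocked-ISM Lorentz factor is approximated by the jet one instead of the full Blandford & McKee (1976) jump conditions.

```python
# Hedged sketch of the blast-wave proper-motion model described above.
# Assumptions (not from the paper): Gamma_sh ~ Gamma, sigma(beta) = 0.73 - 0.38*beta,
# and the small-angle conical swept-up volume (pi/3) * phi^2 * R^3.
import numpy as np
from scipy.integrate import solve_ivp

C = 2.998e10        # speed of light [cm s^-1]
M_P = 1.673e-24     # proton mass [g]
KPC = 3.086e21      # cm
DAY = 86400.0       # s
RAD_TO_ARCSEC = 206265.0

def lorentz_factor(R, E0, M0, n_ism, phi):
    """Instantaneous Gamma from E0 = (G-1)M0c^2 + sigma(G_sh-1)m_sw c^2,
    solved by damped fixed-point iteration with the simplification G_sh ~ G."""
    m_sw = (np.pi / 3.0) * phi**2 * R**3 * n_ism * M_P
    gamma = 1.0 + E0 / (M0 * C**2)          # initial guess: no swept-up mass yet
    for _ in range(200):
        beta = np.sqrt(1.0 - gamma**-2)
        sigma = 0.73 - 0.38 * beta
        new = 1.0 + E0 / ((M0 + sigma * m_sw) * C**2)
        gamma = 0.5 * (gamma + new)         # damped update for stability
    return gamma

def angular_separation(t_days, E0, gamma0, theta, phi, n_ism, d_kpc,
                       t_ej=0.0, approaching=True, r0=1e8):
    """Projected core separation [arcsec] at observer times t_days [days]."""
    M0 = E0 / ((gamma0 - 1.0) * C**2)       # ejecta mass fixed by E0 and Gamma0
    sign = -1.0 if approaching else 1.0     # photon-arrival-time (1 -/+ beta cos theta) term

    def drdt(t, R):
        gamma = lorentz_factor(R[0], E0, M0, n_ism, phi)
        beta = np.sqrt(1.0 - gamma**-2)
        return [beta * C * DAY / (1.0 + sign * beta * np.cos(theta))]

    t = np.atleast_1d(np.asarray(t_days, dtype=float))
    sol = solve_ivp(drdt, (t_ej, t.max()), [r0], t_eval=t, rtol=1e-8, atol=1.0)
    return sol.y[0] * np.sin(theta) / (d_kpc * KPC) * RAD_TO_ARCSEC

if __name__ == "__main__":
    # Example with values close to the MAXI J1820+070 fit (Table 1), assuming
    # phi = 1 deg and n_ISM = 1e-3 cm^-3 purely for illustration.
    phi_deg, n_ism = 1.0, 1e-3
    E0 = 2.6e46 * n_ism * phi_deg**2   # from the effective energy, phi in deg, n in cm^-3
    times = np.array([10.0, 50.0, 100.0, 200.0])
    print(angular_separation(times, E0, gamma0=2.6, theta=np.deg2rad(59.6),
                             phi=np.deg2rad(phi_deg), n_ism=n_ism, d_kpc=2.96))
```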
Fit setup
We fit the data for the three BH XRBs considered in this work with the dynamical model presented in Section 3. We adopt a Bayesian approach, applying a Monte Carlo Markov Chain (MCMC) code implemented with the emcee package (Foreman-Mackey et al. 2013).
For every point of the parameter space, Equation 6 was integrated using odeint from the SciPy package (Virtanen et al. 2020).
We include the maximum amount of available information in the choice of our priors, which are physically motivated from our knowledge of the source in question and of BH XRBs in general.We discuss the specific choices in the following sections dedicated to each source.Every MCMC run was conducted using 110 walkers.For each run, after manual inspection, we consider that convergence is reached when the positions of the walkers in the parameter space are no longer significantly evolving.Once the chains have converged, the best fit result for each parameter is taken as the median of the one-dimensional posterior distribution obtained from the converged chains, while the 1σ uncertainties are reported as the difference between the median and the 15th percentile of the posterior (lower error bar), and the difference between the 85th percentile and the median (upper error bar).
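As an illustration of this setup, the sketch below shows how such a fit could be assembled with emcee and the angular_separation helper from the earlier blast-wave sketch; the data arrays, prior widths and walker initialization are simplified placeholders, not the values used for the published fits.

```python
# Simplified MCMC setup in the spirit of the fit described above (emcee, 110 walkers).
# Requires the angular_separation() helper from the earlier blast-wave sketch.
import numpy as np
import emcee

# Hypothetical data: observer times [days], separations and 1-sigma errors [arcsec].
t_obs = np.array([20.0, 60.0, 120.0, 200.0])
alpha_obs = np.array([2.0, 5.5, 8.5, 10.0])
alpha_err = np.array([0.2, 0.2, 0.3, 0.3])

def log_prior(p):
    gamma0, log_e_eff, theta, d_kpc, t_ej = p
    if not (1.0 < gamma0 < 100.0 and 35.0 < log_e_eff < 55.0):
        return -np.inf                       # flat in Gamma0, log-flat in effective energy
    if not (0.0 < theta < np.pi / 2.0 and -5.0 < t_ej < 5.0):
        return -np.inf
    lp = -0.5 * ((d_kpc - 2.96) / 0.3) ** 2              # illustrative normal prior on D
    lp += -0.5 * ((np.degrees(theta) - 64.0) / 5.0) ** 2  # illustrative normal prior on theta
    return lp

def log_probability(p):
    lp = log_prior(p)
    if not np.isfinite(lp):
        return -np.inf
    gamma0, log_e_eff, theta, d_kpc, t_ej = p
    phi_deg, n_ism = 1.0, 1.0                # fixed: only E0/(n_ISM phi^2) is constrained
    e0 = 10.0 ** log_e_eff * n_ism * phi_deg**2
    model = angular_separation(t_obs, e0, gamma0, theta, np.deg2rad(phi_deg),
                               n_ism, d_kpc, t_ej=t_ej)
    return lp - 0.5 * np.sum(((alpha_obs - model) / alpha_err) ** 2)

ndim, nwalkers = 5, 110
start = np.array([2.5, 46.0, np.deg2rad(64.0), 2.96, 0.0])
p0 = start + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_probability)
sampler.run_mcmc(p0, 5000)

chain = sampler.get_chain(discard=1000, flat=True)
# Median and the 15th/85th percentiles, as quoted for the published uncertainties.
print(np.percentile(chain, [15, 50, 85], axis=0))
```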
MAXI J1820+070
We first consider the bipolar ejecta from MAXI J1820+070.The angular separation for the two components is shown in Figure 1, including the measurements presented in Bright et al. (2020), Espinasse et al. (2020) and Wood et al. (2021).The approaching and receding components are marked, respectively, by red and blue points.We performed a joint fit of the dynamical model presented in Section 3 to the approaching and receding components.We adopted a flat prior for Γ0 (between 1 and 100) and a log-flat prior for Ẽ0 (between 10 35 and 10 55 erg).We further assumed a normal prior for the source distance centered on D = 2.96 kpc and with a width of 0.3 kpc (Atri et al. 2020), and we assumed a flat prior for tej centered on the ejection time of component C (MJD 58305.95)presented in Wood et al. (2021) and ranging between MJD 58300 and 58310.For the inclination angle, again relying on Wood et al. (2021), we used a normal distribution centered on 64°and with a width of 5°, while truncated outside the interval 0°-90°.
The best fit is shown in Figure 1, along with the proper motion of the two jet components, and the results are reported in Table 1. The statistical uncertainty range on the plot is represented as the ensemble of trajectories corresponding to the final positions of the walkers in the parameter space. From Figure 1, it is possible to see that the model fits the data exceptionally well, and the agreement with the observations can be seen from the residuals in the bottom panel of the same figure. The deceleration of both jets can be adequately described by a single Sedov phase in a homogeneous environment. This type of deceleration has also been modeled using a simple polynomial fit in Espinasse et al. (2020) and Wood et al. (2021), but we note that in our case the whole jet motion can be described by a single physical model. The statistical uncertainty on the fit is remarkably small thanks to the fact that we detected both components and that we had VLBI observations taken at the beginning of the jet motion (Bright et al. 2020; Wood et al. 2021), which allowed us to constrain the ejection time with great accuracy. The high-resolution Chandra observations at the end of the monitoring are also important to cover the deceleration phase (Espinasse et al. 2020). According to this model, the jet is launched at tej = MJD 58305.96 +0.02 −0.02 with a bulk Lorentz factor Γ0 = 2.61 +0.54 −0.39, an effective energy of Ẽ0 = 2.6 +0.4 −0.4 × 10^46 erg, and a medium-to-high inclination angle θ = 59.6° +1.2° −1.0°. The source distance is D = 2.96 +0.11 −0.13 kpc, which tracks the prior choice based on the radio parallax measurement by Atri et al. (2020). The posterior distributions for the parameters of the model are shown in Appendix A (Figure A1), where we present the corner plot displaying the one-dimensional posterior distribution for all the parameters and the two-parameter correlations.

Figure 1 (caption, fragment): The un-shaded, gray and seashell regions mark periods in which the source was, respectively, in the hard, intermediate and soft state (Shidatsu et al. 2018). The black horizontal dashed line represents zero separation from the core, while the black continuous line represents the best fit obtained with the external shock model. The orange shaded area represents the total uncertainty on the fit, obtained by plotting the jet trajectories corresponding to the final positions of the MCMC walkers in the model parameter space. Residuals ([data - model]/uncertainties) are reported in the bottom panel. The model appears to provide an excellent description of the motion of both the approaching and receding ejecta, with a low statistical uncertainty.
MAXI J1535-571
We fit the external shock model to large scale jet data from MAXI J1535-571, using the measurements reported in Russell et al. (2019).Similarly to the fit already presented in Carotenuto et al. (2022), we fit the data for the approaching ejection only, given the non-detection of the receding counterpart.The associated proper motion is shown in Figure 2. As for MAXI J1820+070, we adopted a flat prior for Γ0 (between 1 and 100) and log-flat prior for Ẽ0 (between 10 35 and 10 55 erg).We assumed a flat prior for tej between MJD 58005 and 58025 and a normal prior for the source distance centered on D = 4.1 kpc, with a width of 0.5 kpc (Chauhan et al. 2019).For the inclination angle, we used a uniform distribution in cos θ truncated outside the interval 0°-45°, following the constraints reported in Russell et al. (2019).
The best fit results are reported in Table 1 and are shown in Figure 2, from which it can be seen that the model fits the data remarkably well. The bottom panel of the figure displays the residuals, revealing a good agreement with the observations. Also in this case, the jet deceleration can be accurately described by a single Sedov phase in a homogeneous environment. From the fit, we can place constraints on the jet ejection date tej = MJD 58017.4 +4.0 −3.8, on its bulk Lorentz factor Γ0 = 1.6 +0.2 −0.2 and on its medium-to-low inclination angle θ = 30.3° +6.3° −6.3°. The source distance is D = 4.2 +0.8 −0.9 kpc, which, also in this case, tracks the prior choice based on the H I absorption measurement by Chauhan et al. (2019). We also constrain the effective energy of the jet to be Ẽ0 = 5.8 +16.6 −4.0 × 10^48 erg. The posterior distributions for the parameters of the model are shown in Figure A2.
XTE J1752-223
Finally, we fit the dynamical model to the VLBI data of the approaching ejecta launched by XTE J1752-223, as reported in Miller-Jones et al. (2011).As for the previous cases, we adopted a flat prior for Γ0 (between 1 and 100) and a log-flat prior for Ẽ0 (between 10 35 and 10 55 erg).We assumed a uniform distribution in cos θ, with a truncation outside the interval 0°-45°, consistent with the constraints reported in Miller-Jones et al. (2011).Moreover, we assume a normal prior for the source distance centered on D = 7.1 kpc and with a width of 0.3 kpc, from Abdulghani et al. (2024).Given that only four data points are available, we fixed the ejection date tej to MJD 55217, just before the 20 mJy peak of the radio flare observed with ATCA (Brocksopp et al. 2013), and one day before the transition from the hard-intermediate state (HIMS) to the soft-intermediate state (SIMS) occurred around MJD 55218 (Shaposhnikov et al. 2010).
The best fit results are reported in Table 1 and are shown in Figure 3. We constrain the effective energy to Ẽ0 = 1.1 +1.2 −0.6 × 10^45 erg, at a medium-to-low inclination angle of θ = 18.4° +2.5° −2.3°. Given the scarcity of data on this source, we are unable to provide a robust value for the jet initial Lorentz factor Γ0, but from the posterior we can constrain Γ0 > 3.4 (99.7% confidence), according to the choice of ejection date discussed above. As shown in Figure A3, the data provide a median value of ≃5.4, but this constraint depends directly on the chosen tej. In the same way, the lower limit on Γ0 can be relaxed if we assume an earlier ejection date. For illustration, performing the same fit, but assuming tej = MJD 55210 or 55215, leads to Γ0 > 1.8 (median value ≃2.1) and Γ0 > 2.9 (median value ≃4.2), respectively. On the other hand, for ejections at later times, fixing tej = MJD 55218 at the transition from the HIMS to the SIMS (Shaposhnikov et al. 2010) results in an equally acceptable fit, yielding similar constraints for the initial jet Lorentz factor: Γ0 > 3.4 (99.7% confidence, but with a median value ≃7.4). Similarly, fixing tej = MJD 55221 at the transition from the SIMS to the soft state (an ejection time which we deem more unlikely for BH XRBs) yields Γ0 > 3.5 (99.7% confidence), with an extremely high median value of ≃13.4. We note that, for our initial Lorentz factor posterior distributions, fixing tej in the range MJD 55217-55221 results in very similar lower limits on Γ0, while the median value is much more sensitive to the choice of tej.

Figure 3 (caption, fragment): ... and information on the spectral states from Shaposhnikov et al. (2010) and Brocksopp et al. (2013). Once the ejection date is fixed (here at MJD 55217), the model with a single Sedov phase appears to fit the data reasonably well.
DISCUSSION
We successfully modeled the motion of the large-scale jets from three BH XRBs, MAXI J1820+070, MAXI J1535-571 and XTE J1752-223, with a dynamical blast-wave model based on external shocks. This physical model is found to provide an excellent description of the propagation of the ejecta in the ISM, regardless of whether or not we detect both ejected components, and it also allows us to place meaningful constraints on various parameters of the ejecta. After the application of this model to the jets of XTE J1550-564, H1743-322 and MAXI J1348-630 (Steiner & McClintock 2012; Steiner et al. 2012; Carotenuto et al. 2022), and considering the results shown in the previous section, it appears that all the large-scale jets display a deceleration consistent with a Sedov phase. The goodness of the fits obtained for all the sources confirms the validity of applying physical models derived from our knowledge of GRBs to XRBs (e.g. Wang et al. 2003), and highlights the potential of using the extensively developed set of theoretical GRB models (including their entire multi-wavelength emission) to explain even more features observed in the less-relativistic jets from XRBs, such as, for instance, the presence of forward and reverse shocks within the jet. We discuss in the following sections the constraints on the jet physical parameters and on the source environment that we obtained in this work, comparing them with our current knowledge of jets from BH XRBs.
Lorentz factor
We first discuss the constraints on the initial Lorentz factor Γ0 for the ejecta in our sample. It is generally difficult to constrain this parameter from the simple observation of the jet propagation, especially if the ejecta are significantly superluminal. The reason is that a source of significantly relativistic jets (with proper motions µapp and µrec) will usually be observed close to its maximum allowed distance Dmax = c (µapp µrec)^(-1/2), where Γ0 diverges (Fender et al. 1999; Fender 2003). Therefore, only lower limits on Γ0 are available for most of the sources displaying discrete ejecta (e.g. Fender 2003; Miller-Jones et al. 2006; Bright et al. 2020).
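To make the Dmax argument concrete, the snippet below evaluates it for a pair of illustrative (not measured) proper motions, together with the distance-independent β cos θ combination constrained by the same pair of measurements (e.g. Mirabel & Rodríguez 1994):

```python
# D_max = c / sqrt(mu_app * mu_rec), with proper motions given in mas/day.
# The example proper motions below are placeholders, not measured values.
import numpy as np

C_KM_S = 2.998e5
KPC_KM = 3.086e16
MAS_TO_RAD = np.pi / (180.0 * 3600.0 * 1000.0)
DAY_S = 86400.0

def d_max_kpc(mu_app_mas_day, mu_rec_mas_day):
    """Maximum allowed distance for intrinsically symmetric ejecta (Fender 2003)."""
    mu_app = mu_app_mas_day * MAS_TO_RAD / DAY_S   # rad/s
    mu_rec = mu_rec_mas_day * MAS_TO_RAD / DAY_S
    return C_KM_S / np.sqrt(mu_app * mu_rec) / KPC_KM

def beta_cos_theta(mu_app_mas_day, mu_rec_mas_day):
    """Distance-independent combination constrained by the two proper motions."""
    return (mu_app_mas_day - mu_rec_mas_day) / (mu_app_mas_day + mu_rec_mas_day)

print(d_max_kpc(70.0, 30.0), beta_cos_theta(70.0, 30.0))
```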
For MAXI J1820+070, we obtain an interesting estimate of the initial Lorentz factor of the jets: Γ0 = 2.6 +0.5 −0.4, which implies mildly relativistic ejecta. Such a constraint is consistent with the previous lower limit Γ0 > 2.1 (Bright et al. 2020) and, interestingly, this determination has been possible despite the fact that MAXI J1820+070 is located close to its Dmax = 3.1 kpc (Wood et al. 2021).

Table 1 (caption): Parameters of the blast-wave model applied in this work, inferred from the Bayesian fit described in Section 4. The values quoted are the median parameter and the 1σ confidence intervals derived from the MCMC run. The effective energy is defined as Ẽ0 = E0/(nISM ϕ^2) (see Section 3, Equation 8). †: The constraint on Γ0 for XTE J1752-223 strongly depends on the preferred ejection date (see Section 4.4).
In the case of MAXI J1535-571, we are also able to place an important constraint on the initial Lorentz factor of the approaching component, Γ0 = 1.6 +0.2 −0.2, implying relatively slow ejecta, traveling initially at ∼0.77c. The jets from MAXI J1535-571 appear to be among the least relativistic in the observed sample of discrete ejecta (Miller-Jones et al. 2006; Steiner & McClintock 2012; Steiner et al. 2012; Carotenuto et al. 2022). Interestingly, the ejecta from MAXI J1535-571 are the ones that propagate to the largest distance from the core (up to 0.8 pc). Launched with a high Ẽ0 (see Table 1), they are likely among the most energetic and the least relativistic jets observed so far, similar in nature to those of MAXI J1348-630, which also displayed large-scale ejecta propagating up to 0.6 pc from the core, with Γ0 = 1.85 +0.15 −0.12 (Carotenuto et al. 2021, 2022). Since it appears that the jets from MAXI J1348-630 and MAXI J1535-571 are among the most energetic observed so far, this likely implies that the mass content of the ejecta is probably the driving factor that determines the large distance to which the ejecta propagate, with M0 being the dominant factor in the (Γ0 − 1)M0c^2 kinetic energy term in Equation 1 (see also the discussion in Zdziarski & Heinz 2024). At the same time, again similarly to MAXI J1348-630, this determination of Γ0 is one of the most precise and robust (although model-dependent) constraints on the Lorentz factor of a jet from a BH XRB to date, thanks to the ideal combination of low Γ0 and low inclination angle, which is less affected by the degeneracy between θ and the source distance (Fender 2003, Fender et al., in prep.).
We can only provide a lower limit Γ0 > 3.4 for the ejecta of XTE J1752-223, which, however, directly depends on the chosen tej, and can be relaxed if we assume an earlier ejection date, as mentioned in Section 2.3. On the other hand, if tej is fixed closer to the state transition, our robust lower limit on Γ0 does not vary (see Section 4.4). However, if the ejecta were launched on MJD 55218, at the peak of the radio flare on the day of the HIMS-to-SIMS transition, they would have a most likely Γ0 ≃ 7.4, which would be the highest Lorentz factor ever observed for these objects and would challenge the common assumption that jets from BH XRBs are only mildly relativistic, unlike what is observed in AGN and GRBs (e.g. Jorstad et al. 2005; Ghirlanda et al. 2018). Observations of the ejecta closer to the core would have greatly helped to determine the initial Lorentz factor of this source, which is likely to be higher than the average among the available sample of ejecta. Overall, the data available so far appear to suggest that BH XRBs are able to accelerate relativistic jets in the mildly relativistic range 1 ≲ Γ0 ≲ 2, with a growing number of possible exceptions that suggest even faster jets (for instance MAXI J1820+070 and XTE J1752-223); this range appears to be similar to the range of bulk Lorentz factors generally inferred for compact jets (e.g. Casella et al. 2010; Saikia et al. 2019; Tetarenko et al. 2019, 2021; Zdziarski et al. 2022).
Inclination angle
The inclination angle that we obtain for MAXI J1820+070 is θ = 59.6° +1.0° −1.2°. We note that our posterior is still consistent with the chosen normal prior distribution θ = 64° ± 5° from Wood et al. (2021). However, the peak of the posterior is slightly lower than the peak of the prior, which is based on the first part of the jet motion. With the values obtained for Γ0 and θ, we can compute the Doppler factor for the two ejecta. The Doppler factor for the approaching component at launch is δapp = [Γ0 (1 − β0 cos θ)]^(−1) ≃ 0.7, while for the receding component it is δrec = [Γ0 (1 + β0 cos θ)]^(−1) ≃ 0.25. This implies that at the beginning of their motion both components are Doppler de-boosted, and their observed flux density is decreased with respect to the intrinsic one, respectively, by a factor δapp^(3−α) ≃ 0.3 and δrec^(3−α) ≃ 0.008, using the formalism for discrete jet components, in which the flux density follows Sν ∼ ν^α (e.g. Blandford et al. 1977), and a spectral index α = −0.6 (Espinasse et al. 2020).
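These Doppler factors follow directly from the fitted Γ0 and θ; a minimal check, using the values quoted here and, in the following paragraph, for MAXI J1535-571, is:

```python
# Doppler factors and flux (de-)boosting for discrete ejecta, S_obs/S_int = delta^(3 - alpha).
import numpy as np

def doppler_factors(gamma0, theta_deg):
    beta0 = np.sqrt(1.0 - gamma0**-2)
    cos_t = np.cos(np.deg2rad(theta_deg))
    d_app = 1.0 / (gamma0 * (1.0 - beta0 * cos_t))
    d_rec = 1.0 / (gamma0 * (1.0 + beta0 * cos_t))
    return d_app, d_rec

alpha = -0.6  # optically thin spectral index adopted in the text
for name, gamma0, theta in [("MAXI J1820+070", 2.6, 59.6), ("MAXI J1535-571", 1.6, 30.3)]:
    d_app, d_rec = doppler_factors(gamma0, theta)
    print(name, round(d_app, 2), round(d_rec, 2),
          round(d_app**(3 - alpha), 3), round(d_rec**(3 - alpha), 4))
```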
We constrain the inclination angle of the jet axis in MAXI J1535-571 to be θ = 30.3° +6.3° −6.3°. Already from the simple measurement of the jet proper motion and the source distance, Russell et al. (2019) strongly constrained the maximum jet inclination to θ < 45° (which is included in our choice of the prior on θ). Notably, such a value does not appear to be consistent with the inclination angle of the inner edge of the accretion disk obtained from NICER X-ray observations (Miller et al. 2018). The authors report an angle i = 67.4° ± 0.8° from the spectral fitting of the relativistic reflection component in the high-SNR X-ray spectrum obtained during the intermediate state (they also report a near-maximal BH spin of a = 0.994 ± 0.002, Miller et al. 2018). At the same time, our value is broadly consistent with the inclination angle (i = 37° +22° −13°) of the region emitting a narrow and asymmetric iron line (Miller et al. 2018). Interestingly, the authors explain such a difference in i with potential disk warping. The results of this comparison appear counter-intuitive, as we would generally expect jets to be launched along the direction of the BH spin, which, in turn, should be aligned with the inner edge of the accretion disk (Bardeen & Petterson 1975). However, at least one case in which jets were launched along the axis of a rapidly precessing inner disk has been observed (Miller-Jones et al. 2019). As in the case of MAXI J1348-630 (Carotenuto et al. 2022), a measurement of the orbital plane of the system would be useful to test the potential 3D alignment between the disk and the jet axis, taking into account that cases of disk/jet misalignment have been observed (Miller-Jones et al. 2019; Poutanen et al. 2022) and that this is also supported by GRMHD simulations (e.g. Liska et al. 2018). The Doppler factor for the approaching component at launch is δapp ≃ 1.9, implying a Doppler boosting by a factor δapp^(3−α) ≃ 10, while for the receding component we compute δrec ≃ 0.37, implying a de-boosting by a factor δrec^(3−α) ≃ 0.03, again adopting α = −0.6. Such a strong Doppler de-boosting easily explains the non-detection of the receding component, as its radio flux density was pushed below the ATCA and MeerKAT sensitivity limits for the available exposure times (Russell et al. 2019).

Table 2 (caption): Time delay ∆tej,X between the inferred ejection times tej and the possibly associated observed X-ray signatures. Here a positive ∆tej,X means that the jet is launched before the first appearance of the X-ray signature. The uncertainties on ∆tej,X are the same as the ones obtained for tej in this work and in Carotenuto et al. (2022). For MAXI J1820+070, we include both Component A (slow jet) and Component C (fast jet) as labeled in Wood et al. (2021), reminding the reader that in this work we only focus on Component C. The uncertainty on MAXI J1535-571 is due to the large uncertainty on tej and to the fact that Type-B QPOs might have been present between MJD 58016.8 and 58025 (Stevens et al. 2018).
Ejection time
The ejection time is a crucial piece of the puzzle in the current effort to reconstruct the precise sequence of events that lead to the formation and launch of discrete ejecta, which is still unclear, and modelling the jet motion is a reliable way of obtaining such information.
In the case of MAXI J1820+070, we infer an ejection date tej = MJD 58305.96 +0.02 −0.02, which is completely consistent with the most up-to-date estimation reported in Wood et al. (2021). We note that the quality of the data from Bright et al. (2020), especially the dense VLBI monitoring at the hard-to-soft state transition, and the additional significant improvements obtained with the new dynamic phase-center tracking technique adopted in Wood et al. (2021), already allowed the authors to constrain the ejection time with great accuracy, and it is worth noting that a precision of roughly 30 minutes has rarely been achieved for this type of event. It is also worth noting that the obtained tej places the jet launch close to the peak of the radio flare observed at 15.5 GHz with AMI-LA (Bright et al. 2020; Homan et al. 2020). Moreover, this jet appeared to be launched approximately 6 hours after the first detection of Type-B QPOs in this outburst, defining the beginning of the soft-intermediate state (Homan et al. 2020). However, it is important to remark that Wood et al. (2021) associate the radio flare and the detection of Type-B QPOs with the ejection of a different pair of ejecta (with the approaching jet labeled as Component A), which had an intrinsic speed β ∼ 0.3, were not detected beyond the milli-arcsec scale and were ejected ∼9 h before the fast large-scale jets that we consider in this work (Wood et al. 2021). Specifically, Component A displayed an elongated structure and its ejection was inferred to last ∼6 h, hence partially overlapping with the first appearance of Type-B QPOs in this source. Interestingly, Wood et al. (2021) do not identify any X-ray or radio flare counterpart to Component C. It has been suggested that Type-B QPOs could correspond to the time of jet launching (e.g. Fender et al. 2009; Miller-Jones et al. 2012), implying a strong causal relation between the two phenomena that has been particularly highlighted in Homan et al. (2020). However, for MAXI J1820+070 such a link appears to be stronger with Component A than with Component C. Therefore, if this timing signature is indeed linked to jet ejections, it is unclear at the moment whether there is any connection with the fast ejecta observed to propagate at large scales. Such a connection is also not confirmed for other sources (Miller-Jones et al. 2012; Russell et al. 2019; Carotenuto et al. 2021), for which in general the ejections are inferred to happen from hours to days before the detection of Type-B QPOs, as can also be seen in Table 2 (which includes data for MAXI J1348-630 from Carotenuto et al. 2022).
Regarding MAXI J1535-571, we infer tej = MJD 58017.4+4.0 −3.8 , with a much larger uncertainty with respect to MAXI J1820+070, mostly due to the lack of early-time VLBI observations of the ejecta.Interestingly, our new tej places the ejection in the soft intermediate state (Tao et al. 2018), updating the previous estimation in the hard-intermediate state (Russell et al. 2019).Given the large radio flare reported on MJD 58017.4 by Russell et al. (2019), it is more likely that the ejecta was launched before this date than later, despite our statistical uncertainty being almost symmetrical around the same MJD 58017.4.Interestingly, our preferred ejection date is now roughly 4 days after the well-monitored quenching of the compact jets, which, from the tracking of the evolution of the break frequency from the infrared to the radio bands, appeared to be switched off over a timescale of 1 d on MJD 58013 (Russell et al. 2020).If such a result is confirmed, it would imply that discrete ejecta do not result immediately from the destruction of the compact jets, but that instead they are formed and launched sometime afterwards (see also Echiburú-Trujillo et al. 2024 for MAXI J1820+070).
For MAXI J1535-571, a tentative detection of Type-B QPOs with NICER was reported in Stevens et al. (2018).Specifically, possible Type-B QPOs were detected when stacking the NICER data in the range between MJD 58016.8 and 58025, but it is worth remarking that the authors could not clearly differentiate between Type-A and Type-B QPOs in the data (Stevens et al. 2018).The reported QPO interval overlaps with our inferred ejection date tej = MJD 58017.4+4.0 −3.8 , but, given the 4-d uncertainty, we cannot precisely conclude on the exact sequence of events.However, if Type-B (or Type-A) QPOs were present during the whole stacked interval and lasted until MJD 58025, then we could at least conclude that they persist after the ejecta are produced.
Unfortunately, we are unable to provide constraints on the ejection date for XTE J1752-223, given that tej is a fixed parameter in our modelling. This is also due to the fact that, in the four available detections, the jets were already strongly decelerating (Yang et al. 2010; Miller-Jones et al. 2011). The radio flare peaking on MJD 55218 and reported in Brocksopp et al. (2013) suggests that the ejection might have happened on that day or before, while it is unlikely to have happened at later times. Our choice of tej on MJD 55217 (one of the possible options) places the ejection in the hard-intermediate state. Type-B QPOs were instead reported on MJD 55218 and 55220 (Shaposhnikov et al. 2010), possibly again suggesting that these variability features could persist after the launch of discrete ejecta. However, the data on XTE J1752-223 do not allow us to draw any firm conclusion on this particular aspect of the jet production.
A summary of the time delay ∆tej,X between the jet ejection date and the first appearance of Type-B QPOs for the sources considered in this work is reported in Table 2.We mention that, rather than the appearance/disappearance of QPOs, it has been recently proposed that jet ejections in GRS 1915+105 could be linked to a change in the coronal geometry observed through changes in the Type-C QPO frequency and a change of sign in the phase lags at the QPO frequency, along with the simultaneous radio emission (Méndez et al. 2022).This result (see also García et al. 2022) appears to be also consistent with previous findings on the non-trivial geometry of the corona in MAXI J1820+070 (Kara et al. 2019) and MAXI J1348-630 (García et al. 2021).
Jet kinetic energy and external ISM density
We discuss in this section our results for the jet kinetic energy.Due to the degeneracy between E0, ϕ and nISM (see Section 3), only the effective energy Ẽ0 = E0/nISMϕ 2 can be independently constrained by our fit.This parameter can be easily interpreted as the kinetic energy required for a ϕ = 1°jet to propagate to the observed distance in a 1 cm −3 ISM.Considering our sample, we constrain an effective energy of Ẽ0 = 2.6 +0.4 −0.4 × 10 46 erg for MAXI J1820+070, a higher value Ẽ0 = 5.8 +16.6 −4.0 × 10 48 erg for MAXI J1535-571 and a lower value Ẽ0 = 1.1 +1.2 −0.6 × 10 45 erg for XTE J1752-223.The highest value obtained for MAXI J1535-571 can be explained by the fact that, assuming the same nISM, the jets from MAXI J1535-571 propagate up to a larger angular distance with less evident deceleration when compared to the other two sources.
Through the constraints on Ẽ0, we plot in the first three panels of Figure 4 the explicit dependence of E0 on nISM for different values of the jet opening angle. Given that no independent information is available regarding the exact values of nISM and ϕ for any of our sources, it is not possible to provide a preferred estimate of the jet kinetic energy. Regarding the jet half-opening angle, the value ϕ = 1° is generally adopted in the literature (Steiner & McClintock 2012; Steiner et al. 2012; Carotenuto et al. 2022). Such a value is consistent with the large number of observational upper limits available for these jets (e.g. Kaaret et al. 2003; Miller-Jones et al. 2006; Russell et al. 2019; Carotenuto et al. 2021; Wood et al. 2021; Williams et al. 2022). Moreover, for cases in which these jets have been resolved in radio or X-rays, the inferred opening angles were of the same order of magnitude (e.g. Bright et al. 2020; Espinasse et al. 2020; Chauhan et al. 2021). Therefore, we assume ϕ = 1° as a reasonable choice, but we also show our solutions for narrower or (less likely) wider jets in Figure 4.
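The conversion behind the curves of Figure 4 is a one-liner once a normalization for Ẽ0 is chosen; the sketch below assumes the "ϕ in degrees, nISM in cm^-3" normalization implied by the interpretation of Ẽ0 given above (an assumption on our part), with the effective energies quoted in this section:

```python
# E0(n_ISM, phi) = E0_eff * n_ISM * phi^2, with n_ISM in cm^-3 and phi in degrees,
# i.e. E0_eff is the kinetic energy of a phi = 1 deg jet in a 1 cm^-3 medium.
import numpy as np

E0_EFF = {              # effective energies quoted in the text [erg]
    "MAXI J1820+070": 2.6e46,
    "MAXI J1535-571": 5.8e48,
    "XTE J1752-223": 1.1e45,
    "H1743-322": 1.0e47,
}

def jet_kinetic_energy(e0_eff, n_ism_cm3, phi_deg=1.0):
    return e0_eff * n_ism_cm3 * phi_deg**2

n_grid = np.logspace(-4, 0, 5)   # 1e-4 to 1 cm^-3
for source, e_eff in E0_EFF.items():
    print(source, [f"{jet_kinetic_energy(e_eff, n):.1e}" for n in n_grid])
```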
On the other hand, much less information is available on the density of the environment of BH XRBs. Two other BH XRBs that displayed large-scale decelerating jets, XTE J1550-564 and MAXI J1348-630, were inferred to be located in a low-density ISM cavity, for which a reasonable external ISM density of 1 cm^-3 was assumed (Steiner & McClintock 2012; Carotenuto et al. 2022). Applying the same model to those jets allows us to place independent constraints on the density jump between the interior and the exterior of the cavity, but in these cases the value of the internal density still relies on the aforementioned assumption on the external ISM density. In the following, we discuss how our Ẽ0 solutions, coupled with independent constraints on the jet energy, allow us to place some very valuable and informative constraints on nISM for jets decelerating in a uniform ISM. Before that, we note that the jets launched from H1743-322 during its 2003 outburst are considered to be decelerating in a uniform ISM, and Steiner et al. (2012) report an effective energy Ẽ0 = 1.0 +2.2 −0.7 × 10^47 erg. Therefore, we choose to include H1743-322 in our sample for the following considerations, and the dependence of E0 on nISM is shown in the fourth panel of Figure 4.
While the kinetic energy of these jets cannot be directly measured, it is possible to estimate an independent upper limit on the total energy that the system can provide to the jets during the ejection. In fact, under several assumptions, it is fairly straightforward to estimate an upper limit on the jet energy by considering the power available from accretion during the timescale associated with the ejection, and assuming that no energy is transferred to the jets after the launch. If the jet is accelerated during a timescale ∆t, we can write, for a single component, the maximum available energy Emax = Pjet ∆t (Equation 9), where Pjet is the jet power in the rest frame of the source, and can be expressed as the fraction ηjet of the accretion power Ṁc^2, which can in turn be traced by the simultaneous X-ray bolometric luminosity LX corrected by the radiative efficiency of the accretion flow ηrad (as LX = ηrad Ṁc^2). As generally done in the literature, we assume that the duration of the ejection phase ∆t should be roughly equivalent to the duration ∆tobs of the rising phase of the radio flare observed at the moment of launch, which is measurable through radio monitoring with adequate cadence. We note that ∆t can be shorter than ∆tobs if the rise is due to synchrotron self-absorption. Furthermore, in Equation 9 we adopt the standard assumption ηrad = 0.1 (e.g. Frank et al. 2002; Coriat et al. 2012) and, since we are interested in an upper limit on Ejet, we consider the (roughly) maximum possible jet power by simply assuming ηjet = 1. We note that ηjet could also exceed 1 in the presence of a magnetically arrested accretion disk (MAD) and with a contribution from the BH spin (e.g. Bisnovatyi-Kogan & Ruzmaikin 1974; Narayan et al. 2003; Tchekhovskoy et al. 2011; McKinney et al. 2012; Davis & Tchekhovskoy 2020).
After obtaining an upper limit Emax on the jet energy, it is possible to combine this information with the constraints on Ẽ0 to place an upper limit on nISM, since Equations 8 and 9 imply nISM ≤ Emax/(Ẽ0 ϕ^2) (Equation 10). Following this approach, we now discuss the constraints on the external ISM density for each of the sources considered in our sample.
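As a rough numerical illustration of Equations 9 and 10 (with placeholder values for LX and the flare rise time rather than the measurements adopted for Table 3, and assuming ηjet = 1, ηrad = 0.1 and ϕ = 1° as in the text):

```python
# Upper limit on the jet energy from the available accretion power (Equation 9),
# and the resulting upper limit on n_ISM (Equation 10). Input values are placeholders.
def e_max_erg(l_x_erg_s, dt_rise_s, eta_jet=1.0, eta_rad=0.1):
    """Maximum energy available to one ejection: E_max = (eta_jet / eta_rad) * L_X * dt."""
    return (eta_jet / eta_rad) * l_x_erg_s * dt_rise_s

def n_ism_upper_limit(e_max, e0_eff, phi_deg=1.0):
    """n_ISM <= E_max / (E0_eff * phi^2), with phi in degrees and n_ISM in cm^-3."""
    return e_max / (e0_eff * phi_deg**2)

# Illustrative numbers loosely inspired by MAXI J1820+070 (placeholders, not measurements):
l_x = 8e37            # bolometric X-ray luminosity at the time of ejection [erg/s]
dt = 6.7 * 3600.0     # rise time of the associated radio flare [s]
e_max = e_max_erg(l_x, dt)
print(f"E_max ~ {e_max:.1e} erg, n_ISM < {n_ism_upper_limit(e_max, 2.6e46):.1e} cm^-3")
```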
MAXI J1820+070
Starting with MAXI J1820+070, we consider the radio flare produced by the ejection and observed on MJD 58306 with AMI-LA, which was characterized by a rising timescale of 6.7 h and a peak flux density of ≃50 mJy at 15.5 GHz (Bright et al. 2020; Homan et al. 2020). We also consider the simultaneous X-ray luminosity from Fabian et al. (2020), extrapolated using the measured spectral parameters to the 0.5-200 keV energy range using the multicomponent feature of the webpimms tool. Using Equation 9, we obtain Emax ≃ 2 × 10^43 erg, which is shown as a horizontal dash-dotted orange line in Figure 4 and reported in Table 3. Notably, such an upper limit is broadly consistent with the jet internal energy Eint measured by Bright et al. (2020), who obtained values between 10^41 and 10^43 erg thanks to a reliable estimation of the size of the jet emitting region 90 days after the jet launch. In addition, Espinasse et al. (2020) obtained Eint ≃ 5 × 10^41 erg by resolving the ejecta in the X-ray band with Chandra and measuring the broadband radio-to-X-ray spectrum. Both these results were obtained with minimum energy calculations, assuming equipartition between electrons and magnetic fields in the jet plasma (Longair 2011).

Figure 4 (caption): The dependence of E0 on nISM is obtained through the constraints on the effective energy Ẽ0 and is shown for the four sources considered in Section 5.4. The horizontal dot-dashed orange line represents the maximum energy Emax available to the jet from the simultaneous accretion power, which sets a strong upper limit on nISM for all sources except XTE J1752-223. The dotted black line shows instead the minimum energy Eflare in the jet frame inferred from the radio flare associated with the ejection, implying that such a value is likely a large underestimation of the jet kinetic energy, given that it would require an extremely low value of nISM for the jet to propagate up to the observed distances. Regions excluded by our constraints on Emax and Eflare are shaded in grey.

Table 3 (caption): Source and jet parameters used for the calculations discussed in Section 5.4. From left to right, we list the name of the source, the peak bolometric X-ray luminosity LX used to compute the available accretion power, the derived maximum kinetic energy Emax available to the jet, the duration ∆tobs and the peak flux density Sν,peak at the frequency ν of the radio flare used to compute the jet-frame minimum energy Eflare from equipartition, and the inferred upper limits on the external nISM and on the jet mass M0. Given the multiple sources of uncertainty in the computation of these numbers, we associate a conservative 30% uncertainty to LX and Emax, and a 50% uncertainty to Eflare. The references from which these data are obtained are reported in Section 5.4.
Using Equation 10, we can place the following constraint on the ISM density surrounding MAXI J1820+070, assuming ϕ = 1°:

nISM,J1820 ≲ 10^−3 cm^−3, (11)

with the upper limit being relaxed in case the jet is significantly narrower than 1° (which is unlikely, given that the jet appeared to be resolved in one of the X-ray detections reported in Espinasse et al. 2020). We also note that Tetarenko et al. (2021) measured ϕ = 0.45° (+0.13°, −0.11°) for the compact jets in the same source. Interestingly, this result on nISM appears to place MAXI J1820+070 in a low-density region filled with hot/coronal-phase ISM (Cox 2005), similarly to MAXI J1348-630 and XTE J1550-564 (Steiner & McClintock 2012; Carotenuto et al. 2022), but in this case the proper motion data can be adequately described by the propagation in a uniform, low-density ISM. If MAXI J1820+070 is in a low-density cavity, it is possible that these jets were not sufficiently energetic to travel up to the cavity "wall", if present, or, alternatively, the ISM around MAXI J1820+070 might have a much smoother distribution compared to the two sources mentioned before. Finally, we can also obtain a lower limit on the jet kinetic energy, which is represented by the internal energy inferred from the radio flare observed at the moment of ejection, when applying minimum energy calculations (e.g. Fender 2006). The internal energy E_flare,obs in the observer frame is computed considering that the flare spectrum evolves from optically thick to optically thin, under the assumption that such evolution is due to decreasing optical depth to synchrotron self-absorption (Fender & Bright 2019). As in most of our cases, if the flare has been monitored at a single frequency ν, we can write the minimum internal energy accordingly (Equation 12), where S_ν,peak is the peak flux in mJy and D is the source distance in kpc. We note that this approach has the advantage of being independent of the flare rising timescale, and it can be applied to any flare for which there is evidence for self-absorption and a measurement of the peak flux is available. Most of the flares from BH XRBs detected so far have shown this particular evolution (e.g. Tetarenko et al. 2017; Russell et al. 2019; Carotenuto et al. 2021; Fender et al. 2023). Adopting the formalism of Fender & Bright (2019), we compute the same energy in the jet frame and account for the relativistic bulk motion (Equation 13), where δ is the Doppler factor for the approaching jet, and we rely on the assumption that α = 0 at the time of peak flux (Fender & Bright 2019). We further note that this lower limit on E0 gives the lowest jet mass M0 = E0/(Γ0 − 1)c^2 and corresponds to a jet composition of pure e± pairs. For MAXI J1820+070, using Γ0 and θ from the fit and S_ν, ν, ∆t_obs from the AMI-LA flare, we obtain, through Equation 13, E_flare,RF ≃ 9 × 10^37 erg, which is also shown with a black dotted line in Figure 4 and reported in Table 3. Given the significant uncertainties associated with this method, we associate a conservative 50% error to these estimations. We note that, from Figure 4, a jet with such kinetic energy would necessarily be propagating in an extremely low-density environment, with nISM ≪ 10^−4 cm^−3. It is reasonable to suggest that the E_flare and Emax lines enclose the physical parameter space for the ejecta, and the true (E0, nISM) value should lie between the two horizontal lines.
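The frame conversion above uses the standard Doppler factor for the approaching component. The sketch below only computes δ = 1/[Γ(1 − β cos θ)] and does not reproduce the full Fender & Bright (2019) minimum-energy formula; the inclination value in the example is an assumption made only for illustration, not a fitted quantity from this section.

```python
import math

# Doppler factor for the approaching jet: delta = 1 / (Gamma * (1 - beta * cos(theta))).

def doppler_factor(Gamma, theta_deg):
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)   # bulk speed in units of c
    return 1.0 / (Gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

# Example: Gamma0 = 2.6 (the MAXI J1820+070 fit value quoted in the Conclusions) and an
# illustrative inclination of 63 deg (an assumption made only for this example).
print(doppler_factor(Gamma=2.6, theta_deg=63.0))
```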
MAXI J1535-571
Similarly to MAXI J1820+070, we consider the radio flare associated with the jet ejection to estimate the maximum and minimum energies available to the jet. An upper limit on the jet kinetic energy can be obtained with Equation 9, by considering the radio flare produced by the ejection of the S2 component (Russell et al. 2019). Such flare, observed with MeerKAT and ATCA on MJD 58017, had an approximate rising timescale of ≃ 24 h and a peak flux density of ≃ 600 mJy at 1.3 GHz. At the same time, we consider the simultaneous X-ray luminosity from Tao et al. (2018), extrapolated using the measured count rates and reported spectral parameters to the 0.5-200 keV energy range using webpimms. We infer a maximum jet energy of Emax ≃ 5 × 10^44 erg, which, assuming ϕ = 1°, leads to a more stringent constraint on the ISM density surrounding MAXI J1535-571:

nISM,J1535 ≲ 10^−4 cm^−3. (14)

This upper limit is roughly one order of magnitude lower than the one obtained for MAXI J1820+070, and also lower than what was inferred for MAXI J1348-630 and XTE J1550-564 (Steiner & McClintock 2012; Carotenuto et al. 2022; Zdziarski et al. 2023). This is consistent with the picture of the jets from MAXI J1535-571 propagating up to a larger distance without the abrupt deceleration observed in the two latter sources. Considering again the radio flare, we can also use the rising timescale, peak flux density and observing frequency to compute a minimum energy of E_flare,RF ≃ 10^39 erg in the jet frame, with the same procedure outlined in the previous section and using Equation 13. Therefore, the true kinetic energy of the jets from MAXI J1535-571 likely lies in the interval between 10^40 and 10^44 erg, which, for instance, is in line with what has been estimated for the ejecta from GRS 1915+105 (Mirabel & Rodríguez 1994; Fender et al. 1999; Zdziarski 2014). All the constraints are shown in Figure 4 and reported in Table 3.
XTE J1752-223
We apply the same method used above to the jets launched by XTE J1752-223. First, we consider the radio flare observed with ATCA on MJD 55217, characterized by a rising timescale of ≃ 24 h and a peak flux density of 20 mJy at 5.5 GHz (Brocksopp et al. 2013).
Including the simultaneous X-ray bolometric luminosity reported in the same paper and extrapolated to the 0.5-200 keV range, we use Equation 9 to compute again the maximum energy available to the jet, Emax ≃ 5 × 10^44 erg, represented in the third panel of Figure 4. The application of Equation 10, assuming ϕ = 1°, yields an upper limit on the external ISM density which is below the standard Galactic nISM ≃ 1 cm^−3, but does not constrain the environmental density as stringently as in the cases of MAXI J1820+070 and MAXI J1535-571. On the other hand, given the short distance up to which the jets from XTE J1752-223 propagate (see Figure 3), this result would be consistent with a scenario in which the ejecta have energies similar to the other sources, but travel in a much denser environment.
Considering again the radio flare, we combine the rising timescale, peak flux density and observing frequency to estimate a minimum energy of E_flare,RF ≃ 10^37 erg with the same method as before (and Equation 13). Since we do not have a preferred value for the initial Lorentz factor, we assume Γ0 = 3.5, consistent with the lower limit from the fit. Again, we associate a conservative 30% uncertainty to this estimation, and we report all values in Table 3. In particular, we mention that the main source of uncertainty is the duration of the ejection ∆t, which could be overestimated in case of a sparse radio monitoring. Nevertheless, the minimum and maximum energies, together with the low effective energy, appear to point to a denser environment with respect to the other sources. We note that XTE J1752-223 displayed a complex flaring activity during the 2010 outburst, with rapid oscillations between the intermediate and the soft state, and with the likely production of additional, undetected ejecta (Brocksopp et al. 2013). Hence, it might also be possible that jets from this source are intrinsically less energetic than the single pair of ejecta displayed by MAXI J1348-630 or XTE J1550-564 (with the counter-example of GRS 1915+105, Fender & Pooley 2000). Such behavior has also been observed in a significant number of other sources, which are inferred to produce multiple subsequent ejecta, which are, however, rarely detected and spatially resolved (e.g. Homan et al. 2001; Brocksopp et al. 2001, 2002; Fender et al. 2009; Tetarenko et al. 2017; Carotenuto et al. 2021; Fender et al. 2023). As of today, it is unclear whether these multiple ejecta that should result from the complex flaring activity are not detected due to their lower energy content, or for different reasons, possibly linked to the source environment, which might be affected by the previous ejecta.
H1743-322
Lastly, we consider H1743-322 and the radio flare associated with the ejecta, which was observed to peak on MJD 52768 with the VLA (McClintock et al. 2009). The flare had a rising timescale of ≃ 24 h, with a peak flux density of ≃ 35 mJy at 4.8 GHz. Combining this information with the extrapolated X-ray bolometric luminosity yields a maximum energy available to the jet of Emax ≃ 1 × 10^44 erg (Equation 9), represented in the fourth panel of Figure 4, which then translates through Equation 10 into the following upper limit on the ISM density:

nISM,H1743 ≲ 10^−3 cm^−3. (16)

Assuming a half-opening angle of 1°, we confirm the requirement for a low-density environment, as already pointed out by Hao & Zhang (2009) and Steiner et al. (2012). Considering the radio flare, and again assuming Γ0 = 3 and an inclination angle θ = 75° (Steiner et al. 2012), the jet in its rest frame has a minimum energy of E_flare,RF ≃ 7 × 10^39 erg from Equation 13.
Ejection duration
As an alternative way to represent the constraints obtained on the density of the ISM surrounding our sources, we can use Equation 9 to compute the ejection duration associated with the measured LX for any possible value of E0, which is then linked to a specific value of nISM through the definition of Ẽ0. The results are shown in Figure 5 for different values of ϕ. Here, the orange dash-dotted line represents the ejection duration used in the previous subsections to estimate the maximum and minimum energies available to the ejecta. Considering our constraints on Ẽ0, we show that, for MAXI J1820+070, MAXI J1535-571 and H1743-322, if the jet were to propagate in a denser environment with respect to the upper limits reported in Table 3, the ejection would have required a sustained supply of accretion power Ṁ c^2 over timescales larger than the ones associated with the radio flares by orders of magnitude. For instance, if the ejecta from MAXI J1535-571 were propagating in a 10^−2 cm^−3 ISM with ϕ = 1°, to reach the same distance the system should have supplied energy to the jet at the measured rate for more than 1000 h, which is longer than the time interval between the beginning of the outburst and the inferred jet ejection.
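A small sketch of the argument behind Figure 5 (our own illustration, with placeholder numbers): for a candidate kinetic energy E0, the time needed to supply it at the maximum jet power Pjet = ηjet LX/ηrad follows directly from Equation 9.

```python
# Required ejection duration for a candidate jet energy E0 at the observed L_X,
# with P_jet = eta_jet * L_X / eta_rad (Equation 9 rearranged). Placeholder values.

def ejection_duration_hours(E0, L_X, eta_jet=1.0, eta_rad=0.1):
    P_jet = eta_jet * L_X / eta_rad       # maximum jet power in erg/s
    return E0 / P_jet / 3600.0            # duration in hours

# E0 ~ 1e46 erg at L_X ~ 1e38 erg/s would require ~2800 h of sustained accretion
# power, far longer than the observed flare rise times of hours to ~1 day.
print(ejection_duration_hours(E0=1e46, L_X=1e38))
```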
It is crucial to remark that these arguments strongly rely on the assumption that the rising timescale of the radio flares is a good proxy for the true ejection duration, to which we have no direct observational access. We note that this is not true for models that consider continuous jets instead of discrete ejections. In a continuous jet model, the resolved radio knots that we observe are believed to be caused by internal shocks between plasma shells accelerated at different speeds (Kaiser et al. 2000; Jamil et al. 2010; Malzac 2013). While this does not decrease the total amount of energy required for the jet, it relaxes the requirement on the ejection timescale, since the majority of the energy is stored in the material of the unseen continuous jet (Kaiser et al. 2000).
BH XRBs in low-density environments
The propagation of the ejecta considered in this work can be adequately described with a single deceleration phase in a homogeneous environment, with an assumed constant ISM density. For three out of the four sources considered in the previous section, combining the upper limits on the maximum available energy with the strong constraints on the effective energy provides us with robust upper limits on the external ISM density, implying that MAXI J1820+070, MAXI J1535-571 and H1743-322 are all harbored in a low-density region of the ISM. For XTE J1752-223, the results are instead not conclusive. At the same time, the motion and the light curve evolution of the ejecta observed from MAXI J1348-630 and XTE J1550-564 also suggest the presence of low-density ISM cavities in which these sources might be embedded, with internal nISM = 10^−3–10^−2 cm^−3 (Hao & Zhang 2009; Steiner & McClintock 2012), and with either a sharp border (Carotenuto et al. 2022) or a more physical, smooth transition layer (Zdziarski et al. 2023). Despite this difference, it is remarkable to note that for all of the sources displaying large-scale decelerating jets it has been necessary to invoke an environment with a density up to 4 orders of magnitude lower than the canonical ISM density of 1 cm^−3. As already argued in Heinz (2002) for GRS 1915+105 and GRO J1655-40, a low-density environment seems to be a necessary requirement for the jet to propagate up to such large distances (fractions of a pc) from the central compact object. This highlights the importance and the great potential of using current and future observations of large-scale jets as probes of the environment surrounding BH XRBs.
Despite the emerging scenario, it is currently unclear how such low-density environments might be produced. BH XRBs might be preferentially located in regions occupied by the hot ISM phase (e.g. Ferrière 2001), or, more likely, the low-density region/cavity could have formed from the feedback of the system itself. The BH surroundings might have been evacuated by the supernova explosion that created the compact object or by a different type of outflow. Several possibilities (with different degrees of plausibility, see the discussion in Hao & Zhang 2009) include the winds from the progenitor star (e.g. Gaensler et al. 2005), winds from the companion star (e.g. Sell et al. 2015) and winds from the accretion disk (e.g. Miller et al. 2006; Fuchs et al. 2006; Muñoz-Darias et al. 2016), in addition to the previous activity of the jet itself, whether collimated (e.g. Gallo et al. 2005; Russell et al. 2007; Heinz et al. 2007, 2008; Yoon et al. 2011; Coriat et al. 2019) or, as more recently proposed, uncollimated (Sikora & Zdziarski 2023). Interestingly, such a scenario appears to be supported by laboratory experiments testing multiple supersonic plasma ejections in a drift chamber (Kalashnikov et al. 2021). After the passage of each ejection, a low-density region, called a "vacuum trace", can be observed, which causes the subsequent ejections to encounter much less environmental resistance in their propagation.
It is worth mentioning that the properties, distribution and chemistry of the ISM around these systems can be investigated through independent observations at different wavelengths, of which a prime example is the mapping of the molecular line emission from the material shocked by the jets, as already done for GRS 1915+105, GRS 1758-258 and 1E 1740.7-2942 (Mirabel et al. 1998; Chaty et al. 2001; Tetarenko et al. 2018, 2020). Alternatively, it is possible to search for optical Hα emission from the same shocked ISM and independently infer its density from the measurement of the integrated luminosity in the diagnostic line and from realistic assumptions on the shock velocity (Dopita & Sutherland 1996; Russell et al. 2007). Therefore, the importance of these approaches resides also in their potential for partially solving the E0, ϕ, nISM degeneracy that is currently present in our model.
Jet mass
The mass of the ejecta is a parameter of great importance in the study of these systems, and it has not yet been constrained with sufficient accuracy for any source. A mass measurement can directly give us information on the long sought-after composition of the jets from BH XRBs, which is still an open problem. While we have evidence for baryons in the jets from SS 433 from the detection of Doppler-shifted iron emission lines in the X-rays (Kotani et al. 1996; Migliari et al. 2002), no information is available from the synchrotron spectra of the other jets, both compact and discrete (Fender 2006), although there have been attempts to model compact jet SEDs with hadronic-leptonic models (e.g. Romero et al. 2005; Pepe et al. 2015; Romero et al. 2017; Kantzas et al. 2021). In this context, finding evidence for massive ejecta could strongly suggest the presence of cold protons, balanced by a long tail of non-relativistic electrons (Carotenuto et al. 2022). This is also supported by Zdziarski et al. (2023), who argue that a massive jet is unlikely to be pair dominated, and by more recent results reported in Zdziarski & Heinz (2024), which suggest the existence of a fundamental difference in composition between compact jets, pair-dominated, and discrete ejecta, which should instead have a baryonic composition. Evidence of protons in these jets would bolster arguments that BH XRBs could represent a class of PeV cosmic ray sources (e.g. Fender et al. 2005; Cooper et al. 2020). However, the mechanism by which protons are loaded into the jets remains unclear, as they could be already present at the jet formation or could be entrained during the early phases of the jet motion (see for instance O'Riordan et al. 2018 and Kantzas et al. 2023).
In this work, we cannot directly provide estimations of the masses of the ejecta for the four sources considered, but we can still place interesting upper limits by writing M0 ≤ Emax/[(Γ0 − 1)c^2] and using the best values of Emax and Γ0 from the sections above.
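The following sketch evaluates this upper limit for MAXI J1820+070 with the Emax and Γ0 values quoted in this paper; it is a numerical illustration only.

```python
# Jet-mass upper limit: M0 <= E_max / ((Gamma0 - 1) * c^2), here for MAXI J1820+070.

C_LIGHT = 2.998e10     # speed of light in cm/s
M_SUN = 1.989e33       # solar mass in g

def jet_mass_upper_limit(E_max, Gamma0):
    """Upper limit on the ejecta mass in grams."""
    return E_max / ((Gamma0 - 1.0) * C_LIGHT**2)

M0 = jet_mass_upper_limit(E_max=2e43, Gamma0=2.6)
print(f"M0 < {M0:.1e} g = {M0 / M_SUN:.1e} Msun")   # ~1.4e22 g, i.e. ~7e-12 Msun
```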
With the values reported in Table 1, we find the ejecta launched by MAXI J1820+070 to have a mass upper limit of ≲ 8 × 10^−12 M⊙. This estimation is consistent with the ≃ 10^20 g obtained with minimum energy calculations, assuming one proton per electron (Espinasse et al. 2020), and it is also consistent with jet masses estimated in other sources with the same method (e.g. Fender et al. 1999; Gallo et al. 2004).
Regarding MAXI J1535-571, we infer a higher upper limit of ≲ 5 × 10^−10 M⊙. It is likely that the ejecta from MAXI J1535-571 have a larger density contrast with the surrounding ISM than those of MAXI J1820+070, as this is probably required in order to propagate to larger distances. Notably, the ejecta from MAXI J1535-571 were unresolved in all the ATCA and MeerKAT detections (Russell et al. 2019), hence they are not expected to have a volume significantly larger than the jet from MAXI J1820+070. Assuming a most likely value Γ0 = 3.5, the mass of the jets from XTE J1752-223 is inferred to have an upper limit of ≲ 1 × 10^−10 M⊙. In this case, while the mass upper limit lies in between the ones obtained for the two previous sources, the early jet deceleration observed in XTE J1752-223 likely points towards a denser environment (see Section 5.4). Similarly, by assuming Γ0 = 3 in the case of H1743-322, we can constrain the jet mass to an upper limit of ≲ 3 × 10^−11 M⊙. All the inferred upper limits on M0 are reported in Table 3. Assuming a high accretion rate of ∼10^18 g s^−1 (roughly 0.1 L_Edd), typical of the hard-to-soft state transition (Maccarone 2003), and assuming that during the ejection the majority of the accreted mass is channeled into the jets, it would take from minutes to hours to accumulate a jet mass in the range 10^20–10^22 g. On the other hand, it is generally known that the mass outflow rate for thermal winds can be up to ten times higher than the simultaneous accretion rate (e.g. Higginbottom & Proga 2015; Dubus et al. 2019). This appears to be consistent with the general picture in which, for BH XRBs, most of the outflow mass is carried by winds, while most of the kinetic feedback is carried by jets (Fender & Muñoz-Darias 2016).
The sample of large-scale jets
After discussing the results of the modeling work presented in this paper, we can consider for the first time the entire sample of large-scale jets detected so far and for which we have information on the source parameters, with the aim of looking for possible interesting and informative trends or correlations. The current sample includes the three sources considered in this work, namely MAXI J1820+070, MAXI J1535-571 and XTE J1752-223, with the addition of MAXI J1348-630 (Carotenuto et al. 2021, 2022), XTE J1550-564 (Sobczak et al. 2000; Hannikainen et al. 2001; Wu et al. 2002; Corbel et al. 2002; Steiner & McClintock 2012) and H1743-322 (McClintock et al. 2009; Steiner et al. 2012). We further add GX 339-4, which displayed large-scale jets in 2003 and for which, albeit without detecting deceleration, we have a lower limit on the initial Lorentz factor Γ0 > 2.3 (Gallo et al. 2004).
We first compare the initial jet speed (Γ0) with the measured dimensionless BH spin a*, as can be seen in panel (a) of Figure 6. On visual inspection, it is not clear whether there is any evidence of a correlation between the two parameters, and we note that we only have three estimations of Γ0 in our sample, while the rest are lower limits. Furthermore, we note that these spin values are obtained with different methods that often yield different results (e.g. Reynolds 2021; Draghis et al. 2023). Specifically, the spin of H1743-322 is obtained through continuum fitting (Steiner et al. 2012), while the spins of MAXI J1348-630, XTE J1752-223, MAXI J1535-571 and GX 339-4 have been obtained with relativistic reflection modeling (Parker et al. 2016; García et al. 2018; Miller et al. 2018; Jia et al. 2022). Lastly, the spins of XTE J1550-564 and MAXI J1820+070 were obtained with the application of the RPM (Relativistic Precession Model, e.g. Stella & Vietri 1998, 1999; Stella et al. 1999), from Motta et al. (2014) and Bhargava et al. (2021), respectively. From our sample, the BH spin does not seem to have a significant effect on the initial jet speed.
With the aim of comparing the jet speed with the simultaneous accretion rate, we show in panel (b) Γ0 as a function of the simultaneous bolometric X-ray luminosity LX obtained from the literature and converted to the 0.5–200 keV energy range with webpimms, as in Section 4.2. We plot LX in units of L_Edd, where the Eddington luminosity L_Edd ≃ 1.3 × 10^39 (MBH/10 M⊙) erg s^−1 represents the limit for spherically stable hydrogen accretion (Frank et al. 2002). To estimate L_Edd we need a measurement of MBH, but we note that a dynamically confirmed BH mass is only available for XTE J1550-564 and MAXI J1820+070 (Orosz et al. 2011; Torres et al. 2020). In this plot we updated the mass of MAXI J1348-630, which was previously estimated from the normalization of the disk blackbody component in the X-ray spectral fitting reported in Tominaga et al. (2020) for a non-spinning BH. Adopting the recent spin measurement a* = 0.78 ± 0.02 by Jia et al. (2022), we computed the spin-dependent radius of the Innermost Stable Circular Orbit (ISCO) and then used this updated parameter to obtain MBH = 15.2 ± 2.3 M⊙ (see Tominaga et al. 2020 for the explicit dependence of the mass on the ISCO radius). Moreover, we included the new BH mass estimation MBH = 12 ± 2 M⊙ obtained for H1743-322 through X-ray reflection spectroscopy (Nathan et al., submitted) and the new mass estimation MBH = 12 ± 1 M⊙ obtained for XTE J1752-223 through the analysis of the soft state and the soft-to-hard spectral state transition (Abdulghani et al. 2024). As for the previous panel, we do not have enough robust estimations of Γ0 and MBH to draw a conclusion, but we note that this plot, after Fender et al. (2004), is starting to be populated and will be highly relevant once more measurements become available.
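For reference, the Eddington-ratio computation used for panel (b) reduces to the following sketch; the X-ray luminosity in the example is a placeholder, while the 15.2 M⊙ mass is the updated MAXI J1348-630 value discussed above.

```python
# Eddington luminosity and Eddington ratio, using L_Edd ~ 1.3e39 (M_BH / 10 Msun) erg/s.

def eddington_luminosity(M_BH_msun):
    return 1.3e39 * (M_BH_msun / 10.0)         # erg/s

def eddington_ratio(L_X, M_BH_msun):
    return L_X / eddington_luminosity(M_BH_msun)

# Placeholder L_X with the updated MAXI J1348-630 mass of 15.2 Msun:
print(eddington_ratio(L_X=2e38, M_BH_msun=15.2))   # ~0.1 L_Edd
```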
Another interesting comparison, which we show in panel (c) of Figure 6, can be made between the jet speed and the internal energy inferred from the radio flare observed at the moment of ejection, when applying minimum energy calculations (Fender & Bright 2019, Equation 12) and converting the internal energy to the jet frame, accounting for the bulk relativistic motion with Equation 13 (with 50% uncertainties associated). We notice that most flares in our sample had a minimum energy ranging between 10^37 and 10^40 erg. For H1743-322 and XTE J1550-564, we assumed Γ0 = 3 and inclination angles of, respectively, θ = 75° (Steiner et al. 2012) and θ = 70° (Steiner & McClintock 2012), for which we obtain E_flare,RF ∼ 10^41 erg. We caution that the energies reported in this plot are affected by large uncertainties, both in the peak flux of the emitting region (the peak could be missed in the monitoring or it could be optically thin) and in the conversion from the observer frame to the jet frame. Again, a visual inspection seems to show that there is no clear correlation between these two parameters. In the current understanding, Γ0 is the bulk Lorentz factor of the whole jet, while E_flare,RF should only result from the relativistic electrons present in the jet plasma. If discrete ejecta have a predominantly baryonic composition (e.g. Zdziarski & Heinz 2024), the mass of the protons will be more important in determining Γ0 than the energy contained in the relativistic electrons.
Lastly, we compare the de-projected distance traveled by the jet with Γ0, as shown in panel (d) of Figure 6. Due to the uncertainties on the source distance and jet inclination angle, we assume a conservative 30% uncertainty on the physical distance traveled by the jet. The initial jet speed does not appear to be a driving factor in determining the distance to which the jet propagates. We might in fact expect that the distance is more strongly correlated with the jet mass (or with the jet/ISM density contrast, Savard et al. in prep.) than with its speed. In this context, a massive jet with a low Lorentz factor will propagate further in a given ISM density than a lighter jet with a higher Γ0 and the same kinetic energy.
CONCLUSIONS
In this paper, we have presented a physical modelling of the motion of the decelerating jets launched by MAXI J1820+070, MAXI J1535-571 and XTE J1752-223. Adopting a Bayesian approach, we fitted the jet angular distance data with the dynamical blast-wave model developed by Wang et al. (2003), and we found that the model provides an excellent description of the jet motion, from the first phase of ballistic motion to the final deceleration phase. In particular, a single Sedov phase in a homogeneous ISM appears to adequately capture the dynamics of the decelerating jets for the entire sample considered. The results obtained from a simple model derived from GRBs demonstrate the high potential of applying some of the well-developed theoretical advancements on GRBs to the jets from BH XRBs. These discrete ejecta can be considered as less-relativistic analogues of GRB jets, with the advantage of providing better access to their physics due to their location in the Galaxy.
From the fits, we are able to place constraints on multiple physical parameters of the jets, including a first estimation of the initial Lorentz factor of the ejecta from MAXI J1820+070 (Γ0 = 2.6+0.5−0.4) and estimates of the Lorentz factor (Γ0 = 1.6 ± 0.2), ejection date (MJD 58017.4+4.0−3.8, soft-intermediate state) and inclination angle (θ = 30.3° ± 6.3°) for the ejecta produced by MAXI J1535-571. By considering the constraints on the effective energies and on the maximum energy available to the jets from the accretion power, we are also able to provide new upper limits on the jet mass and on the ISM density surrounding our sources. Overall, our results support the emerging scenario in which BH XRBs displaying large-scale jets appear to be mostly located in low-density environments.
Considering the current sample of large-scale jets, we find no clear correlations between the initial jet speed and the BH spin, the simultaneous accretion rate, the jet minimum energy inferred from the flare at the moment of ejection, or the distance traveled by the jet. It is worth mentioning that the lack of an evident correlation between most of the parameters considered is nevertheless informative for our current understanding of jets from BH XRBs. While our sample is currently limited to a small number of sources, more observations of decelerating ejecta from BH XRBs are needed in order to test our results, and this will lead to significant progress in our understanding of jet production, propagation, and feedback on the surrounding environment. In this context, two new large-scale decelerating ejecta have been recently discovered in MAXI J1848-015 (Tremou et al. 2021; Bahramian et al. 2023) and 4U 1543-47 (Zhang et al. in prep.), and these jets represent ideal targets for the continuation of this work. In the near future, this approach will be greatly improved by the joint modelling of kinematics and radiation from these ejecta (Cooper et al. in prep.) and will also benefit from the comparison with the first relativistic hydrodynamic simulations of these objects (Savard et al. in prep.).
It is worth mentioning that the data currently available for MAXI J1820+070 are of outstanding quality, with a monitoring campaign that covered the first phases of jet motion (VLBI) as well as the final deceleration phase, allowing us to obtain much smaller statistical uncertainties on the model parameters with respect to the other sources. In the coming years, similar monitoring campaigns can be planned and performed in order to cover the entirety of the jet evolution. Milliarcsecond-resolution VLBI observations are crucial to observe the jets much closer to the compact object and hence to obtain more stringent constraints on their physical parameters, especially the ejection date (Miller-Jones et al. 2019; Wood et al. 2021, 2023). In the future, the new generation Event Horizon Telescope (ngEHT) will also be able to perform extremely high-resolution (of order ∼10 µas) observations of the jets in the mm range (e.g. Johnson et al. 2023). At the same time, dense and long-term monitoring campaigns with sensitive interferometers such as MeerKAT, which has already detected a large number of discrete ejecta, but also the future SKA-MID (Braun et al. 2015) and ngVLA (Selina et al. 2018), will be fundamental to follow the deceleration phase and to probe the final phases of the jet evolution.
APPENDIX A: POSTERIOR DISTRIBUTIONS
We show in this section the corner plots with the posterior distributions of the model parameters for the three sources considered in this work.
This paper has been typeset from a T E X/L A T E X file prepared by the author.
are shown in Figure 3, along with the proper motion of the jet, while the posterior distributions for the model parameters are shown in Figure A3. According to the model, the jet is launched with an effective energy of Ẽ0 = 1.1 +1.2
Figure 1. Angular separation in arcsec between the discrete ejecta and the position of MAXI J1820+070, with data from Bright et al. (2020), Espinasse et al. (2020) and Wood et al. (2021). The un-shaded, gray and seashell regions mark periods in which the source was, respectively, in the hard, intermediate and soft state (Shidatsu et al. 2018). The black horizontal dashed line represents the zero separation from the core, while the black continuous line represents the best fit obtained with the external shock model. The orange shaded area represents the total uncertainty on the fit and it is obtained by plotting the jet trajectories corresponding to the final positions of the MCMC walkers in the model parameter space. Residuals ([data − model]/uncertainties) are reported in the bottom panel. The model appears to provide an excellent description of the motion of both the approaching and receding ejecta, with a low statistical uncertainty.
Figure 2. Same as Figure 1, but for MAXI J1535-571, with data from Russell et al. (2019) and information on the spectral states from Tao et al. (2018). Dark and light gray regions differentiate, respectively, between the HIMS and the SIMS. The model appears to fit the data remarkably well, implying that a Sedov phase is an adequate physical scenario for the jet deceleration.
Figure 3.
Figure 4. Explicit dependence of the jet kinetic energy E0 on the external ISM density nISM for different values of the half-opening angle ϕ. The dependence is obtained through the constraints on the effective energy Ẽ0 and it is here shown for the four sources considered in Section 5.4. The horizontal dot-dashed orange line represents the maximum energy Emax available to the jet from the simultaneous accretion power, which sets a strong upper limit on nISM for all sources except XTE J1752-223. The dotted black line shows instead the minimum energy E_flare in the jet frame inferred from the radio flare associated with the ejection, here implying that this value is likely a large underestimation of the jet kinetic energy, given that it would require an extremely low value of nISM for the jet to propagate up to the observed distances. Regions excluded by our constraints on Emax and E_flare are shaded in grey.
Figure 5. Explicit dependence of the ejection duration ∆t on the external ISM density nISM for different values of the half-opening angle ϕ. The dependence is obtained through the constraints on the effective energy Ẽ0, from Equation 9 and from the simultaneous bolometric X-ray luminosity, and it is here shown for the four sources considered in Section 5.4. For ISM densities above the upper limits presented in Section 5.4, marked with dotted vertical black lines, the higher energies required would imply the full accretion power to be supplied to the jets over timescales not compatible with the observed flares and state transitions. Regions excluded by our constraints on ∆t and nISM are shaded in grey.
Figure 6. Comparison between the inferred initial Lorentz factor Γ0 of the ejecta in our sample of large-scale jets and: (a) the dimensionless spin parameter a*; (b) the bolometric X-ray luminosity LX simultaneous to the ejection, in Eddington units; (c) the jet-frame internal energy E_flare inferred from the radio flare associated with each ejection (see text for details); (d) the de-projected distance traveled by the jet. We find no clear correlation between Γ0 and the parameters shown in the four panels, and more sources are needed to increase the sample of large-scale jets.
Figure A1. Corner plots showing the constraints on the physical parameters of the ejecta from MAXI J1820+070. The panels on the diagonal show histograms of the one-dimensional posterior distributions for the model parameters, including the jet initial Lorentz factor, effective energy, inclination angle and ejection time (here represented as MJD − 58300), as well as the source distance. The median value and the equivalent 1σ uncertainty are marked with vertical dashed black lines. The other panels show the 2-parameter correlations, with the best-fit values of the model parameters indicated by green lines/squares. The plot was made with the corner plotting package (Foreman-Mackey 2016).
Figure A3. Corner plot showing the constraints on the physical parameters of the ejecta from XTE J1752-223, same as Figure A1. Note the log scale for Γ0.

| 21,761.6 | 2024-05-26T00:00:00.000 | [ "Physics" ] |
Williamson On the Margins of Knowledge: A Criticism
In this paper, we argue that Williamson’s arguments against luminosity and the KK principle do not work, at least in a scientific context. Both of these arguments are based on the presence of a so-called “buffer zone” between situations in which one is in a position to know p and situations in which one is in a position to know ¬p. In those positions belonging to the buffer zone ¬p holds, but one is not in a position to know ¬p. The presence of this buffer zone triggers two types of sorites arguments. We show that this kind of argument does not hold in a scientific context, where the buffer zone is controlled by a quantitative measurement of the experimental error.
Introduction
It is difficult to overestimate the importance that Williamson's Knowledge and its Limits (hereafter, K&L) had and still has in the philosophy community. As is well known, Williamson extensively argues against the classical analysis of knowledge as justified true belief. According to Williamson (2000, p. 6), "knowledge" is a mental state: sometimes we are in a position to know, i.e. "knowledge is a primitive kind of mental event." In this perspective, knowledge is a part of the world. However, K&L is not only a "long argument" in favor of the conception that knowledge is a fundamental mental state; it also contains legions of sub-arguments connected to crucial and often provocative issues in the philosophy of knowledge.
Williamson's book is deeply rooted in a progressive program in philosophy that probably started with Hintikka's seminal Knowledge and Belief (see Hintikka 2010). Hintikka shows that philosophical reflection benefits from the powerful instrument of formal logic in order to adequately characterize fundamental concepts such as knowledge, belief, justification, and reliability. 1 Moreover, the intended domains to which those very sophisticated logical frameworks apply usually belong to common sense knowledge. We believe, however, that in scientific contexts many of the most important problems in epistemology acquire a different structure. Does this mean that the entire debate in formal epistemology applied to everyday contexts must be abandoned? Of course not. But, on the other hand, formal epistemology, even its non-Bayesian part, could be implemented and extended if opened to the immense pool of scientific knowledge.
Here let us first define the two contexts: everyday knowledge and scientific knowledge. Common sense knowledge contexts involve instances of either perception or self-knowledge, and in such contexts it is almost impossible to disregard the mental states and actions of the knower. Scientific contexts, on the other hand, seem to be different in that, while the beliefs and actions of scientists and researchers are an interesting topic of research in the psychology and sociology of science, from a logical point of view they are not particularly relevant. Perhaps an example can help to clarify this issue. Take for instance "Hubble's principle", according to which "spacetime is dilating". (1) Our best scientific theories can justify either its truth or its falsehood. (2) Empirical data can entail that it must be formulated with a different constant of expansion. (3) The mathematical language in which it is formulated must respect certain a priori constraints, such as those of elementary arithmetic and differential geometry. (4) The inductive connection between astrophysical data and its formulation can be very complex and ambiguous, etc. These are all epistemological questions. On the other hand, historical and sociological situations, like the fact that Einstein did not at first accept Hubble's principle and that Hubble persuaded him with his empirical data, are very interesting, but not within an epistemological context. 2

Williamson builds on an externalist view of the mind, arriving at the reasonable consequence that knowledge is a mental state dependent on the external world. This is an interesting point of view if we are concerned with perception and common knowledge in general. However, when one speaks of the rational belief of a scientific community, one is not arguing about a peculiar relation between scientists' minds and the external world, and the same holds true for their knowledge. In other words, the notion of the normativity of belief and knowledge has, in a scientific framework, a different meaning from the one it has in everyday knowledge contexts. And this normativity is what we plan to investigate on the basis of our best scientific practices; that is, the epistemology of the natural sciences discusses those methodologies endowed with large empirical success. Of course, everyday and scientific knowledge are not altogether independent, and a comparison between them is very interesting.
Behind the almost exclusive attention of epistemologists to everyday contexts most likely lies the assumption that, at the end of the day, scientific knowledge is only a peculiar kind of refined common sense knowledge. It could also be that some scholars believe that considering certain scientific practices as epistemological models-as proposed in the preceding paragraph-is a strong presupposition not epistemologically validated. In other words, some epistemologists wish to start with establishing which is, in general, the right epistemology and then apply it in different contexts, the scientific one included. 3 We intend to show, on the contrary, that scientific contexts are essentially different from the common sense ones, at least in certain cases. Moreover, concerning the criticism of the idea of taking good scientific practices as models, one can answer that perhaps to formulate a good epistemology without presuppositions is impossible and the success of empirical methods in natural sciences seems a good reason for building on those procedures.
In this vein, the aim of this article is to show that if one is assuming a notion of knowledge as it is presupposed by working scientists, a couple of Williamson's main arguments cannot be framed in the same form. In particular, we want to argue that Williamson's antiluminosity argument and margins of error argument against the KK principle cannot be devised if a standard notion of scientific and empirical knowledge is assumed. Of course, this does not prove that it is impossible to re-shape the original argumentative schema in order to adapt it to scientific scenarios; however, the burden of proof is on the opponent's shoulders.
The paper is organized in the following manner: in the next section we briefly summarize Williamson's anti-luminosity argument and show its inapplicability in a scientific context; in Sect. 4 we present his specific argument against the KK principle. In Sect. 3, which is the core of the paper, we propose a different semantics for the margins of knowledge, and we show that one of Williamson's key principles can no longer be formulated.
Anti-Luminosity
Williamson defines a mental state (or a condition) as luminous 4 as follows: (Luminosity) Condition C is luminous iff for every case α, if in α C obtains, then in α one is in a position to know that C obtains (Williamson 2000, p. 95).
Where "to be in a position to know p" is factive; that is, "to be in a position to know p" is not a sufficient condition to know p, but if one is in a position to know p, then p is true.
A paradigmatic example is a headache: suppose Mary has a headache; to acquire knowledge of her own headache Mary must do practically nothing, just consider that she has a headache. Therefore, the headache seems luminous. The same, obviously, does not hold for other states, typically non-mental ones. Consider the presence of beer in the refrigerator. Of course, this condition is not luminous: Mary is not always in a position to know whether there is beer in the refrigerator. Maybe she forgot whether there was beer and must check. Indeed, Mary would be in a position to know whether there is beer in the refrigerator either if she were looking in the refrigerator, or if she reliably remembered having put beer in the refrigerator, or if she were in other similar situations.
Williamson's aim is to show that mental conditions are not luminous. To do this, he considers a peculiar example of a mental condition, that of feeling cold. 5 Let us suppose, along with Williamson, that "one feels cold at dawn, very slowly warms up, and feels hot by noon" (Williamson 2000, p. 94). Without loss of generality, we assume that "one" is "Mary". The interval between dawn and noon is divided into very small intervals of, say, m milliseconds; more formally, we have α 0, which is the case at dawn, and α n, which is the case at noon. Now, the salient facts are the following: (i) In α 0 Mary feels cold.
(ii) In α n Mary does not feel cold. (iii) During the process, Mary considers how cold or hot she feels, but the m milliseconds interval is too small for her to be aware of the thermal difference between time i and time i + 1.
These facts are supposed to describe a quite common situation. Williamson argues that if feeling cold is a luminous condition then we get a contradiction. Therefore, by reduction, feeling cold is not a luminous state. Since the choice of the mental state is not relevant, this argument can be generalized to all similar cases of self-knowledge and beyond.
The first crucial premise of Williamson's argument is the following: (A) If Mary feels cold in α i, then Mary knows that she feels cold in α i.
According to a luminosity defender, (A) could be justified by Mary's reliable introspection of her own feelings. In other words, (A) amounts to the luminosity of feeling cold. Moreover, Williamson assumes that: (B) If Mary knows that she feels cold in α i , then she feels cold in α i+1 .
The defense of (B) can be articulated a bit further using the insight that knowledge requires safety. That is, if one knows p in a given case, then p is true in every similar 6 case in which one believes that p. So, Mary knows that she feels cold in α i, and the content of this knowledge, that is, the fact that she feels cold, must be true in any case similar to α i in which she believes she is feeling cold. But, by assumption (iii), the case α i+1 is impossible to discriminate from the case α i. Therefore, Mary would believe that she feels cold in α i+1. 7

Now, let us examine how Williamson's reduction works. By assumption (i) and (A) we have that Mary knows that she feels cold in α 0. But then (by (B)) we have that Mary feels cold in α 1. Of course, she knows that she feels cold in α 1 (that holds by principle (A)). But applying (B) again, we derive that she feels cold in α 2. The argument can then be reiterated many times. In the end, however, we reach the case in which Mary feels cold at noon, which is against (ii) (see Steup 2009).
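The reductio can be mechanized as a simple iteration. The following toy sketch (ours, not Williamson's) just applies (A) and (B) repeatedly over the n cases, delivering the conclusion that contradicts (ii).

```python
# Toy mechanization of the reductio: start from assumption (i) at alpha_0 and apply
# (A) (feels cold -> knows she feels cold) and (B) (knows she feels cold at alpha_i
# -> feels cold at alpha_{i+1}) at every step.

n = 1000                        # number of m-millisecond cases between dawn and noon
feels_cold = True               # assumption (i): Mary feels cold in alpha_0

for i in range(n):
    knows_cold = feels_cold     # premise (A): luminosity of feeling cold
    feels_cold = knows_cold     # premise (B): safety across indistinguishable cases

print(feels_cold)               # True: "Mary feels cold in alpha_n", contradicting (ii)
```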
Since, according to Williamson, principle (B) is deeply rooted in our epistemic structure, the crucial premise of the reduction is principle (A), that is, the alleged luminosity of mental states. (B) is not eliminable because every kind of knowledge must have a buffer zone in which, even if the situation changes a bit, the knower does not modify his/her knowledge, since what happens is below the threshold of his/her awareness.
The logical engine of Williamson's argument is similar to a sorites paradox: there is a series of cases, each very similar to the adjacent ones, that starts with a case where C clearly obtains (feeling cold at dawn) and ends with a case where C clearly does not obtain (feeling cold at noon). But the luminosity principle imposes that whenever C obtains, we are in a position to know that C obtains. Moreover, knowledge must be safe, that is, if we know that C obtains, C obtains in all similar cases. But then C must obtain in all similar cases, which is contrary to the idea that through very similar (and indistinguishable) cases we get to a very different case, in which C does not obtain at all. The main point of Williamson's argument is that in every kind of knowledge there are situations which are completely similar from our subjective point of view, even if something has objectively changed. And this issue, expressed by (B), is incompatible with luminosity, expressed by (A).
If Williamson's argument is sound, it should be applicable to any mental state; that is, it should hold also for that peculiar mental state which, according to Williamson, is knowledge. If knowledge were luminous, then if we know that p, we should be in a position to know that we know that p. This principle is known in literature as the KK principle, and we will see in the following how Williamson argues against it.
Here it is in order to consider again the difference between common sense and scientific knowledge. Williamson's argument is based on what he calls a buffer between to be in a position to know p and the falsehood of p. This is a margin of error zone, one in which p is no longer true, but we are not in a position to know that ¬p. This zone is epistemological, that is, it concerns the situation of the knower. But in a scientific context the situation is quite different. In Williamson's example: one builds knowledge on a certain inner state, that is, Mary's cold feeling. But what would happen in a similar situation if we apply his argument to a scientific context?
Let us consider what could be a scientific translation of (A)'s antecedent: "Mary feels cold in α i". In the case of scientific practice, the possibly luminous state is not a simple inner feeling, but the repetition of many situations α i with many different subjects. And the result of this experiment can be described as follows: (a i) A randomized sample of people in situation α i judges most of the time that it is cold.
The consequent of the original (A) would be that in state α i Mary knows that she feels cold.
In the new framework, one can say that: (A i) People in situation α i feel cold. The difference between (a i) and (A i) is that the former refers to an experimental sample, whereas the latter is a generalization to the whole population.
Let us emphasize that in a scientific context it is not necessary to introduce the operator "know", as Williamson does in the case of Mary, since the result of an experiment (a i) justifies a given statement (A i), not someone's knowledge of that statement. At first sight our choice may seem a bit strange, since Williamson's argument concerns knowledge. But it is not so obvious, as many scholars maintain, that knowledge is a statement preceded by a subjective knowledge operator. The latter is actually a "representation" of knowledge. In empirical science, knowledge is instead a true justified statement. For this reason, we omit the knowledge operator. Nevertheless, if one reintroduces the knowledge operator, the argument is still valid. In other terms, we can say that: (A′) If (a i) then (A i).
Note that, in the new context, (A′) expresses luminosity in science, which is a completely reliable inference from an experimental situation to a truth. Now we move to the antecedent of (B), that is the same as the consequent of (A). Therefore, the antecedent of (B′) will be: People in situation α i feel cold.
The consequent of (B) instead was: "Mary feels cold in situation α i+1". The analogous statement in the new framework will be: The majority of people in situation α i+1 judges most of the time that it is cold. Therefore: (B′) If people in situation α i feel cold, then the majority of people in situation α i+1 judges most of the time that it is cold. At this point we must translate (i)-(ii) into the new language. This is straightforward: (i′) The majority of people in α 0 judges most of the time that it is cold.
(ii′) The majority of people in α n judges most of the time that it is not cold.
It is easy to show that from (A′), (B′) and (i′) it is possible to deduce the contradictory of (ii′), in a similar way to what is done in the case proposed by Williamson: The majority of people in α n judges most of the time that it is cold. Therefore, at first sight, it seems that Williamson's argument holds true even in a scientific context. However, the issue deserves more attention.
The most controversial assumption of this new argument is (B′). To better understand it, we present (B′) in an unpacked form: (B′) If people in situation α i feel cold, then the majority of people in situation α i+1 judges most of the time that it is cold.
Remember that in the case presented by Williamson, the justification of (B) was (iii).
(iii) During the process Mary considers how cold or hot she feels, but the m milliseconds interval is too small for her to be aware of the thermal difference between time i and time i + 1.
The analogous situation for (iii) in the new context would be: (iii′) During each repetition of the process people consider how cold or how hot they feel, but the m-millisecond interval between situation α i and situation α i+1 is, for the majority of people most of the time, too small to appreciate any thermal difference.
We must now establish whether (iii′) is reasonable in an experimental context.
Before discussing the validity of (iii′), let us remember a bit of measurement theory. To have a comparative scale, like the one we are involved with here, transitivity of comparative judgments is a necessary condition. That is, a condition of this kind must hold true: (T) If a subject judges α i not colder than α j and α j not colder than α k , then s/he must judge α i not colder than α k .
It is evident that the set of judgements on which our experiment is based does not respect (T). 8 Indeed, we have chosen the judgements of our subjects so that they experience confusion between two neighboring situations. In experimental contexts, if (T) does not hold, scientific investigation involving comparison is not allowed. 9 In other words, to apply a comparative scale, transitivity between judgements is required. 10 This means that in the case of non-transitive judgements, at best we can use a nominal classification, that is, sentences of the kind "people feel cold" and "people do not feel cold". In a reasonable theory of measurement that uses a comparative scale, like the present one, statements like (iii′) and (B′) are devoid of cognitive meaning.
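To see the point concretely, the following sketch (with hypothetical judgement data, not an actual experiment) checks condition (T) on a pattern of judgements in which adjacent situations are confused; the one-step confusions are exactly what breaks transitivity.

```python
from itertools import permutations

# Check of the transitivity condition (T) for a comparative scale over a set of situations.
# not_colder_than(a, b) == True means the subject judges alpha_a not colder than alpha_b.

def respects_T(not_colder_than, situations):
    for a, b, c in permutations(situations, 3):
        if not_colder_than(a, b) and not_colder_than(b, c) and not not_colder_than(a, c):
            return False
    return True

# Hypothetical subject: alpha_a is judged not colder than alpha_b whenever a >= b - 1,
# i.e. one-step confusions between neighboring situations. This violates (T).
situations = range(5)
print(respects_T(lambda a, b: a >= b - 1, situations))   # False
```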
A defender of Williamson's perspective could object to our argument that a comparative scale is not necessary. Indeed, all judgements of the involved subjects are either of the form "people feel cold" or "people do not feel cold". Therefore, the violation of transitivity would not block Williamson's argument. It seems to us that without a comparative scale, Williamson's argument does not work adequately. Moreover, even a nominal scale has its necessary preconditions: the necessary condition for applying a so-called nominal scale is that each item is ascribed to only one scientific name. To understand this point better, let us consider again (B′): (B′) If people in situation α i feel cold, then the majority of people in situation α i+1 judges most of the time that it is cold.
In this sentence two kinds of stimuli appear: α i and α i+1. By hypothesis, the majority of people is most of the time not able to distinguish between them, and therefore they say in both cases that they feel cold. In his deduction Williamson applies (B′) again and again; but even if his notion of the safe or buffer zone compels him a priori to transmit the label "cold" along the series of the α i's, empirically, for it to be epistemologically possible to speak of a nominal scale, such as "cold"/"not cold", each item must belong to only one of the two labelled sets. In other terms, in a scientific nominal scale there is no buffer zone: each α i either is cold or it is not cold. Experimentally, if a certain α i is ascribed most of the time by the majority of people to the label "cold", it is cold, and vice versa. On the contrary, if it happens that α i is ascribed exactly half of the time to "cold" and half of the time to "not cold", it is not possible to say either that it is cold or that it is not cold.
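The nominal rule just described can be stated as a small classification function; the vote counts in the example are hypothetical and only show that the rule leaves no buffer zone, at worst an unclassifiable item.

```python
# Nominal-scale labelling: alpha_i is "cold" if the majority of judgements call it cold,
# "not cold" if the majority do not, and unclassifiable on an exact 50/50 split.

def nominal_label(cold_votes, total_votes):
    if 2 * cold_votes > total_votes:
        return "cold"
    if 2 * cold_votes < total_votes:
        return "not cold"
    return None                        # no label can be assigned; the item drops out

print(nominal_label(80, 100))          # cold
print(nominal_label(20, 100))          # not cold
print(nominal_label(50, 100))          # None: no buffer zone, just no classification
```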
Note that, in scientific practice concerning the measurement of mental features, Williamson's buffer zone disappears. That is, in measuring the mind, in order to take into account the sensitivity of the subject, we repeat the experiment many times with many different subjects. Moreover, to use either a nominal or a comparative scale, the experimental reports of the subjects' judgements must respect certain rules, namely dichotomicity and transitivity, respectively. If these rules do not hold, measurement and, consequently, experiments are not possible. In other terms, if judgements are neither dichotomous nor transitive, we are not in a position to apply (A′), which expresses scientific luminosity. But this inapplicability does not mean that (A′) is not true. Nor does it mean that we are persuaded that luminosity holds in scientific contexts. Our point is much more modest: only that in scientific contexts this kind of sorites argumentation is not relevant.
This seems enough to show the difference between common sense knowledge of internal states, where Williamson's argument seems compelling, and scientific contexts, where the situation is quite different. But in the next section we will see how Williamson applies this argument to knowledge as a mental state.
Against the KK Principle
Chapter 5 of K&L is dedicated to arguing against the so-called KK principle. Axiom 4 of modal systems states that if it is necessary that p, then it is necessary that it is necessary that p (□p → □□p). It is a logical routine to show that 4 holds only in frameworks in which the accessibility relation is transitive (and vice versa). Applied to epistemic scenarios, axiom 4 is called the "KK principle": if Jane knows that snow is white then Jane knows that she knows that snow is white.
What follows is a brief recap of Williamson's argument. But first, let us fix the language. The argument is couched in propositional logic enriched by the knowledge operator K; K is regimented by: Necessitation Rule: If p is a theorem of Γ, then so is Kp. Distribution Axiom: K(p → q) → (Kp → Kq). The factivity axiom T: Kp → p. For the moment we do not accept axiom 4, that is KK, since it is the object of our discussion. We further use subscripts to represent schematic propositional variables; that is, if p_1 means that the table is 1 m long, we can write p_n to indicate that the table is n meters long (since it is not relevant for the argument, we can neglect the unit of measurement).
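For readability, the logical apparatus just described can be collected in a single schema. This is a reconstruction for the reader's convenience; the formulation of T is the standard factivity axiom, which the text presupposes, and axiom 4 is listed only as the principle under dispute.

```latex
\begin{align*}
&\text{Necessitation:} && \text{if } \vdash_{\Gamma} p \text{, then } \vdash_{\Gamma} Kp\\
&\text{Distribution:} && K(p \rightarrow q) \rightarrow (Kp \rightarrow Kq)\\
&\text{Factivity (T):} && Kp \rightarrow p\\
&\text{KK (axiom 4, under discussion):} && Kp \rightarrow KKp
\end{align*}
```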
Say Jane looks at a distant tree. From such a great distance, her evaluation of the height of the tree cannot be very precise. We make the hypothesis that her error is 1 in some unit of length; hence her evaluation is h ± 1 in the chosen measurement unit. We assume that the results of Jane's evaluations are natural numbers only. She knows how big her error might be. Moreover, if "e" is her evaluation, Jane knows that the tree is neither "10e" high nor "0e" high.
Let us assume that the height of the tree is a certain value t; therefore, h_t is the true proposition that says that the tree is t high. The first principle (margins of error) states that: (1) K(K¬h_n → ¬h_{n+1}). We omit the subscript "J" from all of the K operators, since it is understood that the knowledge always belongs to Jane. The contrapositive of (1) is obviously Jane's knowledge of "h_{n+1} → ¬K¬h_n". That is, Jane knows that "if the tree is n + 1 high, then Jane does not know that it is not n high". The justification of the latter is the following. 11 Let us consider the set W of all possible worlds accessible to Jane where the tree is n + 1 high; in order for (1) to be true, "¬K¬h_n" must be true in each world belonging to W. For "¬K¬h_n" to be true, from each world belonging to W there must be accessible at least one world where h_n holds true. In each one of the W worlds, Jane looks at the tree and evaluates its height. Since the tree is n + 1 high, the result of her evaluation will be a number belonging to the set {n, n + 1, n + 2}. Therefore, her evaluation can be n. Then there is at least one world, related to each world belonging to W, in which h_n holds. From this it follows that "¬K¬h_n" is true.
But from (1), by distributivity of K, we get: (2) KK¬h_n → K¬h_{n+1}. Now, (2) is a schema, since it contains the (meta-)variable "n". Let us suppose, as already said, that Jane is reasonably certain that the tree is not 0 high. So: (3) K¬h_0. But if knowledge were luminous, Jane should know that she knows that the tree is not 0 high; and this is precisely the content of the KK principle. So, for reductio: (4) KK¬h_0. Let us consider the schema (2); by instantiating n with "0", we have: (2′) KK¬h_0 → K¬h_1. (2′) is a sentence, not a schema; by modus ponens (and a little bit of arithmetic), from (2′) and (4) we obtain: (5) K¬h_1. This brief deduction can obviously be re-iterated (each iteration re-applying KK and the relevant instance of (2)), yielding the list K¬h_2, K¬h_3, and so on. But among those propositions there is also K¬h_t, which by factivity (T) entails ¬h_t. The height of the tree is t, and so there is a contradiction with our assumption that the tree is t high.
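The deduction just summarized can be laid out step by step as follows; this is our reconstruction of Williamson's reductio in the notation introduced above.

```latex
\begin{align*}
(1)\;& K(K\neg h_n \rightarrow \neg h_{n+1}) && \text{margin-of-error principle}\\
(2)\;& KK\neg h_n \rightarrow K\neg h_{n+1} && \text{from (1) by distribution of } K\\
(3)\;& K\neg h_0 && \text{Jane's certainty about the lower bound}\\
(4)\;& KK\neg h_0 && \text{from (3) by KK (reductio assumption)}\\
(5)\;& K\neg h_1 && \text{from (2) with } n=0 \text{ and (4), by modus ponens}\\
&\;\;\vdots && \text{re-applying KK and (2) at each step}\\
& K\neg h_t && \text{after } t \text{ iterations}\\
& \neg h_t && \text{by factivity (T), contradicting the assumption } h_t
\end{align*}
```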
According to Williamson, the weakest assumption in this proof is the KK principle. Indeed, to deny T, that is factivity, seems worse than to deny KK. The only further possibility is not to accept (1), that is, Jane's awareness of her sight limitations. Perhaps in a non-scientific context this could be a reasonable alternative. But we are applying the argument to scientific knowledge, that is, to experiments; and the experimentalist is supposed to know the experimental errors of her measurement.
Luminous Intervals
In this section we want to argue that Williamson's margins argument does not work for scientific knowledge; as previously said, this would introduce a sort of discontinuity in his treatment of knowledge: the case of everyday knowledge has to be characterized following some principles and abandoning others (for instance, the famous KK principle). However, because things work differently in other regions of knowledge, principles we abandoned for everyday knowledge can be re-admitted into our theoretical setting.
Let us consider a measurement, as, for instance, that of length by means of a rod. It is clear that any experimental method, considered in a given laboratory setting, has a finite limit of resolution. For instance, it is not possible to determine the length of a stick by means of our rod with a resolution smaller than "1", measured in a suitable measurement unit. This means that if the stick either increases or decreases by 1 unit, our rod is not able to register the change. With a notation similar to the preceding one, let us indicate the schema "the stick's length is n" with "H_n". However, no scientific measurement has the form H_n; rather, it has the form H_{n±j}, since any scientific instrument has a finite sensitivity. The intended meaning of H_{n±j} is that the length read on the rod has the value n, with an accuracy of plus or minus j.
The discussion of this experimental situation could be articulated at length, since a measurement must be repeated many times and many other sources of uncertainty, due to causes other than the resolution of the instrument, are normally involved. In this context, however, considering resolution as a source of uncertainty is enough for our argument.
Of course, the epistemological reasons for the existence of the range j are well known by scientists and philosophers of science: this range depends, essentially, on the fine tuning of our instruments and on the perturbative experimental conditions. It is well known as well that those limitations cannot be eliminated. It goes without saying that, with the passing of time, our technological precision increases more and more, but our investigation of scientific knowledge must be applied to a given cognitive situation in which the resolution of our instruments has reached a certain threshold. Moreover, better instruments will produce the same condition at a different level of resolution; therefore, from an epistemological point of view the situation with better instruments would be similar. Now, an advocate of Williamson's argument could accept our gloss on the scientific approach to measurement and recast principle (1) in these new clothes: (1*) K(K¬H_n → ¬H_{n+j}), where the operator "K" refers to the knowledge of the experimenter. Remember that (1*) is the contrapositive of "K(H_{n+j} → ¬K¬H_n)". But in the most interesting experimental contexts the sentence "¬K¬H_n" is meaningless, since the knowledge of the experimenter necessarily involves the uncertainty due to the resolution of her rod. For this reason (1*) must be replaced by a principle in which statements of the form H_{n±j} figure both in the antecedent and in the consequent, so that the margins bear on the experimenter's knowledge throughout. But, and this is crucial, reality has no margins, whereas knowledge does have margins. In a common sense knowledge context one can "forget" these margins (as Williamson does), but not in an experimental one. The presence of margins of error in knowledge sentences blocks the sorites chain. Summing up: in scientific practice, measurement judgements are always of the form X_{n±j}; Williamson's reductio against KK exploits a form of regression principle which hinges on a series of very small increments under the threshold of observability. However, in an experimental context, the degree of resolution of the instrument is essential to the representation of our knowledge. And the introduction of margins into our knowledge blocks Williamson's sorites argument.
Concluding Remarks
In contemporary epistemology, the majority of investigated cases come from common sense knowledge. However, it is not evident that what holds true for external perception or for the self-knowledge of our mental states is suitable either for experimental settings or for the acceptance of highly abstract scientific theories.
Williamson's book K&L is built on everyday knowledge examples. One of the main theses of the book is that we are homeless from a cognitive point of view, that is, that there is no place where we can be completely sure of our knowledge. This is a very reasonable thesis; however, since Williamson believes that knowledge is a peculiar mental state defined in an externalist framework, anti-luminosity compels him to deny the validity of the KK principle, else luminosity would reappear. Yet if one thinks that experimental knowledge has a peculiar form of normativity, the problem of luminosity disappears. In a scientific context, knowledge is not a mental state but the satisfaction of a set of reasonable criteria determined by our best scientific practices. On this view, knowledge is a fully conscious endorsement of one or more sentences, so Williamson's argument against KK is not so straightforward.
Williamson couches an argument against luminosity in a sorites form. But we have shown that when the same argument is applied in an experimental situation, it is no longer valid. This does not mean that there are luminous scientific situations, but it does show that this is not a good road for proving non-luminosity in a scientific context.
In the fifth chapter of K&L, Williamson applies a similar sorites argument to prove the falsehood of KK. We have again shown that when we transfer his reasoning to a case of scientific measurement, the sorites is not triggered; hence this refutation of the KK principle does not work.
We conclude that a good amount of progress in epistemology can be made if we take into consideration the difference between everyday knowledge contexts and scientific ones.
"Philosophy"
] |
The Phytochemistry and Biological Aspects of Caryocaraceae Family
The Caryocaraceae family comprises 25 species distributed in two genera (Caryocar and Anthodiscus). Plants of this family have been the subject of several phytochemical studies aimed at the isolation and characterization of chemical compounds. Some of these studies evaluated the in vitro and in vivo biological activities of extracts and pure substances isolated from plants of this family. Nine species of the Anthodiscus genus have been described, while no phytochemical study related to them has been reported. On the other hand, the Caryocar genus comprises 16 species with several medicinal uses, such as the treatment of colds and bronchitis, the prevention of tumours, the regulation of menstrual flow, the treatment of ophthalmological problems, and the cure of hematomas and bruises. Some species of this genus were targeted by phytochemical studies and presented, in their composition, the following classes of secondary metabolites: triterpenes, fatty acids, tannins, carotenoids, triterpenic saponins, phenolic coumarins, phenolic glycosides, and others. The fruits of Caryocar species are very nutritious, containing fibers, proteins, carbohydrates and minerals. The seeds have been widely used as an oil source with nutritional and cosmetic value. The biological evaluation of some species was carried out by using relevant biological assays, such as antioxidant and allelopathic activities, antifungal and molluscicidal activities (the latter against Biomphalaria glabrata), and toxicity on Artemia salina.
INTRODUCTION
Caryocaraceae is a small botanical family, widely distributed in Central and South America, composed of 25 species distributed in two genera, Caryocar and Anthodiscus. Plants from the Caryocar genus are very well studied, especially because they are good sources of oils and wood, the latter being of good quality due to its hardness and its resistance to humidity and to attacks by insects (Prance, 1990). The Caryocar genus presents sixteen species, some of which possess economic potential since their fruits are often used as a source of edible oil. Among Caryocar species, Caryocar brasiliense is by far the most studied, because of the wide use of its fruit, called pequi, in central Brazil. C. brasiliense fruits are highly nutritious and widely used in the preparation of juices and liquors; both the fruits and the leaves are used for therapeutic purposes, e.g. in the treatment of colds, coughs and bronchitis; the seeds are used in the manufacture of soaps and as an aphrodisiac; and the wood, sturdy and durable, is employed in construction and shipbuilding (Prance, 1990; Araujo, 1995). C. brasiliense wood is also used in the manufacture of fences, due to its durability and resistance to deterioration (Araújo, 1995). Prance (1990) reported the flowering of C. brasiliense as occurring between September and January. The fruits develop quickly after flowering, often ripening between January and March (Barradas, 1972). The fruit of C. brasiliense is round, green and drupoid, has a diameter of about 10 cm, a persistent calyx and one to four pyrenes. The mesocarp is subdivided into an external part and an internal (edible yellow) part, surrounding the thorny endocarp (fine, rigid spines, 2 to 5 mm long) that encloses a white almond or seed. The inner mesocarp, thorny endocarp and seed constitute the pyrenes (Almeida et al., 1998). The fruit of Caryocar villosum resembles that of C. brasiliense, being irregularly shaped, round to oblong, with approximately 7-9 cm diameter. The C. villosum fruit has an external and an internal (edible) mesocarp and a thorny endocarp. The seeds are also widely used in cooking for the preparation of dishes, and occasionally oil is extracted from the internal mesocarp (Marx, 1997) for culinary applications.
The Anthodiscus genus comprises nine species, mainly used as a source of wood. No chemical description of compounds from the Anthodiscus genus has been found in the literature so far (SciFinder®, CAS, August, 2012). Morphological and anatomical studies have been reported for this genus, such as the description of buds. On the peduncle and the receptacle, the trichomes of Anthodiscus are unicellular. The calyx consists of five sepals close to each other in a cupuliform calyx. The corolla consists of four or five petals that are basally adnate with the adjoining androecial ring. The petals are narrow at the base and thick at the ends, and the whole corolla is circumscissile at the base and fused at the apex to form a caducous calyptra (Dickison, 1990). The androecium is composed of numerous stamens, around 100-280 (Prance & Silva, 1973). In general the filaments are long, exceeding the petals, varied in length and bright yellow in color, and they are short and bent in the bud. Anatomically, the carpels are modified leaves, forming the ovaries of the flowers. In the genus Anthodiscus, numerous carpels open into the central cavity of the ovary. Each carpel contains a single ovule attached to the placenta (Dickison, 1990). Pollination is mediated by insects (Prance & Silva, 1973).
Chemical constituents
The fruits of Caryocar species are primarily a source of vegetable oils. These oils are used in the cosmetic industry, for illumination and for lubrication (Araújo, 1995). Both the pulp of the fruit and the oil obtained from the pulp present a high concentration of lipids.
Several studies were conducted to evaluate the lipid composition of the oils, and a high incidence of unsaturated fatty acids has been found. The pulp of the C. brasiliense (pequi) fruit presents an intense aroma and is rich in vitamins, lipids and proteins. This part of the fruit is generally cooked with rice and chicken and is also used in the manufacture of sweets and liqueurs. The lipid content of the pequi fruit was found to be high (51.51%). This fruit also contains proteins (25.27%), carbohydrates (8.33%), fibers (2.2%), a low water content and high amounts of minerals. The content of dietary fiber in the pulp is higher (10.02%) than in the almond; the pulp also shows lipids (33.40%), carbohydrates (11.45%) and proteins (3.00%), besides a high water content (41.50%) (Lima et al., 2007).
Both the pulp and the almonds of C. brasiliense have high quantities of oil, reaching approximately 30% of the composition of each of these parts. This high amount of oil places the C. brasiliense fruit as a potential raw material for the production of esters and acids to be used in biodiesel production. The flour obtained from the external mesocarp of pequi presents a high amount of starch that can be used as feedstock in the production of ethanol. The chemical characterization of the flour from pequi pulp showed the following levels: starch 27.70%, proteins 21.38%, fibers 19.70%, ashes 3.82%, lipids 3.60%; moisture in this part reaches 16.97%. The almond contains mainly proteins (64.41%) and moisture (16.50%). As minor components, 9.88% of starch, 8.76% of ashes, 6.72% of fibers and 6.01% of lipids were detected (Macedo et al., 2011).
The fatty acid profile of C. brasiliense has been widely studied and, despite some deviations possibly caused by seasonal variations, acids 1 and 2 were consistently detected as the major components of the oil in most studies. The presence of high amounts of oleic acid (1), an omega-9 fatty acid, is very interesting. Oleic acid participates in human metabolism, playing a key role in the synthesis of hormones, and is one of the predominant fatty acids recommended in diets aimed at the prevention of heart diseases (Lorgeril & Salen, 2006; Berry et al., 1991). Oleic acid (1) is also reported to possess several biological activities, such as prevention of cardiac diseases, lowering of blood pressure and cholesterol reduction (Navarro-Tito et al., 2010; Pontes-Arruda, 2009; Hanna et al., 2009; Stein et al., 2008; Terés et al., 2008). One of these studies demonstrated that the hypotensive effect of olive oil is caused by the high levels of oleic acid present in this oil (70-80%), indicating that the intake of olive oil increases the levels of compound 1 in the membrane, lowering blood pressure (Terés et al., 2008).
Qualitative and quantitative comparison of fatty acids found in C. brasiliense, C. villosum and C. coriaceum can be found in Table 2.
For two species, C. brasiliense and C. villosum, phytochemical studies reported the presence of carotenoids in the fruit pulps, an interesting fact since these fruits are edible and carotenoids are precursors of vitamin A, possess antioxidant activity, and are associated with a reduced risk of cancer and cardiovascular diseases (Bender, 2005). Ramos et al. (2001) carried out an evaluation of the pro-vitamin A capacity of the pequi fruit. Carotenoids were characterized in order to evaluate the loss of nutritional factors during the conventional cooking of pequi pulp with rice. The carotenoids extracted from pequi pulp, raw or cooked, were identified as ζ-carotene (19), β-carotene (20), β-cryptoxanthin (21), cryptoflavin (22), antheraxanthin (23), zeaxanthin (24) and mutatoxanthin (25). The contents of carotenoids found were 231.09 and 154.5 µg/g for raw and cooked pulp, respectively.
Among the carotenoids found in the raw pulp of C. brasiliense, 23 was present in the highest concentration (40.54%), followed by 24 and 26. Some of these carotenoids presented activity as vitamin A precursors, and 20 was found to be the compound principally responsible for the pro-vitamin A activity in all samples analyzed. It was noticed that after conventional cooking a loss of carotenoids in pequi pulp of approximately 30.25% occurred, corresponding to an average loss of 12.12% in the vitamin A level (Ramos et al., 2001).
Structures of the major carotenoids isolated from C. brasiliense are shown in Figure 2.
The C. brasiliense fruit presents 209 mg of total phenolics per 100 g of pulp; this value is higher than those found in the pulp of the majority of fruits consumed in Brazil, such as açaí (136.8 mg/100 g), soursop (84.3 mg/100 g) and guava (83.1 mg/100 g). The levels of phenolics and carotenoids are lower in the pequi kernel than in the pulp (Lima et al., 2007).
Studies carried out with fruits, stem barks and fruit skin of the species C. villosum and C. glabrum led to the isolation, for the first time in these species, of triterpene saponins, phenolic glucosides and polar dihydroisocoumarin compounds. The isolated saponins showed amphiphilic behavior and the ability to form complexes with steroids, membrane proteins and phospholipids. This behavior accounts for a number of biological properties of these substances, such as hemolytic, cytotoxic and molluscicidal activities (Schenkel, 2003).
A combination of silica gel and reversed-phase column chromatography and semi-preparative HPLC of a fraction from the methanolic extract of C. villosum stem barks also led to the isolation of triterpene saponins, five of them being reported for the first time (Figure 6). Through acid hydrolysis it was possible to identify two aglycones, 80 and 81 (Magid et al., 2006a).
The same group reported the isolation of seven new phenolic glycosides (Figure 7) (Magid et al., 2008).
Biological activities
The oil of C. brasiliense fruits was tested as an antifungal agent against strains of C. albicans, but it was inactive (Passos et al., 2003). The crude ethanolic extracts from barks and leaves of C. brasiliense presented toxic activity against Biomphalaria glabrata, intermediate host of Schistosoma mansoni, the causative agent of schistosomiasis. At the concentration of 100 ppm, both the bark and leaf ethanolic extracts showed high toxicity against B. glabrata, reaching 90% mortality after 48 hours. At a lower concentration (50 ppm) a drastic decrease in toxicity was observed, the leaf extract presenting a 20% mortality rate, while the bark extract killed 10% of the snails after 48 hours (Bezerra et al., 2002).
The ethanolic extract of C. brasiliense barks presented a significant effect on the parasitemia caused by the Trypanosoma cruzi Y strain at the concentration of 400 ppm, reducing the number of circulating parasites in the blood. However, total mortality of the parasites was not achieved (Herzog-Soares et al., 2002).
On the other hand, the effect of the ethanolic extract from C. brasiliense barks on parasitemia was evaluated in mice inoculated with the T. cruzi Y strain, in the acute phase of infection. A significant reduction of parasitemia was observed at the concentration of 400 ppm, but only eight days after infection. The percentage of growth inhibition corresponded to 52.9% when compared with the control (Herzog-Soares et al., 2006).
The effect of methanolic and ethanolic extracts obtained from leaves, floral buds, and the external and internal mesocarps of C. brasiliense fruits was tested against the phytopathogenic fungi Botrytis cinerea, Colletotrichum truncatum and Fusarium oxysporum. A stimulation of fungal growth caused by the extracts was observed (Marques et al., 2002).
The hydroalcoholic extract from C. brasiliense leaves inhibited the proliferation of the promastigote form of L. amazonensis, an effect significantly superior to that shown by glucantime, a drug used for leishmaniasis treatment. This extract also inhibited the growth of several bacteria tested in the same study. The best antibacterial activity was observed against Pseudomonas aeruginosa (1.5 × 10³ µg/mL) and Staphylococcus aureus (2.0 × 10³ µg/mL) (Paula-Junior et al., 2006).
The antioxidant activity of the hydroalcoholic extract of C. brasiliense leaves was evaluated by the DPPH method, revealing a potent antioxidant activity of the extract. At the concentration of 1 × 10³ µg/mL, the extract showed activity similar to that presented by vitamin C and rutin, and no differences between their EC50 values were found (Paula-Junior et al., 2006).
The crude extract and fractions obtained from the epicarp and mesocarp of C. brasiliense were tested for their antioxidant activity using the DPPH method. A high IC50 value was found, which may be linked to the presence of gallic acid and ethyl gallate (Ascari et al., 2010). Aqueous and ethanolic extracts of pulp, seeds and barks from C. brasiliense (pequi) and other fruits, known to be consumed mainly by the native population of the Cerrado region in Brazil, were evaluated for their free-radical scavenging activity using the DPPH assay. Aqueous and ethanolic extracts of pequi peels presented IC50 = 9.4 and 17.9 µg/mL, respectively. Gallic acid (52), used as a reference compound, presented IC50 = 1.4 µg/mL. The determination of total phenols in these extracts was accomplished by the Folin-Ciocalteau assay. Ethanolic and aqueous extracts of the pequi fruit peels presented values of 209.37 and 208.42 g GAE/kg, respectively, in this assay. Both extracts were shown to be rich in phenolic compounds by this method and, in consequence, showed high scavenging activity (Roesler et al., 2007). The antioxidant activity of the pequi fruit ethanolic extract was also assessed using an in vitro lipid peroxidation model, with rat liver microsomes as the oxidative system. The extract showed high antioxidant activity; the respective IC50 did not exceed 0.8 µg/mL (Roesler et al., 2008).
Allelopathic activity on the growth of L. sativa (lettuce) was evaluated for the crude ethanolic extract and compounds isolated from the epicarp and mesocarp of C. brasiliense. Among the samples tested, gallic acid presented the greatest inhibitory effect on the root and a high stimulatory effect on the stem. The same extract and fractions were also evaluated for antimicrobial activity against the microorganisms S. aureus, Salmonella typhymurium, E. coli, Citrobacter freundi, Bacillus cereus, L. monocytogenes, P. aeruginosa and C. albicans, demonstrating activity against all the bacteria tested but not against the yeast C. albicans (Ascari et al., 2010).
The crude ethanolic extract, the ethyl acetate fraction and the epicuticular wax of leaves (collected in March and October), in addition to the oils from seeds and almonds of C. brasiliense, were tested for the inhibition of 23 isolates of Cryptococcus neoformans, of which 19 were C. neoformans var. neoformans and 4 were C. neoformans var. gattii. The crude ethanolic extract presented antifungal activity (89% inhibition) against C. neoformans var. neoformans. The epicuticular wax collected in October presented greater antifungal activity than the wax collected in March, with growth inhibition of 73% of C. neoformans var. neoformans. The fixed oils from both seeds and almonds, as well as the ethyl acetate fraction, presented high fungistatic activity against C. neoformans. With respect to the two varieties of Cryptococcus tested, the analysis of in vitro susceptibility showed that C. neoformans var. gattii possessed lower sensitivity to the extracts of C. brasiliense than C. neoformans var. neoformans (Passos et al., 2002).
In another study, the aqueous extract of C. brasiliense pulp showed anticlastogenic activity and was able to inhibit bleomycin-induced DNA damage in mice. The extract also presented antiproliferative activity when tested in vitro on Chinese hamster cells. Through this study it was seen that the clastogens bleomycin and cyclophosphamide were effective as positive controls for DNA damage in the CHO-K1 (Chinese hamster ovary cells) bioassay. The antioxidant property of the aqueous extract was assessed using the degradation of 2-deoxyribose in the Fenton reaction. It was noticed that the extract inhibited the Fenton reaction, decreasing the formation of hydroxyl radicals and reducing the oxidative degradation of 2-deoxyribose (Khouri et al., 2007).
The toxicity of C. brasiliense was evaluated by analyzing changes in the mitotic index of guaru (Poecilia vivipara) gill epithelial cells exposed to ethyl acetate fractions obtained from the leaves and stem barks of pequi. No significant changes in the mitotic index were detected in comparison with the control group (Motter et al., 2004).
C. brasiliense extracts were assayed for molluscicidal activity against B. glabrata, toxicity to Artemia salina, antifungal activity against Cladosporium sphaerosperum by bioautography, and antibacterial activity by the agar diffusion test against S. aureus, E. coli, B. cereus and P. aeruginosa. The leaf extract showed high cytotoxic activity against larvae of A. salina and also antibacterial activity against the microorganisms S. aureus, B. cereus and P. aeruginosa (Alves et al., 2000).
The potential of C. brasiliense fruit oil as a promoter of in vitro sodium diclofenac penetration through human skin was tested. It was noticed that the combination of pequi oil and papain showed a better performance in formulations than pequi oil alone (Lopes et al., 2008). Interesterification of pequi oil with stearic acid, catalyzed by the sn-1,3-specific lipase Lipozyme, showed efficient incorporation of stearic acid into pequi oil triglycerides (Facioli et al., 1998).
The ability of chloroform and aqueous extracts of C. brasiliense pulp to protect cells against genotoxicity induced by two antineoplastic drugs, cyclophosphamide (CP) and bleomycin (BLM), was evaluated. The fruit pulp extracts did not present clastogenic or genotoxic effects in the studied cells. Both extracts showed protective effects against oxidative damage to DNA caused by CP and BLM, indicating an ability to inhibit in vivo chemical mutagenesis. The results differed according to the gender of the mice tested. The antioxidant activity of the extracts was also evaluated by measurement of lipid peroxidation by TBARS in mouse plasma. The chloroform extract enhanced lipid peroxidation only in male animals, with no significant effect on females. These results suggest that, with an appropriate adjustment of the dose, the organic extract can be used as a supplement in the diet (Miranda-Vilela et al., 2008).
The same group evaluated the protective antioxidant effects of pequi oil in runners. This study was carried out with athletes divided into two groups. Both groups ran the same distance in a race, at the same time and under the same environmental conditions (before and after taking pequi oil as a supplement). It was noted that pequi oil was effective in reducing tissue injuries, based on the levels of the enzymes aspartate aminotransferase (AST) and alanine aminotransferase (ALT), especially in women. Pequi oil also caused a reduction of DNA damage in athletes, regardless of gender. Creatine kinase (CK) levels were influenced by the MnSOD genotypes; heterozygous individuals had less DNA damage, smaller tissue injuries and a decrease in the levels of lipid peroxidation, presenting a better response to pequi oil against exercise-induced damage. Therefore, pequi oil proved to be a good antioxidant supplement, besides possessing many other nutritional properties (Miranda-Vilela et al., 2009b).
The anti-inflammatory power of C. brasiliense (pequi) oil and its effects on the management of postprandial lipidemia and on the blood pressure of female and male athletes were also evaluated by the Miranda-Vilela group. The athletes participating in this study were evaluated after racing (under the same conditions) before and after the ingestion of capsules containing 400 mg of pequi oil for 14 days. Pequi oil presented an anti-inflammatory effect and reduced total cholesterol, HDL and low-density lipoprotein in participants aged above 45 years, mostly the male ones. There was a decrease in blood pressure, suggesting that pequi oil, when used as a supplement for athletes, has a hypotensive effect (Miranda-Vilela et al., 2009c). The study went on to evaluate the effects of pequi oil on exercise-induced oxidative damage in plasma and erythrocytes. The athletes were subjected to weekly training under the same conditions. The assessment was made before and after the ingestion of capsules containing 400 mg of pequi oil for 14 days. Blood samples collected after the races were subjected to the TBARS test and to an erythrogram analysis. To verify whether the antioxidant effect of pequi oil was influenced by antioxidant enzyme genotypes, polymorphisms in the MnSOD (Val9Ala), CAT (21A/T) and GPX1 (Pro198Leu) genes were analyzed. It was observed that pequi oil assists in improving athletes' exercise performance because of its effect on the blood's capacity to carry oxygen. The best results of the treatment with pequi oil were presented by individuals with the MnSOD Val/Val and CAT AA or AT genotypes and carrying the GPX1 Pro allele (Miranda-Vilela et al., 2010).
The dermocosmetic activity of saponins 82 and 134, isolated from the methanolic extract of C. villosum stem barks, was evaluated. Ex vivo lipolytic activity was not observed on a human adipose tissue explant at the concentration of 100 µg/mL, and no inhibition of DOPA oxidase activity was observed at a concentration of 50 µg/mL. A moderate in vitro cytotoxic activity against keratinocyte cells was noticed for 134 and 82 (Magid et al., 2006a).
Methanolic extracts of both the pulp and the peels of the C. villosum fruit, as well as a saponin-rich fraction from the peels, were evaluated for toxicity by the brine shrimp (A. salina) assay. All samples tested showed good larvicidal activity. The methanolic extract from the pulp presented an LC50 of 100 µg/mL, being more toxic than the methanolic extract of the peels (500 µg/mL). The saponin-rich fraction from the fruit peels showed an LC50 value of 100 µg/mL and a mortality rate of 17% at 10 µg/mL, being more active than the methanolic extracts. The antimicrobial activity of saponins 121, 123, 124, 137 and 3-O-β-D-glucopyranosyl-(1→2)-β-D-galactopyranosyl hederagenin 28-O-β-D-glucopyranosyl ester (154) was evaluated against E. coli, S. aureus, P. aeruginosa, Mycobacterium smegmatis and Enterococcus faecalis. However, at the doses used, none of these compounds was active against these microorganisms (Magid et al., 2006b).
The hemolytic activity of methanol extracts from the peels and skin of the C. glabrum fruit, as well as of ten saponins isolated from the fruit pulp (123, 124, 126, 133, 134, 135, 136, 138, 145 and 154), was evaluated; they were shown to be less active than the reference saponin mixture used in the bioassay. The monodesmosidic saponins 124 and 134 showed greater hemolytic activity than the bidesmosidic saponins 126 and 138. The disaccharide saponin 134 was more active than the monosaccharide saponin 154. These results demonstrate that bidesmosidic saponins are generally less hemolytic than monodesmosidic ones and that hemolytic activity increases with the number of sugar units attached at position 3 of the aglycone (Magid et al., 2006c).
Leaves of C. microcarpum presented repellent activity/toxicity against leaf-cutting ants. These leaves, as well as the pulp of C. glabrum, C. gracile and C. microcarpum fruits, are used by the indigenous peoples of Northwestern Amazonia as fish poison. This application is related to the presence of saponins in these parts of the plants (Kawanishi et al., 1986).
CONCLUSION
The Caryocaraceae family, distributed in Central and South America, presents two genera, Anthodiscus and Caryocar. Plants from the Anthodiscus genus have not been targeted by phytochemical studies so far, and their use has been restricted to that of a wood source. On the other hand, there are several reports on the phytochemical constituents and biological studies of species from the Caryocar genus.
The fruits of many Caryocar species are used to produce oil and in cooking, in the preparation of several foods, due to their high contents of vitamins, lipids and proteins. The oil, also widely used in the cosmetic and food industries, possesses high amounts of fatty acids such as palmitic and oleic acids. Among the species of the Caryocar genus, the oils from the fruit pulp of C. brasiliense, C. villosum and C. glabrum were chemically and biologically studied, as well as the almond of C. brasiliense. The levels of carotenoids in the C. brasiliense and C. villosum fruit pulps were determined; in C. brasiliense the major carotenoid found was antheraxanthin, while in C. villosum β-cryptoxanthin and β-carotene were the predominant ones. The composition of the volatile compounds present in the fruit pulp and in the seeds of C. brasiliense and C. villosum was studied, and C. brasiliense presented ethyl hexanoate as the major constituent in both the pulp and the seed. C. brasiliense presents several phenolic compounds, which is consistent with the antioxidant activity reported for its fruit. C. glabrum and C. villosum are rich in saponins. Caryocar species demonstrated great biological potential. C. brasiliense was active against B. glabrata, T. cruzi and L. amazonensis and inhibited the growth of the bacteria S. aureus and P. aeruginosa. Several pre-clinical and toxicological studies confirm the antioxidant activity of C. brasiliense. C. brasiliense did not present clastogenic or genotoxic effects in mice cells but instead protected them against DNA damage induced by bleomycin or cyclophosphamide. The oil of C. brasiliense also has a cardiovascular protective effect and can be used as a dietary supplement. In this way, this review highlights the great potential of plants from the Caryocar genus.

FIGURE 2. Structures of carotenoids found in C. brasiliense fruit pulp.

FIGURE 3. Structures of some volatile constituents of C. brasiliense fruit pulp.

FIGURE 6. Structures of some triterpenoids present in the stem bark of C. villosum.

FIGURE 7. Structures of some phenolic glycosides present in the stem barks of C. villosum and C. glabrum.

FIGURE 10. The new dihydroisocoumarin glucosides from the stem barks of C. glabrum.
TABLE 1. Species belonging to the Caryocar and Anthodiscus genera.
"Biology",
"Chemistry",
"Medicine"
] |
Development of a Framework for the Communication System Based on KNX for an Interactive Space for UX Evaluation
Domotics (Home Automation) aims to improve the quality of life of people by integrating intelligent systems within inhabitable spaces. While traditionally associated with smart home systems, these technologies have potential for User Experience (UX) research. By emulating environments to test products and services, and integrating non-invasive user monitoring tools for emotion recognition, an objective UX evaluation can be performed. To achieve this objective, a testing booth was built and instrumented with devices based on KNX, an international standard for home automation, to conduct experiments and ensure replicability. A framework was designed based on Python to synchronize KNX systems with emotion recognition tools; the synchronization of these data allows finding patterns during the interaction process. To evaluate this framework, an experiment was conducted in a simulated laundry room within the testing booth to analyze the emotional responses of participants while interacting with prototypes of new detergent bottles. Emotional responses were contrasted with traditional questionnaires to determine the viability of using non-invasive methods. Using emulated environments alongside non-invasive monitoring tools allowed an immersive experience for participants. These results indicated that the testing booth can be implemented for a robust UX evaluation methodology.
Introduction
Domotics, also referred to as Home Automation, is the integration of control and monitoring systems in the home area into a unified system, allowing the transition of conventional homes into smart homes [1]. The goal of smart homes is to monitor inhabitants' activities through their interaction with different devices in the home area in order to control and adapt the environment and provide a better experience for the inhabitants [2].
Domotics offers a variety of devices that can be programmed according to the application context, demonstrating the potential to be adapted to different scenarios such as education [3], healthcare [4], assisted living [5], etc. Domotics is also used in relaxation therapies to reduce stress by generating scenarios with smart lighting systems, environmental sound players, temperature variation, and scent diffusers [6][7][8][9]. These applications inspired us to create the Emotional Domotics (ED) research line. The motivation behind ED research is to improve the quality of life by reducing the stress levels of the inhabitants with domotic systems. This goal can be achieved by creating an inhabitable space that monitors and analyzes the inhabitants' emotional responses, leading to the selection of the best environmental condition, one in which a sense of comfort and well-being is transmitted to the inhabitants, reducing stress levels as a result [10]. Considering this, a testing booth was built to conduct experiments centered on the impact of the environmental variables on the emotional behavior of the inhabitants. The testing booth was instrumented with KNX devices to facilitate the replication of experiments; KNX is a standardized, open communication protocol for building automation and domotics. Meanwhile, the emotion recognition task is carried out using non-invasive methods such as wearables and computer vision to acquire biometric data from the inhabitants [11]. There are similar works that implement smart home systems to recreate scenarios and use physiological sensors such as wearables to monitor stress levels [12][13][14]. In these works, predefined scenarios are generated for relaxation therapies, while stress levels are calculated using signals acquired from electrodermal activity and heart rate. Based on the implementation of these solutions, inhabitants benefit from selecting the scenarios that are most suitable for them.
Meanwhile, as this research progressed, we established the possibility of using the testing booth in UX research, thanks to the generation of environments suited to product design and testing in conjunction with the emotion recognition systems used to analyze users' behavior during the interaction [15]. The application of the methods and tools used in the testing booth can also improve the efficiency of individuals in workstations [16] and student engagement in learning environments [17]. However, the KNX communication protocol was not designed to integrate sensors that monitor users' emotional responses, although it can integrate biometric sensors for security purposes, such as fingerprint scanners. This led us to develop a framework for the communication system of the testing booth that integrates KNX-based systems alongside emotion recognition systems. The framework was designed based on the Python programming language, taking advantage of the open-source nature of the KNX communication protocol in order to integrate and synchronize the data collected from the environmental sensors with the user data collected from biometric sensors during the interaction process. Using Python also opens the possibility of integrating Artificial Intelligence (AI) and Machine Learning (ML) tools for a more advanced emotion recognition system, and it facilitates control when recreating scenarios for product testing. A diagram of the general solution based on the requirements of the framework of the communication system of the testing booth is presented in Figure 1. The diagram consists of an interactive space, the testing booth, that contains KNX actuators and sensors. Within the interactive space, different activities related to the interaction between users and products are performed. Biometric data for emotion recognition, such as facial expressions, are acquired with sensors that do not belong to KNX. The data acquired from the environmental and biometric sensors are synchronized and processed for their interpretation in order to generate a UX report. To test this framework, an experiment was proposed in which a new design of detergent bottle was evaluated using the emotional response of the participants. The testing booth was used to recreate the environment of a laundry room (relative humidity, light hue and intensity). The recreation of environments in conjunction with non-invasive monitoring sensors allowed participants to have a more immersive experience. Furthermore, using emotional responses allows researchers to acquire a more genuine response from the participants while they interact with the product during the evaluation of usability criteria. This is important since UX researchers can make a deeper analysis of the elements that could impact the acceptance of a new product and reduce any possible bias generated by self-reported metrics such as questionnaires and surveys [18]. These findings can be used to design a methodology for an objective UX evaluation based on emotions.
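As a rough illustration of the synchronization step described above, the sketch below aligns a stream of KNX environmental readings with a stream of emotion estimates by nearest timestamp. The file names, column names and the 500 ms tolerance are illustrative assumptions, not part of the framework described here.

```python
import pandas as pd

# Hypothetical CSV logs: one exported from the KNX bus (temperature, lux, ...),
# one from the facial-expression analysis tool (per-emotion probabilities).
knx = pd.read_csv("knx_log.csv", parse_dates=["timestamp"])
emotions = pd.read_csv("emotion_log.csv", parse_dates=["timestamp"])

# Both streams must be sorted by time before an as-of merge.
knx = knx.sort_values("timestamp")
emotions = emotions.sort_values("timestamp")

# Pair each emotion sample with the nearest environmental reading,
# discarding pairs more than 500 ms apart (the sampling rates usually differ).
merged = pd.merge_asof(
    emotions, knx,
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("500ms"),
)

# The merged table can then be mined for patterns, e.g. the average
# "joy" probability grouped by the logged light-intensity level.
print(merged.groupby("lux_level")["joy"].mean())
```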
This paper is structured as follows: Section 2 provides the background to understand this research line. Section 3 describes the testing booth as well as the instruments it has. In Section 4, the proposed framework for the communication system of the testing booth is presented. Section 5 covers the application of the proposed framework in an experiment related to the analysis of UX for the evaluation of prototypes of detergent bottles in a simulated scenario. Finally, a discussion about the findings and the implications of this work is provided in Section 6.
Background
In this section, a brief description of the topics covered in this research is provided.
Domotics
Domotics focuses on transforming conventional homes into smart homes. This means that smart technologies are used to control indoor environments by installing intelligent lighting systems, entertainment systems, temperature controllers, and other applications to improve comfort and safety for any user [19,20]. All mechanical and digital devices are interconnected in a network, allowing them to communicate with each other and with the final user to create an interactive space [21]. As previously mentioned, the main objective of domotics is to improve the quality of life of inhabitants by automating most home tasks. This can be achieved by providing a proactive environment that is aware of its inhabitants' personal and emotional needs, based on their location in the smart home, and that can offer several solutions for those specific needs [22]. Every smart home system must have the following features [20]:
• Automation: the ability to accommodate automatic devices or perform automatic functions.
• Multi-functionality: the ability to perform several duties to generate several outcomes.
• Adaptability: the ability to adjust to inhabitants' needs.
• Interactivity: the ability to interact with or allow interaction among inhabitants.
• Efficiency: the ability to perform functions in a time-saving, cost-saving, and convenient manner.
As domotics evolves, new tendencies and technologies have emerged. Among those tendencies is the implementation of the Internet of Things (IoT) to manage all the different devices in a smart home, facilitating the monitoring of individuals and boosting independent living through sensors and actuators connected to internet networks to manipulate the environment [23]. IoT systems also allow the processing and storage of a great amount of data; this means that the domotic system can take advantage of the collected data, detect patterns in the inhabitants' behavior by implementing ML models, and adjust the environment according to those routines, resulting in energy efficiency [24]. As all domotic devices are connected through an internet network, inhabitants can monitor and control their smart homes remotely. However, these developments have also led to increased investments in cybersecurity solutions to prevent intruders from accessing the collected data and controlling the smart home [25].
KNX Technology
Every domotic installation must have a proper communication protocol to exchange data between all devices in the home area in order to control and monitor all the smart home devices. Many communication protocols for domotics have been developed over the years, such as BACnet, C-Bus, CC-Link, and KNX [26]. Among these, the KNX standard is one of the most popular for smart homes and building automation because it provides a standardized communication protocol that allows the exchange of data between KNX domotic devices. The KNX protocol offers backward compatibility, allowing easy installation and scalability as well as minimizing upgrade costs. Many manufacturers can develop their own KNX devices, which must use the same communication protocol; this means that all KNX devices can exchange data regardless of whether they were made by different manufacturers. Additionally, KNX-based systems offer a variety of solutions for smart lighting, heating systems, energy efficiency, and security systems. Due to these features, KNX has more than a 70% share of the automation market in Europe [27].
The KNX protocol was developed by the KNX Association. This association was founded in 1999 by the European Installation BUS Association, the European Home System Association, and the BatiBUS Club International. The KNX protocol was accepted as an international standard for home automation in 2006 as ISO/IEC 14543 [28], as well as CENELEC EN 50090 [29] and CEN EN 13321-1 [30] (Europe), ANSI/ASHRAE 135 [31] (USA), and GB/T 20954 [32] (China) [33].
The minimum working KNX system must include at least a KNX power supply of 30 V DC, an actuator, and a sensor. Data exchange between devices is accomplished over different transmission media such as KNX TP (Twisted Pair) or KNXnet/IP (Ethernet). Any KNX installation can be programmed by using the Engineering Tool Software (ETS) Version 5 [34]. In a KNX installation, devices are identified by a physical address and a group address. The physical address is used to initialize and program the device, while the group address is used for the communication and interaction between KNX devices. This allows an easy and fast exchange of data between KNX devices [35].
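To make the addressing scheme concrete, the sketch below encodes the common three-level group address (main/middle/sub) and the individual (physical) address (area.line.device) into the 16-bit values carried on the bus. This is a minimal illustration of the standard layout, independent of any particular KNX library or of the booth's actual configuration.

```python
def encode_group_address(main: int, middle: int, sub: int) -> int:
    """Three-level group address: 5 bits main, 3 bits middle, 8 bits sub."""
    assert 0 <= main <= 31 and 0 <= middle <= 7 and 0 <= sub <= 255
    return (main << 11) | (middle << 8) | sub

def encode_individual_address(area: int, line: int, device: int) -> int:
    """Physical address area.line.device: 4 bits area, 4 bits line, 8 bits device."""
    assert 0 <= area <= 15 and 0 <= line <= 15 and 0 <= device <= 255
    return (area << 12) | (line << 8) | device

# Example: group address 1/2/3 and physical address 1.1.10
print(hex(encode_group_address(1, 2, 3)))        # 0x0a03
print(hex(encode_individual_address(1, 1, 10)))  # 0x110a
```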
Being an open communication protocol, KNX opens the possibility for anyone to develop their own solutions for home or building automation. This includes possible adaptations of the communication protocol to other platforms, such as MATLAB 2017b (or later) or Python 3.6 (or later), for different applications [36,37]. Finally, KNX allows all instruments to be under the control of the same communication protocol; this simplifies the process of adding other KNX-based instruments regardless of whether they come from different manufacturers. Due to these features, KNX-based systems were selected to instrument the testing booth. The specifications of the devices used in this research are provided in Section 3.
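As a sketch of how a Python layer can sit on top of a KNX installation for this kind of framework, the snippet below wraps group writes behind a small controller class and logs every command with a timestamp, so that environment changes can later be correlated with biometric data. The `BoothController` class, the group addresses and the `_send` method are hypothetical; the paper does not prescribe a specific KNX library, and a real implementation would delegate `_send` to a KNXnet/IP or TP transport.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GroupWrite:
    """One logged command sent to a KNX group address (hypothetical record type)."""
    timestamp: datetime
    group_address: str   # e.g. "1/2/3"
    value: int           # raw datapoint value

class BoothController:
    """Hypothetical wrapper around the KNX bus used by the testing booth."""

    def __init__(self):
        self.log: list[GroupWrite] = []

    def _send(self, group_address: str, value: int) -> None:
        # Log the command; actual bus access (KNXnet/IP or TP) would go here.
        self.log.append(GroupWrite(datetime.now(), group_address, value))

    def set_light_intensity(self, percent: int) -> None:
        # Dimming actuators commonly expect a 0-255 byte for 0-100 %.
        self._send("1/2/3", round(percent * 255 / 100))

    def set_light_hue_warm(self, warm: bool) -> None:
        self._send("1/2/4", 1 if warm else 0)

controller = BoothController()
controller.set_light_intensity(60)   # e.g. recreate a laundry-room lighting level
controller.set_light_hue_warm(True)
print(controller.log)
```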
Emotion Recognition
Emotions are complex mental reactions that can be conscious or unconscious responses to events, objects, and situations. Emotions combine feelings, thoughts, and behaviors that can manifest through various channels, including human speech, gestures, facial expressions, and physiological signals [38]. Emotions can impact both the physiological and psychological states of individuals [39]. There are different technologies to identify emotions and measure their intensity. These methods can be divided into contact and non-contact methods. Contact methods require specialized devices to acquire physiological data such as EEG, EMG, GSR, or ECG. Meanwhile, non-contact methods use tools for the processing of videos to analyze facial expressions and body posture, as well as audio recordings for voice analysis. In the context of this research, computer vision algorithms are used for the analysis of facial expressions to identify emotions.
One of the most recognized researchers in facial expression analysis is Dr. Paul Ekman. Ekman and his colleagues suggested that emotions can be consciously or unconsciously expressed through facial expressions. A set of basic emotions can be identified by analyzing facial expressions; these emotions are joy, anger, contempt, disgust, fear, sadness, and surprise [40]. These expressions are innate and consistent across individuals regardless of factors such as age, culture, or ethnic origin [41]. This means that there are similar patterns in facial expressions when an individual is experiencing a specific emotion.
Another key factor is the advances in ML and deep learning models that led to the creation of solutions based on computer vision algorithms, such as emotion recognition through facial expression analysis, for different purposes such as marketing, psychology, and education, among others [42]. The methodology to analyze facial expressions for emotion recognition generally follows the steps presented in Figure 2. The first step consists of acquiring an image that will be analyzed. This image can be obtained from a photo or a video; this image is called the input image. In the pre-processing step, the input image is analyzed to obtain a Region of Interest (ROI). ROIs are used to simplify the processing of the image by giving the algorithm only relevant data: in this case, the face of an individual. ROIs are used to crop the areas that contain all possible faces within the input image. Most facial expression recognition systems are based on the Viola-Jones algorithm [43] or the Dlib library [44]. Several filters are applied to the input image based on the architecture of the model; these filters include resizing and changes in color channels.
During the feature extraction phase, relevant data are obtained from the pre-processed image. One of the most popular techniques to obtain facial features is Action Units (AUs). AUs are individual facial muscle movements involved when a subject is expressing an emotion. AUs are based on the theory behind the Facial Action Coding System proposed by Dr. Paul Ekman [45].
In the classification/regression step, the processed image is labeled with the emotion that the individual is experiencing. The most popular algorithms to classify emotions based on facial expression analysis are Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) [46].
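A minimal sketch of the detect-then-classify pipeline just outlined is given below. It uses OpenCV's bundled Haar cascade for the face-detection (ROI) step, while `classify_emotion` stands in for whatever trained CNN or SVM a real system would load; its output here is only a placeholder, and the 48x48 ROI size is an assumption typical of small emotion CNNs.

```python
import cv2

# Face detector shipped with OpenCV (Viola-Jones style Haar cascade).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_img) -> str:
    """Placeholder for a trained CNN/SVM; a real system would return
    one of the basic emotions (joy, anger, sadness, ...)."""
    return "neutral"

def analyze_frame(frame):
    """Pre-process a frame, crop face ROIs, and label each with an emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # assumed CNN input size
        results.append(((x, y, w, h), classify_emotion(roi)))
    return results

# Example usage with a single webcam frame:
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(analyze_frame(frame))
cap.release()
```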
User Experience
As previously mentioned, smart homes are adjusted to the users' needs and requirements in order to provide an active scenario that generates a sense of well-being and improves the quality of life [47]. Therefore, it is important to know the users' perception when they interact with these systems. Creating new metrics to quantify user acceptance and satisfaction has gained major relevance recently. These new metrics are related to the concept of UX [48].
Although UX is a relevant concept in research, there is no clear definition of UX or of how it should be measured. This problem emerges from the distinct applications given to the term by different disciplines and areas of knowledge [49]. From this discussion, and in order to create a unified concept, ISO 9241-210 [50] standardized the concept of UX as "A person's perceptions and responses that result from the use or anticipated use of a product, system or service". This definition is used by new researchers to gain a better understanding of UX and of how it could be applied to the area of knowledge they are working in [51].
Questionnaires are the most widely used method to evaluate UX because they make it possible to obtain quantitative data about the interaction with the product or service. However, the obtained data can be biased or affected by external influences such as the participant's attitude or current emotional state, among others [52]. Therefore, many UX researchers have introduced psycho-physiological methods such as electroencephalography (EEG), electromyography (EMG), or Galvanic Skin Response (GSR) in order to obtain a relatively objective UX evaluation [18].
Previous Work in Emotional Domotics
In previous works [10], the impact of environmental variables on a user's emotional state was analyzed. The first experiments were conducted in a Gilbreth and Taylor testing booth in which participants were asked to assemble a LEGO® vehicle under different environmental conditions; these environmental conditions included light intensity, temperature, and humidity. Each experimental test was designed to last 5 min. The experiments were conducted in Mexico; therefore, all environmental conditions were adjusted according to the Official Mexican Norms NOM-015-STPS-2001 [53] and NOM-025-STPS-2008 [54], which are related to temperature conditions and lighting conditions, respectively. iMotions™ software version 8.1 was used to analyze participants' emotional responses during the experiments [55]. iMotions™ implements the FACET™ module for the analysis of facial expressions with computer vision for emotion recognition, although more recent versions of iMotions™ use the Affectiva™ model for this task, since FACET™ was acquired by Apple© [56]. The results of these experiments led to the design and construction of a testing booth according to the needs of ED research.
After the testing booth was built, different experiments were conducted. In this case, the experiments were passive; this means that participants received the stimuli instead of performing an activity. The stimuli were obtained from the International Affective Picture System (IAPS) [57]. The experiments were conducted using sensors compatible with iMotions™ software: the FACET™ module and the Shimmer™ wristband to collect GSR and photoplethysmogram (PPG) data. Temperature, light hue, and light intensity were modified for each experiment within the ranges allowed by the Official Mexican Norms. The results of these experiments were used to generate equations that correlate the emotional state of the user with the environmental variables. These equations are planned to be used to create a control loop that modifies the inhabitable space variables based on the user's emotional needs [11].
Methodology
In this section, a detailed description of the testing booth is provided. This description covers the KNX devices and how they are installed, as well as the process of interpreting the KNX protocol for the communication framework.
The Testing Booth
Based on the conclusions from previous experiments [11], it was stated that a testing booth was needed to conduct different experiments to analyze in depth the impact of the environmental variables on users' emotions. In this case, the environmental variables we are working with are temperature, light intensity and hue, relative humidity, and CO2 levels. The testing booth was designed and built with the help of students studying toward a Bachelor's degree in Industrial Design as an academic challenge. Aspects such as good light reflection from the inside, low heat transfer, and isolation from any visual distraction from the outside were considered during the design process. The testing booth has the necessary elements that can be used by the participants to perform different tasks. The testing booth has two Microsoft LifeCam HD-3000 webcams: one is used to record the facial expressions in order to analyze them and identify emotions, and the other is used to record the interaction to identify the tasks performed. The testing booth is presented in Figure 3.
The testing booth has a square base of 1.65 m and a height of 2.4 m. Its structure is made of steel, and it is covered with medium-density fiberboard (MDF) panels, except for the roof, which is composed of polycarbonate panels.
As previously mentioned, the testing booth was instrumented with different KNX devices:
• CO2 Multisensor CD 2178: Sensor to measure different variables within the testing booth. These variables are CO2 levels (parts per million, ppm), relative humidity (%), and temperature (degrees Celsius).
The installation can be controlled through the KNXnet/IP protocol from any PC with the ETS5 software [58]. KNX has its own special bus cable to communicate with every KNX device in the building. The bus cable can run alongside the domestic mains cable in a KNX installation as long as both cables are covered with their respective insulation material. The wiring diagram of the testing booth is shown in Figure 4. The wiring diagram consists of the KNX instruments mentioned above and a PC to control the environmental variables within the testing booth. To connect the PC with the KNX instruments, a router is needed to send control commands to the KNX controller IP interface. Finally, the two webcams are connected to the PC to record the facial expressions and the interaction between the user and the product. A monitor is also installed within the testing booth to present stimuli if they are needed. More KNX devices could be added, but they need to be configured with the ETS5 software. Adding new KNX devices does not affect the settings of previously installed devices.
Interpretation of the KNX Communication Protocol
The communication of the testing booth is based on the KNXnet/IP protocol. The KNXnet/IP protocol is a variant of the KNX protocol that allows data exchange through an IP network using Ethernet as its physical layer. The User Datagram Protocol is used as the transport layer, while KNXnet/IP serves as the application layer [36].
The KNXnet/IP protocol uses a telegram system to exchange data between all devices in a KNX installation. Telegrams are data packets containing commands with the necessary data to carry out several instructions in a KNX installation [34]. KNX documentation contains information about the content of those telegrams; however, it does not provide an in-depth description of how the data are handled. Wireshark™ was used to capture the KNX telegrams and analyze every line. The methodology to acquire the telegrams for examination consists of five steps:
1. Monitoring is initialized to capture all data packets that belong to the KNXnet/IP protocol; other data packets are discarded.
2. The connection is established with the KNX system to generate all the telegrams related to the connection request from the computer to the KNX IP BUS.
3. Commands are sent to all sensors and actuators installed in the testing booth to register the generated telegrams.
4. A command is sent to disconnect the computer from the KNX IP BUS and identify the telegram corresponding to this command.
5. Monitoring is stopped once all necessary telegrams have been captured.
The content of each KNX telegram was analyzed with Wireshark™. The purpose of this analysis was to better understand how the data are encapsulated in a KNX telegram in order to replicate its behavior.
After analyzing each recorded telegram, it was found that telegrams are coded in hexadecimal and are composed of a header length, a protocol version, a KNXnet/IP service type identifier, a total length, and a KNXnet/IP body. Once the KNXnet/IP telegram structure was understood well enough to generate the corresponding commands, a programming language was selected to replicate the KNXnet/IP protocol and control the testing booth. In this case, Python was chosen because of the facilities it offers, such as tools for computer vision, AI, and interface design, and because it can work with asynchronous systems.
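As an illustration of what replicating a telegram by hand involves, the sketch below packs the header fields described above and sends the frame over UDP. The gateway address is a hypothetical value, the service type constant is an assumption taken from the KNX specification rather than from this work, and the frame body (which carries the actual command) is omitted. KNXnet/IP gateways conventionally listen on UDP port 3671.

```python
import socket
import struct

# Assumed values for illustration; check the KNX specification and your installation.
KNX_GATEWAY = ("192.168.1.10", 3671)   # hypothetical gateway IP, standard KNXnet/IP UDP port
SEARCH_REQUEST = 0x0201                # assumed service type identifier for a search request

def build_knxnet_ip_frame(service_type, body=b""):
    """Prepend the 6-byte KNXnet/IP header: length, protocol version, service, total length."""
    header = struct.pack("!BBHH",
                         0x06,            # header length (always 6 bytes)
                         0x10,            # protocol version 1.0
                         service_type,    # KNXnet/IP service type identifier
                         6 + len(body))   # total frame length in bytes
    return header + body

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# A real request also needs a body (e.g., the discovery endpoint), omitted here.
sock.sendto(build_knxnet_ip_frame(SEARCH_REQUEST), KNX_GATEWAY)
```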
In the search for tools and libraries for asynchronous systems, the XKNX library was found; XKNX is an open-source library developed by third parties. However, this library is not officially licensed by the KNX Association. This led to conducting different tests in order to verify the communication between the KNX devices installed in the testing booth when implementing this library.
To establish a successful connection using this library, the parameters must be set according to the group addresses of the different KNX devices inside the testing booth. The first tests consisted of turning on a white light and modulating its brightness. Wireshark™ software 2.6 was used to monitor this process. How the data were handled is shown in the handshake diagram in Figure 5. These tests confirmed that the XKNX library is compatible with all KNX devices installed in the testing booth. With Python, a graphical user interface (GUI) was proposed to control and monitor the environmental variables within the testing booth using the XKNX library.
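A minimal sketch of this kind of test with the XKNX library is shown below, assuming a dimmable white light; the device name and group addresses are hypothetical and depend on the ETS5 configuration of the installation, and the exact connection options depend on the XKNX version in use.

```python
import asyncio
from xknx import XKNX
from xknx.devices import Light

async def main():
    xknx = XKNX()                       # connection parameters follow the installation's setup
    await xknx.start()

    # Hypothetical group addresses for the booth's dimmable white light.
    light = Light(xknx,
                  name="booth_white_light",
                  group_address_switch="1/0/1",
                  group_address_brightness="1/0/2")

    await light.set_on()                # turn the light on
    await light.set_brightness(128)     # modulate brightness (0-255)

    await xknx.stop()

asyncio.run(main())
```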
The GUI has space to show the image captured by the webcam. The image captured by the webcam can be processed by emotion recognition models based on facial expression analysis. The displayed image has a resolution of 920 × 750 px and a 15 ms refresh rate. Meanwhile, limitations of the actuators were also considered during the programming of the GUI. For example, a range of temperatures was set for the air conditioning in which users can only select temperatures between 20 and 28 °C. All data collected by the sensors, as well as the changes in the actuators, are taken every 3 s and saved into a CSV file. This interval was chosen because the environmental variables do not change abruptly over short periods of time. Meanwhile, the analysis of facial expressions should be made at a frame rate of 24 FPS. The prototype of the GUI is presented in Figure 6.
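A simple sketch of this periodic logging is shown below; the sensor-reading helper and its return values are hypothetical placeholders for the actual KNX reads performed through the XKNX library.

```python
import csv
import time
from datetime import datetime

LOG_INTERVAL_S = 3  # sensors and actuator states are sampled every 3 s

def read_booth_state():
    """Hypothetical helper; in the real GUI these values come from the KNX sensors/actuators."""
    return {"temperature_c": 24.1, "co2_ppm": 520, "humidity_pct": 48.0, "light_lux": 93}

with open("booth_log.csv", "w", newline="") as f:
    writer = None
    while True:
        row = {"timestamp": datetime.now().isoformat(), **read_booth_state()}
        if writer is None:
            writer = csv.DictWriter(f, fieldnames=row.keys())
            writer.writeheader()
        writer.writerow(row)
        f.flush()
        time.sleep(LOG_INTERVAL_S)
```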
Proposed Framework for the Testing Booth Communication System
With the knowledge acquired during this research and by delving into the opportunities of the KNX communication protocol, a framework was proposed for the communication system of the testing booth to integrate the tools for emotion recognition with the devices based on the KNX protocol. The proposed framework for the communication system of the testing booth is presented in Figure 7; it is divided into seven main sections.
Interactive Space
The focus of this section is the physical space in which the experiments are conducted. In this case, it includes the testing booth with all its elements: design, furniture, equipment, and electrical systems.
In this interactive space, participants interact with different elements. Experiments will be designed so that participants interact most of the time with the mouse and keyboard, but interaction is not limited to these devices; it is possible to interact with other kinds of objects, as will be presented in Section 5. Passive audiovisual stimuli on the screen are also considered.
From these interactions, the emotional response of the participants is acquired by noninvasive means, namely recordings of facial expressions. However, other devices, such as wearables that acquire physiological signals such as GSR or heart rate, are considered for experiments that require them. These signals will be used as complements to the analysis of the emotional response.
Actuators
Actuators are used to control the environment in the interactive space to generate a scenario according to the experiment. There is a white light system and an RGB LED strip to adapt the intensity and the hue of the light. The testing booth also has an air conditioning system to modify the temperature based on the requirements of the experiment. The air conditioning system has different modes such as automatic, cold, dry, and CO2 mode; Official Mexican Norms are considered for the selection of the environmental variables.
Sensors
This section can be divided into environmental sensors and biometric sensors. The environmental sensors are installed in the testing booth to acquire signals such as CO2 levels, relative humidity, temperature, and light intensity.
For the biometric sensors, a 720p resolution webcam is installed to capture each participant's facial expressions. The face must be captured frontally as much as possible; therefore, the webcam is placed above the screen on which the audiovisual stimuli are shown. Other sensors, such as wearables, are included to acquire complementary physiological signals. The selected wearable must allow participants to interact freely with the elements within the testing booth.
Domotic Interface
The domotic interface is the key point for establishing communication among all the devices in the testing booth. This communication uses the KNX IP BUS to send KNXnet/IP telegrams to all KNX devices. The telegrams carry the commands that make KNX devices execute different orders, as well as requests for data from the sensors and the state of the actuators. The acquired data are sent via the KNX IP BUS to the computer that controls the testing booth.
Data
In this section, the data obtained from the sensors are collected. The variables from the sensors are temperature in degrees Celsius, CO2 levels in parts per million (ppm), relative humidity in percent, and light intensity in lux. Meanwhile, the data acquired by the biometric sensors include GSR and heart rate, as well as the recording from the webcam with the participant's facial expressions.
Additionally, there is a database with the audiovisual stimuli that will be presented to the participants depending on the experiments' requirements. However, acoustic stimuli will not be used at this stage of the research due to the emotional complexity related to music.
Processing
In this section, participants' facial expressions are analyzed, together with the physiological data collected by the wearables, which complement the processed facial expression data. Although there are instruments that recognize emotions with better accuracy, such as EEG, those instruments tend to bias participants' behavior, since they feel a constant sense of being observed. The emotion recognition process uses ML algorithms, primarily CNNs, to analyze facial expressions and detect which emotion the participant is experiencing at that moment. The collected data related to emotions, stimuli, and interaction are synchronized frame by frame with the data from the environmental sensors. The synchronized data are analyzed with AI tools to identify patterns in participants' behavior during the interaction and to interpret those patterns to understand the impact of the interaction variables and the environmental variables on participants' emotional response.
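A sketch of this synchronization step is shown below, assuming the per-frame emotion predictions and the 3 s environmental log are both timestamped; the file names and column names are hypothetical, and pandas' nearest-timestamp merge stands in for whatever alignment the actual pipeline applies.

```python
import pandas as pd

# Hypothetical inputs: per-frame emotion predictions (~24 FPS) and the 3 s environmental log.
emotions = pd.read_csv("emotions.csv", parse_dates=["timestamp"])       # timestamp, emotion, confidence
environment = pd.read_csv("booth_log.csv", parse_dates=["timestamp"])   # timestamp, temperature_c, co2_ppm, ...

emotions = emotions.sort_values("timestamp")
environment = environment.sort_values("timestamp")

# Attach to each emotion frame the closest environmental reading (within 3 s).
synchronized = pd.merge_asof(
    emotions, environment,
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta(seconds=3),
)
synchronized.to_csv("synchronized.csv", index=False)
```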
A key aspect considered during the design of this framework was the privacy of the participants, since their biometric data are collected; therefore, an informed consent document must be drafted. This document must inform participants how their data will be processed and assure them that the data will not be shared with other entities that may misuse them. This extra step guarantees privacy to all participants involved in future experiments.
Application
Once the patterns are interpreted, a UX report is generated based on participants' emotional behavior during the experiments. The environmental variables are then selected for the next iteration of the experiment; KNXnet/IP telegrams are generated and sent to the KNX devices in the testing booth to modify the environmental variables and create the scenario for the experiment. The system is also programmed to acquire signals from the KNX sensors over a period that depends on the requirements of the experiment. In case any device does not receive an instruction, the telegram must be sent again. The KNX protocol has an "Acknowledge" telegram system to report that a telegram was received successfully by the target device and that it is carrying out the requested task. Finally, the stimuli that will be presented to the user on the screen are selected.
Application of Proposed Framework for UX Analysis
The testing booth and its framework have the potential to be implemented in applications related to UX research. To evaluate the framework presented in Section 4, an experiment was conducted to evaluate the design of detergent bottle prototypes based on the emotional response of participants. Seven detergent bottles were designed and divided into three groups: Group A included three 600 mL detergent bottles, Group B featured 1-gallon detergent bottles, and Group C contained 5 L detergent bottles. The design of the bottles is presented in Figure 8. In this experiment, the testing booth was used to replicate a scenario corresponding to a laundry room in which participants interacted with the prototypes of detergent bottles, while data related to the emotional response of the participants were collected and analyzed in order to determine the acceptance rate of the new designs of the detergent bottles.
This experiment was conducted in collaboration with students pursuing a Bachelor's degree in Industrial Design, who designed and created the prototypes of the detergent bottles. They even recreated the weight of the detergent bottles to make the interaction as genuine as possible. Other collaborators were students studying toward a Bachelor's degree in Industrial Engineering, who helped with the logistics of the experiment. The impact of the involvement of these students and how this experiment contributed to their academic formation is explained in [59].
Development of the Experimental Protocol
The main goal of this experiment was to determine the level of acceptance of the new design of detergent bottles based on the emotional response of participants. To achieve this goal, participants' facial expressions were analyzed, as well as signals such as GSR and PPG (heart rate).
Meanwhile, the interaction with the bottles was also recorded to synchronize it with participants' emotional responses and identify the key moments in which participants had a more intense emotional reaction. The collected data are sensitive; therefore, an informed consent document was drafted, which ensured the anonymity of the participants and stated that the data related to their facial expressions would be used exclusively for the emotion recognition process and not for other purposes.
To collect the data, two cameras were used: one camera to record the facial expressions of the participants and another camera to record the interaction with the detergent bottles. The cameras used for this experiment were two Microsoft LifeCam HD-3000. Facial expressions were analyzed with the FACET model, which has an accuracy of 97% when predicting emotions [60]. The data related to GSR and PPG were obtained using Shimmer™ wristbands. Facial expressions and physiological signals were processed with the iMotions™ software.
For this experiment, thirty participants (27 female and 3 male, aged between 18 and 43 years) were gathered. The participants were distributed according to the capacity of the detergent bottles from Figure 8. Table 1 presents how the participants were distributed among the bottle groups; none of the participants repeated the experiment with another group of bottles. Given this design, instructions about the interaction were recorded with a neutral voice. Each instruction was designed so that its execution time was about two minutes. The estimated time of each experiment per participant was 35 min. This time included the indications given to the participants about how the experiment would be conducted, the reading and signing of the informed consent document, the instrumentation of the participants, the interaction with the detergent bottles, and the application of a questionnaire about the new designs of the detergent bottles. The results of the questionnaire were contrasted with the emotional response of the participants.
Once the experiments were over, the acquired data were interpreted to determine which emotions had the greatest intensity during the interaction with the detergent bottles; the interpretations were reported as the results of the UX analysis.
Setup of the Testing Booth
As mentioned above, the testing booth was adapted to emulate a laundry room; this means that the environmental conditions such as temperature, illumination, and relative humidity had to be adjusted. The temperature inside the testing booth was between 24 and 26 °C; this margin was acceptable according to the Official Mexican Norm [53]. Relative humidity was adjusted to 50%. Finally, light intensity was adjusted to 93 lux, while light tonality was orange. These parameters were selected according to the typical laundry room in a Mexican context.
Environment-User-Product Interaction Process
Prior to starting each experiment, instructions about the experiment were provided to the participants; this was followed by the reading and signing of the informed consent document. Once the document was signed, participants were fitted with the Shimmer™ sensor, and it was verified that the sensor did not cause any discomfort when performing the different activities. One of the cameras was used to record participants' facial expressions. Meanwhile, another camera was used to verify that the participants were doing the activities and to check that there were no problems during the experiment. At the end of the experiment, participants were taken to another room where they could not have contact with those who had not yet participated; this prevented the spread of information that might bias the participants' interactions.
Data Post-Processing
When the experiments were over, post-processing of the acquired data was needed to filter out non-relevant data and to synchronize the recordings of the interaction with the physiological signals. The first step was to clean up and remove data in which participants' facial expressions were not captured because, during the activities, they accidentally covered their face, they moved out of the camera's field of view, or the angle of the face did not allow all the facial expressions to be captured, which could lead to misclassification of the emotions reflected in the face. Once the data were cleaned, a synchronization process was conducted to correlate facial expressions with the signals acquired by the Shimmer™ sensor. The synchronized data were processed by the iMotions™ software to determine the emotions presented by the participants during the experiment. The data related to the emotions were then synchronized with the recordings of the interaction between the participants and the detergent bottles. This synchronization allowed key moments during the interaction to be identified. Finally, the environmental variables were included, although these variables were constant most of the time.
Data Interpretation and Analysis
Once the data were processed and synchronized, analysis and interpretation steps were required to generate a UX report. For this experiment, the following emotions were considered: joy, anger, surprise, and disgust. These emotions were selected due to the impact they have on customers' behavior when buying a new product. The results presented in Tables 2-4 indicate the most prominent emotion during the interaction based on the analysis of facial expressions. Those results were contrasted with the self-reported answers provided by the participants at the end of each test.
The emotional response results of the participants who interacted with the bottles of Group A are presented in Table 2. As can be observed, confusion can arise when emotions such as anger and joy appear at the same time. However, a deeper analysis suggests that the emotion recognition process may have mistaken concentration for anger, since the two share many facial gestures. It can be observed that bottle 1 had the most positive impact in terms of utility, while bottle 2 had the greatest impact regarding aesthetic aspects. The emotional response results of the participants who interacted with the bottles of Group B are presented in Table 3. Similar to the case of Group A, the emotion recognition system tended to confuse concentration with anger while participants interacted with the bottles. In this group, bottle 2 had the most positive impact regarding both aesthetics and practical use. Only for first impressions did bottle 1 have a better response. The emotional response results of the participants who interacted with the bottles of Group C are presented in Table 4. In this case, bottle 1 had the highest acceptance by the participants due to the joy presented when manipulating that bottle. Meanwhile, the disgust presented for bottle 2 could have been caused by its usability.
Discussion
KNX is an open communication standard; this means that different types of applications can be developed that involve domotic systems. This provides the possibility of integrating KNX-based systems with systems not strictly related to domotic applications. This served as a starting point for the ED research line, focusing on the use of smart systems beyond the original vision that society has about smart homes. As can be seen in the previous work [10,11,15] and other works of the same nature [12][13][14], home automation allows the modulation of environmental variables within an inhabitable space to generate the ideal conditions in which the inhabitants have a sense of comfort and well-being that, as a result, decreases their stress levels.
On the other hand, the generation of these conditions is related to the topic of UX. This opens the possibility of using KNX technology to recreate environments in which the user has greater immersion when interacting with a product or service. Although the original purpose of the testing booth presented in Section 3.1 was to generate the ideal conditions for an individual based on the emotional response, the possibility of implementing it in experiments related to UX research was explored. Meanwhile, the emotion recognition tools used in this research allow the emotional behavior of the user to be monitored during the interaction with the products and services being evaluated, obtaining a more genuine response about the user's expectations of the product or service; this leads to a more objective UX evaluation. With the framework presented in Section 4, the data obtained from the environmental sensors within the testing booth can be synchronized with the biometric data of the users obtained during the interaction, facilitating the identification of key moments that influence the acceptance of a product or service through the emotional response.
The experiment presented in Section 5 served to validate that the framework of the testing booth can be used for UX-related experimentation. This was verified because the testing booth could be adapted, through the control of KNX actuators, to generate a scenario according to the context of the product, in this case a laundry room for the evaluation of the detergent bottle designs. This framework could also be replicated and adapted to other domotic technologies, although that could increase its complexity. On the other hand, the analysis of facial expressions to determine the user's emotional response, in conjunction with the recordings of the interaction, was carried out with the commercial software iMotions™. This software was used to avoid uncertainty when analyzing the user's emotional response, since its models have been validated. As future work, we plan to develop our own facial analysis model as well as use tools such as OpenPose [61] to monitor the user's body language. By obtaining the emotional response of the user and carrying out the corresponding data synchronization, it can be interpreted which factor of the product influenced the user's emotional response, for better or worse. All this information was collected to generate a report about the emotional behavior of the users.
This was a first approach to using the tools from the ED research line in experiments related to UX. The use of the testing booth can be part of the design of a methodology for objective UX evaluation. However, the limitations of the emotion recognition tools must be considered, since they are mainly based on the analysis of facial expressions. This means that if the activities to be carried out include those in which the face is totally or partially obstructed, the emotional response during the interaction with the product under evaluation cannot be identified. Although wearables are also used to obtain GSR and heart rate, these signals serve more as a complement to the analysis of facial expressions; alone, they cannot provide a prediction of the user's emotional response. Finally, the ethical aspect must also be considered when collecting data related to the user's emotional response, since these are sensitive data that could be misused.
Figure 1. Diagram of the general solution.
Figure 2. General process for facial expression analysis for emotion recognition based on computer vision systems [42].
• Universal presence detector Jung 3361WW KNX: Sensor with a 360-degree detection angle divided into three 120-degree zones to measure light intensity (lux) inside the testing booth.
• 3902 REGHE-KNX 2-channel Universal Dimmer Actuator: This actuator allows changing the light intensity using dimming. The lamps that can be manipulated are incandescent lamps, 230 V halogen lamps, inductive transformers, inductive transformers with low-voltage LED, and dimmable compact fluorescent lamps.
• BX-DM01-Blumotix KNX: This actuator works as a dimmer for an RGB LED strip. It is a four-channel dimmer actuator that can be configured to work with LED strips of 12 to 24 V; each channel has an output of 4 A.
• ZN1CL-IRSC-Zenio KNX: This actuator is an infrared control module that allows us to control the air conditioner remotely by means of a series of previously programmed commands, without the need to use a remote control. An important point is that the transmitter must be in the line of sight of the infrared receiver of the air conditioner. It has a voltage range of 21 to 31 VDC with a maximum consumption of 10 mA.
• 320 mA Power Supply-JUNG KNX: It supplies 320 mA and controls the system power for the KNX installation. The devices can be connected to it through the BUS line.
• Communication Module IP: The IP communication module allows access to the system via IP from any PC loaded with ETS3 or higher or with visualization software. It works in "Tunnelling" mode and offers up to 4 simultaneous KNXnet/IP connections.
• Header length: It indicates the start of the telegram, and it never varies.
• Protocol version: It indicates the version of the KNXnet/IP protocol that is being applied.
• KNXnet/IP service type identifier: It indicates the type of action that will be performed in the KNX installation.
• Total length: It indicates the number of bytes that the telegram contains.
• KNXnet/IP body: It contains all the necessary commands to do the requested action.
Figure 5. KNX telegram transmission and reception to turn on a light.
Figure 6. GUI to monitor the inside of the testing booth.
Figure 7. Framework for the communication system of the testing booth.
Table 1. Participants' distribution in each detergent bottle group.
Table 2. Results from Group A.
Table 3. Results from Group B.
Table 4. Results from Group C.
"Computer Science",
"Engineering"
] |
A Statistical Examination of Distinct Characteristics Influencing the Performance of Vector-Borne Epidemiological Agent-Based Simulation Models
The spread of infectious diseases is a complex system in which pathogens, humans, the environment, and sometimes vectors interact. Mathematical and simulation modelling is a suitable approach to investigate the dynamics of such complex systems. The 2019 novel coronavirus (COVID-19) pandemic reinforced the importance of agent-based simulation models to quickly and accurately provide information about the disease spread that would be otherwise hard or risky to obtain, and how this information can be used to support infectious disease control decisions. Due to the trade-offs between complexity, time, and accuracy, many assumptions are frequently made in epidemiological models. With respect to vector-borne diseases, these assumptions lead to epidemiological models that are usually bounded to single-strain and single-vector scenarios, where human behavior is modeled in a simplistic manner or ignored, and where data quality is usually not evaluated. In order to leverage these models from theoretical tools to decision-making support tools, it is important to understand how information quality, human behavior, multi-vector, and multi-strain affect the results. For this, an agent-based simulation model with different parameter values and different scenarios was considered. Its results were compared with the results of a traditional compartmental model with respect to three outputs: total number of infected individuals, duration of the epidemic, and number of epidemic waves. Paired t-test showed that, in most cases, data quality, human behavior, multi-vector, and multi-strain were characteristics that lead to statistically different results, while the computational costs to consider them were not high. Therefore, these characteristics should be investigated in more detail and be accounted for in epidemiological models in order to obtain more reliable results that can assist the decision-making process during epidemics.
Introduction
Infectious disease outbreaks are among the largest and oldest challenges faced by humanity. Although the theme is old, characteristics of the current globalized world, such as interconnectivity and frequent movement of people, allow infectious diseases to spread further and more quickly nowadays. Moreover, due to cross-species transmission, there is also an increased risk for novel diseases to emerge.
Recently, the 2019 novel coronavirus (COVID-19) quickly spread around the globe, leading countries to activate emergency plans, travel restrictions, and quarantine [1]. Apart from the large number of cases and deaths, the outbreak led to anxiety, strain on health systems, and a global economic slowdown as restrictions were imposed [2]. As highlighted by the World Health Organization [3], the year 2020 was the scenario we all had feared for decades, "a virus that spread quickly around the world". This scenario shows the importance of being able to quickly and accurately provide information about disease spread to support infectious disease control decisions.

The overview, design concepts, and details (ODD) protocol proposed by Grimm et al. [26] was used in this study to provide researchers with a rigorous structure and information that would allow them to systematically replicate the experiments in different contexts. The protocol is shown in Table 1.

Table 1. Overview, design concepts, and details protocol for the study.
Overview Purpose
The model developed in this study, which represents a simplified abstraction of reality, was designed to investigate and highlight the importance of three different factors, namely, data quality, human behavior, and multi-strain, multi-vector, when developing epidemiological models. Rather than developing a detailed representation of an epidemic to predict the outcomes of future epidemics or to understand past epidemics, the goal was to call the attention of the academic community to the importance of investigating the aforementioned factors more in depth when developing epidemiological models.
State variables and scales
Agents: (i) humans, (ii) mosquitoes, and (iii) the environment. In this simplified abstraction, individuals and mosquitoes were randomly distributed within the environment. The epidemiological parameters, such as latent rate and recovery rate, were considered independent of age, gender, time, or any other parameters. Detailed information about the parameters used in each model is provided in Section 2.1.
Process overview and scheduling
In vector-borne diseases, there are generally three types of agents: (i) the pathogen, which may be a virus or bacteria; (ii) the vector, in this work a mosquito that can be either Aedes aegypti or Aedes albopictus; and, (iii) the final host, a human in this case.
The model was built based on the traditional susceptible, exposed, infectious, and recovered (SEIR) and susceptible, exposed, and infectious (SEI) compartmental models. The SEIR model was used to represent humans, while the SEI model was used to represent the vector. The baseline conceptual model represented by the SEIR-SEI compartmental model is shown in Figure 1, and Table 2 provides the definition of the symbols presented in Figure 1.
The life cycle of the pathogen can be described in four stages: (1) The pathogen is transmitted from an infectious mosquito (Mi) to a susceptible host (Hs) when the mosquito feeds on human blood.
(2) The pathogen infects the exposed host (He), who still has no ability to transmit the disease to another mosquito. After the latent period, the pathogen reaches sufficiently high densities in the blood of the infectious host (Hi), who is now able to infect another susceptible mosquito (Ms).
(3) Whenever a susceptible mosquito feeds on an infectious host, the susceptible mosquito inoculates the pathogen and becomes an exposed mosquito (Me). Similar to the host, the mosquito is not able to immediately transmit the disease to other susceptible hosts.
(4) After the latent period, the pathogen develops in the mosquito to the point that the pathogen becomes present in the salivary glands of the mosquito, which becomes an infectious mosquito (Mi) and can now transmit the disease by biting a susceptible host. After the recovery period, the host is considered to be recovered (Hr) and immune to the pathogen (this is not true for all mosquito-borne diseases but it is true for some of them, such as dengue with respect to the same virus strain and chikungunya). An infectious mosquito never recovers from the disease and will stay infectious until its death. Three different models were developed in this work to meet the proposed goal. Each model works slightly differently, and they are described in detail in the following sections. The transition between each state is based on the epidemiological parameters as shown in Figure 1 and described in Tables 2 and 3. The data were collected at the end of each day on the basis of an event.
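For orientation, a deterministic sketch of the SEIR-SEI structure described above is given below as a set of ODEs integrated with SciPy. The parameter names and values are placeholders (not the notation or values of Tables 2-7), and this continuous approximation only illustrates the compartmental skeleton; it is not the stochastic agent-based model actually used in this study.

```python
import numpy as np
from scipy.integrate import odeint

def seir_sei(y, t, beta_mh, beta_hm, sigma_h, gamma_h, sigma_m, mu_m):
    """SEIR (humans) coupled with SEI (mosquitoes); all rates are per day."""
    Hs, He, Hi, Hr, Ms, Me, Mi = y
    Nh, Nm = Hs + He + Hi + Hr, Ms + Me + Mi
    dHs = -beta_mh * Hs * Mi / Nh                          # susceptible humans bitten by infectious mosquitoes
    dHe = beta_mh * Hs * Mi / Nh - sigma_h * He            # exposed humans leaving the latent period
    dHi = sigma_h * He - gamma_h * Hi                      # infectious humans recovering
    dHr = gamma_h * Hi
    dMs = mu_m * Nm - beta_hm * Ms * Hi / Nh - mu_m * Ms   # mosquito births balance deaths
    dMe = beta_hm * Ms * Hi / Nh - (sigma_m + mu_m) * Me
    dMi = sigma_m * Me - mu_m * Mi                         # infectious mosquitoes never recover
    return [dHs, dHe, dHi, dHr, dMs, dMe, dMi]

# Placeholder parameters and initial conditions, chosen only for illustration.
params = (0.3, 0.3, 1 / 5, 1 / 7, 1 / 10, 1 / 14)
y0 = [999, 0, 1, 0, 4990, 0, 10]
t = np.linspace(0, 730, 731)                               # 2 years, daily resolution
solution = odeint(seir_sei, y0, t, args=params)
```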
Design concepts Design concepts
Stochasticity was considered in all models through the infectious disease parameters used as input. These data are provided in Tables 4-7. Adaptation was considered in Model C when humans changed their behavior in response to the total number of infected individuals, either instantaneously or after a specific amount of time. This was considered at an individual and a population level. In summary, human behavior was considered to change in four different situations, as described in Section 2.1.3.
Details Initialization
Humans and mosquitoes were uniformly randomly distributed in the continuous space. The human and mosquito population size and the initial number of infectious humans and mosquitoes are provided in Tables 4-7.
Input
The input data used in the model were defined within the model and they are provided in Tables 4-7.
Sub-models
This consists of the "skeleton" of the model, as well as its description, which are provided in Figures 1-4 and Section 2.
Experiment Design
The input data used in this study was taken from the World Health Organization [27,28], Araújo [29], and Yakob and Clements [30].
To answer the research questions discussed in the Introduction section, we considered 4 different models: (1) Model A or baseline (single-strain, single-vector dengue spread model); (2) Model B, which is the baseline model with different parameter values to investigate the impact of data quality; (3) Model C, coupling human behavior and dengue spread model; and (4) Model D, a multi-strain, multi-vector dengue spread model. The models were developed from modifications of the baseline model (Model A) in order to answer the research questions discussed in Section 1. The required modifications of each model are discussed in their respective sections.
The parameters used in each one of these models are presented in Table 3. Model A is the baseline model and contains the parameters as described in Section 3. Model B is also the baseline model with different parameter values to assess the impact of data quality on the results of the epidemiological model. In other words, there is no logic modification between Model A and Model B. Model C is the model where the impacts of human behavior are investigated. The parameters related to human behavior, such as population cautious factor and population time to switch behavior, are included. Finally, Model D is the model where the impacts of multi-strain and multi-vector are investigated. Parameters such as the daily human latent rate for DENV2 and the proportion of wild and Wolbachia-carrier mosquitoes are included. A more detailed discussion about each model and the number of parameters added in each model is presented below. Each model was run for 2 years (730 days), which was long enough for the epidemic to die out in all iterations and replications of the experiment. A total of 50 replications were performed and 3 output responses of interest were considered: (1) total number of infected individuals, (2) duration of the epidemic in days, and (3) number of epidemic waves. Three model measures were also collected: (1) runtime, (2) number of agents/entities, and (3) number of states.
A paired t-test with α = 0.05 was applied to the results of the computational models to investigate whether the results of Model A (baseline low-level versus baseline high-level) were statistically significantly different or not for each one of the three output responses and runtime. Multi-way analysis of variance (ANOVA) with an α-level of 0.05 was used to investigate (i) in Model B, the impact of changing the values of the different inputs per baseline level (low or B1 and high or B2) on the three responses and runtime; (ii) in Model C, the impact of considering human behavior on the vector-borne disease model; and (iii) in Model D, the impact of considering multi-strain and multi-vector on the vector-borne disease model. Next, the Tukey multiple comparison method was used to identify which pairs of treatments (or input values) were significantly different among the inputs. The tests were performed using the software JMP.
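The tests themselves were run in JMP; purely as an illustration of the analysis pipeline, the sketch below shows equivalent tests in Python with SciPy and statsmodels, assuming the replication results have been exported to a CSV with hypothetical column names (scenario, input_level, total_infected, and so on).

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data: one row per replication, with the output responses per scenario.
results = pd.read_csv("model_results.csv")  # columns: scenario, input_level, total_infected, ...

# Paired t-test (alpha = 0.05): high-level baseline (B2) versus low-level baseline (B1).
b1 = results.loc[results.scenario == "B1", "total_infected"].to_numpy()
b2 = results.loc[results.scenario == "B2", "total_infected"].to_numpy()
t_stat, p_value = stats.ttest_rel(b2, b1)

# ANOVA on the effect of the input level (one factor shown for brevity;
# additional factors can be added to the formula for a multi-way analysis).
model = smf.ols("total_infected ~ C(input_level)", data=results).fit()
anova_table = sm.stats.anova_lm(model, typ=2)

# Tukey multiple comparisons to see which pairs of input levels differ.
tukey = pairwise_tukeyhsd(results["total_infected"], results["input_level"], alpha=0.05)

print(t_stat, p_value)
print(anova_table)
print(tukey.summary())
```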
The baseline model was described in the process overview and scheduling row of Table 1. The parameter values used in the model are given in Table 4. The assumptions adopted in this study are based on the work of Ross and Thomson [31], Kermack and McKendrick [32], Dumont et al. [33], and Yakob and Clements [30]. Such assumptions lead to a simplified model compared to reality. However, such a limitation should not affect the objective of this work, since this study proposes to explore the importance of data quality, human behavior, multi-strain, and multi-vector in the results of disease spread models. Rather than foreseeing possible future epidemics or seeking to understand past epidemics, the goal here is to draw the attention of researchers and experts in the field to the importance of these characteristics and to serve as a starting point for the elaboration of more detailed and more realistic models.
Model B (Data Quality)
In order to assess the impact of data quality on the model's results, we decided to perform a sensitivity analysis on each of the factors of the baseline model. In the sensitivity analysis, the factors were varied one at a time. By varying the parameters one at a time, we were able to gain insight into the impact of the parameters on the model results, but it was not possible to assess the impact of the interaction between the parameters on the simulation responses. Table 5 presents the parameter values used to assess the impact of the information quality. For all scenarios, the parameters were varied one at a time. Therefore, the experiment in this model had a total of 65 scenarios: (1) 3 different levels (C1, C2, and C3) for each of the 10 parameters of Table 5 for the low- and high-level baselines (B1 and B2), except for level C3 of "initial number of infectious mosquitoes" for the low-level baseline because it is not feasible. This gives a total of 3 × 10 × 2 − 1 = 59 scenarios. (2) Low-level (B1) of parameters "mosquito population size", "initial number of infectious mosquitoes", and "human population size" for the high-level baseline (B2), which gives 3 scenarios. Finally, (3) high-level (B2) of parameters "mosquito population size", "initial number of infectious mosquitoes", and "human population size" for the low-level baseline (B1), which gives another 3 scenarios. The justification for varying the value of each parameter is given below.
• Mosquito population size: variation in mosquito population size may represent either the adoption of control strategies (e.g., insecticide use), the elimination of mosquito breeding sites (e.g., cleaning pots with standing water), climatic variation (e.g., increased rainfall and temperature that favor the reproduction of mosquitoes), or errors in estimating the mosquito population through techniques such as mosquito traps.
• Initial number of infectious mosquitoes: this parameter was varied to represent regions in which the disease is imported by travelers who bring infectious mosquitoes to the area and regions where the disease is endemic.
• Mosquito daily latent rate, mosquito daily mortality rate, human daily latent rate, and human daily recovery rate: considered low and high rates, on the basis of the values found in the literature, as well as low and high variation. The variations of these rates represent the existence of several types of virus that can reproduce in mosquitoes and humans more slowly or quickly and, consequently, also affect the human recovery rate; the genetic and immune variation of humans and mosquitoes; the use of medical treatment that affects the recovery rate of individuals; and climatic variations and use of control measures, such as the use of screens in windows and insecticides, which may alter the mortality rate of mosquitoes.
• Daily infectivity rate from mosquito to human: the variations of this rate are due to reasons similar to the mosquito daily latent rate, such as the existence of different types of virus and the genetic and immune variation of mosquitoes.
• Human population size: the size was varied to represent different neighborhoods or sizes of cities.
• Initial number of infectious humans: this parameter was varied for reasons similar to mosquito population size and initial number of infectious mosquitoes. The scenarios in Table 5 with the initial number of infectious humans equal to 0 are equivalent to an epidemic-free population where a new epidemic is normally carried by a mosquito brought from an epidemic area. Besides representing an epidemic-free society and a society in which the disease is endemic, the variation may also represent large events, such as big sporting events, music events, or refugee entry into a region, which can lead to several cases imported at a single time.
• Daily infectivity rate from human to mosquito: the variations of this rate were considered for reasons similar to the mosquito daily latent rate and the daily infectivity rate from mosquito to human, such as the existence of different types of virus, genetic and immune variation of humans, and use of medical treatment.
Model C (Coupled Human Behavior and Dengue Spread Model)
Despite the impacts of human behavior in the course of an outbreak, many disease spread models still ignore the human behavior factor. Lack of data on human behavior during outbreaks and the difficulty in quantifying some human behaviors may be one of the primary reasons for not including human behavior in disease spread simulation models. Moreover, to incorporate behavior in simulation models, it is necessary to have a more detailed model that makes use of agents, which requires more processing power, especially when the agent population is large.
Human behavior is considered in this study in 4 different situations: (1) situation 1, where the whole population adopts the same cautious behavior after the epidemic has reached a specific threshold; (2) situation 2, where each individual adopts his/her own cautious behavior after the epidemic has reached a specific threshold; (3) situation 3, where the whole population adopts the same cautious behavior after the epidemic has crossed the specific threshold for a specific amount of time; and (4) situation 4, where each individual adopts his/her own cautious behavior after the epidemic has crossed the specific threshold for a specific amount of time.
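A minimal sketch of this behavior-switching rule is given below, assuming situation 3 (population-level switching after the threshold has been exceeded for a given amount of time) and placeholder parameter values rather than those of Table 6; in the actual agent-based model the same logic is applied at the individual or population level as defined for each situation.

```python
def update_cautious_state(is_cautious, days_above_threshold, infected_fraction,
                          threshold=0.10, days_to_switch=7):
    """Situation 3: the whole population becomes cautious once the fraction of
    infected individuals has stayed above `threshold` for `days_to_switch` days.
    The threshold and delay here are placeholders, not the values of Table 6."""
    if infected_fraction >= threshold:
        days_above_threshold += 1
    else:
        days_above_threshold = 0
    if days_above_threshold >= days_to_switch:
        is_cautious = True   # e.g., the cautious factor then reduces the contact/biting rate
    return is_cautious, days_above_threshold
```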
Compared to the baseline, the population behavior model and the individual behavior models (situations 1 and 2 of Model C) have two extra varying parameters each, namely, the percentage of infected individuals to trigger cautious behavior and the population/individual cautious factor. Compared to both previous situations, the inclusion of the time to change population or individual behavior adds 1 extra parameter in each case. A total of 3 states had to be added to represent the change in human behavior. Table 6 presents the parameter values used to investigate the coupled human behavior.

Model D (Multi-Strain, Multi-Vector Dengue Spread Model)

Usually, dengue is modeled as a single-strain, single-vector disease. This can be represented by the traditional SEIR-SEI compartmental model discussed in Model A (baseline). This was realistic in the past, when only one virus strain was encountered in the endemic regions, and when multiple strains existed, they were not simultaneously encountered. However, nowadays many countries, including Brazil, have 2 or more dengue virus strains at the same time. Moreover, countries are investigating new vector control methods, such as Wolbachia-carrier mosquitoes and genetically modified mosquitoes, as possible alternatives to contain the spread of the disease. Wolbachia-carrier mosquitoes can become infected by feeding on infectious humans, but they cannot further transmit the disease to susceptible humans [34].
In this study, we consider the possibility that 2 virus strains coexist and the use of Wolbachia-carrier mosquitoes as an alternative method for disease prevention. Currently, Wolbachia-carrier mosquitoes are Aedes aegypti mosquitoes that have been altered in a laboratory and are introduced into the environment on the basis of health policy decisions made by government agencies. This altered mosquito has already been approved as a safe Aedes aegypti control strategy in different countries after empirical studies were successfully conducted [35].
To represent this multi-strain, multi-vector context, some changes had to be made in the baseline model. For the human population, 4 extra states were added and 1 of the existing states was modified. Six states were added for the mosquito population. Table 7 presents the parameter values used to investigate the multi-strain, multi-vector model.
Baseline
As discussed in Section 2.1, paired t-test with α = 0.05 based on the high-level baseline (B2)-low-level baseline (B1) was applied to investigate whether the results of Model A were statistically significantly different or not for each one of the three output responses and runtime. The test results comparing the low- and high-levels of the baseline model for each of the output responses and for runtime are presented in Table 8, followed by a discussion of the results. As is expected, the total number of infected individuals was statistically different due to the larger human population in the high-level baseline. The results also show that in the larger population, the epidemics lasted longer, but the number of epidemic waves was not statistically different than in the smaller population, which was the first interesting observation from this study. As also expected, the runtime when modelling a larger population was statistically greater due to the increase in the number of agents in the agent-based model. Figures 5 and 6 show the boxplot of the output responses. One can see that the variability was greater for the total number of infected individuals on the high-level of the baseline, while the opposite was observed for the number of epidemic waves. For the response duration of the epidemic, the variability did not seem to change considerably in terms of the human and mosquito population sizes. Figure 7 shows the evolution of the epidemic on a large population. In Figure 7, it is possible to observe the total number of humans and mosquitoes in each epidemiological state (susceptible, exposed, infectious, and recovered).
Impact of Data Quality
Multi-way analysis of variance (ANOVA) with an α-level of 0.05 was used to investigate the impact of changing the values of the different inputs at each baseline level (low, B1, and high, B2) on the three responses of interest and on runtime. First, ANOVA was used to test whether there was a relationship between the output and the inputs, that is, whether the model was statistically significant. Next, ANOVA was used to identify which inputs were statistically significant for the model. Finally, for the inputs found to be statistically significant, the Tukey multiple comparison method was used to identify which pairs of treatments (input values) differed significantly. The ANOVA results can be found in Table 9 and the Tukey multiple comparison results in Table 10.
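As a hedged illustration of this workflow, the sketch below runs a multi-way ANOVA followed by a Tukey multiple comparison with statsmodels; the file name, column names, and formula terms are assumptions for the example, not the study's actual variable names.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per replication: the input levels used and the observed response.
df = pd.read_csv("doe_results.csv")  # hypothetical file name

# Multi-way ANOVA: is at least one input related to the response?
model = ols("infected ~ C(mosquito_pop) + C(latent_rate) + C(mortality_rate)",
            data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey multiple comparison for one significant input:
# which pairs of treatment levels differ at alpha = 0.05?
tukey = pairwise_tukeyhsd(endog=df["infected"],
                          groups=df["mosquito_pop"],
                          alpha=0.05)
print(tukey.summary())
```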
As shown in Table 9, in all the experiments performed there was statistically significant evidence of a relationship between at least one of the inputs and each of the outputs at the 0.05 α-level. In general, the inputs were not statistically significant for the runtime per replication when a low population (low number of agents) was considered; human population size and the initial number of infectious humans were the only inputs shown to be statistically significant in this case. In the small population scenario, a few factors also appeared not to be significant for the number of epidemic waves, such as the mosquito population size, the daily infectivity rate from mosquito to human, and the daily infectivity rate from human to mosquito. Interestingly, mosquito population size was also not significant for the total number of infected individuals and the duration of the epidemic in days at the small population baseline level. This invites further investigation of the relative effectiveness of control measures, such as controlling the mosquito population by eliminating mosquito habitats (bottles, tires, fountains) versus reducing personal contact with mosquitoes through window and door screens, mosquito repellents, long-sleeved clothes, etc.
A detailed discussion of the results for each parameter is presented below.
1. Mosquito population size: The mosquito population size appeared to have a higher impact on the epidemic responses for the larger human population size. While in the low-level baseline the mosquito population size was not a significant input for any of the responses considered, in the high-level baseline 9 out of 10 comparisons were statistically different with respect to the total number of infected individuals, and at least 4 comparisons were also statistically different for the duration of the epidemic in days and the number of epidemic waves. The output response "total number of infected individuals" appeared to be the most sensitive to the size of the mosquito population, followed by the runtime per replication, the number of epidemic waves, and lastly the duration of the epidemic in days.
The mosquito population size had an opposite impact on the pair of output responses "total number of infected individuals" and "duration of the epidemic"-an increase in the mosquito population size increased the total number of infected individuals, but it reduced the duration of the epidemic in days. A similar characteristic was observed for the pair of output responses "total number of infected individuals" and "number of epidemic waves". This was possibly due to the herd immunity effect, where the majority of the population has more quickly become infected and immune to the disease, shortening the duration of the epidemic and the number of epidemic waves.
With respect to runtime, increasing the mosquito population size increased the runtime. An average increase of 10.87 s per replication from the baseline was observed when the human population size was large.
2. Initial number of infectious mosquitoes: The initial number of infectious mosquitoes appeared to have a great impact on the results of epidemiological models, especially for the high-level baseline. The input appeared to be less significant for runtime: in the low-level baseline it was not considered significant, and in the high-level baseline only a few pairs (5 out of 10) were found to be significant. The input also appeared to be less significant for the total number of infected individuals in small populations, where only two pairs were found to be significant. For larger populations, with the exception of one pair for the output response "duration of the epidemic in days" and one pair for "number of epidemic waves", both in the high-level baseline, all pairs were statistically different for the output responses "total number of infected individuals", "number of epidemic waves", and "duration of the epidemic in days". The impact of this parameter was similar to that of parameter #1: an increase in the initial number of infectious mosquitoes led to an increase in the total number of infected individuals, but to a reduction in the duration of the epidemic and in the number of epidemic waves.
An average increase of 7.50 s per replication from the baseline was observed when the human population size was large. This increase in runtime is probably explained by the increase in the number of state changes in the simulation model-with the increase in the initial number of infectious mosquitoes, there was an increase in the total number of infected individuals and, hence, the humans went through more state changes (from susceptible to exposed to infectious to recovered).
3. Mosquito daily latent rate: This parameter appeared to impact all three output responses, with the output response "total number of infected individuals" being the most sensitive to variations in the mosquito daily latent rate and the "number of epidemic waves" the least sensitive. The output response "duration of the epidemic" did not appear to be as sensitive in the low-level baseline as it was in the high level. The runtime decreased as the mosquito daily latent rate increased. The reduction in runtime can be explained following similar logic to parameter #2: there was a lower number of state changes in the simulation model and, consequently, a reduction in runtime.
It is worth pointing out that, contrary to parameters #1 and #2, a reduction in this parameter led to a decrease in all three output responses. This indicates the value of directing research toward reducing the mosquito daily latent rate (i.e., increasing the latent period): contrary to what happened when reducing the mosquito population size, reducing the mosquito daily latent rate had a positive effect, that is, it reduced all three output responses being investigated.
4. Mosquito daily mortality rate: This parameter impacted the epidemiological model results similarly to parameter #3 above. First, the mosquito daily mortality rate appeared to impact all three output responses, with the output response "total number of infected individuals" being the most sensitive to variations in the parameter and the "number of epidemic waves" the least sensitive. Similar to parameter #3, the output response "duration of the epidemic" did not appear to be as sensitive in the low-level baseline as it was in the high level. The runtime also decreased with the increase in the mosquito daily mortality rate. Finally, an increase in the mosquito mortality rate led to a reduction in all three output responses. This may indicate that, more important than controlling the birth of new mosquitoes, is ensuring that the mosquito lifetime is shortened, which has been the focus of some new strategies such as the release of genetically modified mosquitoes.
The results for this parameter also agree with what is known about mosquito-borne diseases. For instance, it is known that temperature, rainfall, and mosquito density in the environment are factors that have a considerable impact on the lifetime of mosquitoes; therefore, epidemics of diseases such as dengue are stronger in summer months and weaker in winter months, when the mosquito lifetime is shorter. Given this finding, it is important that researchers, especially entomologists, conduct more empirical experiments to determine the mosquito mortality rate more accurately and to establish how this parameter varies in terms of climatic and population factors. The variability of the parameter also seems important, because tests that compared scenarios with less variability (e.g., B1-C2) resulted in larger statistical differences.
5. Daily infectivity rate from mosquito to human: This parameter impacted the epidemiological model results similarly to parameter #3. It is important to mention that, for the small population size, this parameter was not significant for the output response "number of epidemic waves", and there were only one or two pair comparisons where the parameter was shown to be significant for the output response "number of epidemic waves" for large population sizes and for the response "duration of the epidemic in days" in both small and large population sizes.
6. Human population size: Contrary to parameter #1, an increase in the human population size led to an increase in all three output responses, and vice versa. This was expected and, unfortunately, is not as useful in terms of control strategies. However, it indicates the need for research that investigates the impact of quarantine and isolation in controlling dengue. These strategies have been discussed at length in the investigation of airborne and direct-contact transmitted diseases, such as influenza, but they have not been extensively explored for mosquito-borne diseases.
The runtime was highly affected by the number of humans or agents in the model. Human population size affected runtime regardless of the population size, and there was only one pair comparison that was not statistically significant, which indicates that runtime was very sensitive to this parameter.
Another result was that larger populations prolonged the epidemic (since more individuals are transmitting the pathogen). However, it is important to investigate whether these results remain in epidemics in which the pathogen is transmitted more slowly or more rapidly. With epidemics that spread too quickly, the total number of infected individuals will most likely still increase with the increase in the human population size, but the duration may decrease because all individuals will quickly become infected, as discussed in parameter #1. On the other hand, with epidemics that spread slowly, the epidemic may end before it infects many individuals, which would lead to a decrease in the total number of infected individuals, as well as the duration of the epidemic.
7. Initial number of infectious humans: As expected, an increase in the initial number of infectious humans led to an increase in the total number of infected individuals for the large population size. However, it also led to a decrease in the duration of the epidemic and in the number of epidemic waves for the large population size. For the small population, an initial increase in the parameter led to an increase in the total number of infected individuals, the duration of the epidemic, and the number of epidemic waves; however, a further increase in the parameter led to a reduction in all three output responses. This was likely due to the disease spreading more quickly over the whole population, and it may also indicate an interaction among the parameters. It also highlights the importance of investigating the quality of parameter values, because the output responses do not monotonically increase or decrease as a function of the parameter. The output response "total number of infected individuals" appeared to be the most sensitive to this parameter.
8. Human daily latent rate: This parameter had an impact on the model results similar to that of parameter #3. However, it is important to highlight two main differences. First, this parameter was not found to be significant for the total number of infected individuals, while parameter #3 was. Second, whereas an increase in parameter #3 led to an increase in the total number of infected individuals and a reduction in the duration of the epidemic and the number of waves, an increase in this parameter led to a decrease in the total number of infected individuals but an increase in the duration of the epidemic and the number of waves. This can be intuitively explained: since the individuals become infectious faster, the total duration of the disease for one individual is shorter, and, therefore, there may not be enough time to infect many mosquitoes and, consequently, other humans. However, the epidemic may last longer and have more waves due to sporadic cases here and there. This parameter is another important factor for future control actions.
9. Human daily recovery rate: Similar to parameter #4, an increase in the human daily recovery rate led to a reduction in the total number of infected individuals and in the duration of the epidemic; in this case, however, the number of epidemic waves increased. According to the Tukey multiple comparison test, in a large population the human daily recovery rate resulted in a different number of infected individuals, a different duration of the epidemic, and a different number of epidemic waves in almost every test performed. For the small population size, the output responses appeared to be less sensitive to variation in the parameter, but the parameter was still found to be significant in many pair comparisons. Given the impact of this parameter on the epidemiological model results, health agencies must investigate the recovery rate of each disease to provide accurate information to researchers working on disease spread models. Likewise, because of the impact of the human recovery rate on the epidemic responses, the population must follow the treatment prescribed by doctors and health agents to maximize the recovery rate. During the recovery period, it is also important to follow the guidelines on adopting control measures, such as protecting against mosquito bites, in order to avoid infecting new mosquitoes.
10. Daily infectivity rate from human to mosquito: The results were similar to parameters #3 and #5 with respect to the total number of individuals infected. However, the duration of the epidemic and the number of epidemic waves were not as sensitive to this parameter, with only 3 comparisons out of 24 being statistically different.
Overall, initial number of infectious mosquitoes (#2), mosquito daily latent rate (#3), mosquito daily mortality rate (#4), human population size (#6), initial number of infectious humans (#7), and human daily recovery rate (#9) were the parameters that appeared to have a greater impact on the three output responses considered simultaneously. However, it is worth noting that initial number of infectious mosquitoes had an opposite impact on the output responses-an increase in the parameter increased the "total number of infected individuals" and decreased the "duration of the epidemic in days" and the "number of epidemic waves".
The mosquito population size (#1) for the small population size, the human daily latent rate (#8), and the daily infectivity rate from human to mosquito (#10) were the parameters that appeared to have less impact on the three output responses considered simultaneously. Figure 8 shows the results discussed above in a succinct way. It is possible to identify that parameters #1 and #10 were the ones that led to the least variation in the results when compared to other parameters in the low-level baseline, while parameter #8 led to the least variation in the high-level baseline. On the other hand, parameter #6 led to the highest variation at both levels, followed by parameters #2, #3, #4, #7, and #9.
Impact of Human Behavior
Multi-way analysis of variance (ANOVA) with an α-level of 0.05 was used to investigate the impact of considering human behavior on the vector-borne disease model. The human behavior was considered through different scenarios: (i) individual versus population level, (ii) with and without time to switch behavior, (iii) with different thresholds of the total number of infectious individuals to trigger cautious behavior, and (iv) different human cautious behaviors. The ANOVA results can be found in Table 11. A discussion of the results is presented below.
From the results shown in Table 11, it is observed that including human behavior in epidemiological models did not impact the runtime for small population sizes; even where a difference was observed, it was never greater than 20 s per replication, which is a reasonable increase when considering the trade-off between accuracy and computational needs.
For the parameter values used in the experiments of this work, none of the three output responses was found to be sensitive to the different coupled human behavior and dengue spread models investigated when the population was large. When the population was small, all the output responses were found to be sensitive to the coupled human behavior, with the total number of infected individuals appearing to be the most sensitive response, followed by the duration of the epidemic in days, and lastly by the number of epidemic waves. Although the number of results that were statistically different was not large, the existence of a few differences already shows the importance of human behavior in the results of epidemiological models. Moreover, this is just one model with specific parameters.
Scheidegger and Banerjee [36] also investigated the impacts of human behavior on the results of an epidemiological model. The authors tested the same scenarios mentioned in this work, but their model mimicked the spread of chikungunya and, hence, the parameter values were slightly different. They found that the total number of infected individuals and the duration of the epidemic were statistically different in every comparison of the coupled human behavior disease spread model against the baseline model with the larger population. The authors also found a few differences in the low-level baseline.
Tukey multiple comparison test was used to investigate whether the results from the population-based Model C were statistically different from the individual-based Model C. With a few exceptions (three scenarios for the total number of infected individuals, one scenario for the duration of the epidemic, and one scenario for the number of epidemic waves), the results of the population-based Model C were not statistically different from the results of the individual-based Model C. This indicates that in some cases and depending on the parameter values and the response of interest, human behavior may be accurately represented at the population level, instead of at the individual level.
The differences between the findings of this work and those of Scheidegger and Banerjee [36] may indicate an interaction between the disease parameter values and human behavior. This highlights the importance of considering the impacts of both data quality and human behavior on the results of epidemiological models: while including human behavior may improve the accuracy of the results, the improvement may be compromised if the disease data are not accurate. More investigation in this area is needed to better understand the impacts of human behavior on vector-borne disease dynamics.
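The behavior scenarios described above hinge on a rule that switches agents to cautious behavior once the number of infectious individuals crosses a threshold, possibly after a delay. The following minimal Python sketch illustrates one way such a rule could be encoded; the class, attribute names, and parameter values are illustrative assumptions, not the study's actual implementation.

```python
class Human:
    """Minimal agent with a threshold-triggered cautious behavior."""

    def __init__(self, base_bite_prob=0.3, cautious_factor=0.5,
                 threshold=50, switch_delay_days=2):
        self.base_bite_prob = base_bite_prob    # daily chance of being bitten
        self.cautious_factor = cautious_factor  # exposure reduction when cautious
        self.threshold = threshold              # infectious count that triggers caution
        self.switch_delay_days = switch_delay_days
        self.days_over_threshold = 0
        self.cautious = False

    def update_behavior(self, total_infectious):
        # Switch to cautious behavior only after the threshold has been
        # exceeded for the configured number of consecutive days.
        if total_infectious >= self.threshold:
            self.days_over_threshold += 1
        else:
            self.days_over_threshold = 0
        self.cautious = self.days_over_threshold >= self.switch_delay_days

    def bite_probability(self):
        return self.base_bite_prob * (self.cautious_factor if self.cautious else 1.0)
```

Setting switch_delay_days to 0 corresponds to the "without time to switch behavior" scenario, and applying the same rule to a population-wide exposure rate rather than per agent corresponds to the population-level variant.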
Impact of Multi-Vector and Multi-Strain
Multi-way analysis of variance (ANOVA) with an α-level of 0.05 was used to investigate the impact of considering multi-strain and multi-vector dynamics in the vector-borne disease model. Multi-strain and multi-vector were considered through different scenarios: (i) multi-strain model versus baseline model, (ii) multi-vector model versus baseline model, and (iii) multi-strain and multi-vector model versus baseline. The ANOVA results can be found in Table 12, and a discussion of the results is presented below. Against the baseline, the scenarios investigated (multi-strain only, multi-vector only, and multi-strain and multi-vector) yielded a statistically different total number of infected individuals, duration of the epidemic in days, number of epidemic waves, and runtime for both small and large populations.
From the results, one can observe that, in terms of runtime, the differences decrease as the population increases. For the large population, a significant difference was only observed between the high-level baseline and the multi-strain, multi-vector model; no difference was observed when considering multi-strain only or multi-vector only. This indicates that in large populations the increase in runtime caused by adding these characteristics to the model may not be as big as the increase in runtime caused by the larger number of agents, as discussed in Section 3.2. Moreover, in every case, the increase in runtime was not greater than 18 s, which is a reasonable cost considering the significant differences in the other three output responses.
As the ANOVA results showed, every scenario considering multi-strain and multi-vector together, or multi-strain or multi-vector individually, was statistically different from the baseline model at either the low or the high level. Including multi-strain, and multi-strain together with multi-vector, led to an increase in the total number of infected individuals and in the duration of the epidemic, but to a decrease in the number of epidemic waves. The increase in the total number of infected individuals and in the duration of the epidemic was expected, as different strains of the virus circulated simultaneously and people become immune to specific strains only.
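As a hedged illustration of the strain-specific immunity described above, the sketch below tracks which strains a host has recovered from, so that recovery protects only against the infecting strain; the strain labels and class structure are illustrative, not the study's actual implementation.

```python
class Host:
    """Host with per-strain immunity: recovery confers immunity only
    to the strain that caused the infection."""

    def __init__(self):
        self.immune_to = set()
        self.infected_with = None

    def expose(self, strain):
        # A new infection starts only if the host is susceptible to this
        # strain and is not currently infected with another one.
        if strain not in self.immune_to and self.infected_with is None:
            self.infected_with = strain
            return True
        return False

    def recover(self):
        if self.infected_with is not None:
            self.immune_to.add(self.infected_with)
            self.infected_with = None

# A host that has recovered from DENV-1 can still be infected by DENV-2.
h = Host()
h.expose("DENV-1"); h.recover()
print(h.expose("DENV-1"), h.expose("DENV-2"))  # False True
```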
As previously discussed, Wolbachia-carrier mosquitoes are laboratory-modified mosquitoes that can be infected by the dengue virus but cannot transmit the disease to humans. Contrary to what one would expect from such a disease control strategy, the introduction of Wolbachia-carrier mosquitoes led to a reduction in the duration of the epidemic and in the number of waves, but it increased the total number of infected individuals in comparison to the baseline. However, using the Tukey multiple comparison test, we verified that the multi-vector strategy significantly lowered the total number of infected individuals and the duration of the epidemic in scenarios with multiple strains, although there was no evidence that it reduced the number of epidemic waves in comparison to the multi-strain scenario. This may indicate that Wolbachia-carrier mosquitoes are not a good strategy for regions where only one strain of the dengue virus circulates, but may be a promising strategy for regions with the simultaneous circulation of multiple virus strains. We recognize that this is one model with specific parameters; therefore, more studies must be performed to check whether this recommendation is valid for different contexts.
Discussion
From the discussion of the above results, some general inferences can be made. First, it was possible to observe that the variation of the parameter values had a greater impact on the total number of infected individuals than on the duration of the epidemic or the number of epidemic waves. While 94 out of 140 comparisons led to statistically different results for the total number of infected individuals, 76 were statistically different for the duration of the epidemic, and 66 for the number of epidemic waves.
Parameters #1 and #8 had the least impact on the total number of infected individuals, parameters #1, #5, and #10 on the duration of the epidemic, and parameters #1 and #10 on the number of epidemic waves. On the other hand, parameter #6, followed by parameters such as #2 and #7, was the most impactful for the output responses.
This allows us to emphasize two findings: first, the importance of defining the response of interest in epidemiological models, and second, the importance of accurately estimating the parameters. While some parameters may lead to little to no change to one output response, that same parameter may cause large changes in another output response.
Moreover, as discussed in Section 3.2, in some cases a change in a parameter led to a decrease in the total number of infected individuals and an increase in the duration of the epidemic and/or in the number of epidemic waves. Thus, before implementing control measures, it is important to clearly define the priority for the population and the health system: reducing the total number of infected individuals, reducing the duration of the epidemic, or reducing the number of epidemic waves. In general, it is believed that the total number of infected individuals is more important than the duration of the epidemic. However, a long epidemic can generate greater rumors and fear among the population, and it can potentially lower the awareness of the population over time, which can reduce adherence to control measures and, consequently, increase the total number of infected individuals later. Depending on the control measures adopted, a longer epidemic may also have other long-term and unexpected consequences, such as economic losses and psychological impacts. The contrary effects that the same input parameter has on the output responses highlight the importance of adequately estimating disease parameters such as the disease latent rate and the infectivity rate, which have a positive effect on all three output responses discussed in this work. Although important, many epidemiological models use estimates for these parameters without relying on empirical studies or other scientific support. It is recognized that it is difficult to perform experiments to define these parameters, but we call for more multidisciplinary attention to these parameters and for greater investment in the area.
Figures 9 and 10 show the relationship between the output responses per parameter. According to Figure 9, it is possible to verify that, with the exception of parameters #6 and #7, there was no apparent correlation between the total number of infected individuals and the duration of the epidemic, or between the total number of infected individuals and the number of epidemic waves. On the other hand, according to Figure 10, except for parameter #9, there appeared to be a positive correlation between the duration of the epidemic and the number of epidemic waves. Figure 11 shows the relation between the output responses as well, but with the response total number of infected mosquitoes included. As can be seen, the total number of infected mosquitoes was highly correlated with the total number of infected individuals and slightly correlated with the duration of the epidemic. The runtime did not seem to be correlated with any of the output responses.
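A minimal sketch of how such cross-response correlations can be computed from the replication results is shown below; the column names and values are illustrative placeholders, not the data behind Figures 9-11.

```python
import pandas as pd

# One row per simulation replication; names and numbers are illustrative.
results = pd.DataFrame({
    "total_infected":      [812, 930, 1105, 845, 990, 1220],
    "duration_days":       [410, 385, 350, 402, 371, 340],
    "epidemic_waves":      [3, 3, 2, 3, 2, 2],
    "infected_mosquitoes": [1500, 1740, 2080, 1570, 1860, 2300],
    "runtime_s":           [41.2, 40.8, 42.5, 41.0, 41.9, 43.1],
})

# Pairwise Pearson correlations between the output responses,
# analogous to the relationships visualised in Figures 9-11.
print(results.corr(method="pearson").round(2))
```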
Figure 10. The relation between the number of waves and the duration of the epidemic per parameter.
Figure 11. The relation between the output responses.
Conclusions
The main conclusions that can be derived from this work are as follows: (i) Data quality is indeed an important factor and must be investigated in more detail by researchers and simulation specialists modelling disease spread. In fact, we suggest that a data quality impact analysis be included as a section of any rigorous epidemiological simulation study to acknowledge the uncertainties that might underlie the model responses.
(ii) Variations in the parameters were shown to have a greater impact on the total number of infected individuals than on the duration of the epidemic or the number of epidemic waves.
(iii) Variations in the parameters may lead to results that diverge from what is desired in an epidemic, i.e., a variation may reduce the total number of infected individuals while increasing the duration of the epidemic and the number of epidemic waves, or vice versa.
(iv) Some parameters were shown to be significant for the small population size, while others were significant only for the large population size. This reinforces the importance of investigating the accuracy of data in epidemiological studies and of considering the different contexts that exist, such as different population sizes, different geographies, different human behavior, how disease parameters change over time, etc.
(v) Similar to item (iv), the responses appeared not to increase or decrease monotonically as a function of some of the parameters. This also reflects the importance of investigating data accuracy in epidemiological studies, as a slight change in the value of a parameter may have opposite effects on the responses of interest.
(vi) Human behavior appears to be appropriately mimicked at either the individual-based level or the population-based level, which could save some computational resources.
(vii) Human behavior appears to present a strong interaction with the parameter values, which indicates that although in some cases it may not impact the results, it must be investigated to make the appropriate modelling decision.
(viii) Wolbachia-carrier mosquitoes, a control strategy recently under investigation, appear to be promising for regions with simultaneous circulation of multiple virus strains, but they may increase the total number of infected individuals in regions with a single virus strain.
When discussing mosquito-borne diseases, one of the first recommendations given by health agencies is to control mosquito population growth. As discussed here, although reducing the mosquito population size reduces the total number of infected individuals, it increases the duration of the epidemic and the number of epidemic waves. Therefore, recommendations that seem more important are to ensure that humans follow the proper treatment so that recovery from the disease is faster, to shorten the life span of the mosquitoes, and to search for strategies that would increase the latent period of the disease in mosquitoes. These three parameters, for instance, lead to positive changes in all three output responses discussed here. Following this suggestion, the introduction of genetically modified mosquitoes that have a shorter life cycle should be further investigated in epidemiological models. Contrary to what one would expect for mosquito-borne diseases, quarantine and isolation, which would temporarily reduce the human population size of the endemic region, appear to be useful, owing to their positive effects on all three output responses, and should also be further investigated.
We recognize that the model developed in this work is a large simplification of the real world. However, the focus of this work was not to develop a model for epidemic prediction. Instead, we wanted to illustrate the possible impacts of data quality, human behavior, multi-strain, and multi-vector on epidemiological results, and to attract the attention of the academic community to the importance of not overlooking these characteristics when modelling disease spread.
We also wanted to assess the trade-off between model accuracy and the required computational power. As the results indicate, given the impacts on the results and the generally small or negligible increase in runtime when considering human behavior, multi-vector, or multi-strain, it appears beneficial to include those characteristics in the models. Although the model developed in this work is simple, the results align with what is known in this field of research, which indicates that modelling is a suitable tool for exploratory research and a good starting point for showing the cost and benefit of mimicking reality more accurately.
Due to the simplicity of the model, further investigation is needed to evaluate whether these results would persist for larger human populations, for different values of the parameters, and for more detailed models. Therefore, several suggestions for future research can be made, such as (i) to repeat the same analyses, but using a larger number of replications to verify whether with a larger sample and consequently greater accuracy, the results will be similar or not; (ii) to repeat the same analyses for other human population sizes and other variations of the parameters and verify whether the results are similar; (iii) to carry out more experiments, possibly with complete or at least fractional factorial planning to evaluate the interaction between factors; (iv) to increase the level of detail of the model to more accurately represent reality; (v) to include genetically modified mosquitoes; and (vi) to perform similar analysis on different rules for behavior inclusion, such as change in behavior in terms of the number of infected individuals within a specific distance or in terms of the number of infected individuals in a social network (emotional proximity).
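As a hedged sketch of suggestion (iii), the snippet below enumerates a two-level full factorial design over three of the inputs using only the standard library; the input names and levels are illustrative, and a fractional design would retain only a balanced subset of these runs.

```python
from itertools import product

# Two illustrative levels per input; not the study's actual values.
levels = {
    "mosquito_pop":   (5000, 10000),
    "latent_rate":    (0.10, 0.20),
    "mortality_rate": (0.05, 0.10),
}

# Full factorial: every combination of levels becomes one experiment run.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, start=1):
    print(i, run)
```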
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. This study also used data available in previously published studies as appropriately cited.
"Environmental Science",
"Mathematics",
"Medicine"
] |
Dynamical Trust and Reputation Computation Model for B2C E-Commerce
Trust is one of the most important factors that influence the successful application of network service environments, such as e-commerce, wireless sensor networks, and online social networks. Computation models associated with trust and reputation have received special attention in both the computer science and service science communities in recent years. In this paper, a dynamical computation model of reputation for B2C e-commerce is proposed. Firstly, conceptions associated with trust and reputation are introduced, and the mathematical formula of trust for B2C e-commerce is given. Then a dynamical computation model of reputation is further proposed based on the conception of trust and the relationship between trust and reputation. In the proposed model, typical variation processes of reputation in B2C e-commerce are discussed. Furthermore, the iterative trust and reputation computation models are formulated via a set of difference equations based on the closed-loop feedback mechanism. Finally, a group of numerical simulation experiments are performed to illustrate the proposed model of trust and reputation. Experimental results show that the proposed model is effective in simulating the dynamical processes of trust and reputation for B2C e-commerce.
Introduction
The development of the Internet provides small and medium enterprises (SMEs) new opportunities to extend their operations. However, new challenges also arise as a result of the virtual characteristics of the Internet. The Internet easily broadcasts information. However, websites of SMEs have relatively low degrees of trust and reputation [1][2][3][4][5]. Trust is one of the most important factors that influences the successful application of network service environments such as e-commerce, wireless sensor networks, and online social networks [6][7][8]. How to improve the degree of trust and establish a reputation in online marketplaces is critical for these enterprises [1,[9][10][11][12]. Computation models associated with trust and reputation have become focal issues in computer societies, management societies, and service science in recent years [13][14][15][16][17][18][19][20][21][22][23][24][25]. In this paper, a new computation model of dynamical trust and reputation for B2C e-commerce is proposed. The proposed framework simulates the dynamical processes of trust and reputation evaluation for B2C e-commerce by using a mathematical model.
One principal role of the Internet is information dissemination and communication. E-commerce is defined as transactions of products and services over the Internet. The prototype of e-commerce is Electronic Data Interchange (EDI) [26,27]. With the development of information exchange techniques and the popularization of the Internet, Business-to-Business (B2B) e-commerce was exceeded by Business-to-Consumer (B2C) e-commerce in the 1990s [5,28,29]. Commercial data exchange in EDI used a private value-added network between parties who were familiar with and trusted each other. However, the situation changed in B2C e-commerce, which is based on the Internet. On the one hand, in the service-oriented Internet field, many kinds of e-services can be provided for different types of users. On the other hand, many users are reluctant to purchase products or services over the Internet. For example, only five percent of active Internet users made purchases via the Internet in Hong Kong over the last ten years [30]. A report made by the China Internet Network Information Center (CINIC) revealed that only about 40 percent of Internet users in China had the experience of purchasing via the Internet in 2014 [31]. The lack of trust is still one of the most important reasons for users not purchasing over the Internet [14,15,22,[32][33][34][35][36][37]. This phenomenon is not surprising when the characteristics of the Internet are considered. The Internet is known to be a loosely coupled system. The Internet is designed primarily as a medium of cooperation and information sharing. Furthermore, the Internet was not originally designed as a commerce environment [38]. Thus, trust is hard to achieve and maintain. The reasons behind the lack of trust in B2C e-commerce are complex. First of all, service clients cannot interact with products and service providers directly, and they are also unfamiliar with online added services. The credibility of online information is weak [15,39]. Security and privacy of key pieces of information that consumers provide through the Internet are also important issues for consumers [8,40,41]. In such an uncertain environment, trust becomes one of the most important factors in the development of Internet services such as e-commerce [22,42]. Indeed, the Internet provides maximal convenience in the information era. The total number of financial transactions made via the Internet has been increasing. The problems associated with trust, which derive from certain techniques and from the human element in e-commerce, can be solved step by step. In terms of the evaluation of trust and reputation on the Internet, many practical trust evaluation systems focus mainly on Peer-to-Peer networks such as C2C, B2B e-commerce, and B2C e-commerce [43][44][45][46][47][48]. As for B2C e-commerce, some trust models and reputation evaluation methods have been put forward [11,49,50]. Various researchers have paid special attention to computation models of trust and reputation for B2C e-commerce in recent years [14][15][16]25,[51][52][53][54]. It is important to take into account the dynamical factor in the processes of trust development and reputation evaluation. This paper proposes new dynamical trust and reputation computation models for B2C e-commerce.
The remainder of this paper is organized as follows. The notion and meanings of trust for B2C e-commerce in network environments are reviewed in Section 2. The components and mathematical formula of dynamical trust for B2C e-commerce are discussed in Section 3, and the main factors that influence dynamical trust in B2C e-commerce are analyzed. Then a computation model of reputation is proposed based on the relationship between trust and reputation in B2C e-commerce in Section 4. In the proposed model, the main variation processes of reputation in B2C e-commerce are discussed, and iterative trust and reputation computation models are expanded based on the trust and reputation models. A group of simulation experiments are performed to illustrate the proposed model in Section 5. Finally, conclusions are drawn in Section 6.
State-Of-The-Art
Trust is an important component of different social relationships such as interpersonal relations, economical relations, and so on. The conception of general trust has been researched in different academic disciplines such as philosophy, psychology, management, and economics [14,19,22,36,37,55]. Trust in the Internet has some characteristics in common with, and some that differ from, traditional theories of trust. To begin with, general research on trust is examined before the particularity of online trust is analyzed, with special emphasis on trust in B2C e-commerce.
Ancient philosophers studied trust as part of human nature, and modern philosophers mainly study interpersonal trust and the morality of trust relationships. Political philosophers look at trust as having social value and benefits. Psychology mainly investigates interpersonal trust. Psychologists consider interpersonal trust to be an important concept in psychology and vital to personality development, cooperation, institutions, and society [56]. In management, trust is studied in organizational contexts.
Trust is identified as one control mechanism that enables employees to work together more productively and effectively [57]. As for economics and marketing, trust is one of the components of consumer relation management (CRM) [8,22,58]. Disagreements as to definitions of trust occur for two reasons. First, trust is an abstract concept and is often used interchangeably with related concepts such as reputation, credibility, and confidence. Second, trust is a multi-faceted concept that incorporates cognitive, emotional, and behavioral components.
Following the above, trust and reputation models for the Internet environment will now be reviewed. Computational trust and reputation models in virtual societies were reviewed by Jordi and Carles [55]. The authors listed some typical criteria by which the main computational trust and reputation models are classified. The authors noted that reliability measures of the calculated trust and reputation values are not mentioned in previous methods, and few models proposed the links between trust and reputation. However, as we know, reputation is one of the most important ways to help build trust. Different trust and reputation models for Internet-mediated service provisions were surveyed by Jøsang, Ismail, and Boyd [4]. The authors discussed the notion of trust, the relationship of trust and reputation, and research agendas for trust and reputation. Different reputation network architectures are described. Reputation computation methods and some well-known commercial reputation systems are surveyed. Main problems and solutions of these models and systems are also mentioned. Main approaches to model reputation systems were reviewed by Gutowska, Sloane and Buckley [21]. The authors presented a new reputation model for the distributed reputation system in B2C e-commerce applications. The model considered several aspects that influence trust and reputation such as age of ratings, transaction value, credibility of referees, number of malicious incidents, and unfair ratings.
Additionally, other aspects that affect online trading decisions, such as trust-mark seals, payment intermediaries, privacy statements, security, and privacy strategies, were also discussed [48][49][50][51]. It is understood that the Internet provides opportunities at the same time as presenting numerous threats and risks. Reputation evaluation is one way to minimize threats. Reputation evaluation is usually performed through feedback reviews subsequent to transactions. However, the quality of the feedback limits the quality of the reviews. An evaluation method for review quality in terms of multiple metrics was presented by Li and Li, Du and Tian [53]. Based on the filtered reviews, a service reputation evaluation was proposed by Cho, Kwon and Park [18]. A trust management framework that is event-driven and rule-based in service-oriented computing environments was proposed by Wang, Lin and Wong [52]. The paper proposed trust evaluation metrics and trust computation formulas. The incremental characteristics of the trust establishment process were embodied in the proposed model. Reputation ranks were determined by a fuzzy-logic based approach in which new service providers and old ones were differentiated [25]. Service users are important information sources for the reputation of service providers, and transaction feedback is used as the quality indicator for trust building in e-commerce. Unfair or incorrect information from unreliable or malicious users has significant negative effects on the fairness and objectivity of reputation evaluation. Therefore, the collaborative filtering method is used in reputation systems by different researchers in order to detect and isolate unfair or inaccurate information provided by unreliable or malicious users [18,20]. In recent years, probability-based approaches such as the Bayesian model and the Dirichlet model [6,17,22,59] have been introduced into trust and reputation computation. Online trust and reputation issues for B2C e-commerce have been investigated by researchers of different disciplines, yet there are further issues that remain to be solved.
(1) There are some practical reputation rating systems for B2C e-commerce such as taobao (www.taobao.com), dangdang (www.dangdang.com) and 360buy (www.360buy.com). In these reputation evaluation systems, ratings and reviews of users are stored. The reputation rankings of these systems are mainly based on simple means algorithms. Theoretical research on the reputation systems has also been put forward [9,18,[59][60][61]. However, the rating algorithms are particularly simple. We know that reputation is a long-term process during which trust is established. How to describe the characteristics of trust and reputation is one goal for this research.
(2) The temporal relationship of trust and reputation is not considered in previous research projects [[14][15][16]18,22,52,53]. In terms of both real life and theory, the reason that certain things or service providers are trusted may be their positive reputation. On the other hand, reputation is built via word-of-mouth and trust. Thus, a reputation computation model for B2C e-commerce is investigated based on trust values from reviewers. Reputation is aggregated over several aspects, such as trust values, different reviewers, different times, and different transactions. The temporal relationship of trust and reputation is included in our proposed computation model. The closed-loop iterative computation model is established, which is based on the conceptions of local trust and overall reputation.
(3) How to design a fair and optimal reputation management system is still an open problem both for practical website services, such as B2C e-commerce, and for theoretical research on reputation systems. Centralized and distributed management architectures are two design options [52,55,60]. In centralized reputation management systems, reviews of service clients are independent of service providers [2,52]. We adopt a centralized management architecture in the proposed reputation computation model. The dynamical trust computation model is put forward, in which different aspects of various factors are considered. A reputation evaluation system is further established by fusing trust values. An iterative trust and reputation computation model is further proposed. The proposed evaluation method is intended to be relatively fair and objective.
B2C e-commerce is defined as consumers purchasing products and services via the Internet, otherwise known as online shopping. Trust in B2C e-commerce is associated with consumer experiences, asymmetry of information, the interval of space and time of transactions, and transaction risk [62,63]. The conception of trust in B2C e-commerce can be outlined as follows. Trust in B2C e-commerce is the subjective psychological status of consumers that relies on the promise made by online firms, their websites, or the transaction environment under certain societal and technical circumstances. The psychology of perceptive reliance is developed from previous customer practices. The objective of the psychology of perceptive reliance is to reduce transaction risks and the uncertainty that comes from information asymmetry, the time-space interval, and other factors. Reputation systems collect, process, and aggregate information about participants or services, which can help future users make optimal decisions [61]. Systems of good reputation should encourage trustworthy behaviors and punish dishonest participation [50]. In the following, the dynamical trust computation model for B2C e-commerce will be formulated, and the relationship of trust and reputation will be discussed further. Based on the logical relationship of trust and reputation, the iterative trust and reputation computation model is derived.
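To make the closed-loop idea concrete, the following minimal Python sketch shows one plausible iterative update in which the overall reputation at each step blends its previous value with the mean of the newly reported local trust values; the weighting scheme and values are assumptions for illustration, not the difference equations actually derived in the paper.

```python
def update_reputation(prev_reputation, trust_values, forgetting=0.8):
    """One closed-loop iteration: blend the previous overall reputation with
    the mean of the local trust values reported in the current period."""
    if trust_values:
        feedback = sum(trust_values) / len(trust_values)
    else:
        feedback = prev_reputation  # no new transactions: reputation persists
    return forgetting * prev_reputation + (1.0 - forgetting) * feedback

# Example: reputation evolving over five feedback periods.
reputation = 0.5
for period_trust in ([0.6, 0.7], [0.8, 0.75, 0.9], [], [0.4], [0.85, 0.9]):
    reputation = update_reputation(reputation, period_trust)
    print(round(reputation, 3))
```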
Means and Mathematical Formula of Trust
Characteristics of trust for B2C e-commerce are firstly analyzed in this section. Then, components of trust in B2C e-commerce are discussed. The mathematical formula of dynamical trust for B2C e-commerce is further derived.
Characteristics of Trust for B2C E-Commerce
In general, trust is associated with the trustor and trustee; vulnerability arises from uncertainty and perceived risks, while actions are encouraged by trust. Trust in B2C e-commerce is a particular form of trust. When trust is discussed in the network environment, some new characteristics of the Internet and online shopping should be included in the trust system.
In trust relationships of B2C e-commerce in network environments, the trustors are usually service clients or buyers who browse e-commerce websites and make transaction decisions.Trustees are generally online service providers and e-commerce websites.Due to the complexity and ambiguity of online information and virtual environments, higher degrees of vulnerability occur when compared with traditional trust situations.Because of information asymmetry, time-space intervals, the openness of the Internet, and weakness of information techniques, trust is more important in virtual environments such as B2C e-commerce than in traditional trades.The attitude of the trustor towards society and technology differs for each individual.Therefore, trust in B2C e-commerce is a subjective conception and the characteristics of different kinds of service clients or buyers will be considered in our proposed computation model.Trust is closely associated with reputation in both a virtual environment and real life.How service providers' reputations influence trustors' decisions will also be considered in the proposed trust computation model.If service clients or buyers trust a website, he or she will deduce the trust status of the online service provider, and make purchase decisions accordingly.The components of the trust system will be derived from the above characteristics of trust in B2C e-commerce.
Components of Trust for B2C E-Commerce
Components of traditional trust have heuristic roles when we draw the component system chart of trust in B2C e-commerce. Personality, environment, and risk are three factors that influence the establishment of trust [53]. Three attributes, namely benevolence, competence, and integrity, constitute the main elements of trustworthiness, which is regarded as an antecedent of trust [50]. Predictability is also an important component of trust [56]. Belief is regarded as an important mental experience in trust [62]. The components of trust in B2C e-commerce are related not only to the participants, namely buyers and the online service provider, but also to the circumstances, containing Internet-related techniques and social cultures [62]. There are four main antecedents that influence the trust of service clients in B2C e-commerce: trustworthiness of the Internet merchant, trustworthiness of the Internet as a shopping medium, infrastructural factors such as security and third-party certification, and other factors such as company size and demographic variables.
There are also some conclusive remarks regarding online trust components. Trust has been formulated as having six dimensions: consumer behavior, institutional, information, product, transaction, and technology [50]. Trust has also been investigated from multi-dimensional antecedent components, namely the beliefs of integrity, ability, and benevolence [64]. In B2C e-commerce, integrity is the belief that companies act in a consistent, reliable, and honest manner while keeping their promises. Ability refers to the belief that the company is able to fulfill its promises. Benevolence is the belief that companies care about consumer interests and are concerned for the welfare of their customers. Trust in B2C e-commerce is based on the confidence of consumers. Both belief and confidence are derived from the uncertainties and risks involved in the online transaction environment. In the following, three groups of trust components, namely the trustee, the trustor, and the transaction environment in B2C e-commerce, are discussed separately.
The trustee is the e-commerce website, merchant, and online service provider. Trustee components include branding, offline presence, faith, cooperation, familiarity, benevolence, company history, merchant quality, price, and website quality in terms of convenience, usability, efficiency, reliability, privacy, and security. Components of the trustor include the disposition to trust, purchase history, attitude towards online shopping websites, attitude towards information techniques, personal values, age, education, gender, and subjective perceptive risk. Environmental components include technical and social factors. Technical components of environments include privacy, security, transparency, credibility of information, Internet-related techniques, information techniques, encryption, and third-party certification. Social components of environments include policy, law, morality, and culture.
Direct Trust and Indirect Trust
Components that influence trust in B2C e-commerce can be divided into direct and indirect factors. Factors that influence direct trust and indirect trust are discussed here separately. Further, a dynamical computation model of trust for B2C e-commerce is established based on the primary direct trust computation model.
When we analyze the factors influencing direct trust in B2C e-commerce in a simple situation, service clients or buyers are assumed to browse the website and make purchases without any prior knowledge about the online service providers. The technical and social environments provide the surroundings in which the transactions occur. From the service clients' or buyers' point of view, the disposition to trust is developed throughout their whole lives. As for the service provider, the website that the service clients or buyers face is the main tool that can persuade them to make their decision. Therefore, the design of the website is crucial. Convenience, usability, efficiency, and reliability are important characteristics that enable websites to ensure the integrity, privacy, and security of both themselves and the online service provider. Additionally, merchant quality, price, and service are also very important for increasing the trust of the consumer. In terms of the online service provider, information about the provider on the website should be dependable. Both the offline presence and the history of the provider should be addressed. The brand, faith, cooperation, familiarity, and benevolence of the provider should also be embodied in the provider's website.
Factors that influence indirect trust in B2C e-commerce also arise from the three entities of trust. When service clients or buyers wish to make a purchase from an online firm, the decision will be influenced by indirect trust information taken from other sources. The technical and social environments in B2C e-commerce are exterior factors. The disposition and attitude towards online or offline trust of the consumer are formed gradually throughout his or her life. Factors that influence indirect trust mainly concern the reputation of service providers and the recommendations of other online service clients or advertisements. The dynamical trust computation model of B2C e-commerce is established based on the primary direct trust. Total trust and reputation computation models are further proposed.
Direct Trust Model
Based on the above analysis, the direct trust computation function f_dir can be defined as T_dir(t) = f_dir(T_e(t), T_c(t), T_p(t)) (Equation (1)), where T_e(t), T_c(t), and T_p(t) are the trust values concerning the environment, the service client, and the service provider, respectively. There are two stages of the decrement process. In the first stage, the values of T_p(t) fall quickly. Then, the values of T_p(t) decrease slowly and remain relatively low. Concrete numerical simulations of the trust computation model are discussed in Section 5. The reputation computation model based on trust is discussed next.
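The explicit forms of Equations (2)-(6) are not fully legible in this copy; the text only indicates that T_e(t) and T_c(t) are treated as constants and that the improvement and decrement of T_p(t) follow a hyperbolic-tangent shape. The Python sketch below therefore assumes T_p(t) = level * tanh(alpha * t), where level stands for the client's characteristic trust value (H, M, or L) and alpha controls the rate; the function names and the specific tanh form are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def tp_improvement(t, alpha, level):
    # Assumed improvement process of T_p(t): a hyperbolic-tangent rise
    # towards the client's characteristic trust value `level` (H, M, or L).
    return level * np.tanh(alpha * t)

def tp_decrement(t, alpha, level):
    # Assumed decrement process: a symmetric fall towards -level,
    # fast at first and then levelling off, as described in the text.
    return -level * np.tanh(alpha * t)

def direct_trust(t, alpha, level, T_e=1.0, T_c=1.0, improving=True):
    # Direct trust built from the three components, with T_e and T_c
    # treated as constants as in the paper's simulations.
    tp = tp_improvement(t, alpha, level) if improving else tp_decrement(t, alpha, level)
    return T_e * T_c * tp

# Example: a cautious client (characteristic value 0.5) with alpha = 0.05
t = np.linspace(0, 200, 201)
print(direct_trust(t, alpha=0.05, level=0.5)[-1])  # approaches ~0.5
```

Under this assumed form, a larger alpha makes both curves saturate sooner, which is consistent with the later observation that alpha controls the rates of the improvement and decrement processes.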
Iterative Trust and Reputation Computation Model
Trust is divided into direct trust and indirect trust. In Section 3, the primary direct trust computation model is established. Indirect trust refers to the reputation of service providers and the recommendations of other service clients. The total trust and reputation computation model is investigated in this section.
Reputation System in B2C E-Commerce
Trust in a service-oriented network environment is the subjective psychological status of the service clients or buyers, and relies on the promise made by online service providers. The aim of developing trust is to reduce transaction risks and uncertainties. Reputation systems collect, process, and aggregate trustworthy information regarding participants or services, which helps future users make optimal decisions. The role of trust is to nurture and sustain reputations, which is essentially a trust-building process. In network environments, service providers earn and retain trust via reputations. Without trust, a good reputation will erode. As the child is father of the man, trust begets reputation. Loss of trust can damage a reputation. Reputation is a close relative of trust, yet they are not the same. While a reputation may be trustworthy, it generally consists of a whole range of characteristics other than trust. Online service providers can develop and sustain reliable reputations via the process of building trust.
There are two types of reputation system architectures in service-oriented network environments in which trust ratings are gathered and stored [52]: centralized and distributed architectures. In centralized reputation architectures, the management of trust and reputation is based on centralized trust databases. Trust information regarding the performance of participants is collected, stored, and computed by a third central reputation authority. Reputation rankings and scores are publicly available, and service clients and users can consult them for their next round of transactions. Such reputation systems include taobao, dangdang, and 360buy. In distributed reputation systems, there is no central reputation authority to which reputation ratings are submitted. Each client collects ratings from other participants, and each client records their opinion regarding their transactions with the service providers. When a client desires knowledge of the reputation of a potential transaction partner or service provider, he or she needs to obtain ratings from other clients who have already completed transactions with the same service provider and then calculate the reputation status based on the score ratings. Reputation architectures in C2C e-commerce and P2P networks are of this kind. Reputation system architectures can also be classified as bidirectional or unidirectional [53]. In a bidirectional architecture, all users can be both service providers and service clients. In a unidirectional architecture, service providers and their products are rated by service clients. The proposed reputation system for B2C e-commerce, based on the relationship of trust and reputation, is centralized and unidirectional in its architecture, as illustrated in Figure 1. The trust status and reviews of the service provider provided by the service clients are collected, stored, and computed by a third central authority. Reputation computation results are then produced from the values of the trust reviews, which can be consulted in the next round of transactions.
Reputation Computation Model
As the proposed reputation system for B2C e-commerce is based on the trust review records of other service clients or buyers, the trust value computation process for different types of service clients is discussed first. Thereafter, the aggregated reputation computation model, which integrates all the different trust values, is proposed. From Section 3, the dynamical direct trust computation function f_dir can be written for each client type; the concrete values of H, M, and L will be discussed in Section 5.
The lifetime of a trust value is also considered in the proposed model. A weight function associated with lifetime is used to construct the time-sensitive trust valuation: the more recent a trust value, the higher its weight. The weight function W(t) can be defined as an exponential function [21] of (t_0 - t), where the weight factor λ lies in (0,1), t <= t_0, and t_0 is the reference destination time. The weighted trust value computation function f_dir is given in Equation (10), whose parameters are the same as those in Equations (4) and (7). Consulting Figure 1, there are in total N service clients, and the reputation evaluation computation formula r(t) is given in Equation (11). The decrement process of the reputation computation model is given similarly in Equation (12). The values of reputation thus contain the dynamical direct trust values from all N service clients. The relationship of direct trust values and reputation values is simulated in Section 5.
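Since the exact expressions of Equations (10)-(12) are garbled in this copy, the sketch below only illustrates the two ingredients the text does describe: an exponential lifetime weight that favours recent reviews (assumed here to be W(t) = lambda**(t0 - t) with lambda in (0,1)) and an aggregation of the N clients' trust values into a reputation score. Both the exact weight law and the weighted-mean aggregation are assumptions for illustration.

```python
import numpy as np

def lifetime_weight(t, t0, lam=0.9):
    # Assumed exponential lifetime weight: values recorded closer to the
    # reference time t0 receive a weight closer to 1.
    return lam ** (t0 - t)

def reputation(trust_values, times, t0, lam=0.9):
    # Aggregate the N clients' trust values into a reputation score,
    # here as a lifetime-weighted mean (one plausible reading of Eq. (11)).
    w = lifetime_weight(np.asarray(times, dtype=float), t0, lam)
    v = np.asarray(trust_values, dtype=float)
    return float(np.sum(w * v) / np.sum(w))

# Example: 50 cautious-type reviews spread over 200 time steps
rng = np.random.default_rng(0)
times = rng.integers(0, 200, size=50)
trusts = 0.5 * np.tanh(0.04 * times)  # cautious clients, alpha around 0.04
print(reputation(trusts, times, t0=200))
```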
Model of Total Trust and Reputation
Trust in service-oriented network environments such as B2C e-commerce includes direct and indirect factors. However, when a new service client or buyer evaluates the trust status of the service provider before making a transaction decision, direct trust and indirect trust factors are not separated; trust evaluation is an integral psychological conception. Indirect trust is focused on the reputation of the service provider based on the recommendations of all online service clients, and the computation formula of reputation is denoted by Equation (11). The reputation system based on the trust reviews is illustrated in Figure 1; there are in total N service clients and one service provider. In order to describe the total trust computation model clearly, we give a discrete difference formulation of total trust, direct trust, and reputation. In the following equations, the time variable t advances with a unit step, and r(t) denotes the reputation value at that time. The integrated quantitative total trust value is calculated as a weighted combination of direct trust and reputation, where the weight factor β in [0,1] balances the two: if β = 1, the total trust value of the new service client equals his direct trust; if β = 0, the total trust value comes entirely from the reputation. Equations (10), (11), and (13) give this relation of total trust, direct trust, and reputation. If Equation (12) is used to simulate the decrement processes, the integrated total trust value can be calculated analogously (Equation (15)). The iterative total trust and reputation model is proposed next. The total trust values of the j-th service client at times t_i and t_(i+1) are T^(j)(t_i) and T^(j)(t_(i+1)), and the reputation values of the service provider at times t_i and t_(i+1) are r(t_i) and r(t_(i+1)), where the j-th client is one of the N service clients. Reputation is the collecting, processing, and aggregating of trustworthy information about the service provider. The role of each service client's trust is to change and sustain the reputation of the service provider, and the service provider earns and retains this trust via his reputation. The reputation of the service provider, as a whole-range conception, interacts with the psychological status of trust. If we assume that the total trust value of the service client has the same derivative as direct trust, the difference form of Equation (3) can be used to calculate the total trust value T^(j)(t_(i+1)) of the j-th service client. The total trust T^(j)(t_(i+1)) at time t_(i+1) is determined by the total trust value T^(j)(t_i) and the reputation value r(t_i) at time t_i, with a unit time step, where the weight factor β in [0,1] balances the values of trust and reputation at time t_i.
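The β-weighted combination itself is recoverable from the text (β = 1 reduces total trust to direct trust, β = 0 to reputation), so a minimal sketch of the total-trust combination as a convex mixture looks as follows; the function name is illustrative and the exact coefficients of Equations (13)-(15) are not guaranteed to match.

```python
def total_trust(direct, reputation, beta=0.7):
    # Total trust as a convex combination: beta balances the client's own
    # direct trust against the provider's aggregated reputation.
    assert 0.0 <= beta <= 1.0
    return beta * direct + (1.0 - beta) * reputation

print(total_trust(0.6, 0.4, beta=0.7))  # 0.54
```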
In real life, changes in the reputation of a service provider are derived from the psychological trust status of the service clients. Therefore, the reputation value of the service provider is updated from the trust values of the N service clients. When the mean algorithm is adopted, the iterative reputation computation takes the form r(t_(i+1)) = (1/N) * sum_j T^(j)(t_(i+1)) (Equation (18)). Combining Equations (17) and (18) gives the iterative trust and reputation computation model (Equation (19)).
If the initial trust values of the N service clients and the parameters of the derivative functions of the trust values are given, the trust values of the N service clients and the reputation of the service provider at subsequent times can be calculated using the iterative model of Equation (19). Similarly, if the difference form of Equation (5) is used to simulate the decrement processes of the total trust values T^(j)(t_i) and r(t_i), the iterative total trust and reputation computation model takes the form of Equation (20).
Equations (19) and (20) are a set of difference equations with varying coefficients that describe the dynamical processes of trust and reputation in B2C e-commerce.
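Because the exact coefficients of Equations (17)-(20) are not legible here, the following Python sketch only demonstrates the closed-loop idea described in the text: each client's total trust at the next step mixes its current trust with the current reputation, and the reputation is then refreshed as the mean of all clients' trust values. The update rule is a simplified assumption, so the exact fixed point (0.5 below) will not match the paper's reported 0.48, but the convergence behaviour is analogous.

```python
import numpy as np

def iterate(trust0, beta=0.7, steps=100):
    # Closed-loop iteration: clients update total trust from the current
    # reputation, and reputation is refreshed as the mean over all clients.
    trust = np.asarray(trust0, dtype=float)        # N clients' initial trust
    rep = trust.mean()                             # initial reputation
    rep_history = [rep]
    for _ in range(steps):
        trust = beta * trust + (1.0 - beta) * rep  # per-client update (cf. Eq. (17))
        rep = trust.mean()                         # mean-algorithm update (cf. Eq. (18))
        rep_history.append(rep)
    return trust, np.array(rep_history)

# Three client types: 50 risk taking (0.8), 50 cautious (0.5), 50 conservative (0.2)
init = np.concatenate([np.full(50, 0.8), np.full(50, 0.5), np.full(50, 0.2)])
final_trust, rep_hist = iterate(init)
print(round(rep_hist[-1], 2))  # all trust values and the reputation converge to 0.5
```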
Simulations and Discussions
Simulations of the direct trust model, the reputation computation model, and the iterative total trust and reputation computation model are presented in this section to illustrate the performance of the proposed models. In the simulations, three classical types of service clients (risk taking, cautious, and conservative) are represented by concrete values, and the influence of the parameters of these models is discussed. Both improvement and decrement processes of trust and reputation are simulated.
Simulation of Direct Trust
We mainly simulate the influence of the parameters in the direct trust evaluation model in this subsection. The processes of improvement and decrement of the direct trust evaluation function f_dir are simulated for the three types of service clients (risk taking, cautious, and conservative). Figure 2 shows the improvement process of the direct trust function f_dir denoted by Equation (7) and the derivative function of f_dir for different values of parameter α, where the service client is of the cautious type (characteristic value 0.5). Because the value of T_e(t), which denotes environmental factors, is constant, it is set to 1 in the following simulations. Figure 3 shows the decrement process of the direct trust function associated with Equations (5) and (6), where the service client is also of the cautious type. From Figures 2 and 3, we see that parameter α determines the rates of the improvement and decrement processes of the direct trust function: the larger α is, the more rapid the process. The derivative functions of direct trust show that the direct trust functions change rapidly at the beginning, and the values of the derivative functions eventually approach zero. From Figure 4, we see that in the improvement process the values of the direct trust function of the different types of clients (risk taking, cautious, and conservative) approach their characteristic trust values, denoted H, M, and L. Conversely, in the decrement process the values of the direct trust function approach the negatives of the characteristic trust values, -H, -M, and -L. In the former case, risk-taking service clients develop high levels of trust; in the latter case, risk-taking service clients develop high levels of distrust. The simulation results are consistent with real life.
Simulation of the Reputation Computation Model
In this subsection, the reputation computation model denoted by Equations (11) and (12) is simulated, and the influence of the weight λ on the model is illustrated. Both the improvement and decrement processes of the reputation evaluation are simulated, covering the three types of clients. Figure 5 shows the improvement process of the reputation computation model denoted by Equation (11); here N is 50, that is, a total of 50 direct trust evaluations are combined into the values of reputation. The parameters α(i) follow a normal distribution N(0.04, 0.01), i = 1, 2, ..., 50, the service clients are of the cautious type, and t_0 = 200. Figure 6 shows the decrement process of the reputation computation model denoted by Equation (12); again a total of 50 direct trust evaluations are combined into the values of reputation, the parameters α(i) follow the normal distribution N(0.04, 0.01), i = 1, 2, ..., 50, and the service clients are of the risk-taking type (characteristic value 0.8). Figure 7 shows the improvement process of the reputation computation model of Equation (11) with all three types of service clients; a total of 150 direct trust evaluations are combined into the values of reputation. From Figures 5 and 6, we see that the value of the weight λ in Equations (11) and (12) affects the rates of the improvement and decrement processes of the reputation evaluation when the values of parameter α are given. Further, the values of the reputation evaluation approach the trust values of the given client type in the improvement process, and approach the negative trust values of the given client type in the decrement process. Figure 7 shows that when all three types of service clients take part in the improvement process, the reputation values approach a positive value. Figure 8 shows that when all three types of service clients take part in the decrement process, the reputation values approach a negative value. Figures 7 and 8 thus simulate the improvement and decrement processes of the reputation derived from the direct trust function. Reputation values with different numbers of service clients can be simulated in a similar manner.
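As a rough companion to the Figure 5-8 setup, the sketch below draws alpha from N(0.04, 0.01) for each client, builds one tanh-shaped direct trust curve per client, and summarizes the lifetime-weighted average over clients as a reputation-style score. It reuses the assumed forms from the earlier sketches and is meant only to reproduce the qualitative behaviour (a positive value for mixed client types in the improvement case), not the paper's exact figures.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_reputation(levels, t_max=200, lam=0.9):
    # One tanh-shaped trust curve per client, alpha drawn from N(0.04, 0.01);
    # the score is the mean over clients of each client's lifetime-weighted
    # time-average of trust (an assumed stand-in for Equation (11)).
    alphas = rng.normal(0.04, 0.01, size=len(levels))
    t = np.arange(t_max + 1)
    trust = levels[:, None] * np.tanh(alphas[:, None] * t)  # shape (clients, time)
    w = lam ** (t_max - t)                                   # recent values weigh more
    per_client = (trust * w).sum(axis=1) / w.sum()           # weighted time average
    return per_client.mean()

# 150 clients: 50 risk taking (0.8), 50 cautious (0.5), 50 conservative (0.2)
levels = np.concatenate([np.full(50, 0.8), np.full(50, 0.5), np.full(50, 0.2)])
print(round(simulate_reputation(levels), 3))  # a positive value (about 0.5 for this mix)
```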
Simulation of Total Trust Containing Reputation
In this subsection, the total trust containing direct trust and the reputation evaluation, denoted by Equations (14) and (15), is simulated. Figure 9 shows the improvement process of total trust denoted by Equation (14), which contains both direct trust and reputation; the parameter α(j) is 0.05 and β is 0.7. Figure 10 shows the effect of the weight factor β, which balances the values of direct trust and reputation. Figure 11 shows the decrement process of total trust denoted by Equation (15), which contains direct trust and reputation; the parameters are the same as in Figure 9. Figure 12 shows the process of change of total trust containing direct trust and the reputation evaluation; the parameters are the same as in Figure 9, but direct trust is in the improvement process while reputation is declining. From Figures 9 to 12, we see that when the direct trust values and reputation values undergo improvement and decrement, they jointly determine the total trust values, and the weight factor β balances the two. With the other parameters given, Figure 10 shows the improvement process of the total trust values for different values of β: the value of β controls the relative effect of direct trust and reputation.
Simulation of Iterative Total Trust and Reputation Model
The iterative total trust and reputation computation model denoted by Equations (19) and (20) is simulated in this subsection. Figure 13 shows the improvement process of the iterative reputation model; a total of 150 trust evaluations are combined into the values of reputation. In Figure 13, the cross line shows the improvement process of the reputation evaluation, and the red, green, and blue lines are the typical iterative total trust values of the three types of service clients. Figure 14 shows the decrement process of the iterative reputation model denoted by Equation (20). In Figure 14, the cross line shows the decrement process of reputation, and the red, green, and blue lines are the typical iterative total trust values of the three types of service clients.
From Figure 13, we see that when the iterative formulation of Equation (19) is used to simulate the improvement process of the iterative trust and reputation evaluation, the total trust of the different types of service clients and the values of reputation converge to a positive value (0.48 in Figure 13). In other words, the reputation of the service provider sustains this value stably. Figure 14 shows that when the iterative formulation of Equation (20) is used to simulate the decrement process of iterative trust and reputation, the total trust of the different types of service clients and the values of reputation converge to a certain negative value (-0.44 in Figure 14); the reputation of the service provider likewise sustains this value stably. Therefore, the processes of change in total trust and reputation are reasonably simulated. Values of iterative trust and reputation converge faster than those of the non-iterative trust model denoted by Equation (14).
Conclusions
Trust is one of the most important factors influencing the success of Internet services such as B2C e-commerce. Practical trust evaluation systems for B2C e-commerce are based on an open-loop relationship of trust and reputation in which reputation is computed as the mean of trust values. In this paper, a dynamical closed-loop trust and reputation computation model for B2C e-commerce in service-oriented network environments has been proposed. Based on previous related works, the characteristics and components of trust for B2C e-commerce have been discussed. Dynamical processes, such as the improvement and decrement of direct trust, are simulated via mathematical formulae. Then, the reputation system architecture for B2C e-commerce is established, and the iterative trust and reputation computation model is further proposed, in which the closed-loop relationship of trust and reputation is exploited. Finally, several groups of simulation experiments were performed to illustrate the proposed models. The main contributions of this paper are as follows:
1. Based on our previous work [23,24,65], the conceptions of trust, direct trust, and indirect trust in B2C e-commerce are introduced, and a dynamical mathematical model is established to simulate the processes of improvement and decrement of direct trust.
2. The reputation system architecture for B2C e-commerce is established, based on the trust records of all service clients, and the aggregated reputation computation model that integrates the trust values of all service clients is proposed.
3. In order to combine the trust values of each service client with the values of aggregated reputation, the iterative trust and reputation computation model is further proposed for B2C e-commerce based on the closed-loop relationship of trust and reputation.
4. Several groups of numerical simulation experiments are performed to illustrate the proposed models. The processes of improvement and decrement of dynamical direct trust are simulated by hyperbolic tangent functions. Three types of service clients, namely risk-taking, cautious, and conservative, are expressed numerically. The processes of change of direct trust values, total trust values, and reputation are simulated. The improvement and decrement processes of iterative trust and reputation show that the trust values of the different types of service clients converge to a stable value, which denotes the reputation of the service provider.
We will further investigate the processes of change of trust and reputation using probability theory. We will also verify the proposed model on practical datasets from service-oriented network applications such as B2C e-commerce shopping evaluations.
Figure 1. Reputation system in service-oriented network environments based on trust reviews.
Figure 2. The processes of improvement of the direct trust function: (a) the processes of improvement of direct trust; (b) derivative functions of direct trust.
Figure 3. The processes of decrement of the direct trust function.
Figure 4. The direct trust function for the three types of clients.
Figure 5. The processes of improvement of the reputation computation model for the cautious client type.
Figure 6. The processes of decrement of the reputation computation model for the risk-taking client type.
Figure 7. The processes of improvement of the reputation model containing three types of clients. Each type of service client has 50 direct trust evaluations; the parameters α(i) follow a normal distribution N(0.04, 0.01), i = 1, 2, ..., 150, and the values for the three types of service clients follow the same normal distributions as before.
Figure 8. The processes of decrement of the reputation model containing three types of clients. Figure 8 shows the decrement process of the reputation function denoted by Equation (12) with the three types of service clients; parameters and notation are the same as before. In Figures 7 and 8, the line with cross markers denotes the reputation values obtained from the 150 service clients of the three different types, the blue line denotes the reputation values from the 50 risk-taking clients, the red line from the 50 cautious clients, and the green line from the 50 conservative clients.
Figure 9. The processes of improvement of total trust values, direct trust values, and reputation.
Figure 10. The processes of improvement of total trust values with different values of β.
Figure 11. The processes of decrement of total trust values, direct trust values, and reputation.
Figure 12. The processes of change of total trust values with direct trust values and reputation.
Figure 13. The processes of improvement of the iterative trust and reputation computation model.
Figure 14. The processes of decrement of the iterative trust and reputation computation model.
T_e(t), T_c(t), and T_p(t) are trust values concerning the environment, the service client, and the service provider, respectively; their values are limited to the domain [0,1], and t denotes time. If a simple product function is used to derive the explicit form of Equation (1), the direct trust evaluation function f_dir multiplies these three component values (Equation (2)). T_e(t) is related to the attitude of society towards techniques and societal factors; because this attitude changes very slowly, T_e(t) can be viewed as a constant. T_c(t) is related to the service clients; the different backgrounds of service clients, such as disposition to trust, attitude, and personal values, can also be viewed as constant. In our simulations, three classical types of service clients, namely risk taking, cautious, and conservative, are discussed.
T_p(t) is defined based on the factors that influence the values of trust. Since T_e(t) and T_c(t) are assumed to be constants, the dynamics are carried by T_p(t). From Equation (3), T_p(t) can be solved as Equation (4), which is a transformation of the hyperbolic tangent. The plots of T_p(t) and its derivative show that in the trust establishment stage the values of trust do not increase rapidly, in the improvement stage they increase rapidly, and in the stabilization stage they again do not increase rapidly. Therefore, Equation (4) simulates the process of improvement of trust values. The decrement process of T_p(t) can be simulated similarly by the derivative function in Equation (5), from which T_p(t) can be solved as Equation (6).
"Business",
"Computer Science"
] |
Chhattisgarh's Perspective on Investigating Cybercrime: Challenges and Solutions
Chhattisgarh is not an exception to the ubiquitous effects of cybercrime, which has become a global problem. Cybercriminals take advantage of weaknesses as digital technologies develop, posing particular difficulties for Chhattisgarh's law enforcement organisations and authorities. In order to boost cybersecurity measures and improve law enforcement capacities, this research study addresses the specific difficulties the state faces in pursuing cybercrime. The study examines case studies, evaluates current cybersecurity activities, and provides thorough recommendations while also analysing the current cybercrime scenario. Chhattisgarh can successfully fight cybercrime and establish a safer online environment by comprehending and tackling these issues.
Introduction
The alarming rise of cybercrime has cast a menacing shadow over societies worldwide, leaving no region untouched by its detrimental consequences. Chhattisgarh, an Indian state, has witnessed a steady surge in cybercrimes, posing significant challenges for its law enforcement and policymakers. In an era dominated by digital reliance, cybercriminals exploit vulnerabilities to execute various unlawful acts, encompassing hacking, data breaches, online scams, identity theft, and cyberbullying. Even within Chhattisgarh's progressing economy and rapid development, the threat of cybercrime looms large. The interconnected digital ecosystem, fuelled by internet accessibility, mobile devices, and e-commerce platforms, provides cybercriminals with ample opportunities. This research endeavour aims to delve into the intricacies of Chhattisgarh's struggles in investigating cybercrime while proposing actionable recommendations to fortify cybersecurity measures and empower law enforcement. By pinpointing the unique challenges impacting cybercrime investigations in this region, we can chart focused strategies to counteract this evolving menace. Through the exploration of pertinent case studies and an assessment of the present cybercrime landscape in Chhattisgarh, coupled with the hurdles faced by law enforcement bodies during cyber investigations, this study seeks to comprehensively address the issue, drawing inspiration from global best practices, technological advancements, and regulatory adaptations.
Data Analysis: Quantitative Analysis: Through statistical tools, we will dissect the quantitative data, unveiling patterns, trends, and connections among distinct types of cybercrimes. This will quantify the tangible impact on both the economy and society. Qualitative Analysis: The qualitative data will undergo thematic analysis, identifying recurring themes and patterns in the literature. This will illuminate the challenges faced by law enforcement, the intricate legal landscape, and the experiences of those impacted by cybercrime.
Review of Existing Initiatives:
We will meticulously evaluate current cybersecurity measures in Chhattisgarh, exploring government policies, law enforcement efforts, and public awareness campaigns. Scrutinizing official documents, reports, and media coverage will provide insight into ongoing initiatives.
Recommendations:
Guided by the synthesized findings from the data analysis and literature reviews, we will offer pragmatic recommendations to tackle the identified challenges. These solutions will draw from both quantitative insights and qualitative nuances. By weaving these various research threads together, our endeavour is to present a comprehensive, well-rounded analysis of the cybercrime landscape in Chhattisgarh. This approach ensures a deep understanding of the challenges, consequences, and potential avenues for progress in the realm of cybersecurity.
Chhattisgarh's Cybercrime Landscape
As the state has become more plugged into the internet, there has been a noticeable change in its cybercrime environment. Cybercriminals now have an abundance of opportunities thanks to the general availability of internet connections, the widespread usage of smartphones, and the explosive rise of e-commerce platforms. The Chhattisgarh cybercrime landscape is characterised by the following factors. Rising cyber occurrences: Chhattisgarh has seen an increase in cyber incidents, including identity theft, financial fraud, and online scams. To carry out their illegal actions, cybercriminals take advantage of weaknesses in computer systems, networks, and digital platforms. Online financial fraud remains a common cybercrime in the state and includes phishing, online banking fraud, and credit card fraud. Unwary people frequently become the targets of clever fraud schemes, incurring significant financial losses.
Cyberbullying and Online Harassment:
The growth of social media and digital communication tools has made people more vulnerable to cyberbullying and online harassment, especially young users. Cyberbullies target and harass their victims using anonymous identities, which distresses the victims. Chhattisgarh is also a desirable target for cyber espionage and hacking operations, since it is home to a number of sectors, including mining, steel, and agriculture. Theft of intellectual property and industrial espionage pose serious concerns for the state's economic interests.
Data Breach and Privacy Issues:
As personal and sensitive information becomes more digital, there is a rising number of data breaches. Cybercriminals compromise the privacy of people and organisations by exploiting security flaws to obtain unauthorised access to databases.
Ransomware Attacks: In Chhattisgarh, ransomware attacks have become a significant cyber menace. Important data is encrypted by malicious software and rendered unavailable until a ransom is paid, disrupting operations and essential services. Cybercriminals also frequently employ social engineering techniques to trick people into disclosing private information or taking actions that further their criminal activities. Pretexting, baiting, and tailgating are examples of frequent tactics.
Lack of awareness:
The general public's ignorance of cybersecurity best practices is a serious obstacle in the fight against cybercrime. Many people and companies are not aware of the hazards they face or the precautions they might take. Underreporting of cyber incidents is common in Chhattisgarh as a result of factors including concern for one's reputation, mistrust of the police, and a sense that few real options are available.
Emerging threats: As technology develops, more advanced and novel cyberthreats appear, making it difficult for the government to stay on top of changing cybercrime strategies. A comprehensive strategy that incorporates technical developments, legislative changes, public awareness initiatives, and cooperative efforts between governmental organisations, law enforcement, and business partners is needed to address the Chhattisgarh cybercrime scenario.
Types of cybercrime that are common in the region
Just like in many other places, Chhattisgarh is facing its share of cybercrime due to our growing reliance on all things digital. The tech boom and easy internet access have brought about various types of cyber mischief. Let's break down some of the common cybercrime flavours that are buzzing around in our region:
1. Phishing: Picture this: cyber tricksters sending you fake emails or texts that pretend to be from real companies. They're fishing for your sensitive info like passwords and credit card details. Sneaky, right?
2. Online Banking Fraud: Hold onto your hats for this one. Bad actors are sneaking into people's online banking accounts and pulling off unauthorized transactions. It's like they're pulling a virtual bank heist!
3. Identity Theft: Brace yourself. Cyber crooks are snatching up personal stuff like your social security numbers and addresses. Then they cook up all sorts of schemes using your stolen identity. It's like your digital twin is going rogue.
4. Social Networking Scams: Oh, the tangled web they weave! Fake profiles and accounts on social media are luring folks into clicking on dodgy links or spilling the beans on personal info. It's like a digital charade.
5. Cyberbullying: The dark side of the internet, sadly. Bullies are using the web to torment people with threats and nasty comments, all through social media or messaging apps. It's like schoolyard bullying with a digital twist.
6. Ransomware Attacks: Talk about a digital hostage situation! Cyber villains are locking up data with encryption and demanding a ransom for its release. It's like your files are being held for ransom, but without any swashbuckling heroes.
7. Data Breaches: Sneaky intruders are breaking into databases to nab sensitive stuff: think financial records, personal details, and even valuable secrets. It's like digital cat burglars on the prowl.
8. Hacking and Website Defacement: Imagine unauthorized digital break-ins where hackers mess around with websites, change stuff, and basically cause chaos. It's like graffiti, but for websites.
9. Online Fraud and Scams: Hold onto your wallet! Cyber tricksters are cooking up scams like fake job offers or winning a lottery. They're banking on your trust to snatch your cash or personal data. It's like a virtual con game.
10. Cyberextortion: The cyber bad guys are turning to threats delivered through emails or messages to strong-arm folks into coughing up cash or secrets. It's like a digital shakedown.
11. Distributed Denial of Service (DDoS) Attacks: Imagine a digital traffic jam! These attacks overload websites or services, causing them to slow down or crash. It's like a cyber traffic pileup.
12. Cyberespionage: Picture sneaky infiltrators stealing sensitive info or trade secrets for some big-time financial or political advantage. It's like a digital spy game.
13. Sextortion: This one's just creepy. People are being blackmailed with explicit content or personal info. It's like a bad soap opera plot in the digital world.
14. Online Child Exploitation: The heart-wrenching reality is that children are at risk too. From grooming to sharing harmful content, there's a dire need to shield them from the dark corners of the internet.
The surge of cybercrime in Chhattisgarh is waving a red flag, reminding us to buckle up with top-notch cybersecurity, spread awareness, and ensure the law is on our side. We're all in this digital journey together, working hand in hand to make sure Chhattisgarh's residents and businesses can thrive safely.
Impact of Cybercrime on the state's economy and society
The impact of cybercrime on Chhattisgarh's economy and society is like a puzzle with many pieces, affecting everyone from regular citizens to big corporations and even government bodies. It's like a ripple effect that spreads through various aspects of life. Here's how cybercrime shakes things up in our state.
Financial Jolts: Imagine a sudden hit to the wallet. Businesses, individuals, and even the government take a hit when cybercriminals strike. Online fraud, data breaches, and those ransomware attacks drain money right out of pockets. It's like a financial rollercoaster with a twist.
Service Standstills: Picture this: essential services like banking, healthcare, and government operations coming to a sudden halt. Cyberattacks like DDoS are like digital roadblocks, causing chaos in everyday life. It's like a traffic jam for the virtual world.
Reputation Rollercoaster: Think of your favourite restaurant getting a bad review. Companies and government agencies hit by cyberattacks suffer more than just financial losses: their reputations take a massive hit. Losing the trust of customers is like losing a precious gem.
Stifled Innovation: Imagine your favourite recipe getting stolen. Cyber espionage and theft of intellectual property can put a brake on our state's innovation and growth. When ideas are swiped, progress takes a hit. To tackle this digital challenge, Chhattisgarh needs to take a few important steps. It's like putting on armor to face a battle: we need better cybersecurity, people need to know about the risks, and law enforcement needs to be sharp. Teamwork between the government, businesses, schools, and regular folks is like our shield against cyber threats. Let's build a digital world that's both exciting and safe.
Challenges in Investigating Cybercrime in Chhattisgarh: 1. Lack of Awareness and Reporting:
In the digital world, there's a big problem: many people don't know much about the dangers lurking online. Think of it like this: not everyone knows how to spot those tricky emails trying to steal personal info or the sneaky scams that want to trick you. Because of this, more people end up falling for these online tricks. And when something bad happens, some folks are scared to speak up because they worry it might damage their reputation. It's like not telling anyone you got a scratch on your new bike because you're afraid your friends will think you're not careful enough. But this silence makes the problem worse. We need to help everyone understand the risks and teach them how to stay safe online, just like we learn to look both ways before crossing the street.
Not Enough Fancy Tools:
Imagine being a detective trying to catch a sneaky thief who's hiding in the digital world. You need really cool gadgets to track them down and collect clues. But in some places, these cool tools are hard to find, making it tough to catch the bad guys. It's like trying to build a sandcastle without a bucket and shovel: you'll end up with a lopsided pile of sand. Without the right tools, it's super hard for the police to catch the cybercriminals.
Confusing Laws and Borders:
Cybercrime is a bit like a puzzle with missing pieces that have been scattered around the world. Imagine a thief who's stealing from houses in different neighbourhoods, and you need to figure out which police station should catch them. But sometimes, it's hard to decide who's in charge, just like when you play a game but can't agree on the rules. And to make it even trickier, the rules for catching these online bad guys might be outdated or not very clear. Solving this puzzle requires different police teams from different places to work together and agree on how to play the game.
Clever Crooks:
Picture this: there are some people who are really, really good at breaking codes and locks, but in the digital world. These smart folks use secret tricks and tools to steal information without anyone knowing. They're like the ultimate hide-and-seek champions who are really hard to find. As they get better at hiding, the police need to learn new tricks too, just like how you practice more to become better at a video game.
Sneaky Secrets:
Some people want to keep their messages private, just like having a secret code that only you and your best friend know. But imagine if thieves started using secret codes to plan their crimes; that would make it hard for the police to stop them! It's like playing hide-and-seek, but with secret notes that no one else can read. We need to figure out how to balance everyone's privacy with catching the bad guys.
To fix these challenges, we need to help everyone learn about online dangers and how to stay safe, give the police the right tools to catch cybercriminals, make clear rules for catching them no matter where they hide, and teach the police new tricks as the crooks get smarter. And while we're doing all of this, we also need to make sure that everyone's privacy is protected. Working together, we can make the digital world safer and catch those sneaky cybercriminals.
Strengthening Law Enforcement Capabilities:
Specialized Training and Cybercrime Units: In the pursuit of combating cybercrime, envision a transformation where ordinary police officers evolve into skilled cybercrime detectives through targeted and specialized training. This upskilling equips them to navigate the intricacies of digital mysteries, akin to detectives honing their abilities to decipher cryptic clues. Moreover, visualize the establishment of dedicated cybercrime units within the police force, a force of cyber superheroes poised to confront online malevolence and shield the citizenry from digital threats.
Collaboration with National and International Agencies: Conceive a scenario where local law enforcement collaborates extensively with experts from across the nation. This symbiotic exchange involves the sharing of expertise, tools, and insights, reminiscent of superheroes uniting to vanquish a common adversary. Further extend this collaboration to an international scale, envisioning Chhattisgarh forging connections with law enforcement counterparts worldwide. This global alliance bridges gaps in the global crusade against cybercrime, echoing the spirit of camaraderie seen in a united league of superheroes.
Improving Technological Infrastructure: Enhanced Cyber Forensics Laboratories: Picture cyber forensics laboratories as cutting-edge sanctuaries, brimming with sophisticated gadgets. Here, adept investigators work their digital wizardry, akin to modern-day detectives peeling back layers to unearth concealed digital evidence. These advanced labs utilize powerful tools that illuminate clandestine trails in the digital realm, resembling magnifying glasses uncovering hidden clues in the narratives of classic detective tales. Digital Evidence Management Systems: Imagine a fortified digital vault, wherein digital artifacts are meticulously stored, safeguarded from tampering and manipulation. This secure repository resonates with the concept of preserving precious relics within a museum's protective confines. Visualize a technological marvel that ensures the integrity and authenticity of digital evidence, a digital equivalent of a lock and key fortifying a treasure chest.
Legal and Policy Reforms:
Streamlining Cybercrime Laws: Envision a legal landscape evolving to address the nuances of contemporary crimes in the digital sphere. This evolution prevents cybercriminals from eluding justice, akin to revising the rulebook for a new game, adapting to the changing playing field. Picture a legal framework that provides unequivocal guidance on handling digital criminals, much like road signs steering us through intricate journeys.
Establishing a Cybercrime Court: Imagine a specialized court equipped to efficiently handle cybercrime cases, presided over by judges well-versed in digital complexities. This specialized tribunal draws a parallel to the appointment of expert referees in high-tech sporting events. Envision a swifter legal process for cybercrimes, ensuring that justice is expedited, a scenario analogous to quickly solving a complex puzzle.
Promoting Awareness and Reporting:
Cybersecurity Education Programs: Envision a world where cybersecurity knowledge permeates all age groups, from youngsters to adults, paralleling the way we learn to navigate streets safely. Visualize individuals confidently reporting suspicious online activities, each individual playing a role akin to vigilant neighbours, collectively safeguarding their digital community. Encouraging Public-Private Cooperation: Envision a dance of collaboration between businesses and law enforcement, with synchronized moves and shared insights forming a harmonious cybersecurity strategy. Further, conjure an image of interdisciplinary cooperation, as experts from various domains unite to confront cybercrime, an alliance resembling friends collaborating to solve an intricate puzzle.
Expert Opinions
Here are the main points from conversations with experts in Chhattisgarh regarding the reasons behind cybercrime and its prevention.
Expert: Santosh Singh, IPS, Batch 2011, Chhattisgarh
Every day, numerous individuals fall victim to cybercrime. One prominent aspect is the rising number of online fraud cases. People are being deceived through various means, including being tricked into revealing One-Time Passwords (OTPs) at any time. Cybercriminals utilize tactics ranging from exploiting greed to intimidation or emotional manipulation. A concerning trend is the use of videos for blackmail. Typically, this involves establishing an online friendship, luring the victim into a romantic relationship, and coercing them into sharing explicit content. Subsequently, the criminals blackmail the victim, demanding large sums of money under the threat of filing an IT Act-based complaint with the police. This scare tactic often leads to financial loss. Another strategy involves the creation of cloned Facebook profiles for cheating. Some criminals even use voice changers to demand money while impersonating someone else. As technology evolves, new methods of online fraud are emerging continuously. Interestingly, rural areas are witnessing a higher incidence of cyber fraud due to a lack of digital literacy.
The root cause of these incidents can be traced back to insufficient knowledge about the technology being used. A comprehensive understanding of the technology is the key to protecting oneself from cyber fraud.
Expert: Ratanlal Dangi IPS, 2003, IG Chhattisgarh
In today's world, cybercriminals engage in a variety of unlawful activities. Using computers and networking devices, they exploit individuals for personal gain, often financially. Anyone can become a victim, regardless of education level, with middle-class individuals being particularly vulnerable. The lack of awareness, coupled with the increasingly digital nature of daily activities, leaves people susceptible to deception. Cybercriminals can strip unsuspecting victims of significant sums, capitalizing on people's reliance on mobile devices and digital platforms. However, the real challenge isn't necessarily apprehending the criminals but rather educating and raising awareness among the public. During the pandemic, instances of fraud proliferated, taking advantage of people's heightened need for help and assistance. As more activities migrate online, proper awareness becomes the frontline of defence against cybercrime. If anyone experiences fraudulent activity, it's crucial to promptly report it to the police, enabling law enforcement to take action against the criminals.
Expert: Dr Abhishek Pallava, IPS, Batch 2013
Online gaming based cyber fraud has drastically increased, especially among children and youth. Post-COVID, children have got access to iPads and mobiles for education. Children and youth are prone to game addiction, are lured into multi-level tasks, and are made to pay more as the levels increase. Many children commit suicide under the pressure of payments and debts.
Solution:
Risky games should be banned or restricted. Parents should strictly monitor and regulate children's online behaviour and should not give children access to their online accounts. Parents should get their children counselled and treated for online addiction if required. Online betting is another grave threat, and people lose a lot of money if they get addicted. People should bet only on authentic, reliable sites and should fix their maximum monthly limits. In Chhattisgarh, mainly middle and low income groups are targeted by mobile-based call and messaging scams. Fraudsters lure customers by greed or threats and make them believe in their scams. They lure customers into giving the one-time passwords for their accounts, make them install apps that give fraudsters control of their mobiles, or make customers click on links, and thus fraudulently take money from the accounts linked to the mobile number. Cases of sextortion are also on the increase, although very few are reported due to shame. Fraudsters make video calls, do screen recordings of obscene events, and then extort money.
Expert: Ram Gopal Garg, IPS, Batch,2007
Cyber Crime is any crime in which a computer resource is either a target of the crime or a tool used to commit the crime. With the advancement of technology, almost every crime has some part which can be considered cybercrime. However, the term Cyber Crime is used for the category of economic offences in which fraud is committed using computer resources. It is a matter of concern for society because it is affecting almost everybody. Cyber criminals target victims without knowing their identity. Moreover, advanced cybercrimes have the capability to halt economic activity at a large scale, which can adversely affect the economy of the nation. The threats of cybercrime are omnipresent, as the internet reaches all strata of the population and every walk of life. Phishing, vishing, sextortion, etc. can lead to large-scale cheating and extortion. At the same time, virus attacks, defacement, smuggling on the darknet, etc. can affect the economies and sovereignty of nations.
Expert: Ronak Kotecha, Journalist, Radio Presenter and Film Critic Dubai
Cybercrimes happen mostly because of the lack of awareness and the growing penetration of mobile phone technology into all aspects of our life. Technology is changing at such a rapid pace, and hackers and unscrupulous cons are able to stay ahead of the regular public, who take time to understand and comprehend things. While the government is doing its bit to spread awareness, more needs to be done at the administrative level through trainings and campaigns to empower the law enforcers with the knowledge they need. Companies like Google and Meta are also unforgiving in their security systems, and the poor end user is always at the receiving end.
Future Prospects and Challenges:
The horizon of cybersecurity holds both promises and challenges for Chhattisgarh. As the digital landscape continually evolves, cybercrime persists as an adaptive and potent threat. In response, Chhattisgarh's approach to cybersecurity must remain dynamic and vigilant.
Anticipated Trends in Cybercrime:
In the ever-shifting world of technology, cybercriminals remain agile, adapting to new tools and strategies. Some expected trends include the rise of ransomware: the emergence of digital hostage situations, where cybercriminals lock away valuable data and demand payment for its release. As Chhattisgarh navigates this digital journey, the convergence of challenges and opportunities mirrors an ongoing narrative. It's a collective endeavour where individuals, institutions, and the community unite to safeguard the digital realm, guided by the same spirit that drives the protection of the physical world.
Conclusion:
In the heart of Chhattisgarh's digital world, the fight against cybercrime is like a gripping story that unfolds with every click and keystroke. As we wrap up this journey through challenges, efforts, and possibilities, it's clear that securing our online world is a mission we all share, a tale that connects us all.
The challenges we've discussed aren't unconquerable monsters. They're more like puzzles waiting to be solved. The lack of awareness, the tech struggles, the legal tangles, and those sneaky cybercriminals: they can all be tackled if we work together. Chhattisgarh's determination to crack down on cybercrime shines through, reflecting the dedication of its people, police, and decision-makers.
Those solutions we've explored? They're like guideposts lighting up our way. From training our local heroes to creating cyber squads, from arming ourselves with the latest tools to rewriting digital laws, each solution paints a picture of progress and adaptability.
Looking ahead, the cybercrime trends on the horizon are like challenges in a thrilling game. Emerging technologies? They're both the obstacles and the power-ups. They can be used for evil, but with the right approach, they're our secret weapons to defend our digital realm.
As we close this chapter, remember that the strength of our journey lies in unity. Just as neighbours come together to watch out for their streets, we too must stand as a digital community. The story of Chhattisgarh's fight against cybercrime is not just about tech and laws; it's about us, the people. By embracing challenges, learning, and raising our digital shields, we're building a safer online world, one where we're all the heroes of our own stories.
Recommendations:
1. Comprehensive Cybersecurity Education:
Imagine kids excitedly learning about online safety in school, just like they're taught to look both ways before crossing the street. Picture local workshops where people from different walks of life gather to swap stories and tips about staying safe online, like a neighbourhood watch for the digital world.
2. Multi-Stakeholder Collaboration:
Think of a big team of experts (police, teachers, tech whizzes, and local businesses) sitting down together, brainstorming ideas on how to protect the digital neighbourhood. Envision community events where people share their experiences and learn from one another, just like friends swapping gardening tips.
3. Continuous Skill Development:
Imagine our local police going to "cybercrime detective school" to stay up-to-date with the latest digital tricks, just like doctors attending workshops to learn new medical techniques. Picture young students excitedly taking cybersecurity classes in college, eager to become the digital protectors of the future, much like aspiring superheroes.
4. Cyber Hygiene Practices:
Think of families sitting down together to set strong passwords and update their devices, just like putting on helmets before riding bikes. Imagine businesses proudly displaying a "Cyber Safe" badge, showing they're doing their part to keep customer information secure, like a seal of approval.
5. Technological Advancements:
Envision experts working in high-tech labs, inventing new tools to catch cyber villains and keep us safe, like scientists in a secret laboratory cooking up solutions. Picture digital superheroes guarding our important information, using the latest gadgets to fend off the bad guys, much like characters in action movies.
6. Legal and Regulatory Reforms:
Imagine laws evolving to catch up with the digital world, like rewriting the rules of a game to make sure it's fair for everyone. Think of a special courtroom where judges understand digital mysteries, ensuring that justice is served swiftly and fairly, just like in detective stories.
7. Public Awareness Campaigns:
Picture billboards, social media posts, and TV ads reminding us to be safe online, just like road signs guiding us on our digital journey. Envision local events where families and friends gather to learn about digital safety, like a community picnic where everyone shares tips and stories.
Tightening Budgets: Imagine having to spend more on security than on fun stuff. Businesses and the government have to pour money into cybersecurity measures, which means less money for other important things. It's like a never-ending financial tug-of-war.
Trust Quandary: Think of your secrets getting out in the open. Cybercrime against government bodies can expose private info, making citizens doubt their data's safety. It's like a breach of trust with the government itself.
Emotional Bruises: Imagine words on a screen causing real pain. Cyberbullying and stalking hurt victims emotionally, leading to anxiety, sadness, and even worse. It's like invisible wounds that take a toll.
Privacy in Peril: Think of your secrets being splashed online. Cybercrime messes with personal data's safety and privacy, making people reluctant to share info. It's like locking away your life's story.
Tech Hesitation: Imagine missing out on the digital fun. The fear of cybercrime makes people and even businesses wary of new tech. This can slow down progress and innovation. It's like missing out on a great party.
Image Makeover: Picture negative news spreading like wildfire. High-profile cybercrime cases can put our state in the spotlight for the wrong reasons, affecting how others see us. It's like getting a bad review on a global stage.
Education Upended: Imagine your hard work vanishing overnight. Cyberattacks on schools and colleges disrupt learning, research, and even the safekeeping of ideas. It's like a storm that scatters your notes everywhere.
- Sophisticated Phishing Attacks: Deceptive emails becoming even more convincing, luring unsuspecting individuals into revealing sensitive information or falling for scams.
- IoT Vulnerabilities: Exploitation of vulnerabilities in Internet of Things (IoT) devices, infiltrating homes and networks through seemingly innocuous devices.
- Deepfake and AI-based Attacks: The proliferation of AI-generated fake content, sowing confusion and misinformation.
- Cryptojacking: Covert use of devices' computing power to mine cryptocurrencies without the owners' knowledge.
Sustainability of Initiatives: To maintain a secure digital landscape, a sustained effort is imperative:
- Continued Investment: Allocating resources for the upkeep of cybersecurity measures, akin to maintaining physical infrastructure.
- Adapting to Change: Keeping cybersecurity personnel updated with evolving techniques and threats, mirroring a constant process of learning and adaptation.
- Collaboration and Engagement: Fostering a culture of information sharing and teamwork, akin to neighbours looking out for one another's safety.
- Skill Development: Nurturing a new generation of cybersecurity experts, equipping them with the tools to protect the digital world. | 6,567.4 | 2024-01-13T00:00:00.000 | [
"Law",
"Computer Science",
"Political Science"
] |
Stability Analysis for Nonlinear Second Order Differential Equations with Impulses ∗
In this paper we investigate the impulsive equation $(r(t)x')' + a(t)x + f(t, x, x') = p(t)$, $t \ge t_0$, $t \ne t_k$, $x(t_k) = c_k x(t_k - 0)$, $x'(t_k) = d_k x'(t_k - 0)$, $k = 1, 2, 3, \ldots$, and establish a couple of criteria to guarantee that equations of this type possess stability, including boundedness and asymptotic properties. Some examples are given to illustrate our results, and the last one shows that, to some extent, our criteria have more comprehensive suitability than those given by G. Morosanu and C. Vladimirescu.
Then the null solution of (1) is stable. The questions posed here are whether we can weaken the conditions in Theorem A, such as the restrictions that h(t) > 0 and α > 1, and whether the conclusion remains true. To these ends, in this paper we consider a more general form than (1) and study the impulsive second order nonlinear differential equation $(r(t)x')' + a(t)x + f(t, x, x') = p(t)$, $t \ge t_0$, $t \ne t_k$, subject to the impulse conditions $x(t_k) = c_k x(t_k - 0)$, $x'(t_k) = d_k x'(t_k - 0)$. Let N be the set of positive integers and R the real axis. Before proceeding with our discussion, we give the blanket assumptions for (2), where $X = (x_1, x_2)^T$ for all $t \in [t_0, \zeta)$ and $t \ne t_k$, and $X(t_k + 0)$ as well as $X(t_k - 0)$ exist and satisfy the corresponding impulse relations. Let $X(t) = X(t; t_0, X_0)$ be a solution with $X(t_0) = X_0$. It is clear that (3) has the null solution when $p(t) \equiv 0$. The null solution of (3) is said to be stable if for any $\varepsilon > 0$ there exists a $\delta = \delta(\varepsilon, t_0)$ such that $\|X_0\| < \delta$ implies that $X(t)$ exists on $[t_0, \infty)$ and $\|X(t)\| < \varepsilon$ for all $t \ge t_0$.
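A minimal numerical sketch (Python with NumPy/SciPy) of how solutions of such an impulsive system can be computed in practice: integrate between consecutive impulse times and apply the jump conditions at each $t_k$. The coefficient functions a, f, p and the impulse constants $c_k$, $d_k$ below are hypothetical choices made only for illustration; r(t) and $t_k$ follow the example considered later in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: integrate (r(t) x')' + a(t) x + f(t, x, x') = p(t) between impulses,
# then apply x(t_k) = c_k x(t_k - 0), x'(t_k) = d_k x'(t_k - 0).
r  = lambda t: 2.0 + t + np.sin(t)
dr = lambda t: 1.0 + np.cos(t)           # r'(t)
a  = lambda t: 1.0                        # assumed coefficient
f  = lambda t, x, v: 0.1 * x**3           # assumed nonlinearity
p  = lambda t: 0.0                        # unforced case (p == 0)

c_k, d_k = 0.9, 0.9                       # assumed impulse coefficients
t0, t_end = 0.0, 30.0
t_impulses = [(2*k - 1) * np.pi for k in range(1, 6)]   # t_k = (2k-1)*pi

def rhs(t, y):
    x, v = y                              # v = x'
    # Expanding (r x')' = r' v + r x'' gives x'' = (p - a x - f - r' v) / r.
    return [v, (p(t) - a(t)*x - f(t, x, v) - dr(t)*v) / r(t)]

y = np.array([0.5, 0.0])                  # initial condition X_0
t_left = t0
for tk in t_impulses + [t_end]:
    sol = solve_ivp(rhs, (t_left, tk), y, max_step=0.01)
    y = sol.y[:, -1]
    if tk < t_end:                        # apply the impulse at t_k
        y = np.array([c_k * y[0], d_k * y[1]])
    t_left = tk

print("X(t_end) =", y)
```

If the computed $X(t)$ stays small for small $\|X_0\|$ over long times, this is consistent with (though of course does not prove) stability of the null solution in the sense defined above.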
Preliminaries
For convenience, we will view C(t), D(t), E(t) and F(t, u, v) as above whenever these notations are defined. Let $U(t) = (u_{ij}(t))$ be any matrix. In this paper the norm of U(t) is defined as the maximum of the row sums of $(|u_{ij}(t)|)$.
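This row-sum norm coincides with the standard matrix infinity norm; a minimal Python sketch of the definition (NumPy assumed available):

```python
import numpy as np

def row_sum_norm(U):
    """Norm used in the paper: the maximum of the row sums of |u_ij|."""
    return np.max(np.sum(np.abs(U), axis=1))

# Illustrative check against NumPy's built-in infinity norm.
U = np.array([[1.0, -2.0],
              [0.5,  3.0]])
assert np.isclose(row_sum_norm(U), np.linalg.norm(U, ord=np.inf))
print(row_sum_norm(U))   # 3.5
```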
First of all, we consider the relation between the solutions of (3) and the solutions of the following equation, where F is defined as in (5).
Let $t_0 < \zeta \le \infty$. By a solution of (6) we mean a continuous function satisfying it on $[t_0, \zeta)$. Now let Y(t) be a solution of (6). Then, by straightforward verification, we learn that $X(t) = E(t)Y(t)$ satisfies the required relations and renders (3) into an identity when $t \ne t_k$. Conversely, suppose that X is a solution of (3). Then, for $Y(t) = E^{-1}(t)X(t)$, the analogous relations hold. In addition, it is easy to verify that $Y(t) = E^{-1}(t)X(t)$ satisfies (6) when $t \ne t_k$. So far the following result is obvious.
We next consider the solutions of (6). It is clear that the solutions of (6) exist by the theory of ordinary differential equations [9]. In particular, if the stated condition holds, then the fundamental matrix of the linear system corresponding to (6) can be written in terms of sine and cosine factors. Let Y(t) be a solution of (6) with $Y(t_0) = Y_0$; then it satisfies the corresponding integral relation. As a special case, we consider, for example, $r(t) = 2 + t + \sin t$, $t \ge 0$, and $t_k = (2k - 1)\pi$.
Then it holds that $r'(t_k) = 0$ for all $k \in N$. At this stage we set the quantities below; then, similarly to [1,8], it follows from (2) that, when $t \ge t_0$, the corresponding estimate holds, where X and $I_k$ are defined as in (3). Analogously to (6), we consider the following equation, where $Y = (y_1, y_2)^T$.
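As a quick check of this claim for the example $r(t) = 2 + t + \sin t$ with $t_k = (2k-1)\pi$:

$$r'(t) = 1 + \cos t, \qquad r'(t_k) = 1 + \cos\big((2k-1)\pi\big) = 1 - 1 = 0 \quad \text{for every } k \in N.$$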
Since the relation between the solutions of (10) and the solutions of (11) is similar to Lemma 1, we refrain from repeating the statements. Let us set the quantities below; then it follows that the stated estimates hold. Now we take into account the following system corresponding to (11). It is easy to verify that the fundamental matrix of (12) can likewise be written in terms of sine and cosine factors. Subsequently, the solution Y of (11) with $Y(t_0) = Y_0$ satisfies the corresponding integral relation.
Main results
In the sequel we give the stability criteria for (2). Recall that C(t), D(t) and E(t) have been defined in (4). For simplicity, we introduce further notation as follows. Let $\lambda_1(c)$ and $\lambda_2(c)$ be defined, respectively, as below; the notations $\lambda_1(d)$ and $\lambda_2(d)$ can be defined similarly.
Theorem 1 Suppose that the following conditions hold: Then the null solution of (3) is stable.
We observe that when $c_k \equiv d_k$ on N, $C(t)^{-1}D(t) = C(t)D(t)^{-1} = 1$ for all $t \ge t_0$. Hence the following result is clear.
Corollary 1 Suppose that the following conditions hold: Then the null solution of (3) is stable.
We notice that, by similar arguments, we may show that the solution $X(t; t_0, X_0)$ of (3) exists on $[t_0, \infty)$ for any $X_0 \in R^2$ under the provisions in Theorem 1. Now we consider the case that p(t) is of constant sign and is not identically zero. In this case we impose the assumption (in (H2)) that f(t, u, v) is monotone decreasing in u, and examine the corresponding vector function. Recall the Comparison Theorem [10]. Briefly speaking, if $F(t, x): R^{2+1} \to R^2$ and F is quasi-monotone increasing in x, and if ψ is the maximal solution of $x' = F(t, x)$ with $\psi(t_0) = \psi_0$ for $t \ge t_0$, then $\varphi' \le F(t, \varphi)$ with $\varphi(t_0) \le \psi_0$ for $t \ge t_0$ implies that $\varphi(t) \le \psi(t)$ for $t \ge t_0$. Hence, with the aid of the comparison theorem we may also show that the solution $X(t; t_0, X_0)$ of (3) exists on $[t_0, \infty)$ for any $X_0 \in R^2$. For simplicity we omit the details of the proof.
Next we consider the boundedness for (3).
Theorem 2 Suppose that the following conditions hold: (i) Then every solution of (3) is bounded.
Proof. We first assume that Y(t) is the solution of (6) with $Y(t_0) = Y_0$. Then $X(t) = E(t)Y(t)$ is a solution of (3). Let M be defined as in (14), and furthermore let w(t) be defined as in (16). Note that the function B in (8) satisfies the stated bound for the time being. Case 1. Suppose that α = 1. Then, similarly to (15), we obtain an estimate which shows that every solution of (3) is bounded when α = 1. Case 2. Suppose that α > 1. For any given ε > 0, we take $T > t_0$ so that the corresponding quantity is small. Analogously to (8) we obtain an estimate which leads to the required bound. By the same manner as (19), we have from (22) the analogous inequality. Since ε is arbitrary, we can ensure that (24) is valid for Y(T). Further, from (22) and (24) we learn that R(t) is bounded on $[T, \infty)$, which implies that the solution X(t) of (3) is bounded on $[t_0, \infty)$ when α > 1. The proof is complete.
The following result is concerned with the asymptotic behavior of (10) (or (2)) under the assumptions (9). It is based on the fact that the solution $X(t; t_0, X_0)$ of (10) exists on $[t_0, \infty)$ for any $X_0 \in R^2$. The reasons are similar to the proof of Theorem 1 and the statements preceding Theorem 2, and therefore we skip them. | 1,929.2 | 2012-01-01T00:00:00.000 | [
"Mathematics"
] |
Minimum length uncertainty relations in the presence of dark energy
We introduce a dark energy-modified minimum length uncertainty relation (DE-MLUR) or dark energy uncertainty principle (DE-UP) for short. The new relation is structurally similar to the MLUR introduced by Károlyházy (1968), and reproduced by Ng and van Dam (1994) using alternative arguments, but with a number of important differences. These include a dependence on the de Sitter horizon, which may be expressed in terms of the cosmological constant as $l_{\rm dS} \sim 1/\sqrt{\Lambda}$. Applying the DE-UP to both charged and neutral particles, we obtain estimates of two limiting mass scales, expressed in terms of the fundamental constants $\left\{G,c,\hbar,\Lambda, e\right\}$. Evaluated numerically, the charged particle limit corresponds to the order of magnitude value of the electron mass ($m_e$), while the neutral particle limit is consistent with current experimental bounds on the mass of the electron neutrino ($m_{\nu_e}$). Possible cosmological consequences of the DE-UP are considered and we note that these lead naturally to a holographic relation between the bulk and the boundary of the Universe. Low and high energy regimes in which dark energy effects may dominate canonical quantum behaviour are identified and the possibility of testing the model using near-future experiments is briefly discussed.
The concept of superposition is the very essence of quantum theory. As the mathematical embodiment of wave-particle duality, it determines the state space structure of canonical non-relativistic quantum mechanics (QM) and its relativistic extension, quantum field theory (QFT). However, despite the unparalleled success of both QM and QFT in describing the micro-world, such duality does not manifest itself in our every day experience: the macro-world does not admit superpositions of states. This gives rise to the so-called measurement problem, recognised since the early days of quantum theory, whereby a classical 'observer' (an experimenter or apparatus not subject to the quantum formalism) is required to reduce the quantum superposition via the act of 'measurement'.
This glaring ontological disparity, yet otherwise arbitrary distinction between observer and observed, has led many physicists to argue that canonical quantum theory is incomplete. Though proposals for the resolution of the measurement problem are varied (see [1][2][3] for reviews of contemporary approaches, plus [4] for a discussion of foundational issues), many involve modifications of the quantum dynamics that lead to spontaneous reduction of the state vector in some mesoscopic regime, which interpolates between the microscopic (quantum) and macroscopic (classical) worlds [5][6][7][8][9][10]. In modern terminology, this spontaneous reduction is known as decoherence, and is believed to be caused by the interaction of the system with its environment [11]. Thus, prior to the act of measurement, micro-systems are weakly coupled to their environment, whereas meso-or macro-systems are strongly coupled. The former behave quantum mechanically, whereas the latter behave classically.
With the measurement problem in mind, it is natural to consider the weakness of gravity, as compared to the three other known fundamental forces (electromagnetic, weak nuclear and strong). Indeed, classical gravitational interactions may typically be ignored in the micro-world and only become relevant on macroscopic, even astrophysical or cosmological, scales [12]. Nonetheless, the exact nature of quantum gravitational interactions is unknown and their description remains the holy grail of theoretical physics research [13,14]. It is therefore natural to suppose that what is missing from canonical quantum theory is not an adequate description of the observer, vis-à-vis the observed, but gravity. Since the gravitational interaction is universal, affecting all forms of matter and energy, it may be hoped that gravity, or space-time itself, may play a fundamental role in the 'spontaneous' decoherence of quantum systems.
In fact, the idea that quantum gravitational effects may play an important role in the resolution of the measurement problem encountered in canonical nongravitational QM has a long and distinguished history [15][16][17][18][19][20][21][22][23][24][25][26][27]. Originally published in 1966, Károlyházy's model [15,16] was one of the first to consider the possibility of gravitationally-induced wave function collapse. The fundamental idea proposed in [15] is that quantum fluctuations of the metric give rise to an intrinsic and irremovable 'haziness' in the space-time background, corresponding to a superposition of classical geometries. As a result, an initially pure state vector develops, over time, into a mixed state. Coherence is maintained only over a small region, known as a 'coherence cell', whose size depends on the space-time curvature induced by the body and, hence, on its mass. For micro-objects, the effect of curvature is small, giving rise to canonical quantum behaviour but, for macro-objects, the maximum size of a coherence cell lies within the classical radius of the body itself. Thus, the quantum nature of the macro-body remains hidden, as the wave function associated with its centre of mass (CoM) spontaneously decoheres on extremely small scales: the larger the body, the smaller the size of the cell.
From a theoretical perspective, a major advantage of the Károlyházy model is that it contains no free parameters. It is therefore able to make clear predictions regarding gravitational modifications of the canonical quantum dynamics, utilising only the known constants G, c and $\hbar$. Specifically, the existence of a minimum length uncertainty relation (MLUR), representing a modification of the canonical Heisenberg uncertainty principle (HUP), necessarily follows from the intrinsic haziness of space-time assumed in the K-model. The resulting uncertainty, inherent in the measurement of a space-time interval s, is given by Eq. (1), where $l_{\rm Pl} = \sqrt{\hbar G/c^3}$ is the Planck length [15,16]. For space-like intervals, this represents the minimum possible uncertainty in the position of a quantum mechanical particle, used to 'probe' the distance s. When ∆s is identified with the Compton wavelength, $\lambda_C = \hbar/(mc)$, s may be identified with Károlyházy's estimate of the width of a coherence cell for a fundamental particle, Eq. (2). Though motivated by an attempt to resolve the measurement problem, the MLUR (1) represents an important theoretical prediction in its own right. Since its inception, the literature on quantum gravity phenomenology has expanded significantly and many modifications of the HUP, known as generalised uncertainty principles (GUPs), have been proposed [28][29][30]. These share the common feature of giving rise to a minimum resolvable length in nature, which is usually assumed to be of the order of the Planck length [31,32]. Hence, the existence of some form of MLUR is now regarded as a generic feature of candidate quantum gravity models [33,34].
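For concreteness, the Károlyházy relation is usually quoted in the following form, up to numerical factors of order unity (compare the rederivation leading to Eq. (36) in Sec. II C below):

$$\Delta s \gtrsim \left( l_{\rm Pl}^{2}\, s \right)^{1/3} = l_{\rm Pl}^{2/3}\, s^{1/3},$$

so that the minimum positional uncertainty grows only as the cube root of the probed interval.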
In the present paper, we will not concern ourselves with the measurement problem per se, though the possible implications of our model for this important open question are briefly discussed in Sec. V. Instead, we will focus on the second major prediction stemming from the introduction of a 'hazy' space-time, i.e., that of a fundamental MLUR in nature. In particular, we will focus on a major advance in fundamental physics, which should have radical implications for any model of gravitationally-induced wave function collapse, as well as for quantum gravity phenomenology in general, including MLURs [35][36][37], namely, the discovery of dark energy [38,39].
Though the precise microphysical origin of dark energy remains unknown, and is an active area of research within the cosmology/astrophysics community, the current best-fit to all available cosmological data favours a 'cosmological concordance' or ΛCDM model [40], in which dark energy takes the form of a positive cosmological constant, Λ > 0. This accounts for approximately 69% of the total energy density of the Universe, whereas cold dark matter (CDM) accounts for around 26% and ordinary (visible) matter for around 5% [41,42].
For our purposes, it is important to note that, although dynamical dark energy models cannot be excluded on the basis of presently available data, any viable dark energy model must give rise to an effective cosmological constant at late times, comparable to the present epoch.
(See [43][44][45][46] for reviews of current dark energy research.) Furthermore, though Λ may, ultimately, turn out to have a particle physics origin (i.e., the dark energy field may correspond to a form of 'matter' in the usual sense, albeit of an exotic kind), its precise origin is unimportant for the derivation of dark energy-modified MLURs. What is important are its gravitational effects. Specifically, regarding the influence of dark energy on physical bodies, it makes no difference whether we write the Λ-dependent term on the left-hand side or the right-hand side of Einstein's field equations. On the right, it may be interpreted as a form of matter, on the left, as a geometrical effect.
As a geometrical effect, Λ may be interpreted as a minimum space-time curvature, or minimum gravitational field strength. This clearly has implications for any model of gravitationally-induced wave function collapse, including the K-model, as well as for any MLUR purporting to include quantum gravity effects, irrespective of the measurement problem. Nonetheless, even if the true origin of dark energy is of a particle nature, the exotic form of matter to which it corresponds necessarily sources a minimum positive curvature, Λ > 0, in otherwise 'empty' space. As we will see, this has profound implications for Károlyházy's model, which originally assumed quantum fluctuations of asymptotically flat (i.e. Minkowski) space [15,16].
By contrast, we embed a K-type model in a realistic background geometry, incorporating the effects of dark energy. A key consequence of the existence of a positive cosmological constant is the existence of a fundamental horizon for all observers (including quantum mechanical 'particles'), the de Sitter horizon, l dS ∼ 1/ √ Λ. We argue that this necessarily implies a modification of the MLUR (1), including minimum curvature/finite-horizon effects.
As with the original model presented in [15,16], our model has the theoretical advantage of involving no free parameters. The main difference is that the MLUR obtained by considering a hazy space-time,à la Károlyházy, in the presence of dark energy, necessarily involves G, c, and Λ. The structure of this paper is as follows. In Sec. II A, we consider classical perturbations of the cosmological Friedmann-Lemâitre-Robertson-Walker (FLRW) metric, induced by the presence of point particles. Although the FLRW metric is not valid on local scales, we note that its perturbed form, at the present epoch, is similar to the Schwarzschild-de Sitter metric. Thus, it predicts approximately the same gravitational potential (up to numerical factors of order unity) in the vicinity of a local compact object. This allows us to view the local fieldfor example, around a microscopic particle located close to the surface of the Earth -as a perturbation away from the cosmological background geometry. Throughout our analysis, Λ is treated as a fundamental constant of nature which gives rise to a constant dark energy density, and minimum curvature, at all points in space. In Sec. II B, we show how the formula for the perturbed line element relates to Károlyházy's scheme for measuring the minimum positional uncertainty of a gravitating, quantum mechanical, 'point' particle. Sections II C-II D review the original derivation of the MLUR given in [15,16] and the alternative derivation given in [47,48], respectively, while Sec. II E outlines motivations for dark energy-induced modifications of the standard result. The physical basis of the dark energy uncertainty principle (DE-UP) is laid out in Secs. III A-III B and its basic properties, including applications to both neutral and electrically charged particles (Secs. III C-III D), as well as its implications for the holographic conjecture [49,50] (Sec. III E), are explored. Possible cosmological consequences of the DE-UP are considered in Sec. IV and Sec. V contains a summary of our main conclusions together with a brief discussion of prospects for future work. Potential conceptual issues regarding the limits of applicability of the model, which arise at various points throughout the text, are discussed at greater length in the Appendix.
II. KÁROLYHÁZY'S MLUR -NEW PERSPECTIVES
In [15,16], Károlyházy et al consider 'resolving' a space-time interval s, traversed by a quantum mechanical particle of mass m, by projecting it into the lab frame using light signals emitted by the particle over the course of its path. They claim that, classically, the observed interval s' is related to the original ('true') interval s via Eq. (3), where $r_S(m) = 2Gm/c^2$ is the Schwarzschild radius associated with the mass m. By explicitly taking into account the quantum nature of the particle traversing s, they then obtain an estimate of the minimum uncertainty in the measurement of s, denoted ∆s. The derivation of the MLUR given in [15,16] is considered in detail in Sec. II C and Károlyházy's measurement procedure is illustrated in Fig. 1. In Sec. II A, we show that a formally similar result, in which the quantities s and s' in Eq. (3) have different physical meanings, may be obtained using gravitational perturbation theory. In this formulation, the quantities s and s' do not a priori represent 'true' (CoM frame) and 'measured' (lab frame) values of the length of a space-time interval but, instead, the lengths of an interval in an unperturbed background space and in the perturbed space induced by the presence of the particle, respectively. Nonetheless, the new formulation may be reconciled with Károlyházy's picture, since we are free to consider receiving light signals in a lab frame far away from the particle's CoM, in which the gravitational perturbation induced by it is small. The formal equivalence of the two pictures is shown explicitly in Sec. II B.
A. Classical intervals in perturbed and unperturbed backgrounds: s and s'
We now consider the classical perturbation induced by the presence of a point particle in a realistic space-time background, requiring the perturbed metric to satisfy the linearised Einstein equations. By 'particle' we mean a spherically symmetric compact object that is point-like with respect to large (in principle, up to cosmological) length-scales.
In the presence of dark energy, represented by a positive cosmological constant Λ > 0, the gravitational action is given by Eq. (4) and the field equations take the form $G_{\mu\nu} + \Lambda g_{\mu\nu} = (8\pi G/c^4)\, T_{\mu\nu}$ (5), where $g_{\mu\nu}$ denotes the space-time metric, $G_{\mu\nu} = R_{\mu\nu} - (1/2)R g_{\mu\nu}$ is the Einstein tensor, $R_{\mu\nu}$ is the Ricci tensor, $R = g^{\mu\nu}R_{\mu\nu}$ is the scalar curvature and $T_{\mu\nu}$ is the matter energy-momentum tensor. For a perfect fluid, $T_{\mu\nu}$ may be represented covariantly as in Eq. (6), where ρ denotes the rest-mass density, p is the isotropic pressure and $u^{\mu}$ is the 4-velocity of an infinitesimal fluid element.

The Friedmann-Lemaître-Robertson-Walker (FLRW) metric, describing a homogeneous, isotropic, expanding Universe, may be written as in Eq. (7), where τ is the cosmic time and a(τ) is the cosmological scale factor, which is normalized to one at the present epoch, $a(\tau_0) = a_0 = 1$. In spherical polar coordinates, $d\Sigma^2$ takes the form (8), where $d\Omega^2 = r^2(d\theta^2 + \sin^2\theta\, d\phi^2)$ is the line-element for the unit 2-sphere and k is the Gaussian curvature, with dimensions $[L]^{-2}$. In appropriate units, $k \in \{-1, 0, +1\}$ for negative, zero, and positive curvature, respectively. Substituting Eqs. (6), (7) and (8) into the field equations (5) yields the Friedmann equations (9)-(10), where a dot represents differentiation with respect to τ [51].

For future reference, we note that the Hubble parameter is defined as $H \equiv \dot{a}/a$ and that its present day value is $H_0 = 67.74 \pm 0.46$ km s$^{-1}$ Mpc$^{-1}$, or $H_0 = 2.198 \times 10^{-18}$ s$^{-1}$ (ignoring error bars) in cgs units [42]. The critical density is defined as $\rho_{\rm crit} = 3H_0^2/(8\pi G)$, giving $\rho_{\rm crit} = 8.639 \times 10^{-30}$ g cm$^{-3}$. This is the value of ρ required to give zero curvature (k = 0) in the absence of a cosmological constant (Λ = 0).

Dividing Eq. (9) by $H_0^2 = 8\pi G\rho_{\rm crit}/3$, it may be rewritten in terms of the density parameters $\Omega_r$, $\Omega_M$, $\Omega_k$ and $\Omega_\Lambda$. These denote the present day contributions, as fractions of the critical density, to the total energy density of the Universe for radiation, matter, curvature and dark energy, respectively. To three significant figures, the values obtained from current observations are $\Omega_r = 0.00$, $\Omega_M = 0.31$, $\Omega_k = 0.00$ and $\Omega_\Lambda = 0.69$, where the matter sector is composed of both non-relativistic baryons $\Omega_b = 0.05$ and non-relativistic (cold) dark matter $\Omega_{\rm DM} = 0.26$ [42]. Thus, the density parameters sum to $\Omega_0 \simeq 1$, where $\Omega(\tau) = \rho_{\rm total}(\tau)/\rho_{\rm crit}$. In other words, the present day density is very close to the critical density ($\Omega_0 = 1.00$) and the Universe is approximately flat on large scales, with the exception of the minimal curvature induced by Λ.
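As a rough cross-check of these figures, $H_0$ and $\rho_{\rm crit}$ can be recomputed directly; a minimal Python sketch, using rounded cgs constants:

```python
import math

G   = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
Mpc = 3.0857e24          # one megaparsec [cm]

H0_kms_Mpc = 67.74                   # Hubble constant [km s^-1 Mpc^-1]
H0 = H0_kms_Mpc * 1.0e5 / Mpc        # convert to [s^-1]; ~2.20e-18

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density [g cm^-3]

print(f"H0       ~ {H0:.3e} s^-1")          # ~2.2e-18 s^-1
print(f"rho_crit ~ {rho_crit:.3e} g/cm^3")  # ~8.6e-30 g cm^-3
```

The small differences from the quoted values arise only from rounding of the input constants.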
In an arbitrary spatial coordinate system, Eq. (7) may be written in a general form in which $\gamma_{ij}$ is the spatial part of the metric, and an arbitrary metric perturbation may be written as in Eq. (14). The gauge invariant tensor perturbations ('gravitons') satisfy the transverse-traceless conditions. Let us now switch back to spherical polar coordinates and consider a spherically symmetric perturbation, induced by the 'birth' of a particle of mass m at some time earlier than $\tau_0$. Our ansatz for the perturbative part of the energy-momentum tensor $T^{\mu}{}_{\nu}$ then takes the form (16), where Θ is the Heaviside step function and all other components are zero. Strictly, Eq. (16) models the birth of a particle at that instant, which remains at rest with respect to a comoving coordinate system at all later times. It also holds approximately for particles that are not subjected to extreme accelerations. In this case, dynamical tensor perturbations, which would otherwise lead to gravitational wave emission, may be neglected. In addition, we may set $B_i = h_{0i} = 0$ since, at linear order, vector perturbations are associated with vorticity in the cosmic fluid and do not arise in this scenario [52,53]. The full evolution of the scalar and tensor-type perturbations for the birth of a point-like mass may be determined by following a procedure analogous to that used in [54], though such a detailed treatment is unnecessary for our current purposes. Instead, we note that the covariant metric (14) contains four extraneous degrees of freedom associated with coordinate invariance. In the Newtonian gauge, which holds approximately for situations in which $h_{0i} \simeq 0$ and where wave-like tensor perturbations can be neglected, this 'gauge' freedom may be used to diagonalise the perturbed metric (Eq. (17)), where Ψ and Φ are Newtonian potentials obeying Poisson's equation [52,53]. In our scenario, this is consistent with the fact that, since the source term is time-dependent only instantaneously, the time-dependence of the perturbations must be small on scales well inside the maximum extent of the particle's light cone, $r_{\rm lc}$. In other words, we assume that the metric perturbation induced by the particle's creation propagates radially outwards at the speed of light, but remains approximately 'static', with respect to comoving coordinates, within its horizon. Any additional time-dependence is confined to a thin spherical shell at $r \simeq r_{\rm lc}$.
In the absence of anisotropic stresses, Φ = Ψ [52,53], and Poisson's equation for a mass distribution $\rho_m$ immersed in a dark energy background in an expanding Universe is given by Eq. (18), where $\rho_\Lambda$ is the dark energy density and $\nabla^2$ is the Laplacian, defined with respect to comoving coordinates. For spherically symmetric systems, this reduces to a purely radial equation. The current experimental value of Λ, inferred from observations of high-redshift type 1A supernovae (SN1A), Large Scale Structure (LSS) data from the Sloan Digital Sky Survey (SDSS) and Cosmic Microwave Background (CMB) data from the Planck satellite, is $\Lambda = 1.114 \times 10^{-56}$ cm$^{-2}$ [41,42]. This is equivalent to the vacuum energy density $\rho_\Lambda = 5.971 \times 10^{-30}$ g cm$^{-3}$. Now let us consider the case in which $\rho_m$ is given by a δ-function density profile corresponding to a classical point-like mass m, $\rho_m(\tau, r) \propto m\,\delta(r)/a^2(\tau) r^2$. In this scenario, Eq. (18) is simply Poisson's equation with two source terms, a regular point-mass (m > 0) and an 'irregular' constant negative density, $-\rho_\Lambda$. (Recall that, when written on the right-hand side of the field equations, Λ may be interpreted as a negative energy density belonging to the matter sector.) This is satisfied by the modified Newtonian potential (21), which gives rise to the corresponding gravitational field strength [55]. Thus, the cosmological constant corresponds to an effective gravitational repulsion whose strength increases linearly with the comoving distance ar and we note that, for $r \le (\ge)\, r_{\rm grav}$, where $r_{\rm grav}$ is the turn-around radius defined in Eq. (23) and $l_{\rm dS} = \sqrt{3/\Lambda} = 1.641 \times 10^{28}$ cm, the force between two particles is attractive (repulsive). $l_{\rm dS}$ is the asymptotic de Sitter horizon and is of the same order of magnitude as the present day radius of the Universe $r_U \simeq 1.306 \times 10^{28}$ cm (13.8 billion light years). In [57,58], it was also referred to as the first Wesson length, after the pioneering work [59], and denoted $l_W$.
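In the weak-field limit, the expressions referred to here take the familiar Schwarzschild-de Sitter form; a sketch, suppressing the comoving factor a and order-unity conventions:

$$\Phi(r) \simeq -\frac{Gm}{r} - \frac{\Lambda c^{2}}{6}\, r^{2}, \qquad
g(r) = -\frac{\partial \Phi}{\partial r} \simeq -\frac{Gm}{r^{2}} + \frac{\Lambda c^{2}}{3}\, r,$$

so that the attractive and repulsive contributions balance at

$$r_{\rm grav} \simeq \left(\frac{3Gm}{\Lambda c^{2}}\right)^{1/3} = \left(\frac{r_S}{2}\, l_{\rm dS}^{2}\right)^{1/3},$$

consistent with the turn-around radius quoted below and with $l_{\rm dS} = \sqrt{3/\Lambda}$.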
In the Newtonian picture, r grav marks the separation distance beyond which the effective gravitational force between two spherically symmetric bodies becomes repulsive (i.e. beyond which the repulsive effect of dark energy overcomes the canonical gravitational attraction). Up to numerical factors of order unity, the same result may be obtained in general relativity by evaluating the Kretschmann invariant, K = R αβγδ R αβγδ , for the Schwarzschild-de Sitter metric, and noting the value of r at which it changes sign.
Including contributions to ρ m from the background baryonic and dark matter densities -that is, embedding the perturbation in a full FLRW background -similar arguments yield where H is a solution to Eqs. (9)-(10), so that This is known as the gravitational turn-around radius, and may also be derived rigorously in a fully general relativistic context [56]. For τ → ∞, the matter density is diluted to such an extent that H 2 → c 2 /l 2 dS and the space-time becomes asymptotically de Sitter, yielding Eq. (23). The implications of de Sitter-type cosmological evolution are discussed further in Sec. III B.
In more complex local environments, we may expect H(τ ) appearing in Eq. (24) to be replaced by a local Hubble parameter, which is not a solution of Eqs. (9)-(10). However, if Λ is a genuine constant of nature, giving rise to a constant dark energy density at all points in space (as assumed in this analysis), there exists a correction term to the canonical Newtonian potential, Φ ∼ Gm/R, determined by the local Hubble parameter, H/c ≥ 1/l dS .
Hence, in general, the infinitesimal line-elements of the perturbed and unperturbed metrics, ds' and ds, are related as follows, where the minimum value of the Hubble term is set by the dark energy scale. Here, ds denotes the line-element for the flat, unperturbed, space-time. Next, we rewrite the unperturbed line-element Eq. (14) accordingly. Restricting ourselves to time-like intervals within the present day horizon then gives $ds \simeq c\, d\tau$.
Note that, since explicit r-dependence drops out of the expression for the unperturbed line-element, the coordinate distance r need not be equal to s(τ ). Formally, Eq. (28) gives the difference between the perturbed and unperturbed line-elements, traversed by a light-like signal (e.g. a photon) over time τ , as seen by an observer at r. The flight time of the photon(s) and the position of the observer relative to the mass m are independent. Hence, so are r and s(τ ).
This corresponds to the following experimental procedure. Suppose we place a 'detector' at a coordinate distance r from a specified origin. (We assume throughout that our detector represents an idealized observer whose gravitational field may be considered negligible, even compared to that of the perturbing particle: though unrealistic, this is a valid assumption in our idealized gedanken experiment.) If the massive particle is absent, a photon travelling for a time τ traverses a space-like interval s(τ ) = cτ . In flat space, this may simply be identified with the coordinate distance, so that s(τ ) = cτ = r.
However, if, instead, we assume the photon is emitted by a massive particle located at r = 0, and absorbed by a detector at r > 0 after the same time τ, the traversed interval 'seen' by an observer at r is $s'(r, \tau)$, given by Eq. (28). The simple relationship between the coordinate distance and the space-like interval is destroyed by the gravitational field of the particle and, in general, the light signal will not reach the same value of r at the same time τ (i.e., $r \ne c\tau$).
Furthermore, τ need not correspond to the flight time of a single photon. Instead, we may consider splitting the measurement of the interval $s'(r, \tau)$ into two (or more) parts. For simplicity, however, we consider only a two part measurement process. In the first part, a photon travels from the perturbing particle at r = 0 to the detector at r > 0. In the second, an additional photon travels from a (generally different) point to r. If the total flight time of both photons is τ, the space-like interval that would have been traversed if the particle had not been present is s(τ) = cτ, but the interval traversed in the perturbed space is $s'(r, \tau)$.
Since r can label any point in space, regardless of the value of τ, which we here identify with the flight times of the photons used to perform the measurement, it follows that the measured interval $s'(r, \tau)$ depends on where we place our detector in relation to the perturbing particle. This fact also enables us to reinterpret Eq. (28) in terms of an experimental procedure to resolve time-like intervals traversed by massive, self-gravitating particles, à la Károlyházy. During the photon flight time τ, the CoM of a classical non-relativistic particle also traverses a time-like interval approximately equal to s(τ) = cτ. Hence, $s'(r, \tau)$ and s(τ) may be interpreted as the 'observed' (lab frame) and 'true' (CoM frame) values of the space-time interval traversed by a massive particle, as claimed in [16]. This procedure is discussed in greater detail in Sec. II B and is illustrated in Fig. 2.
For $r \gg r_{\rm grav}(\tau)$, the Hubble expansion term in Eq. (28) dominates, so that $r \sim r_{\rm grav}(\tau)$ marks the limit of the validity of the perturbative Newtonian gauge picture. Physically, $r \gg r_{\rm grav}(\tau)$ corresponds to a region in which the effect of the perturbation is negligible and the standard Hubble expansion takes over. Thus, the Hubble expansion gives rise to a small, additive correction term to Károlyházy's formula (3), plus a modification of the original canonical gravitational term, corresponding to the substitution r → ar. Since the additive term is subdominant within the region of physical interest, $r \lesssim r_{\rm grav}(\tau)$, the latter modification is the most important.
Equation (28) holds both at the current epoch $\tau_0 \simeq 4.352 \times 10^{17}$ s (13.8 billion years) and at all earlier times for which the FLRW metric is valid, including epochs where the average curvature was far higher than today. In addition, it holds for regions of the present day Universe in which space-time curvature is well above the FLRW background level ($k \simeq 0$). This may be seen by taking the static, spherically symmetric, weak-field limit of the full Einstein equations (5), which also reduce to Eq. (24) with $a(\tau_0) = a_0 \simeq 1$ and $H(\tau_0)/c = H_0/c \simeq 1/l_{\rm dS}$, for $r \lesssim r_{\rm grav}(\tau_0)$, regardless of the profile of the gravitational field on scales $r_{\rm grav}(\tau_0) \lesssim r \lesssim l_{\rm dS}$. This limit applies to all experiments carried out on (or near) the surface of the Earth at the present epoch.
Taking both these factors into account, it is reasonable to suppose that Eq. (28) holds (at least approximately) far more generally, remaining valid at any epoch under non-extreme conditions. We may expect it to break down close to the inflationary era [60][61][62], or for space-time intervals close to the event horizon of a black hole. However, we note that, using $H_0^2 = 4.830 \times 10^{-36}$ s$^{-2}$ $\simeq c^2/l_{\rm dS}^2 = 3.338 \times 10^{-36}$ s$^{-2}$ and substituting the Newtonian potential (21) into Eq. (17), we obtain a time-like metric component directly proportional to that of the Schwarzschild-de Sitter solution, which describes a black hole in the presence of a cosmological constant Λ > 0. It therefore seems probable that Eq. (28) is valid in all physically interesting scenarios. Based on the arguments presented in Sec. II A, we see that, in addition to describing the difference between the 'true' and 'measured' values of a time-like interval traversed by a self-gravitating particle, Eq. (3) also describes the difference between the perturbed space-time interval, s', induced by the presence of the particle, and the unperturbed space-time interval, s, that would have existed if the particle had not been present (assuming Λ = 0 and $a \simeq a_0 = 1$).
Similar arguments apply even when the background curvature is well above the FLRW average, for example, due to the presence of macroscopic lab equipment, or the lab's proximity to the surface of the Earth. Practically, we may restrict our attention to projections within a very small region in the vicinity of the CoM, over which the particle's (extremely small) self-gravity may be considered non-negligible compared to the background level, whatever this may be. Classically, such a region is well defined for any perturbed metric and traces out a 'world-tube' of width r grav (c.f. Eq. (25)) surrounding the CoM world-line [15,16]. Projecting the world-line onto a 'detector' within this tube gives rise to significant deviations in the measured value of the interval, as compared to its 'true' value, due to the space-time curvature induced by the particle.
Clearly, once the 'fuzziness' of the CoM due to canonical quantum mechanics is taken into account things become even more complicated, as a second radius -the Compton radius -may be associated with the particle. Nonetheless, in our model, we will find that the counterintuitive results implied by the considerations above remain the same: once the particle's self-gravity is taken into account, physical measurements of space-time intervals -for example, the space-like position of a particle, relative to a predefined origin -yield more accurate results if the measurements are made from further away. Below a certain optimum length-scale, attempting to probe the position of the particle's CoM with greater accuracy becomes self-defeating. The resulting 'gravitational uncertainty' caused by the fuzziness of the spacetime close to the particle's CoM outweighs the gain in localising the canonical quantum wave packet. By contrast, far away from the CoM, metric fluctuations reduce to the background level (assumed to be of the order of the Planck length) and canonical quantum behaviour is recovered. The measurement scheme considered above is shown, for particles with both classical gravitational (turn-around) and quantum mechanical (Compton) radii, in Fig. 1.
The explicit connection between this procedure and the perturbed space-time induced by the presence of the particle is illustrated in Fig. 2. For simplicity, let us begin by assuming that the gravitational effect of the particle mass can be neglected, so that $s' \simeq s$ in our notation. This scenario is represented by the flat blue line. Now let us consider measuring a space-like distance by means of a photon, emitted from the particle at r = 0 and absorbed by a detector in the lab frame at some distance r = ct, where t is the proper time measured by the particle's CoM. Note that, in general, this need not be identified with the cosmic time τ, so that we are free to consider $t \ll \tau$. If the particle's recoil velocity is non-relativistic, it may be considered negligible at the classical level, so that $dt \simeq d\tau$. Thus, if t is small compared to the cosmic time ($t \ll \tau$), we may set $a \simeq a_0 = 1$. In this case, it is clear that the time-like interval traversed by the particle in time t is identical to the space-like interval measured by the experimental apparatus (i.e. the particle-photon-detector system). In Károlyházy's notation, we have $s'(t) = s(t) = ct \equiv s'(r) = s(r) = r$, where s and s' denote the world-lines traversed by the particle and measured in the lab frame, respectively. Now let us consider the more general case, in which the space-time curvature induced by the presence of the particle cannot be ignored. This scenario is represented by the curved red line in Fig. 2. In this case, if the photon travels from the particle at r = 0 to the detector at r > 0 in time t, this corresponds to the measurement of a space-like interval $s'(t, r) \simeq (1 - r_S/2r)\,ct$.

FIG. 1: Measurement of the time-like interval traversed by a massive particle located at $r \simeq 0$, by projecting light-like signals emitted over the course of its path onto a 'detector' at r > 0. The outer tube surrounding the centre of mass (CoM) represents the region $r < r_{\rm grav}$, in which the particle's gravitational field may be considered non-negligible compared to the background curvature. Placing the detector within $r_{\rm grav}$ leads to significant differences between the measured (lab frame) and 'true' (CoM frame) values, even in the classical regime (28). The inner tube represents the fuzziness of the particle's CoM due to the nonzero width of the canonical quantum wave packet. Generally, the tubes defined by the gravitational and quantum mechanical radii have different thicknesses, but coincide for the minimum-mass particle predicted by the DE-UP. (See Sec. III C.)
The time-like interval traversed by the particle is still s(t) = ct, so that $s'(t, r) \simeq (1 - r_S/2r)\, s(t)$, as in Eq. (3). However, s(t) = ct also represents the space-like interval that would have been measured, had the particle's mass not perturbed the background. Hence, Károlyházy's interpretation of the symbols s and s', as representing the 'true' (CoM frame) and measured (lab frame) values of the space-time interval traversed by the particle, is equivalent to ours, in which they represent intervals in the non-perturbed and perturbed backgrounds, respectively.
As stated in Sec. II A, we now show explicitly that Eq. (3) holds even more generally. Suppose that, rather than measuring the space-like interval between the particle and the detector (which corresponds to the coordinate distance r, even if the two are not equivalent), we instead choose to measure a much larger interval. For example, let us imagine that the particle is surrounded by a horizon, at a (classically) fixed distance s' from its CoM. Furthermore, let us imagine that, if the gravitational field of the particle were absent, the horizon would be located at a fixed distance $s = l_*$ rather than s'.

FIG. 2: If the gravitational field of the particle is considered negligible, space-time is approximately flat. In this case, a photon emitted from the particle at r = 0 travels to the point r(t) = ct in time t. This completes a measurement of the space-like interval s(t) = ct. During this time (ignoring recoil), the particle traverses a time-like interval s'(t) = ct, so that s'(t) = s(t) = r(t) = ct. Taking the particle's gravity into account, if the photon travels from r = 0 to r > 0 in time t, this corresponds to a measurement of the space-like interval $s'(t, r) \simeq (1 - r_S/2r)\,ct$ ($r \ne ct$). The time-like interval traversed by the particle is still s(t) = ct, so that $s'(t, r) \simeq (1 - r_S/2r)\,s(t)$. This formula relates the perturbed line element s'(t, r) to the unperturbed line element s(t) or, equivalently, the space-like interval measured at r to the 'true' time-like interval traversed by the particle. Hence, the relation between s and s' obtained from Károlyházy's measurement procedure is equivalent to the perturbative result.
Our experimental procedure is then as follows. A photon is emitted from the particle at r = 0 and absorbed by the detector in the lab frame (as before) after a time $t_1$. This completes a measurement of the space-like interval $s_1' \simeq (1 - r_S/2r)\,s_1$, where $s_1 = ct_1$. Simultaneously, or near simultaneously, a photon emitted from a point on the horizon at $t_2 = t_1 - t_*$, where $t_* = l_*/c$, also arrives at r and is absorbed by the detector. This completes a measurement of the space-like interval $s_2' \simeq (1 - r_S/2r)\,s_2$, where $s_2 = -ct_2 > 0$. This result follows directly from the independence of the space-time coordinates r and t, where, in our experimental procedure, t is identified with the flight time of a photon and r is identified with the position of the detector. Together, these interactions complete the measurement of a space-like interval given by Eq. (29). The time-like interval traversed by the particle during the flight time of both photons is $s = ct_* = l_*$, so that this procedure is equivalent to projecting the entire world-line of the particle, traced out over $t_*$, onto the detector at r.
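Schematically, the two partial measurements combine as follows (a sketch of the displayed relation, with order-unity factors suppressed):

$$s' = s_1' + s_2' \simeq \left(1 - \frac{r_S}{2r}\right)(s_1 + s_2) = \left(1 - \frac{r_S}{2r}\right) l_*,$$

using $s_1 = ct_1$ and $s_2 = -ct_2$, so that $s_1 + s_2 = c(t_1 - t_2) = ct_* = l_*$.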
Modifying this argument to include the effects of universal expansion, dark energy (Λ > 0) and the background matter density on the Newtonian potential induced by the perturbation gives Eq. (30), which is simply Eq. (28) with $s(\tau) \to l_*(\tau)$. In this case, we may identify $t \simeq \tau$, and the relevant horizon is the particle horizon, $l_*(\tau) = r_H(\tau)$, defined in terms of the conformal time η(τ) [55]. Hence, the measured value of the space-like distance between the particle and the horizon depends on where we place our detector in relation to each. This is a simple consequence of the fact that the perturbation breaks the global symmetry (i.e. homogeneity or, equivalently, isotropy about every point) of the FLRW background. If r is very small, the detector sits within a (relatively) deep potential well, in which the difference between the curvature of the perturbed and the unperturbed backgrounds is large. From Károlyházy's viewpoint, the time-like interval traversed by the particle, over the time taken for a photon to reach the horizon, is projected onto a detector in the lab frame at r. If $r \lesssim r_{\rm grav}(\tau) \ll r_*$, where $r_*$ is the coordinate distance corresponding to the position of the horizon, the distortion induced by the gravitational field of the particle renders the measured value significantly different from the true (CoM frame) value.
Implicitly, this argument assumes that the particle formed in the very early Universe (τ 0). However, even if this is not the case, r H (τ ) still marks the furthest point in causal contact with the particle at the cosmic epoch τ . As such, it still represents the largest distance that can be measured by means of the particle-photondetector system, at time τ . Strictly, for 0 τ τ 0 , Károlyházy's interpretation is not applicable to Eq. (30), since the world-line of the particle is much shorter than l * (τ ) = r H (τ ). Nonetheless, this formula remains physically meaningful in relation to the gedanken experiment described above, in which the detector at r receives signals from both the particle at r = 0 and its horizon at r H (τ ).
The above argument demonstrates the classical equivalence of Károlyházy's measurement scheme and the perturbative result, Eq. (28). In canonical QM, the picture of the classical point-particle is replaced by the wave function ψ, representing a superposition of position or, equivalently, momentum states of the particle's CoM. Thus, it is not difficult to imagine that, in the quantum regime, the classical region over which the particle's selfgravity cannot be neglected gives rise to an irreducible haziness of the underlying space-time metric, induced by the presence of the wave function. This is equivalent to an irreducible 'smearing' out of the particle mass or, equivalently, of the CoM associated with ψ.
This observation, which formed the basis of Károlyházy's predictions [15,16], will also form the basis of our own analysis, though we will depart from his original prescription in a number of crucially important ways. In particular, we will attempt to incorporate the effects of a space-filling dark energy, which exists in the form of a cosmological constant Λ > 0, with effective energy density and pressure given by Eq. (19).
We note that in this model, as in Károlyházy's original [15,16], Dirac δ-function position states do not exist. Even if the position of a quantum particle is ideally localised, from the perspective of the gravitationallymodified quantum theory, its CoM remains 'smeared' over some minimum length-scale, which is a function of the size, mass and possibly charge of the body, and of fundamental physical constants. This point is discussed in detail in Sec. III, in which the dark energy-modified MLUR is derived.
C. Derivation of the MLUR (Károlyházy, 1968)
To highlight both the similarities and the differences between the arguments presented in [15,16] and those presented in the present work, we briefly review the original derivation of Károlyházy's MLUR. Special emphasis is placed on the physical assumptions that underlie the model and on the chain of reasoning that gives rise to the final result. For clarity, where new or supplementary assumptions are introduced for the first time, they are explicitly stated.
Beginning with Eq. (3), Károlyházy effectively defines the uncertainty in s' in terms of an assumed uncertainty in m, via Eq. (33), where β is a positive numerical constant of order unity. In fact, following Eq. (3), β is set exactly equal to one in Károlyházy's original derivation [15,16]. We explicitly include it, from here on, for the sake of comparison with the results of Ng and van Dam [47,48], presented in Sec. II D, and their modification in the presence of dark energy, given in Sec. III. While this idea is reasonable from a gravitational perspective (where one may expect statistical fluctuations in space-time configurations to be equivalent to fluctuations in the mass that 'sources' the gravitational field, or at least correlated with them), it is problematic from the quantum point of view, since 'uncertainty' refers to the statistical spread of measurement outcomes, where the physical quantity in question is represented by a Hermitian operator. However, in both canonical QM and QFT, mass is a parameter, not an operator.
In [15,16], Károlyházy obtains the expression for ∆m from the 'canonical' uncertainty relation $\Delta E\, \Delta t \gtrsim \hbar$, though this too is potentially problematic, as time t is not an operator in the canonical non-relativistic theory. Defining the uncertainty in the rest-energy of the particle as $\Delta E = \Delta m\, c^2$ and using $s \simeq ct$ to infer $\Delta s \simeq c\,\Delta t$, yields $\Delta m \gtrsim \hbar/(c\,\Delta s)$.
Substituting (35) into (33), assuming that the self-gravity associated with the particle's wave function is non-negligible only over the interval 0 ≤ r ≲ ∆s [i.e., replacing r → ∆r ≃ ∆s in (33)], and noting that the minimal value of ∆s' is (∆s')_min ≃ ∆s, then yields
∆s ≳ β^{1/3} (l²_Pl s)^{1/3} ,   (36)
where we define the Planck length l_Pl = √(ℏG/c³) and mass m_Pl = √(ℏc/G), for later convenience. In his original papers [15,16], Károlyházy's MLUR was related to the concept of a coherence cell via a special gravitationally-modified dispersion relation, which enabled estimates of the cell width, a_c, and Eq. (36) to be satisfied simultaneously. However, in the present paper, we will not consider the implications of dark energy for models of gravitationally-induced wave function collapse. Their detailed examination is left to future work [63].
D. An alternative derivation (Ng and van Dam, 1994)
An alternative derivation of Eq. (36) is based on a gravitational extension of the MLUR obtained in canonical QM, and was originally proposed by Ng and van Dam [47,48]. That an MLUR exists, even in the canonical non-gravitational theory, can be seen by considering the dependence of the positional uncertainty ∆x on the time interval t over which measurements are made. (Note that we again distinguish between this and the cosmic time τ.) The approximate dependence of ∆x on the time interval t may be determined from the non-relativistic quantum dispersion relation, ω = (ℏ/2m)k², which gives rise to the group velocity v_group = dω/dk = (ℏ/m)k. The uncertainties in v_group and k at any time t are related via ∆v_group(t) ≃ (ℏ/2m)∆k(t) or, equivalently, ∆v_group(t) ≃ ℏ/(2m∆x(0)). Using the fact that ∆x(t) ≃ ∆v_group(t) t ≫ ∆x(0) for sufficiently large t > 0 then gives ∆x(t) ≃ ℏt/(2m∆x(0)). Next, we define the uncertainty over all measurements, made at both t = 0 and t > 0, as the geometric mean of the canonical uncertainties at both times, i.e. ∆x_canon.(t) := √(∆x(0)∆x(t)).   (41)
This yields ∆x_canon.(t) ≃ √(ℏt/(2m)) = √(λ_C r/2),   (42)
where λ_C = ℏ/(mc) is the Compton wavelength and where we have defined the distance r = ct, assuming that the wave function is spherically symmetric and spreads radially outwards. Interestingly, Eq. (42) may also be derived using the 'canonical' energy-time uncertainty relation, ∆E∆t ≳ ℏ. More rigorously, it may be obtained as a direct solution to the Schrödinger equation in the Heisenberg picture [64,65]. In the absence of an external potential (V = 0), the time evolution of the position operator x̂(t) is given by dx̂/dt = (i/ℏ)[Ĥ, x̂] = p̂/m, which may be solved directly, yielding x̂(t) = x̂(0) + (p̂(0)/m)t. The spectra of any two Hermitian operators Â and B̂ obey the general uncertainty relation [66,67] ∆A∆B ≥ (1/2)|⟨[Â, B̂]⟩|, and [x̂(0), x̂(t)] = iℏt/m, so that ∆x(0)∆x(t) ≥ ℏt/(2m). Using the definition of ∆x_canon.(t), Eq. (41), together with t = r/c, we recover Eq. (42).
Historically, this result was first obtained by Salecker and Wigner using a gedanken experiment in which a quantum 'particle' is used to measure a distance r by means of the emission and reabsorption of a photon [68]. In this description ∆x canon. (r), given by Eq. (42), represents the minimum possible canonical quantum uncertainty in the measurement of r.
The argument presented in [68] proceeds as follows. Suppose we attempt to measure r using a 'clock' consisting of a classical mirror and a quantum mechanical device (e.g. a charged particle such as an electron), initially located at r = 0, that both emits and absorbs photons. A photon is emitted at t = 0 and reflected by the mirror, which is placed at some unknown distance r > 0. The photon is then reabsorbed by the particle after a time t = 2r/c (not t = r/c).
Assuming that the velocity of the particle remains well below the speed of light, it may be modelled non-relativistically. By the standard Heisenberg uncertainty principle (HUP), the uncertainty in its velocity at any time t ≥ 0 obeys the inequality ∆v(t) ≥ ℏ/(2m∆x(t)), where ∆x(t) is the positional uncertainty obtained by evolving the initial wave function ψ(x, 0) via the Schrödinger equation (i.e. neglecting recoil). However, if the initial positional uncertainty is ∆x(0) then, in the time required for the photon to travel to the mirror and back, t, the particle acquires an additional positional uncertainty ∆x_recoil(t) ≃ ∆v(t) t. The total canonical positional uncertainty is now defined as ∆x_canon.(t) := ∆x(t) + ∆x_recoil(t) and obeys the inequality ∆x_canon.(t) ≥ ∆x(t) + ℏt/(2m∆x(t)). Minimizing this expression with respect to ∆x(t), or equivalently ∆v(t), and using the fact that ∆v_max ≃ ℏ/(2m∆x_min), gives (∆x_canon.)_min ≃ √(2ℏt/m) ∼ √(λ_C r), where we have used t = 2r/c.
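The minimization step referred to above can be made explicit. The following is a sketch only, with numerical factors of order unity that may differ from those appearing in Eq. (52):
\[
\frac{\partial}{\partial \Delta x(t)}\left[\Delta x(t) + \frac{\hbar t}{2m\,\Delta x(t)}\right] = 0
\;\;\Rightarrow\;\; \Delta x(t) = \sqrt{\frac{\hbar t}{2m}}, \qquad
(\Delta x_{\rm canon.})_{\rm min} \simeq 2\sqrt{\frac{\hbar t}{2m}} \sim \sqrt{\lambda_{\rm C}\, r}, \qquad t = 2r/c .
\]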
We note that similar arguments apply if we consider a modified experimental set up, in which a photon is emitted by the particle at r = 0 and absorbed by a device in the lab frame at r = ct, or vice versa. (In other words, we note that reflection by the mirror is not an essential part of the experimental procedure and, in addition, that it does not affect the order of magnitude estimates of the minimum quantum uncertainty inherent in the measurement.) We also note that requiring r > r S = 2Gm/c 2 (i.e., that photons cannot be emitted from within the Schwarzschild radius of our 'probe' particle), we obtain (∆x canon. ) min = l Pl . Alternatively, requiring r > λ C , the measurement process devised by Salecker and Wigner gives rise to a MLUR which is consistent with the standard Compton bound of the non-relativistic theory.
For fundamental particles, it is therefore interesting to ask, what happens if a photon is emitted from the particle and reabsorbed within the interval r ∈ (r S , λ C ]? Strictly, the answer is that, for r < λ C , the non-relativistic theory breaks down and we must switch to a field theoretic picture. In this, the 'measurement' of r corresponds to a self-interaction, described by a one-loop process in the relevant Feynman diagram expansion, in which the photon remains virtual. However, it is important to remember that interactions corresponding to 'measurements' of r < ∆x canon. (r, m) < λ C (m) in the non-relativistic theory are physical. It is therefore reasonable to apply the non-relativistic formulae, such as Eq. (52) and its gravitational 'extensions', in this regime, on the understanding that 'measuring' distances r < λ C via photon emission/reabsorption corresponds to virtual photon exchange via a one-loop process.
A related point concerns the existence of superluminal velocities for r ≲ λ_C, as implied by Eq. (52). However, though virtual particles can travel faster than the speed of light, this does not imply a violation of causality, as information is not transmitted outside the light cone of a given space-time point [69]. In fact, a similar effect occurs with respect to the standard Heisenberg term: for ∆x ≲ λ_C, the HUP implies ∆p ≳ mc, or equivalently ∆v ≳ c. Hence, superluminal velocities and sub-Compton probe distances in the non-relativistic theory are associated with the regime in which field theoretic effects become important. Nonetheless, we may continue to apply the non-relativistic formulae in this region, subject to the caveats stated above. These issues are discussed in detail in the Appendix.
It is straightforward to extend the arguments presented in [64,65] and [68] to include an estimate of the uncertainty in the position of the particle due to gravitational effects, ∆x_grav. By assuming that this is proportional to the Schwarzschild radius r_S, Ng and van Dam defined the total uncertainty due to canonical quantum effects, plus gravity, as
∆x_total(r, m) = ∆x_canon.(r, m) + β Gm/c² ≃ √(λ_C r) + β Gm/c² ,   (53)
where β > 0, which is also assumed to be of order unity [47,48]. (For β = 2, we recover ∆x_grav = r_S exactly.) Minimizing Eq. (53) with respect to m yields m ≃ m_Pl (r/(β² l_Pl))^{1/3} (54) and, substituting this back into Eq. (53), we obtain (∆x_total)_min ≃ β^{1/3} (l²_Pl r)^{1/3} (55). Neglecting numerical factors of order unity, and relabelling ∆s → ∆x_total in Eq. (36), in accordance with standard QM notation for distance measurements, we see that Eq. (55) is equivalent to Károlyházy's result with r = ct ≡ s(t).
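For clarity, the minimization may be sketched as follows, keeping only the parametric dependence and treating all numerical coefficients as order unity (the exact factors in Eqs. (53)-(55) are not reproduced here):
\[
\frac{\partial}{\partial m}\left[\sqrt{\frac{\hbar r}{mc}} + \beta\,\frac{Gm}{c^{2}}\right] = 0
\;\;\Rightarrow\;\; m^{3} \sim \frac{\hbar c^{3} r}{\beta^{2} G^{2}}
\;\;\Rightarrow\;\; (\Delta x_{\rm total})_{\rm min} \sim \beta^{1/3}\left(l_{\rm Pl}^{2}\, r\right)^{1/3}.
\]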
Equivalently, (∆x total ) min may be written as a function of m, using Eq. (54). By performing the minimization procedure with respect to m, we have effectively asked the question "what mass must the probe particle have, in order to measure the distance r with minimum quantum uncertainty?". Physically, this is equivalent to asking, "if our particle has mass m, what distance r can be measured with minimum uncertainty?". However, although Eq. (54) fixes the relation between m and r for an uncertainty-minimizing measurement, we note that there is no minimum of the function ∆x total (r, m), given by Eq. (53), in the r-direction of the (r, m) plot. Intuitively, we may expect to be able to minimize ∆x total (r, m) with respect to either m or r, and to obtain the same result in either case, since this gives rise to a procedure which is self-consistent in the limit r → λ ± C (i.e. when the 'probe' distance r tends to the Compton wavelength of the particle, either from above or below). This point is discussed further in Sec. II E.
Finally, before concluding the present subsection, we note that similar results hold, even for electrically neutral particles, whose interactions are mediated by massive, short-range bosons. For electrically charged particles, real photons may be emitted or absorbed, or virtual photon exchange may take place via a one-loop self-interaction. For uncharged particles, photons (either real or virtual) are replaced by the appropriate force-mediating boson(s). For example, in the case of the weak nuclear force, the W± and Z⁰ bosons are massive, and hence short-range, giving rise to short-range probe distances r ≤ λ_C ≪ r_H(τ). To realize the measurement scheme outlined in Sec. II B, in which a neutral particle communicates with - and effectively 'measures' the distance to - its own horizon, we must instead imagine a higher-order self-interaction process involving W± exchange, taking place on some scale r ≤ λ_C ≪ r_H(τ), coupled with the exchange of virtual photons between the intermediate W± bosons and a charged particle located at ∼ r_H(τ).
E. Motivations for the DE-UP
As shown in Sec. II C, in [15,16] Eq. (36) was obtained by considering a gedanken experiment to measure the length of a space-time interval with minimum quantum uncertainty. This derivation relies on the fact that the mass of the measuring device (probe particle) m distorts the background space-time. Equating the uncertainty in the particle's rest energy with the uncertainty in its mass then implies an irremovable uncertainty or 'haziness' in the space-time in the vicinity of the particle itself. This results in an absolute minimum uncertainty in the precision with which a gravitating system can be used to measure the length of any given world-line, s. By contrast, the arguments presented in [47,48] circumvent the need to assume quantum fluctuations in the rest mass, and hence the need to define a rest-energy Hamiltonian, Ĥ_rest = m̂c².
Nonetheless, Károlyházy's arguments [15,16] are similar to those of Ng and van Dam [47,48], in that β ∼ O(1) arises as a direct result of the assumption that the Schwarzschild radius of a body, r S (m) = 2Gm/c 2 , represents the minimum 'gravitational uncertainty' in its position. In fact, for MLURs of the form (36)/(55), it is usually assumed that β ∼ O(1) in most of the existing quantum gravity literature [33,34]. For all the scenarios leading to Eq. (55) considered above, this is directly equivalent to assuming a minimum gravitational uncertainty of order r S (m).
An important physical consequence is that, since Eq. (55) holds if and only if Eq. (54) also holds, it is straightforward to verify Substituting the minimization condition for ∆x total (r), Eq. (54), into Eq. (56) then gives For β ∼ O(1), we require the '>' inequality in Eq. (57), since many arguments imply that l Pl represents the minimum resolvable length-scale due to quantum gravitational effects. (See, for example [70][71][72], plus [33,34] for reviews of minimum length scenarios in phenomenological quantum gravity.) This implies that the '>' inequality also holds in Eq. (56) and, hence, that the minimum quantum gravitational uncertainty predicted by Károlyházy/Ng and van Dam is always greater than the Compton wavelength of the particle that minimizes it.
However, from a physical perspective, the assumption ∆x_grav(m) ≃ r_S(m) may be questioned on at least two grounds. First, we see that, for fundamental particles with masses m ≪ m_Pl, ∆x_grav(m) ≃ r_S(m) ≪ l_Pl. Although the total uncertainty may remain super-Planckian, the assumption of simple additivity, ∆x_total(r, m) = ∆x_canon.(r, m) + ∆x_grav(m), on which Eq. (55) is ultimately based, implies that the canonical quantum uncertainty and the gravitational uncertainty arise independently, without influencing one another (i.e., that the gravitational uncertainty remains fixed, regardless of how dispersed the quantum wave packet becomes). It is therefore not clear whether a gravitational uncertainty given by ∆x_grav(m) ≃ r_S(m) < l_Pl is physically meaningful. Second, gravity is a long range force. Intuitively, we may expect that, however it is defined, the gravitational uncertainty induced by the presence of a point-like or near point-like particle at r = 0 should fall with the gravitational field strength. Naïvely, we may assume that the gravitational uncertainty varies in proportion to the classical Newtonian potential, ∆x_grav(r, m) ∝ |Φ(r, m)|, so that ∆x_grav(r, m) ≃ β(r) r_S(m) → 0 as r → ∞.
If this is indeed the case, we see that, rather than being a simple constant, β(r) must take the form of a ratio, β(r) = β' l*/r, where β' ∼ O(1) and l* ≫ l_Pl is a phenomenologically significant length-scale which is well motivated by fundamental physical considerations. In the context of a dark energy Universe, it is clear that the de Sitter horizon, l_dS = √(3/Λ), fulfils this criterion. As we shall see, one consequence of this is that states for which r > l_Pl and (∆x_total)_min < λ_C become possible, in contrast to the predictions obtained from Eqs. (53)-(55). We also note that replacing β = const. → β(r) = β' l*/r in Eq. (53) allows us to minimize (∆x_total)_min(r, m) with respect to either m or r. It is straightforward to demonstrate that this minimum is unique and is independent of both r and m, as sketched below. As a result, the minimization procedure remains self-consistent in the limit r → λ_C^±. In Sec. III, we derive MLURs in which the minimum uncertainty in a physical quantity Q is given by the cube root of the product of three (possibly distinct) scales, Q_1, Q_2, Q_3, but which differ from relations derived from Eqs. (36)/(55) in two important ways. First, the new relations attempt to incorporate the effects of dark energy, in the form of a cosmological constant, on the 'smearing' of space-time and, thus, on the minimum quantum gravitational uncertainty inherent in a measurement of position and related physical observables. Second, they lead to substantially different but physically reasonable predictions in a number of scenarios. Specifically, they may be combined with other results obtained in general relativity and canonical quantum theory to give estimates of both the electron (e−) and electron neutrino (ν_e) masses, in terms of fundamental constants. These estimates yield the correct order of magnitude values obtained from experiment.
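A brief sketch of this claim, with order-unity factors suppressed and l* = l_dS assumed, is as follows:
\[
\Delta x_{\rm total}(r,m) \simeq \sqrt{\lambda_{\rm C}\, r} + \beta'\,\frac{Gm}{c^{2}}\,\frac{l_{\rm dS}}{r}, \qquad
\frac{\partial \Delta x_{\rm total}}{\partial m} = 0 \;\Rightarrow\; m(r) \propto r,
\qquad
(\Delta x_{\rm total})_{\rm min} \sim \beta'^{\,1/3}\left(l_{\rm Pl}^{2}\, l_{\rm dS}\right)^{1/3},
\]
which is independent of both r and m, so that the same value is obtained whether the minimization is performed first with respect to m or with respect to r.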
In deriving the new relations we follow a procedure analogous to that used by Ng and van Dam [47,48] (outlined in Sec. III B) but assume the existence of an asymp-totically de Sitter/FLRW, rather than Minkowski, spacetime. The results are obtained in two different ways. In the first, it is unnecessary to assume fluctuations in basic parameters, such as the mass m. This avoids the need to promote parameters to observables, represented by Hermitian operators in the non-relativistic quantumgravitational regime. (From a technical point of view, it removes the need to define the operatorm or, equivalently, the rest HamiltonianĤ rest =m/c 2 .) In this case, it is, however, necessary to make certain assumptions about the properties of space-time superpositions in the Newtonian limit. In particular, we assume the existence of an upper bound on ∆x grav , given by the difference between line-elements in two classical space-times: one in which the particle is present and one in which it is absent. This is equivalent to assuming that the 'spread' of quantum states cannot exceed the difference between the two classical extremes.
In the second, we promote the classical Newtonian potential to an operator, Φ = −Gm/r → Φ̂, à la Károlyházy, and estimate the associated uncertainty, ∆Φ, by considering a superposition of CoM position states. We then relate ∆Φ to ∆x_grav by considering the associated uncertainty induced in the measurement of space-time line-elements. From here on, we refer to all minimum quantity uncertainty relations of the form (∆Q)_min ≃ (Q_1 Q_2 Q_3)^{1/3} as 'cubic', due to the value of the exponent on the right-hand side. Like Károlyházy, for τ ≃ τ_0, we take Eq. (3) as our starting point for the quantum mechanical definition of a 'hazy' space-time. In this case, the Hubble flow correction term in Eq. (28) is subdominant within the turnaround radius, r ≤ r_grav. However, rather than following the steps expressed in Eqs. (34)-(35), leading to Eq. (36), we instead make the following physical assumption.
We assume that the quantum mechanical uncertainty in the space-like interval between a particle of mass m (located at r = 0) and the coordinate distance r is of the order of the difference between the classical values s'(r, m) and s(r), where s(r) = s'(r, m)|_{m=0}.
Classically, the presence of the particle induces a perturbation in the background space-time, whose magnitude at r is given by ∆s_pert(r, m) := |s'(r, m) − s(r)| ≃ (Gm/c²r) s(r), so that our assumption is equivalent to setting ∆s(r, m) ≃ ∆s_pert(r, m), where s(r) and s'(r, m) represent the two (classical) extremes.
In the classical picture, the underlying space-time may be in one of two distinct states. In the first, in which the particle is absent, the underlying metric corresponds to the unperturbed line element s(r). In the second, in which the particle is present, the metric corresponds, instead, to the perturbed line element s'(r, m). It is reasonable to suppose that, whatever the final theory of quantum gravity may be, a wave function of the form given in Eq. (59), describing a superposition of space-time background states, is possible in at least some limiting cases. Here, we use the notation Ψ(t, r) to distinguish between wave functions representing space-time superpositions and ψ(t, r), which represents a canonical quantum wave function that exists on a definite classical space-time background. Though the mathematical formalism of a theory that contains both is not developed in the present work, we have in mind a composite wave function that reduces to ψ(t, r) when Ψ(t, r) corresponds to a particular geometry. More realistically, we may assume that the space-time background on which the canonical quantum wave function |ψ⟩ propagates is, in fact, in a superposition of an infinite number of states, each corresponding to a unique classical line element s.
An expansion of this form will yield Eq. (59) if either the limits of integration are such that s_i = s, the unperturbed line element, and s_f = s', the perturbed line element, or, more generally, if s_i and s_f take arbitrary values but the wave packet |Ψ⟩_st maintains a standard deviation of order |s' − s|. This holds true even for s_i → 0, s_f → ∞, i.e., even when s_i and s_f extend beyond the extremal classical limits. Though a complete theory still eludes us, we may imagine a path integral over some kind of phase space, in which space-times corresponding to all other possible line-elements contribute negligible amplitudes to the total state vector expansion. These would include states corresponding to flat or negative curvature in the presence of m, as well as states giving rise to extreme positive curvature, which could only be sourced classically by much larger masses. This scenario is illustrated graphically in Fig. 3.
Incorporating the effects of universal expansion, we have ∆s(r, m) → ∆s(t, r, m), where t represents the time taken to complete the measurement (r = ct). Note that ∆s becomes a function of time even if we choose to neglect the subdominant Hubble flow term in the perturbed Newtonian potential, since we must still shift to comoving coordinates r → a(t)r. For the measurement procedure considered explicitly in the preceding figures, s(a(t)r) represents the space-like interval, at time t, between the particle (at a(t)r = 0) and the 'detector' (at a(t)r > 0), in the unperturbed space-time. This gives ∆s(t, r, m) ≃ [Gm/(c²a(t)r)] s(a(t)r).
However, as described in Sec. II B, we may use a more general measurement procedure to measure much larger space-like intervals, up to and including the particle horizon r_H(τ). In this case, t ≃ τ and the space-time uncertainty takes the form ∆s(τ, r, m) ≃ [Gm/(c²r)] r_H(τ), where r_H(τ) is given by Eq. (31). In Eq. (64), direct r-dependence drops out of the expression for s, since this now represents the distance to the horizon, which may be expressed purely in terms of the cosmic time.
With Eq. (64) as our new starting point, we may now ask the question: how is this scenario affected by the presence of dark energy, in the form of a cosmological constant Λ? Clearly, the main physical consequence at the present epoch (τ ≃ τ_0, a ≃ a_0 = 1) is the existence of a cosmological horizon at a fixed distance from any observer for all τ ≳ τ_0. This is the de Sitter horizon, which corresponds to the (unperturbed) space-like interval s ≃ l_dS = √(3/Λ), and its formation is discussed in Sec. III B.
Thus, in applying Eq. (64) to particles at the present epoch, we have in mind a particle interacting simultaneously with an object at r, close to its CoM, and with the furthest reaches of its environment, represented by r H (τ 0 ) l dS . For r > λ C , this object may be a detector in the lab frame, which simultaneously receives signals (e.g. photons) from the particle and from distant objects close to l dS . However, for r < λ C , the local object with which the probe particle interacts is simply itself and the local interaction involves the exchange of virtual particles. In principle, the long-range interaction between r < λ C and r H (τ 0 ) l dS may also involve the exchange of virtual particles -if necessary, via an appropriate higher-loop process.
For our purposes, the fact that the interaction between the particle and its horizon may involve the exchange of virtual rather than real particles is extremely important. In effect, such interactions constantly 'measure' the distance from the particle's CoM -or, more specifically, from a point r < λ C close to the CoM -to its horizon. Hence, any irremovable uncertainty present in the result of this measurement is directly equivalent to an irremovable uncertainty in the position of the particle. Classically, both the position of the CoM and the position of the horizon are well-defined, so that any quantum uncertainty in the distance between them is equivalent to an uncertainty in the position of either (or both).
We may obtain the same result using an operational procedure in canonical quantum theory, as follows. In the classical picture, a point-particle of mass m, located at r' (i.e., represented by the function δ(r − r')), generates a well-defined gravitational potential at a general point r, given by Φ(r) = −Gm/|r − r'|. In the quantum picture, the classical potential is promoted to an operator, Φ → Φ̂, such that Φ̂ δ(r − r') = −[Gm/|r − r'|] δ(r − r'). In other words, acting on the canonical position eigenstate δ(r − r'), Φ̂ recovers the classical potential Φ. For superpositions of position states, ψ(t, r'), the gravitational potential at r will also be given by a superposition of states. We then have ⟨Φ⟩ = ⟨ψ|Φ̂|ψ⟩, yielding ⟨Φ⟩(t, r) = −Gm ∫ |ψ(t, r')|²/|r − r'| d³r', and higher-order moments ⟨Φ̂ⁿ⟩ may be defined in like manner.
In the limit ψ(t, r') → δ(r − r'), the above definition can easily be modified to ensure that the classical limit is recovered, i.e. that ⟨Φ⟩(r, m) → Φ(r, m). The tricky part is dealing with the fact that position eigenstates cannot be normalized, though, in principle, this causes no more fundamental problems here than it does in canonical QM. However, as we shall see (and as mentioned previously in Sec. II B), the presence of irremovable gravitational uncertainty makes the physical realisation of canonical quantum δ-function states impossible. Such a modification is therefore not required: Eq. (66) remains formally valid but the minimum positional uncertainty is greater than zero, for any physically realizable state.
As an example, we consider spherically symmetric Gaussian states, where we have chosen our coordinate system so that the wave-packet CoM is located at r = 0 and ∆x denotes the canonical quantum uncertainty. For Gaussian states, this is given by the standard spreading formula, Eq. (68), where σ = ∆x(0) is the initial spread at t = 0 and r ≡ ct.
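For reference, the standard canonical-QM spreading formula for a Gaussian wave packet, which we take to underlie Eq. (68) (up to possible order-unity differences in convention), is
\[
\Delta x(t) = \sigma\sqrt{1 + \left(\frac{\hbar t}{2m\sigma^{2}}\right)^{2}}
= \sigma\sqrt{1 + \left(\frac{\lambda_{\rm C}\, r}{2\sigma^{2}}\right)^{2}}, \qquad r = ct ,
\]
so that ∆x(r) ≃ σ for r ≪ σ²/λ_C and ∆x(r) ≃ λ_C r/(2σ) for r ≫ σ²/λ_C.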
For σ ≫ (∆x_canon.)_min ≃ √(λ_C r) (i.e., r ≪ σ²/λ_C), the spread of the wave function is given approximately by ∆x(r) ≃ σ, whereas, for σ ≪ √(λ_C r) (r ≫ σ²/λ_C), the late-time spread is given by ∆x(r) ≃ λ_C r/σ. Hence, any 'measurements' (including one-loop self-interactions) occurring on time-scales t ≪ σ²/(cλ_C) are effectively 'instantaneous' and do not significantly disturb the initial (t = 0) quantum state. However, as t = 0 is an idealization, which is likely not physically realizable, we restrict our attention henceforth to time-scales t ≳ σ²/(cλ_C). Equation (68) then gives the corresponding uncertainty in the Newtonian potential, ∆Φ. The two expressions coincide for σ ≃ λ_C - which is a reasonable assumption in the canonical theory - yielding ∆Φ(r) ≃ Gm/r. The next step is to determine the relationship between ∆Φ and ∆s, the uncertainty in the measured space-time interval. This can be done by setting τ ≃ τ_0 in Eq. (28) and ignoring the sub-dominant Hubble flow term for r ≤ r_grav, giving s'(r, m) ≃ [1 + Φ(r, m)/c²] s(τ_0), where s(τ_0) ≃ l_dS = const. In the quantum picture, we then have ŝ ≃ [1 + Φ̂(r, m)/c²] s(τ_0), giving ∆s'(r, m) ≃ (∆Φ/c²) s(τ_0), as claimed. ∆s'(r, m) can then be identified with ∆x_grav, as before. In fact, even if we consider alternative time-intervals, t ≪ τ_0, and identify s(t) with the total flight time of photon(s) in the generalized measurement procedure outlined in Sec. II B, an analogous argument still holds. Since we may set s(t) = ct, and because t is a parameter, not an operator, in canonical QM, ŝ(t) may still be regarded as a 'constant' from an operator perspective.
Note that here, as in the derivations of the canonical quantum uncertainty given in [64,65] and [68] (outlined in Sec. II D), we continue to identify r = ct in deriving the expression for ∆Φ. In this sense, the geometric nature of the gravitational field is not explicitly accounted for in this step and (65) is treated like any other potential existing on a flat space background. This is an unavoidable limitation of working within the framework of canonical QM up to this point.
However, combining this with the classical relation given by Eqs. (28)/(72), and 'quantizing' the latter by promoting the classical potential to an operator Φ →Φ, allows us to obtain an expression for the standard deviation of the space-time line-element operatorŝ which takes us beyond canonical QM. Thereafter, the geometric nature ofΦ is made explicit -via its relation toŝ -and the r appearing in Eq. (73) cannot be identified with the flat-space interval corresponding to the unperturbed line-element, i.e. r = s(t) = ct. Nonetheless, it is interesting to note that such a procedure yields results analogous to Eq. (59), which is (explicitly) based on the physical picture illustrated in Fig. 3. At the very least, we may say that this picture does not contradict the results of canonical QM, but allows us to reinterpret them in terms of a Károlyházy-type 'hazy' space-time.
B. Physical basis of the DE-UP
For later convenience, we now define the first and second de Sitter length-scales as l_dS := √(3/Λ) and l'_dS := l²_Pl/l_dS, together with the associated mass-scales m_dS := ℏ/(l_dS c) and m'_dS := m²_Pl/m_dS ≃ c²l_dS/G, where unprimed quantities are referred to as 'first' and primed ones as 'second', respectively. (In [57,58], these were referred to as the Wesson length/mass-scales.)
As discussed in Sec. II A, l_dS is the distance to the asymptotic de Sitter horizon, which is of the order of the present day radius of the Universe, l_dS ≃ r_U ≃ 10²⁸ cm. This, in turn, is a manifestation of the so-called 'coincidence problem', which refers to the fact that the current epoch marks the transition between decelerating and accelerating phases of universal expansion [55]. For τ ≪ τ_0 (excluding any short-lived inflationary phase in the very early Universe), Ω_r(τ) + Ω_M(τ) ≫ Ω_Λ(τ) and the gravitational attraction of matter and radiation dominated over the repulsive effect of dark energy. At the present epoch, τ ≃ τ_0, we have Ω_M = 0.31 < Ω_Λ = 0.69, and a phase of late-time accelerated expansion has begun. From the Friedmann equations (9)-(10), we see that, for τ → ∞, the dilution of ρ_m relative to ρ_Λ implies H = ȧ/a → c/l_dS, yielding a(τ) ≃ a_0 exp(cτ/l_dS), (a_0 = 1).
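This limit can be read off directly from the flat-space Friedmann equation, assuming the standard form of Eqs. (9)-(10):
\[
H^{2} = \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\left(\rho_{\rm m} + \rho_{\rm r}\right) + \frac{\Lambda c^{2}}{3}
\;\longrightarrow\; \frac{\Lambda c^{2}}{3} = \frac{c^{2}}{l_{\rm dS}^{2}} \quad (\rho_{\rm m},\rho_{\rm r} \to 0),
\]
whose solution is the exponential expansion a(τ) ∝ exp(cτ/l_dS) quoted above.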
Hence, any Universe in which dark energy exists in the form of a cosmological constant undergoes exponential expansion as τ → ∞, since Ω_Λ → 1. Furthermore, the transition point between decelerating and accelerating phases of expansion occurs when Ω_M ≃ Ω_Λ, i.e., when τ ≃ τ_0 ≃ l_dS/c and r_U ≃ l_dS.
In our Universe, we live at precisely this point of transition, beyond which the FLRW metric becomes approximately equal to the de Sitter metric. In standard spherical polar coordinates, this takes the flat FLRW form, where dΣ² is given by Eq. (8) with Gaussian curvature k = 0. However, introducing static coordinates, this may be rewritten as
ds² = −(1 − r²/l²_dS)c²dt² + (1 − r²/l²_dS)^{−1}dr² + r²dΩ² ,   (80)
where dΩ² = dθ² + sin²θ dφ². In this coordinate system, the existence of a Universal horizon at r = l_dS, for all time, is made explicit. (The interested reader is referred to [73] for further discussion of this point.) In the presence of a perturbation induced by a point mass m, the local late-time metric tends, instead, to the Schwarzschild-de Sitter solution, which is obtained by inserting an additional −2Gm/(c²r) term into the non-trivial metric components in Eq. (80). Identifying Φ/c² = −Gm/(c²r) − r²/(2l²_dS), and using the fact that (1 − 2Φ/c²)^{−1} ≃ 1 + 2Φ/c² for |Φ|/c² ≪ 1, we recover the late-time limit implied by Eqs. (17) and (21) for spherically symmetric perturbations. Though, technically, the perturbation induced by m shifts the outer (cosmic) horizon slightly, relative to its position in unperturbed (pure) de Sitter space, the effect is negligible for all practical purposes.
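For completeness, the Schwarzschild-de Sitter line-element referred to here takes the standard static form (a sketch; sign and coordinate conventions may differ trivially from those used in Eq. (80)):
\[
ds^{2} = -\left(1 - \frac{2Gm}{c^{2}r} - \frac{r^{2}}{l_{\rm dS}^{2}}\right)c^{2}dt^{2}
+ \left(1 - \frac{2Gm}{c^{2}r} - \frac{r^{2}}{l_{\rm dS}^{2}}\right)^{-1}dr^{2} + r^{2}d\Omega^{2},
\]
from which the weak-field identification Φ/c² = −Gm/(c²r) − r²/(2l²_dS) follows immediately.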
Let us now consider the mass-scales (76). E_dS = m_dS c² represents the intrinsic mass-energy of a 'particle' whose wavelength is of the order of the de Sitter horizon. However, as shown in [58], a particle with rest mass m_dS and Compton wavelength l_dS would be unstable, having insufficient self-gravity to overcome the effects of dark energy repulsion. Hence, E_dS = m_dS c² = hc/l_dS represents the energy of a minimum-energy photon, whose wavelength is equal to its maximum possible value, l_dS. By contrast, E'_dS = m'_dS c² represents the total mass-energy contained in the dark energy field, within the de Sitter horizon (l_dS) of pure de Sitter space (Ω_Λ = 1). Since, at the present epoch, Ω_Λ = 0.69 ≲ 1 and r_U ≃ l_dS, this is of the order of the present day mass-energy of the Universe [58]. Returning to the length-scales (75), we note that l'_dS is sub-Planckian, so that its physical meaning is unclear, though we include it in our definitions for the sake of formal completeness. Finally, we note that the primed and unprimed scales are related via l_dS l'_dS = l²_Pl and m_dS m'_dS = m²_Pl. Hence, since the particle's communication with the outside world is effectively confined within the de Sitter radius - that is, within the region r ∈ [0, l_dS) - the minimum value of the gravitational uncertainty, induced at a given point r from its CoM, is ∆x_grav(r, m) ≃ (Gm/c²)(l_dS/r).
To this we must add the canonical uncertainty due to the gradual diffusion of the wave function, predicted by the canonical (non-gravitational) theory. We here assume that the respective uncertainties are additive, which is consistent with the perturbative approach to the gravitational sector, considered in Sec. II. We then have ∆x total (∆v, r, m) = ∆x canon. (∆v, r, m) + ∆x grav (r, m) ≥ ∆x(∆v) + ∆x recoil (∆v, r, m) + ∆x grav (r, m) ≥ (∆x canon. ) min (r, m) + ∆x grav (r, m) .
Instead of using the order of magnitude estimates for ∆x_recoil and ∆x_grav, obtained in Secs. II D and III A-III B, together with an order of magnitude inequality '≳', we assume that Eq. (83) holds exactly when these quantities are defined precisely, up to appropriate numerical factors. Hence, we introduce two new parameters, α', β' > 0, which are assumed to be not hierarchically larger than unity. Equation (83) may then be rewritten as
∆x_total(∆v, r, m) = ℏ/(2m∆v) + α'(∆v/c)r + β'(Gm/c²)(l_dS/r) ,   (84)
where α', β' ∼ O(1). However, note that, for this reason, they do not count as free parameters of the model. If the value of either constant were permitted to be hierarchically larger (or smaller) than unity, this would, in effect, alter the existing mass/length-scales present in the theory, indicating new physics.
Minimizing Eq. (84) with respect to ∆v fixes ∆p = m∆v in terms of r (Eq. (85)) and hence gives
∆x_total(r, m) ≥ (∆x_canon.)_min(r, m) + ∆x_grav(r, m) ≃ √(2α'λ_C r) + β'(Gm/c²)(l_dS/r) .   (86)
From here on, we refer to Eq. (84) as the DE-UP-1, to its partially minimized form, Eq. (86), as the DE-UP-2, and to its fully minimized form, Eq. (90), as the DE-UP-3.
It is straightforward to show that minimizing the DE-UP-1 with respect to r, followed by m or ∆v, or with respect to m, followed by ∆v or r, yields the same final result (90). Viewed as a function of all three variables, ∆x_total(∆v, r, m) has a unique minimum.
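The scales obtained at this global minimum can be summarized as follows (a sketch, with all order-unity factors suppressed; the precise coefficients appear in Eqs. (119)-(120) below):
\[
(\Delta x_{\rm total})_{\rm min} \sim \left(l_{\rm Pl}^{2}\, l_{\rm dS}\right)^{1/3}, \qquad
m \sim \left(m_{\rm Pl}^{2}\, m_{\rm dS}\right)^{1/3}, \qquad
r_{\rm min} \sim \lambda_{\rm C}(m) \sim \left(l_{\rm Pl}^{2}\, l_{\rm dS}\right)^{1/3},
\]
with the uncertainty-minimizing recoil velocity ∆v approaching c as the mass approaches the critical value introduced below.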
After completely minimizing ∆x_total(∆v, r, m) to obtain the DE-UP-3 (90), an interesting critical mass-scale is obtained by setting the recoil velocity of the particle equal to the speed of light, m_crit ≃ (m²_Pl m_dS)^{1/3} (91). The unique properties of this mass, including its relevance for holography in an asymptotically de Sitter Universe, were considered in [74]. In addition, we note that all expressions, Eqs. (84)-(90), are invariant under simultaneous re-scalings of the form given in Eq. (92), where α_Q > 0 is a positive real parameter, which does not depend on any of the three variables ∆v, m, or r.
As we shall see in Sec. III D, α_Q may depend at most on the charge Q of the probe particle. Equation (91) then becomes m ≃ α_Q (m²_Pl m_dS)^{1/3} (93). In summary, the first two terms in Eq. (84), ∆x and ∆x_recoil, give the canonical quantum uncertainty inherent in the measurement of a distance r. This distance is measured by means of a force-mediating boson emitted from a 'probe' particle of mass m, whose CoM is initially located at r = 0, and its subsequent absorption by a 'detector' at r > 0. For r ∈ (0, λ_C], the detector is simply the particle itself, and the boson remains virtual. This uncertainty is equivalent to the canonical uncertainty in the position of the particle, as viewed by an observer situated at r. For charged particles, the relevant boson is a photon but, for neutral particles, it may correspond instead to one of the weak force mediators (the W± or Z⁰ bosons), or perhaps even a graviton in the case of dark matter particles. We note that, in the canonical (non-gravitational) theory, the distance r and the time taken to perform the measurement t are related via r = ct.
The third term ∆x grav represents the gravitational uncertainty at r due to the 'haziness' of the underlying space-time metric, induced by the presence of the particle. This is equivalent to the irremovable uncertainty inherent in a measurement of the horizon distance r H (τ ). The measurement is completed by means of real or virtual boson exchange between the probe particle and a 'detector' at r > 0, and between r H (τ ) and r. As with the canonical uncertainty, for r ∈ (0, λ C ], the detector is simply the particle itself. In the non-relativistic picture, the space-time haziness is related to the haziness of the Newtonian potential, which exists in a superposition of states (71). We note that, once gravitational effects are taken into account, the simple relationship between the time taken to perform the measurement t and the coordinate distance r no longer holds, r = ct.
Together, all three terms give the total uncertainty, incorporating both the uncertainty in the space-time metric -including the effects of dark energy in the form of a cosmological constant Λ -and the canonical uncertainty in the position of the particle's CoM.
Finally, we also note that, using ∆p = m∆v, the canonical recoil term can be rewritten as ∆x recoil = α ∆pr/(mc). Hence, in the limit r → r S (m), Salecker and Wigner's MLUR reduces to the string-inspired GUP [75][76][77][78]. In this, the term proportional to ∆p may be interpreted as the uncertainty induced by the gravitational interaction between the probe particle and the mediating boson (typically a photon), as shown by Adler et al [31,32]. Though we may choose to include an additional term of this form in Eq. (84), we note that, for fundamental particles, r S < l Pl < r, so that it is automatically subdominant to ∆x recoil . We therefore choose to neglect it when applying the DE-UP to fundamental particles. Nonetheless, the existence of an Alder-type term may be relevant if we wish to apply the DE-UP to black holes. This possibility is discussed in the Conclusions, Sec. V, though its explicit application is left to a future work.
C. Basic properties of the DE-UP
We now investigate the basic properties of the DE-UP. Since l Pl is expected to form a fundamental lower bound on the resolvability of all physically measurable lengthscales [34,[70][71][72], we start by imposing the conditions (∆x canon. ) min , ∆x grav , r ≥ l Pl . As we shall see, imposing all three constraints gives rise to a fundamental lower bound on the mass of a system obeying Eq. (86). Furthermore, this bound may be derived independently by combining minimum-density requirements, obtained from the generalized Buchdahl inequalities for a spherically symmetric system in the presence of dark energy (Λ > 0) [79][80][81][82], with the simple requirement of the existence of a Compton wavelength [58].
Hence, beginning with the independently derived result, we see that both the canonical and gravitational terms in the DE-UP-2 (86), together with the probe distance r, remain super-Planckian under physically reasonable conditions. More generally, imposing (∆x canon. ) min , ∆x grav ≥ l Pl places constraints on the ratio r/m, or, equivalently, on the range of validity of r as a function of m and, thus, on the range of validity of Eq. (86).
Since we require ∆x grav ≥ l Pl , let us paramaterize it such that giving Likewise, setting (∆x canon. ) min ≥ l Pl so that gives For later convenience, we now define the ratio and, comparing Eqs. (95) and (97), we have To within numerical factors of order unity, N equals the number of Planck-sized bits on the present day boundary of the observable Universe or, equivalently, the number of cells with volume ∼ (∆x total ) 3 min in the present day bulk [57]. This point is discussed in detail in Sec. III E.
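Although the defining expression for N (Eq. (98)) is not reproduced here, the holographic counting described in the following paragraph fixes its order of magnitude. Assuming N ≃ (l_dS/l_Pl)², we have
\[
N \sim \left(\frac{l_{\rm dS}}{l_{\rm Pl}}\right)^{2} \sim \frac{l_{\rm dS}^{3}}{(\Delta x_{\rm total})_{\rm min}^{3}} \sim 10^{122},
\]
using (∆x_total)_min ∼ (l²_Pl l_dS)^{1/3} and l_dS ∼ 10²⁸ cm.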
Let us also require r ≥ l Pl , setting Combining this with Eqs. (95) and (97) yields and respectively, which themselves combine to give We now define a new mass-scale, It is straightforward to demonstrate that m Λ is the minimum mass of a stable, spherically symmetric, gravitating, charge-neutral and quantum mechanical object. This result was first obtained in [58], though we briefly review its derivation for the sake of clarity.
In [81], it was shown that the density of a stable, spherically symmetric, gravitating, charge-neutral and classical compact object must satisfy the inequality ρ ≳ ρ_Λ, where ρ_Λ is the dark energy density given by Eq. (19). Though the proof of this statement, which follows directly from the generalised Buchdahl inequalities [79][80][81][82], is rather complicated, its physical meaning is intuitively obvious: compact objects with energy densities significantly lower than the vacuum density have insufficient self-gravity to overcome the repulsive effect of dark energy. For bodies of fixed mass m, classical radius R and initial density ρ < ρ_min, the spatial expansion caused by Λ > 0, which acts as a repulsive force, causes R to expand indefinitely and the object is unstable. For a quantum mechanical object, whose mass m is localized within a sphere of radius R = λ_C, we then obtain a lower bound on m. Since c_1, c_2, c_3 ≥ 1 by construction, we may then identify this bound with m_Λ. For later convenience, we define l_Λ := ℏ/(m_Λ c), which is simply the reduced Compton wavelength associated with the minimum mass m_Λ. Thus, requiring both the individual components of the DE-UP-2 (86) and the probe length r to be super-Planckian ensures its consistency with both general relativistic and quantum mechanical constraints. These, in turn, allow us to fix the relation between the parameters α' and β' on purely theoretical grounds. However, we must remember that, in reality, a length-scale of order l_Pl (up to numerical factors of order unity) may be the true fundamental cut-off for resolvable length-scales in nature, so that this relation must be taken as tentative and some ambiguity still remains.
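A compressed version of the argument, with all numerical factors of order unity suppressed, runs as follows:
\[
\rho \simeq \frac{m}{\lambda_{\rm C}^{3}} \;\gtrsim\; \rho_{\Lambda} \simeq \frac{\Lambda c^{2}}{8\pi G}
\;\;\Rightarrow\;\; m^{4} \gtrsim \frac{\Lambda \hbar^{3}}{Gc}
\;\;\Rightarrow\;\; m \gtrsim m_{\Lambda} \sim \left(\frac{\Lambda \hbar^{3}}{Gc}\right)^{1/4} \sim \sqrt{m_{\rm Pl}\, m_{\rm dS}} \approx 10^{-36}\ {\rm g},
\]
consistent with the value quoted in Eq. (104) below.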
Equivalently, we see that, beginning with the result m ≥ m Λ and reversing our previous logic, the existence of a minimum stable mass for self-gravitating quantum mechanical objects ensures that all three length-scales (∆x canon. ) min , ∆x grav and r, appearing in the DE-UP-2 (86), remain super-Planckian under appropriate conditions. We now investigate these conditions for masses in the range m Λ ≤ m ≤ 2 −1/2 m Pl , which corresponds to the fundamental particle regime.
Rearranging Eq. (99) and imposing c 2 ≥ 1 yields while imposing c 1 ≥ 1 gives Substituting (110) into (96) then gives l Pl ≤ (∆x canon. ) min (r, m) ≤ 2α β l Pl l dS , (111) and where the upper bound is equivalent to the condition ∆x grav (r, m) ≥ l Pl . Next, we impose the following condition, stemming from Eq. (95) with c 1 ≥ 1: Hence, setting allows us to recover the standard constraint which defines the fundamental particle regime. To treat black hole states, we must instead impose together with Eq. (114), giving However, the possibility of applying the DE-UP to black hole physics are considered in discussed in Sec. V and we here confine our attention to fundamental particles.
Having fixed the values of the parameters α' and β' via purely theoretical considerations, Eqs. (84)-(90) can be rewritten with explicit numerical coefficients. In particular, (∆x_canon.)_min = 2^{−1/4}√(λ_C r) (119) and, at the global minimum, (∆x_canon.)_min = 2^{1/3}(l²_Pl l_dS)^{1/3} and (∆x_total)_min = (27/4)^{1/3}(l²_Pl l_dS)^{1/3} (120). We note that Eq. (119), and hence the first term in Eq. (120), deviates slightly from the canonical result obtained by Salecker and Wigner (52), and by Ng and van Dam using more rigorous methods. Equations (91) and (93) may be rewritten accordingly. Hence, for fundamental particles, the ranges of m and r are restricted, with corresponding values for the limiting mass scales m_Λ and m_Pl/√2, and for the critical mass scale m_crit. These mass-scales also have interesting gravitational properties. To within numerical factors of order unity, the smallest possible mass m_Λ is the unique mass scale satisfying the equation λ_C(m_Λ) ≃ r_grav(m_Λ, τ_0). In other words, it is the unique mass-scale whose quantum mechanical (Compton) radius is equal to its classical gravitational (turn-around) radius in the presence of dark energy. This gives an alternative interpretation of the stability condition m ≥ m_Λ - for smaller masses, the gravitational turn-around radius lies within the Compton wavelength of the particle. Considering the ranges of r for which the canonical quantum uncertainty is greater than or less than the gravitational uncertainty in the DE-UP-2 (86), we may define a cross-over scale r_eq. Setting r_eq equal to the present day turn-around radius of an object of mass m (and again neglecting numerical factors of order unity) singles out a unique mass-scale. For the critical mass m_crit, we have r_eq ≃ r_min ≃ λ_C ≃ (∆x_canon.)_min ≃ ∆x_grav ≃ (∆x_total)_min ≃ (l²_Pl l_dS)^{1/3}, where we recall that r_min is the probe distance that minimizes the total uncertainty, yielding Eq. (90), and r_grav(τ_0) ≃ (l⁴_Pl l⁵_dS)^{1/9}.
In general, we note that, when r is approximately equal to the present day turn-around radius, the gravitational and dark energy contributions to the local space-time curvature are comparable. Beyond this range, the classical gravitational influence of the particle is effectively negligible, in comparison to the repulsive effect of dark energy. In terms of space-time curvature, for r ≳ r_grav(τ_0), the additional contribution to the total curvature due to m is less than the background value ∼ Λ. However, in order for the quantum gravitational influence of the particle to be considered negligible, it must induce metric fluctuations smaller than the background average, which are believed to be of order ∼ l_Pl [70][71][72]. We now consider this scenario in detail.
To begin with, we note that, for (∆x_canon.)_min to be super-Planckian at r_grav(τ_0) requires m ≲ m'_dS = m²_Pl/m_dS, which is clearly satisfied for any physically realizable mass, up to and including the present day mass of the Universe. However, for ∆x_grav to be super-Planckian at the turn-around radius requires m ≳ m_Λ ≃ √(m_Pl m_dS).
This result implies that metric fluctuations of order ∼ l Pl are associated with pure (empty) de Sitter space, since m Λ may also be interpreted as the mass of an effective dark energy 'particle' [58]. It therefore follows that, for any mass larger than m Λ , the quantum gravitational influence of the particle at its turn-around radius will be non-negligible, in comparison to the magnitude of the background metric fluctuations, even if its classical gravitational influence may be ignored. This is an important point, and may be relevant to future experimental attempts to distinguish between classical and quantum gravitational phenomenology predicted by the DE-UP model. Finally, it is straightforward to determine the ranges of ∆v (or equivalently ∆p), r and m for which the three terms in the DE-UP-1 (84) satisfy ∆x ≥ ∆x recoil ≥ ∆x grav , or any other ordering. The results are summarized, for general values of α and β , in Table 1.
For r ≲ (l_Pl l²_dS)^{1/3}(m/m_Pl), low-momentum states are given by 1, intermediate-momentum states by 2-3 and high-momentum states by 4. As r → 0, the limits in 1 and 4 tend to zero and infinity, respectively. For r ≳ (l_Pl l²_dS)^{1/3}(m/m_Pl), low-momentum states are given by 5, intermediate-momentum states by 6-7 and high-momentum states by 8. As r → ∞, the limits in 5 and 8 tend to zero and infinity, respectively.
Hence, for r ≲ (l_Pl l²_dS)^{1/3}(m/m_Pl), ∆x_grav may dominate ∆x_recoil, but not ∆x, in the low-momentum regime, or ∆x, but not ∆x_recoil, in the high-momentum regime. However, it may also dominate both in the intermediate-momentum regime. For r ≳ (l_Pl l²_dS)^{1/3}(m/m_Pl), the situation is similar in the 'low-' and 'high-' momentum regimes - though these now correspond to different physical ranges of momentum uncertainty - but is reversed in the intermediate regime, where ∆x_grav is subdominant to both ∆x and ∆x_recoil.
From the point of view of future experiments, the r ≳ (l_Pl l²_dS)^{1/3}(m/m_Pl) regime is more accessible, and we are free to choose the ratio of the probe distance to the mass of the probe particle, r/m, to lie in this range. In this case, the very high- and very low-momentum regimes are where we may hope to observe modifications of canonical quantum dynamics. Nonetheless, the observability of these effects depends, ultimately, on the ratio of ∆x_grav to the remaining (non-negligible) canonical uncertainty term.
When the DE-UP-1 (84) is minimized with respect to ∆v, yielding the DE-UP-2 (86), ∆x ≃ ∆x_recoil ≃ (∆x_canon.)_min and the value of ∆p is fixed in terms of r by Eq. (85). Under these conditions, Table 1 simplifies accordingly. Hence, for r ≳ (l_Pl l²_dS)^{1/3}(m/m_Pl), ∆x_grav is always subdominant to (∆x_canon.)_min. That said, the two need not, necessarily, be of comparable magnitude in order for ∆x_grav to be detectable. The possibility of experimentally testing the DE-UP-1 (84) using current technology will be addressed in a future publication, but is discussed briefly in Sec. V.
Before concluding this subsection, we note that the minimum mass-scale m_Λ = 4.832 × 10⁻³⁶ g (104) is compatible with the current upper bound on the average neutrino mass obtained from the Planck mission data, m_ν ≤ 0.23 eV ≃ 4.100 × 10⁻³⁴ g [42]. According to the arguments presented here, m_Λ may be interpreted as the mass of the electron neutrino, which corresponds to the mass of the lightest possible neutral particle in a dark energy Universe with Λ ≃ 10⁻⁵⁶ cm⁻².
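As a quick numerical illustration of the order-of-magnitude statements above, the following Python snippet (not part of the original analysis; the constants used and the coefficient-free definition m_Λ ≃ √(m_Pl m_dS) are assumptions on our part) reproduces the relevant scales:

import math

hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8   # SI units (approximate values)
Lam = 1.1e-52                                 # cosmological constant, m^-2 (approximate)

l_dS  = math.sqrt(3.0 / Lam)                  # de Sitter horizon ~ 1.7e26 m
m_Pl  = math.sqrt(hbar * c / G)               # Planck mass ~ 2.2e-8 kg
m_dS  = hbar / (l_dS * c)                     # first de Sitter mass ~ 2e-69 kg

m_Lam = math.sqrt(m_Pl * m_dS)                # ~ sqrt(m_Pl m_dS), neglecting O(1) factors
m_nu  = 0.23 * 1.602e-19 / c**2               # 0.23 eV/c^2 converted to kg

print(f"m_Lambda ~ {m_Lam*1e3:.1e} g")        # ~ 7e-36 g, cf. 4.832e-36 g quoted above
print(f"0.23 eV/c^2 = {m_nu*1e3:.1e} g")      # ~ 4.1e-34 g, cf. value quoted above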
As shown in [58], m Λ may also be interpreted as the effective mass of a dark energy particle. In this picture, the dark energy field is composed of a 'sea' of quantum particles, each occupying a volume ∼ l 3 Λ . Under these conditions, and if dark energy particles are charge-neutral but fermionic, the usual laws of quantum mechanics imply that they will readily pair-produce. However, this is impossible without a concomitant expansion in space itself. (In short, 'empty' space is, in fact, full of dark energy particles.) Borrowing a term from basic chemistry to describe this state of affairs, we may say that the space is saturated. It is straightforward to see that, if the probability of pair-production remains constant, the scale factor of the Universe will grow exponentially since the number of particles produced in any given volume, per unit time, is proportional to the volume itself. This leads naturally to a de Sitter-type expansion, da/dτ ∝ a, in which the macroscopic dark energy energy density remains constant, in spite of spatial expansion. For particles of mass m Λ , the additional (positive) energy of the newly created rest mass is exactly counterbalanced by the additional (negative) energy of its gravitational field, which may be seen by considering the Komar mass [85].
However, if this picture is correct, we may expect 'empty' three-dimensional space to exhibit granularity on scales ∼ l Λ . For this reason, it is particularly intriguing that recent experiments provide tentative hints of fluctuations in the strength of the gravitational field on scales comparable to l Λ , which is of order ∼ 0.1 mm [86,87]. Though many theoretical models may account for this, including those exhibiting spatial variation of the gravitational constant G, the influence of dark energy particles on sub-millimetre gravitational interactions cannot be discounted a priori.
D. DE-UP as MLUR -application to charged particles
In this subsection, we consider the implications of the DE-UP derived in Sec. III B for charged particles. As we saw in Sec. III C, combining the existence of a classical minimum density, which follows from the generalised Buchdahl inequalities for uncharged particles in the presence of dark energy [79][80][81][82], with the standard expression for the Compton wavelength, gives rise to a minimum mass for compact, stable, gravitating, chargeneutral and quantum mechanical objects. Furthermore, this mass-scale is physically interesting as it is comparable to present day bounds on the mass of the lightest known particle, the electron neutrino [58]. Combining the minimum-mass bound for neutral particles with the DE-UP also yields interesting results, since it implies that both the canonical and gravitational uncertainty terms, (∆x canon. ) min and ∆x grav , as well as the probe distance r, always remain super-Planckian.
Similarly, generalised Buchdahl inequalities exist for charged particles, both in the presence and absence of dark energy [57,83,84]. However, in this case, they fix only the minimum value of the radius-to-mass ratio, R/m, of a stable compact object, where R is the classical radius. Alternatively, they fix the minimum classical radius in terms of m, or vice versa. This bound may again be combined with the existence of a minimum quantum mechanical radius, λ_C ∝ 1/m, and with the existence of a minimum total uncertainty given by Eq. (90). The latter implies that the mass of the object may be written in terms of the critical mass, m_crit ≃ (m²_Pl m_dS)^{1/3}, multiplied by an arbitrary constant α_Q, as in Eq. (93).
By combining all three mass bounds -that is, by assuming that a charged particle exists in nature whose total uncertainty minimizes the DE-UP, according to Eq. (90), whose classical radius satisfies the appropriate generalised Buchdahl inequalities [57,83,84], and whose Compton radius is given by the canonical formula -we fix the value of the free parameter α Q in terms of the the physical charge (Q) of the system. This, in turn, allows us to obtain an explicit expression for the mass m in terms of Q and the physical constants {G, c, , Λ}. Setting Q = ±e and evaluating this expression numerically, the mass-scale obtained is comparable to the measured value of the lightest charged particle, the electron [57]. According to our procedure, this may be interpreted as the minimum possible mass for a compact, stable, gravitating, charged and quantum mechanical object, which also obeys the DE-UP proposed in Sec. III B.
We proceed as follows. The generalized Buchdahl inequality for a charged compact object in the presence of a positive cosmological constant is given in [83]. For R²Λ ≪ 1, the effect of dark energy is subdominant to electrostatic repulsion and Eq. (139) reduces to a bound involving only 2Gm/(c²R) and the charge Q. Taylor expanding this expression shows that, to leading order, we have R ≳ (3/4)Q²/(mc²) (142). In this limit (and to within numerical factors of order unity), we recover the standard expression for the classical radius of a 'particle' with mass m and charge Q, i.e. the radius at which the electrostatic potential energy associated with the object is equal to its rest energy, mc².
In special relativity, this is roughly the radius the object would have if its mass were due only to electrostatic potential energy. Nevertheless, Eq. (142), which was originally obtained in [84], is a fully general-relativistic result. The fact that the standard formula for the classical radius of a charged particle is recovered via the Taylor expansion (141) simply reflects the fact that Eqs. (139)-(140) remain valid, even in the weak gravity limit.
Next, we note that a natural way to define the quantum gravitational regime for a fundamental particle is to require its positional uncertainty, due to combined canonical and quantum gravitational effects, to be greater than or equal to its classical radius, ∆x total = (∆x canon. ) min + ∆x grav ≥ R. This is essentially the inverse of the requirement for classicality, that the macroscopic radius of an object be larger than its total positional uncertainty. Thus, the conditions correspond to a regime in which the particle behaves 'quantum-gravitationally', but in which specific quantum gravitational effects are subdominant to the standard Compton uncertainty.
Assuming that the total uncertainty takes its minimum possible value, given by the DE-UP-3 (90), we may then parameterize the quantum-gravitational condition via a constant γ ≤ 1 in this regime (Eq. (144)). Likewise, we may set R = (∆x_total)_min/ξ, where ξ ≥ 1, if we expect the object to display no classical behaviour (Eq. (145)). Clearly, the two conditions are compatible, with equality holding if and only if γ = ξ = 1. For convenience, we now rewrite the three independent expressions we have obtained for m throughout the preceding sections of this work, Eqs. (147a)-(147c), where q_Pl = √(ℏc) is the Planck charge. Equations (147a) and (147b) are simply Eqs. (93) and (144) restated. Equation (147c) corresponds to saturating the bound in Eq. (142) by assuming that R = (∆x_total)_min/ξ represents the value of the classical radius that minimizes the ratio R/m, for a sphere of mass m and charge Q. (For the sake of generality, we have retained the 3/4 numerical factor in Eq. (142) but kept the numerical constants α' and β' as unfixed parameters for now.) Thus, m in Eqs. (147a)-(147b) is the mass of the body for which the total uncertainty of the object, given by the DE-UP, is minimized for (∆v)_max = α_Q^{-1}c, whereas the m in Eq. (147c) is the mass of a body for which the classical bound (142) is saturated. As shown in [57], this is also the radius at which the classical gravitational energy is minimized. We proceed by assuming the equivalence of the two masses, which is equivalent to assuming that the particle saturates all available bounds simultaneously.
The resulting model has much in common with Dirac's extensible model of the electron [88], which was intended to remove singularities from the electric and gravitational fields of charged particles, except that, here, the classical electron is considered as a three-dimensional fluid sphere, rather than a two-dimensional shell. Nonetheless, the relevant Buchdahl bounds can be re-formulated in terms of two-dimensional (surface) quantities [57].
By equating the three expressions for m in Eqs. (147a)-(147c), we may fix the relations between the three unknowns γ, ξ and α_Q, explicitly. For our purposes, the key point is that, for ξ ∼ O(1) (i.e. when (∆x_total)_min ≃ R, its minimum possible value), we have γ ≃ α_Q = Q²/q²_Pl. Equations (147b) and (147c) then immediately imply α_Q = Q²/q²_Pl ≲ 1 or, equivalently, Q ≲ q_Pl (149). This gives an interesting (and self-consistent) interpretation of the Planck charge q_Pl as the maximum possible charge of a stable, gravitating, quantum mechanical object, obeying the DE-UP. The bound (149) may also be obtained in a more direct way by combining the general relativistic result (142) with canonical quantum theory. Rewriting this as Q² ≲ (4/3)Rmc² = (4/3)q²_Pl(R/λ_C) and taking the limit R → λ_C yields the same result.
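The final step described above can be written out explicitly (Gaussian units; a sketch with the 4/3 factor retained as in Eq. (142)):
\[
Q^{2} \;\lesssim\; \tfrac{4}{3}\,R\,mc^{2} \;=\; \tfrac{4}{3}\,q_{\rm Pl}^{2}\,\frac{R}{\lambda_{\rm C}}
\;\;\xrightarrow{\;R\,\to\,\lambda_{\rm C}\;}\;\; Q \;\lesssim\; q_{\rm Pl} = \sqrt{\hbar c} \approx 11.7\, e ,
\]
so that the Planck charge exceeds the electron charge by roughly a factor of 1/√α_e.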
For the sake of concreteness, we now set yielding and choose the values α = 1/2 √ 2 and β = √ 2 obtained previously in (114), so that and We then have Though the precise numerical factors chosen here are to some degree arbitrary, we see that, for ξ ∼ O(1), the following, general, order of magnitude relations hold, where α Q is given by Eq. (151). Keeping in mind the alternative measurement procedure outlined in Sec. III B, the physical picture we obtain is as follows. A particle of mass m and charge Q 'measures' the distance to its outermost horizon, the de Sitter radius, by means of a two-stage photon exchange. In the first stage, photons (either real or virtual) are exchanged between the particle CoM and a 'detector' at r. The 'detector' simultaneously (or near simultaneously) receives real or virtual photons from the de Sitter horizon. The minimum total uncertainty in the position of the particle is also the minimum uncertainty in the measurement of l dS .
However, as discussed in Sec. III B (and at length in the Appendix), for r < λ C , the 'detector' is simply the particle itself and the first part of the 'measurement' corresponds to a self-interaction. What the relations above show is that the total uncertainty given by the DE-UP-1 (84) obtains its minimum possible value, given by the DE-UP-3 (90), when the charge-squared to mass ratio of the particle Q 2 /m, and the corresponding self-interaction distance r min , are fixed according to Eq. (155). Under these circumstances, the order of magnitude values of R, (∆x total ) min and λ C are also fixed, yielding a strict hierarchy of length-scales associated with m. These are related via the parameter α Q = Q 2 /q 2 Pl according to Eq. (156).
That the minimum uncertainty in the position of the particle is larger than the probe distance r need not concern us, since r_min may be associated with the energy scale of the self-interaction via the usual Compton formula, giving E_max ≃ ħc/r_min ≃ (q_Pl⁴/Q⁴)(ħc/λ_C) as a natural UV 'cut-off' in the DE-UP model. Though not strictly a cut-off, attempting to probe self-gravitating particles on scales r < r_min (E > E_max) is self-defeating, since this only increases ∆x_total.
Thus, in this picture, a particle that interacts with its environment (including self-interactions), over the range r_min ≤ r ≤ l_dS, naturally acquires a charge-squared to mass ratio that satisfies the bound (157). This is obtained simply by rewriting the expression for m in Eq. (155) and reinserting the directional inequality originally present in Eq. (142). Thus, it is straightforward to see that, to within numerical factors of order unity, saturating the bound (157) is equivalent to setting Q² = e², which yields the correct order of magnitude value of the electron mass, i.e. m = α_e(m_Pl² m_dS)^(1/3) = 7.332 × 10⁻²⁸ g ≃ m_e = 9.109 × 10⁻²⁸ g, where α_e = e²/q_Pl² is the usual fine structure constant. Alternatively, Eq. (157) may be recast as an upper bound on Λ, Eq. (159), which is close to the best-fit value obtained from current cosmological observations [41,42]. The result (159) was previously obtained by Harko and Boehmer in [82], in which it was expressed in the form Λ ≃ l_Pl⁴/r_e⁶, where r_e = e²/(m_e c²) is the classical electron radius, and justified on the basis of a 'Small Number Hypothesis' (SNH). By analogy with Dirac's Large Number Hypothesis (LNH), which posits that the numerical equality between two very large quantities with a very similar physical meaning cannot be a simple coincidence [89][90][91][92], Harko and Boehmer proposed the same for small numbers, though we note that the reciprocal of a large number is a small number, so that the two hypotheses may, in fact, be considered equivalent. (For contemporary viewpoints on the LNH and current status reports, see [93][94][95].) We stress, however, that in this work, the identification (159) is not based on numerical coincidence. Rather, our requirement that the total uncertainty ∆x_total, incorporating canonical quantum and gravitational effects according to the DE-UP, be minimized for a stable, compact, charged, gravitating and quantum mechanical object realised in nature, leads inevitably to Eq. (159).
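The two order-of-magnitude claims above are easy to reproduce numerically. The short sketch below assumes l_dS = √(3/Λ), m_dS = ħ/(l_dS c) and Λ ≈ 1.1 × 10⁻⁵² m⁻² (a Planck-2018-like value); these conventions and the neglect of O(1) prefactors are our assumptions, so only the order of magnitude should be compared.

```python
# Sketch: order-of-magnitude checks of the electron-mass estimate and of the
# Harko & Boehmer-type relation Lambda ~ l_Pl^4 / r_e^6.
import math

G    = 6.67430e-11        # m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # J s
c    = 2.99792458e8       # m/s
Lam  = 1.1e-52            # m^-2 (assumed observed value)
alpha_e = 1/137.035999

m_Pl = math.sqrt(hbar * c / G)       # Planck mass, ~2.18e-8 kg
l_Pl = math.sqrt(hbar * G / c**3)    # Planck length, ~1.6e-35 m
l_dS = math.sqrt(3 / Lam)            # de Sitter radius (assumed definition)
m_dS = hbar / (l_dS * c)             # de Sitter (Compton) mass, ~2e-69 kg

m_est = alpha_e * (m_Pl**2 * m_dS) ** (1/3)
print(f"alpha_e (m_Pl^2 m_dS)^(1/3) = {m_est*1e3:.3e} g   vs  m_e = 9.109e-28 g")

r_e = 2.8179403262e-15               # classical electron radius, m
print(f"l_Pl^4 / r_e^6              = {l_Pl**4 / r_e**6:.2e} m^-2  vs  Lambda ~ 1.1e-52 m^-2")
```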
Remarkably, an algebraic formula for Λ, having the same general form as Eq. (159), namely Eq. (160), where m is a fundamental mass-scale found in atomic physics, was originally proposed by Zel'dovich in 1968 [96]. The origins of this proposal go back to Dirac's formulation of the LNH in 1937, in which he noted the approximate order of magnitude equivalence between several large dimensionless numbers obtained from atomic physics and cosmology [89]. These included the ratio of the present day radius of the Universe r_U to the classical electron radius r_e, and the ratio of the electric and gravitational forces between an electron and a proton, namely r_U/r_e ≃ 10⁴⁰ and e²/(G m_e m_p) ≃ 10³⁹, where m_p = 1.673 × 10⁻²⁴ g is the proton mass. Assuming that this equivalence was not coincidental, he formulated the Large Number Hypothesis (LNH), which required the existence of a time-varying gravitational constant, G(t) ∼ 1/t, under the assumption that Λ = 0 [89,91,92].
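Both dimensionless ratios are straightforward to evaluate with present-day constants. In the sketch below, r_U is taken to be a Hubble-type radius c/H₀ with H₀ ≈ 70 km s⁻¹ Mpc⁻¹; this choice is an assumption, since Dirac's 'radius of the Universe' is defined only to order of magnitude.

```python
# Sketch: the two 'large numbers' quoted above, evaluated with modern values.
import math

G, c = 6.67430e-11, 2.99792458e8
eps0, e = 8.8541878128e-12, 1.602176634e-19
m_e, m_p = 9.1093837015e-31, 1.67262192369e-27   # kg
r_e = 2.8179403262e-15                            # classical electron radius, m

H0 = 70 * 1e3 / 3.0857e22                         # s^-1 (assumed)
r_U = c / H0

ratio_lengths = r_U / r_e
ratio_forces  = e**2 / (4*math.pi*eps0) / (G * m_e * m_p)
print(f"r_U / r_e         ~ 10^{math.log10(ratio_lengths):.1f}")
print(f"e^2 / (G m_e m_p) ~ 10^{math.log10(ratio_forces):.1f}")
```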
In 1968, Zel'dovich noted the same (approximate) equivalence between the ratio of r_U and the Compton wavelength of the proton, λ_p ≡ h/(m_p c), and between λ_p and the proton's Schwarzschild radius, r_S(m_p). In addition, he noted that, if Λ ≠ 0 and r_U ∼ 1/√Λ (contrary to Dirac's original assumptions), then [96] However, here, it is important to note that the numerical equivalence in Eq. (162) holds only if λ_p denotes the true Compton wavelength, defined with respect to Planck's constant h, not the reduced Compton wavelength, defined with respect to ħ. For λ_p ≡ ħ/(m_p c), we obtain Eq. (160) with m = m_p, yielding Λ ≃ m_p⁶ G²/ħ⁴ ≃ 10⁻⁵³ cm⁻², since (2π)⁴ ≃ 1559. Clearly, the latter estimate is incompatible with current observational bounds on the value of Λ [41,42]. (See [97] for further discussion of this point.) However, Zel'dovich's observation that, if a positive cosmological constant (and hence a de Sitter radius) exists in nature, the physics of sub-atomic particles may be profoundly affected, remains valid. In particular, exchanging m_p → m_e/α_e in the formula (160) yields the upper bound given by Eq. (159), which is compatible with the current experimental value of Λ.
Finally, we note that, if the identification (159) results from fundamental physical considerations (as claimed here) and is not simply a numerical coincidence, it is all the more remarkable, since it implies not only a connection between cosmological and atomic physics, à la Dirac and Zel'dovich, but, perhaps even more surprisingly, an intimate connection between the very essence of 'dark' and 'light' physics (i.e., Λ and e) [97]. In fact, several models incorporating non-minimal couplings between dark energy and the electromagnetic sector have already been proposed in the literature, as solutions to problems in contemporary cosmology [98][99][100][101][102]. The cosmological implications of the Λ ∝ α_e⁻⁶ model, based on Eq. (159), and its various motivations [57,[103][104][105]], were investigated in [106]. An alternative form of MLUR, also incorporating the effects of dark energy/the de Sitter radius (though not based on the arguments presented in Sec. III B), was given in [107]. The possible relation of (generic) cosmological horizons with the GUP was also considered in [108].
E. Holography
It is straightforward to see that, for any particle which minimizes the total uncertainty given by the DE-UP according to Eq. (90), a holographic relation holds between the bulk and the boundary of the Universe. Specifically (Eq. (163)), the number of Planck sized 'bits' on the de Sitter boundary is equal to the number of minimum-volume 'cells', V_cell ≃ (∆x_total)³_min, in the bulk [57]. It is interesting to note that (∆x_total)_min may also be regarded as the classical radius of a 'particle' with both minimum energy, E_dS = m_dS c², and minimum energy density. As shown in [58], and discussed in Sec. III B, a massive particle with rest energy E_dS would be unstable due to the effects of dark energy. However, E_dS may correspond to the energy of a photon with maximum wavelength, λ ≃ l_dS. Thus, (∆x_total)_min may also be interpreted as the classical radius of a localized, minimum-energy photon. A space-filling 'sea' of such photons would have the same energy density as the dark energy field [58].
In addition, we may consider a maximum-mass, maximum-density state, for which ρ ≃ ρ_Pl and the total energy is comparable to the total mass-energy of the present day horizon. The classical radius thereby obtained corresponds to the smallest possible volume within which the total mass of the present day horizon may be confined, without exceeding ρ_Pl. We then have [57] Thus, the length-scale (∆x_total)_min ≃ (l_Pl² l_dS)^(1/3) corresponds to at least three physically interesting scenarios in the context of the DE-UP model. It may be interpreted as (i) the maximum classical radius of a minimum-energy, minimum-density 'particle', (ii) the minimum classical radius of a maximum-energy, maximum-density 'particle', and (iii) the classical radius/minimum total uncertainty of the electron, for cosmic epochs greater than or equal to the present day, τ ≥ τ₀. All three interpretations satisfy the general holographic relation, Eq. (163), which also remains valid for earlier epochs under the substitution l_dS → r_H(τ).
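The holographic counting is fixed by the length-scale (l_Pl² l_dS)^(1/3) alone, so it can be checked directly. The sketch below again assumes l_dS = √(3/Λ) with Λ ≈ 1.1 × 10⁻⁵² m⁻² and ignores O(1) factors; by construction the number of bulk cells equals the number of boundary bits, both of order 10¹²².

```python
# Sketch: holographic counting implied by (Delta x_total)_min ~ (l_Pl^2 l_dS)^(1/3).
import math

G, c, hbar, Lam = 6.67430e-11, 2.99792458e8, 1.054571817e-34, 1.1e-52
l_Pl = math.sqrt(hbar * G / c**3)
l_dS = math.sqrt(3 / Lam)
dx_min = (l_Pl**2 * l_dS) ** (1/3)      # minimum total uncertainty, ~3.5e-15 m

n_bits  = (l_dS / l_Pl) ** 2            # Planck-sized 'bits' on the boundary
n_cells = (l_dS / dx_min) ** 3          # minimum-volume 'cells' in the bulk
print(f"(Delta x_total)_min ~ {dx_min:.2e} m")
print(f"boundary bits       ~ 10^{math.log10(n_bits):.1f}")
print(f"bulk cells          ~ 10^{math.log10(n_cells):.1f}   (equal by construction)")
```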
Furthermore, we note that, if the probability of a single cell of space 'pair-producing' within a time interval ∆τ = t_Pl = l_Pl/c, due to the production of dark energy particles, is given by where V₀ denotes the initial volume, this leads naturally to a de Sitter-type expansion, modeled by the differential equation or, equivalently [85]. The production of a single dark energy particle then requires the production of n_cell = V_Λ/V_cell ≃ l_Λ³/(l_Pl² l_dS) = N^(1/4) cells of space, which, in turn, implies that the probability of a dark energy particle pair-producing within ∆τ = t_Pl is given by Since there are n_DE ≃ l_dS³/l_Λ³ = N^(3/4) dark energy particles within the de Sitter horizon, this implies that one dark energy particle is produced somewhere in the observable Universe during every Planck-time interval. Remarkably, this rate of pair-production is capable of giving rise to the accelerated expansion of the Universe observed at the current epoch.
In this model, the observed vacuum energy is really the energy associated with the dark energy field: its fundamental dynamics remain unknown, but are assumed to be associated with the mass-scale m_Λ, and excitations of the vacuum state correspond to the production of charge-neutral particles with this mass. Thus, λ_C(m_Λ) = l_Λ provides a natural cut-off for the field modes - with higher-energy excitations yielding pair-production of dark energy particles throughout space - so that The precise dynamics, or 'true' nature, of the dark energy field are essentially unobservable at the current epoch, as the field remains 'trapped' in a Hagedorn-type phase in which any increase in kinetic energy, even that caused by random collisions between neighbouring dark energy particles due to quantum uncertainty, results in pair-production rather than an increase in temperature/kinetic energy. (The interested reader is referred to [85] for a more in-depth discussion of this point.) The temperature associated with the field is therefore constant, on large scales, and is comparable to the present day temperature of the CMB. Here, the factor of 8π is included by analogy with the expression for the Hawking temperature, and m_Pl²/m_Λ again denotes the dual mass, which is equal to the total mass-energy contained in the dark energy field within the de Sitter horizon.
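The granularity scale, the dark-energy particle mass and the associated temperature can be estimated from the relations n_cell = N^(1/4) and n_DE = N^(3/4) quoted above. The sketch below assumes l_Λ = √(l_Pl l_dS), m_Λ = ħ/(l_Λ c), a dual mass m_Pl²/m_Λ, and the Hawking-type temperature T = ħc³/(8πG k_B M); these identifications and Λ ≈ 1.1 × 10⁻⁵² m⁻² are our assumptions, and only orders of magnitude should be read off.

```python
# Sketch: dark energy granularity scale, particle mass and temperature.
import math

G, c, hbar, kB = 6.67430e-11, 2.99792458e8, 1.054571817e-34, 1.380649e-23
Lam = 1.1e-52
l_Pl = math.sqrt(hbar * G / c**3)
m_Pl = math.sqrt(hbar * c / G)
l_dS = math.sqrt(3 / Lam)

l_Lam  = math.sqrt(l_Pl * l_dS)                    # ~0.05 mm
m_Lam  = hbar / (l_Lam * c)                        # dark energy particle mass
m_dual = m_Pl**2 / m_Lam                           # assumed 'dual' mass
T_Lam  = hbar * c**3 / (8 * math.pi * G * kB * m_dual)

print(f"l_Lambda ~ {l_Lam*1e3:.2f} mm")
print(f"m_Lambda ~ {m_Lam*c**2/1.602176634e-19:.1e} eV")
print(f"T_Lambda ~ {T_Lam:.1f} K   (compare T_CMB ~ 2.7 K)")
```

With these conventions the Compton wavelength comes out near 0.05 mm, the particle mass near 4 × 10⁻³ eV, and the temperature near 2 K, consistent with the order-of-magnitude statements made in the text.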
Though this may seem like another 'miraculous' coincidence, in the dark energy model implied by the DE-UP it is simply a restatement of the standard coincidence problem of cosmology, whereby the Universe begins a phase of accelerated expansion at the present epoch, when r_U ≃ l_dS and Ω_M ≃ Ω_Λ and, hence, T_CMB ≃ T_Λ. The coincidence remains: why do we live at precisely this epoch? However, no new coincidences are required in order to explain Eq. (171) in the context of the DE-UP.
IV. COSMOLOGICAL CONSEQUENCES OF THE DE-UP
At epochs prior to the present day, τ < τ₀, the cosmic horizon is smaller than the de Sitter radius and, strictly, we must substitute l_dS → r_H(τ) in Eq. (84) and all subsequent formulae derived from it. In this case, the upper bound on the charge-squared to mass ratio for stable charged particles obeying the DE-UP, Eq. (157), is lowered and drops below the charge-squared to mass ratio of present day electrons. Hence, the DE-UP model strongly suggests time-variation of either or both of e and m_e, assuming that {G, c, ħ, Λ} are genuine universal constants. Similar arguments apply to the minimum mass for neutral particles, which is required to ensure that (∆x_canon.)_min, ∆x_grav and r each remain super-Planckian.
In the case of a running gravitational coupling [109][110][111][112], variable speed of light [113][114][115][116][117][118][119][120], or dynamical dark energy field [43][44][45][46], the situation is even more complicated, and it may be extremely difficult, in practice, to distinguish variation in e and/or m_e, or in the minimum neutral particle mass, from other effects. (See [121][122][123][124][125] for current bounds on varying α_e theories, including their effects on cosmic string phenomenology [126,127], and [128][129][130][131][132] for more general models involving temporal and/or spatial variations of multiple physical constants.) However, though a thorough analysis of the cosmological implications of the DE-UP model must be left to a later publication, we regard this prediction as a positive aspect of the model since, in principle, future observations and/or analysis of currently available data may be capable of falsifying it. Alternatively, it may be possible that, despite the two not being in causal contact for τ < τ₀, the existence of an asymptotic de Sitter horizon affects sub-atomic particle dynamics through some non-local mechanism, such as (acausal) entanglement [133], so that Eq. (157) remains valid at all epochs. Nonetheless, based on the analysis presented in Sec. III, the DE-UP model strongly favours time-variation of the ratio e²/m_e, in line with the bound This corresponds to a minimum holographic cell radius which is similar to the MLUR for an expanding Universe recently suggested by Ng [134], but with the cosmological horizon r_H(τ), given by Eq. (31), in place of the Hubble horizon, c/H(τ). The equivalent time-variation of the neutral particle limit is where m_H(τ) = ħ/(r_H(τ)c) is the Compton mass associated with the horizon distance at time τ. We note that, in general, the problem of how (if at all) local physics is affected by the cosmological expansion remains an important open question [135]. Nonetheless, it is interesting to note the similarity of the minimum particle mass (175) with the (running) dark energy mass-scale predicted by 'agegraphic' [136,137] and holographic [138] dark energy models previously proposed in the literature. A further subtlety of the model stems from the fact that, even if we assume Eqs. (173)-(175) to be true, it is not clear whether the resulting time-dependent quantities should be interpreted as bare values or renormalized values of m_e, m_νe and e. Since the standard model couplings and masses are energy-dependent due to renormalization group flow and, since a reduction in r_H(τ) is equivalent to increasing the IR cut-off for interactions in the DE-UP model, the relationship between these two (energy-dependent) factors may be non-trivial. What is clear is that, within the limits of the non-relativistic (i.e., non-Lorentz invariant) theory formulated here, such questions may be very difficult to answer. To satisfactorily address them, we need to go beyond the non-relativistic approximation.
It is therefore interesting to note that the relation (159) was originally found by Nottale [103] using a renormalization group approach. He argued that, like other fundamental 'constants', the cosmological constant is in fact a scale-dependent quantity, obeying an (as yet unknown) renormalization group equation. If so, its present day value may be split into a 'bare' gravitational part plus a scale-dependent part, corresponding to the quantum mechanical vacuum energy, i.e. Λ(r) = Λ_G + Λ_QM(r). Following Zel'dovich [96], who noted that the bare zero-point energy is unobservable, he then argued that the observable contribution is given by the gravitational energy of virtual particle-antiparticle pairs, continually created and annihilated in the vacuum, so that where m(r) ≃ ħ/(cr) is the effective mass of the particles at scale r. This gives rise to a scale-dependent formula for the vacuum energy density, where ρ_Pl = (3/4π)m_Pl/l_Pl³ is the Planck density. Assuming a renormalization group equation of the form where γ(ρ_vac) is an unknown function, which (he also assumed) could be expanded to first order for ρ_vac ≪ ρ_Pl, giving γ(ρ_vac) ≃ γ₀ + γ₁ρ_vac, yields where ρ₀ = −γ₁/γ₀ and r₀ is an integration constant. Comparing Eqs. (178)-(179) then gives γ₁ = −6, ρ₀ = ρ_Pl (γ₀ = 6/ρ_Pl) and r₀ = l_Pl. Hence, although Λ is a manifestly scale-dependent quantity, its low-energy asymptotic value, predicted by Eq. (179), is scale-independent, in agreement with present day observations [41,42]. Next, he argued that e⁺e⁻ pair-production represents the main contribution to the vacuum energy at late times (τ ≃ τ₀), so that the transition between the scale-dependence and scale-independence of Λ should be identified with the cross-section for this interaction. Finally, he argued that the latter is equal to the Thomson scattering cross-section, which is approximately equal to the square of the classical electron radius, σ_T ≃ πr_e². This is equal to the e⁺e⁻ annihilation cross-section evaluated at E ≃ m_e c². In other words, the Thomson scattering length/classical electron radius r_e represents the radius of the annihilation cross-section - which is an energy-dependent quantity r(E) - evaluated at the rest-mass scale m_e. Hence, by identifying ρ_vac ≡ ρ_Λ and r ≡ r_e in Eq. (179), he obtained the relation (181), which is equivalent to Eq. (159). This is a remarkable achievement. However, we note that the argument above implicitly assumes that the 'gravitational cut-off', i.e., the UV cut-off in the expression for the gravitational self-energy of a particle pair-produced in the vacuum, Eq. (176), is equal to the average inter-particle distance. A priori, there is no reason why this should be the case. In fact, the most natural assumption, for virtual particles pair-produced in the vacuum, is that the average inter-particle distance is comparable to the Compton wavelength, in this case λ_C(m_e). In this, more general, scenario, Eqs. (176)-(177) are replaced by respectively, where r_min denotes the UV cut-off for the gravitational self-energy. Interestingly, if we set r_min ≃ α_e²λ_C(m_e), the minimum 'probe' distance for a particle of charge ±e predicted by the DE-UP (155), identifying ρ_vac ≡ ρ_Λ in Eq. (183) also yields Eq. (181). In this sense, the predictions of the DE-UP model may also be considered as compatible with Nottale's analysis.
Finally, we note that, in an expanding Universe, a vacuum energy of the form coupled with a Nottale-type analysis, analogous to that performed above, gives rise to Eqs. (173)- (175). Alternatively, if for τ τ 0 , this implies and where m H (τ ) = /H(τ ) is the mass associated with the Hubble horizon. As mentioned above, Eq. (187) was previously suggested by Ng [134], and naturally implies a holographic relation between the bulk and the boundary of the Universe.
V. CONCLUSIONS
We have proposed a new minimum length uncertainty relation (MLUR), defined by Eqs. (84)- (90), which incorporates both canonical quantum and gravitational effects in the presence of dark energy, given by a positive cosmological constant Λ > 0. In this model Λ is assumed to be a fundamental constant of nature, giving rise to a constant minimum (vacuum) energy density ρ Λ ∝ Λ at all points in space. The new relation, termed the dark energy uncertainty principle, or DE-UP, is structurally similar to the MLUR proposed by Károlyházy, Eq. (36) [15,16], and reproduced by Ng and van Dam using alternative arguments, Eq. (55) [47,48].
However, while both derivations of Eq. (36)/(55) considered gravitational corrections to canonical (nongravitational) quantum theory, each did so under the assumption that the background space-time was both asymptotically flat and static. Though these assumptions are valid in many physically interesting regimes, it is clear that the discovery of dark energy [38,39] gives rise to a new fundamental length-scale in physics, namely, the de Sitter horizon l dS ∼ 1/ √ Λ, as well as to an associated minimum curvature given by Λ. On cosmological time-scales, it is also clear that the effects of universal expansion on local physics must somehow be taken into account [135]. In the DE-UP, the effects of minimum curvature and of a maximum horizon distance for all observers, including quantum mechanical 'particles', are explicitly accounted for, and the effects of universal expansion are incorporated into the MLUR.
At a technical level, our derivation of the DE-UP closely resembles Ng and van Dam's derivation of Eq. (55). The primary difference is that, whilst they assumed the gravitational uncertainty of a fundamental particle is given by its Schwarzschild radius, we assume it is, instead, given by the irremovable quantum uncertainty inherent in a 'measurement' of the particle's horizon distance, r H (τ ), where τ is the cosmic time. The physical basis for this assumption is straightforward. Since, classically, the distance between the particle and its horizon is exact, any quantum uncertainty inherent in the measurement of r H (τ ) is equivalent to an irremovable uncertainty in the position of the particle itself.
Hence, in order to estimate the uncertainty in a measurement of r_H(τ), including the effects of the particle's gravitational field, we assumed a simple relationship between the classical perturbation of the space-time line element, induced by the presence of the particle (∆s_pert), and the quantum mechanical spread in a superposition of background geometries (∆s), i.e. ∆s_pert ≃ ∆s, Eq. (59). (See also Fig. 3.) This, in turn, allowed us to demonstrate the equivalence of Károlyházy's procedure for 'resolving' space-time intervals, using quantum mechanical particles as 'probes', and the interaction of a particle with its outermost horizon.
Whilst, clearly, this assumption cannot remain valid for macroscopic objects, and must break down at some critical mass and/or length-scale, it leads to a number of interesting and physically viable predictions based on the DE-UP (84)-(90). We note that the scale(s) at which this assumption becomes invalid may be naturally related to Károlyházy's concept of a coherence cell [15,16], though a detailed investigation of this possibility lies beyond the scope of the present paper.
Applying the DE-UP to neutral particles, and requiring all potentially observable length-scales to remain super-Planckian, implies the existence of a minimum mass-scale in nature, which can be expressed in terms of the fundamental constants {G, c, ħ, Λ}. Furthermore, this mass-scale can be derived independently by combining classical minimum mass bounds for stable compact objects, in the presence of dark energy, with the simple requirement of the existence of a Compton wavelength [57]. The DE-UP is thus naturally consistent with known gravitational and quantum mechanical effects, as well as with the presumed minimum resolution due to quantum gravitational effects at the Planck scale [70][71][72].
Evaluating the minimum mass for neutral particles numerically, it is of order 10⁻³ eV, and is consistent with current experimental bounds on the mass of the electron neutrino obtained from Planck satellite data [42]. This mass-scale may also be interpreted as the effective mass of a dark energy 'particle' [57]. Such a model implies that, though the dark energy density is approximately constant on large scales, it may become granular on length-scales of order 0.1 mm, the associated Compton wavelength. With this in mind, it is particularly intriguing that recent submillimetre tests of Newtonian gravity reveal tentative evidence for periodic variation in the gravitational field strength over precisely this length-scale [86,87].
Applying the DE-UP to electrically charged particles, we defined the quantum gravity regime as the regime in which the minimum total uncertainty, including both canonical quantum and gravitational contributions, was larger than (or equal to) the classical radius, but smaller than (or equal to) the Compton radius. Evaluating this condition for a particle of charge e, at the current cosmological epoch τ₀, we obtained the minimum mass of a stable, compact, charged, gravitating and quantum mechanical object, obeying the DE-UP, in terms of the constants {G, c, ħ, Λ, e}. Numerically, this is of order 10⁻²⁸ g, which is consistent with the current measured value of the electron mass m_e [58].
At all epochs, the DE-UP implies the existence of a holographic relation between the bulk and the boundary of the Universe, in which the number of minimum-uncertainty 'cells' in the bulk equals the number of Planck sized 'bits' on the boundary, Eq. (163). However, this strongly implies time-variation of the minimum charge-squared to mass ratio of a stable charged object, under the assumption that {G, c, ħ, Λ} remain constant. Hence, for τ < τ₀, in which r_H(τ) < l_dS ∼ 1/√Λ, the ratio e²/m_e becomes a function of the horizon distance in the DE-UP model. The resulting bound, Eq. (174), closely resembles the MLUR for an expanding Universe recently proposed by Ng [134], but with the particle horizon r_H(τ) in place of the Hubble horizon, c/H(τ). Similar arguments also imply time-variation of the minimum-mass bound for neutral particles, according to Eq. (175).
Hence, although the DE-UP proposed herein suffers from a number of drawbacks, including an incomplete picture of the communication between a particle and its cosmological horizon, and a reliance on the assumption of an intimate connection between classical perturbations and space-time superpositions, we believe it yields sufficiently interesting predictions to be worthy of further study. Therefore, with future high-precision quantum experiments in mind, we have identified two regimes, listed in Table 1, in which the gravitational uncertainty term in the DE-UP dominates at least one of the two positional uncertainty terms obtained from canonical quantum theory. These, together with its prediction of precise time-variation of the ratio e²/m_e and of the minimum neutral particle mass, may render the model falsifiable using table-top measurements and/or cosmological data in the near future.
Specifically, in regard to future lab-based experiments, we note that, since the MLUR proposed herein is structurally similar to that predicted by the K-model [15,16], and since the assumed relation ∆s_pert ≃ ∆s must break down at some mass/length-scale, which may naturally be identified with a dark energy-modified version of Károlyházy's concept of a 'coherence cell', precision measurements of decoherence may be crucial in this regard. Although the total decoherence of micro-objects may be unobservable over realistic time-scales (as in the original K-model), partial decoherence [139][140][141][142] may be probed using existing experimental platforms such as mesoscopic suspended atomic clouds [143,144], opto-mechanical experiments involving trapped micro-spheres or micro-mirrors [145,146], space-based macroscopic quantum resonators [147,148], and neutrino flavour oscillations in existing detection facilities such as IceCube [149][150][151][152]. Such an analysis lies beyond the scope of the present paper and is left to a future work [63].
Finally, we briefly address the question of the implications (if any) of the DE-UP model for black hole physics. As discussed in Sec. III C, it is by no means clear whether Eqs. (84)-(90) apply to objects with masses m ≳ m_Pl, or whether a different kind of positional uncertainty applies to black holes. (See [153][154][155][156][157][158] for recent works on this topic.) Realistically, it seems likely that the identification of small classical perturbations with quantum mechanical spreads, postulated as a physical basis for the DE-UP in Eq. (59), breaks down for macroscopic objects. Furthermore, this idea is consistent with Károlyházy's original concept of a mass-limited coherence cell [15,16], as discussed above. Nonetheless, the fact that the DE-UP provides a natural realisation of the holographic conjecture [49,50] is intriguing, and it is worth exploring its information theoretic implications in the context of the black hole Information Loss Paradox (ILP) [159][160][161][162][163][164][165][166][167]. If valid for m ≳ m_Pl, Eqs. (84)-(90) should also have non-trivial implications for the potential observability of black holes in collider experiments, such as at the LHC [168,169].
It is therefore certainly worthwhile to attempt to extend the DE-UP into this region, which may be done naïvely by replacing the rest mass m with the 'dual' ADM mass, m → m_Pl²/m_ADM ≃ m_Pl²/(m + m_Pl²/m). This gives rise to a unified Compton-Schwarzschild line connecting the black hole and particle regimes (see [153][154][155][156][157][158] and [170,171]). Since the DE-UP naturally implements holography in the particle sector, it may be hoped that an extended version maintains it for black holes, which may have implications for the ILP [159][160][161]. In this context, it is also interesting to note that Eq. (159) was previously derived using information-theoretic arguments [105] (see also [97] for a critical appraisal of this work).
In addition, we note that the derivations presented in Sec. III can (in principle) be easily generalized to incorporate modified gravity theories, either by substituting modified classical mass bounds in place of Eqs. (139)-(142) (see, for example, [85] and [172,173]) and/or by substituting modified line-elements and metric functions in place of Eqs. (17), (18) and (21). Such modifications may also have non-trivial implications for quantum gravity phenomenology on cosmological scales. The impact of generalized uncertainty on the cosmological evolution equations should also be considered for various combinations of classical modified gravity models/MLURs. (See [174] and references therein for an analysis of GUP-induced modifications of the canonical Friedmann equations.) r_min ≃ α_Q²λ_C ≃ α_Q(∆x_total)_min (α_Q ≤ 1), at which relativistic quantum gravitational effects become important, this too may be regarded as physical, even if the full relativistic theory of quantum gravity required to treat it in detail is lacking.
Hence, if ∆x total (r) represents the total (and irremovable) positional uncertainty of a quantum particle, as seen by an observer located at a distance r from its CoM or, equivalently, the irremovable uncertainty in any measurement of l dS , obtained via the two-stage measurement process outlined in Sec. III B, we may ask the question: is it physically meaningful to consider r < ∆x total (r)?
In general, for a particle of a given mass m, we may solve the inequality r ≲ ∆x_total(r) to find the critical value r_crit, below which this condition holds. Intriguingly, and at first sight somewhat bizarrely, the analysis presented in Secs. III-IV suggests that ∆x_total(r), given by Eq. (84), is minimized for r_min ≃ α_Q(∆x_total)_min ≃ α_Q(l_Pl² l_dS)^(1/3), where α_Q = Q²/q_Pl² ≤ 1. In other words, when the uncertainty in the measured value of ∆x_total(r) is as small as it can be, it is larger than the 'probe' distance r. To interpret this result correctly, we must reconsider the gedanken experiment proposed by Salecker and Wigner and consider in detail the physical conditions that permit the emission (absorption) of a photon from (by) the 'probe' particle in canonical quantum mechanics. We may then consider the modified conditions induced by the DE-UP.
Classically, a particle of finite extension cannot spontaneously emit another without reducing its internal or kinetic energy [175]. In canonical QM, a non-composite particle does not have internal (i.e. binding) energy, but the wave function of its CoM corresponds to a superposition of position or, equivalently, momentum states. Thus, a given positional uncertainty ∆x corresponds to a momentum uncertainty ∆p, and therefore to an uncertainty in the kinetic energy of order ∆E ≃ (∆p)²/(2m). This allows the spontaneous emission of additional particles - for example, the emission of photons from electrons - without violating conservation of energy or momentum. With this in mind, we now reconsider Salecker and Wigner's thought experiment under two different sets of conditions. In the first, the particle 'tries', and succeeds, in emitting a photon with wavelength λ > λ_C. In the second, it 'tries', and fails, to emit a photon with λ < λ_C.
Prior to the act of measurement, either by an external detector that absorbs it, or via its reabsorption by the particle after reflection at a mirror placed at a distance r, the photon is in a superposition of states corresponding to ∆λ ≃ ħ/∆p ≃ ħ/√(2m∆E). The emission of its wave packet takes a time ∆t ≃ ∆λ/c ≃ ħ/(c√(2m∆E)). Thus, if ∆E ≲ mc², then ∆λ ≳ λ_C: the photon wave packet is larger than the particle's Compton wavelength and may escape to communicate with the outside world. Specifically, it may traverse a distance 2r, where r > ∆λ > λ_C, reflect off a mirror and be reabsorbed, yielding a measurement of r.
Clearly, if r < λ_C, the 'mirror' cannot lie outside the wave packet of the massive particle and the act of 'measurement' involves a self-interaction, in which the particle emits a photon and reabsorbs it within a time ∆t ≃ r/c. This is inevitable if ∆λ ≃ c∆t < λ_C, since the wave packet of the photon will not have sufficient spatial extension, or have travelled far enough over the time-interval ∆t, to escape to the outside world. Thus, for ∆λ ≲ λ_C (∆E ≳ mc²), the would-be emitted photon wave packet is 'trapped' within the Compton radius of the particle and the associated photon remains virtual.
Strictly, at this point, the conceptual apparatus of canonical quantum mechanics breaks down and we must switch to the Feynman diagram interactions predicted by QFT. In this picture, the particle emits (and reabsorbs) a virtual photon of wavelength λ over a time-scale t ≃ λ/c. The photon is never made real: this would require an energy E ≳ mc² (i.e., λ ≲ λ_C), which is above the threshold for pair-producing particles of mass m. Nonetheless, in the canonical QM picture this result may be obtained from Salecker and Wigner's bound by setting r ≃ ∆λ in Eq. (42), giving (∆x_canon.)_min(∆λ) ≃ √(λ_C ∆λ) ≥ ∆λ ⟺ ∆λ ≤ λ_C. (A-1) To obtain the equivalent bound in the non-canonical theory, represented by Eq. (86), we set (∆x_total)(∆λ) ≃ √(λ_C r) + l_Pl² l_dS/(λ_C r) ≃ ∆λ, which automatically ensures ∆λ ≲ λ_C for α_Q ≤ 1.
To summarize: In canonical quantum mechanics, photon wave packets with ∆λ ≲ λ_C remain 'trapped' within the massive particle wave packet, whose minimum extent is given by (∆x_canon.)_min ≃ √(λ_C r) ≲ λ_C. In the non-canonical, dark-energy modified theory, the minimum spatial extent of the CoM wave packet and the Compton wavelength of the particle no longer coincide. Instead, (∆x_canon.)_min ≃ α_Q λ_C, which is identified with the classical particle radius, R. Photon wave packets with ∆λ ≲ λ_C still remain trapped, but only those with ∆λ ≃ α_Q² λ_C also minimize the positional uncertainty of the CoM.
This suggests that, in the QFT picture, gravitationally-induced modifications of the Feynman diagram structure should yield an expansion in which the main contribution to the particle's self-energy comes from the emission and reabsorption of virtual photons with a specific wavelength, λ ≃ α_Q² λ_C. In other words, self-interactions with photons of this wavelength should have maximum amplitude, or 'weight', in the path integral approach.
Hence, we argue that it is physically meaningful to consider length-scales r < (∆x_total(r))_min < λ_C in dark energy-modified quantum mechanics. Though interactions between the particle and its surroundings are possible only for r ≳ λ_C ≳ ∆x_total(r), self-interaction is possible within the contiguous regions α_Q² λ_C ≲ r ≲ α_Q λ_C and α_Q λ_C ≲ r ≲ λ_C. Interestingly, the boundary between the two, r ≃ (∆x_total)_min ≃ α_Q λ_C, marks the length-scale at which renormalization becomes important for charged particles in QED [69,176], and our naïve picture correctly reproduces a phenomenologically significant length-scale from the relativistic (but non-gravitational) quantum theory of charged particles. We may therefore conjecture that, in a more complete theory, including relativistic quantum effects from both dark energy and canonical gravity, the length-scale r_min ≃ α_Q(∆x_total)_min ≃ α_Q² λ_C should naturally emerge as an effective cut-off, which minimizes the self-interaction energy of charged particles due to the irremovable 'haziness' of the space-time in their vicinity.
Finally, we note that, for electrons, the key length-scale is r_min ≃ α_e² λ_e ≃ 2.054 × 10⁻¹⁵ cm, which corresponds to an energy E_max ≃ m_e c²/α_e² ≃ 9.596 GeV, well below the 13 TeV maximum operating energy of the Large Hadron Collider (LHC). However, the LHC is a proton-proton collider and the relevant energy-scale for protons is E_max ≃ m_u c²/(2α_e/3)² ≃ 101 GeV, where m_u ≃ 2.4 MeV/c² is the mass of the up quark [85]. Though this is also well within the maximum operating energy of 13 TeV, we must remember that only a fraction of the total beam energy is used in any particular quark-quark collision. Nevertheless, the corresponding length-scale is r_min ≃ (2α_e/3)² λ_u ≃ 1.95 × 10⁻¹⁶ cm, where λ_u = ħ/(m_u c), which is close to the smallest distances likely to be probed at LHCb. The possibility of directly testing quantum gravity phenomenology predicted by the DE-UP at present day or, more realistically, next generation colliders is therefore tantalizingly close.
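The numbers quoted in this paragraph can be reproduced directly. The sketch below follows the expressions as written in the text, i.e. it scales by α_e for the electron and by (2α_e/3) for the up quark, uses reduced Compton wavelengths, and takes m_u c² ≈ 2.4 MeV as an approximate up-quark mass; these are the only inputs.

```python
# Sketch: self-interaction length- and energy-scales for the electron and up quark.
alpha_e = 1/137.035999
hbar_c_MeV_fm = 197.3269804            # hbar*c in MeV*fm

def scales(mc2_MeV, charge_frac):
    lam = hbar_c_MeV_fm / mc2_MeV      # reduced Compton wavelength, fm
    f   = charge_frac * alpha_e        # alpha_e for e, (2/3)*alpha_e for u
    return f**2 * lam, mc2_MeV / f**2  # r_min [fm], E_max [MeV]

r_e_min, E_e_max = scales(0.51099895, 1.0)   # electron
r_u_min, E_u_max = scales(2.4, 2/3)          # up quark (approximate mass)

print(f"electron : r_min ~ {r_e_min*1e-13:.3e} cm,  E_max ~ {E_e_max/1e3:.2f} GeV")
print(f"up quark : r_min ~ {r_u_min*1e-13:.2e} cm,  E_max ~ {E_u_max/1e3:.0f} GeV")
```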
"Physics"
] |
53Mn and 60Fe in iron meteorites—New data, model calculations
We measured specific activities of the long-lived cosmogenic radionuclides 60Fe in 28 iron meteorites and 53Mn in 41 iron meteorites. Accelerator mass spectrometry was applied at the 14 MV Heavy Ion Accelerator Facility at ANU Canberra for all samples except for two, which were measured at the Maier-Leibnitz Laboratory, Munich. For the large iron meteorite Twannberg (IIG), we measured six samples for 53Mn. This work doubles the number of existing individual 60Fe data and quadruples the number of iron meteorites studied for 60Fe. We also significantly extended the entire 53Mn database for iron meteorites. The 53Mn data for the iron meteorite Twannberg vary by more than a factor of 30, indicating a significant shielding dependency. In addition, we performed new model calculations for the production of 60Fe and 53Mn in iron meteorites. While the new model is based on the same particle spectra as the earlier model, we no longer use experimental cross sections but instead use cross sections calculated with the latest version of the nuclear model code INCL. The new model predictions differ substantially from results obtained with the previous model: predictions for the 60Fe activity concentrations are about a factor of 2 higher, while those for 53Mn are ~30% lower than in the earlier model, which now gives better agreement with the experimental data.
INTRODUCTION
Most meteorites are routinely measured by accelerator mass spectrometry (AMS) for the radionuclides 10Be, 26Al, 36Cl, and (more rarely) 41Ca, which, if combined with concentrations of cosmogenic noble gases, provide information on cosmic ray exposure (CRE) histories, that is, CRE ages, terrestrial ages, pre-atmospheric sizes, and shielding depths. In contrast, the data for two other cosmogenic radionuclides, 53Mn (T1/2 = 3.7 ± 0.4 Ma; Honda and Imamura 1971) and 60Fe (T1/2 = 2.61 ± 0.04 Ma), are scarce, because their measurement requires dedicated AMS systems that generate ions of 150-200 MeV energy, which is necessary to reduce the otherwise strongly interfering isobaric background and to achieve a sufficiently high sensitivity. Only two AMS facilities report routine 53Mn and 60Fe measurements. Before AMS, 53Mn was measured via radiochemical neutron activation techniques (e.g., Imamura et al. 1980), but there are currently only very few suitable nuclear reactors available for such studies.
The 60Fe half-life has been under debate for the last few decades. The value of T1/2 = 2.61 ± 0.04 Ma determined by Rugel et al. (2009) was confirmed recently by additional independent measurements (Wallner et al. 2015; Ostdiek et al. 2017). The new value is one order of magnitude higher than the first estimate of T1/2 = 0.3 ± 0.9 Ma (Roy and Kohman 1957) and ~75% higher than the previously adopted value of T1/2 = 1.49 ± 0.27 Ma.
Another challenge in 60Fe measurements is due to the fact that 60Fe in iron meteorites is only produced from 62Ni and 64Ni. Since both Ni isotopes have a low abundance (62Ni = 3.63%, 64Ni = 0.93%), the 60Fe production rates are relatively low, that is, in the range of disintegrations per minute per kg (dpm kg⁻¹) or less (e.g., Knie et al. 1999b; Nishiizumi and Honda 2007). Because the concentration of stable 56Fe atoms is high in meteoritic material, the resulting 60Fe/56Fe ratio is low, that is, in the range of 10⁻¹⁴. Such low isotope ratios cannot at present be measured with the AMS systems commonly used for 10Be, 26Al, 36Cl, or 41Ca. For 53Mn, it is necessary to remove or suppress the ubiquitous, interfering isobar 53Cr. It is not widely appreciated that this can make an AMS measurement of 53Mn as challenging as, or even more challenging than, an AMS measurement of 60Fe. As a consequence, 53Mn measurements of meteorites require similarly large AMS facilities (~14 MV tandem accelerators) as needed for 60Fe.
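To illustrate how the quoted isotope ratio and activity levels relate, the following sketch converts an assumed 60Fe/Fe ratio of 10⁻¹⁴ into a specific activity. The composition (92 wt% Fe, 8 wt% Ni) and the ratio are illustrative placeholder values, not measurements from this study; the 2.61 Ma half-life is the value given above.

```python
# Sketch: converting a measured 60Fe/Fe isotope ratio into a specific activity.
import math

N_A  = 6.02214076e23
M_Fe = 55.845                         # g/mol
T_half_yr = 2.61e6                    # 60Fe half-life, a
lam_per_min = math.log(2) / (T_half_yr * 365.25 * 24 * 60)   # decay constant, 1/min

ratio_60Fe_Fe = 1e-14                 # hypothetical AMS result
w_Fe, w_Ni = 0.92, 0.08               # assumed mass fractions

n_Fe_per_kg = w_Fe * 1000 / M_Fe * N_A        # Fe atoms per kg of meteorite
n_60Fe      = ratio_60Fe_Fe * n_Fe_per_kg     # 60Fe atoms per kg of meteorite
A_per_kg    = lam_per_min * n_60Fe            # dpm per kg of meteorite
A_per_kg_Ni = A_per_kg / w_Ni                 # dpm per kg of Ni

print(f"A(60Fe) ~ {A_per_kg:.3f} dpm/kg(meteorite) ~ {A_per_kg_Ni:.2f} dpm/kg(Ni)")
```

With these assumed inputs the result is of order 0.05 dpm per kg of meteorite, or roughly 0.6 dpm per kg of Ni, consistent with the "dpm kg⁻¹ or less" level stated above.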
Against those odds, 53Mn and 60Fe both have the potential to constrain CRE histories of iron meteorites, potentially even better than the classical lighter radionuclides. First, 60Fe is only produced from a single target element, that is, Ni, and 53Mn is only produced from Fe and Ni. Consequently, neither nuclide is affected by problems caused by variable contributions from the lighter elements sulfur and phosphorus (with the exception of sample dilution, which typically is irrelevant compared to the overall uncertainties of the data). Very often, microinclusions of troilite (FeS) and schreibersite ((Fe,Ni)3P) compromise detailed studies of the mainstream cosmogenic nuclides 10Be, 26Al, and 21Ne and therefore limit their use for accurate determination of CRE histories of iron meteorites. Second, due to its relatively long half-life compared to 10Be, 26Al, 36Cl, and 41Ca, 53Mn is less affected by decay during terrestrial residence, which makes the interpretation easier, because most meteorites are finds and not falls. Consequently, although the measurements of 53Mn and 60Fe are more challenging than the measurements of the classical radionuclides, they can provide valuable additional constraints on CRE histories of iron meteorites.
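The terrestrial-decay argument can be quantified with a simple exponential-decay comparison. The sketch below uses the 53Mn half-life quoted above and a commonly used 36Cl half-life of ~0.30 Ma (an assumed literature value), for a hypothetical terrestrial age of 0.5 Ma.

```python
# Sketch: surviving fraction of a cosmogenic radionuclide after terrestrial residence.
import math

def surviving_fraction(t_Ma, T_half_Ma):
    return math.exp(-math.log(2) * t_Ma / T_half_Ma)

t_terr = 0.5  # hypothetical terrestrial age, Ma
for nuclide, T_half in [("53Mn", 3.7), ("36Cl", 0.301)]:
    f = surviving_fraction(t_terr, T_half)
    print(f"{nuclide}: {f*100:5.1f}% of the saturation activity remains after {t_terr} Ma on Earth")
```

Under these assumptions, about 91% of the 53Mn activity survives a 0.5 Ma terrestrial residence, compared to only about 32% of the 36Cl activity, which is why the longer half-life simplifies the interpretation of finds.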
Recently, 60Fe (and in some cases 53Mn) has been analyzed in deep sea materials (crusts, nodules, sediments), Antarctic snow, and the lunar regolith as a tracer for nearby supernova explosions (e.g., Knie et al. 1999a, 2004; Fitoussi et al. 2008; Wallner et al. 2016; Fimiani et al. 2016; Ludwig et al. 2016; Koll et al. 2019a, 2019b; Wallner, personal communication). Considering meteorites, there are only very few studies that include 60Fe. The first 60Fe measurement in a meteorite was performed by Goel and Honda (1965) by decay counting of the radioactive daughter 60Co in 2.5 kg of the chemically processed iron meteorite Odessa. Kutschera (1984) described the first successful 60Fe detection by AMS in a meteorite sample (Treysa). Knie et al. (1999b) presented 60Fe activities in the iron meteorites Dermbach and Tlacotepec and in the metal fractions of the mesosiderite Emery and the LL chondrite Saint-Séverin. More recently, Berger et al. (2007) studied the 60Fe shielding dependence in the three iron meteorites Canyon Diablo, Grant, and Dorofeevka, which cover a large range of different shielding conditions. They found that 60Fe production rates in large iron meteorites decrease with increasing shielding, but that the trend in smaller iron meteorites is not as clear; there, the production rates might also slightly decrease with increasing shielding. In addition, Nishiizumi and Honda (2007) measured 60Fe activities in six iron meteorites using low-level counting and found a good correlation between 60Fe/kg(Ni) and measured 53Mn activities. The Odessa sample studied by Nishiizumi and Honda (2007) had been measured before by Goel and Honda (1965); the large discrepancy of a factor of six in specific 60Fe activities was explained by a change of the 60Co half-life between the two studies (Nishiizumi and Honda 2007). The latest study that includes both 53Mn and 60Fe was for the very large iron meteorite find Gebel Kamil (Ott et al. 2014). The Gebel Kamil data were used together with data from the aforementioned AMS studies to constrain the activity ratio of 60Fe to 53Mn to (2.68 ± 0.35) × 10⁻³ (dpm kg⁻¹[Ni]/dpm kg⁻¹[Fe]). This ratio allowed workers to disentangle cosmogenically produced 60Fe from interstellar 60Fe in lunar and terrestrial material (Koll et al. 2019a, 2019b). To summarize, over the last 12 years, only one meteorite study was published that analyzed both heavy radionuclides.
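A minimal sketch of how such a ratio can be applied is given below. The activity ratio is the value quoted above from Ott et al. (2014); the measured activities are made-up illustration values, not data from any of the cited studies.

```python
# Sketch: using the cosmogenic 60Fe/53Mn activity ratio to isolate a
# non-cosmogenic (e.g., interstellar) 60Fe component in a sample.
ratio = 2.68e-3            # dpm kg^-1(Ni) per dpm kg^-1(Fe), Ott et al. (2014)
A_53Mn = 400.0             # hypothetical measured 53Mn activity, dpm/kg(Fe)
A_60Fe_measured = 1.30     # hypothetical measured 60Fe activity, dpm/kg(Ni)

A_60Fe_cosmogenic = ratio * A_53Mn
A_60Fe_excess = A_60Fe_measured - A_60Fe_cosmogenic
print(f"expected cosmogenic 60Fe: {A_60Fe_cosmogenic:.2f} dpm/kg(Ni)")
print(f"excess (non-cosmogenic) : {A_60Fe_excess:.2f} dpm/kg(Ni)")
```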
The data presented here are part of a larger project determining CRE histories of iron meteorites and thereby studying the constancy of galactic cosmic rays (Smith et al. 2019). In the course of this project, we measured cosmogenic noble gas and radionuclide concentrations in ~60 iron meteorites, mainly from group IIIAB. We chemically separated 53Mn and 60Fe and prepared AMS targets for all of them; so far, 28 have been measured for 60Fe and 41 for 53Mn (plus six additional samples from the large Twannberg iron meteorite). Although the database is not yet complete, it may nevertheless help to better understand the production systematics of 53Mn and 60Fe in iron meteorites and to validate the improved model calculations for cosmogenic nuclide production in meteorites (Ammon et al. 2009; Cook et al. 2018). In addition, we studied six samples from the large Twannberg iron meteorite (for the corresponding lighter radionuclides, see Smith et al. 2017).
Sample Preparation
In a related project, all samples have been studied for 10Be, 26Al, 36Cl, and 41Ca as well as for the light cosmogenic noble gases He, Ne, and Ar (Smith et al. 2019). The chemical preparation was performed at the Dresden Accelerator Mass Spectrometry (DREAMS) facility of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and was adapted from the procedure described earlier by Merchel and Herpers (1999). A full description of the chemical separation procedure is given by Smith et al. (2017, 2019). Here, we give some details, especially for the 53Mn and 60Fe analysis. The solution from the anion exchange (7.1 M HCl fraction; column height 20 cm, diameter 1 cm; DOWEX 1X8, 100-200 mesh), containing mainly Mn, was further purified from the interfering isobar 53Cr using the following procedure: First, the solution was evaporated to dryness on a hot plate. The residue was dissolved in a mixture of 5 ml H2O, 5 ml HNO3, and 0.5 ml H2O2 to reduce Mn4+ to Mn2+. The solution was then heated for ~1 h to fully destroy the H2O2. Subsequently, KClO3 was added to oxidize Mn2+ back to Mn4+ and finally precipitate it as MnO(OH)2 by heating for ~1 h. This precipitate was rinsed three times with water, subsequently transferred into microreaction vessels (Eppendorf tubes), and finally dried at 80°C in an oven. In an attempt to speed up the chemical processing, we slightly changed the protocol for one sample batch, which resulted in chemical yields larger than 100% (e.g., Turtle River, Elyria, Casas Grandes). An SEM-EDX (scanning electron microscopy/energy-dispersive X-ray spectroscopy) measurement of these samples revealed that they were contaminated with AgCl from the earlier Ag36Cl AMS target preparation. Hence, we "rescued" the MnO2 samples by washing them three times with ~12.5% NH3(aq) solution to dissolve the AgCl. A second SEM-EDX scan of these samples after the cleaning showed that the precipitate was pure MnO2. We applied the original, longer protocol for all remaining samples. The MnO2 powder was mixed with Ag powder, with a mass ratio MnO2:Ag = 1:4, and was then pressed into Cu sample holders.
For the 60Fe analysis, the isobar 60Ni introduces an interfering background in AMS that needs to be suppressed during sample preparation. We applied the following procedure: During the anion exchange step, Fe3+ is present as the chloro-complex [FeCl4]⁻ and is absorbed on the DOWEX 1X8. Nickel is eluted with 10.2 M HCl, while Fe can be stripped after all other elements by elution with H2O (27 ml). The iron was then precipitated as FeO(OH) by adding ~9 ml of 25% NH3(aq). The precipitate was rinsed three times with dilute ammonia solution (i.e., two drops of 25% NH3(aq) in 250 ml H2O), then dried as iron oxide in an oven at 90°C, and later ignited at 800°C for ~2 h. The iron oxide powder was mixed with Ag powder, with a mass ratio Fe2O3:Ag ~ 1:2, and was pressed into Cu sample holders for the subsequent AMS measurements.
AMS Measurements
The nuclide 60Fe was measured at the ANU Canberra relative to the PSI-12 standard material, which was produced at the Paul Scherrer Institute (PSI) from a dilution series based on material extracted from a beam dump. The 60Fe/Fe ratio of PSI-12 is 1.234(7) × 10⁻¹². The original material was used for the half-life measurements of 60Fe (e.g., Rugel et al. 2009). The Munich group used a primary standard with a concentration of 60Fe/Fe = (9 ± 1) × 10⁻¹², which is described in Knie et al. (1999a, 1999b). All 53Mn measurements were performed at the ANU Canberra relative to a piece of the Grant iron meteorite that was provided by Greg Herzog (personal communication); the nominal 53Mn/55Mn ratio is 2.59 × 10⁻¹⁰ (Gladkis 2006). This value was obtained by measuring the 53Mn activity in 200 g of the iron meteorite Grant via the 53Cr Kα line. The ratio used at ANU is 10% lower than the value used by others.
Note that a change in AMS standards and/or half-lives has a direct influence on the meteorite data. Consider, as an example, the radionuclide 60Fe: AMS measures the concentration of 60Fe atoms in a sample, whereas the quoted production rates are saturation activities, calculated as the decay constant times the nuclide concentration. The recent ~75% increase in the half-life therefore lowers the calculated saturation activities by the corresponding factor of ~1.75. Consequently, all data that are based on the old half-life must be scaled down by this factor to be comparable to the new measurements.
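A minimal sketch of this renormalization, for the 60Fe half-life values discussed above, is given below; the literature activity used is a placeholder value.

```python
# Sketch: rescaling a literature saturation activity after a half-life revision.
# A_sat = lambda * N; the AMS-measured atom number N is unaffected by the revision,
# so A_new / A_old = T_old / T_new.
T_old, T_new = 1.49, 2.61            # Ma
factor = T_old / T_new

A_old = 1.00                          # hypothetical literature value, dpm/kg(Ni)
print(f"rescaling factor  : {factor:.3f}  (i.e. divide old activities by {T_new/T_old:.2f})")
print(f"rescaled activity : {A_old*factor:.2f} dpm/kg(Ni)")
```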
ANU Canberra: Samples were loaded into an MC SNICS ion source equipped with a sample wheel holding up to 32 positions. Either MnO⁻ (for 53Mn) or FeO⁻ (for 60Fe) was extracted and injected into the 14UD tandem accelerator at the Heavy Ion Accelerator Facility (HIAF; Fifield et al. 2013; Wallner et al. 2015). For these measurements, the 14UD accelerator was operated at terminal voltages between 13.8 and 14.3 MV. By selecting charge states of 11+ for Fe and 12+ or 13+ for Mn, we obtained particle energies between 165 MeV for 60Fe and up to 200 MeV for 53Mn. Typical currents were several µA of FeO⁻ and several 100 nA of MnO⁻. Beam intensities of the stable isotopes 54Fe, 56Fe, and 55Mn, respectively, were measured with Faraday cups at the low- and high-energy sides of the spectrometer. The rare isotopes 53Mn and 60Fe were directed into a gas-filled magnet (an ENGE spectrometer converted into gas-filled mode) and then counted atom by atom in a multi-anode ionization chamber (Fifield et al. 2013; Martschini et al. 2019). The gas-filled magnet allows for a spatial separation of the stable isobar from the radionuclide due to their different mean charge states and therefore different deflection angles caused by the interaction with the gas. This separation blocks the majority of the stable isobars from entering the particle detector and consequently reduces the beam intensity of the background isobars in the ionization chamber to acceptable levels (typically fewer than 100 60Ni and up to a few thousand 53Cr events per second). The background for 60Fe was measured to be as low as 60Fe/Fe = 3 × 10⁻¹⁷ (typical blanks for our study: 60Fe/Fe ~ 10⁻¹⁶; Wallner, personal communication), and the 53Mn background was in the range 53Mn/55Mn < 10⁻¹². The reproducibility of the AMS measurements, based on repeated measurements of identical samples, is 3-5% for 60Fe and ~5-10% for 53Mn, respectively.
TUM Munich: The AMS facility in Munich at the Maier-Leibnitz Laboratory in Garching is also based on a 14 MV tandem accelerator combined with a gas-filled analyzing magnet system. The isobaric background for the 60Fe measurements was reduced in the same way as described above for the ANU Canberra setup (Koll et al. 2019a, 2019b).
NEW MODEL CALCULATIONS
Our previous calculations modeling the production rates of 53Mn and 60Fe were based on the nuclear reaction cross sections that were available at that time (Merchel et al. [2000] and references therein), on adjusted cross sections for the neutron-induced reactions (Ammon et al. 2009; Leya and Michel 2011), and on the depth- and size-dependent spectra for primary and secondary particles calculated using Monte Carlo methods. For the production of 53Mn from natNi and of 60Fe from natNi, the cross section database was limited and/or the data scattered far outside the range of the given uncertainties (e.g., Merchel et al. [2000] and references therein; Ammon et al. 2009). The earlier model predictions for 53Mn overestimated the measured specific activities for the meteorite Grant by up to 50%, which Ammon et al. (2009) argued could be due to AMS normalization problems for the proton-induced cross sections and/or the thick-target production rates used to determine the neutron-induced cross sections. Note that such problems would only partially cancel out during the adjustment procedure used to determine the neutron-induced cross sections.
As already stated by Ammon et al. (2009), the model predictions for the production of 60Fe were much lower than most of the experimental data. The calculated upper limit for the 60Fe-specific activity was 0.9 dpm kg⁻¹(Ni), reached at the center of a 25 cm iron meteoroid. Since then, the recommended half-life value for 60Fe has seen an increase of ~75%. Consequently, it is possible that the discrepancies between the earlier model predictions and the experimental data were (at least partly) caused by problems related to the experimental input data used for modeling, that is, proton-induced cross sections and/or thick-target production rates used to determine neutron-induced cross sections. Here, we try to overcome the problem by using only calculated cross sections for modeling. In doing so, we rely on the latest version of the INCL (Liège Intranuclear Cascade) code, which has recently been improved for higher energies, that is, in the range above 1 GeV, and for the emission of light complex particles (e.g., David et al. 2013; Mancusi et al. 2015; Pedoux and Cugnon 2011). We consider the current version of the code to be, for the first time, reliable enough for calculating sufficiently accurate production rates. The depth- and size-dependent energy spectra of primary and secondary particles used for modeling have been calculated using Monte Carlo methods, and they are the same as used by Ammon et al. (2009); that is, we use a solar modulation parameter M = 550 and a particle flux in the meteoroid orbits of 4.47 cm⁻² s⁻¹.
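The bookkeeping behind such a model is a folding of excitation functions with depth- and size-dependent particle spectra, summed over target elements and projectile types. The sketch below illustrates only that structure; the flux and cross-section functions are crude placeholders, not the INCL cross sections or Monte Carlo spectra actually used in this work, and the numerical output is meaningless beyond demonstrating the units.

```python
# Sketch: structure of a cosmogenic production-rate calculation,
# P_j(R, d) = sum_i c_i (N_A/M_i) sum_k  integral sigma_ijk(E) J_k(E, R, d, M) dE.
import numpy as np

N_A = 6.02214076e23
E = np.logspace(0, 4, 400)                      # energy grid, MeV

def dummy_flux(E, radius_cm, depth_cm):         # placeholder particle spectrum
    return 1e-6 * E**-1.8 * np.exp(-depth_cm / max(radius_cm, 1.0))

def dummy_xs(E, threshold_MeV, plateau_mb):     # placeholder excitation function
    xs = np.where(E > threshold_MeV, plateau_mb * (1 - threshold_MeV / E), 0.0)
    return xs * 1e-27                           # mb -> cm^2

def production_rate(radius_cm, depth_cm, mass_fraction, molar_mass):
    atoms_per_g = mass_fraction * N_A / molar_mass
    f = dummy_xs(E, 15.0, 40.0) * dummy_flux(E, radius_cm, depth_cm)
    integral = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E)))   # trapezoid rule
    return atoms_per_g * integral               # atoms g^-1 s^-1

p = production_rate(radius_cm=40.0, depth_cm=20.0, mass_fraction=0.92, molar_mass=55.845)
print(f"placeholder production rate: {p:.3e} atoms g^-1 s^-1")
```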
Manganese-53: The proton- (solid line) and neutron-induced (dashed line) cross sections calculated by INCL for the production of 53Mn from natFe are shown in Fig. 1 (upper panel). Also shown are the experimental cross sections for the proton-induced production given by Furukawa (1973), Gensho et al. (1972, 1979), Kumabe et al. (1963), Lavrukina et al. (1964), Shore et al. (1961), Perron (1976), and Merchel et al. (2000). In addition, we show the data for the reaction 56Fe(p,X)53Mn obtained in inverse kinematics experiments (Villagrasa-Canton et al. 2007). With a 56Fe abundance of 92%, the inverse kinematics data for 56Fe should be comparable to the other data obtained by irradiating Fe with a natural isotopic composition. It can be seen that the experimental data give a consistent excitation function, at least up to ~60 MeV incident proton energy. In the energy range 400-1600 MeV, the data scatter significantly. Conversely, the INCL model produces a smoother excitation function. For energies below 40 MeV, the experimental cross sections are significantly higher than the model predictions, and the threshold energies also differ. Whereas the model predicts a threshold energy of ~14 MeV, the experimental data show production of 53Mn already at 12 MeV; the difference is important because reactions with lower thresholds very often produce higher meteorite production rates. Above 40 MeV, there is reasonable agreement between measured and calculated cross sections. Also shown are the modeled results for the neutron-induced production of 53Mn from natFe. Below ~40 MeV, the neutron-induced cross sections are higher than the proton-induced data by up to a factor of two (the average is 40%); at the local minimum close to 40 MeV, they are lower than the proton-induced cross sections by up to 50%, and they are again ~20% larger than the proton data for energies between 40 MeV and 65 MeV. For energies in the range 65 MeV-1 GeV, the proton-induced cross sections are on average 20% larger than the neutron-induced cross sections. Above 1 GeV, the cross sections for both projectile types are similar.
The proton-(solid line) and neutron-induced (dashed line) cross sections for the production of 53 Mn from nat Ni are shown in the middle panel of Fig. 1. Also shown are the experimental data from Merchel et al. (2000). While there is a reasonable agreement between experimental and calculated data, the experimental data are too scarce for establishing an excitation function consistent enough for model calculations. Again, the INCL calculations produce a smooth excitation function.
The new model predictions for cosmogenic 53 Mn production in iron meteoroids with pre-atmospheric radii of 5, 10, 15, 25, 30, 32, 40, 50, 60, 65, 85, 100, and 120 cm, and for the outermost 200 cm of a 10 m object, calculated using the new proton- and neutron-induced cross sections, are shown in Fig. 2. For almost all radii and shielding depths, 53 Mn production is dominated by neutrons. For example, already at the center of a 5 cm iron meteoroid, more than 50% of the 53 Mn is produced by secondary neutrons. This fraction increases to ~80% at the center of a 25 cm iron meteoroid and to more than 90% at the center of a 50 cm iron meteoroid.
The specific 53 Mn activities for iron meteorites found in the literature are between 23 ± 1 dpm kg⁻¹ and 583 ± 25 dpm kg⁻¹ (e.g., Nishiizumi et al. 1991), which is in good agreement with the range 33-567 dpm kg⁻¹ predicted by the model.
Fig. 1 (caption continued): The experimental data for the proton-induced production are from Furukawa (1973), Gensho et al. (1972, 1979), Kumabe et al. (1963), Lavrukina et al. (1964), Shore et al. (1961), Perron (1976), Merchel et al. (2000), and Villagrasa-Canton et al. (2007) and are not corrected for the new half-life value.
Furthermore, Honda et al. (1961) measured 53 Mn in the four iron meteorites Grant, Williamstown, Odessa, and Canyon Diablo and found production rates in the range 92 ± 12 to 299 ± 11 dpm kg⁻¹, again in the range of the model predictions. However, there is a slight discrepancy for the iron meteorite Grant. While the model predicts 53 Mn production rates in the range 342-500 dpm kg⁻¹ for an iron meteoroid with a radius of 40 cm, Honda et al. (1961) measured a production rate of 299 ± 11 dpm kg⁻¹. In a later measurement of the iron meteorite Grant, Imamura et al. (1980) measured specific 53 Mn activities between 304 ± 15 and 374 ± 16 dpm kg⁻¹, still somewhat low but slightly closer to the model predictions. The 53 Mn AMS value by Merchel (1998) of 435 ± 65 dpm kg⁻¹ fits perfectly with the new model.
Iron-60: Our new, purely theoretical data support the statement that the earlier experiment-based model significantly underestimated the 60 Fe production rates in iron meteorites, as 21 of the 28 obtained data are higher than the (former) modeled upper limit. Figure 1 (lower panel) depicts the INCL results for the proton- (black solid line) and neutron-induced (dashed line) production of 60 Fe from nat Ni. Also shown are the experimental data from Merchel et al. (2000) for the proton-induced production. Note that changing the half-life from 1.49 to 2.61 Ma will not change the cross sections, because the change in the measured activity cancels with the factor accounting for saturation. The two major reactions for the proton-induced production of 60 Fe are 62 Ni(p,3p) 60 Fe and 64 Ni(p,3p2n) 60 Fe or 64 Ni(p,αp) 60 Fe (i.e., an alpha particle instead of two protons and two neutrons in the exit channel). Most importantly and very surprisingly, the cross sections for the neutron-induced production of 60 Fe from Ni are up to five times higher than the cross sections for the proton-induced production. The most relevant reactions for the neutron-induced production of 60 Fe are 62 Ni(n,2pn) 60 Fe and 64 Ni(n,2p3n) 60 Fe or 64 Ni(n,αn) 60 Fe. The results of the new model calculations are shown in Fig. 3 as depth-dependent production rates for iron meteorites with pre-atmospheric radii of 5, 10, 15, 25, 30, 32, 40, 50, 60, 65, 85, 100, and 120 cm, and for the outermost 200 cm of a 10 m object.
53 Mn Activities in Iron Meteorites
The 53 Mn data for the 46 samples from 41 meteorites are given in Table 1. For the large iron meteorite Twannberg, we studied six samples. The terrestrial ages used below are from the study by Smith et al. (2019). Owing to the long half-life of 53 Mn, decay corrections due to the terrestrial residence are insignificant, that is, the measured activity concentrations can simply be converted to production rates. Note that even the longest terrestrial age of 285 ka for Puentel del Zacate reduces the 53 Mn concentration by less than 5%, which is below typical uncertainties for the AMS measurements (note that no 53 Mn has been measured for Puentel del Zacate). The 53 Mn activities based on the AMS measurements range from 4.3 to 658 dpm kg⁻¹. Considering only Twannberg samples, the 53 Mn activities range from 4.3 to 165 dpm kg⁻¹, that is, the spread is almost a factor of 40. Such a large spread for data from one meteorite confirms the large pre-atmospheric size of Twannberg (see also below). For the iron meteorite Grant, we measured a 53 Mn production rate of 441 ± 45 dpm kg⁻¹, which is significantly higher than the 53 Mn production rates given by Imamura et al. (1980), which range between 304 ± 15 and 374 ± 16 dpm kg⁻¹.
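The size of the terrestrial decay correction for 53 Mn can be checked with a one-line calculation; the 53 Mn half-life of roughly 3.7 Ma used below is an assumed literature value, as it is not quoted in the text.

```python
import math

half_life_ma = 3.7        # assumed 53Mn half-life in Ma (literature value)
t_terrestrial_ma = 0.285  # longest terrestrial age in the data set, 285 ka

remaining = math.exp(-math.log(2) * t_terrestrial_ma / half_life_ma)
print(f"remaining fraction after terrestrial residence: {remaining:.3f}")
# ~0.95, i.e. a correction of only about 5%, below typical AMS uncertainties
```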
The new model predictions for 53 Mn are between ~20% (surface of small meteoroids) and ~30% (center of large meteoroids) lower than the previous model predictions (Ammon et al. 2009). Averaged over all radii and all shielding depths, the difference is ~30%. With the new 53 Mn data for Grant and the new model calculations, there is now a good agreement between experimental data and model predictions. For example, the model predicts 53 Mn-specific activities in the range 331-478 dpm kg⁻¹, in very good agreement with the measured specific activity of 441 ± 45 dpm kg⁻¹.
Nevertheless, there are still some discrepancies between model predictions and experimental data. For example, the model predicts maximum 53 Mn production rates of 547 dpm kg⁻¹ in the center of a 25 cm meteoroid (Fig. 2). In contrast, the 53 Mn production rates for 8 of the 41 studied meteorites (Avoca, Bristol, Calico Rock, Chulafinne, Dalton, Fort Pierre, Greenbrier County, Zerhamra) are higher than the upper limit given by the model. For seven of the eight meteorites, however, there is agreement within the 1σ standard deviation; the only exception is Chulafinne, for which the agreement is only within the 2σ standard deviation.
We now discuss the shielding dependence of the 53 Mn data for Twannberg. The model predicts that for an iron meteoroid of radius 120 cm, the production rate varies only by a factor of ~10 with shielding, compared to the measured factor of 40. Consequently, Twannberg must have been larger than 120 cm in radius. Moreover, 53 Mn activities as low as 4.26 dpm kg⁻¹, as measured for one of the Twannberg samples, are only possible at a depth of 120 cm in an object with a radius of 10 m (e.g., Smith et al. 2017). Larger depths in smaller objects (1.2 m < radius < 10 m) are also possible; however, we have no model predictions for such objects.
60 Fe Activities in Iron Meteorites
The 60 Fe concentrations normalized to the Ni content of the 28 iron meteorites are compiled in Table 1. The calculated specific activities range from 0.38 dpm kg⁻¹ (Ni) for Casas Grandes to 2.02 dpm kg⁻¹ (Ni) for Gan Gan. The new 60 Fe data are in the range of values found in the literature for other iron meteorites (recalculated for the new half-life). For example, Casas Grandes and Lombard have low 60 Fe activities of 0.38 dpm kg⁻¹ (Ni) and 0.46 dpm kg⁻¹ (Ni), respectively. Such low concentrations have also been determined in the large meteorites Gebel Kamil (Ott et al. 2014) and Canyon Diablo (Berger et al. 2007).
For further discussion, we calculate production rates, that is, we correct the 60 Fe activities for radioactive decay during terrestrial residence. The terrestrial ages have been determined using 41 Ca/ 36 Cl atom ratios (Smith et al. 2019). The changes are minor; the maximum is 7% for Casas Grandes (terrestrial age of 247 ± 98 ka) and the average change is 1%. The calculated production rates are given in Table 1 as 60 Fe(0; dpm kg⁻¹ [Ni]). Figure 4 depicts a histogram of all existing (to our knowledge) 60 Fe data for iron meteorites. In addition to our new data (Table 1), literature data from Knie et al. (1999b), Nishiizumi and Honda (2007), Berger et al. (2007), and Ott et al. (2014) are included. The data from Knie et al. (1999b), Berger et al. (2007), and Nishiizumi and Honda (2007) have been recalculated for the new 60 Fe half-life.
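Correcting a measured 60 Fe activity for decay during terrestrial residence amounts to multiplying by exp(λ t_terr). A minimal check for Casas Grandes, using the half-life of 2.61 Ma quoted in the text and the measured activity reported above; this is an illustration of the arithmetic, not a re-derivation of the published table values.

```python
import math

half_life_ma = 2.61   # 60Fe half-life (Rugel et al. 2009)
t_terr_ma = 0.247     # Casas Grandes terrestrial age, 247 ka
a_measured = 0.38     # measured 60Fe activity, dpm per kg Ni

correction = math.exp(math.log(2) * t_terr_ma / half_life_ma)
print(f"correction factor: {correction:.3f}")   # ~1.07, i.e. the ~7% maximum change
print(f"production rate:   {a_measured * correction:.2f} dpm kg^-1 (Ni)")
```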
The new model calculations for 60 Fe production rates are significantly higher than the earlier model by Ammon et al. (2009). While the maximum is still at the center of a 25 cm meteoroid, it is now slightly above 2 dpm kg⁻¹ (Ni), that is, more than a factor of 2 higher (see Fig. 3). Since for most of the studied meteorites neither the pre-atmospheric radius nor the pre-atmospheric shielding depth of the studied sample is known, a comparison of the experimental data with the model predictions is only possible for production rate averages and ranges. The improved model is in accord with the measured activities. For example, the predicted 60 Fe production rates for all shielding depths in iron meteoroids with radii between 5 and 120 cm range between 0.2 and 2 dpm kg⁻¹ (Ni), which covers all of the measured 60 Fe activities. Again, there are still some discrepancies between model predictions and experimental data. The modeled 60 Fe production rates for an iron meteorite with a radius of 40 cm, that is, very close to the pre-atmospheric radius of Grant (Ammon et al. 2008), vary between 1.2 dpm kg⁻¹ (Ni) and 1.8 dpm kg⁻¹ (Ni). This is significantly higher than the measured data for Grant from Berger et al. (2007), which are, after recalculating them using the new 60 Fe half-life, 0.57 dpm kg⁻¹ (Ni) and 0.69 dpm kg⁻¹ (Ni). According to the model, such low 60 Fe activities are only reached in objects at least 100 cm in radius, which is unreasonable for Grant (e.g., Ammon et al. 2008).
Nuclide Correlations
To search for cosmogenic nuclide correlations that might help decipher cosmic ray exposure histories in iron meteorites, we plot in Fig. 5 the 53 Mn production rates as a function of 36 Cl production rates for the studied samples.
Fig. 5. Production rates of 53 Mn as a function of 36 Cl production rates for all shielding depths in iron meteorites with pre-atmospheric radii between 5 cm and 120 cm and the outermost 2 m of a 10 m object. The thin dotted lines connect the model calculations for an individual meteorite (from the surface toward the center). The thick black lines connect the results for all surfaces and centers, respectively. The model predictions define an area of allowed 36 Cl-53 Mn production rate combinations. Also shown are experimental data. Meteorites plotting outside the allowed field are labeled.
Fig. 4. Histogram of 60 Fe activities (dpm kg⁻¹ [Ni]) of the 28 studied meteorites. Also shown are data from the literature (Ott et al. 2014; Berger et al. 2007; Nishiizumi and Honda 2007; Knie et al. 1999b). The data from Knie et al. (1999b), Berger et al. (2007), and Nishiizumi and Honda (2007) were recalculated for the now accepted 60 Fe half-life of T 1/2 = 2.61 ± 0.04 Ma (Rugel et al. 2009).
Also shown in Fig. 5 are the results from the new model predictions for all shielding depths in meteorites with pre-atmospheric radii between 5 and 120 cm and the outermost 2 m of a 10 m radius object (dashed lines). The two solid black lines connect the results for the centers and the surfaces, respectively. The model calculations define an area of allowed 36 Cl-53 Mn production rates for meteorites that fall within the above-mentioned size range and that experienced single-stage exposure histories. The solid black symbols are experimental data; the 53 Mn data are from Table 1; and the 36 Cl production rates, which were determined in the same aliquots, are from Smith et al. (2017, 2019). In total, we have 36 Cl and 53 Mn data for 35 iron meteorites; of these, 25 plot within and 10 plot outside the allowed data field. The three meteorites North Chile, Turtle River, and Benedict plot below the allowed data field. The 36 Cl production rates for all three meteorites are in a range typical for iron meteorites, that is, they range between ~6 dpm kg⁻¹ for North Chile and ~24.4 dpm kg⁻¹ for Benedict. In contrast, the 53 Mn production rates are very low. For example, with a 36 Cl production rate of 23.9 ± 0.9 dpm kg⁻¹ for Turtle River, the model predicts 53 Mn production rates in the range 360-530 dpm kg⁻¹, that is, far higher than the 56.4 dpm kg⁻¹ measured by us. According to the model calculations, 53 Mn production rates as low as 56.4 dpm kg⁻¹ are only possible in iron meteorites with pre-atmospheric radii larger than ~100 cm. This is in contrast to the relatively low 4 He/ 21 Ne ratio of ~220 and the activity ratios of the light cosmogenic radionuclides (e.g., Smith et al. 2019). The reason for the apparently too low 53 Mn production rates is not clear. It might be due to (1) a complex exposure history, (2) unrecognized problems during sample preparation and/or AMS measurements, and/or (3) an unusually high concentration of natural 55 Mn in the studied iron meteorite. For the last point, we discuss as an example the data for Turtle River. The studied sample had a mass of 102 mg, and during chemical extraction, we added ~4 mg of Mn carrier.
For calculating specific 53 Mn activities, we used the measured 53 Mn/ 55 Mn ratio and the amount of 55 Mn carrier added, and we assumed that the concentration of native 55 Mn in the sample is negligible. Consequently, calculating a 53 Mn activity that is ~10% too low would require, in addition to the 4 mg of 55 Mn carrier added, ~0.4 mg of native 55 Mn in the sample. This value is unreasonably high considering that the Mn/Fe ratio in iron meteorites is in the range of 10⁻⁷ (e.g., Sugiura and Hoshino 2003), which corresponds to ~10 ng of native 55 Mn in the Turtle River sample, that is, more than four orders of magnitude lower. In addition, Herpers et al. (1969) measured native 55 Mn concentrations in the range <5.5-199 ppm, again too low to compromise 53 Mn-specific activity measurements by AMS. Next, we consider the possibility of a complex exposure history. Doing so, we assume a recent break-up of an originally much larger Turtle River iron meteorite. The recovered mass of Turtle River is ~23 kg, which corresponds to a minimum pre-atmospheric radius of ~9 cm (after the hypothetical recent break-up). The 36 Cl production rate in such an object is ~25 dpm kg⁻¹, that is, very close to the measured value. To reach this value, the meteorite must have been irradiated for at least 1 Ma. During the same time, ~80 dpm kg⁻¹ of 53 Mn would have been produced in such a meteorite, that is, much more than measured by us for Turtle River. Therefore, a complex exposure with a very recent break-up leaving some of the radionuclides under-saturated cannot explain the measured data, leaving unrecognized problems during sample preparation and/or AMS measurements as the most likely explanation. Indeed, Turtle River belongs to the batch for which some samples had "virtual" chemical yields larger than 100% for Mn and needed reprocessing (note that the excess was not MnO 2 but AgCl, see above). For Benedict, we cannot completely exclude that the sample (or at least parts of it) was lost during chemical processing. However, there are no indications of any such problems for the North Chile sample.
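The isotope-dilution arithmetic behind this argument is simple: the derived 53 Mn activity scales with the total amount of 55 Mn assumed to be present, so neglecting native 55 Mn biases the result low by m_native/(m_carrier + m_native). A minimal sketch of the two numbers quoted above (the variable names are illustrative only):

```python
# Bias from unaccounted native 55Mn in the isotope-dilution calculation:
m_carrier_mg = 4.0    # 55Mn carrier added during chemical extraction
m_native_mg = 0.4     # native 55Mn needed to bias the derived activity low by ~10%
bias = m_native_mg / (m_carrier_mg + m_native_mg)
print(f"activity underestimated by {bias:.1%}")          # ~9%, i.e. of order 10%

# Native Mn actually expected from the Mn/Fe ratio of ~1e-7 in iron meteorites:
sample_mass_mg = 102.0
expected_native_mg = 1e-7 * sample_mass_mg               # ~1e-5 mg
print(f"expected native 55Mn: {expected_native_mg * 1e6:.0f} ng")   # ~10 ng
```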
There are also five meteorites (Casas Grandes, Sikhote-Alin, Calico Rock, Schwetz, Trenton) for which the 36 Cl and the 53 Mn data individually fall into the allowed range, but for which the combination of 36 Cl and 53 Mn data lies outside the range predicted by the model. For the five meteorites in question, the 53 Mn production rates are higher than expected based on the model calculations. Note, however, that our datum for Casas Grandes of 237 ± 29 dpm kg⁻¹ agrees well with the 210 ± 6 dpm kg⁻¹ measured by Herpers et al. (1969). For Sikhote-Alin, we measured 358 ± 35 dpm kg⁻¹ and Herpers et al. (1969) measured 335 ± 8 dpm kg⁻¹, again a good agreement. Finally, Herpers et al. (1969) measured 590 ± 14 dpm kg⁻¹ compared to the 516 ± 52 dpm kg⁻¹ measured by us. However, to be more precise, the 53 Mn data for Calico Rock, Schwetz, and Trenton of greater than 500 dpm kg⁻¹ are, according to the model calculations, only possible for iron meteorites with pre-atmospheric radii in the range 20-40 cm. For meteorites in this size range, however, the 36 Cl production rates are ~20 dpm kg⁻¹ (e.g., Smith et al. 2019), that is, far higher than the values of less than ~14 dpm kg⁻¹ measured for two of the three meteorites in question. The recovered masses of 7.28 kg, 21.5 kg, and 505 kg for Calico Rock, Schwetz, and Trenton are in accord with pre-atmospheric radii in the range 20-40 cm. While we cannot fully exclude that the 36 Cl data are too low, we infer that the 53 Mn data are too high, which might be caused by unrecognized problems during sample preparation and/or AMS measurements, or it could be due to a complex exposure history. The latter might be as follows: The 53 Mn production rate decreases more slowly with depth than the 36 Cl production rate. For example, the 53 Mn production rate in a 10 m object decreases from the surface toward a shielding depth of ~2 m by about three orders of magnitude. In contrast, the 36 Cl production rates decrease by about four orders of magnitude in the same range of shielding depths. Consequently, there are regions in a large iron meteoroid with measurable amounts of 53 Mn but without any 36 Cl. If, after further break-up, those regions get close to the (new) pre-atmospheric surface, they might have inherited some excess 53 Mn from the first irradiation stage, leading to high 53 Mn/ 36 Cl activity ratios. For the break-up to have a measurable effect on the 53 Mn/ 36 Cl ratio, a requirement would be that it occurred during the last few half-lives of 53 Mn, that is, within the last 10 Ma or so. While we consider it unlikely that such a recent break-up has remained unnoticed so far, considering the variety of measured cosmogenic nuclides (e.g., Smith et al. 2019), such a scenario is not impossible. However, such a scenario cannot explain the data for the five meteorites in question. We discuss here as an example the data for the meteorite Calico Rock. Assuming that, after the hypothetical recent break-up, a relatively large object remained with a 36 Cl production rate in the range of 4 dpm kg⁻¹ (close to the value measured by us), the 53 Mn production rate would be ~200 dpm kg⁻¹, that is, far lower than the 500 dpm kg⁻¹ measured by us. Consequently, in such a scenario, 300 dpm kg⁻¹ of 53 Mn must have been inherited from the earlier irradiation stage.
From Fig. 5, we can conclude that, at least in the range of studied pre-atmospheric radii, there is no region in an iron meteorite in which 300 dpm kg⁻¹ of 53 Mn is produced without any collateral production of 36 Cl, making such a scenario impossible. A special case is the data for Greenbrier County; the 36 Cl datum of 29 dpm kg⁻¹ is unexpectedly high, whereas according to the model calculations (Smith et al. 2017), the upper limit for the 36 Cl production rate in meteoritic metal is ~25 dpm kg⁻¹. Currently, however, we have no reason to consider the 36 Cl data for Greenbrier County as unreliable. We might speculate that neutron-capture reactions on natural chlorine in a Cl-bearing mineral are the reason for the too-high 36 Cl concentration. Such a mineral could be lawrencite, which occurs in some iron meteorites (e.g., Goldschmidt 1954; Honda et al. 1961). For a discussion, see also Smith et al. (2019).
The new model predicts 60 Fe/ 53 Mn production rate ratios in the relatively narrow range 0.0021-0.0031 (dpm kg⁻¹ [Ni]/dpm kg⁻¹); the average for all radii and all shielding depths is (2.8 ± 0.1) × 10⁻³. Considering now that the model calculations are for metal consisting of 90% Fe and 10% Ni, the production rate ratio 60 Fe (kg Ni)⁻¹ to 53 Mn (kg Fe)⁻¹ changes to (2.5 ± 0.1) × 10⁻³. This is in excellent agreement with the activity ratio of 60 Fe (kg Ni)⁻¹ to 53 Mn (kg Fe)⁻¹ of (2.68 ± 0.35) × 10⁻³ deduced by Fimiani et al. (2016) for meteorite data, which has been used by these authors to disentangle measured 60 Fe data for lunar Apollo 12, 15, and 16 samples into cosmogenic and interstellar components. Both activity ratios are slightly higher than the ratios expected for extraterrestrial dust ( 60 Fe/ 53 Mn ~ 10⁻⁴ dpm kg⁻¹ [Ni]/dpm kg⁻¹; Knie et al. 1999a). Figure 6 depicts the 60 Fe production rates (dpm kg⁻¹ [Ni]) as a function of the 53 Mn production rates (dpm kg⁻¹). The experimental data are shown by the solid black symbols. Also shown is the linear correlation predicted by the model calculations (gray band). Twenty of the 23 available experimental data follow the predicted linear trend; three irons (Benedict, Gan Gan, Turtle River) plot well above the correlation line. The meteorites Benedict and Turtle River have already been discussed above for their low 53 Mn data (see Fig. 5). Gan Gan has a very high 60 Fe production rate of 2.02 ± 0.37 dpm kg⁻¹ (Ni), which indicates a rather small pre-atmospheric radius. Values that high are only possible close to the center of an iron meteoroid with a pre-atmospheric radius in the range 20-30 cm (see Fig. 3). In such objects, however, the 53 Mn activities are in the range of 550 dpm kg⁻¹, that is, far higher than the measured value of 323 ± 14 dpm kg⁻¹. The calculated pre-atmospheric radius in the range 20-30 cm is in reasonable agreement with the recovered mass of 83 kg, which corresponds to a post-atmospheric radius of ~13 cm. From these arguments, we speculate that the measured 53 Mn activity for Gan Gan is slightly too low.
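The conversion of the model ratio from a per-kilogram-of-metal basis for 53 Mn to a per-kilogram-of-Fe basis is simple bookkeeping for a metal composed of 90% Fe and 10% Ni (the 60 Fe value is already normalized to the Ni content); a minimal check of the arithmetic:

```python
r_model = 2.8e-3     # average model ratio: 60Fe (dpm per kg Ni) / 53Mn (dpm per kg metal)
fe_fraction = 0.90   # Fe mass fraction of the metal assumed in the model

# Per kg of Fe, the 53Mn production is higher by 1/fe_fraction,
# so the ratio decreases by the factor fe_fraction:
r_converted = r_model * fe_fraction
print(f"{r_converted:.2e}")   # ~2.5e-3, the value quoted in the text
```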
CONCLUSIONS
We measured 53 Mn and 60 Fe activities in 41 and 28 iron meteorites, respectively, including six samples from the large iron meteorite Twannberg for 53 Mn. Measurements of 60 Fe and 53 Mn by accelerator mass spectrometry are both experimentally challenging. Consequently, prior to this study, the database for cosmogenic 53 Mn and for 60 Fe was limited to a few measurements only. In addition, we performed new model calculations for the production of 60 Fe and 53 Mn in iron meteorites. The model is based on the same particle spectra as a function of size and depth as used by Ammon et al. (2009), but our model uses only theoretical cross sections for proton- and neutron-induced reactions obtained from the INCL nuclear model code.
The new model predictions for 60 Fe are significantly higher than earlier ones and, with one exception (Grant; Berger et al. 2007), are in generally good agreement with the older and newer measurements for iron meteorites. There is still a discrepancy between measured and modeled 60 Fe data for the iron meteorite Grant, which could well be due to inconsistent Grant data in a previous publication (Berger et al. 2007).
Fig. 6. Iron-60 production rates (dpm kg⁻¹ [Ni]) as a function of 53 Mn production rates (dpm kg⁻¹). The experimental data are shown by solid black symbols. The gray shaded area indicates the linear correlation predicted by the new model calculations. Samples that deviate significantly from the predicted linear correlation are labeled.
For 53 Mn, the new model predictions are on average 30% lower than the earlier model and are therefore in better agreement with experimental data. The new 53 Mn data for Grant are higher than the earlier data from Imamura et al. (1980) but are consistent with early AMS data (Merchel 1998) and are now in good agreement with the range predicted by the model for iron meteorites with a 40 cm pre-atmospheric radius. We found large variations among the 53 Mn activity concentrations of the six studied Twannberg samples, clearly confirming its exceptionally large pre-atmospheric size (Smith et al. 2017).
There are, however, still some discrepancies. Some of the measured 53 Mn/ 36 Cl and 60 Fe/ 53 Mn ratios do not fit into the allowed range in 53 Mn/ 36 Cl and do not follow the linear correlation between 60 Fe and 53 Mn predicted by our model, respectively. The discrepant data cannot be explained by complex exposure histories but might indicate some unrecognized problems either during sample preparation and/or during AMS measurements. The grand average of all measured data, however, agrees well with the average production rate ratios of 53 Mn/ 36 Cl ~ 31 and 60 Fe/ 53 Mn ~ 3.6 × 10⁻³ calculated with our model by considering all shielding depths in iron meteorites with pre-atmospheric radii between 5 and 120 cm.
In iron meteorites, 10 Be and 26 Al are often not very reliable due to possible contributions from traces of sulfur and/or phosphorus. For 26 Al, such contributions can be in the range of tens of percent, even in samples that were visually expected to be devoid of any inclusions (see Smith et al. 2019). In addition, 41 Ca is often very difficult to measure and is strongly affected by decay during terrestrial residence. Furthermore, 36 Cl is difficult to study because of the need for a dedicated chemistry laboratory for the chemical extraction and the high risk of cross contamination, especially in the AMS ion source. Consequently, it might well be that 53 Mn and 60 Fe are more reliable cosmogenic radionuclides for studying cosmic ray exposure histories of iron meteorites. However, for 53 Mn to become reliable, there is a need to establish a consistent and well-documented standard, which ideally should not be from a meteorite.
The quality of the model calculations has improved considerably by using calculated data instead of experimental cross sections that are based on only a few measurements and that are sometimes inconsistent. While this clearly demonstrates the good quality currently achieved by nuclear model codes in calculating nuclear cross sections (at least for some target-product combinations), it also indicates that there is still a need for more, and more reliable, cross section measurements.
"Physics",
"Geology"
] |
Cross-talk between HIF and p53 as mediators of molecular responses to physiological and genotoxic stresses
Abnormal rates of growth together with metastatic potential and lack of susceptibility to cellular signals leading to apoptosis are widely investigated characteristics of tumors that develop via genetic or epigenetic mechanisms. Moreover, in the growing tumor, cells are exposed to insufficient nutrient supply, low oxygen availability (hypoxia) and/or reactive oxygen species. These physiological stresses force them to switch into more adaptable and aggressive phenotypes. This paper summarizes the role of two key mediators of cellular stress responses, namely p53 and HIF, which significantly affect cancer progression and compromise treatment outcomes. Furthermore, it describes cross-talk between these factors.
Important consequences of rapid tumor growth include poor vascularization and insufficient oxygen delivery that together lead to formation of hypoxic (poorly oxygenated) areas [1]. Adaptation to hypoxia is facilitated by the activation of transcriptional machinery, in which hypoxia inducible factor (HIF) plays a pivotal role. HIF is a heterodimeric transcription factor composed of an oxygen-dependent α-subunit and a constitutively expressed β-subunit. Regulation of the α-subunit is driven by enzymes of the prolyl hydroxylase family (PHDs) and by the factor inhibiting HIF (FIH) [2,3]. Under normoxia, PHDs hydroxylate prolines at positions 564 and 402 (in the HIF-1α isoform) and FIH hydroxylates asparagine at position 803 [3]. Hydroxylation of prolines is required for recognition of HIF-1α by the ubiquitin ligase complex via the von Hippel-Lindau (pVHL) tumor suppressor protein, which in consequence leads to HIF-1α ubiquitination followed by its proteasomal degradation [2]. Simultaneously, FIH prevents interaction between HIF-1α and the transcriptional co-activator p300. Although there are three isoforms of the α-subunit (HIF-1α, HIF-2α and HIF-3α), most attention is drawn to HIF-1α and HIF-2α. These subunits contain similar oxygen-dependent degradation domains, but play different roles in hypoxic tumor growth and progression (for extended review see Keith et al.) [4]. Whereas HIF-1 mediates acute responses to hypoxia, HIF-2 is more involved in adaptation to chronic hypoxia and is functionally implicated in tumor progression [5].
In situations of insufficient oxygen levels, PHDs and FIH remain inactive; HIF-1α is then no longer hydroxylated and escapes recognition by pVHL. This results in its stabilization, accumulation and translocation to the nucleus, where it interacts with the β-subunit, leading to creation of an active heterodimeric form of the transcription factor. This heterodimer binds to specific cis-acting hypoxia responsive elements (HREs) in the promoters of target genes [6].
Several recent reports point out novel molecular mechanisms that affect HIF-1α levels in normoxia. An inhibitor of Janus-activated kinase 2 (JAK2), AG490, prevents HIF-1α hydroxylation and thus interferes with VHL-mediated degradation, resulting in an increased HIF-1α protein half-life [7]. Another mechanism by which HIF-1α can be rescued from degradation is via interaction with ubiquitin-specific protease 19 (USP19) [8]. Epigenetic mechanisms such as histone methylation can also be involved in HIF-1α regulation, which was studied in clear cell renal cell carcinoma (ccRCC) [9]. Moreover, HIF-1 activity is phosphorylation-dependent and thus requires engagement of signaling such as mitogen-activated protein kinase (MAPK), PI3K/Akt and mammalian target of rapamycin (mTOR), amongst others (see review by Dimova et al.) [10].
HIF-induced cascades of events allow cells to survive and overcome unfavorable conditions during hypoxia by transcriptional reprogramming that leads to modulated proliferation, angiogenesis, cell metabolism and many other features of tumor phenotype. One of the prominent HIF-1 downstream genes involved in this process is the gene coding for carbonic anhydrase IX (CA IX). CA IX is a member of the family of zinc metalloenzymes involved in regulation of cellular pH by reversible conversion of CO 2 to bicarbonate and proton [11,12]. Its activity is regulated by hypoxia through protein kinase A and leads to acidosis of the tumor milieu, which is known to be one of the hallmarks of solid tumors [13,14]. CA IX also promotes tumor cell growth and survival and helps to eliminate the surplus of intracellular acids generated through oncogenic metabolism [15,16]. Moreover, it facilitates migration and invasiveness of tumor cells and thereby supports tumor progression [17].
To satisfy the need for nutrients, tumor cells are forced to create an extensive network of new vessels through increased expression of pro-angiogenic molecules, including vascular endothelial growth factor (VEGF), which is also a well-known HIF target gene [18,19]. Additionally, VEGF can promote both angiogenesis and metastasis via up-regulation of matrix metalloproteinase 28 and matrix metalloproteinase 14 [20].
Due to lack of oxygen, a key factor for respiration, hypoxia is also known to induce a shift to glycolytic metabolism [21]. HIF-1 plays a growth factor-dependent role in the regulation of glycolysis in hematopoietic cells even in the absence of hypoxia [22] and reduces mitochondrial respiration in RCC lacking VHL [23]. HIF was also shown to be responsible for expression of specific isoforms of glycolytic enzymes and transporters via alternative splicing [24].
There are many other molecular targets of HIF that execute multiple adaptive responses to hypoxia depending on the cell type and physiological context as described elsewhere [25,26].
p53-mediated responses to genotoxic stress
Tumor suppressor p53, which shows many similarities to HIF-1 in terms of protein control by degradation, is predominantly involved in adaptation of cells to genotoxic stresses. p53 is a well-characterized transcription factor that plays a crucial role in responses to DNA damage, aberrant cell cycle control, apoptosis, and senescence [27][28][29]. Comparably to HIF-1α, the basal level of wild-type p53 is kept low due to murine double minute 2 (MDM2)-dependent ubiquitination [30]. In response to DNA damage, p53 is stabilized and phosphorylated by the ataxia telangiectasia mutated (ATM) protein, which leads to its activation and binding to the regulatory regions of target genes [31,32]. Moreover, p53 can be regulated through methylation caused by MDM2-dependent recruitment of methyltransferases [32]. In contrast, MDM2 can also act as a p53 inducer. This is mediated through the interaction of the p53 mRNA region containing the MDM2-binding site with the RING domain of MDM2, which impairs the E3 ligase activity of MDM2 and promotes p53 mRNA translation [33]. This interaction depends on ATM-mediated phosphorylation of MDM2 at Ser395 [34]. Finally, activated p53 can then start the machinery leading either to cell cycle arrest and DNA repair or to apoptosis. For example, p53-dependent upregulation of genes involved in inhibition of the IGF-1/AKT and mTOR pathways prevents cell growth and division [29,35,36]. On the other hand, inhibition of DNA damage-activated kinases leads to a switch from p53-dependent growth arrest to apoptosis [37].
The ATF3 gene, a downstream target of p53, encodes a transcription factor involved in adaptation to hypoxia, ER stress, oxidative stress and genotoxic stress [38]. ATF3 acts both as an effector of p53-mediated cell death and as a regulator of p53 signaling. A recent report indicates that ATF3 has opposing effects on the apoptotic transcriptome in the stress response and in cancer, where it was found to be over-expressed [39]. Zhang and colleagues [40] developed a four-module model to investigate p53 dynamics and the DNA damage response. They found that primary modifications such as phosphorylation at Ser-15 and Ser-20 cause cell cycle arrest, whereas further modifications such as phosphorylation at Ser-46 fully activate p53, which can then induce apoptosis. This report more clearly elucidates how p53 switches between acting as a cell cycle arrester and as a killer, which was previously shown to be controlled by Wip1 (wild-type p53-induced phosphatase 1) [41].
p53 does not only act as a transcription factor in the nucleus, but can also move to the mitochondria, where it induces permeabilization of the mitochondrial outer membrane, consequently releasing pro-apoptotic factors [28]. Suppression of autophagy via inhibition of AMP-dependent kinase and/or activation of mTOR is another cytoplasmic p53 function [42]. For an extensive insight into the cytoplasmic functions of p53, see the review by Green and Kroemer [28].
p53 as a tumor suppressor plays an important role in maintaining genome stability; thus, it is not surprising that it is mutated in more than 50% of cancers, in which its loss facilitates malignant transformation [43]. The majority of p53 mutations are missense mutations located in the DNA-binding core domain of p53, producing a full-length protein that is incapable of binding DNA and is therefore nonfunctional as a transcriptional activator/repressor. Compared to wild-type p53, missense mutant proteins show increased stability, which is partly caused by their inability to induce MDM2 but also by the formation of complexes with HSP90 and HSP70 [44].
Cross-talk between HIF-1 and p53
In addition, p53 participates in responses to hypoxia by regulating expression of genes involved in cell cycle control. This happens via a pathway that is different from that involved in the DNA damage response [45]. There are many contradictory reports on the mutual influence of p53 and hypoxic signaling. Some of them claim that hypoxia causes accumulation and an increase in p53 protein level [46,47], whereas others postulate a degradation-mediated decrease in p53 level [48,49] or no effect at all [50]. These intricate relations have been extensively reviewed by Sermeus and Michiels [51]. One explanation of these contradictory statements can be found in the phosphorylation status of HIF-1. It was shown that dephosphorylated HIF-1 is the major form binding to p53, precluding downregulation of p53 by MDM2, and thus enabling it to conduct apoptosis [52]. As both p53 and HIF-1 are mediators of cell adaptation to many stresses, they are known to be involved in similar processes such as apoptosis, cell cycle control, metabolism etc. (Figure 1). Severe and/or prolonged hypoxia activates p53-dependent apoptosis, which is initiated by stabilization of p53 by HIF-1 [53]. In contrast, another report states that hypoxia causes growth arrest by decreasing p53 phosphorylation, but has no impact on either p21 WAF1 or HIF-1 protein stabilization [54]. One possible explanation is that these discrepancies can be due to cancer cell type [55]. Opposite effects can be observed upon genotoxic stress, where wild-type p53 abrogates HIF-1 activity by triggering its proteasomal degradation [56].
However, there is a line of evidence that HIF-1 can also impair p53 activity, through the downregulation of the tumor suppressor homeodomain-interacting protein kinase-2 (HIPK2) [57]. HIPK2 phosphorylates p53 at serine 46 in response to DNA damage and subsequently activates its apoptotic function [58]. Moreover, HIPK2 inhibition can result from the hypoxia-induced upregulation of MDM2 [59].
p53 can respond to DNA damage in cooperation with the 70 kDa subunit of the replication protein A (RPA70). Under hypoxia, wild-type p53 undergoes a conformational change and acquires a mutant conformation [60]. Furthermore, hypoxia leads to disruption of the complex between p53 and RPA70, dissociation of RPA70 and activation of RPA70-mediated nucleotide excision repair and non-homologous end-joining repair, which cause resistance to apoptosis in hypoxic cancer cells [61]. That report provides new insight into impairment of p53-mediated apoptosis and the consequent insensitivity of cancer cells to treatment. However, it is still hard to elucidate what starts the p53 and/or HIF-1 machinery for the adaptation of cells to unfavorable conditions.
Thomas et al. [62] focused on tumor response to nitric oxide (NO) exposure and proposed that both p53 and HIF-1 are stabilized by NO in a dose- and time-dependent manner, with a higher NO concentration required for p53 stabilization. They suggested that cells localized closer to the source of NO production can undergo p53-dependent cell arrest and death, while more distant cells respond with increased HIF-1 levels. Additionally, their results indicated that HIF-1 stabilization by NO was independent of p53 status.
Altered metabolism is one of the prominent features that promote tumor survival. Otto Warburg was the first to discover that tumors rely on anaerobic glycolysis even in the presence of sufficient oxygen and produce large amounts of lactate [63]. Later this phenomenon was named after him. The consequences of this effect have been previously reviewed [64]. Another tumor characteristic is increased uptake of nutrients, which, as stated by Vander Heiden et al. [65], is due to oncogenic mutations mainly in Akt, Myc and Ras [66]. A multitude of mutations of genes encoding enzymes participating in glycolysis, the tricarboxylic acid cycle, mitochondrial oxidative phosphorylation and other molecular pathways underlying the advantageous metabolism of cancers have already been characterized [67][68][69][70][71]. Comprehensive insights into this phenomenon can be found in recent works [72][73][74][75]. In this respect HIF-1 and p53 play crucial, but usually competing, roles. HIF-1 controls expression of genes encoding, e.g., glucose transporters, glycolytic enzymes and lactate dehydrogenase [25,76]. Interestingly, inactivating mutations in fumarate hydratase and succinate dehydrogenase cause accumulation of their substrates, which interfere with HIF-1α degradation, leading to its accumulation [77]. On the other hand, loss of p53 contributes to enhancement of glucose transport and metabolism through the NF-κB pathway [78]. Furthermore, it increases lactate production, diminishes oxygen consumption and enhances hypoxia-induced cell death. Disruption of p53 function reduces the expression of cytochrome c oxidase 2 (SCO2), which is necessary for the respiratory chain function [79]. This indicates that mutations in the TP53 gene contribute to the Warburg effect.
This is another example of the mutual communication between HIF-1 and p53 in the regulation of tumor cell survival.
Recent developments in the field of senescence, a process leading to elimination of damaged cells from the growing population and subsequently preventing cancer occurrence, reveal a dual role for hypoxia. Leontieva et al. [88] found that hypoxia inhibits the conversion from reversible cell cycle arrest to senescence (known as geroconversion), nutlin-induced senescence and mTOR activity. Additionally, in marrow-derived mesenchymal stem cells (MSCs), hypoxia promotes proliferation [89] and causes downregulation of p21 WAF1 expression in a HIF-1α-dependent manner [90]. On the other hand, many HIF-1-regulated genes are associated with senescence induction, including plasminogen activator inhibitor (PAI1), cell cycle regulators, glycolytic enzymes and secreted molecules (see review by Welford et al.) [91]. The classic model of senescence shows that hyperoxia can induce senescence through reactive oxygen species (ROS). In accordance, senescence is inhibited under low oxygen conditions simply due to decreased production of mitochondrial ROS [92]. Interestingly, a recent report indicates that overexpression of caveolin-1 in cancer-associated fibroblasts induces their senescence and supports tumor growth owing to HIF-1α stabilization by increased ROS [93]. In addition, VHL loss induces senescence in an oxygen-dependent manner by increasing the level of p27, which regulates the cell cycle. However, these effects do not rely on HIF-1α or HIF-2α activity [94]. p53 involvement in senescence has been intensively studied, and recent achievements in this field have been thoroughly reviewed [95][96][97]. It is noteworthy that p53 induction together with prolonged p21 WAF1 overexpression can suppress senescence in favor of quiescence [98]. Importantly, the cross-talk between p53 and HIF-1 can be observed at the level of their regulation, within a complex molecular loop which involves both factors (Figure 2). As mentioned before, ATM mediates DNA double strand break signaling and repair via phosphorylation of p53. Ousset et al. [99] used various cellular models in which ATM was disrupted and demonstrated that the absence of ATM increases expression of both subunits of HIF-1 as well as protein biosynthesis, through oxidative stress. However, ATM is also responsible for the phosphorylation of HIF-1 on Ser-696, which causes a downregulation of mTORC1 signaling that regulates translational efficiency [100]. Hypoxia is not the only suppressor of the mTOR pathway; in response to stress, p53 also negatively regulates mTORC1 by inducing the expression of a plethora of target genes in the IGF-1/AKT and mTOR pathways. This intrinsic regulation was reviewed previously [29].
Another cross-talk between HIF-1 and p53 is observed at the level of trans-activation. During hypoxia, these transcription factors compete for binding to the CH1 domain of the p300 cofactor [101]. Furthermore, it was found that another cofactor, p300/CBP-associated factor (PCAF), is involved in this regulatory mechanism. A study carried out by Xenaki et al. [102] focused on the expression of the pro-apoptotic p53 target BID and revealed a molecular mechanism underlying the regulation of p53 transcriptional activity in hypoxia. They have shown that hypoxia not only enables preferential direction of p53 to the promoter of the cell cycle arrester p21 WAF1 via PCAF, but also decreases PCAF-dependent acetylation of p53, which disrupts binding to its pro-apoptotic targets. They found that PCAF is also a HIF-1 cofactor involved in HIF-1-mediated apoptosis, whereas PCAF histone acetyltransferase (HAT) activity regulates transcriptional selectivity.
Additional convergences are visible at the level of regulation of these two transcription factors by VHL, which, as mentioned above, is a well-documented ubiquitin-dependent executor of HIF-1 degradation [2,103]. However, it was also reported that VHL positively regulates p53 activity following DNA damage, by nucleating ATM and histone acetyltransferases onto p53. It also influences cell cycle arrest and apoptosis triggered by p53 by enhancing the p53-p300 interaction and p53 acetylation [103]. Moreover, ATF3 links the molecular pathways of HIF-1 and p53 in the response to DNA damage, where both transcription factors are overrepresented, which can be explained by the suggestion that ATF3 synergizes with these transcription factors to modulate their target gene expression [39]. Recently, FIH was added to an even more complicated network in which p53 and HIF-1 are involved: FIH silencing in colon adenocarcinoma and melanoma cells strongly suppresses cell proliferation and, more importantly, increases both p53 and p21 WAF1 protein levels [104]. These results support the role of FIH in the suppression of the p53-p21 WAF1 axis.
Impact of the p53 and HIF-1 interplay on cancer progression
Despite the fact that p53 is known to prevent mutations which cause genome instability and can lead to carcinogenesis, it represents one of the most frequently mutated genes in solid tumors [45]. Conformational changes related to missense mutations in the DNA-binding domain disrupt p53 transcriptional activity, resulting in an impaired ability of p53 to regulate the cellular response to hypoxia in an effective way [105,106]. It was also established that low oxygen pressure selects for cells carrying p53 mutations and thereby contributes to metastatic potential and diminished apoptosis [46,107]. Interestingly, Gogna et al. [60], using in vivo electron paramagnetic resonance oximetry 3D imaging, found that conformationally mutated p53 appears in the hypoxic tumor core and that its conformation is oxygen-dependent.
Furthermore, not only p53 mutations act in favor of cancer progression. Hypoxia also correlates with more aggressive tumor phenotypes and poor responses to therapy [108]. This mainly involves stabilization of HIF-1 and overexpression of its target genes [109]. For instance, expression of the HIF-1 target CA IX has been investigated in various types of cancers, including breast, colorectal and pancreatic cancers [110][111][112]. In these reports, overexpression of this hypoxic marker was associated with poorer patient survival, less differentiated tumors of higher grade and worse response to therapy. Similar effects were described for VEGF in lung and gastric cancers [20,113]. Interestingly, high expression of HIF hydroxylases, which negatively regulate HIF-1 and are themselves regulated by hypoxia, was postulated as a poor prognostic factor in non-small cell lung cancers [114], whereas their inhibition reduced survival of glioblastoma cells [115]. Concurrent overexpression of both HIF-1 and p53 was found in many cancers as well [116]. An in vivo study, based on an experimental model of the chick embryo chorioallantoic membrane, revealed that HIF-1α increases invasiveness of human small cell lung carcinoma via promoting angiogenesis, not only due to overexpression of VEGF but also due to secretion of pro-inflammatory factors [20]. Moreover, Khromova et al. [117] found that accelerated growth of cancer cells is associated with p53 mutations and caused by ROS-mediated activation of the HIF-1/VEGF-A pathway, which links both factors with neovascularization. In a large cohort of colorectal cancers, HIF-1α but not HIF-2α was shown to have an important negative prognostic role in cancer aggressiveness and overall survival of patients [118]. In contrast, Cleven et al. [110] suggested that in the stroma of these tumors HIF-2α and CA IX serve as poor prognostic factors in tumors expressing wild-type p53 compared with tumors carrying the mutant form. Regarding p53, some studies link its expression with patient survival [119], others with invasion depth [120], poor differentiation [111] or worse distant survival [121]. Moreover, another report indicates no significant survival difference between wild-type and mutant p53 [110]. This leaves an open question of how hypoxia selects for mutated p53 and thereby impacts patient outcome.
Hypoxia causes resistance to commonly used anticancer agents, either due to downregulation of genes that are drug targets or because oxygen deprivation abrogates the activity of the drugs. Chemotherapeutics of the first choice (doxorubicin, etoposide, cisplatin) cause DNA damage and therefore activate p53 to conduct apoptosis. HIF-1, by modulating expression of its target genes, renders the cells less prone to treatment, although this effect is cell type-dependent [55]. Insensitivity can be HIF-1-independent as well, relying instead on p53 suppression [122]. Moreover, hypoxic cells divide less rapidly and are localized further from functional blood vessels. Due to that, drugs are unable to reach poorly oxygenated areas and work less efficiently than in highly proliferating cells [123].
Last but not least, overexpression of P-glycoprotein (Pgp), a member of the ATP-binding cassette (ABC) protein superfamily, has been reported to cause multidrug resistance (MDR) of tumors [124,125]. Other studies elucidated that the increase in Pgp abundance is due to transactivation by HIF-1 recruited to the MDR-1 gene in MCF-7 spheroids and hypoxic cells. Importantly, both MCF-7 spheroids and hypoxic cells show lower susceptibility to doxorubicin treatment and reduced accumulation of drugs [126].
Conclusions
It is well known that hypoxia and genome instability are intrinsic tumor characteristics, which influence cancer progression and hence patient outcome. This report describes mutual relations between p53 and HIF-1 as mediators of adaptation to diverse cellular stresses, including DNA damage and hypoxia. Although they share many similarities, they can either act in parallel or compete with each other in regulation of diverse molecular pathways. These discrepancies have been extensively studied, but there are still many gaps in understanding what triggers pro-survival or lethal activity of these transcription factors. This work highlights the importance of further investigation of this loop as the data mentioned above indicate that it involves both positive and negative regulators as well as epigenetic mechanisms. This knowledge is indispensable not only for proper patient treatment, which as reported here can be influenced by both cancer cell type and tumor environment, but also for development of new drugs targeting p53 and/or HIF-1 pathways.
Competing interests
The authors declare they have no competing interests.
Authors' contributions JO reviewed the literature, and wrote and edited the manuscript. SP contributed to study conception and critically revised the paper. BV critically revised the paper. RH contributed to study conception, revised and finalized the manuscript. All authors read and approved the final manuscript.
"Biology"
] |
Comparison of noninvasive cardiac output and stroke volume measurements using electrical impedance tomography with invasive methods in a swine model
Pulmonary artery catheterization (PAC) has been used as a clinical standard for cardiac output (CO) measurements on humans. On animals, however, an ultrasonic flow sensor (UFS) placed around the ascending aorta or pulmonary artery can measure CO and stroke volume (SV) more accurately. The objective of this paper is to compare CO and SV measurements using a noninvasive electrical impedance tomography (EIT) device and three invasive devices using UFS, PAC-CCO (continuous CO) and arterial pressure-based CO (APCO). Thirty-two pigs were anesthetized and mechanically ventilated. A UFS was placed around the pulmonary artery through thoracotomy in 11 of them, while the EIT, PAC-CCO and APCO devices were used on all of them. Afterload and contractility were changed pharmacologically, while preload was changed through bleeding and injection of fluid or blood. Twenty-three pigs completed the experiment. Among 23, the UFS was used on 7 pigs around the pulmonary artery. The percentage error (PE) between COUFS and COEIT was 26.1%, and the 10-min concordance was 92.5%. Between SVUFS and SVEIT, the PE was 24.8%, and the 10-min concordance was 94.2%. On analyzing the data from all 23 pigs, the PE between time-delay-adjusted COPAC-CCO and COEIT was 34.6%, and the 10-min concordance was 81.1%. Our results suggest that the performance of the EIT device in measuring dynamic changes of CO and SV on mechanically-ventilated pigs under different cardiac preload, afterload and contractility conditions is at least comparable to that of the PAC-CCO device. Clinical studies are needed to evaluate the utility of the EIT device as a noninvasive hemodynamic monitoring tool.
S1. Protocol of the first study
We began each experiment with a baseline/stabilization part where MAP was stably maintained for at least 5 minutes. This baseline/stabilization part was repeated after each intervention. The part #1 was to measure CO and SV under different afterload conditions using the following protocol:
1. Nitroprusside was slowly administered while increasing its dose until MAP decreased to 60 mmHg. The dose was maintained for 15 minutes.
3. Phenylephrine was slowly administered while increasing its dose until MAP increased above 85 mmHg. The dose was maintained for 15 minutes.
4. Waited until MAP returned to 70~80 mmHg. Further waited for 5 minutes.
The part #2 was to measure CO and SV under different contractility conditions as follows:
1. Dobutamine was slowly administered while increasing its dose until MAP increased above 100 mmHg. The dose was maintained for 5 minutes.
2. Waited until MAP returned to 70~80 mmHg. When MAP did not return to 70~80 mmHg, crystalloid fluid (Plasma Solution-A Injection, CJ Healthcare, Korea) was administered until MAP returned to 70~80 mmHg. Waited for 5 minutes.
3. Esmolol was slowly administered while increasing its dose until MAP decreased below 60 mmHg. The dose was maintained for 15 minutes.
4. Waited until MAP returned to 70~80 mmHg for 30 minutes.
The part #3 was to measure CO and SV under different preload conditions as follows:
1. Blood was withdrawn from the animal through a needle, which was inserted into a blood vessel and connected to a blood bag, until MAP decreased below 50 mmHg. The amount of blood withdrawn and the bleeding rate varied in different animals.
2. Waited for 5 minutes without withdrawing additional blood.
S2. Protocol of the second study
We began each experiment with a baseline/stabilization part where MAP was stably maintained for at least 5 minutes. This baseline/stabilization part was repeated after each intervention. The part #1 was the same as the part #1 of the first study.
The part #2 was the same as the part #2 of the first study.
The part #3 was to measure CO and SV under different afterload conditions using the following protocol:
1. Thromboxane was slowly administered while increasing its dose until the mean PAP increased to 35 mmHg. The dose was maintained for 5 minutes.
2. Waited until the mean PAP returned to 25 mmHg. Further waited for 5 minutes.
The part #4 was to measure CO and SV under different preload conditions as follows:
1. Blood was withdrawn from the animal through a needle, which was inserted into a blood vessel and connected to a blood bag, until MAP decreased below 50 mmHg. The total amount of blood withdrawn and the bleeding rate varied in different animals.
2. Waited for 5 minutes without withdrawing additional blood.
3. The removed blood was transfused into the pig over 5 to 10 minutes. When MAP decreased below 50 mmHg during fluid administration, dobutamine was administered until MAP increased above 50 mmHg.
S3. Thoracotomy procedure
Each pig was placed on an operating table in the right lateral position. Thoracotomy started at the 5th or 6th intercostal space. Subcutaneous tissues, muscles and pleura were dissected using an electrosurgical unit (ESU). Then, a retractor was placed between the ribs to secure a view to install the C-shaped UFS around the pulmonary artery while the pericardium was open. There was little displacement of the heart during this thoracotomy procedure. After checking the signal from the installed UFS around the pulmonary artery, the dissected layers were sutured and wound areas were taped to close the chest. Then, we attached 16 sensing electrodes and 1 reference electrode for EIT measurements around the chest. We did not observe any noticeable changes in the acquired signals before and after the thoracotomy procedure. We believe that the thoracotomy did not affect the PAC-CCO, APCO and EIT measurements per se.
S4. EIT data collection method
The EIT device injected current between a chosen neighboring electrode pair. For each current injection, 16 voltage data were measured simultaneously (parallel measurements). 13 of them were used for SV/CO calculations and 3 were used to estimate electrode-skin contact impedance values. This was repeated for all 16 neighboring current-injecting electrode pairs in 10 ms.
S5. Percentage error and concordance
To compute the percentage error (PE), we let $X^{\mathrm{REF}}_{i,j}$ and $X^{\mathrm{DEV}}_{i,j}$ be the $j$th data pair from the $i$th animal measured by the reference device and the device under test, respectively. The number of data pairs from the $i$th animal is $n_i$ and the number of animals is $K$, so the total number of data pairs is $N = \sum_{i=1}^{K} n_i$. Denoting the difference as $d_{i,j} := X^{\mathrm{DEV}}_{i,j} - X^{\mathrm{REF}}_{i,j}$, we compute the PE as $\mathrm{PE} = \frac{1.96\,\hat{\sigma}}{\bar{X}^{\mathrm{REF}}} \times 100\%$, where $\bar{X}^{\mathrm{REF}}$ is the mean of the reference measurements, $\hat{\sigma}$ is the standard deviation of the differences accounting for inter-subject biases, and $n_h$ is the harmonic mean of $\{n_i\}_{i=1}^{K}$ used in estimating that standard deviation.
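As a rough illustration (not the authors' code), the sketch below computes the percentage error from paired readings using the formula above; the function name and input layout are our own assumptions, and the harmonic-mean-based adjustment of the standard deviation for inter-subject biases is omitted for brevity.

```python
import numpy as np

def percentage_error(ref, test):
    """Percentage error PE = 100 * 1.96 * SD(differences) / mean(reference).

    ref, test: paired CO (or SV) readings from the reference device and the
    device under test. The adjustment of the SD for inter-subject biases
    described in the text is not reproduced here; a plain pooled SD is used.
    """
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    d = test - ref
    return 100.0 * 1.96 * d.std(ddof=1) / ref.mean()

# example with made-up numbers
print(f"PE = {percentage_error([4.1, 4.5, 5.0, 3.9], [4.3, 4.4, 5.3, 3.7]):.1f}%")
```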
The condition of concordance in measured data $X^{\mathrm{DEV1}}$ and $X^{\mathrm{DEV2}}$ from the devices DEV1 and DEV2 was defined as $\Delta X^{\mathrm{DEV1}}(i)\,\Delta X^{\mathrm{DEV2}}(i) > 0$, where $\Delta X^{\mathrm{DEV1}}(i)$ and $\Delta X^{\mathrm{DEV2}}(i)$ are the percentage differences in $X^{\mathrm{DEV1}}$ and $X^{\mathrm{DEV2}}$ at time $i$, respectively, computed as $\Delta X(i) = \frac{X(t_i) - X(t_{i-1})}{X(t_{i-1})} \times 100\%$, and $t_{i-1}$ and $t_i$ are two consecutive time points obtained when the entire experiment time was divided into non-overlapping 10-min intervals. Similarly, the condition of discordance in measurements $X$ was defined as $\Delta X^{\mathrm{DEV1}}(i)\,\Delta X^{\mathrm{DEV2}}(i) < 0$. The concordance of $X$, denoted as $\mathrm{CCD}_X$, was computed as the percentage of data pairs satisfying the concordance condition outside an exclusion band of 15% in both $\Delta X^{\mathrm{DEV1}}$ and $\Delta X^{\mathrm{DEV2}}$, that is, $\mathrm{CCD}_X = \frac{\#C}{\#C + \#D} \times 100\%$, where $\#S$ denotes the number of elements contained in the set $S$, $E$ is the set of data pairs in the exclusion band, $C$ is the set of data pairs satisfying the concordance condition, and $D$ is the set of data pairs satisfying the discordance condition. The sets were defined as $E = \{i : |\Delta X^{\mathrm{DEV1}}(i)| < 15\% \text{ and } |\Delta X^{\mathrm{DEV2}}(i)| < 15\%\}$, $C = \{i \notin E : \Delta X^{\mathrm{DEV1}}(i)\,\Delta X^{\mathrm{DEV2}}(i) > 0\}$, and $D = \{i \notin E : \Delta X^{\mathrm{DEV1}}(i)\,\Delta X^{\mathrm{DEV2}}(i) < 0\}$.
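The following sketch implements one plausible reading of this calculation (our own helper, not the authors' implementation): percentage changes are taken over consecutive 10-min interval boundaries, pairs whose changes are both inside the 15% exclusion band are dropped, and the concordance rate is the share of remaining pairs whose changes have the same sign.

```python
import numpy as np

def concordance_rate(x_dev1, x_dev2, exclusion=15.0):
    """4-quadrant concordance between two devices' trend data.

    x_dev1, x_dev2 : values from the two devices sampled at consecutive
                     10-min interval boundaries (one value per boundary).
    exclusion      : exclusion band in percent; pairs whose changes are both
                     smaller than this in magnitude are ignored.
    """
    x1, x2 = np.asarray(x_dev1, float), np.asarray(x_dev2, float)
    # percentage change between consecutive interval boundaries
    d1 = 100.0 * np.diff(x1) / x1[:-1]
    d2 = 100.0 * np.diff(x2) / x2[:-1]
    keep = ~((np.abs(d1) < exclusion) & (np.abs(d2) < exclusion))
    concordant = (d1[keep] * d2[keep]) > 0
    return 100.0 * concordant.sum() / keep.sum()
```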
S9. Details about Bland-Altman and Concordance Analyses
Table S2. Details of the Bland-Altman analyses. The results for the APCO method should not be interpreted conclusively due to its unreliability in animals.
3. 1 L of crystalloid fluid (Plasma Solution-A Injection, CJ Healthcare, Korea) was intravenously administered for 20 minutes. When MAP decreased below 50 mmHg during fluid administration, dobutamine was administered until MAP increased above 50 mmHg. 4. Waited for about 20 minutes.
Fig. S1. (a) Example of the EIT device's 208-channel impedance signals from an animal.
Fig. S2. (a) Example of the CVS extracted from the 208-channel impedance data shown in Fig. S1 using the lead-forming algorithm 25 . (b) Power spectrum of the CVS in (a) shows a fundamental frequency of about 1.2 Hz corresponding to a heart rate of about 72 bpm.
S7.
Fig. S3. (a)~(p) Measured CO, SV, COScaled, SVScaled, HR, ABP, CVP and PAP data for 16 pigs from the first study. In the CO and SV plots, neither amplitude scaling nor time-delay adjustment was applied. In the COScaled and SVScaled plots, the EIT and APCO data were scaled in amplitude using a PAC-CCO datum in the beginning of each experiment as a reference value, and a time-delay adjustment was applied to the PAC-CCO data. The mean values of the ABP, CVP and PAP data are shown in yellow. SV plots were smoothed to remove short-term SV variations.
Fig. S4. (a)~(g) Measured CO, SV, COScaled, SVScaled, HR, ABP, CVP and PAP data for 7 pigs from the second study. In the CO and SV plots, neither amplitude scaling nor time-delay adjustment was applied. In the COScaled and SVScaled plots, the EIT, PAC-CCO and APCO data were scaled using a UFS datum in the beginning of each experiment as a reference value, and a time-delay adjustment was applied to the PAC-CCO data. The mean values of the ABP, CVP and PAP data are shown in yellow. SV plots were smoothed to remove short-term SV variations.
Table S3. Details of the concordance analyses. The results for the APCO method should not be interpreted conclusively due to its unreliability in animals. | 2,171.4 | 2024-02-05T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Glass with a low-melting temperature belonging to the P2O5–CaO–Na2O system, applied as a coating on technical ceramics (alumina, zirconia) and traditional ceramics (porcelain stoneware)
This article investigates the development and potential applications of low-melting point lead-free glasses. Their importance is due to strong market demands to comply with the strict international regulations against the use of lead. In this work, a preliminary study of the interactions of a low-melting-temperature glass belonging to the P2O5–CaO–Na2O system when it is deposited on different ceramic substrates, both traditional (porcelain stoneware) and technical (alumina and zirconia), has been carried out. The ionic diffusion through the studied interfaces, the phases present, the composition of the glassy phase and the surface morphology of the coating have been studied by Field Emission Scanning Electron Microscopy (FESEM) with coupled EDX microanalysis. The chemical resistance of the different glassy coatings obtained is also evaluated. The results showed that these new lead-free low-melting-temperature glassy coatings are chemically and mechanically compatible, and promising candidates for applications and markets in a broad range of fields. © 2023 The Authors. Published by Elsevier España, S.L.U. on behalf of SECV.
Introduction
The most important challenge nowadays in the field of low-melting glass materials (softening temperatures < 600 °C) - a consolidated and key technology for a wide range of industrial and consumer products - is to obtain eco-friendly alternatives to lead-containing glass products.
Current applications include sealing for electronics [1][2][3], glass matrices for thick-film materials (circuit carriers) and passive components (resistors, condensers, resonators, etc.), and the encapsulation of organic light emitting diodes (OLED) [4] and many microelectromechanical system (MEMS) devices [5][6][7], such as resonators, gyroscopes and tunnelling sensors. While most low-melting glasses are used for sealing, there are several significant applications in fuel cells, solar energy devices and nuclear waste immobilization, together with overglazing of automotive, packaging and architectural glass [8].
The most classical low-melting glass materials, based on the PbO–B2O3 system and integrated in commercial devices, present an important environmental impact due to the high toxicity of lead oxide (PbO). Sources of contamination are the material components and the waste generated during mechanical and other types of post-processing of the product, making its replacement necessary, as also indicated in the directives adopted by the European Union (EU) for the exclusion or substitution of hazardous substances in electrical and electronic devices [9,10]. Accordingly, lead-free low-melting point glasses have a wide range of applications and a promising future in the electronics industry because of their low sealing temperature and respect for the environment.
On the other hand, it is not environmentally friendly to use a large amount of energy to process materials at high temperature. Specifically, energy saving is an important issue, with different approaches, from the glass to the ceramic industry sector. All these industries are characterized by the prolonged operation of high-temperature kilns and furnaces; not only is a large amount of energy consumed during the production process, but the energy cost is also a significant percentage of the total production costs. The implementation of energy-saving technologies, therefore, is imperative for reasons that have to do both with the worldwide energy crisis and environmental degradation, as well as with product cost reduction. In this regard, the tile industry dominates the ceramic sector and is the most competitive, demanding the lowest production costs and the highest productivity [11]. A great quantity of thermal energy is consumed in ceramic tile manufacture, mainly in the firing stage. In fact, producing just one tonne of ceramic tiles requires 1.67 MWh of energy [12]. Reducing the environmental impact can be achieved by modifying the compositions of low-melting glass materials while maintaining their functional characteristics, or by improving their manufacturing technology, associated with a lowering of firing temperatures.
In this context, many researchers have been investigating lead-free low-melting glass systems as alternatives, for example ZnO-based and other systems [13][14][15][16][17][18][19][20][21][22][23][24][25]. Zinc-borate glasses exhibit high softening temperatures, in the range >600 °C. Antimony oxide presents high volatility, which complicates the synthesis of these glasses, and they show low chemical stability. Bismuth borate glasses show poor electrical insulation properties. Most phosphate glasses do not satisfy the requirements in terms of chemical stability, or they are characterized by heightened values of the softening temperature.
In this work, the interactions between a low-melting glass coating belonging to the P2O5–CaO–Na2O system and different advanced technical and traditional ceramic substrates have been studied. During the heat treatment, the glaze must be considered as a heterogeneous system that changes its aggregation state, which is accompanied by structural modifications of the formed melt and of its phase composition as well. The final objective is to evaluate whether the newly designed Pb-free glasses satisfy both the low processing temperature and the mechanical and chemical compatibility requirements.
Glass preparation
Glass belonging to the ternary system P2O5–CaO–Na2O [13] was synthesized starting from a mixture of 83 wt.% sodium hexametaphosphate (Na15P13O40, Sigma-Aldrich) and 17 wt.% dicalcium phosphate (CaHPO4·2H2O, Sigma-Aldrich), its chemical composition (wt.%) being the following: 65.60% P2O5, 5.76% CaO and 28.64% Na2O. The precursors were weighed into a Pt-10% Rh crucible. The final heat treatment to melt the mixture was at 900 °C with a 1 h dwell. The melt was cooled by pouring it into a water-cooled stainless-steel container. In Fig. 1, the location of the obtained composition within the ternary system can be seen.
The glass was ground and sieved below 63 μm and then deposited on each of the corresponding ceramic surfaces. The amount of glass powder deposited on each of them was always controlled to be 0.05 g/cm2. In addition, another glass was also formulated, to which 1 wt.% Co2O3 was added.
Characterization of the glass
For the characterization of the glass, X-ray diffraction (Bruker D8-Discover, UK) using Cu Kα radiation was used. The glass characteristic temperatures were analyzed by differential thermal analysis (DTA, TA Instruments, Q600, USA) with a heating rate of 10 °C/min up to 650 °C. Different disks of technical ceramic with density > 99% have been used as substrates: (i) TM-DAR alumina (99.9% purity) from Taimei (Japan), and (ii) PSZ magnesia-doped zirconia (3 mol%) from Unitec Ceramics (USA). Also, commercial glazed porcelain stoneware slabs (Castellón, Spain) were used. The study by microscopy and semi-quantitative microanalysis was carried out in a FESEM (FEI Quanta FEG 650, USA). Previously, after their heat treatments, the specimens were first cross-sectioned, then prepared with SiC papers (4000) and, finally, polished with diamond cloths down to 6 and 3 μm. All the specimens were metallized with a nanometre layer of carbon for their observation by FESEM and subsequent microanalysis by EDX.
The chemical resistance was evaluated following the method described in the UNE-EN ISO 10545-13:2017 standard [26], which consists of applying a certain volume of reagent to the surface of the tiles for a certain time. The concentrations of the solutions used and their residence times on the specimens to be tested are detailed in Table 1.
Results
The differential thermal analysis (DTA) curve is shown in Fig. 2. The exothermic signal at ∼425 °C can be ascribed to the crystallization of the glass (Tc). The two endothermic peaks (Tm1 and Tm2) at 500 and 550 °C are attributed to the melting of crystalline phases precipitated during the heating of the glasses. These values are in good agreement with the DTA results obtained in a similar glass composition studied in a previous work [13], where the exothermic crystallization and endothermic melting temperature peaks were found to be Tc = 435 °C and Tm = 530 °C, respectively.
In Fig. 3, the decrease in the area of the sample as a function of the temperature may be observed. The glass crystallization temperature, Tc, obtained by DTA measurements is indicated; complete densification takes place before this temperature is reached. The white colour of the alumina substrate (Fig. 5A) and the yellow colour of the zirconia substrate (Fig. 5B) remain visible through the coating. In Fig. 5C, the glass doped with 1 wt.% cobalt oxide, showing the typical blue colouration, on a zirconia-magnesia substrate is shown. Fig. 6 shows the X-ray diffractogram of the glass, where the typical hump of amorphous materials can be seen. For comparison purposes, the XRD pattern of this glass after a heat treatment at 875 °C for 2 h, cooled in the furnace at a slow rate of 1 °C/min, is also shown. A calcium sodium phosphate, CaNa4O18P6, precipitates as the majority phase, in good agreement with the equilibrium diagram of Fig. 1.
In Fig. 7, the microstructure of the cross section of the glassy coating on the alumina plate can be seen. The melt fits perfectly to the alumina surface, the heat treatment used being 775 °C/2 h. EDX microanalysis shows the presence of a small proportion of Al in the upper layer (Fig. 7, microanalysis 1) coming from the substrate, which results in the bright whitish colour of the surface, while in the lower layer, pure alumina, no cation coming from the coating is detected. The presence of superficial crack patterns on the coating surface can be observed.
The cross section of the 3 wt.% MgO zirconia substrate coated with the glass doped with 1 wt.% cobalt oxide is shown in Fig. 8. The heat treatment corresponding to this sample was 875 °C/2 h. As can be seen in the micrograph (Fig. 8A), a neat contact between the ceramic substrate and the glass coating can be observed, as well as idiomorphic crystals rich in calcium phosphate that appear in the contact zone. EDX microanalyses taken in different areas are plotted in Fig. 8.
Regarding the coating of porcelain tile with glass, a study of the interactions between the two materials was carried out, taking into account two heat treatments at 775 °C and 875 °C and with two final holding times: 2 h and 4 h. Figs. 9 and 10 show the microstructures of the polished cross-sections of the heat treatments carried out at 775 °C with 2 h and 4 h of holding time.
Referring to Fig. 9, it is noteworthy that the layer of glass fits perfectly to the relief of the original porcelain substrate; however, after the heat treatment for 2 h, its chemical composition changes compared to the original one. Thus, throughout the thickness, it can be seen that there has been a microdiffusion of ions rich in silica and also in aluminium from the glaze substrate of the starting porcelain.
In Fig. 10, other microstructures can be observed by FESEM under the same heat treatment, but with a residence time twice as long as the previous one: 4 h. This figure shows the average chemical composition of the engobe-glaze assembly and the chemical composition of the starting glass (Fig. 10A). In Fig. 10B, the microstructure of the glazed layers can be seen in detail, where in the contact zone of the glass coating on the porcelain tile substrate, there are two compositional zones due to the existing reactions and the subsequent diffusion of the cations present in both the porcelain tile glaze and its engobe. Specifically, there is a layer of about 60-65 μm where there is an important fraction of cations (Al, Si, Mg, K, Ba and Zn) that are not present in the original composition of the glass, which are responsible for the whitish colouration of the final glaze (Fig. 4B). The other continuous layer below, of about 25 μm, is rich in Si, and is the result of the reorganization of the dark grey precipitates (Fig. 9A) with increasing holding time of the heat treatment. EDX microanalyses corroborate this fact, as can be seen in positions (1)-(3) of Fig. 10.
The pieces thermally treated at 875 °C/4 h were tested against both acid and alkaline attack, according to the conditions established by the ISO 10545 standard. In Fig. 11, the surface appearance of the treated pieces can be observed: the surfaces treated with citric acid and potassium hydroxide appear unaltered after being washed. On the other hand, when the attack with hydrochloric acid is evaluated, a superficial deterioration with loss of shine of the piece is observed. The results showed that the coating exhibits chemical stability at an acceptable level.
Discussion
In this work, a Pb-free low-melting point glass with an environmentally compatible, non-toxic chemical composition has been selected [13]. As shown in Fig. 2, this glass has endothermic melting temperature peaks in the DTA located at 500 °C and 550 °C. Additionally, a flow point is reached at 611 °C, regardless of the support substrate on which the high-temperature optical microscopy test has been carried out (alumina, porcelain). For this reason, and taking into account the Stokes-Einstein equation [27] ($D \approx k_B T / (3\pi\eta d)$, where $D$ is the diffusivity, $k_B$ is the Boltzmann constant, $T$ is the temperature (K), $d$ is the diameter of the diffusing molecule and $\eta$ is the viscosity), the heat treatment temperature for the coatings was chosen to be around 200 °C above the mentioned fluidity temperature (775-875 °C), in order to facilitate the diffusion and exchange of ions through the coating/substrate interface and, consequently, to be able to modify their chemical composition and increase their chemical stability.
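As a back-of-the-envelope illustration of this argument, the snippet below evaluates the Stokes-Einstein estimate at the flow point and roughly 200 °C above it; the viscosities and the diffusing-species diameter are purely illustrative placeholders, not measured values for this glass.

```python
import math

kB = 1.380649e-23      # Boltzmann constant, J/K
d = 0.3e-9             # assumed diameter of the diffusing species, m

def diffusivity(T_celsius, eta):
    """Stokes-Einstein estimate D = kB*T / (3*pi*eta*d)."""
    return kB * (T_celsius + 273.15) / (3 * math.pi * eta * d)

# illustrative viscosities (Pa*s), not measured values for this glass
D_flow = diffusivity(611, 1e4)    # near the flow point
D_hold = diffusivity(825, 1e2)    # roughly 200 degC higher, much lower viscosity
print(f"Diffusivity increases by a factor of ~{D_hold / D_flow:.0f}")
```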
This transfer of ions at the glass/substrate interface has been widely achieved in the temperature range studied. As can be seen in Fig. 7, in the case of the alpha-alumina substrate at 775 °C, dissolution of alumina has occurred at the interface and the dissolved alumina has been incorporated into the glass composition (≈6.6 at.% Al was detected), making it more durable than traditional phosphate glasses [28]. In this particular case, the presence of surface microcracking is due to the large difference between the coefficients of thermal expansion of the alumina substrate (7.2 × 10⁻⁶ °C⁻¹) and the glass coating (24 × 10⁻⁶ °C⁻¹) [29].
In the case of the Mg-zirconia substrate (Fig. 8), a perfect glass-ceramic adhesion, free of microcracks, is observed. A precipitation of idiomorphic crystals of calcium and sodium phosphate along the interface, with sizes ranging between 0.1 and 1 μm, is detected. This fact is in good agreement with the equilibrium diagram of Fig. 1 and with the X-ray diffraction pattern of the original glass thermally treated at 875 °C for 2 h (Fig. 6). The composition of the glassy phase was notably modified, drastically decreasing its Na2O and P2O5 content, as can be seen in the corresponding EDX analyses. Such an effect induces a greater resistance to chemical attack of the final glaze obtained. In this particular case, because of the higher value of the thermal expansion coefficient of the PSZ substrate (11 × 10⁻⁶ °C⁻¹), an excellent adhesion is achieved, and also a good mechanical coupling that prevents the appearance of superficial microcracks. Furthermore, through a detailed observation of the glassy coating/PSZ interface shown in the SEM micrographs of Fig. 8, it can be deduced that the zone of approximately 30 μm depth of the zirconia substrate in contact with the coating is apparently denser and contains fewer pores than the rest of the substrate. This fact could be due to a diffusion/wetting process of the glassy phase of the coating through the zirconia grain boundaries, improving the anchorage of the coating as well as its mechanical stability.
Fig. 8 - Cross section at different magnifications of the 3 wt.% MgO zirconia substrate coated with the glass doped with 1 wt.% cobalt oxide. EDX microanalyses taken in different areas of the interface are shown. In (A) the calcium phosphate microcrystals can be seen in the contact zone between the two dissimilar surfaces.
In the case of porcelain tile, the substrate on which this low-melting point glass has been deposited corresponds to a commercial ceramic glaze with a high silica and alumina content (≈50 wt.% and 22 wt.%, respectively). Therefore, in this particular case there is a high driving force (ΔG < 0) that favours the diffusion of the Si4+ and Al3+ ions from the ceramic glaze to the glass at the interface, as clearly observed in the FESEM micrographs corresponding to the cross section of the glass/porcelain tile treated at 775 °C (Figs. 9 and 10). In these figures a high concentration of silica at the glass/glaze interface can be seen. The Si4+ has migrated from the original glaze to the glass. This silica-rich layer probably precipitates in the form of cristobalite [30] when it reaches a certain concentration, then reorganizes and increases its thickness during the heat treatment, reaching about 25 μm after 4 h. The composition of the top glassy phase has been significantly modified by the Al, Ba and Zn ions incorporated from the substrate (according to the EDX analysis). Taking into account that the chemical stability of phosphate glass is generally poor, which greatly limits its applications [31], this ionic exchange improves its chemical stability, for instance against citric acid and alkaline attack, as can be seen from the tests of resistance to chemical attack (Fig. 11). Therefore, the ionic interdiffusion data obtained in this work open up the possibility of designing ad hoc compositions that can satisfy the strict requirements of applications with higher added value, e.g. microelectronics, surface decoration, etc.
In this preliminary study, it has been shown that low-melting point glasses belonging to the P2O5–CaO–Na2O system (free of highly polluting heavy metals, such as Pb, and with a non-toxic chemical composition, totally compatible with the environment) can, with an appropriate compositional design, be used both in the field of traditional ceramics and in the field of technical ceramics. Considering that a significant application of Pb-free low-melting glasses is in the field of decorative ceramics, it is important to point out that the addition of 1 wt.% of Co2O3 to the original SiO2-free glassy matrix induced an intense blue colour (Fig. 5), as has been reported in the literature for commercial non-toxic and traditional silicoaluminate-based glazes [32][33][34]. As far as we know, no data have been reported in the literature on this topic for silica-free P2O5–CaO–Na2O glasses.
Conclusions
The following conclusions can be stated: 1. Starting from a low-melting point glass belonging to the P2O5–CaO–Na2O system, free of heavy metals such as Pb, glassy coatings can be manufactured at 775-875 °C, which is the temperature range commonly used in the ceramic industry for the application of decorative coatings. The glassy coatings obtained exhibit good chemical compatibility and mechanical integrity with both traditional ceramic substrates (porcelain tiles) and technical ceramics (zirconia). 2. During the heat treatment, an ionic exchange takes place with the substrate that modifies (a) the chemical composition (i.e., in the case of ceramic tiles or dense alumina substrates), as well as (b) the phase relationships of the original glassy coating through nucleation and growth of new phases (i.e., calcium phosphate at the interface in Mg-ZrO2 substrates), which would improve both its chemical stability and its mechanical stability. 3. We understand that this fact opens up a panoply of application possibilities for these Pb-free low-melting-temperature glasses, both in the field of microelectronics and in traditional ceramics.
Fig. 1 - Location of the composition of the glass within the ternary system P2O5–CaO–Na2O.
In addition, the thermal characterization was also performed by Hot Stage Microscopy (HSM) using a side-view optical microscope EM 201 with a computerized image analyser system and an electrical furnace Leica 1750/15. The powder glass samples were cold pressed into cylinders (2 mm × 4 mm), and the measurements were conducted in air with a 10 °C/min heating rate up to the flow temperature, using a ceramic support containing a Pt/Rh (6/30) thermocouple. The temperatures corresponding to the characteristic viscosity points (first shrinkage, maximum shrinkage, softening, half ball and flow) were obtained from the photomicrographs taken during the hot-stage microscopy experiment following the standards DIN 51730-1998 and ISO 540-1995. Additionally, the HSM software calculates the percentage decrease in area of the sample images. The coefficient of thermal expansion of the glass was calculated using dilatometry equipment (Netzsch DIL402C, Germany).
Fig. 2 - DTA curves obtained for the glass showing the exothermic crystallization temperature peak (Tc) and the endothermic melting temperature peaks (Tm1 and Tm2).
Fig. 3 - Variation in area of the glass samples on the two types of substrates (alumina and porcelain), and photomicrographs of the evolution of the glass sample shape during the HSM measurement.
Fig. 4 - Glass coatings on porcelain tile. (A) Aspect of the uncoated porcelain tile, (B) coating under a heat treatment of 875 °C/2 h, and (C) coating with the glass doped with cobalt oxide under the same heat treatment (875 °C/2 h).
Fig. 6 - X-ray diffractogram of the: (A) synthesized glass and (B) heat-treated glass at 875 °C/2 h cooled in the furnace at 1 °C/min.
Fig. 7 - (A) FESEM observation and EDX analysis of the cross section of the glass on the alumina disc. The heat treatment performed was at 775 °C/2 h. (B) Surface of the coated sample at different magnifications. The presence of micro-cracks due to the stresses generated by the different coefficients of thermal expansion between the substrate and the coating can be observed.
Fig. 9 - FESEM micrographs and EDX microanalyses corresponding to the cross section of the glass/porcelain tile treated at 775 °C/2 h.
Fig. 10 - FESEM micrographs and EDX microanalyses corresponding to the cross section of the glass/porcelain tile treated at 775 °C/4 h. In the EDX microanalysis of the glass layer (1), the incorporated cations which are not present in the original glass composition are marked in green in the list.
Fig. 11 - Chemically attacked porcelain tiles with the glass coating thermally treated at 875 °C/4 h. The glass coatings were more affected by the hydrochloric acid attack (A) and less sensitive to the citric acid (B) and the alkaline one (C).
Table 1 - Solutions and residence times used for the determination of chemical resistance. | 5,191.2 | 2023-08-01T00:00:00.000 | [
"Materials Science"
] |
A Bayesian approach for two‐stage multivariate Mendelian randomization with mixed outcomes
Many research studies have investigated the relationship between baseline factors or exposures, such as patient demographic and disease characteristics, and study outcomes such as toxicities or quality of life, but results from most of these studies may be problematic because of potential confounding effects (eg, the imbalance in baseline factors or exposures). It is important to study whether the baseline factors or exposures have causal effects on the clinical outcomes, so that clinicians can have a better understanding of the diseases and develop personalized medicine. Mendelian randomization (MR) provides an efficient way to estimate the causal effects using genetic instrumental variables to handle confounders, but most of the existing studies focus on a single outcome at a time and ignore the correlation structure of multiple outcomes. Given that clinical outcomes like toxicities and quality of life are usually a mixture of different types of variables, and multiple datasets may be available for such outcomes, it may be much more beneficial to analyze them jointly instead of separately. Some well-established methods are available for building multivariate models on mixed outcomes, but they do not incorporate the MR mechanism to deal with the confounders. To overcome these challenges, we propose a Bayesian-based two-stage multivariate MR method for mixed outcomes on multiple datasets, called BMRMO. Using simulation studies and clinical applications on the CO.17 and CO.20 studies, we demonstrate better performance of our approach compared to the commonly used univariate two-stage method.
Toxicities resulting from a treatment can be accepted to a certain degree if the treatment can significantly improve the chance of survival, 6,7 and many of these toxicities may be correlated, 8 with the types of toxicities and their frequencies depending largely on the patients' characteristics (eg, some baseline blood measures like magnesium level). For instance, a patient may have a higher chance of experiencing adverse events such as vomiting and diarrhea if some of their baseline blood variables are higher than the normal range.
Studying such relationships can help clinicians make better treatment decisions and be more prepared for the management of the adverse events, and it has become more feasible given the increasing collection of detailed outcome data from clinical trials. 9,10 One example is the CO.17 trial which was conducted by the Canadian Cancer Trials Group (CCTG). As a phase III randomized, placebo-controlled study, its primary objective was to examine the effect of cetuximab on colorectal cancer patients compared to placebo in terms of survival. Besides the primary analysis on survival, the study data also suggested that there may be a difference in toxicities experienced by different patient groups. 11 Another example, the CO.20 trial, also conducted by CCTG, found that the addition of brivanib (BRI) to cetuximab was associated with increased toxicities and did not significantly improve the overall survival of patients. 1,12,13 These findings raise a natural question of whether some toxicities can be attributed to the patients' baseline characteristics.
To answer this kind of question, an intuitive way is to examine the effect of an exposure variable on an outcome using a regression model. However, the presence of potential confounders, either measured or unmeasured, may render such analysis invalid. 14 This is usually the case in clinical studies where the patients are not randomly assigned by treatment. Meanwhile, measures of toxicities or quality of life scores usually come with a mixture of different types of variables (eg, binary and continuous) that may be correlated, and there may be multiple datasets available, each containing some of the outcomes of interest. 15 It is challenging to analyze multiple mixed variables from multiple datasets jointly in an efficient way. 16,17 Over the past few years, the Mendelian randomization (MR) approach has become a popular approach to handle the confounding problem in clinical investigations, especially where observational data are collected and analyzed, with the help of genetic instrumental variables, [18][19][20] which are genetic variants (eg, single-nucleotide polymorphism [SNP]) in the most cases. Many different methods of MR have been proposed with the majority of them relying on certain instrumental variable assumptions. When those conditions are met, MR methods can efficiently estimate the causal effect of an exposure on a single outcome as well as make statistical inference. Some methods are more robust to the instrumental variable assumptions, while some are extensions that can analyze multiple exposures. [21][22][23][24][25][26][27] Nevertheless, most of the established methods focus on univariate analysis, meaning that they analyze one outcome at a time. When research interests are on multiple outcomes, it may be beneficial to conduct multivariate analysis, which jointly models different outcomes simultaneously. Since multivariate analysis can make use of the correlation information and avoid the Bonferroni correction that is known to be conservative, 28 it can be more powerful than the univariate analysis, especially when testing the overall causal effect, whether the exposure has significant effect on any of the outcomes. 29 To conduct multivariate analysis on mixed outcomes, several methods have been developed, though most of them are computationally challenging due to the complexity of the likelihood function. 15,[30][31][32][33][34] Besides, none of these methods use the MR framework to handle confounders with instrumental variables. Deng et al. 29 proposed a two-stage MR method with multivariate analysis on mixed outcomes (binary and continuous), combining the MR framework with the composite likelihood approach used in Bai et al. 34 This approach, called MRMO, has been shown to have higher power than the standard two-stage univariate MR analysis in most cases. However, MRMO was developed for studies with single dataset, it cannot be applied to the case where multiple datasets are available, especially when some datasets do not contain all the outcomes of interest.
In this article, we propose a Bayesian approach for two-stage multivariate MR with mixed outcomes, denoted by BMRMO, which combines the composite-likelihood based two-stage multivariate MR method with a Bayesian framework to handle multiple datasets for mixed outcome variables. While we focus on testing the causal effect in this article, BMRMO can also provide good estimation when the required assumptions are met, especially for continuous outcomes. The main novelty of our approach is not only the methodology development of innovative MR framework on mixed responses, but also the use of Bayesian algorithm to integrate multiple datasets, which is methodologically a promising approach to extend the existing analytic framework on multiple datasets. We apply the Metropolis-Hastings algorithm to obtain the posterior samples of the causal effects. 35 In terms of examining the overall causal effect, we propose three different ways based on the adjusted credible intervals, the Wald test and Bayes factor, respectively. 36 We evaluate the performance of our proposed multivariate method through simulations and the application to the CO.17 and CO.20 data.
Two-stage univariate analysis
Before introducing our new approach, we briefly describe the basic framework of traditional MR analysis and the difference between univariate and multivariate MR analyses. As depicted in Figure 1A, traditional MR analysis focuses on one outcome (eg, $Y_1$) at a time, using a single or multiple instrumental variables (IVs), usually chosen as some genetic variants (G), to examine the causal effect of the exposure (X) on this outcome while avoiding the confounding problem brought by unmeasured or unincluded confounders (U). In this case, the causal estimand is the average change in an outcome (eg, $Y_1$) for a unit change in X, which can be written as $E[Y_1 \mid do(X = x + 1)] - E[Y_1 \mid do(X = x)]$. 37 For MR, three IV assumptions need to be met: (a) each IV should have an effect on X; (b) IVs should not be associated with U; (c) the pathway for IVs to affect Y has to go through X. Traditional MR usually uses univariate analysis, meaning that it analyzes one outcome at a time. For a study with multiple outcomes, several univariate MR analyses will be conducted, one on each outcome separately. This may lead to loss of power, especially when testing the overall causal effect (the exposure's effect on any of the outcomes), since the correlation information between multiple outcomes is not used. On the other hand, building joint models to analyze different outcomes jointly, known as multivariate analysis, may make hypothesis testing more powerful by incorporating the possible correlations between different outcomes, as shown in Figure 1B.
To describe the two-stage approach of univariate MR, we consider a simple scenario with one dataset ($D_0$) for the exposure ($n_0$ subjects with $G^{(0)} = (G_{0,il})_{n_0 \times p}$ and $X^{(0)} = (X_{0,i})_{n_0 \times 1}$) and one dataset ($D_1$) for the outcomes ($n_1$ subjects with $G^{(1)} = (G_{1,il})_{n_1 \times p}$ and $Y^{(1)} = (Y_{1,ij})_{n_1 \times q}$). $G_{m,il}$, $X_{m,i}$ and $Y_{m,ij}$ stand for the $l$th IV value, the exposure value and the $j$th outcome value of the $i$th subject in dataset $D_m$. We have $p$ instruments and $q$ outcomes in total.
Two-stage univariate MR builds the first-stage model $X_{0,i} = \gamma_0 + \sum_{l=1}^{p} \gamma_l G_{0,il} + e_{0,Xi}$ based on dataset $D_0$, where $\gamma_l$ is the effect of the $l$th SNP on the exposure, and then uses this model to predict the exposure values for dataset $D_1$, denoted by $\hat{X}_{1,i}$. Next, the second-stage models can be constructed as $Y_{1,ij} = \beta_{j0} + \beta_{j1}\hat{X}_{1,i} + e_{1,ij}$ for a continuous outcome $j$, and $\Pr(Y_{1,ij} = 1) = \Phi(\beta_{j0} + \beta_{j1}\hat{X}_{1,i})$ for a binary outcome $j$, where $\Phi$ is the standard normal cumulative distribution function. To test the exposure effect on a single outcome $j$, we need to test whether the coefficient $\beta_{j1}$ is significant. Note that there are other variations of the two-stage MR approach, but we only focus on the more commonly used standard approach. According to the literature, 22 for both continuous and binary outcomes, standard two-stage MR can provide a valid way to test the causal effect of the exposure on an outcome. For binary outcomes, we consider the probit link instead of the logistic link in this article, since it is more comparable to the mixed response model that we will introduce.
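As a concrete illustration of this two-stage procedure, the sketch below fits the first-stage model in D0, predicts the exposure in D1, and fits a second-stage linear or probit model using statsmodels; the function and variable names are our own choices rather than the authors' code.

```python
import numpy as np
import statsmodels.api as sm

def two_stage_univariate_mr(G0, X0, G1, Y1, binary=False):
    """Standard two-stage MR for a single outcome (illustrative sketch only).

    G0, X0 : genotype matrix and exposure vector from dataset D0 (first stage).
    G1, Y1 : genotype matrix and one outcome vector from dataset D1 (second stage).
    binary : if True, use a probit second-stage model as in the text.
    """
    # first stage: regress the exposure on the instruments in D0
    stage1 = sm.OLS(X0, sm.add_constant(G0)).fit()
    # predict the exposure for the subjects in D1
    x_hat = stage1.predict(sm.add_constant(G1))
    # second stage: regress the outcome on the predicted exposure
    design = sm.add_constant(x_hat)
    if binary:
        stage2 = sm.Probit(Y1, design).fit(disp=0)
    else:
        stage2 = sm.OLS(Y1, design).fit()
    return stage2
```

The coefficient on the predicted exposure in the returned second-stage model plays the role of the causal effect $\beta_{j1}$.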
To test whether $\beta_{11} = \beta_{21} = \dots = \beta_{q1} = 0$ (ie, the exposure does not have any effect on any outcome), univariate MR usually applies the minP test with Bonferroni correction. It compares $q$ times the smallest P-value of any coefficient $\beta_{j1}$ to the significance threshold (eg, 0.05), which may give conservative results. 28
Multivariate mixed response model
To model multiple outcomes jointly, which may give us a power benefit, Deng et al. 29 proposed a two-stage multivariate Mendelian randomization method, called MRMO, which accounts for mixed outcomes. The general idea is that in the second stage concerning dataset $D_1$, instead of modeling each outcome separately, a joint model is used. MRMO uses the pairwise likelihood, defined as the product of the bivariate likelihood contributions $L_{jk}$ over all outcome pairs $(j, k)$, 35,36 to estimate the $\beta_{j1}$'s and their covariance matrix, which can then be used to test each single $\beta_{j1}$ as well as the overall causal effect by the Wald test.
Univariate analysis with multiple outcome datasets
In a scenario with multiple datasets which contain possibly different numbers of outcomes, applying the traditional univariate MR analysis is straightforward. We only need to combine all subjects that have a certain outcome measured when analyzing this outcome. Suppose we still have $p$ IVs and $q$ outcomes. Dataset $D_0$ contains $n_0$ subjects with their IV information and exposure values. Datasets $D_m$ ($m = 1, 2, \dots, M$) each contain $n_m$ subjects with their IV information and some of the $q$ outcomes. We assume that the subjects of different datasets are independent and do not overlap. Figure 1C shows an example, where dataset $D_1$ contains G and all three outcomes, while datasets $D_2$, $D_3$, and $D_4$ have different dimensions of outcome variables. In this case, to conduct univariate analysis, we use $D_0$ to build the first-stage model to predict the exposure values for each outcome dataset $D_m$ ($m = 1, 2, 3, 4$), denoted by $\hat{X}_{m,i}$. Then, we can construct the second-stage model for outcome 1 using those datasets that have this outcome ($D_1$, $D_2$, and $D_4$) to estimate and test the causal effect. Similarly, we can use $D_1$, $D_2$, and $D_3$ to analyze outcome $Y_2$, and $D_1$, $D_3$ to analyze outcome $Y_3$. To be more specific, the first stage of our two-stage model can be written as $X_{0,i} = \gamma_0 + \sum_{l=1}^{p} \gamma_l G_{0,il} + e_{0,Xi}$, and the second stage for each outcome dataset $D_m$ can be written as $Y_{m,ij} = \beta_{j0} + \beta_{j1}\hat{X}_{m,i} + e_{m,ij}$ for a continuous outcome $j$ and $\Pr(Y_{m,ij} = 1) = \Phi(\beta_{j0} + \beta_{j1}\hat{X}_{m,i})$ for a binary outcome $j$. To avoid confusion, unless otherwise specified, subscripts $m$, $i$, $j$, and $l$ are used to denote the $m$th dataset, $i$th subject, $j$th outcome, and $l$th SNP, respectively.
Bayesian multivariate mixed response model
When dealing with multiple datasets containing possibly different outcome variables, we cannot directly apply the MRMO method for multivariate analysis, since its pairwise likelihood is based on complete data (ie, each subject in the outcome dataset[s] should have all of the $q$ outcomes measured). One possible approach is that when calculating the pairwise likelihood of a pair of outcomes, we may combine all the subjects that have this pair of outcomes recorded across different datasets. However, doing so would require each outcome dataset to have at least two outcomes, or it will not be possible to calculate the pairwise likelihood.
To solve this problem, we propose a Bayesian two-stage multivariate MR model, denoted by BMRMO, which incorporates the idea of Bayesian updating. Firstly, we build the first-stage model using $D_0$ to predict the exposure values $\hat{X}_{m,i}$ for each outcome dataset $D_m$, similar to what was described for the univariate analysis. Then, the second-stage model is defined using pairwise distributions, similar to what was done in MRMO. Suppose $Y_{m,ij}$ is the $j$th outcome for subject $i$ in dataset $D_m$.
If outcomes $j, k$ are both continuous, then following Cox et al., 31 we assume $(Y_{m,ij}, Y_{m,ik})$ follows a bivariate normal distribution with means $\mu_{m,ij} = \beta_{j0} + \beta_{j1}\hat{X}_{m,i}$ and $\mu_{m,ik} = \beta_{k0} + \beta_{k1}\hat{X}_{m,i}$, standard deviations $\sigma_j$ and $\sigma_k$, and correlation $\rho_{jk}$. This means that for subject $i$ in dataset $m$, the two outcomes follow a multivariate normal distribution with correlation $\rho_{jk}$. If outcomes $j, k$ are both binary, then we use probit models, with $Y_{m,ij} = I(Z_{m,ij} > 0)$ and $Y_{m,ik} = I(Z_{m,ik} > 0)$, where $(Z_{m,ij}, Z_{m,ik})$ is a pair of latent variables for subject $i$ in dataset $m$ that follows a bivariate normal distribution with means $\mu_{m,ij} = \beta_{j0} + \beta_{j1}\hat{X}_{m,i}$ and $\mu_{m,ik} = \beta_{k0} + \beta_{k1}\hat{X}_{m,i}$, unit variances, and correlation $\rho_{jk}$. If one outcome $j$ is binary and the other outcome $k$ is continuous, then we combine the latent variable model for the binary outcome with the linear regression model for the continuous outcome, using the same mean structure and a correlation $\rho_{jk}$ between the latent variable and the continuous outcome.
It can be derived that for a pair of continuous outcomes, the pairwise contribution is the bivariate normal density of $(Y_{m,ij}, Y_{m,ik})$ with the means, standard deviations and correlation given above. For a pair of binary outcomes, the pairwise contribution is the corresponding quadrant probability of the latent bivariate normal vector, for example $\Pr(Y_{m,ij} = 1, Y_{m,ik} = 1) = \Phi_2(\mu_{m,ij}, \mu_{m,ik}; \rho_{jk})$, where $\Phi_2$ is the cumulative distribution function of the standard bivariate normal distribution. For one continuous and one binary outcome, the pairwise contribution is the product of the univariate normal density of the continuous outcome and the conditional probit probability of the binary outcome given the continuous one, where $\Phi$ is the cumulative distribution function of the standard normal distribution. Our parameters include intercepts $\beta_{j0}$ ($j = 1, \dots, q$), exposure effects $\beta_{j1}$ ($j = 1, \dots, q$), standard errors $\sigma_j$ ($j = 1, \dots, q$) and correlations $\rho_{jk}$ ($j = 1, \dots, q$; $k = 1, \dots, q$; $j \ne k$). The full composite likelihood can be written as the product of these pairwise contributions over all outcome pairs and all subjects, $CL(\theta) = \prod_{j<k} \prod_{m} \prod_{i} L_{jk}(Y_{m,ij}, Y_{m,ik}; \theta)$. In the Bayesian framework, we assume the prior distribution of the parameters has density $p(\theta)$. Suppose for dataset $D_m$, the relevant density function is $p_m$, and the recorded outcome data are $Y^{(m)}$. We can apply the Bayesian update procedure illustrated in Figure 2A to obtain the posterior density $p^*(\theta)$, which can then help us make inference on certain parameters (eg, the causal effects $\beta_{j1}$'s).
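To make these bivariate building blocks concrete, here is a minimal sketch (our own, using scipy, with parameter names that merely mirror the notation above) of the log-likelihood contribution of one subject for one pair of outcomes under the three cases just described.

```python
import numpy as np
from scipy import stats

def pairwise_loglik(yj, yk, mu_j, mu_k, rho, type_j, type_k, sd_j=1.0, sd_k=1.0):
    """Log-likelihood of one (outcome j, outcome k) pair for one subject.

    type_* is "cont" or "bin"; binary outcomes use probit latent variables
    with unit variance, continuous outcomes use normal models.
    """
    if type_j == "cont" and type_k == "cont":
        cov = np.array([[sd_j**2, rho * sd_j * sd_k],
                        [rho * sd_j * sd_k, sd_k**2]])
        return stats.multivariate_normal(mean=[mu_j, mu_k], cov=cov).logpdf([yj, yk])
    if type_j == "bin" and type_k == "bin":
        bvn = stats.multivariate_normal(mean=[0.0, 0.0],
                                        cov=[[1.0, rho], [rho, 1.0]])
        p11 = bvn.cdf([mu_j, mu_k])              # P(Yj = 1, Yk = 1)
        pj, pk = stats.norm.cdf(mu_j), stats.norm.cdf(mu_k)
        table = {(1, 1): p11, (1, 0): pj - p11,
                 (0, 1): pk - p11, (0, 0): 1 - pj - pk + p11}
        return np.log(table[(int(yj), int(yk))])
    # mixed pair: put the binary outcome first for convenience
    if type_j == "cont":
        yj, yk, mu_j, mu_k, sd_j, sd_k = yk, yj, mu_k, mu_j, sd_k, sd_j
    # now j is binary and k is continuous
    cond_mean = mu_j + rho * (yk - mu_k) / sd_k
    p1 = stats.norm.cdf(cond_mean / np.sqrt(1 - rho**2))
    lp_cont = stats.norm.logpdf(yk, loc=mu_k, scale=sd_k)
    return lp_cont + np.log(p1 if yj == 1 else 1 - p1)
```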
When computing $p_m(Y^{(m)} \mid \theta)$, we only consider the available outcome variables in dataset $D_m$. If $D_m$ has a set of multiple outcomes, denoted by $S_m$, then we propose to use the composite likelihood based on these outcomes, which means $p_m(Y^{(m)} \mid \theta)$ is equal to the part of $CL(\theta)$ whose outcomes are available in $D_m$. When updating with $p_m(Y^{(m)} \mid \theta)$, since it only considers outcomes that belong to $S_m$, only the parameters related to those outcomes are updated. If $D_m$ only has one outcome, denoted as outcome $O_m$, then we can only apply the marginal model, and $p_m(Y^{(m)} \mid \theta)$ is defined as the likelihood based on this marginal model. In practice, mixing pairwise likelihood and marginal likelihood in $p^*(\theta)$ may be problematic because data points may be used multiple times in the pairwise likelihood, while data points used in the marginal likelihood are only used once. We propose to mitigate this problem by down-weighting the pairwise likelihood contribution whenever $D_m$ contains multiple outcomes, so that each data point effectively contributes once. Note that even though we are able to obtain $p^*(\theta)$ using the above procedure, this posterior distribution can be very complex. As a result, we propose to use the Metropolis-Hastings (MH) algorithm to generate posterior samples from $p^*(\theta)$, based on which we can make inference about our parameters of interest. 35 Our algorithm is described as follows: 1. Choose the starting point for $\theta$, denoted by $\theta^{(1)}$. Choose a proposal distribution with density $g$ based on the current parameters to generate the next candidate. 2. For each iteration $t$, generate a candidate $\theta^{(*)}$ from the proposal distribution $g(\theta^{(*)} \mid \theta^{(t)})$. Calculate the acceptance ratio $r = \min\left(1, \frac{p^*(\theta^{(*)})\, g(\theta^{(t)} \mid \theta^{(*)})}{p^*(\theta^{(t)})\, g(\theta^{(*)} \mid \theta^{(t)})}\right)$. Generate a random number $u$ from the standard uniform distribution. Then set $\theta^{(t+1)} = \theta^{(*)}$ if $u \le r$, and $\theta^{(t+1)} = \theta^{(t)}$ otherwise. After obtaining $T$ samples, we burn in the first $B$ samples. Then, we can use $\theta^{(t)}$ ($t = B + 1, \dots, T$) to infer about the parameters. Note that the efficiency of MH decreases as the number of parameters increases. To reduce the number of parameters, for simplicity, we propose a single correlation parameter $\rho = \rho_{jk}$ for all $j \ne k$ rather than having different $\rho_{jk}$'s, which, if unrestricted, would drastically increase the number of parameters. We will show in our simulation study that this choice of simplification is relatively robust under different values of the $\rho_{jk}$'s. In terms of choosing the starting point and the proposal distribution, we use the estimates from the marginal models (described in Section 2.3) and a univariate uniform proposal with half-width $\delta$. For example, if the current estimate for $\beta_{j0}$ is $\beta_{j0}^{(t)}$, then the next candidate will be generated from $\mathrm{Unif}(\beta_{j0}^{(t)} - \delta,\ \beta_{j0}^{(t)} + \delta)$. We choose $\delta = 0.1$ by default, which yields robust results.
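A minimal random-walk Metropolis-Hastings sampler matching the steps above (a generic sketch, not the authors' implementation) could look as follows; because the uniform proposal is symmetric, the g terms cancel in the acceptance ratio.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=25_000, burn_in=1_000,
                        delta=0.1, rng=None):
    """Random-walk Metropolis-Hastings with a symmetric uniform proposal.

    log_post : function returning the log of the (unnormalised) posterior p*(theta).
    theta0   : starting parameter vector.
    delta    : half-width of the Unif(theta - delta, theta + delta) proposal.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        cand = theta + rng.uniform(-delta, delta, size=theta.shape)
        lp_cand = log_post(cand)
        # symmetric proposal: acceptance ratio reduces to a posterior ratio
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        samples.append(theta.copy())
    return np.array(samples[burn_in:])
```

Posterior means and credible intervals for the causal effects are then obtained from column-wise summaries of the returned samples.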
Inference procedure of BMRMO
To infer whether a single causal effect $\beta_{j1}$ is 0, we can use the MH posterior samples to estimate its $(1 - \alpha)$ (eg, 95%) credible interval. If the credible interval contains 0, then we conclude that the exposure does not have a causal effect on outcome $j$; if it does not contain 0, then we conclude that the exposure has a causal effect on outcome $j$. To examine the overall causal effect (whether $\beta_{11} = \beta_{21} = \dots = \beta_{q1} = 0$), we have different possible options. A widely used approach for Bayesian analysis is the Bayes factor. We can build the full model $A_1$ (as described in Section 2.4) and estimate its marginal likelihood $\Pr(D_1, \dots, D_M \mid A_1)$, as well as the null model $A_0$ (fixing $\beta_{11} = \beta_{21} = \dots = \beta_{q1} = 0$) and its marginal likelihood $\Pr(D_1, \dots, D_M \mid A_0)$. Then, the Bayes factor is calculated as $\Pr(D_1, \dots, D_M \mid A_1) / \Pr(D_1, \dots, D_M \mid A_0)$. If the Bayes factor is greater than 10, then we can conclude that there is strong evidence that the exposure has a causal effect on at least one of the outcomes. 38 One alternative option is to use the credible intervals while adjusting for multiple testing. For each $\beta_{j1}$, we can use the MH posterior samples to estimate its $c^*$ credible interval instead of the $(1 - \alpha)$ credible interval, where $c^* = (1 - \alpha)^{1/q}$. Another alternative option, assuming the posterior distribution of the causal effects is close to normal, is to apply the Wald test. We can use the MH posterior samples to estimate the means of the causal effects as well as their variance-covariance matrix, and then carry out the Wald test. Though the Bayes factor may seem more fitting in our Bayesian framework, we will demonstrate the effectiveness of the different options in our simulations.
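Given a matrix of posterior draws of the q causal effects, the adjusted-credible-interval and Wald options described above can be computed as in the sketch below (our own helper; the Bayes factor requires the marginal likelihoods of the full and null models and is not reproduced here).

```python
import numpy as np
from scipy import stats

def overall_tests(beta_samples, alpha=0.05):
    """Overall causal-effect tests from MH posterior samples.

    beta_samples : array of shape (n_draws, q), one column per causal effect.
    """
    q = beta_samples.shape[1]
    # adjusted credible intervals: c* = (1 - alpha)^(1/q) level per effect
    c_star = (1 - alpha) ** (1 / q)
    lo = np.quantile(beta_samples, (1 - c_star) / 2, axis=0)
    hi = np.quantile(beta_samples, 1 - (1 - c_star) / 2, axis=0)
    any_ci_significant = bool(np.any((lo > 0) | (hi < 0)))
    # Wald test using the posterior means and covariance
    mean = beta_samples.mean(axis=0)
    cov = np.cov(beta_samples, rowvar=False)
    wald = float(mean @ np.linalg.solve(cov, mean))
    p_wald = float(stats.chi2.sf(wald, df=q))
    return {"adjusted_CI_significant": any_ci_significant,
            "wald_statistic": wald, "wald_p_value": p_wald}
```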
2.6 Proposed general procedure
In this section, we present the general procedure of applying our approach to examine the causal effect of an exposure on multiple outcomes using multiple datasets, illustrated by Figure 2B. For this article, we focus on the last stage, comparing the use of BMRMO for multivariate analysis to the standard approach of two-stage univariate MR analysis. As discussed in previous research, 29 compared to the various MR techniques that use summary statistics, two-stage methods that use individual-level data can easily incorporate moderately correlated IVs, which may be beneficial when there are not many available IVs that are independent. Also, it would be much more challenging to build joint models involving mixed outcomes if we used summary statistics. We also limit our discussion to independent and non-overlapping samples for the exposure and the different outcomes, since overlapping samples may lead to biased estimates and more false discoveries, as pointed out by the literature. 18,39 We will discuss more about the strengths, limitations and possible improvements of our approach in the discussion section.
Assumptions
In this section, we give a brief summary of the assumptions that are needed for BMRMO. First of all, as an MR approach, BMRMO requires the basic MR assumptions to hold: (a) each IV should have an effect on X; (b) IVs should not be associated with U; and (c) the pathway for IVs to affect Y has to go through X. These three assumptions are also usually referred to as the relevance assumption, independence assumption and exclusion restriction assumption. 40 In addition, since BMRMO uses a mixed response model with latent variables in the second stage of two-stage MR, it brings the corresponding distributional assumption on the outcome variables. Specifically, we assume that the binary outcomes originate from some latent continuous variables that follow normal distributions. Detailed structures of the distributions are provided in Section 2.4. For the different datasets, we assume the individuals do not overlap, and they are from a single population of interest, which is crucial for applying the two-stage methods. Note that the above assumptions are required for testing the causal effect, which is the focus of our article. Under the same assumptions, BMRMO can obtain consistent effect size estimates for continuous outcomes. More details regarding estimation are provided in Appendix B of the supplementary materials.
Simulations
We simulate datasets $D_0, D_1, \dots, D_M$ with $M = 6$. Each outcome dataset $D_m$ ($m = 1, 2, \dots, 6$) has sample size $n_m = 120$, and the exposure dataset $D_0$ has sample size 720. For the IVs, we simulate $p = 10$ independent SNPs with minor allele frequencies (MAFs) generated from Unif(0.3, 0.5). Suppose we have $q_1$ binary outcomes and $q_2$ continuous outcomes. We generate U, X and Y using a model of the form $U_{m,i} = \sum_{k=1}^{p} \gamma_{GU,k} G_{m,ik} + e^{U}_{m,i}$, $X_{m,i} = \sum_{k=1}^{p} \gamma_{GX,k} G_{m,ik} + \gamma_{UX} U_{m,i} + e^{X}_{m,i}$, and $Z_{m,ij} = \beta_{XZ,j} X_{m,i} + \beta_{UZ,j} U_{m,i} + e^{Z}_{m,ij}$, where $G_{m,ik}$, $U_{m,i}$, $X_{m,i}$, and $Y_{m,ij}$ are the genotype of the $k$th SNP, confounder, exposure, and $j$th outcome for subject $i$ in dataset $D_m$. $Z_{m,ij}$ is the $j$th latent outcome, which is connected to the binary outcomes by the probit link and to the continuous outcomes by the identity link. $\gamma_{GU,k}$ and $\gamma_{GX,k}$ represent the $k$th SNP's effects on U and X, while $\beta_{XZ,j}$ and $\beta_{UZ,j}$ stand for the effects of X and U on the $j$th latent variable, and $\gamma_{UX}$ is the effect of U on X. Following relevant studies, 29,41 the $\gamma_{GX,k}$'s are generated from a truncated normal distribution with mean zero and SD 0.15. We take $\gamma_{GX,k} > 0.08$ to ensure the IV strength assumption is met. We also scale the $\gamma_{GX,k}$'s so that the IVs explain about 20% of the variation in the exposure. We set $\gamma_{GU,k} = 0$ and generate $\gamma_{UX}, \beta_{UZ,j} \sim \mathrm{Unif}(-0.5, 0.5)$.
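A simplified version of this data-generating process is sketched below (our own code; the exact scaling of the SNP effects to explain about 20% of the exposure variance and the truncated-normal draw are only approximated).

```python
import numpy as np

def simulate_dataset(n, p=10, q_bin=2, q_cont=1, beta_xz=None, rng=None):
    """Generate one dataset (G, X, Y) under the simulation model above."""
    rng = np.random.default_rng() if rng is None else rng
    maf = rng.uniform(0.3, 0.5, size=p)
    G = rng.binomial(2, maf, size=(n, p))           # additive genotype coding
    gamma_gx = np.abs(rng.normal(0, 0.15, size=p))
    gamma_gx = np.clip(gamma_gx, 0.08, None)        # enforce IV strength (approximation)
    gamma_ux = rng.uniform(-0.5, 0.5)
    q = q_bin + q_cont
    beta_xz = np.zeros(q) if beta_xz is None else np.asarray(beta_xz)
    beta_uz = rng.uniform(-0.5, 0.5, size=q)
    U = rng.normal(size=n)                          # confounder (no SNP effect)
    X = G @ gamma_gx + gamma_ux * U + rng.normal(size=n)
    Z = np.outer(X, beta_xz) + np.outer(U, beta_uz) + rng.normal(size=(n, q))
    # first q_bin outcomes are binary (probit link), the rest are continuous
    Y = np.column_stack([(Z[:, j] > 0).astype(int) if j < q_bin else Z[:, j]
                         for j in range(q)])
    return G, X, Y
```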
After we simulate the seven whole datasets, we remove certain variables to make sure dataset $D_0$ only contains information on the IVs and the exposure, and datasets $D_1, \dots, D_6$ each contain the IVs and a subset of the outcomes. We apply the BMRMO method and the two-stage univariate MR method to these datasets to examine the difference between their performances. By default, we choose $T = 25000$ and $B = 1000$ for BMRMO. A brief discussion on the validity of these numbers is provided in Appendix A of the supplementary materials. Table 1 shows a summary of the differences between the major scenarios we examined. When comparing the proportion of falsely identified causal effects in the absence of any causal effect, we set $\beta_{XZ,1} = \beta_{XZ,2} = \beta_{XZ,3} = 0$. Before introducing the results, we would like to specify some terms and abbreviations. "UVA" and "MVA" stand for standard two-stage univariate MR and BMRMO, respectively. "minP" corresponds to the minP test with Bonferroni correction for UVA. "CI," "Wald," and "BF" refer to the overall tests based on the adjusted credible intervals, Wald test and Bayes factor, respectively, for BMRMO.
In Scenario 1, we assume $D_1$ has outcomes 1, 2, 3; $D_2$ has outcomes 1, 2; $D_3$ has outcomes 1, 3; $D_4$ has outcomes 2, 3; $D_5$ has outcome 1; $D_6$ has outcome 2. This means we have a mixture of datasets that do not contain all of the outcomes. As shown in Figure 3, both univariate and multivariate analyses are able to control the rates of falsely identified causal effects when testing a single outcome or testing the overall causal effect. In terms of power, as shown in Figure 4, the overall tests of BMRMO usually have higher power than the minP test, especially when some correlations are negative. The Wald test tends to have the highest power, but the Bayes factor approach also has decent power. Note that when the three outcomes are uncorrelated, BMRMO is still able to boost power over univariate analysis, especially when the exposure is affecting more than one outcome. When there is only one outcome affected by the exposure, the Bayes factor approach does not show much advantage compared to the minP test. These results are consistent with previous findings. 42,43 The minP test with Bonferroni correction is similar to the SPU(Inf) test, proposed by Pan et al., 42 and usually works well when the signal is sparse, meaning that most of the outcomes are not affected by the exposure. Meanwhile, the Wald test and the Bayes factor approach lean more toward the SPU(2) test, 42 which means they are more advantaged when the signal is relatively dense, with multiple outcomes affected by the exposure.
Table 1. Summary of different scenarios.
Next, we examine another scenario where, for the outcome data, we only have datasets $D_1$, $D_2$ and $D_3$, and they all contain all three outcomes. This means we are looking at an ideal situation with complete data. For power comparison, in Scenario 2, we choose the following cases: Case 2A: $\beta_{XZ,1} = 0$, $\beta_{XZ,2} = 0$, and $\beta_{XZ,3} = 0.4$; the exposure affects only one outcome. Case 2B: $\beta_{XZ,1} = 0.2$, $\beta_{XZ,2} = 0$, and $\beta_{XZ,3} = 0.2$; the exposure affects two outcomes. Case 2C: $\beta_{XZ,1} = \beta_{XZ,2} = \beta_{XZ,3} = 0.15$; the exposure affects all three outcomes. As shown in Figures 5 and 6, the results are very similar to those in the previous setting. Again, both univariate and multivariate analyses are able to control the proportion of falsely identified causal effects, while the overall tests of BMRMO tend to have higher power than the univariate overall test, especially when there are multiple causal effects. The increase in power brought by BMRMO is largest when the correlation between different outcomes is negative. The results of Scenarios 3-5 are provided in Appendix B of the supplementary materials, including examples showing that BMRMO is able to handle moderately correlated instruments as well as the situation where none of the outcome datasets contain all of the outcomes.
Real data application
To illustrate how BMRMO and the univariate MR analysis perform differently in a real setting, we apply them to the CO.17 and CO.20 data. [11][12][13] The CO.17 and CO.20 trials were two independent phase III randomized trials aimed at studying the treatment effect of cetuximab compared to placebo and the treatment effect of cetuximab plus brivanib alaninate compared to cetuximab alone, respectively, for colorectal cancer patients. A total of 78 subjects who received cetuximab and 80 subjects who received placebo in the CO.17 trial, as well as 284 subjects who received cetuximab plus brivanib alaninate and 300 subjects who received cetuximab alone in the CO.20 trial, were genotyped and passed quality control (there is no subject who only took placebo in the CO.20 trial, as the main objective of this trial was to compare the combined treatment with the cetuximab-only treatment). At the beginning of our analysis, 533 631 SNPs were genotyped using the Illumina OncoArray platform. We would like to point out that even though this application uses the data from two randomized trials, when combining both CO.17 and CO.20 data, we are treating our data as observational data, as the subjects were randomized based on different treatment groups instead of the exposure variable of interest. Our approach can be used on observational studies without randomization, which is what MR methods are usually applied to. Figure 7 shows our study process. For the CO.20 data, we use the subjects who received cetuximab plus brivanib alaninate as dataset $D_0$, and the subjects who received cetuximab plus placebo as dataset $D_1$. For the CO.17 data, we use the subjects who received cetuximab as dataset $D_2$. As a result, we have three independent datasets $D_0$, $D_1$, and $D_2$. We are interested in examining whether baseline magnesium level (a continuous variable), an exposure variable that is associated with certain genetic variants, has a significant effect on at least one of three toxicity outcomes, diarrhea, aspartate aminotransferase (AST) and lactate dehydrogenase (LDH), for patients treated with cetuximab. Diarrhea is recorded as a binary variable (whether a patient experienced it within 8 weeks after allocation), while AST and LDH are two continuous variables defined as the worst (maximum) values within 8 weeks. We log-transform AST and LDH and exclude outliers.
For our two-stage MR analysis, we build a first-stage model using dataset D0, and then use this model to predict the exposure values for datasets D1 and D2. Next, we build second-stage models using D1 and D2 to analyze the causal effect of baseline magnesium level on diarrhea, AST and LDH. We compare the Wald test and the Bayes factor approach based on BMRMO with the standard two-stage univariate MR approach. The P-value of the univariate overall test is 0.076, meaning that if we rely on the univariate analysis, we would conclude that there is only marginal evidence that the exposure affects any of the outcomes. Meanwhile, the P-value for the Wald test based on BMRMO with 200 000 iterations and 20 000 burn-ins is 0.009, and the Bayes factor is 15.3, which means we have stronger evidence to conclude that the baseline magnesium level has a significant effect on at least one of the outcomes (the overall workflow for the CO.17/CO.20 data application is summarized in Figure 7). These results are consistent with our simulation findings, showing that multivariate analysis may give us more power to detect a significant causal effect compared to the standard univariate analysis. In addition, based on the credible intervals, baseline magnesium's effect on AST is significant with a negative posterior mean (−1.85). In conclusion, having a higher baseline magnesium level may lower the risk of elevated AST levels. More information on this application, including checking the MR assumptions and the posterior distributions of the causal effects, is available in Appendix C of the supplementary materials.
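As a rough illustration of the two-stage predictor substitution workflow just described, the sketch below fits a first-stage model of the exposure on the instruments in D0 and then regresses each outcome on the predicted exposure in D1 or D2. It is a minimal Python sketch, not the authors' implementation; the dataset objects and column names (D0, D1, snp_cols, "magnesium", "diarrhea", "log_ast") are hypothetical placeholders rather than the study's actual variable names.

```python
# Schematic two-stage predictor substitution (2SPS) analysis.
import pandas as pd
import statsmodels.api as sm

def first_stage(D0: pd.DataFrame, snp_cols, exposure="magnesium"):
    """Regress the exposure on the genetic instruments in dataset D0."""
    Xmat = sm.add_constant(D0[snp_cols])
    return sm.OLS(D0[exposure], Xmat).fit()

def second_stage(D: pd.DataFrame, stage1, snp_cols, outcome, binary=False):
    """Predict the exposure from the instruments, then regress the outcome
    on the predicted exposure (logistic link for a binary outcome)."""
    D = D.copy()
    D["exposure_hat"] = stage1.predict(sm.add_constant(D[snp_cols]))
    Xmat = sm.add_constant(D[["exposure_hat"]])
    model = sm.Logit(D[outcome], Xmat) if binary else sm.OLS(D[outcome], Xmat)
    return model.fit()

# Hypothetical usage:
# stage1       = first_stage(D0, snp_cols)
# diarrhea_fit = second_stage(D1, stage1, snp_cols, "diarrhea", binary=True)
# ast_fit      = second_stage(D2, stage1, snp_cols, "log_ast")
```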
DISCUSSION
We propose a novel approach to conduct two-stage MR in a Bayesian framework with multivariate analysis. Incorporating the composite likelihood method and the Metropolis-Hastings algorithm, this new method can be applied to situations where researchers have a mix of binary and continuous outcomes from multiple datasets. We have also proposed three different ways to conduct hypothesis testing based on our multivariate modeling: the adjusted credible interval method, the Wald test, and the Bayes factor approach. Our simulation studies show that, in terms of the overall test, while both multivariate and univariate MR analyses can control the proportion of falsely identified causal effects, BMRMO has consistently higher power than the univariate MR method. The increase in power is largest when multiple outcomes are affected by the exposure and when there is negative correlation between the study outcomes. In addition, the Wald test based on BMRMO tends to show slightly higher power than the Bayes factor approach, while the Bayes factor approach may seem more appropriate in a Bayesian setting. Note that even though we have incorporated various scenarios in our simulations, the number of configurations is still limited, and more simulations could be explored for future complex data structures. In practice, we recommend conducting both tests and comparing their results. If the results are inconsistent (e.g., the Wald test is significant while the Bayes factor is not large), we should not simply accept the significant result. Instead, more investigation needs to be conducted, which may involve collecting more data to increase the sample size.
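To make the overall Wald test more concrete: given post-burn-in posterior draws of the causal effects (one column per outcome), a chi-square statistic can be formed from the posterior means and the posterior covariance of the draws. The sketch below illustrates this general construction with fabricated draws; it is an assumption-laden illustration of the idea, not the exact test statistic used by BMRMO.

```python
# Illustrative overall Wald test computed from posterior draws of the
# causal effects, testing H0: all causal effects are zero.
import numpy as np
from scipy import stats

def wald_test_from_draws(draws: np.ndarray):
    """draws: (n_iterations, n_outcomes) posterior samples after burn-in."""
    theta_hat = draws.mean(axis=0)               # posterior means
    cov_hat = np.cov(draws, rowvar=False)        # posterior covariance
    stat = theta_hat @ np.linalg.solve(cov_hat, theta_hat)
    pval = stats.chi2.sf(stat, df=draws.shape[1])
    return stat, pval

# Example with fake draws for three outcomes:
fake_draws = np.random.default_rng(1).normal([0.1, 0.0, -0.2], 0.05, (5000, 3))
print(wald_test_from_draws(fake_draws))
```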
After applying the univariate and multivariate MR methods to the CO.17 and CO.20 data, we found stronger evidence from the proposed method than from the univariate method that baseline magnesium has a significant effect on at least one of the three toxicity outcomes of interest (diarrhea, AST and LDH), confirming the potential power advantage of multivariate analysis over univariate analysis. We also found that the significant signal comes from AST, and that the causal effect on AST is negative. This is clinically relevant since, for patients treated with cetuximab, hypomagnesemia is potentially associated with improved outcomes, and our result suggests that a predisposition to low magnesium may lead to increased liver toxicity. Whether this relationship results from low magnesium leading to increased levels of cetuximab (a pharmacokinetic association) or to increased efficacy of cetuximab on the metastatic cancer (a pharmacodynamic association) has yet to be confirmed. Nevertheless, since AST abnormalities are usually more likely to occur with drug toxicity than with liver metastases, our result may suggest, though does not prove, that the relationship between hypomagnesemia and elevated AST is related more to drug toxicity.
Note that since the current version of BMRMO is based on the Metropolis-Hastings algorithm, the computational burden can be heavy when we have a large number of parameters in our models. In the future, we may explore more computationally efficient ways to carry out the analysis, for instance with Gibbs sampling, 44 though this can be challenging given the complexity of the composite likelihoods. Another possible extension of our method is to incorporate the Bayesian framework's ability to handle missing data, which may involve specifying more parameters to estimate. Meanwhile, our method uses the 2SPS (two-stage predictor substitution) approach, which can provide consistent estimates for continuous outcomes but may give biased estimates for some binary outcomes because the second-stage model is not linear, while other methods such as 2SRI (two-stage residual inclusion) may provide better estimates in certain scenarios. 45 Nevertheless, we choose to use the standard 2SPS since 2SPS may still perform as well as, or even better than, 2SRI in some scenarios, 39 and 2SPS is valid for testing the causal effect, 18 which is the focus of this article. In the future, we may explore other estimation algorithms to reduce bias and identify the extra assumptions needed to acquire consistent estimates for binary outcomes. Following Zou et al., 46 we may also extend our method to the situation with overlapping samples, which will involve more methodological considerations than the independent-sample setting.
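The 2SPS/2SRI contrast mentioned above can be summarized in a few lines: 2SPS substitutes the first-stage prediction for the observed exposure, whereas 2SRI keeps the observed exposure and adds the first-stage residual as an extra covariate. The following is a hedged Python sketch for a binary outcome, reusing the hypothetical column names from the earlier example; it is not the authors' code.

```python
# Compact illustration of 2SPS versus 2SRI for a binary outcome.
import pandas as pd
import statsmodels.api as sm

def two_stage(D: pd.DataFrame, snp_cols, exposure, outcome, method="2SPS"):
    stage1 = sm.OLS(D[exposure], sm.add_constant(D[snp_cols])).fit()
    x_hat = stage1.predict(sm.add_constant(D[snp_cols]))
    if method == "2SPS":
        # substitute the predicted exposure for the observed exposure
        X2 = sm.add_constant(pd.DataFrame({"x_hat": x_hat}))
    else:
        # 2SRI: keep the observed exposure, add the first-stage residual
        X2 = sm.add_constant(pd.DataFrame({exposure: D[exposure],
                                           "residual": D[exposure] - x_hat}))
    return sm.Logit(D[outcome], X2).fit(disp=0)
```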
We would also like to point out that the focus of this article is on comparing the multivariate analysis to the standard two-stage MR analysis using individual-level data. In MR research, it is important to select valid IVs, since violations of the MR assumptions may lead to problematic results unless a robust method that addresses invalid IVs is carefully applied. Given the growing emphasis on invalid IVs, it would be very beneficial to develop a robust multivariate approach that can handle invalid IVs and compare it to the robust MR methods, which are usually based on summary statistics. However, due to the complicated likelihoods for mixed outcomes, developing an efficient and robust multivariate approach using summary statistics only (e.g., a multivariate approach parallel to MR-Egger regression) may be particularly challenging if we still consider mixed outcomes rather than only one type of outcome. Meanwhile, we may also explore the possibility of extending the multivariate analysis to other types of outcomes, such as survival outcomes.
ACKNOWLEDGMENTS
The authors would like to acknowledge the clinical contributions of Lillian Siu, as well as the contributions of the CCTG (Canadian Cancer Trials Group) and the AGITG (Australasian Gastro-Intestinal Cancer Trials Group) to the CO.17 and CO.20 studies.
FUNDING INFORMATION
This work was supported by the Alan Brown Chair in Molecular Genomics, the Lusi Wong Family Fund, and the Posluns Family Fund, all through the Princess Margaret Cancer Foundation.
DATA AVAILABILITY STATEMENT
R code for our simulation studies is available at https://github.com/yangq001/BMRMO. The data related to CO.17 and CO.20 are not publicly available due to privacy or ethical restrictions.
"Biology"
] |
Glucocorticoid-Induced TNF Receptor Family-Related Protein Ligand is Requisite for Optimal Functioning of Regulatory CD4+ T Cells
Glucocorticoid-induced tumor necrosis factor receptor family-related protein (TNFRSF18, CD357) is constitutively expressed on regulatory T cells (Tregs) and is inducible on effector T cells. In this report, we examine the role of glucocorticoid-induced TNF receptor family-related protein ligand (GITR-L), which is expressed by antigen-presenting cells, in the development and expansion of Tregs. We found that GITR-L is dispensable for the development of naturally occurring FoxP3+ Treg cells in the thymus. However, the expansion of Tregs in GITR-L−/− mice is impaired after injection of the dendritic cell (DC)-inducing factor Flt3 ligand. Furthermore, DCs from the liver of GITR-L−/− mice were less efficient in inducing proliferation of antigen-specific Treg cells in vitro than the same cells from WT littermates. Upon gene transfer of ovalbumin into hepatocytes of GITR-L−/− FoxP3(GFP) reporter mice using adeno-associated virus (AAV8-OVA), the number of antigen-specific Tregs in the liver and spleen was reduced. The reduced number of Tregs resulted in an increase in the number of ovalbumin-specific CD8+ T effector cells. This is highly significant because proliferation of antigen-specific CD8+ cells is itself dependent on the presence of GITR-L, as shown by in vitro experiments and by adoptive transfers into GITR-L−/− Rag−/− and Rag−/− mice that had received AAV8-OVA. Surprisingly, administering αCD3 significantly reduced the numbers of FoxP3+ Treg cells in the liver and spleen of GITR-L−/− but not WT mice. Because soluble Fc-GITR-L partially rescues the αCD3-induced in vitro depletion of the CD103+ subset of FoxP3+ CD4+ Treg cells, we conclude that expression of GITR-L by antigen-presenting cells is requisite for optimal Treg-mediated regulation of immune responses, including responses during gene transfer.
Tolerance to a specific foreign protein introduced by hepatic gene transfer may be established in two steps. First, antigen-specific Tregs are induced de novo in the hepatic microenvironment. Second, antigen-specific Tregs are expanded systemically. Indeed, we previously found that transgene product-specific Tregs actively suppress antibody and T cell responses, thereby ensuring long-term gene expression (16). Recently, studies in hemophilic mouse models have shown that AAV-mediated hepatic gene transfer can not only prevent but also reverse pathogenic antibody responses and desensitize from severe allergic reactions to the therapeutic coagulation factor IX protein (17-20). We have recently shown that the immune suppressive cytokine TGF-β is required for Treg induction in hepatic AAV gene transfer and is thus necessary for suppression of antibody and CD8+ T cell responses against the transgene product (21). TGF-β, a cytokine highly expressed in mucosal tissues and at sites of inflammation, plays a role in the conversion of conventional peripheral CD4+ T cells into Tregs, and TGF-β up-regulates expression of CD103 (integrin αEβ7) (22), which binds E-cadherin, an epithelial adhesion molecule. Expression of CD103 marks a subset of peripheral inducible Tregs (about 20-30% of the CD4+ FoxP3+ Tregs in the spleen), which inhibit graft-versus-host disease more potently than CD4+ CD25+ Tregs (23, 24).
In this study, we provide evidence in support of the concept that the interactions between GITR and GITR-L are requisite for optimal functioning of Tregs. To this end, we analyze GITR-L−/− FoxP3(GFP) and GITR-L−/− CX3CR1(GFP) mice after gene transfer of ovalbumin into hepatocytes with adeno-associated virus (AAV8-OVA). The coordinated expansion of Tregs and dendritic cells (DCs) was assessed after injection of Flt3 ligand into GITR-L−/− mice. The interactions between antigen-presenting cells and Tregs are also evaluated after administering αCD3 to GITR-L−/− mice or by co-activation with αCD3 and soluble Fc-GITR-L.
MICE
B6, OT-II Tg, and CX3CR1(GFP) reporter mice were purchased from the Jackson Laboratory (Bar Harbor, ME, USA). OT-I × Rag−/− mice were purchased from Taconic Labs (Germantown, NY, USA). GITR-L−/− and FoxP3-IRES-EGFP-SV40 knock-in [FoxP3(GFP)] B6 mice were described previously (8, 25). GITR-L−/− mice were crossed with FoxP3(GFP) and CX3CR1(GFP) mice to generate GITR-L−/− FoxP3(GFP) and GITR-L−/− CX3CR1(GFP) B6 mice. All animals were housed in the Center for Life Science animal facility of BIDMC. Animal studies were conducted in accordance with the Guide for the Care and Use of Laboratory Animals and with the approval of the Institutional Animal Care and Use Committee at BIDMC. Veterinary care was provided to any animal requiring medical attention.
AAV8-OVA MEDIATED EXPRESSION OF FOREIGN PROTEIN IN HEPATOCYTES
AAV8-OVA vector (containing an ovalbumin expression cassette driven by AAV-EF1α) was packaged into serotype 8 capsid as described previously (16). Vector was injected i.v. into FoxP3(GFP) and GITR-L−/− FoxP3(GFP) mice at a dose of 10^10 vector genomes/mouse. Five weeks later, leukocytes from liver, spleen, and thymus were stained with TCRvα2. In addition, Ly6G− NK1.1− GFP+ cells FACS-sorted from the liver of CX3CR1(GFP) mice 7 days after AAV8-OVA injection were incubated with OT-II CD4+ or CFSE-labeled OT-I CD8+ T cells for 3 days. OT-II CD4+ T cell cultures were stained with TCRvα2 and FoxP3. OT-I CD8+ T cell cultures were stained with TCRvα2, and proliferating CD8+ cells were evaluated by CFSE dilution.
INDUCTION OF DENDRITIC CELLS AND TREG WITH Flt3L
Flt3L-Fc fusion protein (10 ng/mouse/injection) was injected i.p. into FoxP3(GFP) and GITR-L−/− FoxP3(GFP) mice for nine consecutive days as described previously (26). Leukocytes from the spleen and liver were analyzed at day 10.
CELLULARITY IN MICE AFTER αCD3-MEDIATED ACTIVATION OF T CELLS IN VIVO
Anti-CD3ε was injected i.p. into CX3CR1(GFP) and GITR-L−/− CX3CR1(GFP) mice (20 µg/mouse, one injection). After 72 h, leukocytes from the spleen and liver were stained with CD4 and FoxP3. CX3CR1+ cells were evaluated by expression of the reporter gene GFP.
IN VITRO ACTIVATION OF CD4 + T CELLS
CD4+ T cells from the spleen of FoxP3(GFP) mice were negatively selected using a CD4+ T cell isolation kit (Miltenyi, Auburn, CA, USA) and were activated with αCD3-coupled microbeads in a round-bottom 96-well plate in the presence or absence of Fc-GITR-L (1 µg/ml) for 2 days as described previously (9). Cells were stained with CD4 and CD103. Expression of FoxP3 was assessed by the reporter protein EGFP. Cell numbers were counted with a Countess Automated Cell Counter (Invitrogen, Grand Island, NY, USA).
ISOLATION OF LIVER LEUKOCYTES
Liver leukocytes were isolated as described previously (27). Briefly, the liver was mashed and filtered through a 70 µm cell strainer. Hepatocytes and cell debris were removed by spinning at 300 rpm for 10 min. The supernatant was then centrifuged at 1500 rpm for 10 min to collect cells. Leukocytes were isolated from the interface of a 40%/70% Percoll gradient.
STATISTICAL ANALYSIS
Statistical analyses were performed with Prism 4.0c software (GraphPad, San Diego, CA, USA) using the two-tailed Student's t-test. Values of P < 0.05 were considered statistically significant.
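For completeness, the comparison described above reduces to an ordinary two-sample t-test; the study used GraphPad Prism, but an equivalent check can be sketched in a few lines of Python with made-up group values (a hypothetical illustration only).

```python
# Minimal sketch of a two-tailed Student's t-test with fabricated values.
from scipy import stats

wildtype = [12.1, 10.8, 13.4, 11.7, 12.9]   # hypothetical measurements
knockout = [8.2, 9.1, 7.5, 8.8, 9.4]

t_stat, p_value = stats.ttest_ind(wildtype, knockout)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # significant if P < 0.05
```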
Flt3L-INDUCED EXPANSION OF TREG WAS IMPAIRED IN GITR-L DEFICIENT MICE DUE TO A PARTIALLY REDUCED NUMBER OF DENDRITIC CELL SUBPOPULATIONS
We previously found that the number of Treg cells increased after administering an Fc-GITR-L fusion protein to WT mice, which was confirmed by studies with GITR-L transgenic mice (9-11, 28). Surprisingly, we found that GITR-L was dispensable for the development of naturally occurring Tregs, as the number of FoxP3+ Treg cells was normal in the thymus and spleen of GITR-L−/− FoxP3(GFP) mice under resting conditions (Figure 1A; Figure S1 in Supplementary Material). To further investigate the role of GITR-L in controlling Treg development, we assessed the consequences of injecting Fms-related tyrosine kinase 3 ligand (Flt3L) into FoxP3(GFP) and GITR-L−/− FoxP3(GFP) mice for nine consecutive days. Not only is Flt3L a potent inducer of DC and macrophage proliferation (26, 29), but several phagocyte subpopulations also express GITR-L (12, 30). After the injection of Fc-Flt3L fusion protein, both the numbers and the frequency of FoxP3+ Tregs were significantly increased in the spleen and liver. This Fc-Flt3L-induced expansion was, however, significantly reduced in GITR-L−/− FoxP3(GFP) mice (Figures 1B,C). The total number of CD4+ T cells in the spleen was also lower in GITR-L−/− FoxP3(GFP) mice than in their WT counterparts (Figure 1D). Thus, GITR-L plays a significant role in the expansion of Tregs in the peripheral tissues.
We next evaluated whether the impaired Flt3L-induced expansion of Treg cells in GITR-L−/− FoxP3(GFP) mice correlated with reduced numbers of DCs and macrophages (MØ) (31, 32). As shown in Figure 2A and Figure S2A in Supplementary Material, the percentage of CD11c+ CD11b+ and CD11c+ CD11b− DCs was reduced in the spleen of GITR-L−/− FoxP3(GFP) mice as compared to FoxP3(GFP) mice. Although the number of conventional CD11c+ DCs in the liver was normal (Figure 2A), the percentage of pDCs in GITR-L−/− FoxP3(GFP) mice was higher than that of their WT counterparts (Figure 2B; Figure S2B in Supplementary Material and data not shown). The frequency of CD11c− CD11b+ MØ was comparable between the two mouse strains (Figure 2C). Taken together, these data indicate that after Flt3L induction, GITR-L affects the expansion and differentiation of subpopulations of DCs, which in turn leads to expansion of Tregs.
GITR-L −/− CX3CR1 + DCs ISOLATED FROM THE LIVER ARE LESS EFFICIENT THAN WT CX3CR1 + DCs IN THE IN VITRO INDUCTION OF OVA-SPECIFIC TREG AND CD8 + T CELLS
To directly test whether the absence of GITR-L on DC subpopulations affects proliferation of antigen-specific GITR+ Tregs and CD8+ cells, we immunized GITR-L−/− CX3CR1(GFP) and WT CX3CR1(GFP) mice by gene transfer with AAV8-OVA (Figure 3A). One week after injection of AAV8-OVA, liver CX3CR1(GFP)+ cells purified by FACS were incubated with OVA-specific OT-II CD4+ T cells or OT-I CD8+ cells for 3 days. GITR-L−/− CX3CR1+ cells were less efficient in inducing Tregs as compared to the same cells isolated from WT mice (Figures 3B,C). Since activated CD8+ cells carry GITR on their surface, we also evaluated whether in vitro proliferation of CD8+ T cells would be affected by the absence of GITR-L from the surface of these DCs. Indeed, the proliferation of CD8+ OT-I cells was reduced when cocultured with liver CX3CR1+ cells from AAV8-OVA-primed GITR-L−/− CX3CR1(GFP) mice compared to OT-I cells cultured with WT CX3CR1+ DCs (Figures 3D,E).
We conclude that GITR-L on the surface of antigen-presenting cells can drive proliferation of both FoxP3+ CD4+ Treg cells and activated CD8+ T cells in an antigen-specific manner.
AFTER AAV8-OVA GENE TRANSFER, THE NUMBER OF ANTIGEN-SPECIFIC TREG IN GITR-L −/− FoxP3 MICE IS REDUCED, WHICH RESULTS IN AN INCREASED NUMBER OF OVA-SPECIFIC CD8 + T CELLS
Because targeted expression of an exogenous protein in hepatocytes by AAV8-mediated gene transfer induces Treg-mediated tolerance (16), we assessed whether this process involves GITR-L. To do so, we injected an AAV8-OVA vector into FoxP3(GFP) and GITR-L−/− FoxP3(GFP) mice and determined the number of OVA-specific Tregs and CD8+ T cells. Consistent with the results obtained when administering Flt3L, there was a reduced percentage of OVA-specific FoxP3+ TCRvα2+ T cells in the spleen and liver of GITR-L−/− FoxP3(GFP) mice as compared to WT mice 5 weeks after vector administration (Figure 4A). Conversely, AAV-mediated OVA expression in hepatocytes induced an increased percentage of OVA-specific CD8+ TCRvα2+ T cells in the spleen and liver of GITR-L−/− FoxP3(GFP) mice (Figure 4B). By contrast, the total cell numbers were comparable between these two mouse strains (Figure 4C). The data suggest that GITR-L deficiency may impair the induction of antigen-specific Tregs (16-18, 21, 33), which may at least partially compromise their immunosuppressive capability.
As the in vitro data suggest that GITR-L expression on DCs drives the expansion of CD8+ cells, this in vivo result might underestimate the consequences of the reduced number of Tregs in the GITR-L−/− mice. To test whether GITR-L is implicated in the in vivo expansion of antigen-specific CD8+ cells, we used a system in which Treg-mediated suppression is absent. To this end, we injected AAV8-OVA into Rag−/− and GITR-L−/− Rag−/− mice followed by the adoptive transfer of OT-I CD8+ T cells after 1 week (Figure 5A). Eight weeks after transfer of OT-I CD8+ T cells, the number of CD8+ T cells in the blood of the GITR-L−/− Rag−/− recipients was significantly lower than that of the Rag−/− recipients (Figure 5B). This was not due to an inadequate amount of OVA antigen production in the GITR-L−/− Rag−/− recipients (Figure 5C). Taken together, the data indicate that GITR-L is required for optimal induction and/or expansion of antigen-specific Tregs in the context of hepatic AAV8 gene transfer.
DEPLETION OF CX3CR1 + (GFP) CELLS BY αCD3 IN GITR-L −/− MICE CORRELATES WITH A REDUCED NUMBER OF FoxP3 + TREG CELLS
In vitro expansion of FoxP3+ Treg cells can be achieved by stimulation with a combination of αCD3 and soluble GITR-L (Fc-GITR-L) (9). We therefore assessed whether injection of αCD3 into WT and GITR-L−/− mice would affect the Treg population. As shown in Figures 6A,B, αCD3 induced a significant reduction of the percentage of FoxP3+ Tregs in the spleen and liver of GITR-L−/− CX3CR1(GFP) mice, but not in WT CX3CR1(GFP) mice. In support of our observations in this paper, the reduced number of Tregs coincided with a reduction of CX3CR1+ DCs in the spleen (Figures 6C,D). In contrast, the numbers of CX3CR1+ cells in the spleen and liver were comparable in the two mouse strains under homeostasis (Figure S3 in Supplementary Material).
To further investigate the role of GITR-L in the expansion of FoxP3+ Tregs, CD4+ T cells were purified from the spleen of FoxP3(GFP) mice and stimulated in vitro with αCD3 together with either Fc-GITR-L or IgG. Forty-eight hours after exposure to αCD3, the numbers of total CD4+ and FoxP3+ CD4+ Tregs were significantly higher in the presence of Fc-GITR-L than in the presence of IgG (Figures 7A,B). Interestingly, a subset of CD103+ Treg cells, which is induced in epithelium and at sites of inflammation (23, 34) and comprises approximately 20% of all FoxP3+ Treg cells in the spleen, was also expanded by Fc-GITR-L (Figures 7C,D).
We conclude that while the induction or expansion of Treg is impaired in the absence of GITR-L, Fc-GITR-L provides a positive signal to GITR + Treg.
DISCUSSION
The receptor-ligand pair GITR/GITR-L (TNFRSF18/TNFSF18) appears to be involved in the development of a variety of inflammation-related diseases in murine models (6, 8, 12, 35, 36). It was originally thought that the suppressor function of Treg cells, which constitutively express GITR, would be abrogated by anti-GITR, thus breaking immune self-tolerance (2). More recent evidence shows that GITR engagement by its natural ligand GITR-L causes an extensive expansion of functionally competent Tregs (9-11), although the relative role of GITR on Treg and Teff cells remains only partly understood. In this study, we find that in the absence of GITR-L the expansion of FoxP3+ Treg cells is impaired in an antigen-specific manner, which can be mimicked by in vivo and in vitro activation of CD4+ Treg cells with αCD3. Our results are consistent with the findings of the Chatila group that expansion and contraction of Teff and Treg dynamically control primary immune responses to foreign antigen (25).
Glucocorticoid-induced TNF receptor family-related protein ligand impacts immune regulation in gene replacement therapy on at least three levels. First, the induction/expansion of antigen-specific Treg cells in the liver after AAV-mediated gene therapy is impaired directly by the absence of GITR-L. Second, the expansion of antigen-specific CD8+ T cells is reduced by GITR-L deficiency; however, the impaired expansion of Treg cells can, on the other hand, indirectly promote CD8+ T cell expansion. Third, GITR-L deficiency affects the infiltration of monocyte-derived MØ to the sites where exogenous protein is expressed and/or the sites of inflammation (30), which changes the local function of different immune cells. These GITR-L-expressing, monocyte-derived MØ may provide a microenvironment for the expression of CD103 in Treg cells, an integrin that facilitates the retention of Tregs at sites of inflammation or infection.
Surprisingly, we found that administering αCD3 causes the depletion of CX3CR1+ DCs in the spleen and liver of GITR-L−/− mice, which correlates with a reduced number of FoxP3+ Tregs. It has been reported that IL10-secreting GITR+ Tr1 cells may suppress immune responses by granzyme B-mediated killing of myeloid APCs (37, 38). Granzyme B is also important for the ability of Tregs, NK cells, and CD8+ T cells to kill their targets (39). It is possible that Tr1, Treg, and CD8+ T cells play a role in the depletion of CX3CR1+ DCs in GITR-L−/− mice. In the presence of GITR-L, an increased expansion of Tregs may inhibit this self-destructive cytotoxicity. Depletion of CX3CR1+ DCs, which include the GITR-L-expressing pDCs and MØ (12, 30), may feed back to cause a reduction of Treg numbers during immune responses.
Ly6Chi monocytes give rise to CX3CR1+ DCs under both steady-state and inflammatory conditions. Under resting conditions, CX3CR1+ DCs in the intestine are reported to induce immunosuppressive CD8+ T cells (40). CX3CR1+ DCs isolated from the liver are able to induce Tregs in vitro. However, during inflammation CX3CR1+ DCs give rise to proinflammatory effector cells (41). The mechanism by which this Ly6Chi monocyte-derived DC subpopulation is educated to act as either protagonist or antagonist is still not well understood. Anti-CD3-mediated depletion of CX3CR1+ DCs in the liver may provide an important tool for the study of migration, colonization, and education of this special DC subset (30).
In conclusion, our data show that GITR and GITR-L have important implications for gene therapy. Optimal induction of an immune regulatory response, which is crucial for tolerance to the transgene product and for immune modulatory gene therapy, requires co-stimulation by GITR-L, which enhances Treg induction and function. Expression of GITR-L on hepatic APCs may in part explain the tolerogenic/Treg inducing capacity of hepatic gene transfer.
AUTHOR CONTRIBUTIONS
Gongxian Liao performed all the experiments; Michael S. O'Keeffe helped in processing the samples and editing the manuscript; Guoxing Wang and Boaz van Driel helped in processing the samples and discussing the results. Rene de Waal Malefyt generated the GITR-L-deficient mice; Hans-Christian Reinecker provided deeper insight into the αCD3-induced murine model. Roland W. Herzog helped in discussing and writing the manuscript; Cox Terhorst was the major organizer of this work and designed the experiments with Gongxian Liao.
ACKNOWLEDGMENTS
We thank Dr. Talal Chatila for providing the FoxP3EGFP knock-in reporter mice, and all other members of the Terhorst Lab for helpful discussions. We thank Dr. Shangzhen Zhou and the AAV research vector core at The Children's Hospital of Philadelphia for help with production of the AAV8-OVA vector. Grant support: this work was sponsored by the National Institutes of Health (P01 HL078810 to Roland W. Herzog and Cox Terhorst, and R01 DK-52510 and P30 DK-43351 to Cox Terhorst).
"Biology"
] |
From composite material technologies to composite products: a cross-sectorial reflection on technology transitions and production capability
Materials, since the dawn of time, have played a crucial role in the development of civilization. Pre-history ages are fundamentally characterized by the material humans mastered, while the transitions to new materials have always marked a different socio-technical order. In this work we investigate a relatively new material class, composites, in order to explain the issues the industry is currently facing. We discuss the material in the context of developing products that take full advantage of the benefits that composites can offer. The main idea behind this work is to understand how composite material technologies create growth and how the properties of those materials influence production capability and manufacturability. This work is the result of the EPSRC Centre for Innovative Manufacturing in Composites Platform research in the UK. It started with the bold intention to go beyond conventional research in composite materials and explore the mechanisms of industrial change and growth through material. An examination of cases from a diverse range of sectors acted as a platform to initiate a conversation on the issues practitioners are facing when adapting their products, or processes, to composite technologies, or when moving from a craftsman approach to state-of-the-art material and process technologies. This paper presents insights from a sector/market agnostic point of view to probe the socio-technical considerations related to the diffusion of manufacturing innovation concerning composites and their production capabilities. The paper makes three main contributions. First, it presents a discussion on the capability issues regarding composites. Second, it presents empirical evidence on industrializing in composite material technologies. Finally, building on empirical evidence and previous literature, it describes the feedback loops during the composite product development process. The paper concludes with a reflection on current theories of innovation management in relation to composite material technologies.
percentage of those aircraft structures is composite, reducing structural weight and consequently fuel consumption compared with existing aircraft in the same class. These benefits explain the interest in this relatively new class of material technologies.
Despite such examples and other sector-specific cases, it is widely understood that the composites industry can only demonstrate individual cases of success, and that these successes have proven to be inadequate for the development of a coherent industry built on deep expertise and volume production. So the question is: is a 'better material' a guarantee for industrial success?
In this paper we attempt to answer this question. Section 2 sketches a current picture of the composite materials industry, including a brief historical analysis. In section 3 we discuss the issues related to craftsmanship, industrialization and academic research related to composites. In section 4, empirical evidence from the investigation of eight industrial Cases regarding the enabling and the blocking factors in the development of the industry is presented. Section 5 demonstrates a framework for production capability development for composite products. A discussion on material strategy theories follows in section 6. Section 7 concludes this paper and discusses implications of the current study.
Paint it black: black metal components and black craftsmanship
Industrial practice has traditionally treated composites as a substitute material, usually overlooking the systemic architecture of the component and thus compromising the benefits composites can offer. Part of the reason for this is that engineering design has been very closely interwoven with the metallic tradition, and composites require a very different design mind-set. Most engineering designers are still trained in metallic design and thus carry this tradition across even when dealing with composites. As a result, very often those components do not take full advantage of the novel possibilities inherent in composites. Historically, composites have evolved around this oxymoron known widely as black aluminium (Tsai 1993): carbon fibre components designed using the 'old' knowledge and norms of metallic structures. These components are designed as metals but manufactured in composite material, resulting also in serious manufacturability issues. For example, processes like milling, drilling or grinding, widely used in metals, deliver a particular set of localized geometrical features such as corner radii, minimum gauges, surface finishes and geometrical tolerances which cannot be carried directly across into composites manufacturing processes.
Metals and composites might require very diverse industrial philosophies and distinct skill-sets; however, the limited availability of composite design and manufacturing knowledge is not the root of all the problems. Practice has demonstrated that even when new knowledge is available, adoption by industrial partners is not as evident as we might expect. Practices and rules developed very early in the history of composites, when the materials were new and untried, are still widely used across the breadth of composites applications despite the availability of new knowledge (Potter 2009). This old mindset around composites is evident when we consider current production capability issues.
On low production capability
The origins of composite manufacturing methods go back to a technique known in practice as 'bucket and brush'. This is the manual process of dipping a brush in resin and covering layers of fibres with it. A more recent technique known as lamination, utilizing pre-impregnated (prepreg) fibres, has standardized the quality of the raw material (Paton 2007); nonetheless, it still relies heavily on manual labour to apply that material to the mould tools. Product quality is thus dependent on human craftsmanship skills, creating a 'black art' character (Bloom et al 2013) in composite manufacturing. This craft requires highly skilled manual techniques and frequently involves the use of a self-made toolset (known as dibbers) created by the workers themselves (Jones et al 2015). This skillset is usually self-taught and can only be acquired in practice by apprenticeship next to a master laminator with many years of expertise. Very little formal training for laminators exists, and the application of theoretical knowledge to support a deeper understanding of this tacit process is in its infancy (Elkington et al 2013).
Automated processes in composite manufacturing have appeared in the last decades, offering the prospect of cost-effective manufacture of large composite components. However, it has been widely reported that such automated techniques face significant difficulties and problems related to affordability, process reliability and overall productivity (Newell et al 1996, Lukaszewicz et al 2012). A possible reason is that automation and robotic application companies lacked the material expertise and did not take the nature of composites into consideration while developing the machinery. They have only recently started dealing with inherent manufacturability issues, as they gradually develop expertise in composites. Moreover, there are still no automated processes available to manufacture relatively small and complex components to high quality standards and volumes. With the exception of existing approaches for large and relatively simple geometries (i.e. automatic fibre placement), the majority of composite manufacturing is still dependent on manual labour and craftsmanship skills. As a result, only small numbers of complex components can be manufactured, with sometimes unreliable quality and relatively low efficiency levels.
This inability to capture the expert skills and develop automated technologies seems to limit composite production capability. A particular case is the new Boeing 787 Dreamliner, where composite production capability and material lay-down rate fell short. The forecasted materials deposition production capability target of 200-500 lb h−1 proved to be unrealistic, and the actual production rate had only reached 30 lb h−1 by the time a report became available (Airbus SAS 2008). The corporate world has put significant effort into increasing composite production rates. Nevertheless, reports of these efforts are never available, mainly due to the reluctance to share evidence related to organizational performance. On the other hand, official national and international statistical records regarding composite material are not available either. Since composites pertain to a variety of sectors and no single Standard Industrial Classification (SIC) code exists, it is particularly difficult to map composite activity and formulate reliable figures. However, data related to composite patents can provide a good indication and historical reference regarding the growth of the sector.
Composites and the 1970s promise
Patent records can provide evidence to sketch some reliable patterns related to the trajectory of the composites industry. Figure 1 demonstrates a growing trend in composite patents through the years. Data refer to international patent filings. But how many of those patents actually relate to the shaping of composite products (i.e. directly to forming composites on the tool) and not to technologies that are peripheral to their development? Figure 2 shows a very different pattern. Industrial patents related to composite shaping rose around the 1970s, when composites were believed to be part of the future (Schatzberg 1998). This momentum echoed up to the early 1980s, when strong expectations in composite technologies were still being formed (Harris 1991, Carlson 1993). After a twenty-year gap the industry appears to have returned to a similar record only very recently. We can only speculate about the reasons behind those trends. Another approach would be to rely on basic theory about industrialization in order to understand how such a pattern might have developed. In the next section we explore those concerns.
Manufacturing skills, craftsmanship and industrialization
The current diversity and broad spectrum of activities in composites results in different levels of sophistication in manufacturing skills, fabrication techniques or production approaches. However, the main difficulty in the sector arises from the fact that designing and manufacturing composite products that utilize the qualities of the material requires a very deep understanding of the behaviour of the material, not only during use but also during manufacturing. This essentially means that the industry first needs to build up enough expertise on these matters before it is able to formulate product specifications that utilize the inherent material qualities.
Consider for instance the case of the two de Havilland Comet airplanes that crashed in the 1950s (e.g. Withey 1997, Schijve 1994). These accidents unveiled a major flaw in knowledge related to the fatigue behaviour of aluminium under the load conditions of pressurized fuselages, a flaw which had caused the engineers to overlook potential fatigue-related problems in their design. Consequently, metal fatigue became a major engineering issue on the agenda of the airplane designer (Vlot 2001). Similar stories can be found of incomplete manufacturing knowledge in early stages of the adoption cycle of new materials. The development of theoretical understanding of the material in terms of how to engineer it (calculate loads, strength, etc), its behaviour in production and its performance in practical applications is essential for the advanced industrialization of the sector.
Division of labour and material transitions
Historically, knowledge developed in composite technologies was largely based on old rules and routines prevalent in traditional industries. As a result, when composites became broadly available as a new class of material, the growth of the sector was restricted. To understand the main mechanisms leading to industrialization of the composite sector we need to go back to the basic principles of industrialization. Those considerations could allow a clearer view of the enabling factors that can catalyse industrial growth of a material technology.
The division of labour and the disconnection of design, engineering and production from physical craftsmanship skills lie at the heart of the industrial revolution. Essentially, design and manufacturing are found in one and the same 'hand' during early stages of applying new materials. One could say that in order to industrialize, first an integrated body of knowledge covering design, engineering, manufacturing, and use needs to be in place. The industrial engineering literature is full of methodologies and approaches for dividing tasks into workstations and balancing production lines after such a body of knowledge is established. This also happens in activities beyond the production floor, where outsourcing nowadays is a very common strategy. However, this approach, which seemed to work well in the post-industrial-revolution era, is currently falling short due to rapid technological developments. For example, when the actual tasks of detailed design and manufacturing in automotive are carried out by outside suppliers, the outsourcing company misses substantial opportunities to gain knowledge and, as a consequence, the company's knowledge base tends to decline (Takeishi 2002). Something similar happened recently with Boeing's 787 Dreamliner where, due to the outsourcing of design and manufacturing of parts, an integrated body of knowledge regarding the design itself was largely missing (Tang and Zimmerman 2009). As tasks are divided (i.e. division of labour) or outsourced, the integrated knowledge that used to belong to a single master craftsman or team is now spread across the whole supply chain. Thus it becomes a challenge to manage knowledge, especially when substantive amounts of new knowledge are simultaneously developed. It is even more of a challenge when an integrated body of knowledge covering design, engineering, manufacturing, and use is only weakly developed.
Additional issues arise when new technologies enter the field and a lack of integrated and embodied knowledge appears to be a burden in adapting to a new reality. It is already known that supplying an immature industrial environment with the latest machines and methods is a seriously inappropriate model for industrialization, particularly due to the lack of specialists who can improve raw materials and products (Stigler 1951). This means that, without the deep knowledge that underpins the new machines, their users will be 'condemned' to treat this technology as a black box, preventing them from 'playing' with the underlying principles in order to innovate and aim at sustainable growth. Therefore, the solution does not rest in mechanization or automation as such, but in the progressive development and establishment of the capability to build the practical skills and the integrated knowledge around the new technology.
Composite material research
On the other side of industrial practice stands pure academic composite research. Here there is a relatively narrow focus on issues related to the chemical or physical properties of composite materials. There is also an important body of research driven by design considerations (i.e. strength prediction, damage characterization); however, the great bulk of that work has been related to the design of simple structural forms to achieve specific property suites (including effects such as bend/twist coupling, maximizing buckling resistance, minimizing the effects of impacts and minimizing mass properties). Much less work has been done to formalize design approaches for the components of more complex geometry that make up the great bulk of commercially manufactured parts. In parallel to that research, there has been a significant level of research activity relating to aspects of composites processing and manufacture in areas such as cure simulation, geometrical distortion, process modelling, woven cloth drape and consolidation, defect initiation and propagation and so on.
A systemic approach to innovation and technology development in composites was recognized very early as a need for the sector (Brown et al 1985, Carlson 1993); nonetheless, research at the organizational and operations level for composites manufacturing has been very limited (Oliver and Stricklans 1990, The Lean Aircraft Initiative 1997). Despite the significant research output in the science of composites, there is no known effort to understand concerns related to composites productivity at a systemic level.
Empirical evidence and issues in the composites industry
This lack of theoretical underpinning drove the collection of industrial cases regarding the growth of the composites industry. Rather than testing a hypothesis, a series of expert interviews generated contextually rich data, looking at a broader range of interconnected themes in the context of composite product innovation and industrial growth. Early findings were reported in Chatzimichali and Potter (2015). Here we discuss the emerging themes related to growth issues, as developed through the investigation of eight industrial Cases in different composite sectors.
Table 1 presents the sectors of those Cases and their main activity. All interviews were audio recorded (total duration of interviews: 17 h 07 min) and were fully transcribed by the researcher (total number of words: 101 241). The participants had an average of 30.5 years of experience in the composites industry.
The qualitative data were analysed from two perspectives: the factors that enable and the factors that block the industrial growth of composite technologies. The following table is an aggregated report of those factors as derived from the analysis of all expert interviews.
Five general categories emerge that relate to industrial growth in composites: design, manufacturing, production planning and control, investment and funding of a new technology, and market development. Each category presents themes related to enabling and blocking factors for the development of the sector (table 2).
Taking a critical view of those themes reveals an interesting pattern. The majority of issues under design and manufacturing are very closely related to the nature of composites. On the other hand, in investment and funding, market development and production planning and control, more general issues arise that can also be found in many other new products, technologies or markets. For example, lack of trained designers, material variability and faster-handling material are closely interwoven with the nature of the industry, while outsourcing, difficulty in finding the first client or IP issues can be identified in many sectors.
The next step would be to get a deeper understanding of elements related to the design and manufacturing of composite products. This is a credible approach that could enable us to highlight where the real issues lie for composites.
Building composite production capabilities
Successful product development in composites requires an integrated view of many strands of activity, usually under tight time and financial constraints and often with some uncertainty regarding the design requirements and material response. Despite the importance of those factors, there is little academic research that concentrates on the development process of composite products or any schematic map of the interactions between the processes that take place. In order to understand how production capabilities are built, it is important that composite product development be considered as a system that addresses the total requirements of the application for which the product is intended and their impact on every part of the development cycle.
A framework of feedback loops in composite product development
Composite component design, compared to other material technologies, is not a well-defined problem that can be divided into smaller parts that are solved separately and then combined into a total solution. This feedback approach in composite product development means that during component design the part geometry, the choice of material and the manufacturing routes evolve simultaneously. The reason is that one cannot perform the selection of component material, design, and choice of processes independently; any change in one will inevitably affect the others (Bader 2002).
Here we concentrate on this need for a combinatorial product development map that highlights the integrative nature of composite products. Returning to product development in composite design and manufacturing, the individual building elements of design and process development are represented as feedback loops. Those building elements, initially presented in Potter (1997), are represented here in such a way as to allow a consistent view of the evolution of a composite component from concept to reality.
Figure 3 represents how the main elements in composite product development interact with each other. The main product development process starts with the initiation and formulation of a design brief. To develop this design brief, an assessment loop takes place that involves considerations regarding all future stages: design development, manufacturing development, fabrication/production and, most importantly, the final stage, which is the realization of the product and includes the assessment of the product's functional requirements and costs. The next stage is design development, which involves three feedback loops (outline, detailed and validation) where decisions about manufacturability, joints and loads, prototyping or scaling are made and we move from the design outline to a provisional design and then the final design. Manufacturing development follows, where a process development loop with decisions regarding manufacturability, tools and thermal analysis leads to a processing model. Fabrication/production is the next step, where the last manufacturability considerations are addressed while moving from preproduction to ramp-up and full-scale production. Finally, in realization, the product becomes a reality and the developed component is in use.
The feedback loops demonstrate the difficulty of taking decisions at each stage while envisioning a future or a reality that is not yet determined. Also, while in the initiation stage the feedback loop concerns all four subsequent stages, in fabrication/production there is only one stage that the feedback loop touches upon. This explains what Potter (1997) observed: that the majority of defects in manufactured parts could be traced back to design decisions (where more future stages should be considered), rather than to processing variability or errors.
Exploration versus exploitation in composites
Within the context of innovation, it has long been known that 'an organization that is designed to do something well for the millionth time is not good at doing something for the first time' (Galbraith 1982, p 6). According to Galbraith (1982), this creates a dual perspective, because the process of creating new products uses a fundamentally opposing logic to the process of manufacturing. This is the fundamentally opposing reasoning of exploration and exploitation. Exploration happens in the initial stages, when experimenting and developing new products and components. On the other hand, improving quality and production reliability through refinements, production efficiency or incremental innovation of existing output is the exploitative aspect of product development (Levinthal and March 1993). When developing new products, organizations ought to focus on both of those logics; therefore, different departments or organizational arrangements in a supply chain cover those aspects. But what does this reveal for composite product development and production?
In this light, we should reconsider the previous discussion on industrialization and division of labour in section 3. When an established organization has already delegated product design to one actor and manufacturing to another, what happens when a new material that requires a different design and manufacturing approach becomes available? This transition is not easy, because it is not a simple material substitution. It requires a bottom-up re-engineering of the organizational structure or even of the whole supply chain. New knowledge should be generated and redistributed. Failure to redistribute this knowledge is reflected as a symptom in production capability.
Figure 4 represents exactly this concept. Manufacturing development, a crucial activity in composites, stands exactly in between traditional design and manufacturing processes, making it a grey zone. Instead of recognizing it as an activity in its own right, many organizations tend to fit it within their previous structure because this seems to be more in line with the pre-existing and well-founded concepts of product development. A possible way forward would be to understand how material strategy and material technology development can create production capability. In the next section we discuss some of the most established theories and their limitations.
The socio-technical forces that shape a new technology
Technology strategy is crucial for the success of any product or technology; however, to understand composite product development we also need to understand the environment in which composites evolve as technologies. A critical look at the history of material developments in the aerospace sector makes clear that this new technology requires much more than technological expertise. Schatzberg (1998) discusses the resistance to composite innovation and questions the laws of natural selection for new material technologies. No objective processes ensure that the best technology will prevail. Instead, progress comes partly from reasoned argument and empirical evidence, and partly from the symbolic meanings shaping technical culture. In these terms, the first step to gaining industrial momentum in composite material technologies is becoming convinced that they are an indispensable part of a sustainable future. A new material technology requires the shaping of a new social order, in which stakeholders tacitly cooperate to formulate a different technological reality.
There are two distinct strands of literature arguing on this point: technical determinism and social constructivism. Technical determinism supports the vision that technologies develop as a reflex of scientific discovery and therefore cannot be affected by human influence. According to this point of view, and paraphrasing the Victor Hugo quote, 'nothing is stronger than a technology whose time has come'. Technical determinism is evident in many companies with great technical abilities that are often very dismissive of their understanding of their own design and product development processes (O'Donovan et al 2005). Social constructivism, on the other hand, simply argues that technologies are shaped by individuals and collective groups through actions, strategies and interpretations.
Taking both theories to the extreme can provide a platform for understanding why the answer to making sense of the growth of a material technology might rest in the middle. Social constructivism has been characterized as naïve empiricism when focusing purely on markets and networks. Similarly, promising technologies are not born in a social vacuum. But is it possible to delay or speed up the development of a technology 'when its time has come', and how do we know when this is? Of course, one cannot ignore the disruptive nature of some technologies that changed the course of sectors and markets almost overnight (e.g. fibreglass in the small boat hulls market). However, even if one studies disruptive innovations and technologies, it is clear that those technologies are only disruptive in specific contexts (Christensen 1997, Christensen and Raynor 2003). This means that a material technology like composites cannot be approached in a very broad context; in order to be studied it should be pinned down to specific products and markets.
Dynamic capabilities and technology diffusion
Another level of analysis of the socio-technical forces driving new technologies deals with the emergence of antagonistic patterns between competing technologies (Rip and Talma 1998) and more recent studies from sociology and institutional theory (Geels 2004). Considering that innovations are separate from the current socio-technical regime (Geels and Schot 2007), technological skills arise after the transition and grow due to the industrial momentum around a technology. Consequently, a seeming lack of momentum in the composites socio-technical environment might be the underlying reason for low production capability. Even if resources or the right skills magically appear, there is an increased possibility that they will not be properly utilized. An immature industrial environment cannot absorb new technologies when integrated and embodied knowledge is in short supply. According to Mitchell (1989), who examined the probability and timing of entry into emerging technical sub-fields, industry-specific capabilities increased the likelihood that a firm could exploit a new technology within the industry. However, dynamic capabilities, in the way they were defined in strategic management, seem to have more to do with private wealth creation and keeping competitors off balance (Teece et al 1997) than with growth and the development of a sophisticated technology that can potentially impact a variety of fields. It is therefore particularly difficult to use such a theoretical construct to analyse industrial change in the context of composites. Moreover, these studies go beyond the scope of the present work and touch upon the realms of research and technology policy.
The adoption/diffusion innovation model (Rogers 2003) is another prevalent framework focusing on the development of new technology. This model seeks to explain the timing and the stages of the adoption of a specific innovation. Although diffusion signifies a group phenomenon, the theory is intended to be applied to specific innovations that were either rejected or accepted. This level of analysis imposes significant difficulties when assessing adoption rates for composite products. A related limitation is that the model was initially intended for consumer adoption rates and therefore considers this rate within a specific population. In the case of composites it is particularly difficult to quantify the market, or the part of each sector, that took the decision to adopt composites, and also to acquire empirical evidence to illustrate how such a transition happened.
Dominant designs and material
Another body of work builds on evolutionary economics and the literature on the history of technology, and was initiated by Abernathy (1978), Abernathy and Utterback (1978), Utterback and Abernathy (1975), Anderson and Tushman (1990), and Utterback and Suarez (1993). This strand of literature argues that technological innovation in a sector is driven forward by the role of a dominant design. Dominant designs emerge as an outcome of socio-political or institutional dynamics constrained by economic and technical conditions, and link directly to organizational evolution and technology cycles. When a dominant design appears and gets broadly accepted in an industrial context, an organization shifts its efforts from product innovation to process innovation. This essentially means that R&D activities change their focus from product innovation towards decreasing production cost through process innovation. It also allows the development of production capability and the further growth of new technologies.
There is another limitation of this theory in relation to composite materials. Theories around dominant design raise several conceptual issues in the material technology context. First, the classification of new technologies as process innovation or product innovation fails to describe the underlying dynamics in composites. The composites industry does not fall in the same category as cement, steel, glass and other chemicals, where innovation comes from fundamental changes in the production processes and the products have little or no customization capability (Hayes and Wheelwright 1979). Composite characteristics are customized according to the product; however, composites do not belong to the product innovation class either. The reason is that the material and the manufacturing process are what enable the product's distinctive characteristics. In composites, product and material are created simultaneously, and therefore product innovation cannot happen without process innovation. Composite technologies thus seem to fall in the middle between the product and process innovation schemes, making the dominant design framework unable to describe the growth of this material technology at an industrial level. A similar pattern of product and process innovation occurring simultaneously has been identified in the nanotechnology sector (Linton and Walsh 2008).
Another point related to the dominant design approach is the hierarchy into which a product or a technology is divided (system level, first-order subsystem, second-order subsystem, component level), according to Murmann and Frenken (2006). Each level in this hierarchy can follow its own technology cycle. However, the material of a product is not a part of this systemic hierarchy. The material is an attribute, and a change in material can potentially redefine the whole systemic hierarchy of a product. Consequently, current theories around dominant design and technological change cannot adequately describe this type of material-based technology.
Finally, another issue with this particular framework is that dominant designs can only be studied in retrospect; it is also a rather ambiguous phenomenon whose definition, unit of analysis, causal mechanisms and underlying conditions remain unclear (Ehrnberg 1995).
Conclusions
Technologies have their own dynamics, but one cannot ignore actor strategies or sector economics. Shaping the social dimension of the associated design and manufacturing network, or the dynamics of pre-existing networks, determines to a large extent the success of a technology. At least this was proven in the case of the semiconductor industry, as demonstrated by the narrative of its inventors (Berlin 2005), where influencing technology development proved to be a complicated multi-actor process, and as supported by more recent literature (Le Masson et al 2013). In the semiconductor industry, growth became possible first by getting collaboration together and later by solving the technical problems. Expectations structured activities and built agendas. The pure nature of the technology was not enough to fuel growth, and the patterns that eventually emerged could not be attributed to one particular actor. It was also apparent that a repertoire of stories (including Moore's law) defined the possibilities and future strategies, including the evaluation of the actions of others, as illustrated by Lente and Rip (1998).
The answer to how a new material technology can create growth rests on the common thread that connects those seemingly independent but linked strands of literature. One thing to keep in mind is that composites are not simply a material or a technology, but material systems. Adequate theoretical frameworks are therefore hard to come by. Thus, the difficulties organizations face in composite product development have to do not merely with the reconfiguration of the product, but also with the reconfiguration of organizational structures. When something as radical as the material changes, a substitution process will not get you far. It requires organizational change that must be considered at the system level.
It is clear that more effort is required to understand the composites industry and to look further than single technologies or single manufacturing facilities, which are only small parts of the total. Research needs to concentrate on academic rigour and, more importantly, on the inherent fuzziness of real systems. It is also important to select researchers who understand production methods in different industries and have an aptitude for communicating findings to people from very diverse backgrounds. This will enable the discussion of real problems with industry, government ministries, union executive committees, labour unions and leaders in the investment community, in order to gain their reactions, criticism and suggestions to carry this work forward. This requires access to both executive suites and factory production floors. Only when organizations open up to discuss their problems candidly can research projects feed successfully back into practice.
Figure 3. Feedback loops in composite product development.
Figure 4. Exploration and exploitation in composite product development. (a) New product development.
Table 2. Enabling and blocking factors in the growth of composite material technologies. | 7,534.8 | 2015-07-24T00:00:00.000 | [
"Business",
"Materials Science"
] |
A case study of proton shuttling in palladium catalysis
Thanks to mechanistic studies, the catalytic performance of SCS indenediide Pd pincer complexes has been spectacularly enhanced using catechol additives as proton shuttles.
II.b. Partial order determination: The partial order in each of the reaction's components (substrate and catalyst) was determined by the initial rate method. The concentration of product versus time data were fitted with Excel; the slope of the linear fit represents the initial rate. The partial order was then determined by plotting the initial rates versus the initial concentrations.
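The fits described above were performed in Excel; purely as an illustration (not part of the SI), the same initial-rate analysis can be scripted. In the sketch below the kinetic profiles are synthetic placeholders, and the partial order is taken as the slope of log(V0) against log([Pd]0), one common way of extracting a reaction order from initial rates.

```python
import numpy as np

# Hypothetical low-conversion kinetic profiles (time in min, [product] in mol L-1),
# one per initial [Pd]; real data would come from the 1H NMR integrations.
pd0 = np.array([0.0042, 0.0070, 0.0105, 0.0140])   # initial [Pd], mol L-1
times = np.arange(0.0, 30.0, 5.0)                  # one spectrum every 5 min
k_app = 0.03                                       # illustrative apparent rate constant
profiles = [k_app * c * times for c in pd0]        # toy data, first order in [Pd]

def initial_rate(t, conc):
    """V0 = slope of the linear fit of [product] vs time at low conversion."""
    slope, _ = np.polyfit(t, conc, 1)
    return slope

v0 = np.array([initial_rate(times, p) for p in profiles])

# Partial order = slope of log(V0) vs log([Pd]0).
order, _ = np.polyfit(np.log(pd0), np.log(v0), 1)
print(f"partial order in [Pd] ~ {order:.2f}")
```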
II.b.1. Partial order determination for [Pd]
To determine the partial order of the reaction in catalyst, the initial kinetic profiles at different initial concentrations of palladium center were recorded. The final data were obtained by averaging the results of three independent trials for each experiment.

Experiment | mol% [Pd] | mg of Cat. I | Initial concentration [Pd] (mol.L-1)
1 | 3 | 1.5 | 0.0042
2 | 5 | 2.5 | 0.007
3 | 7.5 | 3.8 | 0.0105
4 | 10 | 5.1 | 0.014

Figure S1. Reaction conditions for the partial order determination of [Pd].
General procedure: 5-hexynoic acid 1a (10.8 mg, 0.098 mmol, 0.14 M), the quantity of Catalyst I indicated in the table above, and 0.7 mL of CDCl3 were introduced into a pressure NMR tube. The reaction mixture was heated at 90 °C and 1H NMR spectra were recorded every five minutes until a conversion of 10% was reached.
II.b.2. Partial order determination for 5-hexynoic acid 1a.
To determine the partial order of the reaction in 5-hexynoic acid, the initial kinetic profiles at different initial concentrations of 5-hexynoic acid were recorded. The final data were obtained by averaging the results of three independent trials for each experiment.
II.b.3. Partial order determination for 5-hexynoic acid 1a in presence of 1 mol% of Tetrachlorocatechol 4u.
To determine the partial order of the reaction in 5-hexynoic acid in the presence of 1 mol% of tetrachlorocatechol 4u, the initial kinetic profiles at different initial concentrations of 5-hexynoic acid were recorded. The final data were obtained by averaging the results of two independent trials for each experiment. Table: [1a] (mol.L-1) vs. V0(1), V0(2) and average V0 (mol.L-1.min-1).

II.b.4. Partial order determination for 5-hexynoic acid 1a in the presence of 5 mol% of tetrachlorocatechol 4u.

To determine the partial order of the reaction in 5-hexynoic acid in the presence of 5 mol% of tetrachlorocatechol 4u, the initial kinetic profiles at different initial concentrations of 5-hexynoic acid were recorded. The final data were obtained by averaging the results of two independent trials for each experiment.

II.b.5. Partial order determination for 5-hexynoic acid 1a in the presence of 10 mol% of tetrachlorocatechol 4u.

To determine the partial order of the reaction in 5-hexynoic acid in the presence of 10 mol% of tetrachlorocatechol 4u, the initial kinetic profiles at different initial concentrations of 5-hexynoic acid were recorded. The final data were obtained by averaging the results of two independent trials for each experiment.

II.b.6. Partial order determination for 5-hexynoic acid 1a in the presence of 20 mol% of tetrachlorocatechol 4u.
To determine the partial order of the reaction in 5-hexynoic acid in the presence of 20 mol% of tetrachlorocatechol 4u, the initial kinetic profiles at different initial concentrations of 5-hexynoic acid were recorded. The final data were obtained by averaging the results of two independent trials for each experiment. Figure S22. Partial order of 5-hexynoic acid 1a versus the quantity of tetrachlorocatechol 4u.
III. Study of the self-association of 5-hexynoic acid in CHCl3 by IR spectroscopy.

The self-association of 5-hexynoic acid was evidenced by IR spectroscopy in CHCl3 at different acid concentrations (vide infra). IR spectra were recorded with a resolution of 4 cm-1 over 16 scans. Two well-defined absorption bands were observed, at 1711 and 1751 cm-1, corresponding to the dimeric and monomeric forms, respectively. The observed bands are shown in Figure S23 at different concentrations and reported in Table S8 together with the intensity ratios.
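The SI reports only the band positions and intensity ratios. Purely as an illustration (not an analysis performed in the SI), dimer/monomer data of this kind can be converted into an apparent dimerization constant under the usual assumptions of a simple 2 M <=> D equilibrium and Beer-Lambert behaviour of both bands; the concentrations, the value of Kdim and the data below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical total acid concentrations C0 (mol L-1); real values would come from Table S8.
C0 = np.array([0.02, 0.05, 0.10, 0.14, 0.20])

def monomer_conc(C_total, Kdim):
    """[M] from the mass balance C_total = [M] + 2*Kdim*[M]^2 (2 M <=> D)."""
    return (-1 + np.sqrt(1 + 8 * Kdim * C_total)) / (4 * Kdim)

def dimer_over_monomer(C_total, Kdim):
    """Predicted dimer/monomer concentration ratio, [D]/[M] = Kdim*[M]."""
    return Kdim * monomer_conc(C_total, Kdim)

# Toy "observed" ratios generated with Kdim = 40 L mol-1, then refit for illustration.
ratio_obs = dimer_over_monomer(C0, 40.0)
Kfit, _ = curve_fit(dimer_over_monomer, C0, ratio_obs, p0=[10.0])
print(f"apparent K_dim ~ {Kfit[0]:.1f} L mol-1")
```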
31P NMR analysis of the palladium indenediide dimer I at variable temperature.

The dissociation-association behavior of the palladium indenediide dimer I was evidenced by 31P NMR spectroscopy at variable temperature using a 400 MHz NMR spectrometer. The association activation barrier was estimated from these experiments (Figure S24), using the following formula:

ΔG‡(Tc) = 4.575 × 10-3 × Tc × [9.972 + log(Tc / ΔνAB)] kcal.mol-1

The association activation barrier was estimated to be at least 15.8 kcal.mol-1, based on the 31P NMR spectroscopic data: the coalescence temperature Tc = 363 K and the chemical-shift difference ΔνAB = 1017.33 Hz.
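Evaluating the standard two-site coalescence expression with the values quoted above reproduces the reported barrier; the short script below is included purely for convenience.

```python
import math

def coalescence_barrier_kcal(Tc, dnu):
    """Two-site exchange barrier at coalescence (kcal/mol):
    dG = 4.575e-3 * Tc * (9.972 + log10(Tc / dnu))."""
    return 4.575e-3 * Tc * (9.972 + math.log10(Tc / dnu))

Tc = 363.0      # coalescence temperature, K
dnu = 1017.33   # chemical-shift difference of the exchanging 31P signals, Hz

print(f"dG_assoc ~ {coalescence_barrier_kcal(Tc, dnu):.1f} kcal/mol")  # ~ 15.8
```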
VII. General procedure for the cycloisomerization of alkynoic acids in the presence of additives.

In an NMR pressure tube, the alkynoic acid (0.098 mmol), the dried additive (x mol%) and complex I (2.5 mg, 5 mol% [Pd]) in 0.7 mL of CDCl3 were heated at the corresponding temperature under an argon atmosphere. The progress of the reaction was monitored by 1H NMR.
Figure S26. H-bond additive library used in the cycloisomerization of 5-hexynoic acid 1a catalyzed by indenediide dimer I.
Figure S27. Evaluation of the impact of weak H-bond donor compounds 4 (30 mol%) on the cyclization of 5-hexynoic acid 1a.
Table S10. Evaluation of the additives in the cyclization of 5-hexynoic acid 1a and optimization of the reaction conditions.
IX. Computational details

Calculations were carried out with the Gaussian 09 program [S3] on the real experimental palladium pincer system at the B3PW91 level of theory [S4]. The palladium atom was treated with the corresponding Stuttgart-Dresden RECP (relativistic effective core potential) in combination with its adapted basis set [S5], augmented by an extra set of f polarization functions [S6]. Phosphorus atoms were represented by the ECP from Dolg et al. and its associated basis set [S7], also augmented by d polarization functions [S8]. For the remaining atoms the 6-31G(d,p) basis set was used [S9]. Geometry optimizations were carried out without any symmetry restrictions and were followed by analytical frequency calculations to confirm that a minimum or a transition state had been reached. The connections between the transition states and the corresponding minima were confirmed by performing IRC calculations [S10]. Finally, the CYLview program was used for the representation of the 3D structures [S11].
"Chemistry"
] |
Inner Nuclear Membrane Asi Ubiquitin Ligase Catalytic Subunits Asi1p and Asi3p, but not Asi2p, confer resistance to aminoglycoside hygromycin B in Saccharomyces cerevisiae
The heterotrimeric Asi ubiquitin ligase (encoded by ASI1, ASI2, and ASI3) mediates protein degradation in the inner nuclear membrane in Saccharomyces cerevisiae. Asi1p and Asi3p possess catalytic domains, while Asi2p functions as an adaptor for a subset of Asi substrates. We hypothesized the Asi complex is an important mediator of protein quality control, and we predicted that Asi would be required for optimal growth in conditions associated with elevated abundance of aberrant proteins. Loss of Asi1p or Asi3p, but not Asi2p, sensitized yeast to hygromycin B, which promotes translational infidelity by distorting the ribosome A site. Surprisingly, loss of quality control ubiquitin ligase Hul5p did not sensitize yeast to hygromycin B. Our results are consistent with a prominent role for an Asi subcomplex that includes Asi1p and Asi3p (but not Asi2p) in protein quality control.
Figure 1. ASI1 and ASI3 confer resistance to hygromycin B: (A-C) Sixfold serial dilutions of yeast of the indicated genotypes were spotted onto agar plates containing rich growth medium (No Drug) or the indicated concentrations of hygromycin B. Plates were incubated at 30°C and imaged after 1-3 days. Experiments were performed in triplicate. (C) "asi2Δ (YKO)" is VJY852 and was obtained from the Yeast Knockout Collection (Tong et al., 2001). "asi2Δ (new clone 1)" and "asi2Δ (new clone 2)" are VJY969 and VJY970, respectively, and were generated for this study.
The aminoglycoside hygromycin B, produced by the bacterium Streptomyces hygroscopicus, reduces translational fidelity by distorting the ribosome A site, resulting in inaccurately synthesized protein molecules (Brodersen et al., 2000; Ganoza and Kiel, 2001). We previously demonstrated that loss of the ER and nuclear PQC ubiquitin ligases Hrd1p, Doa10p, and Ubr1p sensitizes cells to hygromycin B (Niekamp et al., 2019; Runnebohm et al., 2020). The extent of Asi's contribution to PQC relative to these enzymes remains unknown.
We hypothesized that Asi is an important mediator of PQC. We predicted that the Asi complex would be required for resistance to conditions expected to increase the abundance of aberrant proteins. To test this, we cultured wild type yeast, yeast lacking genes encoding each subunit of the Asi complex, and a panel of PQC mutant yeast strains in the absence and presence of increasing concentrations of hygromycin B ( Figure 1A). Consistent with previous results, loss of HRD1 or DOA10 sensitized cells to 75 μg/ml hygromycin B, and yeast deleted for UBR1 exhibited sensitivity at concentrations as low as 25 μg/ml. By contrast, deletion in two different genetic backgrounds of the gene encoding PQC ubiquitin ligase Hul5p (Fang et al., 2011;Runnebohm et al., 2020;Sitron and Brandman, 2019) did not sensitize cells to hygromycin B at the concentrations evaluated ( Figure 1A, 1B).
Loss of ASI1 and ASI3 sensitized cells to 75 μg/ml hygromycin B to a similar extent as loss of DOA10 or HRD1 ( Figure 1A). Intriguingly, loss of ASI2 in multiple independently generated yeast strains did not confer a similar growth disadvantage under these conditions ( Figure 1A, 1C). Deletions of ASI genes and HUL5 were validated by PCR. Taken together, our results indicate Asi1p and Asi3p, but not Asi2p, are required for optimal growth in the presence of a compound expected to generate increased numbers of PQC substrates.
The finding that loss of Hul5p does not enhance sensitivity to hygromycin B was surprising, given multiple characterized functions of Hul5p in PQC. Among other roles, Hul5p promotes degradation of substrates that have escaped detection by the ribosome quality control ubiquitin ligase Ltn1p (Sitron and Brandman, 2019) and promotes turnover of misfolded proteins following heat shock (Fang et al., 2011). Loss of Ltn1p sensitizes cells to hygromycin B (Bengtson and Joazeiro, 2010;Crowder et al., 2015). We speculate that a requirement for Hul5p in hygromycin B resistance may become apparent during conditions characterized by elevated cellular dependence on Hul5p, such as compromised Ltn1p function or heat shock.
Multiple lines of evidence suggest that a subcomplex of the Asi ubiquitin ligase including Asi1p and Asi3p (but not Asi2p) mediates PQC degradation of misfolded proteins, potentially in complex with unidentified substrate adaptors. First, as demonstrated here, deletion of ASI1 or ASI3, but not of ASI2, sensitizes cells to conditions expected to increase the abundance of aberrant, mistranslated proteins to an extent similar to that observed following loss of other characterized PQC genes (we note it remains possible that ASI2 is required for optimal growth under different forms of proteotoxic stress, such as elevated temperature). Second, while Asi1p, Asi2p, and Asi3p collaborate to mediate degradation of a host of mislocalized proteins, only Asi1p and Asi3p promote degradation of mutated translocon component sec61-2p (Foresti et al., 2014). Finally, simultaneous deletion of genes encoding Hrd1p, Ire1p (a component of the yeast unfolded protein response), and either Asi1p or Asi3p causes markedly slower growth than concurrent knockout of HRD1, IRE1, and ASI2 (Foresti et al., 2014).
Asi2p function is also dispensable for degradation of some Asi1/3p substrates that do not possess features rendering them predicted PQC substrates (Khmelinskii et al., 2014). Such substrates may expose degradation signals (e.g. when other complex subunits are present in substoichiometric abundance) resembling those of quality control substrates, co-opting a PQC enzyme for regulatory purposes. The precise nature of degradation signal(s) recognized by Asi remains to be resolved.
Yeast growth assay. Yeast growth analysis was performed as described (Watts et al., 2015). Four μl of sixfold serial dilutions were pipetted onto yeast extract-peptone-dextrose medium (Guthrie and Fink, 2004) in the absence or presence of increasing concentrations of hygromycin B (Gibco). Plates were incubated at 30°C and imaged at the indicated times. | 1,314.6 | 2021-06-01T00:00:00.000 | [
"Biology"
] |
Parental and offspring contribution of genetic markers of adult blood pressure in early life: The FAMILY study
Previous genome-wide association studies (GWAS) identified associations of multiple common variants with diastolic and systolic blood pressure traits in adults. However, the contribution of these loci to variations of blood pressure in early life is unclear. We assessed the child and parental contributions of 33 GWAS single-nucleotide polymorphisms (SNPs) for blood pressure in 1,525 participants (515 children, 406 mothers and 237 fathers) of the Family Atherosclerosis Monitoring In early life (FAMILY) study followed up for 5 years. Two genotype scores for systolic (29 SNPs) and diastolic (24 SNPs) blood pressure were built. Linear mixed-effect regressions showed significant association between rs1378942 in CSK and systolic blood pressure (β = 0.98±0.46, P = 3.4×10−2). The child genotype scores for diastolic and systolic blood pressure were not associated with blood pressure in children. Nominally significant parental genetic effects were found between the SNPs rs11191548 (CYP17A1) (paternal, β = 2.78±1.49, P = 6.1×10−2 for SBP and β = 3.60±1.24, P = 3.7×10−3 for DBP), rs17367504 (MTHFR) (paternal, β = 2.42±0.93, P = 9.3×10−3 for SBP and β = 1.89±0.80, P = 1.8×10−2 for DBP and maternal, β = -1.32±0.60, P = 2.9×10−2 and β = -1.97±0.77, P = 1.0×10−2, for SBP and DBP respectively) and child blood pressure. Our study supports the view that adult GWAS loci have a limited impact on blood pressure during the first five years of life. The parental genetic effects observed on blood pressure in children may suggest epigenetic mechanisms in the transmission of the risk of hypertension. Further replication is needed to confirm our results.
Introduction
In 2008, 978 million adults, or 28% of the global adult population, had hypertension (HTN), and the burden of HTN may reach 1.5 billion by 2025 [1,2]. HTN is associated with an increased risk for cardiovascular disease and as such contributes to 7.6 million (13.5%) deaths each year worldwide [1]. Modifiable risk factors for HTN include excessive dietary sodium, physical inactivity, excessive alcohol intake, psychosocial stress and obesity [3]. Non-modifiable risk factors include sex and age, but also ethnicity and family history of HTN, suggesting a contribution of genetic determinants to HTN etiology [4]. Twin and family studies have reported heritability estimates of 30-50% for blood pressure (BP) and hypertension [5]. Twelve genes have been associated with Mendelian syndromes causing HTN [5]. Genome-wide association studies (GWAS) have identified 54 common genetic variants associated with systolic blood pressure (SBP) and diastolic blood pressure (DBP) [5,6]. These GWAS signals point toward the role of vasodilatory hormones, ionic regulation by solute channels, and vascular smooth muscle growth and signaling in the pathogenesis of HTN [7]. It is noteworthy that most of the GWAS for BP have been performed in adults of European ancestry, and only one GWAS for BP has been reported in children and adolescents [5,8,9]. To date, four studies have assessed the contribution of SNPs identified in adult GWAS in children and adolescents of European ancestry [9][10][11][12]. Oikonen et al. built two genotype scores using 5 SBP- and 8 DBP-associated SNPs and did not find any evidence of association with SBP and DBP from the age of 3 to 18 years (sample sizes between 340 and 1100) [10]. More recently, Howe et al. studied a single genotype score based on 29 adult BP SNPs in 8472 children from Australia and the United Kingdom and evidenced a nominal association only with SBP at the ages of 6 and 17 years [11]. In early 2016, an international consortium found two novel loci associated with SBP at pre-puberty (4-7 years) and puberty (8-12 years) [9]. The authors also highlighted an age-specific association of the two SNPs.
Parental history of high BP has been associated with higher SBP and DBP in offspring in the literature, with some but not all studies reporting sex-specific parental effects [13,14]. Family heritability studies for SBP and DBP support the view that the phenotypic resemblance observed between parents and offspring may be explained in part by genetic determinants [15,16]. However, the parental contribution of genetic markers of adult BP in offspring has never been investigated. This prompted us to investigate the parental and child contributions of 33 GWAS-associated SNPs for BP in 1,525 participants of the Family Atherosclerosis Monitoring In early life (FAMILY) study, followed up from birth to the age of 5 years.
Subjects
The Family Atherosclerosis Monitoring In earLY life (FAMILY) study has been described elsewhere [17]. FAMILY is an ongoing birth cohort study that includes mothers, fathers and children with a planned follow-up of 10 years. Briefly, over the last 7 years, 859 families including 901 babies, 259 siblings, 857 mothers and 530 fathers were enrolled into the FAMILY study. In this study, we excluded offspring from multiple births and siblings of "index" children, due to familial relatedness and phenotypic issues (i.e. absence of phenotypic data at birth). Following these exclusion criteria, 630 mothers, 351 fathers and 544 unrelated children had DNA extracted and were selected for genotyping. After assessing the family structure between the children and their parents, we selected 515 children, 406 mothers and 237 fathers with genotypic and phenotypic (sex, age and BMI) data for the analysis (406 child/mother pairs, 237 child/father pairs, and 219 trios). Phenotypic characteristics of these individuals are available in Table 1. Sample sizes at each time of measurement are available in S1 Fig. The data coordination site of the FAMILY study is the Population Health Research Institute (Hamilton, ON, Canada). Informed consent was obtained from all the adult participants, and the parents provided consent for their children. All procedures were performed in accordance with relevant guidelines and regulations. The study was approved by the Research Ethics Boards at the participating hospitals (Hamilton Health Sciences, St Joseph's Hospital-Hamilton, Joseph Brant Memorial Hospital, Burlington, ON, Canada).
Phenotyping
Offspring phenotypic measurements were performed at birth and at 1, 2, 3 and 5 years of age (Table 1). Systolic and diastolic blood pressures were measured with a Dinamap Pro100 V2 (GE Medical Systems, Tampa, Florida, USA), which utilizes an oscillometric method, and were repeated 3 times at 2-minute intervals. At birth, the measurements were performed while the child was sleeping or lying quietly. For all other measurement times (1, 2, 3 and 5 years), the measurements were performed while the child was sitting quietly and after resting for at least 5 minutes. The child's height was recorded from birth to 2 years using an O'Leary pediatric length board (Ellard Inc) and thereafter using a Harpenden stadiometer with a precision of 0.1 cm. Weight was measured to the nearest 200 g in light clothes using an electronic scale. BMI was calculated using the following formula: weight (kg) / height² (m²).
Genotyping
Genomic DNA was extracted from buffy coats for all the participants. Buffy coats for mothers and fathers were obtained from blood samples collected at the initial visit, at 24-37 weeks of gestation. For the child, the buffy coat came from cord blood collected at delivery.
The genotyping was performed using the Illumina Cardio-Metabochip (San Diego, CA, USA).This array has been designed by seven consortia on cardiac, metabolic and anthropometric traits.A selection of 196,725 SNPs for 23 different traits was made.The design and SNP selection of the array have been detailed elsewhere [18].We selected SNPs that reached genome-wide significance level of association (P<5×10 −8 ) for SBP and/or DBP in at least one population of European ancestry and were available in the Cardio-Metabochip array (lead SNP or proxy).All the SBP and DBP-associated SNPs were extracted from two databases (HuGE Navigator and NHGRI GWAS Catalog).For SNPs that were not available in the Cardio-Metabochip, we searched for proxy SNPs using the Broad Institute website tool SNAP (SNP Annotation and Proxy Search).For those highlighted as missing in the Cardio-Metabochip, we checked their availability using their chromosomal position in the Illumina product file.We used the following criteria to select proxy SNPs: 1) SNPs included in the Cardio-Metabochip 2) r 2 >0.95 in European population data issued from the 1000 Genomes Project, 3) selection of a coding non-synonymous SNP if available in the list of proxy, otherwise selection of the SNP located closest to the GWAS lead SNP.To avoid any overlap in the final SNP selection, linkage disequilibrium between all the SNPs was double-checked using SNAP in European population data of the 1000 Genomes Project.We discarded 13 SNPs that displayed r 2 > 0.2 with another SNP in the list.Thirty-three SBP and DBP-associated polymorphisms remains for further study (S1 Table ).Standard procedures have been used to assess the quality of the genotyping: all 33 SNPs displayed call rates > 99% and are consistent with the Hardy Weinberg Equilibrium (S2 and S3 Tables).As an additional quality control procedure we analyzed the Mendelian transmission patterns of the 33 SNPs.We found recurrent Mendelian inconsistencies in five pedigrees.After excluding the five non-biological fathers from the analysis, only one Mendelian distortion was observed in the whole sample for the 33 SNPs, which therefore successfully passed the quality control test.Data from the five non-biological fathers were excluded from further analyses.We then searched for discrepancies between the reported sex and the one determined using the genetic information.We found 9 discrepancies by using the heterozygosity rate calculated by PLINK.The cryptic relatedness between the children was also verified and we removed six individuals due to evidence of relatedness (second degree relatives).We double-checked the selfreported ethnicity of our individuals using EIGENSTRAT.The 1525 participants of the FAMILY study were predominantly white Caucasians (92.8% Mothers; 89.3% Fathers; 91.1% Offspring).Other participants were South Asians, East Asians, Latino Americans, Africans and Native North Americans.
Statistical analyses
S1 and S2 Files provide datasets for SBP and DBP analyses, respectively.We coded genotypes as 0, 1 and 2, depending on the number of copies of the SBP or DBP increasing alleles.Two genotype scores were calculated by summing the alleles of 24 and 29 SNPs for SBP and DBP, respectively.The genotype scores were used as an ordinal value in the models.Considering the possibility that genetic effects for BP GWAS SNPs may diverge in adult and children populations, we used an unweighted genotype score to prevent any analytical bias.Unweighted and weighted genotype scores for complex traits usually have a comparable performance [19,20].This is especially true if the differences in genetic effects of SNPs are minor and if the sample size is not very large, two conditions that apply to our study [19,20].Individuals with more than two missing values were discarded from the calculation of the genotype score and the remaining missing values were imputed using the method of the mean.This imputation was performed for each SNP individually using the arithmetic average of the coded genotypes observed for all the successfully genotyped individuals.We did not perform family-based association tests in this study for two reasons.First, larger sample sizes are needed for family-based than regression association tests to achieve comparable statistical power [21].The low participation rate of fathers in the FAMILY study (515 children, 406 mothers and 237 fathers) adds to the loss of statistical power.Unfortunately, this is a common pitfall of family-based designs where mothers often bring children to clinic visits and thus are included more easily than fathers [22].Second, the software used in family-based association tests only perform cross-sectional analyses.Longitudinal analyses have been shown to achieve more power than cross-sectional association tests [23,24].Associations between SNP/ genotype scores and BP measurements were assessed using linear mixed-effect regression model to account for the longitudinal nature of the data (5 SBP and DBP measurements).We used the intercept and the age at measurement as random effects and sex, BMI and the principal components as fixed effects.To assess paternal and/or maternal effects on offspring's SBP and DBP, a linear mixed-effect regression was performed using the parental genetic information (SNPs or genotype score) as predictor and sex, age, BMI, principal components and SNPs/genotype score of the offspring as covariates.The 10 first principal components were computed using all the SNPs passing the quality control filter in the Metabochip and they were defined using EIGENSTRAT [25].Principal components were added as covariates in all regression models to account for population structure.We handled SBP and DBP missing data at different ages through a missing at random approach in the linear mixed-effect regression model and did not to impute SBP and DBP missing data in our study.This decision was based on three arguments: 1) the percentage of SBP and DBP missing data at each measurement is heterogeneous in FAMILY (S1 Fig) ; 2) SBP and DBP values vary significantly in early life and a large inter-individual variability is observed at each measurement (S2 Fig) ; 3) linear mixedeffect regression models handle well the presence of missing data [24].All the regression analyses were performed using the free software R 3.0.1 with the package lme4 [26].
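The unweighted genotype score described above (risk-allele counts summed across SNPs, individuals with more than two missing genotypes excluded, and remaining missing values imputed with the per-SNP arithmetic mean) can be sketched in a few lines. This is an illustrative reimplementation with a made-up genotype matrix, not the code used in the study.

```python
import numpy as np
import pandas as pd

def genotype_score(geno: pd.DataFrame, max_missing: int = 2) -> pd.Series:
    """Unweighted genotype score.

    geno: individuals x SNPs matrix of risk-allele counts (0/1/2), NaN = missing.
    Individuals with more than `max_missing` missing genotypes are dropped;
    remaining missing values are imputed with the per-SNP mean of the
    successfully genotyped individuals.
    """
    keep = geno.isna().sum(axis=1) <= max_missing
    geno = geno.loc[keep]
    imputed = geno.fillna(geno.mean(axis=0))   # per-SNP arithmetic mean
    return imputed.sum(axis=1)                 # unweighted sum over SNPs

# Hypothetical toy data: 5 individuals x 4 SNPs
geno = pd.DataFrame(
    [[0, 1, 2, 1],
     [1, np.nan, 2, 0],
     [2, 2, np.nan, np.nan],
     [np.nan, np.nan, np.nan, 1],   # 3 missing genotypes -> excluded
     [1, 0, 1, 2]],
    columns=["snp1", "snp2", "snp3", "snp4"],
)
print(genotype_score(geno))
```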
Hardy-Weinberg equilibrium was tested using a Chi-square test in combination with permutations and bootstrapping. Mendelian incompatibilities were checked using PLINK [27]. Two-tailed P-values are presented in this manuscript. Bonferroni-corrected P-values are routinely applied to exploratory genetic association studies. However, they are overly conservative given the high prior likelihood of association in post-GWAS experiments. P < 0.05 was therefore considered significant for post-GWAS associations between offspring SNPs/GSs and BP traits in children. We did not apply a Bonferroni correction for the comparison of genetic effects of SNPs at different ages in children, or between children and adults, as these represent post-hoc analyses for BP-associated SNPs. In contrast, we applied Bonferroni corrections for the exploratory associations of 1) paternal SNPs/GSs and 2) maternal SNPs/GSs with BP traits in children, as no evidence of parent-of-origin effects on BP traits had been reported in the literature before. P < 2.0×10−3 (0.05/25) and P < 1.7×10−3 (0.05/30) were considered significant for SBP and DBP, respectively. We previously applied a similar approach for the study of obesity traits in FAMILY [28]. We compared our significant mixed model results with those obtained in an adult cohort from the International Consortium for Blood Pressure using a Z-test [29,30]. We also performed Z-tests on the child beta values across time to assess potential age-dependent genetic effects.
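The Z-test used to compare effect estimates (between children and adults, or across ages) is the usual comparison of two independent regression coefficients. A minimal sketch is shown below; the child estimate is taken from the text, while the "adult" values are placeholders, not the ICBP results.

```python
from math import sqrt
from scipy.stats import norm

def compare_betas(b1, se1, b2, se2):
    """Z-test for equality of two independent regression coefficients."""
    z = (b1 - b2) / sqrt(se1**2 + se2**2)
    p = 2 * norm.sf(abs(z))   # two-tailed P-value
    return z, p

# Example: CSK rs1378942 effect on SBP in children (0.98 +/- 0.46, from the text)
# versus a purely illustrative adult estimate.
z, p = compare_betas(0.98, 0.46, 0.57, 0.10)
print(f"z = {z:.2f}, two-tailed P = {p:.2f}")
```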
Associations of offspring SNPs and genotype scores with blood pressure in children
Linear mixed-effect regressions on the longitudinal series of data were used to assess the effect of children's SNPs on SBP and DBP from birth to 5 year (S4 and S5 Tables).The rs11191548 SNP near CYP17A1 showed directionally consistent association with DBP (β = 1.71±0.61,P = 4.6×10 −3 ) (Table 2 and S5 Table).The rs1378942 in CSK showed directionally consistent association with SBP (β = 0.98±0.46,P = 3.4×10 −2 ).Directionally inconsistent association was found for the rs12946454 in PLCD3 and SBP (β = -1.07±0.50,P = 3.3×10 −2 ) (Table 2 and S4 Table ).To assess the combined effect of the SBP and DBP SNPs, we tested the association of the children's genotype score using a linear mixed-effect regression model on the longitudinal series of data (S4 and S5 Tables).Neither the SBP nor DBP genotype scores showed associations with SBP or DBP.
Associations of parental SNPs and genotype scores with blood pressure in children
Linear mixed-effect regressions on the longitudinal series of data were used to assess the effect of parental SNPs on SBP and DBP in offspring (S6 and S7 Tables).The regressions of the offspring's phenotypes highlighted a directionally consistent nominal evidence of association of the paternal genotype of rs11191548 (CYP17A1) for DBP (β = 3.60±1.24,P = 3.7×10 −3 ) and a trend of association with SBP after adjusting for offspring's genotypes (β = 2.78±1.49,P = 6.1×10 −2 ).Further adjustment for the maternal genotype did not significantly modify the nominal association of the paternal genotype of rs11191548 (CYP17A1) with SBP and DBP (β = 2.24±1.18,P = 5.7×10 −2 and β = 2.85±1.00,P = 4.2×10 −3 , respectively).We did not find any association between the maternal genotype of rs11191548 (CYP17A1) and SBP or DBP.The associations of the child genotype rs11191548 (CYP17A1) with SBP and DBP did not resist to an adjustment by the paternal genotype (Table 2, S6 and S7 Tables).Both the maternal and paternal genotypes of rs17367504 (MTHFR) were nominally associated with SBP and DBP.The nominal associations for SBP and DBP adjusted for the offspring's genotype were directionally inconsistent when the rs17367504 (MTHFR) maternal genotype was assessed (β = -1.32±0.60,P = 2.9×10 −2 and β = -1.97±0.77,P = 1.0×10 −2 , respectively).In contrast, these nominal associations were directionally consistent when the paternal genotype was assessed (β = 2.42±0.93,P = 9.3×10 −3 for SBP and β = 1.89±0.80,P = 1.8×10 −2 for DBP).The paternal and maternal nominal associations of rs17367504 (MTHFR) with SBP and DBP were removed when the model was adjusted for the reciprocal parental genotype (Table 2, S6 and S7 Tables).
The maternal genotype of rs1717017 (ULK4) was directionally consistent and nominally associated with DBP after adjusting for the offspring's genotype (β = 1.38±0.56,P = 1.4×10 −2 ).The maternal genotype of rs12187017 near EBF1 was found to be nominally associated in an inconsistent direction with DBP after adjusting for the offspring's genotype (β = -1.01±0.47,P = 3.0×10 −2 ).These associations disappeared after adjusting for the corresponding paternal genotype (rs1717017 and rs12187017).We did not find any association between the paternal genotype of rs1717017 (ULK4) or rs12187017 (EBF1) and DBP (Table 2 and S7 Table ).Linear mixed-effect regressions highlighted a nominal and directionally consistent association between the maternal genotype of rs381815 (PLEKHA7) and SBP (β = 1.31±0.64,P = 3.9×10 −2 ) (Table 2 and S6 Table ).The maternal and paternal genotype scores were not associated with child SBP or DBP (S6 and S7 Tables).None of the above-mentioned maternal or paternal associations with BP traits in children survived to a Bonferroni correction.
We assessed potential changes in the parental effects of the rs11191548 (CYP17A1) and rs17367504 (MTHFR) SNPs from birth to 5 years. The parental genetic effects of these two SNPs on child SBP and DBP did not vary during the follow-up.
Discussion
In this study, we assessed the associations of 33 GWAS SBP/DBP SNPs in the FAMILY birth cohort.The SNP rs1378942 (CSK) showed a significant association with SBP from birth to 5 year in line with previous reports on adults [7,[29][30][31].CSK (c-Src tyrosine kinase) is a tyrosine kinase with roles in the mediation of the G protein signals to actin cytoskeletal reorganization [32].Actin remodeling has a direct impact on the constriction of the arterial endothelium in rats and human newborns, supporting genetic effects in early life [33,34].In line with our data, a nominal association between rs1378942 (CSK) and SBP was recently reported in 8,472 children from Australia and United Kingdom at the age of 6 years [11].An association of rs1378942 (CSK) with SBP was also reported in 1,027 Chinese obese children [35].The fact that CSK rs1378942 SNP shows comparable genetic effects on SBP in both FAMILY and adult populations from the International Consortium for Blood Pressure suggests that this SNP contributes equally to SBP variations over the life course.In contrast, the genotype scores based on 24 and 29 SNPs did not show any association with child SBP and DBP in early life in our study.Similarly, Oikonen et al. did not evidence any association between two genotype scores based on 5 SBP and 8 DBP-associated SNPs and SBP or DBP from the age of 3 to 18 years [10].In contrast, Howe et al. found a nominal association between a genotype score based on 29 adult SBP/DBP SNPs and SBP at the ages of 6 and 17 years [11].The inconsistency of this finding with our data may relate to the unique nature of the composite genotype score developed by Howe and colleagues, thus making direct comparison difficult [11].The lack of association between the SBP and DBP genotype scores observed by us and others [10] during childhood and adolescence is consistent with the fact that heritability estimates for both traits increase progressively during this age window to reach a plateau at young adulthood [36].Similarly, the association of SNPs such as CSK rs1378942 or the genotype score by us and others with child SBP (but not DBP) is in line with the systematically lower heritability estimates found for DBP in comparison with SBP in adolescents and adults [37,38].As no longitudinal study to date reported heritability estimates for SBP and DBP in the first years of life, we calculated these values in FAMILY and also found a progressive increase of heritability estimates from birth to 5 years and an overall lower heritability for DBP than SBP (DBP: from 1.1 to 26.4%; SBP: from 0.0% to 31.5%;Robiou-du-Pont et al., manuscript in preparation).These results deserve further investigations but show that beyond lifestyle a subset of genetic factors already plays a role in early life.
We are aware of the modest power of our study, as shown by our power calculation simulations (S3 Fig). Suboptimal statistical power inflates the risk of both false negative and false positive associations [21]. This means that, in addition to the association observed between CSK rs1378942 and SBP, other SNPs contributing to BP in early life may have been missed in the present study. We also speculate that some of the associations with BP observed in this study (e.g. rs12946454 in PLCD3) but displaying a direction of effect inconsistent with previous literature in adults may represent false positive results. Alternatively, we cannot exclude the possibility of age-dependent genetic effects on BP, as recently reported by Simino et al. for SBP and DBP in young versus old adults [39]. An inversion of genetic effect in infancy versus childhood has also been reported for the FTO intron 1 variant in relation to body mass index in eight longitudinal cohorts of European ancestry [40].
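The power calculations referred to above are simulation-based (S3 Fig). As a rough analytic cross-check only (not the approach used in the study), the power to detect a single additive SNP effect on a quantitative trait can be approximated from the non-centrality parameter; the inputs below (minor allele frequency, effect size and trait standard deviation) are hypothetical.

```python
from scipy.stats import norm

def snp_power(n, maf, beta, sd_trait, alpha=0.05):
    """Approximate power for an additive SNP effect on a quantitative trait.
    Non-centrality parameter: NCP = n * 2*maf*(1-maf) * (beta/sd_trait)**2."""
    ncp = n * 2 * maf * (1 - maf) * (beta / sd_trait) ** 2
    z_crit = norm.isf(alpha / 2)
    return norm.sf(z_crit - ncp**0.5) + norm.cdf(-z_crit - ncp**0.5)

# Hypothetical inputs: 515 children, MAF 0.30, effect 1 mmHg, trait SD 10 mmHg
print(f"power ~ {snp_power(515, 0.30, 1.0, 10.0):.2f}")
```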
This study is the first to assess the parental genetic effects of SNPs identified by GWAS on SBP and DBP phenotypes in early life, independent of the influence of the child's genotype. This investigation highlighted a nominal association of the paternal BP-increasing CYP17A1 rs11191548 allele with higher child SBP and DBP using a mixed-effect model. Maternal and paternal alleles of rs17367504 in MTHFR display opposite effects on BP in children. While the paternal SBP/DBP-increasing allele of rs17367504 (MTHFR) shows a directionally consistent nominal association with offspring BP from birth to 5 years, the maternal SBP/DBP-increasing allele at the same SNP is nominally associated with a decrease in children's BP. Beyond the associations described at the MTHFR and CYP17A1 loci, other nominally significant parental effects were observed for rs12187017 (EBF1), rs1717017 (ULK4) and rs381815 (PLEKHA7) and offspring BP. Even if the biology of these genes does not enable a trivial explanation for these associations, further replication of these nominally significant results in additional studies is warranted to definitively assess the potential epigenetic transmission of hypertension.
Our study has several strengths. First and most importantly, this report is the first to investigate SNPs that affect BP from birth to five years. This is also the first time that parental effects on BP have been studied in young children. Furthermore, the longitudinal FAMILY study provided a unique opportunity to investigate the effects of parental SNPs on offspring BP using mixed-effects models. Of note, several genetic associations are strengthened by plausible biological arguments. Lastly, the Illumina Cardio-Metabochip allowed us to investigate the most exhaustive list of SNPs so far (N = 33).
One limitation of the study is the modest sample size, which restricted our power to detect associations with small effect sizes and/or low risk allele frequencies. The longitudinal nature of our study and the use of linear mixed-effect regressions compensated for the suboptimal power to a certain extent. Another limitation is the low number of fathers recruited. This, however, is a common feature of birth cohorts focusing principally on mothers and offspring.
In conclusion, we highlighted in this study a significant association of the rs1378942 SNP in CSK with SBP during the first years of life, but no overall association of the GWAS BP SNPs using SBP/DBP genotype scores. Moreover, and for the first time, nominally significant parental genetic effects were found between the SNPs rs11191548 (CYP17A1) and rs17367504 (MTHFR) and child BP, suggesting possible epigenetic mechanisms in the transmission of susceptibility to hypertension. Our results suggest that the genetic predisposition to hypertension has a limited impact on BP during the first years. Furthermore, the observation of paternal and maternal genetic effects may help explain why maternal risk factors do not account for the global phenotypic variance of child BP [41].
SBP and DBP increased during the first year of life and then plateaued until the age of 5 years (S2 Fig).
Table 2. Summary of the significant results using mixed-effect regressions.
SNP, Single Nucleotide Polymorphism. SBP, Systolic Blood Pressure. DBP, Diastolic Blood Pressure. R.A., Risk Allele. In the children section, we tested the association of the children's SNPs with children's BP. In the other sections, we assessed the effect of the maternal or paternal SNP on the children's phenotypes. https://doi.org/10.1371/journal.pone.0186218.t002 | 5,674.2 | 2017-10-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Research on The Application of Ceramic 3D Printing Technology
Ceramic 3D printing technology uses computer-aided design techniques to model and produce ceramic products. The basic principle is to build up the computer-designed 3D ceramic shape layer by layer through displacement of the ceramic 3D printer along the X, Y and Z axes, combined with traditional hand-made pottery techniques to finish the piece. This paper analyses and exploits the advantages of ceramic 3D printing technology in order to inject new vitality into the traditional ceramic production industry.
Introduction
Each step of the ceramic modeling process is the distillation of countless accumulated experiences. The traditional skills have been passed down from generation to generation, so that we can still use them to continue producing new ceramics. With the development of science and technology, the technology of making and producing ceramics is also advancing and changing. Traditional ceramic production techniques have their inherent limitations. For example, a certain amount of production experience and a production basis are needed to complete a ceramic product; ceramic materials are fragile and easily develop defects before firing; and the traditional ceramic techniques place certain requirements on modeling the clay body. It is therefore difficult to find new ways to innovate.
In traditional ceramic product design, the first step is drawing a sketch that captures and expresses the main idea through pictures. The working process and material costs also need to be considered. Ceramic product design techniques developed from handmade pottery skills; with industrialization, people began to use three-dimensional models for mass ceramic production. Although this helped to promote the development of ceramic product design, the accuracy of such three-dimensional models is not high enough and the working time increases. The model must also be modified repeatedly, which extends the production process. In addition, ceramic product designers need to communicate with different participants in the production process, express their ideas and explain the earlier design drawings or models. It can be seen that the traditional design process is very cumbersome. Two-dimensional drawings or sketches cannot express all angles of a product, and hand building a three-dimensional model also requires consideration of materials, equipment and skill factors. Both of these approaches demand considerable time and effort, and the products often cannot achieve what the designers intended.
Along with the development of computer technology, ceramic designers are encouraged to use a variety of new techniques to design ceramic models. Ceramic 3D printing is a new technology that is currently developing. It can easily produce ceramic design products through a computer and a ceramic 3D printer. Ceramic 3D printing technology is flexible and the data describing the objects are editable. The technology can easily sculpt an object's shape, and the production process is much shorter than the traditional way: sketching and modeling time are reduced, and the influence of materials and technique on a hand-made model is avoided. A digital object is designed using computer modeling software, and a real clay object is then produced directly by the ceramic 3D printer. It is a new revolution in ceramic production design, and it can reduce the waste of human and material resources in many respects.
Today, the production of a 3D model is the basis of ceramic product design. Through the prefabricated 3D model, the designer can normally confirm the final shape of the ceramic product. 3D printing models reflect designers' ideas more intuitively through the computer and 3D modeling software. They also provide technical support that makes the design idea clear and convenient to realize, especially for the later expression of function. For example, when designing multifaceted objects, designers or engineers operate 3D software to build ceramic models, which can be quickly converted into STL format and sent to ceramic 3D printers. Digital 3D modeling techniques provide technical support for ceramic design and make it possible to design more complicated ceramic forms. To improve the aesthetic appearance of ceramic products, the application of curves and the smoothness of the product surface become significant. It is necessary to design and sculpt the curves with 3D technology and print them out to see the effect. Through this process, designers can constantly adjust the data in the program to flexibly modulate the final product's surfaces. Figure 1. Designers using 3D modeling software to create ceramic 3D printing models.
Ceramic 3D printing molding technology
2.1 Principle of ceramic 3D printing molding

3D printing technology is in fact a general term for a series of rapid prototyping techniques. The basic principle of the technology is laminated (layer-by-layer) manufacturing. In particular, a needle filled with material shifts rapidly in the X-Y plane according to data transmitted from the computer, forming the cross-sectional shape of the workpiece, while successive shifts along the Z axis build up the thickness, so that an integrated 3D print is sculpted. Compared with traditional pottery processes such as throwing, plaster casting and hand modeling, 3D printing technology transforms the three-dimensional process of ceramic forming into a discrete stacking process that builds from point to line, from line to surface, and from surface to body. 3D printing technology greatly reduces the complexity of manufacturing and breaks through the restrictions of traditional techniques. Ceramic 3D printing can quickly create complex shapes and structural features, including forms that were difficult or even impossible to sculpt with traditional modeling techniques. It expands the imagination and makes it possible and practical for ceramic designers, pottery masters and artists to imagine and create ceramic innovations.
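The layer-by-layer principle described above (X-Y contours stacked along Z) can be illustrated with a short script that generates the tool path for a simple cylindrical vessel wall. This is a toy illustration with made-up dimensions, not the slicer or firmware actually used by ceramic printers.

```python
import numpy as np

def cylinder_toolpath(radius_mm, height_mm, layer_mm, points_per_layer=90):
    """Generate (x, y, z) way-points for a single-wall cylinder built layer by layer:
    each layer is a closed X-Y circle, and Z advances by one layer thickness."""
    theta = np.linspace(0.0, 2 * np.pi, points_per_layer, endpoint=False)
    path = []
    z = 0.0
    while z < height_mm:
        for t in theta:
            path.append((radius_mm * np.cos(t), radius_mm * np.sin(t), z))
        z += layer_mm
    return np.array(path)

# Toy example: a 40 mm radius, 60 mm tall vessel wall with 1 mm layers
path = cylinder_toolpath(40.0, 60.0, 1.0)
print(f"{path.shape[0]} way-points over {int(60.0 / 1.0)} layers")
```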
Spray-extruded and stacked molding technology
This technology uses extrusion needles to continuously extrude paste-like clay from a working cavity under constant pressure; the clay accumulates layer by layer and cures in the air, finally resulting in a ceramic shape. This printing technique can use multiple needles and extrude different kinds of clay paste at the same time, and it can even print a variety of colors to form colorful ceramic bodies. The technology originated from 3D printing in the construction industry. In 1997, the American scholar Joseph Pegna proposed a construction method for building free-form components from cement materials, adding up cement layer by layer and solidifying it selectively. In 2001, Behrokh Khoshnevis, a professor at the University of Southern California, proposed a building-scale 3D printing technology called Contour Crafting, which enables the layered stacking of concrete through large 3D extrusion devices with smearing needles. Studio Under, from Israel's Fire Dragon Institute of Technology, has developed a color ceramic 3D printing technology that blends specially colored powders into ceramic clay and then prints it out with extrusion needles to produce colorful ceramics.
Layered bonding overlay molding technology
This technique selectively bonds ceramic powder by spraying a binder, building up ceramic objects through layer-by-layer accumulation. The specific process is as follows: after bonding of the upper layer is completed, the molding cylinder drops by a distance equal to the layer thickness (0.013 to 0.1 mm), the powder cylinder rises, and a quantity of powder is introduced and pushed by the paving roller onto the forming cylinder. The layer is flattened and compacted under the control of the computer. Then, following the cross-sectional forming data, the machine selectively sprays binder to adhere the next construction level. Excess powder is collected by the powder collection device when the powder roller passes. Powder laying, rolling and binder spraying are repeated in cycles, resulting in a bonded three-dimensional powder object. The areas where binder is not sprayed remain dry powder, which supports the forming process and is easy to remove after forming. This technology is simple to operate, the product has a high porosity, a wide range of raw materials can be used, and the surface of the part is smooth. The disadvantage of this technique is that the mechanical strength of the product is not high, so the products normally need to be repaired in the post-production phase. In 2007, the Italian engineer Enrico Dini, working at the Monolite company in the UK, proposed the D-Shape device, which accumulates sandstone powder layer by layer with selective binder bonding. Later, in 2009, he successfully printed a sculpture with a height of 1.6 m. Figure 5. A printer that uses layered bonding overlay molding technology.
Selective laser-burning technology
In this technique, ceramic powder is mixed with a certain amount of binder powder, and the mixture is molded by the 3D printer: the laser melts the lower-melting-point binder powder so that it bonds the ceramic powder together. The specific molding process is as follows: the feeder of the 3D printer rises, and the powder roller moves and lays a layer of powder material on the working platform. The laser beam, emitted by the laser device under computer control, then scans and sinters selected areas of the powder according to the profile of the cross-section, melting the binder powder to form an integrated printed layer. After the first layer is completed, the working platform drops by a certain height and the powder roller lays a new layer of the powder mixture ready for sintering; the cycle is repeated to stack up a ceramic form. This technology is well suited to the molding of composite prints based on polymers, composite ceramics, glass, fibers, metals and other powders.
Advantages and disadvantages of the ceramic 3D printing technology
Nowadays, ceramic 3D printing is becoming a common technique for making products and models, using a computer and a ceramic 3D printer to build 3D objects. It can accurately capture the designer's intent through 3D software and digital expression, turning an imagined form into a 3D model that the printer then builds up layer by layer. If a smooth surface is wanted, the printed work can be polished in a post-production phase. The technique not only saves working time but also reduces costs and saves human and material resources.
Compared with traditional ceramic design methods, ceramic 3D printing has clear advantages. It displays the designer's ideas better through digital techniques and makes it more convenient to modify the form, saving the working time and material costs that were mostly spent in traditional ceramic design and hand-making. It also improves production efficiency and accuracy, because the computer converts the designer's mental images into three-dimensional forms that are transferred directly to the ceramic 3D printer.
Today, ceramic 3D printing usually uses wet clay as the printing material in ceramic product design. Wet clay has certain defects: it is difficult to form, not strong enough, and easily deformed during printing. Harder clay materials may later be applied to avoid this problem. Another problem is the layer thickness of ceramic 3D printing, which is normally 10 mm and cannot guarantee the quality and speed of printing; technicians need to research how to reduce the thickness to improve printing accuracy, and after air-drying the appearance can still be polished to ensure its smoothness. Furthermore, ceramic 3D printing produces rough surfaces whose quality is not refined, and the printed appearance shows irregular textures. Although these unique textures can highlight the character of the ceramic itself, they cannot meet the requirement of a smooth surface for functional use. To improve surface smoothness, the printed ceramic needs to be polished in a post-production phase.
In addition, ceramic 3D printing technology still has room to mature in many respects. Printed objects can show excess or missing material during the printing process, and some material may leave stringing or residue on the surface, so the result needs further finishing after production.
"Engineering",
"Materials Science",
"Physics",
"Computer Science"
] |
Density dependent composition of InAs quantum dots extracted from grazing incidence x-ray diffraction measurements
Epitaxial InAs quantum dots grown on GaAs substrates are used in several applications ranging from quantum communications to solar cells. The growth mechanism of these dots also helps us to explore fundamental aspects of self-organized processes. Here we show that the composition and strain profile of the quantum dots can be tuned by controlling the in-plane density of the dots over the substrate with the help of a substrate-temperature profile. The compositional profile extracted from grazing incidence x-ray measurements shows a substantial amount of inter-diffusion of Ga and In within the QD as a function of height in the low-density region, giving rise to a larger variation of lattice parameters. The QDs grown with high in-plane density show much less spread in lattice parameter, giving an almost flat In content over the entire height of an average QD and a much narrower photoluminescence (PL) line. The results have been verified with three different amounts of In deposition, giving a systematic variation of the In composition as a function of average quantum dot height and average energy of PL emission.
Since the discovery 1,2 of narrow photoluminescence (PL) lines of single InAs quantum dots grown epitaxially on GaAs surfaces, an enormous amount of work has been carried out on these nanostructures for the development of optoelectronic devices such as lasers, quantum dot infrared photodetectors and single-photon sources 3,4 . Recently, successful fabrication and operation of InAs/GaAs QD based intermediate band solar cell devices has been reported 5,6 . The prime challenge in this field is to determine the growth parameters needed to obtain predictable composition and strain profiles within a quantum dot (QD) and a predictable QD size distribution, so that the optoelectronic properties of these quantum structures can be tuned using fundamental physics calculations 7 . It has been pointed out recently that In-Ga intermixing within self-assembled QDs strongly influences the intensity distribution and polarization of emitted photons 8 , as the confinement length of the carriers depends mainly upon the In composition profile within a QD and not just on its size 9 .
Several structural studies have already established that InAs QDs grow on GaAs substrates in the Stranski-Krastanov (SK) growth mode by initially forming an InGaAs wetting layer (WL) at lower growth temperatures (around 350 °C) 10 . It is also known that at higher growth temperatures (above 420 °C), the QD volume becomes much larger than the additional InAs deposited after formation of the WL, providing direct evidence of considerable migration and intermixing of In and Ga 11,12 . Substantial intermixing of In and Ga was also observed in the thickening of the WL after QD formation 13 . High resolution transmission electron microscopy measurements 14 and scanning tunneling electron microscopy studies 15,16 have indicated segregation of In towards the center and tip of the QDs formed above the WL. It is now essential to understand the role of various growth conditions, such as the deposition rate, temperature and in-plane density, in tuning the size and composition of these QDs to obtain the desired electronic, structural and optical properties 7,17 .
The in-plane density of self-assembled InAs QDs on a GaAs surface is an important parameter for the development of devices based on these nanomaterials. A dense and uniform array of QDs may be useful for solar cells or lasers, but the low-density limit may be valuable in the development of devices for quantum information processing. Recently it has been demonstrated that such density dependent growth can be characterized by the Hopkins-Skellam index 18 . Here we show that the composition and strain profile, and also the PL properties, of the QDs can be tuned by controlling their in-plane density through a growth-temperature gradient over the substrate. Results of grazing incidence x-ray scattering (GIXS) 19-21 measurements of three representative samples presented here show that the composition and morphology across the height of an average QD are quite different in the low-density and high-density regions (refer Fig. 1). The GIXS measurements provide significant insight into the In-composition profile in an average QD and also provide valuable information regarding the strain profile as a function of height within an average QD. We present here results of two types of GIXS measurements (refer Fig. 2) on six different dot densities (refer Fig. 1), namely grazing incidence diffraction (GID) to extract the In composition profile as a function of in-plane lattice parameter, and grazing incidence critical angle measurements to map the in-plane lattice parameter to average QD heights. The GIXS results show a substantial amount of inter-diffusion of Ga and In within the average QD in all three samples as a function of dot height.
All three samples presented here, w0795, w0808 and w0809, have a multilayer structure with two layers of InAs quantum dots separated by GaAs buffer layers deposited on a GaAs (001) substrate. It should be mentioned that the PL signal is obtained from the buried lower layer of quantum dots, whereas the exposed top layer of dots was probed by the GID and AFM measurements. The details of the sample structure and growth conditions are given in the 'Methods'. Due to heat-sinking effects resulting in a temperature gradient 22 , the edge of the substrate was found to be 15 °C colder than the centre. Moreover, an intentional non-uniformity in the indium source flux was used to supply around 10% less In at the edge of the substrate. The only difference between the three samples was the In deposition time, which was kept equal for both the buried and exposed QD layers. For the presented samples w0795, w0808 and w0809, the In deposition times were 115, 105 and 100 seconds, respectively. The GID measurements provide significant insight into the In-composition profile in an average QD and also provide valuable information regarding the strain profile as a function of height within an average QD for the three samples. The PL measurement results presented here provide information on the optical properties of the buried QD layer as a function of in-plane QD density. The conclusions drawn here assume that both the buried and exposed layers of quantum dots have similar structural and compositional profiles. In future, we plan to study GID, AFM and PL from the same QD layer by reducing the thickness of the cap layer to improve our understanding.
Results and Discussion
We first present the results of AFM measurements of the three InAs QD samples, which have different coverage at the edge and center positions, in Fig. 1. The left and right panels correspond to the edge and central regions, respectively, of the substrate surface, exhibiting different in-plane densities. The QDs at both positions are found to be elongated in shape for w0795. High density QDs obtained at the edge of the substrate surface are observed to have diameters of roughly 31 ± 2 nm and 50 ± 2 nm in the two mutually perpendicular directions, and the average height was found to be 17 ± 4 nm. The presence of very few dome-sized islands could also be observed. Low density QDs obtained near the center of the substrate, where the growth temperature was higher, have diameters of around 38 ± 2 nm and 60 ± 3 nm in the two mutually perpendicular directions, with an average height of 25 ± 2 nm. The QDs in this region are more uniform in size, and dome-sized islands are completely absent here. This can be understood by the fact that at higher temperature the adatoms have higher diffusivity, leading to the coalescence of small islands to form larger QDs of uniform size 22,23 . The higher mobility of the adatoms on the central region of the substrate surface also lowers the in-plane density of the QDs from 100 per μm² at the edge to 50 per μm² at the center. The central and edge portions of the substrate surface will now be referred to as the low-density and high-density regions, respectively. Relatively spherical quantum dots are observed in sample w0808. The low density QDs observed at the centre of the substrate consist of larger QDs having an average diameter of 56 ± 1 nm and a height of 18 ± 1 nm. This central region also contains wetting-layer pre-pyramids, which are around 2 nm high with a diameter of around 26 ± 2 nm. At the substrate edge, the average QD diameter is 53 ± 2 nm with a height of 15 ± 1 nm. The in-plane number density of QDs is 5 per μm² and 30 per μm² at the central and edge regions of this sample, respectively. For sample w0809, having the lowest In deposition of all, a wetting layer with several pre-pyramids (with an average height of around 2 nm) can be observed at the central portion of the substrate. The edge region of the substrate has a very high in-plane QD number density of 42 per μm². Again, slightly elongated QDs are observed, with average diameters of 68 ± 1 nm and 56 ± 1 nm in the two mutually perpendicular directions and an average height of 16 nm. Insets of Fig. 1 show the 3-D view of a representative single QD in the high and low density regions, respectively, for all three samples.
The schematic diagram of the GIXS measurements and of the sample, which has a layer of InAs quantum dots at the top surface of the grown structure, is shown in Fig. 2. Figure 3(a,b) shows representative 2-D plots of the radial scans around the (200) GID peak in the high and low density regions of the w0795 sample, respectively. The variation of the scattered intensity with respect to the in-plane lattice parameter (X-axis) and the detector exit angle (α_f) (Y-axis) is found to be quite different in these two density regions. In the high-density region, the scattered intensity is concentrated over a small lattice parameter range (from a_|| = 5.95 to 6.08 Å), whereas in the low-density region scattering was observed over a larger range of in-plane lattice parameters (a_|| = 5.77 to 6.02 Å). Figure 3(c,d) represents typical radial scans around the (400) and (200) GID peaks. Insets in Fig. 3(c,d) show the variation of In content extracted from these scans for the high and low density regions, respectively. The In concentration increases to x ≈ 0.8 with increasing in-plane lattice parameter and then reduces to x ≈ 0.4 towards the InAs lattice parameter for both the low and high density regions. However, the extracted compositional profiles for the two density regions are found to be quite different.
The absolute in-plane strain present in the QD can be calculated as ε_abs = (a_|| − a(x))/a(x), where a_|| is the in-plane lattice parameter and a(x) is the lattice parameter corresponding to the In content (x) given by Vegard's law 24 . The in-plane strain profile was also calculated with respect to the substrate (GaAs) lattice parameter as ε_|| = (a_|| − a_GaAs)/a_GaAs. These calculated strain profiles are presented in Fig. 3(e,f) for the high and low density regions, respectively. The absolute in-plane strain profiles of the QDs in both density regions were found to evolve from highly compressive to tensile in nature. It can be observed from Fig. 3(e) that the strain varies from −4 (compressive) to +4 (tensile) with a steep slope for the high density region, while for the low density region it varies from −2 to +3 with increasing lattice parameter (refer Fig. 3(f)). Thus, the QDs are more relaxed in the low density region than in the high density region. This observation is consistent with the fact that higher In-Ga intermixing occurs in the low density region due to the higher growth temperature.
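As a simple numerical illustration of these strain definitions (a short Python sketch assuming only Vegard's law and the GaAs and InAs lattice constants quoted in the text; the a_|| and x values used are placeholders, not measured data):

A_GAAS = 5.653   # Angstrom, GaAs lattice constant quoted in the text
A_INAS = 6.058   # Angstrom, InAs lattice constant quoted in the text

def vegard(x):
    # Relaxed InGaAs lattice parameter for In content x (Vegard's law)
    return x * A_INAS + (1.0 - x) * A_GAAS

def strain_vs_substrate(a_par):
    # In-plane strain relative to the GaAs substrate
    return (a_par - A_GAAS) / A_GAAS

def strain_vs_relaxed(a_par, x):
    # Absolute in-plane strain relative to the relaxed alloy of composition x
    return (a_par - vegard(x)) / vegard(x)

a_par, x = 5.95, 0.6   # placeholder in-plane lattice parameter and In content
print(f"strain vs GaAs   : {100 * strain_vs_substrate(a_par):+.2f} %")
print(f"strain vs relaxed: {100 * strain_vs_relaxed(a_par, x):+.2f} %")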
Since the quantum dots under investigation are epitaxial, the base of the QD will have an in-plane lattice parameter close to that of GaAs (a_GaAs = 5.653 Å), and towards the apex of the QD the lattice tends to approach the value of InAs (a_InAs = 6.058 Å). Thus, the lattice near the base of an InGaAs QD suffers compressive strain and that near the apex experiences tensile strain. It is known that for InGaAs QDs the In content is lower in the apex of the QD and higher in the middle region 11,14 . For simplicity of analysis, here we assume that an average QD consists of a few disks stacked one over the other, each having a unique lattice parameter 24 . Figure 3(g,h) shows typical angular scans in the high and low density regions, respectively, around the (400) GID peak at several fixed radial momentum positions corresponding to different in-plane lattice parameters (a_||), as indicated. These intensity plots as a function of angular momentum provide the isostrain length scales, i.e., the lateral extent of the area having the same in-plane lattice parameter, which can be considered to be the diameter of the disks in the model. The contribution of each disk of radius R to the obtained x-ray scattering profile is given by Equation (2) 24,25 , where f_InGaAs(r) is the effective scattering factor at position r from the center of the disk. The line profiles in Fig. 3(g,h) show the fit of the angular momentum intensity profiles as calculated using Equation (2). Similar calculations and data analysis were performed for the samples w0808 and w0809 as well, and the results are presented in Figs 4 and 5 along with the results for the w0795 sample. Figure 4(a,b) shows the variation of the isostrain region with a_|| for the two density regions for samples w0795 and w0809, respectively. It can be seen that as the in-plane lattice parameter (a_||) increases, the radii of the disks constituting an average QD decrease for both samples. This coincides with the fact that the lower a_|| values represent the base region of the QD and the higher a_|| values correspond to the QD apex. In Fig. 4(a), for sample w0795, the diameter of 50 nm for the base of the QD in the low density region matches quite well with the average value obtained from AFM measurements. However, the value of 60 nm for the base diameter in the high density region is higher than that estimated from the AFM measurements. It should be noted that in the high QD density region the islands have varied sizes and the occurrence of large domes is also evidenced. It should also be noted that x-ray techniques provide statistically averaged results over large areas and also probe buried layers, in contrast to AFM measurements, which yield localized information on the exposed surface. Figure 4(b) shows the variation of the isostrain region with a_|| for the sample w0809. The largest diameter of 70 nm for the high QD density region matches well with that obtained from the AFM measurements. For the low QD density region, the maximum isostrain region is found to be around 38 nm, which corresponds to the pre-pyramids observed in this region in the AFM measurements. The position of a disk in the model, for a given a_||, above the GaAs surface can be estimated from the exit angle plots 19,20 of the Mythen detector. Figure 5(a,b) shows the exit angle plots [in (400) GID geometry] corresponding to different in-plane lattice parameters (as indicated) for the high and low density regions, respectively, for sample w0795.
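The exact form of Equation (2) is not reproduced in the extracted text; as a hedged illustration of how an isostrain disk of radius R with a radially symmetric effective scattering factor f(r) contributes to an angular scan, the following Python sketch evaluates the azimuthally averaged two-dimensional Fourier amplitude of such a disk (the Gaussian profile used for f(r) and the numerical values are arbitrary placeholders, not the parameters of this study):

import numpy as np
from scipy.special import j0

def disk_amplitude(q, R, f, n=2000):
    # Azimuthally averaged 2-D Fourier amplitude of a disk of radius R:
    # F(q) = integral_0^R f(r) * J0(q r) * 2*pi*r dr, evaluated numerically
    r = np.linspace(0.0, R, n)
    return np.trapz(f(r) * j0(q * r) * 2.0 * np.pi * r, r)

R = 13.0                                   # nm, e.g. an isostrain radius scale
f = lambda r: np.exp(-(r / R) ** 2)        # placeholder effective scattering factor
for q in np.linspace(0.05, 1.0, 5):        # nm^-1, angular momentum transfer
    F = disk_amplitude(q, R, f)
    print(f"q = {q:4.2f} nm^-1   |F|^2 = {F**2:.3e}")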
These exit angle intensity profiles can be extracted directly from the radial scans, as their 2-D plots [refer Fig. 3(a,b)] show the intensity variation with respect to α_f and a_||. From the position of the first maximum (α_f max), the height z above the GaAs surface corresponding to any a_|| can be calculated using Equation (3) 19,20 , where k is the wave number of the x-ray beam and α_c is the critical angle for GaAs. Thus, the height of a particular iso-strain region in the QD above the GaAs surface can be calculated using Equation (3) and is represented in Fig. 5(c). As the in-plane lattice parameter increases, the height inside the QD also increases monotonically. Thus, from base to apex of the QD, the in-plane lattice parameter (a_||) increases and the diameter of the QD decreases, as can be inferred from Figs 4(a) and 5(c). A similar trend was observed for the w0809 sample, which has the minimum In deposition, as shown in Fig. 4(b), though over a much reduced range. The diameters corresponding to the highest in-plane lattice parameter in both density regions correspond to the apex region of the QD. It is also observed that the apex of the QD, corresponding to a_InAs (= 6.058 Å), is under high tensile strain for both QD regions [refer Fig. 3(e,f)]. By comparing the in-plane lattice parameters with those in the insets of Fig. 3(c,d), respectively, for the high and low density regions, we could correlate the different angular scans [Fig. 3(g,h)] with the height dependent In profile of an average QD. In Fig. 5(d-f) we show the extracted In profile within an average QD as a function of height for samples w0795, w0808 and w0809, respectively. It is observed that the In content of an average QD in both density regions increases initially as a function of height measured from the base. This observation is consistent with previous XTEM studies and theoretical calculations performed on InGaAs QDs, which show a segregation of indium in their central region 11,14 . For the QDs grown at the edge, having higher in-plane density, the In content increases from the base and then falls within 5 nm to attain a nearly constant value until the apex of the QD. The highest indium content (x = 0.74) in the high density region was found at a height of 3 nm above the substrate, and it corresponds to the iso-strain disk of radius 13 nm for sample w0795. On the other hand, for the low density region of this sample the highest indium content (x = 0.78) was obtained at a height of 8 nm above the GaAs substrate, corresponding to the iso-strain disk of radius 11.5 nm. Moreover, these low density QDs grown in the central portion of the substrate exhibit high In content even beyond 10 nm height and finally attain a constant value (x = 0.4) towards the tip. The almost constant value of indium content beyond the base height of 5 nm of the average QD in the high density region, apparent in Figs 3(a) and 5(d), was found to be crucial for the sharper photoluminescence (PL) shown in Fig. 6. For the samples w0808 and w0809 (refer Fig. 5(e,f), respectively), the In concentration increases initially from the base of the QD in both the low and high QD density regions and decreases towards the apex, as observed for sample w0795. The lower In content resulting from the shorter In deposition time is quite evident from these results, as the maximum In content within the QD is observed to be 0.8 for w0795, while the highest In content is 0.46 in w0808 and 0.28 in sample w0809. A decrease in QD height is also observed as the In deposition is reduced, as expected.
In Fig. 6 we show the micro-PL spectra for all three samples measured in the two density regions. A PicoQuant 785 nm laser diode driven at 80 MHz was used as the excitation source for collecting PL from a sample placed in an Oxford Instruments continuous-flow cryostat. The sample temperature was 10 K. The signal was directed to a Horiba HR460 spectrometer and detected by a LN2-cooled InGaAs array. The PL peaks are observed at 1.07 eV (FWHM = 0.038 eV) and 1.14 eV (FWHM = 0.058 eV) for the high and low density regions, respectively, for sample w0795. The peak position is shifted to lower energy, giving a sharper peak for the high density region compared to that for the low density region. Following earlier results 26,27 , we conclude that the broad PL peak observed in the low density region can be attributed to the higher In-Ga interdiffusion observed in the x-ray measurements of this region (refer Fig. 4(d)). Further study of crystal defects such as dislocations, which may shift the PL peak energy to higher values, is required to substantiate this observation, as higher InAs deposition on the central portion of the substrate surface may lead to such defect states 28,29 . For samples w0808 and w0809, with lower In deposition, a large fall in PL intensity was observed. Only WL-related emission is observed in sample w0808 from the low density region, at around 1.44 eV. For the high QD density region of this sample, a broad PL intensity distribution is observed around 1.22 eV. For the w0809 sample, a broad QD-related peak is observed around 1.16 eV (FWHM = 0.057 eV) together with a small WL emission. This shift of the high-QD-density PL emission to higher energies may be attributed to the lower In content in samples w0808 and w0809 compared to w0795.
Conclusions
Epitaxial InAs quantum dots grown on the same GaAs wafer at different deposition temperatures have been studied. AFM measurements suggest the coarsening of small quantum dots into uniformly sized larger quantum dots at the higher deposition temperature. This coarsening, or its absence, leads to a variation in In-Ga intermixing inside the quantum dots deposited at different growth temperatures. The results of the grazing incidence x-ray scattering measurements presented here clearly show that quantum dots grown in the low-density region have a large variation of indium composition as a function of height in an average dot. The most promising results were obtained from the w0795 sample, where the indium concentration profile within an average quantum dot in the high in-plane density region exhibits a sharp In peak near the base of the dot and then a flat In0.4Ga0.6As composition over the rest of the dot, giving a much sharper PL emission compared to the dots grown in the low-density region. The techniques developed here to correlate GIXS and PL measurements of quantum dots will help to develop better structure-spectroscopy relationships in these technologically important materials.
Methods
Sample Preparation. The samples were grown by molecular beam epitaxy using a Veeco Gen III system on 3 inch semi-insulating GaAs (001) substrates. The structure consists of a 250 nm GaAs buffer grown at 580 °C, a QD layer plus a 10 nm GaAs capping layer grown at 515 °C for PL measurements, followed by a second 200 nm GaAs buffer and a surface layer of QDs deposited at 515 °C for AFM and GID studies. The GaAs growth rate was 1 ML/s and the InAs arrival rate was 0.027 ML/s. Due to the relatively high deposition temperature for the InAs QDs, resulting in some desorption, the indium shutter was kept open for a time equivalent to 0.8625 nm (2.8 ML) for both layers of QDs in sample w0795. Similarly, for both layers of the w0808 and w0809 samples, the shutter was open for the growth of 0.7875 nm and 0.75 nm, respectively. For the sample w0809, having the lowest In deposition of all, a wetting layer with several precursors is observed at the central region of the substrate 30,31 . X-ray measurements. Synchrotron measurements were performed at the two different QD density regions on the sample surface to study their size, strain profile and the amount of In-Ga intermixing as a function of QD height. All x-ray experiments were performed at beamline P08 of the PETRA III synchrotron at DESY, Germany, at an energy of 11103 eV 32 . A beam-defining slit setting of 50 × 300 micron was used in the vertical and horizontal directions, respectively, and the data were collected by a position sensitive linear Mythen detector. The intensity of all the channels was integrated to obtain the data presented here. During the measurements the incident angle was kept at 0.1°, which is lower than the critical angle for GaAs, in order to keep the x-ray beam on the surface of the sample and enhance sensitivity to the InGaAs QDs present at the top surface. The results of radial and angular scans taken in GID geometry around the two in-plane diffraction peaks (400) and (200) are presented.
Radial scans are intensity measurements in which the incidence angle to the planes in the sample surface (θ) and the detector angle (φ) are varied while keeping φ = 2θ. The intensity measured in this type of scan is directly related to the in-plane lattice parameter (a_||) through the Bragg condition for the in-plane reflections.
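A minimal numerical sketch of this relation, assuming the standard Bragg condition for an in-plane (h00) reflection and the 11103 eV beam energy quoted above (the Bragg angles below are placeholders, not measured values):

import numpy as np

WAVELENGTH = 12.39842 / 11.103   # Angstrom, from E = 11103 eV (hc = 12.39842 keV*Angstrom)

def in_plane_lattice_parameter(theta_deg, h=4):
    # a_|| from the Bragg angle of an in-plane (h00) reflection:
    # a_|| = h * lambda / (2 * sin(theta))
    return h * WAVELENGTH / (2.0 * np.sin(np.radians(theta_deg)))

for theta in (21.5, 22.0, 22.5):   # placeholder Bragg angles for a (400) radial scan
    print(f"theta = {theta:4.1f} deg  ->  a_|| = {in_plane_lattice_parameter(theta):.3f} Angstrom")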
"Materials Science",
"Physics"
] |
Effect of Particle Sizes on the Efficiency of Fluorinated Nanodiamond Neutron Reflectors
Over a decade ago, it was confirmed that detonation nanodiamond (DND) powders reflect very cold neutrons (VCNs) diffusively at any incidence angle and that they reflect cold neutrons quasi-specularly at small incidence angles. In the present publication, we report the results of a study on the effect of particle sizes on the overall efficiency of neutron reflectors made of DNDs. To perform this study, we separated, by centrifugation, the fraction of finer DND nanoparticles (which are referred to as S-DNDs here) from a broad initial size distribution and experimentally and theoretically compared the performance of such a neutron reflector with that from deagglomerated fluorinated DNDs (DF-DNDs). Typical commercially available DNDs with the size of ~4.3 nm are close to the optimum for VCNs with a typical velocity of ~50 m/s, while smaller and larger DNDs are more efficient for faster and slower VCN velocities, respectively. Simulations show that, for a realistic reflector geometry, the replacement of DF-DNDs (a reflector with the best achieved performance) by S-DNDs (with smaller size DNDs) increases the neutron albedo in the velocity range above ~60 m/s. This increase in the albedo results in an increase in the density of faster VCNs in such a reflector cavity of up to ~25% as well as an increase in the upper boundary of the velocities of efficient VCN reflection.
Introduction
Slow neutrons are usually subdivided into three ranges: ultracold neutrons (UCNs) [1][2][3][4][5], very cold neutrons (VCNs), and cold neutrons (CNs). The characteristic feature of UCNs is their (nearly) total reflection from a material surface provided the neutron velocity is smaller than the critical velocity of the surface material; a typical value of the critical velocity of materials used to build UCN traps is ~5 m/s. The available UCN fluxes are extremely low; however, this property of total reflection makes them an invaluable tool in fundamental neutron physics. CNs are widely used in neutron scattering and particle neutron physics due to their much higher fluxes compared with UCNs. Their wavelengths are slightly larger than interatomic distances; thus, Bragg scattering in solids starts disappearing, and matter starts becoming more transparent. In research nuclear reactors and spallation neutron sources, most neutrons are thermalized to this energy range in cryogenic CN sources [6]. A typical velocity of CNs is ~500 m/s. The intermediate range of VCNs, with a typical velocity of ~50 m/s, is rarely used for two reasons: their fluxes are much lower than those of CNs, and, unlike UCNs, they cannot be stored in traps. We explore the possibility of solving both of these problems by means of developing VCN reflectors that are efficient in the lower half of the so-called "reflectivity gap".
Over a decade ago, it was confirmed that detonation nanodiamond (DND) powders reflect VCNs diffusively at any incidence angle, and they reflect CNs quasi-specularly at small incidence angles [7][8][9][10][11][12]. In analogy to UCNs, the phenomenon of efficient reflection of neutrons from DND powder could be used for the definition of the range of velocities/energies of VCNs. Potential applications of these phenomena in neutron technology, as well as the beauty and complexity of the phenomena themselves, motivate active experimental and theoretical research in this area [13][14][15][16][17][18][19][20].
Not only can neutrons be reflected from DNDs, but they can also be lost due to the interaction with DNDs in the course of diffusive motion. As neutron losses in raw DNDs are dominated by a small admixture of hydrogen on the surface of DNDs, the removal of hydrogen by fluorination [21][22][23][24][25] is essential for reducing neutron losses.
The properties of DNDs have been studied over the decades of their research and use, in particular those relevant to neutron reflectors [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45]. In relation to neutron reflectors, DNDs combine the large cross-section of coherent scattering and the low cross-section of neutron losses. As this paper shares the scientific motivations and some experimental methods with another article published in the same issue of this journal [46], we limit our description here to only the specific results related to the subject of our present research. We refer the reader to this previous publication for a detailed description of the principle of operation and the general properties of such neutron reflectors. In the present publication, we report results of our study to optimize the sizes of DNDs in order to increase the neutron albedo in the most interesting VCN range; the neutron albedo is neutron reflectivity for the isotropic angular distribution of incident neutrons. From below, the VCN velocity range starts directly from UCNs. The most probable VCN velocity at the only existing VCN user facility, PF2 at ILL, is~50 m/s. The maximum available VCN velocity is~200 m/s. Better VCN reflectors might allow for an increase in this value.
For every neutron wavelength, there is an optimum nanoparticle diameter that corresponds to the maximum neutron transport cross-section. In the model of diamond nanospheres, the optimal particle diameter D_opt scales with the neutron wavelength λ_n and thus inversely with the neutron velocity V_n, D_opt ∝ λ_n ∝ 1/V_n (Formula (1)) [8]. In particular, for a typical VCN velocity of ~50 m/s, the optimal particle diameter is 4.3 nm, precisely in line with a typical size of commercially available DNDs. Figure 1 illustrates the calculated optimal particle sizes and the corresponding maximum VCN albedo for a range of neutron velocities from 20 to 200 m/s. This and all other calculations presented below were performed using the Monte Carlo method and the model of discrete-sized diamond nanospheres (MDDNS) [17,46]. In the particular case shown in Figure 1, the powder layer thickness is 1 cm and the density is 0.56 g/cm³. All particles are assumed to have a spherical shape and a monodisperse optimal diameter, at which the transport cross-section for each wavelength reaches its maximum. It is clear that for the broad range of velocities above ~50 m/s, the mean size of the particles should be smaller than ~4.3 nm. For comparison with the ideal case, we added a dashed curve corresponding to the albedo calculated for a model of the real powder of deagglomerated fluorinated DNDs (DF-DNDs) with the same density of 0.56 g/cm³ described in ref. [46] (the sample with the best performance achieved). The difference between DF-DNDs and optimal diamond nanospheres is due to the deviation from the optimum size and the spread of sizes.
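The following short Python sketch illustrates this scaling. The de Broglie relation is standard; the optimal-diameter line is only an assumption anchored to the value quoted above (4.3 nm at 50 m/s) and proportional to the wavelength, i.e., an illustration consistent with the text rather than the exact Formula (1):

H_PLANCK = 6.62607015e-34    # J*s
M_NEUTRON = 1.67492749e-27   # kg

def wavelength_nm(v):
    # de Broglie wavelength of a neutron with velocity v (m/s), in nm
    return H_PLANCK / (M_NEUTRON * v) * 1e9

def optimal_diameter_nm(v):
    # Assumed scaling D_opt ~ lambda_n, anchored at 4.3 nm for 50 m/s (illustrative only)
    return 4.3 * 50.0 / v

for v in (20, 50, 100, 200):
    print(f"V = {v:3d} m/s   lambda = {wavelength_nm(v):5.2f} nm   D_opt ~ {optimal_diameter_nm(v):4.2f} nm")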
Evidently, a maximum neutron albedo for all wavelengths at once is unattainable for any particular DND powder. However, the albedo can be increased in a broad wavelength range by properly selecting the mean size of DNDs. Therefore, we found a way to separate, by centrifugation, the fraction of finer DNDs from a broad size distribution in the initial powder and experimentally and theoretically studied the performance of a neutron reflector made of such particles. To underline the separated DNDs as well as the small particle size in this powder, we refer to these as S-DNDs to distinguish them from the DF-DNDs, which are used for comparison.
When choosing methods of characterization of DNDs, we took into account that our main goal was to describe the diffusion of neutrons in the DND powder. Therefore, SANS (small-angle neutron scattering) is the main characterization method used in this paper, as it allowed us to describe DNDs as neutron scatterers. Other methods are complementary and were used to confirm the effect of different procedures that we applied to modify the samples (fluorination, size separation, etc.). When applying MDDNS to SANS data, we approximated a real medium with a model one, the main goal of which was to precisely describe all of the SANS data and to extrapolate the experimental data to velocity and angle ranges that are not accessible for direct SANS measurements. Such a model allowed us to describe neutron diffusion in DND powders in all velocity and angular ranges of interest.
This approach allowed us to take into account all relevant properties of the interaction of neutrons with the nanostructured media described in this work (diamond cores and non-diamond carbons, nano-pores, interference on neighboring scatterers, etc.). Structures of significantly larger sizes, originating in particular from microporosity and microstructure, have no direct effect on the neutron transport characteristics, as they result in scattering at too-small angles. However, they are also included in our analysis through the SANS measurements. The absence of neutron-absorbing impurities must be verified by other methods (neutron activation, neutron prompt-γ analysis, etc.) for each DND powder considered for use in a real reflector.
There are several options for building neutron reflectors based on DND powders, and these include placing a DND powder in a thin-wall envelope made of materials with low neutron losses as used, for instance, for the first demonstration of VCN storage in a closed trap [10], as well as sintering and cold compaction, as is being investigated in the ANR-20-CE08-0034 project (ANR-Agence Nationale de la Recherche, France). The choice between these options depends on the particular applications of such reflectors. However, the results of the present study are valid for all these cases.
The details of the sample preparation process are described in Section 2. In Section 3, we present the results of experimental studies of the particle size distribution. We investigated the DF-DND and S-DND samples with complementary techniques as follows: Section 3.1 illustrates the size distribution of DNDs in these samples using transmission electron microscopy (TEM); in Section 3.2, the size distribution of DND cores determined using X-ray diffraction (XRD) is discussed; SANS results are described in Section 3.3; in Section 4, we discuss the calculated performance of DF-DND and S-DND reflectors and the effect of the increased albedo on the yield of neutrons.
While molecular fluorine does not react with sp 3 carbons of the diamond core, it decomposes amorphous carbons when the reaction is carried out at a high temperature such as 450 • C in our case. The lower the crystalline order, the higher reactivity of sp 2 -type carbon. At this temperature, graphitized samples form, in pure F 2 gas, a mixture of (C 2 F) n and (CF) n structural types, whereas amorphous carbons are decomposed into C x F y gases (x = 1, 2, 3, . . . and y = 4, 5, 6, . . . ). As a gas/solid reaction, the F 2 molecules react with the available sp 2 C and functional groups (mainly C-OH, C-H, C=O, and COOH) on the diamond surface. If those groups are located on sp 2 C shells, they are removed together with C x F y gases. When located onto the diamond surface, they are converted into C-F bonds. It is important to note that no diffusion of fluorine occurs inside the diamond core. At boundaries, two opposite cases may occur: F 2 molecules open channels for their diffusion through the decomposition of sp 2 carbons. If this phenomenon does not occur, the impurities located at boundaries are inaccessible for molecular fluorine, and their removal fails.
We separated an S-DND fraction of finer DNDs from a broad size distribution. For this purpose, we used raw DNDs produced at the Federal State Unitary Enterprise Special Design-Technology Bureau (FSUE SDTB) "Technolog", Saint-Petersburg, Russia. These DNDs underwent the deagglomeration process described in ref. [36]. The obtained hydrosol of deagglomerated particles was centrifuged (in a Sigma 6-16 centrifuge (SIGMA Laborzentrifugen GmbH, Germany) with the acceleration a max = 1.8 × 10 4 g for 100 min) in order to separate particles by size in water. The supernatant containing S-DND particles with diameters of~3 nm was carefully separated from the sediment. Note that such diameters are well within the range of stability of DNDs, whose lower bound is 1.2 nm [33].
Transmission Electron Microscopy
Figure 2a,c show examples of TEM images of DF-DND and S-DND samples, respectively, obtained using FEI Tecnai G2 30 S-TWIN, NRC "Kurchatov Institute"-CRISM "Prometey", Russia. For these measurements, a 2-3 mm 3 sample of the powder was added to 1 mL of distilled water, and the container with the mixture was placed in an ultrasonic bath filled with water. It was then sonicated for 30 min. The resulting suspension (2-3 drops) was applied to a carbon replica placed on the plain grid manufactured by Pacific Grid-Tech. After drying, the replica was examined via TEM.
The size distributions of DF-DNDs and S-DNDs shown in Figure 2b,d, respectively, were evaluated using all available TEM images, examples of which are shown in Figure 2a,c. For the presented histograms, we used 10 TEM images for S-DNDs and 1 TEM image for DF-DNDs. The visible projection of an individual particle was described by an ellipse, and the particle was assigned a diameter equal to the diameter of a circle of equal area. The total number of particles in the TEM images was 2078 for S-DNDs and 264 for DF-DNDs.
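The equal-area conversion used above can be written in a single line; in the sketch below the ellipse axes are hypothetical values chosen only to show the arithmetic:

import math

def equivalent_diameter(major_axis, minor_axis):
    # Diameter of the circle whose area equals that of an ellipse with the given full axes:
    # pi*a*b/4 = pi*d**2/4  ->  d = sqrt(a*b)
    return math.sqrt(major_axis * minor_axis)

print(f"{equivalent_diameter(5.2, 3.1):.2f} nm")   # hypothetical 5.2 nm x 3.1 nm projection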
The mean particle diameters in the DF-DND and S-DND samples are ~4.7(2) nm and ~3.8(1) nm, respectively. This difference in mean diameters is large enough for the purpose of this study, i.e., for a comparison of the two DND reflectors as a function of particle size.
X-ray Diffraction
Another non-neutron method of evaluating DND sizes is the broadening of peaks in XRD patterns.
With the Rigaku diffractometer, a planar powder stage was used as the sample holder. With the ID28 diffractometer, the samples were packed in quartz capillaries with the diameter of 200 µm; the data were evaluated using SNBL Toolbox [48] and Dioptas [49] software.
The results are shown in Figure 3. The positions of the diffraction peaks correspond to the diamond lattice. Broadening of the peaks contains information about the particle sizes. A characteristic size of the coherent scattering region was determined by analyzing the full width of the peak at half maximum using the Sedyakov-Scherrer equation, and the shape of the peak through a comparison of the full widths at one-fifth and four-fifths of the maximum [50]. For the latter purpose, the 311 peak was the most convenient, and its width change of ~10% is shown in Figure 3c with a parabolic local background subtracted and the peak height scaled. The results, as expected, are only slightly dependent on the procedure. The Rigaku data give a mean particle size of ~3.4 nm for S-DND, while the mean size and dispersion of sizes from the synchrotron data are, respectively, ~4.1 nm and ~2.0 nm for DF-DND, and ~3.7 nm and ~1.7 nm for S-DND. The mean value ⟨D⁴⟩/⟨D³⟩ estimated from the XRD peak broadening thus decreases by ~10%.
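As an illustration of the peak-broadening analysis, the sketch below applies the standard Scherrer relation; the peak position, width, wavelength and shape factor are hypothetical values, not the measured Rigaku or ID28 parameters:

import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_angstrom, K=0.9):
    # Crystallite size from XRD peak broadening: D = K * lambda / (beta * cos(theta)),
    # with beta the FWHM in radians; result converted from Angstrom to nm
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_angstrom / (beta * math.cos(theta)) / 10.0

# Hypothetical diamond 311 peak measured with Cu K-alpha radiation
print(f"{scherrer_size_nm(fwhm_deg=2.5, two_theta_deg=91.5, wavelength_angstrom=1.5406):.1f} nm")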
Small-Angle Neutron Scattering
SANS provides the most direct and unambiguous information for the analysis of scattering and transport of slow neutrons in DND powders. SANS characterization [51] of the DF-DND and S-DND samples was performed using three instruments: the time-of-flight spectrometer YuMO in the two-detector mode (FLNP, JINR, Dubna, Russia [52]), the diffractometer D11 (ILL, Grenoble, France) [53,54], and NGB30 at the NIST Center for Neutron Research (Gaithersburg, MD, USA) [55]; their neutron wavelengths and ranges of transferred momenta (Q) were 0.7-5.0 Å and 7 × 10⁻² nm⁻¹ < Q < 10¹ nm⁻¹; 6 Å and 10⁻² nm⁻¹ < Q < 10⁰ nm⁻¹; and 6 Å and 3.4 × 10⁻² nm⁻¹ < Q < 1.2 × 10⁰ nm⁻¹, respectively. The three SANS instruments were used to increase the reliability and precision of the results and to select Q-ranges with maximum statistical accuracy and minimum backgrounds. To match the SANS data measured at different Q-ranges (corresponding to different sample-to-detector distances) as well as SANS data measured with the different instruments (YuMO, D11, and NGB30), the following procedure was applied: (1) with each instrument, a few Q-ranges were measured with significant overlap and high enough statistics in the overlapping ranges; the data from the overlapping Q-ranges were then used to normalize the intensity over the entire Q-range for each instrument.
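A minimal sketch of the overlap normalization in step (1), assuming two already-reduced I(Q) curves measured on different absolute scales; the curves below are synthetic placeholders rather than the YuMO, D11 or NGB30 data:

import numpy as np

def merge_by_overlap(q1, i1, q2, i2):
    # Scale curve 2 onto curve 1 using the median intensity ratio in the
    # overlapping Q-range, then return one merged, Q-sorted curve and the scale.
    lo, hi = max(q1.min(), q2.min()), min(q1.max(), q2.max())
    mask = (q2 >= lo) & (q2 <= hi)
    scale = np.median(np.interp(q2[mask], q1, i1) / i2[mask])
    q = np.concatenate([q1, q2])
    i = np.concatenate([i1, scale * i2])
    order = np.argsort(q)
    return q[order], i[order], scale

q1 = np.logspace(-2, 0, 50); i1 = q1 ** -3.5          # synthetic low-Q curve
q2 = np.logspace(-1, 1, 50); i2 = 0.2 * q2 ** -3.5    # same scattering, different scale
q, i, scale = merge_by_overlap(q1, i1, q2, i2)
print(f"scale factor applied to curve 2: {scale:.2f}")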
We used Igor macros [56] to evaluate the SANS data. The samples were placed inside aluminum 1 mm cells with 1 mm windows for the measurements with YuMO, and in quartz SUPRASIL 1 mm cells with D11 and NGB30. The bulk density of the DF-DNDs was 0.56 g/cm³, and that of the S-DNDs was 0.67 g/cm³. Figure 4 compares the merged results of the SANS measurements made with all these instruments. The shape of the scattering intensities for the two samples differs significantly at small values Q < 10⁰ nm⁻¹ due to the lower concentration of agglomerates in S-DNDs resulting from the DND size separation procedure. Another natural consequence of this procedure is that the number of individual DNDs (corresponding to large values of Q > 10⁰ nm⁻¹) increases by 36%. These observations agree with our expectations based on the knowledge of the DND separation method. Incoherent scattering at Q > 7 × 10⁰ nm⁻¹ is defined mainly by the presence of hydrogen. It is larger for the S-DND powder because it is not fluorinated; however, this fact is not relevant to our calculations, as we do not consider the effect of impurities.
Approximation of the Size Distribution of DNDs Using MDDNS
As previously mentioned, our main goal was not to precisely extract the size distribution of the real powders but to find an approximate model with a size distribution that allows us to precisely reproduce the neutron scattering. Figure 5 presents the diameter distribution of ideal DNDs obtained from MDDNS for the DF-DND and S-DND samples, evaluated using the SANS data shown in Figure 4 after the incoherent scattering on hydrogen in S-DNDs was subtracted. The physical basis for the procedure of size evaluation is the model of discrete-sized diamond nanospheres (MDDNS) [25,46], which represents the medium as ideal independent diamond nanospheres with a discrete set of diameters. Such a model medium scatters neutrons the same way as a real medium but also allows model extrapolation of the measured experimental data to other wavelength and angular ranges not accessible directly with standard SANS devices. The mathematical algorithm of this procedure will be described in detail in a forthcoming publication and is the subject of a patent (RU 2020662675). Although both SANS curves in Figure 4 are of equal statistical quality, the size distribution for MDDNS DF-DNDs is smooth in shape, whereas that for S-DNDs shows considerable fluctuations. This difference might be because fewer nanodiamond diameters effectively contribute to the scattering of neutrons on S-DNDs. Such fluctuations have no effect on the precision of model calculations of neutron transport in S-DNDs on the condition that our model precisely describes the SANS data. In the range of 1.2-10.0 nm, the mean size of ideal scatterers in the S-DND model is ~2.9 nm, while it is ~3.8 nm for the DF-DND model. Figure 5 confirms the conclusion drawn from the other methods of DND characterization; that is, in S-DNDs, the fraction of smaller particles is larger, while the larger particles disappear completely. Moreover, it can be clearly observed that the size distributions obtained from MDDNS using SANS data have a fraction of noticeably smaller particles than those obtained with other methods. The reason for this difference is that neutrons are not scattered only on individual DNDs but also on fluctuations in the density of the medium. They are sensitive to the entire structure of the powder. The scattering on pores between DNDs also contributes to this. In MDDNS, scattering on small pores, on non-diamond carbon, on specific types of agglomeration, on specific shapes of DNDs, etc. corresponds to scattering on small nanospheres.
Note that to calculate the transport of neutrons in the powder, we only used SANS data. Other methods serve only for a better understanding of the peculiarities of powder modifications and the effect of these modifications.
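The idea behind MDDNS can be illustrated with a highly simplified sketch: represent a measured SANS curve as a non-negative combination of scattering from homogeneous spheres with a discrete set of diameters. The sketch below uses the standard sphere form factor and a non-negative least-squares fit on a synthetic curve; it is an illustration of the concept only, not the authors' patented algorithm:

import numpy as np
from scipy.optimize import nnls

def sphere_form_factor(q, d):
    # Normalized form factor P(q) of a homogeneous sphere of diameter d
    x = np.maximum(q * d / 2.0, 1e-12)
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x ** 3) ** 2

def fit_discrete_sizes(q, i_meas, diameters):
    # Weights of discrete sphere diameters reproducing i_meas; each basis column
    # scales as volume^2 * P(q) for one diameter, weights constrained non-negative
    basis = np.column_stack([d ** 6 * sphere_form_factor(q, d) for d in diameters])
    weights, _ = nnls(basis, i_meas)
    return weights

q = np.linspace(0.05, 5.0, 200)   # nm^-1
i_meas = 3.0 ** 6 * sphere_form_factor(q, 3.0) + 0.3 * 5.0 ** 6 * sphere_form_factor(q, 5.0)
diameters = np.arange(1.0, 8.0, 0.5)
for d, w in zip(diameters, fit_discrete_sizes(q, i_meas, diameters)):
    if w > 1e-3:
        print(f"d = {d:.1f} nm   weight = {w:.3f}")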
Albedo Calculations
To calculate the albedo for the different cases discussed below, we used the size distributions shown in Figure 5 of Section 3.3, obtained from SANS data for DF-DNDs and S-DNDs. First, we simulated the neutron albedo for the theoretical case of semi-infinite flat media; second, we considered cases of finite thickness (flat and spherical layers) with the same or different densities. In the latter case, we used the densities of the real samples. Note that when preparing samples for SANS, we aimed at a minimum but stable density; for DNDs of different types, the density was notably different. For a powder of infinite thickness and a flat geometry, the albedo does not depend on density, and it is defined only by the absorption and scattering cross-sections. For a powder of finite thickness, the fraction of neutrons passing through the layer depends on the powder density; thus, the albedo depends on density. For a spherical trap, the albedo also changes for infinite powder thickness, since it depends on the ratio of the radius of curvature of the surface to the depth of penetration of neutrons into the powder. The calculations were performed within MDDNS, using the Monte Carlo method [57] and our original software. Incident neutrons are isotropic. Figure 6 shows the calculated neutron albedo from semi-infinite media for the DF-DND and S-DND powders.
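For orientation, a toy Monte Carlo estimate of the albedo of a semi-infinite medium with isotropic scattering is sketched below in Python. It is not the MDDNS software used for the results in this work: the cross-sections are placeholder values, and the angular treatment is deliberately simplified.

import random

def albedo_semi_infinite(sigma_scatter, sigma_absorb, n_neutrons=20000, seed=1):
    # Fraction of neutrons that re-cross the entrance surface before absorption,
    # for a semi-infinite medium with isotropic scattering (toy model).
    rng = random.Random(seed)
    sigma_total = sigma_scatter + sigma_absorb
    p_absorb = sigma_absorb / sigma_total
    reflected = 0
    for _ in range(n_neutrons):
        z, mu = 0.0, rng.random()                    # enter at the surface, heading inward
        while True:
            z += mu * rng.expovariate(sigma_total)   # free flight to the next collision
            if z < 0.0:                              # escaped back through the surface
                reflected += 1
                break
            if rng.random() < p_absorb:              # absorbed inside the powder
                break
            mu = rng.uniform(-1.0, 1.0)              # isotropic scattering
    return reflected / n_neutrons

# Placeholder cross-sections (arbitrary inverse-length units): strong scattering, weak absorption
print(f"albedo ~ {albedo_semi_infinite(sigma_scatter=10.0, sigma_absorb=0.01):.3f}")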
From Figure 7, it can be observed that DF-DNDs are more advantageous than S-DNDs for the reflection of neutrons with typical VCN velocities of ~50 m/s from semi-infinite flat media. However, when the goal is to increase the upper boundary of the VCN velocity range, S-DNDs are slightly more appropriate than DF-DNDs. It must be borne in mind, however, that the advantage of S-DNDs will become more evident for realistic 3D geometries of DND reflectors, which we consider below.
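The conversion from albedo to the figure of merit shown in Figure 7 is a one-line calculation; the sketch below uses made-up albedo values (the real curves come from the MDDNS simulation) to illustrate how a small absolute difference in albedo translates into a sizable difference in storable VCN density via the loss factor η = 1 − albedo.

```python
def gain_in_stored_density(albedo_a, albedo_b):
    """Ratio of loss factors eta = 1 - albedo.

    Values > 1 mean powder A can store a higher VCN density than powder B,
    since the storable density scales with the reciprocal loss factor.
    """
    eta_a = 1.0 - albedo_a
    eta_b = 1.0 - albedo_b
    return eta_b / eta_a

# Made-up numbers: an albedo of 0.990 versus 0.988 is only a 0.2% absolute
# difference, yet it corresponds to a ~20% gain in storable density.
print(gain_in_stored_density(albedo_a=0.990, albedo_b=0.988))  # -> 1.2
```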
In contrast to the case of semi-infinite media, for a finite-thickness flat layer, powder density is important. Figure 8 shows the calculated neutron albedo from flat 3 cm-thick layers for DF-DNDs and S-DNDs for two cases: an equal density of the samples as well as for the real densities of SANS samples. This comparison allows one to separate the effects of powder density and particle sizes. The probability of neutron absorption is below 1.43% for S-DNDs, and it is below 1.30% for DF-DNDs, at any velocity.
As Figure 8 clearly shows, for a finite-thickness geometry, the albedo increases with an increase in powder density. This can be explained by the fact that increasing the density is effectively equivalent to increasing the thickness at a fixed density. The effect of density is more pronounced in a 3D reflector geometry, which is of interest for a VCN source reflector or a VCN trap; such a geometry assumes that there is a cavity inside the DND reflector. This effect can be explained by the fact that the probability of return of a VCN to the cavity decreases with an increase in the depth of penetration into the reflector. Figure 9 shows the results of calculations of the neutron albedo from the wall of a spherical cavity depending on the cavity radius. Neutron velocities are 50 m/s, 100 m/s, and 150 m/s; powder thickness is infinite and 3 cm; and powder densities vary.
In contrast to the flat geometry, in a 3D geometry, the effect of density is significant provided the cavity radius is comparable with the depth of penetration of VCNs into the reflector. As shown in Figure 9, this condition is always met for any cavity radius and powder type when the goal is to increase the upper boundary of the velocity range of efficient reflection of VCNs. In this case, VCNs with a maximum velocity from the range penetrate into the DND wall to a depth comparable to the cavity radius.
One could try to compress powders to a higher density. However, there are certain constraints. As the powder density increases, at some point the density fluctuations would decrease, thus resulting in a decrease in neutron scattering. The powder becomes more transparent to neutrons, and as it approaches the diamond density, it becomes completely transparent. This phenomenon can develop unevenly: in some areas of the compressed powder, the density may become too high, and in others, it may not. A technical realization of this compaction and the compressibility of DNDs of different types are the topic of a separate study.
To summarize the results of the present study, we compare the performance of reflectors based on DF-DNDs and S-DNDs for a realistic geometry of a spherical cavity with a radius of 5 cm and a wall thickness of 3 cm for realistic powder densities (see Figure 10). (1) S-DNDs are more advantageous for the reflection of VCNs starting from a smaller velocity equal to ~60 m/s; (2) the decrease in the loss factor for S-DNDs is also more important, and it is equal to ~20-25% in a broad range of VCN velocities. The first point is important when estimating the spectrum of VCNs to be reflected with a DND reflector. The second factor is directly translated into the gain in the intensity of VCNs, which can be accumulated in DND traps with S-DND and DF-DND walls, respectively. The relative decrease in the gain factor in Figure 10 is a result of the partial transmission of faster neutrons through the wall of a finite thickness.
As Formula (1) shows, the optimum diameter of DNDs for VCNs with a velocity of ~50 m/s is ~4.3 nm. If the goal is to increase the upper boundary of the velocity range of effective reflection of VCNs, the mean diameter of DNDs must be reduced. However, it cannot be reduced by more than a factor of approximately three because even smaller DNDs do not exist, and because the cross-section of coherent scattering of neutrons decreases dramatically as a function of the DND diameter (as the sixth power of the diameter [7]). If the goal is to improve storage times for a softer part of the VCN spectrum, the mean diameter of DNDs must be increased. However, it should not be increased by more than a factor of approximately three, because there are other mechanisms of reflection of slower VCNs that are efficient for such velocities (optical Fermi potential and supermirrors). These arguments roughly define the range of effective reflection of VCNs by DNDs and guide the choice of parameters of DNDs for the design of DND reflectors optimized for the diffusive reflection of VCNs.
Conclusions
In the current study, we investigated the effect of particle sizes on the efficiency of DND neutron reflectors. If typical DNDs with a size of ~4.3 nm are efficient for the reflection of typical VCNs with a velocity of ~50 m/s, other mean sizes are more efficient for other VCN velocities. Thus, a reduction in the mean size by up to a factor of approximately three allows more efficient reflection of faster neutrons. An increase in the mean size by up to a factor of approximately three allows the storage times of slower VCNs to be increased. Simulations based on SANS data and MDDNS (model of discrete-size diamond nanospheres) for a realistic reflector geometry show that the particle size reduction corresponding to the replacement of DF-DNDs (DNDs with the best achieved performance) by S-DNDs (DNDs of smaller sizes) increases the neutron albedo in the broad velocity range above ~60 m/s. This increase in albedo results in an increase in the density of faster VCNs in such a reflector cavity of up to ~25%, as well as an increase in the upper boundary of velocities of efficient VCN reflection. As the density of DND reflectors is of importance for their efficiency, an investigation of the practical feasibility of its increase and the corresponding benefit is of interest for future research.
Patents
The algorithm for extracting a model-independent size distribution of scatterers from small-angle scattering data was developed to obtain the results reported in this manuscript. It is protected by the author's certificate of state registration of the software "Structural Nanopowders Analyzer Based on Small-Angle Scattering Data (SNASAS)" RU 2020662675, issued by the Federal Service for Intellectual Property. | 9,683.8 | 2021-11-01T00:00:00.000 | [
"Physics"
] |
Scattering efficiencies measurements of soft protons at grazing incidence from an Athena Silicon Pore Optics sample
Soft protons are a potential threat for X-ray missions using grazing incidence optics since, once focused onto the detectors, they can increase the background and possibly also induce radiation damage. The assessment of these undesired effects is especially relevant for the future ESA X-ray mission Athena, due to its large collecting area. To prevent degradation of the instrumental performance, which ultimately could compromise some of the scientific goals of the mission, the adoption of ad-hoc magnetic diverters is envisaged. Dedicated laboratory measurements are fundamental to understand the mechanisms of proton forward scattering, validate the application of the existing physical models to the Athena case and support the design of the diverters. In this paper we report on scattering efficiency measurements of soft protons impinging at grazing incidence onto a Silicon Pore Optics sample, conducted in the framework of the EXACRAD project. Measurements were taken at two different energies, ~470 keV and ~170 keV, and at four different incident angles between 0.6 deg and 1.2 deg. The results are generally consistent with previous measurements conducted on eROSITA mirror samples, and as expected the peak of the scattering efficiency is found around the angle of specular reflection.
Introduction
Fitting the experimental data with the Remizovich formula led to the evaluation of the parameter encapsulating the micro-physics of the scattering, so that a new analytical semi-empirical model that better reproduces the data from the eROSITA mirror sample was derived [1]. This model can be used to assess the SP flux expected at the instrumental focal plane of the satellite. The model can also be applied to Athena, provided that the proton scattering properties of the SPO are experimentally determined.
In this publication, we present the first measurements of scattering efficiencies of low-energy protons off an SPO sample. These experimental activities were conducted in the framework of the EXACRAD (Experimental Evaluation of Athena Charged Particle Background from Secondary Radiation and Scattering in Optics) project funded by ESA. The paper is structured as follows: we first describe the laboratory set-up and the elements along the beam line in Section 2; in Section 3 we illustrate how the scattering efficiency is derived from the raw data; the new data are presented in Section 4, where they are also compared to the data from eROSITA as well as to the semi-empirical model mentioned above; finally, we draw our conclusions in Section 5.
Experimental setup
The experiment was conducted at the 2.5 MV Van de Graaff accelerator at the Goethe University (Riedberg Campus) in Frankfurt am Main. The setup of the beamline, similar to that of [2,3], is shown in Figs. 1, 2, and 3 (cf. [2]): the proton beam enters the setup from the right and moves towards the left; the SPO sample is located in the target chamber, while the detector is placed in the chamber at the end of the beamline (detector chamber); a second detector (not shown in the picture) was placed next to the central one, at an angular distance of ∼2°.
Beamline setup
Protons enter the beamline through a copper pinhole aperture with a diameter of 1 mm, which reduces the size of the incoming beam to prevent pile-up and to maintain reasonable rates on the detectors. The beam then goes through a 0.002 mm thick aluminium foil, which degrades the incoming beam energy below the lower limit of the accelerator. The degraded beam then enters a 78 cm long collimator, which directs part of the widened beam directly to the target. Two further apertures are positioned at the entrance and at the exit of the collimator, both with a diameter of 1 mm. This diameter limits the smallest possible incident angle to ∼0.5°. The apertures are held in position by 2 mm aluminium plates, which absorb any proton of the degraded beam that does not enter the apertures and is scattered by the inner walls of the collimator and of the beamline itself. The SPO target (Fig. 4), provided by cosine, consists of a 110 mm long single silicon wafer, 0.775 mm thick, grooved on the bottom, and coated on top with 10 nm of iridium and 7 nm of silicon carbide. It is located in a dedicated chamber (hereafter called the target chamber) and mounted on a tiltable plate. The height of the target can be adjusted by a set of screws underneath the plate. A linear manipulator is used to change the inclination of the plate, i.e., the incident angle (θ0). The pivoting point is several centimeters below the line of the beam, so that the target can be completely removed from the beam, allowing for a determination of the primary beam position on the detector plane. The manipulator is set below the target chamber and hence can be easily accessed while the system is under vacuum.
Between the exit of the collimator and the target plate, a Passivated Implanted Planar Silicon (PIPS) detector is mounted on a push-pull manipulator, at the same height as the beamline. This detector is used to register the flux of the incident beam impinging on the target, which is needed for normalisation measurements; it will hereafter be called the 'normalisation detector'. The push-pull manipulator permits fast removal of the detector, guaranteeing a measurement of the impinging proton flux (Φinc, cfr. (2)) for each measurement of the scattered beam (see Section 3 for why frequent normalisation measurements are needed). An aluminium blind with an aperture of 3 mm is set on top of the normalisation detector to avoid saturation. Lastly, downstream of the target chamber, a thick aluminium sheet with a slit of 3 cm height and 1 cm width is installed a few centimeters after the target plate. This window lets through only the protons on the line of the beam, while the sheet absorbs all those scattered by the inner walls or by the other elements in the target chamber.
At the end of the beamline, a second chamber (hereafter the detector chamber) hosts two more PIPS detectors, called the 'central detector' and the 'lateral detector', used to register the on-axis and off-axis fluxes (Φscat(θ0, θ, φ), cfr. (2)) of the beam scattered by the target. They are mounted on a second linear manipulator, which allows a spatially resolved sampling of the scattered beam. The distance between the center of the target plate and the detection plane is 942 mm. The central detector is aligned with the beam, while the lateral detector is set to its left. This configuration allows coverage of the scattered beam along the incident direction and at an azimuthal angle φ of 1.97° ± 0.13°. On top of each detector there is a blind with an aperture of 1 mm diameter for the central detector and 3 mm for the lateral detector. These reduce the solid angle of the detectors with respect to the mirror center to about 8 × 10−7 sr and 2 × 10−5 sr for the central and lateral detector, respectively.
Data acquisition chain
The pulse signal produced by the PIPS detectors is amplified and digitised through several analogue/digital electronic components. A flow chart is given in Fig. 5.
The PIPS detectors produce a pulse with an amplitude proportional to the energy of the incident particle. The pulse signal from each PIPS goes through its own preamplifier and amplifier and is then digitised by the Analog-to-Digital Converter (ADC). The ADC receives the continuous signals (from 0 to ∼10 V) from the three channels (one for each detector) and converts them into discrete signals, distributing each into 8192 bins with a resolution of 1.22 mV. The digitised signals are then passed to the histogramming memory, which produces a histogram for each channel. Once the measurement is done, the histograms are read out by the CAMAC module and transferred to a computer, which acquires and stores them as raw data files.
The process of digitisation of the data within the ADC takes a certain time (fractions of a second), so that if a new signal arrives within that time, it is not registered. To account for this dead-time, a pulse generator, which generates pulses at a fixed frequency, is connected to the ADC and to a scaler, which counts the number of pulses produced by the pulse generator during the acquisition time. The scaler is also fed to the CAMAC control module. The comparison between the pulser counts read from the ADC and those from the scaler gives the dead-time correction factor (cfr. (4)). The pulse generator fed to the ADC constitutes another channel, so that the whole acquisition system consists of four channels, all working simultaneously, plus the scaler.
Alignment and angular calibration
The alignment of all the movable elements on the beamline, i.e., the pinhole aperture, the slits, the normalisation detector, and the central detector, is done by using a telescope previously aligned with the exit of the accelerator.
A 520 nm laser is employed to perform the angular calibration. The laser is set right after the pinhole aperture and goes through all the slits. When the target plate is down, the laser reaches the central detector in the detector chamber. In this way, the zero of the beamline, corresponding to θ = 0°, can be established. This measurement also gives the vertical offset of the linear manipulator of the central/lateral detectors.
To calibrate the incident and scattering angles, we use the property of the mirror target to reflect optical light. Hence, we raise the target plate, using its own manipulator, until the light is blocked. Then, we raise the central detector until the laser beam is detected again. Assuming specular reflection, the angle subtended by the height h of the manipulator will be ζ = θ + θ0 = 2θ0, so that the incident angle can be computed as θ0 = ζ/2. This operation is repeated several times, so that we end up with different angles corresponding to different readings on the linear manipulator of the target plate. The incident angle can then be determined with a simple linear interpolation. (Fig. 5 caption: the analogue signal from the PIPS detectors first goes through a pre-amplifier and an amplifier, is then converted into a digital signal by the ADC, and is finally stored in the histogramming memory; simultaneously, a pulse generator sends pulses at a fixed frequency to the ADC and to a scaler; the digitised signals are read out by a CAMAC controller unit, which transmits them to a computer once the measurement is finished.)
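A minimal sketch of this calibration step, with hypothetical manipulator readings: each reading gives a subtended angle ζ (and hence θ0 = ζ/2), and a linear fit of θ0 against the reading lets any subsequent reading be converted into an incident angle.

```python
import numpy as np

# Hypothetical calibration points: target-plate manipulator reading (mm)
# vs. detector height (mm) at which the reflected laser reappears.
distance_target_detector_mm = 942.0
plate_readings_mm = np.array([2.0, 4.0, 6.0, 8.0])
detector_heights_mm = np.array([16.5, 33.0, 49.3, 65.8])

# Specular reflection: the subtended angle is zeta = theta + theta0 = 2*theta0.
zeta_deg = np.degrees(np.arctan(detector_heights_mm / distance_target_detector_mm))
theta0_deg = zeta_deg / 2.0

# Linear interpolation theta0(reading): slope and intercept of the fit.
slope, intercept = np.polyfit(plate_readings_mm, theta0_deg, deg=1)
print(f"theta0 ~= {slope:.4f} deg/mm * reading + {intercept:.4f} deg")
print("reading 5.0 mm ->", slope * 5.0 + intercept, "deg")
```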
Efficiency definition and normalisation measurement
In the laboratory system of reference, the scattering efficiency per unit solid angle can be defined as

S(θ0, θ, φ) = Φ_scat(θ0, θ, φ) / [Φ_inc Ω(θ, φ)],  (2)

where θ0 is the incident angle, θ and φ are the polar and azimuthal scattering angles (see Fig. 6), Φ_scat and Φ_inc are the scattered and incident proton count rates, and Ω(θ, φ) is the solid angle subtended by the detector. The count rate of the scattered particles is given by the number of protons N_scat scattered by the SPO sample reaching the detectors in the detector chamber divided by the integration time Δt_scat. In a similar way, the count rate of the incident particles is given by the number of particles N_inc intercepted by the normalisation detector in front of the mirror chamber divided by the integration time Δt_inc. The number of counts of incident and scattered protons, N_inc and N_scat, is obtained by integrating the ADC histograms. This number must be corrected for the dead-time of the ADC (cfr. Section 2.2), so that the effective count rates can be expressed as

Φ_scat = α N_scat / Δt_scat,  Φ_inc = α N_inc / Δt_inc,  (3)

with the correction factor α given by

α = N_scaler / (N_pulser)_ADC,  (4)

where N_scaler is the number of counts from the pulse generator as read out from the scaler fed to the CAMAC controller module and (N_pulser)_ADC is the number of pulses from the pulse generator as read out from the ADC (see Fig. 5). For an ideal incoming proton beam, the number of incident particles N_inc is constant in time. However, the beam exiting the Van de Graaff accelerator was not stable, with fluctuations in the direction of the beamline varying on a time scale from a few to several tens of minutes. This made it necessary to take normalisation measurements before and after each scattering measurement and average them for each scattering data point, so that

Φ_inc = (1/2) [N_inc,1 / Δt_inc,1 + N_inc,2 / Δt_inc,2],  (5)

where N_inc,1 and N_inc,2 are the counts in two consecutive normalisation measurements with integration times Δt_inc,1 and Δt_inc,2, respectively. Concerning the uncertainties, the one on the scattering angle is given mainly by the errors on the angular calibration, the detector aperture, and the indeterminate position of the impact point of the beam on the mirror surface. The uncertainty on the incident angle θ0 is dominated by the dimension of the aperture on the central detector and by the length of the target. It resulted in ∼0.1° for all the chosen scattering angles. Lastly, the uncertainty on the scattering efficiency is mainly given by the intrinsic fluctuation of the proton beam. Minor contributions are due to the count statistics and to the error on the solid angle Ω(θ, φ). The sum of these contributions results in statistical fluctuations of ±20% on the scattering efficiencies.
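The bookkeeping in Eqs. (2)-(5), as reconstructed above, reduces to a few arithmetic steps. The sketch below shows them with invented counts and integration times, including the dead-time correction from the pulser/scaler comparison and the averaging of the two bracketing normalisation runs; none of the numbers are from the experiment.

```python
def dead_time_factor(n_scaler, n_pulser_adc):
    """alpha = pulses emitted (scaler) / pulses actually registered by the ADC."""
    return n_scaler / n_pulser_adc

def scattering_efficiency(n_scat, dt_scat, alpha_scat, norm_runs, solid_angle_sr):
    """Efficiency per unit solid angle (Eq. 2) from dead-time-corrected rates.

    norm_runs is a list of (counts, integration_time, alpha) tuples for the
    normalisation measurements taken before and after the scattering run.
    """
    phi_scat = alpha_scat * n_scat / dt_scat
    phi_inc = sum(a * n / dt for n, dt, a in norm_runs) / len(norm_runs)
    return phi_scat / (phi_inc * solid_angle_sr)

# Invented numbers for illustration only.
alpha = dead_time_factor(n_scaler=10_000, n_pulser_adc=9_800)   # ~2% dead time
eff = scattering_efficiency(
    n_scat=1_200, dt_scat=600.0, alpha_scat=alpha,
    norm_runs=[(50_000, 120.0, alpha), (47_000, 120.0, alpha)],
    solid_angle_sr=8e-7,
)
print(f"scattering efficiency: {eff:.3g} per sr")
```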
Results and discussion
We measured the scattering efficiency at two different energies (the high- and low-energy data sets, hereafter) and at four different incident angles: 0.6°, 0.8°, 1.0°, and 1.2°, both on-axis and off-axis, the latter at an angle φ of about 2°. Each data set consists of on-axis and off-axis scattering efficiencies. Results are shown in Figs. 7 and 8, where the scattering efficiencies have been multiplied by the square of the incident angle (as in [1]) and are displayed as a function of the scattering angle divided by the incident one, i.e., Ψ = θ/θ0.
For the high-energy data set (Fig. 8) we used a beam at ∼590 keV from the accelerator, which was degraded by the Al foil down to 471±25 keV. This energy value was chosen mainly for purposes of comparison with the previous measurements on the eROSITA mirror sample ([2,3], cfr. Fig. 9). The rationale behind the low-energy value can be found in the work of [7], who showed that the highest transmission efficiency of soft protons is observed for protons impacting the mirrors at 40-60 keV, for both instruments on board Athena. Hence, it is crucial to investigate the scattering of soft protons at energies around and below 100 keV. Unfortunately, the present setup could not reach such low energies, limiting us to a proton beam with an energy of ∼340 keV at the exit of the accelerator, degraded to 172±30 keV by the Al foil. In both cases, the values of the incident energies were determined by simulations with the software TRIM (TRansport of Ions in Matter, [14]), already validated in [2].
As expected, the on-axis scattering efficiencies peak at the specular angle (Ψ ≈ 1) and are consistent with each other within the uncertainties. However, a higher spread is observed for the high-energy on-axis data set (Fig. 8, top panel), with efficiencies ranging from 0.03 to 0.07 at the peak of the distribution. The off-axis data also show a significant spread, which is expected in this case. Overall, the maximum scattering efficiency values are ∼0.07 and ∼0.02 for the on-axis and off-axis configurations, respectively, with the low-energy data set showing slightly smaller efficiencies than the high-energy one. Figure 9 shows the eROSITA measurements [2,3] overlaid on the SPO data, for both energies and for the on-axis and off-axis configurations. Though the SPO efficiencies are systematically higher than the eROSITA data, they are consistent within the error bars. Given this consistency, we applied the semi-empirical model proposed in [1] to the scattering efficiencies of SPO, as shown in Fig. 10.
Comparison with the eROSITA measurements
Overall, the model reproduces the scattering efficiency of the low-energy data set well, but overestimates the efficiency of the high-energy data set by a factor of ∼1.5. Nonetheless, it has to be borne in mind that at this stage we simply overlaid the semi-empirical model developed for eROSITA on the new SPO experimental data. A more accurate model, specific to SPO, can be obtained by fitting the data with the formula of [11] in the non-elastic approximation, as in [1], provided that energy loss measurements are retrieved from the raw data.
Fig. 9 Comparison of the eROSITA scattering efficiencies (blue dots) with the SPO ones (green dots for the low-energy set and red dots for the high-energy set), for the on-axis (top panels) and off-axis (bottom panels) data.
Fig. 10 Comparison between the experimental scattering efficiency of SPO (points) and the semi-empirical model developed from eROSITA data (solid line), for the low-energy (green) and high-energy (red) data sets and for the on-axis (top panels) and off-axis (bottom panels) measurements.
Lastly, we group the efficiency values from the two data sets by the incident angle, irrespective of the energy of the incident beam. Figure 11 shows that the data are fully consistent with each other and with the earlier eROSITA measurements when grouped by incident angle, without accounting for the energy. Once again, the semi-empirical model derived for the eROSITA data is overlaid on the data, resulting in generally acceptable agreement.
Conclusions
Within the EXACRAD project, we measured for the first time the scattering efficiency of a single wafer of SPO hit by low-energy protons at grazing incidence. Measurements were performed at two different energies, of about 470 keV and 170 keV, and at four different incident angles, 0.6 • , 0.8 • , 1.0 • , and 1.2 • , both on-axis and at an off-axis angle of about 2 • .
Hereafter, some major remarks:
- the scattering efficiencies show the trend expected from [11] and from the experimental data on eROSITA [2,3]: the on-axis data peak close to the specular reflection, while the off-axis data show a peak shifted to higher Ψ; the off-axis data reach lower efficiencies than the on-axis ones; higher incident angles resulted in higher scattering efficiency;
- the SPO data are generally consistent with the eROSITA data, though the high-energy data set shows a higher spread in efficiency;
- as for the eROSITA data, the scattering efficiency depends only very weakly on the energy of the incident beam;
- the semi-empirical model developed from the eROSITA experimental data acceptably reproduces the low-energy data set, while it results in higher efficiencies for the high-energy data set. The same model can be improved specifically for SPO, with a direct fit of the experimental data, provided that energy loss measurements are retrieved from the raw data.
As stated above, the experimental configuration used for this experiment is not suited for measurements at energies below 100 keV, which are those expected to contribute the most to the background of Athena [8]. Therefore, some possible changes to the setup could be explored in the future to enable measurements at lower energy ranges.
The work presented here is only the first step towards a thorough estimation of the SP flux expected at the focal plane of Athena. Indeed, after being scattered by the optics, SPs cross all the other elements along their path and, in particular, the filters located in front of each detector. As shown in [8], the filters not only reduce the energy of the protons but also alter their original trajectories. Hence, a thorough prediction of the overall transmitted energy of SPs can be achieved only by combining the processes of scattering from the optics, crossing of the filters, and energy release within the detectors. The latter two phenomena can be investigated, for instance, by simulations, as done, e.g., by [4]. If the estimated soft proton flux still turns out to be higher than the scientific requirement of 5 × 10 −4 cts s −1 cm −2 keV −1 in the 2-10 keV energy band for 90% of the observing time (cfr. Section 1), then appropriate solutions should be adopted. Currently, the hypothesis of a magnetic diverter specific for protons is under discussion and possible designs are under investigation. | 4,799.2 | 2021-10-01T00:00:00.000 | [
"Physics"
] |
Stereoselective synthesis of unnatural α-amino acid derivatives through photoredox catalysis
A protocol for stereoselective C-radical addition to a chiral glyoxylate-derived N-sulfinyl imine was developed through visible light-promoted photoredox catalysis, providing a convenient method for the synthesis of unnatural α-amino acids. The developed protocol allows the use of ubiquitous carboxylic acids as radical precursors without prior derivatization. The protocol utilizes near-stoichiometric amounts of the imine and the acid radical precursor in combination with a catalytic amount of an organic acridinium-based photocatalyst. Alternative mechanisms for the developed transformation are discussed and corroborated by experimental and computational studies.
Introduction
Unnatural α-amino acids constitute an important class of biologically relevant compounds that are widely used both in the pharmaceutical industry and for fundamental research within molecular and structural biology. 1 A number of pharmaceuticals based on unnatural α-amino acids are currently available, including angiotensin-converting enzyme (ACE) inhibitors for the treatment of cardiovascular and renal diseases, 2 antiviral medicines, 3 and others. 4 Recently, peptidomimetic α-ketoamide inhibitors based on unnatural α-amino acids have received increased attention as drug candidates for treatment of COVID-19 disease caused by the SARS-CoV-2 coronavirus, 5 highlighting the high demand for such building blocks.
A variety of synthetic strategies to access unnatural amino acid derivatives have been developed over the years, with some notable methods being catalytic asymmetric Strecker-type reactions, asymmetric hydrogenation of dehydroamino acids, and electrophilic and nucleophilic alkylation of glycine derivatives (Fig. 1A). 6 Among these, functionalization or reduction of α-imino esters offers a straightforward route to various enantiomerically enriched α-amino acids. 7 Traditionally, these strategies have employed polar retrosynthetic disconnections, which often require the use of (super)stoichiometric amounts of toxic and highly sensitive reagents at low temperatures, thereby limiting the substrate scope and practicality for scale up of these reactions. These limitations have recently been challenged by re-introduction of free-radical reaction manifolds, aided by the developments in base-metal catalysis, 8 electrosynthesis 9 and photoredox catalysis, 10 leading to a vast array of strategies for light-induced modification and synthesis of amino acids and peptides. 11 Among these, radical addition to imines through photoredox catalysis was demonstrated in symmetric 12 and asymmetric 13 fashion (Fig. 1B). In 2017, Alemán and co-workers reported a protocol for asymmetric radical addition to imines mediated by visible light. 13a The developed catalytic system made use of a chiral sulfoxide auxiliary group, commonly employed in the synthesis of chiral amines. 14 Here, the C-centered radical was generated through visible light-mediated reductive cleavage of the N-O bond in a redox-active phthalimide ester, followed by radical addition to the N-sulfinyl imine. The reductive nature of this protocol required the use of a stoichiometric amount of a reducing agent (Hantzsch ester). More recently, a related Ni-based catalytic system was described by Baran and co-workers. 15 This protocol employed a tetrachloro-substituted redox-active ester as the radical precursor, with Zn as a stoichiometric reducing agent and a Ni-based catalyst for mediating the C-C bond formation. Although this protocol displayed an impressive substrate scope, it is associated with moderate atom-economy, limiting its applicability for large-scale synthesis.
Results and discussion
Inspired by the catalytic systems developed by the Alemán 13a and Baran 15 groups, we sought to realize a protocol for diastereoselective decarboxylative radical addition to chiral N-sulfinyl imines that would utilize ubiquitous non-activated carboxylic acids as radical precursors. 16 A related direct decarboxylative addition process was attempted by the Alemán group for a benzaldehyde-derived N-sulfinyl imine under reaction conditions reported by MacMillan; 17 however, no formation of the desired product was observed (see the ESI to ref. 13a). Similarly, we observed no desired product with pivalic acid 2a as the radical precursor and N-sulfinyl imine 1 as the radical acceptor when the reaction was conducted in DMSO with [Ir(dF(CF3)ppy)2(dtbbpy)](PF6) as photocatalyst (Table 1, entry 1), presumably due to fast decomposition of N-sulfinyl imine 1. Gratifyingly, changing the solvent to α,α,α-trifluorotoluene (PhCF3) furnished the desired product 3a in a fairly good yield of 65%, although with poor diastereoselectivity (Table 1, entry 2). Using other bases in place of Cs2CO3 completely prohibited the reaction (for details on the optimization studies, see the ESI†), and the highly oxidizing photocatalyst 4CzIPN 18 failed to deliver the radical addition product (Table 1, entry 3). Fortunately, the highly oxidizing organic acridinium-based photocatalyst [Mes-Acr-Me](BF4) delivered product 3a with excellent diastereoselectivity, although in poor yield (Table 1, entry 4). Increasing the catalyst loading from 1 to 5 mol% and switching to the more stable N-phenyl-substituted photocatalysts [Mes-Acr-Ph](BF4) and [Mes-Me2Acr-Ph](BF4) 19 dramatically increased the yield of the stereoselective radical addition product up to 78% (Table 1, entries 5-7). Conducting the reaction in more conventional solvents, such as MeCN, CH2Cl2, and 2,2,2-trifluoroethanol (TFE), in place of PhCF3 resulted in diminished yields (Table S1, see the ESI†), highlighting the documented inertness of PhCF3 towards free-radical intermediates. 20 Changing the base to K2CO3 and increasing the base loading further improved the yield up to 85% (Table 1, entry 11). Finally, utilizing a slight excess of the acid radical precursor 2a delivered the desired product 3a in excellent yields (91% and 95% for 1.2 and 1.5 equiv. of 2a, respectively; Table 1, entries 13 and 14). Consistent with previous reports on radical additions to N-sulfinyl imines, the tert-butyl- and para-tolyl-substituted N-sulfinyl imines 4 and 5 proved to be inefficient as radical acceptors (Table 1, entries 15 and 16). 13a,15 In the case of the tert-butyl-substituted N-sulfinyl imine 4, it is likely that the transiently formed aminyl radical intermediate underwent decomposition to form an iminosulfanone (-N=S=O), thereby disrupting the catalytic cycle. 21 The substrate scope of the developed transformation was evaluated with a variety of non-functionalized and functionalized tertiary, secondary, and primary carboxylic acids (Fig. 2). For all of the amino acid derivatives, excellent diastereoselectivity at the α-position was observed (>95 : 5 dr). The radical precursors producing tertiary and secondary alkyl-substituted radicals provided the expected products in generally good to high yields (3a-e and 3k-n). The highly reactive primary alkyl radicals displayed lower selectivity for the addition reaction (3o and 3p), consistent with previous reports featuring unstable free-radical intermediates under related conditions. 22
Further optimization of the reaction conditions for the primary acids 2o and 2p did not result in improved yields (Tables S2 and S3†), illustrating the intrinsic instability of the respective radical intermediates and/or the photocatalyst under the employed conditions. Benzylic-type radicals were generally inefficient (see Fig. 2, unsuccessful substrates); however, a cyclopropyl-substituted benzylic radical and an indole-derived benzylic-type radical provided the expected products 3f and 3q, respectively, in satisfactory yields.
The carboxylic acid radical precursors that furnish stabilized α-heteroatom C-radicals generally provided the addition products in good to excellent yields. Gratifyingly, N-Boc-protected α-amino acid radical precursors based on pipecolic acid, proline, valine, and phenylalanine furnished the expected amino acid derivatives 3r-u in generally excellent yields, exemplifying a prominent synthetic route to biologically active α,β-diamino acids. 23 The α-O-substituted radicals derived from dialkyl (3v, 3w) and alkyl aryl ethers (3g-i, 3x) provided the expected products in moderate and excellent yields, respectively. To our delight, a primary α-S-substituted radical containing an aryl bromide functionality afforded the expected product 3y in satisfactory yield despite combining several structural features that can be deleterious under free-radical conditions. The sterically demanding carbohydrate-based radical derived from diprogulic acid 2j delivered the monosaccharide-amino acid conjugate product 3j in satisfactory yield and excellent diastereoselectivity at both the α- and β-stereocenters (>95 : 5 α dr, >95 : 5 β dr).
The N-sulfinyl amide functionality in product 3a could be conveniently removed under mild acidic conditions in quantitative yield (Fig. 2E). The absolute configuration at the α-stereocenter in the obtained α-amino ester was then determined as (R), in complete agreement with the previous observations and the results of computational studies (for details, see the ESI†). Similarly, removal of the N-sulfinyl amide functionality was carried out for the more complex products 3q, 3x and 3z, and the corresponding α-amino esters 6q, 6x and 6z were isolated in excellent yields (>95%).
Based on literature precedents, a mechanism for the developed transformation was proposed and corroborated by fluorescence quenching and computational studies (Fig. 3). 13a,24 Initially, the acridinium photocatalyst [Mes-Me2Acr-Ph]+ (Acr+) is excited by visible light (λmax ≈ 425 nm) to a highly oxidizing excited state Acr+* (E(Acr+*/Acr•) ≈ 2.09 V vs. SCE). 25 In this state, the photocatalyst can abstract an electron from the deprotonated carboxylic acid via a single-electron transfer (SET) event to generate a carboxylate radical while being reduced to the acridinium radical Acr•. The steady-state and time-resolved fluorescence quenching measurements for tetrabutylammonium pivalate as the model radical precursor demonstrated efficient quenching of the excited acridinium photocatalyst with a Stern-Volmer quenching constant K_SV = 237.5 M−1 and a bimolecular quenching constant k_q = 6.8 × 10^9 M−1 s−1, while no quenching was observed for the free pivalic acid (Fig. 3B, S3 and S4†). The carboxyl radical formed via SET then extrudes CO2 to yield a C-centered radical, which undergoes addition to the N-sulfinyl imine 1 in the key step of the reaction, forming an α-alkylated N-centered radical. Finally, the N-centered radical is reduced by the acridinium radical Acr•, closing the photocatalytic cycle and furnishing the desired product 3 upon protonation (Fig. 3A).
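The two quenching constants quoted above are linked by K_SV = k_q·τ0, where τ0 is the excited-state lifetime of the photocatalyst; the short sketch below (using the numbers from the quenching experiments) shows that they imply a lifetime of roughly 35 ns, and that K_SV is recovered as the slope of a Stern-Volmer plot, I0/I = 1 + K_SV[Q] (the quencher concentrations in the example are invented).

```python
import numpy as np

K_SV = 237.5        # Stern-Volmer constant, M^-1 (from the quenching data)
k_q = 6.8e9         # bimolecular quenching constant, M^-1 s^-1

# K_SV = k_q * tau0  =>  excited-state lifetime of the photocatalyst
tau0_ns = K_SV / k_q * 1e9
print(f"implied excited-state lifetime: {tau0_ns:.1f} ns")   # ~35 ns

# Recovering K_SV from a (hypothetical) Stern-Volmer plot: I0/I = 1 + K_SV*[Q]
quencher_conc_M = np.array([0.0, 0.002, 0.004, 0.006, 0.008])
I0_over_I = 1.0 + K_SV * quencher_conc_M          # idealised, noise-free data
slope, intercept = np.polyfit(quencher_conc_M, I0_over_I, deg=1)
print(f"fitted K_SV = {slope:.1f} M^-1, intercept = {intercept:.2f}")
```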
In order to gain a better understanding of the stereodetermining C-C bond forming step in the proposed mechanism, DFT calculations were performed at the M062X-D3/6-311+G(d,p) level of theory (for details, see the ESI†). First, the structure of the N-sulfinyl imine radical acceptor 1 was evaluated. Previously, Alemán and co-workers tentatively suggested an s-cis conformation around the N-S bond as being more stable in such compounds due to the hydrogen bonding between the imine proton and the sulfoxide oxygen. 13a Such a conformational preference would then lead to the α-(R) product when the S(R)-sulfinyl imine is employed as the radical acceptor. This stereochemical outcome was indeed observed for both Alemán's and our catalytic system. The calculations confirmed that the s-cis conformer is more stable than the s-trans-1 conformer by 3.8 kcal mol−1, corresponding to a >99.8 : 0.2 ratio between the conformers from the Boltzmann distribution at room temperature (Fig. 3C). In the s-cis conformer, the hydrogen bonding between the imine hydrogen and the sulfoxide oxygen could be observed from the noncovalent interaction (NCI) plots, while no hydrogen bonding was present in the s-trans-1 conformer (for a detailed discussion, see the ESI†).
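The conformer ratio quoted above follows directly from the Boltzmann distribution; a sketch of the arithmetic, assuming the computed energy difference of 3.8 kcal mol−1 and room temperature (298 K), reproduces the >99.8 : 0.2 preference for the s-cis conformer.

```python
import math

R_KCAL = 1.987204e-3      # gas constant, kcal mol^-1 K^-1
T = 298.15                # room temperature, K

delta_G = 3.8             # G(s-trans) - G(s-cis), kcal mol^-1 (computed value)
K = math.exp(-delta_G / (R_KCAL * T))       # population ratio s-trans : s-cis
frac_cis = 1.0 / (1.0 + K)

print(f"s-trans/s-cis ratio: {K:.2e}")
print(f"s-cis population: {100 * frac_cis:.2f}%")   # ~99.8%
```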
Subsequently, the radical addition step was evaluated for the tert-butyl radical donor and the N-sulfinyl imine radical acceptor 1. The computed Gibbs free energy and enthalpy diagrams for the reaction are presented in Fig. 3D. The formation of the (R,R)-diastereomer of 3a was found to be favored both kinetically and thermodynamically: the computed activation barrier was found to be 3.8 kcal mol−1 smaller for the re-addition compared to the si-addition, while the (R,R)-diastereomer product is 2.5 kcal mol−1 more stable than the (R,S)-diastereomer. Interestingly, the difference in the computed activation barriers for the re- and si-addition reactions originated almost exclusively from the enthalpic terms (ΔΔG‡ = 3.8 kcal mol−1, ΔΔH‡ = 3.4 kcal mol−1). The better stabilization of the re-TS is in part due to the stronger hydrogen bonding between the imine hydrogen and the sulfoxide oxygen for this transition state, as evident from the calculated bond distances and the NCI plots (Fig. 3D and S7†). Additionally, significant steric crowding occurs in the si-TS, where the incoming tert-butyl radical requires the mesityl group to become almost completely coplanar with the sulfoxide S=O bond. In contrast, the mesityl group and the S=O bond in the re-TS are out of plane by about 50°, while the incoming tert-butyl radical experiences no steric crowding.
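Under kinetic control, the predicted diastereomeric ratio follows from the barrier difference via transition-state theory, dr = exp(ΔΔG‡/RT); the sketch below, assuming the computed ΔΔG‡ of 3.8 kcal mol−1 at 298 K, shows why a barrier difference of this size lies well above the >95 : 5 dr detection threshold reported experimentally.

```python
import math

R_KCAL = 1.987204e-3     # gas constant, kcal mol^-1 K^-1
T = 298.15               # K

ddG_act = 3.8            # G‡(si) - G‡(re), kcal mol^-1 (computed value)
ratio = math.exp(ddG_act / (R_KCAL * T))    # rate(re-addition) / rate(si-addition)
dr_major = 100.0 * ratio / (1.0 + ratio)

print(f"predicted (R,R):(R,S) ratio ~ {ratio:.0f} : 1")
print(f"i.e. about {dr_major:.1f} : {100 - dr_major:.1f} dr")
```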
An alternative mechanism for a related radical addition to imine derivatives was proposed by Ooi and co-workers. 26 In this mechanism, the key C-C bond-forming step was found to proceed through radical-radical coupling between a C-centered radical and an α-amino radical formed by one-electron, one-proton reduction of an imine substrate. However, under our conditions such a mechanistic pathway seems unlikely due to the weak reducing ability of the one-electron reduced form of the employed acridinium photocatalyst (E1/2(Acr+/Acr•) ≈ −0.32 V vs. SCE, Fig. 3B). As opposed to the conditions reported by Ooi and co-workers, where strongly reducing [Ir(ppy)2(bpy)]+-type photocatalysts (E(IrIII/IrII) ≈ −1.5 V vs. SCE) were used, electron transfer from Acr• to N-sulfinyl imine 1 (Ep/2 ≈ −1.24 V vs. SCE, Fig. 3B) should not be favored. However, a contribution from the radical-radical coupling pathway would explain the low diastereoselectivity (4 : 1 dr) during formation of product 3a when the reaction was conducted with the [Ir(dF(CF3)ppy)2(dtbbpy)](PF6) photocatalyst (Table 1, entry 2). Indeed, this photocatalyst displayed a relatively low reduction potential (E1/2(IrIII/IrII) = −1.10 V vs. SCE, Fig. 3B), sufficient to reduce the N-sulfinyl imine substrate 1 to the corresponding α-amino radical. The conformational analysis of this radical then revealed nearly free rotation around the N-S bond with a barrier of ca. 2.5 kcal mol−1, while N-sulfinyl imine 1 displayed a significantly higher rotation barrier of ca. 8.0 kcal mol−1 and only one dominant conformer. Addition of the tert-butyl radical to the α-amino radical would therefore be expected to proceed with low, if any, diastereoselectivity. The low diastereoselectivity could also be explained by product epimerization during the reaction; however, no epimerization was observed when product 3a was subjected to comparable reaction conditions with the Ir-based photocatalyst.
Conclusions
In conclusion, a practical protocol for stereoselective synthesis of various α-amino acids has been developed, employing ubiquitous carboxylic acids as radical precursors and an organic photocatalyst under visible light irradiation. This protocol allows for the synthesis of highly functionalized α-amino acids, which are challenging to prepare through traditional two-electron reaction manifolds. The protocol utilizes near-stoichiometric amounts of reagents and does not produce large quantities of waste, which is an intrinsic disadvantage of the previously described systems utilizing redox-active esters as radical precursors.
Author contributions
A. S. performed the optimization studies, the major part of the substrate scope investigation, and the electrochemical and spectroscopic studies, and wrote the manuscript. A. A. performed the computational studies and part of the substrate scope investigation. E. V. S., J.-Q. L., and B. B. performed part of the substrate scope investigation. A. Z. T. performed part of the analytical measurements. B. P. K. and J. M. G. assisted during data acquisition and analysis of the spectroscopic studies. M. D. K. conceived and directed the project. P. D. and M. D. K. supervised the project. All authors discussed the results and approved the final version of the manuscript.
Conflicts of interest
There are no conflicts to declare. | 3,543.4 | 2021-03-03T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Subgraphs Matching-Based Side Information Generation for Distributed Multiview Video Coding
Introduction
Multiview video coding (MVC) has opened a new paradigm for a wide variety of interactive multimedia applications. In many MVC systems, fundamental efforts have been dedicated to exploiting the adjacent views in addition to the traditional temporal and spatial correlations within a single view. The availability of multiple views benefits many image processing tasks such as enhancement, segmentation, or object recognition. However, existing inter-view prediction assumes that the video frames from different views can be freely exchanged or are simultaneously available at the encoder [1]. We should be aware that communication between cameras with a tremendous data volume is impractical. Inspired by lossless Slepian-Wolf and lossy Wyner-Ziv source coding theory [2,3], where separate encoding of correlated sources can approach the joint entropy rate provided that joint decoding is executed with known correlation, Distributed Multiview Video Coding (DMVC) has emerged to attain the benefits inherent to distributed video coding (DVC) [4].
Suppose X and Y are correlated sources, termed the source data and side information [2]. Traditional source coding assumes that Y is available at both the encoder and the decoder, in which case the rate-distortion (R-D) function for X given Y is R_X|Y(D). Conversely, the distributed source coding (Wyner-Ziv) theorem assumes that Y is available only at the decoder, and the encoder has access only to the correlation between X and Y; the corresponding rate-distortion function is denoted R^WZ_X|Y(D). Surprisingly, a rate loss R^WZ_X|Y(D) − R_X|Y(D) = 0 has been proved feasible for a Gaussian memoryless source and the mean square error (MSE) distortion metric [3]. Pradhan et al. have also proved that, in theory, there is no rate loss for arbitrary side information Y and independent Gaussian noise [5]. For general distributions and an arbitrary distortion metric, Zamir [6] has proved that the rate loss is less than 0.5 bit/sample. Several practical Slepian-Wolf and Wyner-Ziv video coding approaches have been proposed [7][8][9][10][11][12][13], where temporal prediction of the side information for the estimated frame is performed at the decoder side rather than the encoder side. Pradhan and Ramchandran [8] contributed a syndrome-based DVC framework using cosets, which encodes the residue of the Wyner-Ziv frame with a traditional block-based prediction coding scheme of modest motion-search and computational complexity. Because the operational block length is small, PRISM might adopt relatively short BCH block codes. Aaron et al. [9] developed a transform-domain DVC scheme with intraframe encoding and interframe decoding, which uses a Turbo coder for each subband. Importantly, both the side information Y and the correlation channel between the coded source and the side information impose essential constraints on DVC coding performance.
Due to the extremely large amount of data associated with multiview video, efficient compression techniques that exploit the inherent similarities of the multiview imagery, namely inter-view and temporal similarity, are essential for 3D scene communication. Temporal similarities have long been exploited through a variety of motion-compensated prediction (MCP) methods in hybrid video compression standards, for example, MPEG-4, H.264, and WM9. According to the level of geometric redundancy exploited for multiview imagery, the various multiview video compression algorithms can be categorized into three classes: 3D model-based algorithms, disparity/depth-based algorithms, and distributed compression.
In 3D model-based/model-aided algorithms, the geometry of the objects in the scene is recovered using camera parameters, which are obtained by camera calibration or shape-from-silhouette techniques. Scene geometry is explicitly used to convert images to view-dependent texture maps prior to compression [14][15][16]. However, there is a high degree of freedom between multiple views, and 3D scene geometry is rarely available or accurately estimable for inter-map correlation.
In disparity/depth-based algorithms, scene geometry is implicitly used by performing disparity prediction and compensation across the different views or by combining depth information of each view. Note that disparity is the displacement of corresponding points between different shooting positions of the cameras. A typical example is the scalable hybrid predictive coding (SHPC) algorithm [17,18], where one view is compressed as a base layer by normal single-view compression and the other views as enhancement layer(s) in combination with multiple depth maps. Herein, disparity-compensated prediction (DCP) is used to reduce the inter-view redundancy. The Joint Video Team (JVT) has also been developing the Joint Multiview Video Model (JMVM) along an H.264/AVC-based trajectory [19]. However, disparity estimation (DE) to obtain the dense map of corresponding points across different views is still an open challenge in the computer vision paradigm.
Distributed compression algorithms compress each video stream individually without geometric priors. For DMVC, flexible prediction fusion methods between temporal and view correlations have been seriously considered for generating the side information at the decoder. Previously, Zhu et al. applied Wyner-Ziv coding to compress data acquired by a large light field system [20]. Artigas et al. then used View Synthesis Prediction (VSP) to generate inter-view correlated side information [21]. However, VSP needs depth information for each frame and is not realistic due to the complex appearance of real scenes [22]. In [23], a mixed prediction method is applied in the wavelet transform domain. However, the coding performance is limited without explicit inference of the correlation. Revisiting the transformation from signals to bases, we recognize that it generally achieves two desirable properties: variable decoupling and dimension reduction. It is shown in harmonic analysis that the Fourier, wavelet, and ridgelet bases are independent components for various ensembles of mathematical functions. Unfortunately, the ensemble of natural images is clearly different from those function classes, which degrades the correlation estimation and rate-distortion performance in DMVC. Therefore, image components must be adapted to natural images, which leads to sparse coding with an overcomplete basis or dictionary. Going beyond image bases, a texton-like representation consisting of a number of image bases at various geometric, photometric, and dynamic configurations is taken into account. The basic idea was presented in our previous work [24], where a feature-based Wyner-Ziv coding framework (FWZC) for DMVC was explored to exploit constrained relaxation with multiple side information hypotheses and high-level feature matching at the decoder.
In this paper, we present a novel graph matching-based FWZC scheme. It integrates graph-based segmentation and matching to generate inter-view correlated side information with significant rate-distortion performance and without knowledge of the camera parameters. It is inspired by subgraph semantics and the sparse decomposition of high-dimensional scale-invariant feature data. The sparse feature data, serving as a good hypothesis space, are employed to enable best-matching optimization of inter-view side information with compact syndromes inferred from the relaxed coset. A priori knowledge extracted from multiple image descriptions of neighboring views should reinforce a plausible compensation and approximation to the original information in a converged sense. The graph-based representations of multiview images are adopted as the constrained relaxation, which assists the inter-view correlation matching for subgraph semantics of the original Wyner-Ziv image via graph-based image segmentation together with the scale-invariant feature detector MSER (maximally stable extremal regions) and descriptor SIFT (scale-invariant feature transform). In order to find distinctive feature matches with a more stable approximation, linear (PCA-SIFT) and nonlinear (locally linear embedding, LLE) projections are adopted to reduce the dimension of the high-dimensional SIFT descriptors, and a TPS (thin-plate spline) warping model is used to capture a more accurate inter-view motion model in the 3D angle of view.
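As a rough illustration of the feature-extraction-and-matching ingredient described above, the sketch below detects SIFT keypoints in two neighbouring views and keeps distinctive correspondences with Lowe's ratio test, using OpenCV; the file names are hypothetical, and the MSER region detection, PCA/LLE dimensionality reduction, and TPS warping that the full scheme adds on top are omitted here.

```python
import cv2

def match_views(path_left, path_right, ratio=0.75):
    """Detect SIFT keypoints in two neighbouring views and return good matches.

    Requires OpenCV >= 4.4 (SIFT lives in the main module there); the paths
    are placeholders for two camera views of the same scene.
    """
    img1 = cv2.imread(path_left, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_right, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Two nearest neighbours per descriptor, then Lowe's ratio test to keep
    # only distinctive correspondences between the views.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    # Corresponding point coordinates, e.g. as input to a warping model.
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2

# Example (hypothetical file names):
# pts_left, pts_right = match_views("view0_frame10.png", "view1_frame10.png")
```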
This paper is organized as follows. Section 2 presents the DMVC architecture and formulates the subgraph-based DMVC scheme with constrained relaxation. Section 3 details the implementation of the subgraph-based DMVC scheme, covering graph-based image segmentation, feature extraction and sparse description, and graph-based matching with warps. Section 4 presents the experimental results. Section 5 concludes the paper and discusses future directions.
Distributed MVC.
In video coding standardized by MPEG or the ITU-T H.26x recommendations, the encoder and decoder jointly exploit the statistics of the source signal. Separate encoding of correlated sources can approach the rate of joint entropy, provided joint decoding is executed with known correlation. Figure 1 shows a practical Wyner-Ziv video codec architecture.
A blockwise DCT is first applied to a Wyner-Ziv frame I_wz. For each DCT coefficient band of a Wyner-Ziv frame I_wz (even frames), the Wyner-Ziv codec uses a quantizer, bit-plane extraction, and a Slepian-Wolf codec (turbo or LDPC) to generate layered parity bits. These parity bits are punctured and transmitted upon request from the decoder through a feedback channel. At the decoder, the odd frames are conventionally decoded to generate the side information. The side information can be seen as a noisy version of the Wyner-Ziv frames, and the decoder employs a Laplacian noise model for error correction of the received codes. The Laplacian parameter is estimated from the statistics of previously decoded frames. If the decoder cannot reliably decode the original symbols, more parity bits are requested from the encoder buffer through the feedback channel.
The decoder and the reconstruction modules assume a Laplacian residual distribution between the Wyner-Ziv frame I_wz and the side information Y. Let d be the difference between corresponding coefficients in I_wz and Y; the distribution of d is approximated as f(d) = (α/2)·e^(−α|d|) for each subband. Let c_j^i denote the ith bit of a coefficient c_j and ĉ_j^i its estimated reconstruction. The probability of each bit value is computed from the residual model by evaluating the Laplacian density at the distance between y_j and the candidate reconstruction built from the already decoded bit-planes, the candidate bit value I(ĉ_j^i) (1 or 0) weighted by the magnitude m_i of the ith bit-plane, and an offset that compensates for the not-yet-decoded lower part of c_j; here y_j is the side-information coefficient corresponding to c_j. Because the lower bit-planes of c_j are still undecoded, the offset is chosen according to the distribution parameter and the quantization step size.
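The exact probability expression was not reproduced above, so the following Python sketch only illustrates the described approximation under the Laplacian residual model; the function and argument names (bit_probability, decoded_part, offset) are hypothetical, and the numbers in the example are arbitrary.

```python
import math

def bit_probability(y_j, alpha, magnitude, decoded_part, offset):
    """Estimate P(bit = 1 | y_j) for one bit-plane of a quantized coefficient.

    Each candidate bit value b gives a tentative reconstruction
    decoded_part + b * magnitude + offset; its likelihood is the Laplacian
    density f(d) = (alpha/2) * exp(-alpha * |d|) of the distance to the
    side-information coefficient y_j.
    """
    likelihood = []
    for b in (0, 1):
        candidate = decoded_part + b * magnitude + offset
        likelihood.append(0.5 * alpha * math.exp(-alpha * abs(candidate - y_j)))
    return likelihood[1] / (likelihood[0] + likelihood[1])

# Example: side-information coefficient 13.2, alpha estimated from previously
# decoded frames, current bit-plane magnitude 8, value 5 already decoded from
# higher planes, offset 2 compensating the still-undecoded lower bit-planes.
p1 = bit_probability(y_j=13.2, alpha=0.25, magnitude=8, decoded_part=5, offset=2)
print(f"P(bit = 1) = {p1:.3f}")
```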
DMVC applies the benefits of distributed video coding to the Multiview camera setup in Figure 2. Intra frames and Wyner-Ziv frames, denoted I and WZ respectively, are arranged in an interlaced way. Two directions are defined: the temporal direction, from which the intra-view side information is generated by temporal interpolation, and the view direction, from which constrained-relaxation matching is applied to infer the inter-view correlated side information. The whole DMVC system consists of independent encoders and a joint decoder; thus low encoding complexity and high coding performance can be achieved.
Feature-Based Wyner-Ziv Coding with Constrained Relaxation.
The basic idea of Slepian-Wolf coding theory is to partition the space of all possible source outcomes into disjoint bins (sets). Usually, these bins are the cosets of some linear channel code matched to the specific correlation model. FWZC extends this idea by using high-level features, F(I_wz), as constraints. I_wz is the WZ frame, and the group of features F(I_wz) constructs a relaxed frame coset U, which consists of the set of frames that are inferred as possible representations of I_wz under the available constraints and syndromes.
Here ⊕ denotes the operation that uses z(g) (the gradually received parity bits) to correct Y, the approximation (side information) of I_wz. The fundamental idea is thus to use F(I_wz) to decode the original video frame by finding the best match in U{I_wz(g)}. Figure 3 depicts the coding structure of the FWZC system. Compared with the traditional DCT-domain Wyner-Ziv coding of Figure 1, this coding procedure adds two new modules: a feature extraction/matching module and a side-information fusion module. It extracts scale-invariant local features as high-level constraints, which are transmitted to the decoder to indicate what the source frame looks like (Y); the more distinctive the information F(I_wz), the more easily the target I_wz can be identified, so that fewer parity bits z(g) are required to decode I_wz. This is equivalent to a learning-based optimization problem over sparse data. In essence, prior information can be used to choose an efficient input representation, or a good hypothesis space, that enhances the performance of the learning machine.
Given a set of data obtained by random sampling of a noisy function, such a problem is ill-posed, as infinitely many functions pass through the data. The common remedy in regularization theory is a stabilizer that assumes the function has some intrinsic property, for example smoothness. This leads to the underlying problem in (3) of finding the function that minimizes a combination of the empirical convex loss and the prior information, with different approximations balancing fitness against the prior constraints. Our attempt is therefore devoted to finding distinctive frame information F(I_wz) that yields a more likely approximation Y of I_wz.
Specifically, the "Side information generation" module in Figure 3 firstly generates a temporal side information. In terms of the obtained MVs, the "Arbitrator" module selects desired regions where spatial cue is required and requests local features of these regions from the encoder. With the received local features, interview side information is generated in the "Side information generation" module. Finally, the "Arbitrator" fuses the interview side information and temporal side information to generate the final side information for Multiview Wyner-Ziv decoding. This setup deduces the computational complexity of the encoder by only extracting local features from partial samples of the frame.
Note that when bad matches occur, either within wide-baseline views or due to occlusion, we may (1) use the RANSAC algorithm [25] to cope with a large proportion of outliers among the candidate points; it uses the smallest possible point set, in contrast to conventional sampling techniques; or (2) use image extrapolation or inpainting approaches [26] to synthesize the badly matched region from the surrounding areas and the inter-view reference images via a partial differential equation (PDE). Typically, this can be interpreted as an iterative optimization algorithm that approximates the energy minimum using belief propagation.
Subgraph-Based Matching for FWZC.
In this paper, we explore this feature-based idea with multiple graph representations to break the bottlenecks mentioned above. Usually, vertex-based features (points) are good for texture (high entropy), while edge-based features (lines, curves, axes, sketches) are good for cartoon content (low entropy). As natural images decompose into texture and cartoon, mixed graph-based representations are obtained through graph-based image segmentation together with the scale-invariant feature detector MSER and descriptor SIFT. To find distinctive feature matches throughout the overcomplete space with a more stable approximation, PCA-SIFT and TPS warping models are adopted to reduce the dimension of the SIFT descriptors and to capture a more accurate inter-view motion model from the 3D viewing angle.
The graphs are obtained through effective image segmentation, while the points are produced by dimension reduction, yielding low-dimensional feature descriptors. Through such representations, the high-level feature aggregation F(I_wz) is supplemented so as to impose a more distinctive constraint on FWZC. Since the subgraph-based matching method exploits the inter-view correlations, we focus on the generation of the inter-view side information Y_v. Figure 4 illustrates the generic inter-view side-information process at the decoder. Given the co-located left (right) view of the current WZ frame, I_v, the attribute graph is given by the 3-tuple G = (V, E, D) (∈ F(I_v)), with V a set of vertices consisting of distinctive feature points, E groups of edges belonging to the various subregions, and D the descriptors for each v_i ∈ V, allowing for significant local shape distortion and changes in illumination. This co-located view feature information (V, E, D) can be determined at the decoder thanks to the availability of I_v. For the WZ frame I_wz, the attribute graph is G' = (V', E', D') (∈ F(I_wz)). The vertex-based feature (V', D') is extracted at the encoder and transferred to the decoder; the edge-based feature (E') is determined at the decoder.
The goal of segmentation at the decoder in Step 1 is to split each image into n + 1 regions that are likely to contain similar disparities, giving a promising compensation for the separated regions. The corresponding partition of G is denoted G = {g_0, g_1, ..., g_n}, where each subgraph has an attribute graph g_i = (V_i, E_i, D_i); for each of the n + 1 subgraphs a matching function to its counterpart in the WZ frame is defined. Features are efficiently matched in Step 2 by identifying the nearest-neighbor keypoint, i.e., the one with the minimum Euclidean distance between the dimension-reduced invariant descriptor vectors. In this step, the correlations between adjacent views are exploited through graph segmentation and point-feature matching. Since this is done at the decoder, where I_wz is not available, the feature information (V', D') must be extracted at the encoder in advance and transferred to the decoder. As a result of the matching, the graph G' is also partitioned into n + 1 subgraphs G' = {g'_0, g'_1, ..., g'_n}.
Now we have pairs of matched attribute subgraphs (g_i, g'_i). In Step 3, the inter-view side information Y_v^l and Y_v^r from the left and right views of the WZ frame is obtained by the geometric transform, i.e., the TPS warping F_i(x, y).
Finally, a view-fusion method is applied in Step 4 to generate the inter-view side information Y_v. The subgraph-matching-based side-information generation algorithm is summarized in Table 1.
Graph-Based Segmentation.
First, a rough segmentation of the correlated left (right) view image I_v is performed using the graph-based segmentation method [27]. After blurring with a Gaussian filter, a segmentation consisting of a small number of large regions is obtained. All feature nodes v_i ∈ V are divided among a small, a priori unknown number of n + 1 subgraphs for the graph matching in Step 2 (cf. (4)). We assume that n + 1 is small and the subregions are large enough that each V_i contains sufficient feature points v_j for accurate matching. The first layer g_0 always consists of the background and of small subregions whose feature points are not sufficient. Segmentation results for different scales of the subgraph-matching semantic parameters are shown in Figure 5, where k sets the scale of observation (a larger value causes a preference for larger components) and σ is the smoothing factor (σ = 0 means no smoothing).
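As an illustration of this decoder-side step, the sketch below runs the Felzenszwalb-Huttenlocher graph-based segmentation as implemented in scikit-image, assuming that its scale, sigma and min_size arguments play the roles of k, σ and the minimum sub-region size quoted above; the sample image is only a stand-in for the co-located view I_v.

```python
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import felzenszwalb

# Decoder-side rough segmentation of the co-located view image.
image = astronaut()                                  # stand-in for I_v
labels = felzenszwalb(image, scale=300, sigma=0.8, min_size=1000)

n_regions = len(np.unique(labels))
print(f"{n_regions} sub-regions (candidate subgraphs g_0 ... g_n)")
```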
Affine-Invariant Feature Extraction of Subgraphs.
Many existing image-matching algorithms have difficulty handling viewpoint changes. Local invariant features have recently been shown to be robust to occlusion, background clutter, and content changes [28]. The underlying observation is that even though the regions themselves are covariant, the normalized image pattern they cover and the feature descriptors derived from it are typically invariant. Among the popular scale-invariant and affine-invariant feature detectors, the maximally stable extremal regions (MSER) algorithm has been evaluated to give the best results, as shown in Figure 6. Likewise, the scale-invariant feature transform (SIFT) has been identified as the descriptor most resistant to common image deformations [29] and to the affine distortions between different views. Based on MSER and SIFT, a robust affine-invariant feature extraction is put forward to benefit the subsequent subgraph matching. In the graph representation G = (V, E, D), V is the set of vertices consisting of such distinctive feature points, localized at local peaks in a scale-space search and stable over transformations, and D as descriptor represents the local image gradients in each feature point's neighborhood. At the decoder, features of the co-located left and right views of the WZ frame can be extracted; at the encoder, the associated features are extracted from the WZ frames.
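A rough sketch of this detector/descriptor stage is given below using OpenCV's MSER and SIFT implementations (SIFT_create requires a sufficiently recent OpenCV build); the synthetic images and the brute-force matching step are illustrative stand-ins, not the exact pipeline used in the paper.

```python
import numpy as np
import cv2

# Synthetic stand-ins for the co-located left view and the other view:
# blurred noise and a shifted copy of it (purely illustrative).
rng = np.random.default_rng(0)
left = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 1.5)
right = np.roll(left, 4, axis=1)

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(left)         # maximally stable extremal regions

sift = cv2.SIFT_create()                      # requires OpenCV >= 4.4
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matches = []
if des_l is not None and des_r is not None:
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)
print(len(regions), "MSER regions,", len(matches), "SIFT matches")
```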
Principal component analysis (PCA) has been widely used in data analysis, and PCA-based SIFT yields more compact, distinctive and accurate local descriptors [30]. It reduces the dimension of the descriptor through the linear projection y = A_k x.
Table 1: Subgraph matching-based side-information generation algorithm.
Step 1. Use graph-based segmentation, the subgraph-based scale-invariant feature detector and descriptor, and dimension-reduction algorithms to obtain the graph-based sparse data space.
Step 2. Use the minimum Euclidean distance to find pairs of matching subgraphs.
Step 3. Generate the left and right inter-view side information with the TPS warping model.
Step 4. Fuse the left and right results to obtain the inter-view side information Y_v.

In the projection y = A_k x, x is the normalized image gradient vector, the projection matrix A_k represents the offline-computed eigenspace, and y is the k-dimensional PCA-SIFT descriptor vector. The local image patch surrounding each interest point is normalized so that its dominant orientation points in the same direction, which creates the redundancy that makes PCA effective. This normalized local gradient patch (41 × 41) is reshaped into a vector whose dot products with the 20 pre-learned PCA basis vectors are computed, producing a signed 20-element integer vector that serves as the descriptor of that interest point. Through the four stages of PCA-SIFT in Figure 7, features are matched by identifying the nearest neighbor in the database that stores the candidate features extracted from the left and right views; the nearest neighbor is the keypoint with the minimum Euclidean distance between invariant descriptor vectors. The feature-matching problem is solved for each feature point v_j ∈ V_i against the candidates v'_k ∈ V'_i, where l is the number of feature points in V_i. Having found the pairs of matched feature points {v_j, v'_j}, with v'_j ∈ V'_i or v'_j ∈ φ, the matching function in (5) can be determined for each subgraph. Table 2 compares the prediction results of SIFT and PCA-SIFT for three Multiview video sequences, "Flamenco1", "Race1" and "Golf". In this paper, the dimension n of the PCA-SIFT feature space is set to 20; according to the analysis in [30], this achieves a good trade-off between matching accuracy and feature-space dimension, and all following experiments use n = 20. Both SIFT and PCA-SIFT use the same 6-parameter affine-transform prediction model and an equal number of matching keypoints (9 pairs of features per WZ frame) to generate the inter-view side information. From the average PSNR of the estimated frames, PCA-SIFT's matching accuracy at the keypoint level translates into good prediction performance. As a nonlinear dimensionality reduction, locally linear embedding (LLE) identifies the underlying structure of the manifold and does not suffer from local minima. Its procedure is: (1) compute the neighbors of each data point; (2) compute the weights that best reconstruct each data point from its neighbors, minimizing the reconstruction cost by constrained linear fits; (3) compute the low-dimensional vectors best reconstructed by those weights, minimizing the resulting quadratic form via its bottom nonzero eigenvectors. Here X denotes the 128-dimensional SIFT descriptors and R the reduced data. Figure 8 shows an illustrative comparison of a sampled frame under linear (PCA) and nonlinear (LLE) reduction.
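The sketch below illustrates the dimension-reduction and nearest-neighbour matching steps with scikit-learn. Note that genuine PCA-SIFT projects normalized 41 × 41 gradient patches onto a pre-learned eigenspace; as a simplifying assumption, plain 128-D SIFT descriptors are projected here, and the descriptor arrays are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
sift_left = rng.random((500, 128))     # placeholder 128-D SIFT descriptors
sift_right = rng.random((480, 128))

# Linear projection (PCA-SIFT style): 20-D eigenspace learned from the data.
pca = PCA(n_components=20).fit(np.vstack([sift_left, sift_right]))
left_20, right_20 = pca.transform(sift_left), pca.transform(sift_right)

# Nearest-neighbour matching by minimum Euclidean distance in the reduced space.
nn = NearestNeighbors(n_neighbors=1).fit(right_20)
dist, idx = nn.kneighbors(left_20)

# Non-linear alternative mentioned in the text: locally linear embedding.
lle = LocallyLinearEmbedding(n_neighbors=30, n_components=20)
left_lle = lle.fit_transform(sift_left)
print(idx[:5].ravel(), left_lle.shape)
```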
Thin Plate Spline (TPS).
Thin-plate spline warps have been shown to be very effective as a parameterized model of the optic-flow field between images of various deforming surfaces. The closed-form minimizer of the TPS is parameterized by a global affine matrix d and a local warping coefficient matrix c. Given K pairs of matched feature points v_j, v'_j for each subregion extracted from (10) as control points, the spatial interpolation function can be written for each subregion, where v_j ∈ V_i, d is a 3 × 3 affine-transform matrix, and c is a K × 3 matrix describing the non-affine deformation. The kernel function φ(‖z − v_j‖) is a 1 × K vector for each point z, with each entry given by the TPS radial basis; the warp parameters are obtained by Tikhonov regularization minimizing the energy functional. The inter-view side information can then be retrieved by (8). Figure 9 displays this warping transform using the TPS.
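A thin-plate-spline warp of this kind can be sketched with SciPy's RBFInterpolator, whose thin_plate_spline kernel and smoothing argument loosely correspond to the TPS kernel and the Tikhonov regularisation weight mentioned above; the matched control points below are synthetic, and the per-subregion bookkeeping of the paper is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Hypothetical matched control points for one sub-region: v_j in the
# co-located view and v'_j in the Wyner-Ziv view (K = 50 pairs).
v = rng.uniform(0, 320, size=(50, 2))
v_prime = v + rng.normal(0, 3, size=(50, 2))        # small inter-view motion

# Thin-plate-spline map from one view to the other.
tps = RBFInterpolator(v, v_prime, kernel="thin_plate_spline", smoothing=1.0)

# Warp a dense grid of pixel coordinates of the sub-region.
ys, xs = np.mgrid[0:240, 0:320]
grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
warped = tps(grid)                                   # shape (240*320, 2)
print(warped.shape)
```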
To evaluate the effectiveness of TPS in the proposed approach, the WZ frames of three Multiview video sequences are estimated from the view direction by the 6-parameter global affine transform and by TPS warping with graph-based matching. The numbers of features sent from the encoder to assist the affine transform and the TPS warping are 9 and 50, respectively, so the feature overheads are nearly equal thanks to the low-dimensional PCA-SIFT descriptor. Figure 10 shows the average PSNR (luminance) of the estimated frames ("1" denotes the proposed scheme involving TPS in combination with subgraph matching; "2" the scheme using a global affine transform for inter-view side-information generation [23]). TPS warping with graph matching works better than the global affine transform, especially for sequences with high motion, for example "Race1". Figure 11 shows the per-frame PSNR of the affine transform and of TPS warping for the second view of the "Race1" and "Golf" Multiview video sequences. Each subgraph extracted from the original Wyner-Ziv target image is estimated more accurately with the proposed scheme.
Temporal Side Information.
Temporal side information Y_t is predicted from the temporal direction according to the algorithm in [31]. As shown in Figures 12(a) and 12(b), forward motion estimation is applied to obtain candidate motion vectors for each non-overlapping block of the interpolated frame I_wz. Among the available candidates, the motion vector that intercepts the interpolated frame closest to the center of the block is selected. Once each block in the interpolated image has a motion vector, bidirectional motion compensation is performed to obtain the interpolated frame.
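A much-simplified sketch of this interpolation, with exhaustive block matching and a naive halved-vector bidirectional compensation (sign conventions and boundary handling are simplified relative to [31]), is given below; all frame data are synthetic.

```python
import numpy as np

def best_motion_vector(prev, nxt, y, x, block=8, search=8):
    """Exhaustive forward motion search for one block anchored at (y, x)."""
    ref = prev[y:y + block, x:x + block].astype(float)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + block <= nxt.shape[0] and 0 <= xx and xx + block <= nxt.shape[1]:
                sad = np.abs(ref - nxt[yy:yy + block, xx:xx + block]).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    return best_mv

def interpolate_block(prev, nxt, y, x, mv, block=8):
    """Bidirectional compensation: average the two half-displaced blocks."""
    dy, dx = mv
    fwd = prev[y - dy // 2: y - dy // 2 + block, x - dx // 2: x - dx // 2 + block]
    bwd = nxt[y + dy // 2: y + dy // 2 + block, x + dx // 2: x + dx // 2 + block]
    return (fwd.astype(float) + bwd.astype(float)) / 2.0

# Toy frames: the "next" frame is the "previous" one shifted right by 2 pixels.
rng = np.random.default_rng(2)
prev_frame = rng.integers(0, 255, (64, 64))
next_frame = np.roll(prev_frame, 2, axis=1)
mv = best_motion_vector(prev_frame, next_frame, 24, 24)
print(mv, interpolate_block(prev_frame, next_frame, 24, 24, mv).shape)
```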
Side Information Fusion.
The inter-view side information Y_v is obtained by combining the TPS warping results from the left and right views via (8) (the two results are simply averaged). Figure 13 shows the side-information frames generated from the temporal direction and the view direction. The temporal method works well for predicting objects with small motion, while the view prediction of the proposed subgraph-based matching scheme is advantageous for objects with high motion and significantly reduces ghosting compared with [9] (the small "KDDI" icon is erased in advance and re-added once the inter-view fusion is finished; the blank area is filled with the average of the two adjacent frames in the temporal direction). Furthermore, a data-fusion algorithm should be applied to reconstruct more accurate side information from the temporal and view sides. The final side information Y is generated with a fusion mask, where 1 indicates that a pixel is taken from the inter-view side information and 0 that it is taken from the temporal side information.
Figure 13: The fusion masks; white areas indicate that the temporal side information is unreliable and the inter-view side information is used.
In this work, we adopt the intensity of the MVs as the criterion to measure
the reliability of the inter-view side information and the temporal side information [32]. Since temporal motion estimation performs poorly in regions of high motion, the motion vectors from temporal estimation can be used as the fusion criterion. Motion vectors whose direction changes abruptly and which have low spatial coherence are considered incorrect when compared with the true motion field; they can be detected by weighted vector-median filters, which are widely used for noise removal. Blocks with such abrupt motion vectors are set to one in the fusion mask. Figure 13 shows an example of the fusion mask, where white areas indicate that the temporal estimation is unreliable and the inter-view estimation is therefore enabled. From Figure 13 it can be seen that the temporal side information (shown on the left) gives a poor estimate in areas with high motion, so these areas should be determined by the inter-view side information. Figure 14 shows the percentage of MBs taken from the inter-view side information in each frame of several sequences. It demonstrates that the inter-view side information contributes very little in frames with low motion, for example the first 40 frames of the "Race" sequence. The inter-view side information is clearly more helpful in improving the quality of the fused side information for frames with intensive motion.
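The following sketch imitates this fusion step; as an assumption it replaces the weighted vector-median filter of [32] with a plain component-wise median over a 3 × 3 block neighbourhood, and the threshold value is arbitrary.

```python
import numpy as np

def fusion_mask(mv_field, threshold=2.0):
    """Flag blocks whose motion vector deviates strongly from the component-wise
    median of its 3x3 neighbourhood; 1 -> use inter-view SI, 0 -> temporal SI."""
    h, w, _ = mv_field.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            neigh = mv_field[max(0, i - 1):i + 2, max(0, j - 1):j + 2].reshape(-1, 2)
            med = np.median(neigh, axis=0)
            if np.linalg.norm(mv_field[i, j] - med) > threshold:
                mask[i, j] = 1
    return mask

def fuse(temporal_si, interview_si, mask, block=8):
    """Assemble the final side information Y block by block from the mask."""
    out = temporal_si.copy()
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                ys, xs = i * block, j * block
                out[ys:ys + block, xs:xs + block] = interview_si[ys:ys + block, xs:xs + block]
    return out

# Toy example: an 8x8 grid of block motion vectors with one outlier block.
mvs = np.zeros((8, 8, 2))
mvs[3, 4] = (6.0, -5.0)
print(fusion_mask(mvs).sum(), "block(s) flagged for inter-view side information")
```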
Experimental Results
We have tested the proposed scheme on Multiview sequences from KDDI Lab; three views with 128 frames (320 × 240) of each sequence are used. The DMVC frame structure is IWIW, i.e., I and WZ frames interlaced. The Wyner-Ziv frame rate is 15 fps. The I frames are assumed to be available at the decoder perfectly reconstructed. Each Wyner-Ziv frame is predicted both from the temporal direction, by the interpolation solution presented in Section 3.4.1, and from the view direction, with the constrained relaxation of subgraph matching described in Sections 3.1-3.3. The LDPC code adopted in our DVC scheme has block length L = 6336 bits, with source rates 2/66, 3/66, 4/66, ..., 66/66 [33]. The source node degree distribution is irregular, with λ(x) = 0.316x^1 + 0.415x^2 + 0.128x^6 + 0.069x^7 + 0.020x^18 + 0.052x^20.
The parameters of the graph-based image segmentation are set manually in this paper: the observation scale k is 300, the smoothing factor σ is 0.8, and the minimum subregion size is 1000. The PCA method is adopted in the experiments to reduce the dimension of the SIFT descriptors. The number of PCA-SIFT features transmitted upon request is around 50, with feature-space dimensionality n = 20. The projection matrix used in PCA-SIFT is precomputed once and stored. Figure 15 gives the R-D curves of four Multiview video sequences for eight MVC coding methods, grouped into three categories: JMVM coding, H.264/AVC coding, and DMVC. "Intra" in Figure 15 is the result of intra coding with H.264/AVC [34]. "H.264 B" is the result of H.264/AVC with motion search enabled; the frame structure is "I-B-I-B-I-...", the search range is 32, and the number of reference frames is 2. The configuration of "H.264 B 0MV" is the same as "H.264 B" except that the bidirectional motion search is disabled. "JMVM" is the result of JMVM with motion search enabled, where the GOP is 15, the search range is 96, the maximum number of iterations for the bidirectional search is 4, and the search range per iteration is 8. "JMVM 0MV" is the same as "JMVM" but with motion search turned off. The remaining three methods are DMVC methods based on Wyner-Ziv coding, with the frame structure "I-B-I-B-I-..." shown in Figure 2(b); they differ only in their side-information generation. "Temporal ME" generates side information by bidirectional motion compensation with a search range of 16. The "Affine-based" method [23] generates inter-view side information with an affine transform and fuses it with the result of "Temporal ME" to produce the final side information for Wyner-Ziv decoding. "Proposed" is the result of the proposed subgraph-based method.
The results in Figure 15 show that the proposed graph-based DMVC approach outperforms H.264/AVC intra coding by up to 4-5 dB, DMVC with temporal prediction by about 0.5-1.5 dB, and the "Affine-based" DMVC scheme by about 0.3-0.5 dB. Given the motion characteristics of the four video sequences, the proposed DMVC scheme appears to predict objects with high motion quite precisely, which brings a significant improvement in the rate-distortion sense.
More results on various textured images selected from the related sequences are presented in Figure 16. Figure 17 shows the side-information frames generated from the temporal direction and the view direction. Each subgraph extracted from the original Wyner-Ziv target image is estimated more accurately with the proposed approach. Along the contours of the separate regions of the same images, ghosting is significantly reduced and better PSNR values are attained.
To analyze the additional computation required for feature extraction, we simulated the DMVC scheme with and without the local feature extraction process and compared its encoding complexity with that of typical existing coding schemes, namely "JMVM", "H.264/AVC B", and "H.264/AVC B 0MV" (without bidirectional motion search). It is worth mentioning that the feature extraction in the proposed DMVC scheme provides a hint for inter-view side-information generation, playing a role comparable to motion-compensated prediction in H.264 or joint Multiview video coding. The simulations were performed on a Windows XP SP3 system with an Intel Core 2 CPU at 1.86 GHz and 2.00 GB of memory. Figure 18 presents the average encoding/decoding time of a WZ frame, a B-frame of H.264/AVC, and a B-frame of JMVM. The experimental results in Figure 18(a) demonstrate that although the feature extraction introduces noticeable additional computation, the total encoding complexity of the proposed scheme remains significantly lower than that of conventional prediction schemes. In fact, the computational complexity is mainly transferred to the decoder, in the distributed-video-coding sense. The total decoding of Wyner-Ziv coding typically takes hundreds to thousands of seconds for the WZ frames, as shown in Figure 18(b), which far exceeds the additional computational burden of feature matching at the decoder side.
The communication overhead induced by the local feature descriptors is included in the overall bit-rate in Figure 15, which demonstrates the superior rate-distortion performance of the proposed scheme compared with a variety of existing schemes. The communication overhead depends on the source: for a sequence with smooth motion, the temporal side information is of relatively high quality and features need to be transmitted to the decoder for only a few frames, as in "Flamenco 2". For example, the communication overhead for "Race" and "Flamenco 2" is about 28 kbps and 11 kbps on average, respectively.
Conclusion
This paper proposes a novel graph-matching-based FWZC scheme for DMVC. It uses graph-based representations of Multiview images to generate inter-view correlated side information without knowledge of the camera parameters. The sparse feature set, as a good hypothesis space, supports a best-match optimization of the inter-view side information with compact syndromes inferred from a relaxed coset. The plausible filling-in from a priori feature constraints between neighboring views reinforces a promising compensation for inter-view side-information generation in joint Multiview decoding. The graph-based representations of Multiview images are adopted as a constrained relaxation, which assists the inter-view correlation matching for subgraph semantics of the original Wyner-Ziv image via graph-based image segmentation together with the scale-invariant feature detector MSER and descriptor SIFT. To find distinctive feature matches with a more stable approximation, linear and nonlinear projections are adopted to reduce the dimension of the high-dimensional SIFT descriptors, and the TPS warping model captures a more accurate inter-view motion model from the 3D viewing angle. | 8,260 | 2010-03-22T00:00:00.000 | [
"Computer Science"
] |
Microbiological studies on resistance patterns of antimicrobial agents among Gram negative respiratory tract pathogens
Department of Natural Products and Alternative Medicine, Faculty of Pharmacy, King Abdulaziz University, Jeddah, Kingdom of Saudi Arabia. Department of Microbiology and Immunology, Faculty of Pharmacy, Cairo University, Cairo, Egypt. Department of Microbiology and Immunology, Faculty of Pharmacy, Al-Azhar University, Cairo, Egypt. Department of Microbiology and Immunology, Faculty of Pharmacy, October University for Modern Sciences and Arts, 6 October City, Egypt.
INTRODUCTION
The Centers for Disease Control and Prevention (CDC) estimates that more than 100 million antibiotic prescriptions are written each year in the ambulatory care setting. With so many prescriptions written each year, inappropriate antibiotic use promotes resistance. In addition to antibiotics prescribed for upper respiratory tract infections of viral etiology, broad-spectrum antibiotics are used too often when a narrow-spectrum antibiotic would have been just as effective (Steinman et al., 2003).
Resistance to β-lactam antibiotics occurs primarily through the production of β-lactamases, enzymes that inactivate these antibiotics by splitting the amide bond of the β-lactam ring. β-Lactamases most likely coevolved with bacteria as mechanisms of resistance against natural antibiotics over time, and the selective pressure exerted by the widespread use of antimicrobial therapy in modern medicine may have accelerated their development and spread. β-Lactamases are encoded either by chromosomal genes or by transferable genes located on plasmids and transposons. In addition, β-lactamase genes (bla) frequently reside on integrons, which often carry multiple resistance determinants. If mobilized by transposable elements, integrons can facilitate further dissemination of multidrug resistance among different bacterial species (Weldhagen, 2004).
Four major groups of enzymes are defined by their substrate and inhibitor profiles: group 1 cephalosporinases that are not well inhibited by clavulanic acid; group 2 penicillinases, cephalosporinases and broad-spectrum β-lactamases that are generally inhibited by active-site-directed β-lactamase inhibitors; group 3 metallo-β-lactamases that hydrolyze penicillins, cephalosporins and carbapenems and that are poorly inhibited by almost all β-lactam-containing molecules; and group 4 oxacillin-hydrolyzing enzymes that are not inhibited by clavulanic acid (Webb, 1984).
Another important mechanism of antibiotic resistance is efflux pumps. In general, multiple antibiotic resistance in Gram negative bacteria often starts with the relatively limited outer-membrane permeability to many antibiotic agents, coupled with the overexpression of multidrug resistance (MDR) efflux pumps, which can export multiple unrelated antibiotics. In addition, by reducing the intracellular concentration of the antimicrobial agent below the MIC required for bacterial killing, efflux mechanisms may allow bacterial survival for longer periods, facilitating the accumulation of new antibiotic-resistance mutations (e.g., those affecting the topoisomerase IV or DNA gyrase targets, rendering fluoroquinolones ineffective) (Piddock, 2006).
Antimicrobial agents exert strong selective pressures on bacterial populations, favoring organisms that are capable of resisting them. Genetic variability occurs through a variety of mechanisms. Point mutations may occur in a nucleotide base pair, and this is referred to as microevolutionary change. These mutations may alter enzyme substrate specificity or the target site of an antimicrobial agent, interfering with its activity (Medeiros, 1997). This study focused on the genetic variability among Gram negative respiratory tract isolates and its relation to antimicrobial resistance including multi-drug resistant isolates.
Bacterial isolates
A total of 309 non-replicate Gram negative respiratory tract isolates were collected from 249 patients (115 males, 134 females, aged 3 to 50 years) in the medical intensive care unit (MICU) and surgical intensive care unit (SICU) of King Abdulaziz University Hospital, Jeddah, KSA, between September 2011 and June 2012. All patients had underlying upper or lower respiratory tract disease and no history of antibiotic administration in the three months prior to sample acquisition. Clinical specimens were collected according to the generally accepted guidelines for specimen collection and transportation of common specimen types, as illustrated in Table 1 (Murray, 2007), and the isolates were identified using morphology, microscopy, biochemical tests and the API kit method.
Characterization and molecular mechanisms of antimicrobial resistance pattern of Gram negative respiratory tract pathogens
Isolates that exhibited reduced susceptibility to one or more of ceftazidime, aztreonam, cefotaxime or ceftriaxone were considered potential producers of ESβL. The double-disk synergy test (Figure 1), using a ceftazidime disc and a ceftazidime + clavulanic acid (30 μg/10 μg) disc, was performed as the confirmatory test for ESβL production (Coudron et al., 1997). Isolates resistant to imipenem or meropenem were considered suspicious for production of metallo-β-lactamases (MβL); the ethylene diamine tetraacetic acid (EDTA) disc synergy test (Figure 2) was used to detect metallo-β-lactamases in the imipenem-resistant isolates (Yong et al., 2002). Isolates resistant to one or more of cefoxitin, cefotetan, cefotaxime, ceftazidime and aztreonam were considered suspicious for production of AmpC β-lactamases (AmpC-BL); a combined disc test (Figure 3) using cloxacillin as an inhibitor of AmpC enzymes was performed as the confirmatory test for AmpC-producing isolates (Mirelis et al., 2006). The minimum inhibitory concentration (MIC) of ciprofloxacin against the clinical isolates was determined using the two-fold serial broth dilution method with an inoculum of 1 × 10^6 cells/ml. All experiments were performed with and without 100 mg/L carbonyl cyanide m-chlorophenylhydrazone (CCCP). The MIC was taken as the lowest concentration inhibiting visible growth after 18 h of incubation at 37°C. A CCCP-inhibited multidrug resistant (MDR) efflux pump was inferred if the MIC with CCCP was four-fold or more lower than the MIC without CCCP (Omoregie et al., 2007).
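The efflux-pump criterion above reduces to a simple MIC ratio test, sketched below; the isolate values in the example are hypothetical.

```python
def efflux_pump_suspected(mic_without_cccp, mic_with_cccp, fold_threshold=4):
    """Infer a CCCP-inhibited MDR efflux pump when adding CCCP lowers the
    ciprofloxacin MIC by the stated fold factor or more."""
    return mic_without_cccp / mic_with_cccp >= fold_threshold

# Two-fold serial dilution series (mg/L) used to read the MIC endpoints.
dilutions = [64, 32, 16, 8, 4, 2, 1, 0.5, 0.25]

# Hypothetical isolate: MIC drops from 16 to 2 mg/L in the presence of CCCP.
print(efflux_pump_suspected(16, 2))   # True -> efflux-mediated resistance suspected
```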
Table 1. Guidelines for specimen collection and transportation of common specimen types.
Upper respiratory, nose: a premoistened swab was inserted 1-2 cm into the nares and rotated against the nasal mucosa.
Upper respiratory, nasopharynx: nasopharyngeal washings and swabs.
Upper respiratory, throat or pharynx: the posterior pharynx was swabbed, avoiding saliva.
Lower respiratory, bronchial alveolar lavage: a large volume of fluid was collected and transported in a sterile container.
Lower respiratory, sputum (expectorated): the patient was instructed to rinse or gargle with water to remove excess oral flora, then to cough deeply and expectorate secretions from the lower airways, which were collected and transported in a sterile container.
DNA sequencing
After initial screening for the amplification of β-lactamase and efflux pump genes on both chromosomal and plasmid DNA, the plasmid- and chromosome-borne genes were subjected to nucleic acid sequencing. The initial PCR products were purified using the QIAquick PCR Purification Kit (QIAgen Inc., Valencia, CA, USA).
Direct sequencing of each amplicon was carried out using the Sanger dideoxynucleotide chain-termination method with the ABI Prism BigDye Terminator Cycle Sequencing Reaction Kit (Applied Biosystems, Inc., Foster City, CA, USA) on an ABI Prism 3500 Automated Sequencer, using data collection software version 2.0 and sequencing analysis software 5.1.1. For each sequencing reaction, 2 μl of purified PCR product was added to a final reaction volume of 10 μl containing 1× sequencing buffer, 4 μl BigDye reaction mix, and 3.2 pM of each of the forward and reverse primers. The sequencing cycle consisted of two stages: stage one, denaturation at 96°C for 1 min; stage two, 25 cycles of denaturation at 96°C for 10 s, annealing at 50°C for 5 s, and extension at 60°C for 4 min (Sabate et al., 2000).
Each cycle-sequencing product was purified with the BigDye XTerminator Purification Kit and then loaded on the DNA analyzer. The DNA sequences obtained were compared with those in GenBank using the BLAST program (http://blast.ncbi.nlm.nih.gov/). (34.6, 26.6, 13.9, 7.7, 6.4, 5.5, 1.4, 1.3, 1, 1, 0.3 and 0.3%). The distribution of organisms harboring β-lactamases and efflux pumps among the Gram negative respiratory tract isolates is illustrated in Table 2.
Detection and prevalence of beta-lactamases and efflux pump genes in Gram negative respiratory tract isolates
PCR and sequence analysis indicated the presence of the blaSHV, blaCTX-M, blaTEM, blaIMP, blaVIM, ACC, DHA, AdeJ, MexX and MexE genes in the respiratory tract isolates, with the distribution illustrated in Table 3.
DNA sequencing results
Nucleotide composition analysis of some A. baumannii isolates showed that the RND-family drug transporter gene (AdeJ) detected had a GC content of 41.5%, with a detailed composition of T 30.7%, C 20.8%, A 27.8% and G 20.8%. Of the 659 nucleotide bases comprising the AdeJ gene, 655 were conserved and only 4 sites were variable. Notably, 3 of the 4 base substitutions were transitions, T→C (positions 356 and 389) and C→T (566); only one was a transversion, G→T (449) (Figure 4). Nucleotide composition analysis of some P. aeruginosa isolates showed that the multidrug efflux membrane fusion protein gene (MexE) detected had a high GC content of 71.1%, with a detailed composition of T 10.7%, C 38.7%, A 18.2% and G 32.4%. Of the 458 nucleotide bases comprising the MexE gene, 455 were conserved and only 3 sites were variable: two of the three substitutions were transitions, A→G (positions 15 and 39), and one was a transversion, C→A (77) (Figure 5). Of the nucleotide bases comprising the MexX gene, 411 were conserved and only 2 sites were variable, both transitions, T→C (20) and C→T (404) (Figure 6).
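The kind of composition and variable-site analysis reported here can be sketched as follows for a toy gap-free alignment; the sequences are invented and the script is not the software actually used in the study.

```python
from collections import Counter

def composition_and_variable_sites(aligned_seqs):
    """Base composition (%) of the first sequence and the 0-based indices of
    variable columns in a gap-free alignment of equal-length sequences."""
    counts = Counter(aligned_seqs[0])
    total = len(aligned_seqs[0])
    composition = {b: 100 * counts[b] / total for b in "ACGT"}
    gc = composition["G"] + composition["C"]
    variable = [i for i in range(total)
                if len({seq[i] for seq in aligned_seqs}) > 1]
    return gc, composition, variable

# Toy alignment with one transition (T -> C) at position 3.
seqs = ["ATGTCGGCAA",
        "ATGCCGGCAA"]
gc, comp, var_sites = composition_and_variable_sites(seqs)
print(f"GC = {gc:.1f}%", comp, "variable sites:", var_sites)
```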
DISCUSSION
The present study proposes a combined phenotypic and genotypic approach for the specific diagnosis of antibiotic resistance mediated by β-lactamases and efflux pump system harboring Gram negative respiratory tract isolates.
In the present study, blaCTX-M genes were predominant in A. baumannii and P. aeruginosa isolates (39 and 31%, respectively), followed by blaSHV genes in A. baumannii and E. cloacae isolates (20 and 60%, respectively). blaTEM genes were predominant in E. coli, K. pneumoniae and S. maltophilia isolates (50, 33 and 33%, respectively), and blaSHV genes were predominant in E. cloacae (60%). Similar findings were reported in an Indian study (Gupta, 2007): of a total of 94 isolates, the ESβL rates were 50% (n = 47), 14.89% (n = 14) and 11.70% (n = 11) for blaTEM-, blaSHV- and blaCTX-M-type β-lactamases, respectively, and blaTEM- and blaCTX-M-type ESβLs were observed in 72.72 and 22.72% of E. coli isolates, respectively. The present study also revealed that the blaIMP gene was predominant in A. baumannii isolates (27%), followed by the blaVIM gene (11%). The blaVIM gene was predominant in P. aeruginosa isolates (44%), followed by the blaIMP gene (18%). Both blaIMP and blaVIM genes were found together in E. coli isolates (33%), followed by 11% of each alone. This is in accordance with Nordman and Poirel (2002), where a total of 8 Pseudomonas isolates carried a blaVIM-type gene; these data demonstrate that blaVIM-type genes are the most prevalent MβLs among clinical P. aeruginosa specimens. The ACC gene was predominant in A. baumannii and P. aeruginosa isolates (52 and 36%, respectively), followed by the DHA-1 and DHA-2 genes (13 and 6%, respectively). This result differs significantly from the findings of several studies in which the isolation numbers of ACC enzymes were still significantly lower than those of CIT (CMY), FOX and DHA (Philippon et al., 2002). AdeJ was detected in A. baumannii at 29.2%, while the MexX gene was predominant in P. aeruginosa isolates (46%), followed by MexE (3.8%). This differs from observations made in a study in which the basal expression level of MexX was much lower than that of MexA but both efflux pumps were over-expressed 4 to 8 times in resistant strains, suggesting that a lower quantity of MexXY-OprM than MexAB-OprM protein may be needed for effective transport of the corresponding substrates (Llanes et al., 2004). Second, over-expression of MexX in clinical isolates is systematically associated with that of MexA, which may be related to the fact that MexXY uses OprM as a porin (Masuda et al., 2000).
Conclusion
The study identified the most common genes responsible for the expression of β-lactamase enzymes and the efflux pump system in Gram negative respiratory tract isolates. It also revealed that isolates harboring more than one gene from the same class show a higher resistance pattern towards antimicrobial agents than those harboring only one, and that isolates with microevolutionary changes in the nucleotide composition of the detected genes show a higher resistance pattern than those in which all bases are conserved. | 2,875.6 | 2014-07-02T00:00:00.000 | [
"Biology",
"Medicine",
"Physics"
] |
Isolated electrons and muons in events with missing transverse momentum at HERA
A search for events with a high-energy isolated electron or muon and missing transverse momentum has been performed at the electron–proton collider HERA using an integrated luminosity of 13.6 pb−1 in e−p scattering and 104.7 pb−1 in e+p scattering. Within the Standard Model such events are expected to be mainly due to W boson production with subsequent leptonic decay. In e−p interactions one event is observed in the electron channel and none in the muon channel, consistent with the expectation of the Standard Model. In the e+p data a total of 18 events are seen in the electron and muon channels compared to an expectation of 12.4±1.7 dominated by W production (9.4±1.6). Whilst the overall observed number of events is broadly in agreement with the number predicted by the Standard Model, there is an excess of events with transverse momentum of the hadronic system greater than 25 GeV with 10 events found compared to 2.9±0.5 expected. The results are used to determine the cross-section for events with an isolated electron or muon and missing transverse momentum.
Introduction
The HERA collaborations H1 and ZEUS have previously reported [1-3] the observation of events with an isolated high-energy lepton and missing transverse momentum in e+p collisions recorded during the period 1994-1997. The dominant Standard Model (SM) contribution to this topology is real W boson production with subsequent leptonic decay. Such events can also be a signature of new phenomena beyond the Standard Model [4]. H1 has reported [2] one e− event and 5 μ± events, compared to Standard Model expectations of 2.4 ± 0.5 and 0.8 ± 0.2 for the e± and μ± channels respectively, with W contributions of 1.65 ± 0.47 (e) and 0.53 ± 0.11 (μ). For the same data-taking period ZEUS has reported [3] 3 (0) e± (μ±) events compared to an expectation of 2.1 (0.8) W events and 1.1 ± 0.3 (0.7 ± 0.2) events from other processes. In the present paper a search for events with isolated electrons or muons and missing transverse momentum is performed in an extended phase space and with improved background rejection. The complete HERA I data sample (1994-2000) is analysed here, corresponding to an integrated luminosity of 118.4 pb−1, a factor of three increase with respect to the previously published result. This paper is organised as follows. Section 2 describes the SM processes that contribute to the signal and to the background. Section 3 describes the H1 detector and experimental conditions. Section 4 outlines the lepton identification criteria and the reconstruction methods for the hadronic final state. The selection requirements for the electron and muon channels are described in section 5. Studies of background processes are presented in section 6. Section 7 deals with systematic uncertainties and section 8 presents the results of the analysis, including the numbers of events seen, the kinematics of the selected events and the measured cross sections. The results of a search for W production in the hadronic decay channel are given in section 9. The paper is briefly summarised in section 10.
Standard Model Processes
The processes within the Standard Model that are expected to lead to a final state containing an isolated electron or muon and missing transverse momentum, due to penetrating particles escaping detection in the apparatus, are described in detail in [2] and are only briefly outlined in this section. The processes are called "signal" if they produce events which contain a genuine isolated electron or muon and genuine missing transverse momentum in the final state. The processes are defined as "background" if they contribute to the selected sample through misidentification or mismeasurement. For the background processes, a fake lepton, fake missing transverse momentum or both can be reconstructed and may lead to the topology of interest. The following processes are considered.
Real W production in electron proton collisions with subsequent leptonic decay W → lν, proceeding via photoproduction, is the dominant SM process that produces events with prominent high P T isolated leptons and missing transverse momentum. W bosons are predicted to be produced mainly in resolved photon interactions, in which the W typically has small transverse momentum, whilst in direct photon interactions the W transverse momentum may be larger.
In this paper, the SM prediction for W production via ep → eW ± X is calculated by using a next to leading order (NLO) Quantum Chromodynamics (QCD) calculation [5] in the framework of the EPVEC [6] event generator. Each event generated by EPVEC according to its default LO cross section is weighted by a factor dependent on the transverse momentum and rapidity of the W [7], such that the resulting cross section corresponds to the NLO calculation. The ACFGP [8] parameterisation is used for the photon structure and the CTEQ4M [9] parton distribution functions are used for the proton. The renormalisation scale is taken to be equal to the factorisation scale and is fixed to the W mass. Final state parton showers are simulated using the PYTHIA framework [10].
The NLO corrections are found to be of the order of 30% at low W transverse momentum (resolved photon interactions) and typically 10% at high W transverse momentum (direct photon interactions) [5]. The NLO calculation reduces the theory error to 15% (from 30% at leading order).
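The event-weighting procedure described above can be sketched as a k-factor lookup; the binning and the numerical values below are purely illustrative (chosen only to echo the roughly 30% and 10% corrections quoted in the text) and do not reproduce the actual NLO/LO ratio tables used in the analysis.

```python
import numpy as np

# Hypothetical 2-D k-factor table binned in W transverse momentum and rapidity.
pt_edges = np.array([0., 10., 25., 50., 200.])
y_edges = np.array([-3., -1., 1., 3.])
k_factor = np.array([[1.3, 1.3, 1.3],   # low pT (resolved-photon enriched)
                     [1.2, 1.2, 1.2],
                     [1.1, 1.1, 1.1],   # high pT (direct-photon enriched)
                     [1.1, 1.1, 1.1]])

def nlo_weight(pt_w, y_w):
    """Per-event weight: ratio of NLO to LO cross section in the event's bin."""
    i = np.clip(np.digitize(pt_w, pt_edges) - 1, 0, len(pt_edges) - 2)
    j = np.clip(np.digitize(y_w, y_edges) - 1, 0, len(y_edges) - 2)
    return k_factor[i, j]

print(nlo_weight(5.0, 0.2), nlo_weight(60.0, -1.5))
```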
The charged current process ep → νW ± X is calculated with EPVEC [6] and found to contribute less than 7% of the predicted signal cross section.
The total predicted W production cross section amounts to 1.1 pb for an electron-proton centre of mass energy of √ s = 300 GeV and 1.3 pb for √ s = 318 GeV.
A small number of signal events may be produced by Z production with subsequent decay to neutrinos. The outgoing electron from this reaction can scatter into the detector yielding the isolated lepton in the event while genuine missing transverse momentum is produced by the neutrinos. This process is calculated with the EPVEC generator and found to contribute less than 3% of the predicted signal cross section.
Charged Current (CC) processes : ep → νX (background)
A CC deep inelastic event can mimic the selected topology if a particle in the hadronic final state or a radiated photon is interpreted as an isolated lepton. The generator DJANGO [11] is used to calculate this contribution to the background.
Neutral Current (NC) processes : ep → eX (background)
The scattered electron in a NC deep inelastic event yields an isolated high energy lepton, but measured missing transverse momentum can only be produced by fluctuations in the detector response or by undetected particles due to limited geometrical acceptance. The generator RAPGAP [12] is used to calculate this contribution to the background.
Photoproduction of jets
The generator PYTHIA [13] is used to calculate the contribution from hard scattering photoproduction processes. Background from this process may occur if a particle from the hadronic final state is interpreted as an isolated lepton and missing transverse momentum is measured due to fluctuations in the detector response or limited geometrical acceptance.
Lepton pair (LP) production : ep → e l+l− X (background)
Lepton pair production can mimic the selected topology if one lepton escapes detection and measurement errors cause apparent missing momentum. The generator GRAPE 1.1 [14], based on a full calculation of electroweak diagrams, is used. The dominant contribution is due to photon-photon processes and is cross-checked with the LPAIR [15] generator. Internal photon conversions are also calculated. Z production and its subsequent decay into charged leptons is also included in GRAPE. This contribution is found to be negligible.
In order to determine signal acceptances and background contributions, the detector response to events produced by the above programs is simulated in detail using a program based on GEANT [16]. The simulated events are then subjected to the same reconstruction and analysis chain as the data.
Experimental Conditions
Results are presented for the 37.0 pb −1 of e + p data taken in 1994-1997 at an electron-proton centre of mass energy of √ s = 300 GeV , the 13.6 pb −1 of e − p data (1998-1999, √ s = 318 GeV) and the 67.7 pb −1 of e + p data (1999-2000, √ s = 318 GeV).
A detailed description of the H1 detector can be found in [17]. Only those components of particular importance to this analysis are described here. The inner tracking system consisting of central and forward 2 tracking detectors (drift chambers) is used to measure charged particle trajectories and to determine the interaction vertex. A solenoidal magnetic field allows the measurement of the particle transverse momenta.
Electromagnetic and hadronic final state particles are absorbed in a highly segmented Liquid Argon (LAr) calorimeter [18]. The calorimeter is 5 to 8 interaction lengths deep depending on the polar angle of the particle. A lead-fibre calorimeter (SpaCal) is used to detect backward going electrons and hadrons.
The LAr calorimeter is surrounded by a superconducting coil with an iron return yoke instrumented with streamer tubes. Tracks of muons, which penetrate beyond the calorimeter, are reconstructed from their hit pattern in the streamer tubes. The instrumented iron is also used as a backing calorimeter to measure the energy of hadrons that are not fully absorbed in the LAr calorimeter.
In the forward region of the detector a set of drift chamber layers (the forward muon system) detects muons and, together with an iron toroidal magnet, allows a momentum measurement. Around the beam pipe, the plug calorimeter measures hadronic activity at low polar angles.
The LAr calorimeter provides the main trigger for events with high transverse momentum. The trigger efficiency is 98% for events with an electron which has transverse momentum above 10 GeV. For events with high missing transverse momentum, determined from an imbalance in transverse momentum measured in the calorimeter P calo T , the trigger efficiency is 98% when P calo T > 25 GeV and is ∼ 50% when P calo T = 12 GeV [19]. Events may also be triggered by a pattern consistent with a minimum ionising particle in the muon system in coincidence with tracks in the tracking detectors.
Lepton Identification and Hadronic Reconstruction
An electron candidate is defined [20] by the presence of a compact and isolated electromagnetic cluster of energy in the LAr calorimeter, with the requirement of an associated track having an extrapolated distance of closest approach to the cluster of less than 12 cm. Electrons found in regions between calorimeter modules containing large amounts of inactive material are excluded [19]. The energy of the electron candidate is measured from the calorimeter cluster. The additional energy allowed within a cone of radius 1 in pseudorapidity-azimuth (η-φ) space around the electron candidate is required to be less than 3% of the energy attributed to the electron candidate. The efficiency of electron identification is established using NC events and is greater than 98% [19].
A muon candidate is identified by a track in the forward muon system or a charged track in the inner tracking system associated with a track segment or an energy deposit in the instrumented iron. The muon momentum is measured from the track curvature in the solenoidal or toroidal magnetic field. A muon candidate may have no more than 8 GeV deposited in the LAr calorimeter in a cone of radius 0.5 in (η-φ) space associated with its track. The efficiency to identify muons is established using elastic LP events [21] and is greater than 90%.
Identified leptons are characterised by the following variables, where l represents e or µ:
• P l T , the transverse momentum of an identified muon or electron;
• θ l , the polar angle of the muon or electron.
In order to check that the probability to misidentify a particle as an electron or muon is well described by the simulation, a sample of NC events is used, in which a second electron or a muon is found in the event. In the majority of cases this second lepton results from the misidentification of a hadron from the final state. The second lepton in the event must pass the same criteria as described above, except for the upper limit on the calorimeter energy within a cone associated with its track. The study is performed requiring the reconstructed electrons or muons to have P l T > 10 GeV. From a total NC sample of 121408 events, 2087 events with a second identified electron and 520 events with a reconstructed muon are selected by this procedure. Figure 1a shows the polar angle distribution of the electron with the second highest transverse momentum and figure 1b shows the polar angle distribution of reconstructed muons. The distributions are described by the simulation within the uncertainties, demonstrating that the misidentification of a particle as an electron or muon is well understood.
The hadronic final state (HFS) is measured by combining calorimeter energy deposits with low momentum tracks as described in [19]. Identified isolated electrons or muons are excluded from the HFS. The calibration of the hadronic energy scale is made by comparing the transverse momentum of the precisely measured scattered electron to that of the HFS in a large NC event sample. The transverse momentum of the hadronic system is: • P X T , which includes all reconstructed particles apart from identified isolated leptons.
The isolation of identified leptons with respect to jets or other tracks in the event is quantified using:
• their distance D jet from the axis of the closest hadronic jet in η-φ space. For this purpose jets, excluding identified leptons, are reconstructed using an inclusive k T algorithm [22][23][24] and are required to have transverse momentum greater than 5 GeV. If there is no such jet in the event, D jet is defined with respect to the polar and azimuthal angles of the hadronic final state;
• their distance D track from the closest track in η-φ space, where all tracks with a polar angle greater than 10 • and transverse momentum greater than 0.15 GeV are considered.
The following quantities are sensitive to the presence of high energy undetected particles and/or can be used to reduce the main background contributions.
• P calo T , the net transverse momentum measured from all energy deposits recorded in the calorimeter.
• P miss T , the total missing transverse momentum reconstructed from all observed particles (electrons, muons and hadrons). P miss T differs most from P calo T in the case of events with muons, since they deposit little energy in the calorimeter.
• V_ap/V_p, a measure of the azimuthal balance of the event. It is defined as the ratio of the anti-parallel to parallel components of the measured calorimetric transverse momentum, with respect to the direction of the calorimetric transverse momentum [19]. Events with one or more high-P_T particles that deposit little energy in the calorimeter (µ, ν) typically have low values of V_ap/V_p.
• δ_miss = 2E_e − Σ_i E_i (1 − cos θ_i), where E_i and θ_i denote the energy and polar angle of each particle in the event detected in the main detector (θ < 176°) and E_e is the electron beam energy. For an event where only momentum in the proton direction is undetected, δ_miss is zero. (A minimal computation of these event-level quantities is sketched after this list.)
• ∆φ l−X , the difference in azimuthal angle between the lepton and the direction of P X T . NC events typically have values of ∆φ l−X close to 180 • .
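The sketch below, which is not H1 analysis code, illustrates how the quantities defined in this list could be computed from a toy particle list; the particle representation, the calorimeter flag and the beam energy constant are illustrative assumptions.

```python
# Minimal sketch: event-level quantities from a toy particle list.
# Each particle: (E [GeV], theta [rad], phi [rad], seen_by_calorimeter flag).
import math

E_BEAM_ELECTRON = 27.6  # HERA lepton beam energy in GeV (illustrative constant)

def event_quantities(particles):
    px_calo = py_calo = 0.0   # calorimetric transverse momentum components
    px_all = py_all = 0.0     # transverse momentum of all reconstructed particles
    delta = 0.0               # sum of E_i * (1 - cos(theta_i)) over detected particles
    for E, theta, phi, in_calo in particles:
        pt = E * math.sin(theta)
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        px_all += px
        py_all += py
        delta += E * (1.0 - math.cos(theta))
        if in_calo:
            px_calo += px
            py_calo += py

    pt_calo = math.hypot(px_calo, py_calo)
    pt_miss = math.hypot(px_all, py_all)   # magnitude of the net imbalance of observed particles
    delta_miss = 2.0 * E_BEAM_ELECTRON - delta

    # V_ap / V_p: anti-parallel vs parallel calorimetric momentum with respect to
    # the direction of the calorimetric transverse momentum.
    v_p = v_ap = 0.0
    if pt_calo > 0.0:
        ux, uy = px_calo / pt_calo, py_calo / pt_calo
        for E, theta, phi, in_calo in particles:
            if not in_calo:
                continue
            pt = E * math.sin(theta)
            proj = pt * (math.cos(phi) * ux + math.sin(phi) * uy)
            if proj >= 0.0:
                v_p += proj
            else:
                v_ap += -proj
    ratio = v_ap / v_p if v_p > 0.0 else 0.0
    return pt_calo, pt_miss, delta_miss, ratio
```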
Selection Criteria
The published H1 observation [2] using 1994-1997 e + p data was based on the selection of a sample of events with P calo T > 25 GeV. This experimental cut mainly selected charged current events in a phase space where the trigger efficiency is high. In the selected events all isolated charged tracks with transverse momentum above 10 GeV were identified as electrons or muons.
In the present paper the P_T^calo cut has been lowered to 12 GeV, taking advantage of the improved understanding of trigger efficiencies with increased luminosity and more sophisticated background rejection. The analysis extends the phase space towards lower missing transverse momentum for the electron channel (where P_T^calo ≈ P_T^miss) and towards lower P_T^X for the muon channel (where P_T^calo ≈ P_T^X). The lepton identification has also been improved and extended in the forward direction. The increased phase space and increased luminosity allow the comparison with the SM predictions to be made differentially and with improved precision. Further details of the analysis can be found in [25,26].
The selection criteria for both channels are summarised in table 1. The dominant background in the electron channel is due to NC and CC events. To reduce the NC background, events with NC topology (azimuthally balanced, with the lepton and the hadronic system back-to-back in the transverse plane) are rejected. For low values of P_T^calo, where the NC background is largest, a requirement on ζ²_e is imposed. A requirement that the lepton candidate be isolated from the hadronic final state is imposed to reject CC events. Events which have, in addition to an isolated electron, one or more isolated muons are not considered in the electron channel, but may contribute in the muon channel. The dominant backgrounds in the muon channel are inelastic muon pair production and CC or photoproduction events which contain a reconstructed muon. The final muon sample is selected by rejecting azimuthally balanced events and events where more than one muon is observed. Following the selection criteria described above, the overall efficiency to select SM W → eν events is 41% and to select SM W → µν events is 14%. The main difference in efficiency between the two channels is due to the cut on P_T^calo, which for muon events acts as a cut on P_T^X because the muon deposits little energy in the calorimeter. There is thus almost no efficiency in the muon channel for P_T^X < 12 GeV. For values of P_T^X > 25 GeV the efficiencies of the two channels are compatible at ∼ 40%.
Background Studies
To verify that the backgrounds (see section 2) that contribute to the two channels are well understood, alternative event samples, each enriched in one of the important background processes, are compared with simulations. For both channels these event samples have the same basic phase space definition (θ l , P l T , P calo T ) as the main analysis. It should be noted that these selections do not explicitly reject signal events, which may be present in the enriched samples.
The two background enriched samples in the electron channel, defined within the phase space 5 • < θ e < 140 • , P e T > 10 GeV and P calo T > 12 GeV, are selected with the following additional requirements.
only one e candidate is detected, which has the same charge as the beam lepton.
NC enriched sample
A NC dominated electron sample is selected by requiring D jet > 1.0. The events in this channel mainly contain genuine electron candidates, but with missing transverse momentum arising from mismeasurement.
CC enriched sample
A CC dominated sample is obtained by rejecting events with an isolated muon and applying cuts to suppress the NC component. These criteria are ζ²_e ≥ 2500 GeV², V_ap/V_p ≤ 0.15, δ_miss > 5 GeV and ∆φ_e−X < 160°. In this sample the missing transverse momentum is genuine, but an electron candidate is usually falsely identified.
The two samples designed to study the backgrounds in the muon channel, defined within the same phase space 5 • < θ µ < 140 • , P µ T > 10 GeV and P calo T > 12 GeV, are selected with the following additional requirements.
LP enriched sample
A sample of events predominantly from the two-photon process is selected by requiring at least one isolated muon and V_ap/V_p ≤ 0.2 to suppress photoproduction events.
CC enriched sample
A sample dominated by CC events is selected by requiring V_ap/V_p ≤ 0.15 and at least one muon candidate, which need not be isolated. This selection tests fake or real muons observed in events with genuine missing P_T.
The distributions of all quantities used in these selections are well described in both shape and normalisation by the SM expectation in regions where there is little contribution from W production. This gives us confidence that the backgrounds are described within the uncertainty.
Example distributions of the background enriched event samples for the e + p data are shown in figure 2 for the electron channel and in figure 3 for the muon channel. Also included in the figures are the SM expectations from all processes together and the signal expectation alone. Agreement is also obtained between the data and the simulation in all distributions for the e − p data sample.
Systematic Uncertainties
The systematic uncertainties on quantities which influence the SM expectation and the measured cross section (see section 8.1) are presented in this section and discussed in more detail in [19,26]. The uncertainties on the signal expectation and the acceptance used in the cross section calculation are determined by varying experimental quantities by ± 1 standard deviation and recalculating the cross section or expectation. The experimental uncertainties are listed below and the corresponding variation of the cross section is given in table 2.
• Leptonic quantities
The uncertainties on the θ l and the φ l measurements are 3 mrad and 1 mrad respectively. The electron energy scale uncertainty is 3%. The muon energy scale uncertainty is 5%.
• Hadronic quantities
The uncertainties on the θ and φ measurements of the hadronic final state are both 20 mrad. The hadronic energy scale uncertainty is 4%. The error on the measurement of V_ap/V_p is ±0.02.
• Triggering / Identification
The electron finding efficiency has an uncertainty of 2%. The muon finding efficiency has an error of 5% in the central (θ µ > 12.5 • ) region and 15% in the forward (θ µ < 12.5 • ) region. The uncertainty on the track reconstruction efficiency is 3%. The uncertainty on the trigger efficiency for the muon channel varies from 16% at P X T = 12 GeV to 2% at P X T > 40 GeV.
• Luminosity
The luminosity measurement has an uncertainty of 1.5%.
• Model
A 10% uncertainty on the model dependence of the acceptance is estimated by comparing the results obtained with two further generators which produce W bosons with different kinematic distributions from those of EPVEC. The generators used are an implementation of W production within PYTHIA and ANOTOP, an "anomalous top production" generator, using the matrix elements of the complete process e + q → e + t → e + b + W as obtained from the CompHEP program [27].
Contributions from background processes, modelled using RAPGAP, DJANGO and GRAPE, are attributed 30% systematic errors determined from the level of agreement observed between the simulations and the control samples (see section 6). The uncertainties associated with lepton misidentification and the production of fake missing transverse momentum are included in these errors.
A theoretical uncertainty of 15% is quoted for the predicted contributions from signal processes (predominantly SM W production). This is due mainly to uncertainties in the parton distribution functions and the scales at which the calculation is performed [5].
Results
For the e − p data sample one event is observed in the electron channel. The kinematics of the event are listed in table 3. No events are observed in the muon channel. This compares well to the SM expectations of 1.69 ± 0.22 events in the electron channel and 0.37 ± 0.06 in the muon channel.
In the e + p data sample 10 candidate events are observed in the electron channel compared to 7.2 ± 1.2 expected from signal processes and 2.68 ± 0.49 from background sources. One candidate event in the electron channel is observed to contain an e − . This event was first reported and discussed in [2]. Four of the other candidate events contain an e + . The charges of the electrons in the remaining five events are unmeasured since the electrons are produced at low polar angles and they shower in material in the tracking detectors. In the muon channel 8 candidate events are observed compared to 2.23 ± 0.43 expected from signal processes and 0.33 ± 0.08 from background sources. Four of the muon events observed in the e + p data sample are among those first reported and discussed in [2]. The event discussed in [1] is rejected from this analysis by the azimuthal difference (∆φ µ−X ) cut. Four of the events have a positively charged muon, three have a negative muon and in one event the charge is not determined.
Distributions of the selected events in lepton polar angle, azimuthal difference, transverse mass and P_T^X are shown in figure 4. The lepton-neutrino transverse mass is defined as M_T = √[ (P_T^miss + P_T^l)² − |P⃗_T^miss + P⃗_T^l|² ], where P⃗_T^miss and P⃗_T^l are the vectors of the missing transverse momentum and of the isolated lepton respectively. The figure shows the electron and muon channels combined. Also included is the expectation of the Standard Model. The events generally have low values of lepton polar angle and are consistent with a flat distribution in azimuthal difference ∆φ_l−X, in agreement with the expectation. The distribution of the events in M_T is compatible with the Jacobian peak expected from W production. The kinematics of the events with P_T^X > 25 GeV are detailed in table 3. In three of the eighteen events a further electron is detected in the main detector (θ_e < 176°). Taking this to be the scattered electron and assuming that there is only one neutrino in the final state and no initial state QED radiation, the lepton-neutrino mass M_lν can be reconstructed. All three events yield masses that are consistent with the W mass, having values of 86 +7/−9, 73 ± 7 and 79 ± 12 GeV. The observation of a second electron in these three events is compatible with the expectation from SM W production, where approximately 25% of events have a scattered electron in the acceptance range of the main detector.
Details of the event yields from the e+p data sample as a function of the transverse momentum of the hadronic final state, P_T^X, are given in tables 4 and 5 for the electron and muon channels respectively. The combined results for the electron and muon channels are given in table 6. At P_T^X < 25 GeV eight events are seen, in agreement with the expectation from the Standard Model. At P_T^X > 25 GeV ten events are seen, six of which have P_T^X > 40 GeV, where the signal expectation is very low. The probability for the SM expectation to fluctuate to the observed number of events or more is 0.10 for the full P_T^X range, 0.0015 for P_T^X > 25 GeV and 0.0012 for P_T^X > 40 GeV. The uncertainties on the SM predictions are taken into account in calculating these probabilities.
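As a minimal illustration of how such a fluctuation probability can be computed, the sketch below evaluates a simple Poisson tail probability; the values quoted in the text additionally fold in the uncertainty on the SM prediction, which this sketch ignores.

```python
# Minimal sketch: probability for a Poisson-distributed SM expectation to
# fluctuate up to at least the observed number of events.  The probabilities
# quoted in the text additionally convolve the uncertainty on the SM
# prediction, which is ignored here for simplicity.
from scipy.stats import poisson

def p_at_least(n_obs, mu_sm):
    # P(N >= n_obs | mu_sm) = survival function evaluated at n_obs - 1
    return poisson.sf(n_obs - 1, mu_sm)

# Illustrative numbers from the text: 10 events observed for P_T^X > 25 GeV
# against a SM expectation of about 2.9 events.
print(p_at_least(10, 2.92))
```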
An excess is observed at P X T > 25 GeV in both sets of e + p data. In the 1994-1997 data 4 events are observed compared to an expectation of 0.80 ± 0.14. In the 1999-2000 data 6 events are observed compared to an expectation of 2.12 ± 0.36.
The method published in [2] has been applied to the 1999-2000 data sample. Using this method an excess of events is also seen at P X T > 25 GeV in this new data sample: 5 events are observed for 2.34 ± 0.29 expected. These 5 events selected by the method of the previously published analysis are also found by the analysis presented in this paper.
Cross Section
The observed number of events in the e+p data sample is corrected for acceptance and detector effects to obtain a cross section for all processes yielding genuine isolated electrons or muons and missing transverse momentum. This is defined for the kinematic region 5° < θ_l < 140°, P_T^l > 10 GeV, P_T^miss > 12 GeV and D_jet > 1.0, at a centre of mass energy of √s = 312 GeV (obtained assuming a linear dependence of the cross section on the proton beam energy).
The definition of isolated electrons or muons includes those from leptonic tau decay. The generator EPVEC is used to calculate the detector acceptance A for this region of phase space. The acceptance accounts for trigger and detection efficiencies and migrations. The cross section is thus
σ = (N_data − N_bgd) / (A · L),
where N_data is the number of events observed, N_bgd is the number of events expected from processes treated here as background (see section 2) and L is the integrated luminosity of the data sample.
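A hedged numerical sketch of this formula is given below; the event counts are taken from the results quoted in the text, while the integrated luminosity and the acceptance are placeholder values, not numbers quoted here.

```python
# Sketch of the cross-section formula sigma = (N_data - N_bgd) / (A * L).
# N_data and N_bgd are taken from the e+p results quoted in the text
# (18 candidates, roughly 3 expected background events); the luminosity and
# acceptance are placeholder assumptions for illustration only.
n_data = 18            # observed isolated-lepton events in the e+p data
n_bgd = 2.68 + 0.33    # expected background, electron + muon channels
lumi = 100.0           # hypothetical integrated luminosity in pb^-1 (assumption)
acceptance = 0.30      # placeholder acceptance from EPVEC (assumption)

sigma = (n_data - n_bgd) / (acceptance * lumi)
print(f"sigma = {sigma:.3f} pb")
```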
The cross section integrated over the full P_T^X range is given in table 7, where the first error is statistical and the second is systematic (calculated as described in section 7).
This result is compatible with the SM signal expectation of 0.237 ± 0.036 pb, dominated by the process ep → eW X, calculated at NLO [5,7]. The small signal components from ep → νW X and Z production are calculated with EPVEC [6] as explained in section 2. The cross section is presented in table 7 split into the regions P X T < 25 GeV and P X T > 25 GeV. Whilst the cross section in the low P X T region agrees within errors with the prediction, in the high P X T region it exceeds the expectation. Table 7 also includes two signal calculations in which all components are calculated at LO [5,6]. The calculation in [6] is the default calculation implemented in the event generator EPVEC. All the calculations agree within the uncertainties.
Search for W Production in the Hadronic Decay Channel
Since the dominant SM process that produces events with isolated charged leptons and missing transverse momentum is W production, it is interesting to search for W bosons decaying hadronically. The search for hadronic W decays is performed using events with two high transverse momentum jets in 117.3 pb −1 of e + p and e − p data from the period 1995-2000.
Events are selected with at least two hadronic jets, reconstructed using an inclusive k T algorithm, with a transverse momentum P T greater than 25 GeV for the leading jet and greater than 20 GeV for the second highest P T jet. The minimum P T of any further jet considered in the event is set to 5 GeV. The pseudorapidity η of each jet is restricted to the range −0.5 < η < 2.5. The dijet combination with invariant mass M jj closest to the W mass is selected as the W candidate. The resolution of the reconstructed W mass is approximately 5 GeV. P X T is defined as the transverse momentum of the hadronic system after excluding the W candidate jets.
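The sketch below illustrates the selection of the dijet combination with invariant mass closest to the W mass; it is not the analysis code, jets are treated as massless, and the jet list is purely illustrative.

```python
# Sketch: choose the dijet combination with invariant mass closest to m_W.
# Jets are treated as massless and described by (pT [GeV], eta, phi); the jet
# list below is illustrative only.
import math
from itertools import combinations

M_W = 80.4  # GeV

def four_vector(pt, eta, phi):
    px, py = pt * math.cos(phi), pt * math.sin(phi)
    pz = pt * math.sinh(eta)
    e = pt * math.cosh(eta)          # massless approximation
    return e, px, py, pz

def dijet_mass(j1, j2):
    e, px, py, pz = (a + b for a, b in zip(four_vector(*j1), four_vector(*j2)))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

def w_candidate(jets):
    # jets: list of (pt, eta, phi) passing the pT and eta cuts described above
    return min(combinations(jets, 2), key=lambda pair: abs(dijet_mass(*pair) - M_W))

jets = [(45.0, 0.8, 0.1), (38.0, 1.6, 2.9), (9.0, 2.1, -1.2)]  # illustrative
pair = w_candidate(jets)
print(f"selected dijet mass: {dijet_mass(*pair):.1f} GeV")
```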
A cut on the missing transverse momentum P_T^miss < 20 GeV is applied to reject CC events and non-ep scattering background. NC events where the electron is misidentified as a jet are rejected [28,29]. The final selection is made with the cuts M_jj > 70 GeV and |cos θ*| < 0.6, where θ* is the decay angle in the W rest frame, with the W flight direction in the laboratory frame taken as the quantisation axis. This phase space is chosen to optimise the acceptance for W events and reduce other SM contributions. The overall selection efficiency for SM W production is 43% and is 29% for P_T^X > 40 GeV. The main physics background to this search is the production of jets via hard partonic scattering, which is modelled by PYTHIA and RAPGAP for the photoproduction and deep inelastic regimes respectively. The predicted cross section is increased by a factor of 1.2 in order to match the observed number of events outside the signal region.
The systematic uncertainty on the background prediction includes parton distribution function uncertainties, the uncertainty on the jet energy scale and uncertainties due to the misidentification of an electron as a jet. In quadratic sum these give a total systematic error on the background prediction of 23% [28,30]. The SM W production rate has a theoretical error of 15%, which is added in quadrature to the experimental uncertainties, resulting in an overall error of 21%.
The M jj distribution (without the M jj cut) and the P X T distribution (with all cuts) of the selected data are compared to the Standard Model in figure 5. The final data selected show overall agreement with the SM expectation up to the highest P X T values. At P X T > 25 GeV, 126 events are observed compared to 162 ± 36 expected with 5.3 ± 1.1 from W production. The expectation is dominated by QCD multi-jet production. For P X T > 40 GeV 27 events are observed in the data, compatible with the expectation of 30.9 ± 6.7, where the W contribution amounts to 1.9 ± 0.4 events. Although there is increasing sensitivity to W production with increasing P X T , it is at present not possible to conclude from the hadronic channel whether the observed excess of events with an isolated electron or muon with missing transverse momentum at high P X T is due to W production.
Summary
A search for events with isolated electrons or muons and missing transverse momentum has been performed in e + p and e − p data, using the complete HERA I (1994-2000) data sample. The selection has been optimised to increase the acceptance for W production events and it extends to lower values of hadronic transverse momentum P X T than in previous publications. One electron event and no muon events are observed in the e − p data, consistent with the expectations of 1.69 ± 0.22 and 0.37 ± 0.06 for the electron and muon channels respectively in this relatively low luminosity data sample. In the e + p data sample 10 events are observed in the electron channel and 8 in the muon channel. These events are kinematically consistent with W production. The expected numbers of events from the Standard Model are 9.9 ± 1.3 and 2.55 ± 0.44 for the electron and muon channels respectively. At low P X T , the number of observed events in both channels is consistent with the expectation. At P X T > 25 GeV, however, the 10 observed events exceed the SM prediction of 2.92 ± 0.49. An excess of events is observed in both the 1994-1997 and the 1999-2000 e + p data samples. The observed events are used to make a measurement of the cross section for all processes producing isolated electrons or muons and missing transverse momentum in the kinematic region studied.
In a separate search for hadronic W decays, agreement with the SM expectation is found up to the highest P X T values. The high background in this channel, however, does not allow one to conclude whether the excess of isolated leptons with missing P T at high P X T is due to W production.
[30] C.
Table captions: the SM expectation is calculated at NLO [5,7] and at leading order (SM LO) [5] and [6]. The total error on the SM expectation is given by the shaded band. The "signal" (W production) component of the SM expectation is given by the hatched histogram. N_data is the total number of data events observed for each sample and N_SM is the total SM expectation. | 8,498 | 2003-05-29T00:00:00.000 | [
"Physics"
] |
Enhancing Service Discovery via Multi-Ontology Management and Annotation
Ontology, which adds semantic information to services, is an effective tool for automating service discovery and composition. Although service discovery systems have been developed extensively, most of them assume that all services adhere to a single ontology. However, even in the same domain, different domain experts or users can conceptualize the same real-world entities in different ways, which leads to multiple domain ontologies. How to manage multiple ontologies and how to use them to annotate services are therefore key technologies in a service discovery system. In this paper, based on ISO SC32 19763-3: MFI-3 (Metamodel Framework for Interoperability: Metamodel for Ontology Registration), we discuss in detail the mechanism of multi-ontology management and annotation to enhance service discovery. Finally, a prototype of the Ontology Management Platform (OMP) is implemented to serve annotation.
Introduction
With the deployment of more and more services on the Internet, it becomes much more difficult to find appropriate services. Service discovery systems, which have been developed extensively, play a key role in the Semantic Service Registry and Repository (S2R2). However, service discovery needs to know the meaning of service interface parameters such as Input, Output or Operation. WSDL [1], which is a kind of syntactic description, is not enough for this purpose. Therefore, ontology is usually used to add semantic information to solve this problem [2]. It can help services agree on a set of concepts and relationships, allowing a shared understanding of the common domain knowledge. This allows matching service requests to service advertisements with less human intervention [5,6]. Meanwhile, with the adoption of semantic analysis on ontology, the discovery of services is enriched to improve the responsiveness of the S2R2 and to help users with candidate services that are close enough to what they would have liked to get.
In most cases, semantic discovery systems adhere to a single global ontology, which is usually constructed and standardized by domain experts [7]. However, even in the same domain, different domain experts or users can conceptualize the same real-world entities in different ways, which leads to multiple domain ontologies. Besides, the scope of a domain ontology is usually arbitrary and cannot be formally defined, because multiple ontologies can model the same function differently. Another reason may be the co-existence of independent organizations.
For example, in the first 9 months of 2010, Google acquired about 40 companies. Each company may adhere to its own ontology. Thus differences and overlaps between the models used for ontologies can exist. Moreover, the agreements that are captured in relation to a domain may differ when seen from different perspectives and at different levels of detail. Since the service requesters and providers operate independently, they usually choose whichever of these available ontologies best aligns with the service to annotate. Therefore, in such an environment, where the service requesters and providers adopt different ontologies, a discovery system that supports services using different ontologies is extremely important.
From the description above, we can divide service discovery systems into two categories, as shown in Fig. 1: keyword-based discovery and ontology-based discovery. The latter refers to service discovery in either a single-ontology environment or a multi-ontology environment. In this paper, we focus on how to manage multiple ontologies and how to use them to annotate services in a multi-ontology environment. Motivated by the multi-ontology environment, a new approach to multi-ontology management and annotation in S2R2 is presented by means of ISO SC32 19763-3: MFI-3 (Metamodel Framework for Interoperability: Metamodel for Ontology Registration) [3]. Based on this approach, we construct and register the different ontologies in the same domain in order to serve annotation of web services. More importantly, we design the Ontology Management Platform (OMP), in which different ontologies and the relationships among them can be effectively registered and managed. This platform greatly enhances the discovery of services annotated with multiple ontologies in S2R2.
The paper is organized as follows. Section II gives a short interpretation of what MFI-3 is. Section III presents how to realize semantic annotation of services based on multiple ontologies in S2R2. Section IV illustrates why and how to design the OMP. Related work is discussed in Section V, followed by the conclusion and future work in Section VI.
What is MFI-3
In order to effectively manage multiple ontologies, we take ISO SC32 19763-3: MFI-3 as the theoretical basis in this paper. It is mainly used to provide a common metamodel framework for ontology registration so that ontology definitions from every metamodel can be unified [3,4].
As shown in Fig. 2, MFI-3 provides the solutions by defining the "Reference Ontology" (RO) and the "Local Ontology" (LO). Both RO and LO are composed of Ontology Components (OC: ROC or LOC), and an Ontology Component is composed of Ontology Atomic Constructs (OAC: ROAC or LOAC). Fig. 2 also shows that the ROC or ROAC defined in an RO can be reused by other ontologies, while those defined by an LO can only be used for its own definition purposes. Generally speaking, an RO is used to represent the common and global ontology in a domain, which is constructed and maintained by experienced and authoritative domain experts. So an RO is comparatively stable in the domain it belongs to. Different from the RO, an LO is often constructed and evolved from an RO for a particular purpose. More importantly, the LOs evolved from the same RO can interoperate with each other according to the evolution rules between them, which will be discussed below. That is to say, this mechanism provides a solid foundation for semantic interoperability among multiple ontologies.
Besides RO and LO, the evolution information from RO to LO is a core part of MFI-3. The ontology evolution rules help record the detailed information when LOs evolve from an RO, which ensures that differences between ontologies do not hamper the interoperability between the RO and its LOs. There are three basic evolution rules: the SameAs Rule, the Enhancement Rule and the Contraction Rule. Theoretically, the three evolution rules can cover all possible modifications of RO and LO, because any complex evolution can be decomposed and viewed as a sequence consisting of these three rules. The first version of MFI-3 has been an international standard since 2008 [3]. This version only includes the SameAs Rule at present (see Fig. 2); the Enhancement Rule and the Contraction Rule will be included in the second version of MFI-3 in the future. Therefore the evolution rules from RO to LO described in the following parts of the paper focus on the SameAs Rule. Now we give a brief definition of the SameAs Rule: it helps set up the equivalent mapping between instances of Ontology, Ontology Component or Ontology Atomic Construct. If a semantic conflict occurs, we need to report the changes to the petitioner for a better understanding of the actual needs. When the SameAs Rule is adopted, there is no change in the connotation of the corresponding concept.
In order that the LOs evolved from the same RO can understand each other, the evolution information should be recorded when the SameAs Rule is adopted. In this paper, according to MFI-3, the evolution information is kept in a form which shows that a concept XXX in the LO is the same as YYY in the RO. That is to say, according to the SameAs Rule, we create the equivalent mapping relationship between XXX in the LO and YYY in the RO, which helps them understand each other.
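The sketch below shows, under the assumption of a simple in-memory representation, how such SameAs evolution records could be stored and looked up; the class and field names are illustrative and are not prescribed by MFI-3 itself.

```python
# Minimal sketch of recording SameAs evolution information between a Local
# Ontology (LO) construct and a Reference Ontology (RO) construct.  Class and
# field names are illustrative; MFI-3 only prescribes the metamodel.
from dataclasses import dataclass

@dataclass(frozen=True)
class SameAsRule:
    lo_name: str          # registered Local Ontology
    lo_construct: str     # Ontology Component / Atomic Construct in the LO
    ro_name: str          # Reference Ontology the LO evolved from
    ro_construct: str     # equivalent construct in the RO

class EvolutionRegistry:
    def __init__(self):
        self._rules = []

    def register(self, rule: SameAsRule):
        self._rules.append(rule)

    def to_reference(self, lo_name: str, lo_construct: str):
        """Return the RO construct that is SameAs the given LO construct, if any."""
        for r in self._rules:
            if r.lo_name == lo_name and r.lo_construct == lo_construct:
                return r.ro_name, r.ro_construct
        return None

# Usage: concept "Automobile" in DLO_1 is SameAs "Car" in the domain RO.
registry = EvolutionRegistry()
registry.register(SameAsRule("DLO_1", "Automobile", "DRO_Vehicle", "Car"))
print(registry.to_reference("DLO_1", "Automobile"))
```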
Multi-Ontology Based Annotation in S2R2
As we all know, a Domain Ontology (DO) is usually constructed and standardized by domain experts who use commonly approved concepts and the relationships among them, but sometimes, even in the same domain, different domain users can conceptualize the same real-world entities in different ways, which leads to multiple domain ontologies. We regard the DO defined by authoritative experts as the Domain Reference Ontology (DRO) and a DO defined by users as a Domain Local Ontology (DLO).
Fig. 3 shows that both DRO and DLO can be adopted to add semantic information to a service; that is to say, we realize multi-ontology based annotation of services.
Fig. 3. Multi-ontology based annotation to service.
This method extends the scope of semantic annotation in the domain, which enhances the flexibility of service publication and discovery. However, DRO and DLO, which can both be used to annotate services, are not isolated but interrelated. Based on MFI-3, the evolution information between DRO and DLO should be registered and recorded in S2R2. As mentioned in Section II, although there exist three evolution rules between DRO and DLO, we adopt the SameAs Rule because this rule is part of the first version of MFI-3, which has already become an International Standard. Next we give an example to illustrate how a DRO and its DLOs are interrelated.
Our solution is not isolated from other classic matching algorithms, but takes them as its basis. That is to say, after applying the matching algorithms, we make a further step to calculate the SameAs relationship between DRO and DLO. Therefore, it is necessary to introduce the classic matching algorithms first.
Generally speaking, the matching algorithm is divided into two stages, namely Syntactic Matching and Functional Matching. Syntactic Matching relies on the name and description of S_A and S_R and is calculated as the weighted average of the name similarity (NameMatch(S_A, S_R)) and the description similarity (DescrMatch(S_A, S_R)):
SynMatch(S_A, S_R) = (w1 · NameMatch(S_A, S_R) + w2 · DescrMatch(S_A, S_R)) / (w1 + w2),
where w1 and w2 are defined by users, and S_A and S_R represent the advertised service and the requested service, respectively. Functional Matching is calculated as the average match score of the operations of S_A and S_R, including operation similarity, IO similarity, property similarity, etc. In this process, ontology concept matching is involved, which tries to find the service that is the most similar to the request (the highest degree match). A small sketch of such a two-stage match score is given below.
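The sketch below illustrates the two-stage match score described above; the token-overlap similarity and the equal weighting of the two stages are simplifying assumptions standing in for the classic algorithms the paper refers to.

```python
# Sketch of the two-stage match score described above.  The token-overlap
# similarity is only a placeholder for the classic name/description matching
# algorithms referenced in the text, and the equal weighting of the two stages
# is an assumption for illustration.
def token_similarity(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

def syntactic_match(adv, req, w1=0.4, w2=0.6):
    name_match = token_similarity(adv["name"], req["name"])
    descr_match = token_similarity(adv["description"], req["description"])
    return (w1 * name_match + w2 * descr_match) / (w1 + w2)

def functional_match(adv, req):
    # Average match score over operation names; IO and property similarity
    # (driven by ontology concept matching) would be added here.
    scores = [max(token_similarity(o, p) for p in req["operations"])
              for o in adv["operations"]]
    return sum(scores) / len(scores) if scores else 0.0

def match(adv, req):
    return 0.5 * syntactic_match(adv, req) + 0.5 * functional_match(adv, req)

advertised = {"name": "Weather Forecast Service",
              "description": "returns a city weather forecast",
              "operations": ["getForecastByCity"]}
requested = {"name": "City Weather Service",
             "description": "weather forecast for a given city",
             "operations": ["getForecastByCity", "getTemperature"]}
print(round(match(advertised, requested), 3))
```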
In this paper, we do not present the complete algorithms in detail, because they are comparatively popular and widely used. Instead, we focus on how to use MFI-3 to manage multiple ontologies effectively. The next section discusses the design of the ontology management platform.
Ontology Management in S2R2
Design of OMP. As mentioned above, DRO, DLO and the evolution information between them play the key role in S2R2 and ensure interoperability among the services. In order to effectively manage these ontologies and the evolution information, the Ontology Management Platform (OMP) based on MFI-3 is designed. First, we register the DRO, including the ontology name, the ontology document (*.owl file), ontology descriptions, the domain which the ontology belongs to and the ontology type (DRO or DLO). Whether DRO or DLO, both are registered according to the pattern "Ontology - Ontology Component - Ontology Atomic Construct" defined in MFI-3 and shown in Fig. 4. In this section, the Ontology Management Platform (OMP) based on MFI-3 is designed, and we have discussed in detail how to register the DRO/DLO and how to define the SameAs relationship between the DRO and DLO. According to the DRO, the DLO and the SameAs relationship between them, multi-ontology based annotation in S2R2 can be achieved, which helps improve the semantic interoperability of services based on these ontologies.
As a whole, Fig. 6 illustrates the framework of S2R2, which at present includes three components: the Ontology Management Platform (OMP), the Service Registration System and the Semantic Query System. From the framework, we can easily conclude that the OMP plays a very important role in S2R2. On one hand, it provides a mechanism to semantically annotate services when they are registered in the Service Registration System; on the other hand, it supports semantic discovery of services in the Semantic Query System.
Fig.6. Framework of S2R2
Evaluation. To evaluate our approach to ontology management, we take the recall ratio, the number of returned candidate services and the total number of services as the evaluation metrics.
Because of the absence of a relevant standard platform and test data sets, web services are randomly generated as a test case in this paper. The services themselves do not perform a meaningful action, but they have a mapping to the multiple ontologies. From the evaluation point of view, there is no real difference from true Web services.
The experimental settings are: (1) Protege is used to create domain ontologies, including classes, subclasses and hierarchical relationships; the number of concepts per ontology is about 50~100. (2) 100~300 web services are registered in each S2R2, and the functional descriptions of the services, such as input and output as well as the atomic service parameters, are all mapped to concepts of the domain ontology. (3) The number of input and output parameters of a service is about 1~4, and each service is provided by a composition of atomic services. (4) The performance is measured on a workstation with 2 GB RAM and a 2.6 GHz CPU running Microsoft Windows 2003. Fig. 7 illustrates that the number of returned candidate services annotated with DRO/DLO is increased by 40% compared to the number of returned services annotated with a single DO, which means the recall ratio has been increased greatly. This is because our proposal not only makes full use of the logic reasoning algorithm within a single ontology, as traditional methods have done for many years, but also exploits the SameAs relationships among multiple ontologies based on MFI-3 to achieve high performance. S2R2 Online. A test version of multi-ontology management and annotation in S2R2 has been released on the website http://www.s2r2.org. Users can publish and discover services through the multi-ontology method on this platform.
Related work
In recent years, many efforts have been made to ease the Web Services annotation and discovery process. M. Klusch et al. [14][15][16] design the constraints as a plug-in into a simple matchmaker version for the desired service. Other work on service discovery includes LARKS (Language for Advertisement and Request for Knowledge Sharing) [8,9], a project based on a collaboration between Toshiba and Carnegie Mellon University [10,11], a Matchmaker from TU-Berlin [12], and systems by Li and Horrocks [13], Paolucci [7], etc. However, these service discovery systems only support matching services using the same or a single ontology which both provider and requester share. This assumption implies that if different ontologies are used, matching cannot be carried out. As mentioned, this is a major limitation; it may produce a semantic gap between providers and requesters due to the use of different ontologies to describe services and requests, respectively. There exist some studies in multi-ontology environments. In [17], the system is extended with a method for allowing providers and requesters to use different ontologies. This method only takes into account the two provided ontologies (the provider's and the requester's). Our method makes it possible to add evolution information to the different ontologies, which can simplify the process of service discovery. In [18], another proposal for matchmaking based on OWL-S is presented. In this system, when providers and requesters use different ontologies, the mappings have to be provided by the requester. All these approaches could get rid of their assumption of using the same ontologies for both requester and provider (or having the mappings) by using ontologies integrated with the information of the extracted senses. Although the technologies of ontology mapping and merging might be an important step towards semantic interoperability, building mappings and mediators between ontologies manually is a costly and difficult process.
Our approach of multi-ontology (DRO/DLO) annotation based on MFI-3 is adopted to reduce the complexity of ontology management. More importantly, we design the OMP to effectively manage the DRO, the DLOs and the evolution information between them, which leads to the interoperability of services in S2R2 based on these ontologies. That is to say, our approach extends the scope of semantic annotation in the domain and greatly enhances the flexibility of service publication and discovery.
Summary
In this paper, we make full use of MFI-3 to enhance service discovery in a multi-ontology environment. Different from a traditional Service Registry, in S2R2 providers and requesters can use different ontologies (DRO/DLO) in the same domain to annotate and discover services. More importantly, in order to effectively manage the DRO, the DLOs and the evolution information between them, we design the Ontology Management Platform to help realize semantic annotation and discovery of services based on multiple ontologies. The evaluation results illustrate that the performance of discovery is greatly increased by our approach.
In the future, we plan to expand the Ontology Evolution Rules so that the evolution information can be recorded comprehensively and clearly, and verification criteria should be attached to each kind of rule to avoid semantic contradictions when ontologies evolve.
Fig. 5 illustrates how to define the SameAs relationship between DRO and DLO. In the left pane, Reference Ontology Information, when the DRO is chosen in the dropdown list, the Ontology Components of the DRO are listed in the ListBox automatically. When an Ontology Component is then selected, its Ontology Atomic Constructs are also listed. The right pane, Local Ontology Information, offers the same operations as the left pane. After the operations above, we can define the SameAs relationship between the Ontology Component or Ontology Atomic Construct in the DRO and that in the DLO.
Fig. 4. Registration pattern: Ontology - Ontology Component - Ontology Atomic Construct.
Fig. 7. Relationship between the number of candidate services and the number of services in S2R2. | 3,973.2 | 2012-06-01T00:00:00.000 | [
"Computer Science"
] |
causal-curve: A Python Causal Inference Package to Estimate Causal Dose-Response Curves
In academia and industry, randomized controlled experiments (colloquially “A/B tests”) are considered the gold standard approach for assessing the impact of a treatment or intervention. However, for ethical or financial reasons, these experiments may not always be feasible to carry out. “Causal inference” methods are a set of approaches that attempt to estimate causal effects from observational rather than experimental data, correcting for the biases that are inherent to analyzing observational data (e.g. confounding and selection bias) (Hernán & Robins, 2020).
Summary
In academia and industry, randomized controlled experiments (colloquially "A/B tests") are considered the gold standard approach for assessing the impact of a treatment or intervention. However, for ethical or financial reasons, these experiments may not always be feasible to carry out. "Causal inference" methods are a set of approaches that attempt to estimate causal effects from observational rather than experimental data, correcting for the biases that are inherent to analyzing observational data (e.g. confounding and selection bias) (Hernán & Robins, 2020).
Although significant research and implementation effort has gone towards methods in causal inference to estimate the effects of binary treatments (e.g. what was the effect of treatment "A" or "B"?), much less has gone towards estimating the effects of continuous treatments. This is unfortunate because there are a great number of inquiries in research and industry that could benefit from tools to estimate the effect of continuous treatments, such as estimating how:
• the number of minutes per week of aerobic exercise causes positive health outcomes, after controlling for confounding effects.
• increasing or decreasing the price of a product would impact demand (price elasticity).
• changing neighborhood income inequality (as measured by the continuous Gini index) might or might not be causally related to the neighborhood crime rate.
• blood lead levels are causally related to neurodevelopment delays in children.
causal-curve is a Python package created to address this gap; it is designed to perform causal inference when the treatment of interest is continuous in nature. From the observational data that is provided by the user, it estimates the "causal dose-response curve" (or simply the "causal curve").
In the current release of the package there are two unique model classes for constructing the causal dose-response curve: the Generalized Propensity Score (GPS) and the Targeted Maximum Likelihood Estimation (TMLE) tools. There is also a tool to assess causal mediation effects in the presence of a continuous mediator and treatment.
causal-curve attempts to make the user experience as painless as possible:
• This package's API was designed to resemble that of scikit-learn, as this is a commonly used Python predictive modeling framework familiar to most machine learning practitioners.
• All of the major classes contained in causal-curve readily use Pandas DataFrames and Series as inputs, to make this package integrate more easily with the standard Python data analysis tools.
• A full, end-to-end example of applying the package to a causal inference problem (the analysis of health data) is provided. In addition, there are shorter tutorials for each of the three major classes available online in the documentation, along with full documentation of all of their parameters, methods, and attributes. (A hypothetical usage sketch in this spirit follows this list.)
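The sketch below is a hypothetical usage example based only on the description above (a scikit-learn-like API taking pandas objects); the import path, class name and method names are assumptions for illustration and may differ from the package's real API, so the online documentation should be consulted for the actual interface.

```python
# Hypothetical usage sketch: a scikit-learn-like API taking pandas objects.
# The class name `GPS` and the method names in the commented lines are
# assumptions for illustration and may not match the real causal-curve API.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "exercise_minutes": rng.gamma(shape=2.0, scale=60.0, size=n),  # continuous treatment
    "age": rng.integers(20, 70, size=n),                           # confounder
})
df["health_score"] = (0.01 * df["exercise_minutes"]
                      - 0.02 * df["age"]
                      + rng.normal(0, 1, size=n))                  # outcome
print(df.head())

# from causal_curve import GPS                 # assumed import path
# gps = GPS()                                  # assumed class name
# gps.fit(T=df["exercise_minutes"], X=df[["age"]], y=df["health_score"])
# curve = gps.calculate_CDRC(0.95)             # assumed method returning the causal curve
# print(curve.head())
```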
This package includes a suite of unit and integration tests made using the pytest framework. The repo containing the latest project code is integrated with TravisCI for continuous integration. Code coverage is monitored via codecov and is presently above 90%.
Methods
The GPS method was originally described by Hirano (Hirano & Imbens, 2004), and expanded by Moodie (Moodie & Stephen, 2010) and more recently by Galagate (Galagate, 2016). GPS is an extension of the standard propensity score method and is essentially the treatment assignment density calculated at a particular treatment (and covariate) value. Similar to the standard propensity score approach, the GPS random variable is used to balance covariates. At the core of this tool, generalized linear models are used to estimate the GPS, and generalized additive models are used to estimate the smoothed final causal curve. Compared with this package's TMLE method, the GPS method is more computationally efficient and better suited for large datasets, but produces significantly wider confidence intervals. The TMLE method is based on van der Laan's work on an approach to causal inference that employs powerful machine learning approaches to estimate a causal effect (van der Laan & Gruber, 2010). TMLE involves predicting the outcome from the treatment and covariates using a machine learning model, then predicting treatment assignment from the covariates. TMLE also employs a substitution "targeting" step to correct for covariate imbalance and to estimate an unbiased causal effect. Currently, there is no implementation of TMLE that is suitable for continuous treatments. The implementation in causal-curve constructs the final curve through a series of binary treatment comparisons across the user-specified range of treatment values. causal-curve allows for continuous mediation assessment with the Mediation tool. As described by Imai, this approach provides a general approach to mediation analysis that invokes the potential outcomes / counterfactual framework (Imai & Tingley, 2010). While this approach can handle a continuous mediator and outcome, as put forward by Imai it only allows for a binary treatment. As mentioned above with the TMLE approach, the tool creates a series of binary treatment comparisons and connects them to show the user how mediation varies as a function of the treatment. An interpretable, overall mediation proportion is provided as well.
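The sketch below is a compact, from-scratch illustration of the GPS idea described above (treatment model, GPS evaluated at the observed treatment, outcome model including the GPS, and averaging over the sample at fixed treatment values); it is not the package's implementation, which uses generalized additive models, and the simulated data and quadratic outcome model are assumptions for illustration.

```python
# From-scratch illustration of the GPS idea (not the package's implementation):
# (1) model the treatment given covariates, (2) evaluate the treatment density
# (the GPS) at each unit's observed treatment, (3) model the outcome from the
# treatment and GPS, and (4) average the fitted outcome over the sample at
# fixed treatment values to trace the causal dose-response curve.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                       # single confounder
t = 0.8 * x + rng.normal(size=n)             # continuous treatment
y = 1.5 * t + 2.0 * x + rng.normal(size=n)   # outcome

# (1) treatment model: T | X ~ Normal(a + b*x, sigma^2), fitted by least squares
A = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
sigma = (t - A @ coef).std()

# (2) GPS = density of the observed treatment given covariates
gps = norm.pdf(t, loc=A @ coef, scale=sigma)

# (3) outcome model: quadratic in treatment and GPS (an illustrative choice)
B = np.column_stack([np.ones(n), t, t**2, gps, gps**2, t * gps])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)

# (4) causal dose-response: for each treatment value t0, average the prediction
# over the empirical covariate distribution (GPS re-evaluated at t0)
for t0 in np.linspace(-2, 2, 5):
    gps_t0 = norm.pdf(t0, loc=A @ coef, scale=sigma)
    B_t0 = np.column_stack([np.ones(n), np.full(n, t0), np.full(n, t0**2),
                            gps_t0, gps_t0**2, t0 * gps_t0])
    print(f"t = {t0:+.1f}  E[Y(t)] ~ {(B_t0 @ beta).mean():.2f}")
```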
Statement of Need
While there are a few established Python packages related to causal inference, to the best of the author's knowledge, there is no Python package available that can provide support for continuous treatments as causal-curve does. Similarly, the author isn't aware of any Python implementation of a causal mediation analysis for continuous treatments and mediators. Finally, the tutorials available in the documentation introduce the concept of continuous treatments and are instructive as to how the results of their analysis should be interpreted. | 1,379 | 2020-08-31T00:00:00.000 | [
"Economics"
] |
Study on the Distance Learners’ Academic Emotions Using Online Learning Behavior Data
In recent years, computer vision, artificial intelligence, machine learning, and other high-tech technologies have advanced rapidly. These strategies lay a new technical foundation for online learning and intelligent education by making it easier to promote the scientific, intelligent, and data-driven analysis of learners' academic emotions. Online learning can compensate for the shortcomings of traditional learning and enables distance learning; however, learners' academic emotion is an important indicator that has a direct impact on learning quality and effect. Therefore, this paper analyzes distance learners' academic emotions based on online learning behavior data. It extracts online learning behavior data by using a deep learning algorithm and multimodal weighted feature fusion based on DS (Dempster-Shafer) evidence theory, establishes a distance learners' academic cognition motivation model, and constructs an online learning emotion measurement framework. Finally, a correlation study of distance learners' academic emotions and learning effects shows that learners' academic emotion in class has a beneficial influence on learning, since learners' academic emotion is positively correlated with teachers' emotion, and learners' addition, deletion, and modification behavior is positively correlated with learners' academic emotion.
Introduction
With the continuous reform of educational concepts, the problems of traditional classroom education have become increasingly prominent. In traditional classroom teaching, teachers and students communicate with each other in a variety of ways, such as through students' facial expressions, body language, and answering questions in class. The online learning behavior of distance education learners needs technical means to capture sound, text, images, and other information in order to realize indirect emotional communication with learners, which increases the difficulty of analyzing distance learners' academic emotions [1]. Academic emotion is a major factor influencing the effect of online learning. Emotion permeates all aspects of people's life and work, affecting perception and motivation, and can promote or inhibit people's learning motivation [2].
Chinese academic institutions have moved away from large classes and in-person training in classrooms with insufficient ventilation in order to comply with the National Epidemic Control Center's requirements on social distance. These innovations have included teachers transitioning from traditional classrooms to online schools using computerized learning management systems, as well as providing synchronous instruction via distance courses. Yet synchronous education has been criticized for its instructor-centric models, which prioritize educators above pupils [3]. As a result, several learners who were quarantined or unable to visit China during the COVID-19 epidemic preferred small private online courses and massive open online courses, initially offered by public and private colleges, as means of distance learning. The MOE has established a statewide online learning framework that covers all educational sectors, especially higher education, to address the COVID-19 epidemic without interrupting lectures. This platform supports a collection of online educational programs and materials available across all systems for use by all institutions [4].
The abovementioned system collaborates with telecommunication companies to provide special offers on online services, including free 4G SIM cards and other student discounts, to financially deprived students or students whose school systems have been stopped and who are now attending courses online at home. The idea of behavioral psychology states that analysis of behavioral data can provide information about students' psychodynamics and observable behaviors [5]. LMSs provide the ability to record a student's online operating habits, which are saved as part of the student record. To monitor a learner's learning behaviors, teachers might mine the student's record for data. The operating actions of learners when participating in online learning are termed learning behaviors [6] and can indicate either explorative learning behavior or learning involvement behavior. Different LMSs impose different data-gathering limitations; therefore online operational behaviors vary. Investigators can collect recordings of various online operational activities to extrapolate information that is not readily visible in raw data. When a student clicks on a certain function in the LMS, the record and timing of that activity are saved in the database as part of the student profile. When appropriately evaluated, such online operational behaviors can mirror students' online learning practices. The majority of online learning activities are estimated using frequency and duration. These include the total frequency with which a class was accessed, the total time for which an instructional video was accessed, and the total number of posts generated in online conversation [7,8]. After researchers collect these online learning behaviors, data cleaning must be performed to avoid bias caused by aberrant outcomes, and the validity of the behavioral data collection must be evaluated. This is done to mitigate the impact of online operational actions induced by score competition; that is, researchers must gather data about online learning behaviors for which students are not prone to be affected by the score, i.e., avoid items for which students might gain higher scores by clicking more regularly or spending more time.
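The sketch below shows one way raw LMS click logs could be turned into the frequency and duration features just described; the log schema and the 30-minute session timeout are assumptions for illustration.

```python
# Sketch: turning a raw LMS click log into frequency and duration features per
# learner.  The log schema (learner_id, action, timestamp) and the 30-minute
# session timeout are illustrative assumptions.
import pandas as pd

log = pd.DataFrame({
    "learner_id": ["s1", "s1", "s1", "s2", "s2"],
    "action": ["open_course", "play_video", "post_forum", "open_course", "play_video"],
    "timestamp": pd.to_datetime([
        "2024-03-01 10:00", "2024-03-01 10:05", "2024-03-01 10:40",
        "2024-03-02 09:00", "2024-03-02 09:50"]),
})

# Frequency features: how often each action was performed
freq = log.pivot_table(index="learner_id", columns="action",
                       values="timestamp", aggfunc="count", fill_value=0)

# Duration features: time between consecutive events, capped by a session timeout
log = log.sort_values(["learner_id", "timestamp"])
gaps = log.groupby("learner_id")["timestamp"].diff().dt.total_seconds().div(60)
log["active_minutes"] = gaps.clip(upper=30).fillna(0)   # 30-minute timeout (assumption)
duration = log.groupby("learner_id")["active_minutes"].sum()

features = freq.join(duration)
print(features)
```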
Based on the above, this research work focuses on the emotional problems of distance learners based on online learning behavior data. By collecting, identifying, and analyzing the various emotional data produced by online learners, we can grasp the emotions of distance learners and mine the resources and value in educational data. In addition, we establish an online learner emotion measurement model, which is conducive to better grasping the academic emotions of distance learners [9]. The main innovations of this paper are as follows: (1) we use a deep learning algorithm to build a perception model and use multimodal weighted feature fusion based on DS evidence theory to collect and analyze students' academic emotions [10]; (2) we summarize the connotation and different classifications of academic emotion, establish the academic cognition motivation model of distance learners, strengthen the learning effect and influence, and construct the online learning emotion measurement framework [11]. The remainder of the paper is organized as follows: Section 2 discusses the contributions of national and international researchers. Section 3 explains the material and approach for online learning behavior based on deep learning. Section 4 gives an analysis of the academic emotions of distance learners. Section 5 discusses in depth the results and simulation of distance learners' academic emotion analysis. Finally, the study is concluded in Section 6.
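Since the paper describes fusing multimodal emotion evidence with DS evidence theory, the sketch below illustrates Dempster's rule of combination for two modalities; the emotion frame, the two modalities and the mass values are illustrative assumptions, not the paper's actual fusion weights.

```python
# Minimal sketch of Dempster's rule of combination for fusing two modalities'
# emotion evidence.  The emotion frame and the mass values are illustrative;
# masses over subsets of the frame are represented with frozensets of labels.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FRAME = frozenset({"positive", "neutral", "negative"})

# Mass functions from two modalities (e.g. facial expression and text),
# including some mass assigned to the full frame (ignorance).
m_face = {frozenset({"positive"}): 0.6, frozenset({"negative"}): 0.1, FRAME: 0.3}
m_text = {frozenset({"positive"}): 0.5, frozenset({"neutral"}): 0.2, FRAME: 0.3}

fused = dempster_combine(m_face, m_text)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```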
Related Work
At present, scholars at home and abroad have focused on the academic emotion analysis of distance learners and have achieved remarkable research results [12]. The work of [13] studied the influence of screen time on emotion regulation and student performance, examining the use of smartphones and tablets by more than 400 children over a four-year period, analyzing the relationship between these behaviors and emotion and academic performance, and evaluating students' ability and academic performance. Similarly, the work of [14] studied the influence of early childhood emotion on academic preparation and social-emotional problems. Emotion regulation is the process of regulating emotional arousal and expression, which directly affects whether children can adapt well to the school environment. In this connection, the researchers of [15] introduced connectionist learning theory to establish a new learning model for distance education and proposed teaching content based on emotional education objectives. They used the MOOC teaching mode to build a distance learning community, humanized network courses, and other new teaching modes to address the problem of emotional deficiency in distance education. For effectiveness, the scholars of [16] built a hybrid-reality virtual intelligent classroom system. The system makes full use of television broadcasting technology and interactive space technology to form a network teaching environment. Teachers employ video, audio, text, and other techniques to realize contact between teachers and students and to increase communication between teachers and students during network teaching.
Besides the above scholars, the early work of [17] proposed a SIFT emotion recognition algorithm based on facial expression scale-invariant feature transformation. Based on emotion theory, this algorithm captures the facial expression of distance learners to realize SIFT feature extraction and recognize the expression of distance learners, thereby better compensating for the lack of emotion in the learning stage of distance learners. The researchers of [18] established a learner emotion prediction model for an intelligent learning environment based on the fuzzy cognitive map. They used the model to extract and predict the learning emotion of distance learners, which makes it convenient for the teaching system to adjust the teaching scheme in real time according to the predicted emotion. The work of [19] developed a distance learner emotion self-assessment scale, which can define the basic emotion variables of distance learners, and completed the design and establishment of a distance learner emotion early warning model. Finally, based on a regression model, the work of [20] analyzed the online academic emotions of adults, analyzed various factors affecting them, and studied the environmental factor model of the online learning community related to academic emotion tendency. Inspired by the contributions and findings of the aforementioned scholars, we attempt to study distance learners' academic emotions using online learning behavior data and obtain significant results.
Material and Methodology for Online
Learning Behavior Based on Deep Learning
Online Learning Behaviors and Their Features.
Online learning behavior refers to learning behaviors that occur in a network setting. We concentrate on extracting learner features from online learning behavior, following analysis, to comprehend the quality of teaching and learning. The operation of online learning behaviors lies at the heart of learning behavior [21]. The features of online learning behavior are explained in Figure 1.
Style of Learning.
Learning style is the characteristic way in which a learner studies and tries to solve academic tasks, and it influences the learner's cognitive load. According to the Felder-Silverman learning style model, we may examine learners' learning styles along the four dimensions of information processing, information awareness, information input, and information understanding [22].
(1) Processing of Information. Learners of this type are quite interested in the material on the online learning system, and they are responsive to the opinions of other online learners and to the comments from professors in the instructional videos of the course materials. Active learners obtain information by constantly doing, sharing, or explaining concepts to others, and they like cooperation, whereas reflective learners prefer learning via deep concentration, either alone or with a regular study partner.
(2) Awareness of Information. Learners in this dimension are habituated to comprehending information through individual interpretation, and they choose conceptual and engaging learning content. They are particularly interested in video learning on the learning system, extensive learning materials, and student communication. Insightful students enjoy studying information with great attention to detail, but they frequently avoid complicated topics, whereas perceptive students enjoy studying theoretical knowledge and are willing to tackle complex subjects, though they tend to be careless in acquiring it.
(3) Input of Information. Learners in this dimension are visual or auditory and are used to learning from the contributions of others. This sort of learner is more interested in reading or watching videos. Visual students, for example, are exceptional at recalling what they see, such as video images, while auditory learners have a strong memory for what they listen to or read. (4) Understanding of Information. This type of learner often analyzes and comprehends knowledge on their own, which is expressed in studying to meet their own requirements. Stepwise learning and knowledge acquisition in a predetermined logical order are characteristics of orderly learners, while comprehensive learners prefer to think globally; their thinking is more varied and leaping.
Preferences of Learning.
Different types of students have different learning preferences, which influence their success in the online learning system. The preferences of students reflect their requirements. Basic input learning preferences, such as audiovisual and verbal learning, are reasonably straightforward to accommodate and can be incorporated into existing online training systems. Intermediate preferences mostly concern communication activities between people, such as student inquiries, instructor responses, and contact between online learners. Enhanced preferences refer to the process of autonomous creativity that occurs after students integrate knowledge, such as disseminating information on their own.
Interactive Learning.
Human-computer interaction and human-human interaction are the two types of interaction that take place during the online learning procedure. Registration, browsing, downloads, and other actions that proactively obtain platform resources are the most common human-computer interaction behaviors. Person-to-person interaction mostly refers to learners publishing and answering posts on the BBS, from which a learning interaction network graph can be constructed. The closeness between nodes in the network can be computed based on the size of learner nodes to determine a learner's interaction patterns and learning law, as well as the connections with other learners [23]. Learners' engagement and its depth can be utilized as indicators to measure interactive behavior.
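As an illustration of the interaction-network idea described above, the following sketch builds a small learner interaction graph from hypothetical post/reply pairs with networkx and computes node degree and closeness centrality as possible proximity measures; the data and variable names are illustrative, not taken from the study.

```python
# Illustrative sketch: build a learner interaction graph from BBS post/reply
# pairs and compute degree and closeness centrality as possible "closeness"
# measures. The reply pairs below are invented example data.
import networkx as nx

# Each tuple means: `replier` answered a post published by `author`.
reply_pairs = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "A")]

G = nx.DiGraph()
G.add_edges_from(reply_pairs)

interaction_degree = dict(G.degree())        # node "size" as interaction count
closeness = nx.closeness_centrality(G)       # proximity of each learner node

for node in G.nodes:
    print(node, interaction_degree[node], round(closeness[node], 3))
```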
Acquisition of Online Learning Behavior Data.
The data created by the interaction between students and the platform throughout the learning process is primarily recorded in real time by the system database and other technologies. Learning participants may gain a more complete understanding of the study process and realize empirical forecasting, assessment, and management of the learning experience by evaluating online learning behavior data. At the same time, the collection of behavioral data is the foundation of learning evaluation. Collection may be split into server-side and client-side methods based on differences in data capture targets and rules, whereas, from the terminal viewpoint, data sources can be classified into wireless terminals, PC terminals, and client terminals. Multi-terminal and all-aspect data collection approaches can help in understanding learners' learning features. Figure 2 depicts the online learning behavioral data gathering architecture [24].
As shown in the figure, Web services and Web logging are two examples of server-side data collection. A weblog is used to record data from the learner's real-time operations, such as the request time, request type, request content, request progress, the client's access location, the time of operation completion, and the browser version used by the client, among other things. A Web service is a method of implementing data collection through backend programming. Investigators may create the platform's database module according to the type of learning behavior, so that the target material can be gathered on demand and the collected learner behavior data can be more complete and adaptable, covering a wide variety of services.
Furthermore, client-side data collection involves gathering the data created by learners as they study directly in the browser, which mostly employs JavaScript cookies to conveniently obtain information about the learner's browsing activity.
This approach stores learner behavior data in a specified area and retrieves it from the stored information as required, allowing for more adaptable data collection, recording of caching proxy server usage, and more precise tracking of visitor activity.
Deep Learning.
Machine learning builds statistical models from data and uses these models to predict and analyze data. Deep learning, the main branch of machine learning, is called "deep" because it is a machine learning model with many layers compared with traditional shallow feature learning. The essence of deep learning is to imitate the neurons of the human brain: the use of a multilayer neural network structure to simulate the way the human brain processes information is a deep feature learning method. Deep learning can imitate the way the human brain analyzes many different data types, such as images, texts, sounds, and videos, and build an analysis model that imitates the human brain, so its analysis ability is strong. The learning method adopted by deep learning is similar to the neuron structure of the human brain. Its components include an input layer, hidden layers, and an output layer. The nodes of the input layer receive the input data, and the nodes of the output layer produce the model output. The input layer is similar to sensory neurons, the output layer is similar to decision-making neurons, and the weight coefficients are similar to the strength of the connections between neurons. The perceptron model is a basic artificial neural network; its architecture is shown in Figure 3. To imitate the stimulation process of the human brain, the perceptron model employs an activation function f(x). Each perceptron is thus a function whose input is represented by x and whose output is represented by y after the function is applied; equation (1) represents this function:

y = f(w · x + b), (1)

where w denotes the connection weights and b the bias.
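To make the perceptron of equation (1) concrete, the following minimal sketch implements a single perceptron forward pass in NumPy; the sigmoid activation and the numerical values are illustrative assumptions rather than the paper's actual configuration.

```python
# Minimal sketch of the perceptron in equation (1): y = f(w . x + b),
# using a sigmoid as the activation function f (an illustrative choice).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b):
    # Weighted sum of the inputs followed by the activation f
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.2, 0.7, 0.1])      # example input features
w = np.array([0.5, -0.3, 0.8])     # connection weights
b = 0.1                            # bias term
print(perceptron(x, w, b))
```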
Deep learning can generate a complex function and automatically learn the input features, so the model accuracy is higher than that of other learning methods. In Chinese text classification, the deep learning method can realize automatic extraction of text features, reduce manual intervention, significantly improve the accuracy of the learning model, and greatly improve the classification quality. Therefore, using a deep learning algorithm for emotion classification is feasible and more efficient [25].
Multimodal Weighted Feature Fusion Based on DS Evidence Theory.
The core of learning emotion recognition is to exploit the differences among features of various modalities. For example, human posture features describe the position of human joints from a global perspective, while facial expression features describe the apparent structure of local areas of the image, and both are quite beneficial. A large number of trials show that the accuracy of emotion recognition differs across particular feature types, indicating that different features differ significantly in their sensitivity to a given emotion type [26]. The Dempster-Shafer probability concept, abbreviated as DS theory, is a popular method in the field of multisensor data fusion. It is an imprecise extension of probability and statistics and, unlike Bayesian reasoning, can work without prior information or random assumptions. Accordingly, this work proposes a weighted feature fusion approach based on DS evidence theory that computes the weight vectors of all feature types from the verification set samples, as determined by DS evidence theory.
DS evidence theory is mainly used to deal with the problem of multimodal information fusion. The identification framework, represented by Θ, and the basic trust assignment function m describe the uncertain information. For the identification framework Θ, m is a mapping from the subsets of Θ to [0, 1] that represents the trust allocation function; for any subset A ⊆ Θ it satisfies

m(∅) = 0 and Σ_{A⊆Θ} m(A) = 1. (2)

In the above equation, m(∅) = 0 indicates that the empty proposition carries no trust, and m(A) indicates the trust allocated to event A. Any subset A of Θ that satisfies m(A) > 0 is called an evidence focal element. The evidence body is represented by the pair (A, m(A)) composed of a focal element and its basic trust, and the combination of multiple evidence bodies is called evidence fusion. If m_1, m_2, ..., m_n are multiple basic trust allocation functions over the same frame Θ, then A_i, i = 1, 2, ..., N, are the corresponding focal elements, and equations (3) and (4) are the DS evidence synthesis rules.
In these equations, E_1 and E_2 represent the two pieces of evidence to be synthesized under their respective recognition frameworks, while m_1 and m_2 are the corresponding mass functions and A_i and B_j are the corresponding focal elements. Simplifying equations (3) and (4) yields

m(A) = (1/(1 − K)) Σ_{A_i ∩ B_j = A} m_1(A_i) m_2(B_j), (5)

where 1/(1 − K) is the regularization factor and K = Σ_{A_i ∩ B_j = ∅} m_1(A_i) m_2(B_j) is the conflict coefficient between the different pieces of evidence. If K is equal to or larger than 1, the evidence cannot be synthesized, since no orthogonal sum exists between m_1 and m_2.
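A minimal sketch of the DS combination described above is given below: two hypothetical basic trust assignments (e.g., from posture and facial-expression features) are fused with the orthogonal sum and normalized by 1/(1 − K); the frame of discernment and the mass values are invented for illustration.

```python
# Illustrative sketch of the DS combination rule: two basic trust (mass)
# assignments m1 and m2 over the frame {happy, calm, anxious} are fused;
# K is the conflict coefficient and 1/(1 - K) the normalization factor.
from itertools import product

def ds_combine(m1, m2):
    combined = {}
    conflict = 0.0
    for (a, p1), (b, p2) in product(m1.items(), m2.items()):
        inter = a & b                      # intersection of the focal elements
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p1 * p2
        else:
            conflict += p1 * p2            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Evidence is totally conflicting; cannot combine")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}, conflict

# Hypothetical masses from two modalities (posture and facial expression)
m_posture = {frozenset({"happy"}): 0.6, frozenset({"happy", "calm"}): 0.4}
m_face = {frozenset({"happy"}): 0.5, frozenset({"anxious"}): 0.3,
          frozenset({"happy", "calm", "anxious"}): 0.2}
fused, K = ds_combine(m_posture, m_face)
print(fused, K)
```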
Connotation and Classification of Academic Emotion.
Academic emotion refers to the various emotional responses associated with academic tasks such as learning or teaching. Academic emotion is often classified into two types: negative emotion and positive emotion. Extensive study has shown that classifying academic emotions only as negative or positive pays little regard to the arousal dimension, even though the arousal value also has a direct impact on the complicated behavior of students' learning. Therefore, some scholars add arousal factors to the classification and further divide academic emotions into four categories: high-arousal positive, low-arousal positive, high-arousal negative, and low-arousal negative emotions. The emotion types involved in these four categories of academic emotion are listed in Table 1.
The first kind, high-arousal positive emotion, is reflected in hope, happiness, pride, and similar emotions, which form after positive events such as teacher encouragement, support, and reward. The second kind, low-arousal positive emotion, is reflected in calm, relaxation, satisfaction, and other emotions, arising when the learner's learning environment and performance remain stable. The third kind, high-arousal negative emotion, includes anxiety, anger, and guilt. The fourth kind, low-arousal negative emotion, manifests as boredom, disappointment, depression, and so on.
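The four-quadrant scheme above can be represented by a simple data structure; the mapping below mirrors the emotion examples listed in the text and is only an illustrative stand-in for Table 1.

```python
# Simple data structure mirroring the four-quadrant classification described
# above (valence x arousal); the emotion lists follow the examples in the text.
ACADEMIC_EMOTION_QUADRANTS = {
    ("positive", "high_arousal"): ["hope", "happiness", "pride"],
    ("positive", "low_arousal"): ["calm", "relaxation", "satisfaction"],
    ("negative", "high_arousal"): ["anxiety", "anger", "guilt"],
    ("negative", "low_arousal"): ["boredom", "disappointment", "depression"],
}

def classify_emotion(emotion):
    for quadrant, emotions in ACADEMIC_EMOTION_QUADRANTS.items():
        if emotion in emotions:
            return quadrant
    return None

print(classify_emotion("anxiety"))  # ('negative', 'high_arousal')
```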
Distance Learners' Academic Cognition Motivation Model.
The cognitive effect of academic emotion is reflected in the extraction, preservation, and processing of, and attention to, academic emotion resources. This paper analyzes the effect of academic emotional motivation from two perspectives: internal motivation and external motivation. Internal motivation is the motivation to generate and complete a task influenced by personal factors. Positive emotions form positive internal motivation, while negative emotions reduce positive internal motivation and may even generate negative internal motivation. External motivation usually refers to the motivation students adopt to implement a task. Therefore, emotions related to outcomes interfere with external task motivation, including retrospective emotion and anticipatory emotion. Happiness and hope form positive external motivation, while personal anxiety leads to negative motivation, and strong disappointment enhances learned helplessness and reduces external motivation. Academic emotion also interferes with the motivation effect and the cognitive effect, which are strengthened when this effect is added. Figure 4 shows the impact model of academic emotion on learning achievement. Figure 5 shows the proposed system's architecture, which defines the technical, application, and data visualization layers of the developed framework describing the academic emotions of distance learners. The layers communicate via interfaces that allow their components to be replaced and upgraded as needed. Big data technology is employed for processing in this case. Data collection, data processing, and data set analysis application services are the major components of the process. Based on this and other aspects of emotion assessment, a model for measuring emotions in online learning is developed.
Online Learning Emotion Measurement Framework.
This section discusses the specifics of each layer of the proposed model.
Data Layer.
The data layer transforms the data in our model so that it can be used by many tools. It ensures that the homepage and the label management system communicate. This layer is also used to process, read, and store data. Its primary role is to preprocess data supplied by learners during online learning, such as posture, voice, physiological, and text data. During automated clustering based on the corresponding system results, an index is created and preserved in the database, and retrieval and query activities are accomplished using this index.
Technical Layer.
The technical layer is used to collect data and analyze emotions. The parts of this layer describe our model's technological architecture, detailing its structure and behavior. The node is the major component of the active structure of this layer, and components are used to represent architectural objects; a component precisely represents a system element, and its behavior is represented by an explicit link to the corresponding behavior component. A technical interface is a location where other nodes or software modules from the application layer can utilize the technological services provided by a node. Nodes come in a variety of configurations, incorporating devices and system programs; a device represents a physical computing capacity on which objects can be executed. Various technologies are involved in data collection, analysis, and diagnosis. Therefore, this system uses a variety of data acquisition technologies, such as wearable devices, video surveillance, and web crawlers, to record and save the data generated during learners' online learning and transmit it to the data layer. Then the system extracts information from the data layer and uses text mining, emotion recognition, and other analysis and diagnosis techniques to identify students' academic emotions.
Application Layer.
End-user applications such as internet browsers rely on the application layer. It offers protocols that enable software to communicate and collect information while presenting useful data to users. The application layer in our proposed model is responsible for realizing interaction with users, strengthening academic emotional interaction by using visualization techniques to feed data processing results back to users, and developing reverse intervention or reinforcement adjustment schemes for learners in conjunction with their actual learning emotions.
Correlation Analysis of Distance Learners' Academic Emotion and Learning Effect.
When studying the correlation between distance learners' academic emotions and learning effect, this paper selects 50 students who have published posts on the course and have homework scores for analysis. The average emotional value of a student's posts on the course during distance learning is taken as the learner's emotional value for the course, and the homework score is considered the learner's ultimate learning outcome, i.e., the learning effect. Pearson correlation analysis is then carried out [27]. The correlation analysis result obtained is r = 0.537, p < 0.01. Figure 6 shows the scatter distribution between learners' academic emotions and achievements in the course.
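For reference, the correlation computation described above can be reproduced with a few lines of Python; the arrays below are placeholder values, not the study's measurements.

```python
# Sketch of the correlation analysis described above: Pearson's r between each
# learner's average posting emotion value and the final course score.
# The arrays below are placeholder data, not the study's actual measurements.
import numpy as np
from scipy import stats

emotion_values = np.array([0.31, 0.55, 0.12, 0.78, 0.44])  # mean post emotion
course_scores = np.array([72.0, 85.0, 60.0, 91.0, 80.0])   # homework scores

r, p = stats.pearsonr(emotion_values, course_scores)
print(f"r = {r:.3f}, p = {p:.4f}")
```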
The correlation results and the scatter diagram in Figure 6 show that students' academic emotions and learning effects in this course are significantly positively correlated at the 0.01 level, with a correlation coefficient of 0.537. That is, learners' academic emotion in the classroom has a positive impact on the learning effect: learners with more positive classroom academic emotion are more enthusiastic about learning, and their learning impact and quality are higher, demonstrating the critical importance of analyzing distance learners' academic emotions [28].
Correlation Analysis between Distance Learners' Academic Emotion and Teachers' Emotional Tendency.
This paper studies the correlation between distance learners' academic emotions and teachers' emotional tendencies based on postings in student forums. By sorting the topic posts under different course forums, it mines the evaluation results and data given for these topic posts in one semester [29]. Analysis of the actual content of the course posts shows that most topic posts merely arrange learning tasks, learning activities, and so on, without significant emotional expression. Therefore, this paper focuses the academic emotion analysis only on the replies to the topic posts. The research objects selected here are 100 teachers and learners of the course. The average emotion value of a teacher's posts in each topic post is taken as the teacher's emotion value, and the average emotion value of the learners' posts is taken as the learners' academic emotion. The Pearson correlation result is 0.168, with p < 0.01. Figure 7 shows the scatter distribution between teachers' emotional values and learners' academic emotional values.
The learners' academic emotions on a topic post and the teachers' emotional values on the same topic post in Figure 7 show a significant correlation. At the same time, the emotional distribution of students and teachers posting on the same topic in this course shows a triangular shape, which is consistent with the above emotional calculation results. This demonstrates that there is a positive association between students' academic emotions and teachers' emotions and that teachers have a favorable influence on students while teaching. The emotional tendencies of teachers have a direct influence on the academic emotional tendencies of students.
Correlation Analysis between Real-Time Academic Emotion and Online Learning Behavior of Distance Learners.
Based on the dynamic characteristics of distance learners' academic emotion, this paper assumes that distance learners' learning emotion is related to the learning environment and learning tasks and does not change within a certain period of time. The time periods selected in the study are 2 days, 5 days, and 14 days. The online learning behavior indicators of learners within 1 day, 2.5 days, and 7 days before and after posting are calculated according to the time point when learners post. Then Pearson correlation analysis is conducted between the online learning behavior indicators and the academic emotions of the learners in the corresponding posts [30]. The resulting statistics (*p < 0.05, **p < 0.01) are shown in Table 2.
The correlation results in Table 2 show that few online behaviors are related to learners' real-time academic emotions, and some online behaviors have high significance but low correlation coefficients. Comparing the correlation analyses of the three groups over the different time periods, only the addition, deletion, and modification behaviors of learners are significantly correlated with the real-time academic emotions of distance learners. The smaller the time period, the stronger the significance: the p-value falls from 0.15 for the 14-day period to 0.004 for the 2-day period. Only the amount of forum and workshop participation and the addition, deletion, and modification behaviors remain relevant, as summarized in Table 3.
In this paper, Pearson correlation analysis is also conducted between the learning behavior indicators of learners within 2 days and the real-time academic emotions of the corresponding learners. The correlation analysis results are shown in Table 4 below.
According to the correlation analysis results listed in the table, the learners' behavior of entering the forum to create new posts is significantly correlated with the learners' academic mood at the 0.01 level. In terms of both correlation coefficient and significance, the values for learners' addition, deletion, and modification activities have greatly increased [29].
To improve the analysis of learners' online learning behavior data, this study thoroughly mines the students who have posted and examines the link between their academic emotion patterns and their online learning behavior. If a learner has posted, the emotional value of the posts on that day is calculated and the average value is taken as the learner's academic emotion value; the number of learning behaviors in the log at the time of posting is then counted. Figure 8 below shows the scatter distribution between the academic emotion value of distance learners and the total number of online learning behaviors. Figure 9 compares the behavioral indicators: workshop attendance, number of access users, forum involvement, courseware visits, browse course volume, number of browsing activities, and number of additions, deletions, and alterations. The data clearly show that the indicator of additions, deletions, and alterations is more relevant than the other indicators. Figure 10 compares the same behavioral indicators and clearly shows that the indicator of workshop participation is more significant than the other indicators. Figure 11 shows the comparison between relevance and significance of these behavioral indicators. The distribution shape of the scattered points is a triangle, and the density in the lower right corner is high, indicating that this distribution state is associated with the strong academic emotion of some course participants. When more than 80% of learners' postings reflect the same emotional tendency, it suggests that the learners' academic emotions are heavily masked and cannot be identified through an online learning activity.
Conclusions
With the fast growth of information technology, the online learning model is now extensively employed in the field of education and has evolved into a teaching mode with a broad range of applications. Online learning, which is based on information technology, disrupts traditional teaching techniques by connecting students, teachers, and online learning materials in a diverse interactive environment. Learners experience a range of learning emotions throughout online learning, which has a significant impact on the learning effect. Positive learning emotions can increase students' enjoyment and drive to study, whereas too many negative emotions degrade both the learning effect and the learning efficiency. As a result, this article employs a deep learning system to assess distance learners' academic emotions based on online learning behavior data. The multimodal weighted feature fusion algorithm based on DS evidence theory is used to extract online learning behavior data, and the academic cognition motivation model and the online learning emotion measurement framework for distance learners are built. A correlation study of distance learners' academic emotions and learning effects shows that learners' academic emotions in class have a favorable influence on learning and that there is a positive relationship between students' academic emotions and instructors' emotions. Furthermore, there is a favorable relationship between learners' addition, deletion, and modification activity and their academic mood.
Data Availability
The data used to support the findings of the study can be obtained from the corresponding author upon reasonable request. | 7,764.4 | 2022-08-28T00:00:00.000 | [
"Education",
"Computer Science"
] |
Thermal properties of polycrystalline cubic boron nitride sintered under high pressure condition
The excellent thermal and chemical properties of cubic boron nitride (cBN) indicate that it is a potential material for heat-dissipation substrates in electronic packaging. The thermal properties of polycrystalline cBN ceramics, however, have not been fully investigated. We report the first sintering experiment preparing polycrystalline cBN ceramics using cBN powder as the starting material without any sintering aids. The microstructure and high bending strength show that strong bonding was achieved among the crystal grains. The measured results, including density, thermal conductivity, and thermal expansion coefficient, reveal that the properties of these ceramics depend on the grain size of the starting cBN crystals. The PcBN ceramics have a low thermal expansion coefficient that closely matches that of silicon and exhibit moderate thermal conductivity owing to their relatively low density and the presence of the low-thermal-conductivity hexagonal boron nitride phase.
Introduction
As electronic devices become smaller, faster, and more powerful, thermal management and thermal stresses are becoming critical issues in many packaging applications, including microprocessors, power semiconductors, high-power RF devices, and light-emitting diodes. As a result, low-density materials with high thermal conductivity and low thermal expansion coefficient (matching that of silicon) are urgently needed for the reliable performance of electronic devices [1,2]. Non-metallic materials have attracted considerable attention due to their excellent thermal properties and chemical stability, such as diamond [3,4], cubic boron nitride (cBN) [5][6][7], aluminum nitride (AlN) [8,9], silicon nitride (Si3N4) [10,11], and silicon carbide (SiC) [12,13].
cBN, first synthesized by Wentorf [14] through the transformation of hexagonal boron nitride (hBN) to the cubic form under high pressure and high temperature, is usually used to prepare cutting tools for machining various hard steels because of its superior wear resistance. Moreover, cBN possesses excellent thermal properties. Theoretical models predict a very high thermal conductivity for the single crystal, about 1300 W/mK at room temperature, second only to diamond [5]. These properties indicate that cBN is very likely to be applied in electronic packaging as a heat-dissipation substrate. However, no cBN crystals are available that are large enough to be used directly as heat-dissipation substrates. Therefore, the only form of this material that could be used in electronic packaging is the sintered polycrystalline ceramic.
However, the sintering of this thermal ceramic and its related properties have not been fully investigated, because polycrystalline cubic boron nitride (PcBN) is difficult to sinter using traditional sintering processes, such as hot pressing and spark plasma sintering. The reason is that cBN is a metastable phase relative to hBN under ambient pressure and high temperature, so it tends to transform into the stable phase, hBN, at the high sintering temperatures used in traditional methods. A previous experiment revealed that cBN transforms entirely to hBN at about 1200 °C under ambient pressure [15], implying that PcBN ceramics can only be sintered under pressures of the GPa order (1 GPa = 10^9 Pa).
There are several experimental studies on the thermal conductivity of PcBN ceramics synthesized by direct conversion of hBN to cBN at pressures of approximately 10 GPa. The reported results, however, remain very scattered. The first study of the thermal conductivity of PcBN ceramics was conducted by Slack in 1972 [5], who reported a thermal conductivity of 180 W/mK at room temperature. Subsequently, Corrigan reported high thermal conductivity values ranging from 250 W/mK to 900 W/mK [6]. Ohashi and co-workers synthesized PcBN ceramics under conditions similar to Corrigan's but reported relatively lower values, ranging from about 200 W/mK to 600 W/mK [16]. These PcBN ceramics were prepared under extreme conditions, making them hard to apply in industrial manufacturing. Moreover, these works mainly investigated the thermal conductivity; no research was conducted on other properties related to electronic packaging, such as the thermal expansion and bending strength.
In this study, we carried out sintering experiments to study the thermal properties of PcBN ceramics, placing emphasis on the thermal conductivity, thermal expansion coefficient, and bending strength. We used pure cBN powder as the starting material to prepare PcBN ceramics, without any sintering aids. cBN powders of four grain sizes were employed to prepare high-purity, large-size ceramics under the same sintering conditions. This work is an initial step toward a potential material that could be applied to electronic packaging.
Materials and Experimental Procedures
2.1. Starting Materials, High Pressure Apparatus and Sample Preparation
Commercial cBN powder (supplied by Zhongnan Jete Superabrasives Co. Ltd., Henan, China) of different grain sizes (0–2 μm, 2–4 μm, 4–8 μm, and 8–12 μm) was used in this study. The cBN powder was first pressed into a molybdenum container under a pressure of 20 MPa and then assembled into the synthesis block shown in Fig. 1.
Fig. 1. Sample assembly for the sintering experiments. For clarity, only four anvils of the cubic press are shown [19].
We used a large-volume cubic press to sinter the PcBN ceramics. This press can generate pressures up to 6 GPa and temperatures up to 2000 °C. As shown in Fig. 1, the WC anvils were connected to pistons driven by hydraulic oil, so the six anvils move toward the cell center along three directions and generate high pressure in the synthesis chamber. The computer-controlled hydraulic system allows the chamber pressure to be varied within the pressure-generating capability of the apparatus. More detailed descriptions of the apparatus have been reported in the literature [19]. The temperature in the chamber was controlled by adjusting the heating power applied to the graphite heater and was measured with a W-Re thermocouple. The relationship between the hydraulic oil pressure and the chamber pressure was calibrated by the metal melting point method described in the literature [20].
We sintered the PcBN ceramics for 5 minutes under the same conditions. In order to avoid the transformation of cBN to hBN, we selected a sintering pressure of nearly 6 GPa and a temperature of 1500 °C, which lies entirely within the cBN stable region (see Fig. 4 in reference [15]). The sintered PcBN ceramics were ground with a diamond wheel and subsequently polished with diamond paste of grain size below 2 μm. For the thermal expansion coefficient and bending strength measurements, some polished samples were laser-cut to a size of 12 mm × 2.5 mm × 2.5 mm. The samples used for the thermal conductivity measurements were ground to a size of Φ12 mm × 2.5 mm.
Characterization
The densities of the sintered samples were measured using the Archimedes method. The relative densities were determined by dividing the measured densities by the theoretical density of the cBN crystal (3.486 g/cm³). Crystalline phases of the PcBN ceramics were analyzed by X-ray diffraction (XRD, D8 Advance, Bruker, Germany). The microstructures of the ceramics were observed by scanning electron microscopy (SEM, JSM-IT300, JEOL, Japan). The thermal expansion coefficients of the ceramics were measured by the differential method using an Al2O3 rod as a standard over the temperature range from room temperature to 300 °C (DIL402C, NETZSCH, Germany). Thermal conductivity was measured using the laser flash method (TC-7000H, ULVAC-RIKO, Japan) at room temperature, 200 °C, and 300 °C. The bending strength was determined by the three-point bending test (Instron-5800, US) using 12 mm × 2.5 mm × 2.5 mm bars (span 10 mm).
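As a small illustration of the density evaluation, the sketch below computes the Archimedes density and the relative density with respect to the theoretical cBN value of 3.486 g/cm³; the immersion liquid (water) and the sample masses are assumed example inputs.

```python
# Sketch of the density evaluation described above: Archimedes' method gives
# the bulk density, which is divided by the theoretical cBN density
# (3.486 g/cm^3) to obtain the relative density. Input values are examples.
WATER_DENSITY = 0.9982      # g/cm^3 near room temperature (assumed liquid)
CBN_THEORETICAL = 3.486     # g/cm^3

def archimedes_density(mass_air_g, mass_immersed_g, liquid_density=WATER_DENSITY):
    # rho = m_air / (m_air - m_immersed) * rho_liquid
    return mass_air_g / (mass_air_g - mass_immersed_g) * liquid_density

rho = archimedes_density(mass_air_g=1.250, mass_immersed_g=0.865)
print(f"density = {rho:.3f} g/cm^3, relative density = {rho / CBN_THEORETICAL:.1%}")
```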
Crystalline phase and microstructure of PcBN
Fig. 2 shows the X-ray diffraction patterns of the starting cBN powder and the sintered PcBN ceramics. Although the sintering conditions lie within the cBN stable region, the sintered PcBN ceramics nevertheless contain a certain amount of hBN reversely transformed from cBN, especially the ceramics sintered from the 0–2 μm cBN powder. We attribute the reverse transformation to the voids among the cBN grains during sintering. Because cBN is a superhard material, the pressure around the voids can be much lower than that in the areas where the cBN crystal faces are in contact. As the temperature increased to 1500 °C, the pressure and temperature around the voids could fall within the hBN stable region. This situation is more severe when smaller cBN powder is used as the starting material. Therefore, the ceramics sintered from the 0–2 μm cBN powder exhibit a significant hBN content, as indicated by the X-ray diffraction peak. The SEM images of the sintered PcBN ceramics are shown in Fig. 3. The cBN grains keep their regular crystal shape, and strong bonding among the grains appears to have been achieved. Some obvious pores can be observed among the cBN grains, especially in the ceramics sintered from the large cBN grains. Tiny particles, believed to be hBN, can also be observed around the pores. When smaller cBN grains are used as the starting material, the hBN particles around the pores become more obvious; as a result, the crystal shape becomes obscure in the ceramics sintered from the 0–2 μm cBN powder.
Density and bending strength of PcBN
The relative density and bending strength of the PcBN ceramics are shown in Fig. 4. Because the low-density hBN phase (2.29 g/cm³) and pores are present, the sintered ceramics have lower density than the cBN crystal. The density increases with the size of the starting cBN grains because the hBN content decreases, demonstrating that the reverse transformation occurs more readily when small cBN grains are used as the starting material. This is consistent with the results of the X-ray patterns and SEM images. The high bending strength of the sintered PcBN ceramics shows that strong bonding among the cBN grains was achieved. The bending strength, however, shows the opposite tendency to the density: the ceramics sintered from the smaller cBN grains exhibit higher bending strength. We also attribute this result to the presence of hBN, because it increases the bonding strength among the crystal grains. The larger specific surface area of the smaller cBN grains is another reason for this trend, because it increases the chance of bonding between crystal faces. Fig. 5 shows the thermal expansion coefficient of the PcBN ceramics. This property is generally consistent with that of silicon and shows a decreasing tendency as the size of the starting cBN grains increases. This tendency demonstrates that the degree of purity, i.e., the hBN content, dominates this thermal property of the PcBN ceramics. The ceramics sintered from the 2–4 μm cBN powder have a thermal expansion coefficient that closely matches that of silicon. The thermal conductivity of the PcBN ceramics is shown in Fig. 6. In general, the ceramics have moderate thermal conductivity, although the starting material, crystalline cBN, possesses an extremely high theoretical thermal conductivity. The thermal conductivities of the ceramics reach their maximum near 200 °C; this temperature dependence is consistent with the reports of Slack [5] and Corrigan [6]. The thermal conductivity also depends on the starting cBN grain size: the ceramics sintered from the 2–4 μm cBN powder have the maximum thermal conductivity of 56 W/mK at room temperature. When too small a grain size is used as the starting material, the considerable hBN content increases the phonon reflections at the crystal surfaces, obstructing the transmission of the heat flux and resulting in low thermal conductivity. An overly large cBN grain size is also unfavorable for the transmission of the heat flux, owing to the weak bonding of the crystalline grains reflected in the bending strength mentioned above.
Thermal properties of PcBN
Although crystalline cBN and the PcBN ceramics synthesized by direct conversion of hBN to cBN have excellent thermal conductivity, the PcBN ceramics prepared in this study have moderate thermal conductivity. According to the XRD, SEM, and bending strength measurements, the reverse transformation and the obvious pores in the ceramics are thought to be the main reasons for the drastic decline in thermal conductivity. Adding sintering aids, such as metallic aluminum or cobalt, is an expected way to enhance the densification of the PcBN ceramics. Raising the sintering temperature could be another means to increase the thermal conductivity, because it promotes strong bonding among the cBN grains and prevents the reverse transformation.
Conclusion
We conducted an experimental study on sintering polycrystalline cubic boron nitride thermal ceramics using only cubic boron nitride powder as the starting material. The high bending strength and the microscopic images show that the ceramics were sintered successfully. The sintered ceramics have a low thermal expansion coefficient that closely matches that of silicon. However, the reverse transformation of cubic boron nitride to the hexagonal form and the relatively low density lead to a moderate thermal conductivity. Adding metal sintering aids or increasing the sintering temperature is expected to improve the thermal conductivity. This initial work indicates that high-purity polycrystalline cubic boron nitride ceramics can be prepared in a short time and that polycrystalline cubic boron nitride is a potential thermal ceramic that can be applied to electronic packaging once the sintering method is improved.
Fig. 4. The relative density and bending strength of the PcBN ceramics.
Fig. 5. The thermal expansion coefficient of PcBN. The line represents the thermal expansion coefficient of silicon, shown for comparison. | 3,078.6 | 2018-06-13T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Performance versus Complexity Study of Neural Network Equalizers in Coherent Optical Systems
We present the results of the comparative analysis of the performance versus complexity for several types of artificial neural networks (NNs) used for nonlinear channel equalization in coherent optical communication systems. The comparison has been carried out using an experimental set-up with transmission dominated by the Kerr nonlinearity and component imperfections. For the first time, we investigate the application to the channel equalization of the convolution layer (CNN) in combination with a bidirectional long short-term memory (biLSTM) layer and the design combining CNN with a multi-layer perceptron. Their performance is compared with the one delivered by the previously proposed NN equalizer models: one biLSTM layer, three-dense-layer perceptron, and the echo state network. Importantly, all architectures have been initially optimized by a Bayesian optimizer. We present the derivation of the computational complexity associated with each NN type -- in terms of real multiplications per symbol so that these results can be applied to a large number of communication systems. We demonstrated that in the specific considered experimental system the convolutional layer coupled with the biLSTM (CNN+biLSTM) provides the highest Q-factor improvement compared to the reference linear chromatic dispersion compensation (2.9 dB improvement). We examine the trade-off between the computational complexity and performance of all equalizers and demonstrate that the CNN+biLSTM is the best option when the computational complexity is not constrained, while when we restrict the complexity to lower levels, the three-layer perceptron provides the best performance. Our complexity analysis for different NNs is generic and can be applied in a wide range of physical and engineering systems.
Abstract-We present the results of the comparative performance-versus-complexity analysis for the several types of artificial neural networks (NNs) used for nonlinear channel equalization in coherent optical communication systems. The comparison is carried out using an experimental set-up with the transmission dominated by the Kerr nonlinearity and component imperfections. For the first time, we investigate the application to the channel equalization of the convolution layer (CNN) in combination with a bidirectional long short-term memory (biLSTM) layer and the design combining CNN with a multilayer perceptron. Their performance is compared with the one delivered by the previously proposed NN-based equalizers: one biLSTM layer, three-dense-layer perceptron, and the echo state network. Importantly, all architectures have been initially optimized by a Bayesian optimizer. First, we present the general expressions for the computational complexity associated with each NN type; these are given in terms of real multiplications per symbol. We demonstrate that in the experimental system considered, the convolutional layer coupled with the biLSTM (CNN+biLSTM) provides the largest Q-factor improvement compared to the reference linear chromatic dispersion compensation (2.9 dB improvement). Then, we examine the trade-off between the computational complexity and performance of all equalizers and demonstrate that the CNN+biLSTM is the best option when the computational complexity is not constrained, while when we restrict the complexity to some lower levels, the three-layer perceptron provides the best performance.
I. INTRODUCTION
Amongst the variety of different nonlinearity compensation methods, machine learning (ML) based techniques are gaining momentum as a promising and flexible tool capable of efficiently unrolling fiber- and component-induced impairments. In the past several years, the research on artificial neural networks (NN) for optical channel equalization has already led to the development of a noticeable number of novel digital signal processing (DSP) methods that can provide performance better than that rendered by "conventional" DSP approaches [1]-[10]. The fast development of NN-related research and the growing ML developer community motivate testing different novel NN architectures to mitigate fiber propagation impairments. In terms of the experimental verification of NN-based equalizers, several works dealt with intensity-modulation direct-detection (IM/DD) links. It was demonstrated that the application of NNs with different internal structures, such as the multi-layer perceptron (MLP) [11], [12] (i.e. a simple densely connected feedforward NN architecture), convolutional NNs (CNN) [13], [14], echo state networks (ESN) [15], and long short-term memory (LSTM) NNs [16], is efficient in improving optical system-level performance. However, similar NN architectures in coherent optical systems have been tested mainly numerically [17]-[20], or in short-haul experiments [21]-[24]. It is worth noticing that some very recent works evaluated the functioning of NN-based equalizers in metro/long-haul trials [4], [5], [8]-[10].
The variety of existing and emerging channel equalizers makes a comparative analysis of the different solutions a timely challenge. NN-based channel equalization involves two important aspects: i) the improvement of performance through the reduction of the bit-error rate (BER), and ii) the complexity of the algorithms, which is a fundamental issue for practical implementation. Clearly, the comparison can be carried out only for specific systems: some approaches can be more suitable for certain transmission links, while others are favorable for different systems.
To gain a thorough understanding of how each of the aforementioned NN architectures performs, we need to pick a benchmark system for the comparison. In this work, we perform such a comparison using, as a benchmark, a single-channel transmission of a dual-polarization (DP) 16-QAM signal at a 34.4 GBd rate transmitted over 9×50 km TrueWave Classic (TWC) fiber spans at a power of 2 dBm. This choice of fiber and power level ensures that the system is in the strongly nonlinear regime, as we intend to study how the NNs unroll the Kerr nonlinearity effects. In our work, we analyze both synthetically simulated and experimental data. We first analyze the performance of several previously studied NN models: the MLP, the bidirectional LSTM (biLSTM), and the ESN. Next, we compare their performance with that rendered by new composite NN structures: i) the convolutional layer coupled with the MLP (CNN+MLP); ii) the combination of the convolutional layer with the biLSTM (CNN+biLSTM). These new designs are then tested in the same environment, allowing us to infer the performance characteristics pertinent to each of the 7 different NN topologies. We point out that the term "topology" in our research identifies the particular NN structure (architecture) with a specific fixed distribution of hyper-parameters. We emphasize that, in contrast to other similar investigations, we employ the Bayesian optimization procedure [9] for each NN type studied. This provides the optimal distribution of hyper-parameters pertaining to each NN type, such that we identify the best functioning regime (in terms of the performance delivered) for each architecture without complexity constraints. We show that the new CNN+biLSTM combination performs better than all other studied types. For each NN type considered hereafter, we also present the analytical expressions for the complexity, i.e. the number of multiplications attributed to each specific NN per recovered symbol. The highest complexity for the optimized NN equalizers corresponds to the new CNN+biLSTM composition, which also renders the best performance.
The completely new subject in the remit of this manuscript is what happens when we constrain the complexity of different NN types: no work has previously compared the performance rendered by different NNs at identical levels of computational complexity. Our findings demonstrate a nontrivial behavior: while at relatively high complexity levels the best performing model is the CNN+biLSTM, when we constrain the complexity to lower values, the simple MLP equalizer outperforms the advanced NN structures of the same complexity. Nevertheless, we note that the goal of this paper is not to reach a broad conclusion about the trade-off between complexity and performance for all possible transmission scenarios; rather, we aim to emphasize the importance of accounting for this issue at the equalizer design stage, and we provide the tools for correctly assessing the DSP-type complexity of the most popular neural layers.
The paper is organized as follows. In Sec. II we describe the details of the different NN equalizers analyzed in our study. Sec. III presents how to compute the computational complexity of all NN-based equalizers considered in this paper. Sec. IV describes the experimental setup and contains the results, including the comparison between the performance and computational complexity of different NN topologies; the performance is also compared with digital back-propagation with 3 steps per span. Our findings are summarized in the conclusion.
II. A ZOO OF NEURAL NETWORK-BASED EQUALIZERS
In this section, we revisit the most popular NN architectures that have been proposed and investigated so far in coherent optical channel post-equalization. We also introduce two new composite NN equalizer structures that can be deemed as the extension of previously proposed NN configurations.
To enhance the reproducibility of our methods, we provide a thorough summary of each NN architecture. The code of the algorithms, implemented in Python 3.6.9 with the TensorFlow (2.2.0) GPU backend and Keras (2.3.1), is provided on Zenodo [25].
Before addressing the details of the NN-based equalizers, let us describe how the datasets used in this work are created. When dealing with optical channel equalization, we require the NN to process not only the symbol of interest but also the neighboring ones, insofar as both the chromatic dispersion and the drive amplifier add memory to the channel. The latter means that the NN performs better if it is given information about the correlations between the symbols in the sequence. Therefore, the input of the real-valued NN models used in this paper (in the regression task) is the time-domain vector delayed by k symbols (the memory vector) containing the real and imaginary parts of both polarizations for the symbol at time-step k and its 2N neighboring (past and future) symbols. In the NN signal processing, due to computational memory constraints, the input layer receives just a portion of the total data, called the mini-batch, since the finite computational resources limit the length of the sequences with which we can operate. The NN input mini-batch shape can be defined by three dimensions: (B, M, 4), where B is the mini-batch size, M is the memory size defined through the number of neighbors N as M = 2N + 1, and 4 is the number of features for each symbol, referring to the real and imaginary parts of the two polarization components. The output target is to recover the real and imaginary parts of the k-th symbol of one of the polarizations, so the shape of the NN output batch can be expressed as (B, 2).
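A minimal sketch of how such an input tensor can be assembled is given below; the array names, the toy random data, and the value of N are illustrative assumptions.

```python
# Sketch of how the (B, M, 4) input tensor described above can be assembled:
# for each symbol k we stack its N past and N future neighbours, with Re/Im of
# both polarisations as the 4 features, and target the Re/Im of symbol k.
import numpy as np

def build_dataset(rx_h, rx_v, tx_h, N):
    """rx_h, rx_v: complex received symbols (both polarisations);
    tx_h: complex transmitted symbols of the recovered polarisation."""
    feats = np.stack([rx_h.real, rx_h.imag, rx_v.real, rx_v.imag], axis=-1)
    X, y = [], []
    for k in range(N, len(rx_h) - N):
        X.append(feats[k - N:k + N + 1])          # shape (M, 4), M = 2N + 1
        y.append([tx_h[k].real, tx_h[k].imag])    # shape (2,)
    return np.asarray(X), np.asarray(y)

# Toy complex data standing in for measured/simulated symbol sequences
rx_h = np.random.randn(1000) + 1j * np.random.randn(1000)
rx_v = np.random.randn(1000) + 1j * np.random.randn(1000)
tx_h = np.random.randn(1000) + 1j * np.random.randn(1000)
X, y = build_dataset(rx_h, rx_v, tx_h, N=10)
print(X.shape, y.shape)   # (980, 21, 4) (980, 2)
```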
In general, for all the NNs considered in this paper, we use the mean square error (MSE) loss estimator, since this choice corresponds to the conventional loss function frequently used for regression tasks [26]. Other types of loss functions, such as the mean absolute error, the Huber loss, and the Log-Cosh loss, were also considered for our NNs, but they did not show any noticeable benefits compared to the MSE. Moreover, it is important to highlight that we decided to present only the regression task in this paper because (for our test case scenario) the results achieved by the regression and classification approaches were close, but fewer epochs were needed in the case of regression to reach the lowest BER.
The classical Adam algorithm was chosen for the stochastic optimization step with the default learning rate equal to 0.001 [27]. All NNs were trained for at most 1000 epochs (if not stopped earlier because of negligible changes in the loss function value over 150 epochs) and, after every training epoch, we calculated the BER obtained using the independently generated testing dataset.
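The training configuration described above can be expressed in Keras roughly as follows; the placeholder model, the random data, and the mini-batch size are illustrative stand-ins rather than the architectures and datasets used in the paper.

```python
# Sketch of the training loop described above: Adam with the default learning
# rate of 0.001, MSE loss, up to 1000 epochs, and early stopping when the loss
# has not improved over 150 epochs. Model and data below are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Flatten(input_shape=(21, 4)),
                             tf.keras.layers.Dense(64, activation="relu"),
                             tf.keras.layers.Dense(2)])
X_train = np.random.randn(1024, 21, 4)
y_train = np.random.randn(1024, 2)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
early_stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=150,
                                              restore_best_weights=True)
model.fit(X_train, y_train, epochs=1000, batch_size=256,
          callbacks=[early_stop], shuffle=True)
```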
The dataset was composed of 2^20 symbols for training and 2^18 independently generated symbols for evaluation. To eliminate any possible data periodicity and overestimation [28] in our experiment, a pseudo-random bit sequence (PRBS) of order 32 was used to generate these datasets with different random seeds for each of them. The periodicity of the data is, therefore, 2^10 times larger than our training dataset size, since the modulation format used in our study is 16-QAM. For the simulation, the Mersenne twister generator [29], which has a periodicity of 2^19937 − 1, was used with different random seeds. Additionally, we highlight that the NN training data were shuffled using the numpy.random.shuffle function in Python before feeding the dataset into the NN: such shuffling helps to mitigate overfitting. The experimental setups and scenarios in which the datasets were acquired are described in the following sections.
The following subsections will delve deeper into the design of the NN models used within this paper.
A. A multi-layer perceptron
The first and perhaps simplest and best-studied NN-based equalizer that we consider is the MLP, proposed for short-haul coherent system equalization in [22] and for long-haul systems in [30]. The MLP is a deep feed-forward, densely connected NN structure that handles the I/Q components for each polarization jointly, providing two outputs for each processed symbol: its real and imaginary parts. Due to the MLP's ability to process joint I/Q components, the equalizer can learn the nonlinear phase impairments in addition to the amplitude-related nonlinearities. When using the MLP, the channel- and device-induced memory effects are taken into account by incorporating the time-delayed versions of the input signal, as was done in [30].
In a simulation environment, the MLP equalizer showed performance metrics similar to those delivered by the "traditional" digital back-propagation (DBP) with 2 steps per span and 2 samples per symbol over 1000 km of standard single-mode fiber [30]. In our current paper, we use the same 3-layer MLP as in [22], but here the number of neurons and the activation function are optimized for each layer. Importantly, the number of layers in the MLP, which is 3, has been found to be optimal for our particular transmission scenario by the Bayesian optimizer (BO). However, the MLP topology rendered by the BO can change substantially for different transmission scenarios.
The general matrix-form equation describing the output vector y given the input x passing through the 3-layer MLP is

y = W_out × φ(W_n3 × φ(W_n2 × φ(W_n1 × x + b_1) + b_2) + b_3),   (1)

where x is the input vector with n_i elements, y is the output vector with n_o elements, φ is a nonlinear activation function, W_n1 ∈ R^(n_i×n_1), W_n2 ∈ R^(n_1×n_2), W_n3 ∈ R^(n_2×n_3), and W_out ∈ R^(n_3×n_o) are the real weight matrices of the respective dimensions participating in each layer of the MLP, b_1,2,3 are the bias vectors, the indices n_1,2,3 stand for the number of neurons in each hidden layer, and × in (1) denotes the matrix-vector multiplication.
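A minimal Keras sketch of such a three-hidden-layer MLP equalizer is shown below; the neuron counts, activation functions, and memory size are illustrative placeholders, since in the paper these hyper-parameters are selected by the Bayesian optimizer.

```python
# Keras sketch of the three-hidden-layer MLP equalizer of (1). The neuron
# counts and activations are placeholders; the BO selects the actual values.
import tensorflow as tf

N = 10                     # number of neighbour symbols on each side
M = 2 * N + 1              # memory size

mlp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(M, 4)),
    tf.keras.layers.Flatten(),                       # time-delayed inputs
    tf.keras.layers.Dense(256, activation="relu"),   # n1 neurons
    tf.keras.layers.Dense(128, activation="relu"),   # n2 neurons
    tf.keras.layers.Dense(64, activation="relu"),    # n3 neurons
    tf.keras.layers.Dense(2, activation="linear"),   # Re/Im of recovered symbol
])
mlp.summary()
```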
B. Long short-term memory NNs
Compared to static (memoryless) systems where MLPs can be efficient, time sequences usually ought to be approached dynamically. Thus, recurrent NNs (RNNs) are often favored over other NN models for time sequences. However, training the recurrent connections can be a much more complicated task than MLP training, so that the network weights change almost imperceptibly; this aspect of RNNs often leads to the well-known vanishing gradient problem [26], [31]. The LSTM networks were built to solve it and to harness the memory-related effects. The LSTM comprises a gateway architecture that includes three gate types: the input (i_t) gates, the forget (f_t) gates, and the output (o_t) gates, as shown in Fig. 1. The compact form of the forward-pass LSTM cell equations for a time-step t (i.e. when we process the input feature sequence x_t of size n_i) is [32], [33]:

i_t = σ(W^i x_t + U^i h_{t−1}),
f_t = σ(W^f x_t + U^f h_{t−1}),
o_t = σ(W^o x_t + U^o h_{t−1}),
C̃_t = tanh(W^c x_t + U^c h_{t−1}),
C_t = f_t ⊙ C_{t−1} + i_t ⊙ C̃_t,
h_t = o_t ⊙ tanh(C_t),   (2)

where C_t is the cell state vector, h_t is the current hidden state vector of the cell with size n_h, and h_{t−1} is the previous hidden state vector. Note that n_i is equal to the number of features, and n_h is the number of hidden units chosen in the design process. The trainable parameters of the LSTM network are represented by the matrices W ∈ R^(n_h×n_i) and U ∈ R^(n_h×n_h) with the respective upper indices i, f, o, and c, referring to the particular LSTM gates mentioned previously; more details are given in Fig. 1. In (2), ⊙ is the element-wise product, and σ denotes the logistic sigmoid activation function. The aim of the input i-gate is to store content in the cell; the forget f-gate defines what information is to be erased; the output o-gate defines what information has to be passed to the next cell. In Fig. 1, which depicts the cell described by (2) for one time-step, the arrows represent the "flow" of the respective variables (the blue/green ones refer to the previous state and the current input), the rectangles identify the nonlinear functions, and the symbols in circles identify the respective mathematical operations.
What makes the usage of the LSTM a dynamical approach is that the time sequence is processed by an array of LSTM cells spanning the t-interval of interest, which in our case is the memory size. Beyond this "dynamical" property, the bidirectional LSTM (biLSTM) provides a more robust solution for time series: with the bidirectional structure, we learn the weights from the past visible values to the future hidden values, which corresponds to learning which features of the past values are useful for predicting a particular symbol value [34]. In the optical channel equalization context, the key advantage of the biLSTM is that it can efficiently handle intersymbol interference (ISI) between the preceding and the following symbols.
In the context of channel equalization, the LSTM was suggested in [35], [36] to reduce the transmission impairments in IM/DD systems with pulse-amplitude modulation (PAM). The LSTM-based approach was developed further in [17], where, for the first time, the biLSTM was used in an optical coherent system to compensate for fiber nonlinearities, albeit only in a simulation environment. It was also shown that the biLSTM outperformed a low-complexity DBP [17]. More recently, a bidirectional vanilla RNN was applied to the soft demapping of nonlinear ISI [10]. In our current study, we use a structure similar to that of [17], where the NN model is made up of a bidirectional LSTM layer followed by a dense layer. Finally, we note that, in contrast to the previous studies, where a grid search was executed to guess the optimal number of hidden units and the memory size, this paper uses the BO to identify the best-performing biLSTM structure [9].
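A minimal sketch of this biLSTM-plus-dense structure in Keras is given below; the number of hidden units (n_h = 100) is a placeholder, since in our study this value is selected by the BO.

```python
# Minimal sketch of the biLSTM equalizer structure of [17]: one bidirectional
# LSTM layer followed by a dense output layer. n_h is a placeholder value.
import tensorflow as tf

M, N_FEATURES, N_OUT = 41, 4, 2
n_h = 100                                 # hypothetical number of hidden units

bilstm = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(M, N_FEATURES)),       # rank-3 input [B, n_s, n_i]
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(n_h, return_sequences=False)),
    tf.keras.layers.Dense(N_OUT, activation="linear"),  # Re and Im of the recovered symbol
])
bilstm.compile(optimizer="adam", loss="mse")
```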
C. Echo state networks
The ESN is a promising type of RNN due to its relaxed training complexity and its ability to preserve the temporal features of different signals over time [21], [37]-[40]. The ESNs belong to the reservoir computing (RC) category because only the output weights of an ESN are trainable. In Fig. 2, the grey-colored area is the "main" reservoir structure containing the randomly connected "neurons" that capture the temporal features of the signal, while the output weights are trained to define which states are most relevant to describe the desired output. In this paper, we use the leaky-ESN concept [41] with no output feedback connections. Our motivation for choosing the leaky-ESN architecture is the experimental observation that the leaky-ESN configuration outperforms the traditional ESN in feature extraction for noisy time series [38], which is, evidently, an important property in optical transmission-related tasks. The leaky-ESN is formalized for a time step t as follows:

s̃_t = φ(W_r s_{t-1} + W_in x_t),    (3)
s_t = (1 - μ) s_{t-1} + μ s̃_t,    (4)
y_t = W_o s_t,    (5)

where s_t ∈ R^{N_r} is the system state at time step t, φ is the reservoir nonlinear activation function, N_r is the number of hidden neuron units in the dynamic layer, which represents the dimensionality of the reservoir; x_t ∈ R^{n_i} and y_t ∈ R^{n_o} are the input and output vectors of the ESN, respectively; W_r ∈ R^{N_r×N_r} is the reservoir weight matrix that defines which neuron units are connected (including the self-connections); this matrix is also characterized by a sparsity parameter s_p defining the ratio of connected neurons to the total number of possible connections. Finally, W_in ∈ R^{N_r×n_i} is the input weight matrix, μ is the leaking rate parameter, and W_o ∈ R^{n_o×N_r} is the output weight matrix, which is the only trainable one, obtained using a regression technique. This training phase does not affect the dynamics of the system, which makes it possible to operate the same reservoir for different tasks [37]. A schematic representation of a leaky-ESN, including the sequential input, dynamic, static, and output layers, is depicted in Fig. 2. The signal passing through the dynamic layer in Fig. 2 is represented by (3), and this layer is the core of the reservoir structure. It is followed by a static layer, represented by (4), which incorporates the leaky-ESN behavior by accumulating (integrating) its inputs while exponentially losing (leaking) the accumulated excitation over time. Finally, the output layer, described by (5), defines which units are relevant to the current task (equalization, in our case).
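The following NumPy sketch illustrates the leaky-ESN forward pass of Eqs. (3)-(5) as reconstructed above; the reservoir size, sparsity, input scaling, and the spectral-radius rescaling step are illustrative assumptions rather than the exact reservoir initialization used in this paper.

```python
# Minimal NumPy sketch of the leaky-ESN forward pass, Eqs. (3)-(5).
import numpy as np

rng = np.random.default_rng(0)
n_i, n_o, N_r = 4, 2, 500                 # features, outputs, reservoir size (placeholder)
s_p, rho, mu = 0.1, 0.667, 0.57           # sparsity, spectral radius, leaking rate

W_in = rng.uniform(-0.5, 0.5, size=(N_r, n_i))
W_r = rng.uniform(-0.5, 0.5, size=(N_r, N_r)) * (rng.random((N_r, N_r)) < s_p)
W_r *= rho / max(abs(np.linalg.eigvals(W_r)))           # rescale to the target spectral radius
W_o = np.zeros((n_o, N_r))                               # the only trainable matrix (regression)

def esn_step(s_prev, x_t):
    s_tilde = np.tanh(W_r @ s_prev + W_in @ x_t)         # dynamic layer, Eq. (3)
    s_t = (1.0 - mu) * s_prev + mu * s_tilde             # leaky integration, Eq. (4)
    return s_t, W_o @ s_t                                # output layer, Eq. (5)

s = np.zeros(N_r)
for x_t in rng.normal(size=(41, n_i)):                   # toy input sequence of n_s = 41 steps
    s, y_t = esn_step(s, x_t)
```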
Concerning previous ESN applications to optical channel equalization, in [39] the ESN was implemented in the optical domain for distortion mitigation: a 2 dB gain in Q²-factor was achieved for a 64-QAM 30 GBaud signal transmitted through 100 km of fiber at 10 dBm input power. Alternatively, as in our paper, the reservoir can be applied in the digital domain. In [21], the leaky-ESN was successfully applied after the analog-to-digital converter to enable an 80 km transmission reaching below the KP4-FEC limit [42] for a 32 GBd on-off keying signal.
D. Convolutional neural networks
Due to their feature extraction propensity [26], the CNNs have become one of the most commonly used NN structures in such areas as 2D image classification and 3D video applications [43], [44]. Convolution layers have also been found efficient in the analysis of temporal 1D sequences with several applications to time series sensors, audio signals, and natural language processing [45], [46]. For longer sequences, the CNN layer can be used as a pre-processing step due to its ability to reform the original sequence and extract its high-level features used for further processing cycles [24].
Here, we investigate, for the first time, two new models for the equalization of signal distortions in metro systems, combining a 1D convolutional layer that performs effective signal pre-processing with the two previously described NN-based equalizers: the MLP of Sec. II-A and the biLSTM of Sec. II-B. These new structures, CNN+biLSTM and CNN+MLP, are addressed in our study because convolutional layers have been shown to be efficient in image denoising [47] and array signal processing [48], where CNNs can reduce the background and quantization noise effects on coded signals. Therefore, we can naturally surmise that in our model the first convolutional layer can enhance the received signal by removing part of the embedded noise before it enters the next neural layer. Moreover, by adding the CNN layer, we generally end up with an NN model with fewer trainable parameters without losing performance, which can be yet another advantage. To that end, in the current study, we analyze how the combined NN architectures work for the optical channel equalization task. A simplified CNN+MLP combination was already successfully used in [24] at the transceiver of a high-baud-rate 80 km system.
The convolutional layer is primarily characterized by three key parameters: the number of filters, the kernel size, and the layer activation function. The extracting functionality is achieved by applying n_f filters, sliding over the raw input sequence, and generating a number of output maps equal to n_f, with a fixed kernel size n_k. The convolutional layer acts as a squashing function, in the sense that the input is mapped to a lower-dimensional representation in which only the main (or desirable) characteristics are retained. Since CNNs were mainly developed in the context of image recognition and spatial feature extraction, other parameters, such as padding, dilation, and stride, are also used in the design of convolutional layers. Considering that the input shape is (B, M, 4), the output shape after the CNN layer with all those parameters is (B, L_out, n_f), where L_out is a function of the CNN hyper-parameters defined as:

L_out = ⌊(n_s + 2·padding - dilation·(n_k - 1) - 1)/stride⌋ + 1.    (6)

However, in this paper, we do not focus on the investigation of those additional parameters. Consequently, we fix the default convolutional layer configuration with padding equal to 0 (which corresponds to "valid" in Keras), dilation equal to 1, and stride equal to 1. Then, the input-output mapping of the convolutional layer for this configuration can be described as follows:

y^f_i = φ( Σ_{n=1}^{n_i} Σ_{j=1}^{n_k} k^{f,n}_j x^n_{i+j-1} + b^f ),    (7)

where y^f_i is the output feature map of the i-th input element produced by filter f in the CNN layer, x is the input raw data vector, k^{f,n}_j is the j-th element of the trained convolution kernel of filter f, and b^f is the bias of filter f. Further, n is the feature index of the kernel and input data, ranging from 1 to n_i, corresponding to the number of features in the data; φ, as before, denotes the nonlinear activation function used in the convolutional layer. Note that (7) holds for all i ∈ [1, ..., L_out]. Moreover, since a pooling layer captures only the most important features in the data and ignores the less important ones [49], the pooling discretization process is not used in our equalizers, to avoid downsampling of the feature sequences.
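A small helper reproducing Eq. (6) as reconstructed above may clarify the bookkeeping; the example values in the assertion are purely illustrative.

```python
# Helper reproducing Eq. (6): output length of a 1-D convolution for given
# padding, dilation and stride (Keras "valid" corresponds to padding = 0).
def conv1d_output_length(n_s, n_k, padding=0, dilation=1, stride=1):
    return (n_s + 2 * padding - dilation * (n_k - 1) - 1) // stride + 1

assert conv1d_output_length(n_s=41, n_k=10) == 32   # i.e. n_s - n_k + 1 for the default setting
```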
The collection of output feature maps, y^f, emerging from the convolutional layer is then fed into one of the structures described above: either into two dense layers (an MLP whose number of layers is, again, dictated by the BO), forming the CNN+MLP structure, or into a single biLSTM layer, resulting in the CNN+biLSTM. We recall that the convolutional layer is placed before the subsequent layers to extract mid-level, locally invariant features from the input series.
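The two composite structures can be sketched in Keras as follows; the filter count, kernel size, and layer widths are hypothetical placeholders standing in for the BO-selected values.

```python
# Sketch of the two composite equalizers; all layer sizes are illustrative.
import tensorflow as tf

M, N_FEATURES, N_OUT = 41, 4, 2
n_f, n_k = 64, 10                          # hypothetical number of filters and kernel size

def cnn_front_end():
    # 1-D convolutional pre-processing layer with the default (valid) configuration.
    return [
        tf.keras.layers.Input(shape=(M, N_FEATURES)),
        tf.keras.layers.Conv1D(filters=n_f, kernel_size=n_k, padding="valid",
                               strides=1, dilation_rate=1, activation="linear"),
    ]

cnn_mlp = tf.keras.Sequential(cnn_front_end() + [
    tf.keras.layers.Flatten(),                        # flattening before the dense layers
    tf.keras.layers.Dense(300, activation="tanh"),    # hypothetical n_1
    tf.keras.layers.Dense(100, activation="relu"),    # hypothetical n_2
    tf.keras.layers.Dense(N_OUT, activation="linear"),
])

cnn_bilstm = tf.keras.Sequential(cnn_front_end() + [
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100)),  # hypothetical n_h
    tf.keras.layers.Dense(N_OUT, activation="linear"),
])
```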
We note that even CNNs alone are extremely powerful deep learning instruments, with a complicated multi-parametric structure combining filters, kernel size, padding, stride, dilation, and pooling. However, after an exhaustive experimental exploration, we observed that deep CNNs did not reach the performance level achieved by the CNN+MLP or CNN+biLSTM in our test case. Therefore, in this work, we utilize the convolutional layers only as a pre-processing, feature-extracting step and do not include deep CNN architectures in our study.

III. COMPUTATIONAL COMPLEXITY

In this section, the computational complexity, in terms of real multiplications per recovered output symbol, is examined for all introduced NN architectures. We note that the number of additions is typically neglected in such estimates for ordinary DSP techniques [50]. The major reason is that typical algorithms for multiplying two integers with n digits have a computational complexity of O(n²), whereas adding the same two numbers has a complexity of Θ(n) [51]. As a result, when dealing with float values with 16 decimal digits, multiplication is by far the most time-consuming part of the implementation procedure.
Here we point out that the training complexity is not considered, since we evaluate the real-time computational complexity (the evaluation phase), which is the most critical part, while the training of an NN equalizer is carried out offline (the calibration phase). The computational complexity of nonlinear activation functions is also not considered in our framework, because their operation is typically based on approximation rather than on direct multiplicative calculation. In the classical lookup-table (LUT) based approximation method, the mapping can be implemented digitally with far fewer computations required to apply such activation functions [52], [53].
Early works presented results regarding the complexity of the MLP [30], RNN [54], and LSTM [36] layers. However, to enhance the understanding of this subject and to clarify it, in our work we directly relate those complexities to the parameters of the most widely used machine learning platforms (Keras, TensorFlow, and PyTorch), without loss of generality, and specifically address the composite NN types described before. Let the mini-batch size be B, let n_s be the input time sequence size, with n_s = M, where M is the memory size (see also Sec. II), and let n_i be the number of features, which in our case is equal to 4. Since we recover the real and imaginary parts of each symbol, the number of outputs per symbol, n_o, is equal to 2. For the ESN, biLSTM, and CNN layers, which require inputs in the form of rank-3 tensors, the input of the NN equalizer can be parametrized as [B, n_s, n_i], the three numbers defining the dimensions of the input tensor, as mentioned above. The parametrization for the MLP equalizer is simpler, with [B, n_s · n_i] defining the dimensions of the 2D input tensor. We use flattening layers whenever it is necessary to reduce the dimensionality of the data.
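The following toy snippet only illustrates this input parametrization (values of B, n_s, and n_i as defined above); the data themselves are placeholders.

```python
# Toy illustration of the input parametrization: a window of n_s = 41 symbols
# with n_i = 4 features (I/Q of both polarizations) is fed as a rank-3 tensor
# [B, n_s, n_i] to the ESN/biLSTM/CNN equalizers and flattened to [B, n_s * n_i]
# for the MLP equalizer.
import numpy as np

B, n_s, n_i = 8, 41, 4                     # mini-batch, memory, features
x_recurrent = np.zeros((B, n_s, n_i))      # input for the ESN, biLSTM and CNN layers
x_mlp = x_recurrent.reshape(B, n_s * n_i)  # flattened input for the MLP
```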
In this case, considering three dense layers with n_1, n_2, and n_3 neurons, respectively, the complexity C_MLP of the resulting NN is given by:

C_MLP = (n_s n_i n_1)_{a_1} + (n_1 n_2 + n_2 n_3)_{b_1} + (n_3 n_o)_{c_1},    (8)

where a_1 is the contribution of the input layer, b_1 is the contribution of the hidden layers, and c_1 refers to the contribution of the output layer. The subindex "1" in a, b, and c explicitly associates these parameters with the MLP architecture.

The next part presents the computational complexity for an NN-based equalizer composed of a biLSTM layer. Assuming that the biLSTM layer has n_h hidden units, the complexity of such an NN is given by:

C_biLSTM = 2 n_s (4 n_i n_h + 4 n_h² + 3 n_h + n_h n_o)_{a_2},    (9)

where a_2 is the contribution of the only layer, and the subindex "2" attributes the number a to the biLSTM. This expression is easier to understand by analyzing the mathematical description of the LSTM cell, see (2) and Fig. 1. There are several contributions to the cell's complexity. In the first layer, we have 4 n_i n_h multiplications associated with the input vector x_t. Then, 4 n_h² multiplications are due to the operations with the previous cell output h_{t-1}. Afterward, 3 n_h multiplications due to the internal elementwise products ⊙ and n_o n_h multiplications involving the current cell output (h_t) going into the output layer, respectively, are added. Lastly, we multiply the number of operations by the number of time steps in the layer, n_s. Since the topology is bidirectional, the total contribution is also multiplied by 2.
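The two complexity counts, as reconstructed in Eqs. (8) and (9) above, can be expressed as simple Python helpers; the layer sizes in the example calls are illustrative, not the Table I topologies.

```python
# Real multiplications per recovered symbol for the MLP and biLSTM equalizers,
# following the reconstructed Eqs. (8)-(9).
def c_mlp(n_s, n_i, n_o, n1, n2, n3):
    return n_s * n_i * n1 + n1 * n2 + n2 * n3 + n3 * n_o

def c_bilstm(n_s, n_i, n_o, n_h):
    return 2 * n_s * (4 * n_i * n_h + 4 * n_h**2 + 3 * n_h + n_h * n_o)

# Example with the fixed framing of this paper (n_s = 41, n_i = 4, n_o = 2)
# and hypothetical layer sizes:
print(c_mlp(41, 4, 2, 500, 100, 500))
print(c_bilstm(41, 4, 2, 100))
```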
Following Sec. II, now we address the computational complexity associated with the ESN equalizer. Before presenting the respective expression, it is important to emphasize two aspects. First, the implementation of the ESN in the digital domain does not benefit from the fact that only the output layer weights are trainable, since, as mentioned previously, the training is not a key bottleneck as it is carried out during the offline calibration process. Second, the complexity of the ESN can potentially drop drastically if we implement it in the optical domain as an ESN dynamic layer, as it was noted in [39]. However, in this paper, we analyze the ESN implementation in the digital domain, similarly to [21].
Considering the leaky-ESN definition given by (3)-(5), the computational complexity of this equalizer can be expressed as:

C_ESN = n_s [ (n_i N_r + N_r² s_p)_{a_3} + (2 N_r)_{b_3} + (N_r n_o)_{c_3} ].    (10)

In the expression above, a_3 represents the contributions of (3), where the input layer adds n_i N_r multiplications and the dynamic layer adds N_r² s_p; b_3 refers to the contributions of (4), describing the static layer; and c_3 represents the multiplications in the output layer, (5). This overall process is repeated for all n_s time steps. Note that in the case of a potential optical implementation of the ESN, a_3 and b_3 would be equal to zero, and only the final weights would be learned in the digital domain.
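A corresponding helper for the leaky-ESN complexity of Eq. (10), as reconstructed above (in particular, the 2N_r term attributed to the static layer is our reading of b_3), is sketched below with illustrative arguments.

```python
# Real multiplications per recovered symbol for the leaky-ESN equalizer, Eq. (10):
# dynamic (a_3), static (b_3) and output (c_3) layer contributions over n_s steps.
def c_esn(n_s, n_i, n_o, N_r, s_p):
    return n_s * (n_i * N_r + s_p * N_r**2 + 2 * N_r + N_r * n_o)

print(c_esn(n_s=41, n_i=4, n_o=2, N_r=500, s_p=0.1))
```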
Finally, let us address the complexity of the composite structures: CNN+MLP and CNN+biLSTM. The computational complexity of a 1-D convolutional layer is described as:

C_CNN = n_i n_f n_k [ ⌊(n_s + 2·padding - dilation·(n_k - 1) - 1)/stride⌋ + 1 ].    (11)

Since, as assumed for (7), the convolutional layer is defined by the number of filters n_f and the kernel size n_k, and the number of time steps satisfies n_s ≥ n_k, according to (6) the output size for each filter of the CNN is (n_s - n_k + 1). Equations (12) and (13) are the expressions for the complexity of a convolutional layer combined with two dense layers or with one biLSTM layer, respectively:

C_CNN+MLP = (n_i n_f n_k (n_s - n_k + 1))_{a_4} + ((n_s - n_k + 1) n_f)_{b_4} n_1 + (n_1 n_2 + n_2 n_o)_{c_4},    (12)

C_CNN+biLSTM = (n_i n_f n_k (n_s - n_k + 1))_{a_5} + 2 (n_s - n_k + 1)_{b_5} (4 n_f n_h + 4 n_h² + 3 n_h + n_h n_o)_{c_5}.    (13)

In this scenario, the two-layer MLP has n_1 and n_2 neurons in its respective layers, and the biLSTM layer has n_h hidden units. In the equations above, a_4 and a_5 are the contributions of the convolutional layer; b_4 is the correction factor for the transition between layers, since a flattening layer is placed before the dense layers; b_5 is the number of time steps for the following biLSTM layer; c_4 is the contribution of the two-layer MLP; and c_5 is the contribution of the biLSTM layer, where, in this case, the number of filters, n_f, is equal to the number of features entering the LSTM cell.

Finally, we express the computational complexity of the DBP-based receiver used in this paper for benchmarking purposes. We considered a basic implementation of the DBP algorithm [55], where each propagation step comprises a linear part for dispersion compensation followed by a nonlinear phase cancellation stage. The linear part is realized with a zero-forcing equalizer by transforming the signal into the frequency domain and multiplying it by the inverse dispersion transfer function of the propagation section. The complexity of the DBP in terms of RMpS is [30], [50]:

C_DBP = 4 N_span N_step n [ N_FFT (log2(N_FFT) + 1) / (N_FFT - N_D + 1) + 1 ],    (14)

where N_span is the number of spans, N_step is the number of steps per span, N_FFT is the FFT size, n is the oversampling ratio, and N_D = τ_D/T, where τ_D corresponds to the duration of the dispersive channel impulse response and T = 1/R_s is the symbol duration. We considered N_FFT = 256 and τ_D defined as:

τ_D = D L_span c R_s / f_c²,    (15)

where f_c is the optical carrier reference frequency, which in our case was 193.41 THz, c is the speed of light, L_span is the span length, and D is the fiber dispersion parameter.
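For completeness, the composite and DBP complexity counts, as reconstructed in Eqs. (12)-(14) above, can be written as the following helpers; the DBP expression follows a standard overlap-save frequency-domain-equalizer cost model and, like the numeric arguments in the example calls, should be read as an assumption rather than a verbatim reproduction of the original formula.

```python
# Complexity helpers for the composite equalizers and the DBP benchmark,
# following the reconstructed Eqs. (12)-(14). All example arguments are placeholders.
import numpy as np

def c_cnn_mlp(n_s, n_i, n_o, n_f, n_k, n1, n2):
    l_out = n_s - n_k + 1                           # valid padding, stride = dilation = 1
    return n_i * n_f * n_k * l_out + l_out * n_f * n1 + n1 * n2 + n2 * n_o

def c_cnn_bilstm(n_s, n_i, n_o, n_f, n_k, n_h):
    l_out = n_s - n_k + 1
    return n_i * n_f * n_k * l_out + 2 * l_out * (4 * n_f * n_h + 4 * n_h**2 + 3 * n_h + n_h * n_o)

def c_dbp(n_spans, n_steps, n_os, n_fft, n_d):
    # frequency-domain dispersion step (overlap-save FFT) plus the nonlinear phase rotation
    return 4 * n_spans * n_steps * n_os * (n_fft * (np.log2(n_fft) + 1) / (n_fft - n_d + 1) + 1)

print(c_cnn_mlp(41, 4, 2, 64, 10, 300, 100))
print(c_cnn_bilstm(41, 4, 2, 64, 10, 100))
print(c_dbp(n_spans=9, n_steps=3, n_os=2, n_fft=256, n_d=20))
```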
IV. PERFORMANCE VERSUS COMPUTATIONAL COMPLEXITY TRADE-OFF ANALYSIS
In this section, we initially describe the numerical and experimental scenarios used in this paper to analyze and compare the functioning of the equalizers detailed in Sec. II. After that, the two types of analysis for our set of NN structures are carried out. First, we present the maximum performance improvement (in terms of Q-factor gain compared to the non-equalized case) that each equalizer can deliver and compare this gain to the respective computational complexity corresponding to each optimized equalizer. Then, we decrease the computational complexity of six NN topologies from Sec. II and present the gain improvement provided by each NN-equalizer when all NNs have approximately the same computational complexity. This enables us to investigate the dependence of optical performance on the computational complexity and to identify which equalizer is better for a certain complexity level.
A. Experimental and numerical setups
The setup used in our experiment is depicted in Fig. 4. At the transmitter, a DP-16QAM 34.4 Gbaud symbol sequence was mapped out of data bits generated by a 2³²-1 PRBS. Then, a digital RRC filter with a roll-off of 0.1 was applied to limit the channel bandwidth to 37.5 GHz. The resulting filtered digital samples were resampled and uploaded to a digital-to-analog converter (DAC) operating at 88 GSamples/s. The outputs of the DAC were amplified by a four-channel electrical amplifier, which drove a dual-polarization in-phase/quadrature Mach-Zehnder modulator, modulating the continuous-wave carrier produced by an external cavity laser at λ = 1.55 μm. The resulting optical signal was transmitted over 9×50 km spans of TWC optical fiber with EDFA amplification. The optical amplifier noise figure was in the 4.5 to 5 dB range. The parameters of the TWC fiber at λ = 1.55 μm are: attenuation coefficient α = 0.23 dB/km, dispersion coefficient D = 2.8 ps/(nm·km), and effective nonlinear coefficient γ = 2.5 (W·km)⁻¹. At the RX side, the optical signal was converted into the electrical domain using an integrated coherent receiver. The resulting signal was sampled at 50 GSamples/s by a digital sampling oscilloscope and processed by an offline DSP based on the algorithms described in [56]. First, the bulk accumulated dispersion was compensated using a frequency-domain equalizer, followed by removal of the carrier frequency offset. A constant-amplitude zero-autocorrelation-based training sequence was then located in the received frame, and the equalizer transfer function was estimated from it. After the equalization, the two polarizations were demultiplexed, and the signal was corrected for clock frequency and phase. Carrier phase estimation was then carried out with the help of pilot symbols. Thereafter, the resulting soft symbols were used as input for the NN equalizers. Finally, the pre-FEC BER was evaluated from the signal at the NN output.
Regarding the simulation, we mimic the experimental transmission setup¹. The propagation of the optical signal along the fiber was simulated by solving the Manakov equations via the split-step Fourier method (with a resolution of 1 km per step). Every span was followed by an optical amplifier with noise figure NF = 4.5 dB, which fully compensates the fiber losses and adds amplified spontaneous emission noise. At the receiver, after full electronic chromatic dispersion compensation (CDC) by the frequency-domain equalizer and downsampling to the symbol rate, the received symbols were normalized to the transmitted ones. Finally, we added Gaussian noise to the signal, representing additional transceiver distortions that may be present in the experiment, such that the Q-factor level of the simulated data matched the experimental one. The system performance is evaluated in terms of the Q-factor, defined as: Q = 20 log10[√2 erfc⁻¹(2 BER)].
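The Q-factor conversion can be computed directly from the pre-FEC BER, for example:

```python
# Q-factor from the pre-FEC BER: Q = 20*log10(sqrt(2)*erfcinv(2*BER)), in dB.
import numpy as np
from scipy.special import erfcinv

def q_factor_db(ber):
    return 20 * np.log10(np.sqrt(2) * erfcinv(2 * ber))

print(q_factor_db(1e-2))   # approximately 7.3 dB
```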
B. Optimized NN-based architectures
In this section, we show the maximum achievable Q-factor for all equalizers without constraining the computational complexity. The Bayesian optimization (BO) tool, introduced in [9] for optical NN-based equalizers, was used to identify the optimum hyper-parameter values for each NN topology, providing the best Q-factor on the experimental test dataset. As was recently shown, the BO renders superior performance compared to other types of search algorithms for machine learning hyper-parameter tuning [57]. The same topologies (without further optimization) were also tested in the numerical analysis. The search space used in the BO procedure was defined via the allowed hyper-parameter intervals. In Table I, the line marked with the "Best Topology" label summarizes the hyper-parameters obtained by the BO. These values are used to count the real multiplications per recovered symbol (complexity) and to assess the equalizers' performance, expressed via the Q-factor gain, Fig. 5. Note that for all equalizers, the same optimal number of taps found by the BO was N = 20, which means that the memory in our equalizers is M = 41 and the mini-batch size, B, is equal to 4331. Moreover, for the ESN, the BO found the best value μ = 0.57 and an optimal spectral radius of 0.667. The activation functions found for every hidden NN layer are summarized as follows: 1D-CNN layer, a linear activation function, followed by ...

¹ We consider a DP-16QAM, single-channel signal at 34.4 Gbaud, pre-shaped by an RRC filter with 0.1 roll-off, transmitted with an upsampling rate of 8 samples per symbol (275.2 GSamples/s) over a system consisting of 9×50 km TWC-fiber spans.

TABLE I: Summary of the complexity attributed to each NN equalizer topology; the topology type is identified in the leftmost column. The complexity corresponding to each topology and NN type is expressed in terms of real multiplications per symbol recovered (RMpS), highlighted in red. The table also gives the hyper-parameter distributions found by the BO: the cell marked "Best topology" and six other topologies (Topologies 1 to 6, referring to increasing complexity thresholds) for the study of complexity versus performance. In addition, for all topologies, the values of n_s, n_i, and n_o were 41, 4, and 2, respectively, and these are not repeated in the table.

The results obtained using the numerical synthetic data are presented in Fig. 5a. First, the CNN+biLSTM turned out to be the best-performing in terms of the Q-factor gain: it achieved a 4.38 dB Q-factor improvement when compared to the conventional DSP algorithms [56], 0.05 dB when compared to the biLSTM equalizer, 0.47 dB when compared to the CNN+MLP equalizer, 1.4 dB when compared to the MLP equalizer, and 3.96 dB when compared to the ESN equalizer. Second, when adding the convolutional layer to the MLP and the biLSTM, we observed an improvement in the number of epochs needed to reach the highest performance: the single-layer biLSTM required 119 epochs, while the CNN+biLSTM reduced this number to 89 epochs; the MLP itself needed 214 epochs to reach the best performance level, whereas the CNN+MLP required just 100 epochs. Thus, we conclude that the addition of a convolutional layer indeed enhances the NN structure's performance and assists in the training stage.
Considering how the NN equalizers perform on the experimental data, Fig. 5b, we can make two major observations. First, similarly to the numerical results, the CNN+biLSTM is the best-performing among all the considered NN structures in terms of the Q-factor gain. The CNN+biLSTM demonstrated a 2.91 dB improvement when compared to the conventional DSP, 0.15 dB when compared to the biLSTM equalizer, 0.61 dB when compared to the CNN+MLP equalizer, 0.96 dB when compared to the MLP equalizer, and 2.33 dB when compared to the ESN equalizer. Additionally, as also observed in the numerical analysis, a lower number of training epochs was necessary to reach the best performance point when a convolutional layer was added: the CNN+biLSTM needed 169 epochs, while the pure biLSTM needed 232 epochs; the number of epochs required for the CNN+MLP to reach its best performance was 107, versus 753 epochs for the pure MLP. Second, compared to the simulation, the overall gain of all NN-based equalizers is slightly reduced. This can be explained by the existing "reality gap" between the numerical model and the true experimental transmission. In real transmission, extra nonlinearity and the non-ideal behavior of the transceivers (signal clipping by the ADC/DAC, harmonic and intermodulation distortions of the driver amplifier (DA), I/Q skew, etc.) add extra noise and complexity to the channel inversion process. We believe that the NNs can unroll the synthetic propagation generated by the split-step method more easily than they can revert the actual propagation in the experimental conditions. We also point out that, even though the gain numbers differ between the numerical and experimental data, the NN structures' performance followed the same pattern in both cases: the best performance was attributed to the CNN+biLSTM, followed by the biLSTM, the CNN+MLP, the MLP and, finally, the ESN.
Finally, of all equalizer types investigated in this study, the DBP with 3 StPS applied with two samples per symbol was still the least complex method. In all simulation and experimental test cases, however, the CNN+biLSTM outperformed the DBP, as shown in Fig. 5. Even after optimizing the DBP's nonlinear coefficient parameter (γ), the DBP approach was able to enhance the Q-factor by only 1.32 dB, whereas the CNN+biLSTM equalizer provided a 2.91 dB improvement in the experimental case. The performance boost of the CNN+biLSTM relative to the DBP in the experimental scenario demonstrates the power of NN equalizers in mitigating transmission impairments in a practical application.
C. Comparative analysis of different NN-based equalizers with the fixed computational complexity
The analysis given above does not address the question of which NN topology would provide the best gain if we restrict the NN structure's complexity to a certain level. To answer this question, we retested the equalizers while constraining the total number of real multiplications per recovered symbol (RMpS). We considered complexity values in the range from 10³ to 10⁸ RMpS. We note that NN structures with large RMpS (∼10⁸) can be prohibitively complex for efficient hardware implementation. However, Ref. [58] demonstrated an efficient [...]. The considered topologies are given in Table I in the cells marked from "Topology 1" to "Topology 6". The parameters of those topologies were also tuned by the BO: for each case, we reduced the allowed BO search range to comply with each computational complexity constraint.
As seen in Fig. 6, the performance ranking of the equalizer types changes with the allowed computational complexity level. Several conclusions can be drawn from the results for the simulated (Fig. 6a) and experimental (Fig. 6b) data. First, in the experimental scenario, the complexities corresponding to the maximum gain coincide with the complexities identified by the BO procedure, which confirms the effectiveness of the BO in finding the "right" NN architecture. Second, in simulations, the maximum performance is reached at a lower complexity level than in the experimental results. As can be seen from the experimental figure, the CNN+biLSTM, CNN+MLP, and biLSTM equalizers need ≈10⁷ RMpS, while in the simulation ≈10⁶ RMpS was already enough to achieve the best performance. This observation further confirms that the NN can cope with the reversion of the simulated channel more easily than with the reversion of the experimentally obtained data. Third, when we increase the complexity above the level determined by the BO, the gain remains nearly constant; this is due to overfitting, and it is particularly pronounced in the MLP case. The function approximation capability of the MLP depends on i) its number of feed-forward hidden layers and ii) its number of hidden neurons; these two parameters define the NN's capacity [59]. Changing the MLP's capacity by adjusting the complexity level frequently leads to unpredictable changes in the NN's performance. Starting at the 10⁵ complexity level for both the simulation and experimental layouts, we can see that MLPs with oversized capacity suffer from overfitting, as the network memorizes the properties of the training set in such detail that it can no longer efficiently recover the information from the inference dataset [59]. The latter prevents the equalizer from providing further Q-factor improvement. Thus, we argue that the architectures found by the BO identify the NN equalizer capacity (structure) most appropriate for our problem, and a further increase in complexity cannot render any noticeable performance improvement.
Next, we note that for the high level of RMpS (Topologies 4, 5, and 6), the best-performing equalizer is the CNN+biLSTM. However, once we reduce the number of real multiplications from Topology 3 and below, the best-performing equalizer turns out to be the traditional MLP. This can be explained by the fact that advanced architectures, such as CNN and biLSTM, require more filters and a higher number of hidden units, respectively, to learn the complete dynamics of the data. Also, we observe that the CNN+biLSTM performs similarly to the CNN+MLP at low complexity levels (orange and yellow curves in Fig. 6), and similarly to the biLSTM (blue line) at high complexity. Consequently, we can infer how the addition of a convolutional layer works: while for high complexity the blue and orange curves are approximately the same, at a lower allowed complexity level the CNN+biLSTM performs better.
In addition, we used the hatched blue zone, in both the simulation and experimental cases, to mark the performance of the traditional DBP with 3 StPS and thereby highlight the NN equalizers whose computational complexity (CC) is similar to that of the DBP. It is then evident that reducing the number of neurons, filters, and hidden units is not the optimal technique for achieving low-complexity architectures, because the performance falls below the DBP level. As a possible alternative, pruning and quantization techniques [60], [61] can be used to minimize the CC of the NN equalizers without compromising their performance, making the NN equalizers appealing not only for their good performance but also for their reduced complexity.
Finally, the performance shown by the ESN does not meet expectations, exhibiting the lowest achievable gain. However, [62] contains results explaining the poor ESN performance in a nonlinear wireless scenario. It was shown that, in a channel with a high level of noise, the ESN-based equalizer performs poorly. Furthermore, it was demonstrated that increasing the number of ESN neurons (i.e., the complexity), and thus effectively increasing the hidden dimensionality of the representation, worsens the equalization performance. Moving to the nonlinear optical channel equalization, we observed both aforementioned effects: the performance was relatively poor due to the high level of noise, and it did not improve when we increased the complexity, as observable in the green curve of Fig. 6.
V. CONCLUSION
In this paper, we proposed and examined novel designs of combined NNs: (a) CNN+MLP and (b) CNN+biLSTM for the equalization of coherent optical fiber channels. We reviewed and compared several key existing NN-based methods with the proposed new algorithms using both the numerically simulated synthetic data and the experimental data from the benchmark transmission system. One of the most relevant outcomes of our work lies in the reported analytical expressions for the complexity (the number of real multiplications) associated with each NN type considered in the paper. Although a comparative analysis has been carried out for a specific benchmark system, we believe that our findings are relatively generic and can be applied to other scenarios.
Fiber Kerr nonlinearity was the predominant source of signal deterioration in the experimental benchmark system for comparing different channel equalizers. In order to produce clear nonlinear signal distortions, we used a low dispersion TWC fiber and processed the data at 2 dBm signal launch power. We emphasize that the trade-off conclusions for each NN equalizer's performance and complexity are unique to the system under consideration in this paper. However, we believe that our research paves the way for a methodology for estimating the computational cost of various NN-based channel equalizers.
We described in detail the design of the selected most promising NN-based equalizers. To derive the best-performing NN structures, we utilized the Bayesian optimization of each NN type that provides the optimized set of hyper-parameters for each particular NN-based equalizer. For these optimized structures, we found that the best performance of the test system was rendered by the new CNN+biLSTM architecture, though the performance of the pure biLSTM was only slightly lower. However, the optimized CNN+biLSTM design corresponded to the highest complexity among all cases studied.
The important part of the analysis was the comparison of the performance under the condition of the restricted complexity: the respective results are given in the last section. We found that at high complexity levels the best-performing NN among studied cases is the CNN+biLSTM. However, when reducing the complexity, we observed the transition: when the allowed complexity is relatively low, the best-performing structure turned out to be the simple MLP. We can explain this behavior as follows: the advanced architectures (the CNN and biLSTM) require more complexity-hungry components (filters or hidden units) to learn the data dynamics, while the MLP is less demanding using just the summation and activation functions at the basic level. Overall, we conclude that the addition of the convolutional layer can be beneficial if we do not restrain the complexity. However, the important message is that complexity can play an important and even crucial role in the hardware implementation of the NN equalizers. Our analysis demonstrates that even the simple NN structures, like the MLP, can outperform the more advanced counterparts when the complexity is constrained to relatively low levels. | 12,180 | 2021-03-15T00:00:00.000 | [
"Engineering",
"Physics",
"Computer Science"
] |
Ultrabroadband High Photoresponsivity at Room Temperature Based on Quasi‐1D Pseudogap System (TaSe4)2I
Abstract Narrow bandgap materials have garnered significant attention in the field of broadband photodetection. However, their performance is impeded by diminished absorption near the bandgap, resulting in a rapid decline in photoresponsivity in the mid-wave infrared (MWIR) and long-wave infrared (LWIR) regions. Furthermore, they mostly operate at cryogenic temperatures. Here, without the assistance of any complex structure or special environment, high responsivity covering an ultra-broadband wavelength range (ultraviolet (UV) to LWIR) is realized in a single quasi-1D pseudogap (PG) system, a (TaSe4)2I nanoribbon, with especially high responsivity (from 23.9 to 8.31 A W−1) in the MWIR and LWIR regions at room temperature (RT). By directly probing the carrier relaxation process with broadband time-resolved transient absorption spectroscopy, the underlying mechanism, a predominantly photoconductive effect arising from an increased spectral weight extending into the PG region, is revealed. This work paves the way for realizing high-performance uncooled MWIR and LWIR detection using quasi-1D PG materials.
Introduction
High-performance broadband photodetection, especially in the MWIR (3-5 μm) and LWIR (7-14 μm) range, is quite important for applications such as thermal imaging, medical quarantine, and industrial monitoring. DOI: 10.1002/advs.202302886 Narrow bandgap semiconductors such as HgCdTe, [1] InGaAs, [2] and InSb, [3] are commercially used for MWIR and LWIR detection. However, they require complex preparation processes and must work at cryogenic temperature to reduce the dark current, which makes them expensive. It is therefore desirable to find novel and efficient photosensitive materials, especially ones working in the middle and far-infrared region at room temperature (RT). Over the past few years, a large number of wide-band photodetection devices based on narrow-bandgap semiconductors [4] or semimetal materials [5] have been explored. However, on one hand, due to the near-zero band gap, such devices usually have high dark current, restricting the improvement of photoresponsivity and detectivity, so it remains a challenge to achieve high-performance photodetection in the MWIR and LWIR range at RT. On the other hand, the absorption, reduced by orders of magnitude from the visible to the LWIR range, leads to low responsivity in the middle and far IR.
For a traditional narrow bandgap semiconductor, the reduced absorption from short to long wavelengths results from the reduced density of states as the energy approaches the bandgap edge. To compensate for the much lower absorption at longer wavelengths, that is, in the MWIR or LWIR, mid-gap states have been intentionally introduced by generating defects, [6] and heterostructures combined with plasmonic quantum dots have been explored. [7] An extrinsic method to enhance the absorption at longer wavelengths is to add external structures such as a cavity, [8] optical antenna, [9] plasmonic structure, [10] metasurface, [11] or antireflection structure. [12] Nevertheless, these methods are either hard to control or suffer from high fabrication cost, and can hardly be used for stable, low-cost MWIR or LWIR photoelectronic applications. Another approach to address this issue is to combine materials with different bandgaps, wherein each bandgap material absorbs light in its respective wavelength region. By exploiting bandgap engineering and quantum tunneling, the quantum cascade broadband IR photodetector (quantum cascade detector, QCD) was proposed and has nowadays become the state-of-the-art IR photodetector. [13] However, due to the complex structure and the required advanced fabrication techniques, the cost of producing and maintaining a QCD is extremely high, and its lifetime is still very limited. Therefore, a single-material broadband IR photodetector with high photoresponse across all wavelength ranges is highly demanded.
Specifically, to improve the MWIR and LWIR photoresponsivity of a single material at room temperature, photogating and bolometric mechanisms were proposed in previous studies. However, either the sample defects required for photogating are not controllable, or a high temperature coefficient of resistance (TCR) is needed for the bolometric effect. [14] Recently, it has been demonstrated that 2D charge density wave (CDW) materials exhibit enhanced photoresponsivity of ≈1 A W−1 due to their collective transport, thereby improving the intrinsic photoconduction. [15] The CDW gap is usually small and develops in a metallic ground state, which enables a high density of states near the gap edge and hence high absorption of long-wavelength light. Nevertheless, photodetectors based on 2D CDW materials are very unstable due to the random distribution of CDW domains, [16] and they in fact work as threshold detectors, which limits the achievement of higher performance.
Naturally, the question arises as to how to find a material with a semiconducting ground state but a high density of states (DOS) near the bandgap edge available for broadband photoconduction. This question draws our attention to materials with a PG. PG systems have previously attracted great interest in condensed matter physics, especially in connection with the Mott superconducting transition of high-Tc superconductors. [17] Usually, such a PG is associated with the strongly correlated electron state that arises when a 2D Mott insulating system is doped to the boundary of the "strange metal" [18] or with a 1D strongly correlated metal, such as a Luttinger liquid state [19] or a bipolaron state. [20] Since these systems already have a large occupied DOS for photon excitation while the dark state is semiconducting, they are natural candidate materials for seeking equally high photoresponsivity from the visible to the MIR region. However, to date, studies of light modulation in PG materials and their application to broadband photodetection are almost absent. In this work, we investigate the broadband photoresponse of 1D (TaSe4)2I, which has a PG at room temperature. The PG in (TaSe4)2I was investigated previously [21] and has been revisited in recent studies utilizing time-resolved angle-resolved photoelectron spectroscopy (trARPES). [20,22] The observed enhancement of the spectral weight at the Fermi energy (EF) under λ = 780 nm photoexcitation suggests potential advantages for achieving high-performance photodetection. [22] Herein, we investigate the photoresponse of 1D (TaSe4)2I and present a physical scenario that corroborates the broadband photoresponse of PG materials, as Figure 1 depicts. When the photon energy of the laser is much larger than the PG, the photoexcited electrons have much higher energy than the conduction band edge and therefore lose most of their energy by electron-electron and electron-phonon scattering before they reach the conduction band edge and contribute to the photocurrent. Such a high ratio of energy loss leads to a low photocurrent generation efficiency. As the photon energy approaches the size of the PG, the energy loss from scattering is almost suppressed, resulting in a high photocurrent generation efficiency. When the photon energy is smaller than the PG, since there is still sufficient DOS within the gap for photoexcitation, the photocurrent generation efficiency does not decay heavily, in sharp contrast to a real semiconductor gap. Under this picture, the photoresponsivity of a PG material exhibits uniformly high values across the whole wavelength spectrum, which is the key finding of our work.
Recently, a (TaSe4)2I nanowire was reported with a high responsivity of 0.792 A W−1 in the near-infrared region, [23] but research on the photoresponse mechanism was still at an infant stage. Additionally, low noise-current levels in the dark state were observed in (TaSe4)2I nanoribbons recently, [24] which would be beneficial for obtaining high detectivity in photodetection applications. Therefore, it is desirable to further explore the optoelectronic properties in the long wavelength range.
In this work, we have prepared (TaSe4)2I nanoribbons by the mechanical exfoliation method; the minimum thickness and maximum length reach 6 nm and 312 μm, respectively. We investigated the photoresponse of (TaSe4)2I over a broadband region. Different from a traditional narrow bandgap semiconductor, the increased spectral weight under photoexcitation extends into the PG, which leads to high responsivity from λ = 375 nm to λ = 10.6 μm in the high-quality exfoliated (TaSe4)2I nanoribbon devices. Both the major photoconduction and the minor bolometric mechanism contribute to the photoresponse, which is unambiguously clarified by time-resolved pump-probe measurements. To the best of our knowledge, our work is the first to report such broadband superior responsivity (from 23.9 to 8.31 A W−1) in the MWIR and LWIR region in a 1D single-nanoribbon system. Our results demonstrate that quasi-1D PG materials are promising for MWIR and LWIR photodetection at RT.
Results and Discussion
(TaSe4)2I has a tetragonal unit cell (space group I422) that consists of TaSe4 chains with helical symmetry, placed in the middle of the faces and separated by chains of iodine atoms; [23,24] the crystal structure is shown in Figure S1a (Supporting Information). In this work, a high-quality (TaSe4)2I single crystal was synthesized by a one-step chemical vapor transport method; a stoichiometric mixture of Ta and Se was used, with an excess of iodine serving as both reactant and transport agent, as illustrated in Figure S1b (Supporting Information). The bottom panel of Figure S1b (Supporting Information) shows a typical optical image of as-grown (TaSe4)2I needle-like single crystals in the cold zone region. The X-ray diffraction (XRD) spectrum shown in Figure S1c (Supporting Information) confirms the pure phase composition and high crystal quality. Energy-dispersive X-ray spectroscopy reveals a Ta:Se:I ratio of 1.9:7.4:1 (Figure S2, Supporting Information). Due to the weak interchain binding energy in (TaSe4)2I, it is very easy to obtain (TaSe4)2I nanoribbons by the traditional mechanical exfoliation (MF) method. Figure S3a (Supporting Information) shows a typical optical image of an exfoliated (TaSe4)2I nanoribbon/nanoplate, whose thickness was confirmed by AFM. [5b,27] The smooth surface of exfoliated (TaSe4)2I nanoribbons with different widths was confirmed by the scanning electron microscopy (SEM) image shown in Figure S4a (Supporting Information). The high-resolution transmission electron microscopy (HRTEM) image of exfoliated thin (TaSe4)2I shown in Figure S4b (Supporting Information) indicates an interplanar lattice spacing of 7.0 Å, and the selected area electron diffraction (SAED) pattern (inset of Figure S4b, Supporting Information) with sharp diffraction spots confirms the high-quality single-crystalline structure of (TaSe4)2I. The Raman spectrum taken on a freshly exfoliated (TaSe4)2I single crystal is shown in Figure S4c (Supporting Information), which confirms that there is no sample degradation after thinning.
Next, to investigate the potential optoelectronic applications of (TaSe4)2I, as Figure 2a depicts, large-area LPE samples for absorption measurements and two-probe-configuration photodetectors based on freshly MF (TaSe4)2I nanoribbons were fabricated. We studied the power-dependent and wavelength-dependent photoresponse of the (TaSe4)2I nanoribbon device over a wide wavelength range at RT. All photoresponse measurements of the (TaSe4)2I nanoribbon were performed under a bias voltage of 1 V, with the laser polarization parallel to the chains unless specially mentioned. Figure 2b shows the photoresponse under λ = 635 nm and λ = 4640 nm laser excitation, respectively, with the laser power intensity kept at the same value of 6.7 mW mm−2. Surprisingly, we find that the photocurrent is only slightly attenuated from the visible to the MWIR region. This phenomenon is in stark contrast to a traditional semiconductor, in which the photocurrent decreases quickly as the photon energy approaches the bandgap. A high photoresponsivity (R) of 23.9 A W−1 was obtained under λ = 4640 nm excitation, defined as R = I_pc/P (I_pc refers to the photocurrent and P is the power irradiating the device channel). In addition, the bias-dependent photocurrent is investigated as shown in Figure 2c; the results indicate that the photoresponsivity can be optimized by increasing the bias voltage. The photoresponsivities obtained under UV to LWIR radiation are summarized in Figure 2d (bottom panel). The variation trend of the responsivity fits well with the absorption derived from the transmittance spectrum, as shown in the top panel of Figure 2d. The photoresponse in the LWIR region is shown in Figure S5 (Supporting Information). [6b] Other mechanisms for the slow response time, such as the bolometric effect, can be excluded as the main contribution in the (TaSe4)2I nanoribbon. Bolometric-effect-induced slow thermal transport plays a secondary role due to the fast heat dissipation in the thin nanoribbon, compared with the wide-thick nanoplate sample (Figure S7, Supporting Information). Also, the measured R-T curve of the (TaSe4)2I nanoribbon shows a small TCR (≈1.22%/K), indicating a small contribution from the bolometric effect (Figure S6, Supporting Information). The response time, that is, the rise time versus wavelength, is summarized in Figure 2f. The relatively slow response time is attributed to defect-induced trap states in the nanoribbon, [28] whose origin we analyze in detail in Supplementary Note S1 (Supporting Information).
To reveal the photoresponse mechanism more clearly, we performed broadband time-resolved transient absorption (TA) spectroscopy measurements. The TA technique can directly probe the broadband carrier relaxation process, in contrast with the indirect evidence derived from power-dependent photocurrent or temperature-dependent electric transport measurements. The TA setup is illustrated in Figure 3a; the excitation wavelength of λ = 500 nm is produced by a fundamental beam pumping an optical parametric amplifier (OPA). The probe wavelength from the visible to the MWIR region (600 nm-7 μm) is provided by a supercontinuum laser beam and another OPA. The 2D TA spectra obtained under probe wavelengths of 600-900 nm, 1200-1600 nm, 4-5 μm, and 6-7 μm are presented in Figure 3b,c and Figure S9a,b (Supporting Information), respectively. Interestingly, different from the negative TA signal (∆A) in the near-infrared region, the TA signal in Figure 3 is positive, indicating that a photoinduced absorption phenomenon occurs in the MWIR and LWIR region. Combined with analysis of the measured broadband transmittance results (inset of Figure 2d), we confirm the existence of the PG, in accordance with the value of ≈300 meV at RT derived from previous trARPES measurements. [20,22] Furthermore, a strong TA signal at ≈4.3 μm was observed (Figure 3b), indicating a large photoconductivity, which is possibly associated with photo-excited single-polaron states within the PG. [29] The nano-FTIR measurement on an 80 nm-thick sample under pulsed laser excitation also reveals a photoabsorption peak at ≈4.3 μm (Figure 3d). The carrier dynamics extracted at typical probe wavelengths of 4.3 and 6.5 μm are shown in Figure 3e,f, respectively. The TA kinetics show the typical characteristic of a fast and a slow decay time. The dominant fast component contributes to the ultrafast large photoconductivity, while the slow part (beyond 6 ns) results from the heat accompanying the elevated lattice temperature, due to the sample's poor thermal conductivity. The carrier relaxation at a probe wavelength of λ = 4.3 μm displays a decay time of 687 fs obtained by single-exponential fitting, agreeing with the recently reported trARPES measurement. [20] With increasing probe wavelength toward the LWIR region, the fast TA decay lengthens, doubling to 1.51 ps at λ = 6500 nm (Figure 3f), which suggests a slowing down of the photocarrier transport. Together with the reduced subgap absorption, the photoconductivity becomes suppressed in the LWIR region, leading to a decayed photoresponsivity, about half of that at 4.3 μm. Nevertheless, the photoresponsivity maintained into the LWIR is still high compared to that of other low-dimensional materials. The reason why the photoresponsivity is not suppressed sharply to zero when the photon energy is below the PG size calls for further investigation; it is probably due to the finite DOS available for excitation in a PG, in contrast to the zero-DOS state of conventional semiconductors.
Overall, the (TaSe4)2I nanoribbons demonstrate high photoresponsivity, especially in the MWIR and LWIR regions. For example, the responsivity of ≈23.9 A W−1 @ 4.64 μm is two orders of magnitude higher than that of the 2D Weyl semimetal TaIrTe4, [30] the 3D Weyl semimetal TaAs, [5a] and the 1D narrow-bandgap semiconductor InAsSb [31] operating at 77 K. The responsivity could be further improved by optimizing the sample geometry, e.g., ≈170 A W−1 @ 4.64 μm obtained in another nanoribbon device (sample S5, Figure S10, Supporting Information). The photoresponsivities of our nine fabricated devices are shown in Figure S11 (Supporting Information). The photoresponsivity also outperforms that of the commercial HgCdTe detector (responsivity of 0.2-1.7 A W−1) operating at liquid nitrogen temperature, [1a] far beyond most reported low-dimensional MWIR photodetectors. A more detailed broadband photoresponsivity comparison with other single low-dimensional materials is summarized in Figure 4. Only the intrinsic photoresponsivity of a single material is considered here (without any treatment such as applying a plasmonic structure or ferroelectric polymers, or composing a heterostructure). To the best of our knowledge, for single 1D systems, our work is the first to report such ultra-broadband high responsivity covering λ = 375 nm to λ = 10.6 μm, with record-high values in the LWIR region at RT. Even compared with 2D materials, our device outperforms most of them from the MWIR to the LWIR region.
According to Fermi's golden rule, the photo-excited interband transition rate is related to the DOS and the transition matrix element; more precisely, the absorption ∝ g(hν)|M_cv|², [32] where g(hν) is the joint DOS involving both the conduction and valence bands, and M_cv is the transition matrix element. The DOS of a thick material displays a square-root decay toward the bandgap edge, [33] and thus the absorption coefficient of most semiconducting film materials decays quickly. In contrast, for (TaSe4)2I the DOS does not decay much, owing to the pseudogap nature of the room-temperature ground state. Interestingly, (TaSe4)2I is also claimed to be a Weyl semimetal at room temperature, [34] and thus can also maintain a slowly decaying absorption, due to the linear dispersion of the band structure in the low-energy region, similar to the case of the Dirac semimetal graphene, [35] when considering the joint DOS and the transition matrix element.
The photoresponsivity (R) of a photodetector based on a low-dimensional material can be expressed as [36]:

R = (e α t / hν) (τ_l / τ_tr),

where e is the elementary charge, t is the sample thickness, α is the absorption coefficient, h is Planck's constant, ν is the light frequency, τ_l is the carrier lifetime, and τ_tr is the electron transit time. For (TaSe4)2I, the absorption coefficient α does not vary much as the photon energy decreases when the light polarization is parallel to the |TaSe4| chains. [29] Such relatively high absorption in the long-wavelength region enables (TaSe4)2I to possess equivalently high photoresponsivity over an ultra-broadband region, in contrast to the fast decay near the bandgap edge for most thick 2D semiconductor materials, as displayed in Figure 4b.
Furthermore, the Fourier transform infrared (FTIR) spectrum (inset of Figure 2d) demonstrates optical absorption of (TaSe4)2I beyond 10 μm, indicating a great potential for realizing high responsivity in the far-infrared band.
(Notes to Figure 4: "1D PD" is short for quasi-1D nanowire/nanoribbon-based photodetectors; "2D PD" is short for 2D material-based photodetectors; data compiled from refs. [4a-d,5b,6,27,30,31,37] and [6b,37h,38a,39]; the working temperature is room temperature unless specially labeled; the lines in Figure 4b are guides to the eye; the symbol ▲ indicates defect-related trap states used to tune the photoresponse.)
The noise equivalent power (NEP) is another important figure of merit to evaluate the weak-light detection performance: the lower the NEP, the more sensitive the photodetector. It can be expressed as NEP = RMS(I_noise)/R, where I_noise is the noise current of the device and RMS is the root mean square. The measured noise current spectrum is shown in Figure S12 (Supporting Information). The typical NEP value obtained is ≈753 fW/Hz^1/2 @ 4.64 μm (sample S1, Supporting Information), while the best value we achieve is ≈38 fW/Hz^1/2 @ 4.64 μm (sample S5, Supporting Information), which is almost the lowest value among single low-dimensional materials within the MWIR region at RT, as shown in Figure 4c. A more detailed detectivity comparison within the MWIR and LWIR range is summarized in Figure 4d (2D) and Figure S13 (Supporting Information) (1D), and is also competitive. In addition, the availability of a blackbody response in an infrared photodetector is critical for practical applications. Details of the blackbody response test of the (TaSe4)2I nanoribbon are given in Supplementary Note 3. Under a bias voltage of 0.1 V, the responsivity of the device is ≈17 A W −1 under 1200 K blackbody source illumination, which is superior to that of most low-dimensional blackbody-sensitive photodetectors. [40] The measured detectivity is ≈1.56 × 10^8 Jones, which is higher than that of 1D carbon nanotubes and comparable with 2D Te. [40b,c] The detectivity can be further improved by specifically designing heterostructures with other 2D materials. [41] Although the response time is relatively slow at the present stage, on the one hand there is often a trade-off between responsivity and response time, and the defect density of the sample could be reduced by further iodization treatment through annealing in an iodine atmosphere. [42] On the other hand, beyond photodetection applications, the fall time with a relatively long tail of ≈20 s observed here results from a charge de-trapping process, and the resulting persistent photoconductivity (PPC) phenomenon could be exploited for novel optoelectronic synapses and optical memory applications. [43] Although not all the parameters bear the best values, the proposed new type of broadband photodetector based on a quasi-1D PG system may provide a new route to achieving uncooled, high-performance broadband photodetectors.
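A short sketch of how the NEP and the specific detectivity follow from the measured noise current density and responsivity; the noise current density and device area below are assumed placeholder values, not the measured figures of this work.

```python
import math

def nep(noise_current_density, responsivity):
    """Noise equivalent power NEP = RMS(I_noise)/R, in W/Hz^0.5 when the noise
    current is given as a spectral density in A/Hz^0.5."""
    return noise_current_density / responsivity

def detectivity(nep_value, area_cm2, bandwidth_hz=1.0):
    """Specific detectivity D* = sqrt(A*df)/NEP, in Jones (cm*Hz^0.5/W)."""
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_value

R = 23.9                 # A/W at 4.64 um (from the text)
i_noise = 1.8e-11        # A/Hz^0.5, assumed noise current density
area = 1e-8              # cm^2, assumed active area (~1 um^2 channel)

NEP = nep(i_noise, R)
print(f"NEP = {NEP*1e15:.0f} fW/Hz^0.5, D* = {detectivity(NEP, area):.2e} Jones")
```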
Conclusion
In summary, we demonstrated a new type of ultra-broadband, high-photoresponsivity photodetector based on the PG system (TaSe4)2I. Owing to the increased spectral weight under photoexcitation extending into the PG region, an ultra-broadband high photoresponse from 375 nm to 10.6 μm was demonstrated for a single (TaSe4)2I nanoribbon. Furthermore, the broadband photoexcited carrier dynamics were revealed in our TA experiments, demonstrating that the high photoresponsivity originates from the absorption of single-polaron states within the PG and a predominantly photoconductive mechanism. The typical nanoribbon device shows high photoresponsivity especially in the MWIR and LWIR regions (from 23.9 to 8.31 A W −1 at a bias voltage of 1 V) at RT. The best performance we achieve (photoresponsivity of 170 A W −1, NEP of 38 fW/Hz^1/2, and detectivity of 1.54 × 10^9 Jones) is very competitive among MWIR photodetectors based on single low-dimensional materials. In addition, the device is also demonstrated to have a large blackbody response. Such balanced performance in the broadband IR region makes (TaSe4)2I a potential candidate material for high-performance broadband IR photodetectors, like QCD. Our work thus paves the way for exploring low-manufacturing-cost, high-performance MWIR and LWIR photodetectors operating at RT by using quasi-1D PG materials.
Experimental Section
Materials Synthesis: High-quality single crystals of (TaSe4)2I were synthesized by the chemical vapor transport (CVT) method in a sealed quartz tube. High-purity Ta (4N), Se (4N), and I (4N) were mixed in stoichiometric ratio and sealed in an evacuated quartz tube, which was inserted into a furnace with a temperature gradient from 500 to 400 °C, with the educts in the hot zone. After 2 weeks, shiny needle-like crystals were obtained in the cold zone. Thin (TaSe4)2I nanosheets synthesized by the liquid-phase exfoliation (LPE) method were prepared for the broadband absorption measurements and directly deposited on a Cu grid for TEM characterization.
Materials Characterization: HRTEM analysis was carried out on a JEM-2100F with an acceleration voltage of 200 kV. The SEM and EDX characterizations were performed in an Oxford SEM system. Raman spectroscopy was performed on freshly cleaved (TaSe4)2I under a 100× objective lens using a grating of 1800 g mm −1. To avoid laser-induced damage of the samples, the Raman spectra were recorded at a low power level (P ≈ 500 μW). For broadband optical absorption analysis, the transmittance spectra were measured with a UV-NIR spectrometer (Agilent Cary 5000) and an FTIR spectrometer (Vertex 70) at room temperature. The nano-FTIR spectra were measured with a multi-functional nano-infrared spectrometer (Anasys Instruments Inc.).
Device Fabrication: For the fabrication of the nano-thick devices, electrode patterns were defined by standard electron-beam lithography. Metal electrodes (10 nm Cr/100 nm Au) were deposited by thermal evaporation in a PVD system (K. J. Lesker Nano 36). The thickness of the (TaSe4)2I nanoribbons was determined by atomic force microscopy in non-contact mode (Park NX-10).
Electrical and Photoresponse Measurement: The current-voltage (I-V) measurements were performed in voltage-driven mode along the chain direction. The photoelectric signal and photoresponse time under bias voltage were measured using a Keithley 2450 sourcemeter. For the wavelength-dependent photocurrent measurements, different continuous-wave solid-state lasers (Changchun New Industries Optoelectronics Technology Ltd.; λ = 375 nm, 437 nm, 635 nm, and 1064 nm), a single-wavelength (λ = 4.64 μm) and a wavelength-tunable (λ = 6-10.6 μm) mid-IR continuous-wave quantum cascade laser (Daylight Solutions) were used as light sources. The incident light power illuminating the device was monitored by calibrated power meters. The laser spot diameters were ≈3, 4, 4, 1, 3.9, and 2 mm for 375 nm, 437 nm, 635 nm, 1064 nm, 4.64 μm, and 6-10.6 μm, respectively. A 1200 K blackbody source (HGH RCN1250) was used to measure the blackbody response of the device.
Figure 1. Energy band diagram and proposed concept of the photodetector based on the PG system (TaSe4)2I at room temperature.
Figure 2. Photoresponse characterization of (TaSe4)2I. a) SEM image of the prepared sample on a CaF2 substrate (top panel) and optical image of the exfoliated nanoribbon device on a 285 nm SiO2/Si substrate (bottom panel, sample S1, Supporting Information); the channel area is ≈1 μm². b) Typical photoresponse under λ = 635 nm and λ = 4.64 μm excitation for the (TaSe4)2I nanoribbon device. c) Voltage-dependent photoresponse under λ = 4.64 μm excitation. d) Top panel: transmittance spectra from the UV to the LWIR region; bottom panel: photoresponsivity from the UV to the LWIR region. e) Power-dependent photoresponse in the LWIR region. f) Photoresponse time from the UV to the LWIR region. The power density is fixed at ≈6.7 mW mm −2 for (b-d) and (f).
Figure 3. The photoresponse mechanism of (TaSe4)2I nanoribbons. a) TA setup. b) 2D TA spectrum over the probe wavelength range of 4100-5100 nm. c) 2D TA spectrum over the probe wavelength range of 6000-7000 nm. d) Nano-FTIR absorption measurement under MWIR pulsed laser excitation on an 80 nm-thick sample. e,f) Typical extracted carrier dynamics at probe wavelengths of λ = 4300 and 6500 nm.
"Materials Science",
"Physics"
] |
On the alternative approaches to stability analysis in decision support for damaged passenger ships
A decision support system with damage stability analysis has been recognized as an important tool for passenger ships. Various software applications have been developed and taken into use over the years, without a direct link to any compelling requirement, set forth in the international regulatory framework. After the Costa Concordia accident, new regulations have been established, setting minimum requirements for a decision support system, as an extension to a loading computer. Yet, more advanced systems have been developed recently, aiming at providing valuable additional information on the predicted development of the stability of the damaged ship. This paper presents these alternative decision support systems with damage stability analysis methods for flooding emergencies on passenger ships. The technical background, usability, and usefulness of the various approaches are compared and discussed, taking into account the important statutory approval point of view. In addition, practical examples, including past accidents, are presented and discussed.
Introduction
An important strategy to reduce the disaster potential of maritime accidents is to enhance post-accident situational awareness and related decision making, Goerlandt et al. (2016). Rapid and correct decisions onboard are needed, especially in the case of a flooding accident of a passenger ship. The situation may evolve fast, leaving the crew with a short time frame for appropriate actions. Decisions on evacuation, abandonment, and possible counter actions need to be based on the predicted time frame and evolvement of the scenario. Consequently, a dedicated decision support system is an essential tool in a distress situation. The grounding and subsequent capsizing of the Costa Concordia in 2012 further emphasized this need. Alternative solutions for such systems have been developed both for use onboard the flooded ship and in a shore-based support center. A brief overview of this progress was given by Pennanen et al. (2017). The present study further elaborates the implications of the alternative methods for both onboard and shore-based decision support, accounting for the latest research and development, both from technical and regulatory perspectives.
One of the first concepts for a decision support in flooding accidents was outlined by Lee et al. (2005), including a suggestion for color-coding of damage stability characteristics. More advanced user experience through a virtual environment for decision support was introduced by Varela and Guedes Soares (2007), focusing on the visualization of both the flooding and relevant equipment for damage control. Ölcer and Majumder (2006) presented a case-based reasoning, using a large number of precalculated damage scenarios and an algorithm to select the closest one to the actual condition. More recently, also Kang et al. (2017) have proposed using pre-calculated time-domain simulation results in a decision support system.
Excessive heeling of the ship complicates the evacuation process, as described, e.g., by Bles et al. (2002), and may even prevent the launching of the lifeboats. Moreover, heeling increases the risk of capsizing, and consequently, already Lee et al. (2005) emphasized the heel angle as a critical parameter for decision support. Thereafter, research has focused on planning optimal counter ballasting actions to reduce heeling, for example Lee (2006); Martins and Lobo (2011); Calabrese et al. (2012); Choi et al. (2014); and Hu et al. (2015). These tools have mainly been developed for navy ships. However, in military applications, the main objective is to maintain the functionality of the weapon systems, whereas for passenger ships, the target is to ensure survivability of the people onboard the damaged ship. Consequently, the needs for the decision support system are also somewhat different. Yet, there are also similarities, such as the objective to minimize heeling.
An advanced monitoring tool, informing the crew about the current vulnerability status of the ship, was introduced by Jasionowski (2011). Most notably, this approach was shown to improve the awareness of the crew, aiding decision-making in case of a flooding accident. In a distress situation, the available information on the flooding extent and damage stability of the ship is essential, and, for example, Varela et al. (2014) emphasize the need to provide the crew with a prediction of the progression of flooding.
During the past decade, several time-domain flooding simulation tools have been developed, such as Jasionowski (2001); Ruponen (2007); Dankowski (2013); Lee (2015); and Ypma and Turner (2019), enabling calculation of flooding progression in the complex arrangement of compartments and openings of a large passenger ship. Typically, it is assumed that water levels in the flooded compartments are horizontal, and Bernoulli's theorem is used to calculate the flow rates in the openings. With increased computing capacity, these simulation tools can also be used in onboard decision support applications. Initially, a simplified approach was introduced by Ruponen et al. (2012) for rapid assessment of flooding progression. However, later more accurate onboard simulation methods have been presented by Varela et al. (2014, 2015); Ruponen et al. (2015, 2017); and Braidotti and Mauro (2019). The main benefit of such tools is the capability to estimate the time frame for the evolvement of the scenario. Even if the ship eventually capsizes, there may still be enough time to carry out orderly evacuation and abandonment. Quantification of the current safety level, accounting for the results of the flooding prediction and possible manual user input, needs to be included in a decision support system (DSS). Usually, simple criteria for stability characteristics are applied, as presented, e.g., by Lee et al. (2005). However, other factors, such as the weather condition and available systems, can also be included in this assessment. Recently, Braidotti et al. (2018) have considered integration of all aspects of ship survivability into a global risk index, and an overview of recent developments in ship stability and operational risk is provided in Manderbacka et al. (2019).
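To illustrate the basic mechanics of such time-domain flooding simulation, the minimal sketch below advances the water volume in a single box-shaped compartment using Bernoulli's equation for the flow through a breach, assuming a horizontal internal water surface and a fixed external sea level. This is a toy model only; the cited simulation tools additionally handle ship motions, air compression, and flooding through internal openings. All dimensions are hypothetical.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def breach_inflow(cd, area, head):
    """Volumetric inflow through a breach, Q = Cd*A*sqrt(2*g*dh) (Bernoulli).
    head: difference between external sea level and internal water level, m."""
    if head <= 0.0:
        return 0.0                    # internal level has reached the sea level
    return cd * area * math.sqrt(2.0 * G * head)

def simulate_flooding(floor_area, breach_area, breach_depth,
                      dt=1.0, t_end=3600.0, cd=0.6):
    """March the internal water level in one compartment forward in time.
    breach_depth: depth of the breach below the external waterline, m."""
    level, t, history = 0.0, 0.0, []
    while t < t_end:
        q = breach_inflow(cd, breach_area, breach_depth - level)
        level += q * dt / floor_area  # rise of the horizontal internal surface
        t += dt
        history.append((t, level, q))
    return history

# Hypothetical compartment: 200 m^2 floor, 0.05 m^2 breach, 2.5 m below the waterline
for t, level, q in simulate_flooding(200.0, 0.05, 2.5)[::600]:
    print(f"t = {t:6.0f} s  level = {level:5.2f} m  inflow = {q*1000:6.1f} l/s")
```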
A fundamental aspect of decision support is the communication between all stakeholders, such as shore-based support and search and rescue (SAR) personnel. For this purpose, an elaborate Vessel TRIAGE system, providing means of communicating the status of the situation, has been developed, Nordström et al. (2016). Analogously to the widely used medical TRIAGE, the severity of the situation is displayed with color codes: green, yellow, red, and black, Table 1. Combined with damage stability calculation in the time domain, the Vessel TRIAGE system forms a solid background for an effective and useful decision support system.
Various active counter measures can also be included in the decision support framework, as described in Boulougouris et al. (2016). Recently, e.g., Kang et al. (2018) have considered a concept of a buoyancy support system. However, if all factors are not known and properly accounted for in the decision making, such counter measures may also have a negative impact on the stability and survivability of the damaged ship. For example, an incorrectly applied buoyancy support system may increase the asymmetry of flooding and the risk of capsizing.
The main consequences of flooding are decreased freeboard and reduced stability. The crew of the damaged passenger ship needs to react promptly and decide on mustering and possible abandonment of the ship. Disorderly evacuation and abandonment can also cause casualties and serious injuries. Therefore, if the ship will remain afloat with sufficient reserve stability, there is no need for immediate evacuation. On the other hand, if the ship will capsize, a delayed start of evacuation is likely fatal. Ockerby (2001) points out the need to keep the passengers well informed on the facts of the situation, starting from the very first alarm, in order to avoid panic. These actions obviously require rapid assessment of the situation. Data from the automation system and advanced tools for analysis of the situation can enhance objectively the awareness of the situation and support the crew in the distressed situation.

Table 1 Vessel TRIAGE categories
Green: Vessel is safe and can be assumed to remain so.
Yellow: Vessel is currently safe, but there is a risk that the situation will get worse.
Red: Level of safety has significantly worsened or will worsen and external actions are required to ensure safety of the people aboard.
Black: Vessel is no longer safe and has been lost.
For passenger ships, the loss of stability is usually caused by progressive flooding to undamaged compartments. The non-watertight structures, such as closed A-class fire doors, inside the watertight compartments can have a notable effect on the flooding progression. Typically, the closed doors leak and eventually collapse under a quite moderate pressure head of 2.0…3.5 m, Jalonen et al. (2017). For example, simply by closing all A-class fire doors, the time-to-sink can be prolonged by several hours in certain damage cases, Ruponen (2017). The actual status (open/closed) of these doors may be available from the automation system, and this data can be used for more accurate analysis of the flooding progression. Moreover, these previous studies point out that for passenger ships, there can be thousands of alternative ways for the same damage scenario to evolve, depending on the door statuses. Consequently, it is impossible to effectively consider all possible combinations in a decision support system that relies on pre-calculated results.
2 Alternative approaches for damage stability analysis in decision support
Regulatory requirements
The International Maritime Organization (IMO) has taken the decision that all passenger ships covered by the SRtP (Safe Return to Port) requirement and built after 2014 need to be equipped with a stability computer, capable of providing the master with operational information after a flooding casualty. Alternatively, a shore-based support providing the same can be used. The requirement is included in the amendments of the Safety of Life at Sea (SOLAS) text, and relevant detailed guidelines are given in MSC Circulars 1400 and 1532 (IMO 2011, 2016). In its 99th session, the IMO Maritime Safety Committee (MSC) extended this requirement to concern also existing passenger ships built before 2014, in the SOLAS edition entering into force on January 1, 2020. The relevant guideline, which takes into account the characteristics of older tonnage, is MSC Circular 1589 (IMO 2018). A comprehensive description of the regulatory background is given in Hutchinson and Scott (2015).
In the amended SOLAS text, MSC.436(99), the relation of these guidelines is clarified, meaning that the Circular 1400 only affects ships built between January 1, 2014 and May 13, 2016, whereas the revised circular 1532 affects ships built after May 13, 2016. The latest circular 1589 affects only existing ships, built before January 1, 2014.
It is noted that the ships built before 2014 represent a vast majority of different passenger ships in operation. According to Equasis (2018), over 90% of all passenger ships over 500 GT are older than 5 years. These include both pure passenger ships and ro-ro/passenger (RoPax) vessels, covered by many editions of SOLAS conventions in use at the time of their construction.
Flooding detection
An essential aspect of decision support in an accident situation is fast and reliable flooding detection. New passenger ships are equipped with flood level sensors, IMO (2008). An adequate number of well-placed flood level sensors enables the calculation of a time-domain flooding prediction, Takkinen et al. (2017). New ships usually have automation systems capable of providing all needed data for the damage stability computer directly through various interfaces. On the contrary, the installation of flood level sensors on older ships is complicated and costly.
Recently, Karolius et al. (2018) have introduced risk-based positioning of flooding detection sensors. Such an approach may be very useful for designing the instrumentation of new passenger ships. Yet, it is of utmost importance that all watertight compartments are equipped with sensors, even if the risk of flooding is very small. Otherwise, there may be a notable delay in the flooding detection and the subsequent alarm. Trincas et al. (2017) suggest that flooding could be detected based only on the observed change in the floating position of the ship. In ideal conditions, this could be used to trigger an alarm on possible flooding, but it is considered extremely difficult to obtain a reliable assessment of the real damage case without proper flooding detection sensors in the compartments, mainly because the same change in the floating position may result from several different combinations of flooded compartments and breaches. Consequently, manual user input from the crew will be needed if there is no flooding detection in the compartments.
Overview of alternative approaches
The conventional approach for damage stability assessment onboard is to calculate the final equilibrium after flooding based on the current loading condition. In practice, loading computer software, relying on static damage stability method, is used for this purpose. International Association of Classification Societies (IACS) defines four different types of stability software, Table 2, in the Unified Regulations regarding Onboard Computers for Stability Calculations, IACS (2017). In principle, only Type 4 can be considered as a decision support tool.
Table 2 Different types of stability software onboard the ship, as specified in IACS (2017)
Type 1: Software calculating intact stability only (for vessels not required to meet a damage stability criterion).
Type 2: Software calculating intact stability and checking damage stability on the basis of a limit curve (e.g., for vessels applicable to SOLAS Part B-1 damage stability calculations) or checking all the stability requirements (intact and damage stability) on the basis of a limit curve.
Type 3: Software calculating intact stability and damage stability by direct application of preprogrammed damage cases based on the relevant conventions or codes for each loading condition (for some tankers, etc.).
Type 4: Software calculating damage stability associated with an actual loading condition and actual flooding case, using direct application of user-defined damage, for the purpose of providing operational information for safe return to port (SRtP).

More recent developments of onboard software include time-domain prediction of damage stability, as presented in Varela et al. (2014, 2015); Ruponen et al. (2015, 2017); Trincas et al. (2017); and Braidotti and Mauro (2019). Such solutions have already been installed on new passenger ships to provide better operational information on damage stability, and a time perspective of the evolution of the stability for enhanced decision support. An alternative to an onboard stability computer is to utilize a shore-based support center, IMO (2016). Further recommendations have been outlined in IACS (2016). Some practical aspects and applied tools are described in Peiris et al. (2015), noting that it is important to establish the condition of the ship immediately before the casualty. Therefore, swift communication between the shore-based support system and the onboard loading computer, and a possible decision support system, is essential.
The importance of rapid assessment of the situation, and especially the communication to the passengers, was emphasized by Ockerby (2001). According to IMO (2016): "the shore-based support should be operational within one hour (i.e. with the ability to input details of the condition of the ship)". In practice, this is likely initiated much faster. However, the response time from the shore-based support will inevitably cause some delay in getting the first damage stability results. Therefore, the use of an onboard system for rapid assessment can be considered the preferred option. However, if the situation is prolonged, the stability experts in the shore-based support may be able to give valuable assistance, e.g., related to possible counter actions.
Static damage stability analyses
The loading computers, including Type 4 (see Table 2), are based on static stability assessment. Most of the large passenger ships in operation are equipped with software, where the user has a possibility for manual definition of damaged rooms and compartments. In the stability calculations, these rooms are treated as lost buoyancy, Ruponen et al. (2018). Such a system is utilizing a 3-D model of the ship, and it can calculate the final equilibrium after flooding. In addition, there is usually a possibility to calculate a few artificial intermediate flooding stages.
The damage stability calculations are based on the current loading condition, and, for example, the tank filling levels are obtained from the automation system. However, since arbitrary damage cases can be defined, most commonly used systems differ from the direct damage analysis (IACS Type 3) loading computers that are by definition limited to rule-based deterministic damage cases (e.g., SOLAS 74/90). In principle, Type 3 software is mainly suitable for checking the compliance with the relevant damage stability regulations before sailing, especially for tankers, where MARPOL compliance needs to be confirmed for the actual loading condition. However, for passenger ships, the regulatory compliance can also be achieved by using the GM (metacentric height) limiting curves with IACS Type 2 software.
A real damage to the ship is naturally deterministic, having an exact size, shape, and location. The actual case is always different from any case that was included in the regulatory damage stability calculations (e.g., one or two compartment damages). In principle, this fact rules out systems that are based on pre-calculated damage scenarios. Especially, for passenger ships, the number of scenarios would be infinite because of the effects of the internal structures. Consequently, it is important that the calculations are based on the real, current loading condition, as emphasized in the guidelines (MSC Circulars 1400, 1532, and 1589).
The results of a damage stability calculation are traditionally presented in the form of a righting lever (GZ) curve. In addition, deterministic stability criteria for the characteristics of this curve are applied, as illustrated in Fig. 1. Based on the GZ curve, and some knowledge of the ship, an experienced master (onboard) or a naval architect (shore-based support) can estimate the severity of the flooding case. However, this data still needs to be combined with the information on the prevailing weather and geographic conditions when making the decision to either evacuate and abandon the ship, or to proceed to the nearest port. Furthermore, the time to reach the equilibrium cannot be estimated. It may also be difficult to judge how the situation will evolve, for example, due to progressive flooding.
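As an illustration of how such deterministic criteria are evaluated from a GZ curve, the snippet below extracts the maximum righting lever, the range of positive stability, and the area under the positive part of the curve, which are typical quantities reported by a loading computer. The curve itself is a synthetic example, not an output for the studied ship.

```python
import numpy as np

# Synthetic righting-lever curve GZ(phi) for a damaged condition (illustrative only)
phi = np.linspace(0.0, 60.0, 121)                    # heel angle, deg
gz = 0.35 * np.sin(3.0 * np.radians(phi)) - 0.02     # righting lever, m

gz_max = gz.max()
phi_at_max = phi[gz.argmax()]

positive = np.where(gz > 0.0)[0]                     # indices with positive GZ
range_pos = phi[positive[-1]] - phi[positive[0]] if positive.size else 0.0

# Area under the positive part of the curve (m*rad), trapezoidal rule
gz_pos = np.clip(gz, 0.0, None)
phi_rad = np.radians(phi)
area = float(np.sum(0.5 * (gz_pos[1:] + gz_pos[:-1]) * np.diff(phi_rad)))

print(f"GZ_max = {gz_max:.3f} m at {phi_at_max:.1f} deg")
print(f"Range of positive stability = {range_pos:.1f} deg")
print(f"Area under positive GZ = {area:.3f} m*rad")
```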
The regulatory texts, IMO (2011, 2016, 2018), contain very detailed specifications for the required output. It appears that these specifications are based on the damage stability calculations and analyses for design and approval, according to the relevant SOLAS editions. With the introduction of probabilistic damage stability analyses, special attention was paid to the immersion of escape routes and the subsequent nullification of the survivability in such a case. However, damage stability analyses in a real casualty differ significantly from the design stage calculations. For example, the immersion angle of an escape route provides no relevant information for decision making onboard a damaged ship. Instead, the predicted development of progressive flooding and stability, along with the resulting estimate of the available time for evacuation, can be considered much more relevant information for decision support.

Fig. 1 Example of typical damage stability output from a Type 4 loading computer: righting lever curve and various stability criteria
Time-domain damage stability prediction
An advanced approach to decision support is to use time-domain flooding simulation, as presented in Varela et al. (2014); Ruponen et al. (2015, 2017); and Braidotti and Mauro (2019). In general, the process of such a DSS contains three elements, Ruponen et al. (2015), and the detailed process of this DSS is illustrated in Fig. 2. When flooding is detected by the sensors, first the breach size and location are assessed automatically, as presented in Ruponen et al. (2017). Constant volumes of floodwater are used for the calculation of the GZ curve, Ruponen et al. (2018). Finally, based on the estimated breaches, progressive flooding and quasi-static ship motions for the next 3-h period are calculated in the time domain, and the Vessel TRIAGE color code is evaluated based on the prediction results.
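A minimal sketch of how prediction results could be mapped to a Vessel TRIAGE color code; the thresholds and the decision logic below are illustrative assumptions made for this example, not the actual rule set of the cited DSS.

```python
def vessel_triage(max_heel_deg, capsize_predicted, flooding_progressing):
    """Map simplified prediction results to a Vessel TRIAGE color code.
    All thresholds are illustrative assumptions, not published criteria."""
    if capsize_predicted:
        return "RED"      # loss of the ship predicted within the prediction horizon
    if max_heel_deg > 15.0:
        return "RED"      # heeling likely prevents safe launching of lifeboats
    if flooding_progressing or max_heel_deg > 5.0:
        return "YELLOW"   # currently safe, but the situation may get worse
    return "GREEN"        # stable equilibrium with small heel predicted

# Example: slow but extensive progressive flooding with a small heel angle
print(vessel_triage(max_heel_deg=4.0, capsize_predicted=False,
                    flooding_progressing=True))      # -> YELLOW
```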
A key feature of the DSS is that the results are constantly updated, using the latest measurement data from the automation system. Consequently, also progressive flooding through unknown openings can be detected and accounted for in the subsequent predictions, as demonstrated in Ruponen et al. (2017).
Overview of real accidents
The evolution of the scenario, including flooding progression and ship motions, can have a significant impact on the evacuation and abandonment of the damaged ship. The actual available time frame, from the time of the accident to the point where orderly evacuation and abandonment is no longer possible, is listed in Table 3 for some notable passenger ship flooding accidents. The data is based on the accident investigation reports and publications. For pure passenger ships, the available time frame may be over 10 h, allowing for detailed assessment of possible counter actions with the help from the shore-based support. The grounding of the Sally Albatross in 1994, MoJF (1996), is a good example of such actions. The stability experts ashore concluded that the stability was critical, and the ship was safely towed to shallow water in order to prevent sinking or capsizing. Despite the very extensive damage to the ship, everyone was safely evacuated, and eventually also the ship was re-floated and repaired.
On the other hand, if the time frame is very narrow, as in the case of the Costa Concordia, there may not be time to activate the shore-based support, and swift actions need to be taken by the crew, using the loading computer and an onboard decision support system for damage stability analyses.
In the European Gateway accident, the flooding of the main vehicle deck and transient flooding resulted in a very rapid loss of stability, Spouge (1986), leaving very little time for orderly evacuation and abandonment. In such a situation, even an advanced decision support system is of little use.
Damage scenario
The implications of alternatives for decision support are demonstrated with a 125,000 gross tonnage passenger ship design, Kujanpää and Routi (2009). The first studied damage scenario is a long and narrow raking breach near the waterline. In real life, this could be caused by ice or side grounding. The breach extends over seven watertight (WT) compartments, including both main engine rooms, Fig. 3. The ship will eventually capsize, but the flooding takes several hours. The reference results for the progression of flooding and ship motions are calculated in calm water with time accurate simulation, Ruponen (2014). The applied time step is short (1.0 s) in order to minimize the numerical error. The reference simulation results were also used to generate the measurement signals for the level sensors. Here, a sampling frequency of 0.25 Hz was assumed. The applied methodology is described in Ruponen et al. (2017).
The ship is equipped with level sensors for flooding detection. All of these sensors are considered to be fully operational, thus providing the onboard system with up-to-date information on the progression of flooding. The floodwater does not immediately reach the sensors in all damaged compartments, but about 10 min after the damage, flooding is detected in all breached WT compartments.
Flooding prediction results
The applied method for assessment of the breach size and location and analysis of the progression of flooding are described in detail in Ruponen et al. (2017). Examples of the results from the time-domain flooding prediction are presented in Fig. 4. Predictions are calculated for the next 3 h with a constant time step of 30 s. The implicit time integration of the applied pressure-correction method ensures numerical stability, Ruponen (2007). However, this also means that the results are not as accurate as with a short time step. The first prediction is started immediately after flooding has been detected and the breach size and location are assessed. The results of the first prediction indicate a transient heeling that is soon equalized to a steady equilibrium. A couple of minutes later, flooding has been detected in new WT compartments, and the second prediction reveals that the transient heel angle is much larger and progressive flooding continues during the 3-h prediction. Thereafter, predictions are frequently updated, and the results show slow but extensive progressive flooding with a small heel angle, see, e.g., the 40th prediction in Fig. 4. The development of the heel angle is predicted qualitatively rather well, but with a long time step, the prediction onboard cannot capture the details of the flooding progression. However, the achieved accuracy is considered to be more than sufficient for decision support purposes.
About 160 min (2 h 40 min) after the damage, the 52nd prediction reveals that the heel angle will eventually start to increase notably, with an obvious risk of capsizing. However, the heel angle is predicted to remain under 5° for about 2 h, and consequently, there should be sufficient time for orderly evacuation and abandonment. Eventually, about 285 min (4 h 45 min) after the damage, the 93rd prediction indicates that the ship will capsize within the next 3 h. The predicted time-to-capsize is somewhat faster than in reality, but the qualitative behavior is correctly captured, also with the subsequent updates, e.g., the 135th prediction.
Not all flooded rooms are equipped with a level sensor, and therefore, the results of the previous flooding prediction must be used to get a reasonable estimate of the amount of floodwater for the initial condition of the subsequent prediction. Consequently, in this particular damage case, the inaccuracies in the initial volumes of floodwater result in a small "peak" in the heel angle at the beginning of most predictions (Fig. 4).
The key components of a decision support system with time-domain prediction of progressive flooding are visualized in Fig. 5. The main output is the Vessel TRIAGE color code and extent of flooding. For detailed analysis, a time line of critical events and prediction of heel angle can be viewed.
Loading computer results
A Type 4 loading computer indicates the detected flooding, and the user can also manually indicate additional damaged compartments. The final equilibrium condition is calculated by considering the damaged compartments as lost buoyancy. In addition, typically five intermediate stages of flooding are calculated. It should be emphasized that these artificial stages do not reflect the actual progress of flooding.
In the studied damage scenario, the ship capsizes during the intermediate flooding, and the last stable floating position for the third stage is shown in Fig. 6. In any case, the loading computer can only calculate the final condition and a number of intermediate stages, but the time line of events cannot be evaluated.
Case study B: collision damage
The second studied damage scenario is a two-compartment collision damage in the aft part of the same large passenger ship, Fig. 7. The starboard side electric motor room is penetrated, causing asymmetric flooding due to longitudinal WT bulkhead that separates the intact PS motor room. The breach in the aft damaged compartment is very small, and it takes about 3 min before the water level reaches the sensor. Therefore, the first prediction is started assuming only flooding of the forward damaged WT compartment, Fig. 8. A couple of minutes later, all breaches are correctly detected, and the second prediction provides qualitatively good results. The third prediction provides very accurate results also for the final steady heel angle. For this kind of damage scenario with a stable final equilibrium position, a Type 4 loading computer could also provide very useful results. However, a time-domain analysis gives more detailed information, including the time-to-flood and stability during the flooding process. Moreover, floodwater can accumulate due to heeling during the flooding process, and hence there may be some difference in the final equilibrium condition if the accumulated water is trapped in the compartments when heeling equalizes. This phenomenon explains the small difference in the final steady heel angle between the second and third prediction in Fig. 8.
Analysis of the case study results
In case A, both the static analysis with a Type 4 loading computer and the time-domain flooding prediction indicate that the situation is extremely serious, and eventually the ship will sink or capsize. For an experienced master, this would be obvious already based on the extent of flooding. However, the major benefit of the time-domain flooding prediction is the estimate of the time-to-sink. In this damage scenario, there is plenty of time for orderly evacuation and abandonment. Furthermore, it is possible to wait for assistance from nearby ships. The flooding is very slow, and therefore, active counteractions, such as pumping, could be used to further increase the available time. In case A, flooding and the eventual capsize of the ship take nearly 9 h. However, the results obtained from the static loading computer give an impression of a more severe situation, simply indicating the extent of damage and the loss of stability due to flooding. The lack of information on the time line of the flooding process may lead to rushed evacuation actions and panic. However, it is important to note that in some other damage scenario, the situation may evolve at a faster pace, so that swift actions are necessary immediately when flooding is detected. In all cases, the immediate results from the time-domain simulation are considered to be very valuable.
For a damage case, where a stable equilibrium position is reached, such as the case B, the differences between the alternative decision support tools are less obvious. In practice, both a Type 4 loading computer and a time-domain flooding prediction tool will give the same final condition. However, the additional information on realistic flooding progression can be very useful, for example, in planning of active counter measures, such as pumping.
Discussion
The IMO Circ. 1532, IMO (2016), states that the "shore-based support should be operational within one hour". In practice, gathering the information on the situation may take a substantial amount of time. After this, with full awareness of the situation, the shore-based support will be able to provide results on the evolution of the situation and possible recommended actions. In a serious damage case, this means that the decision for evacuation and abandonment may be critically delayed. Considering this aspect, an onboard decision support system, including automatically launched time-domain prediction of progressive flooding, would appear to be a very useful supplement to the loading computer and shore-based support.
An essential aspect of onboard stability computers is the statutory approval. In practice, this is conducted by the classification societies, and for example, DNV GL (2018) defines an additional class notation LCS (DC) "loading computer system - damage control" for static damage stability onboard calculation. This definition exceeds the IMO Circular 1532 requirements. In the future, it should be discussed whether time-domain prediction-based systems could also be checked and approved by the classification societies.
Automatic breach assessment, based on flood level sensor data, combined with time-domain prediction of progressive flooding can provide valuable information to both the crew of the damaged ship and a possible shore-based support center. However, it is crucial to acknowledge that all decision support systems are always based on the available data, and consequently, inaccuracies in the 3-D model of the ship or a broken sensor may have a significant effect on the results. Therefore, all output from any decision support system should be critically reviewed, accounting for all available information, including visual observations.
Conclusions
Considering the pace of evolvement of some flooding accidents, such as the Costa Concordia case (MIT 2013), it is of utmost importance that there is a system onboard the ship capable of giving an immediate alert, as well as a rapid view of the severity and progress of the scenario. A loading computer-based system provides only an estimation of the situation at the end of the flooding process, and the evaluation of the severity may require expert-level interpretation of the results. This kind of system is also suitable for training and drills, as it provides the user with an understanding of the extent and type of damages the ship eventually can or cannot survive.
The available floodwater level sensor data and the time-domain prediction of the flooding scenario can be utilized in the decision-making process through a novel decision support system. Getting a time line view of the damage scenario is considered very valuable in the distress situation. The assessment of the severity of the flooding accident can be based on the evolvement of the events, which can be easily communicated to all stakeholders, according to the Vessel TRIAGE concept. In order to keep the loading computer functional for its primary purpose of planning and checking the loading condition for rule compliance, the time-domain prediction should run as a separate, dedicated decision support system. This separate system can also be complemented with other safety-related functions, like vulnerability monitoring, without causing problems in the class approval of the loading computer.
Although there is inevitably some delay in the response from a shore-based support team, the expert assistance to the crew of the ship can be very valuable. For example, detailed assessment of alternative scenarios and possible counter actions can be done by the support team, and recommendations on best actions can be provided to the crew. Consequently, the onboard and shore-based decision support alternatives are in fact complementary.
In order to increase maritime safety, all passenger ships should be equipped with a loading computer that is capable of performing damage stability analysis onboard. In addition to this, shore-based support should be provided for increased safety and redundancy. For new ships, with properly located functional level sensors, a decision support system with time-domain flooding prediction would provide valuable additional information. Whatever alternative for damage stability assessment is selected, it is important that the crew is familiar with the system, especially regarding the limitations and applied assumptions. Consequently, frequent use of the system during the emergency drills is highly encouraged.
The development prospects include linking onboard and shore-based support tools through dedicated interfaces. More effort is also needed on the reliable quantification of the survivability level as a reliable measure for color coding according to the Vessel TRIAGE system. However, also the presently available tools for time-domain assessment of progressive flooding and damage stability are considered to provide very useful information for decision support in a distress situation.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"Computer Science"
] |
STABLE REVERSE BIAS OR INTEGRATED BYPASS DIODE IN HIP-MWT+ SOLAR CELLS BASED ON DIFFERENT INDUSTRIAL REAR PASSIVATION
The Metal Wrap Through+ (HIP-MWT+) solar cell is based on the PERC concept but features two additional electrical contacts, namely the Schottky contact between the p-type Si bulk and the Ag n-contact, and the metal-insulator-semiconductor (MIS) contact on the rear side of the cell below the n-contact pads. To prevent hotspots under reverse bias, both contacts shall either restrict the current flow or allow a homogeneous current flow at low voltage. In this work we present both options: first, stable reverse bias characteristics up to -15 V with a MIS contact using an industrially manufactured SiON passivation, and second, an integrated bypass diode using AlOX as insulator in the passivation stack, allowing current flow at approximately Vrev = -3.5 V depending on the chosen screen-print paste. The examined Schottky contacts break down at around Vrev = -2.5 V. Reverse bias testing of the cells proves a solid performance under reverse bias, with average conversion efficiencies of η = 21.2 % (AlOX) and η = 20.7 % (SiON), respectively.
INTRODUCTION
The passivated emitter and rear cell (PERC) [1] technology is now the standard technology in industrial production [2]. An advanced version of the PERC concept is the High-Performance Metal Wrap Through+ (HIP-MWT+) cell, where only one additional process step (drilling of vias) is required to create a PERC with both p- and n-contacts located on the rear side of the cell [3]. This back-contact configuration results in several intuitive benefits such as rear side module interconnection and reduced front side shading.
However, wrapping the metal through the Si bulk and placing the external n-contact on the rear also introduces new electrical contacts, namely the Schottky contact between the p-type Si bulk and the Ag n-contact in the vias, and the metal-insulator-semiconductor (MIS) contact on the rear side of the cell below the n-contact pads. A schematic cross-section of the cell and the resulting electrical contacts are illustrated in Figure 1.
These additional contacts offer the following customization potential. If the contacts are designed for electrical insulation, a cell with a stable reverse bias characteristic is created, comparable to a regular PERC. This means that reverse current flow under partial shading in the module is restrained and thus hotspots are prevented. A standard module integration of such cells should then pass the IEC certification.
On the other hand, one can also exploit the contacts by designing dielectric layers which insulate under forward bias but allow a current flow through the cell under reverse bias across a large area. This way, an integrated bypass diode is built into the cell with little additional production effort. This means that the module assembly is significantly simplified, since no parallel bypass diodes are required in the module, and a single shaded cell does not lead to an outage of a whole cell string, which would reduce the power output of a typical module by 33 percent.
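A rough back-of-the-envelope illustration of this benefit, assuming a hypothetical 60-cell module wired as three strings of 20 cells, each string normally protected by one external bypass diode: with conventional diodes a single shaded cell removes the whole string, whereas with a cell-integrated bypass only the shaded cell itself is lost.

```python
def module_power_fraction(n_cells=60, n_strings=3, shaded_cells=1, integrated_bypass=False):
    """Crude estimate of the remaining relative module power when cells are fully shaded.
    Assumes identical cells, at most one shaded cell per string, and that a bypassed
    element contributes no power."""
    cells_per_string = n_cells // n_strings
    lost = shaded_cells if integrated_bypass else shaded_cells * cells_per_string
    return max(0.0, (n_cells - lost) / n_cells)

print(f"External bypass diodes : {module_power_fraction(integrated_bypass=False):.0%}")  # ~67 %
print(f"Integrated bypass diode: {module_power_fraction(integrated_bypass=True):.0%}")   # ~98 %
```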
To find suitable passivation stacks for the insulating and conducting properties in the MIS contacts, different passivation principles can be compared. First, a chemical passivation using SiO2 appears suitable to create an insulating passivation stack. Second, a passivation based on the electrical field effect using AlOX seems appropriate to function as a rectifying contact [4].
EXPERIMENTAL
For the experimental realization of the different contacts, two solar cell precursor types are used. The precursors are sourced from different industrial partners, after the front and rear surface passivation has been applied. Both precursor types are made from p-type Cz-Si, and all passivation stacks are deposited using plasma-enhanced chemical vapor deposition (PECVD). The first stack, relying on chemical passivation, has a total thickness of d ≈ 160 nm and consists of a hydrogenated silicon oxynitride (SiON) stack (d ≈ 75 nm) capped with SiNX (d ≈ 85 nm) [5]. The second stack, relying on the electrical field effect passivation, has a total thickness of d ≈ 150 nm and consists of an AlOX stack (d < 20 nm) and again a SiNX capping (d ≈ 130 nm) [6]. The metal side in the Schottky and MIS contacts is created by a screen-printing process. While formally referred to as Ag, it is not a pure metal due to the additives in the screen-printing paste. As the screen-printing paste contributes to the electric contacts, two screen-printing pastes with slightly different compositions were applied. The subsequent manufacturing of test structures and solar cells was carried out using industrial production equipment located at the PV-TEC laboratory of Fraunhofer ISE [7]. Based on the described precursors, two types of dedicated test structures were manufactured. First, a set of test structures to reveal the individual IV-characteristics of the MIS and Schottky contacts, which show the reverse bias breakdown of the two contacts for the different precursors and passivation stacks. For this purpose, the different precursor types were metallized via screen printing. While the front side (no anti-reflection coating) was fully metallized with an Al paste to create a well-conducting contact, the rear side was metallized with circular Ag pads with diameters between ⌀ = 0.5 mm and ⌀ = 2.5 mm. To create test samples with MIS as well as Schottky contacts, 50 % of the test samples had the rear side passivation stack removed prior to metallization. After metallization, all samples were submitted to a contact firing process at Tpeak = 850 °C. The samples were measured using an industrial 4-point-probe IV setup. Two measurement pins were contacted on the front and two on the rear side; thus the IV-characteristics of the contacts were determined. The front side contact was proven to be ohmic, hence not influencing the measured rectifying behavior of the Schottky and MIS contacts. For the measurements, voltage sweeps from -15 V to +15 V were conducted with a step size of 10 mV, a delay of 50 ms, and a current compliance of 1000 mA. Additionally, ten consecutive voltage sweeps from 10 V to -10 V were performed (with interjacent cool-down phases of 5 s) to show the IV-characteristics of the contacts over several reverse bias cycles.
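The sweep procedure can be summarized in the Python-like sketch below; `smu` stands for a generic source-measure-unit driver object whose methods are hypothetical and do not represent the API of the actual industrial 4-point-probe setup.

```python
import time

def iv_sweep(smu, v_start=-15.0, v_stop=15.0, step=0.010, delay=0.050, compliance=1.0):
    """Voltage sweep with 10 mV steps, 50 ms settling delay and 1 A current compliance."""
    smu.set_compliance(compliance)                       # hypothetical driver call, A
    n_steps = int(round((v_stop - v_start) / step)) + 1
    data = []
    for i in range(n_steps):
        v = v_start + i * step
        smu.set_voltage(v)                               # hypothetical driver call
        time.sleep(delay)                                # settling delay
        data.append((v, smu.measure_current()))          # hypothetical driver call
    return data

def repeated_reverse_sweeps(smu, cycles=10, cooldown=5.0):
    """Ten consecutive sweeps from +10 V to -10 V with 5 s cool-down phases,
    used to follow the stabilization of the breakdown voltage over repeated biasing."""
    curves = []
    for _ in range(cycles):
        curves.append(iv_sweep(smu, v_start=10.0, v_stop=-10.0, step=-0.010))
        time.sleep(cooldown)
    return curves
```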
Second, a set of test structures for lock-in thermography (LIT) imaging of the spatial breakdown was processed. These samples ran through the same process steps as the regular HIP-MWT+ cells (see paragraphs below). However, to further investigate the contact characteristics using LIT, the rear side metallization layout and the underlying local contact openings (LCOs) were adapted. Instead of the regular rear side n-pads, a pad geometry variation was carried out with circular pads ranging from ⌀ = 2 mm to ⌀ = 20 mm, each with one via in the center of the pad. After screen-printed metallization, the samples were contact fired at Tpeak = 850 °C and measured in a commercial solar cell analysis system (LOANA, pv-tools [8]). Each sample was measured seven times with set voltages from -1 V to -10 V, a LI frequency of 18.75 Hz, and a measurement duration of up to 300 s for low voltages.
Finally, HIP-MWT+ cells with a 6 pseudo-busbar layout in M2 format were manufactured on the industrial precursors with the named industrial process tools at Fraunhofer ISE. The HIP-MWT+ cells were manufactured in five process steps.
In the first process step, the precursors were processed in an industrial laser system (ILS500X, Innolas Systems [9]). Here, the HIP-MWT+ cells receive LCOs (⌀ = 40 µm, pitch of 500 µm) and 48 vias (⌀rear = 148 µm, ⌀front = 100 µm) across the whole cell. In the following rear side metallization process, the via filling is achieved by a controlled suction process of the vacuum chuck [10]. The subsequent screen printing of the Al contact and the front side grid was realized in parallel to standard PERC processes using an automated screen printer (XH2, ASYS TECTON GmbH). After contact firing, the cell performance was measured in a commercial solar cell analysis system (customized, h.a.l.m. elektronik GmbH [11]). Additionally, the manufactured cells were used for reverse bias cycling to demonstrate the long-term stability of the cells and the electrical contacts for 20 consecutive cycles at -15 V (forward bias at standard testing conditions for 100 ms and reverse bias for 40 ms with a compliance current of Irmax = -0.5 A).
IV-Characteristics of Schottky and MIS Contact
The Schottky contact on the AlOX based precursor (AlOX removed) breaks down at initial reverse bias at approximately Vrev = -1.5 V to Vrev = -2.5 V, depending on the applied screen-printing paste, as shown in Figure 2. The Schottky contact on the SiON based precursor (SiON removed) breaks down at approximately Vrev = -1 V [4]. The different breakdown voltages of the Schottky contacts, both consisting of p-type Si and Ag (with paste additives), are related to the varying bulk doping of the precursor material. While the SiON based precursor has a lower bulk doping of N ≈ 7.3×10^15 atoms/cm^3, the AlOX based precursor has a significantly higher bulk doping of N ≈ 1.7×10^16 atoms/cm^3, resulting in a reduced width of the Schottky barrier and thus a lower breakdown voltage.
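The doping dependence can be made plausible with the textbook depletion-width expression for an abrupt Schottky junction, W = sqrt(2*eps_Si*(V_bi + V_rev)/(q*N)). The sketch below compares the two bulk doping levels quoted above; the built-in voltage is an assumed, typical value, so the absolute numbers are only indicative.

```python
import math

Q = 1.602e-19              # elementary charge, C
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon, F/m

def depletion_width_nm(n_dop_cm3, v_rev, v_bi=0.7):
    """Depletion width of an abrupt Schottky junction in nm.
    n_dop_cm3: bulk doping in atoms/cm^3, v_rev: reverse bias magnitude in V,
    v_bi: assumed built-in voltage in V (typical value, not measured here)."""
    n = n_dop_cm3 * 1e6    # convert to 1/m^3
    return math.sqrt(2.0 * EPS_SI * (v_bi + v_rev) / (Q * n)) * 1e9

for label, n in (("SiON based precursor", 7.3e15), ("AlOX based precursor", 1.7e16)):
    print(f"{label}: W = {depletion_width_nm(n, v_rev=1.0):.0f} nm at 1 V reverse bias")
```

The higher-doped AlOX based precursor gives a markedly narrower depletion region, consistent with the lower breakdown voltage observed for its Schottky contact.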
Over several reverse bias cycles, the breakdown voltage decreases for all measured samples, independent of the precursor and the utilized screen-printing paste. The effect is strongest between the first and second bias, with the most significant right shift of the IV-characteristic. For all following reverse bias cycles, the effect is incrementally decreasing, meaning that the contact tends to stabilize over several cycles. The enhanced contact formation can be explained by the percolation model, which has been reported for electrical contacts before [12].
The same effect is observed even more significantly for the MIS contact using the AlOX passivation as insulator, as shown in Figure 2. Furthermore, the choice of the screen-print paste seems to have a major impact on the IV-characteristics of the MIS and Schottky contacts. While the contacts using paste 1 show a constantly increasing reverse current towards higher voltages, the IV-characteristic of the contact using paste 2 shows a more distinct knee in the curve during the voltage sweep at negative bias. This means that the reverse current increase is more sudden and thus the breakdown is more abrupt, especially for the first bias. We suggest that the different breakdown behaviors originate from different paste ingredients. The abrupt breakdown of the contacts using paste 2 is closer to the IV-characteristic of an ideal diode and suggests that the present contact is more homogeneous than the contacts using paste 1 with the creeping breakdown. The paste ingredient responsible for the inhomogeneous contact formation is the glass frit [13], which has a weight content of approximately 5-10 wt% in paste 1 and approximately 0-1 wt% in paste 2.
The MIS contact using the SiON passivation does not show any breakdown in the range from -15 V to +15 V, independent of the chosen screen-print paste. The described chemical passivation is thus considered stable under reverse bias in a regular solar cell assembly.
Lock-in Thermography Imaging Results
The lock-in thermography (LIT) images of the dedicated test structures confirm the findings of the IV-characteristics for the Schottky contact and MIS contact on cell level. As shown in Figure 3, the SiON based precursor only allows a small current flow under reverse bias (-3 V to -10 V) at the vias, where the Schottky contact is located. The MIS contact underneath the n-pads completely restricts the current flow.
The LIT images for the AlOX based precursor show a different behavior. At a reverse bias voltage of Vrev = -3 V, a current flow is observed essentially at the vias, as shown in Figure 3. At a voltage of Vrev = -5 V, additional currents flow from the n-pads to the p-type Si bulk, meaning that the breakdown voltage of the MIS contact is exceeded. At V = -10 V, the currents are spread more homogeneously across the whole MIS contact over the whole test wafer.
The LIT image at V = -10 V also reveals a higher current flow at the n-pad edges (ring structure). The increased current flow at the edges is correlated with an increased pad height. We suggest that the increased paste application at the edges leads to a higher glass frit concentration, resulting in enhanced contact formation. Therefore, the mass of paste applied during screen printing seems suitable, among other parameters, to manipulate the breakdown voltage of the MIS contact.
Cell Results and Degradation by Reverse Bias Cycling
The cell results of the manufactured HIP-MWT+ cells suffer from technical issues during the screen-printing process, which caused narrowing of the fingers and thus an increased series resistance. However, the cells still showed a solid performance with an average conversion efficiency of η = 21.2 % for the AlOX based precursor and η = 20.7 % for the SiON based precursor (20 cells per group). The deviation in the conversion efficiency is related to several precursor parameters such as base resistivity (AlOX based precursor 0.94 ± 0.03 Ω cm and SiON based precursor 1.97 ± 0.14 Ω cm) and passivation. The open-circuit voltage (Voc) and short-circuit current (jsc) reflect the differences in the precursor parameters and can be obtained from Table I.
In terms of long-term stability, the HIP-MWT+ cells show a solid performance for the first 20 cycles. While the cells with the SiON passivation stack are not affected by the reverse bias cycling, the cells using the AlOX passivation stack show a slight decrease in conversion efficiency of Δηabs = 0.2 %, reducing the average conversion efficiency from ηabs = 21.2 % to ηabs = 21.0 %. Furthermore, the HIP-MWT+ cells using the AlOX passivation show acceptable heat dissipation under reverse bias, with a power dissipation < 40 W at Vrev = -10 V. This is still significantly below the 80 W limit that a module encapsulation is able to cope with without being damaged [14]. This leads to the conclusion that the HIP-MWT+ cells using the SiON passivation stack, as well as the ones using the AlOX passivation stack, are suited for module assembly.
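As a quick plausibility check, the power dissipated in a reverse-biased cell is roughly P = |Vrev| · Irev. The snippet below uses an assumed reverse current of 4 A, which is not a value measured in this work, only to show how the 80 W encapsulation limit would be checked.

```python
# Simple reverse-bias power check; the 4 A reverse current is an assumed
# placeholder, only the 80 W encapsulation limit is taken from the text [14].
def reverse_power(v_rev, i_rev):
    return abs(v_rev) * i_rev

MODULE_LIMIT_W = 80.0
p = reverse_power(v_rev=-10.0, i_rev=4.0)
print(f"P = {p:.0f} W, within encapsulation limit: {p < MODULE_LIMIT_W}")
```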
CONCLUSION
The IV measurements of the Schottky contacts show a breakdown in the test structures under reverse bias at voltages between Vrev = -1.0 V and Vrev = -2.5 V, depending on several factors such as the Si bulk doping (precursor material) and the screen-printing paste. Furthermore, a decrease in the breakdown voltages is observed when several reverse biases are applied. This percolation effect is most significant for the first reverse bias and gradually decreases with each following applied voltage. The MIS contact using the AlOX passivation breaks in a similar manner as the Schottky contacts at slightly higher reverse voltages, while the MIS contact using the SiON passivation shows no breakdown down to Vrev = -15 V.
The LIT images on cell level confirm the findings from the IV characteristics. The cells using the AlOX passivation allow a reverse current from the via metallization to the Si bulk at small voltages (Vrev ≈ -5 V). At higher voltages the current also flows through the MIS contact. The repeated reverse bias testing on cell level leads to marginal cell degradation, resulting in a decrease in conversion efficiency of approximately Δηabs = 0.2 % (from ηabs = 21.2 % to ηabs = 21.0 %). Cells using the SiON passivation stack only allow small non-hazardous currents from the via metallization to the Si bulk and no current through the MIS contact, even at higher voltages (Vrev = -10 V). The repeated reverse bias testing does not significantly affect the performance of these cells. However, further data is required to determine long-term effects on the MIS and Schottky contacts, which must therefore be the subject of future research activities.
Figure 1: Cross sections of the HIP-MWT+ cell, showing the location of the Schottky and MIS contacts and potential current flows under reverse bias (yellow arrows).
Figure 2: Repeated IV characteristics of quantitative test structures for the Schottky and MIS contact of the AlOX based precursor with two different screen-printing pastes.
Figure 3: LIT images of test structures at different voltages, showing the spatially resolved reverse current for the SiON and AlOX based precursors.
Table I: Average cell results for HIP-MWT+ cells. Per cell group, 20 cells were measured. | 3,694.6 | 2021-01-01T00:00:00.000 | [
"Engineering"
] |
Functional evolution of the Colony Stimulating Factor 1 Receptor (CSF1R) and its ligands in birds
Macrophage colony-stimulating factor (CSF1 or M-CSF) and interleukin 34 (IL34) are secreted cytokines that control macrophage survival and differentiation. Both act through the CSF1 receptor (CSF1R), a type III transmembrane receptor tyrosine kinase. The functions of CSF1R and both ligands are conserved in birds. We have analyzed protein-coding sequence divergence among avian species. The intracellular tyrosine kinase domain of CSF1R was highly conserved in bird species as in mammals, but the extracellular domain of avian CSF1R was more divergent, with multiple positively selected amino acids. Based upon crystal structures of the mammalian CSF1/IL34 receptor-ligand interfaces and structure-based alignments, we identified amino acids involved in avian receptor-ligand interactions. The contact amino acids in both CSF1 and CSF1R diverged among avian species. Ligand-binding domain swaps between chicken and zebra finch CSF1 confirmed the function of variants that confer species specificity on the interaction of CSF1 with CSF1R. Based upon genomic sequence analysis, we identified prevalent amino acid changes in the extracellular domain of CSF1R even within the chicken species that distinguished commercial broilers and layers and tropically adapted breeds. The rapid evolution in the extracellular domain of avian CSF1R suggests that at least in birds this ligand-receptor interaction is subjected to pathogen selection. We discuss this finding in the context of expression of CSF1R in antigen-sampling and antigen-presenting cells.
phenotypic consequences differ depending on genetic background and species but include osteopetrosis and postnatal growth retardation. 4,5 Conversely, administration of CSF1 to mice, rats, or pigs produces a monocytosis and expansion of tissue macrophage populations. [6][7][8] In humans, gain-of-function coding mutations in CSF1R have been associated with an autosomal-dominant human neurodegenerative disease, 9,10 while two recent studies describe recessive loss-of-function CSF1R mutations 11,12 that share skeletal abnormalities with the mouse and rat Csf1r knockouts. Variants at the CSF1 locus are strongly associated with Paget's disease. 13 Differences in the phenotype of Csf1r −/− mice compared to mice with a spontaneous Csf1 mutation (Csf1 op/op ) suggested the existence of a second CSF1R ligand, which was subsequently identified and named interleukin 34 (IL34). 14 Mutation of the Il34 locus in mice revealed a specific function in development of subsets of tissue macrophages in skin and brain, where the gene is most highly expressed. 15 The two CSF1R ligands appear functionally equivalent. IL34 expressed under the control of the CSF1 promoter rescues the Csf1 op/op phenotype. 16 The CSF1R system of two ligands binding to one receptor was shown to be conserved throughout vertebrates, including birds 17 and fish. 18 An intronic enhancer that controls CSF1R expression is also conserved from reptiles to mammals. 19 Recombinant CSF1 administered to chicks produced a massive expansion of blood and tissue macrophage populations. 20 Solution of the tertiary structures of mouse and human CSF1 revealed the characteristic four alpha helices with two beta sheets, a structure shared by a large family of cytokines. The 3D structures of human/mouse IL34 also highlighted four antiparallel alpha helices, but with two shorter beta sheets partially replaced with an additional three alpha helices. Subsequent studies revealed the distinctive structures of the complexes between CSF1, IL34, and the receptor. 21 Most immune proteins are subjected to an "arms race" between host and pathogen and experience a strong positive selective pressure. 24,25 With some caveats, 26 the nonsynonymous (amino acid altering) to synonymous substitution rate ratio (ω = dN/dS) provides a measure of natural selection at the protein level, where ω = 1, ω > 1, and ω < 1 indicate neutral evolution, positive selection, and purifying selection, respectively. 27 The average dN/dS ratio of annotated immune-associated genes is up to four times higher than the genome-wide average for protein-coding genes. 24,25 Previous analysis of limited datasets indicated that both CSF1 and CSF1R were subject to positive selection in birds, whereas IL34 was subject mostly to purifying selection. 17 Since the original characterization of the CSF1R system in chicken and zebra finch, 17 the Avian Phylogenomic Consortium 28 completed the draft genome sequences for 48 bird species, representing all extant clades, and many targeted projects since that time have further expanded the number of partial or complete genomes to >300 and the pool of predicted protein sequences for genes expressed in avian immune cells.
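To make the ω = dN/dS measure concrete, the sketch below implements a deliberately simplified, Nei-Gojobori-style pairwise estimate with a Jukes-Cantor correction; it is not the maximum-likelihood machinery normally used for selection scans. It requires Biopython, and the toy input sequences are hypothetical, not real CSF1R data.

```python
# Simplified Nei-Gojobori-style pairwise dN/dS sketch (requires Biopython).
# Assumes two aligned, in-frame, gap-free coding sequences of equal length.
from math import log
from Bio.Seq import Seq

BASES = "ACGT"

def translate(codon):
    return str(Seq(codon).translate())

def synonymous_sites(codon):
    """Expected number of synonymous sites in a codon (0..3)."""
    aa, total = translate(codon), 0.0
    for pos in range(3):
        syn = sum(1 for b in BASES if b != codon[pos]
                  and translate(codon[:pos] + b + codon[pos + 1:]) == aa)
        total += syn / 3.0
    return total

def codon_differences(c1, c2):
    """(synonymous, nonsynonymous) differences between two codons."""
    diffs = [i for i in range(3) if c1[i] != c2[i]]
    if len(diffs) != 1:
        return 0.0, 0.0  # multi-site codons are skipped in this sketch
    return (1.0, 0.0) if translate(c1) == translate(c2) else (0.0, 1.0)

def jukes_cantor(p):
    return -0.75 * log(1.0 - 4.0 * p / 3.0)

def omega(seq1, seq2):
    S = N = Sd = Nd = 0.0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        s = (synonymous_sites(c1) + synonymous_sites(c2)) / 2.0
        S, N = S + s, N + (3.0 - s)
        sd, nd = codon_differences(c1, c2)
        Sd, Nd = Sd + sd, Nd + nd
    dS, dN = jukes_cantor(Sd / S), jukes_cantor(Nd / N)
    return dN / dS

# Toy, hypothetical sequences (not real CSF1 or CSF1R data):
print(f"omega = {omega('ATGAAACTG', 'ATGAGACTT'):.2f}")  # >1 positive, <1 purifying
```

Codons that differ at more than one position are simply skipped here, whereas the full method averages over mutational pathways.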
The current study takes advantage of the multispecies genomic dataset to examine the contrasting evolutionary constraints on the CSF1R system in birds and mammals.
Sequence collection and multiple sequence alignment
Avian CSF1, IL34, and CSF1R protein and gene sequences were retrieved from the National Centre for Biotechnology Information (NCBI; http://ncbi.nlm.nih.gov) and completed avian genomes were analyzed by Avian Phylogenetic Consortium. 28 Accession numbers for all protein sequences are provided in Supplementary Table 4.
Phylogenetic analysis
An MSA for avian sequences was created using CLUSTALW and used as the input for phylogenetic tree construction. Full annotation of the whole-genome sequences and analysis of genetic diversity of these chicken populations will be published elsewhere.
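As a generic illustration of the neighbor-joining step referred to here and in the Results (the original analysis used CLUSTALW and a phylogeny package not named in this excerpt), a Biopython-based stand-in might look as follows; the alignment file name is hypothetical.

```python
# Stand-in neighbor-joining tree from a CLUSTAL-format alignment (Biopython).
# "avian_csf1r.aln" is a hypothetical file name, not from the original study.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("avian_csf1r.aln", "clustal")
distance_matrix = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining
Phylo.draw_ascii(tree)
```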
Assay of the biological activity of chicken and zebra finch CSF1 proteins using growth factor dependent cells
We have previously established a bioassay for chicken CSF1 by stably transfecting the interleukin 3 (IL3)-dependent BaF3 cell line with a chCSF1R expression plasmid. 17 The transfected BaF3 cells express chCSF1R on the cell surface 33
Sequence analysis of the CSF1 ligand-receptor system from birds and mammals
From available genomic DNA sequences and entries in NCBI GenBank, we were able to extract 68 CSF1R, 30 IL34, and 36 CSF1 predicted full-length protein sequences orthologous to the functional chicken proteins analyzed previously. 17 The relative paucity of avian CSF1 and IL34 sequences available reflects the difficulties in sequencing the respective genomic regions, in common with multiple other GC-rich regions, in all avian genomes. 29 In many cases, the sequences annotated as CSF1 or IL34 in NCBI as a predicted protein were truncated at the N terminus relative to full-length chicken and zebra finch orthologs. Multiple sequence alignments (MSAs) of each of the avian CSF1, IL34, and CSF1R protein-coding regions are provided in Supplementary Table 1A-C. In mammals, the CSF1 locus encodes multiple isoforms of the protein generated by alternative splicing. 3 The longest cDNA encodes a membrane-bound precursor that is cleaved from the cell surface by TNF-alpha converting enzyme (TACE, ADAM17) 34 to release the minimal bioactive CSF1 protein. In transgenic mice, this longer form of the ligand is required to fully complement a CSF1 mutation and restore postnatal growth. 35 Consistent with previous evidence of the production of longer forms of the ligand, the avian sequences also encode membrane-bound precursors. The short intracellular domain contains a membrane-proximal basic region that is conserved between mammals and birds. The remainder of the intracellular domain is also strongly conserved in birds. Similar membrane-proximal basic domains are found in many membrane-associated proteins including G protein-coupled receptors. The intracellular domain may function to promote membrane trafficking from the Golgi 36 or conceivably also produce a reverse signal to the CSF1-producing cell. 37 The intervening region between the bioactive peptide and the membrane is longer in mammals than in birds. In common with many proteolytic cleavage domains, the obvious conserved feature is repeated proline (P), glutamate (E), serine (S), and threonine (T) amino acids.
At the N terminus, we also noted that there was considerable ambiguity among predicted protein sequences in GenBank regarding the location of the start codon and the length of the leader sequence.
For the purpose of the current analysis, we have aligned the processed peptide containing the 160 amino acids that make up the minimal bioactive 4-helix bundle. 17 In the case of IL34, the predicted avian proteins are all around 180 amino acids, truncated at the C-terminus relative to predicted mammalian IL34 proteins (230-240 amino acids).
In mammals, some of the C-terminal amino acids were found to be engaged in binding to CSF1R 23 but in birds the 180 amino protein contains the biological activity. 17 As noted based upon comparison of chicken and zebra finch, 17 the avian CSF1 sequences all showed conservation of cysteines that provides a strong reference framework for the alignment (Supplementary Table 1A). These conserved avian residues are predicted to form three intrachain disulfide bonds coincident with the cysteines involved in disulfide bonds in CSF1 of mammals and fish. 17 In all of the avian CSF1 peptides, the cysteine responsible for the interchain disulfide bond in mammalian CSF1 is substituted with glycine (G29 in Supplementary Table 1A; position 63 in Fig. 1). Nevertheless, the chicken protein forms a dimer through predicted large hydrophobic interfaces. 17 Early studies indicated that the interchain disulfide in human CSF1 was absolutely required for dimerization and biological activity, but this does not appear to be the case. 38 Mutation of this cysteine (C31S, numbered in the mature CSF1 peptide without the leader sequence) did not compromise refolding or biological activity of recombinant human CSF1.
Based upon structural analysis, two amino acids (Q26 and M27) were predicted to make strong contributions to dimer formation. 38 These are conserved in all bird and mammalian CSF1 sequences (Q25/M26 in the active mature chicken sequence shown in Supplementary Table 1A; positions 58/59 in Fig. 1). Indeed, D23, which made strong electrostatic and nonpolar contributions to the dimer interface in the C31S mutant human protein, is also conserved between birds and mammals and in all birds (Supplementary Table 1C). A second shorter segment in CSF1 that contributed to the dimer interface, R66-N73 in human CSF1 (positions 98-107 in Fig. 1), is also conserved between mammals and birds and the core (FKENS) is identical in all bird species. A combined C31S/M27R mutation produced a monomeric CSF1 that acted as a CSF1R antagonist. The absence of cysteine in this location in the avian ligand suggests that the C31S mutation in the mammalian protein is unlikely to be necessary to achieve this outcome. Our earlier analysis of available CSF1 sequences indicated significant divergence among species and evidence of positive selection. 17 This conclusion was confirmed using the larger dataset. 30 Figure 2 shows a neighborjoining phylogenetic tree for the available sequences. This simple analysis reveals that the Galloanseriformes (chicken, turkey, guinea fowl, quail, and goose) clearly form a separate group.
Avian IL34, unlike CSF1, is subject to purifying selection. 17 Indeed, although CSF1 is highly divergent between birds and mammals, the core 145 amino acid chicken IL34 protein, excluding the leader sequence, is around 60% identical to the human protein and can be readily aligned (not shown). Despite this level of conservation, amino acid differences among mammalian species were associated with species-specific biological activity. 39 inactive on the chicken receptor. 17 The arginine substitution is present in all bird sequences. A neighbor joining phylogenetic tree was then generated using the same package.
a much larger assembly of avian species 41 (see the phylogenetic tree image from this study in the graphical abstract, reproduced with permission) and recapitulates analysis based upon the divergence of the conserved intronic enhancer in the CSF1R locus. 19 As in the case of CSF1, the Galloanserae form a divergent group.
The overall sequence identity between the most disparate CSF1R protein sequences (e.g., between chicken and zebra finch), around 75%, is similar to the conservation between the most divergent mammalian sequences (primates and rodents 39
Cross-species specificity of the CSF1 ligand in birds
The divergence also distinguishes chicken, quail, and turkey from duck and goose. The structure-based alignment of predicted contact residues in CSF1 reveals corresponding variation in Site 1 of the ligand, in particular multiple nonconservative substitutions between chicken T57 and E82, whereas Site 2 on CSF1 is conserved across all available avian sequences. The Site 1 interaction between chCSF1 and chCSF1R is predicted to involve a salt bridge between K73 in the ligand and E168 and E170 in the receptor (Table 2). This interaction is abolished in the zebra finch receptor (Q164, S166); these substitutions are shared by many bird species (Supplementary Table 1C). For the predicted CSF1-CSF1R interaction, the binding Sites 1 and 2 are based upon structure-based alignment of available human and mouse CSF1-CSF1R (D1-D3) and IL34-CSF1R (D1-D3) structures. Contact amino acids in CSF1 and CSF1R derived from the human structures are highlighted in gray, and asterisks indicate amino acids that differ between human and mouse. Where the corresponding amino acids diverge between zebra finch and chicken, they are set in bold.
Four constructs were expressed in HEK293T cells and supernatants containing recombinant CSF1 were tested. The supernatants from HEK293 cells transfected with zfCSF1 expression plasmid were able to promote survival of BaF3 cells expressing the chicken CSF1R to the same extent as supernatants from cells expressing chCSF1 (Fig. 5).
Both of the domain-swapped constructs zf_chCSF1 and ch_zfCSF1 were also active on the chCSF1R reporter cell line (Fig. 5A). The chicken and zebra finch CSF1R complexes were modeled based upon the human CSF1-CSF1R (D1-D3) structure as described in the Materials and Methods section; non-conserved amino acids are set in bold. In both cases, no cells survived in the absence of added growth factor (panels E and J). As shown in the images in panels A-D, chicken bone marrow cells produced a relatively confluent lawn of macrophages in response to all of the supernatants. Conversely, only zfCSF1 or zf_chCSF1 (chCSF1 with ZF Site 1) directed macrophage proliferation and differentiation from ZF marrow (panels G and H). Images are representative of three separate experiments.
Replacing zebra finch Site 1 with Site 1 of the chicken ligand (ch_zfCSF1) abolished the activity on zebra finch marrow. This observation confirms that the difference in cross-species reactivity between chicken and zebra finch CSF1 ligands can be attributed to the variation in receptor binding Site 1 (Table 1).
Polymorphism in the CSF1R, CSF1, and IL34 genes among selected chicken populations
Western commercial chickens have been subject to intensive selection of production traits: rapid growth and meat production or egg laying.
Selection has produced genomic signatures that can be detected as extended regions of homozygosity. 44 In mammals, mutations in CSF1 or CSF1R produce severe postnatal growth retardation, suggesting a link between macrophages and the growth hormone/IGF1 axis. 3,5 Indeed, the CSF1R gene on chromosome 13 lies within an interval containing signatures of selection. 44 A different selection pressure, including heat stress and disease, applies to indigenous chicken ecotypes selected for resilience and survival in tropical smallholder systems. 45 We predicted that genes such as CSF1 and CSF1R that diverge rapidly between species might also exhibit functional polymorphism within a species occupying many diverse environmental niches. We therefore explored genomic sequence variation among these chicken populations (Table 1); most of the identified positions vary to some extent between species. Position F125 is also leucine (L) in most other avian species; position N153 is serine (S) in two species of tit, starling, and ruff; and position 308 is threonine (T) in two manakin species (blue-crowned and golden-crowned) and glycine (G) in cuckoo roller (Supplementary Table 1C; Table 3). This amino acid is conserved in bird species but lies outside the binding site for the receptor. In the biologically active portion of CSF1, we identified the N87D variant discussed above at low allele frequency in the majority of populations and a small number of rare, potentially deleterious variants at low frequency in specific populations (Supplementary Table 4). None of the variants altered contact amino acids. One other variant detected in all Ethiopian populations, E99K, is also present in duck and goose, but not in quail or guinea fowl reference sequences.
DISCUSSION
zfCSF1 retained the ability to activate chCSF1R, whereas chCSF1 was inactive on zfCSF1R. Domain swap analysis confirmed that the amino acids K57-N82 within zfCSF1 (Site 1) that interact with domain D2 of CSF1R are both necessary and sufficient to enable activation of zebra finch BM cells. There are six amino acid differences between the two species in this short segment, all involving charged amino acids (Table 1). As discussed above, we suggest that the binding affinity of chicken CSF1 for chicken CSF1R depends upon charged amino acid interactions.
By contrast, there appear to be no predicted salt-bridge interactions in zebra finch CSF1 binding to its receptor, but two charged amino acid substitutions may permit the formation of salt bridges to the chicken receptor.
The analysis of many more IL34 sequences in birds (Supplementary Table 1B) was consistent with purifying selection; in mammals, IL34 also binds additional receptors, including syndecan-1. 47 The function of IL34 in birds has not been studied beyond the demonstration that the protein is active on the chicken CSF1R. 17 The most striking feature of our analysis, which clearly distinguishes birds from mammals, is the hypervariability of the CSF1/IL34 binding Site 1 in CSF1R. Why has selection in avian evolution apparently acted upon ligand binding to CSF1R? One major difference between birds and mammals lies in the expression of CSF1R. We developed monoclonal antibodies against CSF1R 33 to localize its expression; CSF1R-expressing cells in the lamina propria of the intestine control M cell differentiation. 51 In a second contrast with mammals, we found that CSF1R is highly expressed by antigen-presenting dendritic cells, which are a prevalent cell population in the avian liver in addition to their well-recognized prevalence in bursa and spleen. 52 So, we suggest two nonexclusive explanations.
One is that a class of pathogen-associated virulence determinants acts to block binding of CSF1 or IL34 in order to compromise innate immunity or the function of FAE. Such a pathogenicity determinant exists in the form of the immunomodulatory BARF1 viral protein, a soluble CSF1R homolog that binds human CSF1. 43 A second nonexclusive explanation is that a pathogen or pathogen-associated molecule binds to CSF1R to enable receptor-mediated internalization. CSF1R is expressed on the cell surface and upon ligand binding promotes endocytosis of the ligand, either CSF1 or IL34. 3 Hence CSF1R could provide a portal for pathogen invasion.
The secondary question is how evolution in CSF1R can occur without compromising the innate immune system. Mice and rats carrying CSF1 and CSF1R knockout mutations 4,5 are macrophage deficient and have severe developmental abnormalities. This is also the case in zebrafish. 12 We have recently confirmed, based upon CRISPR-mediated knockout in the germ line, that the chicken CSF1R is also absolutely required for posthatch development (Balic A. and DAH, forthcoming). Previous studies of birds in smallholder systems in Ethiopia provided strong evidence for heritable disease resistance and resilience. 45 Comparative analysis of available sequences of western commercial and tropically adapted populations identified prevalent protein sequence variants (Supplementary Tables 2 and 3). Some CSF1R variants distinguished layer and broiler lines, consistent with evidence of signatures of selection in broiler lines in this region of chromosome 13 44 and QTL associations with growth-related traits. CSF1R is clearly highly polymorphic in chickens, and the coding variants distinguish western commercial birds from tropically adapted birds. Two common variants that distinguish commercial broilers and layers, A308T and S409L, also occur within domain 4, but whether they influence CSF1R function is unknown. Common variants detected in commercial birds are relatively rare in Ethiopian and Nigerian populations, and one CSF1R variant, D91N, was prevalent and unique to Ethiopia and Nigeria. Each of the variants affects an amino acid that is conserved to some degree across avian species. Polymorphism is a common feature of innate immune receptors. 54 It remains to be determined whether any of these variants can be associated with disease resistance or production traits and could represent targets for marker-assisted selection.
Although the focus of this study has been on the avian CSF1R system, as mentioned in the Introduction, there is emerging interest in CSF1R as a drug target 2 and in functional analysis of loss-of-function and gain-of-function mutations in CSF1R in human patients. [9][10][11][12] The human and mouse equivalents of the BaF3-CSF1R cells we have used here to assess cross-reaction of avian CSF1 have previously been used to assay the function of disease-associated human mutant receptors. 9 Our findings in birds suggest that focused mutagenesis of the interaction sites of CSF1 with CSF1R could provide the basis for generation of monomeric antagonists or higher affinity agonists. | 4,634 | 2019-09-05T00:00:00.000 | [
"Biology"
] |
Credibility in Search Systems via Information Retrieval theory
Information Retrieval methods, and search systems in general, are now a significant part of our interaction with the world outside of our direct reach. As such, they influence the way we perceive the reality we cannot directly experience. From here, the issue of credibility of IA systems is raised, and in this poster we look at the work which has already been done, observe its limits and propose some directions for future work in this area.
INTRODUCTION
A number of studies deal with the credibility of the data: web pages, answers, tags. This is a considerable and challenging problem, and systems and methods have been proposed to help the users in assessing the credibility of the information they receive (Schwarz and Morris (2011)). Such studies assume the IR method, and the search system in general, to be an independent, impartial, trustworthy intermediary. Even if such were the case, the user may still be entitled to mistrust a, for the lack of a better word, incompetent system. In our experience, this comes up first in professional search scenarios (Patent, Legal, Medical), but there is no reason to stop here, since we all take search systems for granted in many of our daily interactions with knowledge. This problem is not specific to information access systems, nor even only to computer systems, and as such, the issue of credibility of computer systems is not new in itself. In the following we will very briefly summarize the work which has already been done and propose some directions for future research in IR models and practice.
PRIOR WORK
The issues of how a human can trust a system have been studied for other types of computer systems (see for instance Galletta et al. (2005) and Lai et al. (2011) for spell-checkers and internet-based inter-organizational systems, respectively) and in general in the more humanistic literature (Kiran and Verbeek (2010); Taddeo (2010)), but less so for information retrieval engines. However, before proceeding, we should provide a definition for our understanding of credibility.
The vast majority of researchers identify two components of credibility: trustworthiness and expertise (Fogg and Tseng (1999)). In a general context, trustworthiness means being unbiased, truthful, and well intentioned, while expertise means being knowledgeable, experienced, or competent. For IR engines and systems, trustworthiness reflects the perception of the user that the search system is not filtering out potentially desirable results (e.g., censorship) or biasing the results according to an unknown agenda (e.g., hidden advertisement). Expertise for IR systems is, on one hand, market popularity (less interesting for us) and, on the other hand, effectiveness and efficiency evaluations, to the extent to which these help build confidence in the quality of the search system.
In fact, a lot of the work already done in IR can be cast as a conveyor of credibility in the performance of the system. Here are a few IR research areas and how they can be viewed in terms of credibility.
To save space and because many of these are well-known research areas in our field, we have refrained from using citations, except in cases where a specific point was to be made.
Probabilistic retrieval, together with methods to regenerate probabilities of relevance from retrieval status values (e.g., Nottelmann and Fuhr (2003)), conveys to the user more than the set of most relevant documents: it also conveys how relevant these most relevant documents are.
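As a toy illustration of mapping retrieval status values (RSVs) to probabilities of relevance, the sketch below fits a logistic calibration on hypothetical judged (RSV, relevance) pairs; it is only in the spirit of such mappings and does not reproduce the specific model of Nottelmann and Fuhr (2003).

```python
# Logistic calibration from retrieval status values to P(relevant | RSV).
# The training pairs below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rsv = np.array([[0.2], [0.5], [1.1], [1.8], [2.4], [3.0]])  # judged scores
rel = np.array([0, 0, 0, 1, 1, 1])                          # relevance labels

calibrator = LogisticRegression().fit(rsv, rel)
print(calibrator.predict_proba(np.array([[0.8], [2.0]]))[:, 1])
```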
Automatic explanations have appeared mostly in Question-Answering systems, but recent work has also looked at recommender systems and attempted to provide explanations based on text similarity (Blanco et al. (2012)).
Diversity can be viewed as the answer to the need of the user to explore the entire knowledge space before making a decision.
Findability looks at the core ability of an engine to retrieve documents and as such can be viewed as a measure of expertise, as well as trustworthiness (if a set of documents is intentionally not retrievable).
Evaluation campaigns are a direct measure of expertise of an IR method at a specific task.
Human-computer interfaces study the objective and subjective ways in which the presentation of the results affects the, ultimately subjective, perception of credibility.
All this work can be cast as credibility, but it is only a series of proxies, each independently looking at a different aspect, providing a fragmented image of the credibility of IA systems.
PROPOSED DIRECTIONS
There are many directions starting from here. Here are some that come first to mind.
Back to probabilities
Use the now pervasive big data to improve the probabilistic model. A number of assumptions and simplifications were made in the original models, which have subsequently been further refined in countless articles, but ultimately still with the goal of providing the set of most relevant documents, rather than a precise probability of how relevant each one is.
New benchmarks are needed: while the evaluation campaign benchmarks are, as we said, an indicator of expertise and therefore credibility, the development of new IR results is linked to the existence of specific test collections and metrics.
Automatic explanations are already present in some form (e.g.text snippets) or for specific (linked) data, but what is now left to the user to assess, could be done also automatically by the system.
Consistency assessment refers to the experience that users tend to adapt to a system and consider it reliable even if it has known and considerable weaknesses, as long as those weaknesses are consistent. Furthermore, an inconsistent result may be an indicator of censorship or hidden advertisement.
User studies are ultimately needed because credibility is essentially a subjective assessment. This will have to include, or in some way factor out, the human-computer interface.
CONCLUSION
The assessment of the expertise (quality of results) and trustworthiness (impartiality of results) of a search system is performed constantly by each user, either consciously or unconsciously. The question we raise is to what extent, and in which way, the underlying IR method can be used to assist in this evaluation. There is no answer at this time, but hopefully the set of research directions proposed here will bring us closer to one. | 1,397.2 | 2013-09-01T00:00:00.000 | [
"Computer Science"
] |
Curcumin and Mesenchymal Stem Cells Ameliorate Ankle, Testis, and Ovary Deleterious Histological Changes in Arthritic Rats via Suppression of Oxidative Stress and Inflammation
Rheumatoid arthritis (RA) is a chronic inflammatory condition, an autoimmune disease that affects the joints, and a multifactorial disease that results from interactions between environmental, genetic, and personal and lifestyle factors. This study was designed to assess the effects of curcumin, bone marrow-derived mesenchymal stem cells (BM-MSCs), and their coadministration on complete Freund's adjuvant- (CFA-) induced arthritis in male and female albino rats. Parameters including swelling of the joint, blood indices of pro-/antioxidant status, cytokines and histopathological examination of joints, and testis and ovary were investigated. RA was induced by a single dose of subcutaneous injection of 0.1 mL CFA into a footpad of the right hind leg of rats. Arthritic rats were treated with curcumin (100 mg/kg b.wt./day) by oral gavage for 21 days and/or treated with three weekly intravenous injections of BM-MSCs (1 × 106 cells/rat/week) in phosphate-buffered saline (PBS). The treatment with curcumin and BM-MSCs singly or together significantly (P < 0.05) improved the bioindicators of oxidative stress and nonenzymatic and enzymatic antioxidants in sera of female rats more than in those of males. Curcumin and BM-MSCs significantly (P < 0.05) improved the elevated TNF-α level and the lowered IL-10 level in the arthritic rats. Furthermore, joint, testis, and ovary histological changes were remarkably amended as a result of treatment with curcumin and BM-MSCs. Thus, it can be concluded that both curcumin and BM-MSCs could have antiarthritic efficacies as well as protective effects to the testes and ovaries which may be mediated via their anti-inflammatory and immunomodulatory potentials as well as oxidative stress modulatory effects.
Introduction
Rheumatoid arthritis (RA) is the most severe destructive inflammatory arthritis. It is a chronic autoimmune condition through which nonsuppurative proliferative synovitis contributes to destruction of the articular cartilage and bone resulting in multiple joint inflammation. RA is more common among women than among men [1,2]. The severity of the disease ranges from person to person, with joint damage varying from mild pain and irritation to severe inflammation. RA also affects joint pairs (two hands, two feet) and can affect small joints in wrists and hands. Many joints such as knees, elbows, shoulders, feet, and ankles can be also affected over time and deformity occurs. In addition, other organs such as the skin, eyes, and lungs can be affected, and neuropathy, anemia, fatigue, and heart disease may occur [3]. Although the etiology of RA is unclear, disease susceptibility is associated with inheritance of certain allelic types of major histocompatibility complex (MHC) class II genes [4].
The mechanism of the joint degeneration effects in rheumatoid arthritis involves direct cell damage by cytotoxic CD8+ T-cells or other lytic cells. On the other hand, the damaging effects of cytokines are triggered by CD4+ T-cells which recognize their antigenic targets, or by non-T-cells which release inflammatory mediators like tumor necrosis factor-α (TNF-α) and interleukin-(IL-) 1β [5]. In addition, Ahmed [6] suggested that the cytokine imbalance of CD8+ and CD4+ Th1/Th2 cells, with a predominance of Th1 cytokines, has pathogenic importance. TNF-α, a proinflammatory Th1 cytokine, serves a key role in the pathophysiological processes of RA [7,8]. It is mainly released from activated inflammatory cells including macrophages, T-lymphocytes, and natural killer cells [9]. It contributes to the stimulation of other inflammatory cytokines, including interleukin-(IL-) 1, 6, 8, and 17 [7,10]. TNF-α and other proinflammatory cytokines potentially amplify differentiation and activation of osteoclasts, which in turn induce synovial hyperplasia, angiogenesis, cartilage erosion, and bone damage [11][12][13]. On the other hand, Th2 cytokines including IL-4 and IL-10 have anti-inflammatory effects, and their increase results in improvement of inflammation and arthritis [14,15].
Reactive oxygen species (ROS) often participate in the pathogenesis of different diseases, including RA. ROS also play a central role both upstream and downstream of the TNF-α and nuclear factor-kappa B (NF-κB) pathways, which are at the center of the inflammatory response. RArelated inflammation is associated with altered signaling pathways, resulting in elevated levels of inflammatory cytokine markers, lipid peroxides, and free radicals. The natural protection mechanism involves antioxidant enzymes like catalase (CAT), superoxide dismutase (SOD), and glutathione peroxidase (GPx), as well as nonenzymatic antioxidant and reduced glutathione (GSH). The defect in such protective mechanism contributes to toxic oxidative free-radical accumulation and consequent degenerative changes [14,16]. Due to the adverse effects and toxicity arising from the use of antiarthritic drugs, more focus is placed in discovering safer, more efficient, natural product-based, alternative medicines with antioxidant activities [17][18][19].
Curcumin or diferuloylmethane is a polyphenolic yellow pigment derived from turmeric (Curcuma longa) and has been reported to exhibit numerous activities including antioxidant and anti-inflammatory properties [20,21]. Curcumin is insoluble in water and ether but soluble in ethanol, dimethylsulfoxide, 1% carboxymethyl cellulose, and acetone [20,22]. The fact that curcumin in solution exists primarily in its enolic form has an important role in the radical-scavenging ability of curcumin [20]. Many chronic disorders, including inflammatory arthritis, intestinal disease, chronic anterior uveitis, pancreatitis, and malignancies may benefit from curcumin [20]. Curcumin has also been shown to decrease many proinflammatory cytokines and their release mediators such as nitric oxide synthase (NOS), interleukin-8 (IL-8), interleukin-1 (IL-1), and TNF-α [21,22].
The mesenchymal stem cell (MSC) population mainly resides in the bone marrow but may be present in other tissues (e.g., fat) and is capable of multilineage differentiation and self-renewal [23]. Under appropriate stimulation, MSCs can differentiate into 3 mesenchymal lineages: chondrocytes, adipocytes, and osteoblasts [23]. MSCs can also be induced experimentally to differentiate into neural and myogenic cells [24]. Multiple publications have confirmed that adherent cells (MSCs) isolated from various tissues meet the minimal criteria corresponding to the basic MSC phenotype, such as the expressions of CD73, CD90, and CD105 [25]. However, MSCs derived from different tissues can also express mesenchymal, hematopoietic, and endothelial tissue developmental markers [26], and they also produce molecules which directly involve immune response regulation, like programmed death ligand 1 (PDL-1) and PDL-2 inhibitory molecules, the costimulatory molecule CD28, and different cytokine arrays [27]. Therefore, MSCs can control the immune response through these molecules. In vivo, MSC immunoregulatory function has also been observed; treatment with MSCs in humans enhanced the outcome of allogeneic transplantation through reducing graft-versus-host disease (GVHD) and facilitating hematopoietic engraftment [28]. MSCs have been widely used in animal models to prevent the recurrence of autoimmunity in lupus-prone mice [29], to promote improvement of experimental autoimmune encephalomyelitis [30], and to enhance amelioration of CFA-induced arthritis in rats [14,15]. Due to the success of MSC therapy in the treatment of some autoimmune disorders in animal models [30] and humans [28], the current research is aimed at examining the potential of bone marrow-derived mesenchymal stem cells (BM-MSCs) either singly or in combination with curcumin in the therapy of RA in male and female Wistar rats.
Experimental
Design. Experimental animals ( Figure 1) were organized into 16 groups (6 animals for each), eight groups including male rats and the other eight groups including female rats as follows: (1) Group 1: normal group that did not receive any treatment or vehicle.
(2) Group 2 (control group): rats within this group received the equivalent volumes of 1% CMC (5 mL/kg b.wt./day) as vehicle 1 by oral gavage daily and PBS (as vehicle 2) in the lateral tail vein weekly for three weeks. The equivalent volume of phosphate-buffered solution was given.
(3) Group 3: curcumin control group. Rats were daily supplemented with curcumin by oral gavage. Curcumin was dissolved in 1% CMC (carboxymethyl cellulose) at 2% concentration and was administrated orally (100 mg/kg b.wt./day) [34]. This group was also weekly given the equivalent volume of PBS.
(4) Group 4 (mesenchymal stem cells (MSCs) control group): in this group, the rats weekly received an injection of MSCs (1 × 10^6 cells/rat) in PBS. This group was daily given the equivalent volume of 1% CMC by oral gavage for 21 days.
(5) Group 5 (arthritic control group): rats were subcutaneously injected with CFA (0.1 mL (0.1 mg)/kg b.wt. single dose) into a foot pad of the right hind leg [35] to induce RA. This group was also given the equivalent volumes of 1% CMC by daily oral administration and PBS by weekly intravenous injection.
(6) Group 6 (arthritic group treated with curcumin): rats were injected with CFA like in group 5 and orally treated with curcumin like in group 3. This group was also weekly given the equivalent volume of PBS by intravenous injection. 2.6. Tissues Sampling. The ankle circumference of the right hind leg of each rat was measured at the end of the experiment, and rats were sacrificed under mild anesthesia in each group. The ankle circumference was measured by wrapping a cotton thread around the ankle, and the length of the wrapped thread was measured by ruler. By centrifugation of blood at 3000 rpm for 15 minutes, sera were separated and the clear and nonhemolyzed supernatant sera were rapidly removed and kept at -20°C while being used in biochemical analysis. For histopathological analysis, paw and hind ankle, testes, and ovaries were removed and then fixed in neutral-buffered formalin (10%).
Paw Edema Level
The circumference of the right hind paw above the tarsal pad was determined, as an indicator of the swelling rate and paw edema in the different groups, by wrapping a piece of cotton thread around the paw just above the tarsal pad. The circumference was measured using a meter ruler [18,36]. The measurements were taken on the 21st day after adjuvant induction.
Oxidative Stress Markers.
In serum, the thiobarbituric acid-reactive substances (TBARS) were measured according to Preuss et al. [37] to determine lipid peroxidation (LPO). Glutathione reduced form (GSH) level was measured colorimetrically using the Ellman reagent as protein-free sulfhydryl content [38]. In addition, glutathione-S-transferase (GST) activity was calculated according to Habig et al. [39], and glutathione peroxidase (GPx) activity was determined by using the method of Kar and Mishra [40] in serum. Finally, superoxide dismutase (SOD) activity was detected according to the colorimetric method of Nishikimi et al. [41].
2.9. Detection of Serum TNF-α and IL-10 Levels. TNF-α levels in the serum of normal and experimental groups were measured using ELISA kits purchased from R&D Systems, USA, according to the manufacturer's instructions [42]. The level of IL-10 was determined in the serum of control and experimental groups using specific ELISA kits purchased from R&D Systems, USA. According to the manufacturer's instructions, the cytokine concentrations were calculated. 2.11. Statistical Analysis. Two-way analysis of variance (ANOVA) [44], accompanied by one-way ANOVA using the SPSS/PC program (version 20.0; SPSS, Chicago, IL) and the post hoc LSD test, was used to statistically analyze the biochemical variable measurements (P < 0.05 was considered significant). Two-way ANOVA was applied to assess the effects of treatment, gender, and their interaction. The CFA-induced arthritic male and female rats exhibited a significant increase in hind paw edema at day 21 as compared with the normal group. The arthritic effect in female rats was more severe than in male rats. The groups of arthritic male and female rats treated with curcumin, MSCs, and their combination showed a significant amelioration of the elevated values of paw edema as compared to the arthritic animals, and the values returned to nearly normal (Table 2).
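For readers who want to reproduce this kind of treatment-by-gender analysis outside SPSS, a minimal sketch using Python's statsmodels is shown below; the data frame values are hypothetical stand-ins, not the measured serum data.

```python
# Two-way ANOVA (treatment x gender) sketch with statsmodels; the numbers
# below are invented placeholders, not values from this study.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "value":     [3.1, 3.4, 5.9, 6.3, 3.0, 3.2, 7.1, 7.5],
    "treatment": ["normal", "normal", "arthritic", "arthritic"] * 2,
    "gender":    ["male"] * 4 + ["female"] * 4,
})

model = ols("value ~ C(treatment) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction term
```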
Regarding one-way ANOVA, the general effect was very highly significant between groups (P < 0.001) (Table 1) throughout the experiment.
Concerning two-way ANOVA, it was noticed that the effects of treatment, gender, and treatment-gender interaction were significant at P < 0.001 (Table 1). The cytokine results are presented in Tables 3, 4, 5, and 6. A significant elevation in serum TNF-α level was noticed in CFA-induced arthritic rats when compared with normal rats; the arthritic effect was more severe in female than in male rats. CFA-injected rats that were treated with curcumin and/or MSCs exhibited a marked decrease in the elevated serum TNF-α level in comparison with arthritic control rats and when compared with either the curcumin- or MSC-treated arthritic groups (Table 4). Regarding one-way ANOVA, the general effect between groups on serum TNF-α level was highly significant (P < 0.001) (Table 3) throughout the experiment. Concerning two-way ANOVA,
it was revealed that the effects of treatment, gender, and treatment-gender interaction were very highly significant at P < 0.001 (Table 3).
A significant decrease in serum IL-10 level was shown in CFA-induced arthritic rats when compared with normal rats after 21 days; the decrease was more pronounced in female than in male rats. The treatment of CFA-injected rats with MSCs and/or curcumin produced a significant increase in IL-10 level after 21 days in comparison to arthritic control rats; the combinatory effects were the most potent (Table 6). Regarding one-way ANOVA, the general effect between groups on serum IL-10 level was very highly significant (P < 0.001) (Table 5) throughout the experiment. Concerning two-way ANOVA of the normal-arthritis effect, it was revealed that the effects of treatment and treatment-gender interaction were very highly significant (P < 0.001), while the effect of gender was only significant (P < 0.05) (Table 5).
Oxidative Stress Markers.
The data showing the effects on LPO, GSH content, and antioxidant enzymes in serum are represented in Tables 7, 8, 9, 10, 11, and 12. Table 12 shows changes in LPO and antioxidant parameters for all groups. The MDA level, as an indicator of LPO, exhibited a significant increase (P < 0.05) in male and female arthritic rats in comparison with the normal group. CFA injection resulted in a greater MDA level increase in female rats than in male rats. On the other hand, the treatment with curcumin, MSCs, and a mixture of both induced a potential reduction of the elevated MDA in both male and female rats.
Concerning the two-way ANOVA of the MDA level of arthritic rats treated with curcumin, MSCs, and their combination, it was noticed that the effects of treatment and gender were very highly significant (P < 0.001), while the effect of the gender-treatment interaction was nonsignificant (P > 0.05) (Table 7).
The level of the nonenzymatic antioxidant, GSH, and the activities of antioxidant enzymes including GST, GPx, and SOD showed a significant depletion in CFA-induced arthritic rats. On the other hand, the treatment with curcumin, MSCs, and their combination induced a significant improvement in GSH, GST, GPx, and SOD activities in both male and female rats; the effects of curcumin, MSCs, and their combination were more or less similar.
Concerning the one-way ANOVA, CFA caused significant effects on GSH levels (Table 8) along with GST (Table 9), GPx (Table 10), and SOD (Table 11) activities (P < 0.001) in both male and female rats compared to the normal group.
Concerning the two-way ANOVA, in the case of GSH content, it was noticed that the effects of treatment were very highly significant (P < 0.001) and the effects of gender were significant (P < 0.05), while the effects of the gender-treatment interaction were nonsignificant (P > 0.05) (Table 8). In the case of GST and GPx activities, it was noticed that the effects of gender and gender-treatment interaction were significant (P < 0.05), and the effects of treatment were very highly significant (P < 0.001) (Table 9). Finally, in the case of SOD, it was revealed that the effects of gender were significant (P < 0.05) and the effects of treatment were significant (P < 0.001), while the effects of the treatment-gender interaction were nonsignificant (P > 0.05) (Table 10).
Histopathological Results
The histological alterations of the articular ankle joint in the various groups of male and female rats are depicted in Figures 2-5. In normal rats, the bone surfaces in the synovium are covered by an articular cartilage that lacks a perichondrium. The heads of the two articulated bones are enclosed and joined by an articular capsule consisting of two parts, the outer and inner parts. The outer part is a sheath of fibrous tissue (fibrous capsule) that extends well beyond each bone's articular cartilage. The inner part, called the synovial membrane, lines the fibrous capsule and is reflected onto the bone, covering it right up to the articular cartilage. Therefore, the joint cavity between two articulated bones is lined everywhere with either articular cartilage or synovial membrane. The synovial membrane is a thin sheath of fibrous connective tissue, with a dense network of blood and lymph capillaries. Ankle joint sections of male rats (Figures 2(a)-2(d)) and female rats (Figures 4(a)-4(e)) from the normal, CMC, and combined curcumin and MSC groups, respectively, showed the normal histological structure of an ankle with normal articulating cartilage, synovial cavity, sponge bone, and bone marrow. CFA-administered arthritic male rats showed necrosis of cartilage with inflammatory cell infiltration in ankle joint sections, degeneration of cartilage, and pannus formation (Figures 3(a) and 3(b)).
CFA-administered arthritis female rat ankle joint sections showed severe necrosis of cartilage with massive inflammatory cell infiltration, severe degeneration of cartilage, and eroded spongy bone (Figures 5(a) and 5(b)). This indicated that the arthritic effect was more severe in female rats than in male rats.
CFA-administered male rats (Figures 4(c)-4(e)) and female rats (Figures 5(c)-5(e)) treated with curcumin, MSCs, and a mixture of curcumin and mesenchymal stem cells showed a nearly normal section structure of articulating cartilage, synovial cavity, sponge bone, and bone marrow nearly similar to the normal control groups.
The ovary of control normal rats (Figures 6(a)-6(d)) from normal, CMC, curcumin, and MSCs, respectively, showed a normal morphology. The ovary consists of two distinct regions: an outer cortex that contains numerous follicles at various stages of maturation and an inner central medulla, which did not appear in these histological sections. The surface of the ovary is covered with germinal epithelium. It contains corpus luteum and different primordial follicles including primary follicles and secondary follicles.
The secondary follicle contains an oocyte surrounded by two or more layers supporting granulosa cells and a follicular antrum filled with liquor follicle, and the follicle is surrounded by theca interna. Mature Graafian follicles are seen beneath the epithelium. Graafian follicles consist of an enlarged oocyte that floats freely within liquor folliculi surrounded by clear zona pellucida, corona radiata, welldefined zona granulosa, and compact theca folliculi.
The histopathological examination of arthritic ovarian sections revealed multiple luteal structures in ovarium medulla, stromal hyperemia, and infiltration of mononuclear cells (Figures 7(a) and 7(b)).
Sections of arthritic rats treated with curcumin (Figure 7(c)), MSCs (Figure 7(d)), and a combination of curcumin and mesenchymal stem cells (Figure 7(e)) revealed nearly normal structure.
Testes of normal rats (Figures 8(a) and 8(b)), CMC (Figure 8(c)), curcumin (Figure 8(d)), and stem cells (Figure 8(e)) revealed a normal seminiferous tubule morphology. Every tubule has epithelial cells including Sertoli cells and germ cells that demonstrated the complete spermatogenesis process (Figure 8(b)). Sertoli cells were usually located in the seminiferous tubule toward the basement membrane. Spermatogonia stood on seminiferous tubule basal lamina. Primary spermatocytes were immediately above them, identified by their large nuclei having coarse chromatin clumps and copious cytoplasm. Due to the rapid division processes, secondary spermatocytes in these sections were not seen. Therefore, there were small, rounded spermatids with rounded nuclei above the primary spermatocytes that proceeded in a long metamorphosis to become recognizable spermatozoa (Figure 8(b)).
Testicular tissue sections obtained from CFA-treated rats displayed several histopathological changes, as shown in Figures 9(a)-9(c). Atrophy and focal necrosis of germinal cells, spermatogenic arrest, and congestion were noticeably observed (Figure 9(a)). Pyknotic nuclei, interstitial edema, and damaged seminiferous epithelium and germ cells were also seen (Figure 9(b)). The seminiferous tubules showed irregular, variable size and congestion in the intercellular space (Figure 9(c)). Testes treated with curcumin, MSCs, and the mixture of MSCs plus curcumin (Figures 9(d)-9(f)), respectively, revealed apparently normal seminiferous tubules. The spermatogenic layers were well organized, the tubules had restored their regular shape, and sperms were observed in most of the tubules.
Discussion
Currently, stem cell therapy has been declared as one of the most important and promising treatments for the near future. This kind of therapy could improve or even reverse some degenerative diseases and have potential applications in replacement and regenerative medicines and RA. Also, using plant constituents in RA treatment has attracted many researchers due to the side effects of conventional drugs.
RA, one of the most common chronic inflammatory autoimmune diseases, is distinguished by systemic
inflammation, permanent synovitis, edema, and production of autoantibodies [45]. Because of their multipotent differentiation abilities, cell therapy using MSCs is the most common new technique in tissue repair and regeneration [14,15,32,46]. Additionally, MSCs have therapeutic potential in joint and bone diseases through the secretion of a number of immune-modulating substances and cell-to-cell interactions, leading to antiapoptotic, antifibrotic, immunosuppressive, and proangiogenic properties [47]. Curcumin, a polyphenolic yellow pigment derived from Curcuma longa Linn, is a member of the curcuminoid family of compounds. Curcumin, also known as diferuloylmethane, is an important antioxidant which has been used as herbal therapy and as a dietary factor in many Eastern countries. Curcumin has also been shown to inhibit many proinflammatory cytokines and mediators such as IL-1, IL-8, and nitric oxide synthase [48]. Consequently, curcumin's beneficial effects on inflammatory disorders are due to the suppression of immune functions of T-cells, especially Th1 cells, which play a key role in the pathogenesis of chronic inflammatory disorders such as arthritis (Figure 10) [14,15,18,49].
In the present study, due to treatment with MSCs and curcumin, the increased right hind leg ankle joint circumference of male and female arthritic rats was significantly reduced. This decrease in the joint circumference of the ankle represents the swelling rate decrease that can be due to edema reduction, inflammatory process attenuation, and synovial tissue hyperplasia reduction as demonstrated by
the histological results in the current study and as stated in previous publications [22,50].
Serum concentrations of TNF-α and IL-10 were determined in the current study to elucidate their potential anti-inflammatory roles in the mechanisms of action of curcumin and MSCs. The serum level of the proinflammatory cytokine TNF-α was significantly elevated in arthritic rats, and the effect was more deteriorated in female than in male arthritic rats. The serum level of the anti-inflammatory cytokine IL-10 was depleted in arthritic rats and was also more deteriorated in female than in male arthritic rats. Therefore, the changes of these cytokines indicate that Th1 cytokines are dominant over Th2 cytokines (Figure 10). Many previous authors have supported this evidence [22,51].
In the present study, numerous histopathological changes in bone, ovarian, and testicular tissues were noticed in arthritic rats. The ankle joints of CFA-administered arthritic rats exhibited deleterious histological changes, including necrosis, eroded articulating cartilage, and pannus formation [52]. These histopathological alterations may be attributed to increased oxidative stress; suppression of the antioxidant defense system; elevation of proinflammatory and inflammatory cytokines, represented by increased IL-1β, IL-6, and COX-1 mRNA expression; and depletion of anti-inflammatory cytokines, represented by decreased IL-4 mRNA expression. The improvement of ankle joint histological architecture following treatment of arthritic rats with MSCs and curcumin may be due to their ability to
scavenge lipid peroxides and free radicals, enhance the antioxidant defense system, and suppress the inflammatory status. MSCs are able to inhibit osteoclast-mediated bone resorption, and hence bone degradation, through the induction of Tregs and the reduction of inflammatory cytokines that aid osteoclastogenesis. It has been demonstrated that MSCs inhibit osteoclastogenesis through the production of osteoprotegerin or through interactions with osteoclast precursors via the CD200/CD200 receptor [53]. Garimella et al. [54] suggested that MSC injection into collagen-induced arthritis (CIA) mice prevented bone loss by decreasing bone marrow osteoclast precursors, but the mechanisms remain unclear.
Antioxidants are compounds that can delay, inhibit, or prevent the oxidation of other compounds, capture free radicals, and reduce oxidative stress. The body has an effective mechanism for preventing and neutralizing damage caused by free radicals, accomplished by a group of endogenous antioxidant enzymes such as SOD and CAT and by the nonenzymatic antioxidant GSH. When the balance between ROS production and antioxidant defense is lost, oxidative stress deregulates cellular function and leads to various pathological conditions [55]. In rheumatoid arthritis, oxygen free radicals are implicated as mediators of tissue damage, and their involvement is well studied in various inflammatory conditions, such as synovitis and rheumatoid arthritis itself. In the present study, the results revealed a significant increase in lipid peroxidation and decreases in antioxidant enzymes as well as GSH in male and female arthritic rats. Polyphenols can protect cells from oxidative stress; however, polyphenol compounds may have antioxidant or prooxidant properties, depending on the source and concentration of free radicals [56]. The combination of curcumin and MSCs produced a significant decrease in LPO compared with the arthritic group. Significant normalization of the levels of the antioxidant enzymes (GST, GPx, and SOD) and GSH supports the potent antiarthritic activity of curcumin combined with MSCs. Arthritic rats treated with curcumin showed a significant increase in GSH, GST, GPx, and SOD levels compared with arthritic controls. In this study, administration of MSCs plus curcumin to arthritic rats significantly attenuated the changes in LPO, GPx, SOD, GST, and GSH. LPO was significantly reduced in arthritic rats treated with curcumin and MSCs compared with arthritic controls, with all values approximately returning to normal levels.
It was shown in this study that the ovarian tissue of arthritic female rats had multiple luteal structures in the ovarian medulla, stromal hyperemia, and infiltration of mononuclear cells. According to Kim and Boone [57], at the penultimate stage of follicular development in the ovary, FasL is present in granulosa cells and may be the signal that causes apoptosis of granulosa cells during atresia. (Figure 10 caption, in part: ... decreasing IL-10 (Th2), thereby resulting in synovial hyperplasia and in necrosis and inflammation of the joint, testis, and ovary; treatment with curcumin and/or MSCs can counteract these actions by enhancing the antioxidant defense system and anti-inflammatory mechanisms. →: activation; ┴: inhibition.) In the present study, the testicular changes, including necrosis of germinal cells, pyknotic nuclei, interstitial edema, atrophy, vacuolation, and blood vessel congestion, may be due to an increase in free radicals and elevation of inflammatory cytokines. In vitro studies on seminiferous tubule cultures revealed that IFN-γ and TNF-α cause germ cell apoptosis via the Fas-FasL system [58][59][60]. In the same regard, Rival et al. [61] reported that IL-6 induces germ cell apoptosis.
MSCs account for the control of the immune cells and inflammatory cytokines involved in RA. MSCs inhibit the activation and proliferation of B-cells and T-lymphocytes via cytokine secretion (a paracrine effect) and also via direct cell-cell contact [62,63]; thus, they have a protective effect against the ovarian and testicular tissue damage induced by RA.
Curcumin can ameliorate the destructive damage to testis and ovary tissues because of its ability to scavenge lipid peroxides and free radicals, enhance the antioxidant defense system, and suppress the inflammatory status that is elevated by CFA-induced arthritis.
Conclusion
In conclusion, the present study shows that CFA induced oxidative stress and ankle, ovarian, and testicular damage. The administration of curcumin and BM-MSCs, singly or in combination, provides potential protection against oxidative stress changes and articular inflammatory cell infiltration and ameliorates the histopathological effects of CFA in male and female rats; the combination is more potent in both male and female arthritic rats. Consequently, we recommend the combination of mesenchymal stem cells and curcumin on account of their antioxidant and anti-inflammatory properties and their ameliorating effect on histopathological changes. However, clinical studies are required to assess the efficacy and safety of this combination before its application for treatment in human beings can be approved. | 6,430.2 | 2021-11-09T00:00:00.000 | [
"Biology",
"Medicine"
] |
SOCIAL SCIENCES INNOVATIVE APPROACH TO BUDGETING ACTIVITIES OF INSURERS IN THE CRISIS ON THE UKRAINIAN INSURANCE MARKET
Traditional budgeting, despite the critical comments regarding this management tool, is widely used in many medium and large enterprises, including insurance companies. The authors propose an innovative approach to budget management in insurance companies that takes into account the hyperdynamic, permanently unstable business environment in the country in general and in the insurance market in particular, as well as the specifics of insurance. In essence, it is a combination of traditional budgeting and non-budgetary methods of financial planning.
Introduction
One of the major factors of business survival in the current crisis conditions of the Ukrainian economy, and a framework for companies to achieve a sustainable competitive advantage in an aggressive external environment, is an innovative approach to financial management, in particular to such important aspects as the quality of financial planning. An innovative approach to financial management involves the use of new financial instruments or processes in carrying out financial activities.
An important financial planning tool is traditional budgeting. Despite criticisms of this tool, budgeting remains the most widespread method of financial planning in many corporations and in large and medium-sized businesses. Recent years have been the most difficult for Ukraine's economic development owing to a simultaneous combination of negative objective and subjective factors. Losses in individual markets ranged from 5 to 30%. The insurance market was no exception, which is not surprising given the extremely elastic demand for insurance services in our country. Most companies did not achieve their planned indicators for 2014-2017. This sets new requirements for financial forecasting and planning of insurers' activities, including the formation of budgets for revenue and expenditure items.
Specificity and innovative directions of budgeting in insurance companies
Today, mistakes in financial planning have far more serious and rapid consequences than ever before, as competition in the insurance market is constantly intensifying and every company is experiencing a shortage of financial resources. The role of external factors in the success of the insurance business has increased as never before. Most insurance companies do not apply an innovative approach to financial management processes under conditions of total market uncertainty and limited financial resources, although an innovative improvement of financial planning would greatly contribute to solving the problem of optimizing financial resources. As part of formulating the problem, it is necessary to review traditional approaches to financial planning, including the drawing up of budgets.
Definitions of an entity's budget provided by scholars who have studied this issue differ, for example:
-the budget is the amount of funds available to perform certain functions and carry out specific activities within corporate planning (Tereshchenko, 2003);
-the budget is a quantitative expression of indicators that are set centrally, according to the enterprise plan for a certain period (Shcheborch, 2004);
-the budget is a financial plan covering all aspects of the enterprise (Bilyk, 2013).
Despite the many definitions of budgeting, all who have studied the issue agree that traditional budgeting is, on the one hand, a method of financial planning and, on the other hand, a management process whose goal is the timely provision of financial resources.
The budgeting process is affected by the industry or type of economic activity. The main feature of the financial cycle (turnover of working capital) in industry is the existence of a production stage (the transformation of material resources into finished products). This leads to a more complicated system of cost planning for an industrial company than for other sectors of the economy. In the banking, insurance, and trade sectors, a large part of the value added consists of transaction costs, which are determined by the general conditions supporting the business (office premises, staff, etc.). Thus, the main objective of such companies is for the difference between "outgoing" and "incoming" value flows, that is, the margin (whether the difference between the purchase and sales value of goods in trade or the difference between the cost of attracting and the return on allocating financial resources in the banking sector), to cover the operating expenses. Optimizing operating expenses essentially means ensuring that the company performs its intermediary role of redistributing "incoming" commodity or financial flows with a minimum of expenses. This is much more complicated in industry, where the "incoming" flows undergo a qualitative change at the production stage; that is, the value of the "outgoing" flows is determined not only by the market (externally) but also by the enterprise's internal (production) policy. The connection between the cost and structure of material resource purchases and the income from sales of finished products in industry is much more complex than the connection between credit interest and deposit interest in the financial sector. The fact that the financial cycle of an industrial enterprise includes a supply stage and an implementation phase, together with production accounting and planning, determines the specificity and complexity of the budget process in industry compared with other sectors.
The financial resources of banks, insurance companies, and other financial institutions, and to a lesser extent the goods for resale held by trading organizations, are liquid assets that can quite easily be "shifted". If the situation on the financial markets suddenly changes, a bank can relatively painlessly "transfer" funds from short-term commercial loans to the stock markets. An industrial enterprise that has invested in the production of a specific product will be in a much more difficult position.
The presence of a production stage determines the specificity not only of the financial cycle but also of the investment cycle (the cycle of renewal of fixed capital). Unlike other sectors, where the investment cycle is fairly standard, in industry most investments relate to the manufacture of specific types of products; that is, they are highly individualized. There is a close connection not only between the profitability of the business as a whole and the return on investment, but also between the profitability of specific types of products and the return on the specific investments made in producing them.
Therefore, when establishing budget management in insurance companies, it is necessary to consider the specifics of financial management in insurance organizations, namely:
-funds for the provision of insurance services are received in advance;
-the insurance company can operate with advance funding and receive additional profit;
-profit can also be obtained when insurance premiums received exceed the payments and operating costs associated with the company's insurance activity;
-insurance companies, because of the high risk associated with their activities, must maintain an insurance fund of a size that provides them with sufficient solvency (Suprun, Zajvenko, 2009).
Accordingly, a number of reference points can be identified that distinguish the budget process in insurance companies.
First, while the receivables budget is decisive in many enterprises and organizations, in insurance companies commodity receivables are insignificant, so the budget of commodity receivables will not play any significant role.
Secondly, all budgets associated with the investment of free funds will be important. Their structure will differ significantly between companies engaged in risk (non-life) insurance and those engaged in life insurance. Life insurance companies keep separate accounts of insurance reserves for each client, which complicates the budgeting process.
Thirdly, insurance is based on the probability of occurrence or non-occurrence of the events defined by the contract as insurance cases. The main cost item of a stably operating insurance company is the cost associated with insurance payments. Accordingly, the budgeting process is largely based on actuarial calculations. The data on the insurance company's anticipated costs formed in this way are the basis for the budget of payments and, together with the forecast of insurance premium income and the other operating budgets, they determine the planned financial result of the insurer's activities. Thus, in the field of operational activity, two types of budgets are formed. The first type comprises budgets of a distinctly probabilistic nature (the budget of insurance premium income and the budget of insurance payments). The second type comprises the other operational budgets, on which management has a direct influence (the advertising cost budget, the administrative expenses budget, the budget of rental income, etc.).
Fourth, the probabilistic nature of insurance payments in non-life insurance companies often generates significant fluctuations over time in the amount of cash payments. It is therefore important to form a pool of cash available for payments by type of insurance (taking into account indicators such as the ratio of payouts to incurred losses) and to create a corresponding insurance reserve (which can be defined as an equalization reserve).
The budget of insurance premium income is the key starting point for planning an insurance company's activity. It should be formed on the basis of indicators such as the number of concluded contracts, the insured amounts, and the insurance tariff rates. Even in a stable economic situation, forecasting insurance premiums is a difficult task given the dynamism of the external economic environment. The budget for the receipt of insurance premiums should therefore be developed on the principle of a "slippery" (rolling) budget: the budget is composed for the year, with quarterly or monthly adjustments depending on changes in the economy and in the relevant market segments (a minimal numerical sketch of such a rolling adjustment is given below). It is also necessary to structure the budget of insurance premium income.
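To make the "slippery" budget principle more concrete, the sketch below shows one possible way to re-plan the remaining quarters after each actual result becomes known. It is a minimal illustration in Python, assuming hypothetical quarterly premium figures and a simple proportional adjustment rule; it is not drawn from the article's data or from any real insurer.

```python
# Minimal sketch of a "slippery" (rolling) premium-income budget.
# All figures are hypothetical; the proportional adjustment rule is one
# possible choice, not a method prescribed in the article.

from dataclasses import dataclass, field
from typing import List


@dataclass
class RollingPremiumBudget:
    """Annual premium-income budget that is re-planned after every quarter."""
    quarterly_plan: List[float]                      # planned premium income per quarter, UAH million
    actuals: List[float] = field(default_factory=list)

    def record_quarter(self, actual: float) -> None:
        """Store the actual result and scale the remaining quarterly targets
        by the actual-to-plan ratio, so the budget 'slips' with the market."""
        index = len(self.actuals)
        planned = self.quarterly_plan[index]
        self.actuals.append(actual)
        adjustment = actual / planned if planned else 1.0
        for i in range(index + 1, len(self.quarterly_plan)):
            self.quarterly_plan[i] *= adjustment

    def annual_outlook(self) -> float:
        """Actual results so far plus the adjusted plan for the remaining quarters."""
        remaining = self.quarterly_plan[len(self.actuals):]
        return sum(self.actuals) + sum(remaining)


# Usage: a 10% shortfall in Q1 immediately lowers the targets for Q2-Q4.
budget = RollingPremiumBudget(quarterly_plan=[100.0, 110.0, 120.0, 130.0])
budget.record_quarter(90.0)
print(round(budget.annual_outlook(), 1))  # 414.0 = 90.0 actual + 324.0 adjusted plan
```

The same logic extends naturally to monthly adjustments or to separate budgets per type of insurance, which is how a rolling budget can absorb the demand shocks described here.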
The structuring principles can vary. For example, it is essential to distinguish the two main groups of counterparties (corporate clients and individuals), as the acquisition work with them differs fundamentally. Further breakdown depends on the specifics of the insurer's activities. It is quite effective to allocate separate budgets for the receipt of insurance premiums by those types of insurance that have accounted for the largest share of the company's premium income over the years. At present, the Ukrainian insurance market faces systemic problems associated with the general crisis of the Ukrainian economy.
As of 30 June 2016, the total number of insurance companies was 343, including 45 life insurance companies and 298 non-life insurance companies (as of 30 June 2015, there were 374 companies, including 52 life insurance companies and 322 non-life insurance companies). The number of insurance companies tends to decrease: as of 30 June 2016, compared with the same date in 2015, the number of companies had fallen by 31. A crisis "cleaning" of the market is now under way: first of all, weak companies disappear, whose owners lack the means or the interest to support them.
According to the main indicators, the market is growing, as evidenced by the latest statistics. Compared with the first half-year of 2015, the volume of gross insurance premiums increased by UAH 2861.5 million (21.3%), and net insurance premiums increased by UAH 2150.8 million (19.9%). Gross insurance premiums increased for almost all types of insurance, namely: auto insurance (CASCO, compulsory civil liability insurance for vehicle owners, Green Card), up by UAH 753.6 million (20.5%); life insurance, up by UAH 374.8 million (39.9%); insurance of cargoes and luggage, up by UAH 275.3 million (18.4%); property insurance, up by UAH 265.7 million (16.2%); third party liability insurance, up by UAH 227.3 million (40.8%); and medical insurance, up by UAH 147.6 million (14.9%). Growth in the absolute value of insurance premiums received for virtually all types of insurance also took place in the crisis year 2015, but that growth was purely inflationary. To verify whether the same holds now, we need to look at the dynamics of the number of concluded insurance contracts.
Thus, during the first half-year of 2016, the total number of contracts decreased by 9534.0 thousand units (10.3%), while the number of voluntary insurance contracts decreased by 33970.7 thousand units (74.6%), including: the number of concluded insurance contracts covering fire risks and risks of natural disasters decreased by 13247.6 thousand units (92.4%); the number of concluded property insurance contracts decreased by 12925.3 thousand units (89.9%); and the number of concluded accident insurance contracts decreased by 5334.3 thousand units (63.4%).
During the crisis period of 2008-2010, the market was supported by compulsory insurance.
Similarly, the number of compulsory insurance contracts increased in the first half-year of 2016 by 24321.6 thousand units (52.5%), owing to the growth of traffic accident (vehicle) insurance contracts by 24407.0 thousand units (57.9%). From 2015 to 2016, the segment of compulsory civil liability vehicle insurance showed steady growth. To some extent, the reserve of inflationary growth has already been exhausted; therefore, when drawing conclusions and forming the budgets of insurance premium income, it is important for the company's analysts to determine whether the bottom has been reached in the decline in the number of concluded contracts or whether the process will continue. This uncertainty in budgeting premium income creates difficulty in forming the operating expenditure budgets in practically all areas of the insurance company's activity.
Using traditional budgeting under conditions of such instability is difficult and sometimes impossible. In this context, it is worth recalling the criticisms of traditional budgeting that have been voiced at different times by different scholars.
Among the critics who argue that budgeting is ineffective as a managerial process are the US professors Jeremy Hope and Robin Fraser. These authors do not deny that budgeting is the method by which operative financial planning and control over the implementation of plans are carried out in the largest corporations of the world. However, their research suggests that in many corporations budgeting, as a process, attracts more and more criticism, both from top managers and from the budget executives in the responsibility centers.
The main criticisms these authors make of budgeting as a method of financial planning and control can be grouped into three areas:
1. Budgeting is a long and expensive management process.
2. Budgeting is not suitable for a modern competitive environment.
3. Budgeting encourages the manipulation of figures in financial statements.
This applies above all to the formation of sales budgets in insurance companies under crisis conditions; in such a setting, traditional budgeting is indeed poorly suited to the competitive environment.
Since the 1980s, the level of uncertainty in the business environment has grown and the importance of effective corporate management has multiplied. Shareholders demanded a steady increase in share value, and competition intensified in all markets owing to the rising cost of resources and the introduction of new information technologies in companies. Intangible assets such as the brand, loyal customers, and an effective management team became a crucial lever of share price growth. Ensuring constant renewal through innovation, while the duration of the production cycle shortens, became a priority for companies. The constant trend toward lower prices and shrinking margins made companies consider sharp reductions in administrative costs and management staff. Frequent changes in customer preferences pushed companies to decentralize management so that local managers could respond to them quickly. For these reasons, many companies introduced simplified budget management systems and began to develop shorter-term plans. Thus, instead of annual budgets, budgets for half a year or even for a quarter appeared; at the same time, so-called "slippery" budgets emerged, composed for a year with quarterly adjustments. Table 1 provides a comparative analysis of insurance company management on a budget basis and on a non-budget basis (generally called an agreement on relative improvements).
Table 1. Management of an insurance company on a budget basis versus an agreement on relative improvements*
-Income and expense targets: under traditional budgeting, entirely fixed at both the income and the expense level; under the agreement on relative improvements, variations are possible (for example, a reduction in insurance premiums leads to a reduction in operating costs, or vice versa), but the overall result should be positive.
-Remuneration: under budgeting, rigidly fixed (e.g., a percentage of the insurance premium or a fixed salary); under relative improvements, many options and combinations exist, on the condition that improving the key indicators increases remuneration, and vice versa.
-Plans: under budgeting, they have the character of clearly scheduled tasks; under relative improvements, they are very flexible, with key indicators planned within a certain range of variation.
-Resources: under budgeting, fully allocated among divisions; under relative improvements, the resource base can be changed rapidly depending on the changing situation in the insurance market and in adjacent segments.
-Coordination: under budgeting, within the fiscal management system with a clearly defined hierarchy; under relative improvements, there are responsible persons, but the structure is more horizontal than vertical.
-Controlling: under budgeting, monthly (quarterly) monitoring of the performance of all budgets; under relative improvements, mostly overall control of the situation within the limits of the indicators, with the possibility of operational changes in income or expenses depending on changes in the external economic environment.
* Composed by the authors.
When comparing the two models, the first is simpler and easier to use; it has proven itself well in traditional manufacturing with standardized products. The complexity of the second model stems from its flexibility and variability. Implementing the second model in an insurance company requires, first of all, highly skilled staff capable of making such a model work. Abroad, the management model based on the agreement on relative improvements has proven itself in the financial sector, which has always been characterized by highly educated staff.
Despite the professionalism of the employees of Ukrainian insurance companies, the overall level of corporate culture remains insufficient for full implementation of the relative-improvements model. We therefore propose a combination of the two approaches. The first approach is used to form all administrative budgets and other budgets not related to insurance activity. This allows the company's senior management to have a clear idea of the financial resources needed for administration and, at the same time, to prevent financial abuse by middle managers. Budgetary tasks related to insurance activity, by contrast, need to be as flexible as possible. Given the specifics of the domestic insurance market (a drop in total demand due to the financial and economic crisis in the country), strict sales targets should be abandoned. An agent or financial consultant working on a net basis should be allowed to go without sales for a long period, since his or her real potential is uncertain and, as the market environment improves, he or she can bring significant income to the company. The manager must have a budget for sales promotion and a budget for incentives without a rigid itemization of costs. Of course, this approach may open the door to financial abuse, but it is the practice in leading Western insurance companies. The specifics of the business must also be taken into account when deciding on the configuration of the financial planning system in insurance companies. For a large retail company at the current stage of the insurance market's development, elements of the traditional budgeting system will have the greater effect, since it would be very difficult to explain the strategy and all the postulates of the relative-improvements system to a large number of employees (some of whom lack the necessary education). The transition to relative-improvements methods is much simpler for a company with a small number of employees that works through the bancassurance channel or concentrates on servicing legal entities.
Conclusions
The current business environment in Ukraine is very unstable, and this instability is fully reflected in the insurance market. Overall, during the crisis of 2014-2016 the market suffered significant losses: by the main criterion, the income of insurance premiums, the market fell back to the level of 2015 in dollar terms. Under such difficult circumstances, insurance companies are often unable to predict and plan their activities, and relying on traditional budgeting methods alone becomes practically impossible. The study of foreign experience in financial management and in planning insurance companies' activities suggests that Ukrainian insurers should apply a combination of traditional budgeting and a planning method based on relative improvements. This combination creates a more flexible financial planning system that allows the insurer to adapt to rapid changes in the external economic environment. The proportions of traditional budgeting and of management based on relative improvements will vary depending on the scale, specifics, and types of economic activity of the insurer. Formulating the parameters for determining these proportions is a prospect for further research. | 4,739 | 2017-12-28T00:00:00.000 | [
"Business",
"Economics"
] |
Paving the way for precision medicine v2.0 in intensive care by profiling necroinflammation in biofluids
Current clinical diagnosis is typically based on a combination of approaches including clinical examination of the patient, clinical experience, physiologic and/or genetic parameters, high-tech diagnostic medical imaging, and an extended list of laboratory values mostly determined in biofluids such as blood and urine. One could consider this as precision medicine v1.0. However, recent advances in technology and better understanding of molecular mechanisms underlying disease will allow us to better characterize patients in the future. These improvements will enable us to distinguish patients who have similar clinical presentations but different cellular and molecular responses. Treatments will be able to be chosen more “precisely”, resulting in more appropriate therapy, precision medicine v2.0. In this review, we will reflect on the potential added value of recent advances in technology and a better molecular understanding of necrosis and inflammation for improving diagnosis and treatment of critically ill patients. We give a brief overview on the mutual interplay between necrosis and inflammation, which are two crucial detrimental factors in organ and/or systemic dysfunction. One of the challenges for the future will thus be the cellular and molecular profiling of necroinflammation in biofluids. The huge amount of data generated by profiling biomolecules and single cells through, for example, different omic-approaches is needed for data mining methods to allow patient-clustering and identify novel biomarkers. The real-time monitoring of biomarkers will allow continuous (re)evaluation of treatment strategies using machine learning models. Ultimately, we may be able to offer precision therapies specifically designed to target the molecular set-up of an individual patient, as has begun to be done in cancer therapeutics.
Facts
• Necrosis and inflammation are two auto-amplifying detrimental factors in critically ill patients.
• Necrotic cells release damage-associated molecular patterns and chemo-/cytokines.
• Biomolecules released by necrotic cells and immune cells circulate in the biofluids of critically ill patients.
• The digitalization of monitoring intensive care patients allows data mining methods and machine learning models to fine-tune patient stratification and treatment strategies.
Open questions
• Which circulating biomolecules and/or immune cell profiles have prognostic value for disease progression and mortality in critically ill patients?
• Is there therapeutic value in targeting novel biomarkers of necrosis or inflammation?
• How will we evolve to a patient-driven medical care, which allows a mutual secure interaction between biomedical (pre-)clinical research, health care services, and patients?
Introduction
Patients with similar symptoms can have different diseases, and not all patients with the same disease respond equally to treatment [1]. To date, the tailoring of medical treatment to the characteristics and needs of individual patients, or precision medicine, is predominantly based on genetics. For example, the FDA recently approved four new cancer treatments and one treatment for cystic fibrosis for use in patients with specific genetic characteristics. The challenge of the 21st century is to extend precision medicine beyond genetic stratification by implementing novel molecular diagnostics and intervention strategies.
Critical illness is characterized by dysfunction of several organ systems, or multiple organ dysfunction syndrome (MODS), triggered by an inciting event such as major trauma, surgery, or infection. This is explained by a dysregulated inflammatory stress response, which leads to a negative spiral in which the effects of one organ's dysfunction impact other organs. MODS often shows substantial individual variation in response to treatment due to individual genetic differences, co-morbidities, frailty, and dynamic disease fluctuations. More specifically, increased inflammation, immunosuppression, and necrosis can occur dynamically and concurrently, a combination originally coined necroinflammation [2]. Therefore, dynamic monitoring of novel biomarkers of necrosis or inflammation is needed to stratify critically ill patients for treatment with new necrosis and/or inflammation intervention strategies [3]. The joined forces of different emerging fields such as real-time biomolecule diagnostics, single cell sequencing, the multiplicity of omics approaches, electronic health recording, data mining, and machine learning could profoundly reshape the landscape of healthcare in the near future. Here, we briefly review the current state of the art on each of these topics in relation to necroinflammation.
Necrosis (re)defined
Rudolf Virchow (1821-1902), founder of the Cell Theory (Omnis cellula e cellula) and of cellular pathology, referred to tissue injury as "parenchymatous inflammation". He postulated that tissue injury is caused by pathological changes within the cells. In 1858, he introduced the notion of cell death as the basis for pathology, with "necrobiosis" being a physiological process of spontaneous wearing out of living parts of the body and "necrosis" an accidental process. Virchow's necrobiosis-necrosis dichotomy resembles, to some extent, the current apoptosis-necrosis classification [4]. Together with cellular and molecular insights into inflammation came a shift in our understanding of the molecular interplay between cell death and inflammation at the site of tissue injury. This emerging field of research is crucial for understanding organismal homeostasis and how its processes contribute to a growing list of inflammatory and degenerative pathologies. Cell death is crucial as a mechanism for eliminating pathogens and regulating inflammation by exposing or releasing molecular patterns, but excessive cell death during inflammation is also one of the detrimental factors resulting in tissue damage [5].
For decades, apoptosis was considered as the standard cell death form during development, homeostasis, infection and pathogenesis, whereas necrosis was mostly considered as an "accidental" cell death in response to physicochemical insults. An increasing amount of genetic evidence, as well as the discovery of chemical inhibitors of necrosis, have radically changed this view, and revealed the existence of multiple molecular pathways of necrosis [6]. The term "necrosis" comes from the Greek word "nekros", which means "dead body". Cellular necrosis is defined by rounding, swelling, cytoplasmic granulation, and plasma membrane rupture with consequent leakage of cellular contents into the extracellular space. Thus, the destruction of vital cellular functions is essentially the result of irreversible cell membrane damage. Multiple modes of necrosis (cell death) share these morphological hallmarks, and they are now examined for common or distinct underlying signaling pathways. Attempts to define and classify modes of necrosis and their underlying pathways have resulted in multiple neologisms, such as necroptosis, parthanatos, oxytosis/ferroptosis, (n)etosis, autoschizis, pyronecrosis, or pyroptosis emphasizing a particular aspect [6].
In the human body, 1-5 million cells die every second. It is imperative that their clearance occurs efficiently and silently by phagocytes. This evolutionarily conserved process, termed efferocytosis, is critical to the maintenance of developmental and immune homeostasis [7]. As the goal of efferocytosis is the quiet removal of cellular corpses before the cells start to leak, one could theorize that part of the apoptotic program is the packaging of dying cells into immunologically inert pieces. However, in case of insufficient or absent phagocytic capacity, apoptotic cells, similar to necrotic cells, lose the integrity of the plasma membrane, which is referred to as secondary necrosis. Recently, this was found to depend on CASP3-dependent cleavage of Gasdermin E [8,9]. This important finding might challenge the generally accepted dichotomy between non-leaky, immune-silent apoptosis and leaky, immunogenic necrosis. In this view, apoptosis can be classified as a mode of necrosis (Fig. 1), with the notion that this stage of secondary necrosis is normally not reached in vivo owing to quick phagocytosis by neighboring cells or phagocytes.
Necrosis-induced inflammatory response
For decades, the "self/non-self" model has been used as the sole framework to differentiate between homeostatic (that is, self and non-immunogenic) and pathogen-driven (that is, non-self and immunogenic) forms of cell death. However, the multitude of observations showing the propensity of endogenous entities to initiate an immune response illustrate the limitations of this model. Thus, the immune system has evolved to recognize, respond to, and remember danger in the form of damage-associated molecular patterns (DAMPs) or microbe-associated molecular patterns (MAMPs), previously referred to as pathogen-associated molecular patterns. This change in nomenclature was proposed because symbiotic flora and other non-pathogenic environmental microorganisms can also induce an immune-stimulatory response (for instance, upon disruption of intestinal epithelial barrier) [10], potentially boosting sepsis. For example, although lipopolysaccharide (LPS)-induced shock is generally considered a sterile shock model, antibiotics pretreatment can protect indicating the presence of a microbial component, probably caused by intestinal ischemia and barrier loss [11]. Cells dying under non-physiological conditions often reflects a pathological process, which is potentially dangerous to the host. The innate immune system developed mechanisms to detect this potential danger [12]. The ensuing acute inflammatory response rapidly delivers defenses that attempt to resolve the injurious process and repair the damage. Similarly, cell death will mobilize the adaptive immune system if immunogenic antigens are present.
For a long time, cell death has been misleadingly classified in a dichotomic manner. Thus, whereas apoptosis was considered to be a physiological, regulated, and non-immunogenic (or even tolerogenic) variant of cellular demise, necrosis was viewed as a pathological, uncontrollable, and immunogenic one [10]. Now it has become evident that such clear-cut differences do not exist. To date, research on immunogenic cell death is mainly performed in the context of pathogen defense and anticancer (immuno)therapy. From this field of research, we know that cell disruption induced by freeze-thaw is unable to activate dendritic cells in vitro [13] and fails to elicit protective immunity upon inoculation in syngeneic mice [14,15]. This could imply that it is not merely cellular leakage that triggers an inflammatory response. The genetic programs of cell death can also actively transform DAMPs, altering their immunogenicity and dictating the effects of cell death on phagocytes and the immune response. This has fed the idea that at least one factor other than antigenicity explains why some, but not all, forms of cell death are immunogenic. (Figure 1 caption: Organismal homeostasis is based on a balance between cell renewal and death, which is mediated by apoptosis. Apoptotic blebbing allows quick phagocytic uptake and recycling, which prevents leakage of the cellular content and subsequent inflammation (Arrow 1). In the absence or lack of sufficient phagocytic capacity (Arrow 2), apoptotic caspases cleave Gasdermin E (GSDME), resulting in cell rupture, referred to as secondary necrosis. Similarly, inflammatory caspases cleave Gasdermin D (GSDMD) to induce pyroptosis. Necroptosis is executed by the concerted action of RIPK3 kinase activity and the pseudokinase MLKL, whereas ferroptosis is fulfilled by free radical-induced lipid peroxidation catalyzed by Fe(II). Neutrophils typically die by netosis while expelling neutrophil extracellular traps (NETs), which is dependent on autophagy processes and PAD4-mediated citrullination. Different molecular mechanisms execute plasma membrane rupture, resulting in cellular leakage, defined as necrosis. Release of damage-associated molecular patterns (DAMPs) and inflammatory signaling by necrotic cells subsequently induce inflammation.) Therefore, similar to a vaccination procedure, it is proposed that immunogenicity depends on two key factors: antigenicity and adjuvanticity [10]. The presence of neoantigens explains why dying cells can initiate an adaptive immune response, provided that the cells also emit adjuvant signals as a consequence of cellular stress and death [16]. It is tempting to assume that (at least some) auto-immune disorders may originate from a situation in which an unwarranted wave of cell death is mistakenly perceived as immunogenic.
In addition, dying cells can release chemo- and/or cytokines in a cell-autonomous way, for example through activation of nuclear factor-κB (NF-κB), that modulate the inflammatory response [17,18]. An accumulating body of evidence also implicates interleukin-1 (IL-1) family cytokines in initiating the inflammatory response to necrotic cells or cytotoxic stimuli [19]. Note that, in the context of immunogenic anti-cancer therapies, the contribution of NF-κB-mediated inflammatory signaling is still a matter of debate [15,20]. In conclusion, both processes, viz., DAMP-induced immune responses and direct inflammatory signaling by necrotic cells, boost necroinflammation and contribute detrimentally to disease progression. Therefore, identification of the key drivers of necrosis-initiated inflammation is likely to lead to major breakthroughs in the treatment of MODS.
Inflammation-induced necrosis
Although the immune system has evolved to protect the host against infection, it is clear that responses can also be generated under absolutely sterile conditions. This is painfully evident to anyone who has experienced blunt trauma (e.g., banging a thumb with a hammer), after which the affected site rapidly becomes inflamed. Trauma, bleeding, cell injury, and irritant particles are among the many kinds of sterile stimuli that can trigger various kinds of immune responses, both innate and adaptive [21]. Thus, inflammation occurs in response to infections as well as tissue injury, permeabilizing local blood vessels to permit rapid ingress of neutrophils, monocytes, and blood-borne molecules (such as complement, antibody, platelets, clotting factors, and acute phase reactants) in an attempt to resolve the dangerous situation. Cytokines and chemokines are key mediators of this inflammatory response, which causes considerable disturbance to the tissue [19]. For example, infiltrating neutrophils contain a battery of destructive proteases, are an important source of reactive oxygen species (ROS) after degranulation, and can expel web-like chromatin structures known as neutrophil extracellular traps (NETs) that neutralize and kill pathogens as a consequence of netosis [22]. In addition, high concentrations of ROS are injurious, because they oxidize proteins and lipids and damage DNA. The resulting undesirable collateral tissue damage leads to further cell death and inflammation.
An auto-amplifying loop between necrosis and inflammation drives MODS
Any disease that results in tissue injury increases the risk of developing MODS. Causal etiologies include infections, burns, severe trauma, and various other noninfectious inflammatory conditions. MODS is considered one of the major causes of death in intensive care units (ICUs), and its incidence in European ICU patients has increased over the last decade, from 39.7% in 2002 (SOAP study) to 51% in 2012 (ICON study) [23]. Several mechanisms have been proposed to explain the pathophysiology of MODS [24]. A dysregulated immune response, or immune paralysis, in which the homeostasis between pro-inflammatory and anti-inflammatory reactions is lost, is thought to be key to the development of MODS; this chronic failure propagates organ damage. The gut is also thought to play an important role in MODS: owing to a surplus of inflammatory mediators, the intestinal wall becomes hyperpermeable, which in turn propagates the inflammatory response. Acute kidney injury (AKI) occurs in approximately half of ICU patients and is also a common complication of MODS associated with poor clinical outcomes [25,26]. It is a syndrome that, in the majority of ICU patients, occurs as a consequence of disease (e.g., sepsis, trauma, or shock), which evidently explains part of the observed morbidity and mortality. However, clinical data also show that AKI is not a mere innocent bystander but plays an important role in patient prognosis, as increasing severity of AKI also contributes to worse outcomes [26,27]. To date, steroids remain one of the few treatment options for this dysregulated immune response in critically ill patients with MODS. It is tempting to speculate that the beneficial effects of steroid administration in critical care are due to its multitude of downstream targets related to necroinflammation [28]. However, large clinical studies on exogenous steroid administration show conflicting results [29], with some studies showing a mortality benefit [30,31], whereas others could not demonstrate a beneficial effect [32,33].
Recent data from basic and clinical research have begun to elucidate complex organ interactions in AKI between the kidney and distant organs, including the heart, lung, spleen, brain, liver, and gut [34]. The hypothesis of organ cross-talk and distant organ injury, often referred to as remote organ injury, has emerged over the last decade and may explain the potential negative impact of AKI on outcome [35]. Animal models clearly indicate that AKI induces distant organ dysfunction through different identified pathways, including inflammatory cascades, necrosis, induction of remote oxidative stress, and differential molecular expression [36]. Basically, communication between different organs can only occur through the transport of biomolecules and immune cells in biofluids. This might also be a key detrimental factor in transplantation-induced distant organ injury [37]. This concept was illustrated, for example, in a rat allogeneic renal transplantation model, in which ischemic allografts (stored for 24 h before transplantation), but not fresh, immediately transplanted allografts, led to remote lung injury [38]. Pharmacological targeting of different modes of necrosis using a combined treatment with cyclosporine A, 3-aminobenzamide, and necrostatin-1 attenuated lung injury. These experimental data suggest that DAMPs released from necrotic renal cells, mostly tubular cells, follow the circulation into the lung capillaries, where they harm the pulmonary tissue by two interconnected mechanisms, necrosis and inflammation [39], referred to as kidney-lung cross-talk in critically ill patients [40]. Recently, it was found that this process is also enhanced by neutrophil extracellular traps and circulating histones [41]. Transcriptome analysis of remotely injured lungs also identified ischemia-specific changes that were distinguishable from those produced by uremia and involved several pro-inflammatory and pro-apoptotic pathways [42]. In summary, all these findings further strengthen the potential role of necroinflammation in remote organ damage.
A direct detrimental role for necrosis in MODS has also been extensively shown using mouse experimental models reflecting systemic inflammatory response syndrome (SIRS), sepsis, and AKI. RIPK3-deficient, MLKL-deficient, and RIPK1 kinase-dead knockin mice are, to different extents, protected against tumor necrosis factor (TNF)-induced SIRS [43][44][45]. A combined loss of CASP8 and RIPK3 provides stronger protection against SIRS, and also against kidney ischemia-reperfusion injury, than loss of RIPK3 alone [43]. In kidney ischemia-reperfusion injury, different modes of necrosis act in a mutual way [43,46], and ferroptosis of the tubular kidney epithelium seems to be a dominant mode of cell death [47]. Whereas RIPK1 kinase inhibitors (necrostatins) protect against TNF-induced SIRS [44,48,49], lipophilic radical traps such as ferrostatins or liproxstatins protect against AKI [47,50]. Note that labile iron is a known risk factor for developing AKI in clinically relevant settings such as cardiac surgery-associated AKI, rhabdomyolysis-induced AKI, and contrast-associated AKI [51,52]. Iron chelation by deferoxamine has become a standard control agent for AKI when induced ex vivo in settings such as isolated renal tubules or in vivo in models of acute renal failure [53]. These data suggest that blocking cell death pathways could have therapeutic potential in the context of SIRS and AKI.
There are also experimental data suggesting the therapeutic potential of targeting inflammation in sepsis. Mice deficient in the pathogen recognition receptor Toll-like receptor 4 or the intracellular NOD-like receptor family member NLRP3 are protected against LPS-induced lethal shock [54][55][56]. Both receptors are required to induce the production of the inflammatory cytokines IL-1β and IL-18, which depends on the proteolytic activity of CASP1. Blocking pyroptosis by depleting mice of CASP11 also protects against LPS-induced shock [57]. A phenotypic in vivo screen using different experimental mouse models of septic shock revealed that simultaneously neutralizing IL-1 and IL-18 has superior therapeutic potential compared with inhibiting the upstream inflammatory caspases CASP1 or -11 [11]. In line with these data obtained in mice, patients with septic shock who did not survive displayed higher IL-18 levels than patients who survived [58,59]. Also, in critically ill AKI patients, higher IL-18 levels were associated with non-recovery at day 60 and with non-survival [60]. On the other hand, neutralization of IL-1 signaling using Kineret® (Anakinra; IL-1Ra) in clinical trials resulted only in a marginal trend toward increased survival [61], whereas IL-18 neutralization has not been evaluated in clinical studies so far [62]. These data obtained in septic mice and patients clearly underscore the need for patient stratification owing to the heterogeneity of sepsis pathology [63]. In addition to inflammasome-mediated pyroptosis and inflammatory signaling, evidence is accumulating for a potential detrimental role of NETs and/or netosis in MODS and AKI (reviewed in [22]).
In summary, experimental and preclinical support is increasing for four non-exclusive key phenomena in the development of MODS: infection, inflammation, parenchymal cell necrosis, and immune cell necrosis (Fig. 2). These major processes result in the circulation of MAMPs, chemo- and cytokines, activated immune cells, and DAMPs, which in an auto-amplifying loop potentially cause distant organ injury. Monitoring these biomolecules and immune cells in biofluids will be crucial to stratify patients and identify novel potential biomarkers with predictive value. Ultimately, combined intervention strategies controlling infection, inflammation, and necrosis might be the key to effective treatment of MODS.
Profiling necroinflammation in biofluids of critically ill ICU patients
In the ICU, the complexity and ambiguity of critical illness syndromes have been identified as fundamental justifications for adopting a precision approach to research and practice [64,65]. This complexity creates considerable heterogeneity among patients and conditions, in which a "one size fits all" approach to therapy can lead to widely divergent results. Today, a clinical diagnosis is typically based on a combination of elements including anamnesis, physiologic and/or genetic parameters, high-tech diagnostic medical imaging, and an extended list of laboratory values determined in biofluids such as blood and urine (Fig. 3). One could consider this precision medicine v1.0. Experimental rodent models mimicking MODS are unraveling a still growing list of detrimental circulating biomolecules and immune cell profiles [63], which could serve as novel biomarkers for the stratification of critically ill patients. To pave the way for precision medicine v2.0, a joint venture between researchers and clinicians will be crucial for the daily monitoring of a panel of biomarkers in biofluids, for pinpointing correlations with survival, and finally for linking an appropriate intervention strategy to the molecular diagnostic profile.
At present, AKI biomarkers have been successfully used to identify patients who may benefit from a so-called AKI bundle of care [66][67][68].
There are two problems at the root of inconsistent translatability in critical care [69]. One is a lack of reproducibility owing to false-positive biomarker selection or the absence of robust statistical models. The other, more important, is a lack of generalizability when moving from a narrow, well-defined study population to broader applications in critical care. One way to improve trials is to focus not just on size but also on heterogeneity. Dynamic disease fluctuations, for example, drive this heterogeneity, which might also partially explain the still disputed beneficial role of corticosteroids in critical care [70]. Therefore, daily monitoring of potentially novel biomarkers in biofluids is increasingly performed to allow better patient stratification [71,72]. Patients with similar clinical presentations typically have different cellular and molecular responses due to individual genetic differences and co-morbidities. (Figure 2 caption, in part: (1) infection, (2) inflammation, (3) parenchymal necrosis (apoptosis, necroptosis, and ferroptosis), and (4) immune cell necrosis (pyroptosis and netosis). These features are responsible for the release of biomolecules (MAMPs, DAMPs, chemokines, and cytokines) into biofluids and for the activation of immune cells, both of which are often biohazardous and worsen tissue damage. Monitoring this in biofluids will be crucial to stratify patients and identify novel potential biomarkers with predictive value. Ultimately, combined intervention strategies controlling infection, inflammation, and necrosis might be the key to effective treatment of MODS. MAMPs, microbe-associated molecular patterns; DAMPs, damage-associated molecular patterns.) To deal with this form of heterogeneity, an expanded list of novel biomarkers with predictive value is needed to allow the determination of subtypes among clinically similar patients. The list of potentially clinically relevant biomarkers is growing for sepsis [73] (Table 1) as well as for AKI [74] (Table 2). Circulating DNA released from dying cells [75] or microorganisms [76], coined cell-free DNA (cf-DNA), is also gaining interest as a potential biomarker in MODS [3]. However, more work must be done to determine the origin of cf-DNA, namely, parenchymal cell necrosis versus immune cell necrosis [77,78]. The translatability of these potentially novel biomarkers will critically depend on new technologies such as real-time immunodiagnostics that allow instant decision making [72]. (Figure 3 caption, in part: ... easily accessible biofluids. The cellular and molecular profiling of necrosis and inflammation in biofluids using cutting-edge technologies such as real-time immunodiagnostics, next-generation sequencing, and mass spectrometry will pave the way for precision medicine v2.0 in critical care. This is needed for data-mining approaches to allow patient clustering, identify novel biomarkers, and develop novel intervention strategies controlling necrosis and inflammation. The real-time monitoring of biomarkers will allow continued (re)evaluation of treatment strategies using machine-learning models.) In addition to the monitoring of biomolecules as biomarkers, immune cell profiling has also yielded potentially interesting biomarkers that need further validation.
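As an illustration of what such data-mining-driven patient clustering could look like, the sketch below groups patients according to a standardized biomarker panel. It is a minimal Python example using scikit-learn; the panel composition, the values, and the choice of three clusters are all hypothetical and are not derived from the studies or biomarker tables discussed here.

```python
# Minimal sketch of biomarker-based patient clustering.
# Panel, values, and cluster count are illustrative assumptions only.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Rows: patients; columns: concentrations of a hypothetical biomarker panel
# (e.g., IL-18, NGAL, cell-free DNA, CRP) in arbitrary units.
panel = np.array([
    [120.0, 450.0, 3.2,  80.0],
    [115.0, 430.0, 2.9,  75.0],
    [ 40.0, 120.0, 0.8,  20.0],
    [ 35.0, 150.0, 0.9,  25.0],
    [200.0, 800.0, 6.5, 150.0],
    [210.0, 820.0, 6.8, 160.0],
])

# Standardize each biomarker so that no single analyte dominates the distances.
scaled = StandardScaler().fit_transform(panel)

# Cluster patients into putative subtypes; in practice the resulting labels
# would be correlated with outcomes (e.g., survival) to assess prognostic value.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_)  # e.g., cluster assignments such as [1 1 0 0 2 2]
```

Daily re-measurement of the same panel would simply add new rows over time, which is the kind of longitudinal input that machine-learning models could use to re-evaluate treatment strategies.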
For example, prolonged lymphopenia [79,80], CD64 expression on neutrophils [81,82], an increase in Tregs [83,84], prolonged depletion of dendritic cells [85], increased PD-1/PD-L1 expression on monocytes and neutrophils [86,87], and increased BTLA expression on innate immune cell populations [88] have all been shown to associate with disease severity and mortality. To date, unbiased omics approaches profiling the proteome [89], lipidome [90], glycome [91], metabolome [92], exome [93], and (epi-)genome [94,95] are also extending precision oncology and are starting to be explored in ICU critical care in an attempt to create new predictive biomarkers [96] (Fig. 3). Profiling noncoding RNA might also have predictive value for disease severity and mortality: long non-coding RNAs have been investigated in sepsis [97,98] and kidney injury [99], as have miRNA profiles in sepsis [100,101] and kidney injury [102]. Cutting-edge omics approaches such as oxidative proteomics [103], oxidative lipidomics [104], glycoproteomics [105], cellular glycomics [106], and single cell sequencing are evolving quickly toward clinical diagnostic use. For example, single cell RNA sequencing [107], single cell genomics [108,109], single cell epigenomics [110], single cell proteomics [111], single cell lipidomics [112], and single cell metabolomics [113] will undoubtedly be essential for profiling the differential immune cell responses in critically ill patients and creating prognostic value. Note that proteomics [114] and next-generation sequencing [76,115] can also be used to identify the origin of MAMPs and thus diagnose the type of infection. It is, however, questionable whether the current health economic evidence is strong enough to support more widespread use of whole-exome or whole-genome sequencing in clinical practice [116]. On the other hand, targeted sequencing or mass spectrometric analysis of robust biomarkers will probably become standard analysis techniques in the clinical laboratory in the future. The justification for this expensive equipment will likely depend on the number of robust biomarkers and their added value, e.g., for the survival of the patient.
(Table 1 fragment, circulating biomarkers in sepsis; columns: biomarker, biofluid, patient group, predictive value, references)
-Ang2: plasma; severe sepsis; organ dysfunction and injury [150,151]
-Endocan: plasma; sepsis; severity of illness and mortality [152,153]
-Cell-free DNA: plasma; critically ill; sepsis and mortality [154,155]; critically ill; no predictive value [156]
-Magnesium: serum; critically ill; mortality [157]
-CHI3L1 (YKL40): serum; sepsis; sepsis [158]
Abbreviations: Ang, angiopoietins; ARDS, acute respiratory distress syndrome; CHI3L1, chitinase 3 like 1; DcR3, soluble decoy receptor 3; SLE, systemic lupus erythematosus; sTREM-1, soluble triggering receptor expressed on myeloid cells 1.
(Table 2 fragment, biomarkers of acute kidney injury; columns: biomarker, biofluid, patient group, predictive value, references)
-AGT: urine; ADHF; AKI [159]
-BPIFA2: urine, blood; critically ill; early diagnosis of acute kidney injury [160]
-Calprotectin: urine; critically ill; distinction between prerenal and intrinsic acute kidney injury [161,162]
-CHI3L1 (YKL40): urine; critically ill; early diagnosis of acute kidney injury [163,164]
-Cystatin C: urine, plasma; critically ill; early diagnosis of acute kidney injury [165,166]
-HSP72: urine; critically ill; early diagnosis of acute kidney injury [167]
-IGFBP7: urine; critically ill; early diagnosis of acute kidney injury [168][169][170]
-IL-18: urine; critically ill; early diagnosis of acute kidney injury [171]; urine; AKI; mortality [171]; urine; cirrhosis; diagnosis of acute tubular necrosis [172]; urine; HIV; proximal tubular dysfunction [173]
-KIM-1: urine; critically ill; early diagnosis of acute kidney injury [174]
-L-FABP: urine, plasma; AKI; mortality [166,175]
-MCP-1: urine; cardiac surgery; AKI [176]
-microRNA: urine; cardiac surgery; severe AKI and poor postoperative outcome [177]; urine; critically ill; AKI predisposition [178]
-NAG: urine; critically ill; tubular damage [179]
-Netrin-1: urine; critically ill; early diagnosis of acute kidney injury [166,180]
-NGAL: urine, plasma; critically ill; early diagnosis of acute kidney injury [181,182]
-SBP-1: urine; critically ill; early diagnosis of acute kidney injury [183]
-TIMP-2: urine; critically ill; early diagnosis of acute kidney injury [168][169][170]
Abbreviations: AGT, angiotensinogen; ADHF, acute decompensated heart failure; AKI, acute kidney injury; BPIFA2, BPI fold-containing family A member 2; CHI3L1, chitinase 3 like 1; HIV, human immunodeficiency virus; HSP, heat shock protein; IGFBP7, insulin-like growth factor binding protein 7; KIM-1, kidney injury molecule-1; L-FABP, liver-type fatty acid-binding protein; MCP-1, monocyte chemotactic protein 1; NAG, N-acetyl-β-D-glucosaminidase; NGAL, neutrophil gelatinase-associated lipocalin; SBP-1, selenium-binding protein 1; TIMP-2, tissue inhibitor of metalloproteinase 2.
How to deal with the data revolution in critical care?
"Big data in health" is defined by high volume, high diversity biological, clinical, environmental, and lifestyle information collected from single individuals to large cohorts, in relation to their health and wellness status, at one or several time points [117]. Big data come from a variety of sources, such as clinical trials, electronic health records, patient registries and databases, multidimensional data from genomic, epigenomic, transcriptomic, proteomic, metabolomic, and microbiomic measurements, and medical imaging. More recently, data are being integrated from social media, socioeconomic or behavioral indicators, occupational information, mobile applications, or environmental monitoring [118]. A major challenge for preclinical and clinical research is to obtain and achieve access to sufficient high quality, informative data. We need to progress from incomprehensible networks or ranking tables to a userfriendly and intuitive format. Another major issue is the transferability of medical data between countries. Ownership of data by patients could overcome these obstacles. Presently, the patients do not have control over the access privileges to their medical records and remain unaware of the true value of the data AGT Urine ADHF AKI [159] BPIFA2 Urine, blood Critically ill Early diagnosis of acute kidney injury [160] Calprotectin Urine Critically ill Distinction between prerenal and intrinsic acute kidney injury [161,162] CHI3L1 (YKL40) Urine Critically ill Early diagnosis of acute kidney injury [163,164] Cystatin C Urine, plasma Critically ill Early diagnosis of acute kidney injury [165,166] HSP72 Urine Critically ill Early diagnosis of acute kidney injury [167] IGFBP7 Urine Critically ill Early diagnosis of acute kidney injury [168][169][170] IL-18 Urine Critically ill Early diagnosis of acute kidney injury [171] Urine AKI Mortality [171] Urine Cirrhosis Diagnosis of acute tubular necrosis [172] Urine HIV Proximal tubular dysfunction [173] KIM-1 Urine Critically ill Early diagnosis of acute kidney injury [174] L-FABP Urine, plasma AKI Mortality [166,175] MCP-1 Urine Cardiac surgery AKI [176] microRNA Urine Cardiac surgery Severe AKI and poor postoperative outcome [177] Urine Critically ill AKI predisposition [178] NAG Urine Critically ill Tubular damage [179] Netrin-1 Urine Critically ill Early diagnosis of acute kidney injury [166,180] NGAL Urine, plasma Critically ill Early diagnosis of acute kidney injury [181,182] SBP-1 Urine Critically ill Early diagnosis of acute kidney injury [183] TIMP-2 Urine Critically ill Early diagnosis of acute kidney injury [168][169][170] AGT angiotensinogen, ADHF acute decompensated heart failure, AKI acute kidney injury, BPIFA2 BPI foldcontaining family A member 2, CHI3L1 chitinase 3 like 1, HIV human immunodeficiency virus, HSP heat shock protein, IGFBP7 insulin-like growth factor binding protein 7, KIM-1 kidney injury molecule-1, L-FABP liver-type fatty acid-binding protein, MCP-1 monocyte chemotactic protein 1, NAG N-acetyl-β-D-glucosaminidase; NGAL neutrophil gelatinase-associated lipocalin, SBP-1 selenium-binding protein 1, TIMP-2 tissue inhibitor of metalloproteinase 2 they have. The USA have taken steps toward a "patientdriven economy" [119]. In such a scenario, the patient owns his/her data. 
By integrating the use of mobile devices, this could create a mutually interactive platform between biomedical (pre-)clinical research, health care services and patients through a world-standard public health record, although many challenges remain in achieving this [120]. For example, there is a need to have a much higher level of security than is possible today. One suggestion was to explore blockchain technology, which could be described as a distributed database that is used to maintain a continuously growing list of cryptographic records/blocks in a peer-to-peer network of users [121]. Originally used as the technology underlying "Bitcoin" to assure secure transactions, it might also be very suitable for application in healthcare. Essentially, projects fail more often because of the underappreciation of the complexities of ethical, legal, and social factors than for technological reasons. Data continue to increase at an exponential rate and the need for cross-border exchange of biomedical and healthcare data, cloud-storage, and cloud-computing is inevitable [122,123]. Until many issues of data safety and security are solved, local solutions will be favored [124,125].
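To make the blockchain idea concrete, the minimal sketch below shows the core mechanism in a few lines of Python: each record is chained to the previous one through a cryptographic hash, so any retroactive edit breaks the chain. This is only an illustration of the principle discussed above; the record fields, values, and function names are hypothetical, and nothing here constitutes a healthcare-grade implementation.

```python
# Minimal sketch (not a production system) of a blockchain-style audit trail for
# health records: each block stores the hash of the previous block, so tampering
# with an earlier record invalidates the chain. All field names are illustrative.
import hashlib
import json
import time


def make_block(record: dict, prev_hash: str) -> dict:
    """Wrap a record together with a timestamp and the previous block's hash."""
    block = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block


def verify_chain(chain: list) -> bool:
    """Recompute every hash and check the prev_hash links."""
    for i, block in enumerate(chain):
        payload = json.dumps(
            {k: block[k] for k in ("record", "timestamp", "prev_hash")},
            sort_keys=True,
        ).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain = [make_block({"patient": "anonymised-001", "lactate_mmol_l": 3.1}, prev_hash="0")]
chain.append(make_block({"patient": "anonymised-001", "lactate_mmol_l": 2.2}, chain[-1]["hash"]))
print(verify_chain(chain))                       # True
chain[0]["record"]["lactate_mmol_l"] = 9.9       # retroactive edit
print(verify_chain(chain))                       # False: tampering is detected
```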
Data mining and machine learning models key to precision medicine 2.0?
A wealth of data are being collected in ICUs across the world, not only by standard clinical data management systems but also by clinical trials and researcher-driven clinical studies. These data need to be filtered for artifacts and standardized to a uniform, readable format to allow clinical data mining approaches [126,127]. Data mining is the process of pattern discovery and extraction where huge amounts of data are involved. In the context of intensive care, this approach could be the key to identifying novel biomarkers and allowing patient stratification (Fig. 3). One of the biggest benefits of the data-driven approach to biomarker discovery is the possibility of discovering novel pathobiology in the heterogeneity of critical illness, compared to hypothesis-driven studies of familiar biomolecules. For example, data mining techniques are currently employed to try to predict mortality [128], one of the key issues in intensive care. Better patient stratification is also needed to improve the success rates of clinical trials, and critically depends on data mining methods including generalization, characterization, classification, clustering, association, evolution, pattern matching, data visualization, and meta-rule guided mining [129]. Dimensionality reduction and visualization techniques are exciting areas of research, which have the potential of redefining the single-input monitoring approach currently applied in clinical practice. Looking even further forward, there is a need for integrative and interactive machine learning solutions, with teams of machine learning researchers and clinicians (who are directly involved in patient care and data acquisition) working in tandem to generate actionable insight and value from the increasingly large and complex critical care data [127]. Connecting daily monitoring of an increasing set of circulating biomolecules and immune cells in critically ill patients to data mining will feed machine-learning approaches. This form of artificial intelligence allows, in a feedback loop, continued re-evaluation of novel patient stratification strategies and novel biomarkers/therapies targeting necrosis and inflammation (Fig. 3). In clinical practice, this approach will: (1) improve outcomes for individual patients through personalization of predictions, (2) allow earlier diagnosis and detection of adverse drug reactions, (3) provide better treatments and decision support for clinicians in cyclic processes, and (4) assist in understanding the progression of rare diseases. The multidimensional signatures will hopefully deliver a much higher predictive power than the single biomarkers used today. These improvements should eventually lead to lowered costs for the healthcare system.
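As a toy illustration of the clustering step described above, the following Python sketch standardises a synthetic set of circulating-biomarker measurements and groups patients into candidate sub-phenotypes with k-means. All biomarker names, values, and cluster interpretations are invented for illustration; a real stratification pipeline would add artifact filtering, validation, and many more variables.

```python
# Toy illustration of data-driven patient stratification: cluster patients on a
# few circulating biomarkers (all values below are synthetic, not real data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic cohort: columns could stand for, e.g., NGAL, IL-18 and cell-free DNA levels.
cohort = np.vstack([
    rng.normal(loc=[1.0, 0.8, 1.2], scale=0.2, size=(40, 3)),   # "mild" phenotype
    rng.normal(loc=[3.0, 2.5, 1.0], scale=0.3, size=(40, 3)),   # "inflammatory" phenotype
    rng.normal(loc=[1.2, 0.9, 4.0], scale=0.3, size=(40, 3)),   # "necrosis-dominant" phenotype
])

# Standardise each biomarker, then cluster into candidate sub-phenotypes.
X = StandardScaler().fit_transform(cohort)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for k in range(3):
    print(f"cluster {k}: n={np.sum(labels == k)}, "
          f"mean biomarker profile={cohort[labels == k].mean(axis=0).round(2)}")
```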
Conclusion and perspectives
Early advances in precision medicine have been illustrated in oncology, where both diagnosis and treatment are increasingly based on genomic features. Better success rates in the treatment of HER2-positive breast cancer [130] and EGFR-positive lung cancer [131] highlight the potential of precision medicine to lead to widespread changes in clinical practice. Growing interest is also reflected in new large-scale precision health projects, such as the NIH-sponsored Precision Medicine Initiative in the United States and the NHS-sponsored 100,000 Genomes Project in Great Britain, as well as by citizen support for such ventures [132]. The promise of precision medicine is to have the right treatment for the right patient at the right time to maximize effectiveness [133]. In critical care, it will be important to follow a step-by-step procedure: for instance, try to answer urgent clinical questions first (such as the best treatment option upon diagnosis of the type of infection), and then pose new ones that may not have been previously answerable (such as whether there are molecular subtypes in MODS, sepsis, or AKI). As omics and big data technologies proliferate, so too will studies utilizing them as biomarkers in critical illness (studying the genome, epigenome, transcriptome, proteome, metabolome, lipidome, microbiome, …). In all cases, we must remember the extreme heterogeneity of critical illness, and strive for generalizable disease-defining diagnostics and robust biomarkers that can help the entire spectrum of critical care research and delivery [69]. Ultimately, combined intervention strategies controlling infection, inflammation, and necrosis might be the key to effective treatment of MODS. It is not a matter of if, but of how quickly, the landscape of intensive care will be profoundly reshaped. This will undoubtedly occur hand in hand with the reshaping of global health care. Although many challenges remain in achieving this, the evolution toward patient-driven medical care, in which cloud storage/computing and/or peer-to-peer technologies such as blockchain are needed, is probably inevitable. The role of mobile devices in this will definitely gain importance, and they could become a central player in providing a mutually interactive platform between biomedical (pre-)clinical research, health care services, and patients.
Funding and acknowledgments (fragment): VLIR-UOS (TEAM2018-SEL018), Charcot Foundation, Ghent University, and VIB. EH is an intensivist at Ghent University Hospital, Ghent University, and a Senior Clinical Researcher for the Research Foundation Flanders (FWO).
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. | 8,254.4 | 2018-09-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Qubit gates with simultaneous transport in double quantum dots
A single electron spin in a double quantum dot in a magnetic field is considered in terms of a four-level system. By describing the electron motion between the potential minima by spin-conserving tunneling and spin flip caused by a spin-orbit coupling, we inversely engineer faster-than-adiabatic state manipulation operations based on the geometry of four-dimensional (4D) rotations. In particular, we show how to transport a qubit among the quantum dots performing simultaneously a required spin rotation.
INTRODUCTION
Device architectures based on electrons confined in coupled quantum dots [1,2,3,4] are considered a promising candidate for quantum computation and quantum information processing. The advantages of this architecture are that the electron spin is a natural qubit with spin-up and spin-down states, that mature semiconductor technology may be used, and that long coherence times on the scale of microseconds have been achieved in these systems [5,6]. Laboratories use electric, microwave or magnetic fields to manipulate spin states, performing $10^3$ to $10^5$ operations within the spin dephasing time [5,6,7,8,9,10].
Scalability of quantum information devices is associated with several architectures having the capability to transport qubits. In this paper we theoretically explore a four-level model for a spin in a double quantum dot (DQD), aiming to implement fast qubit transport with simultaneous rotations. We achieve this goal for arbitrary rotations by controlling the synchronized time dependence of the interdot tunneling and the spin-orbit coupling (SOC). We inverse-engineer these time dependences based on our recent work [11] on the control of four-level systems. The method separates population control from control of the phases of the bare state basis [12]. Populations can be mapped onto a 4D sphere, so their evolution amounts to 4D transformations controlled by the rotation Hamiltonian, which may be engineered from the target state (in our case via isoclinic rotations and quaternions). A full Hamiltonian can then be constructed from the rotation Hamiltonian to realize the desired phase changes. Arbitrary state manipulations require full flexibility in the Hamiltonian, i.e., the possibility to implement the different Hamiltonian matrix elements with specific time dependences. In the systems of interest, however, there are constraints that hinder certain manipulations and transitions. In particular, in this paper we examine the Hamiltonian structure that corresponds to combined tunneling and SOC controllable couplings, and deduce the possible transformations.
Spin-orbit coupling in semiconductors consists of two main contributions, due to the Dresselhaus and the Bychkov-Rashba effects. The former is due to the bulk inversion asymmetry of the material, and the latter results from the structure inversion asymmetry produced, e.g., by the confining potential or an external electric field [13]. The practical advantage of the Rashba coupling is the ability to manipulate it by an external electric field applied across the semiconductor structure [14,15]. The Rashba coupling controlled by a high-frequency ac gate voltage [16] provides an effective method to control the spin states in short times [17,18].
This paper is organized as follows. In Section II, we introduce first the method that parameterizes the time-dependent Hamiltonian and time evolution operator of a four-level system by using isoclinic rotations and quaternions [11]. Then we map the Hamiltonian of the spin in a DQD coupled by SOC and tunneling onto this scheme. In Section III, we apply the method developed in Section II to design the synchronized time dependences of the control parameters to perform different qubit operations, such as the interdot transport combined with spin rotations. Section IV provides discussion of the results and their relation to other systems. Some details on the structure of the Hamiltonian are presented in the Appendix.
4D Hamiltonians and evolution operators
The wave function of a four-level system, $|\psi(t)\rangle = \sum_{n=1}^{4} c_n e^{i\phi_n}|n\rangle$, where $c_n$, $\phi_n$ are real amplitudes and phases (we set $\phi_1 = 0$) and $\sum_{n=1}^{4} c_n^2 = 1$, can be decomposed as $|\psi(t)\rangle = K(t)|\psi_r(t)\rangle$, where $|\psi_r(t)\rangle = (c_1, c_2, c_3, c_4)^T$ is a vector on the surface of a 4D sphere, and the phase information is contained in $K(t) = \mathrm{diag}(1, e^{i\phi_2}, e^{i\phi_3}, e^{i\phi_4})$. The states $|\psi(t)\rangle$ and $|\psi_r(t)\rangle$ evolve via evolution operators $U(t)$ and $U_r(t)$ related by $U_r(t) = K^\dagger(t)U(t)K(0)$, where we set the initial time as 0. Accordingly, the rotation-related Hamiltonian in the Hilbert space is defined as $H_r(t) = i\,\dot{U}_r(t)U_r^\dagger(t)$, and the total Hamiltonian is $H(t) = K(t)H_r(t)K^\dagger(t) + i\,\dot{K}(t)K^\dagger(t)$. To engineer $H_r$ for a specific rotation, it is convenient to express first a general 4D rotation matrix as a product of a left-isoclinic and a right-isoclinic rotation matrix [19,20], whose entries are built from the components $q_i$ and $p_j$ of two unit quaternions $q = q_w + q_x i + q_y j + q_z k$ and $p = p_w + p_x i + p_y j + p_z k$. We shall parameterize them in terms of generalized 4D spherical angles [21,22], with $0 \le \varphi_{1,2} \le 2\pi$ and $0 \le \theta_{1,2}, \gamma_{1,2} \le \pi$. Thus, by using $U(t) = K(t)U_r(t)K^\dagger(0)$ and the relation $H(t) = i\,\dot{U}(t)U^\dagger(t)$, we find the parameterized forms for the evolution operator $U(t)$ and the Hamiltonian $H(t)$. The explicit expressions are lengthy, and will not be reported here.
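The following short numerical sketch is an illustration only (it does not reproduce the paper's explicit parameterization in generalized 4D spherical angles). It shows the isoclinic bookkeeping: building the left- and right-isoclinic matrices from two unit quaternions and checking that their product is a proper 4D rotation and that the two factors commute.

```python
# Sketch of the 4D-rotation bookkeeping used above: a unit quaternion q acting by
# left multiplication (and p by right multiplication) on R^4 ~ H gives two
# commuting isoclinic rotations whose product is a general SO(4) rotation.
import numpy as np


def qmul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    ])


def left_isoclinic(q):
    """Matrix of v -> q * v (columns are q times the basis quaternions)."""
    return np.column_stack([qmul(q, e) for e in np.eye(4)])


def right_isoclinic(p):
    """Matrix of v -> v * p."""
    return np.column_stack([qmul(e, p) for e in np.eye(4)])


def unit_quaternion(rng):
    v = rng.normal(size=4)
    return v / np.linalg.norm(v)


rng = np.random.default_rng(1)
q, p = unit_quaternion(rng), unit_quaternion(rng)
R = left_isoclinic(q) @ right_isoclinic(p)

print(np.allclose(R.T @ R, np.eye(4)))        # True: R is orthogonal (a 4D rotation)
print(np.isclose(np.linalg.det(R), 1.0))      # True: det = +1, i.e. R is in SO(4)
print(np.allclose(left_isoclinic(q) @ right_isoclinic(p),
                  right_isoclinic(p) @ left_isoclinic(q)))  # True: the factors commute
```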
Single electron in a double quantum dot
Consider a single electron spin in a semiconductor DQD, for example made of silicon or GaAs, with tunneling and Rashba spin-orbit coupling, as shown in Fig. 1. We use a bare basis of spin-up and spin-down states localized in each well, numbered as $|\psi_{L\downarrow}\rangle = |1\rangle$, $|\psi_{R\downarrow}\rangle = |2\rangle$, $|\psi_{R\uparrow}\rangle = |3\rangle$, $|\psi_{L\uparrow}\rangle = |4\rangle$. Following the derivation in Appendix A, and after a diagonal energy shift of $-\Delta/2$, the Hamiltonian $H_0(t)$ of this system can be written as in Eq. (A.7). Here $\tau(t)$ represents the tunneling coupling between the two quantum dots, $\alpha(t)$ is the Rashba coupling, and $\Delta$ is a Zeeman splitting. All these quantities have dimensions of frequency. Following the approach of Mal'shukov et al. [16], we consider the time-dependent Rashba coupling in the complex form $\alpha(t) = \alpha_0 + \alpha_1(t)e^{i\omega t}$. The Hamiltonian structure corresponds topologically to a diamond configuration [11], which, in the parametric expression of $H(t)$, we may impose with the conditions (10) on the auxiliary angles; see Fig. 2. Specifically, after substituting (10) in the parameterized form of (6), the Hamiltonian $H(t)$ acquires the corresponding form (11). To make $H_0(t)$ and $H(t)$ fully consistent, we further fix the angles as in (12). Then (11) simplifies, where we use the shorthand $\gamma(t) = \gamma_1(t)$, $\theta = \theta_1$. Now we may impose $H_0(t) = H(t)$, as they have the same structure, to find the relations (14) between the control functions and the auxiliary angles, which imply $\alpha_0 = 0$, $\alpha_1(t) = -\dot{\gamma}_1(t)$, $\omega = \Delta$ (i.e., the external bias is in resonance with the Zeeman frequency), and $\theta$ can be considered as a coupling mixing angle. Under the conditions stated in Eqs. (12), the parameterized time-evolution operator takes the form of Eq. (15). We impose the boundary condition $\gamma(0) = 2n\pi$, $n = \ldots, -2, -1, 0, 1, 2, \ldots$, to guarantee $U(0) = \mathbb{1}$ at the initial time.
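As a purely illustrative sketch of the coupling topology described here (not the exact matrix of Eq. (A.7); prefactors, signs, and phases are placeholders), one can write a 4x4 Hamiltonian in which spin-conserving tunneling couples states 1-2 and 3-4, the spin-flip (SOC) term couples 1-3 and 2-4, and the Zeeman splitting separates the two spin manifolds, which produces the diamond pattern of couplings:

```python
# Illustrative sketch (not the paper's exact Eq. (A.7)) of the coupling topology in
# the basis |1> = |L,down>, |2> = |R,down>, |3> = |R,up>, |4> = |L,up>: spin-conserving
# tunneling tau couples 1-2 and 3-4, spin-flip (SOC) coupling alpha couples 1-3 and 2-4,
# and Delta is the Zeeman splitting. Prefactors and phases are placeholders.
import numpy as np

def dqd_hamiltonian(tau, alpha, delta):
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1] = H[1, 0] = tau                      # |L,down> <-> |R,down>  (tunneling)
    H[2, 3] = H[3, 2] = tau                      # |R,up>   <-> |L,up>    (tunneling)
    H[0, 2] = alpha; H[2, 0] = np.conj(alpha)    # |L,down> <-> |R,up>    (spin flip)
    H[1, 3] = alpha; H[3, 1] = np.conj(alpha)    # |R,down> <-> |L,up>    (spin flip)
    H[2, 2] = H[3, 3] = delta                    # spin-up states shifted by the Zeeman splitting
    return H

H = dqd_hamiltonian(tau=1.0, alpha=0.3 * np.exp(1j * 0.2), delta=2.0)
print(np.allclose(H, H.conj().T))    # True: Hermitian
print((np.abs(H) > 0).astype(int))   # nonzero couplings form the "diamond" 1-2, 3-4, 1-3, 2-4
```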
Qubit preparation
Assume that the four-level system is initialized in state $|1\rangle$ in the left well, and the objective is to prepare from it an arbitrary qubit in the right well, encoded in levels $|2\rangle$ and $|3\rangle$. Besides the conditions in (12), we can transfer $|1\rangle$ to any bare state except $|4\rangle$, or to arbitrary superpositions of $|2\rangle$ and $|3\rangle$ (i.e., any qubit in the right well), by imposing $\gamma(T) = (2n+1)\pi/2$, $n = 0, \pm 1, \ldots$. As an example we shall perform a state transfer to $b_1 = 0$, $b_2 = 1/2$, $b_3 = e^{i\pi/2}\sqrt{3}/2$, $b_4 = 0$. Equation (16) then corresponds to the desired final state $|\psi(T)\rangle = -i(|2\rangle + e^{i\pi/2}\sqrt{3}\,|3\rangle)/2$ within an irrelevant global phase factor. An Ansatz for $\gamma(t)$ consistent with the above boundary conditions is then chosen, and the resulting tunneling and Rashba SOC are calculated from (14), with the characteristic values $\tau, \alpha \propto 1/T$. We can prepare a qubit with an arbitrary relative phase by adjusting $\Delta$ and the operation time $T$, as long as the tunneling and SOC are experimentally feasible. We plot the time dependence of the tunneling matrix elements, the Rashba SOC, and the population evolution of all bare states in Fig. 3, with parameters corresponding to the $g^*$-factor of an electron in GaAs ($g^* = -0.44$) with $B = 100$ mT, $\Delta \approx 2\pi \times 0.5$ GHz, and $T = 3\pi/(2\Delta) = 1.5$ ns [23].
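A quick back-of-the-envelope check of the quoted GaAs numbers (assuming the standard relation $\Delta = |g^*|\mu_B B/\hbar$, which is not spelled out explicitly above) gives a Zeeman splitting of roughly $2\pi \times 0.6$ GHz at $B = 100$ mT and an operation time of about 1.2 ns, of the same order as the quoted $\Delta \approx 2\pi \times 0.5$ GHz and $T = 1.5$ ns:

```python
# Back-of-the-envelope check of the quoted GaAs numbers: Zeeman splitting for
# g* = -0.44 at B = 100 mT, and the corresponding operation time T = 3*pi/(2*Delta).
import numpy as np

mu_B = 9.2740100783e-24      # Bohr magneton, J/T
hbar = 1.054571817e-34       # reduced Planck constant, J*s
g_star, B = -0.44, 0.1       # GaAs g-factor and field in tesla

Delta = abs(g_star) * mu_B * B / hbar            # Zeeman splitting (rad/s)
print(f"Delta / 2*pi = {Delta / (2 * np.pi) / 1e9:.2f} GHz")   # ~0.62 GHz, consistent with the quoted ~0.5 GHz
T = 3 * np.pi / (2 * Delta)
print(f"T = {T * 1e9:.2f} ns")                   # ~1.2 ns, same order as the quoted 1.5 ns
```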
Qubit transport and rotation
Our method may be applied to transport the qubit from one dot to the other while simultaneously applying a qubit rotation, i.e., to produce an arbitrary gate. Suppose we have already prepared a qubit in the left dot in an arbitrary superposition of $|1\rangle$ and $|4\rangle$ as $|\psi(0)\rangle = \cos\chi\,|1\rangle + e^{i\mu}\sin\chi\,|4\rangle$, where $\chi$ is the initial amplitude mixing angle and $\mu$ is the initial relative phase. The corresponding general final state with the unitary evolution operator (15) is given by the amplitudes $b_n$, with in particular $A = \frac{\sin\gamma(T)}{\sqrt{2}}\sqrt{1 + \cos 2\chi\cos 2\theta + \cos\mu\,\sin 2\chi\sin 2\theta}$. We can inversely calculate the coupling mixing angle $\theta$ under the condition that $\gamma(T) = n\pi/2$, $n = 1, 3, 5, \ldots$, so that the amplitudes $b_1$ and $b_4$ vanish, and for given desired final real amplitudes $A$ and $B$ we obtain
$$\theta = \pm\arccos\sqrt{\frac{A^2\cos 2\chi + \sin^2\chi\,(1 - 2\cos^2\chi\,\sin^2\mu) + \tfrac{1}{2}\sqrt{S}\,\cos\mu\,\sin 2\chi}{1 - \sin^2 2\chi\,\sin^2\mu}}\,,\qquad (22)$$
where $S = 4A^2B^2 - \sin^2\mu\,\sin^2 2\chi$.
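Since the typeset form of Eq. (22) is partly corrupted in this copy, the expression above is a reconstruction. The short check below evaluates that reconstructed formula and confirms that the "transport and phase gate" choice $A = \cos\chi$, $B = \sin\chi$ returns $\theta = 0$, as used in Example 1 below; it validates the reconstruction, not the original typeset equation.

```python
# Numerical sanity check of the reconstructed reading of Eq. (22): for the
# "transport and phase gate" choice A = cos(chi), B = sin(chi), the coupling
# mixing angle should come out as theta = 0 (Example 1).
import numpy as np

def mixing_angle(A, B, chi, mu):
    S = 4 * A**2 * B**2 - np.sin(mu)**2 * np.sin(2 * chi)**2
    num = (A**2 * np.cos(2 * chi)
           + np.sin(chi)**2 * (1 - 2 * np.cos(chi)**2 * np.sin(mu)**2)
           + 0.5 * np.sqrt(S) * np.cos(mu) * np.sin(2 * chi))
    den = 1 - np.sin(2 * chi)**2 * np.sin(mu)**2
    ratio = np.clip(num / den, 0.0, 1.0)   # guard against rounding slightly past 1
    return np.arccos(np.sqrt(ratio))

chi, mu = 0.4, 0.7                          # arbitrary initial qubit on the left dot
theta = mixing_angle(np.cos(chi), np.sin(chi), chi, mu)
print(f"theta = {theta:.2e} rad")           # ~0, i.e. pure tunneling, alpha = 0
```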
Notice that there is still a degree of freedom to control the final relative phase of the qubit on the right. Suppose our target relative phase is $\lambda = \zeta_B - \zeta_A - \Delta T$. By adjusting the operation time as $T = (\zeta - \lambda)/\Delta$, where $\zeta = \zeta_B - \zeta_A$, the process produces the desired relative phase $\lambda$. Now we can consider two examples of application of (22).
Example 1: transport and phase gate. We assume that $A = \cos\chi$ and $B = \sin\chi$ and substitute them into (22) to get $\theta = 0$, which means that $\tau = \dot{\gamma}(t)$ and $\alpha = 0$. The final state is then calculated accordingly: by letting $\gamma(t)$ evolve from 0 to $\pi/2$, the qubit is transported from left to right and rotated by a relative phase factor $e^{-i\Delta T}$. Example 2: transport and NOT gate. The NOT gate with transport swaps the amplitudes between the up and down states, so we set $A = \sin\chi$ and $B = \cos\chi$ in (22) to obtain the corresponding mixing angle.
[Figure 5 caption: Scheme for qubit operations in a chain of quantum dots. The system is initialized in state $|\downarrow\rangle$ in Dot 1 at time $t = 0$; then, from this initial state, a qubit is prepared in Dot 2 at a time $t_1$ (the duration of the process is $T_1 = t_1$). This qubit is transported to Dot 3 with an additional relative phase $\pi/4$ (process duration $T_2 = t_2 - t_1$). Finally, a "NOT gate with transport" operation is applied to flip and transport the qubit to Dot 4 in a process with duration $T_3 = t_3 - t_2$. The upper figures represent each qubit on the corresponding Bloch spheres.]
Discussion
By applying an approach based on four-dimensional rotations, we studied electron charge and spin motion in tunneling- and spin-orbit-coupled quantum dots. By a proper synchronization of their time dependences, we inversely engineered the tunneling and spin-orbit coupling matrix elements to achieve spin transport with simultaneous single-qubit rotations in quantum information transformations such as qubit preparation and the U_phase and U_NOT gates. In a chain of quantum dots, these transport+rotation operations may be applied sequentially for long-distance qubit transfer in a multi-dot architecture, where the ability to coherently transfer a spin has recently been demonstrated [24,25]. Figure 5 illustrates these processes for a particular sequence starting in Dot 1 and ending in Dot 4. We point out that this technique can also be applied to heavy-hole systems, where the control of the hole spin via tunneling and strong SOC has been demonstrated for silicon-based double quantum dots [26]. In addition, a similar approach can be used to design the spin and mass transport of cold atoms in optically produced potentials [27].
The one-dimensional Rashba spin-orbit coupling is represented by a term with coupling parameter $\alpha$. We define the full four-state basis of a single electron in the DQD as $|\psi_{L\downarrow}\rangle = |1\rangle$, $|\psi_{R\downarrow}\rangle = |2\rangle$, $|\psi_{R\uparrow}\rangle = |3\rangle$, $|\psi_{L\uparrow}\rangle = |4\rangle$. The time dependence $\alpha(t)$ in (A.7) comes from two main sources: a time-dependent $\alpha$ due to the ac external bias, and the time-dependent overlap of the wave functions localized near the left ($-x_0$) and right ($x_0$) minima of the potential $V(x)$. | 3,080 | 2018-09-24T00:00:00.000 | [
"Physics"
] |
Cantorian-Fractal Kinetic Energy and Potential Energy as the Ordinary and Dark Energy Density of the Cosmos Respectively
In a one-dimensional Mauldin-Williams random Cantor set universe, the Sigalotti topological speed of light is $c = \phi$, where $\phi = (\sqrt{5} - 1)/2$. It follows then that the corresponding topological acceleration must be a golden mean downscaling of $c$, namely $g = (\phi)(c) = \phi^2$. Since the maximal height in the one-dimensional universe must be $1/2$, where 1 is the unit interval length, and noting that the topological mass ($m$) and topological dimension ($D$), where $m = D = 5$, are those of the largest unit sphere volume, we can conclude that the potential energy of classical mechanics, $E_p = mgh$, translates (with $h = 1/2$) to $E_p(\text{Topological}) = (5)(\phi^2)(1/2) = 5\phi^2/2$. Remembering that the kinetic energy is $E_K = \frac{1}{2}mv^2$, then by the same logic we see that $E_K(\text{Topological}) = \frac{1}{2}(\phi^3)(\phi^2) = \phi^5/2$ when $m = 5$ is replaced by $\phi^3$ for reasons which are explained in the main body of the present work. Adding both expressions together, we find Einstein's maximal energy $E_{\text{Total}} = (\phi^5/2)mc^2 + (5\phi^2/2)mc^2 = mc^2$. As a general conclusion, we note that within high energy cosmology, the sharp distinction between the potential energy and kinetic energy of classical mechanics is blurred on the cosmic scale. Apart from being an original contribution, the article presents an almost complete bibliography on the Cantorian-fractal spacetime theory.
Introduction
Space, time, matter and energy are concepts far from being trivial or obvious even within Newtonian classical mechanics [1]-[6]. This view was amply confirmed and deeply pondered in the wonderful writings of scientists such as H. Weyl and Max Jammer [1] [3]. Starting more or less from there, it became the Author's lifelong work and even magical fascination to incorporate the basic structure of quantum mechanics into the very topology and geometry of space and time [7]-[428]. To do this, the Author followed a path inspired by the work of Richard Feynman and its development by the Canadian physicist G. Ord and the French astrophysicist L. Nottale [7] [12] [13].
The crucial turning point for E-infinity was when the Author's basic work came in touch with the work on non-commutative geometry [14] [74]. In particular, the superb analysis which the great French mathematician Alain Connes undertook of Penrose's fractal tiling universe using von Neumann's pointless geometry [14] [140] is in retrospect the most important central piece in our current understanding of high energy physics and cosmology [7]-[270]. It turned out that the bijection formula relating the Hausdorff dimension of an n-dimensional Cantor manifold to its topological dimension n [7], $d_c^{(n)} = (1/d_c^{(0)})^{n-1}$ with $d_c^{(0)} = \phi$, together with the associated dimensional function, is generic and can be used to understand some of the most complex and difficult problems in physics and astrophysics [9]-[429]. In particular, it is easily shown using the above that the quantum particle may be described by the zero set as given by the bi-dimension $(0, \phi)$ [7] [27], while the quantum wave may be modeled by the empty set given by the bi-dimension $(-1, \phi^2)$. In other words, the zero set quantum particle is described by a bi-dimension with zero for the topological dimension and $\phi$ for the Hausdorff dimension. On the other hand, the empty set quantum wave is fixed by the bi-dimension with minus one for the topological dimension and $\phi^2$ for the Hausdorff dimension [7] [9] [73].
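Written out with the bijection formula as reconstructed above (a check added here for clarity), the quoted bi-dimensions and the well-known core spacetime dimension follow directly:

$$ d_c^{(n)} = \left(\frac{1}{d_c^{(0)}}\right)^{n-1},\qquad d_c^{(0)} = \phi = \frac{\sqrt{5}-1}{2}, $$
$$ d_c^{(0)} = \phi \ \ (\text{zero set, particle}),\qquad d_c^{(-1)} = \phi^2 \ \ (\text{empty set, wave}),\qquad d_c^{(4)} = \left(\frac{1}{\phi}\right)^{3} = 4 + \phi^3 \approx 4.236 . $$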
From this simple mental and mathematical picture, we were able to show that the volume of the quantum particle zero set in Kaluza-Klein spacetime is simply $\phi^5$, while the corresponding volume of the quantum wave is $5\phi^2$. In that way, we were able to show that the ordinary energy density is $E(\text{ordinary}) = (\phi^5/2)mc^2$ and the dark energy density is $E(\text{dark}) = (5\phi^2/2)mc^2$, where $c$ is the speed of light [48]-[400]. By contrast, in the present work we will take another route to arrive at the same result by stressing an optional separation between kinetic energy and potential energy in fractal spacetime.
Fractal Potential Energy and Fractal Kinetic Energy of Quantum Spacetime
The following is a "post-modern" and quite novel approach to the same fundamental problems connected to the total accepted theoretical energy density of the universe versus that which was measured, and which gave rise to the new concepts of dark energy and dark matter. This problem was previously solved using a plethora of mathematical techniques. However, and as we anticipated in the previous section, we are making in the present analysis a strict although optional distinction between potential energy and kinetic energy [430].
For this reason we start from a one-dimensional Cantor set. For this set everything is zero with the exception of one fundamental thing. The bi-dimension indicated already that the topological dimension is zero. The only thing which is not zero is the Hausdorff dimension, which is equal to $\phi$; but what about where $\phi$ is embedded? That means: where is the nothingness which is left from removing, iteratively but randomly, parts of the unit interval? This zero set "nothing" is not really nothing but rather something, and is embedded in the complementary empty set. Since the dimension of the unit interval is $D = 1$, the dimension of this complementary empty set follows trivially. Now in E-infinity we have a technique similar to non-standard analysis in which differentiation is equivalent to golden mean down-scaling, while integration is a golden mean scaling up [7] [9] [21] [28].
In this case we have to downscale $v = c = \phi$ by multiplication with $\phi$. Therefore the acceleration is simply $g = (\phi)(\phi) = \phi^2$. Again, not surprisingly, this corresponds in elasticity to a torsional term and is numerically equal to the Hausdorff dimension of the empty set quantum wave [19] [28] [119].
Our next step is to determine the height of the mass in the gravity field which is endowed with a positive energy, i.e., a potential energy. Since the edges of the unit Cantor interval correspond to the limit of the universe at a nominal infinity, the maximum height within the unit interval is simply one half (1/2). Now we can write heuristically a fractal expression for the conventional potential energy, provided we know what $m$ is. This is easily reasoned if we gain access or an insight into the real meaning of mass. This is clearly connected to energy, and energy is related to entropy. On the other hand, entropy may be measured via the Hausdorff dimension, which is $\phi^2$ for the empty set. Now a mass in 5-dimensional spacetime becomes a Kaluza-Klein spacetime five-dimensional mass. Consequently, our empty set mass cannot be simply 1; therefore it must be $m = (1)(5) = 5$. Inserting into $E_p$, we find the familiar expression of the dark cosmic energy density, $E_p = (5)(\phi^2)(1/2) = 5\phi^2/2$, exactly as shown previously using various other methods. Let us stress this point again.
We have just established the potential energy nature of dark energy and squared it with the energy of the quantum particle via a mathematical tautology. This is because, at the end of the day, both statements amount to the same expression, $5\phi^2/2$. Returning to the kinetic energy, this is relatively simpler because the real energy of real zero set quantum particles is sensibly interpreted via a 3D mass. In this case we have then $\phi$ to the power of 3, which gives us $\phi^3$ as a multiplicative 3D mass, i.e., a volume.
In turn, this $\phi^3$ can be interpreted as the inverse of the spacetime core Hausdorff dimension $4 + \phi^3 = 1/\phi^3$ [7] [9]. Inserting into Newton's kinetic energy, we find the expected result $E_K(\text{Topological}) = \frac{1}{2}(\phi^3)(\phi^2) = \phi^5/2$. In agreement with expectations, the total energy, which is the sum of the kinetic and the potential energy, is equal to Einstein's maximal energy density [426] [427]: $E = (\phi^5/2)mc^2 + (5\phi^2/2)mc^2 = mc^2$. In concluding this part of our analysis, we stress the subtlety of the various interpretations of $E(D)$, which could be potentially confusing. This is because $5\phi^2/2$ could be interpreted in equal measure as the quantum wave kinetic energy $\frac{1}{2}(5)(\phi^2)$ or as the potential energy $(5)(\phi^2)(1/2)$. In both cases the result is the same but the "pictures" are different. In fact, we could go as far as claiming that within quantum cosmology the difference between kinetic energy and potential energy is fuzzy, and so is the difference between the state of motion and being at rest, which resonates with the old philosophy of Zeno [430].
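For clarity, the golden mean identities behind this sum can be verified directly (a short check added here; it uses only $\phi^2 + \phi = 1$):

$$ \phi^2 = 1-\phi,\qquad \phi^3 = 2\phi-1,\qquad \phi^5 = 5\phi-3, $$
$$ \frac{\phi^5}{2} + \frac{5\phi^2}{2} = \frac{(5\phi-3)+(5-5\phi)}{2} = 1 \quad\Rightarrow\quad E_K + E_p = mc^2, $$
$$ \frac{\phi^5}{2} \approx 0.045 \ (\approx 4.5\%,\ \text{ordinary energy}),\qquad \frac{5\phi^2}{2} \approx 0.955 \ (\approx 95.5\%,\ \text{dark energy}). $$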
Conclusion
We have come a long way in a relatively short time to recognize the depth and beauty involved in the discovery of the so-called missing dark energy of the cosmos. Dark energy is simply potential energy latent in the five-dimensional empty set spacetime.
However, one could equally say that dark energy is the energy of the quantum wave. Since it may be seen as a product of the minus-one-dimensional empty set, it has a different sign to that of the ordinary energy. Consequently, the topological acceleration $\phi^2$, when combined with the Kaluza-Klein topological mass and the "height" 1/2, is the cause behind the accelerated expansion of the cosmos. As such, our result reinforces recent exciting work reported in [424] [430] and [429] [430]. However, the mathematics and methodology used here are entirely different from those of the said references, and therefore this agreement lends both theories considerable credibility.
The volume of the quantum wave is the additive measure $5\phi^2$ [23] [24]. In other words, the measure of the particle is multiplicative while, understandably, the surface of this volume is a hyper-surface constituting the additive measure of the quantum wave. Since particle and wave in this picture, which is a ball, have a hyper-spherical border and are therefore necessarily inseparable, it follows that the total volume of the wave-particle "quantum" structure is simply the sum $\phi^5 + 5\phi^2$. Inserting in Newton's kinetic energy one finds the result quoted above [137] [138] [139]. This agrees perfectly with the bijection formula and the dimensional function for $n = -1$, which gives $\phi^2$ [154] [155]. The measure, i.e., the length, of the complementary set is a trivial $1 - 0 = 1$; in other words, this empty set is a fat Cantor set [7] [154] [155]. Now let us look at the velocity in $D(0)$. This was established by the work of the notable Italian physicist L. Sigalotti to be $c = \phi$, which is not surprisingly the only non-zero quantity in the Cantor set. Next we would like to determine the acceleration corresponding to $v$, or say the acceleration analogous to Newtonian gravity on earth.
The quantity $5\phi^2/2$ could be interpreted as the topological empty set mass $m = 5$ multiplied with the acceleration $\phi^2$ (and the height 1/2), or via the volume of the 5D quantum wave empty set, namely the additive volume $5\phi^2$. In other words, we have a different mental picture leading to the same result [212] [426]. | 2,717.4 | 2016-11-29T00:00:00.000 | [
"Physics"
] |
Regenerative Medicine and Diabetes: Targeting the Extracellular Matrix Beyond the Stem Cell Approach and Encapsulation Technology
According to the Juvenile Diabetes Research Foundation (JDRF), almost 1.25 million people in the United States (US) have type 1 diabetes, which makes them dependent on insulin injections. Nationwide, type 2 diabetes rates have nearly doubled in the past 20 years, resulting in more than 29 million American adults with diabetes and another 86 million in a pre-diabetic state. The International Diabetes Federation (IDF) has estimated that there will be almost 650 million adult diabetic patients worldwide at the end of the next 20 years (excluding patients over the age of 80). At this time, pancreas transplantation is the only available cure for selected patients, but it is offered only to a small percentage of them due to organ shortage and the risks linked to immunosuppressive regimens. Currently, exogenous insulin therapy is still considered to be the gold standard when managing diabetes, though stem cell biology is recognized as one of the most promising strategies for restoring endocrine pancreatic function. However, many issues remain to be solved, and there are currently no recognized treatments for diabetes based on stem cells. In addition to stem cell research, several β-cell substitutive therapies have been explored in the recent era, including the use of acellular extracellular matrix scaffolding as a template for cellular seeding, thus providing an empty template to be repopulated with β-cells. Although this bioengineering approach still has to overcome important hurdles in regards to clinical application (including the origin of insulin-producing cells as well as immune-related limitations), it could theoretically provide an inexhaustible source of bio-engineered pancreases.
Keywords: pancreas bioengineering, extracellular matrix, stem cells, decellularization, regenerative medicine, organ bioengineering
INTRODUCTION
Diabetes is a syndrome characterized by an absolute or relative β-cell deficiency in terms of mass (Type 1 diabetes mellitus, T1DM) or function (Type 2 diabetes mellitus, T2DM). Both of these conditions result in impaired glucose homeostasis.
Diabetes has reached pandemic levels, afflicting over 300 million people worldwide (1) with a cost of care estimated around $176 billion/year in the United States alone (2). Furthermore, the costs resulting from chronic diabetes-related complications like cardio-vascular disease, nephropathy and retinopathy, are growing exponentially (2,3).
The present standard cure for treating patients with T1DM consists of daily exogenous insulin injections, whereas physical exercise, a specific diet, and oral hypoglycemic treatment are the first line of treatments for T2DM. However, exogenous insulin remains a suboptimal treatment, far from matching the precise regulation provided by native β cells. It has been estimated that fewer than 40% of patients are able to reach and maintain a euglycaemic state over a life-long insulin regimen (4).
Therefore, while insulin therapy can maintain acceptable glycemic levels and reduce diabetes-related complications, it is not a cure: the only real way to definitively treat diabetes is to restore the beta cell mass or the lost functionality of those cells.
Whole-pancreas transplantation has become the gold standard treatment to restore durable glycemic control and improve patient survival. However, as it is a major surgical intervention and requires life-long immunosuppression, this procedure is only proposed to selected patients (5), and is severely limited by organ shortage (6).
Replacement therapy using cadaveric islet transplantation has been proposed since the 1970s (7). The results of islet transplantation, initially providing an insulin-independence rate lower than 20% after 1 year, noticeably improved with the introduction of the Edmonton protocol (8), achieving glycaemia stabilization in 88% of patients after 1 year and 71% after 2 years (9).
Islet transplantation is performed with a transhepatic portal infusion, does not require a surgical procedure, and carries low morbidity. The major complications following islet transplantation are portal vein thrombosis and bleeding, for which emergency laparotomies are rarely necessary (10). For those reasons, islet transplantation is preferred to solid pancreas transplantation in fragile patients. In contrast, even though there are no direct randomized trials comparing the outcomes, the results in terms of insulin independence are slightly inferior to those of whole-pancreas transplantation (11).
In the recent past, the use of stem cells for T1D has expanded enormously, shifting from adult stem cells (mostly represented by bone marrow-derived hematopoietic and mesenchymal stem cells) to pluripotent stem cells (12). Progress has made possible the differentiation of embryonic stem cell (ESC) populations into functional β-cell clusters, providing a source of islets suitable for replacement therapy (13).
Regenerative medicine and tissue/organ engineering aim to improve the length and quality of patients' lives by regenerating, preserving or enhancing the original tissue/organ function. In this context, a variety of novel methods have been considered to address tissue/organ insufficiency, including stem cell-based therapies for the regeneration of damaged tissues and tissue/organ-engineered organs to replace tissue/organ function.
Additionally, a complementary work stream, known as cell-on-scaffold technology, aims at creating the "ideal" biological template to be repopulated with specific cells in order to obtain a functional bioengineered pancreas (14).
The aim of this document is to give an overview of the existing knowledge of current experimental strategies in the treatment of diabetes covered by the umbrella of regenerative medicine.
STEM CELLS AND DIABETES
Stem cell biology has offered fascinating solutions to restore the insufficient production of insulin resulting from the loss or dysfunction of pancreatic β-cells.
Theoretically, stem cells (embryonic stem cells, ESCs) can differentiate into functional β-cell populations following specific pathways and migrate to the damaged tissue in order to guarantee an appropriate β-cell mass (15). Alternatively, stem cells can be induced to differentiate in vitro into insulin-producing cells (3).
For both in-vivo and in-vitro approaches, the most important problem is choosing the best type of progenitor cell.
Despite a significant effort to produce translational results from bench to bedside, there is currently no cure for diabetes; moreover, as reviewed by Lilly et al. (1), each of the four stem cell types presents significant issues.
Embryonic Stem Cells
Though promising, the use of embryonic stem cells involves ethical constraints and a high risk of the development of teratomas (26).
In 2000, Soria et al. successfully isolated pancreatic insulin-producing cells (IPCs) produced by the introduction of the human insulin gene into mouse ESCs. Cells were then transplanted into the spleen of streptozotocin-induced diabetic animals, obtaining transient glycaemia normalization and body weight normalization within 1 week. Nevertheless, for unknown reasons, about 40% of ESC-implanted mice became hyperglycemic again within 12 weeks after the implantation (13).
In 2005, another group explored the capability of insulin-producing cells to reverse hyperglycemia using a streptozotocin (STZ)-induced diabetic NOD/SCID mouse model. Clusters formed by GFP-labeled ES insulin-producing cells were transplanted into the kidney sub-capsular space of diabetic mice (each cluster contained 100 to 150 insulin-positive cells). Cellular transplantation reversed the hyperglycemic state for 3 weeks, but the rescue failed due to immature teratoma formation (27).
Germline Stem Cells
Although pluripotent cells have been confirmed as a stem cell source using female germline stem cells, the production of functional β-cells still needs to be explored in-vivo (20).
Mesenchymal Stem Cells
Thus far, MSC treatment has been used to address the autoreactive host immune system in T1D. T1D is an autoimmune disease in which insulin-producing pancreatic β cells are destroyed by the autoreactive host immune system. To definitively cure T1D, this autoreactive host immune system must be first addressed before any attempts are made at islet replacement or regeneration. The immunomodulatory effect of MSCs has been explored in an attempt to prevent immune diseases in the past decades, but several issues remain unsolved.
Mesenchymal stem cell therapy continues to be a "mild" tool and may not be an efficacious treatment to reverse autoimmunity of T1D without the co-administration of immunosuppressive drugs (still necessary to prevent the acute autoimmunity reaction). The effects are incomplete and provisional, requiring chronic administration or additional therapies (28).
Furthermore, MSCs need the guidance of "homing" factors to reach the desired sites of action, but most homing factors, especially the homing factors directed at the pancreas, are still unknown. Finally, MSCs injected intravenously suffer from a "pulmonary first pass effect" and are likely to be sequestered in the lungs (29).
Induced Pluripotent Stem Cells
The use of iPS cells may be a suitable treatment option, allowing the use of pluripotent cells without manipulating embryos and offering the possibility of generating patient-specific cells. Moreover, recent data by Russel et al. demonstrated the possibility of using these cells to overcome not only the alloresponse but also the autoimmune reaction in type 1 diabetes. Indeed, genome editing of iPSCs demonstrated the capability of available technology to generate immunologically "invisible" cells that are able to escape immune-reactive cells (30). However, iPSCs have mutagenic potential with some reprogramming methods, and limitations for long-term transplant viability and functionality (31).
β CELLS FROM DIRECT REPROGRAMMING
The clinical application of stem cells for the cure of diabetes still has many roadblocks to overcome. For this reason, different groups of researchers have explored the direct reprogramming of non-β adult cells into insulin-producing cells in order to exploit the production of new bona fide β cells in-vitro.
This technology, known as transdifferentiation, is based on the misexpression of specific groups of master regulatory transcription factors able to control the transition from one progenitor cell state to the next, ultimately generating mature insulin-producing cells (32)(33)(34).
Pancreatic Endocrine α Cells
In order to investigate a new source of β-cells, the transdifferentiation of α-cells is attractive due to the common endodermal origin of β- and α-cells.
In 2009, the research group led by Collombat demonstrated in vivo the conversion of alpha cells into functional beta cells, either by the ectopic overexpression of the transcription factor Pax4 during development (35) or by the loss of Aristaless-Related Homeobox (Arx), restoring euglycemia in STZ-induced diabetic mice (36).
At the molecular level, the β-cell factor Pax4 works by inhibiting the α-cell master regulatory transcription factor Arx. Therefore, the absence of Arx alone is sufficient to switch α-cells to β-cells.
Pancreatic Exocrine Acinar Cells
Transdifferentiation protocols can also be applied to the pancreatic exocrine cellular component, which amounts to ∼98% of the whole adult pancreas.
Pancreatic acinar cells and pancreatic duct cells are the most represented exocrine cellular types.
It must also be highlighted that exocrine cells comprise the vast majority of cells discarded from all cadaveric donor pancreata during the traditional islet isolation process. If successful, reprogramming could take advantage of a large pool of cells for conversion to β cells that would otherwise not be used.
The overexpression of specific transcription factors such as insulin promoter factor 1 (PDX1), neurogenin-3 (NGN3), and musculoaponeurotic fibrosarcoma oncogene family A (MafA) has shown, after viral transfection, evidence of acinar-to-β conversion in Rag1−/− non-diabetic animals. This overexpression is not sufficient to reverse diabetes in STZ-induced mice, though it does partially correct the hyperglycemic state (38).
Hepatic Cells
Starting from the same embryonic origin as β-cells and sharing analogous glucose-sensing systems, hepatocytes have been targeted for transdifferentiation into pancreatic β-cells by genetic reprogramming (39).
This possibility was explored and validated for the first time in 2000 by Ferber S. and colleagues, who published the first reports of transdifferentiation of liver cells. This group treated mice with a recombinant adenovirus that induced the expression of endogenous PDX-1 and the expression of other β-cell markers, resulting in substantial insulin production (both hepatic and plasma immunoreactive insulin) (40).
ENCAPSULATION TECHNOLOGY
Two new promising insulin delivery technologies are under investigation: micro- and macro-encapsulation devices.
Regardless of the capsule size (micro- or macro-), this approach aims to wrap the islets in a biocompatible membrane that permits the diffusion of nutrients while shielding the islets from larger molecules, including antibodies and immune cells.
Proposed by ViaCyte (http://viacyte.com), a privately held regenerative medicine company developing novel cell replacement therapies, PEC-Direct TM and PEC-Encap TM (VC-01 TM ) are the first- and second-generation β-cell-derived product delivery technologies.
PEC-Direct TM is a macrodevice designed to allow direct vascularization of specific pancreatic progenitor cells (referred to as PEC-01 TM ), guaranteeing their maturation and in-vivo differentiation into insulin-producing cells (41,42).
Despite the potentially groundbreaking results, one of the major limitations of the product is the direct contact of the transplanted cells with the circulating cells of the host, implying the need for immunosuppressive therapy.
PEC-Encap TM (VC-01 TM ) aims to overcome this problem by incorporating Encaptra TM technology, a fine permeable film that permits surface vascularization and diffusion across the membrane in order to avoid contact between the transplanted and host cells. This improvement supplies an immunological protection that theoretically avoids the requirement of life-long immunosuppression.
ViaCyte received the approval from the U.S. Food and Drug Administration (FDA) in August 2014 to begin evaluation in human clinical trials. The PEC-Encap clinical trial (STEP ONE trial) is currently evaluating basic safety and tolerability in patients with type 1 diabetes in Canada and the USA.
On August 1st 2017, the company presented the first patient implanted with PEC-Direct TM .
ENGINEERING A 3-D NICHE FOR ISLETS: SYNTHETIC AND PREFABRICATED SCAFFOLDS
In their original milieu, islets are surrounded by pancreas-specific ECM proteins, usually composed of interstitial matrix and basement membrane proteins such as collagen type IV, laminin, and fibronectin. During isolation, islets are deprived of this matrix, which results in a loss of graft function (43) due to the crucial role that the matrix plays in islet survival, function, and proliferation (44).
The idea to house islets within a synthetic biomaterial at alternate transplantation sites is a potential therapeutic option. This concept is based on the goal of engineering a threedimensional platform able to provide a non-toxic environment for seeded cells and recreate the native physiological milieu.
Different fundamental requisites for cellular transplantation have been identified, including porosity, biocompatibility, the ratio between surface area and volume, and a suitable environment for new tissue formation that can integrate with the surrounding tissue (45).
The porosity of the scaffold is considered to be a crucial property for cellular vitality, guaranteeing the effective delivery of oxygen and nutrients to cells. Micro- and macroporous synthetic scaffolds are characterized by pores with a diameter under or over 50 µm, respectively. This approach has numerous advantages: the choice of biomaterials to use is wide, many biomaterials provide a relatively precise and repeatable assembly at the micrometer level, and the material used can be loaded or cross-linked with numerous molecules in the attempt to augment cellular functionality (46,47).
The group tested other polymers (poly(ethylene oxide terephthalate)/poly(butylene terephthalate) (PEOT/PBT) and polysulfone), but concluded that the PDLLCL-based scaffold was the only polymer that supported in vitro rat islet survival. After the in vitro testing, islets seeded in a PDLLCL scaffold were finally transplanted into a diabetic rat model, resulting in normoglycemia within 3 days and for the duration of the 16-week study period (47).
A macroporous scaffold made of poly(dimethylsiloxane) (PDMS) has been explored by Pedraza et al. (51). PDMS scaffolds were prepared using the solvent-casting and particulate-leaching procedure and were pre-conditioned for islet loading via washing with islet culture media. For in vitro investigations, each scaffold was loaded with 1,500 rat or human islet equivalents (IEQ).
When seeded on a PDMS-based scaffold, islets showed enhanced in vitro viability compared to 2D culture controls under low oxygen tensions. The in vivo effectiveness of the scaffolds to support rat islet grafts was assessed after transplantation in the omental pouch of streptozotocin-induced diabetic syngeneic rats (in this case 1,800 IEQ were used), achieving normoglycemia with a mean reversal time of 1.8 ± 1.3 days.
These data have paved the way for the use of a pre-fabricated scaffold transplanted in the omental pouch in clinical practice. In 2017, Baidal described the use of a biological scaffold seeded with autologous islets to treat a 43-year-old patient with a 25-year history of type 1 diabetes. The authors reported a laparoscopic islet transplantation (for a total of 602,395 IEQ) onto the greater omentum in a degradable biologic scaffold composed of alternating layers of islets, recombinant thrombin (Recothrom) and autologous plasma. In this patient, euglycemia was restored and insulin independence achieved, but data from long-term follow-up are still missing (52).
Although these results are promising, several issues should still be addressed. As suggested by Pellicciaro et al. (53), we need to know how long transplanted islets remain in hypoxic conditions, and how rapidly islets are re-vascularized. This question is still unanswered due to the fact that the oxygenation conditions of transplanted islets in the omentum are still debated.
EXTRACELLULAR MATRIX SCAFFOLD AND PANCREAS BIOENGINEERING FOR DIABETES TREATMENT: TISSUE ENGINEERING AND REGENERATIVE MEDICINE (TE/RM) APPROACH
In contrast with solid pancreas transplantation, treatment with islets frequently requires multiple injections to reach the minimal functional mass able to grant insulin independence. Furthermore, the long-term exhaustion of the transplanted islets often requires additional transplantations in order to maintain the results. Therefore, multiple pancreas donations often go to one single patient, making the treatment extremely resource-consuming (54).
Host instant blood mediated inflammatory reaction (IBMIR) (55,56), lack of revascularization of the site of implantation in the early post-transplant phase (57) and recurrence of autoimmunity (58) are common examples of the many other hurdles affecting the outcome of islet transplantation.
As discussed above, encapsulation has been developed to overcome some of these issues. However, the complete isolation of the islets from the surrounding tissue is deleterious because it does not allow for normal exchange with the surrounding milieu. The extreme effect of this isolation is "anoikis," defined as programmed cellular apoptosis induced by inadequate or inappropriate cell-matrix interactions (59,60).
In this scenario, the "ideal" islet transplantation procedure would also provide an extracellular matrix environment that promotes isolated β-cell mass survival and function (61,62).
Due to the importance of the three-dimensional extracellular matrix (ECM), a relatively new technology is under evaluation to produce acellular ECM-based scaffolds that can be repopulated with patients' autologous cells.
The cell-on-scaffold technology stems from the ability to strip the cellular component from a tissue or a solid organ using a technique known as decellularization. The result is a three-dimensional acellular template composed only of the ECM. This structure maintains the original macro- and micro-architecture of the organ as well as all the biochemical cues of the ECM itself (63).
The goal is to produce an endless source of transplantable organs through the repopulation (recellularization) of biological scaffolds of animal origin, giving rise to a totally new era in the field of transplantation (64). This approach has been successfully applied in experimental models involving insulin-producing cells. De Carlo et al. proposed one of the first experimental models in 2010 (65), seeding an acellular matrix of pancreatic and hepatic scaffolds with rat islets. The final construct was implanted in streptozotocin-induced diabetic rats that showed reduced exogenous insulin requirements. In a mouse model, Goh et al. proposed a perfusion-based decellularization protocol able to produce an acellular pancreas-specific template (66). The whole pancreas was harvested and perfused through the portal vein with a specific detergent to completely remove the parenchymal cellular counterpart. For recellularization, AR42J (acinar cell line) and MIN-6 β-cells were selected. Acinar cells were seeded directly into the pancreas by retrograde perfusion through the pancreatic duct, whereas MIN-6 β-cells were introduced through the hepatic vein via a multistep injection. The obtained construct was then maintained in static culture for 5 days. The recellularization showed the engraftment of both cell types, with an apoptosis rate of 18% and the preservation of insulin expression.
More recently, this protocol has been optimized by comparing different strategies for perfusion-based decellularization procedures (67) in a rat model. Three different perfusion routes (arterial, venous, and pancreatic) were tested for decellularization as well as for repopulation with islets. Although no significant differences were observed between the groups in obtaining the acellular scaffold, the pancreatic duct was shown to be the best route to repopulate the scaffold, as it avoided extra-parenchymal leakages and showed an 80% seeding efficiency.
In order to achieve clinically relevant sized scaffolds, porcine models were investigated as a platform for pancreas bioengineering (68).
The decellularization protocol involved the perfusion (0.75 L/h) of a single detergent (1% Triton X-100) through the pancreatic duct for 12 h. After a final rinse with sterile PBS, the scaffold was seeded with two different cell types: (1) human amniotic fluid-derived stem cells (hAFSC), to assess the cellular compatibility of the acellular pancreas ECM, and (2) porcine islets, to demonstrate the potential of the ECM to support pancreatic function.
Finally, by using a metabolic assay, an increase in the metabolic rate of seeded islets was observed between days 3 and 7, with insulin secretion higher than that of isolated, non-seeded islets. Furthermore, it was noticed that after 72 h, islet insulin production was pulsatile under basal and high glucose conditions. This research corroborated the theory that a porcine pancreas can serve as a platform for insulin-producing bioengineered tissue. Porcine pancreata can be harvested and decellularized while retaining the native architectural and biochemical cues. The resultant ECM can be seeded with the critical mass of islets required to meet insulin requirements.
The temperature of the detergent plays a central role in the preparation of the scaffold, notably in the biological quality of the final ECM. As described by Sumitran-Holgersson et al. (69), a three-step infusion of 4 °C sodium deoxycholate, Triton X-100 and DNase for the production of an acellular porcine scaffold was more effective than normothermic decellularization. Interestingly, these scaffolds were seeded with human fetal pancreatic stem cells (hFPSCs), supporting both the endocrine and exocrine functions under static culture conditions. The endocrine properties of the seeded cells were evaluated by the synthesis of C-peptide, glucagon and the expression of PDX1, whereas the exocrine capacity of the tissue was assessed by α-amylase secretion levels.
Recently, human pancreata discarded for clinical transplantation were used to produce acellular, extracellular matrix-based scaffolds (70). This study successfully demonstrated that a whole human pancreas could be decellularized through a detergent-based perfusion protocol with the clearance of cells and HLA class I and II antigens. The scaffolds obtained this way showed the preservation of the ECM architecture (at all hierarchical levels), as well as its composition and mechanical properties. In addition, this study showed that cell proliferation, glucose metabolism, and several growth factors (key players in essential pathways such as angiogenesis) are retained within the 3D structure of the acellular pancreatic matrix. Finally, the authors showed that the pancreatic scaffold inhibits naïve CD4+ cell proliferation, promotes their apoptosis, and induces their conversion into T regulatory cells (T-regs), suggesting immunomodulating properties of the model.
As a corollary, if successful, the use of human pancreata as a source of acellular scaffolds could provide an allogeneic platform on which to repopulate endocrine cell populations. In terms of numbers, this alternative to animal sources would allow the use of the roughly 30% of pancreata in the US that are originally harvested for transplantation purposes but ultimately discarded (71,72).
The ECM can also be incorporated in multicellular spheroids together with different cell types, enhancing their regenerative properties. Human amniotic epithelial cells (hAECs) obtained from placental samples are broadly accessible and have immunomodulatory, anti-inflammatory and regenerative abilities (73,74). In their preliminary data, Lebreton et al. produced heterospheroids consisting of rat islets and hAECs (at a ratio of 1:1) that were ultimately implanted under the kidney capsule of diabetic severe combined immunodeficiency (SCID) mice. Blood glucose tests and IPGTTs revealed an enhanced glycemic control compared with control animals. In this setting, the cumulative percentage of animals reaching normoglycemia was 74% in the islet + hAEC group vs. 26% in the islets-alone group. Although preliminary, these data suggest that hAECs could have a meaningful potential to protect islet cells and could be utilized to improve islet cell survival and function prior to transplantation when incorporated in heterospheroids.
According to the classical definition of the "ideal biomaterial," it would have adequate biomechanical properties, a low risk of disease transmission and would not elicit an unfavorable immune response (75). Theoretically, an ECM-based scaffold could be considered particularly suitable, but its immunogenicity has to be considered one of its major drawbacks.
Based on a plethora of factors and parameters, including graft complexity or the amount of specific protein families, the whole graft can generally be considered immunogenic or antigenic, and the ECM is no exception. This point was highlighted and discussed in 2006 (76). The author reviewed the common use of porcine and bovine biomaterials (e.g., Veritas®, CuffPatch™, TissueMend®) as clinical examples of acellular non-autologous biological structures without reporting any adverse immunological reactions. In this context, he also reported numerous studies regarding the presence of small amounts of the gal-epitope (galactose-α-1,3-galactose epitope) following engraftment, which were unable to induce complement activation or a cell-mediated rejection (77,78). The gal-epitope has been deeply studied in this field, and non-gal-epitopes (such as α-enolase or E-cadherin) also seem to be actively involved in the immune reaction to xenogeneic grafts (79).
As suggested by Wiles et al. (80), the decellularization process, by removing the antigens responsible for an immune response, could minimize both acute and chronic rejection. However, decellularized tissue may still provoke an immune response owing to the presence of damage-associated molecular pattern proteins (DAMPs) (81). DAMPs, which are actively secreted during cellular injury and necrosis, are well represented not only in native cellular tissue but also in the acellular scaffold. Their presence upregulates HMGB1 (high-mobility group box 1), a highly abundant, ubiquitous protein that can promote the pathogenesis of inflammatory and autoimmune diseases once it reaches an extracellular location (82).
Immunoisolation could be a solution to separate the xenogeneic scaffold from the surrounding environment. Immunocloaking has provided important preliminary data: the organ is perfused with a nanofilm (called ImmunoCloak) made of subendothelial ECM able to camouflage the antigenic components. This nanobarrier guarantees the permanent exchange of oxygen and metabolites between the systemic bloodstream and the transplanted organ while masking antigens. Using this solution, Brasile et al. delayed the onset of rejection significantly, from 6 days for untreated kidneys up to 30 days for treated "ImmunoCloaked" kidneys (83). Even with significant limitations (above all the fact that the membrane degrades after 1 month), it is reasonable to expect that this technology could be used to mask the residual immunogenicity of the acellular scaffold (84).
EXTRACELLULAR MATRIX AND STEM CELLS AS POTENTIAL COMBINATION
Cell-matrix interactions have been proven to be essential not only for cellular proliferation, but also for differentiation (85) and physiological function (86). Based on this evidence, the idea is to use the organ-specific environment created by the ECM to offer the implanted cells the best growing conditions and thus obtain (in vitro) a transplantable organoid.
Specifically, the relationship between the ECM and islets has been deeply investigated, although the exact role of the extracellular matrix in the functional endocrine system remains unclear.
It has been shown that the ECM plays a pivotal role in the formation of the correct stem cell niche by specific sequences of interplays (87).
Recently, an international Lancet commission has evaluated the use of stem cells, underlining the most important hurdles that are currently in the spotlight (88).
They redefined the use of stem cells both for TE/RM and for cellular therapy, highlighting the lack of understanding and the importance of a solid scientific awareness, which in some cases has failed to fulfill the original promise: to help the patient (89).
CONCLUSIONS
The pancreatic endocrine component is an interesting arena for regenerative medicine and cell therapy. Although still in its early days, the evolution of TE/RM and the study of stem cell biology are leading to innovative treatments in the therapeutic field.
In this text, we reviewed studies reporting successful strategies for pancreatic tissue engineering, which are based on stem cells, islet encapsulation, and scaffold technologies. Fueled by the encouraging results, we propose that the combination of TE/RM and the stem cell approach could lead to the creation of a bioartificial pancreas. Although still far from becoming a clinical reality, the potential application of this futuristic approach is almost unlimited.
AUTHOR CONTRIBUTIONS
AP, AC conceived, designed and wrote the manuscript. TZ, LC, and CMB participated in the review draft design, provided the appropriate literature, revised it critically and approved the final version. AK-Q revised the English, revised critically the manuscript and approved the final version. EB wrote the paragraph regarding the encapsulation technology, revised critically the manuscript and approved the final version. AA, LP, TB, CT and GO conceived, designed and supervised the whole manuscript, revised it critically and approved the final version.
"Medicine",
"Biology"
] |
An Object Detection Method Using Wavelet Optical Flow and Hybrid Linear-Nonlinear Classifier
We propose a new computational intelligence method using wavelet optical flow and a hybrid linear-nonlinear classifier for object detection. With the existing optical flow methods, it is difficult to accurately estimate moving objects with diverse speeds. We propose a wavelet-based optical flow method, which uses wavelet decomposition in optical flow motion estimation. The algorithm can accurately detect moving objects with variable speeds in a scene. In addition, we use the hybrid linear-nonlinear classifier (HLNLC) to classify moving objects and static background. HLNLC transforms a nonoptimal scalar variable into its likelihood ratio and uses a scalar quantity as the decision variable. This approach is appropriate for the classification of optical flow feature vectors with unequal covariance matrices. The experimental results confirm that our proposed object detection method has improved accuracy and computational efficiency over other state-of-the-art methods.
Introduction
In modern engineering, research and design requirements are increasingly met with the help of intelligent models. Computational intelligence (CI) has emerged as a powerful tool for information processing, decision making, and knowledge management [1]. CI is a set of nature-inspired computational methodologies and approaches to address complex real-world problems to which traditional approaches are ineffective [2,3]. In this paper, we propose a new computational intelligence method using wavelet optical flow and a hybrid linear-nonlinear classifier (HLNLC) for object detection. Object detection can be subcategorized as either the detection of objects that share similar characteristics [4,5] or the detection of a specific object in a video sequence [6,7]. Our paper is focused on the second category.
One important task in object detection is motion estimation. Optical flow is one commonly used approach to estimate object motion. Starting with the original algorithms by Lucas and Kanade (LK) [8] as well as Horn and Schunck (HS) [9], gradient-based methods have led to other improved optical flow estimation methods. However, when the image background is cluttered or the detected object is moving at high speed, the accuracy of gradient-based methods decreases significantly [10]. Another important task in object detection is classification. Classifier techniques such as the pulse-coupled neural network (PCNN) [11], the fuzzy neural network (FNN) [12], the Gaussian SVM (GSVM) [13], and linear discriminant analysis (LDA) [14] have been applied to different object detection situations. In PCNN and FNN schemes, detection accuracy may be coupled with small training errors because each image pixel is associated with a unique neuron and vice versa. In the SVM classifier approach, the classifier is characterized by excessive complexity when it comes to the binary classification task of identifying moving objects versus static background. The LDA classifier is a more robust linear classifier which has proven itself as the ideal observer for input feature vectors with equal covariance matrices [15], but it is not the optimal choice for data that contain multivariate optical flow vectors with unequal covariance matrices.
Our work considers both factors. We propose a new object detection method using wavelet-based optical flow and a hybrid linear-nonlinear classifier. Some wavelet-based optical flow estimation approaches have been proposed: Wu et al. [16] used wavelets to model and reconstruct flow vectors, but the estimation is repeated at each iteration, which reduces efficiency. In [17], Bernard assumed that the optical flow was locally constant. In [18], Srinivasan and Chellappa proposed a similar method which modeled the optical field using a set of overlapping basis functions. In [17,18], the optical flow computation has been simplified, at the cost of compromised accuracy, especially when several objects with different speeds exist in one scene. In this paper, we use wavelet calculus to compute derivatives of the functions in terms of the scaling expansion coefficients. Our proposed method achieves accelerated optical flow computation and accurately estimates the motions of objects moving at different speeds in the same scene.
In the binary classification of distinguishing moving objects versus static background, a linear classifier is a more robust choice than existing methods [19]. Linear discriminant analysis (LDA) is the most popular linear classifier. LDA produces accurate classifications for two types of input feature matrices: those with normal distributions and those with equal covariance. However, it has trouble with vector data that have unequal covariance. Thus, we use the novel hybrid linear-nonlinear classifier (HLNLC). HLNLC was proposed by Chen et al. in 2010 [20]. It has been shown to be more robust than LDA and other existing classifiers.
This paper is organized as follows. Section 2 describes the wavelet-based optical flow motion estimation method. Section 3 introduces the hybrid linear-nonlinear classifier. Section 4 describes the rectangle window scan algorithm. Section 5 describes the experimental results. Our conclusions are presented in Section 6.
Wavelet Based Optical Flow
2.1. Gradient-Based Optical Flow. Gradient-based optical flow algorithms are based on the assumption of constant brightness [8,9], where it is assumed that the gradient value of a pixel will not vary due to displacement [21]. It can be described as

I_x v_x + I_y v_y + I_t = 0.

Here, I(x, y, t) represents the brightness value of pixel (x, y) at time t; I_x, I_y, and I_t are the partial derivatives of I(x, y, t) with respect to x, y, and t. The variable (v_x, v_y) is the velocity vector of the optical flow estimate at (x, y, t), and ∇I is the gradient operator applied to I(x, y, t). The brightness constancy assumption states that the motion vectors are constant within small windows and that the image sequence I(x, y, t) will not change significantly during a short period of time.
This assumption can be expressed over a local window. Based on (1) and (2), the flow algorithm produces two simultaneous equations for the velocity components v_x and v_y:

(Σ I_x I_x) v_x + (Σ I_x I_y) v_y = -Σ I_x I_t,
(Σ I_x I_y) v_x + (Σ I_y I_y) v_y = -Σ I_y I_t,

where Σ I_x I_x, Σ I_x I_y, Σ I_y I_y, Σ I_x I_t, and Σ I_y I_t are the products of the partial derivatives accumulated over a time range. In gradient-based optical flow algorithms, object displacement between successive frames determines the accuracy of optical flow estimation, because the assumption that there will be no major changes between successive frames breaks down when the displacement between frames is significant. In order to improve the accuracy of optical flow estimation, displacements between video frames should be projected and recalculated at different frame rates, which means that the algorithm should be able to adaptively adjust the velocity components for different objects moving with different speeds.
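For illustration, the window-based least-squares solution above can be sketched in a few lines of Python (a minimal, illustrative implementation assuming NumPy and grayscale frames as 2D arrays; the function and parameter names are ours, not from the original paper):

import numpy as np

def lucas_kanade(frame1, frame2, window=15):
    """Estimate a dense optical flow field with the window-based
    least-squares formulation sketched above."""
    # Spatial and temporal derivatives of the brightness I(x, y, t)
    Iy, Ix = np.gradient(frame1.astype(float))
    It = frame2.astype(float) - frame1.astype(float)

    half = window // 2
    h, w = frame1.shape
    flow = np.zeros((h, w, 2))

    for y in range(half, h - half):
        for x in range(half, w - half):
            # Accumulate products of derivatives over the local window
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()
            A = np.array([[np.sum(ix*ix), np.sum(ix*iy)],
                          [np.sum(ix*iy), np.sum(iy*iy)]])
            b = -np.array([np.sum(ix*it), np.sum(iy*it)])
            if np.linalg.matrix_rank(A) == 2:        # well-conditioned window
                flow[y, x] = np.linalg.solve(A, b)   # (v_x, v_y)
    return flow

In practice the window size trades off sensitivity to small motions against the aperture problem, which is one reason for moving to a multiscale formulation when objects with very different speeds share the scene.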
Wavelet Based Optical Flow Estimation.
We apply the wavelet transform to optical flow estimation. The wavelet transform is an important tool for signal processing, and its magnitude will not oscillate around singularities as the transform magnitude is locally nearly shift invariant [22]. To apply the wavelet decomposition to optical flow estimation, we transform the optical flow equation into the following expression [23]. Suppose the image size is N × N pixels. Then the optical flow vector [u(x, y), v(x, y)] can be expressed as an expansion over scaling basis functions with weighted coefficients u_{i,j} and v_{i,j}. Once u_{i,j} and v_{i,j} are determined, the optical flow estimation is accomplished [24]. Thus, we transform the optical flow estimation into a calculation that involves 2N² node variables, where u_{i,j} and v_{i,j} minimize the objective function (5).
The products of the spatial derivatives, and the products of the spatial and temporal partial derivatives, are computed with the time variable taken from the variable frame rate. The shortest frame interval and the interval of multiplications are set so that the product of two signals within an interval can be evaluated, and the partial derivatives of I(x, y, t) are obtained via two difference equations. Within each time interval, we compute the optical flow partial derivatives using (7). In our algorithm, the signals are replaced by their approximate products in order to improve computation accuracy. In this work, continuous displacements between image frames are considered to be very small; specifically, the derivatives are treated as constants within each time interval, so the signals can be replaced with their approximations. The amplitude of the optical flow is estimated by considering the previous frame. The adaptive parameter is determined by the optical flow estimation from the previous frame and is used to estimate the pseudo-variable frame rate. When computing the product within a frame interval, this parameter is adjusted automatically according to the speed of the detected object: for objects moving at high speed it is small, while for slow-moving objects it is larger. Furthermore, the velocity components may take on different values at any pixel location because they may exhibit spatial variability in the optical flow estimation algorithm. By using the wavelet transform, we can rewrite (5); Figure 1 depicts the application of the wavelet transform to optical flow estimation.
The optimal coefficients (u_{i,j}, v_{i,j}) can be determined from the corresponding expression, where i, j = 0, 1, ..., N − 1. The sparse representation is written as a linear system, and the matrix obtained by the wavelet transform has size 2N² × 2N². The combination of the wavelet transform and optical flow defines moving objects via a sparse linear representation in the defined structure. The wavelet algorithm collects all the information from the optical flow estimation and stores it in this matrix. Once the linear system has been solved, an optimal and sparse set of coefficients (u_{i,j}, v_{i,j}) has been determined. Our method transforms the optical flow estimation into the problem of minimizing an energy function; determining the coefficients yields accurate optical flow vectors.
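The exact scaling-coefficient expansion used in the paper is not reproduced here, but the general idea of working on wavelet (low-pass) approximations of each frame can be sketched as follows (an illustrative fragment assuming the PyWavelets package and NumPy-array frames; it is an analogue of the multiresolution step, not the full algorithm):

import pywt  # PyWavelets

def scaling_pyramid(frame, levels=2, wavelet='haar'):
    """Build a pyramid of approximation (scaling) coefficients,
    finest resolution first, by repeated single-level 2D DWT.
    Fast-moving objects produce smaller displacements at coarse
    levels, which is what makes a multiscale estimate attractive."""
    pyramid = [frame.astype(float)]
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(pyramid[-1], wavelet)
        pyramid.append(cA)  # keep only the low-pass (scaling) band
    return pyramid          # [full resolution, level 1, level 2, ...]

# Example use: estimate flow on the coarsest approximation first (e.g. with
# lucas_kanade() from the sketch above), then refine on the finer levels.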
Classification
3.1. Hybrid Linear-Nonlinear Classifier. The novel hybrid linear-nonlinear classifier (HLNLC) divides the traditional binary classification into two stages: in the first stage, a linear function combines the input feature vector into a scalar variable, and in the second stage, the scalar variable is transformed into a decision variable [25].
In a two-class sorting approach, a particular data set is divided into positive and negative parts. Suppose the feature vector is given by x = (x_1, x_2, ..., x_n), representing the joint outcome of n random variables. The corresponding distributions are described by a multivariate normal distribution for the positive class (with mean vector μ_+ and n × n covariance matrix Σ_+) and a multivariate normal distribution for the negative class (with mean vector μ_- and n × n covariance matrix Σ_-), each with its own probability density function (PDF). In the first stage of the HLNLC, the feature vector x is mapped into a scalar variable y through a linear combination. Because x follows a multivariate normal distribution and y is a linear combination of x, y follows a normal distribution under each of the two classes. The parameters of these distributions can be expressed in terms of the linear coefficient vector v and the other related input parameters.

Figure 2: Rectangle window scan method.
In [26], the classifier is improved by projecting the multivariable classification problem onto a two-density distribution interval. In the second stage of the HLNLC algorithm, the likelihood ratio of y is used as the decision variable. Based on binormal ROC theory, the corresponding AUC can be expressed through the cumulative distribution function (CDF) of the standard normal distribution. Using the class-conditional parameters of y, we may express the AUC as a function of the linear coefficient vector v. In order to find the optimal linear function of the HLNLC, which mainly concerns v and AUC_HLNLC, the algorithm states the optimization problem as ∂AUC_HLNLC/∂v = 0, which can be solved with gradient-based mathematical methods.
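A compact numerical sketch of the two-stage decision and of an AUC estimate is given below (Python with NumPy/SciPy; the closed-form binormal AUC expression referred to in the text is replaced here by a rank-based estimate, and all names are illustrative assumptions):

import numpy as np
from scipy.stats import norm

def hlnlc_decision(X, v, mu_p, cov_p, mu_n, cov_n):
    """Two-stage HLNLC-style decision: (1) project features onto a linear
    axis v, (2) use the likelihood ratio of the projected scalar as the
    decision variable (class-conditional normals assumed)."""
    y = X @ v                                     # stage 1: linear projection
    m_p, s_p = v @ mu_p, np.sqrt(v @ cov_p @ v)   # projected positive-class stats
    m_n, s_n = v @ mu_n, np.sqrt(v @ cov_n @ v)   # projected negative-class stats
    # stage 2: likelihood ratio of y (monotone transform -> decision variable)
    return norm.pdf(y, m_p, s_p) / norm.pdf(y, m_n, s_n)

def empirical_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive sample outranks a negative one."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return float(np.mean((pos > neg) + 0.5 * (pos == neg)))

The empirical AUC equals the Mann-Whitney statistic, i.e., the probability that a positive score outranks a negative one, which is the quantity maximized over v when searching for the optimal linear coefficient vector.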
Classification of Optical Flow Vectors.
In this paper, the linear coefficient vector v for a 2D optical flow vector (u_{i,j}, v_{i,j}) is determined by a single angle parameter, which can be understood as the angle between the two vector coordinates, and any 2D optical flow vector can be normalized accordingly. The operating assumption in the HLNLC algorithm is that the two types of feature data follow a pair of multivariate normal distributions. However, the optical flow distributions deviate slightly from this assumption, so we implement a more robust method: the pair of optical flow vector variables is normalized using bivariate normal distributions that account for the feature covariance of the optical flow (u_{i,j}, v_{i,j}). Using this method, (u_{i,j}, v_{i,j}) is normalized to the normal distribution. The class-conditional parameters that concern the HLNLC algorithm can be obtained with (18), in which positive values indicate motion areas and negative values represent background. AUC_HLNLC can be calculated using (23), and the related parameters are used to produce an ROC curve and calculate AUC_HLNLC.
Rectangle Window Scan
In order to detect moving objects, a rectangle window of fixed size should be determined. We propose a rectangle window scan algorithm: in the scanning process, the classified optical flow vectors attributed to the moving object are the input variables, and the rectangle window is shifted by a fixed number of pixel locations in each direction per unit time. The operation is repeated until the rectangle window size is less than the preset threshold value. The detection area marked by the rectangle window is the final output. Our proposed method can detect multiple moving objects in one scene; however, the iterative process may be time consuming. Figure 2 illustrates the details of the rectangle window scan algorithm.
In the method, the j-th focused area on the i-th scan line is denoted by F(i, j), and the related rectangle window is recorded as RW(i, j). Initially, the scan area RW(1, 1) is obtained by a normal adaptive modification calculation method. Thereafter, the rectangle window shifts to the right by a fixed step (10 pixels in this paper), and RW(1, 1) is shifted to RW(1, 2). There is no need to recalculate the overlap region of an integral scan because the two adjacent regions RW(1, 1) and RW(1, 2) share the same overlap region. For the remaining horizontal scan lines, F(i, j) describes the region of concern which lies below F(i − 1, j). The upper and lower boundaries of the non-overlapping region are defined accordingly.
Based on the distribution principle for motion vectors, less than 50% of the motion vectors may be zero vectors. We use the sum of absolute gradient difference (SAGD) to judge the motion vector values. Depending on the current location of the rectangle window, some adjacent windows may not exist, in which case the available neighbouring windows are used. For the remaining areas F(i, j), with i, j = 2, 3, ..., the moving object region is identified from the SAGD computed over the corresponding windows.
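A simplified version of this scan can be written as follows (illustrative Python; the 10-pixel step matches the paper, while the window size and SAGD threshold are placeholder values):

import numpy as np

def sagd(window):
    """Sum of absolute gradient differences over one window of the
    classified flow-magnitude map (larger values indicate more motion)."""
    gy, gx = np.gradient(window.astype(float))
    return np.sum(np.abs(gx)) + np.sum(np.abs(gy))

def window_scan(motion_map, win=32, step=10, thresh=50.0):
    """Slide a rectangle window over the motion map in steps of `step`
    pixels and keep windows whose SAGD exceeds `thresh`."""
    h, w = motion_map.shape
    detections = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            patch = motion_map[top:top + win, left:left + win]
            if sagd(patch) > thresh:
                detections.append((top, left, win, win))  # (y, x, h, w)
    return detections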
Experimental Results
In this section, we validate the performance of our proposed algorithm on four different videos. All of the source videos were presented in a consistent format (MPEG-2 standard, 25.68 frames per second (fps)). Video (a) was produced with cameras where high-speed moving cars appear in both close and distant scenes; it has 996 frames at 768 × 576 resolution. Video (b) is a standard video compression sequence known as the coastguard sequence, where a video camera is fixed on a moving boat so it appears that the background is moving; it has 876 frames at 768 × 576 resolution. Video (c) is a spatial satellite video sequence, and the satellite is the detected object; it has 1025 frames at 720 × 480 resolution. Video (d) is a human-motion video in which the motion is a human running; it has 825 frames at 720 × 480 resolution. These four videos are commonly used test videos, in which pedestrians, vehicles, and satellites are typical detected targets. Comparison experiments with other state-of-the-art object detection algorithms demonstrate the efficiency of our proposed method. Specific steps are shown in Figure 3.
Optical Flow Experiments.
We use a constant value to determine the time interval between video frames; a parameter value of 0.64 pixel/ms was found to be suitable. Each frame was processed with subsampling and rescaled for the wavelet estimation. Two important steps are involved: (1) normalization of the optical flow vectors and computation of the matrix, and (2) computation of the sparse representation. We compare our proposed method with LK, HS, and occlusion-aware optical flow (OAOF) [27]. LK and HS calculations take almost the same time because they involve similar computation methods; OAOF takes a little longer. Our proposed wavelet optical flow method has an improved computation time, with a reduction of nearly 5-6 s over LK and HS and 10-14 s over OAOF. This clearly demonstrates the efficiency of using the wavelet transform in optical flow estimation.
Classification Experiments.
In the first stage of the HLNLC algorithm, the optimal structure parameter is chosen as 0.64, and the classifier undergoes 20 iterations. In order to obtain a dense optical flow field, our method uses a global smoothness constraint, so each optical flow vector has a spatial connection with respect to the high-speed moving object. Figure 5 shows the initial classification results. Because background information and moving object features have similar characteristics, motion regions adhere to each other and empty holes emerge; clearly, the classification result is not very satisfactory.
In the second stage, the classifier uses a decision variable to deal with topology changes in the detection region. Figure 6 shows the classification results for video sequences (a) and (b). Motion areas in video (a) contain queues of cars, which belong to the close scene and the distant scene, respectively. Because the cars in the distant scene are too wide, some background information has also been classified as motion region. Optical flow vectors generated from video (b) reflect background information. We employ a trust-region Newton-Raphson method to solve this problem [28]. This method separates moving objects from the background by making judgments about the most meaningful trust-region in the sequences based on the maximum likelihood ratio.
We are interested in comparing the classification performance of HLNLC and other classifiers, including PCNN, FNN, GSVM, and LDA. We specified the population parameters of a pair of normal distributions and drew samples from the specified distributions. Then, the different classifiers were applied to each optical flow feature vector in the sampled data. The comparison results are shown in Table 2. M&SD are the mean and standard deviation of the motion estimates. The function Φ is the cumulative distribution function of the standard normal distribution. Positive values indicate motion areas, and negative values represent background. The AUC equals the probability that a sample from the actual positive class is ranked above one from the negative class; a greater AUC means higher classification accuracy. Based on the AUC, we were able to calculate the detection rate (DR), which is defined as the ratio between the total moving object region and the background area.
Our experiments use the mean AUC for each sequence, as the sequences have different numbers of frames. Neural network algorithms (PCNN/FNN) give rise to proliferation errors at the edges of moving objects and have larger deviations in their classification results. GSVM can directly classify data without using PCA, so its result is better than what is produced by the NN methods. For LDA and HLNLC, we calculate the AUC by inserting the trained LDA vectors, and AUC_HLNLC is found to be superior to AUC_LDA. Among all the methods, we observe that the HLNLC can substantially improve classification performance over the other classifiers. M&SD, AUC, and DR curves obtained by the different methods are shown in Figure 7.
Rectangle Window Scan Experiments.
In this part, we used three other object detection methods for comparison: SIFT, background subtraction (BS), and the Hough forest method (HF). The SIFT method is based on SIFT features, which are invariant to image scale and rotation [29,30]. The background subtraction method uses temporal differencing of pixels from a Laplacian model and completes the object detection task via a threshold value [31]. Hough forests can be regarded as a task-adapted codebook, in which different locations, scales, and motions are stored [32].
Figure 8 shows the experimental results; the object detection results obtained by our proposed method are shown with red solid boxes, while the results of SIFT, BS, and HF are shown in blue, green, and pink dotted boxes. Our proposed method can detect moving objects at different distances in the same sequence; in video sequence (a), moving cars in the distant scene were detected in green and orange solid rectangle windows. In addition, our proposed method can adaptively adjust the rectangle window to suit the detected object, such as the satellite and the human body in video sequences (c) and (d).
We use three measures to compare object detection accuracy [33]. For each frame, the numbers of ground-truth and detected objects are counted, together with the number of classification errors, the missed detection count, and the number of mapped ground-truth/detected object pairs. OverLapRatio is the quality of alignment between the detected objects and the ground truth. SFDA is the detection accuracy for a video sequence, which is essentially the average of the frame detection accuracy (FDA) over all of the relevant frames in the sequence; ODA is the object detection accuracy, which utilizes the missed detection and classification error counts; ODP is the object detection precision, which gives the precision of detection by taking into account the spatial overlap information between ground truth and system output.
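These measures follow the frame-based evaluation style of [33]; a minimal sketch of the overlap ratio and the frame/sequence detection accuracy is shown below (Python; the box format, matching strategy and variable names are illustrative assumptions, not the reference implementation):

import numpy as np

def iou(box_a, box_b):
    """Spatial overlap ratio of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0

def frame_detection_accuracy(gt_boxes, det_boxes, matches):
    """FDA for one frame: summed overlap of matched (gt, det) index pairs
    divided by the average number of ground-truth and detected objects."""
    if not gt_boxes and not det_boxes:
        return 1.0
    overlap = sum(iou(gt_boxes[i], det_boxes[j]) for i, j in matches)
    return overlap / ((len(gt_boxes) + len(det_boxes)) / 2.0)

def sfda(per_frame_fda):
    """Sequence FDA: average FDA over frames containing at least one object."""
    return float(np.mean(per_frame_fda)) if per_frame_fda else 0.0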
Experimental results are shown in Table 3. Compared with SIFT, our proposed algorithm increases SFDA, ODP, and ODA by about 3%-7%, 5%-10%, and more than 10%, respectively. Compared with BS and HF, the SFDA, ODP, and ODA are increased by about 5%-15%, 8%-15%, and 10%, respectively. This implies that our proposed algorithm has a higher detection accuracy and a lower false detection rate. SFDA, ODP, and ODA curves obtained by the different methods are shown in Figure 9.
Conclusions
In this paper, we propose a new computational intelligence method for object detection. We apply the wavelet transform to optical flow motion estimation and use the HLNLC to classify moving objects and static background.
Figure 1: Application of wavelet calculations into optical flow estimation.

Our results show that cars and human bodies can be detected in (a) and (d). Different frequency components in unstable regions and jitters have been removed in sequence (c). Compared with the other video sequences, the detected objects in sequence (b) have different characteristics: the optical flow is detected for the background area. The experimental results are shown in Figure 4; computation time comparison results are shown in Table 1.

Figure 4: Optical flow estimation for different sequences.

Table 1: Optical flow computation time comparison.
Chitosan scaffolds with mesoporous hydroxyapatite and mesoporous bioactive glass
Bone regeneration is one of the most well-known fields in tissue regeneration. The major focus concerns polymeric/ceramic composite scaffolds. In this work, several composite scaffolds based on chitosan (CH), with low and high molecular weights, and different concentrations of ceramics, namely mesoporous bioactive glass (MBG), mesoporous hydroxyapatite (MHAp) and both MBG and MHAp (MC), were produced by lyophilization. The purpose is to identify the best combination regarding optimal morphology and properties. The produced scaffolds present a highly porous structure with interconnected pores. The compression modulus increases with ceramic concentration in the scaffolds, and the 75%MBG (835 ± 160 kPa) and 50%MC (1070 ± 205 kPa) samples are the ones that most enhance the mechanical properties. The swelling capacity increases with MBG and MC, respectively, to 700% and 900%, and decreases to 400% when the MHAp concentration increases. All scaffolds are non-cytotoxic at 12.5 mg/mL. The CHL scaffolds improve cell adhesion and proliferation compared to CHH, and the MC scaffold samples show better results than those produced with just MBG or MHAp. The composite scaffolds of chitosan with MBG and MHAp have proven to be the best combination due to their enhanced performance in bone tissue engineering.
Introduction
Autografts, being both osteogenic and osteoinductive, are the gold standard for enhancing bone regeneration in applications as diverse as orthopaedic trauma surgery, correction of congenital bone defects or spinal fusion (Salgado et al. 2004; Giannoudis et al. 2005; Habibovic and Groot 2007; Bhatt and Rozental 2012; de Melo Pereira and Habibovic 2018). Nevertheless, failure rates between 5 and 13% and complication rates (including chronic pain, blood loss, nerve injury, hernia formation, infection, arterial injury) between 8.5% and 20% have been reported (Kaing et al. 2011; Bhatt and Rozental 2012; Kurien et al. 2013). This has led to research on the use of biomaterials for bone regeneration and the development of alternative bone graft options such as ceramic, polymeric and composite scaffolds (Madihally and Matthew 1999; Rodríguez-Vázquez et al. 2015).
The ceramics hydroxyapatite (HAp) and bioactive glass (BG) are the most used materials to fabricate the bone substitutes available on the market. Alone or in combination with other materials, they present versatility due to the different forms, porosities, pore sizes and structures achievable (Habibovic and Groot 2007; Habibovic et al. 2008; Erol and Boccaccini 2011; García-Gareta et al. 2015). Of the recently developed structures, the mesoporous structure has improved properties regarding both morphology and mechanical response. The morphology shows outstanding surface area values and porosity, conferring high efficiency in the incorporation and subsequent release (in situ drug delivery) of antibiotics, anticancer drugs or cytokines (Qiao et al. 2017; Munir et al. 2018). The improved mechanical properties include higher resistance after swelling and after simulated body fluid (SBF) assays (Arcos et al. 2011).
The mesoporous structure is obtained through the incorporation of surfactants in the sol-gel process (Arcos et al. 2011) or of a surfactant catalyst in microwave synthesis (Zhou et al. 2018). But ceramic bone grafting materials still have some flaws, such as low fracture strength, low bending strength, brittleness and degradation rates that are difficult to predict (Giannoudis et al. 2005; Jones 2005; Karageorgiou and Kaplan 2005; De Long et al. 2007; Dorozhkin 2010, 2013; Erol and Boccaccini 2011; Wagoner Johnson and Herschler 2011; García-Gareta et al. 2015; Wegst et al. 2015).
In order to improve ceramic bone graft properties, such as enhanced mechanical properties with scaffold brittleness reduction and biological performance (Wubneh et al. 2018;Ahmadipour et al. 2022), and satisfy clinical requirements (mass transport, vascularization, and host tissue integration) (Webber et al. 2015), a polymer, such as chitosan (CH), is added to scaffold constitution. CH is composed of β(1 → 4)-linked 2-acetamido-2-deoxy-β-d-glucose (N-acetylglucosamine) obtained from the partial deacetylation of chitin (Rodríguez-Vázquez et al. 2015). The degree of deacetylation (DD), crystallinity and molecular weight (MW) are the main aspects in which chitosan can be modified to obtain different physical and mechanical properties (Jain et al. 2013;Rodríguez-Vázquez et al. 2015;João et al. 2017).
Chitosan has a molecular weight between 50 and 2000 kDa and a DD between 40 and 98%. Due to these properties, chitosan has a strong hygroscopic nature, can improve the survival rate of osteoblasts, and can promote osteoblast differentiation and matrix mineralization (Madihally and Matthew 1999; Jain et al. 2013; João et al. 2017).
The improvement of osteoconduction enhances the bond between bone tissue and the scaffold (Habibovic et al. 2008). In addition, the increase of mechanical strength, pore size, and bioactivity is a result of polymeric and ceramic composite scaffolds (Thein-Han and Misra 2008; Peter et al. 2010a, b). The use of CH and mesoporous ceramics, such as mesoporous HAp (MHAp) or mesoporous BG (MBG), allows easier drug loading and delivery to enhance anti-inflammatory responses, osteointegration, osteoinduction, and, ultimately, faster bone regeneration (Baino et al. 2017; Cai et al. 2018; Yu et al. 2021). Furthermore, a controlled optimization with a very specific macrostructure, microstructure, protein coating and chemical composition can lead to an osteoinductive response (Sikavitsas et al. 2001; Salgado et al. 2004; Jones 2005; Karageorgiou and Kaplan 2005; Dorozhkin 2013).
Nevertheless, there is no study on the effect of adding MHAp or MBG to a composite material for bone regeneration applications. Therefore, the main objective of this work is to produce composite scaffolds of CH with different concentrations of MHAp and MBG by lyophilization and compare them with CH scaffolds, with low and high molecular weights, and with composite scaffolds using just mesoporous HAp (MHAp) or mesoporous BG (MBG), to determine the most promising formulation in terms of bone regeneration applications.
Materials
Chitosan with a low molecular weight (CHL) of 100 kDa and a degree of deacetylation (DD) of 80%, and chitosan with a high molecular weight (CHH) of 500 kDa and a 79.4% DD were supplied by Bioceramed (Portugal). Lactic acid (2-hydroxypropanoic acid), purchased from HiMedia (minimum assay = 99.0%), was used to dissolve CH.
Ultrapure Water (Milli-Q) was used for the preparation of all solutions and samples.
For the biodegradation test, lysozyme from chicken egg white from Lysozyme BioChemica was used.
Human osteosarcoma cells (SaOS-2 cell line), cultured in McCoy's 5A (Sigma-Aldrich) medium were used in cytotoxicity and adhesion tests. In both tests, population quantification was a result of resazurin (from Alfa Aesar) reduction by viable cells. In the cytotoxicity tests, the positive control was obtained using dimethyl sulfoxide (DMSO). Helix NP™ Green nuclear stain from BioLegend was used for the cell fluorescence assay.
Preparation of mesoporous scaffolds
The scaffolds were fabricated by lyophilization of solutions of CH and of CH with ceramic mesoporous materials. The ceramic mesoporous materials were produced by the sol-gel method using the non-ionic block copolymer F127 at a concentration of 21% of the precursor mass, following Yan et al. (2005) for the MBG synthesis and Fathi and Hanifi (2007) for the MHAp synthesis.
The polymeric scaffolds were prepared by dissolving 2% (w/v) CH in a 2% (v/v) lactic acid solution and stirring for 2 h. The composite scaffolds had different fractions of MHAp or MBG (25%, 50% and 75% ceramic/CH mass ratios); in the MC composites (with both MHAp and MBG), the two ceramics were always at a 1:1 ratio, at total ceramic/CH mass ratios of 25% and 50%. The ceramics were ultrasonically dispersed (Ultrasonic Processor UP400S from Hielscher) in 2% (v/v) lactic acid until all the clusters were disaggregated, and then the CH solution was added while the dispersion was being stirred. Next, the composite dispersions were vigorously mixed using a magnetic stirrer for 2 h to obtain a homogeneous mixture.
After obtaining the homogenous dispersions, the solutions were poured into Teflon moulds and kept in the freezer overnight, to remove air bubbles and level the solution's surfaces. Then, the moulds were transferred to the freeze dryer (FreeZone Triad Cascade Benchtop, Labconco, 7400030 model). Lyophilization was performed at 0.1 mbar for 25 h. In order to completely remove the lactate still present inside the scaffolds, these were neutralized in 10% (v/v) NaOH bath, followed by 48 h dialysis (until reaching a pH of around 7) and again lyophilized (VaCo 2 by Zirbus).
X-ray diffraction (XRD)
The X-ray diffractograms were used to determine the crystal phases of the different samples. These analyses were carried out at room temperature using an X'Pert PRO PANalytical X-ray powder diffractometer (Cu K-alpha radiation) operating at a voltage of 45 kV in the range 10° < 2θ < 90° with a 0.033° step size.
Porosimetry
The porosity of the scaffolds was calculated by Archimedes method, using a Sartorius BP110 S balance. The samples were previously swelled in a PBS bath for 7 days. This analysis used three replicas for each scaffold.
Scanning electron microscopy (SEM)
The morphology of the composite scaffolds was examined in a field emission SEM (Hitachi S-2700). The samples were frozen and broken in liquid nitrogen, mounted on aluminium platforms for horizontal/transversal view and sputter-coated with a gold-palladium conductive layer (Q3000T D Quorum sputter coater). The images were taken at an accelerating voltage of 15 kV and several magnifications.
Compression modulus
The mechanical properties of the scaffolds were measured with a testing machine from Rheometric Scientific (Minimat Firmware version 3.1), equipped with a 100 N load cell, at a crosshead speed of 1 mm·min−1, at room temperature and in compression mode. The compression modulus of the scaffolds was calculated from the slope of the stress-strain plot in the 5% to 10% strain range, using ten replicas (Tamplenizza et al. 2015).
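The slope extraction can be illustrated with a short script (Python/NumPy; the array names are placeholders for the measured stress-strain data, not values from this work):

import numpy as np

def compression_modulus(strain, stress, lo=0.05, hi=0.10):
    """Compression modulus as the slope of a linear fit to the
    stress-strain curve restricted to the lo-hi strain range.
    With strain dimensionless and stress in kPa, the modulus is in kPa."""
    strain, stress = np.asarray(strain), np.asarray(stress)
    mask = (strain >= lo) & (strain <= hi)
    slope, _intercept = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Example: modulus_kpa = compression_modulus(strain_array, stress_array)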
Fourier-transform infrared spectroscopy (FTIR)
Fourier-transform infrared (FTIR) spectroscopy was performed on the different materials, using a Thermo Nicolet 6700 spectrometer in Attenuated Total Reflectance (ATR) mode over a wavenumber range of 4000-500 cm−1.
Swelling
The water uptake, or swelling, study was performed in PBS at pH 7.4 at 37 °C using three replicas for each material tested. With the dry weight (W_0) of the scaffold registered, the scaffolds were placed in PBS buffer solution at pH 7.4 for 12 h, 24 h, 48 h, 72 h and 96 h. The excess water in the interior and on the surface of the sample was removed with filter paper (Filter-Lab 1300/80) and the wet weight (W_f) was recorded for the three replicas. The swelling degree was determined from these two weights.
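A common form of the swelling ratio, consistent with the dry weight W_0 and wet weight W_f defined above and with the percentage values reported in the Results (an assumed reconstruction rather than the authors' exact expression), is:

\[
\text{Swelling degree}\;(\%) = \frac{W_f - W_0}{W_0} \times 100
\]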
Biodegradation
Degradation of the composite scaffolds was studied in PBS medium, with an ionic force of 0.06 and 5 µg/mL of lysozyme (Davies et al. 1969; Freier et al. 2005). The samples were immersed in the degradation solution and incubated at 37 °C in closed falcon tubes for 14 days, with the enzyme refreshed every 2 days. At the end of each interval, the scaffolds were taken from the degradation medium and rinsed methodically with Milli-Q water to remove ions adsorbed on the surface.
The biodegradation was quantified by the sample's variation in weight in the three replicas (after lyophilization as a drying process) (Sashiwa et al. 1990). The remaining weight was quantified relative to the initial dry mass.
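A standard form of this quantity, consistent with dry weights recorded before and after degradation (again an assumed reconstruction, with W_0 the initial dry weight and W_t the dry weight after degradation time t), is:

\[
\text{Remaining weight}\;(\%) = \frac{W_t}{W_0} \times 100
\]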
Bioactivity
For the bioactivity tests, the different samples were cut into squares with a 5 mm edge and immersed in 30 mL of the SBF solution reported by Kokubo and Takadama (2006), to guarantee the ratio V_S = S_A/10, where V_S is the volume of SBF in mL and S_A is the sample's apparent surface area in mm². The samples were incubated at 37 °C in closed falcon tubes for 3, 6, 12, 24, 48, 72 h and 7 days, with two replicas for each analysis (Kokubo and Takadama 2006). After the specified periods, to remove non-adsorbed minerals, the scaffolds were washed five times with Milli-Q water. Then, in order to identify apatite precipitation, the scaffolds were dried at ambient conditions and observed using SEM (Kokubo and Takadama 2006; Peter et al. 2010a).
Cytotoxicity
The cytotoxicity tests were performed according to the ISO 10993-5 standard using the extract method. Samples were sterilized with ethanol and irradiated with UV for 2 h, followed by 2 h at 80 °C to guarantee ethanol evaporation. For extract preparation, the scaffolds were immersed in McCoy's culture medium at a ratio of 25 mg/mL (mass of sample/volume of culture medium). These preparations, as well as some extra medium for the extract dilution and the negative control, were incubated at 37 °C under a controlled 5% CO2 atmosphere for 48 h. The Saos-2 cells were seeded at a concentration of 30,000 cells/cm² in the wells and incubated for 24 h. Then, the medium was exchanged for the extract and two dilutions (12.5 mg/mL and 6.25 mg/mL) were made, each with four replicates. For the resazurin test, a negative control (cells cultured in a standard, non-cytotoxic environment) and a positive control (cells in a cytotoxic environment, created through the addition of 10 µL of DMSO, a cytotoxic agent, to normal culture medium) were set up.
The extracts and controls were incubated for 48 h and then media were replaced by a 1:1 solution of resazurin (dissolved at a concentration of 0.04 mg/mL in PBS) and McCoy's medium and incubated for 3 h. The cell activity was evaluated by measuring the absorbance of the medium at 570 nm (absorption maximum of resorufin) and 600 nm (absorption maximum of resazurin) in a microplate reader (Biotek ELx 800UV) (Carmo 2018).
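One common way to turn the dual-wavelength readings into a relative metabolic activity is to background-correct the sample signal and normalize it to the negative control, as in the short sketch below (Python; this is a simplified calculation for illustration and not necessarily the exact one used in this work):

def relative_viability(a570, a600, ctrl570, ctrl600, blank570, blank600):
    """Relative metabolic activity from resazurin-reduction absorbances:
    background-corrected sample signal divided by the negative-control
    signal (blank wells contain the resazurin/medium mix without cells)."""
    sample = (a570 - blank570) - (a600 - blank600)
    control = (ctrl570 - blank570) - (ctrl600 - blank600)
    return 100.0 * sample / control   # percent of the negative control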
Cell adhesion
The ability of the scaffolds to support cell metabolism was evaluated through cell adhesion and proliferation studies.
The scaffolds were sterilized in the same way as for the cytotoxicity tests. Then, the materials for the cell culture and material controls were fixed in Teflon supports and placed in a 24-well plate.
Saos-2 cells were seeded at a concentration of 30,000 cells/cm² directly over the samples' surfaces and, for the cell controls, in the wells. The cells were maintained in McCoy's medium and incubated at 37 °C in a controlled 5% CO2 atmosphere for 24 h.
The cell adhesion rate was determined by evaluating the reduction of resazurin to resorufin by metabolically active cells. For this process, the medium was substituted by a 1:1 solution of resazurin/McCoy's medium and incubated for 4 h. Control wells, containing the resazurin/McCoy's mix and McCoy's (both wells without cells) were also incubated. The cell activity was evaluated by measuring the absorbance of the medium at 570 nm and 600 nm in a microplate reader (Biotek ELx 800 UV) (Carmo 2018). The resazurin assay was repeated at 3, 6, 8 and 10 days for evaluation of the cell proliferation for each of the six replicas of all the materials.
After the last readings, the materials were removed from the multi-well plate, washed with PBS and fixed with a 3.7% paraformaldehyde solution, incubated at room temperature for 15 min. Finally, the samples were washed with water and stained with Helix NP™ Green and observed using fluorescence microscopy.
Statistical treatment
All average values calculated and displayed in the graphs include a representation of the experimental standard deviation as a vertical bar. Statistical analysis was performed using one-way analysis of variance (ANOVA) with several confidence intervals. A value of p < 0.05 was considered statistically significant.
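The group comparison itself reduces to a one-way ANOVA call, as sketched below (Python with SciPy; the replicate values are placeholders, not measured data from this study):

from scipy import stats

# One-way ANOVA across scaffold groups (e.g., replicate compression moduli);
# the group values below are illustrative placeholders only.
chl     = [540, 565, 520]
chl_mbg = [830, 860, 815]
chl_mc  = [1050, 1090, 1070]

f_stat, p_value = stats.f_oneway(chl, chl_mbg, chl_mc)
significant = p_value < 0.05   # significance threshold used in this work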
FTIR
The FTIR spectrum of MHAp in Fig. 1a shows the inorganic carbonate ions (CO3 2−) located at 1456 cm−1 and 1411 cm−1 and from 742 to 878 cm−1, a result of the asymmetric bending mode of CO3 2− (Franco et al. 2012; João et al. 2016). The main bands of MHAp are present in broad peaks centred at 1115 cm−1, 1020 cm−1, in the range 925 cm−1 to 960 cm−1 and at 580 cm−1. The first two bands correspond to P-O vibrating bonds of the phosphate groups in the asymmetric stretching mode, the third band corresponds to a symmetric stretching mode of the ion and the last to the asymmetric bending mode of PO4 3− (Thein-Han and Misra 2008; Franco et al. 2012; Pighinelli and Kucharska 2014; João et al. 2016).
The structural MBG bonds are present in the peak at 1150 cm−1, in the range from 820 cm−1 to 780 cm−1 and at 569 cm−1. They correspond to the Si-O-Si asymmetric stretching, symmetric stretching or vibration modes and bending mode, respectively. The Si-O bond with the Q2 and Q3 units can be seen at 1032 cm−1 and with the Q1 and Q2 units at 947 cm−1 (Arcos et al. 2011; Stan et al. 2011).
The FTIR spectra of both CHL and CHH show a broad band in the range of 3270 to 3365 cm−1 that represents the overlap of N-H (3280 cm−1) and O-H (3358 cm−1) stretching vibrations. The bands around 2867 cm−1 and 2921 cm−1 correspond to asymmetric and symmetric stretching modes of C-H of CH2, respectively. The symmetric stretching is less intense than the asymmetric stretching, so it is partially hidden by the overlap of the bands (Molaei et al. 2015; Queiroz et al. 2015; João et al. 2017).
The band around 1645 cm−1 shows the C=O stretching of amide I from the residual presence of N-acetyl groups. The 1311 cm−1 band is due to the N-H bending of amide II (Thein-Han and Misra 2008; Correia et al. 2011; Queiroz et al. 2015). The 1581 cm−1 band represents the N-H bending of the primary amine. The absorption signals at 1423 and 1372 cm−1 are attributed to hydrocarbon bonds, CH2 bending and CH3 symmetrical deformations (Queiroz et al. 2015).
The stretching of the C-O-C bridge is present at the wavenumber of 1149 cm−1 and in the 1065 to 1016 cm−1 range, corresponding, respectively, to an asymmetric stretching and to simultaneous symmetric and asymmetric stretching vibrations of the ester bond (Thein-Han and Misra 2008; Correia et al. 2011; Song et al. 2014).
The CH out-of-plane bending of the monosaccharide ring is visible as a band at 896 cm−1. The band around 650 cm−1 represents the bending deformation of O-H in the polymeric structure (Thein-Han and Misra 2008; Queiroz et al. 2015).
The composite scaffolds in Fig. 1b, c present the bands of all the ceramic and polymeric materials used. It is possible to observe the intensity reduction of a 1000 cm −1 band which corresponds to the major MBG and MHAp bands. This variation is due to the overlap of symmetric and asymmetric stretching vibrations of the ester bond with Si-O Q 2 and Q 3 units. The addition of MHAp to CH also induces the formation of 560 cm −1 peak for the asymmetric bending mode of PO 4 3− in the CH spectra. The MBG composite reduces the peak of Si-O-Si symmetric stretching at 800 cm −1 , compared to the ceramic spectrum.
X-Ray diffraction
The XRD results presented in Fig. 2 show that all the scaffolds produced have a peak approximately at 20°. This peak is attributed to the chitosan present in the sample since this material has a slightly crystalline structure (Jampafuang et al. 2019). The scaffolds with MBG only show the CH peak, though the scaffolds with a high concentration of MHAp display crystalline peaks of the ceramic and the CH peak.
Porosimetry
All values presented in Table 1 are between 85 and 95% porosity. The increase in ceramic concentration did not show an evident or direct variation in the porosity values. However, the addition of ceramic to the matrix tended to reduce the scaffold porosity.
One of the major factors in a successful scaffold outcome is high porosity: a network of interconnected large pores without occluded passages that allows for cell migration and proliferation during bone ingrowth, and provides open space for nutrient and oxygen supply and further vascularization.
Scanning electron microscopy
The SEM images of all the scaffolds produced are shown in Fig. 3. All scaffolds present an interconnected porous structure and a slightly preferential orientation, as visible in (b2), (d2), (f1), (g1) and (g2).
Swelling
Figure 4a presents the swelling behaviour of the polymeric scaffolds produced, and the swelling stabilization percentage is shown in Fig. 4b. The comparison between the polymeric scaffolds shows that both samples reached a plateau, and that CHH has a significantly (p < 0.05) higher swelling capacity than the CHL scaffolds.
The CHL + MHAp scaffolds present a slightly decreased swelling capacity when compared to CHL scaffolds, but the differences are not statistically significant. The CHL + MBG scaffolds present the opposite response, with an increase of swelling capacity with the increase in ceramic content. The MC scaffolds have demonstrated a significant difference (p < 0.01) between 25 and 50%. While the 25% sample presented the lowest swelling capacity of all CHL-based composites, the 50%MC had the highest value. The CHL + 50%MC sample exceeds every other scaffold's swelling capacity, even the ones with higher ceramic concentration, with the exception of CHL + 75%MBG.
The CHH scaffolds present a more constant behaviour, with a significant decrease in swelling capacity in both the MHAp and MBG composite scaffolds. Whereas MHAp significantly decreases the swelling capacity as the ceramic content increases from 25 to 50% (p < 0.01), the MBG composites show a constant behaviour for all ceramic concentrations.
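As a minimal illustration of how swelling data of this kind can be derived, the sketch below assumes the common gravimetric definition swelling (%) = (wet mass − dry mass)/dry mass × 100 and an arbitrary plateau criterion; the function names, tolerance and mass values are hypothetical and are not taken from this study.

```python
# Minimal sketch of a gravimetric swelling calculation (assumed definition:
# swelling(%) = (wet - dry) / dry * 100); values below are illustrative only.

def swelling_percent(wet_mass_mg: float, dry_mass_mg: float) -> float:
    """Swelling capacity relative to the initial dry scaffold mass."""
    return (wet_mass_mg - dry_mass_mg) / dry_mass_mg * 100.0

def has_stabilized(series: list[float], tol_percent: float = 2.0) -> bool:
    """Treat the swelling curve as a plateau when the last two readings
    differ by less than `tol_percent` (an arbitrary illustrative cutoff)."""
    if len(series) < 2:
        return False
    return abs(series[-1] - series[-2]) <= tol_percent

# Example: a 12 mg dry scaffold weighed after successive immersion times.
dry = 12.0
wet_readings = [55.0, 68.0, 71.0, 71.5]   # mg, hypothetical values
curve = [swelling_percent(w, dry) for w in wet_readings]
print(curve, has_stabilized(curve))
```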
Biodegradability
The biodegradation behaviour of the material is a crucial factor in the long-term performance of a tissue-engineered cell-material construct, as cells need a stable material on which to adhere and proliferate (Rodríguez-Vázquez et al. 2015; Lončarević et al. 2017a). In order to analyze the biodegradation profile of the polymeric membranes, the scaffolds were immersed in PBS containing lysozyme for 14 days. The obtained results are presented in Fig. 5.
The maximum weight loss analysis of the polymeric scaffolds gave similar results: 89.3 ± 0.2% for CHL and 90.3 ± 0.8% for CHH.
The composite samples show that the progressive increase in ceramic content leads to a decreased degradability of the membrane. This behaviour was expected, since lysozyme only degrades the polymer and the ceramic remains unaltered (Khan et al. 2007; Thein-Han and Misra 2008).
Moreover, the 75%MHAp scaffolds had the lowest weight loss in both polymers compared to the MBG and MC composites.
Bioactivity
The in vitro bioactivity study allows for a simulation of the expected in vivo bone regeneration from the apatite formation on the materials' surface that occurs when they are immersed in SBF for a specified period, since the SBF solution has ion concentrations similar to human blood plasma (Kokubo and Takadama 2006). The composite scaffolds presented different responses to the test, as shown in Fig. 6. Nevertheless, all the composites showed an increase of apatite precipitation with time. The precipitation begins at spots with higher rugosities or with small pores and then increases in size and distribution. The samples presented a Ca/P ratio between 1.1 and 1.75, meaning that there is precipitation of apatite and other calcium phosphates. At the end of the assay, an extensive surface coating was still not observed.
Compression modulus
In order to analyze the compression modulus of the porous composite scaffolds, the samples were tested using a mechanical testing machine. From the data obtained, the slope of the stress-strain plot in the 5-10% deformation range was calculated. During the test, the pores collapsed and the structures underwent densification (Gentile et al. 2012).
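The slope calculation described above can be illustrated with a short sketch that fits a straight line to the stress-strain record within the 5-10% strain window; the synthetic data, the kPa units and the use of NumPy are assumptions for illustration only.

```python
# Illustrative calculation of the compression modulus as the slope of the
# stress-strain curve in the 5-10% strain window (least-squares fit).
import numpy as np

def compression_modulus(strain: np.ndarray, stress_kpa: np.ndarray,
                        lo: float = 0.05, hi: float = 0.10) -> float:
    """Slope (kPa) of a first-order fit to stress vs. strain between lo and hi."""
    mask = (strain >= lo) & (strain <= hi)
    slope, _intercept = np.polyfit(strain[mask], stress_kpa[mask], 1)
    return float(slope)

# Hypothetical test record: strain up to 20%, roughly linear early response.
strain = np.linspace(0.0, 0.20, 201)
stress = 900.0 * strain + 5.0 * np.random.default_rng(0).normal(size=strain.size)
print(f"E_compression ~ {compression_modulus(strain, stress):.0f} kPa")
```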
With the increase in ceramic content in the scaffolds, the elastic slope tended to increase during the initial 15% of the stress-strain curve, as shown in Fig. 7a, which is due to an increase of the reinforcement effect of the ceramic filler.
The results in Fig. 7b show a similar compression modulus for the CHL and CHH scaffolds. The incorporation of ceramic materials in both low and high MW chitosan scaffolds significantly increased the compression modulus for most of the compositions tested. In the CHL scaffolds, CHL + 75% MBG scaffolds showed significant increase in compression modulus compared to CHL + 75%MHAp, which makes MBG a better mechanical reinforcement when compared to MHAp. However, in CHH scaffolds, the inverse behaviour is observed with a higher reinforcement increase for CHH + 75% MHAp scaffolds than for CHH + 75% MBG scaffolds (p < 0.01).
The highest value of compression modulus of all samples containing 25% ceramic was obtained for the composite produced with both mesoporous powders: CHL + 25%MC. However, the difference in compression modulus between 25% ceramic-content scaffolds was not significant. Regarding the samples containing 50% ceramic, the CHL + 50% MC scaffold has a compression modulus that significantly exceeds every other compression modulus obtained. This shows that the 1:1 mix of mesoporous ceramics is very effective in increasing the mechanical properties of the freeze-dried chitosan scaffolds.
Cell culture studies
The cell response to the composite scaffolds was evaluated through cytotoxicity, adhesion and proliferation tests.
Cytotoxicity
The cytotoxicity assay of the polymeric and composite scaffolds presented in Fig. 8 shows that, for all the scaffolds and for the extract concentrations of 6.25 mg/mL and 12.5 mg/mL, the relative cell viability is higher than 90%, revealing the absence of cytotoxic effects at these extract concentrations. The exception to this rule was the CHH + 25% MHAp scaffold, which was slightly cytotoxic. Given the fact that the CHH and CHH + 50% MHAp scaffolds were not cytotoxic at these extract concentrations, this exception is not worrisome. For the 25 mg/mL extract concentration, some composite scaffolds were slightly or moderately cytotoxic. Therefore, all scaffolds revealed the potential to be used in bone tissue engineering, provided that the extracellular fluid in contact with the scaffolds and cells is present in sufficient amount and is renewed at a rate that prevents the lixiviates from the scaffolds from reaching the concentrations at which cytotoxic effects start to be observed.
Cell adhesion and proliferation
One of the major purposes of this experiment is to identify the best ceramic-reinforced freeze-dried chitosan scaffold for bone regeneration. Therefore, the cell adhesion assays were performed in two stages. In the first stage, chitosan with different MWs were tested in order to identify the best cell response in what concerns adhesion and proliferation (Fig. 9), so that in the second stage all the tested composites were based on the same polymer. This selection method allowed the analysis of the second stage to focus on different ceramics under study and their concentration on the scaffold. Therefore, in the second stage, both ceramics used in composite scaffolds production were tested using their highest and lowest concentrations. This makes it possible to identify cell adhesion dependence on ceramic type and concentration.
The first-stage assay revealed that the chitosan scaffolds with CHL have a slightly higher cellular adhesion (48 ± 6%) and a significantly higher cell proliferation rate (Table 2) than the CHH scaffold (41 ± 6% cellular adhesion). As a result, in the next stage only CHL composite scaffolds are studied.
In comparing the CC values, the tested materials have a slow cell proliferation with a constant population until the third day and then a steady growth throughout the experiment.
The cell adhesion assay of the second stage, which evaluates cell populations 24 h after seeding, is summarized in Fig. 10. Compared to the CHL scaffolds, the 25% MHAp and 75% MBG scaffolds had a reduced cell adhesion rate. The opposite is observed for the 25% MC scaffolds, which had the highest nominal cell adhesion, a difference relative to the CHL scaffold that is statistically significant (p < 0.05). In the MHAp samples, cell adhesion increased with increasing ceramic concentration; in the MBG scaffolds, the opposite was observed.
The values of the mean cell population normalized to the CC values on the first day of culture are presented in Fig. 11. Cell proliferation, calculated as the ratio between the cell population on day 10 and on day 1, is shown in Table 3.
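A minimal sketch of the two derived quantities just described (population normalized to the day-1 CC value, and the proliferation ratio between day 10 and day 1) is given below; all counts are hypothetical.

```python
# Sketch of the normalization and proliferation-ratio calculations described
# above, with made-up cell counts.

day_counts = {1: 1.8e4, 3: 1.7e4, 7: 2.6e4, 10: 3.7e4}   # cells, hypothetical
cc_day1 = 2.0e4                                            # cell control on day 1, hypothetical

normalized = {day: n / cc_day1 for day, n in day_counts.items()}
proliferation_ratio = day_counts[10] / day_counts[1] * 100.0   # PR expressed in %

print(normalized)
print(f"PR = {proliferation_ratio:.0f}%")
```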
Populations on the MHAp and MBG scaffolds show modest increases in number, without reaching the cell seeding density even after 10 days in culture. The 50% MC scaffolds presented the best cell proliferation ratio of all the tested samples.
The fluorescence analysis in Fig. 12 confirms the higher cell population in the MBG and MC scaffolds, since the MHAp scaffolds present a small population of stained cells. However, it is also important to mention the slight autofluorescence of both MBG and MHAp, shown in (b.2) and (a.2), respectively. The MC scaffolds, with both MBG and MHAp in their composition, do not present any autofluorescence; therefore, all the spots observed in Fig. 12c ought to be due to living cells.
Discussion
The FTIR spectra of the composites show the presence of all the materials used in their fabrication, even for the smallest ceramic concentrations, as was observed in other reports (Thein-Han and Misra 2008; Peter et al. 2010a). In the XRD diffractograms, it is only possible to identify MHAp in the composites produced, due to the typically high CH signal and the amorphous nature of MBG (Ren et al. 2005; Thein-Han and Misra 2008; Peter et al. 2010a). The SEM images confirm the presence of a ceramic in the scaffolds produced. A uniform distribution of the ceramics in the polymeric matrix is visible. Therefore, the majority of the ceramics appear to be well integrated within the chitosan matrices (Thein-Han and Misra 2008).
The scaffolds present a microporous structure with high porosity and wide pore size distribution that is ideal not only for cell adhesion and proliferation, but also for interlocking between the scaffolds and surrounding tissue, which will improve the mechanical stability of the implant (Loh and Choong 2013;Kang and Chang 2018;Abbasi et al. 2020).
The polymeric scaffold pore organization was more similar to that obtained by Zhang et al. (2012) and Thein-Han et al. (2008), who used 3% (m/v) CH dissolved in acetic acid, than to that obtained by Peter et al. (2010a), who used, as in this work, 2% (m/v) CH, but dissolved the CH in 1% (v/v) acetic acid. This morphology difference can be due to the use of lactic acid instead of acetic acid.
In some regions of the composite scaffolds, the pores show a preferential orientation that is a consequence of the cold front propagation direction during the freezing stage of the lyophilization process (Kang et al. 1999; Deville et al. 2006; Grenier et al. 2019) and leads to very different morphologies, as shown previously by Madihally et al. (1999). The samples also showed some ceramic aggregation, very common in these structures (Li et al. 2010).
The materials and morphology of the scaffolds resulted in three-stage swelling behaviour for the CH scaffolds and two stages for the composite samples. The first stage corresponds to a quick water absorption attributed to the interaction between water molecules and the chitosan hydrophilic groups (OH and NH 2 ) (Pighinelli and Kucharska 2014). During the second stage, the swelling rate gradually slows down, due to the hydrogen bonds within the CH matrix, which constrains the scaffolds swelling behaviour. In the last stage, the swelling reaches the plateau due to stabilization of scaffolds (Chen et al. 2015).
All samples were able to absorb water, in proportions corresponding to several times their own weight. The individual values obtained are lower than other swelling capacities reported, which surpass 1000% (Thein-Han and Misra 2008; Peter et al. 2010a); however, the other works used a higher CH concentration (Thein-Han and Misra 2008) or another solvent (1% (v/v) acetic acid (Thein-Han and Misra 2008; Peter et al. 2010a)) that results in smaller pores with higher surface area, which can increase water absorption and retention. Swelling capacity is an important property, since it can lead to an increase of pore size and volume that facilitates cell infiltration and the supply of nutrients and oxygen to the interior of the composite scaffolds, but it can also lead to loss of mechanical properties (Peter et al. 2010b; Gentile et al. 2012; Gaihre and Jayasuriya 2018). The swelling capacity increased with the CH molecular weight. This is in contrast with the work of Thein-Han et al. (2008), who observed no significant difference. This may be explained by differences in MW and DD: while Thein-Han et al. used 250 kDa and 400 kDa CH of different DD (75% and 83%, respectively), in the present work we used CH samples of the same DD that differ in mass by a factor of 5 (100 kDa and 500 kDa).
The MC composites show a swelling behaviour that contrasts with that of the CHL scaffold: while swelling decreased for the 25% MC composite, it increased for the 50% MC. This behaviour may be due to water retention inside the pores of the mesoporous BG morphology, since several previous works established that BG decreases the swelling capacity of scaffolds (Peter et al. 2009, 2010a; Gentile et al. 2012).
The biodegradation test has shown no significant variation between the samples, and the obtained values are similar to other reported data, where the weight loss is between 5 and 15% for a 14-day degradation time (Thein-Han and Misra 2008; Han et al. 2012; Lončarević et al. 2017b). The low degradation rate shows that all the prepared scaffolds are stable for long-term performance.
The bioactivity of the scaffolds is a crucial factor for the long-term performance of tissue-engineered cell-biomaterial constructs, since the increase in scaffold bioactivity can in turn lead to improved bone cell ingrowth (osteoconduction), stable anchoring of scaffolds to host bone tissue (osseointegration), stimulation of immature host cells to develop into osteogenic cells (osteoinduction) and increased vascularization. (Rodríguez-Vázquez et al. 2015;Lončarević et al. 2017a;Turnbull et al. 2018).
The bioactivity test presented slight surface modifications in all compositions and higher apatite precipitation on the exposed surfaces for the MBG scaffolds. These results confirm the superior bioactive nature of MBG compared to MHAp (Baino et al. 2017;Ebrahimi and Sipaut 2021).
The development of load-bearing scaffolds with high porosity is another major goal of bone tissue engineering. However, the highly porous structure is obtained at the expense of mechanical strength (Ma and Choi 2001; Atkinson et al. 2021). In this trade-off, the highly porous structure is preferred in tissue engineering applications. The composite scaffolds produced have a better mechanical response (higher compression modulus) than both polymeric scaffolds while maintaining a similar porosity. The composites with the highest compression modulus are those with 50% MC and 75% MBG. The 50% MC presented values higher than the other composites at 50% ceramic concentration and even than 75%MHAp. This result agrees with that obtained by Ebrahimi et al. for HAp70/BG30, but not with the results obtained for HAp50/BG50 (Ebrahimi and Sipaut 2021). The difference in results can be due to the mesoporous structure used in the present work instead of the nanosized ceramics. Furthermore, the compression modulus achieved resembles that of a porous cement (Ebrahimi and Sipaut 2021) more than that of a lyophilized polymeric scaffold (Thein-Han and Misra 2008).
The evaluation of the scaffolds' cytotoxicity was performed to confirm their in vitro biocompatibility, as others have previously shown (Thein-Han and Misra 2008; Peter et al. 2010a; Zhang et al. 2012).
The cell adhesion results show an increase of cell adhesion with the increase of ceramic concentration, as expected for MHAp (Thein-Han and Misra 2008; Zhang et al. 2012), and the opposite behaviour for the MBG scaffolds. The 25% MBG and 75%MHAp cell adhesion results are similar to the values obtained using the polymeric scaffold. This fact could be due to morphology variations throughout the samples (Madihally and Matthew 1999; Li et al. 2010) or to the slight cytotoxicity of the MBG75% scaffolds (Fig. 7), which ultimately leads to a reduction in cell adhesion and proliferation, as was shown by Luna et al. (2011). The MC composite scaffolds presented the highest cell adhesion, with no significant difference between the different ceramic concentrations (25% and 50% MC). Nevertheless, the 25% MC was the only sample that presented a significant increase compared to the CHL scaffold. This increase shows that using both ceramics in the composite scaffold results in a stronger structure with the capacity to provide a stable surface for cell adhesion.
Comparing the proliferation rates, all samples had approximately the same PR, with the exception of 50%MC, which presented a much higher PR than all the other produced samples except 75%MBG. This enhanced PR may be due to the improved mechanical properties that allow a stable platform for cell adhesion and proliferation (Thein-Han and Misra 2008). Similarly, other works presented an increase in proliferation with the introduction of BG in polymeric scaffolds, such as those of Kandelousi et al. (2019), Dorj et al. (2012) and Peter et al. (2010b).
Fluorescence microscopy of the cell cultures confirmed that human osteoblasts were able to attach, proliferate and inhabit all the tested composite scaffolds for 10 days. The MBG and MC scaffolds present the greatest abundance of cells. The MBG control also presents some background fluorescence due to MBG autofluorescence, already reported by Richter et al. (2022). Cell proliferation in the MBG scaffolds is not as high as it appears at first sight in fluorescence microscopy and shows similar results to MHAp, as can be seen for the high MHAp and MBG concentrations, which reach 44 ± 4% and 54 ± 4%, respectively (Fig. 10). This test also indicates that the scaffolds support cell viability and could be a suitable support for bone regeneration applications (Thein-Han and Misra 2008; Peter et al. 2010a; Zhang et al. 2012). The cell populations do not present any preferential organization and appear to have infiltrated within the scaffold, yielding a uniform population throughout the scaffold, as expected from the work of Thein-Han et al. (2008). This distribution can result in faster and better tissue regeneration.
Conclusion
Chitosan-ceramic composite scaffolds, with both MHAp and MBG, were successfully produced by lyophilization, followed by neutralization and dialysis. The scaffolds obtained present structures with interconnected pores and good ceramics distribution. From the tested polymeric scaffolds, the CHL presented better bioactivity, cell adhesion (48 ± 6%) and proliferation (252 ± 46%).
The best overall performances between the composites were the CHL + 75% MBG and CHL + 50% MC, due to their increased compression modulus (1000 kPa) and enhanced cell proliferation (174 ± 18% and 205 ± 39%, respectively). This study shows that the incorporation of several mesoporous ceramics in chitosan composite scaffolds improves their properties and can lead to better bone regeneration outcome.
Funding Open access funding provided by FCT|FCCN (b-on). This work was financed by FCT-Fundação para a Ciência e a Tecnologia, I.P., in the scope of the projects LA/P/0037/2020, UIDP/50025/2020 and UIDB/50025/2020 of the Associate Laboratory Institute of Nanostructures, Nanomodelling and Nanofabrication-i3N.
Data availability Data may be obtained from authors upon reasonable request.
Conflict of interest
The authors have not disclosed any competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Materials Science",
"Medicine"
] |
A CALIBRATION WORKFLOW FOR “PROSUMER” UAV CAMERAS
: High-end consumer quadcopter UAVs or so-called “prosumer devices”, have made inroads into the mapping industry over the past few years, arguably displacing more expensive purpose-built systems. In particular, the DJI Phantom series quadcopters, marketed primarily for videography, have shown considerable promise due to their relatively high-quality cameras. Camera pre-calibration has long been a part of the aerial photogrammetric workflow with calibration certificates being provided by operators for every project flown. Most UAV data, however, is processed today in Structure-from-Motion software where the calibration is generated “on-the-fly” from the same image-set being used for mapping. Often the scenes being mapped and their flight-plans are inappropriate for calibration as they do not have enough variation in altitude to produce a good focal-length solution, and do not have cross-strips to improve the estimation of the principal point. What we propose is a new type of flight-plan that can be run on highly textured scenes of varying height prior to mapping missions that will significantly improve the estimation of the interior orientation parameters and, as a consequence, improve the overall accuracy of projects undertaken with these sorts of UAV systems. We also note that embedded manufacturer camera profiles, which correct for distortion automatically, should be removed prior to all photogrammetric processing, something that is often overlooked as these profiles are not made visible to the end user in most image conversion software, particularly Adobe’s CameraRAW.
INTRODUCTION
As the UAV industry continues to mature, sophisticated quadcopter UAVs with features hitherto reserved for purpose-built systems are becoming available at low price-points and, as a consequence, to a wider range of potential users. In conjunction with Structure-from-Motion photogrammetry software, the data from these UAVs can produce what appears to be a high-quality mapping product with next to no user intervention. Yet this new generation of enthusiastic users often do not have a background in photogrammetry or aerial photography and are largely ignorant of practices long established in the aerial mapping industry (Fraser, 2013). Camera calibration, in particular, has long been the foundation of the photogrammetric workflow, but is entirely overlooked as such software performs an auto-calibration using the same image set as is being used for mapping (Hashim et al., 2013; Suh, Choi, 2017). As a consequence, poor estimates of the interior orientation parameters drive error into the exterior and absolute orientations, thereby diminishing the quality of downstream products like DSMs, DEMs and orthophotos. In many cases the calibration process of a project is not included in the final product (Casella et al., 2014), although this was and still is obligatory for commercial aerial mapping projects. As has been noted in the earlier literature, improper estimation of camera internals leads to the so-called "doming effect" in DEMs created from Structure-from-Motion software that uses camera parameters derived from conventional aerial blocks (Wackrow, Chandler, 2011). While novel curving flight-plans have been proposed to remedy this "doming" issue, we propose to address it simply by robust camera pre-calibration procedures that will allow conventional aerial blocks to be flown.
There have been previous attempts to pre-calibrate UAV cameras using a test-field with control points (Honkavaara et al., 2006; Pérez, Agüera, Carvajal, 2012), but such a method requires extensive and time-consuming control pick-up. Other researchers are calibrating UAV cameras terrestrially indoors, again with a large number of control points (Cramer, Pryzbilla, Zuehorst, 2017). What we propose is a workflow that does not require control points, but instead uses automated, pre-programmed flights to collect imagery suitable for camera calibration. This method needs only a highly textured scene with good variation in height; the flight plans require a maximum of 30 waypoints and can be flown in the field in about 15 minutes or less.
Terrestrial camera calibration procedures are well documented and have been extensively published (Cramer, Pryzbilla, Zuehorst, 2017; Luhmann, Fraser, Maas, 2016). The inverse-pyramid configuration of camera stations is frequently used. At each station images are captured at multiple rotations (90°, 180° and 270°) about the x-axis to ensure, through a standard procedure, that matching points are observed through all parts of the lens, something that significantly improves the accuracy of the principal point (Xp, Yp). Although terrestrial calibration procedures have been used with DJI UAVs, they are performed indoors with a large number of control points, conditions that cannot easily be matched in the field (Cramer, Pryzbilla, Zuehorst, 2017).
A significant source of systematic error in camera calibration of "prosumer" UAVs occurs when geometric distortion models are imposed on in-camera JPEGs or are embedded in RAW files and imposed during the conversion of these files to a usable format for photogrammetry, such as JPEG or TIFF. These geometric distortion corrections are generalized for the lens and camera combination and cannot take into account the manufacturing variances between different instances of the camera model. As a consequence, if imagery is used that has such a geometric calibration applied, photogrammetry software will almost certainly misestimate, and usually significantly underestimate, the real interior parameters. These geometric corrections are frequently not apparent to many end users, who even insist on the use of RAW imagery as the basis of photogrammetric processing. For instance, Adobe's CameraRAW, often used to convert RAW images to JPEG format, will apply the manufacturer's geometric correction and give the user no option to do otherwise. Therefore, the foundation of our calibration procedure will be a workflow to recover the original images with no geometric corrections applied.
METHODS AND MATERIALS
The camera used for the study, the FC6310, is that built into the DJI Phantom 4 Pro UAV and is held by a brushless gimbal. The photographic parameters are given in Table 1. The significant difference in our procedures from earlier studies is that we insist on using a completely automated, custom flight sequence to replicate the best indoor lab results in the field.
Site Locations and Scene Selection
Many of the image sets captured for this work were taken of excavated archaeological structures at the National Institution Stobi in the Republic of North Macedonia, particularly the so-called "Building with Arches", in December of 2018 and February of 2019. Additional image-sets with control points were taken from the summer 2018 excavation season of a structure adjacent to the so-called "Theodosian Palace". Small image sets were also taken of limestone campus buildings at Queen's University, Canada. When selecting a scene for aerial camera calibration we used the same principles as terrestrial scene selection: 1) significant variations in depth in the scene, and 2) extensive texture across the scene that fills the field of view.
The former improves the estimation of the focal length (Remondino, Fraser, 2004); the latter ensures that matching points are distributed evenly across each image so that points can be compared across every part of the lens during the bundle adjustment. The "Building with Arches" at Stobi meets all of these criteria in that its partial walls vary by over 5 m in places, and the entire structure is comprised of highly textured stone or brick.
Flight Plan Creation
As earlier studies have noted, image sequences for calibration should follow quite different principles than conventional aerial blocks used for mapping. The sequence should cover only a small area, image overlap should be relatively high, the same area should be seen with several camera rotations, and images should be captured from several different heights (Cramer, Pryzbilla, Zuehorst, 2017). The addition of oblique imagery can also significantly improve the accuracy of the calibration by increasing the base-to-distance ratio of points observed by converging cameras (Haala, Cavegn, 2016). Though these principles are well established in the published literature (Slama, Theurer, Henriksen, 1980; Balletti, Guerra, Tsioukas, Vernier, 2014; Remondino, Fraser, 2004), they have not been implemented in an automated flight plan to date.
Flight Planning Software
Creation of these complex flight plans is not a trivial task. Most photogrammetric/survey flight planning software only allows for the creation of aerial blocks at a constant height. Commercial flight planning software for videography does allow for more freedom and control of the UAV's position and pose, but lacks the ability to create a flight plan based on photogrammetric parameters such as forward/side image overlap and image footprint. DJIFlightPlanner is a third-party flight planning software specifically for DJI UAVs that is designed with surveying in mind. The basic camera specifications for all DJI UAVs are included in the software (Focal Length, Sensor Size, etc.), so as to allow users to simply specify a boundary, flying height, and desired overlap to generate a flight plan. The software also allows the user to trigger an image according to time, or by waypoint, where it will hover the UAV while the photo is taken. The software then generates a simple CSV file with the waypoints and actions to be executed by the Litchi app, available for Android or iOS, which will control the UAV during the mission. This intermediate CSV file allows the user to easily modify the flight plan, so that it is relatively simple to rotate the UAV while it is flying to take images at 90° or 270° with respect to the direction of travel, as well as to add imagery at multiple heights or at oblique camera angles.
Flight Plans
Five flight plans were created, and each was flown separately to create five independent camera calibrations that could be compared. All flight plans take images at three heights above the terrain: 60 ft, 75 ft and 90 ft (DJIFlightPlanner requires imperial values to be entered for flying height). At each waypoint the camera captured three images: one image in landscape and two images in portrait, ±90° relative to the direction of flight.
The first flight plan (M1 in Table 2) is a recreation of a typical terrestrial camera calibration model as outlined in the 3DM Analyst User Guide. The three image tiers form an inverse pyramid with a single strip at 60 ft, two at 75 ft, and three strips at 90 ft. The strips have a high forward overlap of 80%, and as a result the flight contains 90 images in total. Four additional oblique images were taken at -45° to nadir in the corners of the rectangular flight area at 75 ft to improve the overall robustness of the calibration. The next two flight plans (M2, M3 in Table 2) do not use the inverse pyramid configuration and instead use three full aerial blocks at the same three altitudes. In each of the blocks a landscape and two portrait images were taken at each waypoint, along with four oblique images at the corners. The only difference between these two plans was the overlap: M2 had 60% forward and 40% side overlap, and M3 a forward overlap of 80% and 40% sidelap. The final two flight plans (M4, M5 in Table 2) repeat the same procedure as the previous two missions, except with an increased forward and side overlap of 80% and 90% respectively.
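To illustrate how such a plan can be assembled programmatically before export, the sketch below builds a waypoint list with three headings per station and four corner obliques, following the general pattern of M2/M3; the Waypoint record, grid spacing and local coordinate frame are illustrative, and the actual DJIFlightPlanner/Litchi CSV column layout is not reproduced here.

```python
# Rough sketch of a calibration flight plan laid out programmatically before
# translation into a mission CSV; field names and spacing are illustrative.
from dataclasses import dataclass
from itertools import product

@dataclass
class Waypoint:
    x_m: float            # position within the block, metres (local frame)
    y_m: float
    altitude_ft: float
    heading_deg: float    # rotation relative to the direction of travel
    gimbal_pitch_deg: float

def calibration_block(cols: int, rows: int, spacing_m: float,
                      altitudes_ft=(60, 75, 90)) -> list[Waypoint]:
    plan: list[Waypoint] = []
    for alt, (i, j) in product(altitudes_ft, product(range(cols), range(rows))):
        for heading in (0.0, 90.0, 270.0):          # landscape + two portrait shots
            plan.append(Waypoint(i * spacing_m, j * spacing_m, alt, heading, -90.0))
    # Four oblique shots at the block corners, mid altitude, camera at -45 deg.
    mid = altitudes_ft[len(altitudes_ft) // 2]
    for cx, cy in ((0, 0), (0, rows - 1), (cols - 1, 0), (cols - 1, rows - 1)):
        plan.append(Waypoint(cx * spacing_m, cy * spacing_m, mid, 0.0, -45.0))
    return plan

print(len(calibration_block(cols=3, rows=3, spacing_m=15.0)))   # 85 stations in this toy layout
```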
Post-Processing and Calibration
This research will use two separate photogrammetric software packages. First, CalibCam (version 2.5.0 build 1776), produced by ADAM Technology of Perth, Australia, is used for evaluating calibration accuracy. CalibCam provides reliable reporting of calibration parameters, a correlation matrix and the accuracy to which individual parameters have been solved. The second package, PhotoScan/Metashape (version 1.5.2), was used as an example of the common Structure-from-Motion approach to calibration, where the interior orientation is usually not held fixed before the bundle adjustment. PhotoScan has been used extensively in archaeology for documentation at Stobi. Several flight plans from July of 2018, with 10 control points each, were processed in PhotoScan with pre-calibration according to our method, as well as with the standard auto-calibration procedure, where the image set being processed was used for the solving of camera internals. The residuals on the control points, from separate least-squares calculations, have an accuracy of 3 mm.
Adobe CameraRAW (version 11.1) and RawTherapee (version 5.5) were used for image conversion from the DNG ("Digital Negative") files produced by the UAV. While the geometric correction embedded in the metadata could not be deactivated in CameraRAW, RawTherapee allows the user to disable this correction.
RESULTS AND ANALYSIS
We will first demonstrate that the geometric correction automatically applied to the images from the Phantom 4 Pro has a dramatic impact on the overall calibration. By using RawTherapee we can avoid these geometric corrections and recover the images as shot, as well as a true solution for the lens distortion of the camera.
The accuracy of the flight plans described above was then established in CalibCam by generating matching points by Normalized Cross-Correlation Least Squares Matching, followed by a bundle adjustment to solve for the interior orientation parameters: Focal Length (C), Radial Distortion (K1, K2, K3), Principal Point Offset (Xp, Yp), Decentering Distortion (P1, P2) and the Pixel Scaling Factors (B1, B2). The sigma expressed in pixels in the bundle adjustment report for each of these parameters was then compared between the flight plans. Generally, we did not compare P1, P2, B1, and B2, as these parameters were almost always solved to high accuracy regardless of the flight plan. Instead, the Focal Length and Principal Point showed the most variation and are reported below, along with the three Radial Distortion parameters.
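For readers unfamiliar with these parameters, the sketch below applies one common Brown-Conrady style distortion model using the parameter names listed above; sign and normalization conventions differ between packages (CalibCam, Metashape, OpenCV), so this is an illustrative formulation rather than the exact model used by either program.

```python
# Illustrative Brown-Conrady style interior orientation model (not the exact
# convention of any particular package): radial terms K1-K3, decentering P1/P2,
# affinity B1/B2, principal point offset Xp/Yp. Coordinates are in image-plane
# units relative to the nominal centre.
def apply_distortion(x: float, y: float,
                     xp: float, yp: float,
                     k1: float, k2: float, k3: float,
                     p1: float, p2: float,
                     b1: float = 0.0, b2: float = 0.0) -> tuple[float, float]:
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * (1.0 + radial) + p1 * (r2 + 2.0 * x * x) + 2.0 * p2 * x * y
    y_d = y * (1.0 + radial) + p2 * (r2 + 2.0 * y * y) + 2.0 * p1 * x * y
    x_d += b1 * x + b2 * y          # small scale/shear difference between axes
    return xp + x_d, yp + y_d

print(apply_distortion(3.0, -2.0, xp=0.01, yp=-0.02,
                       k1=-3e-4, k2=1e-6, k3=0.0, p1=2e-5, p2=-1e-5))
```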
Image Conversion using Adobe and RawTherapee
When using Adobe CameraRAW for converting DNG images to JPEG for post-processing, DJI embeds a lens profile that forces a geometric correction on the JPEG. RawTherapee has the ability to deactivate the use of this geometric lens correction, maintaining the true original image. Two 18-image terrestrial calibration sets were collected on Queen's campus using the conventional inverse pyramid calibration structure. Figure 1 shows two sample images that demonstrate the obvious effect of the distortion correction. While the image dimensions in pixels of the two images are the same, one notices extensive cropping and stretching to compensate for barrel distortion when the geometric correction is applied in Adobe CameraRAW. While this corrected image is more visually appealing, it is inappropriate for photogrammetric processing. When we compare a visualization of the interior orientation correction generated from the image sets (Figure 2) with and without the geometric correction applied, this visual difference can be quantified. When comparing the displacement of pixels at the outside of the lens, as well as the overall RMS value for the amount that pixels have been moved across the entire sensor (Table 3), the effects the manufacturer's software-based lens correction has on the calibration parameters become apparent. Although this new method can prevent geometric corrections from being applied, it introduces the problem of vignetting in the outside corners of the images. Manual vignetting correction was applied in RawTherapee to remove this effect. This sort of vignetting correction should only modify the brightness values of pixels and does not impose any geometric correction. In order to verify that the vignetting correction did not impact any interior orientation parameters, calibrations with and without vignetting correction were compared. Figure 3 shows that vignetting correction did not impact the distribution of matching points across the lens/sensor. Table 4 shows that the solutions for Focal Length and Principal Point showed no significant differences whether or not vignetting correction was applied.
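The statement that vignetting correction is purely radiometric can be illustrated with a simple radial-gain sketch: pixel brightness is scaled as a function of distance from the image centre while pixel positions are untouched. The quadratic gain model, its strength and the 8-bit clipping are assumptions for illustration; this is not RawTherapee's actual algorithm.

```python
# Brightness-only vignetting correction sketch: scales pixel values by a radial
# gain, leaving geometry (pixel positions) unchanged. Assumes an 8-bit image.
import numpy as np

def correct_vignetting(img: np.ndarray, strength: float = 0.35) -> np.ndarray:
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r = np.hypot(xx - cx, yy - cy) / np.hypot(cx, cy)    # 0 at centre, 1 at corner
    gain = 1.0 + strength * r ** 2                        # brighten the corners
    out = img.astype(np.float64) * (gain[..., None] if img.ndim == 3 else gain)
    return np.clip(out, 0, 255).astype(img.dtype)
```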
Camera Calibration Assessment
Missions M4 and M5, which each consisted of over 200 images, were used to demonstrate that calibration accuracy showed greatly diminishing returns after 90 images as seen in Figure 4.
For both M4 and M5, increasing the number of images taken was not necessary to produce a satisfactory calibration with an accuracy of below 0.3 pixels for all important parameters. Instead, as flight plans M1 to M3 show, the configuration of the images is far more important to calibration accuracy than sheer numbers. They show that the number of images required in a calibration flight-plan can be as low as 20, thereby saving significant amounts of flying and processing time. M1, which used the terrestrial inverse pyramid calibration structure, showed the importance of adding rotated images for the accurate solution of Focal Length and Principal Point (Table 5). In agreement with earlier published literature, the addition of oblique images improved the estimation of Focal Length by an order of magnitude. While the accuracy of the Principal Point was nearly doubled with the use of rotated images, the addition of oblique images had a negligible impact on these parameters.
LPPO 8.850 0.070 0.024 0.020 0.061 0.128 0.084
Table 5. Sigma statistics for M1. L = Landscape, P = Portrait -90°, PP = Portrait ±90°, O = Oblique.
M2 and M3 consisted of images taken at three altitudes, with multiple images being captured at each waypoint with different rotations with respect to the direction of travel, along with four oblique images per mission. These missions had more images than M1, but they were much easier to plan than the inverse pyramid structure. The increased number of images in both missions led to an improvement, usually two-fold, in the accuracy of the radial distortion parameters. The same influence of image rotations and oblique images was also observed on the Principal Point and Focal Length estimates. A question remained: were three flying heights really necessary for a good interior orientation, or could the variation in height in the scene offer sufficient variation for good focal-length estimates? By reducing the number of heights, we could further reduce the number of images required. Table 7 (below) demonstrates the effect of height on the accuracy of the interior orientation parameters. We tried each height, as well as three combinations of two heights, in Table 7. These results show that, provided the scene used for calibration has considerable variation in height, there is no impact on Focal Length estimation when flying at only one height. The slight improvements in the other parameters with the use of two heights, as opposed to one, are likely due only to the increased number of images being used.
60 & 75 8.849 0.026 0.011 0.009 0.034 0.066 0.040
60 & 90 8.849 0.023 0.010 0.008 0.029 0.057 0.035
75 & 90 8.849 0.024 0.009 0.008 0.028 0.056 0.035
Table 7. Sigma statistics for proposed aerial missions when comparing flying heights (ft).
If only a single height is used, then the number of images can be reduced to under 40. Image overlap remains the next variable that can be adjusted. If overlap can be reduced, then even fewer images can be used to calibrate the camera. M3 was used as the basis for this analysis. Only the images captured at 75 ft were used, as well as the four oblique images captured at the same height. Table 8 shows the effect of disabling either every fourth image or every second image. This operation requires some clarification. The images were divided into four groups: landscape, 90-degree rotation, 270-degree rotation, and oblique images. The disabling of every second or every fourth image was applied to each group. The effect of this was to disable every second or every fourth combination of images taken at a particular waypoint. Because only four oblique images were collected, and their value was clearly demonstrated, none of these images was disabled. Disabling images, whether every fourth or every second, had negligible impact on calibration accuracy. If every second image in each group can be disabled with no ill effects, then the calibration image-set can be reduced to as few as 20 images overall by planning forward overlap as 60%.
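A sketch of the thinning procedure described above is given below; how the shot type is identified in practice (EXIF yaw, gimbal pitch or file naming) is left open and is simply passed in as a label here.

```python
# Thinning sketch: drop every second (or fourth) image within each shot-type
# group while keeping all oblique frames, as described in the text above.
from collections import defaultdict

def thin_image_set(images: list[tuple[str, str]], keep_every: int = 2) -> list[str]:
    """images: (filename, group) pairs, group in {'landscape', 'rot90', 'rot270', 'oblique'}."""
    by_group: dict[str, list[str]] = defaultdict(list)
    for name, group in images:
        by_group[group].append(name)
    kept: list[str] = []
    for group, names in by_group.items():
        if group == "oblique":
            kept.extend(names)                 # obliques are too valuable to drop
        else:
            kept.extend(names[::keep_every])   # keep every 2nd (or 4th) shot
    return kept
```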
In-Situ use of Calibration Model and Control Network Check
Within PhotoScan, a camera calibration was produced using an aerial camera calibration mission block that included oblique images and rotations. Two image sets from the same flight plan over a relatively flat area, with 10 control points and 9 control points respectively, were processed in PhotoScan, first using the pre-calibrated camera, and a second time using auto-calibration.
The effect of calibration on the control point residuals was then compared. While not all residuals were improved using the pre-calibrated camera, overall the RMS error on the control points was reduced significantly with the use of a properly calibrated camera. In particular, and as expected, the height accuracy always improved significantly when a calibration with an accurate Focal Length was used. This fact has long been understood in the aerial mapping community, but to date does not appear to be a lesson that operators of "prosumer" UAVs and Structure-from-Motion software have taken to heart.
CONCLUSION
Prosumer quadcopter UAVs offer real advantages in photogrammetric mapping. The economies of scale in their production have reduced their cost, flexible and inexpensive flight planning software is now available, and, as we have shown, their cameras can be calibrated to a satisfactory level of accuracy if some care is taken. The first step is ensuring that the manufacturer's geometric correction is not imposed on the images. The second step is to create custom calibration flight-plans that leverage well-understood calibration practices published over the past decades. As we have demonstrated, as few as 20 images taken over a scene of varying texture and height are sufficient to generate a satisfactory camera calibration. This is hardly a big investment in time or money and could potentially resolve many of the systematic errors like "doming", almost certainly due to misestimation of the interior orientation parameters, without resorting to non-standard curving flight-plans. The aerial block structure, long employed for mapping missions using manned aircraft, can still be used for UAVs provided pre-calibration of the cameras is regularly done. As we have shown, this need not be a time-consuming burden on UAV operators.
Figure 1. Visual comparison of DNG images when converted to JPEG with the manufacturer-supplied geometric correction imposed in Adobe (left), and with no geometric correction applied using RawTherapee (right).
Figure 3. Relative-only point density (#points/location) visualization with vignetting correction (left) and without vignetting correction (right).
Figure 4. Interior orientation results for M4 (green) and M5 (red) as additional images are continually added in an attempt to further improve orientation parameters.
Figure 5. Error residuals for control points of two separate missions flown over the same scene on different days; July 13, 2018 (top), July 16, 2018 (bottom).
Table 3. Distortion statistics for camera calibration.
Table 4. Interior orientation results.
Table 8. Sigma statistics for proposed aerial mission A (M3) comparing the removal of every 2nd and 4th image with oblique imagery at 75 ft.
"Computer Science"
] |
Novel Cu-Rich Nano-Precipitates Strengthening Steel with Excellent Antibacterial Performance
In this study, a certain amount of Cu was added into a tentative steel to introduce novel Cu-rich nanoprecipitates, thus enhancing strength without sacrificing toughness. This type of precipitate is quite different from the previously reported ε-Cu, and is a novel type of Cu-rich nanoprecipitate, which contains more than 50% Cu. The microstructure, mechanical properties and precipitates of the steels aged at 550 °C for different holding times were carefully examined. The microstructure of the tested steel was mainly bainite and gradually evolved into an equilibrium state after aging. Mechanical property results showed that, after being aged at 550 °C for 10 min, the steel can have an excellent combination of strength and toughness. In addition, a large amount of tiny precipitates was uniformly distributed in the matrix of the aged steels, and their size remained at the nanoscale. In particular, when the steel was aged at 550 °C for 10 min, it produced the largest number of tiny precipitates of this type. This type of Cu-rich nanoprecipitate emerging from the steel aged at 550 °C for 10 min also brought about a remarkable antibacterial property. It revealed that the novel Cu-rich precipitates not only have positive effects on strength and toughness, but also play an important role in antibacterial properties.
Introduction
High strength steels are increasingly used in many areas, such as engineering machinery, marine platforms, naval vessels, pipelines, storage tanks, and bridges [1]. Traditionally, in terms of elements put into steels, the strengthening of steels mainly relies on adding more carbon and more alloying elements, which commonly are Mn, Mo, Cr, Ni, Nb, V, Ti, etc. [2][3][4][5]. In regards to the strengthening mechanism, it often involves solid solution strengthening, dislocation strengthening, precipitation strengthening, subcrystal strengthening and so on [6]. Unfortunately, more carbon and more alloying elements can easily have harmful effects on the weldability. Moreover, to some extent, large size precipitations containing Nb, Ti, and V not only weaken precipitation strengthening and the dislocation strengthening effect, but also are unfavorable for weldability and toughness. In other words, as the strength of the steel in demand is becoming higher and higher, only adopting these methods mentioned above makes it very difficult to enhance the strength level of steels without a reduction of toughness [7].
Recently, novel Cu-rich nanoprecipitates in high strength steel have grabbed wide attention and become one important turning point for the development of high strength steels [8][9][10][11][12][13][14][15][16]. As we know, the effect of precipitation strengthening highly depends on its type, number density, size and distribution conditions [1]. Recent research has shown that, through suitable composition and heat treatment, novel Cu-rich nanoprecipitates can offer an excellent performance combination of high strength and toughness, yet the carbon content of these high strength steels can be very low [12,16,17].
An optical microscope (OM, Leica DMI5000M, Leica, Germany) and a transmission electron microscope (TEM, JEM2100F, JEOL, Tokyo, Japan) were used to observe the microstructures of these steels. For the optical microscope observation, samples were ground with sandpaper to #2000 grit, mechanically polished, and then etched with 4 vol.% nital solution. The TEM was used to survey the characteristics of the nanoscale Cu-rich precipitates. Slices of 0.3 mm thickness were manually thinned to 50 µm and were examined by TEM.
The tensile specimens were 5 mm in diameter and 25.4 mm in gauge length and were machined from the tentative plates perpendicular to the rolling direction. The tensile tests were conducted at room temperature. To enhance the reliability of the tensile results, two parallel samples were tested to get the average value.
The antibacterial property was examined according to the agar plate method (GB 4789.2-94), and the tested bacteria were Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus). The tested bacterial strain was obtained from a slant culture at logarithmic phase, and the tentative steel and the control specimen were individually incubated in 10^5 CFU/mL Escherichia coli (E. coli) and 10^6 CFU/mL Staphylococcus aureus (S. aureus) at 37 °C for 24 h. These bacterial fluids were then diluted to 10^3 CFU/mL and incubated on agar for 24 h at the same temperature and humidity, after which the number of bacterial colonies in the culture dishes was counted. The antibacterial ratio was calculated as R = (C − A)/C × 100%, where R is the antibacterial ratio, C is the viable bacterial count of the control specimen, and A is the viable bacterial count of the tentative steel [20,21].
Microstructure
Figure 1 shows the optical microstructures of the TMCP and aged specimens. On the whole, the microstructure of the tested steel was mainly bainite, yet there were slight differences under the different processing conditions. Before the aging process, most of the microstructure of the TMCP steel was tiny granular bainite and polygonal ferrite, with a small amount of lath bainite (Figure 1a). After aging at 550 °C for 5 min, there were only slight, not readily apparent changes (Figure 1b). When the aging holding time was prolonged to 10 min, the proportion of polygonal ferrite began to increase. As the holding time was further extended from 30 min to 60 min, the main microstructure became polygonal ferrite, and the proportion of granular bainite significantly decreased (Figure 1c-e). In regards to all types of bainite, polygonal ferrite is the most stable microstructure, granular bainite is inferior to it, and lath bainite is the most unstable. The terminal point of the microstructure evolution is the equilibrium microstructure: polygonal ferrite [22]. From these microstructure evolution photos, it was evident that as the holding time became longer, the microstructure gradually evolved into an equilibrium state.
Tensile Properties and Microhardness
The strength curve of the specimens under different processing techniques is shown in Figure 2a. From Figure 2a, it can be found that the aging holding time had obvious effects on the mechanical properties of the steel. After TMCP, the yield strength and the tensile strength were 616 MPa and 744 MPa respectively, and the elongation rate was 24%. After aging at 550 °C for 5 min, there was a small increase of about 13 MPa in both the yield strength and the tensile strength, with an elongation rate of 22%. When the steel was aged at the same temperature for 10 min, the yield strength and the tensile strength rose dramatically to 660 MPa and 782 MPa respectively, yet the elongation rate decreased slightly to 21%. As the aging holding time was prolonged to 30 min, the yield strength grew to 692 MPa and the tensile strength remained at the previous level, with the elongation rate declining to 18%. When the steel was aged for 60 min, its yield strength and tensile strength reached their highest points at 756 MPa and 806 MPa, and there was no significant change in the elongation rate. Generally, there was a relatively stable increase in both the yield strength and the tensile strength. By contrast, the elongation rate saw an opposite trend.
The chart (Figure 2b) illustrates the low temperature impact property of the steels under different processing techniques. As the holding time was extended, the impact absorbing energy first fell slightly at the 5 min holding time, and then rose to its peak at the 10 min aging time. After that, the impact absorbing energy began to drop again, and the longer the holding time, the lower it became. Overall, after aging at 550 °C for 10 min, the steel can have excellent mechanical properties.
Precipitation
The distribution of Cu-rich nanoprecipitates in the as-rolled sample and in the samples aged at 550 °C for different times is shown in Figure 3. In the as-rolled sample, numerous entangled dislocations can be clearly seen, and no nanoprecipitates were observed (Figure 3a). After the steel was aged at 550 °C for 5 min, a large number of tiny precipitates were uniformly distributed in the matrix, and their size remained at the nanoscale (Figure 3b). These precipitates were analyzed by energy spectrum analysis (see Figure 4a, Table 2), which revealed that they were a Cu-rich phase. After holding for 10 min at the same temperature, the number of these tiny precipitates continued to rise while their size did not grow appreciably (Figure 3c). As the holding time was further prolonged, the precipitates became notably larger; in particular, when the sample was aged for 60 min, their size exceeded 20 nm (Figure 3d,e). The precipitation of microalloying elements usually proceeds through four steps: segregation of the microalloying elements, followed by nucleation, growth, and coarsening of the precipitates. From these results, it can be concluded that with prolonged holding time, Cu continuously segregated, leading to the continuous nucleation, growth, and coarsening of the Cu-rich precipitates.
The Antibacterial Performance
To further study the antibacterial property of the tested steel, two control samples, as-rolled and aged at 550 °C for 10 min, were chosen for the antibacterial test. Figure 5 shows the antibacterial rates of the two specimens against Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus). The antibacterial property of the as-rolled sample was clearly poor, with antibacterial rates against E. coli and S. aureus of 63% and 70%, respectively. In contrast, the sample aged at 550 °C for 10 min had excellent antibacterial performance, with antibacterial rates against E. coli and S. aureus of 100% and 99.9%, respectively. Photographs of the antibacterial tests against E. coli and S. aureus on Petri dishes cultured with the two samples are shown in Figures 6 and 7. After the as-rolled sample was cultured with the bacterial suspension, the number of bacterial colonies was large and the sample exhibited poor antibacterial performance (Figure 6a, Figure 7a). By contrast, when the sample aged at 550 °C for 10 min was cultured with the bacterial suspension, no bacterial colonies appeared and the sample showed excellent antibacterial performance (Figure 6b, Figure 7b).
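The antibacterial rates quoted above are not accompanied by their defining formula in the extracted text; the sketch below assumes the standard plate-count definition, with hypothetical colony counts chosen only to reproduce the quoted values:

```python
def antibacterial_rate(n_control: int, n_sample: int) -> float:
    """Plate-count antibacterial rate in percent:
    R = (N_control - N_sample) / N_control * 100."""
    return (n_control - n_sample) / n_control * 100.0

# Hypothetical counts: 1000 colonies on the control plate.
print(antibacterial_rate(1000, 370))  # 63.0 %, as-rolled sample vs. E. coli
print(antibacterial_rate(1000, 300))  # 70.0 %, as-rolled sample vs. S. aureus
print(antibacterial_rate(1000, 0))    # 100.0 %, aged sample vs. E. coli
```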
Discussion
As shown by the experimental results, the strength of the steel continued to grow with prolonged holding time, and the low-temperature impact property was best when the steel was aged for 10 min. However, the optical microstructure evolution shows that the holding time had only a slight softening effect on the microstructure. This means that the strength increase of the steel after aging was not caused by a transformation of the microstructure but resulted from aging-induced Cu-rich nanoprecipitates. Precipitation strengthening is the most important strengthening mechanism in Cu-bearing high-strength steel. Because the strengthening effect observed in early work was mainly attributed to the precipitation of ε-Cu in the matrix, the nature of the precipitate phase has remained a controversial issue. With the introduction of more advanced characterization methods into the study of Cu-containing precipitates, researchers began to develop different opinions on this problem. Goodman [23] found that the strength increase at the aging peak did not come from the ε-Cu phase, as previously thought, but from uniformly distributed precipitates containing about 50% Cu. It was also found that the precipitation-strengthening effect of ε-Cu was far weaker than that of this novel type of precipitate. In fact, the characteristics of the Cu-rich precipitates that determine the final mechanical properties, such as size, number density, and shape, depend strongly on the compositional and resultant structural evolution during the aging process [24][25][26][27][28][29][30].
At the beginning of the aging process, the Cu atoms gain enough energy to overcome the surrounding potential barrier and segregate at nearby positions. The longer the holding time, the more precipitates develop, and they also gradually grow larger. In this experiment, when the samples were held for 5 min and 10 min, the Cu-rich precipitates were all nanoscale. During this stage, the precipitates were small and dislocations cut through them. The number and volume fraction of the precipitates were so large that dislocations had to cut through them continuously, constantly enhancing the precipitation-strengthening effect and raising the strength. However, as the holding time was prolonged to 60 min, the precipitates gradually grew larger. Although the strength still rose during this process, the increasing precipitate size would have a negative effect on the toughness, and if the holding time were extended further, an over-aging phenomenon could result. These changes cannot be observed in the optical microstructure because the nanoprecipitates are very small.
In this experiment, the nanoscale size and the rising volume fraction of these uniformly distributed precipitates allowed them to inhibit dislocation motion, thereby producing higher strength. In fact, the tiny novel Cu-rich phase precipitated from the supersaturated matrix during the aging process, and the interaction between these precipitates and mobile dislocations is the essence of the resulting precipitation strengthening [31].
As for the antibacterial properties, the sample aged at 550 °C for 10 min clearly performed better than the as-rolled sample. Shi, Wang, and co-workers also showed that precipitation of the Cu-rich phase is the precondition for antibacterial properties, and that the Cu-rich phase precipitating during aging can significantly promote them [20,32]. Therefore, the novel Cu-rich precipitates not only have positive effects on strength and toughness but are also a decisive factor for the antibacterial properties.
Firstly, the observed antibacterial activity of Cu nanoparticles depends strongly on their morphology, especially their large surface-to-volume ratio, which enables them to interact directly with the outer membrane of each bacterium [33]. Yin also found that the size and the number per unit area of the precipitates are important factors for excellent antibacterial properties [34]. The aged plates contained a large number of uniformly distributed Cu-rich nanoprecipitates, and their number peaked after aging at 550 °C for 10 min. Under this processing condition, the precipitates therefore had the largest total surface area combined with a relatively small volume, and hence the largest surface-to-volume ratio among the specimens. This is why the steel showed a remarkable antibacterial property.
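The surface-to-volume argument can be made concrete for idealized spherical precipitates, for which the ratio scales as 3/r; the spherical shape and the 5 nm size below are illustrative assumptions, while the 20 nm size is the value reached after 60 min of aging:

```python
import math

def surface_to_volume(diameter_nm: float) -> float:
    """Surface-to-volume ratio of a sphere (in 1/nm); equals 6/d = 3/r."""
    r = diameter_nm / 2.0
    return (4.0 * math.pi * r**2) / ((4.0 / 3.0) * math.pi * r**3)

print(surface_to_volume(5.0))   # 1.2 nm^-1 for a hypothetical 5 nm precipitate
print(surface_to_volume(20.0))  # 0.3 nm^-1 for a 20 nm precipitate (60 min aging)
```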
However, the underlying mechanism of the antibacterial activity of Cu-rich nanoprecipitates is still debated. One possible explanation is that nanoparticles can pass through the nano-sized pores of bacterial cell membranes [35]. Nanoparticles in suspension continuously release ions into the nutrient medium. Cu ions from the nanoprecipitates attach to the bacterial cell membrane and rupture it, resulting in protein denaturation and cell death. The attachment of both nanoparticles and copper ions to the cell membrane leads to the accumulation of envelope protein precursors and can dissipate the proton motive force. Cu-rich nanoprecipitates also destabilize the outer membrane and rupture the plasma membrane, thereby depleting intracellular ATP [32,35]. In addition, Cu-rich nanoprecipitates and the released ions produce hydroxyl radicals that damage or disturb essential proteins and DNA, leading to cell death [36,37].
Conclusions
The microstructure of the tested steel was mainly bainite, with slight differences among the different processing routes. Before aging, the microstructure of the TMCP steel consisted mostly of fine granular bainite and polygonal ferrite, with a small amount of lath bainite. After aging at 550 °C, the microstructure gradually evolved toward an equilibrium state as the holding time was prolonged. With increasing holding time, the strength of the tested steel rose steadily while the elongation gradually decreased. The impact absorbing energy reached its peak at 10 min of aging. Overall, after aging at 550 °C for 10 min, the steel exhibited excellent mechanical properties. Transmission electron microscopy showed numerous entangled dislocations but no nanoprecipitates in the as-rolled sample. After aging, a large number of tiny precipitates were uniformly distributed in the matrix, and their size remained at the nanoscale; however, as the holding time was extended, the precipitate size clearly grew. Compared with the as-rolled sample, the sample aged at 550 °C for 10 min had both excellent mechanical properties and excellent antibacterial performance. This shows that the novel Cu-rich precipitates not only have positive effects on strength and toughness but are also a decisive factor for the antibacterial properties. | 6,175.4 | 2019-01-07T00:00:00.000 | [
"Materials Science"
] |
A New Exponential Factor-Type Estimator for Population Distribution Function Using Dual Auxiliary Variables under Stratified Random Sampling
In this paper, we propose a generalized class of exponential-type estimators for estimating the finite population distribution function using dual auxiliary variables under stratified sampling. The biases and mean squared errors (MSEs) of the proposed class of estimators are derived up to the first order of approximation. Theoretical and empirical comparisons are discussed, and four populations are used to support the theoretical findings. It is observed that the proposed class of estimators performs better than all other considered estimators in stratified sampling.
Introduction
In survey sampling, auxiliary information is often used to increase the precision of estimators of population parameters such as the population mean, median, distribution function, quantiles, and standard deviation; many such estimators, requiring one or two auxiliary variables, exist in the literature.
Our primary goal is to enhance the precision of the estimator; for this reason, we use stratified random sampling. If the population of interest is homogeneous, simple random sampling performs well. But when the population of interest is heterogeneous, it is advisable to use stratified random sampling instead of simple random sampling. In stratified random sampling, we split the whole aggregate into a number of non-overlapping groups or subgroups called strata. These groups are internally homogeneous, and a sample is drawn independently from each stratum. To obtain the maximum benefit from stratification, the stratum sizes N_h must be known. Once the strata have been determined, a sample is drawn from each, the drawings being made independently. In stratified sampling, every stratum is handled as a separate population, and consequently samples are drawn independently from every stratum.
In other words, if SRS is used within each stratum to select the sample, the corresponding sample is called a stratified random sample. Good stratification requires that each stratum be internally homogeneous but differ externally from the others. Stratification may often produce gains in the precision of estimates. In stratified random sampling, the given population is divided into several strata. Then, from each stratum, a simple random sample is selected depending on the size of the stratum. Estimators are first computed for each stratum and then combined into a precise estimate of the population parameter.
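The sampling scheme just described can be sketched in a few lines of Python; the strata and sample sizes below are made up purely for illustration:

```python
import random

def stratified_srswor(units, labels, n_h):
    """Stratified random sample: an independent simple random sample
    without replacement of size n_h[h] from each stratum h."""
    strata = {}
    for u, h in zip(units, labels):
        strata.setdefault(h, []).append(u)
    return {h: random.sample(members, n_h[h]) for h, members in strata.items()}

# Toy population of 100 units split into two strata of sizes 60 and 40.
population = list(range(100))
stratum_of = ["A"] * 60 + ["B"] * 40
sample = stratified_srswor(population, stratum_of, {"A": 6, "B": 4})
print({h: len(s) for h, s in sample.items()})  # {'A': 6, 'B': 4}
```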
In the sampling literature, authors have estimated the DF using information on one or more auxiliary variables. Chambers and Dunstan [20] suggested an estimator of the DF that requires information on both the study and auxiliary variables. Similarly, Rao et al. [21] and Rao [22] suggested ratio and difference/regression estimators for estimating the DF under a general sampling design. Kuk [23] suggested a kernel method for estimating the DF using auxiliary information. Ahmed and Abu-Dayyeh [24] estimated the DF using information on multiple auxiliary variables. A calibration approach was used by Rueda et al. [25] to devise an estimator of the DF. Singh et al. [26] considered the problem of estimating the DF and quantiles using auxiliary information at the estimation stage of a survey. Moreover, Yaqub and Shabbir [27], Hussain et al. [28], and Hussain et al. [29] considered generalized classes of estimators for estimating the DF in the presence of non-response, while Hussain et al. [30] proposed two new families of estimators using dual auxiliary information under simple and stratified random sampling. Furthermore, Ahmad et al. [31] suggested a new estimator of the DF using auxiliary information.
In this paper, we propose a new estimator for estimating the DF using information on the distribution function and the mean of the auxiliary variable. The biases and mean squared errors (MSEs) of the existing and proposed estimators of the DF are derived under the first order of approximation. From theoretical and numerical comparisons, we can say that the proposed estimator is more precise than the existing adapted estimators when estimating the DF. The rest of the paper is organized as follows. In Section 2, some notations are given. In Section 3, some existing estimators of the finite population mean used for estimating the finite DF are reviewed. The proposed estimator is given in Section 4. In Sections 5 and 6, theoretical and numerical comparisons are made, respectively. In Section 7, the results in the tables are interpreted. Finally, conclusions are drawn in Section 8.
Notation
Consider a finite population Ω = {1, 2, . . . , N} of N distinct units, divided into L homogeneous strata, where the size of the hth stratum is N_h, for h = 1, 2, . . . , L, such that ∑_{h=1}^{L} N_h = N. Let Y and X be the study and auxiliary variables taking values y_hi and x_hi, respectively, where i = 1, 2, . . . , N_h and h = 1, 2, . . . , L. For estimating the finite population distribution function, assume that a sample of size n_h is drawn from the hth stratum using simple random sampling without replacement, such that ∑_{h=1}^{L} n_h = n, where n is the total sample size.
Y: the study variable. X: the auxiliary variable. Let S²_uh and S²_vh denote the population variances of U and V for the hth stratum, and S²_xh the population variance of X for the hth stratum. Further, let C_uh = S_uh/Ū_h, C_vh = S_vh/V̄_h, and C_xh = S_xh/X̄_h denote the population coefficients of variation of U, V, and X for the hth stratum; S_uvh, S_uxh, and S_vxh the population covariances between U and V, between U and X, and between V and X for the hth stratum; and R_uvh = S_uvh/(S_uh S_vh), R_uxh = S_uxh/(S_uh S_xh), and R_vxh = S_vxh/(S_vh S_xh) the corresponding population correlation coefficients for the hth stratum. In order to obtain the biases and mean squared errors (MSEs) of the adapted and proposed estimators of F(y), we consider the following relative error terms.
Existing Estimators
In this section, we briefly review some existing estimators of U.
(1) The conventional unbiased mean per unit estimator of U is as follows; the reference for this estimator is not included because it is the conventional unbiased estimator under simple random sampling (a computational sketch of this baseline estimator, adapted to the stratified setting, is given after this list).
(2) Cochran [32] suggested the traditional ratio estimator of U; the bias and MSE of U_R,h, to the first order of approximation, follow. (3) Murthy [33] suggested the usual product estimator of U; the bias and MSE of U_P,h, to the first order of approximation, are given, and the product estimator U_P,h is preferable under the corresponding condition. (4) The regression-type estimator involves an unknown constant m; U_Reg,h is an unbiased estimator of U, and its simplified minimum variance at the optimum value m(opt) = R_uv(δ_1/δ_2) is obtained. (5) Rao [37] suggested an improved difference-type estimator of U, involving unknown constants m_1 and m_2. The bias and MSE of U_R.D,h, to the first order of approximation, follow, together with the optimum values of m_1 and m_2 and the simplified minimum MSE of U_R.D,h at these optimum values. (6) Bahl and Tuteja's exponential ratio-type and product-type estimators [34] are given next; the biases and MSEs of U_BT.R,h and U_BT.P,h, to the first order of approximation, follow. (7) Grover and Kaur [35] suggested a generalized class of ratio-type exponential estimators involving unknown constants m_3 and m_4. The bias and MSE of F(y)_GK, to the first order of approximation, follow; the optimum values of m_3 and m_4 are determined by minimizing (24), and the minimum MSE of F(y)_GK at these optimum values is obtained.
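As promised after item (1), a minimal computational sketch of the baseline per-unit estimator in the stratified setting is given below; the notation (stratum weights W_h = N_h/N applied to the per-stratum empirical distribution functions) is assumed here, since the displayed formulas did not survive extraction:

```python
def stratified_edf(sample_y, N_h, y):
    """Stratum-weighted empirical distribution function at the point y:
    F_hat(y) = sum_h (N_h / N) * (1 / n_h) * #{i : y_hi <= y}."""
    N = sum(N_h.values())
    f_hat = 0.0
    for h, values in sample_y.items():
        w_h = N_h[h] / N
        f_hat += w_h * sum(1 for v in values if v <= y) / len(values)
    return f_hat

# Two strata of population sizes 60 and 40 with small illustrative samples.
print(stratified_edf({"A": [1, 2, 5], "B": [3, 4]}, {"A": 60, "B": 40}, 3))  # 0.6
```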
Proposed Class of Estimators
The precision of an estimator increases when appropriate auxiliary information is used at the estimation stage. In previous studies, the sample distribution function of the auxiliary variable was used to improve the efficiency of existing distribution function estimators. In a recent study, Hussain et al. [30] recommended using the ranks of the auxiliary variable as an additional auxiliary variable to increase the precision of an estimator of the population distribution function. Similarly, we use additional auxiliary information on the sample mean and the sample distribution function of the auxiliary variable, along with the sample distribution function of the study variable, to estimate the finite population CDF.
Using the above idea, along the lines of Shukla et al. [36], we suggest a general class of exponential factor-type estimators which contains many stable and efficient estimators. By combining the ideas of Bahl and Tuteja [34] and Shukla et al. [36], the first estimator is constructed; substituting different values of K_ih (i = 1, 2, 3, 4) in (18) generates many different estimators from our general proposed class, which are listed in Table 1.
Solving U_prop,h given in (28) in terms of the error terms and expanding to the first order of approximation, then taking the square and expectation of (33), we obtain the bias and MSE of the proposed estimator. Differentiating (35) with respect to θ_1h and θ_2h gives their optimum values θ_1h(opt) and θ_2h(opt). Substituting these optimum values into (35) yields the minimum MSE of U_prop, which involves the multiple correlation coefficient of y_h on V_h and X_h. By substituting different values of K_ih in (28), members of the proposed class of estimators are obtained, for example for (K_1h, K_2h) = (1, 2), (2, 1), (3, 1), and (4, 1); the biases and MSEs of these members, U_prop1h through U_prop15h, follow in the same manner.
Empirical Study
In this section, we conduct a numerical study to judge the performance of the existing and proposed DF estimators. For this purpose, two datasets are used. The summary statistics of these datasets are reported in Tables 2 and 3.
Interpretation of Results
As mentioned above, we used two datasets for the numerical illustration. The proposed estimator and the existing estimators were compared with respect to their MSE and PRE values. The PRE results are presented in Tables 4 and 5, and the summary statistics of the populations are shown in Tables 2 and 3. It is further noted that the proposed estimator is more precise than the existing distribution function estimators of Cochran [32], Murthy [33], Rao [37], and Grover and Kaur [38] in terms of MSEs and PREs.
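The PRE values reported in Tables 4 and 5 follow the usual definition relative to a baseline estimator; a short sketch with hypothetical MSE values (any estimator with PRE above 100 outperforms the baseline):

```python
def percent_relative_efficiency(mse_baseline: float, mse_estimator: float) -> float:
    """PRE = 100 * MSE(baseline) / MSE(estimator)."""
    return 100.0 * mse_baseline / mse_estimator

# Hypothetical MSEs for a baseline and a proposed estimator.
print(percent_relative_efficiency(0.0040, 0.0025))  # 160.0
```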
Conclusion
In this paper, we proposed an improved class of estimators of the finite population DF utilizing dual auxiliary variables under the stratified random sampling (StRS) scheme, evaluated on real-life datasets. Bias and MSE expressions for the proposed class of estimators U_prop,h are obtained up to the first order of approximation. Based on the theoretical and numerical results, the proposed class of estimators performs better than the existing estimators considered under stratified random sampling. From these findings, we recommend the use of the proposed estimators for efficient estimation of the population distribution function in the presence of auxiliary information under stratified random sampling.
Data Availability
All the data used for this study can be found inside the manuscript.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 3,070.2 | 2022-05-10T00:00:00.000 | [
"Mathematics"
] |
Sprouty is a cytoplasmic target of adenoviral E1A oncoproteins to regulate the receptor tyrosine kinase signalling pathway
Background Oncoproteins encoded by the early region of adenoviruses have been shown to be powerful tools for studying gene regulatory mechanisms that affect major cellular events such as proliferation, differentiation, apoptosis, and oncogenic transformation. They play a key role in favoring viral replication via their interaction with multiple cellular proteins. In a yeast two-hybrid screen, we identified Sprouty1 (Spry1) as a target of adenoviral E1A oncoproteins. Spry proteins are central and complex regulators of the receptor tyrosine kinase (RTK) signalling pathway. Deregulation of Spry family members is often associated with alterations of RTK signalling and its downstream effectors, leading to the ERK pathway. Results Here, we confirm our yeast two-hybrid data, showing the interaction between Spry1 and E1A in GST pull-down and immunoprecipitation assays. We also demonstrate the interaction of E1A with two further Spry isoforms. Using deletion mutants, we identified the N-terminus and the conserved region (CR) 3 of E1A, and the C-terminal half of Spry1, which contains the highly conserved Spry domain, as the essential sites for the direct interaction between Spry and E1A. Immunofluorescence microscopy revealed co-localization of E1A13S with Spry1 in the cytoplasm. SRE and TRE reporter assays demonstrated that co-expression of Spry1 with E1A13S abolishes the inhibitory function of Spry1 in RTK signalling, which is consequently accompanied by a decrease in E1A13S-induced gene expression. Conclusions These results establish Spry1 as a cytoplasmically localized cellular target of E1A oncoproteins for regulating the RTK signalling pathway and, consequently, cellular events downstream of RTK that are essential for viral replication and transformation.
Background
Proteins encoded by the early transcription unit 1A (E1A) of Adenovirus (Ad) are essential for the viral life cycle because of their necessity in regulating the expression of all other viral genes [1]. In addition these proteins modulate the expression of specific cellular genes in infected cells to facilitate viral reproduction [2,3]. E1A oncoproteins cooperate with Ad early region 1B (E1B) oncogene products to transform rodent cells in culture and, depending on the serotype, to induce tumors in immunocompetent animals (e.g. Ad12) [4][5][6]. Ad12 E1A gives rise to five proteins of which the 266R protein (translated from a 13S mRNA; henceforth referred to as E1A 13S ) and the 235R protein (translated from a 12S mRNA; henceforth referred to E1A 12S ) are the predominant isoforms [2,7]. Both proteins are translated in the same reading frame but differ in a short stretch of 31 aa, called CR3, that is absent in E1A 12S . CR3 is one of four E1A regions (CR1, CR2, CR3 and CR4) that are highly conserved among all Adenovirus serotypes [2]. The N terminus and the CRs of E1A mediate most of the gene regulatory functions necessary for viral reproduction and transformation [8]. Due to the lack of a sequence-specific DNA binding activity, E1A proteins, mainly known as transcription factors, fulfill their gene regulatory functions by interaction with cellular transcription factors such as c-Jun, ATF, CREB, or repressors such as pRB and cellular co-factors like p300 and CBP [9][10][11][12].
The idea that E1A is also capable of exerting its regulatory functions by directly affecting cytoplasmic processes was supported by the discovery that a certain amount of E1A proteins is acetylated at Lys 239 , which determines the cytoplasmic localization of E1A proteins by interfering with the nuclear transport [13]. Until now, a few cytoplasmic localized interaction partners of E1A have been identified including the regulatory subunit II of protein kinase A (PKA-RIIα) [14], the receptor for activated C-kinase l (Rackl) [15,16], and the cytoplasmic proteasome 26S [17].
Sprouty (Spry) proteins have been identified as regulatory proteins of the receptor tyrosine kinase (RTK) signalling pathway [18][19][20][21]. They appear to play an inhibitory role in many cellular events due to their effect on RTK, especially in FGF-dependent developmental processes [22][23][24][25]. First identified in Drosophila [19], Sprouty homologues have been discovered in human and mouse. A high degree of conservation of key functional amino acids has been shown for Spry proteins among different species [20,22,26,27].
A unique and highly conserved C-terminal cysteine-rich Spry domain has been identified in all four mammalian Spry isoforms. The Spry domain is responsible for palmitoylation at the plasma membrane; mutations in this region disrupt membrane localization and abrogate Spry functions [22,28,29]. A conserved short N-terminal tyrosine-containing motif of Spry was found to be critical for its physiological functions in inhibiting FGF signalling and sustaining EGF signalling [30,31]. Different interaction partners have been identified which, upon binding to Spry, influence the RTK signalling pathway, including Raf1, Grb2, c-Cbl and Shp2 [28,[31][32][33]. Deregulation of Sprouty has been described in a number of cancers [34][35][36][37][38].
In a search to identify potential cytoplasmic binding partners of Adenovirus E1A oncoproteins we detected Spry1 as a putative binding partner of E1A. We were able to confirm this interaction in GST pull-down and immunoprecipitation assays. We also demonstrated an interaction of two further Spry isoforms with E1A and characterized the protein domains that are responsible for binding. Using confocal immunofluorescence microscopy, we detected a co-localization of Spry1 and E1A 13S exclusively in the cytoplasm. Co-expression of E1A 13S with Spry1 indicated a functional role for this interaction to modulate RTK signalling pathway and thereby to regulate cellular processes.
Sprouty proteins interact with E1A isoforms
In a previous search for cellular targets of adenoviral proteins using a mouse embryo cDNA expression library (Chevrayx and Nathans, Howard Hughes Medical Institute, Baltimore) and a SOS yeast two-hybrid system, we detected mouse Spry1 as a cytoplasmic interaction partner of Ad12 E1A proteins (data not shown). In order to confirm this observation, we first examined the binding of Spry1 and further Spry proteins to Ad12 E1A isoforms by GST pull-down assays. Mouse Spry1, Spry2 and Spry4 were exogenously expressed in HeLa cells and incubated with E1A13S-, E1A12S- and E1A9.5S-GST-fusion proteins (Figure 1A). Spry1 and Spry4 interact with all three Ad12 E1A isoforms (Figure 1B). For Spry2, we were not able to detect an interaction with E1A12S, and E1A9.5S showed only weak binding. The mutant ΔNE1A12S, an isoform in which we deleted the first 29 amino acids of the N-terminus, showed no interaction with Spry1 and Spry2 (Figure 1B). However, for Spry4 we were able to detect an interaction with ΔNE1A12S, indicating an essential function of the Ad12 E1A N-terminal domain for efficient interaction with Spry1 and Spry2. Although weak, we surprisingly detected an interaction between Spry1 and the analogous amino-terminal deletion mutant of the E1A13S isoform, indicating that the conserved region 3 (CR3) is also involved in binding to Spry1. The domain of E1A that is responsible for the interaction with Spry4, however, remains to be elucidated. To further define the region of Spry1 that is necessary for binding to E1A, we constructed two deletion mutants in which we truncated either the N-terminal (ΔNSpry1) or the C-terminal half of Spry1 (ΔCSpry1) (Figure 1A). Our data showed that the C-terminal half of Spry1, bearing the highly conserved Spry domain, is necessary for the interaction with E1A, whereas the tyrosine-containing sequence of the N-terminal part is not essential for the interaction (Figure 1C). Since the Spred family of proteins are likewise Sprouty-domain-containing proteins with regulatory functions in the RTK signalling pathway [39,40], we studied the interaction of E1A proteins with mouse Spred1 and Spred2 by GST pull-down assays. In our experiments, we were not able to detect any interaction between Spred proteins and E1A isoforms (data not shown). It is possible that conformational changes due to additional binding domains [such as the Ena/Vasp homology (EVH)1 and c-Kit binding (KBD) domains] in Spred proteins prevent E1A from binding to this protein family [20].
To confirm whether the interaction of E1A with Spry occurs in cells we co-expressed Spry1 and Myc-tagged E1A 12S proteins in HeLa cells. Cell lysates were subjected to immunoprecipitation with anti-Spry1 antibody, and the immunoprecipitates were then analyzed for the presence of Myc-tagged E1A 12S. Results from these experiments have confirmed that E1A 12S binds Spry1 in mammalian cells ( Figure 1D).
To verify whether the interaction with Spry proteins is restricted to E1A proteins of the highly oncogenic adenovirus serotype 12 (Ad12), we examined the interaction of the non-oncogenic adenovirus serotype 2 (Ad2) E1A13S protein with Spry1 and were able to detect an interaction (Figure 2A).
Spry1 interacts with the Human papillomavirus type 16 (HPV16) E7 protein
The E7 oncoprotein of the Human Papillomavirus type 16 (HPV16-E7) displays partial amino acid sequence homology, comparable function, and similar interaction partners with adenoviral E1A oncoproteins [41,42]. Therefore, we examined and confirmed the interaction of Spry1 with the HPV16-E7 protein by GST pull-down assay ( Figure 2B). As a positive control we used GST-E1A 13S and as a negative control we used GST in the pull down assay. Our results establish Spry proteins as potential targets of presumably various DNA tumor virus oncoproteins.
Spry1 co-localizes with E1A 13S in the cytoplasm
To determine the subcellular localization of Spry1 in the presence of E1A proteins, we performed confocal immunofluorescence microscopy using antibodies that specifically recognize Spry1 and Myc-tagged E1A 13S . HeLa cells were serum-deprived overnight and treated with bFGF for 2 h. Whereas Spry1 was distributed within the whole cytoplasm, E1A 13S was predominantly found in the nucleus ( Figure 3) with a lesser amount of E1A 13S detectable in the cytoplasm. This cytoplasmic pool of E1A 13S strongly co-localized with Spry1, predominantly in vesicular structures within the cytoplasm ( Figure 3C). In addition, the subcellular localization of Spry1 is not affected by the presence of E1A proteins.
Co-expression of Spry1 decreases E1A13S-induced gene transactivation of TRE and SRE
In transient expression assays we examined the effect of the Spry1-E1A13S interaction on gene expression activity. First, we analyzed this effect on the TPA-responsive element (TRE) in HeLa cells, which is transactivated by E1A and c-Jun [43,44]. Our results showed that expression of the collagenase (Col)-TRE-driven reporter gene is down-regulated after Spry1 expression and up-regulated by ectopic expression of E1A13S, and is strongly up-regulated when E1A13S and c-Jun are co-expressed. The E1A13S-induced gene transactivation was again repressed by co-expression with Spry1 (Figure 4). These data indicate that Spry1 decreases E1A13S-induced gene expression. Because Spry1 is known to act as a repressing factor in RTK signalling pathways, it is conceivable that co-expression of Spry1 represses E1A-induced gene expression by inhibiting the activity of specific kinases involved in transcriptional activation. To gain further insight into how the direct interaction of E1A13S with Spry1 functionally influences gene transactivation, we chose another reporter construct. The serum response element (SRE) is known to be down-regulated by Spry proteins in response to bFGF treatment [23,45]. In our experiments, cells were serum-deprived for 24 h prior to stimulation with 10% FCS or 20 ng/ml bFGF for 1, 5, 7 and 9 hours. In HeLa cells, we detected a 2- to 3-fold increase in reporter gene activation in response to E1A13S expression after 5-9 h of incubation with FCS, whereas in C33A and NIH-3T3 cell lines an up-regulation of promoter activity by E1A13S was already detectable after 1 h of FCS stimulation (Figure 5). Similar results were obtained using bFGF instead of FCS to induce the RTK signalling pathway after serum deprivation (Figure 6A). In cells that express both proteins, E1A13S abolishes the repressive function of Spry1, while the E1A13S-induced gene expression activity is in turn decreased by up to more than 50%. Concentration-dependent reporter assays showed a decrease of E1A13S promoter activation in response to increasing Spry1 co-expression, whereas the ability of Spry1 to reduce gene expression activity is abolished in response to increasing E1A13S co-expression (Figure 7). In co-expression experiments in which we expressed E1A12S instead of E1A13S, however, we could detect neither an increase in SRE promoter activity nor, consequently, a significant change after co-expression of Spry1 (data not shown). Using the E1A13S N-terminal deletion mutant (ΔNE1A13S) in such transient expression assays, we were unable to detect a decrease of ΔNE1A13S-induced gene expression in response to Spry1, in contrast to studies using wild-type E1A13S (Figure 6). Also, co-expression of ΔNE1A13S with Spry1 after 7 h and 9 h of stimulation showed slightly higher SRE-dependent gene expression compared with cells expressing ΔNE1A13S alone. Our GST pull-down data showed that Spry1 interacts only weakly with ΔNE1A13S, suggesting that the stronger interaction with Spry1 mediated by the N-terminus of E1A might be necessary for the inhibitory effect of Spry1 on E1A13S activity. Moreover, it is worth noting that these data also show that SRE can be activated by E1A13S independently of its N-terminal amino acids.
Co-expression of Spry1 and E1A13S specifically impairs phosphorylation of ERK1/2 MAP kinase
It has been reported that Spry1 inhibits the Ras/Raf/MAP kinase pathway [23]. To examine whether the SRE-dependent reporter gene expression is affected by the ERK1/2 MAP kinase pathway, we analyzed the phosphorylation of ERK1/2 in HeLa cells after expression of Spry1 and E1A13S or ΔNE1A13S. A decrease in phosphorylation of ERK1/2, visualized with phospho-specific antibodies, was detected in cells expressing Spry1 compared with cells transfected with an empty expression plasmid. Expression of E1A13S in these cells led to increased phosphorylation of ERK1/2 after 1 h of bFGF treatment, whereas the addition of Spry1 inhibited ERK1/2 phosphorylation (Figure 6C). For comparison, we assayed the influence of ΔNE1A13S on ERK1/2 phosphorylation in the presence of Spry1. As expected, we were not able to detect an inhibition of ERK1/2 phosphorylation/activation after co-expression of Spry1 and ΔNE1A13S (Figure 6C). Consistent with the SRE reporter assays, a slight increase in phosphorylation was detectable in cells expressing Spry1 and ΔNE1A13S compared to cells expressing ΔNE1A13S alone. To summarize, these results support our hypothesis of a functional interaction between E1A oncoproteins and Spry1 in the cytoplasm to modulate the Ras/Raf/MAP kinase pathway.
Discussion
E1A oncoproteins play a key role in adenoviral replication. Their specific interaction with cellular proteins induces viral and cellular gene expression, which drives the host cell to enter S phase and thereby enables the virus life cycle to continue [46]. E1A binding partners are therefore specific targets that enable the virus to modulate the cell cycle. In this study, we identified Spry proteins as cytoplasmic interaction partners of adenoviral E1A proteins. Since Spry proteins are known as "regulators" of the RTK signalling pathway, we studied and demonstrated the ability of E1A to modulate the RTK signalling pathway through specific interaction with Spry1.
The mammalian Spry family consists of four Spry proteins. In our GST pull-down assays we showed differences in the binding affinity of E1A isoforms to specific Spry family members (Spry1, Spry2, Spry4), which might reflect different interaction mechanisms and potential differences in the functions of the Spry isoforms. Here we observed strong binding of Ad12 E1A proteins to Spry1 and Spry2, mediated via the amino-terminal E1A sequence, and furthermore that CR3 is responsible for a weaker interaction with Spry1. Interestingly, the less conserved N-terminal sequence of the Ad12 E1A proteins is also responsible for the interaction with two further cytoplasmic proteins, the 26S proteasome and PKA-RIIα [14,17]. However, the amino acids involved in the interaction of these proteins are still unknown. Moreover, we identified the C-terminal half of Spry1, including the Spry domain, as the region responsible for the E1A interaction. However, the Spry-domain-containing Spred proteins showed no interaction with E1A. It remains to be clarified whether amino acids neighbouring the Spry domain or variable amino acids within the conserved Spry domain mediate the interaction with E1A. Conformational differences between Spry and Spred proteins due to the Ena/Vasp homology (EVH)1 and c-Kit binding (KBD) domains, which are missing in Spry, may prevent E1A from binding [20]. We detected Spry1 accumulated and associated with E1A13S in vesicular structures throughout the cytoplasm. A localization of Spry proteins in vesicular structures has been reported before [22,47]. Palmitoylation targets Spry to the plasma membrane, which has been shown to be a necessary step for the inhibitory function of Spry in the RTK signalling pathway [22,29].
Using TRE or SRE luciferase reporter assays we analyzed the functional consequences on gene expression activity by E1A 13S and Spry1. The decrease of reporter gene expression by Spry1 was abolished when co-expressed with E1A 13S . Expression of constant amounts of Spry1 and increasing amounts of E1A 13S proteins showed that Spry1 proteins are functionally inactivated by E1A 13S and vice versa. A functional repression of Spry1 would lead to an increasing activity of the RTK signalling pathway and to an increasing amount of phosphorylated transcription factors which could therefore enhance E1A-induced gene expression. This observation was supported by our experiments analyzing the phosphorylation of ERK. Overexpression of Spry1 decreased E1A 13S -induced ERK-phosphorylation in comparison to the expression of E1A alone.
The up-or down-regulation of Spry has been described in different cancers [48], indicating the necessity of a balanced function of Spry proteins. Our data indicate that Spry1 is an important target of E1A proteins in the cytoplasm to modulate the RTK signalling pathway to influence cellular processes for optimizing viral replication.
Using the amino-terminal deletion mutant of E1A13S, we obtained unexpected data. ΔNE1A13S displays only weak binding to Spry1 via CR3 in GST pull-down assays (Figure 1B) and can still increase reporter gene expression, whereas in co-expression with Spry1 no significant repression was detectable (Figure 6B). It is conceivable that the interaction mediated exclusively via CR3 has a different effect on Spry1 function than the combined interaction via the N-terminus and the CR3 domain of E1A13S. These results were also supported by our ERK phosphorylation studies, indicating an important N-terminus-dependent function of Ad12 E1A proteins in the interaction with Spry1. Instead of acting exclusively as inhibitors of signal transduction, Spry proteins can also be involved in sustaining signal activity; this function depends on the activity of binding partners such as c-Cbl [31,49,50]. The transcriptional activity of ΔNE1A13S and the interaction of Spry1 exclusively with the CR3 of E1A13S might therefore influence Spry1 function differently, which would explain our results with the E1A deletion mutant. Further studies are necessary to understand the mechanism of interaction between E1A and Sprouty proteins in detail.
Conclusion
In conclusion, our results show for the first time that Spry proteins are targets of adenoviral E1A oncoproteins, which enables the virus to modulate RTK signalling, leading to the ERK pathway, and to control, in addition to its transcriptional functions, cellular processes such as proliferation, differentiation and apoptosis. The fact that Spry1 interacts with E7 of HPV16 suggests that this might be a more general strategy of DNA viruses for modulating RTK signalling pathways. Over the past few years, increasing evidence has implicated Spry in tumorigenesis and cancer [34,35,38]. Our identification and analysis of the functional interaction between the viral oncoprotein E1A and Spry support the idea that Spry is an important factor in tumorigenesis.
Cells, Growth Factors, Transfection Methods
HeLa, C33A and NIH-3T3 cells were cultured in Dulbecco's modified Eagle's medium with 10% fetal calf serum (FCS). For GST pull-down assay mouse Spry1, Spry2 and Spry4 were transfected by electroporation with the ECM830 electroporator. Cells were transfected by Trans-Fectin Lipid reagent (Biorad) for co-immunoprecipitation and luciferase assays. For growth factor stimulation, cells were washed and maintained in serum-reduced medium (Dulbecco's modified Eagle's medium with 0.5% newborn calf serum) for 24 h prior to fetal calf serum/ bFGF (Invitrogen) treatment. Cells were harvested after several hours as indicated.
GST pull-down assay
Glutathione Sepharose 4B was purchased from Amersham Bioscience. For preclearing, 0.5 mg of cellular lysate were incubated with 50 μl of Glutathion Sepharose (50%) for 1 h at 4°C. Subsequently, after washing by centrifugation at 500 × g for 5 min, supernatants were incubated with 40 μg of GST-fusion protein for 1 h at 4°C. The proteins bound were subjected to SDS-PAGE and immunoblot analysis was performed as described above.
Co-Immunoprecipitation
Cells were lysed with RIPA buffer (Santa Cruz) and precleared with control IgG (Santa Cruz) and 20 μl of Protein A/G Plus-agarose (Santa Cruz). 0.5 mg of the cell lysates were incubated with 1.6 μg of the precipitating antibody for 1.5 h at 4°C while gentle rocking. 20 μl of Protein A/G Plus-agarose were added for overnight incubation. The beads were collected by centrifugation, washed 3 times with 1 ml of lysis buffer, and boiled in 40 μl 2 × SDS sample buffer. The immunoprecipitates were fractioned by SDS-PAGE and analyzed by immunobloting as described above.
Immunofluorescence
HeLa cells (0.25 × 10 5 ) were seeded onto sterilized glass coverslips contained in 24-well plates. After transfection, cells were maintained in serum-reduced medium overnight and stimulated with 20 ng/ml bFGF for various times. The cells were rinsed with PBS, fixed with 3% paraformaldehyde in PBS for 15 min at room temperature, permeabilized with 0.1% Triton X-100 for 4 min at room temperature, and washed with PBS. After blocking with 1% BSA/PBS for 30 min, cells were incubated with the primary antibody (mouse monoclonal anti-Myc (Invitrogen); rabbit polyclonal anti-Sprouty 1 (H120) (Santa Cruz)) for 1 h at room temperature. After washing with PBS, cells were incubated with the secondary antibody (Alexa Fluor 488 goat anti-rabbit IgG (Invitrogen); Cy3-conjugated goat anti-mouse IgG (dianova)) for 1 h at room temperature. After the final wash, each coverslip was prepared for microscopic examination by applying mounting medium (Mowiol, Hoechst AG).
Luciferase Assay
Cells were transfected by TransFectin Lipid reagent (Biorad) and Luciferase activity in cell lysates was measured by using the Promega-Luciferase assay system in a Berthold Lumat LB 9501 luminometer. In all reporter assays, 2.5 × 10 5 HeLa or C33A cells or 1.8 × 10 5 NIH-3T3 cells were plated on 6-well dishes.
Statistics
All measured values are expressed as the mean ± S.E.M. The significance of the results was analyzed using Student's t-test. | 5,233.2 | 2011-04-26T00:00:00.000 | [
"Biology"
] |
Energy Loss of a Heavy Particle near 3D Charged Rotating Hairy Black Hole
In this paper we consider a charged rotating black hole in 3 dimensions with a scalar charge and discuss the energy loss of a heavy particle moving near the black hole horizon. We also study quasi-normal modes and find dispersion relations. We find that the effect of the scalar charge and the electric charge is to increase the energy loss.
Introduction
Lower-dimensional theories may be used as toy models to study fundamental ideas, because they are easier to analyze, and they lead to a better understanding of higher-dimensional theories [1]. Moreover, they are useful for applications of the AdS/CFT correspondence [2][3][4][5]. This paper is indeed an application of the AdS/CFT correspondence, probing a moving charged particle near the three-dimensional black holes recently introduced in Refs. [6] and [7], where a charged black hole with a scalar hair in (2+1) dimensions and a rotating hairy black hole in (2+1) dimensions were constructed, respectively. Here we are interested in the case of a rotating black hole with a scalar hair in (2+1) dimensions. Recently, a charged rotating hairy black hole in 3 dimensions, valid for infinitesimal black hole parameters, was constructed [8]; it will be used in this paper. The thermodynamics of such systems has also been studied recently in Refs. [9] and [10]. In this work we study the motion of a heavy charged particle near the black hole horizon and calculate the energy loss. The energy loss of a heavy charged particle moving through a thermal medium is known as the drag force. One can consider a heavy particle (such as a charm or bottom quark) moving near the black hole horizon with momentum P, mass m, and constant velocity v, influenced by an external force F. The equation of motion can then be written as Ṗ = F − ζP, where P = mv for non-relativistic motion and P = mv/√(1 − v²) for relativistic motion, and ζ is called the friction coefficient. In order to obtain the drag force, one can consider two special cases. The first case is constant momentum, which for the non-relativistic case yields F = (ζm)v; in this case the drag force coefficient (ζm) is obtained. In the second case, the external force is zero, so one finds P(t) = P(0) exp(−ζt). In other words, by measuring the ratio Ṗ/P or v̇/v one can determine the friction coefficient ζ without any dependence on the mass m. These methods allow us to obtain the drag force on a moving heavy particle. A moving heavy particle in the context of QCD has a dual picture in string theory, in which an open string is attached to a D-brane and stretches down to the black hole horizon. Similar studies have already been performed in several backgrounds [11][12][13][14][15][16][17][18][19][20][21][22]. Now we consider the same problem in a charged rotating hairy 3D background. Our motivation for this study is the AdS3/CFT2 correspondence [23][24][25]. This paper is organized as follows. In the next section we review the charged rotating hairy black hole in (2+1) dimensions. In section 3 we obtain the equations of motion, and in section 4 we obtain the solution and discuss the energy loss. In section 5 we give a linear analysis and discuss quasi-normal modes and dispersion relations. Finally, in section 6 we summarize our results and conclude.
Charged rotating hairy black hole in (2+1) dimensions
The (2+1)-dimensional gravity with a non-minimally coupled scalar field is described by the corresponding action, where ξ is a coupling constant between gravity and the scalar field, fixed as ξ = 1/8, and V(φ) is the self-coupling potential. The metric background is given by Ref. [7] and, following Ref. [8], involves the infinitesimal electric charge Q, the infinitesimal rotation parameter a, and the AdS radius l, related to the cosmological constant by Λ = −1/l². Also, β is an integration constant that depends on the black hole charge and mass, and the scalar charge B is related to the scalar field. The rotational frequency is obtained accordingly. One can also compute the Ricci scalar, which is singular at r = 0. Finally, Ref. [8] gives the relation determining the black hole horizon radius r_h, and the black hole temperature and entropy follow from the corresponding relations.
The equations of motion
A heavy particle moving near the black hole may be described by the Nambu-Goto action, where T_0 = 1/(2πα′) is the string tension. The coordinates τ and σ parameterize the string world-sheet, and G_ab is the induced metric on the string world-sheet, with determinant G obtained in the static gauge τ = t, σ = r, in which the string extends only in one direction x(r, t). The equation of motion then follows, and we obtain the canonical momentum densities associated with the string. The simplest solution of the equation of motion is a static string described by x = const., with total energy of the given form, where r_m is the arbitrary location of the D-brane. As expected, the energy of the static particle is interpreted as its rest mass.
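The displayed formulas for the action and the induced metric did not survive extraction; for readability, the generic definitions they refer to are reproduced below in LaTeX (these are the standard Nambu-Goto expressions, not the explicit determinant for this particular charged rotating background):

```latex
S_{NG} = -T_0 \int d\tau\, d\sigma\, \sqrt{-G}, \qquad
T_0 = \frac{1}{2\pi\alpha'}, \qquad
G_{ab} = g_{\mu\nu}\, \partial_a X^{\mu}\, \partial_b X^{\nu},
```

so that in the static gauge τ = t, σ = r with embedding x = x(r, t) the world-sheet determinant is G = det G_ab = G_tt G_rr − G_tr².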
Time dependent solution
In the general case, we can assume that the particle moves with constant speed ẋ = v; in that case the equation of motion (13) reduces to a simpler form, and equation (16) gives an expression in which C is an integration constant determined by the reality condition of √−G. We thereby obtain the canonical momentum densities, which give the energy and momentum lost through an endpoint of the string. As mentioned before, the reality condition of √−G fixes the constant C: the expression √−G is real for r = r_c > r_h. In the case of small v one can expand these relations and write the drag force. In Figure 1 we show the behavior of the drag force with the black hole parameters. We plot the drag force in terms of the velocity and, as expected, the value of the drag force increases with v. Figures 1(a) and (b) show that the black hole electric charge as well as the scalar charge increase the value of the drag force. We find a lower limit for the black hole charge, for example Q ≥ 1.4 corresponding to M = a = B = 1. In this case we find that slow rotational motion has only an infinitesimal effect on the drag force, which may be neglected.
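The energy and momentum carried off through the string endpoint, referred to above, follow the standard drag-force bookkeeping; the overall signs depend on the conventions of the reference, so only the schematic form is reproduced here:

```latex
\frac{dE}{dt} \;=\; -\,\pi^{r}_{t}\,, \qquad
\frac{dp_x}{dt} \;=\; -\,\pi^{r}_{x}\,,
```

so that for motion at constant velocity the drag force equals the momentum flux flowing down the string.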
Linear analysis
Because of the drag force, the string motion reduces to small perturbations at late times. In that case the speed of the particle is infinitesimal and one can write $G \approx -1$. We also assume $x = e^{-\mu t}$, where $\mu$ is the friction coefficient, and rewrite the equation of motion in terms of $f(r)$. We assume out-going boundary conditions near the black hole horizon and use a near-horizon approximation, which suggests solutions involving the black hole temperature $T$. In the case of infinitesimal $\mu$ we can use a perturbative expansion; inserting it into (25) gives $x_{0} = \mathrm{const.}$ and a first-order correction containing a constant $A$. Taking the near-horizon limit gives the corresponding solution, and comparing (26) and (28) yields the quasi-normal mode condition. It is interesting to note that these results recover the drag force (23) for infinitesimal speed. In Fig. 2 we show the behavior of $\mu$ with the black hole parameters; we find that the black hole charges increase the friction coefficient.
Low mass limit
The low mass limit means that $r_{m} \to r_{h}$. Using the corresponding near-horizon assumptions together with relation (24), we can determine the constant $A$. This shows that $\mu = 2\pi T$ leads to a divergence; we therefore refer to it as the critical behavior of the friction coefficient, illustrated in Fig. 3.
Dispersion relations
Here we would like to obtain the relation between the total energy $E$ and the momentum $P$ in the slow velocity limit. In that case we can compute the total momentum, where we use $r_{\min} > r_{h}$ as an IR cutoff to avoid the divergence. In a similar way we can compute the other momentum density and evaluate the total energy, using the equation of motion and the boundary condition $x'(r_{m}) = 0$. Assuming a near-horizon solution and combining equations (36) and (38) gives a relation in which $M_{\mathrm{rest}}$ is given by equation (15) with the replacement $r_{h} \to r_{\min}$. This is the usual non-relativistic dispersion relation for a point particle in which the rest mass differs from the kinetic mass. In Fig. 4 we plot the rescaled $\eta \equiv 2\pi\alpha'\mu$ in terms of the kinetic mass and show that the black hole charges increase $\eta$, as expected.
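The "non-relativistic dispersion relation with distinct rest and kinetic masses" mentioned above conventionally has the form below; the explicit expressions for the two masses in this background did not survive extraction, so only the generic structure is shown:

$$
E \simeq M_{\mathrm{rest}} + \frac{P^{2}}{2M_{\mathrm{kin}}}, \qquad M_{\mathrm{kin}} \neq M_{\mathrm{rest}} \ \text{in general}.
$$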
Conclusions
In this work we considered the recently constructed charged rotating black hole in three dimensions with a scalar charge and calculated the energy loss of a heavy particle moving near the black hole horizon. First, the important properties of the background were reviewed, and then the appropriate equations were obtained. Motivated by the AdS/CFT correspondence, we used string theory methods to study the motion of the particle; this is indeed in the context of AdS$_3$/CFT$_2$, where the drag force on a moving heavy particle is calculated. We found that the black hole charges, both electric and scalar, increase the drag force, while an infinitesimal rotation parameter has no important effect and may be neglected. We also discussed quasi-normal modes, obtained the friction coefficient, and found that the black hole charges increase the friction coefficient, consistent with the increase of the drag force. Finally, we found the dispersion relation relating the total energy and momentum of the particle. | 2,328.2 | 2014-01-17T00:00:00.000 | [
"Physics"
] |
Machine Translation with parfda, Moses, kenlm, nplm, and PRO
We build parfda Moses statistical machine translation (SMT) models for most language pairs in the news translation task. We experiment with a hybrid approach using neural language models integrated into Moses. We obtain the constrained data statistics on the machine translation task, the coverage of the test sets, and the upper bounds on the translation results. We also contribute a new testsuite for the German-English language pair and a new automated key phrase extraction technique for the evaluation of the testsuite translations.
Introduction
Parallel feature weight decay algorithms (parfda) (Biçici, 2018) is an instance selection tool we use to select training and language model instances to build Moses (Koehn et al., 2007) phrase-based machine translation (MT) systems to translate the test sets in the news translation task at WMT19 (Bojar et al., 2019). The importance of parfda increases with the growing size of the parallel and monolingual data available for building SMT systems. In light of last year's evidence that parfda phrase-based SMT can obtain the 2nd best results on a testsuite in the English-Turkish language pair (Biçici, 2018) when generating the translations of key phrases that are important for conveying the meaning, we obtain phrase-based Moses results and extend them with a neural LM in addition to the n-gram LM that we use. We experiment with the neural probabilistic LM (NPLM) (Vaswani et al., 2013). We record the statistics of the data and the resources used.
Our contributions are:
• a test suite for machine translation that is out of the domain of the news task, which allows a closer look at the current status of the SMT technology used by the task participants when translating 38 sentences about international relations concerning cultural artifacts,
• parfda Moses phrase-based MT results and data statistics for the following translation directions: English-Czech (en-cs), English-Finnish (en-fi), Finnish-English (fi-en), English-German (en-de), German-English (de-en), English-Kazakh (en-kk), Kazakh-English (kk-en), English-Lithuanian (en-lt), Lithuanian-English (lt-en), English-Russian (en-ru), Russian-English (ru-en),
• upper bounds on the translation performance using lowercased coverage to identify which models used data in addition to the parallel corpus.
The sections that follow discuss the instance selection model (Section 2), the machine translation model (Section 3), the testsuite used for evaluating MT in en-de and de-en, and the results.

Table 1: Statistics for the training and LM corpora in the constrained (C) setting compared with the parfda selected data. #words is in millions (M) and #sents in thousands (K). tcov is target 2-gram coverage.

Table 2: Constrained training data lowercased source feature coverage (scov) and target feature coverage (tcov) of the test set for n-grams.
2 Instance Selection with parfda

parfda parallelizes feature decay algorithms (FDA) (Biçici and Yuret, 2015), a class of instance selection algorithms that decay feature weights, for fast deployment of accurate SMT systems. Figure 1 depicts the parfda Moses SMT workflow.
We use the test set source sentences to select the training data and the target side of the selected training data to select the LM data. We decay the weights for both the source features of the test set and the target features that we have already selected, to increase diversity. We select about 2.2 million instances for the training data and about 12 million sentences for each LM data set, not including the selected training set, which is added later. Table 1 shows the size differences with respect to the constrained dataset (C).¹ We use 3-grams to select the training data and 2-grams for the LM data, and split hyphenated words using the "-a" option of the tokenizer used in Moses (Sennrich et al., 2017). tcov lists the target coverage in terms of the 2-grams of the test set. The maximum sentence length is set to 126. Table 2 lists the lowercased coverage of the test set by the constrained training data of WMT19.

¹ Available at https://github.com/bicici/parfdaWMT2019
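To make the feature-decay idea concrete, here is a simplified, hypothetical sketch (our own naming and scoring, not the actual parfda implementation, which is parallelized and far more refined):

```python
from collections import Counter

def ngrams(tokens, max_n=3):
    """All n-grams up to length max_n in a token list."""
    return [tuple(tokens[i:i + n]) for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

def fda_select(test_sents, train_sents, budget, max_n=3, decay=0.5):
    """Greedy feature-decay instance selection (simplified sketch).

    Features are test-set n-grams; every time a selected sentence covers a
    feature, that feature's weight is multiplied by `decay`, pushing later
    selections toward still-uncovered features (diversity).
    """
    weights = Counter()
    for sent in test_sents:
        for feat in ngrams(sent.split(), max_n):
            weights[feat] += 1.0

    remaining = list(train_sents)
    selected = []
    for _ in range(min(budget, len(remaining))):
        best_i, best_score = 0, float("-inf")
        for i, sent in enumerate(remaining):
            toks = sent.split()
            feats = set(ngrams(toks, max_n)) & weights.keys()
            score = sum(weights[f] for f in feats) / (len(toks) + 1)
            if score > best_score:
                best_i, best_score = i, score
        chosen = remaining.pop(best_i)
        selected.append(chosen)
        for feat in set(ngrams(chosen.split(), max_n)):
            if feat in weights:
                weights[feat] *= decay  # decay covered features
    return selected
```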
Machine Translation with Moses, kenlm and nplm, and PRO
We train a 6-gram LM using kenlm (Heafield et al., 2013). For word alignment, we use mgiza (Gao and Vogel, 2008), where the GIZA++ (Och and Ney, 2003) parameters set max-fertility to 10 and the number of iterations to 7,5,5,5,7 for IBM models 1, 2, 3, 4 and the HMM model, and 50 word classes are learned in three iterations with the mkcls tool during training. We use the "-mbr" option when decoding the test set.³ The development set [...] (Biçici, 2018). This allows us to find parameters whose tuning score reaches within 1% of the best tuning parameter set score in only 4 iterations, but we still run tuning for 21 iterations. Truecasing updates the casing of words according to their most common form. We truecase the text before building the SMT model as well as after decoding, and then detruecase before preparing the translation, which provided better results than simply detruecasing after decoding (Biçici, 2018). We trained the nplm LM for 10 epochs. We also experimented with bilingual nplm, which uses nplm in a bilingual setting to exploit both the source and the target context and builds a LM on the training set (Devlin et al., 2014). Both nplm and bilingual nplm can be used with Moses as a feature within its configuration file.⁴ On average, the results in Table 3 show that using only nplm decreases the scores and that improvements are obtained when both nplm and kenlm are used. However, the gain from splitting hyphenated words is larger, and it is a less computationally demanding option. kenlm takes about 20 minutes, whereas building a single nplm model took us 11.5 to 14.25 days, or about 1000 times longer, and it takes about 56 GB of disk space.
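For readers unfamiliar with kenlm, querying a trained n-gram model through its Python bindings looks roughly like this (a minimal sketch; the model path is hypothetical, and training itself is done with the lmplz and build_binary command-line tools):

```python
import kenlm

# Load a binarized (or ARPA) 6-gram model trained with lmplz.
model = kenlm.Model("lm/en.news.6gram.binary")  # hypothetical path

sentence = "parliament approved the new budget"
# Total log10 probability with begin/end-of-sentence markers.
logprob = model.score(sentence, bos=True, eos=True)
ppl = model.perplexity(sentence)
print(f"log10 p = {logprob:.2f}, perplexity = {ppl:.1f}")

# Per-n-gram breakdown: (log10 prob, n-gram length used, is OOV).
for prob, length, oov in model.full_scores(sentence):
    print(prob, length, oov)
```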
Translation Upper Bounds with tcov
We obtain upper bounds on the translation performance based on the target coverage (tcov) of the n-grams of the test set found in the selected parfda training data, using lowercased text. For a given sentence T, the number of OOV tokens is identified, where |T| is the number of tokens in the sentence. We obtain each bound using 500 such instances and repeat this 10 times. The tcov BLEU bound is optimistic since it does not consider reorderings in the translation or differences in sentence length. Each plot in Figure 2 locates the tcov BLEU bound obtained from each n-gram and from n-gram tcovs combined up to and including n, together with the parfda result and the top constrained result. Based on the distance between the top BLEU result and the bound, we can obtain a ranking of the difficulty of the translation directions in Table 5.
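The exact bound formula is not reproduced above, so the following is only a rough, hypothetical sketch of the underlying coverage statistics (lowercased n-gram coverage and per-sentence OOV rate); it is not the authors' code:

```python
from collections import Counter

def ngram_set(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def tcov(test_sents, train_sents, n=2):
    """Fraction of test-set n-gram occurrences found in the training data."""
    train_ngrams = set()
    for sent in train_sents:
        train_ngrams |= ngram_set(sent.lower().split(), n)
    test_ngrams = Counter()
    for sent in test_sents:
        test_ngrams.update(ngram_set(sent.lower().split(), n))
    covered = sum(c for g, c in test_ngrams.items() if g in train_ngrams)
    total = sum(test_ngrams.values())
    return covered / total if total else 0.0

def oov_rate(sentence, train_vocab):
    """Share of tokens in a test sentence outside the training vocabulary."""
    toks = sentence.lower().split()
    return sum(t not in train_vocab for t in toks) / len(toks)
```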
German-English Testsuite
We prepared an MT test suite that is out of the domain of the news translation task to take a closer look at the current state of the technology; the testsuite translations are evaluated in Table 10 in terms of BLEU (Papineni et al., 2002) and F1 (Biçici, 2011) scores. However, such automatic evaluation metrics treat the features or n-grams equivalently, or group them based on their length, without knowledge about their frequency in use or their significance in conveying the meaning. Word order within a sentence does not contain the majority of the information (Landauer, 2002) for vocabulary size |V| ≥ n, where n is the average sentence length. For n = 25 words with |V| = 10^5, equivalent representations using n = 10 phrases with |V| = 10^7, n = 50 BPE tokens with |V| = 10^4, or n = 125 characters with |V| = 25 have differing contributions to the information of the sentence, in bits, from token order or choice (Table 6). If we use keyword subsequences for F1-based evaluation, we would cover about 91% of the information in a sentence. Key phrase identification is important since, when scores are averaged, an important phrase that is missing only decreases the score by 1/(|p| N_{|p|}) in the BLEU calculation, for a phrase of length |p| over N_{|p|} phrases of length |p|. We extend our evaluation of the testsuite translations using keywords (Biçici, 2018).
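One way to make the order-versus-choice comparison concrete (our own illustrative arithmetic, not necessarily the exact decomposition behind Table 6): for a sequence of n tokens over a vocabulary of size |V|, token choice carries about $n\log_{2}|V|$ bits while token order carries at most $\log_{2}(n!)$ bits, so for the first configuration above

$$
n\log_{2}|V| = 25\,\log_{2}10^{5} \approx 415 \ \text{bits},
\qquad
\log_{2}(25!) \approx 84 \ \text{bits},
$$

i.e., order accounts for only roughly $84/(415+84) \approx 17\%$ of the total in this reading.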
We automate key phrase identification within a reference set of N sentences by selecting, among the N_X candidate n-grams, phrases that:
• are representative and few,
• cover a significant portion of the text,
• are frequent (X_c denotes the counts of phrases),
• are less likely to be found (X_p denotes the probability of phrases),
and we formulate the task as a linear program in Table 7, minimizing an objective of the form min_X (α X_p · X_l · 1/(β X_c) + 1/N_X) over the selection X. We use up to 6-grams and set the minimum coverage of each sentence to 0.5. We removed some stop words from the phrases: 'of', 'the', 'and', 'of the', 'a', 'an', replaced those parts with '.*?', and obtained regular expressions. The key phrases we obtain are listed in Table 9. The key phrases are used for evaluation with the F1 score (Table 10). We plan to extend this work towards more objective key phrase evaluation methods.
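A rough, hypothetical sketch of the selection idea follows: a greedy simplification of the linear program in Table 7, with our own scoring and names; weights, probability estimates, and the coverage rule are illustrative only.

```python
import re
from collections import Counter

def candidate_ngrams(sentences, max_n=6):
    counts = Counter()
    for sent in sentences:
        toks = sent.split()
        for n in range(1, max_n + 1):
            for i in range(len(toks) - n + 1):
                counts[" ".join(toks[i:i + n])] += 1
    return counts

def select_key_phrases(sentences, max_n=6, min_cov=0.5, alpha=1.0, beta=1.0):
    """Greedy stand-in for the LP: prefer long, frequent, low-probability
    phrases until at least min_cov of each sentence's tokens is covered."""
    counts = candidate_ngrams(sentences, max_n)
    unigram = Counter()
    for sent in sentences:
        unigram.update(sent.split())
    n_tokens = sum(unigram.values())

    def phrase_prob(phrase):
        # Crude independence estimate; a background LM would be used in practice.
        p = 1.0
        for w in phrase.split():
            p *= unigram[w] / n_tokens
        return p

    def cost(phrase):
        x_p = phrase_prob(phrase)     # less likely phrases -> smaller cost
        x_l = len(phrase.split())     # longer phrases preferred
        x_c = counts[phrase]          # frequent phrases preferred
        return alpha * x_p / (x_l * beta * x_c)

    ranked = sorted(counts, key=cost)
    selected = []
    for sent in sentences:
        need = int(min_cov * len(sent.split()))
        covered = set()
        for phrase in ranked:
            if len(covered) >= need:
                break
            if phrase in sent:
                selected.append(phrase)
                covered.update(phrase.split())

    # Replace a few stop words with '.*?' to obtain regular expressions;
    # phrases are assumed to be plain word sequences, so no escaping is done.
    stop = r"\b(?:of the|of|the|and|a|an)\b"
    unique = list(dict.fromkeys(selected))
    return [re.sub(stop, ".*?", p) for p in unique]
```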
Conclusion
We use parfda for building task-specific MT systems that use less computation overall, and we release our engineered data for training MT systems. We also contribute a new testsuite for the German-English language pair. | 2,126.6 | 2019-01-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Ginseng® Alleviates Malathion-Induced Hepatorenal Injury through Modulation of the Biochemical, Antioxidant, Anti-Apoptotic, and Anti-Inflammatory Markers in Male Rats
This study aims to see if Ginseng® can reduce the hepatorenal damage caused by malathion. Forty male Wistar albino rats were divided into four groups. Group 1 was a control group that was orally supplied corn oil (vehicle). Group 2 was intoxicated orally with malathion dissolved in corn oil at 135 mg/kg/day. Group 3 orally received both malathion + Panax Ginseng® (300 mg/kg/day). Group 4 was orally given Panax Ginseng® at a 300 mg/kg/day dose. Treatments were administered daily and continued for up to 30 consecutive days. Malathion's toxic effect on both hepatic and renal tissues was revealed by a considerable loss in body weight and, biochemically, by a marked increase in liver enzymes, LDH, ACP, cholesterol, and functional renal markers, with a marked decrease in serum TP, albumin, and TG levels and decreased AchE and paraoxonase activity. Additionally, increased malondialdehyde, nitric oxide (nitrite), 8-hydroxy-2-deoxyguanosine, and TNFα, with a significant drop in antioxidant activities, were reported in the malathion group. Malathion upregulated the inflammatory cytokines and apoptotic genes, while Nrf2, Bcl2, and HO-1 were downregulated. Ginseng® and malathion co-treatment reduced malathion's harmful effects by restoring metabolic indicators, enhancing antioxidant activity, lowering the inflammatory reaction, and alleviating pathological alterations. So, Ginseng® may have protective effects against hepatic and renal malathion-induced toxicity at the biochemical, antioxidant, molecular, and cellular levels.
Introduction
Malathion, one of the first organophosphate pesticides, is still widely used in Egypt, particularly in agriculture. In acute malathion toxicity, the nervous system is the primary target, and poisoning is characterized by overstimulation of the cholinergic pathways owing to inhibition of the acetylcholinesterase (AChE) and butyrylcholinesterase (BChE) enzymes [1]. Even at low dosages, malathion exposure frequently results in severe liver and kidney damage in laboratory animals [2,3].
Malathion's widespread usage contributes to environmental contamination and raises the risk of exposure [4,5]. Excessive exposure can result in acute or chronic poisoning, particularly in underdeveloped countries [6]. Malathion is considered a moderately toxic insecticide by the World Health Organization (WHO) [7]. For a prescribed dose of 2 g/m², with a residual effect of two to three months, the WHO's maximum acceptable intake is 0.02 mg/kg/day [8]. More than 30 million pounds of malathion are used each year, according to the Environmental Protection Agency (EPA). Due to malathion's lipophilicity, it is rapidly absorbed and dispersed throughout the body, resulting in various diseases [9]. Malathion-induced hyperglycemia can potentially be explained by its damaging inflammatory effects on the liver [10]. Excessive oxidative damage has been found in human cells exposed to malathion, increasing the creation of reactive oxygen species [11][12][13]. It alters tissue antioxidant endogenous enzymatic activities and non-enzymatic levels [14]. It can therefore lead to mitochondrial malfunction, DNA breakage, and apoptosis [4]. It was found that malathion can cause liver tissue damage and hepatocellular injury [15].
For centuries, Panax Ginseng ® (Araliaceae) has been utilized in East Asia as a food supplement or herbal treatment [16]. Additionally, it can reduce inflammation, blood sugar, blood lipids, and cancer-causing free radicals [17] and prevent chronic fatigue and cardiovascular, digestive, and age-related conditions [18]. Triterpenes, saponins, essential oils, alkaloids, aminoglycosides, fatty acids, peptidoglycan, polysaccharides, vitamins, minerals, and phenolic compounds are vital components found in Ginseng ® extracts. There are many types of ginsenosides in Ginseng ® plants, and they are crucial in the plant's physiological and pharmacological qualities [19]. The hepato-renoprotective effects of Ginseng ® have been highlighted in many studies [20,21] due to its anti-inflammatory, antiapoptotic, and antioxidant attributes [21,22]. Malathion-related hepatorenal injury is the subject of a few clinical trials. Consequently, this study's primary objective is to determine whether Ginseng ® can protect rats against malathion-induced hepato-renal injury.
Chemicals
Panax Ginseng® powder root extract (3.5%, as indicated by the manufacturer; Ginseng®, 100 mg soft gelatin capsules; PHARCO Pharmaceuticals, Alexandria, Egypt) was used. Malathion (98% active ingredient, O,O-dimethyl phosphorodithioate of diethyl mercaptosuccinate) was obtained from Kafr El-Zayat, Egypt. The powder was soaked in 70% aqueous ethanol for ten days at 25 °C and filtered, and the solution was evaporated in vacuo to a semi-gelatinous residue. All other chemicals used were of the highest analytical grade.
Experimental Animals, Treatment Design
Forty albino male Wistar rats weighing an average of 149 ± 5 g were used. Rats were obtained from the Medical Research Institute at Alexandria University, Egypt, fed regular food, and allowed free access to water. They were kept under close supervision in metallic rat cages with a 12-h light-dark cycle and a temperature of 27 ± 2 °C for a 10-day adaptation period before treatment. After the acclimatization period, the rats were randomly and equally divided into four experimental groups (n = 10). Group 1 received only the corn oil vehicle used for malathion and served as the control group. Group 2 was given malathion dissolved in corn oil orally at 135 mg/kg [23]. Group 3 was given orally both malathion + Panax Ginseng® (300 mg/kg/day, orally) [24][25][26]. Group 4 was given Panax Ginseng® at 300 mg/kg/day orally. Treatments were administered daily and continued for up to 30 consecutive days.
On the final day of the trial, body weight was recorded and rats were anesthetized with ketamine and xylazine injections. Blood was drawn from the orbital venous plexus, and serum was separated by centrifugation at 3000 rpm for 10 min to analyze hepatic functional biomarkers. Rats were then decapitated, and the liver and kidney were isolated, weighed, and rinsed in ice-cold saline. A 10% neutral-buffered formalin solution was used to preserve tissue for histopathological analysis. Another liver slice was held at −20 °C to analyze oxidative stress markers, while the final one was kept at −80 °C for gene expression analysis.
Biochemical Investigation
Serum aspartate aminotransferase (AST) and alanine aminotransferase (ALT) were assessed following the manufacturer's protocol [27], and alkaline phosphatase (ALP) according to the method of [28]. Analysis of total protein and albumin was carried out as in [29]. Serum globulin was determined by subtracting albumin from total protein. Serum urea and creatinine levels were also assessed [30,31]. Serum uric acid was tested following [32], using commercially available kits (Spinreact, S.A., Gerona, Spain).
Diamond Diagnostics kits, (Cairo, Egypt) and ELISA kits obtained from Wuhan EIAab Science Co. (Catalogue No; E1864r, Wuhan, China) were used to analyze acid phosphatase (ACP) and the enzyme lactate dehydrogenase (LDH) in serum samples, respectively.
Colorimetric kits from Boehringer Mannheim (Mannheim, Germany) were used to quantify serum triglycerides (TG) and total cholesterol (TC). HDL-C was measured according to Lopes-Virella et al. [33]: a serum sample was precipitated with phosphotungstic acid and magnesium chloride, and the cholesterol concentration was measured in the clear supernatant using the Boehringer Mannheim kit (Mannheim, Germany). After that, LDL-C was calculated following the Friedewald et al. [34] equation (given below). Acetylcholinesterase (AChE) was assessed using an ELISA kit, NOVA (Bioneovan Co., Ltd., DaXing Industry Zone, Beijing, China). Serum paraoxonase (PON) activity was measured with commercial kits (Rel Assay, Gaziantep, Turkey) on an autoanalyzer (Cobas Integra 800, Roche, Basel, Switzerland). The ammonia concentration was measured using an ammonia kit (Abcam, Cambridge, UK).
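The explicit form of the Friedewald equation referenced above did not survive extraction; its standard form, for concentrations in mg/dL, is

$$
\text{LDL-C} = \text{TC} - \text{HDL-C} - \frac{\text{TG}}{5}
$$

(with TG/2.2 when concentrations are expressed in mmol/L).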
Antioxidant Tissue Parameters Analysis
Tumor necrosis factor-alpha was detected with an ELISA kit (EZMTNFA, Millipore, Burlington, MA, USA). Lipid peroxidation (LPO), in terms of malondialdehyde (MDA) formation, was detected spectrophotometrically following Ohkawa et al. [35]. Nitric oxide (NO) was assessed colorimetrically following Green et al. [36]. Glutathione (GSH) was measured according to the method of Ellman [37], in which GSH reduces 5,5′-dithiobis(2-nitrobenzoic acid) to a yellow product that is measured spectrophotometrically at 405 nm. SOD and CAT were measured following Sun et al. [38] and Aebi [39], respectively. The activity of glutathione peroxidase (GPx) was assessed following Paglia and Valentine [40].
Gene Expression
According to the manufacturer's instructions, total RNA was extracted using the TRIzol reagent (Life Technologies, Gaithersburg, MD, USA), and cDNA was generated directly using the MultiScribe RT enzyme kit (Applied Biosystems, Foster City, CA, USA). The cDNA was analyzed in triplicate by real-time PCR on a 7500 Real-Time PCR System (Applied Biosystems, Life Technologies, CA, USA) using SYBR Green PCR Master Mix (Applied Biosystems, Foster City, CA, USA). The mRNA expression fold change of the genes under study was calculated relative to the control. The GAPDH housekeeping gene was used to normalize the mRNA expression of the genes being assessed. The primer sequences are listed in Table S1.
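The paper does not state the exact fold-change formula, but relative expression normalized to GAPDH and to a control group is conventionally computed with the 2^(−ΔΔCt) (Livak) method; a minimal sketch, assuming Ct values as input:

```python
def fold_change_ddct(ct_gene, ct_gapdh, ct_gene_ctrl, ct_gapdh_ctrl):
    """Relative mRNA expression by the 2^(-ΔΔCt) method.

    ct_gene / ct_gapdh:           Ct values for the target gene and GAPDH in
                                  a treated sample.
    ct_gene_ctrl / ct_gapdh_ctrl: mean Ct values in the control group.
    """
    d_ct_sample = ct_gene - ct_gapdh            # normalize to GAPDH
    d_ct_control = ct_gene_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control          # relative to control group
    return 2.0 ** (-dd_ct)

# Example with made-up Ct values: target 24.1 vs GAPDH 18.0 in a treated rat,
# versus control means of 26.0 and 18.2 -> ~3.2-fold upregulation.
print(fold_change_ddct(24.1, 18.0, 26.0, 18.2))
```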
Histopathological Examination
Liver and kidney samples were fixed in 10% neutral-buffered formalin for at least 24 h; tissue samples were then paraffin-embedded using the standard procedure [41]. Paraffin blocks were sectioned into five-micron thick slices stained with HE and viewed under a light microscope.
Statistical Analysis
The data were analyzed with a one-way analysis of variance (ANOVA) in SPSS (version 25). p < 0.05 was considered significant, and data are presented as means ± SEM. The significant main effects of the experimental treatment were examined using Duncan's multiple range test.
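The authors used SPSS; an equivalent one-way ANOVA in Python (illustrative only, with made-up values, and noting that Duncan's post hoc test is not part of scipy and would need dedicated statistics software) could look like:

```python
from scipy import stats

# Hypothetical ALT values (U/L) for the four groups, n = 10 each (made up).
control = [42, 45, 40, 44, 43, 41, 46, 44, 42, 43]
malathion = [88, 92, 85, 95, 90, 87, 93, 91, 89, 94]
malathion_ginseng = [55, 58, 52, 60, 57, 54, 59, 56, 53, 58]
ginseng = [41, 44, 42, 43, 45, 40, 44, 42, 43, 41]

f_stat, p_value = stats.f_oneway(control, malathion, malathion_ginseng, ginseng)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 -> significant main effect
```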
Bodyweight
As shown in Table 1, malathion-intoxicated rats had significantly lower final body weights than the other groups. The absolute and relative liver and kidney weights were within normal ranges. Our results also showed that the co-treatment of Ginseng® and malathion reduced the adverse effects on rat growth.
Liver and Kidney Serum Markers
As shown in Table 2, there were significant upsurges in the serum activities of liver enzymes such as AST, ALT, ALP, LDH, and ACP, in addition to the serum levels of total cholesterol, urea, creatinine, uric acid, and ammonia, in the rats of the malathion-intoxicated group compared with the control rats. At the same time, these rats displayed a significant decrease in TP, albumin, and TG serum levels, and AchE and paraoxonase activities were reduced. The co-treatment of malathion-intoxicated rats with Ginseng® produced significant decrements in the serum activities of the previously mentioned liver enzymes and in the serum levels of total cholesterol, urea, creatinine, and uric acid, with substantial increments in serum TP, albumin, and TG, as compared with the malathion-intoxicated rats. In contrast, no significant alterations were observed in the Ginseng®-treated rats in any of the measured biochemical parameters with respect to the control group.
Hepatic and Renal Oxidative Stress Markers
As shown in Figures 1 and 2, compared to control rats, the malathion-intoxicated group showed significant decrements in liver and kidney tissue non-enzymatic GSH concentration and in the enzymatic GPX, SOD, and CAT activities. At the same time, it revealed significant increments in the concentrations of MDA and NO as oxidative stress indicators. Conversely, the concurrent treatment of malathion-intoxicated rats with Ginseng® exhibited substantial increases in the hepatic and renal GSH level and in the activities of GPX, SOD, and CAT, with a significant reduction in the concentrations of hepatic and renal MDA and NO, when compared to the malathion-intoxicated rats only. The antioxidant and oxidant biomarker readings of the Ginseng®-treated rats did not differ significantly from those of the control rats. Figures 3 and 4 show that malathion-treated rats' livers and kidneys had significantly higher IL-1β, Bax, and IFN-γ mRNA expression than the control groups, whereas Nrf2, Bcl-2, and HO-1 were downregulated in the malathion-treated groups. The co-treatment of Ginseng® and malathion resulted in significant restoration of the deviated gene expression to control levels.
8-OHdG and TNF-Alpha
TNF-α and 8-OHdG were increased in the liver and kidneys by malathion administration, indicating a general inflammatory state. The 8-OHdG and TNF-α levels were significantly restored when Ginseng® and malathion were taken together, as shown in Figure 5.
Histopathological Findings
The liver of the control and Ginseng ® treated rats showed the standard histological structure of hepatic lobules and central vein (Figure 6a). The liver sections of malathiontreated rats exhibited severe congestion of the hepatic venous side of the circulation (central vein, portal vein, and hepatic sinusoid) and moderate mononuclear inflammatory cell infiltration in the portal area (Figure 6b) beside diffuse hydropic degeneration of hepatocytes (Figure 6c), mild sharp edge outline vacuoles and mild dilatation of hepatic sinusoids (Figure 6d). Moreover, there were necrobiotic changes in hepatocytes; some hepatocytes contained pyknotic nuclei, which were small, round, and coarsening of the heterochromatin. Others were lysis of nuclear chromatin and disappearance of nucleus forming ghost nuclei with an increase in cytoplasmic eosinophilia of some hepatocytes (Figure 6e).
The Ginseng® with malathion-treated group showed a nearly normal histological structure (Figure 6f). The kidney sections of the control and Ginseng®-treated rats displayed a normal structure of the cortex and medulla (Figure 7a). The kidney sections of malathion-treated rats exhibited severe congestion of renal blood vessels (Figure 7b), inter-tubular capillaries, and glomerular capillaries (Figure 7c), beside necrotic glomeruli, hyaline casts in the lumen of renal tubules (Figure 7d), and mild interstitial mononuclear cell infiltrations (Figure 7e). The co-treatment of Ginseng® with malathion exhibited a nearly normal tissue architecture with minor hydropic renal epithelial cell degeneration and mild congestion of inter-tubular capillaries (Figure 7f).

Figure 7. Kidneys of malathion-treated rats displaying congestion of renal blood vessels, inter-tubular capillaries, and glomerular capillaries, beside necrotic glomeruli, hyaline casts in the lumen of renal tubules, and interstitial mononuclear cell infiltrations; Ginseng®-treated rats with malathion showed a nearly normal histological structure with mild hydropic epithelial cell degeneration and mild congestion of inter-tubular capillaries. Bar = 100 µm.
Discussion
Malathion is a widespread organophosphate insecticide used to control various insects [42]. Malathion and its metabolites can induce oxidative stress and impairment of hepatic and renal function [43], destroying cellular membranes, causing DNA damage, and increasing ROS production, leading to oxidative damage of biological systems [44,45]. The gastrointestinal contents and adipose tissue both had high levels of malathion. Biliary excretion appears to be a primary route of elimination for metabolites, as MCA was identified at a relatively high level. The kidneys had the highest quantities of the metabolites DCA and MCA [46]. Many antioxidants are used against malathion toxicity, and the purpose of this study was to see if Ginseng® could protect against malathion poisoning.

The liver enzymes play a critical role in regulating physiological processes, for example, biosynthesis of macromolecules, cellular metabolism, and detoxification [47]. In our study, malathion administration caused significant upsurges in the serum ALT, AST, ALP, LDH, and ACP activities, together with reduced AchE and paraoxonase activities, compared to the control group. This may be due to the ability of malathion to induce oxidative stress and production of ROS, causing liver damage and necrosis and leading to the liberation of these enzymes from hepatic cells into the blood [48,49]. The liver is the most active metabolizing organ mediating the bio-activation of thiono-organophosphates [50] and is one of the essential targets of malathion poisoning [51]. Blasiak et al. [52] have reported that malathion has an initial cytotoxic action in human lymphocytes, causing cellular death without causing DNA damage; still, its active metabolites, malaoxon and isomalathion, act on DNA, breaking its chains.
These results follow [53][54][55], who reported that malathion administration induced liver enzyme activities. Inflammatory cytokines, metabolic dysfunction, apoptosis, and gene expression modulation are all factors that contribute to malathion's hepato-renal toxicity [3]. In agreement with [1,54], our results also showed significant decreases in the serum TP and albumin. This may be explained by the ability of malathion to induce liver damage and decrease the synthesis, digestion, and absorption of protein [56] because the liver is the main site for plasma protein synthesis. Additionally, our results exhibited a considerable rise in cholesterol and a reduction in TG levels. This may be due to the ability of pesticides to block the bile duct and decrease cholesterol secretion in the intestine [57] or to the inhibition of the pancreatic function by malathion leading to poor absorption of lipids [58]. These findings are in line with [54], who reported a substantial rise in cholesterol and a drop in TG in male mice injected with malathion for six days. In renal function tests, our study revealed significant increments in serum urea, creatinine, and uric acid levels in malathion intoxicated rats. These increments indicate renal dysfunction [59]. This may be due to the induced renal oxidative damage with increased ROS production and decreased antioxidants [60]. It may also be due to glomerular filtration deficiency, whereby excretion decreases and serum levels rise [61]. In addition, many medicines can change uric acid levels, affecting uric acid net reabsorption in the proximal tubules [62]. These results agree with the results of [1,63,64].
The hepatorenal induced malathion toxicity is related to the induction of oxidative stress [43], so our previously reported biochemical changes can be proven by our observations that revealed significant reductions in hepatorenal GSH level and antioxidant enzymes activities (GPX, SOD, and CAT) associated with noteworthy increments in MDA and NO levels in the malathion intoxicated group. In addition, when malathion is converted to malaoxon, it is known to produce a lot of reactive oxygen species in the liver [65].
These results are consistent with [66][67][68] in hepatic tissue, and [1,69] in hepatic and renal tissues. Our findings have verified other studies that corroborate that the organophosphorus administration causes a disturbance in hepatic and renal tissues [70]. Organophosphorus exposure is connected with many health problems, including oxidative stress and ROS overproduction [49].
Malathion's toxicity is exacerbated by its metabolites and pollutants. Malathion's principal source of toxicity is malaoxon, which is formed by the oxidation of malathion in mammals, animals, and plants and is 40 times more acutely hazardous than malathion [71,72]. Malathion is transformed to malaoxon via oxidative sulfuration, which is mediated by a microsomal system of enzymes known as mixed-function oxidases (MFO), one of which is cytochrome P450 (CYP450) [65]. The liver has a robust oxidative metabolism and a lot of CYP450 activity, essential for xenobiotic biotransformation [73]. High levels of ROS are produced during the biotransformation processes of malathion into malaoxon. Malathion is also detoxified through glutathione conjugation reactions. Malathion exposure was found to reduce GSH levels, raise GSSG levels, and lower the GSH/GSSG ratio [74].
The malathion-induced oxidative stress can also increase nitric oxide synthase enzyme activity in the rat liver and nitric oxide production [14]. Furthermore, as reported in this study, the malathion-induced hepatorenal histopathological changes support our biochemical results and the oxidative changes. The liver showed congestion of blood vessels, hepatic degeneration, and necrosis associated with inflammatory cell infiltration, in line with [54,75]. Additionally, the renal tissue showed severe congestion of renal blood vessels and glomerular necrosis with the presence of hyaline casts in the tubular lumen, in agreement with [54]. Furthermore, an initial effect of malathion's toxic compounds is that it activates Kupffer cells in the liver, resulting in enhanced MPO activity and the release of pro-inflammatory cytokines such as IL-1β, IL-6, and IFN-γ. Because inflammatory responses are implicated in liver injury via inducing hepatosteatosis, the function of inflammatory reactions in toxicological processes is of great interest [76].
Our results showed that co-treatment of Ginseng ® to malathion intoxicated rats ameliorated the hepatorenal toxicity induced by malathion. The Ginseng ® biochemical hepatoprotective related outcomes were proved by significant reductions in the serum ALT, AST, ALP, LDH, ACP, AchE, and paraoxonase activities, cholesterol levels with substantial increases in TP, albumin, and TG levels when compared with the malathion intoxicated group. Additionally, the Ginseng ® treatment alone presented a remarkable upsurge in TP levels compared with the control. Our parallel findings show that Ginseng ® can protect the liver against CCL4 toxicity [77], D-galactosamine/lipopolysaccharide [78], Fipronil [79], and cyhalothrin [80], and that Ginsenoside Rg5 improves AChE in the brain cortex [81,82].
Although the liver is the primary source of paraoxonase, it has also been found in the kidney, heart, and brain [83]. After being synthesized in the liver, the paraoxonase enzyme, an antioxidant that prevents LDL oxidation, is transported along with HDL in the plasma [84]. Paraoxonase activity was improved by Ginseng ® supplementation [85], which supports our Paraoxonase findings. Thus, the liver is the primary metabolizing site for thiono-organophosphate biotransformation, with the kidney contributing to hazardous product removal [22,23]. Many illnesses linked to organophosphorus exposure are preceded by excessive production of reactive oxygen species (ROS) and oxidative stress [49,86].
Furthermore, the renal biochemical parameters showed that co-administration of Ginseng ® led to substantial decrements in serum urea, creatinine, and uric acid levels compared with malathion only. These findings accord with the findings that indicate the Ginseng ® renoprotective effect against gentamicin sulfate [77] and Cisplatin [79,87,88]. The antioxidant properties of Ginseng ® have long been recognized due to its ability to increase the expression of antioxidant enzyme genes that scavenge reactive oxygen species. Ginseng ® increases antioxidant enzyme activity and free radical scavenging [89]. SOD and GPx, as well as heme oxygenase-1 (HO-1), were discovered to be boosted by Ginseng ® 's ability to increase the activity of self-antioxidant enzymes such as SOD and GPx [90,91], in addition to lipid peroxidation inhibition [92,93]. Ginseng ® recovers the glomerular filtration, leading to the increased thickness of the basement membrane glomeruli, as reported by [77]. Ginseng ® 's hepato-renal protective impact can be traced back to its biochemical and pharmacological capabilities, which include anti-inflammatory and antihyperlipidemic properties [94], and antioxidant effects as free radical scavenging and stimulating the activities of antioxidant enzymes, which play a role in scavenging of ROS as reported in our study, and in [89]. Additionally, these biochemical results may be confirmed by the reported antioxidant effects of co-treatment with Ginseng ® in our study, showing significant increases in GSH quantities and antioxidant enzyme activities (GPX SOD and CAT) associated with marked reductions in MDA and NO levels in comparison with the malathion intoxicated group. These results agree with the results of [95] in renal tissue and [79,80] in hepatic and renal tissue.
The hepato-renal protective effect of Ginseng ® against malathion exhibited histopathology as Ginseng ® improved and ameliorated the toxic alterations caused by malathion to nearly normal appeared organs. These results are supported by other studies on the protective effect of Ginseng ® against hepatorenal damage induced by fipronil as reported by [79] and hepatic injury caused by cyhalothrin as reported by [80].
TNF-α is a pro-inflammatory cytokine involved in innate and acquired immunity, cell proliferation, tissue necrosis, and apoptosis [96]. In mutagenic damage, 8-OHdG is the most prevalent base modification and is a biomarker of oxidative DNA stress [97]. In this study, malathion intoxication significantly increased the serum levels of TNFα, in line with [68], due to enlargement of sinusoids, mononuclear cell infiltration, and hepatic necrosis caused by malathion, as reported by [1]. Additionally, it showed a significant increase in 8-OHdG levels compared with the control rats, in agreement with [98]. This increment in serum 8-OHdG level confirms the role of malathion in producing DNA damage through induction of genotoxicity and chromosomal aberrations [99].
On the other hand, the co-administration of Ginseng® to malathion-intoxicated rats led to a significant decrease in TNFα, in harmony with [87,100] against N-acetyl-p-aminophenol-induced hepatotoxicity and cisplatin-induced renal toxicity, respectively. Furthermore, the amelioration of 8-OHdG levels parallels [101] against cyclosporine nephrotoxicity. The ameliorating effect of Ginseng® can be attributed to its antioxidant properties, as previously mentioned, owing to vital constituents such as ginsenosides, polyacetylenes, flavonoids, and phenolics [95], which are responsible for the protective effects of Ginseng® against various diseases [102].
Malathion's acute toxicity affects the nervous system through the inactivation of the AChE and BChE enzymes [1]. It could be concluded that Ginseng® exerts antioxidant effects against malathion toxicity by improving hepatic and renal biochemical parameters and oxidative biomarkers, thereby decreasing TNFα and serum 8-OHdG levels.
We discovered significant upsurges in the expression of IL-1β, IFN-γ, and Bax. In contrast, mRNA expression of the Bcl-2, Nrf2, and HO-1 genes was decreased, indicating apoptosis and an inflammatory response. Furthermore, these investigations establish a connection between malathion-induced inflammatory reactions in the liver and insulin resistance [103]. Ginseng®'s antioxidant activities and the subsequent ROS scavenging, suppression of NF-κB activation, and reduction of cytokine release are responsible for the improvement of the apoptotic rate in the co-treated groups. In protecting against oxidative stress-induced cell death in neuroblastoma cells, Ginseng® downregulated P53 and caspase-3, whereas the anti-apoptotic Bcl2 increased [104].
In histopathological examination, Ginseng ® supplementation reduced most pathological microscopic changes related to malathion exposure, which supports the previously discussed Ginseng ® protection mechanisms against hepato-renal damage. Our result was supported by [105], who showed malathion's adverse effects on the kidney and hepatic tissue. In the same way, the co-treatment with Ginseng ® reinstates typical hepatic architecture, as stated by [80].
Conclusions
Ginseng ® , as a therapeutic solution including multiple active components, was found to be effective in protecting against the biochemical, oxidative, and inflammatory effects of malathion. This is probably via restoring metabolic parameters, increasing antioxidant defense systems, and lowering inflammatory mediator production. | 6,748 | 2022-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Pathogenic Delivery: The Biological Roles of Cryptococcal Extracellular Vesicles
Extracellular vesicles (EVs) are produced by all domains of life. In fungi, these structures were first described in Cryptococcus neoformans and, since then, they were characterized in several pathogenic and non-pathogenic fungal species. Cryptococcal EVs participate in the export of virulence factors that directly impact the Cryptococcus–host interaction. Our knowledge of the biogenesis and pathogenic roles of Cryptococcus EVs is still limited, but recent methodological and scientific advances have improved our understanding of how cryptococcal EVs participate in both physiological and pathogenic events. In this review, we will discuss the importance of cryptococcal EVs, including early historical studies suggesting their existence in Cryptococcus, their putative mechanisms of biogenesis, methods of isolation, and possible roles in the interaction with host cells.
Vesicular Export: A General System of Extracellular Delivery of Biological Structures
Extracellular vesicles (EVs) are vehicles exporting molecules from cells to the extracellular milieu and this kind of transport has been observed in organisms from all domains of life [1]. EVs are round-shaped, bilayered lipid membranes loaded with a diverse nature of molecular classes, including proteins [2], lipids [3,4], glycans [4,5], nucleic acids [6], and pigments [7,8].
In eukaryotes, EVs can be classified according to their mechanism of biogenesis. Apoptotic bodies are extracellular vesicles larger than 1 µm that are released when the producing cells undergo apoptosis [9]. Microvesicles or ectosomes, which range from 100 nm to 1 µm, are originated by shedding at the plasma membrane level [10]. Exosomes range from 30 to 200 nm and they result from the fusion of multivesicular bodies (MVBs) with the plasma membrane [11].
Knowledge of EVs in fungi is expanding rapidly [28]. This review will focus on cryptococcal EVs, and we will discuss the main findings in this field, from the early description of EV-like particles in the 1970s to current times (Figure 1).
Figure 1. Main discoveries related to cryptococcal extracellular vesicles (EVs). The timeline herein illustrated for Cryptococcus findings was based on that described for all fungal EVs [28].
EV Biogenesis and Secretory Pathways in Cryptococcus
Several studies indicated that fungal EVs are produced and released under the coordination of multiple mechanisms. EVs may be formed at different cellular sites [29], with the possible participation of both post-Golgi conventional secretion and unconventional secretory pathways [30]. Several comprehensive reviews on the regulation of secretory pathways in eukaryotes are available in the literature [31][32][33], and the details of these processes are not in the scope of this manuscript.
The conventional secretory pathway in eukaryotes results in the fusion of post-Golgi vesicles with the plasma membrane, and subsequent release of luminal molecules to the extracellular milieu [34,35]. This general process requires to a large extent the participation of members of the SEC gene family, which regulates the traffic from the endoplasmic reticulum (ER) to the Golgi, and then to the cell surface [34,36]. The role of the SEC6 gene in EV formation was evaluated in Cryptococcus [37].
The disruption of SEC6 resulted in the negative detection of EVs in C. neoformans. However, the functional connection between SEC genes and EV formation has not been established in Cryptococcus [37].
Exosome formation requires the maturation of endosomes into MVBs [11]. The latter compartments can be targeted to the cell surface, allowing fusion with the plasma membrane and consequent release of the luminal MVB vesicles into the outer space [30,38]. MVB formation requires the functionality of the endosomal sorting complex required for transport (ESCRT). The ESCRT pathway is very complex, and its functionality demands a series of finely regulated events [39,40]. Briefly, it initiates with the generation of phosphatidylinositol 3-phosphate at the endosomal membrane by the phosphatidylinositol 3-kinase Vps34, resulting in the formation of the ESCRT-0 subcomplex. ESCRT-0 formation consequently regulates the formation of the ESCRT-I, -II, and -III subcomplexes, which finally results in MVB formation and EV release [39,40].
In fungi, the importance of the ESCRT complex for EV formation was suggested in strains where distinct genes regulating ESCRT functions were disrupted. In Candida albicans, the deletion of several genes related to the ESCRT complex resulted in a significant decrease in EV production [41]. In C. neoformans, a mutant strain lacking expression of Vps27, a component of the ESCRT-0 subcomplex, manifested abnormal vesicle traffic and release, resulting in an accumulation of MVBs in the cytosol [42]. The deletion of other genes belonging to the ESCRT pathway led to significant defects in the delivery of virulence factors associated with EVs [42][43][44][45]. The cryptococcal mutant strains vps27∆ (ESCRT-0), vps23∆ (ESCRT-I), and snf7∆ (ESCRT-III) had attenuated virulence in a mouse model of cryptococcosis [42][43][44]. Although the impact of gene deletion on EV production was studied only in the vps27∆ strain [42], the attenuation of virulence in ESCRT mutants suggests important connections between the unconventional secretory pathway and the pathogenesis of Cryptococcus.
Other regulators of unconventional secretion were linked to EV formation in C. neoformans. The Golgi reassembly and stacking protein (GRASP), for instance, regulates EV cargo and dimensions in Cryptococcus [46]. In C. neoformans, a grasp∆ mutant strain produced EVs with dimensions that significantly differed from those produced by wild-type cells [46]. This strain also manifested attenuated virulence [47] and a different RNA composition [46]. Autophagy regulators, which also participate in the formation of EVs in other eukaryotes, also participate in the formation of cryptococcal EVs. An atg7∆ strain manifested hypovirulence [48] and EVs produced by this strain had a slightly different RNA composition, in comparison with wild-type cells [46]. Similarly, the flippase Apt1, which plays an essential role in membrane architecture and, consequently, in secretory mechanisms [49,50], was required for correct EV formation and virulence in C. neoformans [49]. Together, these results strongly suggest that EV formation, virulence, and unconventional secretion are connected in C. neoformans.
Cell Wall Passage
In fungi, exported particles and molecules have to overcome the cell wall to reach the outer environment [4,29,51]. EVs supposedly use three putative mechanisms to cross the cell wall. First, EV accumulation in the periplasmic space would create a turgor pressure shoving the vesicles to pass across the naturally existing pores of the cell wall. Second, EVs could catalyze their passage across the cell wall using glycan hydrolases, including β-glucosidases and endochitinases. Third, EVs could use pore channels to reach the outer environment, by getting deformed to adapt to the pore morphology, and moving out through cytoskeleton-dependent mechanisms [3,13,51,52].
In C. neoformans, EVs were found to be released collectively or individually [29], but the exact mechanism explaining how they cross the cell wall has not been characterized. Microscopic analyses demonstrated vesicles near damaged areas in the cell wall, but a clear association between cell wall breakage and vesicle passage has not been established [29]. Indeed, intra-wall vesicles in apparently intact regions were found in C. neoformans [4]. Microscopic observations also revealed that melanization in C. neoformans is associated with the accumulation of vesicle-like structures in the periplasmic space [29,53]. During this process, a significant reduction in the porosity of the cell wall was observed, and vesicles were observed crossing the cell wall directly [29,53]. In summary, there has been no evidence so far that cryptococcal vesicles use pore channels to reach the extracellular space, reinforcing the hypotheses of pressure-induced release and/or vesicle-mediated cell wall hydrolysis. The latter hypothesis has been recently validated in bacteria. In Bacillus subtilis, EV formation was demonstrated to be a result of endolysins that degraded bacterial peptidoglycan and generated cell wall holes, which finally facilitated EV release [54].
Cell wall porosity can directly impact the efficacy of EV export through the fungal wall. Therefore, the composition of the cell wall might affect EV release. In this sense, C. neoformans mutants lacking each of the eight putative chitin synthase genes (CHS1-8) had their ability to produce EVs analyzed in a recent study. The C. neoformans mutants indeed manifested variable cell wall defects, but the analysis of EV production was puzzling, since the pattern of EV detection in the chs∆ mutants was highly variable [55]. For instance, it was initially predicted that disruption of CHS3, a gene encoding a class IV synthase mainly responsible for chitin synthesis in C. neoformans, would be more efficient in releasing vesicles, based on its previously suggested enhanced cell wall porosity [56]. However, this mutant was the one with the lowest efficacy in EV release. Other mutants (chs4∆ and chs5∆) with no apparent cell wall alterations produced high amounts of EVs. Therefore, the differences observed in the EV analysis were not a consequence of altered cell wall porosity, although the possibility that the mutant strains simply had different abilities to produce EVs could not be ruled out. These results efficiently illustrate the need for a better understanding of how EVs traverse the fungal cell wall. In this sense, a recent study demonstrated that the vast majority of cryptococcal EVs are decorated with mannoproteins [57], suggesting that vesicle composition is directly affected by the presence of cell wall components. These results formed the basis for the proposal of a novel structural model of cryptococcal EVs, in which the outer vesicular layer is composed of the capsular polysaccharide glucuronoxylomannan (GXM), with the lipid bilayer carrying a fibrillar, protein coat enriched with mannoproteins [57].
Bioactive Components of Cryptococcal EVs
The first virulence-associated component characterized in cryptococcal EVs was GXM [4,7,58], the main component of the polysaccharide capsule [59][60][61]. It is now known that approximately 70% of cryptococcal EVs are coated with GXM [57]. In contrast to most of the microbial polysaccharides, GXM is synthesized intracellularly, in the Golgi [58]. In C. neoformans, disruption of the SAV1 gene, which encodes a homolog of the Sec4/Rab8 subfamily GTPases that conservatively regulates exocytosis in yeast, resulted in an accumulation of vesicles loaded with GXM in the cytosol [58]. Additionally, the treatment of C. neoformans cells with brefeldin A, an inhibitor of the Golgi-derived transport, inhibited capsule formation [62]. Finally, deletion of the gene encoding GRASP resulted in aberrant Golgi morphology and reduced GXM secretion, with a negative impact on capsule size and attenuation of virulence in in vitro and in vivo models [47]. Together, these results point to the participation of the Golgi in GXM synthesis and export to the cell surface. The extracellular stage of GXM traffic, however, was not studied until cryptococcal EVs were first characterized. Since GXM is a major extracellular component in the Cryptococcus genus, the above-mentioned results implied the existence of mechanisms of trans-cell wall export.
The deletion of genes related to EV export through the ESCRT complex directly impacted the Cryptococcus capsule. Disruption of VPS34, VPS27, HSE1, VPS23, VPS22, VPS25, VPS20, and SNF7 genes led to a significant decrease in capsule size [42,44,45,63]. These results could be related to the observation of EVs with altered size distribution and reduced capsule dimensions in the C. neoformans vps27∆ strain [42]. In this sense, capsular growth was correlated with EV detection. We observed that induction of capsule growth in vitro was accompanied by an increase in the detection of EVs carrying GXM [4]. Robertson et al. (2012) found that the treatment of C. neoformans cells with EDTA resulted in a remarkable reduction in EV detection, and a significant reduction in capsular diameter [64]. On the other hand, a C. neoformans mutant strain lacking a putative G1/S cyclin (Cln1) displayed an abnormal increase in capsule size, and a significantly increased production of EVs [65]. The content of cryptococcal EVs has been also linked to capsule formation. Deletion of the C. gattii gene encoding a putative scramblase (Aim25) resulted in an increased capsule size [66]. No differences in the amount of EVs were observed in WT and mutant strains. However, an enrichment of a population of larger EVs with a significantly increased GXM concentration was detected in the mutant. Interestingly, the acapsular strain cap67∆ was more efficient in incorporating GXM from EVs obtained from the aim25∆ strain than the WT strain [66]. The importance of membrane regulators for proper EV formation and GXM export was also suggested in studies of the Apt1 flippase in C. neoformans. Mutant strains produced EVs with a lower concentration of GXM and had smaller capsules in vivo [49,50]. More recently, it has been suggested that ZIP3, a cryptococcal regulator of manganese homeostasis, also participates in EV formation, as concluded from the observation of a higher concentration of GXM in culture supernatants of zip3∆ mutants and a high production of EVs, with an enrichment of an EV population of larger dimensions [67]. Together, these studies suggest the existence of connections between EV production and export of the most important capsule component of Cryptococcus spp.
The C. neoformans mutant strains vps34∆, vps27∆, and hseI∆, all showing functional defects in the ESCRT-0 complex, failed to export laccase to the cell wall [42], which might suggest an association between exosome formation and melanization in Cryptococcus. These results might be related to those observed with a C. neoformans sec6∆ mutant. Sec6 is a protein involved in the polarized fusion of exocytic vesicles with the plasma membrane, and its disruption in Cryptococcus resulted in an increased formation of MVB-like structures, affecting the transport of laccase to the cell wall [37]. A similar interpretation can apply to urease, another EV-linked virulence factor of cryptococci [7]. Interruption of the ESCRT pathway by disruption of the VPS27 gene in C. neoformans resulted in reduced urease activity in vitro [42], and the same phenotype was also observed in the C. neoformans sec6∆ mutant [37]. These studies reinforce the notion that both conventional and unconventional secretory pathways participate in the release of cryptococcal EV-associated virulence molecules.
The diversity of molecules inside the Cryptococcus EVs is not restricted to virulence factors. Several RNA subclasses were described in cryptococcal EVs [6,66,70,71]. The first evidence of the presence of RNA in Cryptococcus EVs was provided by Nicola et al. (2009) using an RNA-selective nucleic acid dye to stain vesicular structures [71]. Different subclasses of RNA were further described in cryptococcal EVs, including, small nuclear RNA, ribosomal RNA, transfer RNA, microRNA, long noncoding RNA, and messenger RNA [6,70,72]. Recently, Liu et al. (2020) showed that Cin1, a multidomain adaptor protein that regulates cryptococcal growth, intracellular transport, and the production of several virulence factors [73], also plays an important role in regulating RNA export in C. deneoformans [70]. RNA export in C. neoformans EVs relies on the unconventional secretory pathway. Disruption of GRASP in C. neoformans leads to a significant change in the RNA cargo in EVs when compared to the WT strain [46]. Since disruption of GRASP also resulted in decreased GXM export, these results reinforce the notion that EVs and unconventional secretory mechanisms are connected in Cryptococcus. Figure 2 illustrates the importance of vesicles and EV cargo in physiology and virulence of Cryptococcus. The participation of cryptococcal EVs and their components in fungal virulence suggests that targeting proteins participating in the secretory machinery could lead to the development of novel chemotherapies. Pharmacological inhibitors of EV formation in fungi have not been characterized so far. However, in other eukaryotes, compounds reported to inhibit EV formation (microvesicles or exosomes) were characterized [74]. If these molecules can also affect EV formation in Cryptococcus and other fungi, they could interfere with their pathogenic potential.
Impact of EVs during Cryptococcus Infection of Host Cells
EVs can interfere with the outcome of the interaction of cryptococci with infected cells. Murine macrophages RAW 264.7 and J774 can incorporate C. neoformans EVs [75,76]. Similarly, C. gattii EVs were incorporated by J774 macrophages [77]. The uptake of EVs by mouse macrophages is very efficient, as concluded from the incorporation of C. gattii EVs in only five minutes [77]. Actin polymerization inhibitors blocked EV uptake, suggesting the participation of cytoskeleton plasticity [77].
Exposure to cryptococcal EVs resulted in alterations of phagocyte functionality. The treatment of RAW 264.7 macrophages with C. neoformans EVs resulted in increased phagocytosis of non-opsonized C. neoformans [75]. A more prominent increase in the phagocytosis levels was observed when the macrophages were stimulated with EVs produced by a C. neoformans acapsular strain, which indicates that changes in vesicular composition differentially impact their functions [75]. In the same study, C. neoformans EVs were demonstrated to affect cytokine production by RAW 264.7 macrophages. Stimulation of the macrophages with EVs led to increased production of TNF-α, TGF-β, and IL-10. Once again, differences were found between stimulation of the phagocytes with EVs from acapsular or encapsulated C. neoformans strains. EVs from the acapsular strain led to an increase in the production of TNF-α, which induced antifungal activity. On the other hand, EVs from the encapsulated C. neoformans strain led to a significant increase in the production of TGF-β and IL-10, which are known to be positively modulated by GXM [75]. C. neoformans EVs also modulated nitric oxide (NO) production. Curiously, the stimulation of NO production was significantly less effective when the macrophages were treated with EVs isolated from the acapsular C. neoformans strain cap67∆ [75]. These results might be related to the ability of the EVs to modulate fungal killing by host phagocytes. Accordingly, environmental phagocytes are also affected by Cryptococcus EVs. Rizzo et al. (2017) observed stimulation of Acanthamoeba castellanii with EVs resulted in a significantly increased survival of phagocytized C. neoformans [78].
Besides influencing the performance of phagocytes, cryptococcal EVs also modulated important features of the Cryptococcus physiology during macrophage infection. C. gattii EVs obtained from a virulent strain were used to treat macrophages infected with a non-virulent C. gattii isolate, which resulted in the accumulation of the vesicles in the phagosomes [77]. Inside the phagosomes, the EVs from the pathogenic C. gattii strain stimulated the intracellular replication of the non-pathogenic isolate. Negative results were observed when EVs from the non-pathogenic strain or produced by an acapsular mutant were tested [77]. Similarly, Hai et al. (2020) demonstrated that culture filtrates from a high virulent strain induced an increased virulence in a hypovirulent strain [79]. This effect was only observed under conditions of EV availability. These results indicate that cryptococcal EVs are vehicles operating in the transfer of virulence traits between distinct Cryptococcus strains and demonstrate an important function of the vesicles in cell-to-cell communication processes.
Cryptococcus EVs were also suggested to positively impact both adhesion and invasion of the blood-brain barrier (BBB) by fungal cells [80]. In a mouse model, C. neoformans EVs induced an enhanced fungal burden in the brain and the cerebrospinal fluid in a dose-dependent manner, with an accumulation of structures that could correspond to EVs surrounding the brain lesions of infected mice [80]. More recently, additional modulatory effects on the host's immune mechanisms were demonstrated. The mammalian β-galactoside-binding protein Galectin-3 (Gal-3) recognized EVs and promoted vesicle disruption, resulting in decreased levels of interaction of the fungi with macrophages in vitro, reduced recovery of intact EVs, and a diminished uptake of EVs by macrophages [81].
Cryptococcal EVs: Vaccine Candidates?
The ability of cryptococcal vesicles to modulate the host's immunological functions points to their potential use as vaccines. Since licensed antifungal vaccines are still not available [82], information on how fungal EVs activate the immune response could be greatly impactful. The vaccinal potential of fungal EVs was first suggested in the Candida model [83,84], and similar observations were recently described in Cryptococcus. In an immunization model of Galleria mellonella with vesicular structures enriched in sterolglycosides (SGs) and GXM, EV administration resulted in the protection of the invertebrate host against a lethal challenge with C. neoformans [85], revealing a potential vaccination strategy for cryptococcosis using sgl1∆ EVs [85]. The vaccinal potential of cryptococcal EVs was recently confirmed in a murine model of cryptococcosis. Immunization of mice with EVs obtained from an acapsular C. neoformans mutant strain induced a strong antibody response and significantly prolonged survival of the mice upon a lethal challenge with C. neoformans [57]. Importantly, the immunological mechanisms associated with this protection are still unknown, but cryptococcal EVs were recognized by antibodies produced by infected mice [7,57]. Figure 3 presents an overview of the role of EVs during the interaction of Cryptococcus with host cells, including their vaccinal potential.
Facilitated Methods for the Analysis of Cryptococcal Vesicles
The generation of knowledge on the functions and mechanisms of the biogenesis of cryptococcal vesicles has been continuously affected by methodological limitations. Empirically, it is known in the field that Cryptococcus EVs are produced in low yields, in comparison to other models. Indeed, our laboratory experience shows that other yeast genera, including Saccharomyces and Candida, are more efficient producers of EVs. Therefore, the perception that improved methods of EV analysis were necessary for the Cryptococcus model has been clear for years.
EV analysis in fungi and other eukaryotes has historically included isolation of membrane structures from the supernatants of liquid cultures by ultracentrifugation methods, followed by particle analysis by a combination of microscopic and physical methods [4]. Fungal EVs have been analyzed according to these protocols for more than a decade. Although this model has been helpful to address several questions, it must be highlighted that fungal cells are rarely distributed in liquid matrices both in the environment and during infection. The isolation of cryptococcal EVs from liquid media can take up to two weeks, with very low yields of EV isolation. This was the basis for the design of a novel protocol of isolation of EVs from the Cryptococcus genus. We hypothesized that EVs could be recovered from cultures obtained in solid media since there was no evidence in the literature suggesting that EVs were exclusively produced in liquid matrices.
Cultivation of C. neoformans and C. gattii on regular agar plates followed by suspension of the yeast cells in PBS for further centrifugation steps resulted in facilitated detection of typical EVs [66]. However, since this study and earlier articles used diverse methods for EV quantification, a reliable comparison between the yields of the different methods is still not available. Of note, the solid medium protocol successfully allowed EV detection independently of the medium used, and all fungal species tested, including Candida albicans, Histoplasma capsulatum, and Saccharomyces cerevisiae, gave positive results. The protocol was shown to be highly reproducible and fast: from the recovery of fungal cells to the analysis of ultracentrifugation pellets, the estimated time was 5 h. Isolated EVs were reliably detected by ELISA targeting GXM, nanoparticle tracking analysis, and transmission electron microscopy. Our most recent unpublished results indicate that the facilitated EV isolation method allows efficient analysis of samples obtained from multiple isolates, separation of vesicles by gradient centrifugation, and analysis of their small-molecule composition. We anticipate that, in this new scenario, it will be possible to experimentally address currently complicated questions related to vesicle fractionation, diversity, and biogenesis. The most recent methods of EV isolation from cryptococcal cultures are summarized in Figure 4.
Gaps, Unanswered Questions, and Perspectives
Despite the recent progress in the field of fungal EVs, particularly in the Cryptococcus model, it is unquestionable that several gaps and questions remain open. For instance, most studies performed so far were based on single, standard strains of C. neoformans rather than C. gattii, which limits our knowledge on the compositional diversity of cryptococcal EVs. Considering that EV composition is a major determinant of their functions, studies on the diversity in the production of EVs by different cryptococcal species and strains are necessary. Similarly, it is still unknown whether the production of EVs changes at the various life-stages of Cryptococcus.
Novel methods for EV separation are similarly necessary. All studies performed with Cryptococcus so far used centrifugation protocols that resulted in the coisolation of diverse EV populations, as recently illustrated in early [7] and recent [57] studies. This limitation directly impacts, for instance, immunological studies, since these studies are testing mixed EV populations that can manifest divergent immunological functions. Therefore, methods separating EVs based on their biogenesis and/or physical chemical properties are required for refining the functional studies, and they likely improve the knowledge on their vaccinal potential.
Finally, as previously mentioned in this manuscript and several others, we do know that cryptococcal EVs have different cellular origins, but we still do not know where exactly they come from. The identification of genes regulating EV formation and/or pharmacological inhibitors of EV release in Cryptococcus will likely open new avenues of investigation, with the potential to change the way we understand the functions of cryptococcal EVs.
Funding:
The authors received no specific funding for this manuscript. | 7,240.8 | 2020-09-01T00:00:00.000 | [
"Biology"
] |
Supervised evolutionary programming based technique for multi-DG installation in distribution system
Muhammad Firdaus Shaari, Ismail Musirin, Muhamad Faliq Mohamad Nazer, Shahrizal Jelani, Farah Adilah Jamaludin, Mohd Helmi Mansor, A.V. Senthil Kumar Faculty of Electrical Engineering, Universiti Teknologi MARA Malaysia, Shah Alam, Selangor, Malaysia Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur, Malaysia Department of Electrical Power Engineering, College of Engineering, Universiti Tenaga Nasional, Selangor, Malaysia Hindusthan College of Arts and Science, Hindusthan Gardens, Behind Nava India, Coimbatore 641 028, India
INTRODUCTION
In recent decades, the distribution network has been developed to meet the increasing power demand and load of a growing number of consumers [1]. This increasing demand has become the most important challenge for the power system in supplying power to consumers [2]. It also brings side effects such as voltage drop, increased power losses, and load imbalance in the distribution system [1]. Electricity demand is expected to increase gradually by 28% from 2011 to 2040, growing from 3,839 billion kWh in 2011 to 4,930 billion kWh in 2040 [3]. Power utilities are responding to meet this increasing demand during the period. Briefly, a power system can be subdivided into three major parts: generation, transmission, and distribution. In the generation section, electric power is generated and the voltage is stepped up to high voltage. The transmission network then transfers the high voltage from the generating station to the distribution system, which ultimately supplies the load. Before the transmission network is connected to the distribution system, the voltage is stepped down to medium and low voltage. Lastly, the distribution network delivers the low voltage to the end users [4]. However, because the distribution system operates at low voltage and therefore carries higher currents than the high-voltage network, it suffers from high power losses, a poor voltage profile, and an increased cost of power [5].
Basically, the total power generated at the generation unit is not equal to the total power consumed by the end users of the distribution system; part of the power is lost during dispatch [6]. In developing countries, the power loss in the distribution network is about 20% of the total power generated, due to I2R losses in the network, wasting millions of dollars every year [7]. The power loss in a distribution system can be categorized into two types, real power loss and reactive power loss, but the effect of active (real) power loss is the most important because it reduces the efficiency of power transfer and deteriorates the voltage profile [5]. The most cost-effective and economical solution to this problem is installing distributed generation (DG) in the distribution network [2]. Installing DG in the distribution system helps to reduce network losses, supply power, and improve the voltage profile of the distribution system [8].
DGs, also known as 'embedded generation' or 'dispersed generation', are capable of generating power in the range of 3-10,000 kW from renewable energy [9]. According to CIGRE, DG has also been defined as a generating plant with a maximum capacity of less than 100 MW that is connected to the distribution network [10]. More commonly, DG is defined as an electric power source connected directly to the distribution network or on the consumer side of the meter, ranging from a few kW to a few MW [11]. Distributed generation (DG) has become an important part of the distribution network, as it reinforces the main generation system in covering today's growing demand [12]. Environmental concerns such as pollution have led to an increased use of DG units alongside the establishment of new transmission lines and the development of technology resources [13]. Unlike main power stations, DG can be connected to or disconnected from the network easily, providing higher flexibility to the system [12]. DG serves as an integrating network for household and industrial units with heat and power generation capacities, achieving utility self-sufficiency and sharing of excess utilities [14].
The frequent use of distributed generation (DG) in the distribution network has brought about the development of various types of DG based on renewable and non-renewable energy sources [15]. Growing environmental concern has led to the use of renewable-energy DG such as solar, wind, geothermal, biomass, biogas, and hydroelectric power, with installed capacity expected to grow by 183% over the period between 2009 and 2016. Using renewable distributed generation brings many benefits, such as reduced consumption of fossil fuels, lower greenhouse gas emissions, and reduced noise pollution [14][15][16]. There are three main types of DG: real power DG or unity power factor DG (UPF-DG), reactive power DG, and combined real and reactive power DG, also known as lagging power factor DG [17]. DG can also be categorized on the basis of power rating: micro distributed generation ranges from 1 W to 5 kW, small distributed generation from 5 kW to 5 MW, medium distributed generation from 5 MW to 50 MW, and large distributed generation from 50 MW to 300 MW [18]. The fundamental purpose of having DG in the distribution network is that DG can yield many benefits, such as voltage profile improvement, reduced line losses, increased security for critical loads, grid reinforcement, and reduction in the on-peak operation cost [19]. DG technology also leads to flexibility in electricity pricing and system performance [20]. Integrating DG into the system provides many advantages, such as deferring upgrades of the existing system, peak reduction, minimized power losses, low maintenance cost, excellent reliability, power quality improvement, the possibility to exploit CHP generation, meeting the increasing demand without extravagant investment, and shorter construction schedules [21]. To obtain these benefits efficiently, it is essential to determine the optimal placement of DG units in the distribution system so that system performance is maximized [22].
To enhance distribution network performance, much research has focused on developing fast and effective techniques to reduce power loss. These include analytical approaches, numerical methods, and heuristic algorithms [23]. Merlin and Back provided a sequential opening heuristic algorithm, while Civanler et al. proposed another heuristic method, the branch exchange algorithm. Although simplicity is the advantage of heuristic methods, they are greedy in nature and provide results without considering the whole problem [24]. Optimization techniques such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), the Artificial Bee Colony Algorithm (ABC), and Modified Teaching-Learning Based Optimization (MTLBO) are the techniques most researchers have focused on for minimizing power loss and determining DG locations [23,[25][26]. Evolutionary algorithms such as the Genetic Algorithm are commonly used in optimization. GA operates on a population of candidate solutions, applying survival of the fittest to provide better approximate results [27]. Besides GA, Particle Swarm Optimization (PSO) is also one of the main techniques used in optimization [28]. PSO is a technique for optimizing complex numerical functions based on simulating natural swarm behavior, and it has no overlapping or mutation calculations. PSO propagates the best information found so far to the other particles, which makes the search fast [29]. In order to improve the location of DG units and reduce power loss, the development of a supervised evolutionary programming based technique for multi-DG installation in the distribution system is presented. The algorithm is based on traditional evolutionary programming and an orientation table consisting of a number of intervals and the corresponding best location for each interval. This technique guarantees reaching the optimal solution with less effort and rapid convergence. The proposed algorithm is implemented in MATLAB and tested on the 69-bus feeder in order to find the optimal locations of DG and minimize the power loss.
RESEARCH METHOD
Algorithm
Supervised evolutionary programming is proposed in order to ensure reaching the optimal solution with rapid convergence and less effort. Figure 1 shows the flow chart of the supervised evolutionary programming, and it is discussed in the following step-by-step procedure.
a. Pre-optimization. The load increment is regulated from 1 Mvar to 5 Mvar in steps of 1 Mvar at bus 6, based on a load increment test. This is also known as the initial/unstable condition before the optimization process.
b. Orientation table. The orientation table is formulated by dividing the DG active power range 0 ≤ Pg ≤ Pgmax into n equal divisions. For each division the best location is estimated by setting the DG active power to the middle value of the division; each division keeps the location that achieves the minimum active power loss.
c. Initialization. The DG locations X1, X2, and X3 are generated randomly. The location range is set between 1 and 68, corresponding to the buses of the 69-bus system.
d. Fitness 1. Suitable variables of location and power are collected based on the 20 accepted data from the initialization process, known as the parent data.
e. Mutation. The mutation operator is used to breed offspring from the parents by adding Gaussian-distributed noise N(a,d). A new set of offspring is generated from the acceptable parent data of Fitness 1; the values P1, P2, and P3 are used in the mutation process. The equation for mutation is as follows:
x_(i+m,j) = x_(i,j) + N(0, β (x_jmax − x_jmin) (f_i / f_max))
where x_(i+m,j) = offspring/children, x_(i,j) = parents, N = Gaussian random variable with the given mean and variance, β = search step, x_jmax = maximum of the parents, x_jmin = minimum of the parents, f_i = fitness of the i-th individual, and f_max = maximum fitness.
f.
Fitness 2
A new set of 20 offspring data is generated from the mutation process, known as the offspring data.
g. Combination. This section consists of the combination of the parent and offspring data. It determines whether the maximum and minimum fitness fulfil the desired qualification or not. If the fitness does not meet the qualification, the process is automatically repeated from Fitness 1 and continues until the qualification is fulfilled.
Table 1 shows the power loss in the 69-bus system during pre-optimization. Pre-optimization refers to the condition before the multi-DGs are applied to the system, i.e., the unstable condition. The table shows that Ploss increases in direct proportion to the load increment lambda, λ. As power demand increases, power losses rise due to increased energy dissipation in the equipment of the distribution system. The main factor in the power losses is the length of the distribution line: the higher the load demand, the higher the current passing through the distribution system. Figure 2 shows the graph of load increment against power losses before optimization. Figure 2. Graph of increment against power losses before optimization. Table 2 shows the optimized locations and sizes of the multi-DGs as well as the resulting power loss. Based on the results obtained, the locations remain the same even as the increment increases, because the DG sizes are fixed. The optimization of the locations converged after 3 iterations. The table also shows that the loss after optimization is lower than the value before optimization. Figure 3 shows the graph of increment against power loss after optimization. Based on Table 3 and Figure 4, the results show that the power loss has been reduced; the power losses before optimization (pre-optimization) decreased after the optimization process. Figure 4. Ploss before versus Ploss after
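To make steps b and e concrete, the following is a minimal sketch in Python (the authors' implementation was in MATLAB) of the orientation-table construction and the Gaussian mutation operator described above; the function names, the power_loss_at placeholder standing in for a load-flow evaluation, and all parameter values are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def build_orientation_table(pg_max, n_divisions, power_loss_at):
    """Step b: divide the DG active power range [0, Pgmax] into n equal divisions and
    record, for each division, the bus (1..68) giving the minimum active power loss
    when the DG power is set to the midpoint of that division.
    `power_loss_at(bus, pg)` is a placeholder for the load-flow evaluation."""
    edges = np.linspace(0.0, pg_max, n_divisions + 1)
    table = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        midpoint = 0.5 * (lo + hi)
        best_bus = min(range(1, 69), key=lambda bus: power_loss_at(bus, midpoint))
        table.append((lo, hi, best_bus))
    return table

def mutate(parents, fitness, beta=0.05, x_min=1, x_max=68, rng=None):
    """Step e: classical EP Gaussian mutation. Each offspring is its parent plus
    zero-mean Gaussian noise whose standard deviation scales with the search step,
    the variable range, and the parent's fitness relative to the maximum fitness."""
    rng = rng or np.random.default_rng()
    parents = np.asarray(parents, dtype=float)
    fitness = np.asarray(fitness, dtype=float)
    sigma = beta * (x_max - x_min) * (fitness / fitness.max())
    noise = rng.normal(0.0, 1.0, parents.shape) * sigma[:, None]
    offspring = np.rint(parents + noise)  # DG locations are integer bus numbers
    return np.clip(offspring, x_min, x_max).astype(int)
```

In the full loop, the 20 parents and the 20 offspring produced in this way would then be combined and re-evaluated (steps f and g above) until the convergence criterion is met.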
CONCLUSION
In conclusion, the supervised evolutionary programming technique is able to improve power system performance and optimize the locations of the multi-DGs in order to minimize the power losses. The proposed method introduces the orientation table in order to prevent the algorithm from collapsing into local minimum locations. The method is able to enhance the performance of the distribution system by minimizing the total power loss in the system and finding the best locations of the multi-DGs at the selected buses. The proposed supervised evolutionary programming is better at evaluating the optimal DG location and power because the active power range has already been set and fixed.
"Computer Science",
"Engineering"
] |
YES as a Tool for Detecting Estrogenic Activity of Some Food Additives Compounds: E 104, E 122, E 124, E 132 and E 171
Food additives are artificial ingredients added to foods, medications and cosmetics in order to enhance flavour, colour, and texture. They facilitate preparation and prevent spoiling. The most common food additives are: Aspartame, Monosodium Glutamate (MSG), Sodium Nitrate, parabens and dyes like tartrazine (Yellow Dye #5). Although many of these individual additives are included in small amounts, individual average consumption is estimated at about 5 pounds of synthetic additives in total each year. If we include sugar data, as it's the most used additive by the food industry, then that number increases to 135 pounds a year. Among them we can find food dyes (like tartrazine, the most widely used) and parabens (methylparaben, ethylparaben, propylparaben, butylparaben and isobutylparaben). The most common health issues that occur from the intake of food additives are the worsening of asthmatic symptoms and the development of allergies. However, the safety of some of them is not well established. For example, the safety of parabens has been debated with regard to male and female breast cancer, testicular cancer, and fertility, as they are able to mimic estrogens [1-5]. Except for quinoline yellow, food colors are relatively weak mutagens and are certified as safe additives despite reports that some people have allergic reactions towards them [6]. However, studies have revealed toxic effects of many colorants [7].
Introduction
Food additives are artificial ingredients added to foods, medications and cosmetics in order to enhance flavour, colour, and texture. They facilitate preparation and prevent spoiling. The most common food additives are: Aspartame, Monosodium Glutamate (MSG), Sodium Nitrate, parabens and dyes like tartrazine (Yellow Dye #5). Although many of these individual additives are included in small amounts, individual average consumption is estimated at about 5 pounds of synthetic additives in total each year. If we include sugar data -as it's the most used additive by the food industry -then that number increases to 135 pounds a year. Among them we can find food dyes (like tartrazine, the most widely used) and parabens (methylparaben, ethylparaben, propylparaben, butylparaben and isobutylparaben). The most common health issues that occur from the intake of food additives are the worsening of asthmatic symptoms and the development of allergies. However, the safety of some of them is not well established. For example, the safety of parabens has been debated with regard to male and female breast cancer, testicular cancer, and fertility, as they are able to mimic estrogens [1][2][3][4][5]. Except for quinoline yellow, food colors are relatively weak mutagens and are certified as safe additives despite reports that some people have allergic reactions towards them [6]. However, studies have revealed toxic effects of many colorants [7].
All these compounds are classified as xenoestrogens. Xenoestrogens are exogenous chemicals able to modulate endogenous estrogen activity, for example through structural similarities that lead to interactions with receptors or with estrogen-metabolizing enzymes. In a common mode of action, xenoestrogens show an affinity for Estrogen Receptors (ER) and subsequently cause endocrine disruption through this interaction [8]. Other chemicals may bind to other receptors related to ERs and modulate their functioning [9].
The Estrogen Receptor (ER) is the key factor that elicits the estrogenic response in vertebrates. Several structural classes have been identified as potential ligands for the ER. Various additives like parabens have been proven to be endocrine disrupting compounds using either in vivo or in vitro assays [10]. Recombinant Yeast Assays (RYA) (or Yeast Estrogen Screen), based on the use of estrogen receptors of vertebrates, are convenient functional assays to evaluate the potential endocrine disruption caused by a substance [11]. But few studies have focused on the estrogenic activity of food dyes. Studies concern mainly food colors such as tartrazine (E 102), sudan I and erythrosine B (E 127), which have effects on chromosomal structure and can increase Estrogen Receptor (ER) site-specific DNA binding to the Estrogen Response Element (ERE) in HTB 133 cells [12,13] and in the E-screen test [14]. Axon et al. [15] helped to identify several modulators of the human ER among food additives such as tartrazine and sunset yellow.
In the current work, we sought to detect the presence of xenobiotic ER agonists between various food additives using the Yeast Estrogen Screen (YES).
Selected compounds
The selected compounds methylparaben (MP), ethylparaben (EP), n-propylparaben (PP), n-butylparaben (BP), and 17-β-estradiol (E2) were purchased from Sigma-Aldrich (Table 1). All stock solutions were prepared in methanol and serially diluted in distilled water to obtain the targeted concentrations. For each paraben the endocrine activity was assessed at various concentrations in the range of 10⁻⁹ to 10⁻³ M. The methanol concentration in the exposure solutions, including controls, was 0.01% (v/v) in the tested solutions, which is a non-effective dose as estimated in preliminary tests.
A selection of currently used dyes was chosen for this study. These dyes were selected on the basis of their potential use in food industries (http://www.omicsonline.org/2155-6199/2155-6199-1-110.php#Table1). For each dye, a stock solution (0.1 M) was prepared by dissolving it in distilled water, followed by filtration through Whatman No. 5 filter paper. All these dyes appeared perfectly soluble in water at this concentration, as assessed by the absence of precipitate. For each dye, the endocrine activity was assessed at four concentrations in the range of 1×10⁻⁶ M to 1×10⁻² M. The natural fluorescence of the dyes and their interference with the fluorescence emitted by yeast was determined beforehand in order to avoid spurious signals generated by the YES test in our experimental conditions. The fluorescence of each dye was measured in the absence of yeast cells and subtracted from the data obtained in the estrogenic tests. The cytotoxicity of each dye was evaluated by measuring yeast growth at an O.D. of 600 nm.
Yeast estrogen screen assay (YES)
Yeast strain BY4741 (MATa ura3D0 leu2D0 his3D1 met15D0) was obtained from Euroscarf, Frankfurt, Germany. The expression plasmid pH5HE0 contains the human estrogen receptor cloned into the constitutive yeast expression vector pAAH5 [11]. Plasmid pVITB2x encompasses the reporter gene beta-galactosidase from Escherichia coli fused to the yeast CYC1 promoter and under the control of the pseudopalindromic Estrogen Responsive Element ERE2 from the Xenopus laevis vitellogenin B1 gene (5′-AGTCACTGTGACC-3′, two copies) [11]. The test measures β-galactosidase activity with 4-methylumbelliferyl β-D-galactopyranoside (emission at 460 nm and excitation at 355 nm) with a fluorimeter (Fluoroskan Twinkle LB 970, BERTHOLD Technologies) after 6 h of exposure to the compounds. Tests were performed in 96-well plates.
To determine the estrogen agonistic activity of the studied compounds, E2 was used as a positive control and distilled water was used as a negative control. Stock solutions of MP, EP, PP, BP (10⁻² M), 17-β-estradiol (10⁻³ M), and dyes (0.1 M) were prepared in methanol and serially diluted in distilled water. The transformed yeast strain was grown overnight at 30°C in a non-selective medium (YPD: 5 g/L yeast extract, 10 g/L peptone, 20 g/L glucose, all from Sigma-Aldrich, France). The following day, 15 µl of saturated culture was added to 15 ml of selective medium (SD: 6.7 g/L yeast nitrogen base without amino acids, DIFCO, Basel, Switzerland; 20 g/L glucose, supplemented with 0.1 g/L of prototrophic markers as required). The final culture was diluted in selective medium and adjusted to an optical density (OD) between 0.1-0.2 at 600 nm. 45 µl aliquots were transferred into a 96-well polypropylene microtiter plate. 5 µl aliquots of each sample were then dispensed into wells in quadruplicate. This experiment was carried out 3 times in order to obtain representative statistical data. After 6 h of incubation, 50 µl of YPER (Pierce, Rockford, IL, USA) were added to each well and incubated at 30°C for 30 min. 50 µl of buffer Z (60 mM Na2HPO4, 40 mM NaH2PO4, 10 mM KCl, 1 mM MgSO4, pH 7.0, plus 0.5% 2-mercaptoethanol (Fluka), 1% Triton X-100 (Sigma), and 4-methylumbelliferyl β-D-galactoside (Sigma)) were added to the cell lysate, which was then incubated at room temperature for 30 min after a centrifugation at 1000 rpm for 1 min. The fluorescence was measured at 460 nm (excitation at 355 nm and emission at 460 nm). Confirmation of human ER activation by xenobiotics was made with 4-hydroxytamoxifen (4-OH-TAM): 0.5 µM of 4-OH-TAM was added to the yeast cells prior to the addition of the potential ER activator. 4-OH-TAM competitively binds to estrogen receptors on tumor cells and other tissue targets, producing a nuclear complex that decreases DNA synthesis and inhibits estrogen effects. After 6 h of exposure to the binary mixture, β-D-galactosidase activity was determined.
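As a practical illustration of the data treatment implied above (dye autofluorescence measured without cells and subtracted, and cytotoxicity followed through OD at 600 nm), a minimal sketch of the background correction and normalization is shown below; the function names and the quadruplicate readings are hypothetical and are not taken from the study.

```python
import numpy as np

def corrected_activity(fluor_sample, fluor_dye_blank, od600):
    """Subtract the dye's own fluorescence (measured without cells) and divide by
    OD600 so that growth inhibition (cytotoxicity) does not mimic a loss of
    reporter induction."""
    return (np.asarray(fluor_sample, float)
            - np.asarray(fluor_dye_blank, float)) / np.asarray(od600, float)

def fold_induction(sample_activity, negative_control_activity):
    """Express the corrected signal relative to the distilled-water control."""
    return sample_activity / negative_control_activity

# Illustrative quadruplicate readings (arbitrary fluorescence units, not real data)
sample = corrected_activity([820, 805, 790, 840], [60, 60, 60, 60], [0.45, 0.44, 0.46, 0.45])
control = corrected_activity([210, 205, 215, 200], [0, 0, 0, 0], [0.47, 0.46, 0.48, 0.47])
print(f"fold induction ≈ {fold_induction(sample.mean(), control.mean()):.1f}")
```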
Calculation of EC 50 values:
EC 50 values, defined as the concentration where transcriptional response reaches 50% of its value at saturating concentration of ligand, were calculated from dose-response assays, using 5 or 7 concentrations for parabens, 5 concentrations of dyes and 8 concentrations for 17-β-estradiol.
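As an illustration of how EC50 values of this kind can be extracted from dose-response data, the sketch below fits a four-parameter Hill (sigmoidal) model with SciPy. This is a generic approach under our own assumptions, not necessarily the fitting procedure used by the authors, and the example concentrations and fluorescence responses are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter Hill (logistic) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

def fit_ec50(concentrations_M, responses):
    """Fit the Hill model and return the EC50: the concentration giving 50% of
    the maximal transcriptional response."""
    p0 = [responses.min(), responses.max(), np.median(concentrations_M), 1.0]
    popt, _ = curve_fit(hill, concentrations_M, responses, p0=p0, maxfev=10000)
    return popt[2]

# Illustrative readings for 8 hypothetical E2 concentrations (arbitrary units)
conc = np.array([1e-12, 1e-11, 1e-10, 3e-10, 1e-9, 1e-8, 1e-7, 1e-6])
resp = np.array([0.05, 0.10, 0.35, 0.48, 0.80, 0.95, 1.00, 1.00])
print(f"EC50 ≈ {fit_ec50(conc, resp):.2e} M")
```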
Results and Discussion
Natural and artificial azo dyes and parabens are widely used as coloring agents for food ingredients, drugs and cosmetics. Their toxicological effects can vary from carcinogenicity and mutagenicity to genotoxicity. Many azo dyes are genotoxic in short-term tests and carcinogenic in laboratory animals [7]. The genotoxicity of these dyes is controversial: they were classified as genotoxic in one review [7] but not in another [16].
To our knowledge, food additives such as ponceau 4R, azorubine, titanium dioxide or indigotine have not been examined and/or reported to be estrogenic chemicals. Accordingly, we investigated their potency of action in the yeast/(ERE)2-pLacZ reporter gene assay. Figure 1 illustrates a typical dose-response for E2 on relative reporter gene expression in yeast cells transfected with (ERE)2-pLacZ. In our experimental conditions the EC50 is 3.77×10⁻¹⁰ M (22 ng/L). These data were used to calculate EC50 values for a range of compounds. Table 2 and Figure 2 demonstrate that some formulation compounds of cosmetics, such as paraben esters, are human ER agonists in our assay [17]. A correlation between the length of the paraben ester chain and the estrogenicity has also been pointed out [17,18]. In addition, the cosmetic and food additives titanium dioxide and indigotine were also shown to be ER agonists (Figure 2). They induce a concentration-dependent response in yeast cells. As shown in Figure 2, the estrogenic potency of food dyes is very weak compared to that of E2 or some parabens. As previously reported [15], the food dye quinoline yellow was not shown to be an ER agonist (Figure 2). Table 2 demonstrates that food colorings such as indigotine, ponceau 4R and titanium dioxide, the most potent food xenoestrogens examined in this study, were approximately a million times less effective than E2 at activating the human ER in the yeast cell. The most estrogenic paraben used as a food additive is butylparaben. In 2003, butylparaben was cleared by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) for use as a flavoring agent "at very low levels" (not specified) [19]. Butylparaben is used as a food additive in Japan and as an additive to beer to retard microbial growth. Recently, a call for data was issued regarding suggested limits for the use of butylparaben as a food additive [20]. Table 2 also demonstrates that food colorings are generally less potent activators of the human ER than paraben compounds.
We analyzed whether 4-OH-TAM (a classical ER antagonist) can inhibit the estrogenic activity induced by the studied compounds. Inhibition of the induction of the (ERE)2-pLacZ reporter confirms that the studied compounds induce an estrogenic effect in yeast cells. Figure 3 demonstrates that the (ERE)2-pLacZ reporter gene activity induced by E2, as well as by benzylparaben, butylparaben, propylparaben, ethylparaben, indigotine and titanium dioxide, was inhibited by 4-OH-TAM.
Our assay consists of an engineered yeast strain that harbors two foreign genetic elements: a vertebrate receptor, in our case the human Estrogen Receptor (hER), and a reporter gene. Expression of the reporter gene, LacZ, is dependent on the presence of estrogens. The final product, β-galactosidase activity, is easy to quantify. The mechanism of the YES is shown in Figure 4. This is a simplified version of the mechanisms used by natural estrogens to operate in vertebrates. The fundamental similarity of the transcriptional machinery in all eukaryotes ensures that it is similar also in yeast. In a strict sense, the YES assay detects only agonists of the ER, that is, substances that bind to ER and elicit a transcriptional response in this simplified system. In conclusion, some food dyes, like titanium dioxide,
Table 2: EC50 of 17-β-estradiol and food additive compounds in distilled water samples obtained in the YES test.
(EC50 values, defined as the concentration where the transcriptional response reaches 50% of its value at a saturating concentration of ligand, were calculated from dose-response assays, using 8 concentrations for parabens and food dyes and 10 concentrations for 17-β-estradiol).
"Chemistry",
"Medicine",
"Biology"
] |
Co-Combustion of Coal and Alternative Fuels
Energy utilization of alternative fuels including biofuels is one of the main tasks for development of recoverable sources in the world. The research consists of combustion tests in the large CFB boilers and measuring data inside the combustor. Since 1995, 29 large CFB boilers of different designs and power outputs have been in operation in the Czech Republic as shown in the Table 4. Construction of the boilers, technical documentation, licensing and engineering has been based on foreign experience. Every large power project is always preceded by trial measurements and tests on smaller pilot, trial or if need be model equipment. Due to the great difference in scale, some unexpected measuring equipment behavior or problems must be taken into consideration for cocombustion of coal and alternative fuels.
Introduction
Energy utilization of alternative fuels including biofuels is one of the main tasks for development of recoverable sources in the world. The research consists of combustion tests in the large CFB boilers and measuring data inside the combustor. Since 1995, 29 large CFB boilers of different designs and power outputs have been in operation in the Czech Republic, as shown in Table 4. Construction of the boilers, technical documentation, licensing and engineering has been based on foreign experience. Every large power project is always preceded by trial measurements and tests on smaller pilot, trial or, if need be, model equipment. Due to the great difference in scale, some unexpected measuring equipment behavior or problems must be taken into consideration for co-combustion of coal and alternative fuels.
The present research, aiming at characterising co-combustion under atmospheric fluidized bed conditions by different physical and chemical characteristics, has the following objectives:
Ash formation upon fluidized bed co-combustion.
Fate of toxic trace metals upon fluidized bed co-combustion.
Recommendations for the suitability of co-combustion in atmospheric circulating fluidized bed (CFB) boilers and minimizing the harmful solid and gaseous emissions.
Model research
The model research has been carried out at the Technical University of Dresden. It includes combustion tests (Table 2) on experimental pilot equipment (Fig. 1, 2) with an atmospheric circulating fluidized bed for coal and bio-fuels produced from sewage sludge from a WWTP (waste water treatment plant) and biomass, and a thermo-analytical study of bio-fuels (Table 1). The modelling had the following aims:
- To determine the non-uniformity of combustion in the fluidized bed combustor as it influences the composition of flue gases, specified in terms of minor constituents (NOx, chlorine compounds, alkalis, etc.).
- Analogically, the influence of the particle size, or for that matter of the fuel granulometric distribution, on the process.
- Chemical composition, crystallographic structures, and mechanical properties of combustion solid products (bottom ash, fly ash, deposits).
- Analytical establishment of sulphur forms in fuel and combustion solid products, as well as element analysis for fuel and biomass.
- To perform leaching tests for combustion solid products.
- Detailed study of mineralogical and chemical composition of bottom ash, fly ash, and the solid emission phase in the cyclone, heat exchanger and filter (Fig. 3, 4; Table 3).
- Balance for volatile elements (Cl, S, Hg, Se), semi-volatile elements (V, Ni, Co, As), and some non-volatile elements (Cr and Sn). Based on these balances, to calculate the content of these elements in emissions and compare with the results of balance measurements.
Fig. 1. General view on pilot plant
Laboratory studies were focused on a detailed identification of input raw materials (coal, biofuel, limestone) so that the measurements could be reproducible:
1. Raw material input analysis and dependence of combustion solid residues on the raw material input.
2. Combustion inaccuracy assessment in actual unit conditions (temperature, gaseous and solid components, velocities, modelling).
3. Balance of a chosen set of combustion elements, studying the mechanisms of deposit formation and composition.
4. Verifying a redistribution model for a chosen set of elements between the fuel and solid by-products.
The model research verifies whether the alternative fuel produced from biomass and sewage sludge may be used as an alternative energy source with respect to the EU legislation, and/or whether its other modifications (with additives, decontamination technologies) yield a suitable fuel which would comply with emission limits, or whether the proposed energy process can optimize the preparation of the coal/sludge mixture for combustion in existing power engineering equipment.
The limiting factor for utilization of sewage sludge from WWTPs (waste water treatment plants) in agriculture is the increased content of risk elements and also the occurrence of organic pollutants, primarily polyaromatic hydrocarbons, PCBs (polychlorinated biphenyls) and AOX (adsorbable organic halides). Other alternative fuels do not have these limiting conditions. The limiting factor for sludge combustion at incineration plants is the water content. With regard to the fact that from 2005 the EU Directives were expected to ban disposal at waste sites of any material with a content of organic substances above 10 %, it is apparent that the priority condition for sludge utilization is sludge decontamination or power engineering utilization (Loo & Kopperjan 2008). Results from the tests may be evaluated as very good, with the prerequisite for utilization being the testing of the investigated substances in real combustion units. On the basis of the laboratory and pilot tests carried out, one may expect good results from these real units (equipment with greater output); many of these experiments have already been performed. From the results of the experiments and thermoanalytical studies it is clear that 15 % of alternative fuels (biofuels based on sludge and brown coal) can be used in the large fluidized bed boilers located in the Czech Republic. The combined combustion will enable the Czech Republic to fulfil its pledge to the European Commission concerning the development of renewable energy resources by 2010.
Diagnostic methods for operating surveillance of large fluidized bed boilers
Once large units co-combusting alternative fuels and coal are put into operation, guaranteed-performance tests must be conducted. The aim of these tests is to verify the design parameters: the guaranteed-performance figures are compared with reality. Apart from the basic measurements, a number of additional measurements of specific equipment parts may be initiated, because the manufacturer is interested in using the experience to improve or design new units and the operator is interested in both eliminating problems and improving the economics of the operation process (Table 4; Čech 2006). This chapter reviews the development of verification methods and presents some equipment for the determination of all important measuring data. The conclusions may be useful to energy companies and operators that want to verify operational data of fluidized bed boilers, flue gas paths and air channels.
Diagnostic measurements at a particular unit basically cover:
The measurement of fluidized bed temperatures, furnace temperatures, and flue gas temperatures at the ancillary heating surfaces up to the boiler.
The measurement of flue gas velocity in the furnace chamber, in the cyclones, at the cyclone exits to the second pass, and in the area of the additional boiler surfaces, as well as sampling of flue gas in the boiler.
Sampling of characteristic solid ash particles including isokinetic sampling to determine solid particle concentrations.
Flue gas elements
Sampling of flue gas elements from the entire boiler can be divided into three groups: Sampling of flue gases from the bottom part of the fluidized bed.
Sampling of flue gases from the boiler second pass up to the exit to the chimney.
Sampling of flue gases from the boiler furnace, cyclones and cyclone link channels.
To monitor the fluidized bed boiler operation, O2, CO, CO2, NOx and SO2 measurements can be taken. Other components are usually monitored up to the exit from the solid-particle separator in front of the chimney. Fig. 5 illustrates a cooled sampling probe that can be used to take flue gas samples. The probe has the same construction as that used for temperature measurement. During extraction the gas is rapidly cooled down (from 800 °C to approx. 30 °C in the cooled probe) so that it does not react with any other flammable waste gases. The gas is then analyzed in the mobile laboratory. It is always recommended to use a cooled probe for samples taken from the furnace, cyclone and cyclone linking channels (Fig. 6). Sampling of flue gases from the second pass of the boiler occurs at temperatures safely below 800 °C; at such temperatures a larger part of the gaseous sample does not oxidize quickly, so a sampling tube made of stainless steel or sintered corundum (Al2O3) can be used. To determine concentrations of species such as SO2, the sampling channels must be heated during the sampling operation so that no reaction with water occurs. The measurement results suggest intensive suppression of the NOx formed in the areas of secondary and tertiary air supply. The CO concentration developed as expected.
The concentration decreases if the secondary air supply is gradual.
Solid particle concentration measurements
To determine the solid particle concentration in a gas flow, the Czech standard ČSN ISO 9096 must be observed. It prescribes gravimetric determination of concentrations based on isokinetic sampling of solid particles from the gas flow. In the case of fluidized bed chambers the aim is to determine the solid particle concentration in the lower part of the fluidized layer and in the boiler furnace. Because larger and smaller particles separate differently, the fluidized layer density also varies with furnace height: in the lower part it is in the range of 500-800 kg/m3, while in the upper part of the furnace with the circulating layer it is 0.1-0.5 kg/m3. The pressure in the fluidized layer is always measured at various height levels, and the acquired pressure data are continuously monitored by the operational instrumentation. To determine the solid particle concentration in the flue gases, the gravimetric method with isokinetic sampling of solid particles can be used. Fig. 9 illustrates the measurement unit for isokinetic sampling of solid particles. To determine the concentration at the measuring points, a disposable sampling probe is used. Another option is a sampling probe with a cooled support, as illustrated in Fig. 10. This probe was developed to measure solid particles through an opening in the membrane wall. The sampling device touches the cooled parts only over a small area, to prevent the gases from cooling below the dew point during sampling. The results of the solid particle concentration measurements are given in Table 5 for 100 %, 70 % and 40 % nominal output with 15 % biofuel and 90 % lignite.
Table 5. Determination of the circulation number at different boiler outputs
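For orientation, the gravimetric evaluation described above reduces to a simple calculation: the dust concentration is the mass collected on the filter divided by the gas volume drawn through the nozzle, and the sampling is isokinetic when the nozzle velocity matches the local flue-gas velocity. The sketch below is illustrative only; all input values are hypothetical and do not reproduce the Table 5 data.

```python
# Illustrative gravimetric evaluation of an isokinetic dust sample.
# All input values are hypothetical; they do not reproduce the measured results.

def dust_concentration(filter_mass_gain_mg, sampled_volume_m3):
    """Dust concentration in mg/m3 = collected mass / sampled gas volume."""
    return filter_mass_gain_mg / sampled_volume_m3

def isokinetic_ratio(nozzle_velocity_m_s, duct_velocity_m_s):
    """Ratio of nozzle velocity to local flue-gas velocity (ideally close to 1.0)."""
    return nozzle_velocity_m_s / duct_velocity_m_s

if __name__ == "__main__":
    m_dust = 125.0   # mg collected on the filter (assumed)
    v_gas = 0.25     # m3 of gas drawn through the nozzle (assumed)
    c = dust_concentration(m_dust, v_gas)
    r = isokinetic_ratio(nozzle_velocity_m_s=14.8, duct_velocity_m_s=15.0)
    print(f"dust concentration: {c:.0f} mg/m3, isokinetic ratio: {r:.2f}")
```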
Co-combustion of coal and solid waste fuels
Substitution of conventional fossil fuels (such as bituminous coal or lignite) by low-carbon fuels for energy use is an efficient and cost-effective means of meeting the Kyoto Protocol, which establishes greenhouse gas emission targets for each of the participating developed countries (related to their 1990 emission levels). Considerable reductions of CO2 emissions can be achieved by the combustion of waste; therefore the combustion of waste materials of various origins (industrial, agricultural, etc.), or their co-combustion with fossil fuels in fluidized bed boilers, has become a legitimate alternative to conventional coal combustion. Another reason why particular attention is paid to the energy utilization of wastes is the elimination of the waste itself and the minimization of waste disposal costs (Loo & Kopperjan 2008).
There are, however, still challenges to be solved, such as the behaviour of the mineral matter during waste combustion. Although elemental behaviour during coal combustion has been studied and described in detail, works dealing with the redistribution of elements during waste combustion are quite rare. The conclusions described in these works are nevertheless consistent: results obtained for coal combustion cannot simply be applied to the combustion of wastes, since the character of these materials is quite different (Bartoňová et al., 2008). Another problem is that, even though waste materials differ from one another in their characteristics and content of toxic elements, most works focus only on wood and bark combustion.
This chapter intends to shed more light on the spectrum of alternative fuels used for energy production, focusing on the evaluation of the effect of co-combustion of waste fuel and coal on the environment. In the circulating fluidized bed power station in Tisová (350 t/h, Table 4), a waste alternative fuel (WF) containing plastics (1-20 %), fabric and carpets (45-75 %), rubber (5-15 %), paper (1-10 %) and wood (1-10 %) was co-combusted with coal and limestone. Samples of coal, limestone, bottom ash and fly ash were collected at regular time intervals, and unburned carbon particles were separated from the bottom ash by hand. Analysis of major, minor and trace elements was performed by X-ray fluorescence spectrometry (SPECTRO XEPOS) and mineral analysis was carried out using X-ray diffraction (BRUKER D8 ADVANCE). The ash content of the samples was determined at 815 °C. The distribution of macropores was determined by mercury porosimetry (Micromeritics AUTOPORE IV); SORPTOMATIC 1990 (Thermo Finnigan) equipment was used for the determination of the specific surface area and mesopore-size distribution. Scanning electron micrographs were taken with a SEM PHILIPS XL-30.
Mineral analyses
X-ray diffraction patterns were obtained for the samples of unburned carbon (UC), bottom ash (BA) and fly ash (FA) (Fig. 11). With the aid of elemental analyses of the unburned carbon and ash samples, the major mineral phases were identified in the diffraction patterns; they are marked with the abbreviations explained in the figure caption. The diffraction pattern of the coal, given earlier, indicates the dominant occurrence of quartz and kaolinite. The diffuse area observed in the unburned carbon diffraction pattern (approximately from 25° to 31°) corresponds to semi-crystalline carbon phases. Somewhat lower crystallinity (broadened peaks) is also evident in the case of magnetite and calcium hydroxide. Conversely, high degrees of crystallinity are represented, e.g., by sharp peaks of quartz,
lime or anatase. The comparison of the diffraction patterns revealed nearly the same mineral composition for both unburned carbons: the dominant mineral phase in both samples was quartz, and a minor occurrence of anatase was identified as well. Both bottom ashes also showed a similar mineral composition; lime was the most abundant mineral phase, and minor amounts of quartz, anhydrite and anatase were identified in these samples. A similar mineral composition was obtained for both fly ashes, where quartz was the dominant mineral and the occurrence of lime, anhydrite, anatase and calcite was of minor significance. Hence, it can be concluded that the addition of solid waste fuel to coal during combustion did not change the mineral composition of either the unburned carbon or the ash samples (Bartoňová et al., 2009).
Chemical analyses
By means of X-ray fluorescence spectrometry, the contents of major, minor and trace elements were determined in coal (C), unburned carbon (UC), bottom ash (BA), fly ash (FA) and the waste alternative fuel (WF). These results, as well as the ash contents of these materials, are given in Table 6. The porosity of the coal and bottom ash is rather low, whereas the unburned carbon shows a highly developed system of ruptures, pores and cavities leading to high porosity; this is why unburned carbon is being studied with respect to its adsorption properties.
Surface morphology and pore-size distribution
The morphology of the coal, waste fuel, unburned carbon and bottom ash grains was studied using scanning electron microscopy with the secondary-electron method. The surface structure of the coal and waste fuel was determined, and the texture of a typical grain of unburned carbon and bottom ash is shown in Figs. 12 and 13. A general view (magnification 50x) and a surface detail (magnification 1500x) are shown for each material studied. The surface textures shown in Figs. 12 and 13 indicate that the porosity of unburned carbon collected during waste fuel co-combustion with coal is much better developed than that of unburned carbon from combustion of pure coal without waste fuel. Some caution is needed in drawing such conclusions, however, because one studied grain has limited representativity with respect to the average unburned carbon sample. Therefore, pore-size distribution and specific surface area measurements were conducted in order to prevent misinterpretation when comparing the adsorption properties of unburned carbon collected during pure coal combustion and during co-combustion of coal and waste fuel. The specific surface area of unburned carbon collected during pure coal combustion was 194 m2/g, whereas during co-combustion of the same coal with waste fuel it reached 297 m2/g, a significantly higher value. This work focused on comparing the behaviour of minor and trace elements during the co-combustion of coal and waste alternative fuels with previous results for the combustion of the same pure coal in the same power station without the added waste fuel. Elemental behaviour in the combustion chamber itself did not change noticeably when the waste alternative fuel was co-combusted with the coal. Even the elements most abundant in the waste alternative fuel relative to coal (Zn, Cl and Br) showed nearly the same behaviour, which can be explained by the similarly high volatility of these elements both in the coal and in the waste materials (Bartoňová et al., 2009). A comparison of elemental contents in bottom ash and fly ash was performed to describe the further behaviour of the elements after leaving the combustion chamber. It was established that when waste fuel was co-combusted with coal, a slight shift towards higher enrichment of most elements in fly ash (vs. bottom ash) was observed. This trend is most significant for Zn, Cl and Br, the very elements that were most abundant in the waste fuel (compared to coal). It can therefore be concluded that elements present at high concentrations in the waste fuel tend to concentrate in the fly ash. The specific surface area of unburned carbon collected in the test where waste fuel was co-combusted with coal (297 m2/g) was significantly higher than that of unburned carbon from the combustion test without waste materials (194 m2/g). Comparison of the pore-size distribution curves obtained for both unburned carbons revealed that the unburned carbon collected during coal and waste combustion contains a larger amount of small pores, whereas macropores are more abundant in the unburned carbon from coal combustion without the waste alternative fuel. The unburned carbon collected during co-combustion of coal and wastes therefore undoubtedly has better adsorption properties.
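The fly-ash versus bottom-ash comparison described above can be expressed as a simple enrichment ratio for each element; values above 1 indicate preferential partitioning into the fly ash. The sketch below uses invented concentrations purely for illustration; the real values are in Table 6, which is not reproduced here.

```python
# Simple fly-ash / bottom-ash enrichment ratios.
# Concentrations (mg/kg) are invented for illustration only.

bottom_ash = {"Zn": 120.0, "Cl": 300.0, "Br": 4.0, "Cr": 150.0}
fly_ash    = {"Zn": 480.0, "Cl": 950.0, "Br": 15.0, "Cr": 160.0}

for element in bottom_ash:
    ratio = fly_ash[element] / bottom_ash[element]
    note = "enriched in fly ash" if ratio > 1 else "retained in bottom ash"
    print(f"{element}: FA/BA = {ratio:.1f} ({note})")
```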
Co-combustion of coal and waste wood
Biomass encompasses many different materials, either waste materials or special energy crops. Fuels based on wood biomass (sawdust, shavings, chips, tree bark) can be used for the production of high-quality biofuels, such as wooden briquettes and pellets, or can be co-combusted with coal (Bartoňová et al., 2008). The average ash content of wood is about 1-2 % and its calorific value ranges from 11 to 18 MJ/kg. Straw is another advantageous energy source; its calorific value ranges from 17.6 to 18 MJ/kg, its ash content is about 5.3-7.1 %, and it is often used, e.g., in Sweden, Denmark or the USA (Loo & Kopperjan 2008). The disadvantages of this material are its large volume and heat-exchanger fouling problems.
There are also other biomass materials used for energy utilization: various agricultural residues (green wastes, hulls, shells, prunings, rice straw, rape residues, corncobs and stems, sugar cane trash, cassava rhizome) as well as purpose-grown energy plants (Winter & Hofbauer, 1997). This chapter mainly evaluates the environmental impact of fluidized bed combustion of different fossil and biomass fuels. Particular attention is paid to the comparison of the release of the environmentally most significant species: the amount of solid coal combustion products and their leaching behaviour, and the emissions of sulphur and carbon dioxide. For this work, samples from the circulating fluidized bed power station in Štětí (Table 4) were collected. In this power station, coal combustion and coal/waste co-combustion tests were performed in a circulating fluidized bed boiler at 870 °C. A simplified diagram of the combustion facility is given in Fig. 14. In this power station lignite is usually co-combusted with wood waste (coming from cellulose production), with a usual lignite/wood waste ratio of 10:1.
Combustion tests
Three combustion tests were performed: Regimes I, II and III. In Regime I, lignite and limestone were combusted (in a weight ratio of lignite/limestone = 10:1). In Regime II, lignite, limestone, sawdust and tree bark were combusted at a coal/wood waste ratio of 1:1.76. In Regime III, wood, sawdust and wood chips were combusted in a ratio of 1:0.21:1. (This combustion test was rather unusual because no bottom ash was created and the only solid output flow was fly ash.) The mass flows of the input and output materials (BA - bottom ash, FA - fly ash, E,s - solid emission particles) and the volume of gaseous emissions (V E,g) are summarized in Table 8. The ash and water contents of these materials are given in Table 8 as well. Mass flows relate to undried samples. Proximate and ultimate analyses of the input and output materials are given in Table 7.
Analyses of emissions
Emissions from the combustion unit were analysed: CO, NOx and SO2 were determined in the flue gas, while As, Se, Cd, Hg and Pb were determined in the solid particles captured on the filter in the flue gas stream. The results of the emission analyses are given in Table 9. In the boiler mantle there are four openings into the combustion chamber. Sliding thermocouples inserted through these openings were used to measure temperatures in the fluidized bed at different levels, and through the opening where the probe was inserted, three samples of gaseous emissions and ash were collected directly from the fluidized bed. The sliding probe measured temperatures in the fluidized bed at the inlets. In all combustion regimes, samples were taken from the fuel storage tanks and from all four sections of the electrostatic precipitator. Furthermore, NOx, CO and SO2 emissions in the flue gases were measured continuously (see Fig. 5). The balance of fuel and combustible waste, the mass flow, the moisture and ash contents, the mass flows of bed ash (BA) and fly ash (FA), the volume of gaseous emissions (V E,g) and the quantity of solid emissions can be evaluated from Table 10. The summary of calculated values shows the relation between the input (m inp) and output (m out) data. The difference between the input mass flow m inp and the output mass flow m out in Regime III can be explained by the fact that the boiler had not been fully run in for the whole of Regime III: ash remaining from the preceding coal combustion was still being cleaned out, so part of that ash passed into the output stream and the measured output exceeds what corresponds to the input. The results confirm that burning wood emits less CO2 to the atmosphere per unit of energy input than burning brown coal. The data listed in Table 12 show that the lowest relative SO2 emission (% S E,g) is obtained for the combustion of coal with limestone (Regime I). The absolute amounts of sulphur contained in the emissions (m S,E), however, clearly demonstrate that when burning wood the amount of sulphur emitted into the atmosphere is about 10 times smaller than when burning coal; this parameter is much more favourable for wood. The most significant results are summarized below:
Mass balance calculations suggest that the mass flow of inorganic matter produced per 1 GW of boiler output dropped from 28 kg/h.GW for lignite combustion to 0.7 kg/h.GW when wood wastes were combusted.
This observation brings many advantages relating to ash landfilling: decreasing the amount of ash produced during combustion will consequently decrease the amount of toxic leachates, above all sulphates, and the increase of pH (due to the high amount of Ca-bearing minerals present in coal ash) will not be as significant. The mass flow of CO2 produced during combustion was also related to 1 GW of boiler output: 0.20 kg/h.GW was obtained for lignite combustion, dropping to 0.14 kg/h.GW when wood wastes were combusted. Sulphur emissions were likewise recalculated to 1 GW of boiler output: the sulphur emission flow calculated for lignite combustion (0.13 kg/h.GW) was considerably higher than that obtained for wood waste combustion (0.01 kg/h.GW).
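The per-gigawatt figures quoted above follow from dividing the measured mass flows by the boiler heat output. The sketch below illustrates this normalization and the fraction of input sulphur that leaves with the flue gas; all numbers are placeholders and do not reproduce the Regime I-III measurements.

```python
# Normalizing mass flows to 1 GW of boiler output and computing
# the fraction of input sulphur released with the flue gas.
# All numbers are illustrative placeholders, not measured data.

def per_gw(mass_flow_kg_per_h, boiler_output_gw):
    """Mass flow expressed per 1 GW of boiler output (kg/h.GW)."""
    return mass_flow_kg_per_h / boiler_output_gw

boiler_output_gw = 0.35   # boiler heat output in GW (assumed)
ash_flow_kg_h = 9.8       # inorganic matter leaving the boiler (assumed)
s_in_kg_h = 40.0          # sulphur entering with the fuel (assumed)
s_emitted_kg_h = 4.5      # sulphur leaving as SO2 in the flue gas (assumed)

print(f"ash per GW: {per_gw(ash_flow_kg_h, boiler_output_gw):.1f} kg/h.GW")
print(f"sulphur to emissions: {100 * s_emitted_kg_h / s_in_kg_h:.0f} % of input S")
```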
In conclusion, the results described above unambiguously suggest that waste wood combustion produces a lower amount of environmentally hazardous pollutants than fossil fuel combustion, even when the fossil fuel is combusted with Ca-bearing additives (Klika 2010).
Co-combustion of coal and sewage sludge
Sewage sludge is a heterogeneous mixture of organic components (both living and dead microorganism cells) and inorganic components. The organic part of the sewage sludge consists mainly of proteins, sugars and lipids. The inorganic part consists mainly of compounds of silicon, iron, calcium and phosphorus. Moreover, the sludge contains a wide range of other substances as well: heavy metals, persistent organic pollutants (PCB, PCDD/F, PAH, etc.) and other harmful organic compounds. Table 13 summarizes the organic pollutants in the sewage sludge dry residue taken from the Central Sewage Plant of Ostrava (CSPO); it is evident that almost all limits for the monitored pollutants are exceeded. Such high values prevent the sewage sludge from being used for agricultural purposes and land reclamation, necessitating the use of both underground and surface storage. The biggest problem in this case is the content of polyaromatic hydrocarbons, which is ten times higher than the allowed limit, probably because of industrial waste-water disposal. The value of TOC (Total Organic Carbon) that exceeds the limit can be considered a useful rather than a limiting factor. The energy content of the sewage sludge is based on the chemical energy of the organic components that are capable of oxidation. For the sewage sludge to be described as a fuel, i.e. a material that converts its primary energy into thermal energy, the condition of being flammable must be met. For the combustion process to be self-sustaining, the heat released from the dry sludge residue, together with any other heat supplied to the furnace, must cover the heat of vaporization of the water contained in the fuel, the heat needed for superheating the water vapour in the flue gases, and the heat needed for heating the flue gases themselves. An important criterion for keeping the combustion process balanced is therefore the water content of the sludge. This poses a problem, because the water content of mechanically drained sludge is high (approx. 60-80 %) relative to its rather low heating value, and therefore the sludge cannot be combusted by itself. The most important energy characteristic of any fuel is its heating value; the dry-residue heating value of anaerobically stabilized sewage sludge is in the range of 7-10 MJ/kg. Fig. 15 shows the structure of the sewage sludge.
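Whether the drained sludge can sustain combustion on its own follows from a simple heat balance: the heating value of the wet material is the dry-matter heating value reduced by the water fraction and by the heat needed to evaporate that water. A commonly used approximation, not taken from this chapter, is LHV_wet = LHV_dry * (1 - w) - 2.44 * w, with 2.44 MJ/kg as the latent heat of water; the sketch below applies it to the figures quoted above.

```python
# Rough estimate of the as-received heating value of mechanically drained sludge.
# The 2.44 MJ/kg latent-heat correction is a standard approximation, not a value
# reported in the chapter; dry-matter LHV and water contents follow the text.

LATENT_HEAT_WATER = 2.44  # MJ per kg of evaporated water (approximate)

def lhv_wet(lhv_dry_mj_kg, water_fraction):
    """As-received lower heating value of a wet fuel (MJ/kg)."""
    return lhv_dry_mj_kg * (1.0 - water_fraction) - LATENT_HEAT_WATER * water_fraction

for w in (0.60, 0.70, 0.80):
    # 8.5 MJ/kg is taken as a mid-range value of the 7-10 MJ/kg dry residue
    q = lhv_wet(lhv_dry_mj_kg=8.5, water_fraction=w)
    print(f"water content {w:.0%}: LHV ~ {q:.1f} MJ/kg")
# The sharp drop (even negative values at 80 % water) illustrates why the
# drained sludge cannot be combusted by itself.
```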
The combustion test description
The combustion test with mechanically drained digested sewage sludge (water content approx. 63 %) was carried out at the circulating fluidized bed power station in Třinec with an output of 130 MWt (Table 4). A mixture of hard energy coal and coal sludge with an average heating value Q i r = 19 MJ/kg, water content W r = 7.5 % and ash content A r = 30 % was combusted in the fluidized bed boiler. During the combustion test the fuel was fed to the boiler in the following proportions: 11 % by weight sewage sludge from the Central Sewage Plant of Ostrava, 28 % by weight energy coal and 61 % by weight coal sludge. With the additional combustion of the sludge, the mixture characteristics changed as follows: heating value Q i r = 17 MJ/kg, water content w r = 14.5 %, ash content A r = 28 %. Because the total heating value of the fuel mixture thus dropped by about 2 MJ/kg, the fuel feed had to be increased by approx. 0.65 kg/s to maintain a constant boiler output. However, the total coal consumption does not rise, and this fact is important. The description of the combusted fuel is given in Tables 14, 15 and 16.
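The quoted increase of roughly 0.65 kg/s in the fuel feed follows from keeping the fuel heat input constant while the mixture heating value drops: the additional flow equals the heat input multiplied by the difference of the reciprocal heating values. The sketch below uses an assumed fuel heat input of 105 MW for illustration; the actual figure depends on the boiler's real fuel heat duty and efficiency, which are not stated in the text.

```python
# Extra fuel mass flow needed to hold the boiler heat input constant
# when the mixture heating value drops from 19 to 17 MJ/kg (per the text).
# The fuel heat input of 105 MW is an assumption for illustration only.

def extra_fuel_flow(heat_input_mw, lhv_old_mj_kg, lhv_new_mj_kg):
    """Additional fuel mass flow (kg/s) required to keep heat input constant."""
    return heat_input_mw * (1.0 / lhv_new_mj_kg - 1.0 / lhv_old_mj_kg)

delta = extra_fuel_flow(heat_input_mw=105.0, lhv_old_mj_kg=19.0, lhv_new_mj_kg=17.0)
print(f"additional fuel feed: ~{delta:.2f} kg/s")   # about 0.65 kg/s
```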
The angle of repose of the mixture deteriorated rapidly compared to that of hard coal. The chain feeders for the raw fuel worked reliably and had no failures; as the mixture passed through the chain feeder, large pieces of sludge were crushed. The combustion test showed that a sludge content of 15 % in the mixture was the limit that could still pass through the swing-hammer crusher. Moisture was of fundamental importance for the allowable amount of sludge in the mixture: during the combustion test the sludge moisture was approx. 65 %, compared to a hard coal moisture of 7.5 %. The higher moisture makes the temperature behind the crusher drop, which results in the crusher becoming clogged with the wet mixture. Regarding the operational efficiency of the boiler under additional combustion of the sludge, it is advisable to monitor unwanted states such as high- and low-temperature corrosion, fouling of the heat transfer surfaces and abrasion. In boilers with a fluidized bed and additive desulphurisation, signs of improved desulphurisation were observed; this reduction in the SO2 content of the flue gases is obtainable only with the additional combustion of the sewage sludge. The additive gets into the sewage sludge during sludge hygienisation by lime dosing at the sewage plant, where it is hydrated to Ca(OH)2 by the sludge moisture. On entering the fluidized bed boiler, the lime hydrate is converted back to CaO, which then reacts with SO2 to form CaSO4. The additive contained in the sludge thus lowers the consumption of limestone as the primary sorbent (Szeliga 2008). Analyses of heavy metals and microelements in the combusted fuels (the energy coal, the coal sludge and the sewage sludge) were carried out in the laboratories of the Technical University of Ostrava. The evaluation was made first for coal combustion alone and then for the coal/sludge mixture. The redistribution of heavy metals and microelements between the solid combustion residues and the emissions during the additional combustion of sewage sludge is a matter for further research. The combustion test proves that there are further opportunities for additional fuel combustion in fluidized bed boilers. The advantages of this kind of sewage sludge utilization lie mainly in the reliable decomposition and oxidation of the harmful organic components and a significant reduction of the sludge volume. Another suitable measure is to reduce the sludge moisture, which improves its heating value, transport and handling. The disadvantage of the thermal utilization of sewage sludge is the higher concentration of heavy metals and microelements entering the combustion equipment.
Co-combustion of coal and sewage sludge is possible only if the content of heavy metals in the sewage sludge entering the combustion process is appropriate. Monitoring and analysis of heavy metals in the sewage sludge are therefore necessary.
Findings
The most important findings from the research can be summarized as follows:
1. Stability of combustion depends on two factors: a) regular and uniform feed regulation of the fuel mixture, and b) perfect homogenization of the fuel mixture. Otherwise, pulsation in the furnace can occur.
2. Experience with the combustion of sewage sludge showed that the high volatile matter content significantly affects the overall combustion process. Care must be taken to achieve complete combustion of the volatiles to ensure higher combustion efficiency and low emissions of CO, hydrocarbons and PAH (polyaromatic hydrocarbons).
3. During devolatilization the biomass undergoes thermal decomposition with subsequent release of the volatiles and formation of tar and char. The results show that the quantities of char and gas formed depend on the type of carbonised material. Furthermore, increasing the pyrolysis temperature leads to a decrease in the quantity of char formed and an increase in the quantity of volatiles. Analyses of the composition of the volatiles from straw and stover, as well as from wood chips and sewage sludge, show that CO, H2, CO2 and CH4 are the main gaseous components. High moisture contents have been found to increase the devolatilisation time. For dry residues, in addition to the expected immediate ignition and the high volatile matter content, the volatiles consist mainly of combustibles: CO, H2 and CxHy.
4. The composition of the ashes from sewage sludge, coal, peat and wood influences their melting point. The Na2O contents of the residues are low and comparable to those of sewage sludge, wood, peat and coal, and the effect of the K2O content of the fuel ashes on their melting points is well demonstrated.
5. The combination of low flow rate and high temperature causes the particles, which are coated with fuel ash, to contact each other and form weak physical bonds, i.e. to agglomerate. The formation of these weak bonds is due to the surface of the particles having a low eutectic point or ash softening temperature. This low value is caused by the high alkali content, specifically the sodium and potassium compounds formed during combustion of the boiler fuel. The agglomerated particles, subjected to high temperatures, then begin to sinter or stick together through bond densification, thereby forming strong physical and chemical bonds.
6. Agglomeration begins when part of the fuel ash melts and causes adhesion of bed particles. The beginning of agglomeration in the fluidized bed is often indicated by the occurrence of temperature differences in the bed and by large fluctuations of the bed pressure. If fuel feeding continues, it eventually leads to defluidization of the whole bed.
7. To rate the propensity of fuels towards fouling, the alkali index has been developed. This index relates the mass of alkali metal oxides (K2O + Na2O) brought in with the ash to the GJ of thermal energy generated, and may be used for biomass feedstocks. Above 0.17 kg alkali/GJ fouling is likely, and above 0.34 kg/GJ fouling is virtually certain to occur (see the sketch after this list).
8. Ash deposition from biomass fuels containing certain chemical species can also cause corrosion and erosion of metals. The two most abundant inorganic elements are Si and K, which form silicates with a low melting point. Combustion leads to the condensation of molten silicates, which are likely to cause fouling and corrosion. Analyses showed that corrosive reactions occur between chemical compounds in the ash particles and the elements in the metal on un-cooled samples at gas temperatures near 650 °C.
9. Solutions to the problems resulting from the low melting points of the ash are the use of additives, the use of alternative bed materials in the case of fluidized bed combustion, and the blending of biofuels with coals or lignite.
10. There are three routes of NOx formation during coal combustion: thermal, prompt and fuel NOx. Biomass has a high content of volatile matter and a low content of fixed carbon, so the effect of char on the formation of NOx and N2O may be significant. However, the catalytic effect of the ash could be important for residues with high CaO contents.
11. Concentrations of heavy metals comply with the EU environmental directive 2000/76/EC (including carcinogenic harmful components and benzopyrene + Cd + Co + Cr + As). Combustion of alternative fuels and coal has no significant influence on leaching or on the pH factor.
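The alkali index mentioned in point 7 can be computed directly from the ash analysis and heating value; a hedged sketch with invented fuel data follows, using the 0.17 and 0.34 kg alkali/GJ thresholds given above.

```python
# Alkali index for fouling propensity: kg of (K2O + Na2O) per GJ of fuel energy.
# Fuel data below are invented for illustration; the thresholds follow the text.

def alkali_index(ash_fraction, k2o_in_ash, na2o_in_ash, lhv_mj_kg):
    """kg of alkali oxides per GJ of heat in the fuel."""
    alkali_kg_per_kg_fuel = ash_fraction * (k2o_in_ash + na2o_in_ash)
    energy_gj_per_kg_fuel = lhv_mj_kg / 1000.0
    return alkali_kg_per_kg_fuel / energy_gj_per_kg_fuel

def fouling_class(index):
    if index > 0.34:
        return "fouling virtually certain"
    if index > 0.17:
        return "fouling likely"
    return "low fouling propensity"

ai = alkali_index(ash_fraction=0.06, k2o_in_ash=0.12, na2o_in_ash=0.01, lhv_mj_kg=17.0)
print(f"alkali index: {ai:.2f} kg/GJ -> {fouling_class(ai)}")
```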
Conclusion
Since 1996, 29 large fluidized bed boilers with in-process desulphurization capability have been put into operation in the Czech Republic. The differences in design and the various concepts of these units have helped to collect a lot of valuable data and to gain a great deal of experience; such an opportunity did not exist before these units were constructed.
Because the boilers for co-combustion of coal and alternative fuels in the Czech Republic were developed from the know-how of foreign suppliers, it was not possible to become familiar with their technical parameters until they started operation. The first operating hours of most of these boilers were affected by the typical characteristics of Czech coal. Highly abrasive ash matter, high humidity, clay impurities in the fuel and a higher content of foreign matter in the raw fuel (stone, wood, metal) made it necessary to modify the fuel feed channels, crushers, separating plants and the fuel intake to the fluidized bed. In many cases these problems resulted in total reconstruction or even replacement of the affected units. Frequent interruptions of the fuel supply reduced the durability of the heavy linings of the combustion chamber, especially the cyclone bricking and the chutes under the cyclones. Some problems were caused by the extraction of ash from the fluidized layer, its cooling, granulometric finishing and further handling. Other problems occurred with sintering of the fluidized particles: even though combustion temperatures were kept well below 900 °C, sintering of material nevertheless occurred in various parts of the boiler. Last but not least, there is a drive to reduce desulphurization costs, since the required molar ratio Ca/S is in the range of 2.5 to 3, which means higher operating costs compared to wet desulphurization methods.
A quite new area for fluidized bed boilers is the combined combustion of coal and alternative fuels, or the co-combustion of assorted fuels from renewable sources. Despite some slowdown in the expansion of activities in the energy sector, further applied research projects are focused on operation process optimization, efficiency improvements and minimization of operating costs. These are the areas where the information obtained from measurement results in various boiler types can be used.
Fig. 5. Probe for flue gas sampling from the fluidized layer
Fig. 6 illustrates a sampling probe used in the detailed grid measurement of O2 concentration in the combustion chamber of a boiler with a steam output of 125 t/h (15 % biofuels and 90 % lignite coal).
Fig. 6. Probe for flue gas sampling from the boiler furnace
A grid method of measuring O2, CO and NOx concentrations was used. Measurements were taken through instrumentation openings in the middle of the side walls of the combustion chamber.
Fig. 8. Average CO and NOx concentration distribution along the height of the combustion chamber of the boiler at 60 % nominal output at the Tisová power station.
Fig. 9. Diagram of the measurement unit for isokinetic sampling of solid particles.
Fig. 15. The structure of the sewage sludge from CSPO
Fig. 16. The scheme of the distribution of the fuel to the CFB boiler
Table 1. The analysis of the fuel mixture
Fig. 2. CFB boiler, 300 kW
Table 4. Newly-built fluidized bed boilers with a circulating fluidized layer in the Czech Republic
Table 7. Mass and volume flows of input and output materials
(Fuel columns: Regime I - lignite; Regime II - lignite, sawdust, tree bark; Regime III - wood, sawdust, wood chips)
The input mass flows of carbon converted to carbon dioxide (CO2), 28 kg/h.GW for Regime I and 12 kg/h.GW for Regime II (Table 11, where the index C corresponds to the carbon in the coal), are then used to calculate all the input flows of carbon-derived CO2. For simplicity it is assumed that all the carbon is burned and transferred to the emissions in the form of CO2.
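Under the stated assumption of complete combustion, converting a carbon flow into the corresponding CO2 flow amounts to multiplying by the stoichiometric factor 44/12, roughly 3.67. The sketch below is purely illustrative (the quoted figures are read as carbon flows) and does not reproduce the Table 11 values.

```python
# Converting a carbon mass flow to the corresponding CO2 mass flow,
# assuming complete combustion (all C oxidized to CO2).

CO2_PER_C = 44.0 / 12.0   # molar-mass ratio CO2/C, about 3.67

def co2_flow(carbon_flow_kg_h):
    return carbon_flow_kg_h * CO2_PER_C

for regime, c_flow in (("I", 28.0), ("II", 12.0)):   # kg/h.GW figures quoted above
    print(f"regime {regime}: {co2_flow(c_flow):.0f} kg CO2 /h.GW")
```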
Table 9. Analysis of emissions for Regimes I, II, III and emission limits
Table 11. Calculation of the incoming flow of inorganic materials per 1 GW of boiler output
Table 12. Output flows of sulphur (S)
Table 13. Organic pollutants in sewage sludge
Table 14. The fuel characteristics in crude and water-free form
Table 17. The content of the combustible carbon in the combustion products corresponds with that of fine hard coal combustion.
Table 17. The boiler efficiency η k and the combustible matter content in the ash, where C LP indicates the combustible matter content in the bedding ash and C UP indicates the combustible matter content in the ash | 9,098.6 | 2012-03-14T00:00:00.000 | [
"Physics",
"Engineering"
] |
Proposal for an SMS-driven CMMS solution
— At present, remote access to and use of CMMS software is essentially possible only through the Internet. This is a major challenge in areas not served by the global network, as is the case in developing countries where only large towns are covered. The purpose of this article is to propose an alternative solution based on SMS. After a review of the literature on the possible uses of SMS and on existing remote access solutions for CMMS tools, a proposal for using short messages for the remote use of CMMS software is presented and implemented, as part of the maintenance of medical equipment in two medical facilities in Cameroon and Gabon. The results obtained show that GSM-SMS technology can be used in a relevant and effective way in the context of managing the maintenance of medical equipment, providing mobility and remote control of CMMS software.
This poses a problem for accessing this work tool when access to the corporate network is not available. Yet, because of the very sensitive roles played by the various medical devices, it would be wise to reduce their downtime to a minimum.
In Cameroon, more than 80% of the country's localities have been covered by at least one GSM network since 2015 [7]. This is an opportunity for those in the covered areas to use short message services (SMS), which can serve purposes other than the usual communication between individuals. In a study conducted in 2014, Amin and Khan showed not only that this is possible, but also that SMS is well suited to other types of uses [8].
In the field of health, several use cases for SMS have been considered. Ben Townsend and his team used mobile technologies and SMS messaging for the transmission of non-critical medical telemetry from medical sensor data; their goals were to increase freedom and reduce costs for patients under medical supervision [9]. In the same vein, Manita Rajput and her team used SMS for monitoring patients with cardiovascular disease; their approach was to implement a system that alerts the treating physician by SMS when the monitored parameters reach critical values [10]. In their article [11], Fogg and Allen go further and present ten (10) possible uses of this technology for overall health improvement; these use cases are summarized in Table I. In the agricultural field, Tseng and his team used GSM-SMS technology for monitoring and collecting data (temperature, humidity, wind speed, number of insects/pests caught, etc.) on farms [12].
These examples, which illustrate some possibilities for using short messages, have in common the fact of not referring to maintenance management or maintenance management software.
The modern challenges of profitability and efficiency require organizations to get as close as possible to the source of information in order to facilitate its collection and, especially, to make it available as soon as possible, for example for rapid decision-making. This implies the availability of the necessary tools and equipment in the field, often far from the office. These needs for collaborative work and information sharing are not new. In 2003, as part of the European project MOTION, Dustdar and Gall proposed a collaborative and mobile work architecture [13]. In 2006, Arnaiz et al. presented the potential of ubiquitous computing in the practice of industrial maintenance, as well as a vision of solutions for mobile maintenance, both implemented within the framework of the European project DYNAMITE 017498 [14]. They continued their research in the same direction in 2009, with a focus on condition-based maintenance [15]. Central Arkansas Water was able to improve the efficiency of its water distribution and treatment infrastructure maintenance activities through the use of a CMMS tool with an associated geographic information system, deployed on mobile devices in 2009 [16]. In a literature review conducted in 2009, Emmanouilidis et al. found a penetration, albeit at different but sustained scales, of mobile technologies in asset management and industrial maintenance [17]. Later, in 2014, Bankosz and Kerins developed a prototype to demonstrate the benefits of deploying mobile technology to improve maintenance in a small food manufacturing plant [18]. To meet the specific needs of some companies in charge of maintenance on various sites, where real-time data transmission is vital, Zhao and Feng designed and implemented a CMMS system accessible from mobile devices running Android [19]. In the maritime domain, new information technologies have been used to improve the management of maintenance and the monitoring of a vessel's parameters, to the point that the condition of the equipment of a ship at sea is known in real time by staff remaining ashore [20]. Similarly, Munyensanga et al. used an application deployed on mobile devices running Android to improve the efficiency of preventive maintenance of a circulation pump intake system [21].
These works have in common the use of the Internet as the means of communication between devices.
The use of current CMMS software forces users either to have a good Internet connection (often not possible in developing countries) or to work directly on a workstation connected to the local network; this restricts access to this work tool to service hours and service locations only.
Despite the large number of works on the use of SMS in various fields, or on the search for mobility in maintenance management, SMS has not yet been tested for managing the maintenance of medical equipment.
III. METHODOLOGY
To achieve our goals, we adopted an incremental approach, consisting of progressively implementing CMMS command functions via SMS.
In this work we demonstrate the effectiveness of a complete preventive maintenance cycle. The method is described by the diagram in Figure 1, which illustrates the data exchanges during a preventive maintenance cycle in the solution we propose, namely accessing and interacting with CMMS software by SMS. We define a preventive maintenance cycle as the process from the scheduling of a preventive maintenance plan to the report produced after the completion of the corresponding work.
The preventive maintenance cycle is as follows:
a. Information about the preventive maintenance plans for the medical equipment is stored in the database from a workstation. This is path .
b. As soon as a piece of equipment is put into operation, the current system date is continuously compared to the deadlines of the maintenance schedule. When a deadline is reached, the system sends an SMS notification to the maintenance manager RM, along paths and .
c. The maintenance manager RM assigns a maintenance technician TM to this task via SMS, following paths and , then and .
d. The maintenance technician TM asks the system for the details of the work to be done, along paths and .
e. The system responds by sending the list of required materials, as well as the procedures for performing the maintenance tasks, along paths and .
f. After completion of the work, the technician records his observations and sends them by SMS to the CMMS, following paths and .
g. The maintenance manager can then verify that the maintenance has been performed correctly by querying the system for the execution status of the task, along paths and , then and .
Currently, the operation of a web application installed on a remote server is characterized by a series of interactions (requests and responses) with users, through a web browser installed on the workstations and a web server application deployed on the server side. In our case, in addition to this usual mode of operation, the system needs a degree of functional autonomy: several actions must be executed spontaneously, and not in response to a request initiated by a human operator. These include updating the preventive maintenance programs, sending notifications by SMS when deadlines are reached, and reading and processing incoming SMSs. In this way our system is able to perform all the tasks entrusted to it at any time of the day. The different components that come into play during this process are shown in Figure 2. This automatic operation is carried out as follows. The sending and receiving of SMSs is handled by the tool gnokii-smsd, which reads and sends SMSs independently and periodically. When connected to a database, incoming messages are put into one table, while outgoing messages are put into another table. In either case, each message has a token indicating whether the message has been sent (for outgoing messages) or whether it has been processed by the parser (for incoming messages). The updating of maintenance plans is handled by stored procedures in the MySQL database: at regular intervals, the scheduled due dates are compared to the current date and, in case of a match, notifications are "dropped" into the outgoing SMS table, where they are picked up by gnokii-smsd. The parser reads the incoming SMSs, interprets their contents and executes the actions requested by the users. The actions processed are the issuing of requests for the details of the maintenance procedures, the receipt of the maintenance reports, and the updating and issuing of the execution reports of the maintenance work. Finally, a client application running on mobile devices equipped with the Android system is used for sending and receiving short messages.
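The periodic deadline check and notification queuing described above can be sketched as follows. The table and field names are hypothetical (the paper does not publish the schema used by the stored procedures or by the SMS daemon), and the logic is shown in Python rather than as a MySQL stored procedure for readability.

```python
# Minimal sketch of the periodic maintenance-deadline check that queues
# SMS notifications for the maintenance manager. Table/field names and the
# message format are hypothetical; they are not taken from the paper.

from datetime import date

# Hypothetical in-memory stand-ins for the database tables.
maintenance_plans = [
    {"ml_id": 1, "equipment": "GeneXpert DX", "location": "Lab 2",
     "due": date(2019, 3, 1), "status": "Not Performed"},
]
outbox = []   # rows picked up and sent by the SMS daemon (e.g. gnokii-smsd)

def check_deadlines(today, manager_phone):
    """Queue one notification SMS per maintenance plan whose deadline is reached."""
    for plan in maintenance_plans:
        if plan["status"] == "Not Performed" and plan["due"] <= today:
            text = (f"MLID:{plan['ml_id']};EQ:{plan['equipment']};"
                    f"LOC:{plan['location']};DUE:{plan['due']:%Y-%m-%d}")
            outbox.append({"to": manager_phone, "text": text, "sent": False})

check_deadlines(date(2019, 3, 2), manager_phone="+241000000")
print(outbox)
```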
IV. MATERIAL
As working tools, we used: Android Studio 3.3, for the development of the module running on Android.
Eclipse Oxygen Release (4.7.0), for the development of modules running on PC.
Laravel 5.5, a PHP framework for developing dynamic web-type applications.
V. RESULTS AND DISCUSSION
In this paper, medical equipment data from the medical analysis laboratory of a Regional Hospital of Gabon were used to validate our methodology.
As shown in the following screenshots, we implemented a complete preventive maintenance cycle in which the maintenance manager can check the status of each task. Some of the apparatus of a technical platform of the Regional Hospital is listed in Figure 3.
Fig. 3. List of a few medical devices of a Regional Hospital
Figure 3 shows the equipment selected for this work. The items were entered into the CMMS application by a user on a computer connected to the corporate network. One selected device is shown in Figure 4, which lists the preventive maintenance plans for the GeneXpert DX, a medical biology analyzer. As soon as a piece of equipment is put into operation, the system monitors the deadlines of each maintenance plan for each medical device. When a deadline is reached, the system updates the maintenance follow-up, adding for each device the future actions to be executed. Figure 5 shows the log of GeneXpert DX maintenance activities. Equipment maintenance plans do not change, as they are provided by the supplier as recommendations. As shown in this figure, the actions have the status "Not Performed" when they first appear on this maintenance tracking page; other pre-existing follow-up actions may have the status "In Progress" or "Completed" depending on their level of execution.
Fig. 6. Maintenance notification messages for work to be done
Figure 6 shows three (3) notifications sent by the system, by SMS, to the maintenance operators. These messages are issued as soon as a new maintenance action is required. Each notification consists of a sequence of parameters and the corresponding values. The content of the first notification is detailed in Table 2 (Description of a maintenance plan), which shows the ten lines of a notification. Each line has three columns: the parameter, its value, and its meaning. For example, line four (4) indicates the name of the device that is the object of the maintenance, while the fifth line shows its location. Figure 8 shows the messages sent by the system in response to the request received: the two (2) SMS received by the technician contain, respectively, the list of materials and the procedure to follow for proper execution of the work. Figure 9 shows the SMS sent by the technician once the maintenance actions are completed, announcing that the maintenance has indeed been carried out. The message contains a report stating that the maintenance was performed normally (MLRQ = RAS). The maintenance task (MLID: 1) is thus completed and closed, as shown in the next figures.
Fig. 10. Maintenance follow-up after execution of the work: task MLID: 1 is "Completed"
Figure 10 shows the entries of the maintenance follow-up table. It shows the start and end dates and times of a maintenance activity, as well as its new status (Completed).
Fig. 11. Detail of a maintenance log (MLID: 1) after completion of the work
Figure 11 shows the details of a maintenance follow-up entry. In addition to the start and end dates and times of the work, and the status, there is also a note (RAS) indicating that the execution of the work went perfectly and there is nothing to report.
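The report SMS shown above carries parameter/value pairs (e.g. MLID for the maintenance-log identifier and MLRQ for the remark); a minimal parser might look like the following. The separators are an assumption: the paper shows the parameter names but not the full message syntax.

```python
# Hedged sketch of parsing an incoming technician report such as "MLID:1;MLRQ:RAS".
# Separators (';' between fields, ':' between key and value) are assumed.

def parse_report(sms_text):
    """Split a key:value SMS into a dictionary of upper-case keys."""
    fields = {}
    for part in sms_text.split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            fields[key.strip().upper()] = value.strip()
    return fields

def close_task(fields, maintenance_log):
    """Mark the referenced maintenance task as completed and store the remark."""
    ml_id = int(fields["MLID"])
    maintenance_log[ml_id] = {"status": "Completed", "note": fields.get("MLRQ", "")}

log = {}
close_task(parse_report("MLID:1;MLRQ:RAS"), log)
print(log)   # {1: {'status': 'Completed', 'note': 'RAS'}}
```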
These results show that our solution works: we have made possible SMS communication between CMMS software and the maintenance staff of a health facility. Starting from the notifications sent by the software to signal maintenance work to be carried out, exchanges follow until the complete execution of a preventive maintenance activity.
As with our previous work on alert sending [23], the results obtained are encouraging. Although not all use cases have yet been implemented, the current results look promising.
Notwithstanding the advances presented above, the main limitations of the system in its current state are twofold. Firstly, the data transmitted by SMS are mainly textual, whereas the user manuals of some equipment include diagrams or pictures related to their maintenance. Secondly, the amount of data that can be transmitted in a single SMS is small, so sending some texts requires several SMSs.
VI. CONCLUSION
The literature review showed that remote access to CMMS software by short messages had never been tried before. In this paper, we propose a solution for the remote management of maintenance by SMS in a hospital context. The experience of implementing this solution shows that it is viable, and the results obtained at this stage make us optimistic about the continuation of the overall project.
As a perspective, it might be possible to overcome the main limitations by transmitting other types of digital media (images, video, sound) via SMS. To optimize SMS communication, data compression will be considered in our future work. | 3,393.6 | 2019-04-30T00:00:00.000 | [
"Computer Science"
] |
Evidence that Mediator is essential for Pol II transcription, but is not a required component of the preinitiation complex in vivo
The Mediator complex has been described as a general transcription factor, but it is unclear if it is essential for Pol II transcription and/or is a required component of the preinitiation complex (PIC) in vivo. Here, we show that depletion of individual subunits, even those essential for cell growth, causes a general but only modest decrease in transcription. In contrast, simultaneous depletion of all Mediator modules causes a drastic decrease in transcription. Depletion of head or middle subunits, but not tail subunits, causes a downstream shift in the Pol II occupancy profile, suggesting that Mediator at the core promoter inhibits promoter escape. Interestingly, a functional PIC and Pol II transcription can occur when Mediator is not detected at core promoters. These results provide strong evidence that Mediator is essential for Pol II transcription and stimulates PIC formation, but it is not a required component of the PIC in vivo. DOI: http://dx.doi.org/10.7554/eLife.28447.001
In yeast cells, the PIC has been defined experimentally as the entity that contains Mediator and general transcription factors bound to the core promoter in vivo (Wong et al., 2014). However, the PIC is short-lived (estimated as 1/8 s by Wong et al., 2014), because Mediator only transiently associates with the core promoter; it rapidly dissociates from the PIC upon TFIIH-mediated phosphorylation of the Pol II CTD (Jeronimo and Robert, 2014;Wong et al., 2014). Such TFIIH-dependent dissociation of Mediator is important for efficient escape of Pol II from the promoter into the elongation phase of transcription (Wong et al., 2014). Upon Mediator dissociation and promoter escape of Pol II, the other general transcription factors remain at the core promoter as a post-escape complex (Wong et al., 2014).
Substantial transcription persists upon depletion of essential Mediator subunits
Classic loss-of-function experiments to elucidate the function of genes essential for cell growth are always compromised by the inability to completely remove or inactivate the encoded gene product. As a consequence, various approaches have been used to reduce the function of essential gene products, such as ts mutants (Horowitz and Leupold, 1951;Edgar and Lielausis, 1964;Hartwell, 1967), inducible protein degradation via degron-tagged proteins (Dohmen et al., 1994;Moqtaderi et al., 1996;Nishimura et al., 2009), specific chemical inhibitors (Bishop et al., 2000), and anchor-away (Haruki et al., 2008). These approaches are complementary, and each of them has advantages and disadvantages. The anchor-away method permits the rapid removal of proteins from the nucleus under conditions where cells are not stressed by heat shock or other environmental insults (Haruki et al., 2008). Although cells are treated with rapamycin to induce the anchor-away process, the strains carry the tor1-1 mutation that blocks the physiological effects of rapamycin.
Most importantly, control anchor-away strains show comparable Pol II occupancy profiles in the presence or absence of rapamycin (Wong et al., 2014).
For a comprehensive analysis, we generated anchor-away strains for every essential Mediator subunit, and several non-essential subunits, and examined Pol II occupancy upon rapamycin treatment. Levels of the tagged Mediator subunits associated with the genome were reduced to background levels upon rapamycin addition (Petrenko et al., 2016; Figure 1A), indicating that the anchor-away procedure is efficient. Depletion of each Mediator subunit tested, including those essential for cell growth, does not lead to a global shutdown of Pol II transcription, but rather a modest decrease on average (Figure 1B). In contrast, depletion of TBP or Pol II leads to a drastic decrease in transcription (Figure 1B). For Mediator-depleted strains, the strongest decreases in Pol II occupancy are observed upon depletion of the essential head subunit Med17, the essential scaffold subunit Med14, and the essential middle subunit Med7. Depletion of Cdk8, the catalytic subunit of the kinase module, has very modest effects on Pol II occupancy (Figure 1B).
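The mean Pol II occupancy curves compared here (Figure 1B) are built, per the figure legend, by normalizing read counts to counts per million (CPM) and averaging gene profiles aligned at the TSS. A schematic version of that computation, with made-up per-gene coverage arrays rather than real ChIP-seq data, is sketched below.

```python
# Schematic computation of a mean, TSS-aligned Pol II occupancy profile.
# Per-gene coverage values are invented; real data come from ChIP-seq reads.

import numpy as np

def cpm(counts, total_mapped_reads):
    """Normalize raw counts to counts per million mapped reads."""
    return counts * 1e6 / total_mapped_reads

def mean_tss_profile(per_gene_coverage, total_mapped_reads):
    """Average CPM-normalized coverage over genes, each aligned at its TSS."""
    normalized = [cpm(np.asarray(cov, dtype=float), total_mapped_reads)
                  for cov in per_gene_coverage]
    return np.mean(normalized, axis=0)

# Two toy "genes", each with coverage at positions TSS .. TSS+4.
coverage = [[5, 9, 12, 10, 7],
            [3, 6, 11, 9, 6]]
profile = mean_tss_profile(coverage, total_mapped_reads=2_000_000)
print(profile)
```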
In the above experiments, genes are expressed at steady-state levels prior to depletion of the Mediator subunit. To address the effect of Mediator depletion on inducible transcription, we depleted cells of Med17 and then analyzed the rapid transcriptional activation response to heat shock and copper. In accord with the modest transcriptional effects described above, heat shock induction of HSP82 and copper induction of CUP1 are reduced 2-fold in Med17-depleted cells (Figure 1C). This observation is consistent with previous observations of heat shock and copper induction in the med17-ts strain (Lee and Lis, 1998; McNeil et al., 1998; Li et al., 1999).
Our reanalysis of published genome-scale Pol II occupancy data in a med17-ts strain (Paul et al., 2015) reveals similar results to those obtained here with the Med17-depletion strain (Figure 1-figure supplements 1A and 2); substantial transcription, albeit at an average 3-fold lower level upon loss of Med17 function. Quantitative analysis on ten additional genes confirms that the effect on Pol II occupancy when Med17 is depleted via anchor-away ( Figure 1C and Figure 1-figure supplement 3A) is similar to that seen when Med17 is inactivated via the temperature-sensitive mutation ( Figure 1D and Figure 1-figure supplement 3B). In both situations, loss of Med17 function leads to dissociation of other head and middle subunits from the enhancer, whereas the tail module remains (Linder et al., 2006;Paul et al., 2015;Petrenko et al., 2016; Figure 1-figure supplement 1B). More generally and as discussed below, the stronger effect on SAGA-dependent genes observed under conditions of Med17 depletion also occurs in the med17-ts strain (Paul et al., 2015). Thus, inactivation or depletion of Med17 by independent methods yields similar disruption of Mediator structure and quantitatively modest transcriptional effects.
For all head, middle, and tail subunits tested, SAGA-dependent genes are more strongly affected by Mediator depletion than TFIID-dependent genes ( Figure 2A and Figure 2-figure supplement 1). The transcriptional profiles in these Mediator-depletion strains are similar, though not identical ( Figure 2B). The relative importance of Mediator at SAGA-dependent vs. TFIID-dependent genes has been described in strains lacking the tail module (Ansari et al., 2012;Paul et al., 2015), but our results confirm those on other Mediator subunits that were reported while this work was in progress (Jeronimo et al., 2016). The relative importance of Mediator for SAGA-dependent vs. TFIID-dependent genes is also observed for Kin28, the kinase subunit of TFIIH (Wong et al., 2014). In contrast, depletion of Cdk8 kinase has only a minor effect on Pol II occupancy, with a distinct transcriptional profile that does not discriminate between SAGA-and TFIID-dependent genes.
Figure 1. Substantial transcription persists upon anchor-away of essential Mediator subunits. (A) Occupancy of the indicated 3x-HA-FRB-tagged Mediator subunit at the indicated enhancers prior to (-Rap) or after (+Rap) depletion by anchor-away. The parental strain containing no FRB-tagged protein was used as a negative control. The HA antibody was used except for Med17, for which an antibody to the native protein was used. Data from Petrenko et al., 2016. (B) Mean Pol II occupancy over ~400 transcribed genes prior to and after anchor-away of the indicated Mediator subunits, TBP, Pol II (Rpb1), and the parental strain (WT). Sequence reads were normalized as counts per million (CPM), and the curves were aligned relative to the transcription start site (TSS). (C) Pol II occupancy at the indicated constitutive and induced (by heat shock at 39°C or addition of copper) genes prior to and after Med17 anchor-away. (D) Pol II occupancy at constitutive genes and at the copper-inducible CUP1 gene prior to and after heat inactivation of a med17-ts allele. An isogenic MED17 strain was used as the control. DOI: 10.7554/eLife.28447.002

Depletion of Mediator causes a downstream shift in the Pol II profile, indicating that Mediator inhibits promoter escape

Kin28-dependent phosphorylation of the Pol II CTD causes dissociation of Mediator from the PIC (Jeronimo and Robert, 2014; Wong et al., 2014), which is important for efficient escape of Pol II from the promoter (Wong et al., 2014). In particular, depletion of Kin28 causes increased Mediator occupancy at the core promoter (Jeronimo and Robert, 2014; Wong et al., 2014) and an upstream shift in the Pol II profile (Wong et al., 2014), indicative of a defect in promoter escape. Conversely, depletion of Mediator head or middle subunits causes a downstream shift in the Pol II profile (Figure 3A). This downstream shift is not observed upon depletion of Mediator subunits in the tail or kinase module (Figure 3B). In addition, the Pol II profile is unaffected under conditions of TBP depletion, even though Pol II transcription is drastically reduced. Thus, the altered Pol II profile upon Mediator depletion provides evidence that Pol II transcription can occur even when Mediator is not present at the PIC (see Discussion).

Figure 3. (A) Overlaid mean Pol II occupancy curves scaled to 100% (maximum levels) after anchor-away of the indicated head and middle subunits of Mediator. (B) Overlaid mean Pol II occupancy curves scaled to 100% after anchor-away of the indicated tail and kinase module subunits, as well as for the parent strain (WT) before and after rapamycin addition. Sequence reads were normalized as counts per million (CPM), and the curves were aligned relative to the transcription start site (TSS). DOI: 10.7554/eLife.28447.008
Pol II transcription can occur from preinitiation complexes lacking Mediator
Pol II transcription occurs when Mediator subunit occupancy is not detected (Figure 1A,B), and depletion of Mediator alters the Pol II profile (Figure 3), suggesting that Mediator is not an essential component of the PIC. However, as Mediator occupancy was only assessed at enhancers due to its transient interaction with core promoters, it remained formally possible that sufficient Mediator was associated at core promoters to permit a modest level of transcription. To address this possibility, we utilized the fact that Mediator association with core promoters is stabilized and can be assessed under conditions where Kin28 is depleted or inactivated (Jeronimo and Robert, 2014; Wong et al., 2014).
When Med17 and Kin28 are depleted simultaneously, the level of Pol II occupancy in the coding region is roughly comparable to that observed when these proteins are depleted individually (Figure 4A and Figure 4-figure supplement 1). However, in all cases tested, Mediator occupancy (Med8 and Med22 subunits) at the core promoter is greatly reduced upon simultaneous depletion of Med17 and Kin28 as compared with depletion of Kin28 alone (Figure 4A and Figure 4-figure supplements 1 and 2). This observation holds for genes that are continuously transcribed (CCW12, TEF1, PMA1), as well as for those induced after depletion by heat shock (HSP12, HSP82, SED1) or copper (CUP1). Most importantly, the Mediator:Pol II occupancy ratio at all these genes upon simultaneous depletion of Med17 and Kin28 is far below the consistent ratio observed in Kin28-depletion strains (Jeronimo and Robert, 2014; Wong et al., 2014). In all cases tested, TBP and TFIIB occupancy at the core promoters is in excellent accord with Pol II occupancy in the coding regions (Figure 4B and Figure 4-figure supplement 1).

In contrast to these results, simultaneous depletion of Kin28 and TBP drastically reduces transcription and TBP, TFIIB, and Mediator occupancies at the core promoter (Figure 5A). Moreover, under conditions where TBP/Kin28 depletion is less efficient (obtained by reducing the rapamycin concentration by a factor of four), the level of transcription is reduced in accord with the reduction in TBP, TFIIB, and Mediator occupancy (Figure 5B). Thus, in the absence of Kin28, depletion of TBP affects the level but not the composition (i.e. the relative occupancy of the components) of the PIC, whereas depletion of Med17 alters the composition of a transcriptionally competent PIC. These observations suggest that Pol II transcription, and hence a functional PIC, can occur in the absence of Mediator at the core promoter, and therefore that Mediator is not an essential component of the PIC in vivo (see Discussion).
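To make the ratio argument concrete: under Kin28 depletion alone, Mediator and Pol II occupancies rise and fall together, giving a consistent Mediator:Pol II ratio, whereas under the double depletion the ratio collapses even though Pol II occupancy is largely retained. A minimal sketch of this arithmetic (Python; the fold-enrichment numbers are invented for illustration and are not data from this study):

    def occupancy_ratio(mediator_fe, pol2_fe):
        # Ratio of ChIP fold-enrichments at the core promoter.
        return mediator_fe / pol2_fe

    # Hypothetical fold-enrichments, for illustration only:
    kin28_only  = occupancy_ratio(mediator_fe=8.0, pol2_fe=10.0)  # ~0.80
    kin28_med17 = occupancy_ratio(mediator_fe=1.2, pol2_fe=9.0)   # ~0.13

    # A far lower ratio at comparable Pol II occupancy argues that the
    # residual, transcriptionally competent PICs largely lack Mediator.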
Mediator is important, but not essential, for serine 5 phosphorylation of the Pol II C-terminal domain

Mediator can stimulate Kin28-dependent phosphorylation of the Pol II CTD at serine 5 residues in vitro (Guidi et al., 2004; Esnault et al., 2008; Nozawa et al., 2017), but this activity has never been examined in vivo. In accord with the biochemical observations, depletion of all Mediator subunits tested causes decreased phosphorylation of serine 5 residues in the Pol II CTD (normalized to Pol II levels) at all core promoter regions examined (Figure 6). However, the level of CTD-serine 5 phosphorylation upon Mediator depletion is higher than that observed when Kin28 is depleted. As expected (Komarnitsky et al., 2000), CTD-serine 5 phosphorylation levels were low near the 3' end in all strains (Figure 6-figure supplement 1). Thus, Mediator contributes to, but is not fully responsible for, CTD-serine 5 phosphorylation in vivo.
Pol II transcription is virtually eliminated when Mediator head, middle, and tail modules are simultaneously inactivated

Although considerable transcription persists when Med17 or other essential Mediator subunits are depleted, the tail module still associates with enhancers and might influence transcription. To address whether transcription can be abolished when all Mediator modules are depleted or eliminated, we generated a med17-AA derivative lacking the genes encoding the Med3 and Med15 tail subunits. In strains lacking Med3 and Med15, a third tail subunit, Med2, is no longer recruited to genes (Zhang et al., 2004; Paul et al., 2015).
In the absence of rapamycin, Pol II levels at PMA1, CCW12, and TEF1 in the triple mutant strain are reduced compared to the wild-type (Figure 7A). Depleting Med17 in this strain causes a further decrease in transcription of all three genes, close to or at the background level of detection (Figure 7A). The triple depletion strain can induce SSA4 and HSP82 upon heat shock, or CUP1 in response to copper, albeit at a much lower level than wild-type cells or cells lacking either the tail module (Figure 6A) or Med17 (Figure 1C). Comparably low levels of transcription are observed in a strain where Med3, Med15, and Med17 were simultaneously depleted via anchor-away (Figure 7B). In contrast, simultaneous depletion of the essential subunits Med22 (head module) and Med7 (middle module) results in substantial levels of transcription, comparable to that observed upon Med17 depletion (Figure 7B). Thus, depletion of all three Mediator modules has a stronger transcriptional effect than conditions where the tail module is present at enhancers (Med17 depletion) or the head and middle modules are present at core promoters (deletion of tail subunits).
The weak heat shock response observed in the triple depletion strain could represent either a low level of Mediator-independent transcription or incomplete depletion of Mediator subunits. As it is impossible to directly exclude the possibility of incomplete depletion, we examined heat shock and copper induction in strains depleted of Pol II or TBP by the same anchor-away method (Figure 7C). We presume that any transcription observed in TBP- or Pol II-depleted strains represents incomplete depletion. In both cases, there is a very low level of transcriptional activation, roughly comparable to (although perhaps slightly lower than) that occurring in the triple depletion strain. More generally, genome-scale RNA-seq analysis indicates that the level of Pol II transcription upon depletion of all Mediator modules is indistinguishable from that occurring upon TBP depletion (Figure 7D). These observations indicate that most, and perhaps all, of the weak activation in the triple depletion strain reflects incomplete depletion of Mediator subunits.
Growth of Mediator-depletion strains
For all 18 Mediator subunits tested, depletion of any individual subunit (or the combination of the essential subunits Med22 and Med7) does not prevent cells from growing at 30°C (Figure 8A). As deletion of some Mediator subunits prevents cell growth, these observations indicate that depletion of Mediator subunits by anchor-away is incomplete. Interestingly, as seen with the med17-ts strain, the Med17 and many other anchor-away strains are unable to grow at 37°C (Figure 8B). As the med17-ts and Med17 depletion strains have comparable effects on transcription (Figure 1C,D), the failure of the med17-ts strain to grow at elevated temperature may not be due to complete inactivation of Med17, but rather to the requirement for higher levels of Mediator function to support growth under stressful conditions.
In striking contrast to all individual Mediator subunits tested, growth at 30°C is abolished upon depletion of individual subunits of any general transcription factor (TBP, TFIIA, TFIIB, TFIIE, TFIIF, TFIIH, Pol II) by the same anchor-away method (Figure 8A). However, simultaneous depletion/removal of all Mediator modules (either by depleting Med17 in the tail deletion strain (med3Δ med15Δ med17-AA) or by triple anchor-away depletion of the same subunits (med3-AA med15-AA med17-AA)) results in extremely poor growth at 30°C (Figure 8C). Thus, depletion of all Mediator modules causes drastic effects on transcription and cell growth, whereas depletion of individual Mediator subunits has more modest effects. This striking dichotomy suggests that viability of Mediator-depletion strains is not due to incomplete depletion by the anchor-away method per se. Moreover, incomplete anchor-away-mediated depletion cannot easily explain why cells subject to simultaneous depletion of essential Mediator subunits in the head (Med22) and middle (Med7) modules are viable, whereas cells simultaneously depleted of the essential Med17 and two non-essential tail subunits (Med3 and Med15) are inviable.

Figure 5 (caption fragment): "... , TBP, and TFIIB at the promoters of the indicated heat shock genes before or after a heat shock at 39°C in cells that were or were not incompletely depleted for TBP and Kin28 by using rapamycin at 25% of the usual concentration. As Mediator can only be detected at the core ..."
To explain why some Mediator subunits are essential, we suggest that transcription initiated from Mediator-lacking PICs is substantial but insufficient for cell growth. Incomplete depletion of an essential subunit allows enough additional transcription to put cells over the life/death threshold. This does not occur when all Mediator modules or individual general transcription factors are depleted, because transcription is then virtually eliminated. Consistent with the idea of a viability threshold, the viable Mediator-depletion strains grow more slowly than parental strains at 30°C and not at all at 37°C. The difference between life and death could be due to overall reduced (but not eliminated) transcription or to reduced transcription of one or more specific genes. We also note that, unlike inducible degron-based methods (Dohmen et al., 1994; Moqtaderi et al., 1996; Nishimura et al., 2009), the anchor-away approach does not destroy the protein but rather anchors it to the ribosome. As such, it is formally possible that Mediator might have some non-chromosomal function, and in this regard Mediator has post-transcriptional roles (Carlsten et al., 2013; Conaway and Conaway, 2013; Allen and Taatjes, 2015).
Discussion
Evidence that Mediator is essential for Pol II transcription in vivo

Simultaneous depletion of subunits in the head, middle, and tail modules is the most stringent test of Mediator function in vivo. Under this condition (triple depletion strain), there is a drastic effect on Pol II occupancy at genes expressed at steady-state prior to depletion or induced after depletion. The magnitude of the transcriptional defect is roughly comparable to that observed upon depletion of TBP or Pol II by the same method. In addition, cells depleted for all Mediator modules grow extremely poorly, unlike strains depleted for individual Mediator subunits. The transcriptional and growth effects upon depletion of all Mediator modules may be slightly less pronounced than upon depletion of TBP or Pol II. These subtle differences could be due to a very low level of Mediator-independent transcription, or they might simply reflect a very subtle difference in depletion efficiency, and hence be an experimental artifact. Thus, while impossible to prove conclusively due to the inherent limitations of studying proteins that are essential for cell viability, our results provide strong evidence that Mediator is essential for Pol II transcription in vivo.
Mediator modules make independent contributions to the overall transcriptional function of Mediator
Our results suggest that Mediator modules that associate either with the enhancer or with the core promoter confer partial transcriptional activity, and hence contribute independently to the overall transcriptional function. In this view, depletion/inactivation of Med17 (and other essential subunits) has a relatively modest transcriptional effect because the tail module remains associated with the enhancer. The molecular basis of this tail-specific function is unknown, but it might reflect the ability of the tail module to interact with a component of the basic Pol II machinery and/or to increase the association of other co-activators (e.g. SAGA or Swi/Snf) with the enhancer. It is also possible that the tail module might increase recruitment of very low levels of the head and middle subunits to the promoter, without affecting PIC function directly. Conversely, removal of the tail module also has a relatively modest effect on Pol II transcription, because the middle and head modules can associate with the PIC at the core promoter (Jeronimo et al., 2016; Petrenko et al., 2016). This explanation is consistent with the very low level of Mediator at enhancers that drive expression of ribosomal protein and glycolytic genes (Fan et al., 2006).

The independent functions of Mediator modules are consistent with the observation that SAGA-dependent genes are more affected than TFIID-dependent genes upon depletion of Mediator subunits. By virtue of TAF-DNA interactions (Verrijzer et al., 1995; Oelgeschläger et al., 1996; Burke and Kadonaga, 1997), TFIID strengthens the interaction of the basic Pol II machinery with the core promoter, thereby making the Mediator-dependent connection between enhancer and promoter less important at TFIID-dependent genes. In contrast, transcription of SAGA-dependent genes relies on TBP, not TFIID, and hence Mediator is needed to efficiently connect the enhancer and core promoter.
Mediator is not an obligate component of the preinitiation complex in vivo
Figure 7 (caption fragment): "... module), and Med17 (head module) or for Med7 (middle module) and Med22 (head module). (C) Pol II occupancy at the same genes prior to and after anchor-away of Rpb1 and TBP. (D) Mean Pol II occupancy over ~400 transcribed genes in strains depleted for Med14 or Med17, as well as the triple mutant strain (before and after rapamycin) and the parental strain (WT). Sequence reads were normalized as counts per million (CPM), and the curves were aligned relative to the transcription start site (TSS)." DOI: 10.7554/eLife.28447.016

As is the case for general transcription factors, the entire Mediator complex is critical for Pol II transcription and hence for PIC formation in vivo. However, Mediator interacts with both the enhancer and the core promoter, and individual Mediator modules have independent effects on transcription. Furthermore, unlike general transcription factors, Mediator is not required for basal transcription, and hence PIC formation, in vitro. Thus, it is unclear whether Mediator, like general transcription factors, is an obligate component of the PIC in vivo. Two independent observations presented here strongly suggest that considerable transcription can occur from PICs that lack Mediator.
First, and most directly, cells depleted simultaneously for Med17 and Kin28 support substantial Pol II transcription even though a very low level of Mediator is detected at the core promoter. Furthermore, TBP and TFIIB occupancies in such cells are in accord with Pol II occupancy, indicative of a functional PIC in the apparent absence of Mediator. In contrast, cells depleted only for Kin28 have a much higher level of Mediator at the core promoter, even though Pol II, TBP, and TFIIB occupancies are comparable. The low Mediator:Pol II, Mediator:TBP, and Mediator:TFIIB occupancy ratios at the core promoter upon simultaneous depletion of Med17 and Kin28 provide very strong evidence of a transcriptionally competent, Mediator-independent PIC. As discussed below, this conclusion does not rely on the degree of Mediator depletion per se, but rather on direct observation at the core promoter. Second, depletion of Mediator head or middle subunits causes a downstream shift in the Pol II occupancy profile. This non-wild-type Pol II profile is very difficult to explain by incomplete depletion per se (see below), and hence it strongly supports the idea of transcription initiated from a PIC lacking Mediator. Notably, this downstream shift in the Pol II profile is not observed upon depletion of tail subunits, which do not directly interact with general transcription factors at the PIC, yet whose depletion has comparable quantitative effects on Pol II occupancy. In addition to these two major arguments, our conclusion is supported by the dichotomy between Mediator and general transcription factors with respect to growth properties upon depletion.
Can incomplete depletion of Mediator explain the above observations that are the basis of our conclusion that Mediator is not an obligate component of the PIC? Mediator is not completely depleted in our experiments, and complete elimination of any essential protein is impossible. However, by definition, the small amount of protein remaining after incomplete depletion is structurally and functionally identical to the protein prior to depletion. Thus, incomplete depletion of a general transcription factor will reduce (but not eliminate) transcription and its occupancy at the core promoter, but it will not affect either the relative ratios of general transcription factors at core promoters (i.e. PIC level) or the Pol II profile. Indeed, incomplete depletion of TBP not only reduces transcription, but also reduces to comparable extents the levels of general transcription factors and Mediator at the core promoter. In contrast, depletion of Med17 drastically reduces Mediator occupancy at the core promoter, whereas it has only modest and comparable effects on occupancy of TBP, TFIIB, and Pol II. Thus, as incomplete depletion of Mediator cannot explain the key observations, our results indicate that (1) Mediator is not an obligate component of the PIC, (2) transcription can occur from a PIC lacking Mediator, and (3) Pol II initiated from a Mediator-lacking PIC escapes the promoter more easily than Pol II from a Mediator-containing PIC.
Mechanistic implications
Mediator is essential for Pol II transcription, yet is not an obligate component of the PIC, and this apparent paradox cannot be explained by the classic Pol II recruitment model in which Mediator bridges the enhancer (via the tail domain) and core promoter (via the head domain). One possibility is that Mediator performs a catalytic function at the core promoter that alters the activity of the PIC in a manner that is essential for transcription. Except for a kinase subunit (Cdk8 in yeast) that has minimal effects on transcription, Mediator does not have any known enzymatic activities. However, Mediator can affect the conformation of Pol II (Plaschka et al., 2015; Tsai et al., 2017), so a Mediator-induced conformational effect could be a catalytic, yet essential, function for Pol II transcription. In addition, biochemical experiments have suggested that Mediator functions as an assembly factor that facilitates PIC maturation through different stages (Malik et al., 2017).
Alternatively, the tail module that remains associated with the enhancer upon Med17 depletion could have an independent transcriptional function that does not involve its connection to the middle and head modules. For example, the tail module could interact directly with Pol II or a general transcription factor, thereby stimulating PIC formation. The tail module might also indirectly stimulate PIC formation via a direct interaction with the SAGA co-activator complex, whose Spt3 subunit interacts with TBP. This suggested mechanism would not only permit increased recruitment of a Mediator-lacking PIC, but would also explain the observation that SAGA-dependent genes are more affected than TFIID-dependent genes upon depletion of Mediator subunits. These proposed mechanisms, and others not mentioned, can explain why Mediator is essential for Pol II transcription even though it is not an obligate component of the PIC, and hence is different from a general transcription factor.
Yeast strains and growth conditions
Strains used in this study are listed in Supplementary file 1. Anchor-away strains were constructed as described previously (Wong et al., 2014), except for the Med17 strain, which was kindly provided by Francois Robert. For spotting assays, yeast cells were grown to an OD600 of 0.3-0.5, diluted to 0.1, and 5-fold serial dilutions of cells were spotted on YPD medium with or without 1 µg/ml rapamycin; the plates were kept at 30°C or 37°C for 48-60 hr. For anchor-away, strains were grown in SC liquid media to an OD600 of 0.4, and rapamycin was then added to a final concentration of 1 µg/ml. For 39°C heat shock, cells (pretreated or not with rapamycin for 45 min) were grown at 30°C; the culture was then filtered, transferred to media pre-warmed to 39°C, and grown at 39°C for 15 min in the presence or absence of rapamycin. For 37°C heat inactivation of the med17-ts strain, a similar procedure was followed, but with media pre-warmed to 37°C, and cells were grown at 37°C for 1 or 2 hr. For CUP1 induction, CuSO4 was added to a final concentration of 1 mM for 15 min.
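The dilution arithmetic for the spotting assay is a simple geometric series; a minimal sketch (Python; variable names are invented for illustration):

    # Culture normalized to OD600 = 0.1, followed by 5-fold serial dilutions.
    start_od = 0.1
    series = [start_od / 5**i for i in range(5)]
    print(series)  # [0.1, 0.02, 0.004, 0.0008, 0.00016]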
Chromatin immunoprecipitation (ChIP)
Chromatin, prepared as described previously (Fan et al., 2008) from 5 ml of cells (OD600 ~0.5), was immunoprecipitated with antibodies against the Pol II unphosphorylated CTD (8WG16, Covance), the CTD phosphorylated on serine 5 (3E8, Millipore), c-Myc (9E10, Santa Cruz), HA (F-7, Santa Cruz), TBP (a kind gift from Steve Buratowski), TFIIB, or Med17 (a kind gift from Steve Hahn). Immunoprecipitated and input samples were analyzed by real-time quantitative PCR using primers for genomic regions of interest and a control region from chromosome V to generate IP:input ratios for each region. The level of protein association with a given genomic region was expressed as fold-enrichment over the control region. For qPCR analysis, 3 to 4 biological replicates were performed for each experiment (biological replicates were culture samples collected on separate days, with lysis and IP performed on separate days). Each sample was tested in triplicate (three technical replicates) to control for qPCR error, and the triplicates were averaged. If one of the triplicates differed by more than 2-fold from the other two, it was discarded as an outlier. Error bars represent the standard deviation between the 3 or 4 biological replicates.
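For concreteness, the quantification just described reduces to simple arithmetic. The sketch below is a minimal illustration (Python; helper names are invented, and inputs are assumed to be background-corrected qPCR quantities) of the fold-enrichment calculation and the 2-fold triplicate outlier rule; it is not the authors' actual analysis script:

    import numpy as np

    def fold_enrichment(ip, inp, ip_ctrl, inp_ctrl):
        # IP:input ratio at the region of interest, expressed as
        # fold-enrichment over the chromosome V control region.
        return (ip / inp) / (ip_ctrl / inp_ctrl)

    def average_triplicate(values, max_fold=2.0):
        # Average three technical qPCR replicates; discard one value as an
        # outlier if it differs by more than `max_fold` from both others.
        vals = [float(v) for v in values]
        for i, v in enumerate(vals):
            others = vals[:i] + vals[i + 1:]
            if all(max(v, o) / min(v, o) > max_fold for o in others):
                return float(np.mean(others))
        return float(np.mean(vals))

    # Hypothetical example: three technical replicates, one aberrant.
    print(average_triplicate([4.1, 3.9, 9.5]))  # 9.5 is >2-fold off -> 4.0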
ChIP-seq and data analyses
Barcoded sequencing libraries from ChIP DNA (two biological replicates per strain) were constructed as described previously (Wong et al., 2013). Sequence reads were mapped using Bowtie, available through the Galaxy server (Penn State), with the following options: -n 2, -e 70, -l 28, -v -1, -k 1, -m -1. Pol II occupancy of a gene was calculated by summing the number of ChIP-seq reads within an appropriate region, normalized to the respective surveyed window size, and expressed as counts per million mapped reads (CPM). Normalization was also performed with respect to the median Pol II levels at the silent loci (HML and HMR) and a non-transcribed region of chromosome V, set as the 'background' level. Pol II occupancy peaks were called using MACS, available through the Galaxy server (Penn State), with the tag size set to 35, the bandwidth to 150-300 bp, and the P-value cutoff at 1e-05. Mean occupancy curves were generated using Galaxy deepTools (Freiburg, Germany), scaled relative to the number of mapped reads and fragment size, and expressed as counts per million mapped reads (CPM). TFIID- and SAGA-dependent genes were defined previously (Basehoar et al., 2004; Huisinga and Pugh, 2004). Clustering was performed using the CIMminer (NCI) average linkage algorithm and Matlab. Pol II occupancy profiles for individual Mediator depletion conditions were generated by averaging the values from two replicates, defining 100% as the maximal value at +400 to +500 (which is comparable to levels further downstream), and then normalizing all values to the 100% value in that strain. The p-values comparing Mediator depletion to the wild-type control strain at position +100 downstream of the TSS are as follows: Med22 (0.0004); Med7 (0.00007); Med14 (0.015); Med17 (0.2). The overall significance is considerably higher because these p-values at +100 do not consider differences in Pol II occupancy at other positions, which are clearly apparent in Figure 3. The ChIP-sequencing data and associated files are available through the Gene Expression Omnibus (GEO) under accession number GSE93190. For analysis of the Pol II occupancy data in Paul et al. (2015), data were downloaded from the NCBI Sequence Read Archive under accession number SRP047524.
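As an illustration of the occupancy arithmetic just described, the following sketch (Python; function names are invented, and the profile array is assumed to be indexed by base position downstream of the TSS) shows the CPM calculation and the 100%-scaling of profile curves; it is a schematic reconstruction, not the pipeline actually used:

    import numpy as np

    def cpm_occupancy(reads_in_window, window_bp, total_mapped_reads):
        # Reads summed over a gene window, normalized to window size and
        # expressed as counts per million mapped reads (CPM).
        return (reads_in_window / window_bp) * 1e6 / total_mapped_reads

    def scale_profile_to_plateau(mean_cpm, plateau=(400, 500)):
        # Define 100% as the maximal value between +400 and +500 of the TSS
        # (comparable to levels further downstream) and rescale the curve.
        curve = np.asarray(mean_cpm, dtype=float)
        lo, hi = plateau
        return 100.0 * curve / curve[lo:hi + 1].max()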
Release of nickel ions and changes in surface microstructure of stainless steel archwire after immersion in tomato and orange juice
Stainless steel archwire is an important component of orthodontic appliances and has the potential to corrode. Consumption of foods and beverages with a low pH, such as fruit-based juices, can trigger the release of nickel ions from stainless steel archwire. This study aimed to determine the difference in the amount of nickel ions released, and in the surface microstructure, of stainless steel archwire after immersion in tomato juice versus orange juice. The samples were stainless steel archwires with a diameter of 0.016 inches and a length of 5 cm, each immersed in 15 ml of solution and stored at 37°C in an incubator for 24 hours. The samples were divided into three groups (immersed in tomato juice, orange juice, or artificial saliva), each consisting of 9 samples. The solutions were tested using an Inductively Coupled Plasma Mass Spectrometer (ICP-MS) to determine the amount of nickel ions released. The archwire surface microstructure was examined using a Scanning Electron Microscope (SEM). The results showed that the average amount of nickel ions released in orange juice was greater than in tomato juice. There were significant differences in the amount of nickel ions released and in the surface microstructure of stainless steel archwire after immersion in tomato versus orange juice.
Introduction
The orthodontic archwire is an important component of orthodontic appliances [1]. One type of archwire that is often used in fixed orthodontic appliances is stainless steel [2]. Stainless steel orthodontic archwire contains iron, chromium, nickel, and carbon [3]. Stainless steel orthodontic archwire is generally used because it has a good modulus of elasticity, high strength, corrosion resistance, and economical price [2]. Although stainless steel is believed to be corrosion resistant, several studies have shown that stainless steel archwire has the potential to corrode. Corrosion can occur because the orthodontic archwire is constantly in contact with saliva [4]. In the oral cavity, the archwire gradually corrodes, resulting in the release of the metal elements that make up the archwire. The release of metal ions is influenced by various factors such as changes in temperature, microflora, diet, enzymes, and salivary acidity (pH) [5].
Food and beverages that have an acidic pH such as fruit juices, vinegar and carbonated drinks can increase the release of nickel ions from orthodontic appliances [6]. Fruit-based juice is a drink that is consumed daily and it is recommended because of its nutritional value. The chemical composition of fruit-based juice mostly consists of two or more organic acids [7]. Tomato juice is a beverage with an acidic pH (3.7 < pH < 4.5) and orange juice is a beverage with a high acidic pH (pH < 3.7) [8]. The dominant organic acids contained in oranges and tomatoes are citric acid and malic acid [9,10]. Metallic elements released can provide biological effects such as allergic reactions, carcinogenic, mutagenic, and cytotoxic effects [11]. Nickel is considered one of the most common allergens, with allergy prevalence rates of up to 30% depending on age, sex, and race [12][13]. Several studies have shown that nickel causes changes in periodontal and immunological conditions in patients allergic to nickel [14][15][16].
This study aimed to determine the release of nickel ions and microstructural changes on the surface of stainless steel archwire after being immersed in tomato and orange juice.
Materials and Method
This research was an experimental laboratory study with a post-test-only control group design. The samples were stainless steel orthodontic archwires with a diameter of 0.016 inches and a length of 5 cm, divided into three groups of 9 samples each. Treatment group 1 was immersed in tomato juice, treatment group 2 in orange juice, and the control group in artificial saliva. The tomatoes used were red tomatoes from Berastagi (Solanum lycopersicum L.). The oranges used were sweet oranges from Berastagi (Citrus sinensis). The juices were prepared at a concentration of 100%, without added water or sweeteners. The immersed samples were then stored in an incubator at 37°C for 24 hours. Nickel ion levels in the test solutions were measured by ICP-MS (Inductively Coupled Plasma Mass Spectrometry) at BTKL-PP Medan. Samples were also examined with a Scanning Electron Microscope (SEM) at the UNIMED Physics Laboratory to observe the microstructure of the wire surface after immersion. Normality of the data was assessed using the Shapiro-Wilk test. As the data were normally distributed (p > 0.05), testing continued with the parametric one-way ANOVA at a 95% confidence level. This research received ethical approval from the Research Ethics Commission of the Faculty of Medicine, Universitas Sumatera Utara.
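The statistical pipeline described above (per-group Shapiro-Wilk, a homogeneity test, then one-way ANOVA) can be reproduced with standard tools; a minimal sketch follows (Python with SciPy; the measurements are placeholders, not the study's data):

    from scipy import stats

    tomato = [0.008, 0.008, 0.009]   # hypothetical Ni release, mg/L
    orange = [0.012, 0.011, 0.012]
    saliva = [0.000, 0.000, 0.001]

    for name, group in [("tomato", tomato), ("orange", orange), ("saliva", saliva)]:
        _, p = stats.shapiro(group)
        print(f"Shapiro-Wilk {name}: p = {p:.3f}")   # p > 0.05 -> assume normal

    _, levene_p = stats.levene(tomato, orange, saliva)
    print(f"Levene: p = {levene_p:.3f}")             # p >= 0.05 -> homogeneous

    _, anova_p = stats.f_oneway(tomato, orange, saliva)
    print(f"One-way ANOVA: p = {anova_p:.3f}")       # p < 0.05 -> means differ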
Result
This study showed that the average amount of nickel ions released was 0.008 ± 0.000 mg/L in group 1, 0.012 ± 0.000 mg/L in group 2, and 0.000 ± 0.000 mg/L in the control group. The Shapiro-Wilk normality test showed that the data were normally distributed: group 1 had a significance value of p = 0.941 (p > 0.05), group 2 of p = 0.890 (p > 0.05), and the control group of p = 0.276 (p > 0.05). The homogeneity test using the Levene test showed that the data were homogeneous, with a significance value of p = 0.605 (p ≥ 0.05). Data analysis was therefore continued with the one-way ANOVA test, which showed a significant difference in the amount of nickel ions released from stainless steel orthodontic archwire after immersion in tomato juice versus orange juice, with a significance value of p = 0.001 (p < 0.05) (Table 1).
Discussion
Stainless steel archwire immersed in tomato juice (pH = 4.9) and orange juice (pH = 4.8) shows a greater release of nickel ions than archwire immersed in artificial saliva (pH = 7.9). The highest average nickel ion release occurred in stainless steel archwire immersed in orange juice. These results are in line with the study of Bonde et al., who stated that the amount of nickel ions released from stainless steel orthodontic archwires after immersion in coconut water (pH = 4.75) was higher than in artificial saliva [17]. The research of Pakpahan and Handali showed that the average release of nickel ions from stainless steel brackets immersed in lemon juice was greater than in artificial saliva [18]. Sumule et al.'s study also stated that the release of nickel and chromium ions from stainless steel brackets immersed in carbonated drinks was higher than in the control group [19].
The amount of nickel ions released from stainless steel archwire after immersion in orange juice is greater than after immersion in tomato juice, because the pH of orange juice is lower than that of tomato juice. The content of organic acids such as citric acid, malic acid, lactic acid, and several other acids gives orange juice its acidic pH, thus affecting the release of nickel ions [10]. Citric acid (C6H8O7) supplies a fairly high concentration of H+ ions [20]. An increase in H+ ions from the reacting acid accelerates the cathodic reduction reaction, so that more metal atoms are oxidized, which accelerates the corrosion rate. This is because the oxidation rate is proportional to the reduction rate during the corrosion of stainless steel wire, and it is reflected in the increased release of nickel and chromium ions from the wire [21].
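As a textbook-level illustration (standard corrosion electrochemistry; these half-reactions are assumed here for clarity, not characterized in this study), the acid attack can be written as coupled half-reactions:

    \mathrm{Ni \rightarrow Ni^{2+} + 2e^{-}} \quad \text{(anodic oxidation)}
    \mathrm{2H^{+} + 2e^{-} \rightarrow H_{2}} \quad \text{(cathodic reduction)}

At steady state, the electrons produced by oxidation are consumed by reduction, so a higher H+ concentration (lower pH) sustains a faster cathodic rate and hence faster metal dissolution.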
This study indicates a significant difference in the release of nickel ions from stainless steel orthodontic archwire after immersion in tomato juice versus orange juice. This is in line with the research of Kristianingsih et al., who showed a significant difference in the release of nickel and chromium ions from stainless steel archwire after immersion in carbonated beverages with an acidic pH [21]. It is also in line with the study of Situmeang et al., who showed significant differences in the release of nickel and chromium ions from stainless steel archwire after immersion in vinegar [22].
The SEM examination in this study shows differences in the surface microstructure of stainless steel archwire after immersion in tomato and orange juice. The surface of stainless steel archwire immersed in tomato juice shows roughness (Figure 1), whereas archwire immersed in orange juice shows extensive surface damage (Figure 2). The surface of the archwire immersed in artificial saliva was smoother than that of the archwires immersed in tomato and orange juice (Figure 3).
The results of this study are in line with the research of Sharma et al., who showed that there was surface damage in stainless steel archwires immersed in tomato juice [7]. The results of this study are supported by the research of Kao and Huang, who stated that lower pH increased corrosion of orthodontic archwires. The results showed that the surface of the stainless steel archwire in the treatment group with artificial saliva (pH = 4) and NaF had scratches and pitting corrosion [23]. This study is in line with the research of Pataijindachote et al., who stated that the surface of the archwire immersed in artificial saliva with pH 2.5 for 90 days showed significant changes in the archwire surface, especially in Australian archwire [24].
Stainless steel orthodontic appliances rely on the formation of a passive oxide layer to prevent corrosion. The addition of nickel and chromium to stainless steel metal alloys provides corrosion resistance. Chromium in stainless steel forms a passive protective oxide layer (Cr2O3) which provides a barrier against oxygen diffusion and other corrosive environments [25]. Even if a protective oxide layer is present on the metal surface, release of metal ions can still occur. The oxide layer can also slowly dissolve when the metal is exposed to oxygen from the surrounding environment [26].
Acidic drinks such as fruit-based juices can degrade the surface quality of orthodontic archwires. The damage and surface roughness of stainless steel archwire after immersion in tomato and orange juice are due to the acidic pH of the juices, which contain two or more organic acids. The corrosion mechanism on metal surfaces in organic acid media proceeds by adsorption of acid molecules onto the surface [7]. A decrease in pH can damage the oxide layer on the wire surface, causing corrosion and the release of metal elements, which roughens the surface microstructure. Surface roughness increases over time and can lead to pitting corrosion [24]. The release of metal ions results in characteristic changes and damage to the metal structure, which can weaken the wire and affect its aesthetics, quality, and physical shape [1].
Conclusions
Based on the results above, tomato juice and orange juice can cause the release of nickel ions and changes in the surface microstructure of stainless steel orthodontic archwire. Orange juice (pH 4.8) caused greater damage, with an average nickel ion release of 0.012 mg/L, than tomato juice (pH 4.9), which gave an average nickel ion release of 0.008 mg/L.