HIGH RESOLUTION MASS SPECTROMETRY IN LIPIDOMICS

The boost of research output in lipidomics during the last decade is tightly linked to improved instrumentation in mass spectrometry. Associated with this trend is the shift from low-resolution toward high-resolution lipidomics platforms. This review article summarizes the state of the art in the lipidomics field with a particular focus on the merits of high mass resolution. Following some theoretical considerations on the benefits of high mass resolution in lipidomics, it starts with a historical perspective on lipid analysis by sector instruments and moves further to today's instrumental approaches, including shotgun lipidomics, liquid chromatography–mass spectrometry, matrix-assisted laser desorption ionization-time-of-flight, and imaging lipidomics. Subsequently, several data processing and data analysis software packages are critically evaluated with all their pros and cons. Finally, this article emphasizes the importance and necessity of quality standards as the field evolves from its pioneering phase into a mature and robust omics technology and lists various initiatives for improving the applicability of lipidomics. © 2020 The Authors. Mass Spectrometry Reviews published by John Wiley & Sons Ltd.

I. INTRODUCTION

Lipids are one of the major compound classes in biological systems and fulfill important physiological tasks. Their hydrophobicity enables them to form cellular membranes that constitute a boundary against the cell's hydrophilic surroundings. This compartmentalization is the physical basis of any living entity. The second important biological task of lipids is energy storage. Lipids are perfectly suited for this physiological duty due to the high amount of energy generated by their oxidation. The third task fulfilled by lipids is signaling, by participation in intermolecular and intramolecular autocrine, paracrine, and endocrine regulatory processes.

Besides their biological functions, lipids are classified into eight categories according to their chemical building blocks: (1) fatty acyls, (2) glycerolipids, (3) glycerophospholipids, (4) sphingolipids, (5) sterols, (6) prenol lipids, (7) saccharolipids, and (8) polyketides, all based on fatty acyls/fatty alkyls, sphingosine, or prenol as basic hydrophobic building blocks (Fahy et al., 2005) (Fig. 1). All categories are further subdivided into lipid classes and lipid subclasses. In total, 43,659 individual lipids (21,706 curated compounds and 21,953 computationally generated compounds) are compiled in the LIPID MAPS structure database (LMSD), which is the most comprehensive database in the field of lipidomics. Nevertheless, the number of naturally occurring lipids is speculated to reach 100,000 species or even more, which is still far beyond the numbers annotated in databases.

As stated above, lipids play an important role in many cellular processes and hence are also involved in the formation, prolongation, but also resolution of many diseases like chronic inflammation, cardiovascular and neurodegenerative disorders, diabetes, or cancer, to mention just the most representative ones. Thus, identification and subsequent quantification of lipids has become an important need in biomedical research, and the most suitable method for achieving this goal is clearly mass spectrometry (Rustam & Reid, 2018). This is reflected in a tremendous increase of the publication output in the last 10 years.
According to Web of Knowledge, the number of publications for the search term "Lipidomics" increased by a factor of 7.7 during this period, which makes it one of today's fastest-growing research fields. One of the reasons for this astonishing success is the availability of constantly developing chromatographic, mass spectrometric, and bioinformatics tools alike. Particularly high-resolution mass spectrometry had a big impact on this success story in the last decade. This review article will, therefore, put its emphasis on the development and use of high-resolution methods for lipidomic analysis.

II. THEORETICAL CONSIDERATIONS

One major challenge in lipidomics is the extremely high diversity of molecular lipid species to be expected in most biological samples, which, in turn, is to a high degree a result of the combinatorial possibilities arising from the various building blocks of lipids. This fact leads to many potential mass spectral overlaps of lipid molecular ions and molecular adduct ions. While isomeric species have exactly the same elemental composition and cannot be separated by mass spectrometry without fragmentation, isobaric lipids do not have the same elemental composition and can thus be separated with sufficiently high mass resolution. Table 1 lists typical isobaric lipid overlaps together with the mass resolution needed to achieve full baseline separation. Full baseline separation gets particularly important when an isobaric species at low mass spectral intensity is to be observed beside, for example, a highly abundant major lipid species. The more the peak intensities of both isobars equalize, the less resolution is needed, because clear baseline separation becomes less important for isobaric m/z values of about equal height than for isobaric m/z values with large differences in peak height.

The first four examples of the table can routinely be separated by quadrupole time-of-flight (Q-TOF) technology, which for some instruments even reaches up to 80,000 resolution. While the first example depicts a highly unsaturated phosphatidylethanolamine species beside a saturated phosphatidylcholine (PC) species (¹²C₁ vs. ¹H₁₂), the second example shows the often observed overlap of PC and phosphatidylserine species (¹²C₂¹H₈ vs. ¹⁶O₂), which is basically due to a two-oxygen difference in elemental composition. Both examples depict monoisotopic peaks of protonated adducts commonly detected in lipid mass spectra. The mass difference between plasmalogens and odd fatty acyl carbon numbered diacyl species shows only a one-oxygen difference in elemental composition (¹⁶O₁ vs. ¹²C₁¹H₄) and is, therefore, harder to resolve than the previously mentioned two-oxygen "shift". Another frequently encountered issue is the overlap of monoisotopic and M + 1 peaks between even and odd mass ions, which is particularly true for sphingomyelin and PC species (¹³C₁¹H₅¹⁴N₁ vs. ¹⁶O₂). This is usually just observed in shotgun lipidomics without any chromatographic pre-separation and requires about 30,000 resolution. The by far most widely observed isobaric overlap in lipidomics is between the monoisotopic peak of a lipid species and the M + 2 peak of the same lipid species with just one additional double bond in its fatty acyl chains (¹²C₂¹H₂ vs. ¹³C₂). The resolution needed for resolving these overlaps is roughly 180,000, its exact value depending on the mass and the intensity ratio of both peaks; a minimal calculation of the resolving power required for these isobaric pairs is sketched below.
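The mass differences behind these isobaric pairs can be turned into a rough resolving power estimate with a few lines of code. The sketch below is illustrative only: the atomic masses are rounded literature values, the example m/z of 750 is an arbitrary but typical lipid mass, and R = m/Δm is the bare minimum for two equally intense peaks.

```python
# Minimal sketch: estimating the resolving power R = m/dm needed to separate
# the isobaric pairs discussed above. Atomic masses are rounded literature
# values; the example m/z of 750 is an arbitrary, typical lipid mass.
MASS = {
    "1H": 1.0078250319, "12C": 12.0, "13C": 13.0033548378,
    "14N": 14.0030740052, "16O": 15.9949146221, "23Na": 22.98976928,
}

def delta(comp_a, comp_b):
    """Mass difference between two elemental composition deltas,
    each given as {isotope: count}."""
    mass = lambda comp: sum(MASS[iso] * n for iso, n in comp.items())
    return abs(mass(comp_a) - mass(comp_b))

pairs = {
    "PE vs. PC (12C1 vs. 1H12)":                ({"12C": 1}, {"1H": 12}),
    "PC vs. PS (12C2 1H8 vs. 16O2)":            ({"12C": 2, "1H": 8}, {"16O": 2}),
    "plasmalogen vs. odd-chain diacyl":         ({"16O": 1}, {"12C": 1, "1H": 4}),
    "SM vs. PC M+1 (13C1 1H5 14N1 vs. 16O2)":   ({"13C": 1, "1H": 5, "14N": 1}, {"16O": 2}),
    "M vs. M+2 (12C2 1H2 vs. 13C2)":            ({"12C": 2, "1H": 2}, {"13C": 2}),
}

mz = 750.0  # assumed example precursor m/z
for name, (a, b) in pairs.items():
    dm = delta(a, b)
    print(f"{name}: dm = {dm:.5f} u, minimum R ~ {mz / dm:,.0f}")
```

Note that full baseline separation of peaks with very unequal heights demands substantially more than this minimum figure, which is why roughly 180,000 is quoted above for the M + 2 overlap although the simple m/Δm ratio at m/z 750 gives only about 84,000.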
This resolution is only achievable by Fourier transform ion cyclotron resonance-mass spectrometry (FT-ICR-MS) or by certain Orbitrap instruments. When such M + 2 or also M + 1 peaks are not resolved, either in their m/z dimension or in their chromatographic dimension, isotopic correction functions are needed for the calculation of monoisotopic peak intensities (Han & Gross, 2005). In a nutshell, by knowledge of the natural abundance of ¹³C and the intensity pattern of the peak cluster under investigation, it is possible to calculate the percentage contribution of monoisotopic and M + 1 or M + 2 masses, even when they are not mass resolved. According to the concept proposed by Wang, Huang, and Han (2014), the natural abundances of isotopes other than ¹³C are so low that their contribution to M + 1 and M + 2 peaks is negligible in most lipids. For very accurate quantitation of minute amounts of a compound in the presence of large potentially overlapping isotopic peaks in the same spectrum, it is in any event advisable to use a resolution in excess of 500,000 for fine isotopic resolution. Another example for isobaric species would be overlapping protonated and sodiated adducts, as exemplified in Table 1 for PC 34:1 and PC 36:4 (¹²C₂ vs. ¹H₁²³Na₁), which already needs a resolving power of around 600,000, but can be avoided by selective suppression of sodiated adducts through addition of ammonium salts (Brugger et al., 1997). Bielow et al. (2017) show in a very systematic manner the various isotopic patterns to be encountered in lipidomics and the mass-dependent mass resolution needed for resolving certain isobars.

Although high mass resolution is by itself not able to resolve all the isomeric possibilities arising from the sheer combinatorial power of the various esterified fatty acyls, it is nevertheless a very helpful instrumental asset for reducing the number of lipid candidates and even more so for increasing the certainty of analysis by contributing high-confidence elemental compositions. When, for example, all possible molecular lipid species at nominal mass 773 are calculated by taking into account the fatty acids mentioned at The Lipid Web (https://lipidhome.co.uk/), we end up with 202 possibilities for PC alone (Fig. 2). But one has to keep in mind that this number still does not reflect any branched, cyclic, oxygenated, or otherwise modified rarely occurring fatty acids, which would increase this number even more. The most important advantage of high mass resolution in this example is the separation of diacyl and ether lipids, which differ by one oxygen in their sum composition (C₄₄H₈₈O₇N₁P₁ vs. C₄₃H₈₄O₈N₁P₁), and of highly unsaturated even carbon numbered fatty acyl PC species from monounsaturated odd carbon numbered fatty acyl PC species (C₄₄H₇₂O₈N₁P₁ vs. C₄₃H₈₄O₈N₁P₁). In such a case a mass resolution of around 45,000 will be sufficient to cut the number of possibilities from 202 down to 58. At this point of structure elucidation, high mass resolution of intact lipid molecules nevertheless runs into its limits, because the remaining 58 possibilities are all isomers with exactly the same elemental composition and can only be separated by fragmentation, chromatography, or ion mobility.

A. Sector Mass Spectrometry

From a historical perspective, sector mass spectrometers were among the first instruments available for high mass resolution lipid analysis and were used in this field from the early 1980s on (Jensen & Gross, 1987).
The particular merits of sector instruments are not only substantiated by their high mass resolution and mass accuracy but also by the availability of high-energy collisional-activated dissociation (CAD) resulting in charge remote fragmentation (CRF) reactions, which allow for localization of structural details such as double bonds, branches, epoxy-, hydroxy-, cyclopropane, and cyclopentane moieties (Jensen, Tomer, & Gross, 1985; Tomer, Crow, & Gross, 1983; Tomer, Gross, & Deinzer, 1986). The underlying mechanism of fragmentation is a highly specific 1,4-elimination of H₂, which results in the loss of methane, ethane, propane, etc. from the omega terminus of fatty acyls. These neutral losses have a very predictable pattern as long as fatty acyls are straight-chained, saturated, and without any other substituents. But whenever such "obstacles for fragmentation" occur in a fatty acyl moiety, the fragmentation pattern starts to change distinctively, thus indicating the position and nature of irregularities in the homologous carbon chain (Tomer, Crow, & Gross, 1983; Jensen et al., 1985). Furthermore, it was proven that it is even possible to determine the double bond locations in fatty acyls esterified in triacylglycerols (TG) by CRF (Cheng, Pittenauer, & Gross, 1998). Therefore, CRF is up to today a powerful tool for in-depth structural elucidation of lipids.

B. Matrix-Assisted Laser Desorption Ionization-Time-of-Flight (MALDI-TOF)

MALDI-TOF instruments have been in use for the analysis of lipids since the late 1990s (Schiller et al., 1999), but although these instruments are able to quickly deliver data once the right matrix is found (Leopold et al., 2018), their usage is still rather limited. This might be attributed to some limitations inherent to MALDI-TOF technology: MALDI is not easily coupled with chromatography and thus lacks pre-separation, it does not offer precursor selection for reliable fragment spectra unless MALDI-TOF/TOF is used, and it also lacks the resolution of Q-TOF, Orbitrap, and FT-ICR-MS instrumentation. Therefore, even with an optimized matrix, MALDI-TOF is rather used as a fast screening method with low identification confidence. This is very well exemplified by the fast acquisition of differential lipid profiles of urine, which serve as a starting point for further in-depth exploration of lipids showing a significant difference between statistical groups (Tipthara & Thongboonkerd, 2016). Another niche of application for this technology is the use of MALDI-TOF/TOF for in-depth structural characterization of lipids, which capitalizes on the availability of high-energy CAD spectra in these instruments. This results in CRF patterns similar to sector mass spectrometry, which allow the allocation of fatty acid sn-positions, double bonds, and other modifications at the fatty acyl tails of lipids (Pittenauer & Allmaier, 2009), although the isolation window of four m/z units for MS/MS generation can become a so far unresolved challenge when working on lipids. The drawback of MALDI-TOF/TOF for the structure elucidation of lipids is its current lack of automation and its missing embedding into high-throughput lipidomic workflows.

C. Mass Spectrometry Imaging

Probably the most important application of MALDI-TOF these days is mass spectrometry imaging.
This is performed by placing a few micrometer thick cryosections of organs onto a MALDI target, covering them with MALDI matrix, and subsequently scanning them in two dimensions by the laser in pixels of a few micrometers (Wang, Wang, & Han, 2018). The resulting mass spectra can be reconstructed to give a two-dimensional picture of m/z values, which eventually allows localization of certain lipids in the respective tissue. Recently, Ellis et al. (2018) showed on an LTQ-Orbitrap instrument the potential of coupling high-resolution shotgun lipidomics with MALDI imaging. At a pixel size of 40 µm, one FT-MS full scan at a resolution of 240,000 and parallel low-resolution IT-MS/MS scans in data-dependent acquisition (DDA) mode were acquired. Both scan types were merged by the software and each pixel was processed like one sample of a shotgun experiment. This finally led to two-dimensional rat cerebellum images at a lateral resolution of 40 µm, where lipid assignment from high-resolution full scan spectra was further corroborated by characteristic fragments from the respective MS/MS spectra. In a similar manner, the distribution of sulfoglycosphingolipids in tumor tissue was determined by MALDI imaging on an LTQ-Orbitrap mass spectrometer, taking into account high mass resolution FT-MS full scans and MS/MS scans by CAD, pulsed Q collisional dissociation (PQD), and higher energy collision activated dissociation (HCD). Another interesting approach for pinpointing the spatial distribution of lipids is laser capture microdissection of tissue slices with subsequent lipid extraction and shotgun lipidomics (Knittelfelder et al., 2018). The big advantage of this method is the increased amount of time which can be spent on each pixel, allowing for various targeted selected ion monitoring (t-SIM) and MS/MS experiments and resulting in a very deep coverage of each pixel's lipidome. When a lateral resolution beyond 1 µm is needed, then TOF-SIMS or SIMS-FT-ICR-MS would be the instrumentation of choice (Smith et al., 2013; Desbenoit et al., 2014). Besides a spatial resolution down to 100 nm, which basically already enables coarse subcellular localization of lipids, the second big advantage of SIMS is that it is a matrix-free method, thus excluding all sources of error arising from matrix deposition. On the downside, SIMS is prone to produce in-source fragmentation, eventually resulting in loss of information on product-precursor relationships.

D. Shotgun Lipidomics

The term shotgun lipidomics comprises a variety of different instrumental platforms operated in direct infusion mode and mostly relying on electrospray ionization (ESI). Due to the lack of any chromatographic separation, high mass resolution increases the confidence of analysis enormously in such a setting, even though shotgun approaches literally always also have to rely on fragmentation of intact lipid ions in a further MS/MS step. While in the pioneering phase of lipidomics in the 1990s most instrumental platforms were triple quadrupoles operated at nominal mass resolution (Han & Gross, 1994; Brugger et al., 1997; Liebisch et al., 1999, 2002), the development in the last two decades clearly shifted shotgun lipidomics toward high mass resolution equipment, consisting particularly of Q-TOF and Orbitrap instrumentation (Ekroos et al., 2002; Schuhmann et al., 2006, 2011, 2012; Ejsing et al., 2009; Almeida et al., 2015; Ellis et al., 2018; Horing et al., 2019).
On the infusion side of such platforms, the Nanomate nanoESI chip from Advion Inc. can be regarded as a very useful complementary piece of equipment, because it uses one nanoESI spray needle for each sample and thus minimizes carry-over effects, which are frequently observed when using just syringe infusion (Schwudke et al., 2006). Furthermore, nanoESI increases signal intensities and diminishes the amount of sample needed per injection (Hsu, 2018). Generally, the biggest advantage of shotgun lipidomics over LC-MS lipidomics is the quantitative aspect. Because of its stable ionization environment, any fluctuations arising from chromatography, like changing mobile phase composition, matrix, or target compound concentration, can be excluded (Han & Gross, 2005; Schwudke et al., 2006; Horing et al., 2019). Thus, only one internal standard per polar lipid class is usually sufficient, because the ionization efficiency depends just on the polar head group where the charge is located and not on the varying fatty acyl chains (Wang, Wang, & Han, 2017). Regarding robustness, an interesting shotgun lipidomics study showed a very good stability of lipid concentrations in human plasma over a range of 3.5 years with coefficients of variation mostly below 15%, which would qualify this method even for U.S. Food and Drug Administration studies according to good laboratory practice (Heiskanen et al., 2013).

The drawback of shotgun lipidomics is its inherent ion suppression effects, because all lipids are ionized together without any pre-separation. This can in the worst case lead to complete suppression of minor constituents of the lipidome, especially when they have to be detected simultaneously beside highly abundant other compounds. By use of intrasource separation, ion suppression effects can be alleviated for certain lipid classes, resulting in specific ionization enhancement of certain lipid classes (Han et al., 2006). Furthermore, a recently published concept to at least partially deal with this issue is spectral stitching (Southam et al., 2016; Schuhmann et al., 2017). The proposed workflow parses the range of a full scan MS1 spectrum into several very wide selected ion monitoring (SIM) ranges of 20-50 m/z units, which are acquired in a sequential manner. These SIM spectra are subsequently stitched together by the software and result in one single full scan spectrum at the end of this process. This circumvents at least the ion suppression effects arising from limited fill capacities of ion storing devices such as Orbitrap or ICR cells. But it nevertheless leaves the ion suppression effects in the ESI source untouched. A particular shortcoming of shotgun lipidomics, when compared with chromatography-based approaches, is the inability to separate isomeric lipid species just by mass. Although this can be solved by fragment spectra, an additional chromatographic dimension would provide a higher degree of certainty in such cases. But as alluded to in the previous section, even some isobaric lipid overlaps can become a challenge when Q-TOF instead of Orbitrap or FT-ICR-MS technology is used. When using instrumentation with a resolution of 500,000 or even above, isotopic labeling experiments are an interesting application for the determination of metabolic fluxes using isotopes such as ¹⁵N or ¹⁷O (He et al., 2011). These isotopes have a very low natural abundance, which has been shown to be highly beneficial for ¹⁵N labeling in HepG2 cells.
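The role of natural isotope abundance, both in the isotopic correction mentioned earlier (Han & Gross, 2005) and in the choice of low-abundance labels such as ¹⁵N, can be illustrated with the simple binomial ¹³C model sketched below. The abundance value and the example carbon count are approximate, illustrative assumptions; real correction routines may also consider ²H, ¹⁵N, ¹⁷O, and ¹⁸O.

```python
# Minimal sketch of the 13C-only isotope model underlying such corrections:
# for a lipid with n_C carbon atoms, the probability of carrying exactly k
# 13C atoms follows a binomial distribution. The abundance (~1.07%) is an
# approximate literature figure.
from math import comb

P_13C = 0.0107  # approximate natural abundance of 13C

def isotopologue_fraction(n_carbons: int, k: int) -> float:
    """Fraction of molecules carrying exactly k 13C atoms."""
    return comb(n_carbons, k) * P_13C**k * (1 - P_13C)**(n_carbons - k)

# Example: a PC species with 42 carbons (typical, illustrative carbon count)
n_c = 42
m0, m1, m2 = (isotopologue_fraction(n_c, k) for k in range(3))
print(f"M: {m0:.3f}  M+1: {m1:.3f}  M+2: {m2:.3f}")
# The M+2 fraction relative to M is what overlaps with the monoisotopic peak
# of the lipid carrying one more double bond when the two are not resolved.
print(f"M+2 / M ratio: {m2 / m0:.3%}")
```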
The advantage of Q-TOF mass spectrometry is its acquisition speed, which allows for data-independent acquisition (DIA) MS/MS ALL methods as recently proposed by Gao et al. (2018). Due to the sheer acquisition speed of the TripleTOF used, this workflow is able to automatically acquire MS/MS spectra with a precursor selection window of 1 Da for a mass range as wide as 1000 Da, which has the advantage of 100% MS/MS spectral coverage for the whole mass range scanned. The nominal mass parsing of the scan range also circumvents the drawback of previous MS/MS ALL concepts, which operated with wider isolation windows and thus could only compensate for the loss of unambiguous precursor-fragment relationships by additional use of chromatography and retrospective in silico retention time-fragment relationship alignment. A further improvement of MS/MS ALL technology, termed MS ALL, was performed on an Orbitrap Fusion Tribrid and fully capitalizes on the wealth of fragmentation options available on this type of instrument (Almeida et al., 2015). This method also includes full scan spectra at a resolution of 450,000 (at m/z 200) in positive and negative polarity in a low and a high m/z range. MS/MS spectra were acquired in 1.0008 Da steps over the entire m/z range in the HCD cell and in the linear ion trap, each at a resolution of 30,000. Additionally, MS3 spectra on selected lipids were acquired in the linear ion trap. The only shortcoming of this method could turn out to be the collision energy settings, which are eventually not completely optimal for each lipid class, particularly when a huge number of different lipid classes is to be analyzed. If deeper structural elucidation of lipids including localization of fatty acyl double bond positions is of interest, UV-induced photodissociation (UVPD) might in future become the fragmentation technique of choice.
In a nutshell, activation of bond cleavages between allylic methylene groups and the corresponding double bond by a 193 nm UV laser is the mechanism by which unambiguous double bond localization in fatty acyls of phospholipids and long-chain bases of sphingolipids has been proven on Orbitrap instrumentation recently (Ryan et al., 2017; Williams et al., 2017). A further method for double bond localization and separation of regioisomers would be OzID, which relies on the reaction of ozone with aliphatic double bonds, similar to the mechanisms of lipid peroxidation (Brown, Mitchell, & Blanksby, 2011). Via the generation of ozonides and Criegee intermediates, this reaction produces truncated aldehydes and Criegee ions, with the site of truncation indicative of the double bond location. The drawback of OzID is its instrumental demands, because the mass spectrometer has to be customized for getting ozone into the collision cell or ion trap. Recently, the UV-induced Paterno-Büchi reaction of aliphatic double bonds with acetone came into the focus of lipidomics, because it enables localization of double bonds by analysis of its reaction products, which are consistently truncated at the positions of fatty acyl double bonds (Zhang et al., 2019). When acetone is added post-column and a UV emitter is placed in front of the ion source, this online reactor can even be coupled with LC-MS instrumentation.

E. LC-MS

The two most widely used approaches in LC-MS are reversed-phase chromatography and hydrophilic interaction liquid chromatography (HILIC) (Holcapek, Liebisch, & Ekroos, 2018). While reversed-phase chromatography separates lipids by the composition of their fatty acyl chains, HILIC separates lipids according to their polar head groups, which results in distinct lipid class separation. The fundamental separation mechanism in reversed-phase chromatography of lipids is described by the equivalent carbon number model, predicting increasing retention times with an increasing fatty acyl carbon number and decreasing retention times with an increasing number of double bonds (a minimal sketch of this model is given after this paragraph). Therefore it is possible to separate lipid species from the same lipid class by their cumulative carbon number-double bond index, and with increasing chromatographic plate number it is even possible to separate isomeric species according to their fatty acyl composition (Knittelfelder et al., 2014). Due to this advantage of lipid molecular species separation, many LC-MS lipidomics platforms are based on reversed-phase chromatography coupled to Orbitrap, FT-ICR-MS, or Q-TOF instruments (Hein et al., 2009; Fauland et al., 2011; Knittelfelder et al., 2014; Sala et al., 2015; Williams et al., 2017; Griffiths et al., 2018; Holcapek, Liebisch, & Ekroos, 2018; Schott et al., 2018; Schlotterbeck et al., 2019). However, it has to be mentioned that carry-over effects can become a problem in reversed-phase chromatography, particularly when C18 or even C30 columns are used (authors' unpublished observations). Thus it is important to closely monitor any carry-over effects by running solvent blanks every few (e.g., 10) samples and allowing several minutes of washing and equilibration time.
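The equivalent carbon number idea can be made concrete with a small sketch. ECN = C − 2·DB is the commonly used approximation, and treating ECN as a direct proxy for elution order within one lipid class is a deliberate simplification for illustration, not a retention time prediction; the species list is invented.

```python
# Minimal sketch of the equivalent carbon number (ECN) model for
# reversed-phase elution order within one lipid class. ECN = C - 2*DB is the
# common approximation; using it as a direct elution-order proxy is a
# simplification for illustration only.
def ecn(carbons: int, double_bonds: int) -> int:
    return carbons - 2 * double_bonds

# hypothetical PC species given as (total acyl carbons, total double bonds)
species = {"PC 34:2": (34, 2), "PC 34:1": (34, 1), "PC 36:4": (36, 4),
           "PC 36:2": (36, 2), "PC 38:6": (38, 6)}

for name, (c, db) in sorted(species.items(), key=lambda kv: ecn(*kv[1])):
    print(f"{name}: ECN = {ecn(c, db)}")
# Species with equal ECN (e.g., PC 34:1 and PC 36:2 above, both ECN 32) are
# so-called critical pairs that elute closely together and profit most from
# high chromatographic plate numbers and high mass resolution.
```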
Figure 3 exemplifies the merits of chromatographic separation coupled to high-resolution mass spectrometry: The upper panel shows reversed-phase chromatographic separation of a mouse liver lipid extract in a total ion chromatogram according to fatty acyl composition and lipid class, but the chromatographic peak at 22.92 min still contains many chromatographically overlapping TG species. Nevertheless, the high mass resolution of an Orbitrap instrument is able to separate the various adduct ions and their isotopic peaks at the given retention time and subsequently identifies the mass at 874.7855 as an elemental composition potentially corresponding to a [TG 52:3 + NH₄]⁺ ion. In parallel, the linear ion trap acquires an MS/MS spectrum of this mass peak in DDA mode, which firstly corroborates the identity of TG 52:3 and secondly elucidates it to be a TG 16:0_18:1_18:2 by the corresponding fatty acyl neutral losses with molecular weights of 256, 282, and 280. While such a setup often shows very high selectivity relying on retention time, exact mass of intact lipids, and characteristic MS/MS fragments, the quantitative aspect is its biggest disadvantage. In contrast to shotgun lipidomics or HILIC, it is not sufficient to use just one or two internal standards per co-eluting lipid class, but ideally one stable isotope-labeled internal standard per compound, because with changing matrix and mobile phase composition also ion suppression effects change from spectrum to spectrum. Since one internal standard for each lipid species is for economic reasons usually not feasible, four to ten internal standards per lipid class distributed over its retention time range are a good compromise to achieve at least semi-quantitative data. Another interesting recently proposed approach is called lipidome isotope labelling of yeast (LILY) and relies on a fully ¹³C-labeled yeast lipidome from Pichia pastoris grown on completely ¹³C-labeled cell culture medium (Rampler et al., 2018). This concept results in the availability of one stable isotope-labeled internal standard for each lipid species as long as the same organism is used. Nevertheless, all the naturally grown ¹³C-labeled lipids from this yeast extract need in the first place to be quantified by known amounts of nonlabeled reference standards, therefore shifting the bottleneck of standardization from the availability of isotope-labeled internal standards to the availability of nonlabeled reference compounds.

Owing to their separation power, reversed-phase chromatography based lipidomics platforms are often used in DDA mode, either for targeted or for nontargeted lipidomics. Good examples for targeted analysis with high-resolution instruments would be lipid class-specific methods focused on sphingolipids or sterols on a Q-Exactive in parallel reaction monitoring (PRM) mode (Peng et al., 2017; Schott et al., 2018). When used for nontargeted analysis, high mass resolution is even more imperative, because in such a setting it might become important to determine the identity of so far unknown lipid structures, which is close to impossible without the availability of accurate masses for molecular adduct ions and fragment ions alike. In a comparative nontargeted lipidomics study including seven Q-TOF models, one Q-Exactive, and one TOF instrument, it was shown that the results were quite similar independently of the high-resolution machinery used (Cajka, Smilowitz, & Fiehn, 2017).
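Returning to the TG 52:3 example from Figure 3, the sketch below shows what the DDA identification step amounts to: matching observed neutral losses against candidate fatty acids and checking that the assigned acyls sum to the precursor annotation. The mass values follow the figures quoted in the text, while the tolerance, spectrum, and helper names are invented for illustration; real software additionally evaluates isotopes and sn-position evidence.

```python
# Minimal sketch of assigning fatty acyls to a TG from DDA neutral losses,
# as in the TG 52:3 example of Figure 3. Masses and the 0.5 Da tolerance are
# illustrative only.
FATTY_ACIDS = {  # approximate free fatty acid masses used as look-up values
    "16:0": 256.24, "18:2": 280.24, "18:1": 282.26, "18:0": 284.27,
}

def assign_acyls(neutral_losses, tol=0.5):
    """Map each observed neutral loss to the closest matching fatty acid."""
    hits = []
    for nl in neutral_losses:
        best = min(FATTY_ACIDS, key=lambda fa: abs(FATTY_ACIDS[fa] - nl))
        if abs(FATTY_ACIDS[best] - nl) <= tol:
            hits.append(best)
    return hits

observed = [256.2, 282.3, 280.2]          # losses seen in the MS/MS spectrum
acyls = assign_acyls(observed)
print("_".join(sorted(acyls)))            # -> 16:0_18:1_18:2

# Consistency check against the precursor annotation TG 52:3:
carbons = sum(int(fa.split(":")[0]) for fa in acyls)
dbs = sum(int(fa.split(":")[1]) for fa in acyls)
assert (carbons, dbs) == (52, 3)
```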
Although the general merit of nontargeted omics approaches is the reduction of complexity, because tens of thousands of features are reduced to eventually just a few hundred significantly regulated features, it is nevertheless a tedious and daunting job to unambiguously identify all the corresponding lipids. It is needless to say that in such a process high mass accuracy is absolutely mandatory and greatly improves the certainty of lipid identifications. Just taking into account C, H, O, N, P, and S in a distribution typical for lipids (no more than 18 O, 3 N, 2 P, and 2 S atoms), including their most abundant heavy isotopes (¹³C and ³⁴S), and assuming just even electron ions formed by ESI and no more than eight ring double bond equivalents, results at m/z 810.60073 ([M + H]⁺ of PC 38:4) in 11 possible elemental compositions at 1 ppm mass accuracy and 48 possible elemental compositions at 5 ppm mass accuracy.

When quantitation of lipids is needed, HILIC has a clear advantage in comparison with reversed-phase chromatography. Since all lipids from a certain lipid class elute in a narrow retention time range, each lipid class can almost be regarded as one chromatographic peak with very similar mobile phase composition and matrix effects. Consequently, the response factors for individual molecular lipid species within the same lipid class are very close to each other and it is possible to obtain good quantitative results with just one or eventually two internal standards, similarly to shotgun lipidomics (Cifkova et al., 2012). Therefore, HILIC separation coupled to high-resolution mass spectrometry is a combination worth considering and has started to gain more attention recently (Triebl et al., 2014; Hajek et al., 2017). Another recent development in lipidomics is the use of nano-HPLC, which was shown to tremendously increase the coverage of detected lipids. While conventional narrow-bore reversed-phase HPLC could separate 127 molecular lipid species, reversed-phase nano-HPLC could separate 436 molecular lipid species, which were subsequently identified on a Q-Exactive (Danne-Rasche, Coman, & Ahrends, 2018). These results could potentially pave the road toward a much wider use of nano-HPLC systems in lipidomics, if the robustness issues typically arising from miniaturization of chromatography can be overcome. Over the last decade, supercritical fluid chromatography (SFC) has come to a stage of maturity in lipidomics at which routine application is conceivable. The big advantage of SFC over conventional HPLC is better chromatographic separation at shorter elution times. The compatibility of supercritical carbon dioxide as mobile phase with ESI is ensured by addition of a makeup liquid between column and ion source. Thus it has become possible to separate as many as 305 lipid species from 25 lipid classes in a chromatographic run of just 6 min by ultra-high-performance SFC (UHSFC) (Lisa & Holcapek, 2015). In a comparison of UHSFC with UHPLC it was shown that UHSFC could identify by a factor of 3.4 more lipids in 40% less run time when coupled to a Q-TOF.
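The effect of mass accuracy on the number of candidate elemental compositions, as in the PC 38:4 example above, can be illustrated with a brute-force search. The element ranges, the ring double bond equivalent filter, and the proton-mass handling below are simplified assumptions, so the counts will not exactly reproduce the 11 and 48 quoted above; the sketch only demonstrates how quickly the candidate space grows with a wider tolerance.

```python
# Brute-force sketch of how many elemental compositions fit an accurate mass
# within a given ppm tolerance, for [M + H]+ of PC 38:4 at m/z 810.60073.
# Element ranges and the RDBE filter are simplified assumptions.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "P": 30.97376151, "S": 31.97207069}
PROTON = 1.007276467
TARGET = 810.60073

def candidates(tol_ppm):
    hits = 0
    for c in range(30, 61):
        for n in range(0, 4):
            for o in range(0, 19):
                for p in range(0, 3):
                    for s in range(0, 3):
                        # the remaining mass must be made up by hydrogens
                        rest = (TARGET - PROTON - c*MONO["C"] - n*MONO["N"]
                                - o*MONO["O"] - p*MONO["P"] - s*MONO["S"])
                        h = round(rest / MONO["H"])
                        if h < 0:
                            continue
                        mz = (c*MONO["C"] + h*MONO["H"] + n*MONO["N"]
                              + o*MONO["O"] + p*MONO["P"] + s*MONO["S"] + PROTON)
                        rdbe = c - h/2 + n/2 + 1
                        if 0 <= rdbe <= 8 and abs(mz - TARGET)/TARGET*1e6 <= tol_ppm:
                            hits += 1
    return hits

print("candidates at 1 ppm:", candidates(1.0))
print("candidates at 5 ppm:", candidates(5.0))
```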
A. Shotgun Software Tools

The challenges for data processing in the field of shotgun lipidomics are, on one hand, the simultaneous ionization of all components of a sample and, on the other hand, the acquisition of samples with multiple strategies, for example, ionization in positive and negative mode or extractions with different chemical and/or physical conditions to improve the ionization efficiency for different lipid classes (Han et al., 2004; Jiang et al., 2007). Various specialized software tools are available to process this conglomerate of collected data sets. The automated multidimensional mass spectrometry-based shotgun lipidomics approach is a building-block concept with a combination of a nontargeted and a targeted approach to identify and quantify data from several shotgun lipidomics experiments. This concept of feature identification is based on information on the total number of carbon atoms, the number of double bonds, the chemical formulas, the monoisotopic mass, and building blocks, for example, chain, backbone, and head groups, which in combination represent the whole lipid (Yang et al., 2009). The LipidXplorer software is based on a declarative molecular fragmentation query language to identify and quantify obtained spectra with an individually defined identification routine (Herzog, Schwudke, & Shevchenko, 2013). It is a highly adaptable, device-independent system, which can handle low-resolution data, precursor and neutral loss scans, as well as bottom-up (Schuhmann et al., 2011) and top-down (Schwudke et al., 2007) approaches. Further typical shotgun lipidomics tools are LipidView/LipidProfiler from AB SCIEX, LipidInspector (Schwudke et al., 2006), and Analysis of Lipid Experiments (ALEX). ALEX is a graphical user interface (GUI) based framework consisting of six modules and is designed to process high-resolution data from multiplexed shotgun workflows, from raw data conversion to final lipid quantification. The lipid annotation is based on a database with stored information on 85 lipid classes and over 20,000 lipid species (Husen et al., 2013).

B. LC-MS Software Tools

In contrast to direct infusion mass spectrometry, raw data from LC-MS methods cannot be exported as averaged profile data for further processing; each data point may belong to another representative feature. This illustrates a fundamental difference between the requirements for data processing packages and tools for shotgun-MS and LC-MS approaches. Two main acquisition techniques are widely used: DDA and DIA. DIA can be subdivided into sequential window acquisition of all theoretical fragment-ion spectra (SWATH), where isolation windows of 20 up to 50 Da are used to simplify the connection of fragments to MS1 precursors, and all-ion fragmentation (AIF), also termed MS ALL or MS E (Fenaille et al., 2017). With the DDA approach, data are recorded as full scan spectra at the MS1 level and MS/MS spectra are automatically generated based on their intensity and/or external precursor lists. This clear relationship between precursor and fragment ions is beneficial compared with DIA approaches, with the limitation that minor contaminations caused by a 1 Da precursor selection window and co-eluting isomeric features are possible. Another DDA disadvantage is the lack of MS/MS confirmation spectra for all precursor ions of interest. An experiment with a standard mixture of 40 metabolites showed 85% MS/MS coverage (Benton et al., 2015); however, this can depend heavily on the sample matrix and chromatography.
This problem primarily affects low-intensity ions eluting at retention times similar to those of high-intensity target ions. In MS E and SWATH approaches all features are fragmented, which theoretically means 100% MS/MS coverage. This results in highly complex MS/MS spectra, for which proper software processing tools are required. Several software solutions are available for DDA and DIA, as shown in Table 2. There is a group of software tools which only specialize in one or two steps. In combination, however, they are very flexible and can cover the entire workflow: (i) raw file conversion, for example, msConvert (Adusumilli & Mallick, 2017) or the Reifycs Abf converter; (ii) for peak picking, blank filtering, adduct and polarity combining, and isotope filtering, typical solutions are XCMS (Mahieu, Genenbacher, & Patti, 2016) in combination with CAMERA, or MZmine 2 (Pluskal et al., 2010); (iii) several specialized software tools are available for MS/MS annotation, such as XCMS2 (Benton et al., 2008), LipiDex (Hutchins, Russell, & Coon, 2018), LipidBlast (Kind et al., 2013, 2014), or LipidIMMS (Zhou et al., 2019), a package solution for annotation with an additional ion mobility dimension. The advantage of package-based workflows is that they can be customized for each device and research area, but setting them up can be more complicated and time-consuming. To simplify data processing, there are workflow-oriented software solutions such as LipidMatch Flow, in which msConvert for the manufacturer-specific raw file conversion and MZmine 2 for data processing and filtering are integrated in the GUI. There are also some commercial software solutions for the whole workflow like SimLipid (PREMIER Biosoft) or LipidSearch (Thermo Fisher Scientific) and several open-source solutions like Lipid Data Analyzer 2 (LDA2), LIQUID (Kyle et al., 2017), MS-DIAL 4.0 (Tsugawa et al., 2015), OpenMS (Pfeuffer et al., 2017), and Greazy (Kochen et al., 2016). The main differences between these software solutions lie in peak picking and filtering, in which chromatographic data are translated into feature tables and MS/MS features are identified and linked. The different algorithms are listed in Table 2. The main problems are usually over- or under-annotation. The quality strongly depends on the complexity of the measured samples, the ionizability of the analytes, the compound concentrations, and the combination of MS device and optimized algorithm parameters. MS/MS annotation in lipidomics is the main difference to other areas of MS such as metabolomics or proteomics. Due to their chemical structure (polar head group and acyl chains), lipids can be annotated with databases based on fragment ion similarity scoring and/or according to structure-specific fragmentation rules. MS/MS annotation in MS-DIAL, LipiDex, and LipidBlast is based on similarity; for example, LipiDex uses a modified dot product to score experimental MS/MS data against the MassBank (Horai et al., 2010), LipidBlast (Kind et al., 2013), and NIST 12 MS/MS libraries. LDA2 uses rule decisions based on text files to process MS/MS data. The rules can be easily extended based on diagnostic ions, neutral losses, intensity ratios, exclusions based on false-positive ions, and also combinatory rules.
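To illustrate what such a rule-based decision can look like in its simplest form, the sketch below checks a spectrum for a class-diagnostic fragment or neutral loss in positive ion mode. The rule table, tolerance, and example spectrum are invented for illustration and are far simpler than the extensible text-file rules used by tools such as LDA2.

```python
# Minimal sketch of rule-based MS/MS annotation using diagnostic ions.
# The rule table, tolerance, and example spectrum are illustrative only.
RULES = {  # lipid class -> diagnostic fragment m/z in positive ion mode
    "PC/SM (phosphocholine head group)": 184.0733,
    "PE (neutral loss of 141.0191)": -141.0191,   # negative value = neutral loss
}

def annotate(precursor_mz, peaks, tol=0.005):
    """Return classes whose diagnostic ion or neutral loss is present."""
    hits = []
    for name, rule in RULES.items():
        target = precursor_mz + rule if rule < 0 else rule
        if any(abs(mz - target) <= tol for mz, intensity in peaks):
            hits.append(name)
    return hits

# invented DDA spectrum of a putative PC species at m/z 760.5851
spectrum = [(184.0733, 1.0e6), (478.3292, 2.1e4), (760.5851, 8.0e3)]
print(annotate(760.5851, spectrum))
```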
There is a group of online tools and software packages that specialize in computational approaches to compound annotation. With in silico fragmentation, the software identifies unknown compounds by comparing and ranking theoretical MS/MS spectra against experimental MS/MS spectra. MetFrag 2.2 (Ruttkies et al., 2016) is a web service that can also be used as a desktop version or integrated into XCMS and OpenMS workflows. It supports structure imports from common databases such as PubChem, KEGG (Kanehisa et al., 2008; Kanehisa & Sato, 2019), ChemSpider, and user-defined data. Another software tool is Competitive Fragmentation Modeling-ID (CFM-ID) 3.0 (Allen et al., 2014; Djoumbou-Feunang et al., 2019), which contains a compound library obtained from the METLIN metabolite database with different collision energies for fragmentation evaluation. It also has a rule-based library designed specifically for larger molecules like lipids to speed up prediction and improve accuracy. Despite the different software solutions, exact structure identification is still a difficult task, even if the quality of annotation has been massively improved with high-resolution MS devices and MS/MS information. There are a few points that are not solved in standard lipidomics approaches, for example, stereoisomers, sn-1/sn-2 positions, enantiomers, or double bond positions.

C. Tools for Batch Normalization

Experiments with a larger number of samples can be challenging because of changing conditions during analysis, such as drift of instrument sensitivity, changes of eluent composition over time, temperature changes, and batch interruption due to instrument errors. These factors might lead to lower statistical power (Xiao et al., 2014). It should be noted that data processing and normalization can have a major impact on the results. Therefore, the results should always be checked for plausibility. There are several data-driven normalization methods. Li et al. (2016) compared 16 data-driven normalization methods on four different data sets using the online tool Metapre and categorized them into superior, good, and poor performing methods. Another tool for data-driven normalization is Metabox (Wanichthanarak et al., 2017). A critical point in data-driven normalization is that differences due to systematic errors and the variability of sample preparation cannot easily be distinguished from phenotypic variations. Quality control (QC)-based and/or internal standard (IS)-based normalization strategies are another approach. A QC sample is usually a pooled sample which is acquired with a certain frequency between samples. Software solutions based on QC approaches are Batch Normalizer (Wang, Kuo, & Tseng, 2013), which corrects batch variability using LOESS regression, the Random Forest-based online tool SERRF (Fan et al., 2019), the Support Vector Machine-based statTarget (Luan et al., 2018), and EigenMS (Karpievitch et al., 2014). IS-based normalization works with several standards which are added to each sample. Since the availability of standard compounds is limited and the costs can be very high, at least one standard per lipid class should be added. The number of standards required also depends on the method used, for example, reversed-phase, HILIC, or shotgun MS. Best-Match Internal Standard (B-MIS) normalization (Boysen et al., 2018) normalizes peak areas based on isotope-labeled internal standards that behave similarly during the analysis. LipidMatch Normalizer (Koelmel et al., 2019) is an extension of the LipidMatch tool, which uses a ranking system to find the most suitable lipid standard for each analyte.
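A minimal sketch of what IS-based normalization amounts to in practice is shown below: each analyte area is divided by the area of the class-matched internal standard in the same sample and scaled by the spiked amount. All peak areas, standards, and amounts are invented for illustration; real tools such as those above additionally rank candidate standards and can combine this with QC-based drift correction.

```python
# Minimal sketch of internal standard (IS)-based normalization. All numbers
# are invented for illustration.
IS_AMOUNT_PMOL = {"PC": 100.0, "PE": 50.0}   # amount of IS spiked per sample

def normalize(sample):
    """sample: {'IS': {class: area}, 'analytes': [(name, class, area)]}"""
    out = {}
    for name, lipid_class, area in sample["analytes"]:
        is_area = sample["IS"][lipid_class]
        # one-point response-factor assumption: analyte responds like its IS
        out[name] = area / is_area * IS_AMOUNT_PMOL[lipid_class]
    return out

sample = {
    "IS": {"PC": 2.0e6, "PE": 8.0e5},
    "analytes": [("PC 34:1", "PC", 6.4e6), ("PC 36:2", "PC", 1.2e6),
                 ("PE 38:4", "PE", 4.0e5)],
}
print(normalize(sample))   # pmol estimates per analyte
```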
LDA2 follows a similar approach with internal standards and an automatic assignment of the respective standards to the targets. There are also tools which combine data-driven approaches with IS- and/or QC-based approaches, such as NOREVA (NORmalization and EVAluation of MS-based metabolomics data) (Li et al., 2017). It is an online service with 24 different data-driven normalization methods combined with QC-based or QC/IS-based normalization strategies and evaluates their performance by multiple testing. Despite these different tools, it is still difficult to compare data between different MS platforms. A strategy based on internal standards, or a combined strategy, is therefore the most promising way to achieve platform-independent results.

V. REPORTING AND QUALITY STANDARDS

Little more than a decade ago, when lipidomics was still in its very infancy, an international group of acknowledged researchers in the field founded the International Lipids Classification and Nomenclature Committee (ILCNC), which created a logically structured classification system for lipids as depicted in Figure 1 (Fahy et al., 2005). On the basis of this endeavor, the LIPID MAPS consortium developed the accompanying LMSD (Sud et al., 2007) and a few years later the LipidomicNet consortium proposed a shorthand nomenclature for reporting of mass spectrometry identified lipids (Liebisch et al., 2013). The core intention of the proposal for a shorthand nomenclature is to report only unambiguous and experimentally proven details according to an annotation system indicative of the analysis depth of uncovered lipid structures. If we stick to the example given in Figure 2, a precursor ion scan of m/z 184 on a triple quadrupole instrument with just direct infusion could indicate a PC species at m/z 773 but cannot determine whether it is a diacyl or an ether species and should therefore be annotated as PC (773). This annotation potentially subsumes already 202 molecular species if just the most commonly detected fatty acid combinations are taken into account. With the availability of a mass resolution of at least 40,000, an ether species could, for example, be experimentally excluded due to the number of oxygens, at least when assuming that the structure under investigation is not an oxidized phospholipid. Thus it could then be labeled PC 35:1, with still 58 possible underlying structures. When, in addition to high mass resolution, MS/MS spectra are also available, the mass spectrometrist might be able to infer the nature of the fatty acyls and eventually even their position. This would, for example, allow annotating the molecule as either PC 17:0_18:1 with unknown fatty acyl sn-positions or as PC 18:1/17:0 when these details are uncovered. At this level the number of structure proposals would be down to 12 or even 6, respectively. Any further elucidation of the remaining structural ambiguities, which basically includes positions and geometries of double bonds, has to involve more sophisticated methods like ozonolysis (OzID), UVPD, or silver ion chromatography (Brown, Mitchell, & Blanksby, 2011; Lisa & Holcapek, 2013; Williams et al., 2017). The legacy of the lipidomics shorthand nomenclature group is the recently founded Lipidomics Standards Initiative, which is an association of 25 leading lipidomics labs for governing the development of standardized practices (https://lipidomics-standards-initiative.org/) (Liebisch et al., 2019).
The guidelines elaborated by this consortium cover the whole lipidomics workflow from sample collection to data reporting and should in future facilitate collaboration, data exchange, and data interpretation in the field. Another recently launched important international initiative is the Plasma Lipidomics Reference Value Group (Burla et al., 2018), which evolved from a recent interlaboratory comparison and has the goal of introducing lipidomics into clinical practice by establishing a panel of diagnostically important lipids including their reference values in plasma. Most recently, the International Lipidomics Society (ILS) emerged from all these activities. ILS is intended as an umbrella organization and communication hub for improved coordination of the many ongoing community efforts and should foster a concerted development of lipidomics as a research field (https://lipidomicssociety.org/).

VI. CONCLUDING REMARKS

Within the last decade, lipidomics was one of the fastest-growing research fields in life sciences, and the development of new analytical methods accompanied by the availability of new mass spectrometry equipment had a tremendous impact on this evolution. In this respect, the shift from low- toward high-resolution mass spectrometry is particularly worth mentioning, because this phenomenon runs in parallel to the development of the whole field. Although high mass resolution and the resulting accurate mass are very important ingredients for improving the identification certainty of lipids, there are certain natural limitations which cannot be overcome by high-resolution mass spectrometry alone. Therefore, a healthy mix of analytical devices (chromatography, fragmentation techniques, etc.) helps to cope when one is lost in the seemingly overwhelming jungle of lipid isomerism, but high mass resolving power is the natural ally that paves the road for separating these isomers from their isobars in the first place.

ABBREVIATIONS

HPLC: high-performance liquid chromatography
LC-MS: liquid chromatography-mass spectrometry
MALDI: matrix-assisted laser desorption ionization
Q-TOF: quadrupole time-of-flight
TOF: time-of-flight
UHPLC: ultra-high-performance liquid chromatography

ACKNOWLEDGMENTS

This work was supported by the Austrian Federal Ministry of Education, Science and Research, grant number BMWFW-10.420/0005-WF/V/3c/2017.
The Generalized Solutions of the nth Order Cauchy–Euler Equation

In this paper, we use the Laplace transform technique to examine the generalized solutions of the nth order Cauchy–Euler equations. By interpreting the equations in a distributional way, we found that whether their solutions are of classical, weak, or distributional type depends on the conditions on their coefficients. To illustrate our findings, some examples are exhibited.

Introduction

The nth order Cauchy–Euler equations

$$a_n t^n y^{(n)}(t) + a_{n-1} t^{n-1} y^{(n-1)}(t) + \cdots + a_1 t y'(t) + a_0 y(t) = 0, \quad (1)$$

where a_0, a_1, ..., a_n are real constant coefficients and t ∈ R, are often one of the first higher order ordinary homogeneous linear differential equations with variable coefficients introduced in an undergraduate level course. Naturally, we discuss the second order Cauchy–Euler equations first. The appropriate form for their solution is y = t^r, where r is a parameter to be resolved. Replacing y with t^r in the Cauchy–Euler equations yields the characteristic polynomial whose roots determine the forms of the general solution (e.g., see the textbooks [1,2]). This same technique can be carried over to solve the higher order Cauchy–Euler equations.

In the framework of distribution theory, R. P. Kanwal [3] classifies the solution types of ordinary homogeneous linear differential equations

$$a_n(t) y^{(n)}(t) + a_{n-1}(t) y^{(n-1)}(t) + \cdots + a_1(t) y'(t) + a_0(t) y(t) = 0, \quad (2)$$

where the coefficient functions a_0(t), a_1(t), ..., a_n(t) are infinitely differentiable and t ∈ R. The types can be explained as follows. The solution is a classical solution if it is at least n times continuously differentiable, so that the differentiation in Equation (2) can be carried out in the ordinary sense with an identity as the result. The solution is a weak solution if it is less than n times continuously differentiable and thus does not satisfy Equation (2) in the ordinary sense but in the weak or distributional sense. The solution is a distributional solution if it is a singular distribution satisfying Equation (2) in the weak sense. All of these are referred to as generalized solutions. It is well known that the normal form of Equation (2) does not have weak or distributional solutions, but only classical ones.

Of particular interest are the singular distributions appearing as a finite series of the Dirac delta function and its derivatives. They can arise as distributional solutions for certain classes of ordinary differential equations with singular coefficients (see J. Wiener [4] in 1982). The applications of distribution theory to differential equations have been examined by L. Schwartz [5] and A. H. Zemanian [6]. In 1983, J. Wiener and S. M. Shah [7] provided an overview of research in the distributional field and proposed a unified way of investigating both distributional and entire solutions of some classes of linear ordinary differential equations. Many mathematicians have also studied distributional solutions in the field of the theory of distributions, as can be seen in [8–12]. A. Kananthai [13] in 1999 considered certain third order Cauchy–Euler equations (Equation (3)), in which m is an integer and t ∈ R. He constructed a formula for m corresponding to each type of generalized solution of Equation (3), which are Laplace transformable. In 2017, A. Liangprom and K. Nonlaopon [14] extended the same study to certain fourth order Cauchy–Euler equations, a natural extension of Equation (3).
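As a brief reminder of the classical substitution mentioned at the beginning of the Introduction (standard textbook material, added here for orientation and not part of the original derivation), substituting y = t^r into the second order Cauchy–Euler equation produces its characteristic polynomial:

```latex
% Substituting y = t^r (t > 0) into the second order Cauchy-Euler equation
%   a_2 t^2 y''(t) + a_1 t y'(t) + a_0 y(t) = 0
% uses y' = r t^{r-1} and y'' = r(r-1) t^{r-2}, so that
\[
  a_2 t^2\, r(r-1)t^{r-2} + a_1 t\, r t^{r-1} + a_0 t^r
  = \bigl(a_2\, r(r-1) + a_1 r + a_0\bigr)\, t^r = 0 ,
\]
% and the roots of the characteristic polynomial
\[
  a_2\, r(r-1) + a_1 r + a_0 = 0
\]
% determine the form of the classical general solution on t > 0.
```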
The result for the general nth order Cauchy–Euler equations in this form was finally established by A. Sangsuwan, K. Nonlaopon, and S. Orankitjaroen [15] one year later. In 2018, P. Jodnok and K. Nonlaopon [16] presented the generalized solutions of the fifth order Cauchy–Euler equations (Equation (4)), where a_0, a_1, ..., a_4 are real constants and t ∈ R. Depending on the values of a_0, a_1, ..., a_4, they showed that the solutions of Equation (4) are either weak solutions or distributional solutions. In 2015, S. Nanta [17] studied the distributional solutions of the nth order Cauchy–Euler equations

$$a_n t^n y^{(n)}(t) + a_{n-1} t^{n-1} y^{(n-1)}(t) + \cdots + a_1 t y'(t) + a_0 y(t) = 0, \quad (5)$$

where a_i, i = 0, 1, ..., n, are real constants, using the Fourier transform. She found that the type of solutions of Equation (5) depends on the conditions on a_i. Here we aim to seek the generalized solutions of the nth order Cauchy–Euler equations of the form of Equation (5) in the space of right-sided distributions. The solutions are obtained by applying the Laplace transform technique. Our work is an improved version of that of A. Sangsuwan et al. [15]. The present paper is arranged into three sections. In Section 2, we provide related definitions and lemmas necessary to obtain our main results. We then proceed to prove our results, together with supporting examples, in Section 3.

Preliminaries

The space D′ (the space of distributions) is the dual space of D, the space of testing functions. The value of a distribution T acting on a testing function φ(t) is written as ⟨T, φ⟩ or ⟨T, φ(t)⟩, and ⟨T, φ(t)⟩ ∈ C, where C is the set of complex numbers. The distributions that are most useful are those generated by locally integrable functions. In fact, every locally integrable function f(t) generates a distribution, which is defined by

$$\langle f, \varphi \rangle = \int_{-\infty}^{\infty} f(t)\,\varphi(t)\,dt .$$

Definition 2. The kth order derivative of a distribution T is defined by

$$\langle T^{(k)}, \varphi \rangle = (-1)^k \langle T, \varphi^{(k)} \rangle .$$

Definition 3. Let f(t) be a locally integrable function which satisfies the following conditions: (i) f(t) = 0 for all t < 0; (ii) there exists a real number c such that e^{-ct} f(t) is absolutely integrable over R. Then the Laplace transform of f(t) is defined by

$$F(s) = \mathcal{L}\{f(t)\} = \int_{0}^{\infty} f(t)\, e^{-st}\,dt ,$$

where s is a complex variable. Furthermore, if f is continuous, then its Laplace transform F(s) is analytic on the half-plane Re(s) > σ_a, where σ_a is an abscissa of absolute convergence for L{f(t)}. Recall that the Laplace transform G(s) of a locally integrable function g(t) that satisfies the conditions of Definition 3 exists for Re(s) > σ_a.

Definition 5. Let f(t) be a distribution satisfying the following properties: (i) f is a right-sided distribution, that is, f ∈ D′_R; (ii) there exists a real number c for which e^{-ct} f(t) is a tempered distribution. The Laplace transform of a right-sided distribution f(t) satisfying (ii) is defined by

$$F(s) = \mathcal{L}\{f(t)\} = \langle e^{-ct} f(t),\, X(t)\, e^{-(s-c)t} \rangle , \quad (9)$$

where X(t) is an infinitely differentiable function with support bounded on the left, which equals 1 over a neighborhood of the support of f(t). For Re(s) > c, X(t)e^{-(s-c)t} is a testing function in the space S of testing functions of rapid descent and e^{-ct} f(t) is in the space S′ of tempered distributions. Equation (9) can be reduced to

$$F(s) = \langle f(t),\, e^{-st} \rangle , \quad (10)$$

where Equation (10) possesses the sense given by the right-hand side of Equation (9). Now, F(s) is a function of s defined over the right half-plane Re(s) > c. A. H. Zemanian [6] proved that F(s) is an analytic function in the region of convergence Re(s) > σ_1, where σ_1 is the abscissa of convergence, for which e^{-ct} f(t) ∈ S′ for some real c > σ_1. For more details about the Laplace transform of distributions, see [18,19] and the references therein.
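As a connecting check (a standard computation, not taken from the original text), the Laplace transform of the kth derivative of the Dirac delta follows directly from Equation (10) and Definition 2, which is exactly the property quoted next:

```latex
% Laplace transform of delta^{(k)} from Equation (10) and Definition 2:
\[
  \mathcal{L}\{\delta^{(k)}(t)\}
  = \langle \delta^{(k)}(t),\, e^{-st} \rangle
  = (-1)^k \Bigl\langle \delta(t),\, \tfrac{d^k}{dt^k} e^{-st} \Bigr\rangle
  = (-1)^k (-s)^k e^{-st}\Big|_{t=0}
  = s^k ,
\]
% valid for all s, which is property (iii) of Example 1 quoted below.
```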
(iii) L δ (k) (t) = s k , −∞ < Re(s) < ∞. Lemma 1. Let ψ(t) be an infinitely differentiable function. Then and We refer the reader to [3] for a proof of Lemma 1. A useful formula that follows from Equation (11), for any monomial ψ(t) = t n , is with coefficients a i (t) ∈ C n and a n (0) = 0 has a solution of order k (order of distribution Equation (15)), then we have Conversely, if k is the smallest non-negative integer root of Equation (16), there exists a kth order solution of Equation (15) at t = 0. We refer the reader to [4] for a proof of Lemma 2. Main Results Equipped with the Laplace transform technique, we are now ready to prove our main results. Theorem 1. Consider the nth order Cauchy-Euler equations of the form a n t n y (n) (t) + a n−1 t n−1 y (n−1) (t) + · · · + a 1 ty (t) + a 0 y(t) = 0, where a i , i = 0, 1, 2, . . . , n are real constants, a n = 0, n is any integers with n ≥ 2 and t ∈ R. The types of Laplace transformable solutions in D R of Equation (17) depend on the value of a i , and are given by the following cases: (i) If there exists a non-negative integer k such that then there exists a distributional solution of Equation (17), which is a singular distribution of the Dirac delta function and its derivatives. (ii) If there exists a non-negative integer k less than or equal to n such that then there exists a weak solution of Equation (17). Moreover, the solution is continuous if k is greater than or equal to 1. (iii) If there exists a positive integer k such that then there exists a classical solution of Equation (17). Proof. We rewrite Equation (17) in brief as where a n = 1. Applying Laplace transform to Equation (21) with a notation L {y} = Y(s), we now refer to properties (iv) and (v) in Example 1 to get Consider a solution of Equation (23) in a simple form Y(s) = s r , where r is a real constant that must be determined. Replacing Y (i) (s) for i = 1, 2, 3, . . . , n in Equation (23) gives We now examine the proposed three cases of the value r. Case (i). If r is a non-negative integer, then substituting r = k for k ∈ N ∪ {0} into Equation (24), we obtain Equation (18). Thus, if the condition of Equation (18) holds, then the solution of Equation (21) is Y(s) = s k . Obviously Y(s) is analytic over the whole s-plane. Taking inverse Laplace transform to Y(s) and applying property (iii) in Example 1, we obtain the distributional solutions of Equation (17) in the form Case (ii). If r is a negative integer which is no less than −(n + 1), then substituting r = −(k + 1) for k ∈ { 0, 1, 2, . . . , n } into Equation (24), we obtain Thus, if the condition of Equation (19) holds, then the solution of Equation (21) is Y(s) = s −(k+1) . Now we take the inverse Laplace transform to Y(s), applying property (i) in Example 1, and we obtain the weak solutions of Equation (17), since k ≤ n in the form Observe that the solution is continuous for k ≥ 1. Case (iii). If r is a negative integer less than −(n + 1), then a substitution of r = −(n + k + 1) for k ∈ N into Equation (24), gives n ∑ i=0 (−1) i i! −(n + k + 1) + i i a i = n ∑ i=0 (−1) i i! (−(n + k + 1) + i)(−(n + k + 1) + i − 1) · · · (−(n + k + 1) Thus, if the condition of Equation (20) holds, then the solution of Equation (21) is Y(s) = s −(n+k+1) . Now we take the inverse Laplace transform to Y(s) and, applying property (i) in Example 1, we obtain the classical solutions of Equation (17) because k ≥ 1 and the solutions are Theorem 2. 
The distributional solution of the nth order Cauchy-Euler equations of the form
\[
a_n t^n y^{(n)}(t) + a_{n-1} t^{n-1} y^{(n-1)}(t) + \cdots + a_1 t y'(t) + a_0 y(t) = 0,
\]
Remark 3. If a_1 = a_2 = \cdots = a_n = 1 and a_0 = m, then the condition in Theorem 1 is identical to that in [15]. Author Contributions: All authors contributed equally to this article. They read and approved the final manuscript. Funding: This research received no external funding.
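To illustrate case (i) of Theorem 1 concretely, the sketch below applies the same Laplace transform technique to the particular equation t^2 y'' + 4ty' + 2y = 0, which is our own illustrative choice rather than one of the paper's examples. It uses the standard right-sided rules L{y^(k)} = s^k Y(s) and L{t f(t)} = -F'(s), so that L{t^k y^(k)} = (-1)^k d^k/ds^k (s^k Y(s)); the transformed equation collapses to s^2 Y''(s) = 0, whose polynomial solutions invert to Dirac deltas.

```python
import sympy as sp

s = sp.symbols('s')
Y = sp.Function('Y')

# Illustrative equation: t^2 y'' + 4 t y' + 2 y = 0.
# With L{y^(k)} = s^k Y(s) and L{t f(t)} = -F'(s) for right-sided distributions,
# each term transforms as L{t^k y^(k)} = (-1)^k d^k/ds^k ( s^k Y(s) ).
transformed = (sp.diff(s**2*Y(s), s, 2)        # from t^2 y''
               - 4*sp.diff(s*Y(s), s)          # from 4 t y'
               + 2*Y(s))                       # from 2 y

print(sp.simplify(transformed))                # s**2 * Y''(s): lower-order terms cancel
print(sp.dsolve(sp.Eq(transformed, 0), Y(s)))  # Y(s) = C1 + C2*s

# Y(s) = C1 + C2*s inverts to y(t) = C1*delta(t) + C2*delta'(t), a singular
# distributional solution as in case (i) of Theorem 1 (here both k = 0 and k = 1
# satisfy the case (i) condition for this choice of coefficients).
```

A direct check with the identities t delta'(t) = -delta(t) and t^2 delta''(t) = 2 delta(t) confirms that delta and delta' indeed annihilate the left-hand side of the illustrative equation.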
K-Semistability of cscK Manifolds with Transcendental Cohomology Class We prove that constant scalar curvature Kähler (cscK) manifolds with transcendental cohomology class are K-semistable, naturally generalising the situation for polarised manifolds. Relying on a recent result by R. Berman, T. Darvas and C. Lu regarding properness of the K-energy, it moreover follows that cscK manifolds with discrete automorphism group are uniformly K-stable. As a main step of the proof we establish, in the general Kähler setting, a formula relating the (generalised) Donaldson–Futaki invariant to the asymptotic slope of the K-energy along weak geodesic rays. X , such questions are closely related to the Yau-Tian-Donaldson (YTD) conjecture [27,49,54]: A polarised algebraic manifold (X, L) is K-polystable if and only if the polarisation class c 1 (L) admits a Kähler metric of constant scalar curvature. This conjecture was recently confirmed in the Fano case, i.e. when L = −K X , cf. [16][17][18]52]. In this important special case, a cscK metric is nothing but a Kähler-Einstein metric. For general polarised cscK manifolds, the "if" direction of the YTD conjecture was initially proven by Mabuchi in [37], see also [5]. Prior to that, several partial results had been obtained by Donaldson [28] and Stoppa [46], both assuming that c 1 (L) contains a cscK metric. For transcendental classes very little is currently known about the validity of a correspondence between existence of cscK metrics and stability in the spirit of the YTD conjecture. Moreover, from a differential geometric point of view, there is no special reason to restrict attention to Kähler manifolds with associated integral (or rational) cohomology classes, which are then automatically of the form α = c 1 (L) for some ample (Q)-line bundle L over X . In order to extend the study of stability questions to a transcendental setting, recall that there is an intersection theoretic description of the Donaldson-Futaki invariant, cf. [39,53]. As first pointed out by Berman [4], a straightforward generalised notion of K-stability in terms of cohomology can thus be defined and a version of the YTD conjecture can be made sense of in this setting. The setup is explained in detail in Sect. 3. Our main goal is to establish the following result: Theorem A Let (X, ω) be a compact Kähler manifold and let α := [ω] ∈ H 1,1 (X, R) be the corresponding Kähler class. For precise definitions we refer to the core of the paper. As an immediate consequence of [6, Theorem 1.1] and the above Theorem A (i) we obtain the following corollary, which is a main motivation for our work (see also Remark 1.2). The corresponding statement in the case of a polarised manifold was first obtained by Donaldson in [28], as an immediate consequence of the lower bound for the Calabi functional. See also [43,47] for related work on slope semistability. The approach taken in this paper should however be compared to, e.g. [42] and [4,5,12,13], where K-semistability is derived using so called "Kempf-Ness type" formulas. By analogy to the above papers, our proof relies on establishing such formulas valid also for transcendental classes (see Theorems B and C), in particular relating the asymptotic slope of the K-energy along weak geodesic rays to a natural generalisation of the Donaldson-Futaki invariant. This provides a link between K-semistability (resp. uniform K-stability) and boundedness (resp. coercivity) of the Mabuchi functional, key to establishing the stability results of Theorem A. 
An underlying theme of the paper is the comparison to the extensively studied case of a polarised manifold, which becomes a "special case" in our setting. Notably, it is then known (see, e.g. [4,5,12,13]) how to establish the sought Kempf-Ness type formulas using Deligne pairings; a method employed by Phong-Ross-Sturm in [42] (for further background on the Deligne pairing construction, cf. [30]). Unfortunately, such an approach breaks down in the case of a general Kähler class. In this paper, we circumvent this problem by a pluripotential approach, making use of a certain multivariate variant ϕ 0 , . . . , ϕ n (θ 0 ,...θ n ) of the Monge-Ampère energy functional, which turns out to play a role analogous to that of the Deligne pairing in arguments of the type [42]. The Deligne pairing approach should also be compared to [26,50] using Bott-Chern forms (see, e.g. 2.4 and [44, Example 5.6]). [8,Theorem 1.2] we in fact further see that cscK manifolds (X, α) with discrete automorphism group are uniformly K-stable. The above thus confirms one direction of the Yau-Tian-Donaldson conjecture, here referring to its natural generalisation to the case of arbitrary compact Kähler manifolds with discrete automorphism group, see Sect. 5.2. Generalised K-Semistability We briefly explain the framework we have in mind. As a starting point, there are natural generalisations of certain key concepts to the transcendental setting, a central notion being that of test configurations. First recall that a test configuration for a polarised manifold (X, L), in the sense of Donaldson, cf. [27], is given in terms of a C * -equivariant degeneration (X , L) of (X, L). It can be seen as an algebrogeometric way of compactifying the product X × C * → X . Note that test configurations in the sense of Donaldson are now known (at least in the case of Fano manifolds, see [36]) to be equivalent to test configurations in the sense of Tian [49]. As remarked in [4], a straightforward generalisation to the transcendental setting can be given by replacing the line bundles with (1, 1)-cohomology classes. In the polarised setting we would thus consider (X , c 1 (L)) as a "test configuration" for (X, c 1 (L)), by simply replacing L and L with their respective first Chern classes. The details of how to formulate a good definition of such a generalised test configuration have, however, not yet been completely clarified. The definition given in this paper is motivated by a careful comparison to the usual polarised case, where we ensure that a number of basic but convenient tools still hold, cf. Sect. 3. In particular, our notion of K-semistability coincides precisely with the usual one whenever we restrict to the case of an integral class, cf. Proposition 3.14. We will refer to such generalised test configurations as cohomological. Definition 1.3 (Cohomological test configuration) A cohomological test configuration for (X, α) is a pair (X , A) where X is a test configuration for X (see Definition 3.2) and A ∈ H 1,1 BC (X , R) C * is a C * -invariant (1, 1)-Bott-Chern cohomology class whose image under the canonical C * -equivariant isomorphism is p * 1 α, see (6). Here p 1 : X × P 1 → X denotes the first projection. Remark 1.4 Note that the definition is given directly over P 1 so that we consider the Bott-Chern cohomology on a compact Kähler normal complex space. In the polarised case, defining a test configuration over C or over P 1 is indeed equivalent, due to the existence of a natural C * -equivariant compactification over P 1 . 
In practice, it will be enough to consider the situation when the total space X is smooth and dominates X × P 1 , with μ : X → X × P 1 the corresponding canonical C * -equivariant bimeromorphic morphism. Moreover, if (X , A) is a cohomological test configuration for (X, α) with X as above, then A is always of the form , for a unique R-divisor D supported on the central fibre X 0 , cf. Proposition 3.10. A cohomological test configuration can thus be characterised by an R-divisor, clarifying the relationship between the point of view of R-divisors and our cohomological approach to "transcendental K-semistability". A straightforward generalisation of the Donaldson-Futaki invariant can be defined based on the intersection theoretic characterisation of [39,53]. Indeed, we define the Donaldson-Futaki invariant associated with a cohomological test configuration (X , A) for (X, α) as the following intersection number computed on the (compact) total space X . Here V andS are cohomological constants denoting the Kähler volume and mean scalar curvature of (X, α), respectively. Finally, we say that (X, α) is K-semistable if DF(X , A) 0 for all cohomological test configurations (X , A) for (X, α) where the class A is relatively Kähler, i.e. there is a Kähler form β on P 1 such that A + π * β is Kähler on X . Generalised notions of (uniform) K-stability are defined analogously. Transcendental Kempf-Ness Type Formulas As previously stated, a central part of this paper consists in establishing a Kempf-Ness type formula connecting the Donaldson-Futaki invariant (in the sense of (1)) with the asymptotic slope of the K-energy along certain weak geodesic rays. In fact, we first prove the following result, which is concerned with asymptotics of a certain multivariate analogue of the Monge-Ampère energy, cf. Sect. 2.2 for its definition. It turns out to be very useful for establishing a similar formula for the K-energy (cf. Remark 1.5), but may also be of independent interest. In what follows, we will work on the level of potentials and refer the reader to Sect. 4 for precise definitions. Theorem B Let X be a compact Kähler manifold of dimension n and let θ i , 0 i n, be closed (1, 1)-forms on X . Let (X i , A i ) be cohomological test configurations for respectively, the asymptotic slope of the multivariate energy functional ·, . . . , · := ·, . . . , · (θ 0 ,...,θ n ) is well defined and satisfies as t → +∞. See Sect. 4.1 for the definition of the above intersection number. Remark 1.5 In the setting of Hermitian line bundles, the above multivariate energy functional naturally appears as the difference (or quotient) of metrics on Deligne pairings. Moreover, note that the above theorem applies to, e.g. Aubin's J-functional, the Monge-Ampère energy functional E and its 'twisted' version E Ric(ω) but not to the K-energy M. Indeed, the expression for M(ϕ t ) on the form ϕ t 0 , . . . , ϕ t n (θ 0 ,...,θ n ) involves the metric log(ω + dd c ϕ t ) n on the relative canonical bundle K X /P 1 , which blows up close to X 0 , cf. Sect. 5. As observed in [13], it is however possible to find functionals of the above form that 'approximate' M in the sense that their asymptotic slopes coincide, up to an explicit correction term that vanishes precisely when the central fibre X 0 is reduced. This is a key observation. We further remark that such a formula (2) cannot be expected to hold unless the test configurations (X i , A i ) and the rays (ϕ t i ) are compatible in a certain sense. 
This is the role of the notion of C ∞ -compatibility (as well as the C 1,1 -compatibility used in Theorem C). These notions may seem technical, but in fact mimic the case of a polarised manifold, where the situation is well understood in terms of extension of metrics on line bundles, cf. Sect. 4.1. As a further important consequence of the above Theorem B we deduce that if (X , A) is a relatively Kähler cohomological test configuration for (X, α), then for each smooth ray (ϕ t ) t 0 , C ∞ -compatible with (X , A), we have the inequality This is the content of Theorem 5.1, and should be compared to the discussion in the introduction of [42]. As an important special case, this inequality can be seen to hold in the case of a weak geodesic ray associated with the given test configuration (X , A), cf. Sect. 4.1 for its construction. The inequality (3) is moreover enough to conclude the proof of Theorem A, as explained in Sect. 5.2. Using ideas from [13] adapted to the present setting, we may further improve on formula (3) and compute the precise asymptotic slope of the K-energy. In this context, it is natural to consider the non-Archimedean Mabuchi functional M NA (X , A) DF(X , A) with equality precisely when the central fibre X 0 is reduced. We then have the following result, special cases of which have been obtained by previous authors in various different situations and generality. Theorem C Let (X , A) be a smooth, relatively Kähler cohomological test configuration for (X, α) dominating X ×P 1 . For each subgeodesic ray (ϕ t ) t 0 , C 1,1 -compatible with (X , A), the following limit is well defined and satisfies as t → +∞. In particular, this result holds for the weak geodesic ray associated with (X , A), constructed in Lemma 4.6. Remark 1.6 When the class A on X is merely relatively nef it is possible to obtain similar statements, but this necessitates much more involved arguments. Either way, the above result is more than enough for our purposes here, e.g. for proving the main result, Theorem A. For polarised manifolds (X, L) and smooth subgeodesic rays (ϕ t ) t 0 , this precise result was proven in [13] using Deligne pairings, as pioneered by Phong-Ross-Sturm in [42] (cf. also Paul-Tian [40,41]). A formula in the same spirit has also been obtained for the so-called Ding functional when X is a Fano variety, see [5]. However, it appears as though no version of this result was previously known in the case of non-polarised manifolds. Structure of the Paper In Sect. 2 we fix our notation for energy functionals and subgeodesic rays. In particular, we introduce the multivariate energy functionals ·, . . . , · (θ 0 ,...,θ n ) , which play a central role in this paper. In Sect. 3 we introduce our generalised notion of cohomological test configurations and K-semistability. In the case of a polarised manifold (X, L), we compare this notion to the usual algebraic one. We also discuss classes of cohomological test configurations for which it suffices to test K-semistability, and establish a number of basic properties. In Sect. 4 we discuss transcendental Kempf-Ness-type formulas and prove Theorem B. This involves introducing natural compatibility conditions between a ray (ϕ t ) and a cohomological test configuration (X , A) for (X, α). As a useful special case, we discuss the weak geodesic ray associated with (X , A). In Sect. 5 we finally apply Theorem B to yield a weak version of Theorem C, from which we in turn deduce our main result, Theorem A. 
By an immediate adaptation of techniques from [13] we then compute the precise asymptotic slope of the Mabuchi functional, thus establishing the full Theorem C. Notation and Basic Definitions Let X be a compact complex manifold of dim C X = n equipped with a given Kähler form ω, i.e. a smooth real closed positive (1, 1)-form on X . Denote the Kähler class [ω] ∈ H 1,1 (X, R) by α. In order to fix notation, let Ric(ω) = −dd c log ω n be the Ricci curvature form, where dd c := √ −1 2π ∂∂ is normalised so that Ric(ω) represents the first Chern class c 1 (X ). Its trace S(ω) := n Ric(ω) ∧ ω n−1 ω n is the scalar curvature of ω. The mean scalar curvature is the cohomological constant given byS where V := X ω n := (α n ) X is the Kähler volume. We say that ω is a constant scalar curvature Kähler (cscK) metric 2 if S(ω) is constant (equal toS) on X. Throughout the paper we work on the level of potentials, using the notation of quasi-plurisubharmonic (quasi-psh) functions. To this end, we let θ be a closed (1, 1)form on X and denote, as usual, by PSH(X, θ) the space of θ -psh functions ϕ on X , i.e. the set of functions that can be locally written as the sum of a smooth and a plurisubharmonic function, and such that θ ϕ := θ + dd c ϕ 0 in the weak sense of currents. In particular, if ω is our fixed Kähler form on X , then we write for the space of Kähler potentials on X . As a subset of C ∞ (X ) it is convex and consists of strictly ω-psh functions. It has been extensively studied (for background we refer the reader to, e.g. [10] and references therein). Recall that a θ -psh function is always upper semi-continuous (usc) on X , thus bounded from above by compactness. Moreover, if ϕ i ∈ PSH(X, θ) ∩ L ∞ loc , 1 i p n, it follows from the work of Bedford-Taylor [2,3] that we can give meaning to the product p i=1 (θ + dd c ϕ i ), which then defines a closed positive ( p, p)-current on X . As usual, we then define the Monge-Ampère measure as the following probability measure, given by the top wedge product MA(ϕ) := V −1 (ω + dd c ϕ) n . -if ϕ 0 is another θ i -psh function in PSH(X, θ) ∩ L ∞ loc , then we have a 'change of function' property Demanding that the above properties hold necessarily leads to the following definition of Deligne functionals, that will provide a useful terminology for this paper. Definition 2.1 Let θ 0 , . . . , θ n be closed (1, 1)-forms on X . Define a multivariate energy functional ·, . . . , Remark 2.2 The multivariate energy functional ·, . . . , · (θ 0 ,...,θ n ) can also be defined on C ∞ (X ) × · · · × C ∞ (X ) by the same formula. In Sects. 4 and 5 it will be interesting to consider both the smooth case and the case of locally bounded θ i -psh functions. Using integration by parts one can check that this functional is indeed symmetric. Proof Since every permutation is a composition of transpositions it suffices to check the sought symmetry property for transpositions σ := σ j,k exchanging the position of j, k ∈ {0, 1, . . . , n}. Suppose for simplicity of notation that j < k and write where in the last step we used integration by parts and write (with factors θ j and θ t k omitted). The case j > k follows in the exact same way, with obvious modifications to the above proof. Example 2.4 As previously remarked, note that the above functionals can be written using the Deligne functional formalism. Indeed, if θ is a closed (1, 1)-form on X , ω is a Kähler form on X and ϕ is an ω-psh function on X , then and Compare also [44, Example 5.6] on Bott-Chern forms. 
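For the reader's convenience we record, in display form, the mean scalar curvature constant and a multivariate energy (Deligne-type) functional consistent with the symmetry and change-of-function properties described above; the normalisation is our reading of the standard formalism and should be checked against the sources cited in this section.
\[
\bar S \;=\; \frac{n\,\big(c_1(X)\cdot \alpha^{\,n-1}\big)_X}{(\alpha^{\,n})_X},
\qquad
\langle \varphi_0,\dots,\varphi_n\rangle_{(\theta_0,\dots,\theta_n)}
\;:=\;\sum_{j=0}^{n}\int_X \varphi_j\,(\theta_0+dd^c\varphi_0)\wedge\cdots\wedge(\theta_{j-1}+dd^c\varphi_{j-1})\wedge\theta_{j+1}\wedge\cdots\wedge\theta_n .
\]
Taking all theta_i = omega and all varphi_i = varphi, the sum reduces to \(\sum_{j}\int_X \varphi\, \omega_\varphi^j\wedge\omega^{n-j}\), which up to the factor (n+1)V is the Monge-Ampère energy E(varphi) appearing in Example 2.4 and Corollary 4.12.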
Subgeodesic Rays Let (ϕ t ) t 0 ⊂ PSH(X, ω) be a ray of ω-psh functions. Following a useful point of view of Donaldson [27] and Semmes [45], there is a basic correspondence between the family (ϕ t ) t 0 and an associated S 1 -invariant function on X ׯ * , where¯ * ⊂ C denotes the punctured unit disc. We denote by τ the coordinate on . Explicitly, the correspondence is given by where the sign is chosen so that t → +∞ corresponds to τ := e −t+is → 0. The function restricted to a fibre X × {τ } thus corresponds precisely to ϕ t on X . In the direction of the fibres we thus have p * 1 ω + dd c x 0 (in the sense of currents, letting p 1 : X × → X denote the first projection). We will use the following standard terminology, motivated by the extensive study of (weak) geodesics in the space H, see, e.g. [9,15,20,27,45]. Definition 2.7 We say that a function ϕ is C 1,1 -regular if dd c ϕ ∈ L ∞ loc , and we set Recall that a C 1,1 -regular function is automatically C 1,a -regular for all 0 < a < 1. On the other hand, this condition is weaker than C 1,1 -regularity (i.e. bounded real Hessian). Proposition 2.8 Let θ 0 , . . . , θ n be closed (1, 1)-forms on X and let (ϕ t i ) t 0 be a smooth ray of smooth functions. Let τ := e −t+is and consider the reparametrised ray (ϕ τ i ) τ ∈¯ * . Denoting by i the corresponding S 1 -invariant function on X × * , we have where X denotes fibre integration, i.e. pushforward of currents. Proof The result follows from a computation relying on integration by parts and is an immediate adaptation of, for instance, [7, Proposition 6.2]. As a particular case of the above, we obtain the familiar formulas for the second-order variation of E and E θ , given by is affine along weak geodesics, and convex along subgeodesics. The K-Energy and the Chen-Tian Formula Let ω be a Kähler form on X and consider any path (ϕ t ) t 0 in the space H of Kähler potentials on X . The Mabuchi functional (or K-energy) M : H → R is then defined by its Euler-Lagrange equation Xφ It is indeed independent of the path chosen, and the critical points of the Mabuchi functional are precisely the cscK metrics, when they exist. By the Chen-Tian formula [14] it is possible to write the Mabuchi functional as a sum of an "energy" and an "entropy" part. More precisely, with our normalisations we have where the latter term is the relative entropy of the probability measure μ := ω n ϕ /V with respect to the reference measure μ 0 := ω n /V . Recall that the entropy takes values in [0, +∞] and is finite if μ/μ 0 is bounded. It can be seen to be always lower semi-continuous (lsc) in μ. Following Chen [14] (using the formula (5)) we will often work with the extension M : H 1,1 → R of the Mabuchi functional to the space of ω-psh functions with bounded Laplacian. This is a natural setting to consider, since weak geodesic rays with bounded Laplacian are known to always exist, cf. [9,15,20,21] as well as Lemma 4.6. For later use, we also state the following definition. Definition 2.9 The Mabuchi K-energy functional is said to be coercive if there are constants δ, C > 0 such that We further recall that the Mabuchi functional is convex along weak geodesic rays, as was recently established by [6], see also [19]. As a consequence of this convexity, the Mabuchi functional is bounded from below (in the given Kähler class) whenever α contains a cscK metric, see [29,35] for a proof in the polarised case and [6] for the general Kähler setting. 
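In the notation just introduced, the relative entropy term in the Chen-Tian formula and the coercivity condition of Definition 2.9 can be written explicitly as follows; the coercivity inequality is also the form invoked verbatim in the proof of Theorem A in Sect. 5.2.
\[
H(\mu\,|\,\mu_0) \;=\; \int_X \log\!\Big(\frac{d\mu}{d\mu_0}\Big)\, d\mu ,
\qquad
M(\varphi) \;\geq\; \delta\, J(\varphi) \;-\; C \quad\text{for all } \varphi \in \mathcal H .
\]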
Cohomological Test Configurations and K-Semistability In this section we introduce a natural generalised notion of test configurations and K-semistability of (X, α) that has meaning even when the class α ∈ H 1,1 (X, R) is non-integral (or non-rational), i.e. when α is not necessarily of the form c 1 (L) for some ample (Q)-line bundle L on X . As remarked by Berman in [4], it is natural to generalise the notion of test configuration in terms of cohomology classes. In the polarised setting, the idea is to consider (X , c 1 (L)) as a "test configuration" for (X, c 1 (L)), by simply replacing L and L with their respective first Chern classes. This approach is motivated in detail below. Moreover, a number of basic and useful properties will be established, and throughout, this generalisation will systematically be compared to the original notion of algebraic test configuration (X , L) for (X, L), introduced by Donaldson in [27]. Remark 3.1 Much of the following exposition goes through even when the cohomology class α is not Kähler. Unless explicitly stated otherwise, we thus assume that α = [θ ] for some closed (1, 1)-form θ on X . Test Configurations for X We first introduce the notion of test configuration X for X , working directly over P 1 . For the sake of comparison, recall the usual concept of test configuration for polarised manifolds, see, e.g. [12,48]. In what follows, we refer to [31] for background on normal complex spaces. Definition 3.2 A test configuration X for X consists of -a normal compact Kähler complex space X with a flat morphism π : X → P 1 -a C * -action λ on X lifting the canonical action on P 1 -a C * -equivariant isomorphism The isomorphism 6 gives an open embedding of X × (P 1 \{0}) into X , hence induces a canonical C * -equivariant bimeromorphic map μ : X X × P 1 . We say that X dominates X × P 1 if the above bimeromorphic map μ is a morphism. Taking X to be the normalisation of the graph of X X × P 1 we obtain a C * -equivariant bimeromorphic morphism ρ : X → X with X normal and dominating X ×P 1 . In the terminology of [12] such a morphism ρ is called a determination of X . In particular, a determination of X always exists. By the above considerations we will often, up to replacing X by X , be able to assume that the given test configuration for X dominates X × P 1 . Moreover, any test configuration X for X can be dominated by a smooth test configuration X for X (where we may even assume that X 0 is a divisor of simple normal crossings). Indeed, by Hironaka (see [33,Theorem 45] for the precise statement concerning normal complex spaces) there is a C * -equivariant proper bimeromorphic map μ : X → X , with X smooth, such that X 0 has simple normal crossings and μ is an isomorphism outside of the central fibre X 0 . As a further consequence of the isomorphism (6), note that if is a function on X , then its restriction to each fibre X τ X , τ ∈ P 1 \{0} identifies with a function on X . The function thus gives rise to a family of functions (ϕ t ) t 0 on X , recalling our convention of reparametrising so that t := − log |τ |. Remark 3.3 When X is projective (hence algebraic), the GAGA principle shows that the usual (i.e. algebraic, and normal) test configurations of X correspond precisely to the test configurations (in our sense of Definition 3.2) with X projective. Cohomological Test Configurations for (X, α) We now introduce a natural generalisation of the usual notion of algebraic test configuration (X , L) for a polarised manifold (X, L). 
This following definition involves the Bott-Chern cohomology on normal complex spaces, i.e. the space of locally dd cexact (1, 1)-forms (or currents) modulo globally dd c -exact (1, 1)-forms (or currents). The Bott-Chern cohomology is finite dimensional and the cohomology classes can be pulled back. Moreover, H 1,1 BC (X , R) coincides with the usual Dolbeault cohomology H 1,1 (X , R) whenever X is smooth. See, e.g. [11] for background. Definition 3.4 A cohomological test configuration for is p * 1 α. Here p 1 : X × P 1 → X is the first projection. Definition 3. 5 We say that a test configuration (X , A) for (X, α) is smooth if the total space X is smooth. In case α ∈ H 1,1 (X, R) is Kähler, we say that (X , A) is relatively Kähler if the cohomology class A is relatively Kähler, i.e. there is a Kähler form β on P 1 such that A + π * β is Kähler on X . Exploiting the discussion following Definition 3.2 we in practice restrict attention to the situation when (X , A) is a smooth (cohomological) test configuration for (X, α) dominating X × P 1 , with μ : X → X × P 1 the corresponding C * -equivariant bimeromorphic morphism. This situation is studied in detail in Sect. 3.4, where we in particular show that the class A ∈ H 1,1 (X , R) is always of the form A = μ * p * 1 α + [D] for a unique R-divisor D supported on the central fibre, cf. Proposition 3.10. It is further natural to ask how the above notion of cohomological test configurations compares to the algebraic test configurations introduced by Donaldson in [27]. On the one hand, we have the following example: is an algebraic test configuration for (X, L) and we letȲ,L and L, respectively, denote the C * -equivariant compactifications over P 1 , then (Ȳ, c 1 (L)) is a cohomological test configuration for (X, c 1 (L)), canonically induced by (Y, L). On the other hand, there is no converse such correspondence. For instance, even if (X, L) is a polarised manifold there are more cohomological test configurations (X , A) for (X, c 1 (L)) than algebraic test configurations (Y, L) for (X, L). However, we show in Proposition 3.14 that such considerations are not an issue in the study of K-semistability of (X, α). The Donaldson-Futaki Invariant and K-Semistability The following generalisation of the Donaldson-Futaki invariant is straightforward, at least when the test configuration is smooth (in general one can use resolution of singularities to make sense of the intersection number below). Definition 3.7 Let (X , A) be a cohomological test configuration for (X, α). The Donaldson-Futaki invariant of (X , A) is We recall that X is assumed to be compact, cf. Definition 3.2, and that K X /P 1 := K X − π * K P 1 denotes the relative canonical divisor. The point is that by results of Wang [53] and Odaka [39] DF(Ȳ, c 1 (L)) coincides with DF(Y, L) whenever (Y, L) is an algebraic test configuration for a polarised manifold (X, L), with Y normal (see the proof of Proposition 3.14). Hence the above quantity is a generalisation of the classical Donaldson-Futaki invariant. The analogue of K-semistability in the context of cohomological test configurations is defined as follows. Definition 3. 8 We say that (X, α) is K-semistable if DF(X , A) 0 for all relatively Kähler test configurations (X , A) for (X, α). Remark 3.9 With the study of K-semistability in mind, we emphasise that the Donaldson-Futaki invariant DF(Y, L) (cf. [39,53]) depends only on Y and c 1 (L). The notion of cohomological test configuration emphasises this fact. 
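For concreteness, we also record the intersection-theoretic expression behind Definition 3.7, in the normalisation of [39,53] and with V and \(\bar S\) the cohomological constants fixed in Sect. 2; this is our reading of the standard formula and should be checked against those references.
\[
\mathrm{DF}(\mathcal X,\mathcal A)\;=\;\frac{\bar S}{(n+1)V}\,\big(\mathcal A^{\,n+1}\big)_{\mathcal X}
\;+\;\frac{1}{V}\,\big(K_{\mathcal X/\mathbb P^1}\cdot \mathcal A^{\,n}\big)_{\mathcal X}.
\]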
In order to further motivate the above definitions, we now introduce a number of related concepts and basic properties that will be useful in the sequel. Test Configurations Characterised by R-Divisors Recall that if (X , L) is an algebraic test configuration for a polarised manifold (X, L) that dominates (X, L) × C, then L = μ * p * 1 L + D for a unique Q-Cartier divisor D supported on X 0 , see [12]. Similarly, the following result characterises the classes A associated with smooth and dominating cohomological test configurations, in terms of R-divisors D supported on the central fibre X 0 . Proposition 3.10 Let (X , A) be a smooth cohomological test configuration for (X, α) dominating X ×P 1 , with μ : X → X ×P 1 the corresponding canonical C * -equivariant bimeromorphic morphism. Then there exists a unique R-divisor D supported on the central fibre X 0 such that Proof Let α := [ω] ∈ H 1,1 (X, R). We begin by proving existence: By hypothesis X dominates X × P 1 via the morphism μ, such that the central fibre decomposes into the strict transform of X ×{0} and the μ-exceptional divisor. We write X 0 = i b i E i , with E i irreducible. Denoting by [E] the cohomology class of E and by p 1 : X × P 1 → X the projection on the first factor, we then have the following formula: Lemma 3.11 Let be a closed (1, 1)-form on X . Then T := − μ * (μ * ) is a closed (1, 1)-current of order 0 supported on ∪ i E i = Exc(μ). By Demailly's second theorem of support (see [23]) it follows that Since By the Künneth formula, it thus follows that H 1, and η a class in H 1,1 (X ). The restrictions of A and μ * p * 1 α to π −1 (1) X ×{1} X are identified with α and η, respectively. Since D is supported on X 0 it follows that η = α. We thus have the sought decomposition, proving existence. As for the uniqueness, we let D 0 be the set of R-divisors D with support contained in the central fibre X 0 . Consider the linear map The desired uniqueness property is equivalent to injectivity of R. Hence, assume that [D] = 0 in H 1,1 (X ). In particular D |E i ≡ 0 and it follows from a corollary of Zariski's lemma (see, e.g. [1,Lemma 8.2]) that D = cX 0 , with c ∈ R. But, letting β be any Kähler form on X , we see from the projection formula that since V is the Kähler volume. Hence [X 0 ] is a non-zero class in H 1,1 (X ). It follows that c = 0, thus D = 0 as well. We are done. This gives a very convenient characterisation of smooth cohomological test configurations for (X, α) that dominate X × P 1 . In what follows, we will make use of resolution of singularities to associate a new test configuration (X , A ) for (X, α) to a given one, noting that this can be done without changing the Donaldson-Futaki invariant. Indeed, by Hironaka [33, Theorem 45] (see also Sect. 3.2) there is a C * -equivariant proper bimeromorphic map μ : X → X , with X smooth and such that X 0 has simple normal crossings. Moreover, μ is an isomorphism outside of the central fibre X 0 . Set A := μ * A. By the projection formula we then have The following result states that it suffices to test K-semistability for a certain class of cohomological test configurations 'characterised by an R-divisor' (in the above sense of Proposition 3.10). A) for (X, α) dominating X × P 1 . Proposition 3.12 Let α ∈ H 1,1 (X, R) be Kähler. Then (X, α) is K-semistable if and only if DF(X , A) 0 for all smooth, relatively Kähler cohomological test configurations (X , Proof Let (X , A) be any cohomological test configuration for (X, α) that is relatively Kähler. 
By Hironaka (see [33]) there is a sequence of blowups ρ : X → X × P 1 with smooth C * -equivariant centres such that X simultaneously dominates X and X × P 1 via morphisms μ and ρ, respectively. Moreover, there is a divisor E on X that is ρexceptional and ρ-ample (and antieffective, i.e. −E is effective). By Proposition 3.10, we have , where D is an R-divisor on X supported on X 0 . Note that the class μ * A ∈ H 1,1 (X , R) is relatively nef. We proceed by perturbation. Since α is Kähler on X , we may pick a Kähler class η on P 1 such that p * 1 α + p * 2 η =: β is Kähler on X × P 1 . Since E is ρ-ample one may in turn fix an ε ∈ (0, 1) sufficiently small such that ρ * β + ε[E] is Kähler on X . It follows that ρ * p * 1 α + ε[E] is relatively Kähler (with respect to P 1 ) on X . Thus ρ * p * 1 α + [D] + δ(ρ * p * 1 α + ε[E]) is relatively Kähler for all δ 0 small enough. In turn, so is A δ := ρ * p * 1 α +[D δ ], where D δ denotes the convex combination D δ := 1 1+δ D + δε 1+δ E. Assuming that the DF-invariant of a smooth and dominating test configuration is always non-negative, it follows from the projection formula and continuity of the Donaldson-Futaki invariant, that as δ → 0. The other direction holds by definition, so this proves the first part of the lemma. Remark 3.13 With respect to testing K-semistability one can in fact restrict the class of test configurations that need to be considered even further, as explained in Sect. 3.6. Cohomological K-Semistability for Polarised Manifolds It is useful to compare cohomological and algebraic K-semistability in the special case of a polarised manifold (X, L). Conversely, suppose that (X, L) is algebraically K-semistable and let (X , A) be a cohomological test configuration for (X, α). By Lemma 3.12 we may assume that (X , A) is a smooth, relatively Kähler test configuration for (X, α) dominating X ×P 1 , with μ :→ X × P 1 the corresponding C * -equivariant bimeromorphic morphism. By Proposition 3.10 we further have A = μ * p * 1 c 1 (L) + [D] for a uniquely determined R-divisor D on X supported on the central fibre X 0 . Since A is relatively Kähler, there is a Kähler form η on P 1 such that A + π * η is Kähler on X . Approximating the coefficients of the divisor D by a sequence of rationals, we write D = lim D j for Q-divisors D j on X , all supported on X 0 . As j → +∞, we then have which is a Kähler form on X . Since the Kähler cone is open, it follows that μ * p * 1 c 1 (L)+ [D j ] + π * η is also Kähler for all j large enough. Now let L j := μ * p * 1 L + D j . By the above, L j is a relatively ample Q-line bundle over X and c 1 (L j ) → A. We thus conclude that (X , L j ) (for all j large enough) is an ample test configuration for (X, L). Hence 0 DF(X , L j ) −→ DF(X , A) as j → +∞, which is what we wanted to prove. The Non-Archimedean Mabuchi Functional and Base Change Let (X , A) be a cohomological test configuration for (X, α). A natural operation on (X , A) is that of base change (on X and we pull back A). Unlike resolution of singularities, however, the DF-invariant does not behave well under base change. In this context, a more natural object of study is instead the non-Archimedean Mabuchi functional M NA (first introduced in [12,13], where also an explanation of the terminology is given). 
Definition 3.15 The non-Archimedean Mabuchi functional is the modification of the Donaldson-Futaki invariant given by Note that the 'correction term' V −1 ((X 0,red − X 0 ) · A n ) X is non-positive and vanishes precisely when the central fibre X 0 is reduced. The point of adding to DF this additional term is that the resulting quantity M NA (X , A) becomes homogeneous under base change, i.e. we have the following lemma. Lemma 3.16 ([12]) Let (X , A) be a cohomological test configuration for (X, α) and let d ∈ N. Denote by X d the normalisation of the base change of X , by g d : X d → X the corresponding morphism (of degree d) and set Proof We refer the reader to [12, Proposition 7.13], whose proof goes through in the analytic case as well. As an application, it follows from Mumford's semistable reduction theorem ([32, p. 53], see also [34, §16, p. 6] for a remark on the analytic case) that there is a d ∈ N, a finite base change f : τ → τ d (for d 'divisible enough'), a smooth test configuration X and a diagram X X d X such that X is semistable, i.e. smooth and such that X 0 is a reduced divisor with simple normal crossings. In particular, note that the correction term V −1 ((X 0,red −X 0 )·A n ) X vanishes. Here X d denotes the normalisation of the base change, which is dominated by the semistable test configuration X for X . Moreover, g d • ρ is an isomorphism over P 1 \{0}. Letting A d := g * d A be the pullback of A to X d , and A := ρ * A d the pullback to X , it follows from the above homogeneity of the M NA that where d is the degree of g d . We have thus associated with (X , A) a new test configuration (X , A ) for (X, α) such that the total space X is semistable. Up to replacing X with a determination (see Sect. 3.1) we can moreover assume that X dominates X × P 1 . Hence, the above shows that DF(X , A) DF(X , A )/d. By an argument by perturbation much as the one in the proof of Proposition 3.12, we obtain the following stronger version of the aforementioned result. and only if DF(X , A) 0 for all semistable, relatively Kähler cohomological test configurations (X , A) for (X, α) dominating X × P 1 . Transcendental Kempf-Ness Type Formulas Let X be a compact Kähler manifold of dimension n and let θ i , 0 i n, be closed (1, 1)-forms on X . Let α i := [θ i ] ∈ H 1,1 (X, R) be the corresponding cohomology classes. In this section we aim to prove Theorem B. In other words, we establish a Kempf-Ness type formula (for cohomological test configurations), which connects the asymptotic slope of the multivariate energy functional ϕ t 0 , . . . , ϕ t n (θ 0 ,...,θ n ) (see Definition 2.2) with a certain intersection number. In order for such a result to hold, we need to ask that the rays (ϕ t i ) t 0 are compatible with (X i , A i ) in a sense that has to do with extension across the central fibre, see Sect. 4 .1. For what follows, note that, by equivariant resolution of singularities, there is a test configuration X for X which is smooth and dominates X × P 1 . This setup comes with canonical C * -equivariant bimeromorphic maps ρ i : X → X i , respectively. In particular: Definition 4.1 We define the intersection number by means of pulling back the respective cohomology classes to X . Remark 4.2 Up to desingularising we can and we will in this section consider only smooth cohomological test configurations (X i , A i ) for (X, α i ) dominating X × P 1 , with μ i : X i → X ×P 1 the corresponding C * -equivariant bimeromorphic morphisms, respectively. 
We content ourselves by noting that the following C ∞ -compatibility condition can be defined (much in the same way, using a desingularisation) in the singular case as well. Compatibility of Rays and Test Configurations Let (X , A) be a smooth (cohomological) test configuration for (X, α) dominating X × P 1 , with μ : X → X ×P 1 the corresponding canonical C * -equivariant bimeromorphic morphism. We then have for a unique R-divisor D supported on X 0 , with p 1 : X × P 1 → X denoting the first projection, cf. Proposition 3.10. We fix the choice of an S 1 -invariant function 'Green function' ψ D for D, so that δ D = θ D + dd c ψ D , with θ D a smooth S 1 -invariant closed (1, 1)-form on X . Locally, we thus have where (writing D := j a j D j for the decomposition of D into irreducible components) the f j are local defining equations for the D j , respectively. In particular, the choice of ψ D is a uniquely determined modulo smooth function. The main purpose of this section is to establish Theorem B, which is a formula relating algebraic (intersection theoretic) quantities to asymptotic slopes of Deligne functionals (e.g. E or J) along certain rays. However, such a formula cannot hold for any such ray. The point of the following compatibility conditions is to establish some natural situations in which this formula holds. Technically, recall that a ray (ϕ t ) t 0 on X is in correspondence with an S 1 -invariant functions on X ׯ * . The proof of Theorem B, will show that it is important to extend the function • μ on X \X 0 also across the central fibre X 0 . To this end, we introduce the notions of C ∞ -, L ∞ -and C 1,1 -compatibility between the ray (ϕ t ) t 0 and the test configuration (X , A). The purpose of introducing more than one version of compatibility is that we will distinguish between the following two situations of interest to us: (i) smooth but not necessarily subgeodesic rays (ϕ t ) that are C ∞ -compatible with the smooth test configuration (X , A) for (X, α), dominating X × P 1 . Here we can consider α = [θ ] ∈ H 1,1 (X, R) for any closed (1, 1)-form θ on X . (ii) locally bounded subgeodesic rays (ϕ t ) that are L ∞ -compatible or (more restrictively) C 1,1 -compatible with the given smooth and relatively Kähler test configuration (X , A) for (X, α), dominating X × P 1 . Here we thus suppose that α is a Kähler class. Theorem B has valid formulations in both these situations, as pointed out in Remark 4.11. The second situation is interesting notably with weak geodesic rays in mind, cf. Sect. 4.3. C ∞ -Compatible Rays We first introduce the notion of smooth (not necessarily subgeodesic) rays that are C ∞ -compatible with the given test configuration (X , A) for (X, α). Definition 4.3 Let (ϕ t ) t 0 be a smooth ray in C ∞ (X ) and denote by the corresponding smooth S 1 -invariant function on X ׯ * . We say that (ϕ t ) and (X , A) are C ∞ -compatible if • μ + ψ D extends smoothly across X 0 . The condition is indeed independent of the choice of ψ D , as the latter is a well-defined modulo smooth function. In the case of a polarised manifold (X, L) with an (algebraic) test configuration (X , L) this condition amounts to demanding that the metric on L associated with the ray (ϕ t ) t 0 extends smoothly across the central fibre. As a useful 'model example' to keep in mind, let be a smooth S 1 -invariant representative of A and denote the restrictions |X τ =: τ . 
Note that τ and 1 are cohomologous for each τ ∈ P 1 \{0}, and hence we may define a ray (ϕ t ) t 0 on X , C ∞ -compatible with (X , A), by the following relation λ(τ ) * τ − 1 = dd c ϕ τ , where t = − log |τ | and λ(τ ) : X τ → X 1 X is the isomorphism induced by the C * -action λ on X . We further establish existence of a smooth C ∞ -compatible subgeodesic ray associated to a given relatively Kähler test configuration (X , A) for (X, α). Kähler, then (X , A) is C ∞ -compatible with some smooth subgeodesic ray (ϕ t ). Lemma 4.4 If A is relatively Proof Since A is relatively Kähler, it admits a smooth S 1 -invariant representative with + π * η > 0 for some S 1 -invariant Kähler form η on P 1 . By the dd c -lemma on X , we have = μ * p * 1 ω + θ D + dd c u for some S 1 -invariant u ∈ C ∞ (X ), which may be assumed to be 0 after replacing ψ D with ψ D − u. As a result, we get We may also choose a smooth S 1 -invariant function f on a neighbourhood U of such that η |U = dd c f , and a constant A 1 such that D AX 0 . Using the Lelong-Poincaré formula δ X 0 = dd c log |τ | we get on π −1 (U ). Since D − AX 0 0, it follows that f • π + A log |τ |−ψ D is μ * p * 1 ω-psh, and hence descends to an S 1 -invariant p * 1 ω-psh function˜ on X × U (because the fibres of μ are compact and connected, by Zariski's main theorem). The ray associated with the S 1 -invariant function :=˜ − A log |τ | has the desired properties. C 1,1 -Compatible Rays and the Weak Geodesic Ray Associated with (X , A) Let (X , A) be a smooth, relatively Kähler cohomological test configuration for (X, α) (with α Kähler). With this setup, it is also interesting to consider the following weaker compatibility conditions, referred to as L ∞ -compatibility and C 1,1 -compatibility, respectively. Definition 4.5 Let (ϕ t ) t 0 be a locally bounded subgeodesic ray, and denote by the corresponding S 1 -invariant locally bounded p * 1 ω-psh function on X ׯ * . We say that (ϕ t ) and (X , A) are L ∞ -compatible if • μ + ψ D is locally bounded near X 0 , resp. C 1,1 -compatible if • μ + ψ D is of class C 1,1 on π −1 ( ). Indeed, we will see that C 1,1 -compatibility is always satisfied for weak geodesic rays associated with (X , A). In particular, for any given test configuration, C 1,1 -compatible subgeodesics always exist. This is the content of the following result, which is a consequence of the theory for degenerate Monge-Ampère equations on manifolds with boundary. We refer the reader to [10] for the relevant background. Lemma 4.6 With the situation (2) in mind, let (X , A) be a smooth, relatively Kähler cohomological test configuration of (X, α) dominating X × P 1 . Then (X , A) is C 1,1compatible with some weak geodesic ray (ϕ t ) t 0 . Remark 4.7 The proof will show that the constructed ray is actually unique, once a ϕ 0 ∈ H is fixed. Let D, θ D , ψ D and be as above. Since is relatively Kähler there is an η ∈ H 1,1 (P 1 ) such that +π * η is Kähler on X . We may then write˜ = +π * η +dd c g, where˜ is a Kähler form on X and g ∈ C ∞ (X ). In a neighbourhood of¯ the form η is further dd c -exact, and so we write η = dd c (g • π) for a smooth function g • π on . We now consider the following degenerate complex Monge-Ampère equation: Since˜ is Kähler, it follows that there exists a unique˜ -psh function˜ solving ( ) and that is moreover of class C 1,1 (see for instance [10,Theorem B]. We now define a p * 1 ω-psh function on X ׯ * → X by μ * =˜ − ψ D + g + g. We then have μ * ( p * 1 ω + dd c ) =˜ + dd c˜ on π −1 (¯ * ). 
In particular, defines a weak geodesic ray (ϕ t ) t 0 on X . Moreover, the current has locally bounded coefficients. Indeed, dd c˜ ∈ L ∞ loc (as solution of ( ), cf. [10]) and θ D is a smooth (1, 1)-form onX . The constructed ray is thus C 1,1 -compatible with (X , A). A Useful Lemma We now note that in order to compute the asymptotic slope of the Monge-Ampère energy functional E or its multivariate analogue E (ω 0 ,...,ω n ) we may in fact replace L ∞compatible rays (ϕ t ) with (X , A) by C ∞ -compatible ones. Indeed, note that any two locally bounded subgeodesic rays (ϕ t ) and (ϕ t ) L ∞ -compatible with (X , A) satisfy • μ = • μ + O(1) near X 0 , and hence ϕ t = ϕ t + O(1) as t → +∞. This leads to the following observation, which will be useful in the view of proving Theorems B and C. (X, α i ), respectively, dominating X × P 1 . Let (ϕ t i ) t 0 and (ϕ t i ) t 0 be locally bounded subgeodesics that are L ∞ -compatible with (X i , A i ), respectively. Then Lemma 4.8 Let (X i , A i ) be smooth, relatively Kähler cohomological test configurations for Recall that the mass of the Bedford-Taylor product (ω i + dd c ϕ t i ) is computed in cohomology, thus independent of t. Hence, the quantity is bounded as t → +∞. By symmetry, the argument may be repeated for the remaining i, yielding the result. Asymptotic Slope of Deligne Functionals: Proof of Theorem B With the above formalism in place, we are ready to formulate the main result of this section (Theorem B of the introduction). It constitutes the main contribution towards establishing Theorem A, and may be viewed as a transcendental analogue of Lemma 4.3 in [13]. We here formulate and prove the theorem in the 'smooth but not necessarily Kähler' setting (see Sect. 4.1, situation (1)). However, one should note that there is also a valid formulation for L ∞ -compatible subgeodesics, as pointed out in Remark 4.11. Theorem 4.9 Let X be a compact Kähler manifold of dimension n and let θ i , 0 i n, be closed (1, 1)-forms on X . Set Proof Fix any smooth S 1 -invariant (1, 1)-forms i on X i such that . Let (ϕ t i ) t 0 be smooth and C ∞ -compatible with (X i , A i ), respectively. Let X be a smooth test configuration that simultaneously dominates the X i . By pulling back to X we can assume that the X i are all equal (note that the notion of being C ∞compatible is preserved under this pullback). In the notation of Sect. 4.1, the functions i • μ + ψ D are then smooth on the manifold with boundary M := π −1 (¯ ), and may thus be written as the restriction of smooth S 1 -invariant functions i on X , respectively. Using the C * -equivariant isomorphism X \X 0 X × (P 1 \{0}) we view ( i − ψ D ) |X τ as a function ϕ τ i ∈ C ∞ (X ). By Proposition 2.8 we then have Proof The result follows from Proposition 2.8 and the fact that μ is a biholomorphism away from τ = 0, where also δ D = 0 (recalling that the R-divisor D is supported on X 0 ). Denoting by u(τ ) := ϕ τ 0 , . . . , ϕ τ n , the Green-Riesz formula then yields which converges to (A 0 · · · · · A n ) as ε → 0. It remains to show that To see this, note that for each closed (1, 1)-form on X and each smooth function on X , there is a Kähler form η on X and a constant C large enough so that + Cη + dd c 0 on X . Moreover, we have a relation and repeat this argument for each i, 0 i n, by symmetry. It follows from the above 'multilinearity' that we can write t → E(ϕ t 0 , . . . , ϕ t n ) as a difference of convex functions, concluding the proof. 
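In the notation of Definition 4.1, the limit established by Theorem 4.9 (Theorem B of the introduction) can be summarised as follows; the right-hand side is the intersection number to which, in the proof above, the boundary terms converge as epsilon tends to 0.
\[
\lim_{t\to+\infty}\frac{\langle \varphi_0^t,\dots,\varphi_n^t\rangle_{(\theta_0,\dots,\theta_n)}}{t}
\;=\;\big(\mathcal A_0\cdot \mathcal A_1\cdots \mathcal A_n\big)_{\mathcal X},
\]
for smooth rays \((\varphi_i^t)_{t\geq 0}\) that are \(C^\infty\)-compatible with the cohomological test configurations \((\mathcal X_i,\mathcal A_i)\), the intersection number being computed on a smooth test configuration \(\mathcal X\) dominating all the \(\mathcal X_i\).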
Remark 4.11 The above proof in fact also yields a version of Theorem 4.9 for subgeodesics (ϕ t i ) t 0 that are L ∞ -compatible with smooth test configurations (X i , A i ) for (X, α i ) dominating X ×P 1 . This follows from the observation that one may replace L ∞ -compatible subgeodesic rays with smooth C ∞ -compatible ones, using Lemmas 4.4 and 4.8. As a special case of Theorem B we obtain transcendental versions of several previously known formulas (see for instance [13]). As an example, we may deduce the following formula for the asymptotics of the Monge-Ampère energy functional by recalling that if ω is a Kähler form on X and (ϕ t ) t 0 is a subgeodesic ray, then ϕ t (ω,...,ω) . Corollary 4.12 Assume that (X , A) is smooth and dominates X ×P 1 . For each smooth ray (ϕ t ) t 0 C ∞ -compatible with (X , A), we then have Remark 4.13 Here E NA makes reference to the non-Archimedean Monge-Ampère energy functional, see [12] for an explanation of the terminology. To give a second example of an immediate corollary, interesting in its own right, we state the following (compare [25]): Corollary 4.14 Assume that (X , A) is smooth and dominates X ×P 1 . For each smooth ray (ϕ t ) t 0 C ∞ -compatible with (X , A), we then have Proof Note that we may write J(ϕ t ) = V −1 ϕ t , 0, . . . , 0 (ω,...,ω) − E(ϕ t ) and apply Theorem 4.9. Asymptotics for the K-Energy Let (X, ω) be a compact Kähler manifold and α := [ω] ∈ H 1,1 (X, R) a Kähler class on X . As before, let (X , A) be a smooth, relatively Kähler cohomological test configuration for (X, α) dominating X × P 1 . In this section we explain how the above Theorem 4.9 can be used to compute the asymptotic slope of the Mabuchi (K-energy) functional along rays (ϕ t ), C 1,1 -compatible with (X , A). It is useful to keep the case of weak geodesic rays (as constructed in Lemma 4.6) in mind, which in turn implies K-semistability of (X, α) (Theorem A). Regarding the proof of Theorem C, we will see that the Mabuchi functional is in fact of the form ϕ t 0 , . . . , ϕ t n (θ 0 ,...,θ n ) for the appropriate choice of closed (1, 1)-forms θ i on X and rays (ϕ t i ) on X , but Theorem 4.9 does not directly apply in this situation. Indeed, the expression for the Mabuchi functional involves the metric log(ω+dd c ϕ t ) n on K X /P 1 , which may blow up close to X 0 (in particular, the compatibility conditions are not satisfied). However, a key point is that we can cook up a functional M B of the above 'multivariate' form that satisfies the same asymptotic slope as the Mabuchi functional (up to an explicit error term), and to which we may apply Theorem 4.9. More precisely, we show that and use Theorem 4.9 to choose M B so that moreover lim t→+∞ M B (ϕ t )/t = DF(X , A). It follows that the asymptotic slope of the Mabuchi (K-energy) functional equals DF(X , A) A Weak Version of Theorem C We first explain how to obtain a weak version of Theorem C, as a direct consequence of Theorem 4.9. This version is more direct to establish than the full Theorem C, and will in fact be sufficient in order to prove K-semistability of (X, α), as explained in Sect. 5.2. Here φ U depends on s U , but the curvature current dd c φ is globally well defined and represents the first Chern class c 1 (L). In the sequel we identify the additive object φ with the Hermitian metric it represents. In the above sense, now let B be any smooth metric on K X /P 1 := K X − π * K P 1 . Consider the canonical isomorphism μ : X \X 0 → X ×(P 1 \{0}). 
Since the restriction of K X /P 1 to each fibre X t coincides with K X t , which in turn can be identified with K X via μ, we can then associate with B a ray of smooth metrics on K X that we denote by (β t ) t 0 (or (β τ ) τ ∈¯ * for its reparametrisation by t = − log |τ |). Fix log ω n as a reference metric on K X , and let , i.e. the function on X given as the difference of metrics β τ − log ω n on K X . The constructed ray (ξ t B ) t 0 is then C ∞ -compatible with the cohomological test configuration (X , K X /P 1 ) for (X, K X ). Now let (ϕ t ) t 0 be any subgeodesic ray C 1,1 -compatible with (X , A). By Lemmas 4.4, 4.8 and Theorem 4.9 it follows that as t → +∞. Indeed, by Lemma 4.4 we may choose a smooth subgeodesic ray (ϕ t ) t 0 in H that is C ∞ -compatible (and hence also L ∞ -and C 1,1 -compatible) with (X , A). Up to replacing (ϕ t ) with (ϕ t ) we may thus assume that (ϕ t ) is smooth and C ∞ -compatible with (X , A), using Lemma 4.8, so that Theorem 4.9 applies. Motivated by the Chen-Tian formula (5) and the identity (8), we thus introduce the notation the point being that the asymptotic slope of this functional coincides with the Donaldson-Futaki invariant (even when the central fibre is not reduced). Proof This result is an immediate consequence of (8), the Chen-Tian formula (5) and Corollary 4.12. Hence, it suffices to establish the following inequality To do this, we set (τ ) := (M − M B )(ϕ t ). By the Chen-Tian formula (5) and cancellation of terms we have recalling the definition (7) of ξ t B and Definition 2.1. In view of Proposition 3.10, we as usual let D denote the unique R-divisor supported on X 0 such that A = μ * p * 1 α + [D], with p 1 : X × P 1 → X the first projection. Fix a choice of an S 1 -invariant function 'Green function' ψ D for D, so that δ D = θ D + dd c ψ D with θ D a smooth S 1 -invariant closed (1, 1)-form on X . Moreover, set := μ * p * 1 α + θ D (for which [ ] = A then holds) and let denote the S 1 -invariant function on X ×P 1 corresponding to the ray (ϕ t ). In particular, the function •μ+ψ D extends to a smooth -psh function on X , by C ∞ -compatibility. With the above notation in place, the integrand in the above expression for (τ ) can be written as is the volume form defined by the smooth metric B + π * log( √ −1 dτ ∧ dτ ) on K X . Since is smooth on X and λ B is a volume form on X , this quantity is bounded from above. Moreover, we integrate against the measure (ω + dd c ϕ τ ) n which can be computed in cohomology, and thus has mass independent of τ . Hence Dividing by t and passing to the limit now concludes the proof. As explained below, the above 'weak Theorem C' actually suffices to yield our main result. Proof of Theorem A We now explain how the above considerations apply to give a proof of We are now ready to prove Theorem A. Proof of Theorem A Let X be a compact Kähler manifold and ω a given Kähler form, with α := [ω] ∈ H 1,1 (X, R) the corresponding Kähler class. Let (X , A) be any (possibly singular) cohomological test configuration for (X, α) which by desingularisation and perturbation (see Proposition 3.12) can be assumed to be smooth, relatively Kähler and dominating X × P 1 . Consider any ray (ϕ t ) t 0 such that Theorem C applies; for instance, one may take (ϕ t ) to be the associated weak geodesic ray emanating from ω (i.e. such that ϕ 0 = 0), which due to [15] (cf. also [9,20,21]) is C 1,1 -compatible with (X , A). Now suppose that the Mabuchi functional is bounded from below (in the given class α). 
In particular, we then have using the weak version of Theorem C, cf. Theorem 5.1. Since the cohomological test configuration (X , A) for (X, α) was chosen arbitrarily, this proves Corollary 1.1, i.e. it shows that (X, α) is K-semistable. In a similar vein, suppose that the Mabuchi functional is coercive, i.e. in particular M(ϕ t ) δJ(ϕ t ) − C for some constants δ, C > 0 uniform in t. Note that Corollary 4.14) and the (weak) Theorem C provides a link with the intersection theoretic quantities J NA (X , A) and M NA (X , A), respectively. More precisely, dividing by t and passing to the limit we have Since (X , A) was chosen arbitrarily it follows that (X, α) is uniformly K-stable, concluding the proof of Theorem A. As remarked in the introduction it follows from convexity of the Mabuchi functional along weak geodesic rays, cf. [6,19], that the Mabuchi functional is bounded from below (in the given class α) if α contains a cscK representative. In other words, Corollary 1.1 follows. Moreover, it is shown in [8, Theorem 1.2] that the Mabuchi functional M is in fact coercive if α contains a cscK representative. As a consequence, we obtain also the following stronger result, confirming the "if" direction of the YTD conjecture (here referring to its natural generalisation to the transcendental setting, using the notions introduced in Sect. 3). Asymptotic Slope of the K-Energy Building on Sect. 5.1 we now improve on the weak version of Theorem C (cf. Theorem 5.1) by computing the asymptotic slope of the Mabuchi (K-energy) functional (even when the central fibre is not reduced). To this end, recall the definition of the non-Archimedean Mabuchi functional, i.e. the intersection number M NA (X , A) := DF(X , A) + V −1 ((X 0,red − X 0 ) · A n ) X , discussed in Sect. 3.6. Note that it satisfies M NA (X , A) DF(X , A) with equality precisely when the central fibre is reduced. Adapting the techniques of [13] to the present setting we now obtain the following result, corresponding to Theorem C of the introduction. Theorem 5.6 Let X be a compact Kähler manifold and α ∈ H 1,1 (X, R) a Kähler class. Suppose that (X , A) is a smooth, relatively Kähler cohomological test configuration for (X, α) dominating X × P 1 . Then, for each subgeodesic ray (ϕ t ) t 0 , C 1,1 -compatible with (X , A), the asymptotic slope of the Mabuchi functional is well defined and satisfies as t → +∞. Remark 5.7 In particular, this result holds when (ϕ t ) t 0 is the weak geodesic ray associated with (X , A), constructed in Sect. 4.1. Proof of Theorem 5.6 Following ideas of [13] we associate with the given smooth, relatively Kähler and dominating test configuration (X , A) for (X, α) another test configuration (X , A ) for (X, α) which is semistable, i.e. smooth and such that X 0 is a reduced R-divisor with simple normal crossings. As previously noted, we can also assume that X dominates the product. In the terminology of Sect. 3.6, this construction comes with a morphism g d • ρ : X → X , cf. the diagram in Section 3.6. Pulling back, we set A := g * d ρ * A. Note that A is no longer relatively Kähler, but merely relatively semipositive (with the loss of positivity occurring along X 0 ). On the one hand, Lemma 3.16 yields where d > 0 is the degree of the morphism g d . On the other hand, we may consider the pullback by g d • ρ of the weak geodesic (ϕ t ) t 0 associated with (X , A). 
This induces a subgeodesic (ϕ t ) t 0 which is C 1,1 -compatible with the test configuration (X , A ) for (X, α) (in particular, the boundedness of the Laplacian is preserved under pullback by g d • ρ). Replacing τ by τ d amounts to replacing t by d · t, so that Combining equations (9) and (10) In other words, it suffices to establish (11). By the asymptotic formula 5.3 it is in turn equivalent to show that We use the notation of the proof of Theorem 5.1. In particular, we set (τ ) := (M − M B )(ϕ τ ). As in the proof of Theorem 5.1 we have an upper bound (τ ) O(1), using that the restriction of the relatively semipositive class A to X \X 0 is in fact relatively Kähler. To obtain a lower estimate of (τ ) we consider the Monge-Ampere measure MA(ϕ τ ) := V −1 (ω + dd c ϕ τ ) n and note that since the relative entropy of the two probability measures MA(ϕ τ ) and e β τ / X e β τ is non-negative. We now conclude by estimating this integral, using the following result from [13]: Lemma 5.8 ([13]). Let (X , A) be a semistable and dominating test configuration for (X, α) and let B be any smooth metric on K X /P 1 . Let (β t ) t 0 be the family of smooth metrics on K X induced by B. Denote by p 1 the largest integer such that p − 1 distinct irreducible components of X 0 have a non-empty intersection. Then there are positive constants A and B such that holds for all t. We refer the reader to [13] for the proof and here simply apply the result: Recalling that t = − log |τ |, Lemma 5.8 yields that log X e β τ = o(t) and so it follows from completing the proof.
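For ease of later reference, the two slope formulas around which Sects. 4 and 5 revolve can be collected in one place. The displays below are a reading aid only: they restate, in the standard normalisation with V = \int_X \omega^n, what Corollary 4.12 and Theorem 5.6 assert for rays compatible with a smooth, relatively Kähler test configuration (\mathcal X, \mathcal A) dominating X \times \mathbb P^1, and do not replace the precise statements given above.

\lim_{t\to+\infty}\frac{E(\varphi_t)}{t}\;=\;E^{\mathrm{NA}}(\mathcal X,\mathcal A)\;=\;\frac{(\mathcal A^{\,n+1})}{(n+1)V},
\qquad
\lim_{t\to+\infty}\frac{M(\varphi_t)}{t}\;=\;M^{\mathrm{NA}}(\mathcal X,\mathcal A)\;=\;\mathrm{DF}(\mathcal X,\mathcal A)+V^{-1}\big((\mathcal X_{0,\mathrm{red}}-\mathcal X_0)\cdot\mathcal A^{\,n}\big).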
16,704.8
2017-10-16T00:00:00.000
[ "Mathematics" ]
An Algorithm Substitution Attack on Fiat-Shamir Signatures Based on Lattice : There is considerable evidence that some intelligence agencies (often called "big brother") attempt to monitor citizens' communication by supplying a large number of subverted cryptographic algorithms and coercing users into adopting them. Since authentication services in a large number of applications and system architectures depend on digital signature techniques, coerced users who build novel signature schemes from double authentication preventing signatures obtain a convincing argument with which to refuse the requests of authorities and big brothers to produce the corresponding subverted signatures. With the rapid progress of quantum computers, a National Security Agency advisory memorandum and the announced standardization procedures of the National Institute of Standards and Technology focus on cryptographic algorithms that are post-quantum secure. Motivated by these issues, we design an algorithm substitution attack against the lattice-based Fiat-Shamir family (e.g., BLISS, BG, Ring-TESLA, PASSSign and GLP), whose members are proven computationally secure against quantum adversaries. We also show an efficient deterring countermeasure that eliminates the big brother's threat by making the signing key publicly extractable from signatures on two messages. Security proofs show that our schemes satisfy key extraction, undetectability and deterability. Through parameter analysis and performance evaluation, we demonstrate that our deterring subverted Fiat-Shamir signature is practical, which means that it can be applied to privacy protection in several system architectures. Introduction Since the first computer intrusions, hackers have been developing "backdoor" technology that allows them to re-enter a compromised system. The main purpose of a backdoor is to let the intruder return to the system without being discovered by the system manager. Many techniques have been used to plant such backdoors, including intercepting postal shipments to tamper with or substitute networking hardware, sabotaging Internet routers, injecting malware, installing backdoors, wire-tapping undersea cables, and so on [1-3]. In 2013, Edward Snowden revealed the shocking news that many ongoing surveillance programs with underlying "backdoors", conducted by the National Security Agency (NSA) and partners from all over the world, target citizens [4]. A typical example is the NIST (National Institute of Standards and Technology) standardized pseudorandom generator (PRG) Dual_EC_DRBG, backdoored by the NSA. With a few suitably chosen concrete parameters employed in the PRG, its outputs remain indistinguishable from random numbers to outside observers, yet the party holding the backdoor parameters can predict the subsequent outputs [5]. Under these circumstances, post-Snowden cryptography has attracted much attention in recent years. As one of the research topics in post-Snowden cryptography, the notion of an algorithm substitution attack (ASA) was formalized by Bellare et al. [6] in the setting of symmetric encryption algorithms. An ASA enables an attacker or big brother to substitute pieces of a randomized encryption or signature algorithm with modified ones that leak secret keys to the adversary subliminally and undetectably. Ateniese et al. [7] first proposed a model of ASAs on signature schemes; however, their subverted signature is generic and inefficient.
Liu et al. [8] then introduced a much more efficient ASA method for a certain class of signature schemes. Recently, Beak et al. [9] presented an even more efficient and undetectable ASA on the classical DSA (digital signature algorithm). At present, most proposed subverted signatures only consider how to subvert signature schemes; countermeasures against the big brother's threat are still needed. A recently proposed primitive, double authentication preventing signatures, can be used to deter this kind of big-brother behaviour. Double authentication preventing signatures (DAPS) are a special class of digital signatures with double-signature extractability, which makes them deterring: whenever two different signatures exist on messages (m0, p1) and (m0, p2) with p1 ≠ p2, the signing key can be extracted. When a signature scheme is subverted but satisfies double-signature extractability, the subversion can be deterred because the real signer's signing secret key is revealed to anyone. Unlike ring signatures with linkability or traceability [10-13], our DAPRS (double authentication preventing ring signatures) provide stronger accountability: any two pairs of signatures produced by the same member of a ring set reveal his (or her) secret signing key. Linkable ring signatures allow anyone to efficiently decide whether two signatures were produced by the same member without revealing that member's identity, while traceable ring signatures reveal the member's identity if any two signatures are produced by the same member. Building on the digital signatures in Refs. [14-16], most existing DAPS are based on the discrete logarithm and large-integer factorization problems and therefore face new challenges, because quantum algorithms of polynomial computational complexity, such as Shor's algorithm [17,18], solve both problems. Hence, it is necessary to study post-quantum secure DAPS. Lattice-based signatures are a cutting-edge cryptographic technology with several attractive properties, such as high computational efficiency, novel and powerful cryptographic functionalities and applications, strong provable security guarantees, and conjectured "post-quantum" security. It is therefore important for us to study lattice-based algorithm substitution attacks. Most efficient lattice-based signatures, a promising branch of post-quantum cryptography, belong either to the Fiat-Shamir paradigm (e.g., BLISS [19,20], GLP [21], PASS-Sign [22], Ring-TESLA [23]) or to the hash-and-sign paradigm (e.g., GGH [24], NTRUSign [25], GPV [25]) presented at the NIST workshop on post-quantum cryptography. The lattice-based hash-and-sign paradigm relies on the fact that a short lattice basis yields a trapdoor function. Most such schemes offer only heuristic security (no actual security proofs) and are relatively inefficient compared with lattice-based Fiat-Shamir (FS) signature paradigms [26]. Furthermore, most lattice-based hash-and-sign schemes have unique signatures, which resist subversion attacks [7]. Hence we target algorithm substitution attacks on lattice-based Fiat-Shamir signature paradigms (FS-LBS). In this paper, we present an ASA against these schemes in which any three consecutive subverted signatures suffice to extract the signing key. At the same time, we provide countermeasures against the ASA by using DAPS to deter the big brother's threat, and we show through concrete experimental analysis that our scheme can be applied to practical architectures.
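Before giving the formal treatment, the following toy Python sketch illustrates the generic mechanism that attacks of this kind exploit, transplanted to a classical Schnorr-style Fiat-Shamir signature over a tiny, insecure group. It is an orientation aid only: the scheme, parameters and function names below are illustrative assumptions and do not reproduce the lattice-based construction of this paper.

# Toy illustration (NOT the lattice scheme of this paper): an algorithm-substitution
# attack on a Schnorr-style Fiat-Shamir signature, in which the subverted signer
# derives the nonce of every signature after the first from the previous signature via
# a PRF keyed with the subversion key. Anyone holding that key can recompute the nonce
# and solve for the signing key from a single "rigged" signature. Parameters are tiny
# and insecure, chosen only to make the mechanism visible.
import hashlib, hmac, secrets

# toy group: subgroup of order q = 11 in Z_23^*, generated by g = 4
p, q, g = 23, 11, 4

def H(*parts):                 # Fiat-Shamir challenge in [1, q-1]
    h = hashlib.sha256(b"|".join(str(x).encode() for x in parts)).digest()
    return 1 + int.from_bytes(h, "big") % (q - 1)

def prf(subk, data):           # big brother's PRF F(subk, .)
    return int.from_bytes(hmac.new(subk, data, hashlib.sha256).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)     # (sk, pk)

def subverted_sign(x, msg, subk, prev_sig):
    # honest-looking signature, but the nonce is F(subk, previous signature)
    k = prf(subk, repr(prev_sig).encode()) if prev_sig else secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    c = H(r, msg)
    s = (k + c * x) % q
    return (r, c, s)

def big_brother_extract(sig1, sig2, subk):
    # recompute the rigged nonce of sig2 and solve s = k + c*x (mod q) for x
    k2 = prf(subk, repr(sig1).encode())
    _, c2, s2 = sig2
    return ((s2 - k2) * pow(c2, -1, q)) % q

subk = b"big-brother-key"
x, y = keygen()
s1 = subverted_sign(x, "m1", subk, None)
s2 = subverted_sign(x, "m2", subk, s1)
assert big_brother_extract(s1, s2, subk) == x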
The remainder of this paper is organized in the following sequence. Section 1 shows some preliminaries and cryptographic knowledge. Section 2 provides some notions about deterable subverted signatures and some design requirements. Section 3 presents our concrete deterable subverted FS-LBS scheme and gives proof of key extraction, undetectability and deterability. In Section 4, we make some parameter analysis and performance evaluation. Finally, we give our conclusions. Notations Some basic notations have been shown in Table 1. to a block of n integers, that means Definition 2 (Ring-LWE , , , R σ n q D [27,28] ) Given m uniformly elements , Definition 3 (Rejection sampling lemma [29][30][31][32][33] If a constant M exists, the following distribution In the following, we use the RejectionSample to represent the algorithm. Description of Lattice Based Fiat-Shamir Type Signature Schemes The Fiat-Shamir type signatures based on lattice consist of algorithms in the following: Key Generation: 1) Pick  a  at random. 2) Choose uniformly random 1  a s s t : 1) Select two random numbers Here 1  with tiny modulus is the subset of  . Cryptographic Hash function H outputs a low norm subset of  . Double Authentication Preventing Signatures A DAPS includes four probability polynomial time (PPT) algorithms (KGen, Sign,Ver, Extract) as follows: 1) Given a security parameter  , the algorithm KGen(1 λ ) outputs public keys pk and private keys sk. 2) The algorithm Sign(sk, a, p) outputs a signature  on a pair of public/private key (pk, sk) and a subject/message pair (a, p). 3) The algorithm Ver(pk, a, p,  ) outputs either 0 for rejection or 1 for acceptance on pk, (a, p) and  . 4) The algorithm Extract outputs the private key sk on input pk, , a a p p   . Our Deterable Subverted Signatures We first provide the threat model in this section. Then we give formal definitions for the syntax of deterable subverted signatures. Compared with regular deterable digital signatures, these schemes need a "extraction key" for their manipulates if there exists a subversion attack. Finally we provide some security and functionality features of a deterable subverted signatures on the basis Refs. [7][8][9]. Threat Model Since authentication services of various system models and applications depend upon digital signatures, in the context coerced users who use a DAPS to design some court convincing signatures to refuse big brothers' (or attackers') requirements, we construct a subverted signature with a deterable function by using an algorithm substitution attack on double authentication preventing signatures. Notions of Deterable Subverted Signatures Definition 4 A deterable subverted signature SIG for nonsubverted signature SIG includes four PPT algorithms as follows: 1) On inputting a security parameter  , this algorithm Gen outputs a subversion key subk. 2) On inputting a subversion key subk, a state l, a private key sk, and a message  , the algorithm SIG outputs a subverted signature  by an updated status l. 3) On inputting a message  , a public key pk and a subverted signature  , this algorithm Ver outputs 1 which means accept and outputs 0 which means reject. 4) On inputting a pair of colliding messages 1 2 ( , )   , a public key pk, and its corresponding non-subverted signatures 1 2 ,   , this algorithm Deter outputs the private key sk. 
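As an interface-level summary of the two syntaxes just defined (a DAPS and a deterable subverted signature), one might sketch them in Python as follows. The method names and type hints are illustrative assumptions, and all cryptographic content is deliberately omitted.

# Interface-only sketch of the two primitives defined above; hypothetical names.
from abc import ABC, abstractmethod
from typing import Any, Tuple

class DAPS(ABC):
    @abstractmethod
    def kgen(self, sec_param: int) -> Tuple[Any, Any]: ...            # -> (pk, sk)
    @abstractmethod
    def sign(self, sk, subject, payload) -> Any: ...                   # signature on the pair (a, p)
    @abstractmethod
    def ver(self, pk, subject, payload, sig) -> bool: ...              # 1 accept / 0 reject
    @abstractmethod
    def extract(self, pk, pair1, pair2) -> Any: ...                    # -> sk, given (a, p1), (a, p2) with p1 != p2

class DeterableSubvertedSignature(ABC):
    @abstractmethod
    def gen(self, sec_param: int) -> Any: ...                          # -> subversion key subk
    @abstractmethod
    def subverted_sign(self, subk, state, sk, msg) -> Tuple[Any, Any]: ...  # -> (signature, updated state)
    @abstractmethod
    def ver(self, pk, msg, sig) -> bool: ...
    @abstractmethod
    def deter(self, pk, colliding_msgs, sigs) -> Any: ...              # -> sk of the subverted signer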
Security and Functionality Features The key extraction algorithm means that anyone including big brothers and attackers can compute the signature private key from known information if he or she makes a signature on a pair of colliding messages. Given the private key sk and its corresponding public key pair pk, the functionality undetectability means that any users can not find the detecting subversion. O  is given as follows: Our Deterable Subverted Lattice Based Fiat-Shamir Type Signatures This section introduces a self-enhancement or subverted attack on Fiat-Shamir type lattice signatures. Our subverted Fiat-Shamir type lattice-based signatures (FS-LBS) are described as follows: • Subverted key generation ( , )   z z hold. If a FS-LBS scheme is subverted, the action of the signer can be found by revealing real signer's signing secret keys to anyone. When the signing keys is vital for him, in some cases the signer will be punished or the signer will result in great economic losses, or there exist some court convincing reasons to deny big-brother or authority agency demands. Aiming at the subverted lattice-based Fiat-Shamir type signatures, we add a Deter algorithm. Our Deter algorithm makes sure that the lattice-based Fiat-Shamir type signature is against algorithm substitution attack. If 0 j  mod 2, by using ℬ's subversion keys subk 1 2 ( , , ) F   , compute signing keys as follows: 1 2 , s s by using the same method as above, so ℬ can compute the signing keys by the subverted signature algorithm FS-LBS Theorem 2 Given subverted FS-LBS scheme FS-LBS , the detection advantage is negligible under the assumption of pseudo-random function (PRF) F. Proof By a sequence of games, we prove the theorem. We define the events ( 1,2, ) . Game 1 As before, but we modify Game 0 to use b = 0 for answering  's queries as Game 0 and to use a uniform random string for substituting b = 1 (j = 1 mod 2) and noncomputed 1 2 ,   y y by PRF. A detailed description is given as follows. T   . If b=0, this game responds to a valid FS-LBS scheme and j = 0,    to  after receiving every signing query on m and a reset query rt , respectively. If b = 1, this game carries on as follows: When  does some signing queries on m, j ←0, τ←0 If j = 0 mod 2 1) Pick 1 Due to Game 3, we modify this game as Game 2 in the following: While  makes some reset queries rt, Game 3 is not able to reset j and τ but it answers the adversary  's other queries by some same ways similar to Game 2. Finally, the distribution of subverted FS-LBS scheme FS-LBS is identical to the distribution of real FS-LBS scheme except of j=0 mod 2 and j=1 mod 2, because 1 Numerical Analysis This part first numerically makes some efficiency of our deterable subverted FS-LBS scheme in terms of storage overhead and computational overhead which are listed in Table 2 and Table 3. As for the storage overhead, it consists of size of pair of public/secret keys and size of signature which is listed in Table 2. The communication cost is determined by the number of j. As for the computational overhead, compared with the Hash functions, the most resource-consuming operation is the multiplication over ring  . In the signing process, Ver process and Deter process, the computational overhead is listed in Table 2, where the number of multiplications over ring  is linear to the number of j. 
For simplicity, we denote by PM the polynomial point multiplications, by PA the polynomial additions, by PS the polynomial subtractions, by RS the polynomial Gaussian sampling, and by H and F the hash functions. From the implementation analysis in Ref. [9], at most three consecutive signatures are needed by the subverted FS-LBS, which does not affect its practicality; hence, in our constructed scheme we do not consider signature loss. Here we only analyze the security level, the size of the secret keys (sk), the size of the public keys (pk) and the signature size for some deterable subverted FS-LBS in Refs. [20,21,29-32]. From Tables 2-3, we can see that our deterable subverted signatures have reasonable efficiency in terms of communication cost, computational overhead and storage overhead. Implementation The implementation is conducted with NFLlib, an NTT-based fast lattice cryptography library, on an Intel i7-7700 CPU @ 3.60 GHz running the Ubuntu Linux operating system. The important algorithm operations mainly consist of one polynomial addition, one polynomial multiplication and one polynomial Gaussian sampling. Since no hash function implementation is included in NFLlib, we measure the running time of the three hash functions with an HMAC based on the SM3 algorithm. The execution time of each cryptographic operation under different parameters is shown in Table 4; the execution times of the hash functions we consider are taken to be the same. Experimental results for each algorithm of the proposed deterable subverted FS-LBS are depicted in Fig. 1. We increase the number j from 2 to 10 for each test to observe the time cost of the Sign, Ver and Deter algorithms. The subversion key generation procedure can be regarded as producing a random number, so we omit its time consumption. Conclusion This paper first explores a novel algorithm substitution method on lattice-based Fiat-Shamir type signature schemes. Based on this, we then provide countermeasures that make signature subversion deterable. The security proof shows that our construction satisfies three different security and privacy requirements, and the parameter analysis demonstrates that it is feasible. In the future, we will study our algorithm by widening the range of possible schemes vulnerable to algorithm substitution attacks, and by developing more valuable methods and countermeasures for these post-quantum secure signatures. In addition, further work will focus on algorithm substitution attacks on other cryptographic primitives.
3,641.4
2022-03-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
A deep learning enhanced inverse scattering framework for microwave imaging of piece-wise homogeneous targets In this paper, we present a framework for the solution of inverse scattering problems that integrates traditional imaging methods and deep learning. The goal is to image piece-wise homogeneous targets and it is pursued in three steps. First, raw-data are processed via orthogonality sampling method to obtain a qualitative image of the targets. Then, such an image is fed into a U-Net. In order to take advantage of the implicitly sparse nature of the information to be retrieved, the network is trained to retrieve a map of the spatial gradient of the unknown contrast. Finally, such an augmented shape is turned into a map of the unknown permittivity by means of a simple post-processing. The framework is computationally effective, since all processing steps are performed in real-time. To provide an example of the achievable performance, Fresnel experimental data have been used as a validation. Introduction Microwave imaging (MWI) exploits the capability of electromagnetic (EM) waves to penetrate material bodies to enable the non-invasive inspection of unknown scenarios that are otherwise not directly accessible.As such, MWI is relevant to several applications fields as different as biomedical imaging [1,2], subsurface sensing [3], food security monitoring [4], or throughwall imaging [5]. By measuring the scattered field arising from the interaction of a known EM incident field with a target, it is possible to obtain an image depicting the EM properties (i.e.dielectric permittivity and conductivity) of such target, as well as its morphology.From a mathematical point of view, MWI corresponds to the solution of an inverse scattering problem (ISP), which is a well-known non-linear and ill-posed inverse problem [6]. To cope with the difficulties of the ISP, many solution methods have been developed [7][8][9].However, still no 'universal' method exists.For instance, quantitative methods [7,8], which aim at the complete solution of the ISP, are computationally demanding, prone to the occurrence of false solutions 4 , and often rely on available a-priori information on the targets to perform a successful reconstruction.Conversely, qualitative methods [9] cast the ISP in terms of an auxiliary linear ill-posed problem, thus overcoming non-linearity and requiring an almost negligible computational burden.However, besides still having to face an ill-posed problem, they can only provide explicit information on the target's morphology and not on its EM properties. Recently, a huge interest in the literature has been devoted to the possibility of addressing the ISP non-linearity and ill-posedness resorting to computational methods based on the deep learning (DL) paradigm [11,12].Different from traditional approaches, DL is data-driven: common DL architectures run an optimization procedure (the training) from which a model is built by analyzing a collection of examples. 
Among the possible ways to exploit such data-driven approach in ISP solution [11], physicsassisted techniques are worth to be considered.In these approaches, domain knowledge in the specific problem at hand is incorporated in the internal structure of the DL architecture or provided into its inputs by pre-processing the raw data.For MWI, this represents a particularly convenient strategy, since MWI data are not 'homogeneous', as they can be collected in different conditions (e.g.number and position of the probes or operating frequency).As such, MWI data are usually not abundant enough to enable a direct learning approach which solely relies on the scattering measurements.In fact, embedding domain knowledge allows the training with less examples than a direct learning counterpart, as the model does not have to 'learn' the all the physics involved in the problem [11]. The most common domain knowledge incorporation is carried out by pre-processing the MWI scattering measurements with traditional imaging algorithms.In doing so, convolutional neural network (CNN) models, which are known to be excellent image processing frameworks [13], could be employed.To this end, a crucial aspect that must be considered is the choice of the MWI algorithm.First of all, since DL models work in real-time (once trained), computationally intensive quantitative methods are not suitable if speed is a requirement, since they would act as a bottleneck in the processing workflow.Also, it is worth recalling that the 4 A false solution is an estimate of the unknown that fits the data but is different from the ground truth.False solutions are a consequence of both the non-linearity and ill-posedness of the ISP and arise when the ISP is solved via local iterative optimization.Theoretically, global optimization methods could circumvent false solutions occurrence and converge to a global optimum.In practice they cannot, due to the curse of dimensionality [10] arising from the exponential growth of the computational cost with the number of unknowns.possible occurrence of false solutions is an issue, as the output of quantitative methods is to some extent not predictable.Finally, in both qualitative and quantitative methods, the need of tuning regularization parameters poses an issue on the possibility of full-automated operation, which is another attractive feature of the DL paradigm. 
In [14], such difficulties have been addressed training a physics-assisted CNN to image piece-wise homogeneous targets from input images obtained using two techniques or schemes, back-propagation scheme (BPS) and dominant current scheme (DCS).The chosen CNN architecture is the U-Net [15], which is very popular as a tool to face computer vision tasks, as it can be configured to handle an image in input and provide an output which is still an image [16].Notably, the authors show that the U-Net trained either with BPS or DCS outperforms a direct learning implementation with the raw scattering measurements.Although both BPS and DCS do not slow down the processing workflow, and may be used in real-time, they still present some drawbacks.More precisely, BPS is based on the linearized back-propagation algorithm and therefore may lead to significantly inaccurate images when the assumptions underneath the linearization are violated.Whereas, DCS do not require approximations, but are dependent on the measurement configuration, so that the network has to be retrained every time the set-up is changed.Also, both methods provide discretized images in which the discretization step is dictated by the working wavelength, thus posing a constraint on the size of the images fed into the network. Motivated by the above considerations, the authors of this work have considered the use of the orthogonality sampling method (OSM) [17] as the domain knowledge-embedding imaging algorithm [18,19].The OSM is a qualitative method introduced by Roland Potthast, in which an indicator function is computed to estimate the shape of the unknown targets.The OSM has the remarkable feature of being based on an implicit regularization, thus not requiring any regularization parameter tuning and not being limited by underlying approximations.Moreover, similar to other sampling methods [9], the spatial discretization of the resulting image is arbitrary and thus not influenced in any way by the measurement configuration.Last but not least, it has been shown in [20] that OSM images encode information on the spatial behavior of the EM properties of the targets, owing to the relation between the OSM indicator and the radiating component of the induced currents.Based on these considerations, it was shown that a U-Net fed with OSM images could be trained to achieve an objective reconstruction of the targets' shape [18] or an estimate of the targets shape and EM properties, provided they belonged to a fixed and known set of values [19]. In this paper, we show how an OSM-informed U-Net can be trained to solve the more general problem of imaging piece-wise homogeneous targets, i.e. 
retrieve their shape and EM properties, without limiting the possible contrast to a finite set of values as in [19].To face the increased complexity of such a problem, the following strategies are put into action: • Different from our previous works where U-Net task consisted in classification problems (binary segmentation [18] or categorical segmentation [19]), the U-Net is herein trained within a pixel-wise regression framework, to allow retrieving a continuous set of values; • The a priori information on the piece-wise nature of the targets is encoded by representing the spatial map of the EM properties distribution to be predicted by the network in terms of the corresponding spatial gradient, which allows to explicitly enforce into the training process the implicitly sparse nature of the information to be retrieved.We refer to this map as the augmented shape, to recall that it conveys information on both the target's internal and external boundaries and the relative contrast variation with respect to the (known) background medium; • Finally, a simple post-processing procedure is developed to turn the network's output (i.e. the pixel-wise regression of the gradient's values) into the map of the targets showing both their shape and permittivity. In the following, the proposed framework is employed to solve the canonical 2D scalar problem (TM polarized fields) in free space.After training the U-Net on simulated data, the resulting model is tested on the Fresnel experimental scattering measurements [21], to provide a performance assessment against this broadly adopted benchmark.The remainder of the paper is organized as follows.In section 2, the problem is formulated.The physics-assisted DL framework proposed to retrieve the contrast is presented in section 3, wherein each processing step is described.In section 4, implementation details of the U-Net and its training/validation on simulated data are given.Section 5 presents the validation of the overall framework against Fresnel experimental data [21], conclusions follow.Throughout the paper a time-harmonic behavior was supposed and the corresponding time factor e jωt was assumed and dropped. Note that preliminary results concerned with this work were presented in [22]. Formulation of the problem Let Ω denote the imaging domain embedded in a homogeneous and lossless medium of relative permittivity ε b , which hosts the cross-section Σ of a collection of possibly overlapping targets invariant along one direction (say the z-axis).The targets are piece-wise homogeneous.Hence each of them is characterized by a relative dielectric permittivity ε(r) and an electric conductivity σ(r), with r = (x, y).All materials are supposed to be non-magnetic, i.e. the magnetic permeability is everywhere that of vacuum, µ 0 . The unknown targets are probed with TM-polarized incident fields E inc , transmitted by a set of antennas located in r t ∈ Γ, with Γ being a closed curve located in the far-zone of Ω.For each transmitter, the interaction between the incident field and the targets gives raise to the scattered field E s .The superposition of these two fields becomes the total field E = E inc + E s which is measured by a set of receivers that, without any loss of generality, is assumed to be located on Γ as well, with the receiver position being r s . 
For each frequency f belonging to the set of frequencies adopted for the imaging experiment, the overall phenomenon is cast through a Fredholm type integral equation as: where G(r s , r ′ ) is the Green's function of the assumed homogeneous background medium and τ (r) = ε eq (r)/ε b − 1 is the contrast function encoding the properties of the targets.ε eq (r) = ε(r) − j σ(r)/ωε 0 denotes the relative complex permittivity of the targets, with j being the imaginary unit, ω = 2π f the pulsation and ε 0 the dielectric permittivity of vacuum.The total field E is defined through another Fredholm integral equation of the first kind as: where The retrieval of the contrast function τ from measurements of the fields they scatter is the objective of the ISP.However, due to the smoothing kernel of (1) and the dependence of the total field on τ , the problem turns out to be non-linear and ill-posed [6,7]. The proposed physics-assisted DL framework Figure 1 shows the processing flow of the proposed MWI-DL framework, whose steps are detailed in the following. The OSM and the domain knowledge it supplies In the first step of the proposed approach, the measured scattered fields (raw data) are processed with the OSM to obtain a set of images (one for each working frequency). As most qualitative methods [9], OSM provides an estimate of the targets shape through an indicator function, which attains its higher values when evaluated in points belonging to the targets and lower values elsewhere [17].However, the OSM indicator function is not achieved through the solution of an auxiliary linear ill-posed problem.This entails that unlike other qualitative methods, in OSM there is no need of determining any regularization parameter.This is not a negligible advantage since the estimation of the proper regularizer is a tedious optimization problem [23]. This remarkable OSM property descends from the fact the indicator is built exploiting the reduced scattered field E red , which, for each frequency and for each scattered field, is computed as: where <, > denotes the scalar product on Γ and r p a point of an arbitrary grid sampling the imaging domain Ω.As discussed in [20,24], the reduced field is related to the adjoint solution of an inverse source problem, as such, it is implicitly regularized.The OSM indicator function is calculated as: with || || denoting the L 2 norm computed on Γ.It is worth noting that the computational burden required to evaluate I is negligible, as it is only a scalar product in each sampling point (which is in addition an intrinsically parallelizable process) has to be computed.As such, OSM image formation can be performed in real-time. In addition to this, as shown in [20], the reduced field is related to the radiating component of the contrast source.Accordingly, the indicator I will not only provide an estimation of the targets support, but it will also bear information on the behavior of their EM properties.In particular, higher permittivity values will correspond to higher intensity values of I.However, the relationship between the I values and the corresponding ones of the contrast is not straightforward. The DL architecture In the second step, the OSM images are fed into the network, which is in charge of estimating theaugmented shape. 
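For concreteness, the OSM images used as network input (the output of the first step) can be computed along the following lines. This is a minimal numpy sketch under assumed conventions, namely dropped constant factors in the 2D Green's function, no quadrature weights on Γ, and the norm taken over the transmitter positions; it is not the exact implementation used in this work.

# Minimal sketch of the OSM indicator: project each transmitter's scattered field onto
# the background Green's function evaluated on the receiver curve, then take the L2
# norm over transmitters. Constant factors are irrelevant after the [0, 1] normalization.
import numpy as np
from scipy.special import hankel2

def osm_indicator(e_scat, rx_pos, grid_pos, k_b):
    """
    e_scat   : (n_tx, n_rx) complex scattered field, one row per transmitter
    rx_pos   : (n_rx, 2) receiver positions on Gamma
    grid_pos : (n_pix, 2) sampling points r_p of the imaging domain
    k_b      : background wavenumber at the working frequency
    """
    # 2D background Green's function (up to constants) between receivers and sampling points
    dist = np.linalg.norm(rx_pos[:, None, :] - grid_pos[None, :, :], axis=-1)   # (n_rx, n_pix)
    green = hankel2(0, k_b * dist)
    # reduced field: scalar product on Gamma for every (transmitter, sampling point) pair
    e_red = e_scat @ np.conj(green)                                              # (n_tx, n_pix)
    indicator = np.linalg.norm(e_red, axis=0)                                    # norm over transmitters
    return (indicator - indicator.min()) / (np.ptp(indicator) + 1e-12)           # normalize to [0, 1]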
From the perspective of DL, the process of solving the ISP is driven by data [11].In particular, assuming a supervised learning procedure, the adopted DL architecture, say F θ , is specialized for the problem at hand through a process called training.This is an iterative optimization procedure in which a set of N training pairs (x n , y n ) is exploited to optimize the parameters θ that characterize the network against some loss function M, i.e.: where F θ (y n ) is the prediction made by the architecture corresponding to the ground truth value x n .To exploit the above general scheme for the specific problem at hand, the architecture F θ , the loss function M, and the training pairs (x n , y n ), have to be defined: • The network's input y n is a stack of one or more OSM images, depending on the number of frequencies • As far as the ground truth x n is concerned, the most straightforward choice would be to define it as an image in which each pixel is associated with the local value of the contrast.However, for piece-wise homogeneous targets, a more efficient way to encode τ is to express it through its spatial gradient.Different from the original image, in which all pixels belonging to the target will be different from zero, the gradient only assumes nonzero values at boundaries.As well known, this naturally provides a sparse representation of the unknown, which encodes all the required information with a minimal number of non-zero coefficients.Accordingly, the network's output xn,i is the predicted augmented shape, i.e. a map of the spatial gradient of the EM properties of the targets.More in detail, the image gradient is computed using the intermediate difference gradient method as: and the augmented shape fed into the network (as ground truth during training) is obtained as ∥∇τ ∥ = ( ∂τ ∂x ) 2 + ( ∂τ ∂y ) 2 .Note the gradient can be computed using different gradient operators, like Sobel or Prewitt [25], but they involve the convolution of the image with a 3 × 3 filter, thus resulting in a less sparse version of the gradient, and therefore less effective for our purposes.• The task to be performed by the network is to transform the input OSM image into an image depicting the estimated augmented shape.Such a task can be cast in terms of a pixel value regression, i.e. transform each pixel of the input image into an estimated contrast gradient ∇τ value.To this end, a U-Net architecture [15,26] is considered, whose specific structure and different processing steps are detailed in figure 2. The U-Net training for the (non-linear) regression task at hand is assessed taking as loss function M the mean squared error MSE defined as [13]: where N B is the batch size [13] and N P is the total number of pixels per image.Additionally, it is worth noting that U-Net is not necessarily limited to single input images.Hence, when multi-frequency data are available, OSM images for each single frequency data can be supplied stacked together using the U-Net channel dimension [13]. Contrast estimate The last step of the processing flow is to determine the contrast map from the augmented shape predicted by the network. For a homogeneous target in free space, such a task is straightforward, as ∥ ∇τ ∥= τ .Hence, the contrast map in this case could be readily retrieved by assigning to each pixel belonging to the identified contour the contrast value obtained by averaging the values of ∥ ∇τ ∥ estimated by the network. 
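As a concrete illustration of the gradient encoding introduced above, the ground-truth augmented shape can be computed from a contrast map with a few lines of numpy. The intermediate-difference implementation below (np.diff plus zero padding on the last row and column) is one possible reading of the operator described in the text.

# Sketch of the augmented-shape encoding used as ground truth: magnitude of the
# contrast gradient computed with intermediate (forward) differences, which keeps the
# representation sparse (non-zero only on boundaries).
import numpy as np

def augmented_shape(tau):
    """tau: (H, W) contrast map; returns ||grad tau|| with the same shape."""
    d_dx = np.zeros(tau.shape, dtype=float)
    d_dy = np.zeros(tau.shape, dtype=float)
    d_dx[:, :-1] = np.abs(np.diff(tau, axis=1))   # intermediate difference along x
    d_dy[:-1, :] = np.abs(np.diff(tau, axis=0))   # intermediate difference along y
    return np.sqrt(d_dx**2 + d_dy**2)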
In the more general case of targets embedded in a homogeneous medium of known or estimate permittivity, the above also applies 5 .Hence, it is possible to extend the above straightforward approach also to nested targets through the following post-processing procedure: (i) for each contour, create a separate image having the same size as the original image; (ii) for each image, assign a contrast value to the pixel internal to each contour, by averaging the estimated gradient on the contour; (iii) sum all the partial images to obtain the final result. Note that, by means of the above procedure, the superposition of the targets having overlapping supports allows to restore the contrast with respect to the host medium which embeds the targets. Network implementation and optimization To cope with the 2D canonical ISP in free space at hand, the implementation of the U-Net is carried out optimizing its parameters θ with a training set of simulated data similar to the one used in [26].In particular, the training set consisted of cylinders placed in groups of two with variable size, location and permittivity.However, as opposed to [26], single targets were not considered in the simulations.Also, no profile was allowed to be partially outside of the imaging domain, while target overlapping was permitted.Details of the measurement conditions are listed in table 1. For the training and assessment, a total set of N = 7000 scattering experiments was simulated.Among them, 85% were used as the training set and 15% as validation set.In particular, for each simulated target, N F = 8 OSM indicator functions were built using equation ( 4).The data have been numerically computed using a proprietary forward solver based on Richmond's implementation of the method of moments [27].The code has been validated against the reference paper for consistency [27].For reproducibility, the training dataset is publicly available [28]. Accordingly, the input of our U-Net is a stack of N F matrices each encoding the 64 × 64 image of the OSM indicator I at each frequency.A normalization to [0, 1] was carried out for each indicator [29]. The optimization of the loss function in equation ( 8) was carried out using Adam optimizer [30], with a learning rate of 10 −4 and a batch size of 16.An optimal solution is found after several passes through all samples in the training set.A complete pass of the whole training set is known as epoch.A 200 epoch-long training was performed. The result of the training process is depicted in figure 3, which reports the behavior of the MSE for both the training and validation set along the epochs.As can be seen no overfitting occurs. Performance evaluation metrics To quantitatively assess the performances of the optimized models, two metrics were used.The first considered metric is the mean absolute percentage error (MAPE) [31], which is defined as: Although MAPE provides an estimation of the performance, it can suffer from weighting down the reported performance as a consequence of the high number of pixels with zero values.For this reason, a modified version where the MAPE is only computed over the pixels with positive values in the ground truth was calculated as well: While MAPE can be interpreted as a performance metric of the qualitative error, i.e. how well the framework retrieves the shapes of the targets, MAPE >0 reports the performance concerning the retrieval of the actual values of ∥ ∇τ ∥. 
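The restricted metric is simple to implement; a possible numpy version is given below. The full-image MAPE is not reproduced here, since its exact treatment of zero-valued background pixels is not restated above and would be an assumption.

# Sketch of MAPE_>0: errors averaged only over pixels with strictly positive ground truth.
import numpy as np

def mape_pos(truth, pred):
    """Mean absolute percentage error over pixels whose ground-truth value is > 0."""
    mask = truth > 0
    return 100.0 * np.mean(np.abs(pred[mask] - truth[mask]) / truth[mask])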
For each sample of the validation set (1050 samples), the metrics were calculated and averaged over the whole validation set. The resulting MAPE is 0.94%, which confirms that the trained network is capable of performing satisfactory estimations. On the other hand, when restricting the error to the non-zero pixels the error grows, with MAPE >0 reaching 13.64%. This is related to the fact that the MSE appraises the image as a whole, so that the loss value is biased by the background pixels, whose number largely exceeds that of the non-zero pixels. Four randomly selected samples of the validation set are shown in figure 4, along with the OSM indicator images at the considered frequencies. As can be seen, the OSM images visually suggest several properties of the target, but the contour and permittivity are not at all evident. The U-Net predictions are shown in figure 5. These results are consistent with the aforementioned performance metrics: U-Net successfully finds not only the boundaries between the targets and the background but also the ones between the two targets. When it comes to the quantitative gradient values, the accuracy is lower. Assessment of the proposed physics-assisted framework against Fresnel experimental data To show the capability of the proposed framework to retrieve the contrast map of piece-wise homogeneous targets, the widely adopted benchmark data provided by the Institut Fresnel [21] have been considered. The Fresnel targets and the OSM indicator maps for each frequency are depicted in the first column of figure 6, while the results of the analysis are reported in figure 7. In particular, the first row reports the expected augmented shape, the second row the augmented shape retrieved by the U-Net, the third row the ground truth permittivity assuming the average values given in the database, while the last row shows the permittivity map estimated using the post-processing procedure. As a first comment, it is worth remarking that this validation has been carried out using the optimized U-Net resulting from the training process described in the previous section, without retraining. This is a noticeable aspect, since the Fresnel experimental data and targets are to some extent different from those considered in training the U-Net. In particular: • the measurement configuration is different, since the Fresnel data are collected within an aspect-limited configuration, whereas the configuration used in section 4 is full aspect; • in one of the Fresnel datasets, three targets are present, while only up to two targets were considered in the training; • only dielectric targets were considered in the training, while one of the Fresnel targets includes a metallic object; • the dielectric materials employed to build the Fresnel targets are not exactly lossless, unlike the targets considered in the training. As can be seen, the developed framework, while not optimized for the considered experimental data, successfully resolves the targets and provides quite accurate reconstructions of their augmented shapes. As far as the estimation of the permittivity values is concerned, from figure 7
it appears that the retrieved values are quite close to the actual values for the low-permittivity foam targets, while they are quite different for the plastic targets. In more detail, as reported in table 2, where the MAPE computed for each material is also given, the retrieved values are always close to the lower end of the expected actual values for each material and in general appear to be underestimated. This is due to the fact that the estimated augmented shape is a blurred version of the ground truth, so that the estimated gradient value is averaged over a larger number of pixels than in the ground truth. Conclusion This work presents a MWI framework for real-time and user-independent imaging of piece-wise homogeneous targets. Besides the methodological interest, this class of targets is relevant in most applications, wherein the EM properties of the targets of interest are indeed piece-wise constant (as in non-destructive testing) or can be well approximated by average values (as in biomedical imaging). The core of the proposed framework is the efficient encoding of the a priori information on the piece-wise nature of the targets by means of their augmented shape, i.e., the amplitude of the spatial gradient of the contrast. Accordingly, the approach is implemented by training a U-Net CNN to retrieve this quantity, which embeds the information on both the shape and the EM properties of the targets. Then, the predicted augmented shape is processed by means of a simple deterministic procedure to turn it into a map of the targets' permittivity. The network is trained by exploiting a physics-assisted approach in which domain knowledge is supplied in the form of OSM images at multiple frequencies. Such a qualitative imaging technique is a convenient way to pre-process the raw data, thanks to its capability to form the image in real-time and without any supervision. Moreover, these images result from the back-projection of the data from the measurement domain onto the imaging domain, using the adjoint operator. As such, they directly represent the information embedded in the data in the imaging domain, and they are not the outcome of an inversion process prone to the choice of a regularization parameter. For this reason, considering multiple frequencies allows us to include all the information embedded in the measured data in the learning process. In particular, including high-frequency images in the network's learning process is useful, even if they appear poor in terms of target reconstruction, since high-frequency data may contain pieces of information that are not present in the low-frequency data (e.g. details at a finer spatial resolution). Finally, the adopted physics-assisted approach allows the U-Net to manipulate images and transform them into the predicted augmented shape, taking advantage of its demonstrated capability to deal effectively with this kind of input.
The framework has been validated with the experimental data from the Institut Fresnel concerned with inhomogeneous targets [21]. The results achieved with this widely adopted benchmark showed the overall capability of the proposed framework to perform the task and to operate in cases different from the specific conditions for which the U-Net was trained. In particular, while the network was trained with purely lossless dielectric targets, the overall framework also works successfully when dealing with slightly lossy dielectric targets and metallic targets. On the other hand, it can be expected that dealing with dielectric targets with larger losses would require including those cases in the training to preserve comparable performance. Similarly, while in this work only circular cylinders were considered as targets, the approach is fully general and can be applied to targets having other shapes, provided a suitable training set is built to cover those different profiles. Finally, despite the positive results, there is still room for improvement, especially as far as the quantitative estimation of the permittivity values is concerned. Future research will address this issue as well as the application of the framework to more complex scenarios.
Figure 2. U-Net architecture diagram. The network consists of several layers in which the operations depicted with arrows are performed. The size of the matrices is detailed for each layer, the specific values being reported in the text. In our implementation of U-Net, K = 32.
Figure 3. Training procedure of the proposed physics-assisted DL. MSE of the training and validation splits, with the MSE axis in logarithmic scale.
Figure 4. Four randomly selected samples from the validation dataset. The first row depicts the target's contrast map while the other rows report the OSM indicator at each frequency and for each target.
Figure 5. Augmented shapes predicted by the network for the four validation samples. One sample per row, with the first column representing the ground truth and the second presenting the prediction made by U-Net.
Figure 6. The Fresnel targets considered for the validation of the proposed framework. The first row depicts the target's contrast map while the other rows report the OSM indicator at each frequency and for each target.
Figure 7. The results of the experimental validation. For each target, the first row depicts the augmented shape ground truth. The second row represents the prediction made by U-Net. The third row represents the ground truth assuming the average values of the targets' permittivity given in the database. The last row shows the permittivity map retrieved by the framework after the post-processing.
Table 1. Simulations for training data generation. Distance of the source from the center of Ω: 167 cm. Distance of the receiver from the center of Ω.
6,771.8
2024-02-02T00:00:00.000
[ "Engineering", "Physics" ]
GPU-based Clustering Algorithm for the CMS High Granularity Calorimeter The future High Luminosity LHC (HL-LHC) is expected to deliver about 5 times higher instantaneous luminosity than the present LHC, resulting in pile-up up to 200 interactions per bunch crossing (PU200). As part of the phase-II upgrade program, the CMS collaboration is developing a new endcap calorimeter system, the High Granularity Calorimeter (HGCAL), featuring highly-segmented hexagonal silicon sensors and scintillators with more than 6 million channels. For each event, the HGCAL clustering algorithm needs to group more than 105 hits into clusters. As consequence of both high pile-up and the high granularity, the HGCAL clustering algorithm is confronted with an unprecedented computing load. CLUE (CLUsters of Energy) is a fast fullyparallelizable density-based clustering algorithm, optimized for high pile-up scenarios in high granularity calorimeters. In this paper, we present both CPU and GPU implementations of CLUE in the application of HGCAL clustering in the CMS Software framework (CMSSW). Comparing with the previous HGCAL clustering algorithm, CLUE on CPU (GPU) in CMSSW is 30x (180x) faster in processing PU200 events while outputting almost the same clustering results. Introduction The luminosity of the future High Luminosity Large Hadron Collider (HL-LHC) is going to achieve up to 7.5 × 10 34 cm −2 s −1 [1] in its ultimate scenario, which is 5 times that delivered at present. This leads to the production of high pile-up events containing up to 200 interactions in each bunch crossing (PU200). Current CMS endcap calorimeters [2] are designed with a lifetime radiation limit of 500 fb −1 [3], which will be reached at the end of LHC Run-III in 2023. During the third long shutdown period of LHC from 2024 to 2026, the CMS Collaboration is going to conduct the Phase-II upgrade of the CMS detector [3]. One of the major tasks in the CMS Phase-II upgrade is to replace the current endcap calorimeters, including both endcap electromagnetic and hadronic calorimeters, with a new high granularity calorimeter system (HGCAL) which is based on highly-segmented Silicon sensors and plastic scintillators. The design of HGCAL [4] is shown in Figure 1. Two HGCAL endcaps will be mounted on both sides of the CMS detector. Each endcap weighs about 215 tons and measures about 2 m in longitudinal direction and 2.3 m in radial direction, covering 1.5 < |η| < 3.0. The full system operates at a temperature of −35 • C maintained by a CO 2 cooling system. Each endcap consists of 50 layers, each of which combines passive absorber material and active sensor material. The front 28 layers are the electromagnetic part (CE-E), which uses Cu, CuW and Pb as absorber and Si wafers with 120, 200, 300 µm thickness as sensors. The back 22 layers are the hadronic part (CE-H), which uses stainless steel and Cu as absorber and includes 8 full Silicon layers plus 14 hybrid layers of Si sensors and plastic scintillators with SiPM readout. The electromagnetic radiation thickness and hadronic interaction thickness of CE-E are 25X 0 and 1.3λ respectively, while the hadronic interaction thickness of CE-H is 8.2λ. In total, The full HGCAL system has 620 m 2 of Silicon and about 400 m 2 of plastic scintillators. The size of each Si sensor is 0.5-1.0 cm 2 and the number of Si channels is about 6 million. The size of the scintillators is 4-30 cm 2 and the number of scintillator channels is about 240 thousand. 
As a consequence of both high pile-up in the HL-LHC and enormous number of channels in the HGCAL, the number of input hits to HGCAL clustering algorithm is huge, usually in the order of n ∼ O(10 5 ) in PU200 events, where n denotes the number of hits. The clustering algorithm aggregates hits in 2D clusters layer by layer, producing about k ∼ O(10 4 ) clusters, where k denotes number of clusters. The average number of hits in a cluster is about m = n/k ∼ 10; therefore HGCAL clustering task is characterized by n > k m. Since cells are small compared to shower lateral size, an "energy density" is defined to better hint regional energy blobs in the HGCAL clustering. After 2D clustering algorithm, 3D showers in HGCAL are reconstructed by collecting and associating 2D clusters on different layers using TICL algorithms [5]. The current trigger system in CMS consists of two levels: Level 1 Trigger (L1T) and High Level Trigger (HLT). L1T utilizes customized ASICs and FPGAs to reduce the event rate from 40 MHz LHC collision frequency to 100 kHz within a 4 µs time budget for decision; HLT is fully based on C++ software running on CPUs and further reduces event rate from 100 kHz to 1 kHz with a 300 ms time budget for decision. However, in the era of HL-LHC, CMS HLT expects 30 times more computing load: 1.3x from upgraded detectors with more channels; 3x from increased number of pile-up interactions; 7.5x from improved L1T output rate. Among this 30x surge of the computing demand, improvement in the CPU performance by 2026 is expected to account for only 4x. Therefore, there will be a considerable deficit of computing power if the HLT architecture remains unchanged in 2026. In the HL-LHC era, the HLT time budget for CMS HGCAL clustering is roughly estimated to be less than a few tens of milliseconds. It is particularly a huge challenge of computing for the HGCAL clustering algorithm to process n ∼ 5 × 10 5 hits within such a limited time budget. To cope with this computing challenge, CMS is studying the feasibility of heterogeneous computing in HLT and offline reconstruction. With the support of CUDA [6] in the CMS software framework (CMSSW), it is possible to accelerate the HGCAL reconstruction with GPUs. In this paper, we present both CPU and GPU implementations of a fast fully-parallelizable density-based clustering algorithm, CLUsters of Energy or CLUE [7], in CMSSW for HGCAL reconstruction. In addition, we demonstrate that CLUE on CPU (GPU) is about 30x (180x) faster than the previous HGCAL clustering algorithm [8] in CMSSW for PU200 events. HGCAL Clustering Algorithm The previous clustering algorithm [8] used in the CMS HGCAL reconstruction was based on Clustering by Fast Search and Find Density Peak (CFSFDP) [9] and exploited a KD-Tree spatial index [10]. In the step of calculating local density ρ, KD-Tree provides a significant speedup comparing with not using any spatial index [8]. However, it has three crucial computing weaknesses: first, KD-Tree does not provide the optimal spatial index for HGCAL, because its window-query is of O(n log n) complexity and it is hard to construct or query on the GPUs; second, the calculation of separation δ does not take advantage of spatial index but still relies on a costly O(n 2 ) loop; third, the expansion of clusters happens in sequential order of decreasing density, which is not only costly because of sorting but also hard to parallelize. CLUsters of Energy (CLUE) [7] is a recently-proposed parallelizable high-speed clustering algorithm. 
It overcomes the above three computing weaknesses and achieves an average O(n) computational complexity in applications like HGCAL where n > k ≫ m. CLUE uses a spatial index [11] for fast querying of neighbours. Figure 2 is a demonstration of the CLUE procedure provided in [7]; the definitions of the four internal variables {ρ, δ, nh, followers} are also given in [7]. Before the clustering procedure starts, a fixed-grid spatial index is constructed. In the first step, shown as (a) in Figure 2, CLUE calculates the local density ρ for each point; the color and size of the points represent their local densities. In the second step, shown as (b), for each point CLUE calculates its nearest-higher nh (defined as the nearest hit with higher density) and its separation δ (defined as the distance to nh); the black arrows represent the relation from the nearest-higher of a point to the point itself, and if the nearest-higher of a point is -1, there is no arrow pointing to it. In the third step, shown as (c), CLUE promotes a point to a seed if ρ and δ are both large, or demotes it to an outlier if ρ is small and δ is large; promoted seeds and demoted outliers are shown as stars and grey squares, respectively. In the fourth step, shown as (d), CLUE propagates the cluster indices from the seeds through their chains of followers. Noise points, which are outliers and their descendant followers, are guaranteed not to receive cluster ids from any seeds; the color of the points represents the cluster ids, and a grey square means its cluster id is undefined and the point should be considered as noise. Both the CPU and the GPU versions of CLUE, referred to as CLUE-CPU and CLUE-GPU in this paper, have been implemented in CMSSW for HGCAL reconstruction. CLUE-CPU is implemented in C++, while CLUE-GPU is implemented using CUDA. Figure 3 shows the workflow of CLUE-GPU within CMSSW: hits are offloaded from the CPU to the GPU after energy calibration; the CLUE steps are then carried out on the GPU; finally, the clustering results are transferred back to the CPU for post-processing and for the downstream HGCAL reconstruction related to the 3D linkage of CLUE clusters. To validate the implementation of CLUE in CMSSW, the results of CLUE-CPU and CLUE-GPU are compared with the previous clustering algorithm in CMSSW version 10.6, referred to as CMSSW_10_6_X in this paper. In simulated tt events, CLUE-CPU and CLUE-GPU agree with each other completely, while both of them show some rare disagreements with the previous clustering algorithm implemented in CMSSW_10_6_X. Such disagreements are caused by the different ordering of hits with exactly equal ρ or equal δ when using different data structures, namely the grid in CLUE and the KD-Tree in CMSSW_10_6_X. An example of the clustering results is shown in Figure 4, where, from left to right, the results from CMSSW_10_6_X, CLUE-CPU, and CLUE-GPU are displayed. In this example, CLUE-CPU and CLUE-GPU provide almost the same result as the clusters in CMSSW_10_6_X. However, a small but notable difference is the blue cluster, which includes 4 hits in CMSSW_10_6_X but 2 in CLUE. This is because the hit at about (x = 60, y = 106) cm is equally close to the two neighbouring hits in the orange cluster and in the blue cluster, and its two different assignments, caused by the different ordering of these two neighbours in the spatial index, are equally correct. The topology of the blue cluster in both cases is acceptable. Therefore, it is reasonable to conclude that CLUE in CMSSW gives almost the same clustering result as CMSSW_10_6_X, with negligible differences. 
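To make the four CLUE steps just described concrete, the following is a minimal, single-threaded Python sketch of the same logic on a toy set of 2D hits. It is only an illustration of the algorithm of [7], not the CMSSW C++/CUDA implementation: the parameter names and values (dc, rhoc, deltac) are placeholders, the density definition is simplified, and the brute-force distance matrix stands in for the fixed-grid spatial index that makes the real algorithm O(n).

import numpy as np

def clue(x, y, energy, dc=1.3, rhoc=4.0, deltac=2.6):
    # Simplified CLUE on one layer: returns a cluster id per hit (-1 = noise).
    x, y, energy = (np.asarray(a, dtype=float) for a in (x, y, energy))
    n = len(x)
    pts = np.stack([x, y], axis=1)
    # Toy O(n^2) distance matrix; the real algorithm only queries a fixed-grid neighbourhood.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)

    # Step 1: local density = energy-weighted count of neighbours within dc.
    rho = np.array([energy[dist[i] <= dc].sum() for i in range(n)])

    # Step 2: nearest-higher (closest hit with larger density) and separation delta.
    nh = np.full(n, -1)
    delta = np.full(n, np.inf)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if higher.size:
            j = higher[np.argmin(dist[i, higher])]
            nh[i], delta[i] = j, dist[i, j]

    # Step 3: promote seeds (large rho and delta) and demote outliers (small rho, large delta).
    seeds = np.where((rho >= rhoc) & (delta >= deltac))[0]
    outlier = (rho < rhoc) & (delta >= deltac)

    # Step 4: propagate cluster ids from each seed through its chain of followers;
    # outliers and their descendant followers never receive an id and stay as noise.
    followers = [np.where(nh == i)[0] for i in range(n)]
    cluster_id = np.full(n, -1)
    for cid, s in enumerate(seeds):
        stack = [s]
        while stack:
            p = stack.pop()
            cluster_id[p] = cid
            stack.extend(f for f in followers[p] if not outlier[f])
    return cluster_id

Every loop over hits above is independent of the others, which is what makes a kernel-per-step CUDA port of CLUE straightforward; the expensive part, replaced here by a full distance matrix, is confined to small fixed-grid windows in the real implementation.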
Performance of CLUE in CMSSW The execution time of the HGCAL clustering was measured using PU200 events. The testing platform is based on an Intel i7-4770K CPU and an NVIDIA GTX 1080 GPU. The average execution time is shown in Figure 5, where the measured time includes all clustering steps and all necessary data transfers between CPU and GPU. The previous clustering algorithm in CMSSW_10_6_X, using a single CPU thread, takes 6110 ms on average. In comparison, CLUE-CPU takes only 203 ms using the same single CPU thread, producing almost the same result but 30x faster. The GPU implementation in CMSSW includes three versions. The first version is a plain CUDA implementation of CLUE-CPU and its average execution time is 159 ms. The second version combines the data of all hits in the entire HGCAL into a single Structure of Arrays (SoA) to improve access to global memory and to allow parallelization over hits on different layers; its average execution time is reduced to 50 ms. The third version performs a one-time GPU memory allocation before processing all events and a one-time memory release afterwards. It further reduces the execution time to 32 ms, which decomposes into 6 ms for kernel execution, 20 ms for host-device data transfer, and 6 ms for SoA conversion. The 6 ms total kernel execution time is comparable with that reported in [7]. The speedup factor of CLUE-GPU over CLUE-CPU is about 6x. In the future, the latency due to data traffic and SoA conversion can be shared with other reconstruction processes if more of them are also offloaded to the GPU. Such latency can also be partially hidden if multiple CUDA streams work on different events simultaneously to keep the GPU occupied. Conclusion CLUE has been integrated into CMSSW for the 2D clustering of the HGCAL reconstruction. Thanks to the support for heterogeneous architectures in CMSSW, CLUE can run not only on CPUs but can also be offloaded to GPUs via CUDA. Compared with the performance of the previous CPU-based clustering algorithm in CMSSW_10_6_X, CLUE produces almost the same clustering result and is 30x (180x) faster when running on CPU (GPU). The average execution time of CLUE-GPU for PU200 events is reduced to 32 ms, which is promising with respect to the time budget for HGCAL clustering in the HLT. Within the 32 ms of CLUE-GPU, the kernels for algorithmic computation take only 6 ms in total, while the latency due to data transfer (20 ms) and SoA conversion (6 ms) is the bottleneck. However, in the future, the latency due to data traffic and SoA conversion can be shared with other reconstruction steps and can also be partially hidden if multiple CUDA streams work on different events simultaneously.
3,082.8
2020-01-01T00:00:00.000
[ "Computer Science", "Physics" ]
A dataset for multimodal music information retrieval of Sotho-Tswana musical videos The existence of diverse traditional machine learning and deep learning models designed for various multimodal music information retrieval (MIR) applications, such as multimodal music sentiment analysis, genre classification, recommender systems, and emotion recognition, renders the machine learning and deep learning models indispensable for the MIR tasks. However, solving these tasks in a data-driven manner depends on the availability of high-quality benchmark datasets. Hence, the necessity for datasets tailored for multimodal music information retrieval applications is paramount. While a handful of multimodal datasets exist for distinct music information retrieval applications, they are not available in low-resourced languages, like the Sotho-Tswana languages. In response to this gap, we introduce a novel multimodal music information retrieval dataset for various music information retrieval applications. This dataset centres on Sotho-Tswana musical videos, encompassing textual, visual, and audio modalities specific to Sotho-Tswana musical content. The musical videos were downloaded from YouTube, and Python programs were written to process the musical videos and extract relevant spectral-based acoustic features, using different Python libraries. Annotation of the dataset was done manually by native speakers of Sotho-Tswana languages, who understand the culture and traditions of the Sotho-Tswana people. The dataset is distinctive as, to our knowledge, no such dataset has been established until now. 
Value of the Data • To address complex music information retrieval tasks for data scientists and researchers, such as sentiment analysis, genre classification, emotion/mood recognition, and recommendation systems, diverse datasets are essential.Under-resourced languages, like Sotho-Tswana, lack these datasets, impeding progress in language processing for these linguistic communities.This dataset, emphasizing under-represented African languages, empowers researchers by offering diverse multimodal data.It levels the playing field for all languages in the realm of music information retrieval.• The dataset acts as a technological resource for under-resourced languages, like Sotho-Tswana languages, thereby aiding in the development of technology for under-resourced languages, such as Sotho-Tswana.The methodology employed in this paper can also be extended to other under-resourced languages.The dataset can be reused to train different deep learning models for the textual, visual, and audio modalities while performing various music information retrieval tasks, such as sentiment analysis and genre classification, using the late fusion method.• Those working on creating and enhancing models for music-related tasks can use this dataset to facilitate the development of more accurate and culturally inclusive algorithms.• The dataset serves as a technological resource that can encourage people to continue learning Sotho-Tswana languages, thereby aiding in the preservation of endangered dialects within the under-resourced Sotho-Tswana language group. Background Modern multimodal music information retrieval often relies on machine learning and deep learning models, which require diverse multimodal datasets to accomplish tasks such as multimodal music sentiment analysis, multimodal music genre classification, multimodal music recommender systems, and multimodal music emotion/mood recognition.This motivates us to source and compile such a dataset so that music information retrieval tasks can be performed with Sotho-Tswana musical videos.Furthermore, in music information retrieval, a significant amount of information about the music is hidden in textual, audio, and visual modalities rather than in one modality alone. Data Description Two main folders were used to organize all the dataset files, as shown in Fig. 1 .Three additional folders will be created by the user when accessing the dataset, as stated in the data accessibility section of the specifications table.This folder contains the following CSV files of the dataset: 1. 
VideoSegment.csv: This CSV file contains the list of all 1861 segments of video clips. The metadata of this CSV file includes SN, Filename, URL, Title of Song, Type, Language, Sentiment, Lyrics, Genre, and Meaning of Song, with the following descriptions: SN is the serial number of the segment of the video clip, Filename is the name of the file used to store the video clip, URL is the Uniform Resource Locator link for the video file, Title of Song is the title of the Sotho-Tswana song as shown on YouTube, Type is the type of file (MP4), Language is the specific Sotho-Tswana language used in the song, Sentiment is the sentiment polarity of the video based on the aspects of moral and cultural values of Sotho-Tswana speaking communities (Negative, Neutral, Positive), Lyrics is the English text translated from the song, Genre is the group that the song belongs to, while Meaning of Song is a brief explanation of the meaning of the lyrics of the song. The various metadata of this CSV file were also used in the CSV file AudioSegments.csv. The annotations of these metadata are described in the data annotation section (Section 3.3). 2. AudioSegments.csv: This CSV file contains the list of all 1861 segments of audio clips. The various metadata of this file are the same as the metadata of the CSV file VideoSegment.csv. 3. Final_Acoustic_Features.csv: This CSV file contains the acoustic features of all 1861 segments of audio clips. The metadata of this file are SN, Filename, Title of Song, URL, Language, Lyrics, Genre, Sentiment, Onset Strength, Chroma STFT, Harmonic Percussive Source Separation (HPSS), Zero Crossing Rate, and Mel-Frequency Cepstral Coefficients (MFCC). 4. VideoImages.csv: This CSV file is generated when the user executes the Jupyter notebook named "Video_Images.ipynb". It contains the list of the generated images/frames corresponding to the different segments of video clips selected by the user for training. The CSV file will be used to train the deep learning model for the visual modality. Because the size of the generated images/frames of all the segmented videos is too large, we have not uploaded this CSV file. We expect the user to select the segments of video clips to use for the training and afterward execute the Jupyter notebook "Video_Images.ipynb", which will generate the frames/images and VideoImages.csv. • Notebooks: This folder contains all the Jupyter Notebook files that were used to automate some of the processes that created the dataset. It includes the following Jupyter notebook files: 1. Download.ipynb: This Jupyter notebook contains the Python script/program that will be used to download the videos from YouTube. 2. SplitVideo.ipynb: This Jupyter notebook contains the Python program that splits each of the downloaded 98 video clips into equal fifteen-second segments of video clips. The notebook utilizes a text file called split1.txt, which specifies the various time durations for splitting a downloaded video depending on its size. The reason for splitting the videos is to improve the performance during model training and evaluation (Kiranyaz et al. 2006) [1]. Experimental design The experiment used to generate the dataset consists of writing and executing Python programs on a Jupyter notebook. The description of the experiment that generated the dataset is illustrated in the Data Flow Diagram in Fig. 2. 
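Before walking through the individual processes, a rough sketch of the kind of code contained in Download.ipynb and SplitVideo.ipynb is given below, using the pytube and moviepy packages named in this paper. It is a hypothetical reconstruction for orientation only: the actual notebooks, their handling of split1.txt, and the dataset's file-naming scheme may differ.

from pytube import YouTube
from moviepy.editor import VideoFileClip
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_subclip

def download_video(url, out_dir="Raw_Videos"):
    # Download the highest-resolution progressive MP4 stream for one YouTube URL.
    yt = YouTube(url)
    stream = yt.streams.filter(progressive=True, file_extension="mp4").get_highest_resolution()
    return stream.download(output_path=out_dir)

def split_into_segments(video_path, out_prefix, segment_len=15):
    # Cut a downloaded clip into consecutive fifteen-second segments.
    with VideoFileClip(video_path) as clip:
        duration = clip.duration
    paths, start, idx = [], 0, 0
    while start + segment_len <= duration:
        out_path = f"{out_prefix}_{idx:04d}.mp4"
        ffmpeg_extract_subclip(video_path, start, start + segment_len, targetname=out_path)
        paths.append(out_path)
        start, idx = start + segment_len, idx + 1
    return paths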
The dataset creation process was initiated in Process (1) by downloading raw video clips using the video downloader site, savefrom.net,and search phrases, or using the Python script, download.ipynb,to download the videos from YouTube.The Python package Pytube was used to write the Python script, download.ipynb.In Process (2), the downloaded raw video clips, numbering 98, were segmented into various video segments, each lasting fifteen seconds.This segmentation was achieved using the Python package ffmpeg_tools within the moviepy library.As depicted in the data flow diagram in Fig. 2 , this process produced a CSV file named "VideoSegment.csv"and different segmented video clips. In Process (3), the video segment clips were utilized to extract corresponding audio segment clips.This process also generated a list of all the produced audio segment clips, which was saved as a CSV file named "AudioSegments.csv."The extraction was carried out using VideoFileClip and write.audiofile,both components of the moviepy library. In Process (4), the CSV file generated in the previous process and the segmented audio clips were employed to extract various spectral-based acoustic features for each segmented audio clip.This extraction was performed using librosa, another Python package.The resulting spectralbased acoustic features were stored in a CSV file named "Final_Acoustic_Features.csv." In Process (5), the Python package CV2 was employed to extract video frames/images from the segmented video clips.Simultaneously, a list of all generated video frames/images was compiled and stored in a CSV file named "VideoImage.csv."This CSV file, "VideoImage.csv,"will be utilized to train appropriate deep learning models for the textual and visual modalities.Conversely, the CSV file "Final_Acoustic_Features.csv" will be employed to train a suitable deep learning model for the audio modality. However, depending on the Music Information Retrieval (MIR) application, the user may choose to extract relevant spectral-based acoustic features directly from the segmented audio clips instead of using "Final_Acoustic_Features.csv."In that case, an appropriate notebook, which has been included, will be utilized to separate the audio segments from the video segments.Subsequently, acoustic features can be extracted during model training.The arrows in the data flow diagram of Fig. 2 show the flow of data into the various processes. Materials While most music information retrieval datasets have focused on Western music, there exists a database of African music, such as the archive of the Royal Museum of Central Africa in Tervuren, Belgium, which holds one of the world's largest collections of audio music from Central Africa (Moelants et al. [ 2 ]; Matthé et al. [ 3 ]; Antonopoulos et al. [ 4 ]).Another non-Western music corpus (dataset) for music information retrieval is the one collected in the CompMusic project, which consists of five different music cultures: Arab-Andalusian (Maghreb), Beijing Opera (China), Turkish Makam (Turkey), Hindustani (North-India), and Carnatic (South-India), (Serra 2014) [ 5 ].The dataset is an audio recording with appropriate information that covers varieties of melodies and rhythms present in each musical culture. While Moelants et al. [ 2 ] used a sample of the African music database for pitch and scale analysis, Matthé et al. [ 3 ] used the same African music database for flexible querying based on the needs of the user, and Antonopoulos et al. 
[ 4 ] used a sample of the same African music database for music retrieval based on rhythmic similarity with a sample of Greek traditional dance music.On the other hand, the audio dataset of the CompMusic project was aligned so that musically meaningful features related to melody and rhythm would be extracted from the audio dataset, (Serra [ 5 ]). To our knowledge, based on the evidenced literature, there does not exist a multimodal dataset of diverse Sotho-Tswana musical videos that can be used for different multimodal MIR tasks, like multimodal music sentiment analysis, multimodal music genre classification, and multimodal music emotion recognition of Sotho-Tswana musical videos.Our dataset concentrates on textual (lyrics), audio (voice), and visual (pictures) modalities, recognizing the inherent multimodal information present in videos through lyrics, audio, and visual channels. In a manner akin to our dataset, diverse multimodal datasets have directed their emphasis towards distinct modalities.Pandeya et al. [ 6 ] delved into the audio and visual modalities within musical videos, while Weiß et al. [ 7 ] focused their attention on lyrics (text), sheet music (visual images), and symbolic data modalities.This diversity in modality focus across various datasets contributes to a richer landscape for multimodal research, accommodating the multifaceted nature of information present in different types of multimedia content. In their investigation, Nishikawa et al. [ 8 ] utilized two modalities, namely lyric and audio modalities, to estimate musical mood.Similarly, Zalkow et al. [ 9 ] examined symbolic encoding and audio modalities.The selection of modalities is contingent upon the authors, yet the overarching concept centres around the utilization of diverse modalities for music information retrieval (MIR). Much like our dataset, specifically tailored for multimodal music information retrieval (MIR), several other multimodal music datasets have been formulated for various applications.For instance, Weiß et al. [ 7 ] curated a dataset comprising 24 songs from Franz Schubert's Winterreise, composed in 1827, which holds significance in the domains of music processing, music theory, and historical musicology. Additional noteworthy datasets include the Musical Theme Dataset (MTD), introduced by Zalkow et al. [ 9 ], catering to MIR research needs.The CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) dataset, scrutinized for the analysis of human multimodal language by Zadeh et al. [ 10 ], and the Multimodal Opinion level Sentiment Intensity dataset (MOSI), applied in the analysis of online opinion videos as presented by Zadeh et al. [ 11 ], contribute substantially to the advancement of research in multimodal music information retrieval across diverse applications. The Multimodal Sentiment Analysis Challenge, 2022, conducted in Lisbon, Portugal on October 10, 2022, emphasized the generation of three multimodal datasets for the detection of humour, emotional reactions, and stress.Despite these datasets not being centred around musical video content, as elucidated by Lukas Christ et al. [ 12 ], the CMU-MOSEAS (CMU Multimodal Opinion Sentiment, Emotions, and Attributes) dataset was crafted specifically for sentiment and emotion analysis, as outlined by Zadeh et al. [ 13 ]. In concordance with our dataset, numerous multimodal datasets, such as MOSI and CMU-MOSEI, derived from YouTube, have been employed to facilitate segmentation and subjectivity at the opinion level (Zadeh et al. 
[ 11 ]; Zadeh et al. [ 10 ]).In contrast to our dataset, the CH-SIMS dataset was meticulously curated to encompass both unimodal and multimodal aspects.This distinctive feature necessitated the independent annotation of unimodal and multimodal components (Yu et al. [ 14 ]). The construction of the audio modality component within our dataset involved the utilization of acoustic features, like Mel Frequency Cepstral Coefficients (MFCC), recognized as spectralbased acoustic features.MFCC, a well-established acoustic feature, has been extensively employed in training diverse machine learning models for voice identification, as demonstrated by Ali et al. 15 ].Additionally, Hazra et al. [ 16 ] leveraged MFCC to train various deep learning models for the discernment of emotions in human speech.These methodologies underscore the versatility and efficacy of MFCC in diverse applications within the realm of acoustic feature-based modelling.Based on the results reported by Pyrovolakis et al. [ 17 ], which showed that combining many spectral-based acoustic features improved the accuracy of the training, we have included different spectral-based acoustic features as part of the dataset. Methods The dataset creation process involved distinct stages: data acquisition, data preprocessing, and data annotation, as elucidated by Gandhi et al. [ 18 ].Each of these stages will be comprehensively addressed, incorporating the recommended fusion method tailored for multimodal music information retrieval applications utilizing the dataset. Data acquisition Like other multimodal datasets, the videos for this dataset were sourced from the social media platform, YouTube, as detailed by Zadeh et al. [ 11 ].Various search phrases were employed on YouTube, in conjunction with savefrom.net,a video downloader site, and a Python script download.ipynb,to procure Sotho-Tswana musical video clips.These search phrases encompassed, among others, "Traditional Sotho-Tswana Music," "Cultural Music in Sotho-Tswana Languages," "Church Music in Sotho-Tswana Languages," "Gospel Music in Sotho-Tswana Languages," and "Sotho-Tswana Songs."Native speakers proficient in the Sotho-Tswana languages were enlisted to discern and identify relevant musical videos within the dataset.A total of 150 video clips were initially downloaded in the MP4 file format through this process. Data preprocessing Musical video clips containing solely instrumentals, devoid of spoken words in Sotho-Tswana languages, were excluded, as were those in which the sound of instruments rendered it challenging to distinctly identify the language and words employed in the music composition.Similarly, musical videos with a duration of less than 15 s were discarded because the duration of such videos would be too short to discern the sentiment and genre of such videos during annotation.Additionally, musical videos exceeding a duration of two hours were omitted due to constraints associated with storage capacity and CPU processing time required for their handling. Given the disparate durations of the downloaded musical videos, they were partitioned into fifteen-second segments, resulting in a total of 1861 segments of musical video clips.This was necessary to improve the performance during model training and evaluation, (Kiranyaz et al. [ 1 ]).This segmentation mirrors the approach employed in the creation of the CMU-MOSI multimodal video dataset (Zadeh et al. [ 10 ]). 
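For readers who want to reproduce the audio side of the dataset, a minimal librosa sketch of how the spectral-based acoustic features listed in Final_Acoustic_Features.csv (onset strength, chroma STFT, HPSS, zero-crossing rate, and MFCC) can be computed for one fifteen-second segment is shown below. The sampling rate, the number of MFCC coefficients, and the mean-pooling over frames are assumptions; the paper does not specify the exact parameters used to build the published CSV.

import numpy as np
import librosa

def extract_acoustic_features(audio_path, sr=22050, n_mfcc=13):
    # Summarise one audio segment as a flat dictionary of spectral features.
    y, sr = librosa.load(audio_path, sr=sr, mono=True)
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)      # onset strength envelope
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)          # 12 x n_frames chromagram
    harmonic, percussive = librosa.effects.hpss(y)            # harmonic/percussive separation
    zcr = librosa.feature.zero_crossing_rate(y)               # zero-crossing rate per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # MFCC matrix

    features = {
        "onset_strength_mean": float(onset_env.mean()),
        "zcr_mean": float(zcr.mean()),
        "hpss_harmonic_energy_ratio": float(np.sum(harmonic**2) / (np.sum(y**2) + 1e-12)),
    }
    features.update({f"chroma_{i}_mean": float(v) for i, v in enumerate(chroma.mean(axis=1))})
    features.update({f"mfcc_{i}_mean": float(v) for i, v in enumerate(mfcc.mean(axis=1))})
    return features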
Data annotation This stage encompasses the labelling of various metadata of the dataset.Part of the metadata that were annotated for each video included language, lyrics, genre, and sentiment.The annotations of these metadata were done manually by native speakers of Sotho-Tswana languages, who understand the musical culture and tradition of the Sotho-Tswana people, with the ability to translate lyrics of Sotho-Tswana music into English.They listened to and watched each of the downloaded musical videos from beginning to end.Afterward, they determined the language of the musical video, translated the lyrics into English, and determined sentiment polarity based on the aspect of moral and cultural values of the Sotho-Tswana people, together with the genre of the music.The annotators also explained the meaning of the lyrics of the musical video.Since each segment of the video is from a particular downloaded video, therefore, the annotations for the segments of the same downloaded video are the same.Fig. 3-5 show the distributions of the segments of the video based on language, sentiment polarity, and genre, respectively.Multimodal datasets, as exemplified by Zadeh et al. [ 11 ] and Zadeh et al. [ 10 ], adopted a unified single annotation approach for each dataset instance.This methodology involves combining both modalities to derive a singular annotation for each instance.Two annotators listened to and watched each of the downloaded raw video clips, determined the language used, translated the lyrics of the song into English, determined the genre that it belongs to, based on the Sotho-Tswana music genres, and finally determined the sentiment polarity of the music, based on the aspect of moral and cultural values of the Sotho-Tswana people. While our dataset, Oguike et al. [ 19 ], employed a single class annotation for these three modalities (text [lyrics], audio [voice], and visual [picture] modalities), Zadeh et al. [ 11 ] incorporated both class annotation and an estimation of the strength of the class annotation. Recommended fusion method The recommended approach for integrating features from the textual, visual, and audio modalities of this dataset is the decision-level (late) fusion method.This recommendation is grounded in its superiority over alternative fusion methods, offering advantages such as ease of training, enhanced flexibility, and simplicity, as highlighted by Pandeya et al. [ 20 ].In alignment with this fusion method, the dataset was configured by segregating the textual and audio modalities from the visual modality.The intent is to independently train each modality and subsequently employ late fusion to amalgamate the outcomes of the training, as detailed by Gandhi et al. [ 18 ]. Limitations One of the limitations is the limited storage and processing capacity of the computer system used to create the dataset.With 98 downloaded musical videos of different durations, they were segmented into 1861 video clips of equal fifteen-second durations.Each of these 1861 segments of video clips is to be split into frames/images to obtain the sequence of images/frames for each video.This will result in many images/frames that will not be able to be stored or processed on a computer system with moderate storage and processing capacity, even with the use of Google Colab.Based on this limitation, we advise the user to use a random sample of segments of video clips instead of using all the 1861 segments of video clips. 
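Referring back to the recommended fusion method, the decision-level (late) fusion can be sketched as follows: each modality is classified by its own model and only the per-class probabilities are combined at the end. The three unimodal models, their probability outputs, and the equal weights are placeholders for whatever a user of the dataset actually trains; they are not part of the dataset itself.

import numpy as np

def late_fusion(prob_text, prob_audio, prob_visual, weights=(1/3, 1/3, 1/3)):
    # Weighted average of per-class probabilities from three unimodal models
    # for one video segment; returns the fused class index and the fused distribution.
    stacked = np.stack([np.asarray(prob_text), np.asarray(prob_audio), np.asarray(prob_visual)])
    fused = np.average(stacked, axis=0, weights=np.asarray(weights))
    return int(np.argmax(fused)), fused

# Hypothetical usage with three already-trained sentiment classifiers
# (classes: Negative, Neutral, Positive):
# label, probs = late_fusion(text_model.predict_proba(lyrics_vec)[0],
#                            audio_model.predict_proba(acoustic_vec)[0],
#                            visual_model.predict_proba(frame_feats)[0])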
Ethics Statement Though the videos were downloaded from YouTube, we have only provided the links to the videos in the dataset, without distributing them on the public repository. Declaration of Generative AI and AI-Assisted Technologies in the Writing Process During the preparation of this work, the author(s) used ChatGPT to improve language and readability. After using ChatGPT, the author(s) reviewed and edited the content as needed and take full responsibility for the content of the publication. Fig. 1. Organization of the folders that store the multimodal musical dataset; this figure illustrates the organization of the dataset into the two main folders, CSV_Files and Notebooks, whose contents are described above. Fig. 2. Data flow diagram of the processes that created the dataset. Fig. 4. Distribution of the video segments based on sentiment polarity. Fig. 5. Distribution of the video segments based on the Sotho-Tswana music genre. To use the dataset: 1. Download the dataset. 2. Utilize the URLs in the CSV files VideoSegment.csv and AudioSegments.csv to download the raw video clips from YouTube using the savefrom.net video downloader site. Alternatively, use the Python script called download.ipynb, which has been included as part of the dataset, to download the YouTube videos. Ensure that you specify a CSV file with URL metadata in download.ipynb; the CSV file can be either VideoSegment.csv or AudioSegments.csv, which you have downloaded. 3. As the downloaded video clips have varying durations, employ the Jupyter notebook SplitVideo.ipynb, along with the text file split1.txt, to divide each downloaded video clip into equal fifteen-second segments of video clips. This will help to improve the performance during model training and evaluation (Kiranyaz et al. 2006) [1]. 4. Ensure that the name of each segment of the video clip matches the corresponding name in the CSV file VideoSegment.csv. 5. Create a new folder named Video_Clips and store all the segmented video clips in this folder; organize the segments of video clips so that those from the same downloaded video clip are placed in the same sub-folder. 6. Utilize the Jupyter notebook separate.ipynb to separate the audio modality of each segmented video clip from the video. 7. Ensure that the name of each segment of the audio file corresponds to the corresponding name in the CSV file AudioSegments.csv. 8. Create a new folder named Audio_Clips and store all the segments of audio files in this folder. 9. Use the Jupyter notebook Video_Images.ipynb to generate frames/images from the segmented video clips; this will generate the CSV file called Video_Images.csv. Due to limitations in computing resources, when training the deep learning models you may not use all the segmented video clips and segmented audio clips; if you decide to use only some of them, delete the ones that are not being used from the CSV files VideoSegment.csv and AudioSegments.csv. However, if you are using a high-performance computer system, you can utilize all the segmented video clips and audio clips. 10. Store all the generated frames/images in a new folder named Video_Frames. 11. Depending on the MIR task you intend to perform, employ appropriate deep learning models to train the audio, textual, and visual modalities of the dataset, using the late fusion method. The remaining Jupyter notebooks in the Notebooks folder are: 3. Separate.ipynb: This Jupyter notebook contains the Python program that separates the audio modality from each of the segmented video clips. 4. Features.ipynb: This Jupyter notebook contains the Python program used to extract the different spectral-based acoustic features of all the segmented audio clips. 5. Video_Images.ipynb: This Jupyter notebook contains the Python program used to generate the frames/images of the segments of video clips. It also generates the CSV file called VideoImages.csv, which is used to train the deep learning model for the visual modality.
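A condensed sketch of what Separate.ipynb and Video_Images.ipynb do, using the moviepy and OpenCV (cv2) packages named above, follows. The output folders match the instructions above, but the frame-sampling interval, audio format, and file names are assumptions rather than the notebooks' exact settings.

import os
import cv2
from moviepy.editor import VideoFileClip

def extract_audio(segment_path, audio_dir="Audio_Clips"):
    # Write the audio track of one fifteen-second video segment to a WAV file.
    os.makedirs(audio_dir, exist_ok=True)
    base = os.path.splitext(os.path.basename(segment_path))[0]
    out_path = os.path.join(audio_dir, base + ".wav")
    with VideoFileClip(segment_path) as clip:
        clip.audio.write_audiofile(out_path, logger=None)  # assumes the segment has an audio track
    return out_path

def extract_frames(segment_path, frame_dir="Video_Frames", every_n=15):
    # Save every n-th frame of the segment as a JPEG image.
    os.makedirs(frame_dir, exist_ok=True)
    base = os.path.splitext(os.path.basename(segment_path))[0]
    cap = cv2.VideoCapture(segment_path)
    saved, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            out_path = os.path.join(frame_dir, f"{base}_frame{i:05d}.jpg")
            cv2.imwrite(out_path, frame)
            saved.append(out_path)
        i += 1
    cap.release()
    return saved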
5,513.2
2024-06-01T00:00:00.000
[ "Art", "Computer Science" ]
Gold Nanorods Capped with Different Ammonium Bromide Salts on the Catalytic Chemical Reduction of p-Nitrophenol It is known that the reactivity of the nanocatalytic systems is related to the particle size and shape and also to the features of the capping agents on the nanostructures. In this study, gold nanorods (AuNRs) were synthesized by the seed-mediated method using different tetraalkylammonium bromide salts as capping agents, that are, cetyltrimethylammonium (CTABr), N,N-dimethyl-N-cetyl-N-(2-hydroxyethyl)ammonium (HEA16Br), and N,N-dimethyl-N-cetylN-(2-hydroxypropyl)ammonium (HPA16Br), and used as catalyst for the chemical reduction of p-nitrophenol (PNP) in the presence of NaBH4. The catalytic systems were characterized by ultraviolet-visible (UV-Vis) absorption spectroscopy and transmission electron microscopy (TEM). The effect of the ammonium bromide-based capping agent on the catalytic activity of AuNRs was evaluated by performing the chemical reduction of p-nitrophenol in the presence of excess NaBH4 in aqueous medium. Under the reaction conditions employed, the catalytic systems displayed detectable subtle differences in terms of induction times and apparent activation energy (Ea) values. These results show that slight changes carried out in the chemical structure of the capping agent are able to imprint even slightly modification of the kinetic parameter of the catalytic reaction. Introduction In contrast to the bulk form, gold in nanometric scale presents particular chemical properties and can even be employed as a catalyst in important chemical reactions, 1,2 such as C-C and C-Het bond formation, 3,4 oxidation, 5,6 hydrogenation 7 and chemical reduction. [8][9][10] The catalytic activity of gold nanoparticles (AuNPs) is related not only to the large number of atoms present on the surface of the nanoparticles, i.e., large surface/volume ratios, 11 but also to singular properties of these particles that arise due to quantum confinement. 12 In the nanometric regime, both the surface/volume ratio and quantum confinement effects are strongly dependent on particle size 13 and shape. 14 Colloidal solutions of AuNPs with different sizes and shapes, for instance, display completely different optical properties, with localized surface plasmon resonances (LSPR) that can cover the whole visible spectrum. 15,16 This set of properties that arises in the nanometric scale ("nano effect") is sometimes observed in nanocatalysis, mainly in the case of metal nanoparticles. 17,18 For instance, nanoparticles with different shapes but approximately with the same number of atoms present different catalytic reactivities, since the number of exposed atoms per particle is shape-dependent. 19 Moreover, only AuNPs with diameters between 1 and 10 nm are able to effectively promote CO oxidation. 5,6 Another important feature that must be considered in relation to catalytic systems based on nanoparticles is the nature of the molecules used to surround the nanoparticles, to prevent agglomeration in colloidal suspensions. 20 Indeed, these molecules (capping agents) play an important role in the catalytic properties of the particle, since, in general, the reagents need to pass through this molecular layer to have access to the catalytic surface. 
[21][22][23][24] Chemical reduction catalyzed by AuNPs in the presence of hydrogen molecules is now possible, but the applications are restricted, since gold surfaces have limited ability to adsorb and activate hydrogen and thus harsh reaction conditions may be required, i.e., high temperatures and pressures. 25 chemoselective reduction of nitroarene compounds to the respective amines 28 catalyzed by AuNPs, using NaBH 4 as a reducing agent, 29 is extensively described in the literature. This chemical transformation is applied in the fine chemicals industry, for example, in the manufacture of analgesic and antipyretic drugs, corrosion inhibitors, etc. 30,31 In this context, p-nitrophenol (PNP) is frequently used as a model substrate to compare and evaluate the potential of reducing agents and catalysts for the chemical reduction of nitroarene to aminoarene compounds. 32 The catalytic reduction of PNP meets all criteria for adoption as a model catalytic reaction, preferably under aqueous conditions and at close to room temperature. 33,34 The evaluation of the catalytic reduction of PNP in the presence of AuNPs has been reported considering the different aspects of the catalytic systems, such as particle size/surface area, 35,36 morphology, 15 facets, 37 and active site requirements. 38 Also, several authors 39,40 have compared the catalytic activity of colloidal gold nanoparticles with different capping agents and ligands. The structural aspects of the nanoparticle as well as the capping agents are the main aspects that can modulate the reactivity of the nanocatalyst. Recently, we reported the catalytic activity of AuNPs with different morphologies but the same capping agent in the chemical reduction of PNP. 16 In the study reported herein, we prepared catalytic systems based on gold nanorods (AuNRs) with different capping agents based on ammonium bromide and compered their catalytic properties on the reduction of p-nitrophenol (PNP) to p-aminophenol (PAP) under the same number of catalytic nanoparticles. This is the right condition to analyze the effect of similar capping agents and determine which is the suitable agent for the reaction. Materials HAuCl 4 .3H 2 O (99.9%), NaBH 4 (99%), (+)-L-ascorbic acid (99%), cetyltrimethylammonium bromide (CTABr, 98%), and AgNO 3 (99%) were obtained from Sigma-Aldrich (St. Louis, USA) and used as purchased. Deionized water was used to prepare all aqueous solutions. All reactions were conducted in the presence of air. N,N-Dimethyl-N-cetyl-N-(2-hydroxyethyl)ammonium bromide (HEA16Br) and N,N-dimethyl-N-cetyl-N-(2-hydroxypropyl)ammonium bromide (HPA16Br) were prepared as previously described. 24 UV-Vis spectra were recorded on a Shimadzu UV-2600 (Kyoto, Japan), with the aid of a temperaturecontrolled cell, Shimadzu CPS-100 (Kyoto, Japan) and optical glass cells with a length of 1.0 cm. The set-up was configured to fix the baseline of the distilled water absorption band from 390 to 410 nm. Transmission electron microscopy (TEM) was performed on a FEI Tecnai 20 electron microscope (Hillsboro, USA) at an accelerating voltage of 120 kV, and the samples were prepared with the addition of a drop of the gold colloidal solution on a copper grid coated with a porous carbon film. The hydroxylated ammonium salts were readily synthesized in one-step procedure described by Roucoux and co-workers. 
24 Briefly, for HEA16Br, 15 mL of ethanol, 10.7 mL of bromohexadecane, and 3 mL of 2-(N,N-dimethylamino) ethanol were placed in a two-necked flask maintained under reflux at 80 °C and left under stirring for 48 h. The same procedure was performed for the synthesis of HPA16Br, in this case applying 3.5 mL of 3-(N,N-dimethylamino)-1-propanol. Synthesis of gold nanorods The AuNRs-based catalysts were prepared by the seedmediated method, adapted from the protocols developed in the literature. 41,42 Briefly, two solutions were initially prepared: (i) seed solution: in a 25 mL flask, an aqueous solution of 0.025 mol L -1 HAuCl 4 (0.1 mL; 0.0025 mmol) was mixed with an aqueous solution of 0.067 mol L -1 CTABr (7.4 mL; 0.5 mmol). An ice-cold aqueous solution of 0.01 mol L -1 NaBH 4 (0.6 mL; 0.006 mmol) was then added and the color of the solution immediately turned brown. After 2 min, the system was left for at least 2 h without stirring prior to use; and (ii) growth solution: in a 25 mL flask, an aqueous solution of 0.025 mol L -1 HAuCl 4 (0.2 mL; 0.005 mmol) was added to a 0.068 mol L -1 aqueous solution of the respective surfactant (7.3 mL; 0.5 mmol). In the next step, 0.15 mL of an aqueous solution of 0.004 mol L -1 AgNO 3 (0.15 mL) was added under stirring, followed by the addition of an aqueous solution of 0.0788 mol L -1 ascorbic acid (0.070 mL). The system became colorless, verifying the reduction of Au 3+ to Au + . The growth of the nanoparticles was initiated by the addition of an aliquot (0.060 mL) of the seed solution to the freshly prepared growth solution. The solution was kept briefly under stirring (10 s) and then allowed to stand for at least 4 h without stirring. Before the application of the nanoparticles in the catalytic reactions (within 24 h), the AuNRs obtained were separated by centrifugation (13500 rpm, 15 min, 25 °C) and redispersed in deionized water (8.0 mL). Results and Discussion We carried out a systematic study of reactions for the chemical reduction of PNP in the presence of NaBH 4 and AuNRs capped with three different capping agents based on the ammonium bromide salts, CTABr, HEA16Br and HPA16Br (see Figure 1), as catalysts. Based on these experiments, physical-chemical parameters, such as apparent rate constant (k app ) and apparent activation energy (E a ), were obtained for all of the reaction systems. In the presence of these three surfactants, we prepared three colloidal AuNR@ammonium bromide salts via the seed-mediated method, using the approach developed by Nikoobakht and El-Sayed. 41 The colloids obtained were characterized by UV-Vis spectroscopy and their particles were analyzed by TEM (Figure 2). The three absorption spectra presented in Figure 2a are typical of systems containing colloidal AuNR, i.e., showing two maximum absorption bands (λ max ). 41 In the three cases, the maximum absorption bands are around 515 and 700 nm, suggesting that the AuNRs formed in all systems present similar aspect ratios. 43 It worth mentioning here that the slightly difference between the three extinction spectra, mainly in the second maximum absorption band (λ max2 ), can be due to: (i) the small differences in the aspect ratio of the AuNRs produced in each system; and (ii) the slight difference in the nature of the chemical molecular structure of the capping agents (see Figure 1). 44 The TEM images confirmed the formation of AuNRs with similar dimensions, i.e., approximately 30 × 10 nm (aspect ratio 3.0). 
In the three colloids, it was found approximately the same number of quasi-spherical particles (ca. 15%), normally generated using this method. 45 For the catalytic reaction, it is important to note that in our study we assumed that the same number of particles of AuNRs were added to the reaction mixtures, since for each colloidal solution the same number of seed particles (that grow and form AuNRs) was added. This control ensured that the number of particles formed in the three colloidal solutions is practically the same. Prior to the catalytic reactions, all AuNPs were isolated from the mother colloidal solution to eliminate the excess of capping agent and all traces of soluble gold species, which can interfere in the catalytic process. 46,47 The catalytic reduction of the p-nitrophenol (PNP) to p-aminophenol (PAP) by NaBH 4 was chosen as a model reaction to evaluate the catalytic activity of the systems AuNR@CTABr, AuNR@HEA16Br and AuNR@HPA16Br. UV-Vis extinction spectra were recorded over time to follow the chemical reaction ( Figure 3). In all cases, the maximum extinction band at 400 nm is related to the presence of the sodium salt of PNP, formed under alkaline conditions soon after the addition of NaBH 4 , in the medium. At this moment, the color of the mixture changes immediately from light to bright yellow. The chemical reduction initiates only after the addition of the catalyst (AuNRs) in the reactor, easily confirmed by a change in the color of the solution, from bright yellow to colorless (Figure 3a) and measured by the decrease in the band of the ionic form of PNP (400 nm) and a new weak band at 300 nm appears due to the formation of the reaction product, i.e., the ionic form of PAP (Figure 3b). 48,49 In order to obtain the kinetic parameters related to the chemical reduction of PNP for the three catalytic systems, we carried out a series of catalytic reactions at different temperatures. These data can be used to evaluate the relationship between the nature of the AuNR capping agent and the catalytic activity of the systems for the chemical reduction of PNP to PAP. Since the catalytic reactions occur in a pseudo-firstorder reaction regime, because the concentration of sodium borohydride employed was much higher than the stoichiometric amount needed to reduce PNP, 50 it was reasonable to assume that the concentration of BH 4 remained constant during the reaction. Plots of ln[PNP]ln[PNP] 0 versus reaction time show linearity for the three catalytic systems (see Supplementary Information section for details), and from the slope of the line we obtained the k app for each reaction, 51 as summarized in Table 1. From the data in Table 1, it can be observed that the apparent rate constant for the reaction catalyzed by the system AuNR@HPA16Br has the lowest value for all reaction temperatures tested, and the highest values were attained when the catalytic system AuNR@CTABr was used. For these systems, the substrate must pass through the barrier formed by the capping agents (CTABr, HEA16Br, or HPA16Br) that surrounds the AuNRs, and we suggest that the access of the substrates to the gold surface is different for each system. It can be seen that the induction time verified before the PNP reduction reaction is in fact different for the three systems, mainly at lower reaction temperatures (see Table 2). 
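As a numerical aside for readers who wish to reproduce this kind of analysis, the pseudo-first-order treatment described above (k_app from the slope of the ln([PNP]/[PNP]0) versus time plot) and the subsequent Arrhenius estimate of the apparent activation energy can be sketched in a few lines of Python. The functions and the commented example values are illustrative placeholders, not data from this work.

import numpy as np
from scipy.stats import linregress

def k_app_from_profile(t, pnp_conc):
    # Pseudo-first-order fit: the slope of ln([PNP]/[PNP]0) versus time gives -k_app.
    y = np.log(np.asarray(pnp_conc, dtype=float) / pnp_conc[0])
    return -linregress(np.asarray(t, dtype=float), y).slope

def arrhenius_ea(temps_K, k_apps):
    # Apparent activation energy from ln(k_app) = ln(A) - Ea / (R T).
    R = 8.314  # gas constant, J mol^-1 K^-1
    fit = linregress(1.0 / np.asarray(temps_K, dtype=float), np.log(np.asarray(k_apps, dtype=float)))
    return -fit.slope * R  # Ea in J mol^-1

# Hypothetical usage with made-up numbers (not measurements from this study);
# k15 and k35 would come from the same fit repeated at the other temperatures:
# k25 = k_app_from_profile([0, 60, 120, 180], [1.0e-4, 7.4e-5, 5.5e-5, 4.1e-5])
# Ea  = arrhenius_ea([288.15, 298.15, 308.15], [k15, k25, k35])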
The same trend observed for the reaction rate is once again verified for the induction time, i.e., the system AuNR@HPA16Br presented the longest induction period and AuNR@CTABr the shortest. The most important factors related to these differences are postulated as follows: (i) the resistance of the capping agents, hindering the reagents (PNP and NaBH 4 ) from reaching the surface of the nanorods; [52][53][54] and (ii) the release of the final product from the surface of the catalyst. All other factors can be considered at the same level for the three systems. [55][56][57] At this point, it is worth mentioning how the molecular structure of the capping agents are arranged around the AuNRs. There are strong evidences that freshly prepared colloidal seed mediated AuNRs have a compact bilayer of CTABr capping the nanoparticle. 47,58,59 However, in the reaction medium of catalytic tests employed in this work, this arrange can be destabilized leading to the formation of a less compact arrange of the capping agent, suggesting the formation of micellar structures anchored on the gold surface. 47 However, in the presence of hydroxylated chains, as in the case of HEA16Br and HPA16Br, stronger intermolecular interactions, via hydrogen bonds, between the surfactant molecular structures must occur, hampering the adsorption of the reagents on the metallic surface of the particle, leading to longer induction times and slower reaction rates. 24,47 These hydrogen bond interactions seems to be more effective when the hydroxylated carbon chain is longer (HPA16Br). The Arrhenius equation 51 can then be used to obtain the E a value for the chemical reduction of PNP reactions using each catalytic system (see Figure 4), i.e., 41, 43, and 50 J mol -1 K -1 for the catalytic systems AuNR@CTABr, AuNR@HEA16Br, and AuNR@HPA16Br, respectively. These values are consistent with similar studies. 19,60-62 Conclusions In this study we demonstrated that aqueous colloidal solutions based on AuNRs coated with hydroxylated ammonium salts, prepared by the seed-mediated method, were catalytically active in the conversion of PNP, producing PAP in the presence of NaBH 4 . Even though the amount of gold and number of particles were practically the same for the three catalytic systems evaluated, the catalytic properties differed. The reaction rates of the systems were very sensitive to the reaction temperature and in all cases, under the same reaction conditions, the values for the apparent rate constant (k app ) of the systems and, consequently, the apparent activation energy (E a ) differed considerably. These results reinforce the fact that the nature of the capping agent must be considered in the evaluation of the catalytic properties. It is postulated that the main reason for these differences in the catalytic properties is the permeability of the double layer of the capping agents in relation to the reagents. It is possible that the capping agents bearing hydroxylated substituents generate more compact double layer structures, hindering the access of the reagents to the surface of the catalyst, i.e., the surface of the gold particle. Supplementary Information Supplementary information (plots of [PNP] versus time and ln[PNP]ln[PNP] 0 versus time at different temperatures) is available free of charge at http://jbcs.sbq.org.br as PDF file.
3,707.2
2021-01-01T00:00:00.000
[ "Chemistry" ]
Insect diversity on Calotropis gigantea (L.) in Sri Lanka : Calotropis gigantea is a drought-resistant and salt-tolerant medicinal plant native to Sri Lanka. Although C. gigantea is widely distributed in Sri Lanka, information on insects associated with the plant is less understood. The objective of the study is to identify the diversity of insect fauna associated with C. gigantea . Surveys were conducted in 120 sites covering all provinces of Sri Lanka to document the insect fauna associated with C. gigantea and their biotic associations. The insects found in C. gigantea were cataloged as pests, pollinators, and occasional visitors. A total of thirteen morphospecies of phytophagous pests, six species of pollinators, and fourteen species of occasional visitors were documented. Dacus persicus and Paramecops farinosa were the highly damaging pests while Sphaeroderma sp. was more widespread. Xylocopa spp. were the most abundant insect pollinators. Dacus persicus and P. farinosa were identified as monophagous species of C. gigantea. Occasional visitors belonged to five orders and their diversity was very high. As the initial record from Sri Lanka, the findings of the study provide information on the identification of the insect fauna associated with Calotropis and their association with C. gigantea . INTRODUCTION Calotropis gigantea (L.) Dryand (Apocynaceae), commonly known as Arka Madura, Yercum (Sethi, 2014), Crown flower, or Giant milkweed (Kadiyala et al., 2013;Saikia et al., 2015), is native to India, China, Bangladesh, Burma, Indonesia, Malaysia, Pakistan, Thailand, Philippines, and Sri Lanka (Kumar and Kumar, 2015). In many countries in Asia, C. gigantea is used as a medicinal plant to cure various ailments, including bronchial asthma, cholera, convulsions, pneumonia, ringworm infection, smallpox infection, toothache, epilepsy, fever, leprosy, rheumatism, catarrh, cold, cough, inflammation, tumors, mental disorders, snakebite infection, and tuberculosis Kumar and Kumar, 2015;Abeysinghe, 2018). In Sri Lanka, C. gigantea is widely used in Ayurvedic medicine, for the treatment of pain and inflammation (Shukla et al., 2018). Furthermore, in Sri Lankan traditional medicine, C. gigantea is used to treat scorpion poisoning (Ediriweera et al., 2018) and snake bites (Herath, 2017). Similarly, the root is used to treat dysentery (Gunaratna et al., 2015). Also, Sri Lankan farmers used latex of the plant as a sticky material in sticky traps. Alpha and Beta calotropeol and Beta amyrin in latex act as excellent compounds for crop pest control (Widanapathirana and Dassanayake, 2013). Especially sugarcane farmers use latex of the plant to control termite attacks on their crop fields (Wanasinghe et al., 2018). Furthermore, under laboratory conditions, extractions of C. gigantea are effective in controlling the cotton mealybugs (Prishanthini and Vinobaba, 2014). Sri Lankan Buddhists offer Calotropis flowers to Load Buddha and in Thailand, flowers are used to decorate temple ceremonies (Gaur, et al., 2013). Insects associated with C. gigantea utilize the plant as a feeding substrate, a shelter, a hunting ground (Salau and Nasiru, 2015) as well as a breeding place. Phytophagous insects associated with C. gigantea play an important ecological role while they act as pests, predators as well as parasites (Saikia et al., 2015). The diversity of insect fauna associated with C. gigantea varies in different regions of the World. Although latex of C. 
procera is considered toxic to insects (Al dhafer et al., 2011) large numbers of insect pests cause considerable damage to the plant. Dhileepan (2014) explains that there are sixty-five phytophagous insect species associated with Calotropis spp. (including C. gigantea) in their native range. Among them, more than 50% of insects feed on leaves while others feed on flowers, stems, seeds, and fruits. Most of the insects associated with Calotropis species were recorded from India. Aphids, grasshoppers, and caterpillars of Danaus spp. are the common plant feeders associated with C. procera (Al dhafer et al., 2011). Danaus spp. is a pest of Calotropis species in Australia, Hawaii, Fiji, Brazil, Jamaica, and Puerto Rico (Dhileepan, 2014). The gregarious feeding nature of Aphis nerii Boyer leads to defoliation and dieback of shoots and immature fruits of C. procera (Dhileepan, 2014). Most of the insects associated with C. procera are polyphagous except for Paramecops farinosa Schoenherr and Dacus persicus Hendel. They are highly destructive, monophagous pests (Dhileepan, 2014). Dacus persicus is distributed in India, Sri Lanka, Pakistan, Iran, and Iraq. Another fruit fly species, Dacus longistylus attacking C. procera fruits is widely distributed in Afrotropical regions. Paramecops farinosa is distributed in India and Pakistan with a less dispersal range than Dacus spp. (Dhileepan, 2014). Also, a study from India reveals that several other species of pests of C. gigantea cause considerable damage to the plant (Tara and Madhu, 2011). As the initial record, Platycorynus sp. was recorded on C. gigantea plants in West Bengal. Platycorynus sp. feeds on leaves and flowers of C. gigantea plants (Sudip et al., 2004). Calotropis flower is a good nectar source for pollinators. The majority of insect pollinators of the plant belonged to the order Hymenoptera (Salau and Nasiru, 2015). A study in Israel found two carpenter bees, Xylocopa pubescens Spinola and X. sulcatipes Maa as major pollinators of C. procera. Carpenter bees are widely distributed in Asia and Africa (Eisikowitch, 1986;Zafar et al., 2018). In India, Apis dorsata Fabricius, Apis florae Fabricius, and Apis mellifera Linnaeus are active diurnal flower visitors of Calotropis spp. (Sudan, 2013). Dipterans of Family Muscidae, Sarcophagidae, and Syrphidae also visit C. procera flowers. Butterflies of the family Nymphalidae, Noctuidae, and Lycaenidae are occasional pollinators of C. procera (Sudan, 2013). Diversity of predators associated with Calotropis spp. varies according to the different regions of the World. In the Central region of Saudi Arabia, 22 species of insects are reported as predators associated with C. procera (Al dhafer et al., 2011). Occasional visitors display a "neutral" relationship with the C. procera plant. They do not feed on the plant or prey on associated insects (Al dhafer et al., 2011). Further, the study of Al dhafer et al. (2011) explains that occasional visitors may feed around the plant without having a direct association with C. procera. Although C. gigantea is widely distributed in Sri Lanka, no systematic surveys have been conducted so far. Therefore, the information on insects associate with C. gigantea in Sri Lanka is lacking and only a few records are available related to insect fauna of C. gigantea in Sri Lanka. 
Few species of bees, including Amegilla comberi Cockerell, Amegilla fallax Smith, Amegilla violacea Lepeletier, Xylocopa fenestrata Fabricius, and Xylocopa tenuiscapa Westwood, have been identified as pollinators of Calotropis in Sri Lanka (Karunaratne et al., 2005). Also, butterfly larvae of Danaus chrysippus chrysippus were recorded as feeders on Calotropis leaves and flowers (Jayasinghe et al., 2013; Perera and Wickramasinghe, 2014). There is no information on insects associated with C. gigantea in Sri Lanka beyond the above studies. Therefore, the present study was conducted to fill these gaps by surveying, cataloging, and identifying the insect fauna of C. gigantea in Sri Lanka. Study sites The study was conducted from December 2014 to June 2015, at a monthly interval, to identify the insect fauna associated with C. gigantea in Sri Lanka. Field visits were conducted covering 120 sites in nine provinces (Figure 1). Sampling was done only once at each site. Roadside sampling sites were selected randomly at thirty-minute intervals while traveling in a vehicle at a speed of fifty kilometers per hour. If a new site with C. gigantea plants was not observed after thirty minutes, traveling was continued until a site with C. gigantea was found (Wijeweera et al., 2021). At all sampling sites, the distribution of C. gigantea (GPS coordinates) was recorded. At each site, the insect fauna associated with C. gigantea was observed for thirty minutes. During the survey, insects associated with the plant were observed and photographed. The insects were collected directly from various parts of the plant, i.e., leaves, flowers, flower buds, stems, and fruits, by hand-picking. Two or three individuals of the same species at each site were collected into small plastic vials for morphological identification. Identification of insects The specimens were preserved, pinned, and lodged in the laboratory of the Department of Zoology at the University of Ruhuna, Sri Lanka. Specimens were identified to genus/species level under the guidance of entomologists of the Entomology Division, National Plant Quarantine Service, Department of Agriculture, Gannoruwa. Unidentified specimens were sent to Mr. Justin Bartlett, Technical Officer (Taxonomy), Biosecurity Queensland, Department of Agriculture and Fisheries (DAF), Australia, for identification. Insect fauna of C. gigantea and their distribution in Sri Lanka A total of 32 insect morphospecies belonging to twenty-three families were observed on C. gigantea in Sri Lanka. Insects were categorized as phytophagous insects/pests, pollinators/flower visitors, and occasional visitors. High insect diversity was recorded on C. gigantea along the coastal belt of Sri Lanka. Well-established insect populations were also observed in inland areas of the Southern, Northern, Eastern, and North-Central provinces. Insect pests of C. gigantea in Sri Lanka The majority of the insect fauna associated with C. gigantea in Sri Lanka were phytophagous species (Table 1). The phytophagous insects associated with C. gigantea belonged to nine families, Chrysomelidae, Lygaeidae, Cercopidae, Membracidae, Tephritidae, Aphididae, Papilionidae, Curculionidae, and Cerambycidae, and feed on different plant parts including leaves, fruits, flowers, flower buds, seeds, and plant sap (Table 1). 
Spilostethus pandurus militaris, Sphaeroderma sp., Graptostethus servus Fabricius, Spilostethus hospes Scopoli, and Aphis nerii Boyer de Fonscolombe were observed as gregarious feeders. Field observations revealed that all the pests caused minor damage to C. gigantea plants, except Phelipara moringae Aurivillius, Dacus persicus Hendel, and Paramecops farinosa Schoenherr, which are the major pests. Even though P. moringae is a destructive pest, it was less abundant within the country than D. persicus and P. farinosa. Dacus persicus and P. farinosa were highly destructive pests, as they damage fruits and seeds of C. gigantea plants and thereby reduce the reproductive output of the plant (Figure 2). Pollinators and occasional visitors associated with Calotropis gigantea in Sri Lanka The most common pollinators were Xylocopa fenestrata Fabricius, Xylocopa caerulea Fabricius, and Apis cerana Fabricius. They consumed nectar as well as pollen of C. gigantea flowers, while Spindasis lohita Horsfield, Danaus chrysippus Linnaeus, and Xylocopa spp. fed only on nectar (Figure 3). Occasional visitors utilized C. gigantea plants as a shelter and hiding place. Coccinella sp. fed on aphids on C. gigantea plants. The different insect pollinators/flower visitors and occasional visitors associated with C. gigantea are given in Table 2. DISCUSSION The insects collected from C. gigantea during this study in Sri Lanka are very similar to the insects reported on C. procera and C. gigantea from India (Pugalenthi and Livingstone, 1997; Chandra et al., 2011; Jana et al., 2012; Dhileepan, 2014). A similar study conducted in Saudi Arabia on C. procera identified 99 insect species belonging to 43 families (Al dhafer et al., 2011). According to a study in the Jabalpur district of India (Chandra et al., 2011), 8 insect species from 6 families have been documented on C. procera. Similarly, a study in West Bengal, India reported 19 insect species from ten families on C. procera (Jana et al., 2012). Thirteen phytophagous pest species belonging to nine families were recorded in Sri Lanka. In the Saudi Arabian study (Al dhafer et al., 2011), three species, the carpenter moth Semitocossus johannes (Staudinger), the scale insect Contigaspis zilla (Hall), and the milkweed aphid Aphis nerii (Boyer de Fonscolombe), were pests of C. procera. Similarly, a study in India revealed eight pest species associated with C. procera, including A. chrysippus, S. pandurus, S. hospes, L. acuta, A. nerii, A. foveicollis, C. peregrinus, and C. sexmaculata (Chandra et al., 2011). Compared with both studies, the highest pest species richness was recorded on Calotropis in Sri Lanka. Most pest species recorded in the present study were polyphagous feeders. Only P. farinosa (Aak weevil) and D. persicus (Aak fruit fly) were monophagous feeders (Dhileepan, 2014; Wijeweera et al., 2021). A similar observation was recorded in India (Dhileepan, 2014) and Pakistan (Sudan, 2013; Shabbir et al., 2019; Ali et al., 2020) as well. Dacus persicus was found in 26 sites including coastal and inland regions of the country. Gravid females lay eggs inside C. gigantea fruits by penetrating the skin of the fruit with their ovipositors (Kumar and Kumar, 2015). The larval stage of D. persicus is a major destructive seed predator in Calotropis species (Sharma and Amriphale, 2008). Nourishment and development of the larval stages of D. persicus take place within the fruit. Infested fruits rot and often drop prematurely. 
Pupation of this species occurs in the soil after the detachment of the fruit from the tree (Dhileepan, 2014). The damage is directly focused on the reproductive output of the plant, which severely reduces the propagation and dispersal of Calotropis species. Paramecops farinosa naturally occurs in India, Pakistan (Sudan, 2013), and Sri Lanka. The slow-moving nature of P. farinosa might limit its distribution to twenty inland sites, and its relative frequency of occurrence was recorded as 16.67% (Table 1). Larval stages of P. farinosa feed on and destroy fruits and seeds, while the adult weevil depends on the C. procera plant for feeding, sheltering, and oviposition. Paramecops farinosa feeds on leaves, flowers, and fruits (Sharma and Amriphale, 2007; Dhileepan, 2014). Field observations revealed that the adults prefer to feed on flower buds and tender leaves. Sphaeroderma sp. occurred in most of the sampling sites (67.5%) (Table 1). It is considered a polyphagous pest feeding on leaves (Dhileepan, 2014). According to observations of the present study, it tends to aggregate in groups to feed on tender leaves. Damaged leaves appeared as a perforated mesh with emerging whitish latex; later, damaged leaves became brownish in color and dried out. According to a study in India (Saikia et al., 2015), both adults and larvae of Corynodes sp. are pests of C. gigantea plants. Corynodes beetles appear metallic blue in color and feed on leaves, while the larval stages act as stem borers. The lygaeid bugs S. hospes, S. pandurus, and G. servus were the second most common (66.67%) polyphagous insect pests associated with C. gigantea plants (Table 1). Graptostethus servus Fabricius feeds on the seeds of Calotropis sp. and has also been recorded in India, China, Pakistan, Indonesia, Turkey, Syria, and Australia (Hussain et al., 2014). The three species are seed predators of Calotropis plants (Dhileepan, 2014). Additionally, adult and nymphal stages of S. hospes feed on the leaf sap of Calotropis sp. Nymphal stages inflict severe damage to the tender leaves by actively sucking leaf sap. Attacked leaves appear yellowish in color and dry out (Saikia et al., 2015). These damages were observed in the present study too. Spilostethus hospes has also been recorded in Australia, China, the Malay Archipelago, Pakistan, India, and New Caledonia, while S. pandurus has been reported from Australia, India, and Pakistan (Chandra et al., 2015). Abidma refula (spittlebug) was recorded as the third most common insect pest of C. gigantea. It fed on plant sap and was recorded in 62 sites (51%) in Sri Lanka (Table 1). One species of the same genus was recorded in India, which feeds on C. gigantea as well as C. procera (Dhileepan, 2014). Oxyrachis sp. (cow bug) was observed as a pest of C. gigantea in 57 sites. Oxyrachis sp. is widely distributed in South Asia, including India, Sri Lanka, and Nepal. They occasionally feed on C. procera. Both nymphs and adults feed gregariously on the sap of immature stems, leaves, and flower buds (Sudan, 2013). The gregarious feeding nature of Oxyrachis sp. leads to the weakening of plants. They produce honeydew, which attracts ants to C. procera plants (Sudan, 2013); a similar observation was made in the present study. Apart from the pests and pollinators of C. gigantea, there was great species richness among the occasional visitors. Drosophila sp. was commonly observed under the shade of C. gigantea leaves in 33 sampling sites. Coccinella sp. visits C. 
gigantea plants to hunt small insects. They are predators of aphids and other small insects. Similarly, Brumoides suturalis Fabricius is a predatory beetle recorded in Pakistan, Bangladesh, China, Taiwan, and India that feeds on aphids and mealybugs associated with C. gigantea (Sudan, 2013). CONCLUSIONS The findings of the present study provide detailed records of the insect fauna associated with the C. gigantea plant in Sri Lanka. Insects associated with Calotropis were cataloged as pests, pollinators, and occasional visitors. A total of thirty-two morphospecies of insects belonging to twenty-three families were identified. Thirteen pests associated with C. gigantea were identified to the genus/species level. Six species of pollinators and a high diversity of occasional visitors belonging to five orders were also documented.
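The relative frequencies of occurrence quoted above (for example, 16.67% for P. farinosa, 67.5% for Sphaeroderma sp., and 51% for the spittlebug) are consistent with a simple site-count ratio. The short sketch below illustrates that calculation; it assumes the denominator is the 120 surveyed sites, which the text implies but does not state as a formula, so treat it as an illustration rather than the authors' exact method.

```python
# Hypothetical illustration: relative frequency of occurrence as the percentage of
# surveyed sites at which a species was recorded. The 120-site denominator and the
# site counts below are taken or inferred from the text, not from a published table.
TOTAL_SITES = 120

site_counts = {
    "Paramecops farinosa": 20,          # "twenty inland sites" -> 16.67%
    "Dacus persicus": 26,
    "Abidma refula (spittlebug)": 62,   # reported as ~51%
}

def relative_frequency(n_sites: int, total: int = TOTAL_SITES) -> float:
    """Percentage of surveyed sites at which a species was recorded."""
    return 100.0 * n_sites / total

for species, n in site_counts.items():
    print(f"{species}: {relative_frequency(n):.2f}%")
```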
3,864
2022-06-21T00:00:00.000
[ "Environmental Science", "Biology" ]
A deep-learned skin sensor decoding the epicentral human motions State monitoring of complex systems requires a large number of sensors. In particular, studies in soft electronics aim to attain complete measurement of the body, mapping various stimulations such as temperature, electrophysiological signals, and mechanical strains. However, the conventional approach requires extensive sensor networks that cover the entire curvilinear surfaces of the target area. We introduce a new measuring system, a novel electronic skin integrated with a deep neural network, that captures dynamic motions from a distance without creating a sensor network. The device detects minute deformations through its unique laser-induced crack structures. A single skin sensor decodes the complex motions of five fingers in real time, and the rapid situation learning (RSL) ensures stable operation regardless of its position on the wrist. The sensor is also capable of extracting gait motions from the pelvis. This technology is expected to provide a turning point in health monitoring, motion tracking, and soft robotics. For exploring and understanding complex systems like the earth and the universe, monitoring their different parts is essential. This can be achieved by obtaining the status of each part through various signals such as radio waves, mechanical vibrations, and electricity. Since these signals are generated by many components that are widely distributed throughout the systems, their acquisition and integration are key to understanding their nature. Advanced technologies, like the global seismographic network and the radio telescope system, facilitate the collection of signals by placing highly sensitive detectors at positions where signals converge. This enables the decoupling and integration of the information entangled in the converging signals into one frame of knowledge for the observation of the entire system. A similar approach can be applied to monitor the complex movements of the human body. Current methods that directly measure joint [1][2][3][4][5][6][7] and muscle activity by electromyography (EMG) signals 8-10 embody inefficiencies, since detecting signals from each joint requires a great number of sensors connected by thousands of external wirings in order to decode human movements. Furthermore, EMG signals are not only affected by inconsistencies due to the coupling between neighboring muscles but also require a large number of sensors [11][12][13], introducing time- and labor-intensive data preprocessing. Such impracticality can be avoided by developing and using a suitable, highly sensitive sensor rather than pinpointing every joint and muscle of our body. We therefore propose an ultrasensitive skin-like sensor that measures previously undetectable signals from small skin deformations far from the joint, coupled with a deep neural network that clarifies the movement of the corresponding body parts. The sensor is attached onto the wrist and is capable of extracting signals corresponding to multiple finger motions, providing a method to understand the human body that is more efficient than pinpointing every joint and muscle. Laser-induced nanoscale cracking allows the sensor to achieve high sensitivity, and the performance of the sensor was engineered through a concrete theoretical model. A consecutive laser serpentine patterning also allows the sensor to conformably attach to the epidermis. 
The deep neural network successfully decodes the temporal sensor signals from the wrist to generate the corresponding finger motion. Our RSL system guides users to collect data from an arbitrary part of the wrist and automatically trains the model in a real-time demonstration with a virtual 3D hand that mirrors the original motions. The sensor is also applicable to the pelvis and can successfully generate dynamic gait motions in real time. These would facilitate an indirect remote measurement of human motions which can be used in various applications that rely on motion perception such as rehabilitation, prosthetics, human-machine interfaces, and wearable haptics for virtual reality. Results Deep-learned skin-like sensor system. An illustration of motions in the human body is shown in Fig. 1a. Movement of any joint is associated with its surroundings 14, involving electrical signals such as the action potential of muscle, or mechanical signals of skin deformation. The blue arrows highlight the likely information flow caused by the movement from the main joints. Attempts to capture these signals are numerous, including measuring the movement of the foot from the shin 15, knee movements from the thigh 16, and information converging around the pelvis 17 with signals representing the entire gait motion. Similarly, motion of the arm 18 and facial expressions 19 can also be identified. Predicting the status of motion away from the main joints is like earthquake prediction, mainly involving time, location, and magnitude. Similarly, the aim of our study is to decode and extract the "epicentral" motions from the detected signal. Among the numerous motions generated in the human body, the hand exhibits the highest degrees of freedom and exquisitely performs a range of tasks 20; hence, predicting its motions is very challenging. Our study, therefore, initially focused on decoding dynamic finger motions in real time (Supplementary Table 1). Figure 1b illustrates the platform of the sensing system. A topographical movement of the wrist is triggered by the epicentral finger motion, with the attached crack-based sensor producing a signal containing the motion information. A sample scanning electron microscopy (SEM) image of the sensor crack is shown in the lower right corner. The magnified image of the sensor attached to the skin is shown in Fig. 1c. The serpentine patterns allow a conformal contact with the epidermis, enabling a more direct measurement of skin deformation. The design of our analysis is shown in Fig. 1d. The wrist contains information reflective of several finger motions. The highly sensitive crack-based sensor detects the deformation of the wrist as unidentified signals. The signals are then analyzed in a temporal sequence through our encoding network, and the current status of the motion is simultaneously generated through the decoding network. Highly sensitive sensor by laser-induced crack generation. The process requires a sensor that is sensitive enough to measure minute deformation while maintaining high conformability with the skin in order to catch the subtle topology transitions of the wrist. Digital laser fabrication provides a viable solution to obtain both features through laser-controlled cracking and serpentine patterning (Supplementary Fig. 6a, Supplementary Table 2). The periodic serpentine structure exhibits a higher level of elastic deformation 21, causing a conformal contact between the electrode and skin 22; this promotes sensing of minute skin deformation. 
A crack-induced layer with micro serpentine patterns can easily be generated by simply scanning the laser with different power conditions. The cracked layer is used as a sensing element, since such structures are widely utilized [23][24][25] in the detection of minute mechanical stimulations. Figure 2a illustrates the fabrication process and the structure of the sensor. Colorless polyimide (CPI) is uniformly coated on a glass substrate and fabricated silver nanoparticle (AgNP) ink is then spin-coated over the layer. The bilayer of AgNP and PI is first patterned into the serpentine structure through 355 nm wavelength laser ablation (over 100 mW). This process [26][27][28] is preferable to conventional fabrication methods 29,30, which often require high temperature, a vacuum environment, or a preprocessed mold. Subsequently, the laser power is lowered to a certain range (6-13 mW) to selectively convert the AgNPs into a crack-induced layer. The patterned structure is easily peeled from the glass substrate, with Fig. 2b, c depicting the magnified optical image of the final structure. The sensor performance is controlled through the annealing region, as depicted in the middle line of Fig. 2c. The fabricated free-standing sensor is displayed in Fig. 2d. The sensor is directly mounted on the skin with the assistance of adhesive PDMS. The strain distribution of the sensor under 15% strain is observed through the finite element method (FEM, COMSOL Multiphysics), as illustrated in Fig. 2e. On account of the out-of-plane buckling deformation of the sensor, an effective strain under 2% is applied through the electrode. The performance of the sensor under the serpentine pattern is discussed in Supplementary Note 5, Supplementary Fig. 5. Correlation between cracking and control parameters. Previous works on crack-based ultrasensitive sensors mainly involve fabrication by bending a metal-sputtered soft substrate 23,24,31,32. The performance of the conventional sensors is engineered by varying substrate thickness, substrate modulus, and annealing time, and by using stress concentration structures 23,25,32,33. However, previous approaches failed to deduce the relation between the control parameters and the sensor performance. The conventional technology barely explains the grain size that is associated with crack characteristics. A correlation between these can be examined by analyzing the signal outputs during the initial crack formation, whereas previous studies relied on data obtained from the sample after cracking ends. As depicted in Fig. 3a, initial cracking of the laser-annealed layer is carried out before the layer is utilized as a sensor, and the electrical response with respect to strain in the cracking process differs significantly from that in the sensor operation process. The electrical resistance increases discontinuously during the initial micro cracking, whereas during sensor utilization a continuous change of resistance is observed since the gap of the crack widens continuously. The discontinuous nature of initial cracking makes it difficult to obtain meaningful information from the signal output; therefore, we designed a bending test under quasi-static conditions, as shown in Fig. 3b, leading the set of discontinuous cracks to propagate continuously along the line. Cracks occur in regions where the local strain ɛ is higher than the critical crack strain ɛ_c (red in Fig. 3b), and do not occur below the critical strain (black in Fig. 3b). 
Since the first buckling mode of a thin film is defined as a sinusoidal form, the curvature of the deformed sensor is represented as a cosine curve (Supplementary Note 1, Supplementary Fig. 1). The projected length of the cracked zone l_cp is defined implicitly by Eq. (1), where ɛ is the local strain of the sensor and dl is the displacement of the bending stage. The resistance ratio between the non-cracked and cracked regions is defined as α = r_c/r_n, where r_c is the resistance per unit length of the cracked zone and r_n is the resistance per unit length of the non-cracked zone. The normalized resistance of the sensor as a function of the displacement dl of the bending stage is expressed by Eq. (2), where R_0 is the initial resistance of the sensor and l_c is the length of the cracked zone. The detailed derivations of Eqs. 1 and 2 are found in Supplementary Note 1. In this model, the two free factors ɛ_c and α that determine the final shape of the electrical response are found by fitting the experimental data, as shown in Fig. 3c. Higher laser power decreases ɛ_c and α: ɛ_c = 2.977 × 10^-4, α = 1.846 for 9 mW; ɛ_c = 2.4 × 10^-4, α = 1.4 for 11 mW. Conditions under 6 mW are not enough to provide electrical pathways through annealing between the particles, and cases above 13 mW cannot be appropriately fitted by the two free factors (Supplementary Fig. 6b). Selective laser sintering provides a sophisticated method for manipulating the critical crack strain. As illustrated in Fig. 3d, high-power annealing lowers the porosity of the particle layer and provides a higher bonding energy per unit area, w_f, through the necking between particles 34,35. According to Irwin 36, a crack propagates further when the condition in Eq. (3) is satisfied, where G_c is the critical energy release rate, U is the potential energy of the body, A is the crack area, and R is the resistance function. The typical energy release rate for the displacement-controlled case gradually decreases with the crack size 37, as depicted in Fig. 3e. (Fig. 3 caption: Investigation of crack characteristics. a Electrical response during initial cracking (left) and sensor operation (right). b Schematic illustration and modeling parameters of the displacement-controlled bending environment. c The initial cracking resistance changes of the sensors prepared by different laser power, with model-fitting curves. d Schematic representation of the sintered particle layer and SEM images of the cross section of the annealed (right) and non-sintered (left) particle layer. Scale bar, 40 μm. e-g Energy release rate curve and resistance function of "non-cracking" (e), "stable cracking" (f), and "unstable cracking" (g); the arrows at the bottom describe the increasing direction of electrical conductivity (blue) and degree of crack (gray). h-j Crack appearance for each regime, "non-cracking" (h), "stable cracking" (i), and "unstable cracking" (j); the first row indicates the schematic of crack length and the second shows the SEM images for each case. Scale bar, 14 μm. k The operating output resistance of the sensors prepared by different laser power.) The right-hand side of Eq. 3 is defined in terms of the crack size a, the Heaviside step function H(x), and the void size a_0 determined by the porosity of the sintered layer. Since critical cracking occurs at the intersection point of the G and R curves, interpreting Eq. 
3, the resistance function is categorized into three cases based on the relative position of G to the maximum strain ɛ_max; these include non-cracking (Fig. 3e), stable cracking (Fig. 3f), and unstable cracking (Fig. 3g). Excessive laser power anneals the particle layer into a fine bulk metal structure with high bonding energy. Since the R curve is above the set of G curves in Fig. 3e, the intersection point is nonexistent; thus, the crack cannot propagate further and maintains its initial size. We found that the condition for non-cracking was above ~13 mW. The illustration in Fig. 3h and the SEM image demonstrate that the crack is restricted at the boundary of the annealed area, preventing cracking of the sensor's active (conducting) area. Meanwhile, a low-power annealing condition under 6 mW provides inadequate bonding energy to bypass the envelope of G curves, as shown in Fig. 3g. In such a case, the R curve with bonding energy w_f4 meets the G curve corresponding to some strain ɛ_c4; however, the equilibrium crack size is infinitely large since the intersection point with the maximum strain ɛ_max diverges. Moreover, the particle layer lacks the electrical paths for delivering sensor signals due to insufficient annealing power, and the corresponding crack feature is depicted in Fig. 3j. The annealing condition between unstable cracking and non-cracking involves a distinct intersection point of the G and R curves throughout the straining range, as shown in Fig. 3f, with a finite equilibrium crack size for various conditions (stable cracking). In this condition, the critical crack strain of the resultant structure can easily be manipulated by varying the laser power. For instance, a higher power induces a smaller void size (a_2) and higher bonding energy (w_f2) in the structure, causing a smaller critical crack strain (ɛ_c2 < ɛ_c3). We already confirmed such a relation in the interpretation of Fig. 3c. To investigate the dependence of the sensitivity on the annealing power in the stable crack regime, we found the correlation between the critical crack strain and the length of the crack given by Eq. (5), where b is the thickness of the sensor, L is the length of the sensor, and p is the propagated length of the crack (Supplementary Note 2). Equation 5 shows that the square of the critical crack strain is proportional to the propagated length of the crack. As shown in Fig. 3i, the crack propagates with a certain crack length and reduces the conducting path of the sensor, which, in turn, increases the resistance ratio α. Moreover, p directly represents the grain structure of the sintered area, whereas other properties like Young's modulus are combined with other physical properties to define the grain size. A structure with larger grain size yields a longer p and smaller crack asperity, since the formation of the crack scatters less at the coarse grain boundary 38. Moreover, the distribution of the crack asperity exhibits fractal similarity to the grain size distribution by renormalization theory. Since the finely cracked face responds sensitively under strain (Supplementary Notes 3, 4, Supplementary Figs. 2 and 4), the sensor with larger p is more sensitive (gauge factor (GF) > 2000 at 0.55%), as shown in Fig. 3k. More detailed information on the validity of Eq. 5 and its relationship with the fitted ɛ_c and α is discussed in Supplementary Note 2. The overall process of the theoretical analysis is summarized in Supplementary Fig. 7. 
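To make the fitting step above concrete, the sketch below shows how the two free factors ɛ_c and α could be extracted from a measured resistance-displacement curve by nonlinear least squares. This is a minimal sketch, not the authors' code: the model function uses a simplified assumption that the normalized resistance grows with the cracked fraction of the sensor length, and the paper's exact expressions (Eqs. 1 and 2, Supplementary Note 1) would have to be substituted for a faithful reproduction.

```python
# Minimal sketch (not the authors' code): least-squares extraction of the critical
# crack strain eps_c and the resistance ratio alpha = r_c / r_n from a measured
# normalized-resistance vs. stage-displacement curve of the quasi-static bending test.
# The model is a simplified stand-in: R/R0 = 1 + (alpha - 1) * f_cracked, with a
# placeholder strain-displacement relation; use the paper's Eqs. (1)-(2) for real work.
import numpy as np
from scipy.optimize import curve_fit

L_SENSOR = 10.0  # mm, assumed free-standing length of the buckled sensor segment

def cracked_fraction(dl, eps_c):
    # Assumed: peak local strain of the cosine-shaped buckle scales with displacement dl;
    # the cracked zone is the part of the profile where the local strain exceeds eps_c.
    peak_strain = np.pi * dl / L_SENSOR
    frac = 1.0 - eps_c / np.maximum(peak_strain, eps_c)
    return np.clip(frac, 0.0, 1.0)

def norm_resistance(dl, eps_c, alpha):
    return 1.0 + (alpha - 1.0) * cracked_fraction(dl, eps_c)

# Synthetic stand-in data; replace with the measured R/R0 and dl arrays.
dl_data = np.linspace(0.0, 0.01, 60)                        # mm
rng = np.random.default_rng(0)
r_data = norm_resistance(dl_data, 3.0e-4, 1.8) + rng.normal(0, 0.005, dl_data.size)

(eps_c_fit, alpha_fit), _ = curve_fit(
    norm_resistance, dl_data, r_data,
    p0=[1e-4, 1.5], bounds=([1e-6, 1.0], [1e-2, 10.0]))
print(f"eps_c ~ {eps_c_fit:.2e}, alpha ~ {alpha_fit:.2f}")
```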
Learning the dynamic motions with a single sensor. We used a deep neural network to identify complex hand motions from the highly sensitive sensor signals. As illustrated in Fig. 4a, various hand motions result in signals from skin deformations and muscle movements. To guide our network to correctly identify the moving finger, we defined a metric space as in Fig. 4b. The r values express the bend of a finger while the θ values represent the identity of the moving finger. The metric is designed to consider the spatial positions of the fingers and how humans distinguish different hand motions. It is much harder to distinguish hand motions when the fingers are barely bent than when they are fully bent. Furthermore, the motions of two fingers far apart are more easily distinguished than motions of two fingers that are close to each other. Therefore, to represent this, points in our metric space are closer to each other when r and the difference between their θ values are lower. The Euclidean distance between points is used as our network's loss function to help it learn to differentiate different hand motions. For example, if the little finger is the finger that is bent, we impose a higher penalty on our model when it incorrectly determines the bent finger as the thumb than when it incorrectly determines it as the ring finger. Therefore, we designed the neural network to accomplish two tasks: firstly, analyzing sensor signal patterns into a latent space encapsulating temporal sensor behavior and, secondly, mapping latent vectors to the finger motion metric space defined above. The encoding and decoding networks in Fig. 4c achieve these goals, respectively. To maximize user convenience regarding usability and mobility 39,40, we used a single-channel sensor to generate signals corresponding to complex hand motions. Thus, it was necessary to utilize temporal sensor patterns to correctly determine the hand motion the signals were generated from. We therefore trained a long short-term memory (LSTM) network 41, an RNN architecture designed to analyze sequential data, to identify such temporal behaviors (Fig. 4c). A detailed description of the LSTM is included in Supplementary Note 6. To map latent vectors into corresponding points in our 2D metric space, the decoding network is composed of two separate dense layers, mapping encoded latent vectors into r and θ, respectively. The resulting vectors from our network are visualized in Fig. 4d. We used principal component analysis to project the latent vectors onto the 2D vector space. In general, the sensor signals corresponding to a specific finger create a circle in the 2D vector space. Since the finger motions involve a cycle of bending and unbending the finger between a starting straightened position and an ending bent position, this observation is expected. However, there are two main changes to the data after it is passed through the encoding network (Supplementary Fig. 10). Firstly, the starting points, where all fingers are straightened, are aligned by the encoding network. By labeling the input vectors as points in the half-circle metric space that we defined, we intended to represent the starting points as closer vectors in our metric space. The alignment above demonstrates that our model maps straightened finger motions to closer latent vectors as we intended. 
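The following is a minimal PyTorch sketch of the encoder-decoder just described: a single-channel LSTM encoder, two dense heads producing r and θ, and a loss equal to the Euclidean distance between predicted and target points in the half-circle metric space. Hidden size, the window length of 16 frames, and the output activations are assumptions for illustration, not the authors' exact architecture or code.

```python
# Minimal sketch of the encoding/decoding networks described in the text.
# Assumptions (not from the paper): hidden size 64, one LSTM layer, 16-frame windows.
import math
import torch
import torch.nn as nn

class FingerMotionNet(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.r_head = nn.Linear(hidden_size, 1)      # degree of bend, r in [0, 1]
        self.theta_head = nn.Linear(hidden_size, 1)  # finger identity, theta in [0, pi]

    def forward(self, x):                 # x: (batch, 16, 1) windowed sensor frames
        _, (h_n, _) = self.encoder(x)     # h_n: (1, batch, hidden)
        z = h_n.squeeze(0)                # latent vector encoding temporal behavior
        r = torch.sigmoid(self.r_head(z)).squeeze(-1)
        theta = math.pi * torch.sigmoid(self.theta_head(z)).squeeze(-1)
        return r, theta

def metric_loss(r_pred, th_pred, r_true, th_true):
    """Euclidean distance between (r, theta) points placed on the half-circle plane."""
    px, py = r_pred * torch.cos(th_pred), r_pred * torch.sin(th_pred)
    tx, ty = r_true * torch.cos(th_true), r_true * torch.sin(th_true)
    return torch.sqrt((px - tx) ** 2 + (py - ty) ** 2 + 1e-8).mean()

# Toy usage: a batch of 8 windows of 16 sensor frames each.
model = FingerMotionNet()
x = torch.randn(8, 16, 1)
r_t, th_t = torch.rand(8), torch.rand(8) * math.pi   # target bend and finger angle
r_p, th_p = model(x)
loss = metric_loss(r_p, th_p, r_t, th_t)
loss.backward()
```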
Secondly, the data points for the ring finger, which were widely distributed across the projected 2D plane before encoding, create a circle with a radius similar to those of the other fingers after encoding. The encoding network transforms different data points into latent vectors that represent their corresponding finger motion. Therefore, even if the original sensor signals had different values, they are still projected to similar latent variables as long as they correspond to the same finger motion. This demonstrates that our network correctly utilizes temporal sensor behavior to analyze the different patterns for each finger motion. Figure 4e shows the generation of r and θ values by the dense layers from the network-produced vectors mapped to the metric space. Even though some data points are misclassified when the r value is low, the dense layers clearly discriminate different finger motions when the fingers are significantly bent and the r values are high. A real-time demo of our network analyzing sensor signals from the hand motions of the sensor wearer can be seen in Fig. 4g and Supplementary Movie 1. Finger motions are generated by analyzing the strain changes at the subject's wrist site. However, a simple wrist movement can also modify sensor signals by producing non-finger-motion noises. To verify whether our sensor can generate signals that allow our model to distinguish between different noises and finger motions, we conducted an additional experiment to check whether our model can classify five motions and three types of noises generated by non-finger bending motions, as shown in Supplementary Fig. 11a. The three noises are sensor signals caused by directly touching the sensor, twisting the wrist, and bending the wrist, which we call touch, twist, and wrist, respectively. To perform the classification task, we modified the decoding network to a three-layered dense block producing an 8-dimensional output vector. Each value in the output vector is the model-predicted probability of one of the eight classes. The class with the maximum probability is chosen as the model-predicted class for a given sensor input. As illustrated in the confusion matrix in Supplementary Fig. 11b, our model could correctly classify finger motions and noises with 96.2% accuracy on average and 92.9% in the worst case (little finger motions). The result shows that our sensor can generate distinctive signal patterns for different hand motions, including non-finger motions, so that our model can distinguish finger motions from the noises generated by the three non-finger motions. From the above results, we know that, given the sensor data of a user, our network can be trained to correctly classify the user's finger motions. However, when the sensor is attached to a different user, the muscle movements and sensor values corresponding to the hand motions of the new user may differ from those of previous users, as human muscle movements vary from person to person. Since our network was trained on a dataset with different sensor patterns, it may consequently fail to determine the hand motions of the new user. We therefore need to retrain the network with new data from the new user. However, if we train our model from scratch, we need at least 2000 sensor frames from 80 s of finger movement for each finger. It is impractical and inconvenient to collect a 400 s training dataset each time the sensor is attached to a new user. 
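As a companion to the earlier sketch, the snippet below shows one plausible form of the modified decoder for the 8-class task (five finger motions plus the touch, twist, and wrist noises): a three-layered dense block whose softmax output gives per-class probabilities, with the argmax taken as the predicted class. Layer widths are assumptions; only the three-layer structure and the 8-dimensional output come from the text.

```python
# Minimal sketch of the modified decoder for the 8-class task (5 finger motions +
# touch/twist/wrist noises). Intermediate layer widths are assumptions.
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    def __init__(self, latent_dim: int = 64, n_classes: int = 8):
        super().__init__()
        self.block = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),          # three-layered dense block
        )

    def forward(self, z):
        return self.block(z)                   # logits; softmax gives class probabilities

head = ClassifierHead()
z = torch.randn(4, 64)                         # latent vectors from the LSTM encoder
probs = torch.softmax(head(z), dim=-1)
pred_class = probs.argmax(dim=-1)              # class with maximum probability
```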
Even if we were to collect enough data, the training time necessary for the LSTM network to extract the hidden sensor patterns from the dataset is too long. Similar issues arise when the sensor is attached to an area different from before or when the sensor itself is replaced with a new one. These problems hinder practical applications aimed at usage by multiple end users. To address these problems, we designed the RSL, a deep-learning system that guides the user to collect data and automatically processes them to retrain our models with only a small amount of data in a short period of time. The procedure of the system (Fig. 4f) involves following on-screen instructions to collect data for 8 s per finger when the sensor is placed on a new user (Supplementary Fig. 12). By sliding a time window of size 16, we group the collected data to form inputs of 16 consecutive sensor signals. Each generated window is used as a single input for our model. The detailed data processing is described in Supplementary Note 6. The RSL system uses transfer learning 42 techniques to utilize knowledge of sensor behaviors obtained during previous training steps. The parameters for the LSTM and dense layers are transferred from the pretrained model to the new model. After retraining for around 5 min with the newly collected data, the model is ready to generate the hand motions of the new user. Through the RSL system, all steps required for generating the hand motions of a new user are processed automatically. Typically, the temporal behavior patterns of the sensor signals that were already analyzed by our pretrained model are transferred to the new network. Consequently, the retraining time is massively reduced because the network only needs to retrain its mapping functions to map input values to a different range of sensor values. The effectiveness of using transfer learning is evident in the loss comparison graph (Supplementary Fig. 8). In the absence of transfer learning, over 20 min are required for the loss to decrease below 0.1, whereas in its presence, the time is within 5 min for the same dataset. Detailed information regarding the network design can be found in Supplementary Note 6. As a proof-of-concept demonstration of our system's expandability, the sensor was used to decode numeric-keypad typing, in which the signals combine movements of the wrist and the fingers. The modified model decoded, in real time, the 9 classes of numbers pressed by the fingers (Supplementary Note 7, Supplementary Fig. 13). Moreover, a single sensor was also attached to the pelvis to identify gait motions. The modified model (Supplementary Note 8) successfully generated the positions of the ankle and knee, as shown in Supplementary Figs. 14, 15. Moreover, the signals were collected in cases where the wrist and finger movements are coupled. Discussion Inspired by the understanding of detection techniques for measuring converging signals, we present a technique for measuring dynamic motions with a deep-learned soft sensor attached to the surface of the skin that is superior to conventional approaches. Apart from traditional wafer-based fabrication, the proposed laser fabrication provides a powerful solution for viable sensor utilization. The relationship between the sensor performance and the controlling parameters was investigated to ensure precise manipulation. 
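A minimal sketch of the RSL-style recalibration step described above: the 8 s of new-user data per finger (roughly 320 frames at the 40 Hz sampling rate reported in the Methods) is cut into inputs of 16 consecutive frames by a sliding window, and the pretrained LSTM and dense-layer weights are loaded so that only a short retraining run is needed. The file name, optimizer, and learning rate are hypothetical placeholders, not the authors' settings; FingerMotionNet refers to the earlier sketch.

```python
# Minimal sketch of the RSL data preparation and transfer-learning step.
# Hyperparameters and the checkpoint file name are hypothetical placeholders.
import torch

WINDOW = 16

def sliding_windows(signal: torch.Tensor, window: int = WINDOW) -> torch.Tensor:
    """signal: (n_frames,) -> (n_frames - window + 1, window, 1) model inputs."""
    return signal.unfold(0, window, 1).unsqueeze(-1)

new_user_signal = torch.randn(320)             # ~8 s at 40 Hz for one finger
inputs = sliding_windows(new_user_signal)      # (305, 16, 1)

# Transfer learning: start from the pretrained weights instead of random initialization.
model = FingerMotionNet()                      # class from the earlier sketch
model.load_state_dict(torch.load("pretrained_finger_model.pt"))  # hypothetical file
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# ...a short retraining loop (~5 min) over the newly collected windows would follow...
```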
A deep neural network is synchronized with the measuring equipment and the sensor, demonstrating accurate operation in decoding finger motions. The concept of our system is expandable to other body parts and offers great potential for detecting other stimuli and physiological signals. For device expansion to other body parts, a concrete ergonomic analysis will be needed to select an optimum location to measure epicentral motions. Methods for selecting the required number of sensors and techniques for integration with a wireless platform are necessary for practical use. Methods Synthesis of silver nanoparticle ink. 0.25 mol/l of silver nitrate (99%, Aldrich) was used as a precursor and dissolved in ethylene glycol (EG, 99.9%, Aldrich) with 0.02 mol/l of polyvinylpyrrolidone (PVP, Mw = 10,000). The solution was stirred at 150 °C until the synthesis was completed. The synthesized particles were then separated by centrifugation at 7000 rpm for 30 min and washed with ethanol. The collected particles were re-dispersed in ethanol at a concentration of 30 wt%. Data acquisition and communication. Pretraining and recalibration data were received by a digital multimeter (Keithley 7510, Keithley) at a 40 Hz sampling rate. For the real-time demo, the sensor signals are recorded at an identical sampling rate and simultaneously delivered to the learning network. Quasi-static bending test. Bending was applied by linear displacement (VT-80, PI) at a speed of 0.05 mm/s. The electrical signals were recorded at the same time in order to identify the relation between the annealed nanoparticle layer and the critical cracking strain. Strain mapping of the wrist. Deformation of the wrist caused by the finger motions was measured by digital image correlation (DIC). Random speckle patterns were distributed over the wrist in order to process deformation mapping through DIC. Sensor attachment on skin. In order to conformably attach the electronics, the entire sensor is embedded in an adhesive PDMS, tuned by ethoxylated polyethylenimine (PEIE) 43, as shown in Supplementary Fig. 3a. A magnified image of the attached sensor is depicted in Supplementary Fig. 3b. The vertical height information of the system is illustrated in Supplementary Fig. 3c. Experimental setup for direct laser writing. The optical system consists of a pulsed 355 nm laser (Nanio Air 355-3-V, InnoLas Photonics). Synthesized AgNP ink (20 wt%) is spin-coated on the CPI substrate at 200 rpm for 1 min, which allows fine deposition and evaporation of the solvent.
6,689.2
2020-05-01T00:00:00.000
[ "Computer Science" ]
Implementation of Science Student Worksheets Based on Multiple Intelligences: Material on Temperature and Its Changes INTRODUCTION In the 2013 Curriculum, students are required to be active and to optimize their intelligence and talents. Education must be appropriate to individual differences, and teachers must pay attention to the individual uniqueness of students (Surna & Pandeirot, 2014). Dewey, in Surna & Pandeirot (2014), also said that all children have the right to receive the expertise and skills that should be provided by education providers. Therefore, learning strategies are needed that are able to facilitate all student activities, one of which is Multiple Intelligences. This theory was proposed by Howard Gardner, a psychologist from Harvard. Initially Gardner identified seven types of intelligence but later expanded them to nine. In Baharuddin & Wahyuni (2015), the nine intelligences include: (1) logical/mathematical; (2) musical/rhythmic; (3) verbal/linguistic; (4) physical/bodily-kinesthetic; (5) visual/spatial; (6) intrapersonal; (7) interpersonal; (8) naturalist; and (9) existential. The Multiple Intelligences theory can be used as an alternative strategy in the teaching and learning process at school, helping teachers teach while paying attention to the intelligence and needs of their students so as to obtain better learning outcomes (Aryani, Sudjito, & Sudarmi, 2014). In teaching and learning at school, one of the teaching materials often used by teachers is the worksheet. A worksheet is a printed teaching material in the form of sheets of paper containing material, summaries, and instructions for implementing learning tasks that must be carried out by students, which refer to the basic competencies that must be achieved (Prastowo, 2011). According to the General Guidelines for the Development of Teaching Materials (Diknas, 2008), student worksheets are sheets containing assignments that must be carried out by students. Student worksheets usually consist of instructions or steps to complete a task, which must clearly state the basic competencies to be achieved. Given the importance of worksheets in education today, many researchers are conducting studies on worksheets, one of which concerns Multiple Intelligences-based worksheets. The aim is to help students learn more creatively. Wijayanti (2014) showed that the Multiple Intelligences-based worksheets they created were successful in improving students' creative thinking abilities. Therefore, the researchers conducted research into the development of Multiple Intelligences-based science worksheets that can facilitate the various intelligences possessed by each student. Apart from that, this worksheet can also be used as independent teaching material to increase student creativity. In this Multiple Intelligences-based science worksheet, each sub-chapter of the learning material presents the 9 multiple intelligences identified by Gardner. 
RESEARCH METHODS This research is research and development (R&D), which can be interpreted as a process or series of steps to develop a new product or improve an existing product. The steps in this development follow the Borg & Gall model as described in Sugiyono (2015). This research model is used to develop or validate products used in education and learning. Development Procedure Potential and Problems At this stage the researcher investigated the teaching materials used at SMPN 1 Jambi City, the general problems or obstacles faced by science teachers in teaching, the selection of media or learning resources, and the availability of worksheets. This stage was carried out by reviewing the teaching materials used in schools, interviews with science teachers at SMPN 1 Jambi City, and observations. Data Collection After identifying the potential and problems based on the results of observations, interviews, and curriculum analysis, the next step is to collect data. According to Sugiyono (2015), the data collected can be used as material for planning certain products which are expected to overcome these problems. This data collection was carried out to find out what students need in learning, which was then used as a basis for making the initial product of the Multiple Intelligences worksheet. Apart from that, the researchers also collected material from various sources to be presented in the worksheet to be developed. Product Design The stages carried out in designing this product are as follows: a) Material analysis stage. Material analysis aims to identify, detail, and systematically organize the main relevant parts that students will learn. The first step was to identify the core competencies, basic competencies, indicators, and learning objectives for temperature and its changes based on the syllabus used at the school, as the basis for compiling the worksheets, and the second was to create an arrangement or sequence of sub-materials which would later become the content of the material in the worksheets. b) Format selection stage. The choice of format in developing the worksheets is adjusted to the factors described in the learning objectives. The format chosen covers the design of the appearance, the content, and the selection of learning strategies. c) Multiple Intelligences-based science worksheet design stage. After analyzing the material and selecting the format for preparing the worksheets, the next stage is to create or design a Multiple Intelligences worksheet with material on temperature and its changes. The material in this worksheet is prepared using language that is easy for students to understand and includes images related to the material. Apart from that, this worksheet is equipped with practical sheets and practice questions. Design Validation According to Sugiyono (2013), design validation is an activity for assessing product designs which is carried out by providing assessments based on rational thinking, not yet tested in the field. Product validation is carried out by experts or experienced practitioners to assess the new products that have been designed. Product testing The science worksheets based on Multiple Intelligences on temperature and its changes were validated and then tested on 35 students in class VII C and 32 students in class VII B at SMPN 1 Jambi City. 
The research instrument used was a questionnaire. According to Sugiyono (2013), a questionnaire is a data collection technique carried out by distributing a set of questions or written statements to respondents to answer. In this study, the questionnaire used was divided into two based on the respondent, namely a validation questionnaire for material and design experts and a perception questionnaire for students. RESULTS AND DISCUSSION The Multiple Intelligences-based science worksheet that was developed was then validated. Validation is carried out to obtain approval from the specified validators. To obtain this approval, the worksheet receives assessments and suggestions for improvement. After getting the evaluations and suggestions from the validators, the next step is to revise or improve the worksheet. In this research, the material and media were validated by two validators. The following is an example of the science worksheet cover design that the researcher made. Cover design (No. 1, visual with accompanying information): the initial cover design was designed in such a way as to attract students' attention. On the cover there are pictures related to the lesson, such as an image of a student studying; there are also the material title and a description, and the test questions used are based on Merrill's taxonomy. The material validation process was carried out once, while the design validation was carried out three times. From the material and design validation process, the validators stated that the Multiple Intelligences-based science worksheet developed is suitable for testing. The worksheet was tested on class VII D students at SMPN 1 Jambi City to test the reliability of the questionnaire. Calculations using the alpha formula obtained a reliability of 0.67, in the high category. It was therefore concluded that this research questionnaire can be trusted, and it was used to collect non-test data on the suitability of the Multiple Intelligences-based science worksheets. Next, a trial was carried out on 35 students in class VII C and 32 students in class VII B at SMPN 1 Jambi City to see students' perceptions of, or responses to, the worksheets. The following are the results of the perception questionnaire that was distributed. The trial was carried out by distributing perception questionnaires to students. From the trials that were carried out, data on students' perceptions of the Multiple Intelligences-based science worksheet that was developed were obtained. The results of the analysis of student perceptions show a figure of 83% for class VII C and 81% for class VII B, which is in the "very good" category. The average percentage for students at SMPN 1 Jambi City is therefore 82%. This is in accordance with the percentage scale for the very good category, namely the range 81.00%-100% (Akbar, 2013). Overall, it can be concluded that this Multiple Intelligences-based science worksheet received a very good response from students, so it can be used in the learning process, especially regarding temperature and its changes. 
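For readers who want to reproduce the two calculations mentioned above, the sketch below shows the conventional Cronbach's alpha formula for the questionnaire reliability (reported as 0.67) and the percentage scoring used to place perceptions in the 81.00%-100% "very good" band. The formulas are the standard ones and are assumed here; the paper names "the alpha formula" and the Akbar (2013) scale but does not spell either out.

```python
# Illustrative sketch with assumed standard formulas (the paper cites "the alpha
# formula" and Akbar's (2013) percentage scale without writing them out).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: (n_respondents, n_items) matrix of questionnaire responses."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def perception_percentage(total_score: float, max_score: float) -> float:
    return 100.0 * total_score / max_score

def akbar_category(pct: float) -> str:
    # Only the 81.00%-100% = "very good" band is quoted in the text; the rest is assumed.
    return "very good" if pct >= 81.0 else "below very good"

class_pct = {"VII C": 83.0, "VII B": 81.0}
average = sum(class_pct.values()) / len(class_pct)   # 82.0%, matching the reported value
print(f"average perception: {average:.0f}% ({akbar_category(average)})")
```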
CONCLUSION Based on this research, science worksheets based on Multiple Intelligences were produced on temperature and its changes for class VII. The resulting worksheet consists of 5 pages, comprising the outer and inner covers, a foreword, a table of contents, general and specific instructions for using the worksheet, core competencies and basic competencies, a concept map, the core contents of the worksheet along with intelligence symbols, a bibliography, and the back cover of the worksheet. After validating the material once and validating the design three times, the Multiple Intelligences-based science worksheet was deemed worth trying out, and it received an average student perception percentage of 82%, which places the worksheet in the "very good" category.
2,060.8
2015-09-20T00:00:00.000
[ "Education", "Environmental Science", "Physics" ]
Preparation of Boron Nitride-Coated Carbon Fibers and Synergistic Improvement of Thermal Conductivity in Their Polypropylene-Matrix Composites The purpose of this study is to prepare boron nitride (BN)-coated carbon fibers (CF) and to investigate the properties of as-prepared fibers as well as the effect of coating on their respective polymer–matrix composites. A sequence of solution dipping and heat treatment was performed to blanket the CFs with a BN microlayer. The CFs were first dipped in a boric acid solution and then annealed in an ammonia–nitrogen mixed gas atmosphere for nitriding. The presence of BN on the CF surface was confirmed using FTIR, XPS, and SEM analyses. Polypropylene was reinforced with BN–CFs as the first filler and graphite flake as the secondary filler. The composite characterization indicates approximately 60% improvement in through-plane thermal conductivity and about 700% increase in the electrical resistivity of samples containing BN-CFs at 20 phr. An increase of two orders of magnitude in the electrical resistivity of BN–CF monofilaments was also observed. Introduction With the swift advances in the production of compact electronics, the thermal management of such units has become of high interest to engineers and designers [1][2][3]. Accordingly, the fabrication of thermally conductive and electrically insulating materials for this purpose has been targeted in several studies to efficiently enhance the heat rejection performance from the system while utilizing the advantages of novel materials, including lower cost and weight, as well as higher chemical and physical stability under harsh working conditions. Despite their low intrinsic thermal conductivity, polymers are good alternatives for this application, mainly because of their high resistance to corrosion and fouling, ease of processing, recyclability, low cost, and light weight [4][5][6]. To enhance the thermal conductivity in polymers, highly conductive fillers, such as carbon fibers (CFs), carbon nanotubes (CNTs), graphene nanosheets, and graphite powder, as well as ceramic and metallic particles can be used [7][8][9][10][11]. Among these, CF has drawn more attention in composite design where anisotropic high thermal conductivity, large mechanical load transfer, and light weight are desirable. However, the inertness of the CF surface and the difference in its surface energetics with polymers play a key role in the integrity and properties of the final multicomponent system [12][13][14][15]. Therefore, the challenge addressed here is to enhance the interfacial adhesion between the matrix and the filler using the proper interfaces to obstruct the boundary phonon scattering while buffering the electrical conductivity of CFs for a better and safer performance in electronic units. Owing to their high thermal conductivity, low electrical conductivity, and high-temperature stability, ceramics are regarded as promising materials where such properties are needed [16]. Decoration of CF with ceramic films and/or particles is one of the ways used to form a superior thermally conductive CF-based composite with an electrically buffering coating [17]. There are a variety of methods to fabricate ceramic coatings, including sintering [18,19], laser cladding [20], dip-coating [21][22][23], chemical vapor deposition (CVD) [24][25][26][27], magnetron sputtering [28,29], or a sequence of all or some of these processes. 
Although sintering and CVD are considered to be the primary methods for the large-scale fabrication of ceramic-coated fibers, they impose several restrictions, including high cost, complexity, and difficulty of operation, as well as enormous energy consumption. On the other hand, the dip-coating method has been known for its simplicity and safety. In the present study, boron nitride (BN) was selected as the target ceramic coating, as it possesses desirable properties, including low density, high thermal conductivity (~320 W/mK), and high electrical resistivity [30][31][32]. Lii et al. reported a similar process for BN deposition on CF and graphite substrates, composed of a boric acid-urea solution for boriding (or boronizing) and ammonia-assisted nitriding in a resistance furnace. They analyzed the effect of various compositions (via molarity control) of the boriding solution and nitriding temperature (up to 1000 °C) on the morphology of the ceramic coatings [21]. Following a similar dip-coating process of BN on CF substrates, Zhou et al. studied the effect of BN coatings on the dielectric properties of paraffin-matrix composites reinforced with treated CFs [23]. Here, we report the preparation and characterization of BN-CFs using only a gaseous source of nitrogen in a high-temperature (1400 °C) furnace. Moreover, to the best of our knowledge, there have not been any reports on the improvement of thermal conductivity of polymer-matrix composites (PMCs) reinforced with BN-CFs. Polypropylene (PP) was used as the matrix mainly due to its wide commercial availability and good recyclability [33,34]. In this study, to further contribute to the formation of conductive pathways within the matrix, graphite flakes (GFs) were employed as a cost-effective secondary filler, which can lower the percolation threshold of the fillers due to their two-dimensional geometry. The main objectives of the present study are as follows: 1. To simplify the synthesis of a BN microlayer on CFs by removing the adverse environmental effects of using urea in the boriding process, hence carrying out nitriding by using only a gaseous precursor in a high-temperature furnace. 2. To measure the effects of BN coating on the thermal conductivity and electrical resistivity of the final PMC. 3. To propose BN-CFs incorporated with GF as the reinforcing filler system for the fabrication of functional PMCs. Materials The carbon fibers studied in this work, supplied by Toray Co., Tokyo, Japan, have a thermal conductivity of about 10.46 W/mK and were polyacrylonitrile (PAN)-based (T-300, 3 K). The fibers were chopped into one-inch threads before the treatment. Boric acid (H3BO3, 99.5% pure, MW = 61.83 g/mol) and methanol were reagent grade, purchased from Deajung Chemicals Co., Namyangju-si, Korea. The natural graphite powder (flakes, 99% carbon basis, +50 mesh particle size ≥ 80%) was provided by Sigma-Aldrich Chemicals Co., St. Louis, MO, USA. The matrix was polypropylene (SJ 150), supplied by Lotte Chemical Corp., Seoul, Korea. With the exception of the CFs, all the materials were used as received and without further purification. Preparation of BN-Modified CFs Firstly, the carbon fibers were immersed in acetone for 24 h, cleaned with distilled water several times, and then dried in an oven at 80 °C for 12 h. The obtained CFs were then dipped in the precursor solution with boric acid dissolved in methanol in a 1:3 mass ratio. 
After being ultrasonically agitated for 30 min, the treated carbon fibers were carefully separated from the solution and put into an oven at 80 °C until completely dried. Afterwards, the borided CFs were placed in a zirconia crucible and mounted inside the heating tube. The furnace was heated to 1400 °C under flowing nitrogen (N2) at 200 cc/min. The chamber was then maintained at 1400 °C for 60 min, while ammonia (NH3) gas (1000 ppm) was introduced at 100 cc/min. Ammonia acted as a promoter and an extra source of nitrogen to obtain better solubility in the carbon fiber substrate [35,36]. After 1 h, the ammonia gas flow was stopped and the samples were cooled down under an N2 atmosphere. The heating and cooling rates were 10 °C/min. The preparation process of BN-CFs is illustrated in Figure 1.

Fabrication of PMC Samples

First, 120 g of polypropylene pellets were poured into the mixing chamber of an internal mixer at a temperature of 180 °C. The rotation of the mixing shafts was kept at 20 rpm. After the polymer was completely melted, the BN-CFs (at 20 phr) were added and mixed for 30 min. To facilitate the formation of conductive pathways and to reduce phonon scattering, GF (at 20 phr) was employed as the secondary (contributing) filler; the filler masses implied by these loadings are illustrated in the short calculation below.
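As a point of reference, the amounts of filler corresponding to these phr (parts per hundred resin) loadings can be worked out directly from the 120 g resin batch. The sketch below is illustrative only and uses no values beyond those stated above.

```python
# Illustrative conversion of phr (parts per hundred resin) loadings into batch masses
# for the compounding step described above. Values: 120 g PP, 20 phr BN-CF, 20 phr GF.

resin_mass_g = 120.0          # polypropylene charged into the internal mixer
loadings_phr = {"BN-CF": 20.0, "GF": 20.0}

filler_masses = {name: phr / 100.0 * resin_mass_g for name, phr in loadings_phr.items()}
total_mass = resin_mass_g + sum(filler_masses.values())

for name, m in filler_masses.items():
    print(f"{name}: {m:.1f} g ({m / total_mass * 100:.1f} wt% of the compound)")
print(f"total filler fraction: {sum(filler_masses.values()) / total_mass * 100:.1f} wt%")
```

At these loadings each filler therefore contributes 24 g, i.e., roughly 14 wt% of the compound, for a total filler fraction of about 29 wt%.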
The pasty composite was then extracted and compression-molded into 20 × 20 × 1 mm square samples. The molding pressure was controlled by a manual compressor and increased gradually to allow the remaining air in the composite to escape. During the molding process, the pressure was kept at 200 bar while the temperature was reduced by natural convection from 190 to 100 °C. The samples were then extracted and cooled down at room temperature.

Surface Characteristics

Fourier-transform infrared spectroscopy (FTIR, Nicolet iS10, Thermo Fisher Scientific Inc., Waltham, MA, USA) was carried out to determine the functional groups present on the surface of the treated CFs. FTIR samples were prepared via potassium bromide (KBr) pelleting, and the analysis was performed at room temperature (298 ± 1 K) under an air atmosphere over the 4000-1000 cm−1 wavenumber range. Various samples were prepared and analyzed to ensure the consistency of the acquired FTIR spectra. X-ray photoelectron spectroscopy (XPS, PHI 5000 VersaProbe II, ULVAC-PHI, Inc., Chigasaki, Japan) was used to further identify the surface chemistry of the BN-modified CFs. Unless otherwise specified, the X-ray anode was run at over 5 W, and the high voltage was kept at 5.0 kV. The energy resolution was fixed at 0.50 eV to ensure sufficient sensitivity. The base pressure of the analyzer chamber was about 5 × 10−8 Pa. Both survey spectra (0-1200 eV) and narrow scans for all the elements were recorded at high resolution. Binding energies were calibrated against adventitious (contaminant) carbon (C 1s = 284.6 eV). The B 1s and N 1s peaks were then deconvoluted using a Shirley-type baseline and an iterative least-squares optimization algorithm. The quality of the ceramic coating was observed via a field emission scanning electron microscope (FESEM, S-4800, Hitachi High-Technologies Corp., Tokyo, Japan), and the elemental composition of the surface was mapped using a SEM-coupled energy dispersive spectrometer (EDS, Oxford Instruments, Abingdon, UK).

Thermal and Electrical Characteristics

The thermal stability of the prepared fibers was examined using a thermogravimetric analyzer (TGA, Shimadzu Corp., Kyoto, Japan). For this purpose, the weight of the samples was recorded as a function of temperature. The sample was heated up to 1000 °C at a heating rate of 10 °C/min under flowing air at 50 cc/min. The thermal conductivity of the PMC samples reinforced with BN-CFs and GF was measured at steady state, in the through-plane (vertical) direction, to reveal the interfacial effect of the ceramic coating on the enhancement of ballistic phonon transport. If achieved, this was expected to result in a better conduction of energy from the fibrous filler to the composite matrix and vice versa. The thermal conductivity measurement was performed according to the ASTM D5470 method, using a vertical-type thermal conductivity measurement system (Hantech Co., Ltd., Gunpo-si, Korea). Before the measurement, the samples were polished until a smooth surface was visually observed. They were then clamped, at about 0.5 kgf, between two thermally conductive polished surfaces (see Figure 2a). Next, a heat flux was imposed on the specimen. The sample was treated as a thermal barrier of known thickness. The apparatus then measured the thermal impedance (thermal resistance, Ri) of the sample under the assumption of negligible contact resistance between the sample and the conductive surfaces. The resistance of specimens with different thicknesses (di) and identical surface areas (A) was recorded (see Equation (1)) and plotted against the thickness (see Figure 2(b)). Thus, the apparent thermal conductivity k of the sample could be obtained, as it is equal to the reciprocal of the slope of this plot (see Equation (2)). Several samples were repeatedly measured to ensure the consistency of the results.
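A minimal numerical sketch of this slope-based extraction is given below. The data are synthetic, and the relation Ri = di/(k·A) is assumed (i.e., resistance plotted against thickness for a fixed area); the exact forms of Equations (1) and (2) are not reproduced in the text above.

```python
# Hypothetical illustration of the ASTM D5470-style evaluation described above:
# the measured thermal resistance R_i of plates with different thicknesses d_i
# (identical area A) is fitted against d_i, and the apparent thermal conductivity k
# follows from the slope. Synthetic data only; assumes R_i = d_i / (k * A).

import numpy as np

A = 20e-3 * 20e-3                      # sample area in m^2 (20 mm x 20 mm plates)
k_true = 0.75                          # W/(m K), value of the BN-CF/GF/PP composite in Table 2

d = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])               # thicknesses in m
rng = np.random.default_rng(0)
R = d / (k_true * A) + 0.05 + rng.normal(0.0, 0.02, d.size)  # +0.05 K/W mimics contact resistance

slope, intercept = np.polyfit(d, R, 1)                        # linear fit of R versus d
k_apparent = 1.0 / (slope * A)                                # for area-normalised impedance this reduces to 1/slope

print(f"apparent k = {k_apparent:.2f} W/(m K), intercept (contact term) = {intercept:.3f} K/W")
```

In this construction the intercept absorbs any residual contact resistance, which is one practical reason for measuring several thicknesses rather than a single plate.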
The electrical resistivity of the composite samples was measured using a conventional four-point probe method (Loresta-GP MCP-T610, Mitsubishi Chemical Analytech Co., Ltd., Yamato, Japan). In order to better clarify the insulating effect of the BN layer on the electrical characteristics of the carbon fibers, the electrical resistivity of CF monofilaments was also measured. The error range of the electrical resistivity measurements is under 2%.

Surface Characteristics of BN-CFs

The change in the surface morphology of the fibers can be seen in the SEM images of as-received and as-prepared CFs shown in Figure 3. All images show the surface morphology of the samples along the radial axis and are magnified from left to right. They indicate that the fibers did not stick to one another after BN coating, which can be attributed to a sufficient but non-excessive supply of precursor during the BN synthesis. It also suggests that a good dispersion of the coated CFs in the PMC is achievable. Moreover, Figure 4 shows cross-sectional images of the as-received and treated fibers. Crystal-like structures of the BN coating were observed in these images; high crystallinity in solid materials usually leads to better thermal conduction. The SEM images show that the BN coating was not formed uniformly along the axis of the fiber, so it is difficult to confirm the exact coating thickness; it can be roughly estimated that the BN layer is about 0.5-0.6 µm thick. The change in the functional groups present on the fibers was examined via FTIR analysis, and the spectra are shown in Figure 5.
The obtained spectrum for the as-received fiber confirms the presence of the typical O-H stretching (broad), C-H stretching, and C=O stretching absorption peaks at about 3500, 2800, and 1700 cm−1, respectively. The O-H stretching band can be mainly attributed to the hydroxyl (-OH) groups formed as a result of the rapid absorption of atmospheric H2O by the potassium bromide used as the matrix during pelleting. The C-H and C=O stretching bands are the characteristic peaks of the carboxyl (-COOH) group, implying that the CF surface was partially oxidized. In the case of BN-CF, two peaks corresponding to the formation of B-H stretching and B-N bending bonds were detected at about 2400 and 1380 cm−1, respectively [37]. The former belongs to the primary amine (-NH) group, which overlaps at the same wavenumber with the O-H stretching band on the CF surface. The absorption peak in the middle is an indicator of boron bonded to the hydroxyl groups present on the CF surface. Moreover, the absorption bands detected at 800 and 1380 cm−1 are attributed to the formation of the target coating, i.e., the B-N bending vibration [38,39], indicating the synthesis of hexagonal BN (h-BN) [40] and turbostratic BN (t-BN) [21], respectively [23]. The obtained spectrum for the modified fibers thus shows the successful synthesis of a chemically bonded BN-CF surface.

The chemical composition of the synthesized coating layer was examined by XPS and EDS analyses. EDS mapping of the fiber surface, shown in Figure 6, reveals the elemental composition of the coating microlayer. It can be seen that, besides the presence of carbon and oxygen atoms, the treated CF surface contains boron and nitrogen, indicating the presence of BN on the CF substrate. To further confirm the elemental composition of the treated CF surface, XPS spectra (see Figure 7) were obtained and deconvoluted. The elemental composition data are provided in Table 1. The coated CF surface shows extra peaks centered at about 400 and 200 eV, which correspond to nitrogen and boron, respectively [23]. The deconvoluted spectra for the asymmetric B 1s and N 1s bands are shown in Figure 8. Two main peaks were identified at about 190.9 and 398.5 eV, corresponding to the formation of B-N and N-B bonds on the treated fiber, respectively [41,42].
As shown in Figure 8a, boron oxide (B2O3) species were weakly detected at 192.9 eV [43]. The formation of N-C bonds was confirmed by the respective peak observed at 399.9 eV [42]. Furthermore, the detection of an oxygen peak in BN-CFs can be attributed to the presence of boron oxynitrides (BOxNy) and the subsequent absorption of oxygen [23]. The XPS results concur well with the findings of the FTIR and EDS analyses.

Thermal and Electrical Properties of BN-CFs

The thermal stability of the fibers before and after the modification was examined via TGA. The sample weight was recorded continuously at temperatures ranging from 25 to 1000 °C, and the obtained thermograms are shown in Figure 9. According to these results, the typical high stability and near-zero weight loss of carbon fibers up to 640 °C were observed both before and after the surface treatment [44]. As reported elsewhere [23], raising the temperature above 640 °C reveals a slower degradation rate for BN-CFs, as the BN microlayer acted as a protective buffer and was oxidized before the fiber surface. The more gradual oxidation of BN-CFs can be attributed to the formation of a liquid B2O3 film on the CF surface, acting as an oxygen molecular trap (a diffusion barrier) and a thermal shield [45]. This indicates a higher stability of the coated fiber in harsh environments and a slower decay of the structure when compared with that of a pristine CF. It is assumed that the high-temperature treatment of the CFs and the formation of the BN coating on the fiber surface before TGA analysis formed a thermal shield, consisting of boron oxide and carbon oxide, which delayed the total consumption of the BN-modified CFs when compared with that of the untreated fiber.
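The peak positions quoted above (B-N at about 190.9 eV, B2O3 at about 192.9 eV) lend themselves to a simple two-component deconvolution. The sketch below is a hypothetical illustration on synthetic data and uses a linear background for brevity, whereas the analysis above employed a Shirley-type baseline; the peak positions are the only values taken from the text.

```python
# Hypothetical two-component fit of a B 1s region, mimicking the deconvolution described above.
# Synthetic spectrum: a B-N component near 190.9 eV plus a weak B2O3 component near 192.9 eV
# on a linear background (a Shirley baseline, as used above, is replaced by a linear one here).

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, sigma):
    return amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)

def model(x, a1, c1, s1, a2, c2, s2, b0, b1):
    return gaussian(x, a1, c1, s1) + gaussian(x, a2, c2, s2) + b0 + b1 * x

def area(amp, sigma):
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi)

# Synthetic "measured" spectrum
be = np.linspace(186.0, 197.0, 250)                      # binding energy axis, eV
rng = np.random.default_rng(1)
counts = (gaussian(be, 1000.0, 190.9, 0.6)               # B-N component
          + gaussian(be, 150.0, 192.9, 0.7)              # weak B2O3 component
          + 50.0                                         # flat background
          + rng.normal(0.0, 10.0, be.size))              # noise

# Least-squares fit with rough initial guesses
p0 = [800.0, 190.5, 0.5, 100.0, 193.0, 0.5, 40.0, 0.0]
popt, _ = curve_fit(model, be, counts, p0=p0)
a1, c1, s1, a2, c2, s2, _, _ = popt

total = area(a1, s1) + area(a2, s2)
print(f"B-N  : center {c1:.2f} eV, area fraction {area(a1, s1) / total:.2f}")
print(f"B2O3 : center {c2:.2f} eV, area fraction {area(a2, s2) / total:.2f}")
```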
Moreover, to evaluate the effect of the synthesized BN-CFs on heat transfer enhancement in PMCs, the samples were examined for their ability to conduct thermal energy carriers, known as phonons. As thermal energy is mainly conducted through lattice vibrations (phonons) in nonmetallic solids, phonons are the leading energy carriers in polymer-based composites [46]. Therefore, a proper filler-matrix interface will enhance the transfer of lattice vibrations within the composite. The thermal conductivity values are shown in Table 2. It was found that the conductivity of the composite reinforced by BN-CF and GF was increased by about 60% when compared with that of the composite reinforced with as-received CF and GF at the same filler content (i.e., 20 phr each). This can be ascribed to the lower thermal resistance of the ceramic coating and to the formation of a proper bridge for energy conduction between filler and matrix. Moreover, the surface roughness of the CF can be increased after BN modification, resulting in a larger specific surface area. This leads to greater surface energy, which in turn increases the adhesion of the fibers to the polymer, facilitating the motion of energy carriers within the composite. The electrical resistivity values of the monofilament fibers are shown in Table 3. As BN is formed on the surface of the carbon fiber, the monofilament resistivity becomes about 100 times higher than that of the as-received CF monofilament.
This further confirms the proper formation of a BN coating layer on the CF surface as well as its effect on the electrical resistivity of the fibers. Moreover, the results of the electrical resistivity of the composite samples are shown in Table 2. As seen there, it is also clear that the coated fibers render the composite less electrically conductive, as BN acts as an electrical buffer layer, preventing the free motion of electrons within the composite.

Table 2. Thermal conductivity and electrical resistivity of neat polypropylene (PP) and CF/GF-reinforced PMC samples before and after BN coating.
Sample | Thermal Conductivity (W/mK) | Electrical Resistivity (Ω·cm)
Neat PP | 0.18 | -
CF20/GF20/PP | 0.47 | 4.72
BN-CF20/GF20/PP | 0.75 | 37.84

Overall, the effectiveness of the synthesized ceramic coating layer is demonstrated by the thermal and electrical characteristics of the composites. Similar to other ceramic materials, electrons are tightly bound in BN. This leads to an insulating response of the material under electrical bias, which can otherwise be harmful for electronic units. On the other hand, h-BN usually has fairly good crystallinity, depending on the growth conditions. As explained before, crystallinity leads to a longer phonon mean free path within the material. This means that, unlike electrons, phonons can move relatively freely in the synthesized ceramic.
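As a quick arithmetic check, the improvement figures quoted in the abstract and conclusions can be reproduced directly from the Table 2 values; the only inputs below are the numbers reported above.

```python
# Recomputing the headline improvement figures from the values reported in Table 2.

k_pp, k_cf, k_bncf = 0.18, 0.47, 0.75          # thermal conductivity, W/(m K)
rho_cf, rho_bncf = 4.72, 37.84                 # electrical resistivity of the composites, Ohm.cm

print(f"k gain vs CF/GF composite : {(k_bncf / k_cf - 1) * 100:.0f} %")     # ~60 %
print(f"k gain vs neat PP         : {(k_bncf / k_pp - 1) * 100:.0f} %")     # ~317 %, quoted as ~316 %
print(f"resistivity gain          : {(rho_bncf / rho_cf - 1) * 100:.0f} %")  # ~700 %
```

The small differences with respect to the quoted values (316% vs. 317%, 700% vs. 702%) simply reflect rounding of the tabulated numbers.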
Moreover, the coating reduces the surface energy gap between the carbon filler and the thermoplastic matrix. This further contributes to the ballistic movement of phonons between the components of the composite system. Furthermore, the increase in thermal conductivity, together with the reduction in electrical conductivity of the composites, implies a better dispersion of the fillers after the modification of the CFs.

Table 3. Electrical resistivity of monofilament fibers.

Conclusions

In this study, the synthesis of a BN coating on the carbon fiber surface was carried out by means of stepwise solution boriding and high-temperature nitriding of the fibers. According to the results obtained from the surface analyses, including SEM, FTIR, XPS, and EDS, the fibers were successfully coated without the use of urea in the process. The formation of the expected B-N bond was confirmed by the FTIR and XPS spectroscopy methods. The obtained coating was found to increase the final oxidation temperature of the CF, as it formed a thermally protective, diffusion-limiting layer on the CF surface. Furthermore, composite samples were fabricated by adding the short treated fibers and graphite powder to a polypropylene matrix. According to the results, BN-CF/GF improved the thermal conductivity by about 316% and 60% compared with those of neat PP and the PMC reinforced with as-received CF/GF, respectively. It was found that the synthesized ceramic coating can play a significant role in improving the adhesion of the CFs to the polymer, as well as in increasing the thermal stability of the fibers in air. It was also shown that BN-CF as a filler dramatically improved the electrical insulation performance of the composite, i.e., by about 700%.
7,359.8
2019-12-01T00:00:00.000
[ "Materials Science" ]
Characterization of Individuals with Sacroiliac Joint Bridging in a Skeletal Population: Analysis of Degenerative Changes in Spinal Vertebrae

The aim of this study was to characterize individuals with sacroiliac joint bridging (SIB) by analyzing the degenerative changes in their whole vertebral column and comparing them with controls. A total of 291 modern Japanese male skeletons, with an average age at death of 60.8 years, were examined macroscopically. They were divided into two groups: individuals with SIB and those without bridging (Non-SIB). The degenerative changes in the whole vertebral column were evaluated, and marginal osteophyte scores (MOS) of the vertebral bodies and degenerative joint scores of the zygapophyseal joints were calculated. SIB was recognized in 30 of the 291 males (10.3%). The average age at death in the SIB group was significantly higher than that in the Non-SIB group. The values of MOS in the thoracic spine, particularly in the anterior part of the vertebral bodies, were consistently higher in the SIB group than in the Non-SIB group. The incidence of fused vertebral bodies at the intervertebral levels was also clearly higher in the SIB group than in the Non-SIB group. SIB and marginal osteophyte formation in the vertebral bodies can thus coexist in a skeletal population of men, and some systemic factors might act on these degenerative changes simultaneously in both the sacroiliac joint and the vertebral column.

Introduction

Low back pain has disturbed daily activities to varying degrees in people throughout history. Degenerative changes not only in the lumbar spine but also in the sacroiliac joint (SIJ) are involved in these pathological conditions; those in the SIJ account for approximately 16% to 30% of cases of chronic low back pain [1][2][3]. Patients with an anterosuperior osteophytic bone bridge of the SIJ were reported to have lumbar back pain [4]. Sacroiliac joint osteophytes can cause sciatica when the SIJ impinges on the sciatic nerve [5]. Clinically, a constant proportion of individuals have SIJs that are united by bony bridges. The prevalence of sacroiliac joint bridging (SIB) is higher in males than in females and increases with ageing [6][7][8]. Degenerative changes in the SIJ are suspected to contribute to the pathogenesis of low back pain to some degree, and these pathological conditions eventually progress to bony bridging of the SIJ. Martin et al. [9] reported the case of a patient with symptoms secondary to anterior bridging of the SIJ; his pain was relieved by the surgical removal of the bony bridge across the anterior portion of the right SIJ. Moreover, patients with low back pain may be treated with stabilization of the SIJ by means of noninvasive interventions [10,11] or surgical techniques [12,13]. In particular, SIJ fixation operations using minimally invasive techniques have been reported with good outcomes [14,15]. Meanwhile, the relationship between arthritis of the SIJ and spondyloarthropathy is of concern. Generally, the early symptom in a patient with ankylosing spondylitis (AS) can be low back pain of varying degrees; AS most commonly occurs in young males as persistent low back pain and stiffness that is worse in the morning and at night and improves with activity [16]. Recently, a new disease concept, axial spondyloarthritis (axSpA), denoting an early stage of SIJ arthritis without radiological evidence, has been proposed [17][18][19][20][21].
We have already indicated that some general and systemic factors could act to affect osteoarthritis onset and progression in the upper and lower extremities [22]. Furthermore, we hypothesized that the vertebral bones of individuals with SIB might have a general tendency to be highly ossified. Therefore, in this study, we evaluated the prevalence of SIB in a skeletal population and the degenerative changes in the whole vertebral column in order to characterize these individuals.

Materials. In this study, a total of 291 modern Japanese male skeletons were macroscopically examined. They were obtained from cadavers provided to Nagasaki University School of Medicine for anatomical dissection by medical students between the 1950s and the 1970s. They belonged to the same skeletal sample as in our preceding study [9], and nearly all were voluntarily donated by anonymous individuals. The present work does not pose any ethical problems from the viewpoint of the 2013 Declaration of Helsinki. After they had been dissected, their soft tissues were removed to produce dry skeletal preparations. The sex and age at death of all the individuals were registered. The mean age at death was 60.8 years, with a range of 19 to 89 years. The skeletons were divided into two groups: individuals with sacroiliac joint bridging (SIB group) and those without bridging (Non-SIB group) (Step 1 in Figure 1). To reveal the characteristics of the skeletons in the SIB group, about one hundred skeletons were selected randomly from the Non-SIB group for the statistical analyses (n = 92) (Step 2 in Figure 1). The proportions of vertebrae that could be evaluated without any defects were 97.8% for the cervical vertebrae, 99.5% for the thoracic vertebrae, and 98.1% for the lumbar vertebrae; for nearly all of these spinal bones, almost all of the vertebral bodies and zygapophyseal joints could be evaluated. To focus on the degenerative changes associated with ageing in the vertebral bones, the statistical examination was confined to skeletons older than 60 years: 22 individuals from the SIB group (73.0 years old on average) and 48 individuals from the Non-SIB group (71.3 years old on average) (Step 3 in Figure 1). There was no significant difference in age between these two groups.

Sacroiliac Joint Bridging. For each individual, the left and right SIJ were visually examined to categorize them into two groups: the SIB group (Figure 2(a)) and the Non-SIB group (Step 1). In cases in which we were unable to assign group membership, computed tomography (CT) scanning images provided diagnostic clarity (Figure 2(b)). By contrasting these two groups, the marginal osteophytes around the vertebral bodies and the degenerative changes of the zygapophyseal joints in the randomly selected skeletons were evaluated and characterized.

Marginal Osteophytes of Vertebral Bodies. Marginal osteophytes of the vertebral bodies were evaluated according to the diagnostic criteria reported earlier [23,24] in Step 2 and Step 3: Grade 0: normal (no pathological changes), Grade I: horizontally grown osteophytes, Grade II: vertically grown osteophytes, Grade III: significantly grown osteophytes, and Grade IV: osteophytes bridging to adjacent vertebrae (Figure 3). The bones were scored at eight separate locations, including the inferoanterior, inferoright, inferoposterior, and inferoleft segments of the upper vertebral body and the superoanterior, superoright, superoposterior, and superoleft segments of the lower vertebral body.
Then, the marginal osteophyte score (MOS) of each intervertebral space was calculated by averaging the total of the grade scores of the eight positions. Furthermore, the cases that contained more than one Grade IV area were defined as fused vertebrae.

Degenerative Changes of Zygapophyseal Joints. Degenerative changes of the zygapophyseal joints, from the articulation of C2/3 to L5/S1 inclusive, were evaluated with the criteria reported earlier [25] in Step 2 and Step 3: Grade 0: normal (no pathological changes), Grade I: osteophytes on the rim of the articular surface without pitting of the surface, Grade II: osteophytes on the rim of the articular surface with lipping and slight pitting, Grade III: osteophytes all around the rim of the articular surface with moderate pitting of the surface and rims that tend to be broken, and Grade IV: osteophytes on the rim of the articular surface with severe pitting of the surface and an unclear rim (Figure 4). The grades were independently recorded for the right and left sides of the superior and inferior articular processes of the respective vertebrae. Then, the degenerative joint score (DJS) value for each intervertebral level was calculated by averaging the total of the eight grade numbers.

Statistical Analysis. The correlation coefficients between MOS and age at death and between DJS and age at death were tested in the SIB group and in the Non-SIB group. After adjustment for the age differences between these two groups, as described in the results, the scores were statistically compared using the Wilcoxon test. A schematic outline of the scoring and group comparison is sketched below.
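A minimal sketch of how the per-level scores and the group comparisons described above could be computed is given below. The data structure and grades are hypothetical and purely illustrative, and the Wilcoxon rank-sum comparison is implemented here with SciPy's Mann-Whitney U routine as one common realization of that test.

```python
# Hypothetical sketch of the scoring and statistics described above.
# Each intervertebral level is graded at eight sites (0-4); the level score (MOS or DJS)
# is the mean of the eight grades. Group comparisons use a rank-based two-sample test.

import numpy as np
from scipy import stats

def level_score(grades):
    """Average of the eight per-site grades (0-4) for one intervertebral level."""
    grades = np.asarray(grades, dtype=float)
    assert grades.size == 8
    return grades.mean()

# Illustrative per-site grades for one level in one (fictitious) individual
print(level_score([2, 1, 1, 2, 3, 2, 1, 2]))   # -> 1.75

# Correlation of an individual's mean score with age at death (fictitious values)
ages = np.array([62, 67, 71, 75, 80, 84, 88])
scores = np.array([0.6, 0.8, 0.9, 1.1, 1.3, 1.2, 1.6])
r, p = stats.pearsonr(ages, scores)
print(f"r = {r:.2f}, p = {p:.3f}")

# Group comparison of level scores (SIB vs. Non-SIB) with a rank-sum test
sib = np.array([1.4, 1.1, 1.6, 1.2, 1.5])
non_sib = np.array([0.9, 1.0, 0.8, 1.1, 0.7, 1.0])
u, p_group = stats.mannwhitneyu(sib, non_sib, alternative="two-sided")
print(f"U = {u:.1f}, p = {p_group:.3f}")
```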
Prevalence of SIB. Sacroiliac joint bridging was documented in 10.3% of the whole sample (30/291 male individuals) and in 15.0% of the individuals aged 60 years or older (26/148). In most of the pelvises in the SIB group, the anterior sacroiliac ligaments were ossified to varying degrees. In some cases, nearly the total area of the ligament was completely ossified. In the other individuals, ossification of the ligament could not be confirmed; instead, bony unions were recognized between the two joint surfaces. These cases were examined closely with CT scanning. The average age of the 25 skeletons in the SIB group whose whole vertebral columns were curated was 70.0 (range = 32-89) years, significantly higher than that of the 92 skeletons in the Non-SIB group at 58.3 (range = 19-83) years (p < 0.01).

Marginal Osteophyte Scores (MOS). The MOS values were calculated for the respective intervertebral spaces from the C2/3 articulation (between the 2nd and 3rd cervical vertebrae) to the L5/S articulation (between the 5th lumbar vertebra and the bony sacrum). The MOS across all intervertebral spaces in each individual was related to the age at death both in the SIB group (r = 0.44, p = 0.033) and in the Non-SIB group (r = 0.70, p < 0.01) (Figure 5). Figure 6 indicates the average values of MOS at each intervertebral level in the 22 skeletons from the SIB group and the 48 skeletons from the Non-SIB group. Both groups showed a similar pattern of MOS along the vertebral column: there were two large peaks, the first at C5/6 and the second at L3/4 or L4/5, with the lowest values at C6/7 or T1/2, and smaller peaks at T12/L1 in both groups. There was little difference between the MOS values of the two groups in the cervical and lumbar spines. However, there was a significant difference between the two groups at the T5/6 level (p = 0.030) and the L4/5 level (p = 0.038); moreover, the scores in the thoracic spine were consecutively higher in the SIB group than in the Non-SIB group. Figure 7 shows the average values of the MOS comparing the anterior and posterior parts of the vertebral bodies in both groups. Osteophyte formation was more dominant on the anterior aspect of the bone than on the posterior aspect. Particularly on the anterior aspect, the difference between the two groups proved to be significant: the average value in the SIB group was significantly higher than the value of 0.85 in the Non-SIB group (p < 0.01).

Degenerative Joint Score (DJS) in Zygapophyseal Joints. The relationship between the average DJS values across all intervertebral spaces in each individual and the age at death is shown in the scatter chart (Figure 9); the correlation coefficient in the Non-SIB group was 0.62 (p < 0.01), and that in the SIB group was 0.39 (p = 0.054). As stated previously, the analysis was confined to the skeletons aged 60 years and older. For the 22 individuals in the SIB group and the 48 in the Non-SIB group, the average DJS values at the respective intervertebral levels were calculated (Figure 10). The average DJS increased gradually from C2/3 to the lower cervical levels and remained almost flat in the thoracic vertebrae. However, it increased again gradually from T9/10 to a peak at L4/5. The scores in the Non-SIB group were consecutively higher than those in the SIB group from the cervical to the thoracic vertebrae. In particular, the scores of the former were significantly higher at C2/3 (p = 0.033), C3/4 (p < 0.01), C4/5 (p = 0.031), T3/4 (p = 0.022), and T9/10 (p < 0.01). Little difference was recognized between the groups at intervertebral levels lower than T9/10.

Figure 10: Averages and standard deviations of the degenerative joint score (DJS) of the zygapophyseal joints at each intervertebral level in 48 skeletons from the Non-SIB group (blue) and 22 skeletons from the SIB group (red); asterisks indicate P < 0.05.

Discussion

It has been stated that osteophytes on the vertebral margin develop in an attempt to strengthen the vertebral bodies in response to continual pressure and weakening of the skeletal structure with ageing [26][27][28][29]. Therefore, in a normal spine, vertebral osteophytes do not develop before the vertebral epiphyseal rings have fused, which occurs at around 20 years of age [26]. On the other hand, in the iliac articular facet, the first alterations of cartilage structure can be detected around the onset of puberty [30]. The SIJ can fuse not only with bone-proliferating changes and the aging process, but also with diffuse idiopathic skeletal hyperostosis (DISH) and seronegative spondyloarthropathies such as ankylosing spondylitis (AS) or psoriatic arthritis. Resnick [31] stated that bony ankylosis in degenerative diseases resulted from para-articular bridging osteophytes, whereas the true intraarticular ankylosis characteristic of ankylosing spondylitis was generally absent.
He observed that osteophytes were a feature of degenerative sacroiliac disease and were predominant on the anterior surface of the ilium and sacrum but were not prominent in ankylosing spondylitis. Moreover, Dar et al. [32] reported that SIB was dominant in the superior region of this joint. Although the current study may have included some skeletons with seronegative spondyloarthropathies such as ankylosing spondylitis, this is unlikely to have affected the results given the low prevalence of this disease in Japan (6.5 out of 100,000) [33]. Waldron and Rogers [34] investigated skeletons of the 18th and 19th centuries from a crypt in England and reported that the prevalence rates of sacroiliac fusion were 6.3% in males and 4.3% in females. Dar et al. [32] analyzed 2845 skeletons from an American osteological collection of people who died during the first half of the 20th century and found that SIB was present in 12.27% of the males, contrasting with only 1.83% of the females; these changes were independent of ethnic origin but were age dependent. In our study, bridging was present in 10.3% of Japanese males. This is the first report on the frequency of SIB in an Asian population. Excessive mechanical stress, particularly at a younger age, may predispose one to osteophyte formation later in life [35,36]. Watanabe and Terazawa [37] stated that marginal osteophytes on the vertebral bodies begin to form around the age of 30 in both sexes. Differences between individuals were attributed to a response to erect posture during bipedal locomotion rather than to differences in occupational stress [38]. In this study, osteophytes were likewise not present until around the age of thirty, and they increased with aging, being significantly correlated with age. In the cervical vertebrae, several authors have reached a consensus: C5/6 and C6/7 have the greatest frequency of osteophyte formation, attributed to their mobility and load-bearing nature [39][40][41], and O'Neill et al. [35] reported that, among 681 women and 499 men over 50 years of age, thoracic osteophytes occurred most frequently on T9 and T10. Moreover, Van der Merwe et al. [42] investigated a total of 101 male and 117 female morphologically normal vertebral columns and found that the highest frequency and degree of projections were on C5, T11, T12, L3, L4, and L5, while the lowest frequency was observed on T2 and L1 in females and on T2 in males. Nathan [26] examined 346 white and black male and female individuals and observed that the highest incidence of osteophytes in each region was in the vicinity of the peaks of the spinal curves (C5, T8, and L3-4), whereas the lowest frequencies were found where the line of weight-bearing crosses the spine (T1, T12, and L5-S1). In this study, the higher osteophyte scores were at C5/6, L2/3, L3/4, and L4/5, and the lowest score was observed at T1/2; the thoracic region had less bony spur development than both the cervical and lumbar vertebrae. This might be because the thoracic vertebrae are more stable, due to the presence of the ribs, and less mobile than the other vertebrae [43]. Under these conditions, the thoracic vertebrae, with limited influence of mechanical stress, might be sensitive to the individual's general tendency for additional bone formation. Moreover, it has been reported that vertebral deformity and osteoarthritis are frequent in osteoporotic vertebrae in aged individuals [44].
In this study, as compared with the Non-SIB group, the marginal osteophyte scores were higher in the SIB group, especially in the thoracic vertebrae. Moreover, the frequency of vertebral body fusion was evidently higher than in the Non-SIB group. Considering these findings, it is suggested that there is a tendency toward bone formation with aging in the SIB group. Master et al. [45] investigated the prevalence of combined lumbar and cervical arthrosis in a large population sample and examined its association with age, sex, and race. They confirmed that lumbar arthrosis and advancing age were associated with cervical arthrosis independent of race and sex, and they proposed that lumbar arthrosis and age were associated with cervical arthrosis. Just as Higuchi [25] reported, in this study the degenerative scores of the zygapophyseal joints increased gradually from the upper cervical levels with a peak at C4/5, and they increased again in the lower thoracic and lumbar vertebrae. The correlation coefficient between the scores and age at death in the Non-SIB group was significantly high, but the correlation within the SIB group was not significant (p = 0.054); this means that degenerative changes in the zygapophyseal joints developed and proceeded with little correlation with the ageing process in the SIB group. Additionally, the degeneration scores of the zygapophyseal joints in the cervical and thoracic vertebrae were higher in the Non-SIB group than in the SIB group. Considering the result that, in the SIB group, the marginal osteophyte scores were higher and the frequency of vertebral body fusion was greater than in the Non-SIB group, it is possible that intervertebral mobility was restricted, reducing the mechanical loads on the zygapophyseal joints. This might be the reason why there were discrepancies between marginal osteophyte formation and zygapophyseal degeneration in the two groups. Recently, the new disease concept of axSpA, which was once understood as an early stage of AS, has not necessarily been interpreted as the same entity as AS by some researchers; this is because the gender ratios differ between axSpA and AS [20,46,47], and a number of studies demonstrate that AS and axSpA differ in their genetic properties (HLA-B27 typing) [46]. AxSpA shows far greater clinical heterogeneity and has a broader aetiopathogenesis; the natural history of axSpA has not yet been reliably established [46]. Considering these heterogeneities, some of the skeletons classified into the SIB group in this study could belong to some form of this entity. New bone formation at the entheses, with possible progression to ankylosis, is among the hallmarks in patients with axSpA [48]. TNF antagonists are effective for the long-term control of inflammation in axSpA; however, once the bone formation process is underway, it may not be possible to slow the rate of new bone formation [49]. According to another hypothesis, inflammation and new bone formation may be triggered by the same factor and then develop independently of each other via different molecular mechanisms [48]. These hypotheses may be suggestive when considering the etiology of the bone-forming phenomena in the SIB group of this study. Waldron and Rogers [34] investigated the modern skeletons of 41 individuals with fusion of the SIJ and compared them with 82 adult skeletons without that condition.
This study showed a significant association between sacroiliac fusion and the presence of DISH and osteoarthritis of the spine, but not osteoarthritis at any other site. Moreover, Dar et al. [50] studied 289 human male skeletons for the presence of SIB, entheseal ossification, cartilaginous calcification, and other axial skeleton joint fusions; they stated that SIB was strongly associated with entheseal reactions in other parts of the body. Our study indicated that SIB and marginal osteophyte formation in the vertebral bodies can coexist in a skeletal population of men. Therefore, it is likely that some combination of systemic factors, for example, genetic, nutritional, hereditary, or hormonal factors, might act on these degenerative changes simultaneously in both the SIJ and the vertebral column. These findings might indicate a new concept of pathological conditions with a systemic bone-formation tendency in the human axial skeleton. Further studies, including genetic analyses, might be required to establish it.

Conclusions

Sacroiliac joint bridging and marginal osteophyte formation in the vertebral bodies can coexist in a skeletal population of men. Some systemic factors might act on these degenerative changes simultaneously in both the sacroiliac joint and the vertebral column.
4,929.4
2014-09-08T00:00:00.000
[ "Medicine", "Biology" ]
Structure-resistive property relationships in thin ferroelectric BaTiO3 films

A combined study of the local structural, electric, and ferroelectric properties of SrTiO3/La0.7Sr0.3MnO3/BaTiO3 heterostructures was performed by Piezoresponse Force Microscopy, tunneling Atomic Force Microscopy, and Scanning Tunneling Microscopy in the temperature range 30-295 K. A direct correlation of the film structure (epitaxial, nanocrystalline, or polycrystalline) with the local electric and ferroelectric properties was observed. For polycrystalline ferroelectric films, the predominant polarization state is defined by the peculiarities of screening of the built-in field by positively charged point defects. Based on Scanning Tunneling Spectroscopy results, it was found that sequential voltage application provokes a modification of the local resistive properties related to the redistribution of point defects in thin ferroelectric films. A qualitative analysis of the acquired Piezoresponse Force Microscopy, tunneling Atomic Force Microscopy, and Scanning Tunneling Microscopy images, together with Scanning Tunneling Spectroscopy measurements, enabled us to conclude that, in the presence of structural defects, the competing processes of electron injection, trap filling, and the drift of positively charged point defects drive the change of the resistive properties of thin films under an applied electric field. In this paper, we propose a new approach based on Scanning Tunneling Microscopy/Spectroscopy under ultrahigh vacuum conditions to clarify the influence of point defects on the local resistive properties of nanometer-thick ferroelectric films.

On the growth conditions and preliminary characterization of the samples. For this study, more than 30 SrTiO3/La0.7Sr0.3MnO3/BaTiO3 (STO/LSMO/BTO) heterostructures were grown. The growth conditions for all samples are summarized in Table S1 below. The samples chosen for the paper are highlighted in red.

Sample Growth. The samples from Fig. 2 of the main text were grown on STO(100) STEP substrates.
After the deposition, the epitaxial BTO films were annealed at temperatures between 500 °C and 800 °C in 0.4 mbar of oxygen partial pressure for 1 hour. Sample cooling took place at 0.33 mbar of oxygen with a cooling rate of 10 °C/min. The sample slb161209 with the epitaxial BTO film is shown in Fig. 2(a, e). The deposition process was monitored in situ with Reflection High-Energy Electron Diffraction (RHEED) operated at a high voltage of 20 kV and an electron beam current of 38 mA [Fig. S1]. RHEED intensity oscillations indicate a layer-by-layer growth of LSMO. The two RHEED patterns shown in the inset of Fig. S1 were taken at the moments marked by arrows. During the growth of the BTO films, no RHEED oscillations were observed, but the RHEED pattern remained that of a two-dimensional surface, indicating a change of the growth mode to step-flow [1].

Figure S1. RHEED intensity as a function of time during LSMO deposition. The insets show two RHEED patterns acquired during LSMO film deposition.

X-ray diffraction (XRD) experiments were carried out with a SmartLab diffractometer (Rigaku) equipped with a 9 kW Cu anode X-ray tube. XRD data of the 3.5 nm-thick BTO film grown on the LSMO/STO structure are presented in Fig. S2. The normal two theta-omega scan in Fig. S2(a), acquired with a four-bounce monochromator, shows that the BTO/LSMO films are epitaxially grown on the (100) STO substrate; the BTO films are c-axis oriented with the c direction perpendicular to the substrate surface, having a c lattice parameter of 4.15 Å. The in-plane XRD spectrum, in contrast, was acquired without a monochromator due to the weak intensity of the diffraction signal, which explains the presence of the Cu Kβ and W Kα1,2 lines observed near the (200) diffraction peak of the BTO film. From the in-plane XRD scan shown in Fig. S2(b), we found an in-plane lattice parameter of a = 3.91 Å for the BTO films. The LSMO and BTO films are fully strained on the STO substrate, since the in-plane lattice parameter of the BTO film equals that of the substrate. The sample slb170310 is shown in Fig. 2(b, f). After the deposition, the BTO film was annealed at 650 °C for 30 min at 0.4 mbar of oxygen partial pressure. Sample cooling took place at 0.4 mbar of oxygen, with a cooling rate of 10 °C/min. The sample slb171106 is shown in Fig. 2(c, g). After the deposition, the BTO film was cooled down to 620 °C at 0.4 mbar of oxygen pressure, and down to room temperature at 1.3 mbar of oxygen pressure with a cooling rate of 10 °C/min. The sample slb161123 is shown in Fig. 2(d, h). After the deposition, the BTO film was cooled down at 0.1 mbar of oxygen pressure with a cooling rate of 10 °C/min.

On the lateral resolution of STM. From a macroscopic point of view, according to the s-wave model [2,3], the lateral resolution of the STM can be estimated from R, the radius of the tip, and s, the distance between the tip and the surface. In the direction of the surface normal (z-axis), the resolution is better than 0.01 nm for the Omicron VT SPM XA microscope, as given in the technical information from the manufacturer and proven in our laboratory tests. The mentioned approach is applicable provided that the linear dimensions of the feature under investigation are much bigger than the surface lattice constant and the tip-sample separation is large enough. Our measurements were carried out at relatively high voltages and close to the lowest possible currents (tunneling conditions were It = 10 pA, Vs = 1.5 V).
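The s-wave expression itself is not reproduced in this supplement, so the sketch below uses the commonly quoted Tersoff-Hamann-type estimate d ≈ sqrt(2(R + s)/κ) with an assumed decay constant κ ≈ 1 Å⁻¹, together with the AFM estimate d ≈ sqrt(8(R + r)ΔZ) quoted further below. The formula choice for the STM case and κ are assumptions; the only inputs taken from this section are R = 75 Å, s = 10 Å, R = 35 nm, and ΔZ = 0.1 nm.

```python
# Order-of-magnitude estimates of the lateral resolution discussed in this section.
# STM: a commonly used Tersoff-Hamann-type (s-wave) estimate, d ~ sqrt(2*(R+s)/kappa),
#      with an ASSUMED inverse decay length kappa ~ 1 1/Angstrom (the exact expression
#      used in the supplement is not reproduced there).
# AFM: d ~ sqrt(8*(R+r)*dZ), as given further below for two point-like asperities (r = 0).

import math

# STM estimate (tip radius 75 Angstrom and tip-surface distance 10 Angstrom, as quoted below)
R_stm = 75.0        # Angstrom
s = 10.0            # Angstrom
kappa = 1.0         # 1/Angstrom, assumed
d_stm = math.sqrt(2.0 * (R_stm + s) / kappa)
print(f"STM lateral resolution ~ {d_stm / 10.0:.1f} nm")    # ~1.3 nm, i.e. 'about 1 nm'

# AFM estimate (vertical resolution 0.1 nm, coated-tip radius 35 nm, point objects r = 0)
R_afm = 35.0        # nm
r = 0.0             # nm
dZ = 0.1            # nm
d_afm = math.sqrt(8.0 * (R_afm + r) * dZ)
print(f"AFM lateral resolution ~ {d_afm:.1f} nm")            # ~5.3 nm, i.e. 'about 5 nm'
```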
The dependence of the current on the distance is exponential: decreasing the set point of the tunnel current by an order of magnitude leads to an increase of the tunnel gap by about 1 Å. This fact is widely covered in the literature [4,5], in contrast to the issue of the origin of the z(V) dependence. The latter is shown in Fig. S3 for the case of gold-intercalated graphene, which serves as a good example. For comparison, typical tunneling conditions to achieve atomic resolution on graphene are It = 1 nA and Vs = 3-10 mV, with a tunnel gap of several angstroms. Summarizing, we can conclude that the approach for estimating the STM lateral resolution is applicable here. It should be noted that the terms "high spatial resolution" and "atomic resolution" should not be confused. A special case of high-resolution STM is atomic resolution, which, for most structures, cannot be described by the Tersoff-Hamann theory. To achieve atomic resolution, both localized electronic states of the tip and localized states of the sample are necessary. A theoretical description of atomic resolution can be given using the so-called "Chen's derivative rule" [6]. The radius R was estimated from a Scanning Electron Microscope (SEM) image of the STM tip obtained in the backscattered electron detection mode (AsB) with a beam energy of 20 kV and a beam current of 281 pA. Taking the sum of R = 75 Å and s = 10 Å to be R + s = 85 Å, we can estimate the upper limit of the resolution as about 1 nm. Note that the actual radius of curvature of the tip can be much less than the resolution achievable in the SEM.

On the lateral resolution of AFM. A geometrical analysis of the tip-sample contact [7] yields the following expression for the minimal separation d between resolved asperities at which the "dip" in the AFM image can still be detected for a vertical resolution limit ΔZ: d ≈ [8(R + r)ΔZ]^(1/2). Because the best spatial resolution must be an invariant characteristic of the instrument (independent of the studied object), it should be defined, e.g., from the condition of detecting two point objects (r = 0). Then, the best lateral resolution limit d for a standard AFM instrument with a vertical resolution limit ΔZ = 0.1 nm and a (conductively coated) tip curvature radius R = 35 nm equals d ≈ 5 nm.

On the conductivity of LSMO electrode. An AFM image of the LSMO electrode is provided in Fig. S4. The metallic type of conductivity was validated by measurements of the temperature dependence of the film resistivity through the top electrode deposited on the LSMO layer. The resistivity of the LSMO layer at RT is well below 1 Ω · cm. The conductivity map of the LSMO electrode surface was also measured with cAFM (diamond-coated tips were used) and showed a uniform distribution of current over the surface.

On the local ferroelectric properties of the samples. The piezoresponse phase distribution over the sample surface and FE hysteresis loops were measured for all samples [Fig. S5]. Epitaxial, nanocrystalline, and polycrystalline samples with medium grains exhibit a uniform piezoresponse distribution over the surface as well as symmetric and rectangular FE loops. The results of the poling procedure, that is, a scan over rectangular regions with a DC voltage applied between the bottom electrode and the grounded conductive tip, reflect the fact that changing the polarity of the applied voltage gives rise to the corresponding polarization reorientation in the film: bright and dark areas in the phase images in Fig.
S5(a-d) correspond to the upward or downward directions of polarization, respectively. Typical FE loops for these films are shown in Fig. S5(e, f). For BTO films with large crystallites, a FE domain structure is clearly distinguished inside individual grains in the as-grown state [Fig. S5(g, h)]. A prevalence of domains with an upward polarization (60-70% of the total number of domains) is observed. A significant contrast in the PFM phase distribution appears only after poling with a negative voltage and is attributed to the downward polarization orientation. The image of the regions poled with a positive voltage does not differ considerably from that of the pristine state. Together with the imprinted local piezoresponse loops, these results justify the preference for an upward polarization orientation in as-grown polycrystalline BTO films with large grains. The mean value of the imprint bias is about 0.4 V for the BTO film with large grains. The shape of the local hysteresis loop does not change much at different locations, indicating a uniform distribution of the local FE properties over the film surface [Fig. S5(f)]. On the specificity of local STS measurements of epitaxial and nanocrystalline samples. STM imaging of the epitaxial film requires V_s = 4.5 V to ensure I_t = 1 pA, while the parameters are 4 V, 10 pA and 1.5 V, 10 pA for nanocrystalline and polycrystalline films, respectively. A simple estimate of the STM probe area involved in the current measurements suggests a characteristic size of about 1 nm. Thus, measurements of the local I-V characteristics with the STM probe require a high current density for epitaxial and nanocrystalline FE films, which provokes a non-reversible modification of the film structure. Indeed, a comparison of the STM images before and after the I-V curve measurements confirms the modification. For polycrystalline FE films, STM experiments are not accompanied by such structure modification. On the temperature dynamics of the local resistive properties of the polycrystalline samples with medium and large grains. STS I-V curves measured on polycrystalline films with medium and large grains are nonlinear and asymmetric, and agree well with the results of conductive AFM measurements with top electrodes (TEs) involved. The I-V curves demonstrate a pinched hysteresis [Fig. S6(a, b)] that we have previously observed in thin epitaxial BTO films (with thicknesses d > 4 nm) in a broad temperature range (30-295 K) [8]. It is partially associated with the impact of the polarization switching currents on the measured current. For polycrystalline films, the asymmetries of the I-V curves [Fig. S6(a, b)] and of the FE loops [Fig. S5(f)] do not correlate with each other. Namely, the switching voltage for the nonlinear part of the I-V curves is about half the coercive voltage of the FE film (extracted from the local piezoresponse loops) for the forward branch, and exceeds the coercive voltage several times for the reversed branch. This indicates that the memristive behavior of thin FE films cannot be fully attributed to the specificity of polarization charge screening, as in the case of tunnel electroresistance in ferroelectric tunnel junctions (FTJs). It should be noted that the I-V curves obtained with the STM tip are linear on a double logarithmic scale [Fig. S6(c, d)], with a characteristic part corresponding to the power-law dependence I ∼ U^n with n = 3 in the whole temperature range. This is consistent with the results of conductive AFM measurements over the TEs and signifies the contribution of space-charge-limited current (SCLC).
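To make the SCLC exponent quoted above (I ∼ U^n with n ≈ 3) concrete, the following Python sketch fits a straight line to log I versus log U, which is how the exponent is read off a double logarithmic plot; the synthetic data are purely hypothetical and stand in for the measured I-V branches.

```python
import numpy as np

# Hypothetical I-V branch (positive voltages only); replace with measured data.
U = np.linspace(0.2, 1.5, 30)  # volts
I = 1e-11 * U**3 * (1 + 0.05 * np.random.default_rng(0).standard_normal(U.size))  # amps

# On a double logarithmic scale, I ~ U^n appears as a straight line with slope n.
slope, intercept = np.polyfit(np.log10(U), np.log10(np.abs(I)), deg=1)
print(f"fitted SCLC exponent n = {slope:.2f}")  # expected to be close to 3 for this toy data
```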
We consider that the resistive properties of polycrystalline FE films are modified compared to those of epitaxial layers due to the point defects involved in the transport mechanisms. The defects are mainly oxygen vacancies and associated complexes. Taking into account the n-type conductivity of BTO, the electronic band structure of polycrystalline BTO films with in-gap states associated with the oxygen vacancies suggests the formation of a rectifying contact with the most commonly used electrode materials (platinum, tungsten, LSMO). This explains the appearance of the asymmetry in the I-V curves measured on thin polycrystalline BTO films.
3,059
2020-09-28T00:00:00.000
[ "Physics", "Materials Science" ]
Using Fuzzy Ontology to Improve Similarity Assessment: Method and Evaluation Assessing semantic similarity is a fundamental requirement for many AI applications. Crisp ontology (CO) is one of the knowledge representation tools that can be used for this purpose. Thanks to the development of semantic web, CO‐based similarity assessment has become a popular approach in recent years. However, in the presence of vague information, CO cannot consider uncertainty of relations between concepts. On the other hand, fuzzy ontology (FO) can effectively process uncertainty of concepts and their relations. This paper aims at proposing an approach for assessing concept similarity based on FO. The proposed approach incorporates fuzzy relation composition in combination with an edge counting approach to assess the similarity. Accordingly, proposed measure relies on taxonomical features of an ontology in combination with statistical features of concepts. Furthermore, an evaluation approach for the FO‐based similarity measure named as FOSE is proposed. Considering social network data, proposed similarity measure is evaluated using FOSE. The evaluation results prove the dominance of proposed approach over its respective CO‐based measure. INTRODUCTION Similarity reasoning is the identification of syntactically different concepts that are semantically close. Assessing concept similarity is growing in importance within ontology engineering and, in particular, ontology merging and ontology alignment. 1 An ontology is a knowledge representation mechanism, which is understandable by intelligent agents. It consists of a hierarchical description of concepts in a particular domain connected by taxonomic and non-taxonomic relations. 2 It is employed in reasoning about domain concepts. Ontologies are the fundamental infrastructures in semantic web. 3 With the rapid development of the semantic web, it is likely that the number of ontologies will greatly increase during the next few years, which leads to the arising demand for rapid and accurate assessing concept similarity. 4,5 In this context, assessing concept similarity becomes more important in the presence of vague information. When some relations between domain concepts are vague or when there is uncertainty in defining a concept, 6 these types of problems can be tackled with fuzzy information. Fuzzy logic theory was proposed by Zadeh 7 and later applied successfully in various research areas. [8][9][10][11][12] Fuzzy logic, as a powerful infrastructure in uncertainty management, was coupled with the ontology to originate fuzzy ontology (FO). 13 FO is a generalization of crisp ontology (CO) where fuzzy relations exist between crisp concepts. FO have been successfully implemented in several application areas such as news summarization, 14 diet recommendation, 15,16 flight booking, 17 information retrieval, [18][19][20][21][22][23][24][25] reputation management, 26,27 collision avoidance, 28 and knowledge mobilization. 29,30 However, the literature on FO-based assessment of similarity is limited to the use of formal concept analysis (FCA). FCA is concerned with the formalization of concepts and conceptual thinking. 31 A key limitation of FCA-based approach, however, is that it necessitates a particular type of world modeling, that is, conceptto-attributes, which may not be applied in all situations. 
In addition, considering the need for human intervention in concept-to-attributes database creation, FCA-based approach is semi-automatic rather than fully automatic and of exponential time order, O(2 N ). 32 This article presents a measure for assessing concept similarity based on FO. With respect to other papers defined in the literature, the key concepts underlying the proposed approach are its independence of FCA and lower pre-required complexity. As a case study, this measure is then used to generate a similarity matrix of concept pairs in the context of social networks (SNs). Furthermore, to evaluate the proposed semantic similarity measure, a new evaluation approach is proposed. The approach, named as FO-based similarity evaluation (FOSE), is the first data-driven evaluation approach of FO-based semantic similarity as far as this study is concerned. FOSE is then applied on our case study to evaluate proposed FO-based similarity measure. The rest of this paper is organized as follows: Section 2 defines some notations and terminology used in the rest of this paper. Section 3 is devoted to the proposed FO-based assessing concept semantic similarity. In Section 4, the proposed approach for evaluating a FO-base similarity measure is introduced. A case study of deployment of proposed measure in a context of SNs is covered in Section 5 and finally comparison with literature work, concluding remarks, and future work studies are discussed in Section 6. NOTATIONS AND PRELIMINARY DEFINITION In this section, we formally define some terms and describe the notation used in this paper. In formal terms, a fuzzy set can be defined as follows: DEFINITION 1. A fuzzy set S over the universe of discourse X is defined by its membership function μ S , which maps S elements to a value between [0 1] interval. 17 where S is the fuzzy set and μ is the membership function. μ s (x) represents the degree to which x belongs to S. If X is continuous, then S can be rewritten as follows: Additionally, S can be organized in to an ordered set of pairs as follows: Typically, an ontology is illustrated as a directed acyclic graph (DAG) or a hierarchy, in which nodes correspond to concepts and edges represent relationships between pairs of concepts. In some ontologies, there is only one relationship between nodes, whereas in more general case, there exist more than one relationship between nodes. 34 The most common type of ontology relation is the taxonomical "is-a" relation, which indicates the similarity of concept pairs. In a hierarchy corresponding to an ontology, there is a node specified as the root. The root is the starting node. A path is a sequence of adjacent (via the edges) nodes in the hierarchy. The name of each node at an intermediate level is associated with a parent, one or more child nodes and one or more sibling nodes. 35 Parent node is the node one level higher in the hierarchy. Inversely, if a node is a parent of another node, the node is called a child of the parent. Consequently, a node may have several parent nodes, and vice versa. Sibling nodes share the same parent. The depth of a node is the length of the path to its root. Let us use parent(c), sibling(c), and depth(c) operator to demonstrate the parent, sibling set, and depth of a node c in a hierarchy, respectively. A node that is connected to all lower-level nodes is demonstrated by ancestor(c). 
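A small Python sketch of the hierarchy operators introduced above (parent, sibling, depth, ancestor) may make the notation more concrete; the toy concept names and the dictionary-based representation are assumptions made only for illustration.

```python
# Toy ontology: each concept maps to the set of its parents ("is-a" targets); the root has none.
parents = {
    "root": set(),
    "science": {"root"},
    "computer_science": {"science"},
    "machine_learning": {"computer_science"},
    "data_mining": {"computer_science"},
}

def parent(c):
    return parents[c]

def ancestors(c):
    result, frontier = set(), set(parents[c])
    while frontier:
        p = frontier.pop()
        if p not in result:
            result.add(p)
            frontier |= parents[p]
    return result

def depth(c):
    # Length of the (shortest) path from c up to the root.
    return 0 if not parents[c] else 1 + min(depth(p) for p in parents[c])

def siblings(c):
    return {x for x in parents if x != c and parents[x] & parents[c]}

print(depth("machine_learning"))     # 3
print(ancestors("data_mining"))      # {'computer_science', 'science', 'root'}
print(siblings("machine_learning"))  # {'data_mining'}
```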
Given two nodes in an ontology, they must share a set of common ancestor nodes, and the one with the highest depth is typically referred to as the lowest common ancestor (LCA) of the two nodes. Discarding the direction of the edges in an ontology, there exists at least one path between every pair of nodes. 34 Among all possible paths between two crisp concepts c_1 and c_2 in an ontology, the one passing through their LCA is the shortest path (sp) between the two concepts, that is, sp(c_1, c_2) = |path(c_1, LCA(c_1, c_2))| + |path(LCA(c_1, c_2), c_2)|, where |path(c_i, c_j)| counts the number of edges (relations) in the path from c_i to c_j. DEFINITION 5. A FO is defined by means of fuzzy relations characterized by a membership function. Considering Equation 5, this type of ontology is formulated as FO = (C, R_f, D), where R_f is a (binary) fuzzy relation over two countable crisp sets of concepts. 36 An example is shown in Figure 1, where crisp concepts are connected to each other by fuzzy "is-a" relations and a fuzzy degree of membership μ(x) in the [0, 1] interval is assigned to each relation. Let R_1(x, y), (x, y) ∈ X × Y, and R_2(y, z), (y, z) ∈ Y × Z, be two fuzzy relations. The max-product composition of the two fuzzy relations R_1 and R_2 is denoted as R_1 ∘ R_2(x, z) and defined as follows: 37 (R_1 ∘ R_2)(x, z) = max_y [ μ_R1(x, y) · μ_R2(y, z) ]. (8) A dataset X = {x_1, x_2, . . . , x_N} is a set of N objects or data points represented as feature vectors in an F-dimensional feature space. A distance matrix of a dataset, dist(X), is a matrix giving the distance between each pair of elements of X. Given an ontology O = (C, R, D) of N concepts, a distance matrix corresponding to C determines the distance between concept pairs. Semantic similarity (distance) computes the similarity between concepts that need not be lexically similar. Semantic distance can be inferred from web data. Statistical analysis of web files for a set of concepts C results in a distance matrix denoted as web_dist(C). Another approach for the assessment of concept semantic distance is based on an ontology. DEFINITION 6. Given a CO of N concepts, CO = (C, R, D), the semantic distance of concept pairs can be calculated based on the ontology relations R. Accordingly, the result is a distance matrix of concept pairs of size N × N, which is denoted as CO_dist. However, if the source ontology is a FO, the distance measure would be a FO-based distance measure, denoted as FO_dist. Given two distance matrices of objects (concepts), their correlation and variation can be evaluated by the Pearson correlation, the relative root squared error (RRSE), and the relative absolute error (RAE) as follows: Pearson(A, B) = Σ(a − ā)(b − b̄) / √(Σ(a − ā)² · Σ(b − b̄)²), (9) RRSE(A, B) = √(Σ(a − b)² / Σ(a − ā)²), (10) RAE(A, B) = Σ|a − b| / Σ|a − ā|. (11) Pearson correlation measures the strength of the relation between two variables A and B. a and b denote individual values of the variables A and B, respectively, and ā and b̄ specify the average values of the variables A and B, respectively. A higher value of the Pearson correlation indicates a stronger relation between the two variables. The two other criteria, RAE and RRSE, measure the relative variation between a target variable A and its predicted value B. RRSE calculates the root of the squared error between the target variable A and its predicted value B; RAE, on the other hand, calculates the absolute value of this variation. Smaller RAE and RRSE values indicate a better prediction. Here |x| refers to the absolute value of x. In this context, the problem we want to solve is the following: given a set of concepts from web data, assess their semantic similarity based on FO and evaluate the results.
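A compact Python sketch of the three evaluation criteria just defined (Pearson correlation, RRSE, RAE), applied to flattened distance matrices, is given below; the function names and the NumPy-based flattening are assumptions made for illustration only.

```python
import numpy as np

def pearson(A, B):
    a, b = np.ravel(A), np.ravel(B)
    return np.sum((a - a.mean()) * (b - b.mean())) / np.sqrt(
        np.sum((a - a.mean()) ** 2) * np.sum((b - b.mean()) ** 2))

def rrse(A, B):
    """Relative root squared error of prediction B with respect to target A."""
    a, b = np.ravel(A), np.ravel(B)
    return np.sqrt(np.sum((a - b) ** 2) / np.sum((a - a.mean()) ** 2))

def rae(A, B):
    """Relative absolute error of prediction B with respect to target A."""
    a, b = np.ravel(A), np.ravel(B)
    return np.sum(np.abs(a - b)) / np.sum(np.abs(a - a.mean()))

# Toy example: a 3x3 target distance matrix and a slightly perturbed prediction.
A = np.array([[0.0, 0.4, 0.8], [0.4, 0.0, 0.6], [0.8, 0.6, 0.0]])
B = A + 0.05
print(pearson(A, B), rrse(A, B), rae(A, B))
```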
A NEW SEMANTIC SIMILARITY MEASURE BASED ON FO Semantic similarity computes the similarity between concepts that need not be lexically similar. 38 Ontology-based analysis has been a popular approach in recent years for semantic similarity assessment. However, these approaches mainly utilize the CO structure. 34 Despite the information richness of FO, it has been considered to a limited extent in the literature for semantic similarity measurement. A FO is more informative in comparison with a CO. In addition to providing the taxonomical relations of concepts, it provides information on the strength of the relation between concept pairs. Accordingly, similarity assessment based on FO could provide results that are closer to reality. The input for similarity assessment based on FO is a FO of concepts connected by fuzzy relations. If a FO is not available, a CO can be converted to a FO using the algorithm proposed by Ref. 39. Their approach generates a FO from a CO by means of a distance matrix of concepts inferred from the web, dist_web(C). This approach maps a two-dimensional dist_web matrix to a nested ontology structure to generate a FO. Assessing Concept Similarity Based on FO This section aims at proposing a novel approach for semantic similarity assessment based on FO. Before proceeding, let us define some propositions, which will be used later in this section: • Two completely similar objects give the maximum similarity (sim(x, y) = 1), whereas the least similar pairs give the minimum value (sim(x, y) = 0). That is, similarity is the complement of dissimilarity in the range [0, 1], and so one can easily be derived from the other as follows: 40 dist(x, y) = 1 − sim(x, y). (15) • In a FO = (C, R_f, D), the fuzzy value of the taxonomical "is-a" relation between a concept pair demonstrates their level of belonging to each other. Accordingly, the fuzzy value of a taxonomical "is-a" relation existing in a FO can be considered as the similarity of the concepts c_i and c_j that it connects. Assuming c_i is the node with the lower depth in comparison with c_j, s.t. depth(c_i) ≤ depth(c_j), the relations between c_i and c_j lie in one of the following two categories: 1. c_i is a parent or an ancestor of c_j. 2. c_i and c_j are neither parents nor ancestors of each other but are connected via common ancestors. According to this partitioning, our proposed approach for semantic distance assessment of any concept pair (c_i, c_j) in a FO is defined by Algorithm 1. If c_i is an ancestor of c_j: As mentioned earlier in Equation 13, for concepts directly connected to each other by an "is-a" relation, the fuzzy value of their relation, that is, μ_R(c_i, c_j), is set as their similarity. However, in case there are some intermediate relations on the path from c_i to c_j, all these relations are composed using the fuzzy max-product composition as defined in Equation 8 in order to generate the final value for the similarity of c_i to c_j. Afterward, considering Equation 15, their distance equals their similarity subtracted from 1. More precisely, let us assume that there are k intermediate concepts t_1, . . . , t_k between c_i and c_j. The similarity of c_i to c_j is calculated by the max-product composition (Equation 8) of all intermediate fuzzy relations on the path from c_i to c_j, which along a single path reduces to sim(c_i, c_j) = μ_R(c_i, t_1) · μ_R(t_1, t_2) · . . . · μ_R(t_k, c_j), and accordingly their distance is equal to FO_dist(c_i, c_j) = 1 − sim(c_i, c_j). If c_i and c_j are connected via common ancestor(s), which means that the path from c_i to c_j passes through their common ancestor(s), the path containing their LCA is the shortest path between c_i and c_j.
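The ancestor-to-descendant case described above reduces, along a single path, to chaining the fuzzy "is-a" memberships by max-product composition and taking the complement for the distance. The sketch below illustrates this; the membership values and concept names are hypothetical.

```python
from functools import reduce

def path_similarity(memberships):
    """Max-product composition along a single path collapses to the product of the memberships."""
    return reduce(lambda acc, mu: acc * mu, memberships, 1.0)

def fo_distance_along_path(memberships):
    return 1.0 - path_similarity(memberships)

# Hypothetical path c_i -> t_1 -> t_2 -> c_j with fuzzy "is-a" degrees on each edge.
mu_chain = [0.9, 0.8, 0.7]
print(path_similarity(mu_chain))         # 0.504
print(fo_distance_along_path(mu_chain))  # 0.496
```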
41 Subsequently, their distance is set as the sum of the distance of each to their LCA. To calculate the distance of each to their LCA, the method defined in previous section is used, s. t. the max product composition of relations on the path from each node to their LCA is calculated. As an example consider Figure 2 again. The similarity of c 8 and c 10 is calculated by consideration of the similarity of each to c 3 , which is their LCA. Finally, the distance of each node to itself is set as 0. Consequently, the proposed distance measure can be summarized as follows: International Journal of Intelligent Systems DOI 10.1002/int Examples To illustrate the behavior of our approach, let us consider following portions of two FOs as depicted in Figures 1 and 2 and calculate semantic similarity of some of their concepts: Example 1. Consider the FO of Figure 1, which consists of six concepts connected by five fuzzy relations. According to the proposed method the similarity between "Computer Science" and "Regression Analysis" and the distance between "Reinforcement Learning" and "Data Mining" is defined as follows: Properties of the Proposed Measure In order to show the validity of the presented measure, we have studied the properties that a distance measure must fulfill. It is important to note that the fulfillment of those properties is a requirement if the measure is used in conjunction with some reasoning techniques. 42 A distance function must satisfy three properties: positivity, minimality, and symmetry as stated by: 43 Proof. Positiveness: Calculation of fuzzy distance between concepts A and B consists of three steps. First fuzzy value of the shortest path from each node to their LCA is calculated then these values are subtracted form 1 and afterward they are summed up. The first part, calculation of distance of each node to their LCA, is composed of multiplication of some fuzzy values. Accordingly, the result lies in [0 1] interval. Afterward, the result is subtracted from one. Since the result lies in [0 1] interval its subtraction form 1 is always positive and then two positive numbers are added which results in a new positive number. Minimality: As mentioned earlier, the distance of each node to itself is set to zero. A NEW APPROACH FOR EVALUATING FO-BASED SIMILARITY MEASURE The literature on evaluation of ontology-based similarity measures is limited to consideration of CO. This section first investigates some criteria that can be considered in evaluation of a FO-based similarity measure. Next, a novel evaluation approach named as FOSE is introduced. To the best of our knowledge, FOSE is the first data-driven evaluation approach for FO-based similarity measures. Possible Evaluation Criterion for a FO-Based Similarity Measure To evaluate a CO-based dissimilarity measure, a common approach is to compare it with web data. 44 However, to evaluate the power of fuzziness in a model, it is common to compare it against its equivalent crisp model. Accordingly, in our evaluation in addition to consideration of web-based distance of data, web_dist, we consider its equivalence CO-based distance measure, CO_dist. CO_dist: To have an acceptable evaluation, the underlying logic for calculation of CObased measure, must be similar to proposed FO-based similarity measure. Rada 45 proposed an approach for CO-based similarity measure which utilizes the concepts of shortest path between concepts and consideration of their LCA in the same way as our proposed FO-based similarity measure. 
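For concepts related only through a common ancestor, the distance above is the sum of each concept's composed distance to the LCA, while the crisp Rada counterpart simply counts the edges along the same path. A minimal sketch under hypothetical membership values and concept names:

```python
# Toy fuzzy ontology: each node maps to (parent, membership of its "is-a" relation to that parent).
fo = {
    "c3": ("root", 0.9),
    "c8": ("c3", 0.8),
    "c10": ("c3", 0.7),
}

def path_to_root(c):
    chain = []
    while c in fo:
        parent, mu = fo[c]
        chain.append((parent, mu))
        c = parent
    return chain

def lca(c1, c2):
    anc1 = [c1] + [p for p, _ in path_to_root(c1)]
    anc2 = set([c2] + [p for p, _ in path_to_root(c2)])
    return next(a for a in anc1 if a in anc2)  # deepest common ancestor along c1's chain

def fo_dist(c1, c2):
    a = lca(c1, c2)
    def dist_to(c):
        sim = 1.0
        while c != a:
            parent, mu = fo[c]
            sim *= mu
            c = parent
        return 1.0 - sim
    return dist_to(c1) + dist_to(c2)

def rada_dist(c1, c2):
    a = lca(c1, c2)
    def edges_to(c):
        n = 0
        while c != a:
            c = fo[c][0]
            n += 1
        return n
    return edges_to(c1) + edges_to(c2)

print(fo_dist("c8", "c10"))    # (1 - 0.8) + (1 - 0.7) = 0.5
print(rada_dist("c8", "c10"))  # 2 edges via their LCA c3
```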
Accordingly, in our evaluation we consider the Rada measure for the calculation of the CO-based distance measure, which is defined as follows: CO_dist(c_i, c_j) = sp(c_i, c_j), (17) i.e., the number of edges on the shortest path between c_i and c_j passing through their LCA. web_dist: In order to evaluate distance based on web data, the co-occurrence of terms (concepts) in web files is considered, which is a common literature approach. 26 Co-occurrence refers to the number of times two specific concepts have appeared concurrently in the same file. This criterion is used to evaluate the web-based distance of concepts. FO_dist: The proposed FO-based distance measure (Equation 16) is used to calculate concept distances based on FO, yielding the distance matrix based on FO, FO_dist. The three distance matrices are compared as follows to evaluate the results of the proposed FO-based similarity measure. 1. Evaluation of the correlation between web_dist and CO_dist, and comparison with the correlation between web_dist and FO_dist. 2. Evaluation of the RAE between web_dist and CO_dist, and comparison with the RAE between web_dist and FO_dist. 3. Evaluation of the RRSE between web_dist and CO_dist, and comparison with the RRSE between web_dist and FO_dist. The correlation, RRSE, and RAE are calculated as in Equations 9, 10, and 11, respectively. The Proposed Approach for FOSE According to the previous section, evaluation of a FO-based similarity measure requires comparing it against its corresponding CO-based similarity measure. The criterion for comparison could be their RAE (Equation 11) with respect to a standard web-derived distance matrix upon which both are built. This evaluation approach clarifies how well FO has measured similarity in comparison with CO. Considering the proposed FO-based similarity measure introduced in Section 3 and its respective CO-based similarity measure introduced in this section, a new evaluation criterion is proposed, denoted as FOSE. The criterion is ratio based: the RAE of the FO-based distance matrix and the web_dist matrix is set as its numerator, and the RAE of the CO-based distance matrix and the web_dist matrix is set as its denominator. Accordingly, the FOSE criterion is defined as FOSE = RAE(FO_dist, web_dist) / RAE(CO_dist, web_dist), where the web_dist matrix is the web-based distance matrix as defined in Section 4.2, FO_dist is the distance based on FO using the proposed method (Equation 16), CO_dist is the distance based on CO using Equation 17, and RAE(A, B) equals the RAE of the point-to-point elements of A and B, calculated as in Equation 11. FOSE considers three distance matrices to evaluate a FO-based distance measure: the distance matrix generated from web data (web_dist), the distance matrix based on CO (CO_dist), and the distance matrix based on FO (FO_dist). Comparing the RAE between web_dist and CO_dist with that of web_dist and FO_dist, FOSE evaluates a FO-based similarity measure. Thus, a FOSE value less than 1 indicates the superiority of FO over CO. The range of possible FOSE values and their meaning is demonstrated in Table I. CASE STUDY In this case study, we focus on the effectiveness of the FO-based semantic similarity measure in the context of SNs. In our context, we focus on LinkedIn. 46 Members can create customizable profiles that detail employment history, business accomplishments, and professional competencies in their area of expertise. Consequently, they may develop contacts, find jobs, and answer questions. FO Development In the LinkedIn SN, each person fills out his/her own profile with a set of skills that defines his/her areas of expertise. A screenshot of the skills section of a LinkedIn profile is depicted in Figure 3.
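The ratio criterion just defined is straightforward to compute once the three distance matrices are available. A short sketch follows, with toy matrices standing in for the real web-, CO-, and FO-based distances; the helper names are our assumptions.

```python
import numpy as np

def rae(A, B):
    a, b = np.ravel(A), np.ravel(B)
    return np.sum(np.abs(a - b)) / np.sum(np.abs(a - a.mean()))

def fose(web_dist, fo_dist, co_dist):
    """FOSE < 1 means the FO-based measure tracks the web-derived distances better than the CO-based one."""
    return rae(fo_dist, web_dist) / rae(co_dist, web_dist)

# Toy 3x3 distance matrices standing in for web_dist, FO_dist and CO_dist.
web = np.array([[0.0, 0.3, 0.7], [0.3, 0.0, 0.5], [0.7, 0.5, 0.0]])
fo = web + 0.02   # FO-based matrix close to the web-derived one
co = web + 0.10   # CO-based matrix further away
print(fose(web, fo, co))  # < 1, i.e. FO preferred in this toy example
```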
Subsequently, by crawling profile data of 130 unique users, various skills were collected from their LinkedIn profiles. In order to create an ontology of skills for the SN, individual skills must be clustered as the groundwork for ontology construction. A common metric for web data clustering, is terms co-occurrences. 47,48 Co-occurrence refers to the number of times two individual terms have been used within the same text file. Mapping this definition to our context is equivalent to the number of times two terms have been used concurrently as an individual skill. Calculation of this metric resulted in 28,536 couples of co-occurred terms. By the accomplishment of this stage, dataset specifications were extracted as outlined in Table II and the web-based distance matrix is constructed based on concept co-occurrences. To create a CO of terms, agglomerative hierarchical clustering (AHC) is applied on the skill set. AHC forms the clusters from "bottom-up" and is a common literature approach for learning of an ontology from a data set. 49 Having generated the CO, the corresponding CO-based distance matrix is generated using Equation 17. Assessing FO-Based Concept Similarity Having a CO, FO is generated by the algorithm, 39 which maps a twodimensional distance matrix of concepts to hierarchical ontology structure. For the generated FO, distance of concept pairs is calculated by the proposed approach (Equation 16) and FO-based distance matrix is determined. Evaluation of FO-Based Similarity Measure At this step, the FO-based distance measure, FO_dist, is compared against the CO-based distance measure, CO_dist, using correlation (Equation 9), RRSE (Equation 10) and RAE (Equation 11), and FOSE criterion. The overall evaluation results are summarized in Figures 4-7. Table III outlines the four dataset's specifications that were considered in this experiment. All datasets are generated from skillsets of LinkedIn profiles (Table II). As is demonstrated, datasets are sorted in ascending order based on their size. Figure 4 demonstrates the correlation between FO_dist and web-based distance matrix, web_dist, with that of CO_dist and web_dist for all datasets. According to Figure 4, FO has a higher correlation with the initial web-based distance matrix compared with CO. Considering Table III, the larger datasets result in higher superiority of FO in comparison with CO for the conceptual modeling of the real world. This shows the potential of this approach for larger datasets as is necessary for SN analysis. In Figures 5 and 6, the comparison is performed between FO-based similarity measure and its corresponding CO-based similarity measure by means of RAE and RRSE criteria. Figure 5 demonstrates the RAE between FO_dist and CO_dist with web_dist. As illustrated for all datasets, RAE of FO-based measure is lower that CO-based similarity, which underlines its dominance in semantic similarity assessment. In the same way, Figure 6 illustrates the RRSE between each of FO_dist and CO_dist with web_dist. Results indicates FO-based measure superiority over CObased measure especially when the size of datasets increases. Eventually, the proposed evaluation criterion, FOSE, is calculated for the proposed FO-based similarity measure and summarized in Figure 7. Considering Table I, values less that 1 and close to 0 indicates the FO-based similarity measure superiority over CO-based similarity measure. 
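The pipeline described above (co-occurrence counts, a web-based distance matrix derived from them, and agglomerative clustering to obtain the crisp hierarchy) can be sketched in a few lines of Python. The normalisation of co-occurrences into distances and the "average" linkage choice below are assumptions for illustration, not the exact settings used in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Hypothetical co-occurrence counts between 4 skills extracted from profiles.
skills = ["python", "machine_learning", "statistics", "project_management"]
cooc = np.array([
    [0, 40, 25, 5],
    [40, 0, 30, 3],
    [25, 30, 0, 4],
    [5, 3, 4, 0],
], dtype=float)

# One simple way to turn co-occurrences into distances: high co-occurrence -> small distance.
web_dist = 1.0 - cooc / cooc.max()
np.fill_diagonal(web_dist, 0.0)

# Agglomerative hierarchical clustering on the condensed distance matrix gives the crisp hierarchy;
# plotting a dendrogram of Z would correspond to the learned CO.
Z = linkage(squareform(web_dist, checks=False), method="average")
print(Z)  # merge history (which clusters join, at which distance)
```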
Subsequently, the decreasing trend of FOSE values by increasing datasets size and the near to zero, (0.07), value for the largest dataset indicates proposed method perfection. CONCLUSION AND FUTURE WORK In this section, first the works of other authors, whose proposals are close to ours are reviewed and compared with our proposal. Then the paper is concluded and directions for future works are outlined. Related Work In this section, we first review the literature on CO-based semantic similarity measures and afterward consider the literature on a more narrowed topic of FO-based similarity measures. Common approaches for CO-based similarity measures can be classified to edge-counting, information-based, and feature-based approaches. 50,51 Edge-counting approaches refer to the group of methods that utilize ontology structure to assess the similarity. Rada 45 considered the length of the shortest path between ontology nodes. The longer the path, the more semantically far the concepts are considered. Later on, Refs. 52-54 integrated some other features of ontology structure like its depth to improve the accuracy of this similarity measure. Features-based approaches consider the degree of overlapping between sets of ontological features. Concepts are considered as sets of features and their evaluation is estimated based on the number of similar features they have. In this way, common features tend to increase similarity and non-common ones tend to diminish it. This approach is originated from Tversky model 55 and has been applied in molecular biology, 56 adaptive e-learning 57 and ontology merging 58 and alignment. 59 Information-based approaches calculates statistical specification of concepts based on a corpus or other data sources. 60 Some of common statistical specifications in the literature are concept frequencies, 6 which refers to the number of occurrences in the corpus and co-occurrences, 61 which considers simultaneous occurrence of two concepts in the same file. FO-based similarity assessment is mainly obtained by fuzzy FCA (FFCA). FFCA-based approaches lie in feature-based approaches since they consider common features of concepts in the lattice to assess their similarity. FFCA has been used in various domains. For instance 62 used it in combination with WordNet a for similarity assessment and 44 used an ontology obtained by FCA to assess similarity for products for information retrieval. A highlighted work of this domain is Ref. 61, which has used FFCA method in combination with information content theory to assess similarity. Despite its popularity, FCA-based approach 63 is of high computational complexity. Its pre-requirement is to build formal concept lattices, which is a complex task of O (2 N ) time order. The proposed measure assess similarity based on the shortest path between concepts which is an edge-counting approach in combination with fuzzy grades of membership which is an information-based approach. Accordingly, proposed measure lies in a hybrid category as illustrated in Table IV. Summary and Future Work Assessing concept semantic similarity is a critical module in many applications of artificial intelligence. This paper proposed a novel approach for concept similarity assessment which is based on FO. The proposed approach incorporates fuzzy relation composition in combination with an edge counting approach to assess the similarity. Accordingly, proposed measure relies on taxonomical features of an ontology in combination with its fuzzy grades of membership. 
Consequently, the approach lies in the hybrid category of assessing similarity that utilizes both taxonomical and information content metrics. Differently to the literature on FO-based semantic similarity measures, the proposed approach does not utilize FCA. Considering the limited world modeling of FCA and its high computational complexity, the proposed approach has lower pre-required complexity and higher flexibility. a http://wordnet.princeton.edu/ Furthermore, an evaluation method for FO-based similarity measure, named as FOSE, was proposed, which determines the variation between proposed FObased similarity measure with real world-data and compare it with that of CO-based similarity measure. As far as the present research is concerned, FOSE is the first data-driven evaluation approach for FO-based similarity measures. To evaluate the proposed measure by means of FOSE, a case study of LinkedIn SN was considered. Experimental results reveal superiority of proposed FO-based similarity measure in comparison with CO-based measures with respect to its correlation to the real world data and error minimization. Our future work is concerned with assessing similarity of concepts in a context with higher degrees of uncertainty based on interval and general type-2 fuzzy ontologies.
6,573
2017-02-01T00:00:00.000
[ "Computer Science" ]
3-difference cordial labeling of some path related graphs Let G be a (p, q)-graph. Let f : V (G)→ {1, 2, . . . , k} be a map where k is an integer, 2 ≤ k ≤ p. For each edge uv, assign the label |f(u)− f(v)|. f is called k-difference cordial labeling of G if |vf (i)− vf (j)| ≤ 1 and |ef (0)− ef (1)| ≤ 1 where vf (x) denotes the number of vertices labelled with x, ef (1) and ef (0) respectively denote the number of edges labelled with 1 and not labelled with 1. A graph with a k-difference cordial labeling is called a k-difference cordial graph. In this paper we investigate 3-difference cordial labeling behavior of triangular snake, alternate triangular snake, alternate quadrilateral snake, irregular triangular snake, irregular quadrilateral snake, double triangular snake, double quadrilateral snake, double alternate triangular snake, and double alternate quadrilateral snake. Introduction Graphs considered in this paper are finite and simple.Graph labeling is used in several areas of science and technology such as coding theory, astronomy, circuit design etc.For more details on application of graph labeling, see [2].Let G be a (p, q)-graph.Let f : V (G) → {1, 2, . . ., k} be a map.For each edge uv, assign the label |f (u) − f (v)|.f is called a k-difference cordial labeling of G if |v f (i) − v f (j)| ≤ 1 and |e f (0) − e f (1)| ≤ 1 where v f (x) denotes the number of vertices 3-Difference cordial labeling We investigate the 3-difference cordial labeling of some path related graphs.The triangular snake T n is obtained from the path P n by replacing each edge of the path by a triangle C 3 . Proof.Let P n be a path u 1 u 2 . . . Value of n Table 2. Edge labels Figure 1.A 3-difference cordial labeling for T 8 Next is the alternate triangular snake.An alternate triangular snake AT n is obtained from a path u 1 u 2 ...u n by joining u i and u i+1 (alternatively) to new vertex v i .That is every alternate edge of a path is replaced by C 3 . Proof.Case 1.Let the first triangle starts from u 2 and the last triangle be ends with u n−1 . In this case |V (AT Assign label 2 to the vertices v 1 , v 2 , ... We assign the label to the path vertices u 1 , u 2 , ...u n in the pattern 1, 3, 1, 3,.... 1, 3. Note that in this process the vertex u n and u n−1 received the labels 1, 3 respectively.In this case the vertex condition is given by v Case 2. Let the first triangle starts from u 1 and the last triangle be ends with u n . Here Assign the label to the vertices as in case 1.The vertex and edge conditions are given by v 2 and e f (0) = n − 1 and e f (1) = n respectively.Case 3. Let the first triangle starts from u 2 and the last triangle be ends with u n .Note that in this case |V (AT n )| = 3n−1 2 and |E(AT n )| = 2n − 2. Assign the label to the vertices as in case 1.It is easy to verify that the last vertex u n received the label 1 in this case.This vertex labeling is a 3-difference cordial labeling follows from the vertex and edge condition 2 and e f (0) = e f (1) = n − 1. Case 4. Let the first triangle starts from u 1 and the last triangle be ends with u n−1 .This case is equivalent to case 3. Now we look into alternate quadrilateral snake.An alternate quadrilateral snake AQ n is obtained from a path u 1 u 2 ...u n by joining u i ,u i+1 (alternatively) to new vertices v i , w i respectively and then joining v i and wi.That is every alternate edge of a path is replaced by a cycle C 4 . 
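The defining conditions stated above, |v_f(i) − v_f(j)| ≤ 1 and |e_f(0) − e_f(1)| ≤ 1, are easy to verify mechanically for a concrete labeling. The following Python sketch checks them for an arbitrary graph given as an edge list; the small example graph and labeling are hypothetical and only illustrate the check.

```python
from collections import Counter
from itertools import combinations

def is_k_difference_cordial(vertices, edges, labels, k=3):
    """labels: dict vertex -> value in {1, ..., k}."""
    counts = Counter(labels[v] for v in vertices)
    sizes = [counts.get(i, 0) for i in range(1, k + 1)]  # include unused labels with count 0
    vertex_ok = all(abs(a - b) <= 1 for a, b in combinations(sizes, 2))
    e1 = sum(1 for u, v in edges if abs(labels[u] - labels[v]) == 1)
    e0 = len(edges) - e1
    return vertex_ok and abs(e0 - e1) <= 1

# Hypothetical example: the path P4 labeled 1, 3, 2, 1.
vertices = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 4)]
labels = {1: 1, 2: 3, 3: 2, 4: 1}
print(is_k_difference_cordial(vertices, edges, labels))  # True for this labeling
```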
Proof.Case 1.Let the first C 4 be starts from u 2 and the last C 4 be ends with u n−1 .Note that in this case We consider the vertices in the path.Assign the labels 2, 1, 1, 3 to the vertices u 1 , u 2 , u 3 , u 4 .Next we assign the labels 2, 1, 1, 3 to the next four vertices u 5 , u 6 , u 7 , u 8 .Continuing this way, assign the label to the next four vertices and so on.Clearly in this process the vertex u n received the labels 1 or 3 according as n ≡ 2 (mod 4) or n ≡ 0 (mod 4).Then we move to the vertices v i and w i .Fix the labels 2 and 3 to the vertices v 1 and w 1 respectively.Assign the labels 1 and 3 to the vertices v 12i+2 and w 12i+2 for the values i = 0, 1, 2, 3,... Then assign the label 2 to the vertices v 12i+3 and w 12i+3 for i = 0, 1, 2, 3,...For the values of i = 0, 1, 2, 3,... assign the label 3 to the vertices v 12i+4 and w 12i+4 .Next assign the labels 2 and 3 to the vertices v 6i+5 and w 6i+5 for all the vertices of i = 0, 1, 2, 3... Then we assign the labels 1 and 3 to the vertices v 6i and w 6i for i = 1, 2, 3,...For all the values of i = 1, 2, 3,... assign the label 2 to the vertices v 6i+1 and w 6i+1 respectively.Now we assign the label 3 to the vertices v 12i+8 and w 12i+8 for all the values of i = 0, 1, 2, 3,... Then assign the labels 2 and 1 to the vertices v 12i+9 and w 12i+9 for i = 0, 1, 2, 3,...For the values i = 0, 1, 2, 3,... assign the labels 3 and 2 to the vertices v 12i+10 and w 12i+10 respectively.Clearly Clearly the vertex and edge label satisfy the condition of this case are given in table 3 and table 4.Here First we consider the vertices of a path.Assign the labels 1, 3, 2, 1 to the first four vertices u 1 , u 2 , u 3 , u 4 respectively.Next we assign the labels 1, 3, 2, 1 to the next four vertices u 5 , u 6 , u 7 , u 8 respectively.Proceeding like this, assign the next four vertices and so on.Clearly the vertex u n received the label 1 or 3 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Consider the vertices v 6i+2 and v 6i+4 .Now we assign the label 3 to the vertices v 6i+2 and v 6i+4 for all the values of i = 0, 1, 2, 3,... Assign the label 3 to the vertices w 2i for i = 1, 2, 3,... Next we assign the label 2 to the vertice v 6i+3 and v 6i+ for all the values of i = 0, 1, 2, 3,...For all the vvalues of i = 1, 2, 3,... assign the label 2 to the vertices w 2i+1 .Now we consider the vertices v 6i and v 6i+1 for all the vaalues i = 1, 2, 3,...The vertex and edge condition of this case are given in table 5 and tablet6.Table 6.Edge label Case 3. Let the first C 4 be starts from u 2 and the last C 4 be ends with u n .Note that Value of n First we fix the label 1 to the vertex u 1 .Then we consider the path vertices u i as in the labels 1, 3, 2, 1 to the four vvertices u 2 , u 3 , u 4 , u 5 respectively.Next assign the labels 1, 3, 2, 1 to the net four vertices u 6 , u 7 , u 8 , u 9 respectively.Continuing like this to assign the label to the next four vertices and so on.In this process, the last vertex u n received the label 3 or 1 according as n ≡ 3 (mod 4) or n ≡ 1 (mod 4).Consider the vertices v 6i+1 and v 6i+3 .Assign the label 2 to the vertices v 6i+1 and v 6i+3 for the values i = 0, 1, 2, 3,... Next assign the label 3 to the vertices v 6i+2 and v 6i+4 for i = 0, 1, 2,...For the values of i = 0, 1, 2,.... assign the label 1 to the vertices v 6i+5 .Then assign the label 1 to the vertices v 6i for i = 1, 2, 3,... Now we move to the vertices w i .Assign the label 2 to the vertices w 2i+1 for all the values of i = 0, 1, 2, 3,... 
Then assign the label 3 to the vertices w 2i for i = 1, 2, 3,... Clearly the vertex and edge condition of this case are given in table 7 and table 8. Table 8.Edge label Case 4. Let the first C 4 be starts from u 1 and the last C 4 be ends with u n−1 .This case is equivalent to case 3. Next investigation is about the irregular triangular snakes.The irregular triangular snake IT n is obtained from the path u 1 u 2 ...u n with vertex set Theorem 2.4.The irregular triangular snake is a 3-difference cordial graph. Proof. Clearly |V (IT Assign the labels 1, 3, 2, 2, 3, 1, 2, 2 to the first eight path vertices.Next eight path vertices are labeled by 1, 3, 2, 2, 3, 1, 2, 2. Proceeding like this, assign the label to the 8t vertices of the path.Note that the vertex u 8t receive the label 2. Assign the label to the next r vertices u 8t+1 , u 8t+2 , ..., u 8t+r by the sequence of integers 1, 3, 2, 2, 3, 1,.... Next assign the label to the first five vertices v 1 , v 2 , v 3 , v 4 , v 5 by the integers 1, 3, 2, 1, 3. Then assign the labels 1, 3, 2, 1, 3 to the next five vertices v 6 , v 7 , v 8 , v 9 , v 10 .Continuing this way, assign the label to the next five vertices and so on.If all the vertices v i are labeled, then stop.Otherwise there are some non labeled vertices and the number of labeled verties is less than or equal to 4. Assign the labels 1, 3, 2, 1 to the non labeled vertices.That is if only one non labeled vertices is exists we use the label 1 only if it is two then we use the labels 1, 3 and so on.This labeling is clearly 3-difference cordial labeling follows from table 9. Value of n e f (0) e f (1) n is odd The irregular quadrilateral snake IQ n is obtained from the path P n : u 1 u 2 ...u n with vertex set Theorem 2.5.The irregular quadrilateral snake is a 3-difference cordial graph. Proof.Clearly |V (IQ n )| = 3n − 4 and |E(IQ n )| = 4n − 7 respectively.We consider the path vertices u i .Assign the labels 1, 2, 3 to the first three path vertices u 1 , u 2 , u 3 respectively.Then assign the label 1 to the remaining path vertices u 4 , u 5 , u 6 , ... Next assign the label 2 to the vertices v i and assign the label 3 to the vertices w i .Clearly e f (0 A double triangular snake DT n consists of two triangular snakes that have a common path.That is a double triangular snake is obtained from a path u 1 u 2 ...u n by joining u i and u i+1 to a new vertex v i (1 ≤ i ≤ n − 1) and to a new vertex w i (1 ≤ i ≤ n − 1). www.ijc.or.idFirst we consider the path vertices u i .Assign the labels 1, 3 to the vertices u 1 , u 2 respectively.Then assign the labels 2, 3, 3, 3 to the path vertices u 3 , u 4 , u 5 , u 6 .Using the same pattern assign the labels 2, 3, 3, 3 to the next four path vertices u 7 , u 8 , u 9 , u 10 .Continuing this way assign the next four vertices and so on.If all the path vertices are labeled in this way then we stop the process.Otherwise there are some non labeled vertices exists and in the case the number of non labeled vertices less than or equal to 3. Assign the labels 2, 3, 3 to the non labeled vertices.If only one non labeled vertex exists then we use the label 2 only.If there are two labeled vertices then we use the labels 2, 3. 
Next we move to the vertices v i , w i .Assign the label 1 to all the vertices v i (1 ≤ i ≤ n − 1).Assign the label 2 to the vertex w 1 .Then assign the labels 3, 2, 2, 2 to the four vertices w 2 , w 3 , w 4 , w 5 .Next we assign the labels 3, 2, 2, 2 to the next four vertices w 6 , w 7 , w 8 , w 9 .Proceeding like this, if all the vertices w i are labeled then stop.Otherwise next non labeled vertices in the sequence 3, 2, 2. This labeling is a 3-difference cordial labeling and its vertex and edge condition is given by v Table 10.Edge label A double quadrilateral snake DQ n consists of two quadrilateral snake have a common path. Consider the path vertices u i .Assign the labels 1, 1, 2, 2 to the first four vertices u 1 , u 2 , u 3 , u 4 respectively.Then assign the labels 1, 1, 2, 2 to the next four vertices u 5 , u 6 , u 7 , u 8 .Proceeding like this, assign the next four vertices and so on.If all the path vertices are labeled in this way then the labelling is complete.Otherwise there are some non labeled vertices exist.If the number of non labeled vertices less than or equal to 3, then assign the labels 1, 1, 2 to the non labeled vertices.If only one non labeled vertex exists then we use the label 1 only.If there are two unlabelled vertices, then we use the labels 1, 1. Now we move to the vertices v i and w i .Assign the labels 2, 1, 3, 2 to the vertices v 1 , v 2 , v 3 , v 4 respectively.Now we assign the labels 3, 1, 2, 1 to the vertices w 1 , w 2 , w 3 , w 4 respectively.Next assign the label 2 to the vertices v 12i+5 , v 12i+7 , v 12i+9 , v 12i+11 for all the values of i = 0, 1, 2, 3,.... Then assign the label 1 to the vertices w 12i+5 , w 12i+6 , w 12i+9 , w 12i+10 for i = 0, 1, 2, 3,.... Now for all the values of i = 1, 2, 3,... assign the label 2 to the vertices v 12i+1 , v 12i+3 , v 12i+4 .Next we assign the label 1 to the vertices w 12i+1 , w 12i+2 , w 12i+4 for all the values i = 0, 1, 2, 3,.... Then we assign the label 1 to the vertices v 12i+6 , v 12i+10 for all the values of i = 0, 1, 2, 3,....For all the values of i = 0, 1, 2, 3,.... assign the label 3 to the vertices w 12i+7 , w 12i+8 , w 12i+11 .Now assign the label 1 to the vertices v 12i , v 12i+2 for all the values of i = 0, 1, 2, 3,.... Next assign the label 3 to the vertices w 12i , w 12i+3 for all the values of i = 0, 1, 2, 3,....We consider the vertices x i and y i .Assign the labels 2, 3, 1, 2 to the vertices x 1 , x 2 , x 3 , x 4 respectively.Then we assign the labels 2, 3, 1, 2 to the next four vertices x 5 , x 6 , x 7 , x 8 respectively.Continuing this way we assign the next four vertices and so on.If all the vertices are labeled in this way, then stop.Otherwise there are some non labeled vertices exist. If the number of non labeled vertices less than or equal to 3, then assign the labels 2, 3, 1 to the non labeled vertices.If only one unlabeled vertex exists, then we use the label 2 only.If there are two then we use the labels 2, 3. Now we assign the label 3 to all the vertices y i (1 ≤ i ≤ n − 1).This labeling is 3-difference cordial labeling follows from the following tables.Table 12.Vertex label A double alternate triangular snake DAT n consists of two alternate triangular snakes that have a common path.That is a double alternate triangular snake is obtained from a path u 1 u 2 ....u n by joining u i and u i+1 (alternatively) to two vertices v i and w i . Theorem 2.8.Double alternate triangular snake DAT n is a 3-difference cordial graph. 
Proof.Case 1.The triangle starts from u 1 and end with u n .In this case |V (DAT n )| = 2n and |E(DAT n )| = 3n − 1. Assign the label 1 to the path vertex u 1 .Now we assign the label 2 to the vertices u 12i+2 , u 12i+3 , u 12i+6 , u 12i+7 , u 12i+10 , u 12i+11 for all the values of i = 0, 1, 2, 3,... Then we assign the labels 3 to the vertices u 12i+4 , u 12i+5 , u 12i+8 , u 12i+9 for all the values of i = 0, 1, 2, 3,...For all the values of i = 0, 1, 2, 3,... assign the label 1 to the vertices u 12i and u 12i+1 .Next we move to the vertices v i and w i .Assign the labels 2, 1, 3, 1, 2, 3 to the first six vertices v 1 , v 2 , v 3 , v 4 , v 5 , v 6 respectively.Then we assign the labels 2, 1, 3, 1, 2, 3 to the next six vertices v 7 , v 8 , v 9 , v 10 , v 11 , v 12 respectively.Proceeding like this assign the label to the next six vertices and so on.If all the vertices v i are labeled then stop the process.Otherwise there are some non labeled vertices and number of labeled vertices is less than or equal to 5. Now assign the www.ijc.or.id 3-difference cordial labeling of some path related graphs | R. Ponraj, M. M. Adaickalam and R. Kala labels 2, 1, 3, 1, 2 to the non labeled vertices.If four non labeled vertices are exist then assign the labels 2, 1, 3, 1 to th non labeled vertices.If the number of non labeled vertices is 3 then assign the labels 2, 1, 3 to the non labeled vertices.If only one non labeled vertex exist then assign the label 2 only.If it is two then assign the labels 2, 1 to the non labeled vertices.Consider the vertices w i .Assign the labels 3, 1, 1 to the first three vertices w 1 , w 2 , w 3 respectively.Then we assign the labels 3, 1, 1 to the next three vertices w 4 , w 5 , w 6 respectively.Continuing this way we assign the label to the next three vertices and so on.If all the vertices w i are labeled, then stop the process.Otherwise there are some non labeled vertices exists.If the number of non labeled vertices are less than or equal to 2 then assign the labels 3, 1 to the non labeled vertices.If only one non labeled vertex is exist then assign the label 3 only.The edge condition is given by e f (0) = 3n−2 2 and e f (1) = 3n 2 .Also the vertex condition is given by a table 13.Table 13.Vertex label Case 2. The triangle starts from u 2 and end with u n−1 . 
In this case First we consider the path vertices u i as in the labels 1, 3, 2, 2 to the first four path vertices u 1 , u 2 , u 3 , u 4 respectively.Then we assign the labels 1, 3, 2, 2 to the next four path vertices u 5 , u 6 , u 7 , u 8 respectively.Proceeding like this assign the label to the next four vertices and so on.Clearly in this process the vertex u n received the label 2 or 3 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Next we move to the vertices v i .Assign the labels 1, 2, 3, 2, 3, 1 to the six vertices v 1 , v 2 , v 3 , v 4 , v 5 , v 6 respectively.Then we assign the labels 1, 2, 3, 2, 3, 1 to the next six vertices v 7 , v 8 , v 9 , v 10 , v 11 , v 12 respectively.Continuing this process assign the label to the next six vertices and so on.If all the vertices of v i are labeled then we stop the process.Otherwise there are some non labeled vertices are exist.If the number of non labeled vertices are less than or equal to 5 then assign the labels 1, 2, 3, 2, 3 to the non labeled vertices.If there are four non labeled vertices are exist then assign the labels 1, 2, 3, 2 to he non labeled vertices.If the number of non labeled vertices are three then assign the labels 1, 2, 3 to the non labeled vertices.If it is two then assign the labels 1, 2 to the non labeled vertices.If only one non labeled vertex is exist then assign the label 1 only.Now we consider the vertices w i .Assign the label to the vertices w 6i+2 , w 6i+5 , w 6i+3 for all the values of i = 0, 1, 2,... Then assign the labels 3 to the vertices w 6i+4 for i = 0, 1, 2, 3,...For all the values of i = 1, 2, 3... assign the label 3 to the vertices w 6i and w 6i+1 .The edge condition is e f (0) = 3n−4 2 and e f (1) = 3n−6 2 .The vertex condition is given in table table 14.Table 15.Vertex label Case 4. The triangle starts from u 2 and end with u n−1 .This case is similar to case 3. Value of n Finally we look into the graph double alternate quadrilateral snake.Double alternate quadrilateral snake DAQ n consists of two alternate quadrilateral snake that have a common path.That is it is obtained from a path u 1 u 2 ...u n joining u i and u i+1 (alternatively) to new verticces v i , x i and w i , y i respectively and adding the edges v i w i and x i y i .Theorem 2.9.All double alternate quadrilateral snakes are 3-difference cordial graphs. Proof.Case 1.The squares starts from u 1 and end with u n .In this case |V (DAQ n )| = 3n and |E(DAQ n )| = 4n−1.We consider the path vertices u i .Assign the labels 1, 1, 3, 3 to the first four path vertices u 1 , u 2 , u 3 , u 4 .Then we sign the labels 1, 1, 3, 3 to the next four path vertices u 5 , u 6 , u 7 , u 8 .Continuing this way we assign the label to the next four vertices and so on.Clearly in this process the last vertex u n received the label 3 or 1 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Now we move to the vertices v i and w i .Assign the labels 2, 1 to the first two vertices v 1 and v 2 .Then we assign the labels 2, 1 to the next two vertices and so on.Proceeding like this, we assign the labels to the next two vertices and so on.Clearly the last verte v n−1 received the label 1 or 2 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Now we assign the labels 3, 2 to the two vertices w 1 , w 2 respectively.Then we assign the labels 3, 2 to www.ijc.or.id 3-difference cordial labeling of some path related graphs | R. Ponraj, M. M. Adaickalam and R. 
Kala the next two vertices w 3 , w 4 respectively.Continuing this way we assign the label to the net two vertices and so on.Note that in this process the last vertex w n−1 receied the label 2 or 3 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Next we move to the vertices x i and y i .Assign the labels to the vertices x i and y i is same as assign the label to the vertices v i and w i .The vertex and edge condition of this case is v f (1) = v f (2) = v f (3) = n and e f (0) = 2n − 1 and e f (1) = 2n.Case 2. The squares starts from u 1 and end with u n−1 .In this case |V (DA(Q n ))| = 3n − 4 and |E(DA(Q n ))| = 4n − 7. Consider the path vertices u i .Assign the label to the vertices u i as in case 1.We move to the vertices v i and w i .Assign the labels 2, 2 to the vertices v 1 and w 2 respectively.Then assign the labels 2, 1 to the vertices v 2 , v 3 respectively.Now we assign the labels 2, 1 to the next two vertices v 4 , v 5 respectively.Continuing this process assign the label to the next two vertices and so on.Clearly the last vertex v n−1 received the label 1 or 2 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Assign the labels 3, 2 to the vertices w 2 , w 3 respectively.Then assign the labels 3, 2 to the next two vertices w 4 , w 5 respectively.Proceeding like this assign the label to the next two vertices and so on.Note that the last vertex w n received the label 2 or 3 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Next we move to the vertices x i and y i .Assign the labels 2, 3 to the vertices x 1 and w 1 respectively.Now we assign the label 2, 1 to the vertices x 2 , x 3 .Then we assign the labels 2, 1 to the next two vertices x 4 , x 5 respectively.Continuing like this we assign the label to the next two vertices and so on.Clearly the last vertex x n labeled by the integers 1 or 2 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).Assign the labels 3, 2 to the vertices y 2 , y 3 respectively.Then we assign the labels 3, 2 to the next two vertices y 4 , y 5 respectively.Continuing this process assign the label to the next two vertices and so on.Clearly the last vertex y n−1 received the label 2 or 3 according as n ≡ 0 (mod 4) or n ≡ 2 (mod 4).The vertex and edge condition of this case is v f Assign the label 1 to the vertex u 1 .Next we assign the labels 3, 3, 1, 1 to the next four vertices u 2 , u 3 , u 4 , u 5 respectively.Then assign the labels 3, 3, 1, 1 to the net four vertices u 6 , u 7 , u 8 , u 9 respectively.Proceeding like this assign the label to the next four vertices and so on.Note that in this case the last verte u n u n received the label 3 or 1 according as n ≡ 3 (mod 4) or n ≡ 1 (mod 4).Next we move to the vertices v i and w i .Assign the labels 1, 2 to the vertices v 1 , v 2 respectively.Then we assign labels 1, 2 to th next two vertices v 3 , v 4 respectively.Proceeding this way we assign the next two verticces and so on.Clearly the last vertex v n received the label 1 or 2 according as n ≡ 3 (mod 4) or n ≡ 1 (mod 4).Now we move to the vertices w i .Assign the labels 2, 3 to the vertices w 1 and w 2 respectively.Then we assign the labels 2, 3 to the next two vertices w 3 , w 4 respectively.Continuing this way assign the label to the next two vertices and so on.Note that the last vertex w n−1 received the label 2 or 3 according as n ≡ 3 (mod 4) or n ≡ 1 (mod 4).Consider the vertices x i and y i is same as assign the label to the vertices v i and w i .The vertex and edge condition of this case is v f (1) = n and v f (2) = v f (3) = n − 1 and e f (0) = e f (1) = 2n − 2. 
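For small cases, the labelings constructed in the proofs above can be cross-checked by exhaustive search. The sketch below builds the triangular snake T_n as an edge list and searches all 3-labelings of a small instance; it is meant only as a sanity check under our own naming conventions, not as part of the proofs.

```python
from collections import Counter
from itertools import combinations, product

def triangular_snake(n):
    """T_n: path u_1...u_n with each path edge (u_i, u_{i+1}) completed to a triangle by a new vertex."""
    edges, extra = [], n
    for i in range(1, n):
        extra += 1
        edges += [(i, i + 1), (i, extra), (i + 1, extra)]
    return list(range(1, extra + 1)), edges

def cordial_3(vertices, edges, labels):
    counts = Counter(labels.values())
    sizes = [counts.get(i, 0) for i in (1, 2, 3)]
    e1 = sum(1 for u, v in edges if abs(labels[u] - labels[v]) == 1)
    return (all(abs(a - b) <= 1 for a, b in combinations(sizes, 2))
            and abs((len(edges) - e1) - e1) <= 1)

def exists_3_difference_cordial(vertices, edges):
    return any(cordial_3(vertices, edges, dict(zip(vertices, lab)))
               for lab in product((1, 2, 3), repeat=len(vertices)))

V, E = triangular_snake(4)  # 7 vertices, 9 edges: 3^7 = 2187 labelings, enumerated instantly
print(exists_3_difference_cordial(V, E))  # True: a 3-difference cordial labeling of T_4 exists
```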
Figure 2. A 3-difference cordial labeling of the irregular triangular snake IT_10.
Table 1. Vertex labels (values of n, e_f(0), e_f(1)).
Table 4. Edge labels (Case 2: the first C_4 starts from u_1 and the last C_4 ends with u_n).
Table 14. Vertex labels.
For all the values of i = 1, 2, 3, . . . , assign the label 2 to the path vertices u_{12i}. Then we assign the label 3 to the path vertices u_{12i+5}, u_{12i+6}, u_{12i+9}, and u_{12i+10} for all the values of i = 0, 1, 2, 3, . . . Now we consider the vertices v_i and w_i. Assign the labels 3, 2, 1, 1, 2, 3 to the first six vertices v_1, v_2, v_3, v_4, v_5, v_6, respectively. Then we assign the labels 3, 2, 1, 1, 2, 3 to the next six vertices v_7, v_8, v_9, v_10, v_11, v_12, respectively. Proceeding like this, assign the labels to the next six vertices, and so on. The last six vertices v_{n-5}, v_{n-4}, v_{n-3}, v_{n-2}, v_{n-1}, v_n are labeled 3, 2, 1, 1, 2, 3. Consider the vertices w_i; assign the labels to the vertices w_i as in Case 1. The edge condition of this case is e_f(0) = e_f(1) = (3n-3)/2. The vertex condition of this case is given by Table 15.
7,432.6
2018-06-12T00:00:00.000
[ "Mathematics" ]
Multipolar Intuitionistic Fuzzy Ideal in B-Algebras A B-algebra is an algebraic structure which combines some properties from BCK-algebras and BCI-algebras. Some researchers have investigated the concept of multipolar fuzzy ideals in BCK/BCI-algebras and multipolar intuitionistic fuzzy sets in B-algebras. In this paper, we construct a new structure which is called a multipolar intuitionistic fuzzy ideal in B-algebras. This structure is a combination of three structures, namely multipolar fuzzy ideals in BCK/BCI-algebras, fuzzy B-subalgebras in B-algebras, and multipolar intuitionistic fuzzy B-algebras. We investigate and prove some characterizations of the multipolar intuitionistic fuzzy ideal, such as necessary and sufficient conditions. INTRODUCTION Zadeh [1] introduced a new idea, namely a fuzzy set, as a non-empty set with a degree of membership whose values lie in the interval [0,1], in 1965. The degree of membership of each member of the set is determined by the membership function. That notion from Zadeh became the basis for further researchers to develop fuzzy concepts in various fields such as graph theory, data analysis, decision making, and so on. A simple example of an algebraic structure is a group. Besides groups, B-algebras, BCK-algebras and BCI-algebras are also examples of algebraic structures. Imai and Iseki [2] proposed the notion of a new algebraic structure called BCK-algebras in 1966. BCK-algebras are an important class of algebraic structures constructed from two different fragments, set theory and propositional calculus. In the same year, Iseki [3] continued this research and proposed the notion of BCI-algebras, which are a generalization of BCK-algebras. A new algebraic structure called B-algebras, which satisfies some properties of BCK-algebras and BCI-algebras, was proposed by Neggers and Kim in [4]. They also investigated its properties. Zhang [5] introduced the concept of bipolar fuzzy sets, which are an extension of fuzzy sets. Meng [6] studied fuzzy implicative ideals in BCK-algebras in 1997. Moreover, Muhiuddin and Al-Kadi [7] introduced bipolar fuzzy implicative ideals in BCK-algebras. They discussed the relationship between a bipolar fuzzy ideal and a bipolar fuzzy implicative ideal. Furthermore, Chen et al. [8] introduced the concept of multipolar fuzzy sets, which are an extension of bipolar fuzzy sets. Kang et al. [9] proposed the concept of a multipolar intuitionistic fuzzy set with finite degree and its application in BCK/BCI-algebras. In 1999, Atanassov [10] introduced the notion of intuitionistic fuzzy sets. Jun et al. [11] defined fuzzy B-algebras. Then, Al-Masarwah and Ahmad [12] discussed multipolar fuzzy ideals in BCK/BCI-algebras. Ahn and Bang [13] studied fuzzy B-subalgebras in B-algebras. Recently, Borzooei et al. [14] proposed the concept of multipolar intuitionistic fuzzy B-algebras and some of their properties. They constructed a simple multipolar fuzzy set. Then, they also discussed multipolar intuitionistic fuzzy subalgebras of B-algebras. In this paper, we construct a new structure which is called a multipolar intuitionistic fuzzy ideal in B-algebras. This structure is a combination of three structures which are the results of research by Al-Masarwah and Ahmad [12], Ahn and Bang [13], and Borzooei et al. [14]. Next, we investigate and prove some necessary and sufficient conditions for the multipolar intuitionistic fuzzy ideal.
METHODS By using a literature study and analogous concepts from [12], [13] and [14], we propose the terminology of a multipolar intuitionistic fuzzy ideal in B-algebras. We start by describing the structure of B-algebras, fuzzy B-algebras, and multipolar intuitionistic fuzzy sets. Each structure is given its definition, examples, and some of its properties. Definition 2.1 [15] A B-algebra is a nonempty set X with 0 as a (right) identity element and a binary operation * satisfying the following axioms for all x, y, z ∈ X: i. RESULTS AND DISCUSSION In this section, we describe the structure of a multipolar intuitionistic fuzzy ideal in B-algebras. The description begins with the definition of the new structure, then examples are given, and its properties are determined and proven. For a given threshold and a multipolar intuitionistic fuzzy set (l,) in X, we give the conditions for the associated level set to be an ideal of X, together with an example. Hence, 0 belongs to this set. ii. Next, we discuss some properties of the multipolar intuitionistic fuzzy ideal in B-algebras. Corollary. If we assume that X is a commutative B-algebra, then the statements in Proposition 3.7 and Proposition 3.8 are equivalent. Furthermore, we also give another condition for a multipolar intuitionistic fuzzy ideal in B-algebras, which leads to the following proposition. Proof. We assume that (l,) is a multipolar intuitionistic fuzzy ideal over X. Let x, y, z ∈ X such that (x * y) * z = 0. So, x * y ≤ z. By using Definition 3.1 (i) and (ii), we obtain the required conditions. Hence, (l,) is a multipolar intuitionistic fuzzy ideal over X. ∎ CONCLUSIONS In this paper, we apply the terminology of the multipolar intuitionistic fuzzy ideal in B-algebras and investigate some of its properties. We also explain the conditions for a multipolar intuitionistic fuzzy set to be a multipolar intuitionistic fuzzy ideal and give some examples. These definitions and main results can be applied similarly in other algebraic structures such as BCK-algebras and BCI-algebras.
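The axioms referred to in Definition 2.1 do not survive in this excerpt. For orientation, the axioms of a B-algebra are usually stated as below; this is quoted from the general literature on B-algebras rather than from reference [15] itself, so it should be read as background, not as the paper's own statement.

```latex
% Standard axioms of a B-algebra (X, *, 0), for all x, y, z in X:
\begin{align*}
\text{(I)}\quad  & x * x = 0,\\
\text{(II)}\quad & x * 0 = x,\\
\text{(III)}\quad & (x * y) * z = x * (z * (0 * y)).
\end{align*}
```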
1,159.2
2022-03-11T00:00:00.000
[ "Computer Science", "Mathematics" ]
So you think you can PLS-DA? Background Partial Least-Squares Discriminant Analysis (PLS-DA) is a popular machine learning tool that is gaining increasing attention as a useful feature selector and classifier. In an effort to understand its strengths and weaknesses, we performed a series of experiments with synthetic data and compared its performance to its close relative from which it was initially derived, namely Principal Component Analysis (PCA). Results We demonstrate that even though PCA ignores the information regarding the class labels of the samples, this unsupervised tool can be remarkably effective as a feature selector. In some cases, it outperforms PLS-DA, which is made aware of the class labels in its input. Our experiments range from looking at the signal-to-noise ratio in the feature selection task, to considering many practical distributions and models encountered when analyzing bioinformatics and clinical data. Other methods were also evaluated. Finally, we analyzed an interesting data set from 396 vaginal microbiome samples where the ground truth for the feature selection was available. All the 3D figures shown in this paper as well as the supplementary ones can be viewed interactively at http://biorg.cs.fiu.edu/plsda Conclusions Our results highlighted the strengths and weaknesses of PLS-DA in comparison with PCA for different underlying data models. Background Partial Least-Squares Discriminant Analysis (PLS-DA) is a multivariate dimensionality-reduction tool [1,2] that has been popular in the field of chemometrics for well over two decades [3], and has been recommended for use in omics data analyses. PLS-DA is gaining popularity in metabolomics and in other integrative omics analyses [4][5][6]. Both chemometrics and omics data sets are characterized by large volume, large numbers of features, noise and missing data [2,7]. These data sets also often have far fewer samples than features. PLS-DA can be thought of as a "supervised" version of Principal Component Analysis (PCA) in the sense that it achieves dimensionality reduction but with full awareness of the class labels. Besides its use for dimensionality reduction, it can be adapted to be used for feature selection [8] as well as for classification [9][10][11]. As its popularity grows, it is important to note that its role in discriminant analysis can be easily misused and misinterpreted [2,12]. Since it is prone to overfitting, cross-validation (CV) is an important step in using PLS-DA as a feature selector, classifier or even just for visualization [13,14]. Furthermore, precious little is known about the performance of PLS-DA for different kinds of data. We use a series of experiments to shed light on the strengths and weaknesses of PLS-DA vis-à-vis PCA, as well as the kinds of distributions where PLS-DA could be useful and where it fares poorly. The objective of dimensionality-reduction methods such as PCA and PLS-DA is to arrive at a linear transformation that converts the data to a lower dimensional space with as small an error as possible.
If we think of the original data matrix as a collection of n m-dimensional vectors (i.e., X is an n × m matrix), then the above objective can be thought of as that of finding an m × d transformation matrix A that optimally transforms the data matrix X into a collection of n d-dimensional vectors S. Thus, S = XA + E, where E is the error matrix. The matrix S, whose rows correspond to the transformed vectors, gives d-dimensional scores for each of the n vectors in X. The new features representing the reduced dimensions are referred to as principal components (PC). In PCA, the transformation preserves in its first PC as much variance in the original data as possible. On the other hand, PLS-DA preserves in its first PC as much covariance as possible between the original data and its labeling. Both can be described as iterative processes where the error term is used to define the next PC. Figure 1 highlights the differences, showing an example of a synthetic data set for which the PC chosen by PCA points to the bottom right, while the one chosen by PLS-DA is roughly orthogonal to it, pointing to the bottom left. It is also important to note that a higher explained variance or higher correlation for both PCA and PLS-DA does not always mean a better model, even though the two are often linked [14]. The following paragraphs give a more thorough description of the methods and their differences. PCA Informally, the PCA algorithm calculates the first PC along the first eigenvector by minimizing the projection error, and then iteratively projects all the points onto a subspace orthogonal to the last PC and repeats the process on the projected points. An alternative formulation is that the principal component vectors are given by the eigenvectors of the non-singular portion of the covariance matrix C, where C_n denotes the n × n centering matrix. The loading vectors, denoted by L_1, ..., L_n, are given in terms of the eigenvectors e_1, ..., e_n and the eigenvalues λ_1, ..., λ_n of C. PLS-DA In its standard variant, the components are required to be orthogonal to each other. In a manner similar to Eq. (1), the first PC of PLS-DA can be formulated in terms of the eigenvectors of the non-singular portion of an analogous matrix C, where y is the class label vector. (Figure 1 shows a data set where PLS-DA picks the direction that helps best separate the labels, while PCA picks the direction that least helps separate them.) The iterative process computes the loading vectors a_1, ..., a_d, which give the importance of each feature in that component. In iteration h, the objective involves the loading b_h for the label vector y_h, with X_1 = X, where X_h and y_h are the residual (error) matrices after transforming with the previous h − 1 components. sPLS-DA A variant of PLS-DA that makes a sparsity assumption, i.e., that only a small number of features are responsible for driving a biological event or effect under study, has been devised [15,16] and shown to be successful in applications where the number of features far outnumbers the number of samples [17]. Using lasso penalization, these methods add penalties (L_1 or L_0 norm) to better guide the feature selection and model fitting process and achieve improved classification by allowing a subset of the covariates to be selected instead of using all of them. Methods In this section, we discuss the aim, design and settings of the experiments. Synthetic data for the experiments The following describes a standard experimental setup.
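Before turning to the experimental setup, the variance-versus-covariance contrast described above can be illustrated with a small sketch. The paper itself uses the mixOmics R package; the snippet below is a Python stand-in using scikit-learn, in which PLS-DA is emulated as PLS regression against a binary label vector (scikit-learn has no dedicated PLS-DA class). The data, sizes and printed quantities are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features; only feature 0 carries the class label.
X = rng.normal(size=(100, 20))
y = (X[:, 0] > 0).astype(float)

# PCA ignores y and keeps the direction of maximal variance.
pca = PCA(n_components=1).fit(X)

# PLS-DA emulated as PLS regression against the binary label vector
# (mixOmics' plsda does essentially this with a dummy-coded y).
pls = PLSRegression(n_components=1).fit(X, y)

print("PCA weight on the signal feature: %.2f" % abs(pca.components_[0, 0]))
print("PLS weight on the signal feature: %.2f" % abs(pls.x_weights_[0, 0]))
```

Because every feature here has the same variance, PCA has no reason to favor the signal feature, while the PLS weight vector concentrates on it; this is the covariance-versus-variance distinction described above.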
Clarifications are provided wherever the experiments differed from this norm. For each of the experiments, labeled synthetic data were generated as follows. The basic input parameters for each experiment are the number of samples n and the number of features of each sample m. Every data set assumed that there was a rule (e.g., a linear inequality), which was a function of some subset of the m features (i.e., signal features), while the rest were considered as noise features. The input parameters also included the rule and consequently the set of signal features. This rule will be considered as the ground truth. PLS-DA was then applied to the data set to see how well it performed feature selection or how well it classified. All experiments were executed using PCA and sPLS-DA, where the loading vector is only non-zero for the selected features. Both are available in the mixOmics R package [18], which was chosen because it is the implementation most used by biologists and chemists. The noise features of all points are generated from a random distribution that is specified as input to the data generator. The default is assumed to be the uniform distribution. The satisfied rule dictates the generation of the signal features. Performance metrics for the experiments As is standard with experiments in machine learning, we evaluated the experiments by computing the following measures: true positives (tp), true negatives (tn), false positives (fp), false negatives (fn), precision (tp ÷ (tp + fp)), and recall (tp ÷ (tp + fn)). Note that in our case precision and recall are identical, because their formulas coincide when fp = fn. The data is created with s signal features and s features are selected. Because s is the number of signal features, regardless of whether they were selected or not, s = tp + fn. Also, because only s features are selected, s = tp + fp. Setting both equations equal, we get that fp = fn. Since tn is large in all our feature extraction experiments, some of the more sophisticated measures are skewed and therefore not useful. For example, the F1 score will necessarily be low, while accuracy and specificity will be extremely high. When the number of noise features is low, precision could be artificially inflated. However, this is not likely in real experiments. Graphs are shown as 3D plots where the z axis represents the performance measure (percentage of signal features among the features marked as important by the tools), and the x and y axes show relevant parameters of the experiment. Experiments varying n/m We first show how the ratio of the number of samples, n, to the number of features, m, affects the apparent performance of PLS-DA and the number of spurious relationships found. As described earlier, we generated n random data points in m-dimensional space (from a uniform distribution) and labeled them randomly. The ratio n/m was reduced from 2:1 to 1:2 to 1:20 to 1:200. Given the data set, it is clear that any separation of the data found by any method is merely occurring fortuitously. When we have at least twice as many features as samples, PLS-DA readily finds a hyperplane that perfectly separates the two groups merely by chance. As shown in Fig. 2, the two randomly labeled groups of points become increasingly separable. This is explained by the curse of dimensionality, which predicts that the sparsity of the data grows increasingly fast with the number of dimensions. These executions only range over ratios from 2:1 to 1:200.
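This chance-separation effect is easy to reproduce. The sketch below is an illustrative Python stand-in (the paper used mixOmics in R): it fits a one-component PLS model to uniformly random data with random labels and compares the apparent (resubstitution) accuracy with 5-fold cross-validated accuracy. The sample sizes and the 0.5 decision threshold are assumptions chosen only to mirror the 2:1 to 1:200 ratios.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

def apparent_vs_cv_accuracy(n, m):
    """Random data, random labels: apparent separability grows with m/n,
    while cross-validated accuracy stays near chance (about 0.5)."""
    X = rng.uniform(size=(n, m))
    y = rng.integers(0, 2, size=n).astype(float)

    pls = PLSRegression(n_components=1)
    fit_scores = pls.fit(X, y).predict(X).ravel()
    cv_scores = cross_val_predict(PLSRegression(n_components=1), X, y, cv=5).ravel()

    apparent = ((fit_scores > 0.5) == (y > 0.5)).mean()
    cross_validated = ((cv_scores > 0.5) == (y > 0.5)).mean()
    return apparent, cross_validated

for m in (25, 100, 1000, 10000):   # with n = 50 this spans roughly 2:1 to 1:200
    print(m, apparent_vs_cv_accuracy(n=50, m=m))
```

As m grows relative to n, the apparent accuracy approaches 1 while the cross-validated accuracy stays near 0.5, which is the behavior described above.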
In many current omics data sets, ratios can even exceed 1:1000 (i.e., data sets with 50 samples and 50,000 genes are common). This is one of the reasons for the need for sample size determination when designing an experiment [19]. If any separating hyperplane is used as a rule to discriminate blue points from orange points, then even though the apparent error rate (AE) decreases for this set, its ability to discriminate any new random points will remain dismal [20]. In fact, the CV error rates using 1000 repetitions for the first PC in the four experiments shown in Fig. 2 were 0.53, 0.53, 0.5 and 0.48 respectively, showing that even though separability increased, the errors remain at chance level. Results In this section, we discuss a variety of experiments with synthetic and real data that will help us explain the strengths and weaknesses of PLS-DA vis-à-vis PCA and other tools. Experiments using PLS-DA as a feature selector We used 3 sets of methods for generating the synthetic points. In the first set, we consider point sets that are linearly separable. In the second data set we assume that the membership of the points in a class is determined by whether selected signal features lie in prespecified ranges. Finally, we perform experiments with clustered points. Experiments with Linearly Separable Points For these experiments we assume that the data consist of a collection of n random points with s signal features and m − s noise features. They are labeled as belonging to one of two classes using a randomly selected linear separator given as a function of only the signal features. The experiments were meant to test the ability of PLS-DA (used as a feature extractor) to correctly identify the signal features. The performance scores shown in Fig. 3 were averaged over 100 repeats. Note that the linear model used implements a rule R_1 in which C is a constant set to 0.5. Two sets of experiments were performed. In the first set, s was fixed at 10, but n and m were varied (see Fig. 3). In the second set, n was fixed at 200, but s and m were varied (see Additional file 1). PCA consistently outperformed PLS-DA in all these experiments with linear relationships. Also, when the number of samples was increased, the performance of PCA improved, because there is more data from which to learn the relationship. However, it did not help PLS-DA, because the model is not designed to capture this kind of relation. Note that PCA is successful only because the signal features are the only correlated ones. The loading vector is a reflection of what PCA or PLS-DA guessed as the linear relationship between the features. We therefore set out to verify how far off the linear relationship guessed by the tools was. Even if the tools picked many noise features, we wanted to see how they weighted the noise and signal features they picked. Toward this goal, we ran an extra set of experiments with the model shown above to see if the loading vector from PLS-DA indicated a better performance than what might be suggested by Fig. 3. Note that ideally the loading vector should have zeros for the noise features and ones for the signal features. We computed the cosine distance between the loading vector computed in the experiment and the vector reflected by the true relationship. As shown in Additional file 2, we see that the loading vectors of both PCA and PLS-DA failed to reflect the true relationship. These experiments were performed with n = 200, averaged over 100 repetitions.
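The cosine-distance check just described is straightforward to reproduce. The following is an illustrative Python sketch, not the authors' code: the ideal loading vector has ones on the signal features and zeros elsewhere, and a fitted loading vector is compared against it; the vector used here is only a placeholder for whatever loading PCA or PLS-DA actually returns.

```python
import numpy as np

def cosine_distance(u, v):
    """1 minus the cosine similarity between two loading vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

m, s = 100, 10
ideal = np.zeros(m)
ideal[:s] = 1.0                      # ones for signal features, zeros for noise

rng = np.random.default_rng(1)
fitted = rng.normal(size=m)          # placeholder for a loading vector from PCA or PLS-DA
fitted[:s] += 1.0                    # pretend the tool put some extra weight on the signal

print("cosine distance to the true relationship:", round(cosine_distance(ideal, fitted), 3))
```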
Even though PCA successfully selected many of the signal features during feature selection, it was unable to get sufficiently close to the underlying linear relationship, perhaps due to the compositional nature of the signal variables, which gives rise to correlations. Other experiments carried out with the same results include changing the magnitude of the constant in the inequality and changing the relationship from a linear inequality to two linear equalities, i.e., the points lie on two hyperplanes. Cluster model For these experiments, the signal features of the points were generated from a clustered distribution with two clusters separated by a prespecified amount. All noise features were generated from a uniform distribution. The R package clusterGeneration was used for this purpose, which also allows control over the separation of the clusters. Cluster separation was varied in the range [−0.9, 0.9]. Thus, when the points are viewed only with the noise features, they appear like a uniform cloud, and when viewed only with the signal features, the members of the two classes are clustered. Note that a cluster separation of −0.9 will appear as indistinguishable clusters, while a separation of 0.9 will appear as well-separated clusters. The experiments were executed with s = 10, n = 200, averaged over 100 repetitions. The executions with clustered data showed PLS-DA to be clearly superior to PCA. As shown in Fig. 4, while it is true that the difference narrows when the number of samples is made very large or the clusters are widely separated (i.e., cleanly separated data), it still remains significant. PLS-DA is able to select the correct hyperplane even with few samples and even when the separation between the clusters is low (values close to 0). PCA needs both an unreasonably large number of samples and very well separated clusters to perform respectably in comparison. However, data with high separation values are embarrassingly simple to analyze with a number of competing methods. Interval model In this set of experiments, the rules that determine class membership are of a kind often encountered in biological data sets. We used two different methods to generate data from this model. In the first one, we constrained the signal features and in the second we constrained the noise ones. To generate such data sets, members of one class had the constrained features selected uniformly at random from prespecified intervals, while all other features were generated from a uniform distribution in the range [0, 1]. We divided the range [0, 1] into subintervals of width 1/p. Experiments were carried out with p = 3, 5 and 10. Depending on the experiment, signal and noise features were assigned to either a subinterval of width 1/p or the entire interval [0, 1]. The results are shown in Additional file 3. When the signal features are constrained, PLS-DA consistently outperforms PCA. This is due to the strong correlation between the signal features for class members that PLS-DA is able to detect. On the other hand, when the noise features are constrained, PCA consistently outperforms PLS-DA. The latter performs poorly when the number of signal features is 1 and p = 3, because the distribution of values for the single signal is not very different from the distribution of the noise. Experiments as a classifier Our final set of experiments with synthetic data was to see how PLS-DA fared as a classifier. The following experiments were executed 100 times each, with 10 signal features.
For the cross-validation error calculation, 5 folds and 10 repetitions were used. In all of the experiments, there is a correspondence between high performance as a feature selector and a low CV error. As shown in Additional file 4a for the linear relationship model, its performance is no better than chance for a 2-class experiment. This corroborates the poor performance of PLS-DA as a feature selector for this model. For the results with the cluster model shown in Additional file 4b, the CV error is almost 0 in every case, except when the number of samples is low, which is again consistent with what we saw in the feature selection experiments. The performance gets noticeably worse when, in addition to a low number of samples, the number of noise features is large. This is because the signal is hidden among many irrelevant features, something that one has come to expect with all machine learning algorithms. Additional file 4c and d show the results for the interval model. As in the case of the feature selection experiments, both versions performed roughly the same, classifying much better than chance and having their best performance when the number of samples was large and the number of noise features was low, as expected. Comparisons with other methods To compare PLS-DA with other known feature selectors, we applied 3 more methods to the previous data models: Independent Component Analysis (ICA), a feature extraction method that transforms the input signals into independent sources [21]; Sparse Principal Component Analysis (SPCA) via regularized Singular Value Decomposition (SVD) [22], built by adding a sparsity constraint; and Regularized Linear Discriminant Analysis (RLDA), computed by using L_2 regularization to stabilize the eigendecomposition in LDA [23]. We found that the PCA-based algorithms (PCA and SPCA) have similar overall performance across the three experiments. The same happens with the LDA-based models (RLDA and sPLS-DA). As Additional files 5 and 6 show, PLS-DA, ICA and RLDA are not able to detect linear relationships, while SPCA and PCA are. For the interval model with p = 3, constraining either the signal or the noise does not seem to change the behavior of the LDA-based models, which are outperformed by PCA when the noise is constrained, as shown in Additional files 7 and 8. The performance of every method except for ICA goes down as s becomes small. The performance of ICA depends on the number of noise features for both the interval and linear models. In the cluster model experiment, as shown in Additional file 9, SPCA performs better than PCA as the separation between the clusters gets higher. The separation between the clusters does not affect the performance of ICA, which stays near 0. RLDA and PLS-DA excel, with similar behavior (Fig. 5). Novel analysis of a real dataset Bacterial Vaginosis (BV) is the most common form of vaginitis, affecting a large number of women across the world [24]. BV is associated with an imbalance of the vaginal flora and damage to the epithelial and mucus layer, compromising the body's intrinsic defense mechanisms. This can result in adverse sequelae and increase the risk of many STIs [25]. In a landmark paper, human vaginal microbial communities were classified into five community state types (CSTs) [26]. CSTs I, II, III, and V are dominated by different Lactobacillus species, whereas CST IV has no specific dominant species and is regarded as the heterogeneous group.
While this CST classification has enhanced our understanding of bacterial vaginosis [26][27][28], a quantitative method to reliably distinguish the CSTs was not available until the development of the specificity aggregation index [29] based on the species specificity [30]. The values of this index range from 0, indicating that the species is absent in that CST, to 1, indicating that the OTU is always detected and only detected in that CST. We used the abundance matrix from [26] (394 samples, 247 OTUs), and with a one-vs-all approach we devised a simple scheme to differentiate each CST from all of the others using the abundance of each taxon. The importance of each feature given by the specificity index computed in [29] was used as the ground truth. Only the top 10 OTUs for each CST were considered and their importance values were normalized. Results are summarized in Fig. 6. As PLS-DA and PCA return a ranked list of features, a varying threshold on the percentage of features selected is shown on the X axis of Fig. 6. The Y axis represents the sum of the specificity indices achieved by the best features at that cutoff. Note that by just selecting half of the features, a cumulative specificity of 0.9 is achieved by both methods. PLS-DA reaches specificity values over 0.8 with fewer than 5 features selected, which means that in all of the cases, PLS-DA's top features are indeed the right set of features. In contrast, PCA's specificity has a slower growth at the beginning (it selects the wrong features), but when half of them are selected both methods achieve the same specificity. Discussion Our work sheds light on the kind of relationships and data models with which PLS-DA can be effective both as a feature selector as well as a classifier. In particular, we claim that when classes are determined by linear or non-linear relationships, PLS-DA provides almost no insight into the data. But it is effective when the classes have a clustered distribution on the signal features, even when these features are hidden among a large number of noise attributes. PLS-DA retains a strong performance. In all of the experiments carried out, there was a correspondence between the performance of the tools as feature selectors and the CV error. This reinforces the argument that the CV error is an excellent way to differentiate a good model from a bad one, and every paper using PLS-DA must report it to have any validity. Moreover, just-by-chance good behaviors are commonplace when using this tool, because the sparsity of the data grows increasingly fast with the number of dimensions and it becomes easier for PLS-DA to find a perfectly separating hyperplane. Also, even though PCA ignores the information regarding the class labels of the samples, it can be remarkably effective as a feature selector for classification problems. In some cases, it outperforms PLS-DA, which is made aware of the class labels in its input. Conclusions The obvious conclusion from our experiments is that it is a terrible idea to use PLS-DA blindly with all data sets. In spite of its attractive ability to identify features that can separate the classes, it is clear that any data set with a sufficiently large number of features is separable and that most of the separating hyperplanes are just "noise". Thus, using it indiscriminately would turn it into a "golden hammer", i.e., an oft-used, but inappropriate tool. Fortunately, the use of CV would readily point to when it is being used ineffectively.
Our work sheds light on the kinds of relationships and data models with which PLS-DA can be effective and should be used, both as a feature selector and as a classifier, in the case that the underlying model of the data is known or can be guessed. When that is not possible, one should rely on the CV error and use extreme care when drawing conclusions and extrapolations. Also, one should take advantage of the multitude of tools available and use different methods depending on the dataset, as even simple PCA was able to outperform PLS-DA under some conditions.
5,808.6
2017-10-21T00:00:00.000
[ "Computer Science", "Mathematics" ]
Spatial distribution of the sibling species of Anopheles gambiae sensu lato (Diptera: Culicidae) and malaria prevalence in Bayelsa State, Nigeria Background Much of the confusing ecophenotypic plasticity of Anopheles gambiae sensu lato is attributable to the differential biological traits of the sibling species, with their heterogeneous geographical distribution, behavioral dissimilarities and divergent population dynamics. These differences are critical to their roles in malaria transmission. Studies were, therefore, undertaken on the spatial distribution of these species and malaria prevalence rates in Bayelsa State, September 2008–August 2010. Methods Mosquito sampling was in 7 towns/villages in 7 Local Government Areas (LGAs) in 3 eco-vegetational zones: Fresh Water Swamp Forest (FWSF): Sagbama, Yenagoa, Kolokuma-Opokuma LGAs; Brackish Water Swamp Forest (BWSF): Ogbia, Ekeremor, Southern Ijaw LGAs; Mangrove Water Forest (MWF): Nembe LGA. Adults were collected twice quarterly by the Pyrethrum Spray Catch (PSC) technique. Anopheles was separated morphologically and the sibling species PCR-identified. Simultaneously, malaria prevalence rates were calculated from data obtained by the examination of blood smears from consenting individuals at hospitals/clinics. Results An. gambiae s.s. was dominant across the 3 eco-vegetational zones. Spatial distribution analyses by cell count and nearest neighbor techniques indicated a tendency to clustering of species. An. gambiae s.s. and An. arabiensis clustered in Ekeremor LGA, while these 2 species and An. melas aggregated in Nembe. The gonotrophic (physiological) status examination revealed that 34.3, 23.5, 23.1 and 18.4% of the population were fed, unfed, gravid and half gravid, respectively. The highest malaria prevalence rates were obtained at Kolokuma-Opokuma and Nembe LGAs. Variation in prevalence rates among LGAs was significant (t = 5.976, df = 6, p-value = 0.002, p < 0.05). The highest prevalence rate was in the age group 30-39 yrs, while the lowest prevalence was in the 0-9 yrs group. Conclusion High malaria prevalence rates were associated with An. gambiae s.s. either in allopatry or sympatry across eco-vegetational zones. In areas where the sibling species clustered, they probably formed nidi for transmission. Socio-economic conditions might have contributed to reduced prevalence in Yenagoa, the State capital. Background Much of the confusing eco-phenotypic plasticity of An. gambiae s.l. is attributable to the differential biological traits of the sibling species, their heterogeneous geographical distributions, behavioral dissimilarities and divergent population dynamics [1]. These differences are critical to the transmission of malaria in different zones of Nigeria [2][3][4][5]. Human malaria is caused by Plasmodium parasites and transmitted by female Anopheles mosquitoes. In Africa, the most efficient vectors are the Anopheles gambiae complex and the Anopheles funestus group. An. gambiae is a complex of seven sibling species varying in their vectorial ability and ecological niche [6,7]. The sibling species are: the freshwater An. gambiae s.s., An. arabiensis, An. quadriannulatus A and An. quadriannulatus B; the salt water-breeding An. melas and An. merus; and An. bwambae, found in hot springs in Uganda. The differences in the biology of the sibling species of An. gambiae s.l. have highlighted the need for mapping their spatial distribution and malaria prevalence patterns in order to enhance effective implementation of integrated control approaches [8].
Maps have been produced at continental and subregional scales [9]. The present study is aimed at developing a GIS-based overlay of the spatial patterns of PCR-identified sibling species of the An. gambiae complex and Plasmodium falciparum malaria in Bayelsa State, Nigeria. Study area The study was conducted in 7 Local Government Areas (LGAs), Bayelsa State, Nigeria. Bayelsa State is located (5°22′–6°45′E, 4°15′–5°23′N) in the lower Delta plain formed during the Holocene of the Quaternary period by the accumulation of sedimentary deposits [10]. The vegetation comprises three eco-vegetational zones: fresh water swamp forest, brackish water swamp forest and mangrove coastal water forest. The topography of the study area is characterized by a maze of creeks and swamps criss-crossing the low-lying plain. The study LGAs were Yenagoa (4°53′N and 5°17′E), Sagbama (5°09′N and 6°14′E) and Kolokuma-Opokuma (5°09′N and 6°14′E) in the fresh water swamp forest; Ogbia (4°53′N, 6°22′E), Southern Ijaw (4°07′N, 6°08′E) and Ekeremor (5°02′N and 5°48′E) in the brackish water swamp forest; and Nembe (4°27′N and 6°26′E) in the mangrove coastal water forest. All LGAs were rural, with the exception of semi-urban Yenagoa, the State capital. Many houses had a traditional architectural design with mud walls and thatched roofs, while a few were built with blocks and roofed with corrugated iron sheets. The major occupations of the people were fishing, farming and petty trading. Ethical consideration Before the commencement of the study, consent was obtained from the Ministry of Health, Bayelsa State, through the Primary Health Care (PHC) department, the village heads and household heads. Sample size Sampling of the study population involved successive selection of new participants who presented at the Out-Patient Department of the 7 selected General Hospitals/Clinics (Okolobiri, Olobiri, Sagbama, Amasoma, Ekeremor, and Kaiama) in each LGA until a sample size as described in Daniel [11] was obtained. These were individuals of all ages who had lived for at least 6 months and planned to stay for a further 6 months in the study areas. A total of 6321 individuals presented at the hospitals, September 2008–August 2010. Blood sample collection EDTA bottles were labeled following entry into the routine register with data on sex, occupation, and location of participants. A 2 ml volume of venous blood was collected from each individual and transferred to a labeled EDTA bottle. Grease-free slides were labeled using patients' details from the EDTA bottles. An aliquot of the blood was measured with a 1 ml micropipette and dropped on a labeled slide to prepare thick and thin blood films following WHO standard procedures [12]. Preparations were air-dried and fixed with methanol for 30 seconds, then stained with 4% Giemsa in phosphate buffer (pH 7.2) for 30 minutes. Microscopy was used to examine the smears for the presence of malaria parasites under a ×1000 oil-immersion objective (Olympus, Japan). Preliminary examination was carried out at the hospital where the blood was collected. The presence of malaria parasites in sexual and asexual stages was considered a positive diagnosis. The second and third examinations were at the Parasitology Research Laboratory, Department of Animal and Environmental Biology, University of Port Harcourt, for quality assurance. Slides were reported as negative for malaria parasites only after at least 50 fields had been examined and no parasites were detected. Prevalence rates were calculated.
Mosquito collection Collection of mosquitoes was undertaken in 7 villages/towns in the 7 LGAs. Their co-ordinates were obtained by global positioning system (GPS). The villages were randomly selected based on accessibility and availability of supporting staff. Selection of houses was based on their similarity in architectural design. Six houses were used in each town/village; these houses were utilized throughout the study. There were 1-2 rooms in each house. Adult mosquitoes were collected by the Pyrethrum Spray Catch (PSC) method [13], 0600-0730 hrs, twice in each quarter, September 2008–August 2010. Selected rooms had at least one person sleeping overnight. Prior to spraying, the floors were covered with clean white sheets, outlets were closed and a pyrethroid was sprayed; the sheets were removed 15 min post-spray. Knocked-down mosquitoes were picked up with pointed-tip forceps and placed in labeled plastic cups. The gonotrophic (physiological) stages were determined as per WHO [14] and Noutcha and Anumudu [4]. Based on abdominal condition, they were grouped as: unfed, fed, half gravid and gravid. Unfed females had a dark and flattened abdomen; fed females had a dark red abdomen with blood occupying most of the abdomen; in half gravid females, blood occupied only 3-4 segments of the ventral surface and 6-7 segments of the dorsal surface of the abdomen; in gravid females, most blood was digested and the abdomen was whitish and distended. Subsequently, the mosquitoes were taken to the laboratory for morphological identification using the keys of Gillies and de Meillon [15]. An. gambiae s.l. adults were preserved dried in Eppendorf tubes containing silica gel desiccant for molecular characterization. PCR identification of the members of the An. gambiae complex The extraction method has been extensively discussed elsewhere [16]. Map processes A scanned administrative map (1:500,000) of the State was geo-referenced and digitized using ArcView GIS software (version 3.29, ESRI, CA, USA) [17,18]. Separate layers were created for the Plasmodium falciparum malaria prevalence rates and the PCR-identified An. gambiae complex from each site. Spatial maps were displayed and classified using a specific identifier of the ArcView Spatial Extension [13]. Spatial analyses The cell count and K-nearest neighbor analyses in Dave and Uriel [19] were adapted to describe the spatial distribution patterns. The Mean Variance Ratio (MVA) and nearest-neighbor (Rn) values were calculated. When MVA or Rn is <1, the spatial pattern is described as clustered (aggregated); when it is equal to 1, the spatial pattern is random; and when it is >1, the spatial pattern is even (uniform). Spatial distribution of the An. gambiae complex and P. falciparum The spatial patterns of the An. gambiae sibling species across the study locations showed a tendency to clustering or aggregation (MVA = 0.57, Rn = 0.57) (Table 1). Discussion The clustered spatial patterns among the An. gambiae sibling species were similar to results obtained by Sogoba et al. [20] and probably reflected variation in the favorability of the environment [21]. Sympatric occurrence of the An. gambiae complex has been documented [5]. These clusters may serve as nidi of transmission; they may also serve as refugia, where pathogens, vectors and hosts persist during unfavorable periods [22]. The Anopheles gambiae s.l. population was virile, with approximately 35% fed and about 40% half gravid or gravid. Breman [23] provided a list of intrinsic and extrinsic determinants of the malaria burden.
The intrinsic factors include host genetic susceptibility and host immunological status. The extrinsic factors are parasite species, mosquito species and environmental conditions. Environmental conditions comprise climatic conditions and the availability of breeding sites. The socio-economic component consists of the educational, social, behavioural, political and economic status of host populations. Parasite and host populations were apparently not responsible for the variation in malaria prevalence across the State. There were no differences in species of parasites. It was unlikely that the genetic susceptibility and immunological status of human hosts varied significantly across the eco-vegetational zones and the semi-urban/rural divide in the State. Warm temperature, high rainfall and humidity were pervasive across the State; breeding sites were also available, because adults were collected throughout the year. The low malaria prevalence rates in Yenagoa LGA, the State capital and the only semi-urban location in the study area, were probably due to the higher living standard (better housing, knowledge of the disease, community participation in malaria prevention and control) [24,25]. One factor that might have contributed to the high prevalence rates in Nembe LGA is the clustering of the 3 sympatric An. gambiae sibling species, which probably formed nidi for transmission [22]. The high prevalence rates in Kolokuma-Opokuma, where An. gambiae s.s. was abundant and allopatric, could be attributed to the high density and vectorial competence of the efficient An. gambiae s.s. as a malaria vector [1,6,16,20,26]. This efficiency may also explain the relatively high prevalence rates in Sagbama and in the area of Southern Ijaw LGA contiguous with Sagbama, where An. gambiae s.s. was allopatric. Although the literature indicates that annual deaths from malaria are mainly in infants and young children [27,28], these results show the highest prevalence rate in the 30-39 yrs age group. It is apparent that the source and composition of sample populations have a significant impact on the pattern of malaria prevalence rates across age groups. Conclusion High malaria prevalence rates were associated with An. gambiae s.s. either in allopatry or sympatry across eco-vegetational zones. In areas where the sibling species clustered, they probably formed nidi for transmission. Socio-economic conditions might have contributed to reduced transmission in Yenagoa, the State capital.
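The clustering statistics referred to above (the nearest-neighbour ratio Rn and the cell-count MVA) can be computed from site coordinates alone. The sketch below is illustrative Python, not the authors' workflow: it implements the Clark–Evans nearest-neighbour index for hypothetical site coordinates and follows the study's own convention that values below 1 indicate clustering.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbour_index(points, area):
    """Clark-Evans nearest-neighbour index Rn: the observed mean distance to the
    nearest neighbour divided by the distance expected under complete spatial
    randomness, 0.5 / sqrt(density)."""
    pts = np.asarray(points, dtype=float)
    distances, _ = cKDTree(pts).query(pts, k=2)   # k=2: nearest point other than itself
    observed = distances[:, 1].mean()
    expected = 0.5 / np.sqrt(len(pts) / area)
    return observed / expected

# Hypothetical collection-site coordinates (km) inside a 100 km x 100 km window.
sites = [(10, 12), (11, 13), (12, 11), (70, 75), (71, 74), (72, 76), (40, 90)]
rn = nearest_neighbour_index(sites, area=100 * 100)

# Following the convention used in the study: < 1 clustered, = 1 random, > 1 even.
print("Rn =", round(rn, 2))
```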
2,711.8
2014-01-17T00:00:00.000
[ "Biology", "Environmental Science", "Medicine" ]
Evaluating the Evaluation Metrics for Style Transfer: A Case Study in Multilingual Formality Transfer While the field of style transfer (ST) has been growing rapidly, it has been hampered by a lack of standardized practices for automatic evaluation. In this paper, we evaluate leading automatic metrics on the oft-researched task of formality style transfer. Unlike previous evaluations, which focus solely on English, we expand our focus to Brazilian-Portuguese, French, and Italian, making this work the first multilingual evaluation of metrics in ST. We outline best practices for automatic evaluation in (formality) style transfer and identify several models that correlate well with human judgments and are robust across languages. We hope that this work will help accelerate development in ST, where human evaluation is often challenging to collect. Introduction Textual style transfer (ST) is defined as a generation task where a text sequence is paraphrased while controlling one aspect of its style (Jin et al., 2020). For instance, the informal sentence in Italian "in bocca al lupo!" (i.e., "good luck") is rewritten to the formal version "Ti rivolgo un sincero augurio!" (i.e., "I send you a sincere wish!"). Despite the growing attention on ST in the NLP literature (Jin et al., 2020), progress is hampered by a lack of standardized and reliable automatic evaluation metrics. Standardizing the latter would allow for quicker development of new methods and comparison to prior art without relying on time and cost-intensive human evaluation that is currently employed by more than 70% of ST papers (Briakou et al., 2021a). ST is usually evaluated across three dimensions: style transfer (i.e., has the style of the generated output changed as intended?), meaning preservation (i.e., are the semantics of the input preserved?), and fluency (i.e., is the output well-formed?). As we will see, a wide range of automatic evaluation metrics and models has been used to quantify each of these dimensions. For example, prior work has employed as many as nine different automatic systems to rate formality alone (see Table 1). However, it is not clear how different automatic metrics compare to each other and how well they agree with human judgments. Furthermore, previous studies of automatic evaluation have exclusively focused on the English language (Yamshchikov et al., 2021;Pang, 2019;Pang and Gimpel, 2019;Tikhonov et al., 2019;Mir et al., 2019); yet, ST requires evaluation methods that generalize reliably beyond English. We address these limitations by conducting a controlled empirical comparison of commonly used automatic evaluation metrics. Concretely, for all three evaluation dimensions, we compile a list of different automatic evaluation approaches used in prior ST work and study how well they correlate with human judgments. We choose to build on available resources as collecting human judgments across the evaluation dimensions is a costly process that requires recruiting fluent speakers in each language addressed in evaluation. While there are many stylistic transformations in ST, we conduct our study through the lens of formality style transfer (FoST), which is one of the most popular style dimensions considered by past ST work (Jin et al., 2020;Briakou et al., 2021a) and for which reference outputs and human judgments are available for four languages: English, Brazilian-Portuguese, French, and Italian. 
• We contribute a meta-evaluation study that is not only the first large-scale comparison of automatic metrics for ST but is also the first work to investigate the robustness of these metrics in multilingual settings. • We show that automatic evaluation approaches based on a formality regression model fine-tuned on XLM-R and the chrF metric correlate well with human judgments for style transfer and meaning preservation, respectively, and propose that the field adopts their usage. These metrics are shown to work well across languages, and not just in English. • We show that framing style transfer evaluation as a binary classification task is problematic and propose that the field treats it as a regression task to better mirror human evaluation. • Our analysis code and meta-evaluation files with system outputs are made public to facilitate further work in developing better automatic metrics for ST: https://github.com/Elbria/xformal-FoST-meta. Limitations of Automatic Evaluation Recent work highlights the need for research to improve evaluation practices for ST along multiple directions. Not only does ST lack standardized evaluation practices (Yamshchikov et al., 2021), but commonly used methods have major drawbacks which hamper progress in this field. Pang (2019) and Pang and Gimpel (2019) show that the most widely adopted automatic metric, BLEU, can be gamed. They observe that untransferred text achieves the highest BLEU score for the task of sentiment transfer, questioning complex models' ability to surpass this trivial baseline. Mir et al. (2019) discuss the inherent trade-off between ST evaluation aspects and propose that models are evaluated at specific points of their trade-off plots. Tikhonov et al. (2019) argue that, despite their cost, human-written references are needed for future experiments with style transfer. They also show that comparing models without reporting error margins can lead to incorrect conclusions, as state-of-the-art models sometimes end up within error margins from one another. Structured Review of ST Evaluation We systematically review automatic evaluation practices in ST with formality as a case study. We select FoST for this work since it is one of the most frequently studied styles (Jin et al., 2020) and there are human-annotated data, including human references, available for these evaluations (Rao and Tetreault, 2018; Briakou et al., 2021b). Tables 1 and 2 summarize evaluation details for all FoST methods in papers from the ST survey by Jin et al. (2020). 1 Most works employ automatic evaluation for style (87%) and meaning preservation (83%). Fluency is the least frequently evaluated dimension (43%), while 74% of papers employ automatic metrics to assess the overall quality of system outputs, capturing all desirable aspects. Across dimensions, papers also frequently rely on human evaluation (55%, 58%, 60%, and 40% for style, meaning, fluency, and overall). However, human judgments and automatic metrics do not always agree on the best-performing system. In 60% of evaluations, the top-ranked system is the same according to human and automatic evaluation, and their ranking disagrees in 40% of evaluations (both cases are marked in Table 1). When there is a disagreement, human evaluation is trusted more and viewed as the standard. This highlights the need for a systematic evaluation of automatic evaluation metrics. Finally, almost all papers (91%) consider FoST for English (EN), as summarized in Table 2. There are only two exceptions: Korotkova et al.
(2019) study FoST for Latvian (LV) and Estonian (ET) in addition to EN, while Briakou et al. (2021b) study FoST for 3 Romance languages: Brazilian Portuguese (BR-PT), French (FR), and Italian (IT). The former provides system output samples as a means of evaluation, and the latter employs human evaluations, highlighting the challenges of automatic evaluation in multilingual settings. Next, we review the automatic metrics used for each dimension of evaluation in FoST papers. As we will see, a wide range of approaches is used. Yet, it remains unclear how they compare to each other, what their respective strengths and weaknesses are, and how they might generalize to languages other than English. Automatic Metrics for FoST Formality Style transfer is often evaluated using model-based approaches. The most frequent method consists of training a binary classifier on human-written formal vs. informal pairs. The classifier is later used to predict the percentage of generated outputs that match the desired attribute per evaluated system; the system with the highest percentage is considered the best performing with respect to style. Across methods, the corpus used to train the classifier is the GYAFC parallel corpus (Rao and Tetreault, 2018), consisting of 105K parallel informal-formal human-generated excerpts. This corpus is curated for FoST in EN. Meaning Preservation Evaluation of this dimension is performed using a wider spectrum of approaches, as presented in the third column of Table 1. The most frequently used metric is reference-BLEU (r-BLEU), which is based on the n-gram precision of the system output compared to human rewrites of the desired formality. Other approaches include self-BLEU (s-BLEU), where the system output is compared to its input, measuring the semantic similarity between the system input and its output, or regression models (e.g., CNN, BERT) trained on data annotated for similarity-based tasks, such as the Semantic Textual Similarity task (STS) (Agirre et al., 2016). Fluency Fluency is typically evaluated with model-based approaches (see fourth column of Table 1). Among those, the most frequent method is that of computing perplexity (PPL) under a language model. The latter is either trained from scratch on the same corpus used to train the FoST models (i.e., GYAFC) using different underlying architectures (e.g., KenLM, LSTM), or employs large pre-trained language models (e.g., GPT). A few other works train models on EN data annotated for grammaticality (Heilman et al., 2014) or linguistic acceptability (Warstadt et al., 2019) instead. Overall Systems' overall quality (see fifth column of Table 2) is mostly evaluated using r-BLEU or by combining independently computed metrics into a single score (e.g., geometric mean GM(.), harmonic mean HM(.), F1(.)). Moreover, 6 out of 8 approaches that rely on combined scores do not include fluency scores in their overall evaluation. English Focus Since most of the current work on FoST and ST is in EN, prior work relies heavily on EN resources for designing automatic evaluation methods. For instance, resources for training stylistic classifiers or regression models are not available for other languages. For the same reason, it is unclear whether model-based approaches for measuring meaning preservation and fluency can be ported to multilingual settings. Furthermore, reference-based evaluations (e.g., r-BLEU) require human rewrites that are only available for EN, BR-PT, IT, and FR.
Finally, even though perplexity does not rely on annotated data, without standardizing the data language models are trained on, we cannot make meaningful cross-system comparisons. Summary Reviewing the literature shows the lack of standardized metrics for ST evaluation, which hampers comparisons across papers, the lack of agreement between human judgments and automatic metrics, which hampers system development, and the lack of portability to languages other than English, which severely limits the impact of the work. These issues motivate the controlled multilingual evaluation of evaluation metrics in our paper. Evaluating Evaluation Metrics We evaluate evaluation metrics (described in §3.2) for multilingual FoST, in four languages for which human evaluation judgments (described in §3.1) on FoST system outputs are available. Human Judgments We use human judgments collected in prior work by Rao and Tetreault (2018) for EN and Briakou et al. (2021b) for BR-PT, FR, and IT. We include details on their annotation frameworks, the quality of human judges, and the evaluated systems below. Human Annotations We briefly describe the annotation frameworks employed by Rao and Tetreault (2018) and Briakou et al. (2021b) to collect human judgments for each evaluation aspect: 1. formality ratings are collected, for each system output, on a 7-point discrete scale, ranging from −3 to +3, as per Lahiri (2015) (Very informal, Informal, Somewhat Informal, Neutral, Somewhat Formal, Formal, Very Formal); 2. meaning preservation judgments adopt the Semantic Textual Similarity annotation scheme of Agirre et al. (2016), where an informal input and its corresponding formal system output are rated on a scale from 1 to 6 based on their similarity (Completely dissimilar, Not equivalent but on same topic, Not equivalent but share some details, Roughly equivalent, Mostly equivalent, Completely equivalent); 3. fluency judgments are collected for each system output on a discrete scale of 1 to 5, as per Heilman et al. (2014) (Other, Incomprehensible, Somewhat Comprehensible, Comprehensible, Perfect); 4. overall judgments are collected following a relative ranking approach: all system outputs are ranked in order of their formality, taking into account both meaning preservation and fluency. Human Annotators Both studies recruited workers from the Amazon Mechanical Turk platform after employing quality control methods to exclude poor-quality workers (i.e., manual checks for EN, and qualification tests for BR-PT, FR, and IT). For all human evaluations and languages, Briakou et al. (2021b) report at least moderate inter-annotator agreement. Evaluated Systems The evaluated system outputs were sampled from 5 FoST models for each language, spanning a range from simple baselines to neural architectures (Rao and Tetreault, 2018; Briakou et al., 2021b). We also include detailed descriptions of them in Appendix C. For each evaluation dimension, 500 outputs are evaluated for EN and 100 outputs per system for BR-PT, FR, and IT. Evaluation Metrics For the FoST evaluation aspects described below, we cover a broad spectrum of approaches that range from dedicated models for the tasks at hand to more lightweight methods relying on unsupervised approaches and automated metrics. Formality We benchmark model-based approaches that fine-tune multilingual pre-trained language models (i.e., XLM-R, mBERT), where the task of formality detection is modeled either as a binary classification task (i.e., formal vs.
informal), or as a regression task that predicts different formality levels on an ordinal scale.

Meaning Preservation We evaluate the BLEU score (Papineni et al., 2002) of the system output compared to the reference rewrite (r-BLEU), since it is the dominant metric in prior work. Prior reviews of meaning preservation metrics for paraphrase and sentiment ST tasks in EN (Yamshchikov et al., 2021) cover n-gram metrics and embedding-based approaches. We consider three additional metric classes to compare system outputs with inputs, as human annotators do:
1. n-gram based metrics include: s-BLEU (self-BLEU, which compares system outputs with their inputs as opposed to references, i.e., r-BLEU), METEOR (Banerjee and Lavie, 2005), based on the harmonic mean of unigram precision and recall while accounting for synonym matches, and chrF (Popović, 2015), based on the character n-gram F-score;
2. embedding-based methods fall under the category of unsupervised evaluation approaches that rely on either contextual word representations extracted from pre-trained language models or non-contextual pre-trained word embeddings (e.g., word2vec (Mikolov et al., 2013); GloVe (Pennington et al., 2014)). For the former, we use BERT-score (Zhang et al., 2020a), which computes the similarity between each output token and each reference token based on BERT contextual embeddings. For the latter, we experiment with two similarity metrics: the first is the cosine distance between the sentence-level feature representations of the compared texts, extracted by averaging their word embeddings; the second is the Word Mover's Distance (WMD) metric of Kusner et al. (2015), which measures the dissimilarity between two texts as the minimum amount of distance that the embedded words of one text need to "travel" to reach the word embeddings of the other;
3. semantic textual similarity (STS) models constitute supervised methods that we model by fine-tuning multilingual pre-trained language models (i.e., XLM-R, mBERT) to predict a semantic similarity score for a pair of texts on an ordinal scale.

Fluency We experiment with perplexity (PPL) and likelihood (LL) scores based on the probabilities assigned by language models trained from scratch (e.g., KenLM (Heafield, 2011)), as well as pseudo-likelihood scores (PSEUDO-LL) extracted from pre-trained masked language models similarly to Salazar et al. (2020), by masking sentence tokens one by one.

Experiment Settings

Supervised Metrics For all supervised model-based approaches, we experiment with fine-tuning two multilingual pre-trained language models:
1. multilingual BERT, dubbed mBERT (Devlin et al., 2019), a transformer-based model pre-trained with a masked language model objective on the concatenation of monolingual Wikipedia corpora from the 104 languages with the largest Wikipedias;
2. XLM-R (Conneau et al., 2020), a transformer-based masked language model trained on 100 languages using monolingual CommonCrawl data.
All models are based on the Hugging Face Transformers (Wolf et al., 2020) library. We fine-tune with the Adam optimizer (Kingma and Ba, 2015), a batch size of 32, and a learning rate of 5e−5 for 3 and 5 epochs for classification and regression tasks, respectively. We perform a grid search on held-out validation sets over the learning rate with values 2e−3, 2e−4, 2e−5, and 5e−5, and over the number of epochs with values 3, 5, and 8.
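As a concrete reference for the unsupervised metrics and the fluency scoring described in this section, the sketch below computes s-BLEU and chrF with sacrebleu and a pseudo-log-likelihood score from a pre-trained masked language model in the spirit of Salazar et al. (2020), masking one token at a time. It is a minimal sketch, not the exact setup of this paper: the checkpoint name and the example sentences are placeholder assumptions.

import torch
import sacrebleu
from transformers import AutoTokenizer, AutoModelForMaskedLM

def s_bleu(output: str, source: str) -> float:
    # self-BLEU: compare the system output against its own input
    return sacrebleu.sentence_bleu(output, [source]).score

def chrf(output: str, source: str) -> float:
    # character n-gram F-score between output and input
    return sacrebleu.sentence_chrf(output, [source]).score

def pseudo_log_likelihood(sentence: str, model, tokenizer) -> float:
    # Mask each token in turn and sum the log-probability of the original
    # token under the masked LM; higher values indicate more fluent text.
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"][0]
    total = 0.0
    for i in range(1, input_ids.size(0) - 1):  # skip the special tokens at both ends
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
    return total

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")    # placeholder checkpoint
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base").eval()
src = "idk, maybe ask her later???"                              # hypothetical informal input
out = "I do not know; perhaps you could ask her later."          # hypothetical system output
print("s-BLEU:", s_bleu(out, src))
print("chrF:", chrf(out, src))
print("pseudo-LL:", pseudo_log_likelihood(out, model, tokenizer))

In practice, the pseudo-log-likelihood is often length-normalized before comparing sentences of different lengths.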
Cross-lingual Transfer For supervised model-based methods that rely on the availability of human-annotated instances to train dedicated models for specific tasks, we experiment with three standard cross-lingual transfer approaches (e.g., Hu et al. (2020)): ZERO-SHOT, TRANSLATE-TRAIN, and TRANSLATE-TEST.

Training Data Table 3 presents statistics on the training data used for supervised and unsupervised models across the 3 ST evaluation aspects. For datasets that are only available for EN, we use the already available machine-translated resources for the STS and formality datasets (Briakou et al., 2021b). The former employs the DeepL service (no information on translation quality is available), while the latter uses the AWS translation service (with reported BLEU scores of 37.16 (BR-PT), 33.79 (FR), and 32.67 (IT)). The KenLM models for all the languages are trained on 1M randomly sampled sentences from the OpenSubtitles dataset (Lison and Tiedemann, 2016).

Experimental Results

We analyze how the scores of the automatic metrics compare to human judgments for formality style transfer (§5.1), meaning preservation (§5.2), and fluency (§5.3) via segment-level analysis, and then turn to system-level rankings to evaluate overall task success (§5.4).

Formality Transfer Metrics The field is divided on the best way to evaluate the style dimension, formality in our case. Practitioners use either a binary approach (is the new sentence formal or informal?) or a regression approach (how formal is the new sentence?). We discuss the first approach and its limitations in §5.1.1, before moving to regression in §5.1.2.

Evaluating Binary Classifiers As discussed in §2, the vast majority of FoST works evaluate style transfer based on the accuracy of a binary classifier trained to predict whether human-written segments are formal or informal. Yet, as Table 1 indicates, this approach fails to identify the best system in this dimension 59% of the time. To better understand this issue, we evaluate these classifiers on human-written texts versus ST system outputs.

Table 4: F1 scores of binary formality classifiers under different cross-lingual transfer settings. Numbers in parentheses indicate performance drops over ZERO-SHOT. ZERO-SHOT yields the highest scores across languages and pre-trained language models. XLM-R yields improvements over mBERT across most settings (δ(XLM-R, mBERT)).

Table 5: Spearman's ρ correlation (%) of formality regression models. Numbers in parentheses indicate performance drops over ZERO-SHOT. ZERO-SHOT yields the highest scores across languages and pre-trained language models. XLM-R yields improvements over mBERT across most settings (δ(XLM-R, mBERT)).

Human Written Texts Table 4 presents F1 scores when testing the binary formality classifiers on the task they are trained on: predicting whether human-written sentences from GYAFC and XFORMAL are formal or informal. First, the last column (i.e., δ(XLM-R, mBERT)) shows that XLM-R is a better model than mBERT for this task across languages, with the largest improvements in the ZERO-SHOT setting, where XLM-R beats mBERT by +3, +2, and +1 for BR-PT, FR, and IT, respectively. Second, ZERO-SHOT is surprisingly the best strategy to port EN models to other languages. TRANSLATE-TRAIN and TRANSLATE-TEST hurt F1 by 3 and 9 points on average compared to ZERO-SHOT, despite exploiting more resources in the form of machine translation systems and their training data.
However, transfer accuracy is likely affected by regular translation errors (as suggested by larger F1 drops for languages with lower MT BLEU scores) and by formality-specific errors. Machine translation has been found to produce outputs that are more formal than their inputs (Briakou et al., 2021b), which yields noisy training signals for TRANSLATE-TRAIN and alters the formality of test samples for TRANSLATE-TEST.

System Outputs We now evaluate the best performing binary classifier (i.e., XLM-R in the ZERO-SHOT setting) on real system outputs, a setup in line with automatic evaluation frameworks. Figure 1 presents a breakdown of the number of formal vs. informal predictions of the classifiers, binned by human-rated formality levels. Across languages, the performance of the classifier deteriorates as we move away from extreme formality ratings (i.e., very informal (−3) and very formal (+3)). This lack of sensitivity to different formality levels is problematic since system outputs across languages are concentrated around neutral formality values. In addition, when testing on BR-PT, FR, and IT (ZERO-SHOT settings), the classifier is more biased towards the formal class, which calls into question its ability to correctly evaluate less formal outputs in multilingual settings. Taken together, these results suggest that validating the classifiers against human rewrites rather than system outputs is unrealistic and potentially misleading.

Table 5 presents Spearman's ρ correlation of regression models' predictions with human judgments. Again, XLM-R with ZERO-SHOT transfer yields the highest correlation across languages. More specifically, the trends across different transfer approaches and different pre-trained language models are similar to the ones observed in the evaluation of binary classifiers: XLM-R outperforms mBERT for almost all settings, while ZERO-SHOT is the most successful transfer approach, followed by TRANSLATE-TRAIN, with TRANSLATE-TEST yielding the lowest correlations across languages. Interestingly, regression models highlight the differences between the generalization abilities of XLM-R and mBERT more clearly than the previous analysis on binary predictions: ZERO-SHOT transfer on XLM-R yields 8%, 8%, and 10% higher correlations than mBERT for BR-PT, FR, and IT, while both models yield similar correlations for EN.

Turning to meaning preservation metrics, the STS model based on XLM-R with ZERO-SHOT transfer is a close second to chrF, consistent with this model's top-ranking behavior as a formality transfer metric. However, chrF outperforms the remaining more complex and expensive metrics, including BERT-score and mBERT models. In contrast to Yamshchikov et al. (2021), embedding-based methods (i.e., cosine, WMD) show no advantage over n-gram metrics, perhaps due to differences in word embedding quality across languages. Finally, it should be noted that r-BLEU is the worst performing metric across languages, and its correlation with human scores is particularly poor for languages other than English. This is remarkable because it has been used in 75% of automatic evaluations of FoST meaning preservation (as seen in Table 1). We therefore recommend discontinuing its use.

Table 7 presents Spearman's ρ correlation of various fluency metrics with human judgments. Pseudo-likelihood (PSEUDO-LL) scores obtained from XLM-R correlate best with human fluency ratings across languages. Their correlations are strong across languages, while other methods only yield weak (i.e., KenLM, mBERT) to moderate (i.e., KenLM-PPL for IT) correlations.
We therefore recommend evaluating fluency using pseudo-likelihood scores derived from XLM-R to help standardize fluency evaluation across languages.

System-level Rankings Finally, we turn to predicting the overall ranking of systems by focusing on how many pairwise system comparisons each metric gets correct. For each language, there are 5 systems, which means there are 10 pairwise comparisons, for a total of 40 given the 4 languages. We analyze corpus-level r-BLEU, commonly used for this dimension, along with leading metrics from the other dimensions: XLM-R formality regression models, chrF, and XLM-R pseudo-likelihood. r-BLEU gets 30 out of 40 comparisons correct, while the other metrics get 25, 22, and 19, respectively. This indicates that r-BLEU correlates with human judgments better at the corpus level than at the sentence level, as in machine translation evaluation (Mathur et al., 2020). We caution that these results are not definitive but rather suggestive of the best performing metric, given that the ideal evaluation would be a larger number of systems with which to perform a rank correlation. The complete analysis for each language is in Appendix B.

Conclusions

Automatic (and human) evaluation processes are well-known problems for the field of Natural Language Generation (Howcroft et al., 2020; Clinciu et al., 2021), and the burgeoning subfield of ST is not immune. ST, in particular, has suffered from a lack of standardization of automatic metrics, a lack of agreement between human judgments and automatic metrics, as well as a blind spot in developing metrics for languages other than English. We address these issues by conducting the first controlled multilingual evaluation for leading ST metrics with a focus on formality, covering metrics for 3 evaluation dimensions and overall ranking for 4 languages. Given our findings, we recommend the formality style transfer community adopt the following best practices:
1. Formality XLM-R formality regression models in the ZERO-SHOT cross-lingual transfer setting yield the clearly best metric across all four languages, as they correlate very well with human judgments. However, the commonly used binary classifiers do not generalize across languages (due to misleadingly over-predicting formal labels). We propose that the field use regression models instead, since they are designed to capture a wide spectrum of formality levels.
2. Meaning Preservation We recommend using chrF as it exhibits strong correlations with human judgments for all four languages. We caution against using BLEU for this dimension, despite its overwhelming use in prior work, as both its reference and self variants do not correlate as strongly as other more recent metrics.
3. Fluency XLM-R is again the best metric (in particular for French). However, it does not correlate as well with human judgments as the metrics for the other two dimensions do.
4. System-level Ranking chrF and XLM-R are the best metrics under a pairwise comparison evaluation. However, an ideal evaluation would be to have a large number of systems with which to draw reliable correlations.
5. Cross-lingual Transfer Our results support using ZERO-SHOT transfer instead of machine translation to port metrics from English to other languages for formality transfer tasks.
We view this work as a strong point of departure for future investigations of ST evaluation. Our work first calls for exploring how these evaluation metrics generalize to other styles and languages.
Across the different ways of defining style evaluation (either automatic or human), prior work has mostly focused on the three main dimensions covered in our study. As a result, although our meta-evaluation of ST metrics focuses on formality as a case study, it can inform the evaluation of other style definitions (e.g., politeness, sentiment, gender, etc.). However, more empirical evidence is needed to test the applicability of the best performing metrics for evaluating style transfer beyond formality. Our work suggests that the top metrics based on XLM-R and chrF are robust across 4 Romance languages; yet, our conclusions and recommendations are currently limited to this set of languages. We hope that future work in multilingual style transfer will allow for testing their generalization to a broader spectrum of languages and style definitions. Furthermore, our study highlights that more research is needed on automatically ranking systems. For example, one could build a metric that combines metrics' outputs for the three dimensions, or one could develop a singular metric. In line with Briakou et al. (2021a), our study also calls for releasing more human evaluations and more system outputs to enable robust evaluation. Finally, there is still room for improvement in assessing how fluent a rewrite is. Our study provides a framework to address these questions systematically and calls for ST papers to standardize and release data to support larger-scale evaluations.

A Cross-metric Correlation Analysis

Correlations across meaning preservation metrics Figure 3 presents a cross-metric correlation-based analysis of the different approaches for measuring meaning preservation. We observe consistent trends across languages: methods that are similar in nature correlate well with each other. Concretely, across settings, n-gram based methods (i.e., BLEU, METEOR, and chrF) yield 0.8-0.9 correlation scores. The same holds when looking at correlations within the group of embedding-based methods (cosine and WMD) and within the group of STS approaches for EN, FR, and IT, while for BR-PT we observe that the correlation between XLM-R- and mBERT-based approaches is smaller (0.7 vs. 0.8 for the other languages). Finally, n-gram approaches correlate better with STS methods (with correlations in the range of 0.7-0.8) across languages, while the lowest correlations (0.5-0.6) are observed between the embedding-based methods (i.e., cosine, WMD) and each of the remaining metrics.

Correlations within and across formality and fluency metrics Figure 4 presents results of cross-metric correlations for the studied approaches that capture formality transfer and fluency. For formality, each of the translate-based approaches (i.e., TRANSLATE-TRAIN and TRANSLATE-TEST) yields high correlations (0.8-0.9) between models that fine-tune XLM-R vs. mBERT, while their correlations decrease (0.7) for IT and BR-PT in the zero-shot setting. Finally, pseudo-perplexity metrics extracted from XLM-R, the metric best correlated with human judgments for fluency, yield positive correlations with all formality metrics.

B System-level Analysis

Table 8 presents the number of correct system-level pairwise comparisons of automatic metrics based on human judgments. For STS, chrF, F.REG*, F.CLASS*, and PSEUDO-LKL*, system-level scores are extracted via averaging sentence-level scores. For s-BLEU and r-BLEU, the system scores are extracted at the corpus level. The total number of pairwise comparisons for each language is 10 (given access to 5 systems).
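As a concrete illustration of this counting procedure, the sketch below credits a metric for every system pair that it orders the same way as the human-based ranking; the five system scores are hypothetical placeholders, not values from Table 8.

from itertools import combinations

def correct_pairwise_comparisons(metric_scores, human_scores):
    # Count system pairs that the metric orders the same way as the human scores
    # (higher = better for both lists); ties are not credited.
    correct = 0
    for i, j in combinations(range(len(metric_scores)), 2):
        if (metric_scores[i] - metric_scores[j]) * (human_scores[i] - human_scores[j]) > 0:
            correct += 1
    return correct

# Five systems for one language -> 10 pairwise comparisons.
human_system_scores = [4.1, 3.6, 3.9, 2.8, 2.2]         # hypothetical human-based system scores
metric_system_scores = [72.0, 69.5, 70.1, 61.3, 63.0]   # hypothetical corpus-level metric scores
print(correct_pairwise_comparisons(metric_system_scores, human_system_scores), "out of 10 correct")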
Among the meaning preservation metrics (i.e., STS, s-BLEU, and chrF), chrF yields the highest number of correct comparisons (i.e., 37 out of 40 across all languages). The formality regression models (i.e., F.REG*) result in correct rankings more frequently than the formality classifiers (i.e., F.CLASS*), yielding 35 out of 40 correct comparisons. Reference-BLEU (i.e., r-BLEU) is compared with the overall ranking judgments. It ranks 8 out of 10 systems correctly for EN, FR, and BR-PT and only 6 for IT. Finally, perplexity (i.e., PPL) results in the fewest correct rankings at the system level (i.e., 22 out of 40), despite correlating well with human judgments at the segment level. Additionally, in Figure 2 we visualize the differences between relative rankings induced by human judgments and the best segment-level correlated metrics for each dimension, averaged per system.

Figure 2: Difference in relative ranking between human judgments and automatic metrics across systems (i.e., represented by different markers) for different evaluation dimensions. STS, s-BLEU, and chrF are compared with meaning rankings, r-BLEU (reference-BLEU) with overall, XLM-R classifiers (*F.CLASS) and regression (*F.REG) models with formality, and XLM-R pseudo-perplexity (*PPL) with fluency.

C Evaluated Systems Details

For each of BR-PT, IT, and FR, outputs are sampled from:
1. Rule-based systems consisting of handcrafted transformations (e.g., fixing casing, normalizing punctuation, expanding contractions, etc.);
2. Round-trip translation models that pivot to EN and backtranslate to the original language;
3. Bi-directional neural machine translation (MT) models that employ side constraints to perform style transfer for both directions of formality (i.e., informal↔formal), trained on (machine) translated informal-formal pairs of an English parallel corpus (i.e., GYAFC);
4. Bi-directional NMT models that augment the training data of 3. via backtranslation of informal sentences;
5. A multi-task variant of 3. that augments the training data with parallel sentences from bilingual resources (i.e., OpenSubtitles) and learns to translate jointly between and across languages.
For EN, the outputs were sampled from:
1. A rule-based system of transformations similar to the ones for BR-PT, FR, and IT;
2. A phrase-based machine translation model trained on informal-formal pairs of GYAFC;
3. An NMT model trained on GYAFC to perform style transfer uni-directionally;
4. A variant of 3. that incorporates a copy-enriched mechanism that enables direct copying of words from the input;
5. A variant of 4. trained on additional backtranslated data of target-style sentences using 2.
In general, neural models performed best for all languages according to overall human judgments, while the simpler baselines perform closer to the more advanced neural models for BR-PT, FR, and IT. For each evaluation dimension, 500 outputs are evaluated for EN and 100 outputs per system for BR-PT, FR, and IT.

D Meaning Preservation Metrics (reference-based)

Table 9 presents supplemental results on meaning preservation metrics for reference-based settings.

Table 9: Spearman's ρ correlation of meaning preservation metrics in the reference-based setting. Model-based metrics marked with * use XLM-R, while those marked with ∼ use mBERT as the base pre-trained language model. F.REG refers to formality regression models, PPL to perplexity, and LL to likelihood.
7,254.8
2021-10-20T00:00:00.000
[ "Linguistics", "Computer Science" ]
TLC Procedure for Determination of Approximate Contents of Caffeine in Food and Beverages

An inexpensive TLC method is proposed for quantification of caffeine in food and beverage commercial products. The extraction is carried out with dichloromethane and the residue is analyzed by thin layer chromatography. The chromatograms are sprayed with a reagent containing iodine for visualization and the area of the spots is determined by freely available software. A good correlation was observed between contents of caffeine and TLC spot areas. Quantification of caffeine was carried out for a medicine tablet, coffee and guarana powders, a kola soft drink and a yerba mate beverage. Values close to the expected contents or within the admitted ranges were obtained. The method may be inadequate if high precision is essential, but it might be useful if values approximating the real caffeine contents are satisfactory. Since the method requires no costly equipment, it seems to be feasible for chemistry teaching at several academic levels.

Introduction

Caffeine (1,3,7-trimethylxanthine) is a stimulant of the central nervous system, relatively abundant in coffee and cacao beans, kola nuts, guarana berries and leaves of tea and yerba mate [1]. Worldwide, caffeine is the most consumed alkaloid, mainly in beverages such as tea, coffee and soft drinks. Common medicines widely consumed possess caffeine as one of their active components [2]. The Food and Drug Administration (FDA) regards caffeine either as a drug or a functional food [3]. It is commonly assumed that caffeine reduces fatigue and enhances physical endurance, mental alertness and concentration. It is admitted that daily doses of caffeine below 250 mg are safe [4]. Excessive consumption may cause accelerated heart rate, nervousness, anxiety and insomnia, particularly in people still lacking tolerance to caffeine, such as children and teenagers [2]. Care is also necessary regarding pregnant women, since high caffeine intake may have negative consequences on baby delivery and induce low weight of newborns [4]. Caffeine consumption has increased over the last decades. Frary and collaborators [1] estimated the average daily consumption per capita in the USA at 193 mg in the period 1994-1998. However, the FDA estimation for daily caffeine consumption in 2012 was 300 mg for adults and 100 mg for teenagers [2]. Denmark, Finland and Brazil are the countries with the highest consumption by adults, with daily averages corresponding to 390, 329 and 300 mg, respectively [3]. It is expected that caffeine consumption will increase even further, due to the widening spread of the fitness culture, the assumption of caffeine's thermogenic properties and the increasing popularity of energy drinks.
Analysis of caffeine is a convenient theme in chemistry education. It is a drug familiar to people of all continents, who consume it every day and are curious about its real effects. Many analytical methods have been proposed for detection and evaluation of the quantity of caffeine in distinct materials, such as biological fluids (urine, blood), food and beverages. High precision methods, based chiefly on gas chromatography (GC), high performance liquid chromatography (HPLC) and capillary electrophoresis, have been proposed [5]. Some methods based on thin layer chromatography (TLC) require an HPTLC system and densitometry [6]. Other methods demand either the hyphenated methodology TLC-MS [7] or a fluorescence plate reader [8]. For most schools, such procedures are unrealistic and unfeasible due to the demand for expensive instrumentation, which in addition requires high-cost operation and maintenance.

The aim of the present work is the proposal of simple, fast and low cost procedures for extraction and determination of the caffeine content in commercial products, such as roasted and powdered coffee and guarana, as well as beverages. We have used the proposed procedures in laboratory classes for undergraduate and graduate students with satisfactory results. It is expected that the procedures may be feasible also for chemistry teaching at the technical and high school levels.

Pedagogical Objectives

The present proposal addresses the general concept of TLC to quantify caffeine in products of every-day consumption. It is a physical and chemical method for separation, detection and quantification of chemicals. In general, the technique is simple, fast and applicable to several research areas of chemistry and biology. It has been used widely in natural products chemistry, aiming at purification of extracts, isolation and identification of constituents of essential and seed oils, waxes, terpenes, alkaloids, steroids and saponins, among other classes of secondary metabolites. Comparison with commercial standards enables fast screening of substances from plants and commercial products [9].

The basic parameter used to characterize migration of substances by TLC is the R_f value [10], which is determined by the formula:

R_f = (distance moved by the substance) / (distance moved by the mobile phase)

R_f values vary from 0 to 1. For example, a spot that migrates 11.5 cm while the mobile phase front moves 18 cm has R_f = 11.5/18 ≈ 0.64.

Material Used for Analysis

The experiment was planned for analysis of three kinds of material: a medicine tablet, two kinds of powdered products and two beverages. All selected materials are easily available in commerce. The medicine tablet contains caffeine and acetaminophen (paracetamol); it is often used to alleviate headaches and reduce fever and influenza symptoms. The powdered products were guarana and roasted coffee, while the beverages were a canned energy drink (a kola beverage) and a canned yerba mate beverage.
Caffeine Extraction

Finely powdered tablets, guarana and coffee powders (0.1 g of each material) were transferred to Falcon tubes (15 mL capacity). A volume of 0.1 mL of 0.2 M NaOH solution was added to each tube, followed by 3 mL of dichloromethane. The tubes were gently shaken by inversion for 5 min, taking care to avoid formation of an emulsion. Anhydrous sodium sulfate (0.5 g) was added, the tubes capped, stirred and then left to stand. With the aid of a Pasteur pipette, the dichloromethane extract of each tube was filtered through Whatman filter paper number 1 (diameter 90 mm), previously soaked with dichloromethane, into a glass tube (diameter 1.5 cm, 10 cm high). The residue in each Falcon tube was washed twice with 2 mL of dichloromethane, and the supernatant was passed through the filter paper and combined with the previous extract. The solvent in each tube was evaporated to dryness on a water bath at 50 °C.

Volumes of 5 mL of each beverage were transferred to Falcon tubes (15 mL capacity). Sequentially, 0.1 mL of 0.2 M NaOH solution and 3 mL of dichloromethane were added. The mixture was gently stirred by inverting the tubes ten times, making sure that no emulsion formed. After letting the tubes stand until complete separation of the two phases, the lower phase (dichloromethane) was removed with a Pasteur pipette and filtered through Whatman filter paper number 1 (diameter 90 mm), containing 3 g of anhydrous sodium sulfate, previously soaked with dichloromethane. The water phase in the Falcon tubes was treated twice with 2 mL of dichloromethane, the tubes were left standing for complete separation of the phases, and the lower phase was filtered as described above. The combined extracts were pooled in glass tubes (diameter 1.5 cm, 10 cm high). The solvent in the tubes was evaporated to dryness on a water bath at 50 °C.

All analyses of solid and liquid products were carried out in triplicate. The results are expressed as means ± standard error (SE).

Thin-layer Chromatography (TLC)

The dry extracts obtained were dissolved in exactly 2 mL of dichloromethane and transferred to Eppendorf tubes previously cooled on ice to prevent evaporation of the solvent. The extracts were analyzed using aluminum TLC plates coated with silica gel type 60 (Merck). With a mechanical pipette, the extracts were deposited 1 cm from the bottom of the plate and spaced 1.5 cm from one another. Each deposit on the plate did not exceed 5 µL. The total volumes deposited for each extract were: powdered tablet, 4 µL; coffee powder, 10 µL; guarana powder, 3.75 µL; kola beverage, 50 µL; yerba mate beverage, 60 µL. Three independent plates were prepared with all extracts. The plates were placed inside previously saturated TLC chromatography tanks containing the mixture ethyl acetate:methanol:ammonium hydroxide (85:10:5). The total run of the mobile phase was 18 cm. Visualization of the caffeine spots on the chromatograms was obtained by spraying the TLC plates with 15 mL of the ferric chloride:iodine reagent. The reagent was prepared by mixing equal volumes of two solutions: a) 1 g of iodine dissolved in 25 mL of acetone; b) 2.5 g of ferric chloride and 5 g of tartaric acid, both dissolved in 25 mL of water. Immediately after the spray, the TLC plate was placed between two glass plates and its image digitized with a desk scanner (HP Deskjet Ink Advantage 3636), adjusting the analysis to a resolution of 200 dpi.
Caffeine Calibration Curve

Dichloromethane solutions of caffeine at 0.6, 0.8, 1.0, 1.2, 1.4 and 1.6 µg/µL were used. The solutions were prepared by mixing volumes of a dichloromethane stock solution of caffeine at 10 mg/mL and pure dichloromethane, according to Table 1. Two successive deposits of 5 µL of the solutions at each concentration were made on TLC plates. Chromatography analysis, visualization and image digitization were carried out as described above. Three independent analyses for calibration curve construction were carried out.

Processing of TLC Chromatograms for Determination of Areas of Caffeine Spots

The areas (mm²) corresponding to each of the spots on the chromatograms were determined with the freely available software ImageJ® [11]. The values of the mean areas in mm² corresponding to the spots on the three TLC chromatograms of the standard solutions were used to obtain the calibration curve and the equation of the line.

Hazard

The ideal solvent for caffeine extraction and solubilization is chloroform. However, whenever possible it is recommended to replace chloroform with another solvent, due to its assumed carcinogenicity. Dichloromethane is a convenient alternative solvent, because of its lower toxicity and its efficiency similar to chloroform for caffeine solubilization. The extraction does not require heating to the boiling point of water (thus reducing risks of accidents in the laboratory) and may be completed in 45 min, as is also the case for a method reported previously [12]. Safety goggles, gloves and a lab coat should be used to avoid direct contact with reagents. Deposition of the extracts and of the caffeine standard on the TLC plates, as well as nebulization of the visualization reagent, was performed in a fume hood. Upon contact with air, solid iodine turns into gaseous iodine by the physical process of sublimation, which may cause eye and skin irritation.

Results and Discussion

The pellet on the bottom of the tubes after solvent evaporation was relatively plentiful and visible as crystalline needles (Figure 1). This is a positive aspect of introducing caffeine extraction in laboratory classes. The sight of conspicuous crystals provides the students with a rewarding feeling of achievement. The TLC analyses took 45 min to complete the 18 cm run of the solvent. Spots of caffeine appeared immediately after spraying the ferric chloride:iodine visualization reagent. The spots had a dark brownish-violet color on a yellow-light brown background, with R_f in the range 0.63-0.65 (Figure 2). Although no rigorous detection limit has been determined, it is safe to say that 2 µg of caffeine may be detected on silica gel layers sprayed with the reagent. The analysis to obtain the calibration curve revealed that, within the mass range 6-16 µg, the areas of the spots were proportional to the deposited caffeine masses (Figure 3). It was observed that, when the caffeine mass was increased beyond 16 µg, the linear relationship between mass and area was lost, the curve tending to a plateau (data not shown), probably due to an increase in spot density. As long as the caffeine masses lie in the range of linearity, the areas of the spots alone provide a means for the quantification of caffeine, and thus no densitometer is required, differently from other reported methods [6,8]. The calibration curve and the corresponding regression equation and correlation coefficient are shown in Figure 4.
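As an illustration of how such a calibration curve can be fit and then used for quantification, the short script below performs a least-squares linear regression of spot area on deposited caffeine mass and inverts the line to estimate the mass behind a measured sample spot. All numbers are hypothetical placeholders, not the data behind Figure 4, and the estimate is only valid inside the linear range (6-16 µg here).

import numpy as np

# calibration points: deposited caffeine mass (µg) and mean spot area (mm²)
mass_ug = np.array([6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
area_mm2 = np.array([21.0, 27.5, 34.0, 40.0, 46.5, 53.0])   # hypothetical mean areas

# least-squares line: area = slope * mass + intercept
slope, intercept = np.polyfit(mass_ug, area_mm2, deg=1)
r = np.corrcoef(mass_ug, area_mm2)[0, 1]
print(f"area = {slope:.2f} * mass + {intercept:.2f}  (r = {r:.4f})")

# estimate the caffeine mass on the plate from a measured sample spot area
sample_area = 38.2                                           # mm², hypothetical ImageJ value
sample_mass = (sample_area - intercept) / slope
print(f"estimated caffeine deposited: {sample_mass:.1f} µg")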
The areas of the spots and the contents of caffeine, as determined with the regression equation (Figure 4), are given in Table 2. The same table gives the expected contents based on information provided by the description leaflet of the medicine tablet, the bottle labels (energy drink, yerba mate beverage) or the literature about the product (coffee and guarana powders). Standard errors of the determined caffeine contents varied in the range 1.9% (yerba mate beverage) to 6.5% (kola soft drink). Such fluctuation is too high if precision in the values to be determined is essential. However, if high precision is not a crucial requisite, the fluctuation found does not seem prohibitive. The contents of caffeine determined for coffee and guarana powder, as well as for the yerba mate beverage, lie within the variation range admitted for the products (Table 2). Regarding the content of caffeine found for the kola drink and the medicine tablet, the mean values found were 15% and 8% lower, respectively, than the amounts of caffeine declared by the producer companies. Depending on the accuracy expected for procedures to be used in laboratory classes of chemistry, the results obtained may be considered satisfactory. The lack of a high degree of precision of the proposed method is compensated by the feasibility of its implementation in laboratory classes. Since the method requires neither costly chemicals nor expensive equipment, it may be adequate not only for education at the university level, but also for students at technical schools. Hopefully, the procedures might be adapted even for teaching at the high school level, as long as the establishments have facilities enabling simple experiments in chemistry or biochemistry.

(Fragment of Table 2: Guarana powder (%)b, 3.75, 47.9 ± 0.9, 5.1 ± 0.1. Footnotes: a, declared on the drug leaflet; b, mass/mass.)

Conclusion

The proposed procedure enables a low cost and rapid extraction and quantification of caffeine in food and beverages. Although not applicable in cases where high precision is crucial, the procedures provide approximate values and are feasible for chemical education at universities and technical schools.

The spot areas were determined with ImageJ [11], according to the following options: File - Open; Image - Crop (this option focuses the analysis exclusively on the selected TLC spot); Analyze - Set scale - Distance in pixels (200); Known distance - 25.4 mm; Pixel aspect ratio: 1.0; Unit of length - mm; Image type - 8-bit; Process - Binary - Make Binary; Analyze - Analyze particles; Show - Outlines. With this procedure, the software provides black-and-white images and outlines of each spot and the corresponding areas in square millimeters.

Figure 1. Crystals of caffeine on the bottom of a glass tube after the procedure of extraction from guarana powder.
Figure 2. Thin layer chromatogram of dichloromethane solutions with known quantities of caffeine.
Figure 4. Calibration curve relating quantities of caffeine analyzed by thin layer chromatography and areas of the corresponding spots, with regression equation and correlation coefficient.
3,348.8
2017-08-17T00:00:00.000
[ "Chemistry" ]
Analyzing-Evaluating-Creating: Assessing Computational Thinking and Problem Solving in Visual Programming Domains

Computational thinking (CT) and problem-solving skills are increasingly integrated into K-8 school curricula worldwide. Consequently, there is a growing need to develop reliable assessments for measuring students' proficiency in these skills. Recent works have proposed tests for assessing these skills across various CT concepts and practices, in particular, based on multi-choice items enabling psychometric validation and usage in large-scale studies. Despite their practical relevance, these tests are limited in how they measure students' computational creativity, a crucial ability when applying CT and problem solving in real-world settings. In our work, we have developed ACE, a novel test focusing on the three higher cognitive levels in Bloom's Taxonomy, i.e., Analyze, Evaluate, and Create. ACE comprises a diverse set of 7x3 multi-choice items spanning these three levels, grounded in elementary block-based visual programming. We evaluate the psychometric properties of ACE through a study conducted with 371 students in grades 3-7 from 10 schools. Based on several psychometric analysis frameworks, our results confirm the reliability and validity of ACE. Our study also shows a positive correlation between students' performance on ACE and performance on Hour of Code: Maze Challenge by Code.org.

* This extended version of the SIGCSE 2024 paper includes all 21 test items from ACE along with their answers in the appendix.

INTRODUCTION

Computational thinking (CT) is emerging as a critical skill in today's digital world. According to the work of [1], "computational thinking involves solving problems, designing systems, and understanding human behavior, by drawing on the concepts fundamental to computer science". Several works have also discussed the multi-faceted nature of CT and its broader role in the acquisition of creative problem-solving skills [2,3]. As a result, CT is being increasingly integrated into K-8 curricula worldwide [4,5]. With the growing integration of CT at all academic stages, there has also been a surge in demand for validated and reliable tools to assess CT skills, especially at the K-8 stages [6,7]. These assessment tools are essential for tracking the progress of students, guiding the design of curricula, and supporting teachers as well as researchers in assisting students in the acquisition of CT skills [3,6,8,9]. Prior work has proposed several assessments that measure students' CT during their K-8 academic journey. On one end, several portfolio-based assessments have been proposed that measure students' CT through projects in specific programming environments [10]. Although portfolio-based tests provide open-ended projects to capture students' analytical, evaluative, and creative skills, they are challenging to implement and interpret on a larger scale [7,11]. On the other end, several diagnostic assessment tools have been proposed that measure CT in the form of multiple-choice items [7,12,13,14]. These assessment tools are preferred for their practicality in large-scale administration and suitability for both pretest and posttest conditions [11]. However, scalability comes at the cost of limiting the ability to effectively measure students' computational creativity. Thus, there is a need to develop multi-choice tests that also capture students' computational creativity.
To this end, we have developed a novel test for grades 3-7, ACE, that focuses on the three higher cognitive levels of Bloom's Taxonomy, i.e., Analyzing, Evaluating, and Creating [15]. It comprises a diverse set of multiple-choice items spanning all three higher cognitive levels, including the highest level of Creating. Figure 1 illustrates the diversity of items covered by ACE. Further details of the development of ACE are presented in Section 3. In this paper, our objective is to validate ACE with students from grades 3-7 and report on its psychometric properties. Specifically, we center the analysis around the following research questions:
(1) RQ1: How is the internal structure of ACE organized w.r.t. item categories pertaining to Bloom's higher cognitive levels?
(2) RQ2: What is the reliability of ACE w.r.t. consistency of its items?
(3) RQ3: How does performance on ACE correlate with performance on real-world programming platforms and students' prior programming experience?

Table 1: Categorization of different CT assessments proposed in recent works. The first column shows the specific CT assessment. The next three columns, Applying-Analyzing, Analyzing-Evaluating, and Evaluating-Creating, classify the assessment based on these different cognitive levels of Bloom's Taxonomy, where "✓" implies presence of the levels and "✗" implies absence of the levels. The "Grade" column refers to the intended grades (age group) for the test. The "Validity" column refers to three dimensions across which the test was validated, including (i) "Student": test items validated with students; (ii) "Expert": test items validated with experts; (iii) "Convergent": test validated w.r.t. performance on another test/course. Finally, the "Domain" column shows the domain on which the items in the test were designed. Further details are presented in Section 2.

RELATED WORK

Prior work has proposed several CT assessments, categorized based on their format, including the following [7,11]: (a) portfolios, which are project-based programming assessments; (b) interviews, which are used in conjunction with portfolios to gain insights into students' thinking process; (c) summative assessments, which are long-format answer-type questions to measure CT specifically in the context of a particular domain; (d) multi-choice diagnostic tests, which measure CT aptitude and may be administered in both pretest and posttest conditions. As mentioned in Section 1, we focus on multi-choice CT tests due to their practicality and scalability. Table 1 presents several different multi-choice diagnostic tests proposed in the literature, viewed through the lens of Bloom's taxonomy [11]. Specifically, we classify them based on their coverage of the higher cognitive levels of the taxonomy (Applying, Analyzing, Evaluating, and Creating).
These tests cater to students from different school years, starting from kindergarten (K) through the early years of college. Next, we describe three representative assessments targeting different school years. The competent Computational Thinking test (cCTt) [7] was proposed recently, in 2022, for students in grades 3-4. The test comprises items that only require finding solution codes or completing a given solution code. These types of items invoke students' Applying, Analyzing, and Evaluating cognitive levels. The Computational Thinking Challenge (CTC) [13] was proposed in 2021 for students in grades 9-12. The test contains programming items in the form of Parsons problems [24], solution-finding multi-choice items, and general items on real-world problem-solving. The items in CTC also cover all cognitive levels except the Creating level. Finally, the Placement Skill Inventory v1 (PSIv1) [14] was also proposed recently, in 2022, for college students as a placement test. The test contains multi-choice theoretical items on programming and covers only the Applying and Analyzing levels of Bloom's taxonomy. Contrary to these tests, ACE contains items that require synthesizing new problem instances to verify the correctness of a proposed solution. These items in ACE are intended to cover Bloom's Creating cognitive level. ACE is developed for students in grades 3-7.

Table 1 also shows the different domains in which CT is measured in these tests. For grades K-8, the most popular setting is block-based visual programming, likely because of the low syntax overhead of the domains and the ease of measuring CT concepts such as conditionals, loops, and sequences [7,9,12,21]. Beyond block-based programming domains, several CT tests also utilize real-world settings, including everyday scenarios (e.g., a scenario related to seating arrangements in a gathering) [20], robotics [19], and real-world problem-solving (e.g., a problem related to route planning in a city) [13]. The advantage of these real-world settings and domains is that they can be administered with minimal domain knowledge, thus making them suitable pretest and posttest candidates. ACE is based on the block-based visual programming domain.

Finally, an important aspect of developing such CT assessments is their validation and reliability [7]. Generally, CT assessments are validated using three methods: (a) with students in the specific grades for which the assessment was designed; (b) with expert feedback; (c) w.r.t. another test or performance in a course (i.e., convergent validity). For a well-rounded evaluation, it is advisable to explore all three validation methods [7,11]. As shown in Table 1, most tests are validated with students, while some are refined by experts. However, the incorporation of convergent validity is less common. ACE is validated using all three methods.

OUR TEST: ACE

The development of ACE is centered around the higher cognitive levels of Bloom's taxonomy: Analyzing, Evaluating, and Creating. The test contains items grounded in the domain of block-based visual programming. Specifically, we consider the popular block-based visual programming domain of Hour of Code: Maze Challenge [16] by code.org [25]. We picked this domain as it encapsulates important CT and problem-solving concepts of conditionals, loops, and sequences within the simplicity of the block-based structure.
Students can attempt tasks in this domain with a simple description of the constructs and task, as discussed in the caption of Figure 1. Next, we describe the items in ACE, which are divided into the following three categories based on Bloom's higher cognitive levels:
• Applying-Analyzing: This category comprises items on either finding a solution code for a given task or reasoning about the trace of a given solution code on one or more visual grids. They are based on the Applying and Analyzing levels of Bloom's taxonomy, as they require applying CT concepts and analyzing code traces. These items are typically the most common type of items included in several CT tests [7,20].
• Analyzing-Evaluating: This category comprises items that require reasoning about errors in candidate solution codes of a task and evaluating the equivalence of different codes for a given task. They are based on the Analyzing and Evaluating levels of Bloom's taxonomy. Several CT assessments also include these types of debugging items [9,13].
• Evaluating-Creating: This category comprises items that require reasoning about the design of task grids for given solution codes. They are based on the Evaluating and Creating levels of Bloom's taxonomy, as they involve synthesizing components of visual grids such as the Avatar, Goal, and Wall. These items are unique to ACE and capture the open-ended nature of task design, such as counting the possible task configurations that satisfy a given solution code (see items Q18 and Q21 in Figure 1).

STUDY AND DESCRIPTIVE STATISTICS

In this section, we provide details of the data collection process for ACE's psychometric evaluation.

Two-Phase Data Collection Process The study to evaluate the psychometric properties of ACE was planned in two phases, spread across two weeks. The first phase was intended to familiarize students with the block-based visual programming domain of Hour of Code: Maze Challenge (HoCMaze) [16] by code.org [25], and to introduce them to basic programming concepts. Additionally, it would serve as a baseline to correlate students' performance w.r.t. ACE and measure the convergent validity of ACE. In the second phase of the study, students would take the ACE test. This two-phase study design ensured that students would have enough focus on each study component as well as a time gap between domain familiarity and the actual test. We obtained Ethical Review Board approval from the Ethics Committee of Tallinn University before conducting the study. The study was conducted in Estonia, where a random selection of 10 schools was pooled from 11 out of 15 counties. Participation in the study was voluntary for both schools and students. The data collection process was conducted in May 2023.
During both phases, students received usernames to ensure anonymity throughout the study. The first phase of data collection included one 45-minute lesson during which the students filled in a short background questionnaire in Google Forms (about 5 minutes) and then solved 20 tasks from HoCMaze (about 40 minutes). We hosted these 20 tasks on a separate platform created for the study to enable the collection of students' performance data on these tasks. Students were allowed multiple attempts to solve each task and could score a maximum of 20 points, i.e., 1 point per task. Henceforth, we refer to students' performance in this phase as their HoCMaze score. The second phase took place one week later and involved a 45-minute lesson during which students took the ACE test. The test was administered through a Qualtrics survey. Students could score a maximum of 21 points, i.e., 1 point per item. Henceforth, we refer to students' performance in this phase as their ACE score.

RESULTS AND DISCUSSION

In this section, we discuss the results of the study centered around the research questions (RQs) introduced in Section 1.

RQ1: Internal Structure of ACE We assess the internal structure of ACE w.r.t. its item categories pertaining to Bloom's higher cognitive levels.

RQ2: Reliability of ACE Next, we determine the reliability of ACE, i.e., a measure of its ability to produce consistent and stable results over repeated administrations (a higher value being better). One standard way to measure this is through the Cronbach alpha value [13,28], which reflects the average inter-item correlations in a test. Another method is the reliability of student ability estimates obtained from Item Response Theory (IRT). In our study, we apply IRT analysis to students' responses to ACE and fit a 1-parameter logistic Rasch model (1-PL IRT) [26]. The model estimates the per-item difficulties and students' abilities, and provides the reliability of these estimates. The overall reliability for our test was good, with a Cronbach alpha value of 0.813. Among the three item categories, Cronbach alpha was 0.622 for ACE[01-07], 0.562 for ACE[08-14], and 0.625 for ACE[15-21]. Figure 3a shows the 1-PL IRT item characteristic curves for all items; we find that Q02 is the easiest and Q17 is the hardest ACE item. Figure 3b illustrates the difficulty of items as well as the estimated ability of the students in our population. The 1-PL IRT person reliability value for all 21 items is 0.790 (with p < 0.01). Next, we discuss the potentially problematic item Q17 shown in Figure 4. We find that its exclusion from the model does not significantly improve the IRT person reliability. One possible reason Q17 prompted incorrect responses is that it was the first item in ACE requiring enumeration of all possible Avatar locations. However, students adapted to similar formats in subsequent items (e.g., Q18 and Q21 in Figure 1). Prior work confirms that varying response formats can cause deviations [29]. A possible revision of item Q17 could be simplifying the visual grid to reduce its complexity.
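For reference, the Cronbach alpha values reported above can be computed directly from the student-by-item response matrix. The sketch below uses a small hypothetical 0/1 matrix; the actual analysis would use the responses of all 371 students to the 21 items.

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = students, columns = test items (here 0/1 responses)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# hypothetical responses of 6 students to 7 items (1 = correct, 0 = incorrect)
responses = np.array([
    [1, 1, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0],
    [1, 1, 1, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 0, 1],
])
print(f"Cronbach alpha = {cronbach_alpha(responses):.3f}")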
RQ3: Correlating ACE Scores We measure the convergent validity of ACE w.r.t. HoCMaze scores. Additionally, we measure the correlation of the three ACE categories with both HoCMaze scores and overall ACE scores. Finally, we measure the influence of extrinsic factors such as prior programming experience on ACE scores. To measure all these correlations, we perform standard Pearson's correlation analysis between each pair of these features on data from our entire student population [13,30]. High positive values of Pearson's correlation coefficient, r, indicate a strong positive correlation. In terms of the effect of prior programming experience on ACE, we observed a significant positive correlation with both the student's year of study (r = 0.358, p < 0.01) and age (r = 0.359, p < 0.01). Our result aligns with prior work [31] indicating that participants' developmental factors (e.g., reading skills, abstract thinking) can impact test performance. In our student population, varying programming exposure due to elective programming courses influenced prior programming skills. Analyzing this further, we discovered that students who took after-school programming classes outperformed those who did not on ACE (p < 0.05, w.r.t. a t-test [32]).

Limitations Next, we discuss a few limitations of our current study. Firstly, in this study, we evaluated the convergent validity of ACE w.r.t. HoCMaze scores. However, it would be more informative to evaluate ACE w.r.t. other types of assessments, such as portfolios, which specifically consider the Creating cognitive level. Moreover, it would be interesting to evaluate the convergent validity of ACE w.r.t. students' performance in other subjects involving CT. Secondly, students in grade 3 did not show a significant correlation between ACE and HoCMaze scores (Pearson's r = 0.068; p = 0.633), possibly because of difficulties with text comprehension of the item descriptions. Hence, refining the presentation of items could be beneficial for this age group. Finally, we presented the test items in a fixed order, which might have affected students' performance on specific items such as Q17. Implementing a randomized order of the test items within each category could be a way to address this limitation.
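The correlation and group-comparison statistics used in RQ3 above are standard; as a reference point, the sketch below computes a Pearson correlation and an independent-samples t-test with SciPy. The score arrays are hypothetical placeholders, not the study data.

import numpy as np
from scipy import stats

ace_scores = np.array([12, 15, 9, 18, 14, 11, 20, 16])    # hypothetical ACE scores
hoc_scores = np.array([13, 16, 10, 19, 15, 12, 18, 17])   # hypothetical HoCMaze scores
r, p = stats.pearsonr(ace_scores, hoc_scores)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")

# compare ACE scores of students with vs. without after-school programming classes
with_classes = np.array([18, 16, 20, 17, 15])              # hypothetical
without_classes = np.array([12, 14, 11, 13, 15])           # hypothetical
t, p = stats.ttest_ind(with_classes, without_classes)
print(f"t = {t:.3f}, p = {p:.3g}")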
CONCLUSION AND FUTURE WORK

We developed a new test, ACE, to assess CT and problem-solving skills, focusing on higher levels of Bloom's taxonomy, including Creating. We capture this level through a novel category of items that go beyond solution finding or debugging and consider task design. In this paper, we studied the psychometric properties of ACE, and our results confirm ACE's reliability and validity. There are several exciting directions for future work. Firstly, we can extend the framework of items to develop tests with more advanced programming constructs, such as variables and functions, suitable for higher grades. Secondly, while we studied the utility of items in ACE for CT assessments, these items could also be incorporated as part of the curriculum to teach students richer CT and problem-solving skills such as problem design and test-case creation.
Figure 1: (a) shows the distribution of test items w.r.t. CT and problem-solving concepts and Bloom's cognitive levels. (b)-(f) are examples of five items from ACE. These items are grounded in the domain of Hour of Code: Maze Challenge (HoCMaze) [16], which can be found at studio.code.org/s/hourofcode. The HoCMaze domain comprises elementary block-based visual programming tasks where one has to write a solution code that would navigate the Avatar (blue dart) to the Goal (red star) without crashing into Walls (gray grid cells). We encourage the reader to attempt these items; all 21 test items from ACE along with their answers are provided in the appendix.
Figure 2: An overview of the performance of students on ACE. (a) overall distribution of ACE scores across all 371 students; (b) distribution of ACE score per grade; (c) success rate of students for each item in ACE. Details are in Section 4.
Figure 3: Results from a 1-parameter Rasch model [26] on the ACE items and student scores. (a) Item characteristic curve for each item in ACE and (b) Wright map corresponding to our student population.
Figure 5: Pearson's correlation coefficient, r, between ACE and HoCMaze, between ACE and its categories, and between each category. All values are significant with p < 0.001.
You are given a grid. Which code solves this grid? (this statement appears for three items; the accompanying codes and grids are omitted)
Q05. You are given a grid and its solution code. What happens to the AVATAR when the code is run on this grid? OPTION A AVATAR will pass through the grid cell b5. OPTION B AVATAR will pass through the grid cell g5. OPTION C AVATAR will pass through the grid cell b3. OPTION D AVATAR will pass through the grid cell c3.
Q06. You are given a grid and its solution code. What happens to the AVATAR when the code is run on this grid? OPTION A AVATAR will pass through the grid cells f2 and e2. OPTION B AVATAR will pass through the grid cells e3 and d3. OPTION C AVATAR will pass through the grid cells e4 and d4. OPTION D AVATAR will pass through the grid cells d4 and c4.
Q07 (options). OPTION A GRID-1. OPTION B GRID-1 and GRID-3. OPTION C GRID-1 and GRID-2. OPTION D All three grids GRID-1, GRID-2, and GRID-3.
Q09. You are given a code and a grid. You may have to fix some errors in the code such that it solves the grid. How can you fix the code?
OPTION A The code does not have any errors and it already solves the grid. OPTION B Add move forward after Block-2. OPTION C Add move forward after Block-4. OPTION D Change Block-3 to turn left and Block-5 to turn right.
Q10. You are given a code and a grid. You may have to fix some errors in the code such that it solves the grid. How can you fix the code? OPTION A The code does not have any errors and it already solves the grid. OPTION B Add move forward before Block-2. OPTION C Change Block-4 to if path to the left and Block-5 to turn left. OPTION D Remove Block-3.
Q11. You are given a code and a grid. You may have to fix some errors in the code such that it solves the grid. How can you fix the code? OPTION A The code does not have any errors and it already solves the grid. OPTION B One additional move forward needs to be added somewhere in the code to fix it. OPTION C One additional turn right needs to be added somewhere in the code to fix it. OPTION D One additional turn left needs to be added somewhere in the code to fix it.
Q12. You are given a code CODE-1 along with two smaller codes CODE-2 and CODE-3. You have to think about the AVATAR's behavior when a code is run on a grid. Which of these two smaller codes produce the same behavior as CODE-1 on any grid? OPTION C Both CODE-2 and CODE-3. OPTION D None of these two smaller codes.
Q13. You are given a code CODE-1 and two smaller codes CODE-2 and CODE-3. You have to think about the AVATAR's behavior when a code is run on a grid. Which of these two smaller codes produce the same behavior as CODE-1 on any grid? OPTION A Only CODE-2. OPTION B Only CODE-3. OPTION C Both CODE-2 and CODE-3. OPTION D None of these two smaller codes.
Q14. You are given two codes CODE-1 and CODE-2, along with a grid. Which of the following is true for the AVATAR's behavior when CODE-1 and CODE-2 are run? OPTION A They have the same behavior for the given grid. However, there are other grids for which they have different behaviors. OPTION B They have different behaviors for the given grid. However, there are other grids for which they have the same behavior. OPTION C They have the same behavior for every grid. OPTION D They have different behaviors for every grid.
Q15 (options). OPTION A Grid cell d5 facing north. OPTION B Grid cell d4 facing west. OPTION C Grid cell b5 facing east. OPTION D Grid cell c5 facing east.
Q16. You are given a code and an incomplete grid without the AVATAR. What could be the initial position of the AVATAR such that the grid is solved by the code? OPTION A Grid cell h5 facing west. OPTION B Grid cell a3 facing east. OPTION C Grid cell b3 facing east. OPTION D Grid cell h3 facing west.
Q17. You are given a code and an incomplete grid without the AVATAR. How many different positions of the AVATAR are possible such that the grid is solved by the code?
Q18. You are given a code and an incomplete grid without the GOAL. You can add the GOAL in any grid cell which is not occupied by the AVATAR and is not a WALL. How many different locations of the GOAL are possible such that the grid is solved by the code?
Q19. You are given a code and an incomplete grid without the GOAL. You can add the GOAL in any grid cell which is not occupied by the AVATAR and is not a WALL. How many different locations of the GOAL are possible such that the grid is solved by the code?
Q20. You are given a code and an incomplete grid. You can add two additional WALL cells in any of the FREE cells. What could be possible locations of two additional WALL cells such that the grid is solved by the code?
OPTION A WALL at the grid cells g6 and f7. OPTION B WALL at the grid cells g6 and f2. OPTION C WALL at the grid cells g3 and f2. OPTION D WALL at the grid cells g3 and f7.
Q21. You are given a code and an incomplete grid. You can add additional WALL cells to the grid by converting any of the FREE cells into WALL cells. What is the smallest number of additional WALL cells you must add such that the grid is solved by the code?
ANSWERS TO ACE TEST ITEMS Below we provide answers to the 21 ACE test items.
6,148.8
2024-03-07T00:00:00.000
[ "Computer Science", "Education" ]
On the problem of maximal $L^q$-regularity for viscous Hamilton-Jacobi equations For $q>2, \gamma>1$, we prove that maximal regularity of $L^q$ type holds for periodic solutions to $-\Delta u + |Du|^\gamma = f$ in $\mathbb{R}^d$, under the (sharp) assumption $q>d \frac{\gamma-1}\gamma$. Introduction We address here the so-called problem of maximal $L^q$-regularity for equations of the form $-\Delta u + |Du|^\gamma = f$ in $\mathbb{R}^d$ (1), with periodic data, namely the following property (M): for all $M > 0$, there exists $\tilde M > 0$ (possibly depending on $M, \gamma, q, d$) such that $\|f\|_{L^q(Q)} \le M$ implies $\||Du|^\gamma\|_{L^q(Q)} + \|\Delta u\|_{L^q(Q)} \le \tilde M$, $Q$ being the $d$-dimensional unit cube $(-1/2, 1/2)^d$. It is known that maximal $L^q$-regularity cannot be expected in general (even for classical solutions) if $q < d\,\frac{\gamma-1}{\gamma}$, see Remark 3.1. The validity of (M) has been conjectured to hold in the complementary regime (2): $q > d\,\frac{\gamma-1}{\gamma}$ (and $q > 1$), but, to the best of our knowledge, the problem has remained so far unsolved in general. P.-L. Lions has discussed this conjecture in a series of seminars (e.g. [6]), and during his lectures at Collège de France [5]. He indicated some special cases that can be successfully addressed. When $\gamma = 2$, the so-called Hopf-Cole transformation $v = e^{-u}$ reduces (1) to a semilinear equation, and (M) may be obtained for any $q > d/2$ using (maximal) elliptic regularity and the Harnack inequality. Ad-hoc treatments for the special cases $d = 1$ and $q < d/(d-1)$ have been discussed in [6] also. As a final suggestion, an integral version of the Bernstein method [4] could be implemented to prove (M) when $q$ is close enough to $d$ (see also [1] for further refinements of this technique), but the full regime (2) seems to be out of range using these sole arguments. We develop here a new method to obtain (M) in full generality, assuming only $q > 2$ (which is always satisfied under (2) if $\gamma > d/(d-2)$). The proof is based on a crucial estimate of the form (3), valid for any $k \ge 0$, where $\omega(\{|Du| \ge k\}) \to 0$ as $k \to \infty$. Such an estimate is obtained starting from a classical idea of Bernstein [2], namely shifting the attention from the equation (1) for $u$ to an equation for (a suitable power of) $|Du|^2$. A strong degree of coercivity with respect to $|Du|^2$ itself in this equation, which stems from uniform ellipticity and the nonlinear term $|Du|^\gamma$, turns out to be a key ingredient to derive (3). Once (3) is available, the estimate can then be recovered up to $k = 0$. This second key step has been inspired by a very interesting argument that appeared in [3], which suggests that, despite the strong non-linear nature of $|Du|^\gamma$ in (1), some information can be extracted from the equation on sets $\{|u| \ge k\}$ (and on $\{|Du| \ge k\}$ in our case) for $k$ large. Our result reads as follows. Proof of the main theorem For the sake of brevity, we will often drop the $x$-dependence of $u, Du, \dots$, and the $d$-dimensional Lebesgue measure $dx$ under the integral sign. $(x)_+ = \max\{x, 0\}$ will denote the positive part of $x$, and for any $p > 1$, $p' = p/(p-1)$. This section is devoted to the proof of Theorem 1.1, which will be based on the following lemma: $\dots$ and, for all $k \ge 1$, $\dots$ We postpone the proof of the lemma, and show first how (4) yields the conclusion of Theorem 1.1. Note that the function $F: \dots$ Since $u \in C^3(Q)$, the function $k \mapsto Y_k$ is continuous and tends to zero as $k \to \infty$ (it eventually vanishes for $k$ large). Hence we deduce that $\dots$ and finally $\dots$ The estimate on $\|\Delta u\|_{L^p(Q)}$ is then straightforward. Having proven Theorem 1.1, we now come back to the main estimate (4). $\dots$, with $\delta \in (0, 1)$ to be chosen later. Note that, for any $\delta \in (0, 1)$, $g$ enjoys the following properties for all $s \ge 0$: $\dots$ Note also that $\dots$, $d$ and $\beta > 1$ to be chosen later as test functions in the HJ equation.
First, integrating by parts and substituting w Moreover, again integrating by parts, Note also that in (8) integrating on Q and on Ω k is the same, by the presence of w k . We use first Cauchy-Schwarz inequality, the equation (1) and the inequality (a − b) 2 ≥ a 2 2 − 2b 2 for every a, b ∈ R to get Moreover, again by Cauchy-Schwarz inequality (be careful about g ′′ < 0) and (7), The above inequalities then yield Note that for γ > 1 it holds and hence, we are allowed to conclude This gives, back to (8) and substituting (1 + |Du| 2 ) where c 1 = c 1 (δ, d, γ) > 0. We now estimate the five terms on the right hand side of the previous inequality. The first three terms are somehow similar: using Cauchy-Schwarz inequality and that 2sg ′ ≤ g ′′ , we have for some We now make some choices for the coefficients. Recalling that d γ ′ < q, we take Note d γ ′ < p < q. Assuming that p > 2 (which is always true when γ > d d−2 , otherwise see the remark at the end of the proof), we have β > 1 whenever δ is close enough to zero. Moreover, Therefore, we apply Hölder's inequality (with conjugate exponents p/2 and p/(p − 2)) and Young's inequality, and then w k ≤ w together with (12) to obtain where c 3 = c 3 (δ, d, γ, p) > 0. Plugging the previous inequality into (10) yields δ 2d The fourth term in (9) is a bit more delicate, we proceed as follows. Use first that s ∫ where c 4 = c 4 (δ, d, γ, β) > 0. Since k ≥ 1, w ≥ 1 on Ω k , hence, recalling also (12), ∫ We now focus on the fifth term in (9). By Young's inequality, Furthermore, letting Plugging the previous inequality into (16) and using again Young's inequality leads to for some c 5 = c 5 (δ, d, γ, p) > 0. Plug now (14), (15) and (17) into (9) to obtain Sobolev's inequality related to the continuous embedding of We finally choose δ > 0 small enough so that δ pq q−p < 1. Recall that p < q, so using repeatedly Hölder's and Young's inequalities we obtain Then, on B 1/2 := {|x| < 1/2}, and v ε = 0 on ∂ B 1/2 . Therefore, there exists M > 0, depending on c, d, γ only, such that Note that the example is meaningful only if γ > d d−1 , that is when d γ−1 γ > 1. Note also that though v ε is not periodic, being smooth on B 1/2 and vanishing on ∂ B 1/2 , it is straightforward to produce similar examples in the periodic setting. Finally, different choices of the truncation χ(|x|) = χ ε (|x|) lead to counterexamples in the regime q < d γ−1 γ . Remark 3.2. d = 1, 2. Theorem 1.1 is stated in dimension d ≥ 3, but the proof for d = 1, 2 follows identical lines. As it usually happens, the point is that in the latter case W 1,2 (Q) is continuously embedded into L p (Q) for all finite p ≥ 1, and not only into L 2d d−2 (Q). Remark 3.3. Less regularity of u.
1,776.6
2020-01-31T00:00:00.000
[ "Mathematics" ]
Augmented Reality Surgical Navigation System Integrated with Deep Learning Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery, through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposed an augmented reality surgical navigation system based on image positioning that achieves the desired system advantages with low cost, high stability, and high accuracy. This system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the surgical entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for EVD (extra-ventricular drainage) surgery, and surgeons confirmed the system’s overall benefit. A “virtual object automatic scanning” method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves improved recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, representing a significant improvement from previous studies. Introduction In recent years, several studies have proposed applying augmented reality (AR), virtual reality (VR), and artificial intelligence (AI) technologies with medicine, producing promising results but also exhibiting some limitations. For instance, AR or VR technology can be a promising tool for complex procedures, especially in maxillofacial surgery, to ensure predictable and safe outcomes [1]. AR-based surgical navigation techniques can be Ratio (FHR)) in terms of their importance in predicting hydrocephalus using a Random Forest classifier. Martin et al. [27] proposed a method using Compositional Pattern Producing Network (CPPN) to enable Fully Convolutional Networks (FCN) to learn cerebral ventricular system (CVS) location. To address the ventriculomegaly problem that arises in the clinical routine, dilation of the CVS is required. However, current AI methods cannot automatically locate hydrocephalus. To address this issue, this paper proposes a comprehensive solution to obtain the surgical target, scalpel entry point, and scalpel direction, and automatically locate hydrocephalus. The proposed approach includes virtual object automatic scanning operation navigation to improve accuracy and the use of a tablet computer lens to align two custom image targets for locating the virtual head and virtual scalpel. The improved U-Net [28] is also utilized to recommend target points, resulting in a surgery efficiency and accuracy rate of 99.93%. 
Clinical trials were conducted for EVD (extra-ventricular drainage) surgery, and surgeons have confirmed the feasibility of the system. Overall, the proposed approach has the potential to enhance surgical outcomes and improve patient care in the field of neurosurgery. System Overview The proposed system comprises an AR device (Surface Pro 7 (Microsoft Co., Redmond, WA, USA)), printed images for head and scalpel positioning, an upright tripod, a flat clamp, and a medical septum cover. The tablet tracks the feature points of the two positioning images to display the virtual image correctly (see Figure 1a). The system follows the flow chart depicted in Figure 1b, providing multiple functions such as automatic scanning of DICOM-formatted virtual objects, selection of surgical targets, generation of surgical trajectories, and color-assisted calibration of surgical entry points to aid surgeons during EVD surgery. Table 1 defines the symbols and parameters used in the proposed method, while Table 2 presents the main abbreviations employed in this article.
Table 1. Notation and definitions:
Dis_x / Dis_y / Dis_z: distance on the x/y/z axis of the DICOM object
E_p: the edge point of the scalpel
l2D_dicomX / l2D_dicomY: the length/width of the DICOM image
l3D_X / l3D_Y / l3D_Z: the length of the head along the x/y/z axes
N_target: the number of the specific DICOM slice with the ideal target
N_total: the total amount of DICOM slices
Num_x / Num_y / Num_z: the number of DICOM slices on the x/y/z axis
Pos2D_target: the ideal 2D target position
Pos3D_target(x, y, z): the ideal 3D target position
Pos3D_0(0, 0, 0): the origin point
RP_xL / RP_xR: the reference point on the left/right of the x-axis
T_x / T_y / T_z: the thickness on the x/y/z axis of DICOM slices
TrueX / TrueY / TrueZ: the x/y/z-axis index of the displayed DICOM slice
Virtual Object Automatic Scanning The patient's head CT scans are converted into a virtual head object with Avizo software (Waltham, MA, USA) [29], which provides an image analysis platform for identifying the locations of the scalp, skull, and hydrocephalus. The principle of the automatic scanning method is based on a trigger function. Virtual planes that are perpendicular to the x, y, and z axes in Unity are utilized to scan virtual objects. The entry and exit points (or two collision points) are obtained when these virtual planes enter and leave the virtual object or when they enter the object from two opposite directions, resulting in a total of six reference points. As an example, RP_xR (reference point on the right of the x-axis) is obtained when the axial plane enters the virtual head object from the right, while RP_xL (reference point on the left of the x-axis) is obtained when the axial plane enters the virtual scalp from the left. RP_xR and RP_xL serve as two reference points on the x-axis for displaying real-time DICOM (Digital Imaging and Communications in Medicine) images. The system performs simultaneous virtual scalp scans along the three axes (sagittal, frontal, horizontal, and lateral) to obtain a total of six reference points, which are displayed on the virtual head object as RP_xR, RP_xL, RP_yT, RP_yB, RP_zT, and RP_zB. These reference points play a critical role in locating the DICOM plane, which significantly impacts target accuracy (refer to Section 2.5 for detailed methodology). After completing the calibration, an accurate augmented reality environment is generated to assist with EVD surgical navigation.
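To make the scanning idea concrete, a small voxel-based sketch is shown below. The paper implements this with collision triggers on virtual planes in Unity; the occupancy-array approach here is only an assumed equivalent for illustration.

```python
# Illustrative sketch (not the paper's Unity implementation): given a binary 3D
# occupancy volume of the virtual head, the six reference points are the first
# and last occupied positions along each axis, i.e. the planes at which a
# scanning plane would first/last "collide" with the object.
import numpy as np

def reference_points(volume: np.ndarray) -> dict:
    """volume: boolean array of shape (nx, ny, nz); returns the min/max occupied
    index per axis, analogous to RP_xL/RP_xR, RP_yB/RP_yT, RP_zB/RP_zT."""
    refs = {}
    for axis, name in enumerate("xyz"):
        # collapse the other two axes; True where any voxel in that slab is occupied
        occupied = volume.any(axis=tuple(a for a in range(3) if a != axis))
        idx = np.flatnonzero(occupied)
        refs[name] = (int(idx[0]), int(idx[-1]))  # (entry plane, exit plane)
    return refs

# Example with a toy 'head' placed inside an empty volume
vol = np.zeros((64, 64, 64), dtype=bool)
vol[10:50, 12:52, 8:40] = True
print(reference_points(vol))  # {'x': (10, 49), 'y': (12, 51), 'z': (8, 39)}
```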
Using Machine Learning to Predict and Recommend Surgical Target To predict the location of the hydrocephalus and identify the surgical target, the system selected the connection area with the largest hydrocephalus, computed the center point of this area, and marked it as the final recommended surgical target point. The system's connection areas were selected based on the segmented output of U-Net, with the largest connected pixels chosen using the bwconncomp function in Matlab. This function is designed to identify and count connected components in binary segmented images. DICOM files were utilized to conduct cross-validation on ten groups of patients, with the database divided into five folds. Each group was assigned a number from 1 to 10, and two groups of patient DICOM data were used as testing sets, while the remaining groups served as training sets. Consequently, there were five folds with five distinct testing sets, with four folds (consisting of eight groups of patients' DICOM data) as the training set and the other fold (the DICOM data of the other two groups) as the test set. The training set included label data and normalized data from the eight patient groups, while the test set used the label and normalized data from the other two groups. The normalized data was used as the basis for the final accuracy test. Manual Operation Target Point Positioning After superimposing the virtual head onto the real head, DICOM images in the horizontal and lateral planes are displayed on the tablet. This allows the surgeon to individually select and confirm the accuracy of the target position. Once the target position is confirmed, the relative position is then converted into conversion space, and the specific DICOM slice containing the ideal target (N target -th slice) can be obtained from the total number of DICOM slices N total . The DICOM image is displayed in the lower left of the screen, with the ideal target position displayed as Pos 2D target X 2D target , Y 2D target . The length and width of the DICOM image are I 2D dicomX and I 2D dicomY , respectively. The origin point Pos 3D o (0, 0, 0) is located in the lower left of the head, with the length of the head along the X, Y, and Z axes being I 3D X , I 3D Y , I 3D Z , respectively. The target position in space is denoted by (1) DICOM Image Real-Time Display and Selection of Target Point After automatic scanning and obtaining the six reference points on the three axes, the longest distance of the reference point on each axis is calculated as Dis x (the distance on the x-axis), Dis y (the distance on the y-axis), and Dis z (the distance on the z-axis). The resulting values are then divided by the number of DICOM slices on the corresponding axis, including Num x , Num y , and Num z . The resulting values are the thicknesses on the x, y, and z axes, denoted as T x , T y , and T z , respectively. The distance between the scalpel's edge point (E p ) and RP L x is divided by T x to determine the corresponding DICOM slice on the x-axis, known as TrueX. This algorithm is repeated for the y and z axes. Once TrueX and TrueZ have been calculated, the Unity Web Request function is utilized to present a real-time DICOM image display ( Figure 2a) in combination with augmented reality. This allows surgeons to access the display without having to look away to a separate screen. Surgeons can then simply tap on the screen to select the ideal target ( Figure 2b). 
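The two computations described above, the TrueX slice lookup and the largest-connected-area target recommendation, can be sketched as follows. The paper uses Matlab's bwconncomp for the connected components; scipy.ndimage is used here as an equivalent, and all numeric values in the usage example are hypothetical.

```python
# Sketch of two steps described above, under stated assumptions.
# 1) Mapping the scalpel edge point to a DICOM slice index (TrueX), using the
#    slice thickness T_x derived from the scanned reference points.
# 2) Recommending a surgical target as the centroid of the largest connected
#    region in a binary U-Net segmentation.
import numpy as np
from scipy import ndimage

def slice_index(edge_point_x: float, rp_left_x: float, dis_x: float, num_x: int) -> int:
    """TrueX: distance from the left reference point divided by slice thickness."""
    t_x = dis_x / num_x                      # thickness on the x axis
    return int((edge_point_x - rp_left_x) / t_x)

def recommend_target(mask: np.ndarray) -> tuple:
    """Centre of the largest connected component of a binary segmentation mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        raise ValueError("empty segmentation")
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(mask, labels, largest)

# Toy usage with hypothetical numbers
print(slice_index(edge_point_x=37.5, rp_left_x=10.0, dis_x=220.0, num_x=110))
seg = np.zeros((64, 64), dtype=bool); seg[20:30, 20:35] = True; seg[5:8, 5:8] = True
print(recommend_target(seg))  # centroid of the larger region
```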
Generation of Surgical Trajectory and Calibration of Entry Point Angle Once the target point is selected, a surgical simulation trajectory is generated, connecting the scalpel's edge to the target point. The surgeon then confirms this trajectory by pressing the function button, which generates the final trajectory connecting the surgical entry point to the target point (Figure 3a). To ensure maximum accuracy of the location and path of the surgical entry point, color-assisted angle calibration is used. The color of the trajectory changes from red (Figure 3b). Experiments Studies and Tests To demonstrate the feasibility of the method proposed in Section 2, prosthesis experiments were first conducted in the laboratory using the proposed method. Subsequently, clinical feasibility tests are carried out in hospitals. A Surface 7 tablet was used as the AR device in both test reports. Furthermore, Hololens 2 smart glasses are currently the most popular advanced medical AR HMD devices. A detailed explanation of the potential outcomes when substituting the AR devices with the Hololens 2 smart glasses is provided. Experiment of the Virtual Object Automatic Scanning To evaluate the accuracy of the proposed virtual object automatic scanning method, DICOM data from ten patients were utilized. The automatic scanning error was determined by measuring the distance between the predicted axis plane (automatically selected by the deep learning system) and the actual axis plane (where the target point was located). The minimum error, maximum error, lower quartile, median, and upper quartile, as well as the target point error established by the software Avizo, were also calculated. The virtual model of the point was imported into 3ds Max (Autodesk Inc., San Francisco, CA, USA) for anchor point processing, ensuring constant relative positions of the scalpel and the target point, which facilitated error calculation. Additionally, it should be noted that the predicted axis plane was obtained by selecting the center point of the largest connection area as the target point after predicting the contour of the ventricle using the deep learning model. The actual axis plane was obtained by extracting the center point of the 3D object of the target point, which is created in 3ds Max from the DICOM data. Before this, the 3D object of the ventricle was generated in 3ds Max, followed by the 3D object of the target point that corresponds to the doctor-marked target point on the DICOM. Experiment of Machine Learning to Predict and Recommend Surgical Target To predict the location of hydrocephalus, U-Net (the original model) was employed for deep learning to maximize accuracy and minimize loss by setting 30 epochs, and identifying all ventricular regions in the patient's DICOM images (Figure 4b). The Labeled ventricle contour (green) and the predicted ventricle contour (red) were drawn using Matlab (MathWorks Inc., Natick, MA, USA) (Figure 4a).
Finally, the average sensitivity, specificity, and accuracy for predicting the location of hydrocephalus were calculated, and these data (sensitivity, specificity, and accuracy) are the result of comparing the contour with the pixel-by-pixel method. The U-Net (the original model) architecture consists of two paths, the encoder, and the decoder. The encoder path, comprising convolutional and pooling layers, is responsible for extracting and learning contextual features. Conversely, the decoder path, which includes transpose convolution and up-sampling layers, aims to transfer these learned features into a single prediction layer of the same size as the input image, known as the dense prediction or segmentation map. Each convolution layer is appended with batch normalization and ReLU activation function, except for the last layer, which produces a binary output using sigmoid activation. The entire network uses a convolution kernel of size 3 × 3 and stride of 1 with feature maps of 32, 64, 128, 256, and 320 across all resolution levels. Hyper-parameters used in this study were a learning rate of 0.003, a batch size of 20, and a total of 30 epochs. The objective was to minimize the overall loss of each pixel by computing the dice loss function between the segmented map and labeled reference, and the Adam optimizer was utilized to optimize the weight parameters in each layer. Test of Clinical Feasibility The feasibility of the proposed system was tested at various stages of clinical implementation, beginning with the conversion of DICOM images from patients into 3D models. To assess the clinical feasibility of each step, a pre-operative simulation was conducted in the operating room approximately 1 h before the surgery. Figure 5 illustrates the setup of the system in the operating room and its operation by a neurosurgeon. Specifically, Figure 5a shows the superimposed 3D model of the patient's head position, while Figure 5b shows the DICOM-selected target position on the display. Figure 5c depicts the alignment position and angle following entry point selection, and Figure 5d shows the completed alignment. Following the entire process, an experienced surgeon concluded that the system concept is feasible for clinical use.
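Returning to the U-Net configuration described earlier in this section, a compact PyTorch sketch is given below. This is an illustrative reconstruction under stated assumptions (single-channel CT slices, a hypothetical 256 × 256 input, a small demo batch), not the authors' implementation; only the architectural choices quoted in the text (3 × 3 convolutions with batch norm and ReLU, feature maps 32 to 320, sigmoid output, dice loss, Adam with lr = 0.003) are taken from the paper.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # two 3x3 convolutions, each followed by batch norm and ReLU
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, channels=(32, 64, 128, 256, 320)):
        super().__init__()
        self.encoders = nn.ModuleList()
        cin = 1
        for c in channels:
            self.encoders.append(conv_block(cin, c))
            cin = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.decoders = nn.ModuleList(), nn.ModuleList()
        rev = list(reversed(channels))
        for c_deep, c_skip in zip(rev[:-1], rev[1:]):
            self.ups.append(nn.ConvTranspose2d(c_deep, c_skip, 2, stride=2))
            self.decoders.append(conv_block(2 * c_skip, c_skip))
        self.head = nn.Conv2d(channels[0], 1, 1)  # final 1x1 conv, sigmoid below

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = up(x)
            x = dec(torch.cat([x, skip], dim=1))
        return torch.sigmoid(self.head(x))

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

model = UNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
x = torch.rand(2, 1, 256, 256)                      # demo batch of CT slices
y = (torch.rand(2, 1, 256, 256) > 0.5).float()      # placeholder labels
optimizer.zero_grad()
loss = dice_loss(model(x), y)
loss.backward(); optimizer.step()
```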
Test of Hololens 2 Feasibility In order to test whether our proposed method is accurate on HoloLens 2, we designed a separate accuracy experiment specifically for HoloLens 2. A solid sponge brick was used for flower arrangement, and the target point was set as the middle point at the bottom of the sponge brick. A virtual sponge brick model of the same size was created in Unity, and the target point, insertion point, and guide path were set. The experimenters wore Hololens 2 and inserted the real navigation stick into the sponge brick through the virtual guide path seen in the Hololens 2 screen to test whether it could accurately reach the target point. To perform the experiment, the experimenter needed to superimpose the real sponge brick and the virtual model (Figure 6), align the navigation stick with the path of the virtual guide (Figure 6b), insert the sponge brick along the guiding direction, and pass the navigation stick through the sponge bricks. The difference between the "true target position" and "the final position where the experimenter arrived at the real sponge brick using the navigation stick" was measured to calculate the error distance.
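A minimal sketch of how such an error distance and the summary statistics reported below (minimum, median, maximum, quartiles) could be computed; all coordinates here are hypothetical example values in millimetres.

```python
# Euclidean error between the true target and the position reached in each trial,
# followed by simple box-plot statistics. Values are placeholders.
import numpy as np

true_target = np.array([12.0, 48.5, 30.0])
reached = np.array([
    [12.9, 49.1, 30.4],
    [11.2, 47.9, 29.5],
    [13.1, 48.0, 30.8],
    [12.4, 49.4, 29.2],
    [11.5, 48.8, 31.0],
])
errors = np.linalg.norm(reached - true_target, axis=1)   # one error per trial
print("errors (mm):", np.round(errors, 3))
print("min/median/max:", errors.min(), np.median(errors), errors.max())
print("lower/upper quartile:", np.percentile(errors, [25, 75]))
```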
Results of the Virtual Object Automatic Scanning The virtual object automatic scanning error (Figure 7a) was calculated by determining the distance between the axis plane that is automatically selected by the system and the axis plane where the actual target point is located. The average automatic scanning error was 1.008 mm with a deviation of 0.001 mm. The minimum and maximum errors were 0.978 mm and 1.039 mm, respectively. Due to the small deviation, the lower quartile, median, and upper quartile are represented by a straight line in the box plot. The box plot indicates two outliers with values of 0.978 mm and 1.039 mm, respectively. The target point error (Figure 7b) was determined using Avizo software. To facilitate alignment in Unity, the anchor points of the two virtual objects were adjusted to the same position. Subsequently, the distance between the real target point and the virtual target point was used to obtain the target point error, which was found to be 1 mm with a deviation of 0.1 mm. The minimum and maximum errors were 0.495 mm and 1.21 mm, respectively, while the lower quartile, median, and upper quartile were 0.88 mm, 0.98 mm, and 1.12 mm, respectively. Stability tests (Figure 7c) were conducted in a total of 20 phantom trials. A script was written to record the center position of the scalp generated by the image target of the head every 1 s for 1 min. The stability was measured in 3 dimensions and normalization was performed afterward. The average stability and deviation were 0.076 mm and 0.052 mm, respectively. Results of the Machine Learning to Predict and Recommend Surgical Target The proposed system utilizes machine learning (specifically, the U-Net model) to predict and recommend surgical targets.
This system was tested on 10 hydrocephalic patients, and the results indicated an average sensitivity of 93.85%, specificity of 95.73%, and accuracy of 99.93% in predicting the location of hydrocephalus. The U-Net model generates a binary mask output, with ones indicating the ventricular region and zeros indicating other parts of the image. By comparing the output prediction to the labeled reference, true positive (TP), true negative (TN), false positive (FP), and false negative (FN) values can be computed. This allows for the calculation of sensitivity (TP/(TP + FN)), specificity (TN/(TN + FP)), and accuracy ((TP + TN)/(TP + FP + TN + FN)). Notably, the labeled reference of the ventricular region is available for experimental data, enabling the calculation of these indices in a similar manner. The system enhances location prediction for hydrocephalus in terms of accuracy, sensitivity, and specificity. As shown in Table 3, the hydrocephalus prediction function presented in this paper can more accurately predict the location of hydrocephalus and provide surgeons with better surgical target recommendations, regardless of accuracy, sensitivity, and specificity. Results of the Proposed System The proposed approach exhibits fewer image navigation limitations and lower costs than optical navigation. A virtual object automatic scanning method is proposed to reduce calibration time in the preoperative stage, taking only 4 s. This represents an 87%, 96%, and 98% reduction in time compared to Konishi Table 4 shows that the proposed image positioning method offers cost savings in comparison to the other three positioning methods, while also improving registration time and target accuracy. The registration time of 4 s is achieved through the virtual object automatic scanning method, while the accuracy of 1 ± 0.1 mm is obtained from the "3.1. Prosthesis experiment." The proposed system provides superior target accuracy performance, primarily due to the virtual object automatic scanning method that offers accurate reference points, and all functions are performed within the system. This indicates that compared to other research methods, there are no external factors that may impact the accuracy of the target. Table 5 presents the accuracy results of five experiments conducted by five male experimenters aged 22 to 25, indicating that the impact of visual assistance with HoloLens 2 can differ significantly among users. Consequently, software feedback is essential for the navigation stick assistance method. However, as illustrated in Figure 8, the augmented reality performance of HoloLens 2 in tracking spatial images lacks stability, leading us to abandon the use of HoloLens 2 in the clinical feasibility test. For the clinical feasibility testing, a Microsoft Surface Pro 7 tablet was ultimately used.
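The pixel-wise metrics defined at the start of this results discussion (sensitivity, specificity and accuracy from TP, TN, FP, FN) can be computed directly from the predicted and labelled masks; a minimal sketch with placeholder masks:

```python
# Sketch of the pixel-wise evaluation described above. Masks are random
# placeholders, not real segmentations.
import numpy as np

def segmentation_metrics(pred: np.ndarray, label: np.ndarray) -> dict:
    pred, label = pred.astype(bool), label.astype(bool)
    tp = np.sum(pred & label)
    tn = np.sum(~pred & ~label)
    fp = np.sum(pred & ~label)
    fn = np.sum(~pred & label)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

rng = np.random.default_rng(0)
pred = rng.random((512, 512)) > 0.9
label = rng.random((512, 512)) > 0.9
print(segmentation_metrics(pred, label))
```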
Image Positioning vs. Optical Positioning Compared to the current optical navigation method, the proposed method in this paper offers significant advantages in terms of intuition, mobility, accessibility, and cost-effectiveness. Most current image-guided surgical navigation methods combine optical navigation with a navigation stick tracked by a cursor ball and display navigation information on an external screen. Regarding intuition, our proposed method provides surgeons with intuitive spatial information through AR perspective display. In terms of mobility, the current optical navigation method requires a specific operating room, whereas our system can be used for surgical navigation in different fields with only a 10-min setup time. Furthermore, the proposed system is more accessible and cost-effective than the optical navigation method due to its lower equipment and practice costs. Augmented reality is an excellent solution for guiding surgery in areas with insufficient medical resources. Our Method vs. Other Method Currently, several advanced augmented reality methods show promise for surgical navigation [36][37][38][39], but they still have limitations. Gavaghan et al. [36] used a projector to project liver blood vessels onto the liver surface, which has good accuracy but lacks deep spatial information and may not be suitable for guiding surgery. Kenngott et al. [37] proposed a method that provides three-dimensional anatomical information and has undergone clinical feasibility testing, but this method only offers viewing functions and lacks other auxiliary guidance. Heinrich et al. [38] and Hecht et al. [39] both provided visual aids for guided injections but lack feedback. In comparison to the current augmented reality methods for medical guidance [36][37][38][39], the proposed method in this paper exhibits significant advantages in accuracy, provision of anatomical information, stereoscopic display, path navigation, visual feedback, and clinical suitability, as outlined in Table 6. As a result, the proposed method outperforms the current methods in all aspects. Tablets vs. Smart Glasses The proposed system was implemented on a Microsoft Surface Pro 7 tablet and Microsoft HoloLens 2 smart glasses to compare their performance in terms of stability, flexibility, and information richness. The performance metrics are presented in Table 7. In terms of stability, the Surface Pro 7 displays the head model and maintains the navigation stick's stability well in a fixed position. On the other hand, the HoloLens 2 shows good stability for the head model in a fixed position, but its field of view changes with user movement, resulting in increased offset error.
Additionally, the HoloLens 2 exhibits noticeable visual instabilities when tracking the navigation stick in motion. Concerning flexibility, the Surface Pro 7 requires an additional stand that limits the viewing angle, while the HoloLens 2 has superior flexibility. Regarding comfort, the Surface Pro 7 is more comfortable as physicians do not need to wear the HoloLens 2, which can increase head weight, eye pressure, and visual interference. Regarding information display richness, the HoloLens 2 can set windows in space to display DICOM information in a larger, clearer, and more persistent way. In contrast, the Surface Pro 7 can only display information on a single screen. Moreover, to avoid blocking the surgical guidance image, the DICOM image display must be canceled after selecting the target on the Surface Pro 7, preventing simultaneous display. Although multiple DICOM images can be superimposed and displayed on the head at the same time, the visual effect is not comfortable. Directly locking and displaying the target in an AR manner is a relatively simple visual effect after judging the target position. Therefore, despite the HoloLens 2's flexibility and complete information display advantages, guidance accuracy is the most critical factor, making the Surface Pro 7 the ideal platform for implementation. The Impact of Viewing Angle Changes on the Coordinates To discuss the intrinsic error of Vuforia, the influence of the head model and the tip position of the navigation stick on the coordinate position at different visual angles was tested. Figure 9 presents the results of testing commonly used navigation stick angles (60°~140°) and head model recognition, graphing their influence on coordinates under reasonable usage angles (60°~120°). The X coordinate of the navigation bar is found to be significantly affected by the viewing angle, but within the most frequently used range of angles (80°~100°), the error is only ±1 mm, and there is no significant effect on the Y and Z coordinates, with most errors outside the outliers within ±1 mm. Additionally, to examine the impact of changes in the viewing angle of the head model identification map, the system was tested in the range of 60-120 degrees in 10-degree increments, as both the head model identification map and the camera's viewing angle are fixed values.
Sterile Environment for Surgery To ensure suitability for clinical applications, the proposed system must be able to function in a sterile environment. As such, a layer of the surgical cell membrane is covered on the identification map, and the tablet is wrapped in a plastic sleeve, allowing it to remain operational without compromising sterility. Clinical Significance and Limitation In summary, previously proposed methods have not provided a comprehensive solution for accurately guiding surgical targets, scalpel entry points, and scalpel orientation in brain surgery. The proposed approach aims to address these shortcomings. To ensure ease of portability, a tablet PC is used as the primary AR device in the proposed system. The DICOM data processing takes approximately one hour to complete the system update. Surgeons can use the proposed system before and during surgery for real-time guidance on surgical targets, entry points, and scalpel paths. In terms of precision, the proposed system has an average spatial error of 1 ± 0.1 mm, which is a significant improvement over many previous methods. The system achieves improved recognition accuracy, sensitivity, and specificity, with values of 99.93%, 93.85%, and 95.73%, respectively, marking a significant improvement over previous studies. Smart glasses are not recommended for the proposed AR system due to their potential to introduce significant errors, as accuracy and stability are important considerations. Conclusions This study combined virtual object automatic scanning with deep learning and augmented reality to improve surgeons' surgical procedures. U-Net was utilized for deep learning to predict the location of hydrocephalus, reducing pre-operative time requirements and increasing surgical precision. Augmented reality overlays virtual images directly on the patient's head, allowing for intuitive guidance in locating surgical target points and trajectory guidance for improved accuracy in EVD surgery. The proposed system also employed color coding for angle correction at the surgical entry point, allowing for more intuitive and accurate operations. The developed EVD surgical navigation system using virtual object automatic scanning and augmented reality shows improved accuracy, registration time, and surgical costs. Future work will focus on exploring the use of smart glasses for collaborative operations and conducting clinical trials for intraoperative navigation to enhance the clinical utility of the proposed system.
Informed Consent Statement: Not applicable for studies not involving humans. Data Availability Statement: The statistical data presented in this study are available in Tables 3 and 4. The datasets used and/or analyzed during the current study are available from the corresponding author upon request. These data are not publicly available due to privacy and ethical reasons.
8,789.2
2023-05-01T00:00:00.000
[ "Medicine", "Computer Science", "Engineering" ]
Financial capacity of provinces in Sumatra during regional autonomy era The regional autonomy policy gives greater authority to regencies and cities to take responsibilities both in terms of regional revenue or regional expenditure. Ideally, all the local government expenditures can be fulfilled with their Local Own-source Revenue, so that the region fully becomes an autonomous region which means that the dependence of central government to local decreases. According to the percentage of direct expenditure to total regional revenue of provinces in Sumatra, the average amount is less than 50 percent of regional income. Meanwhile, more than 50 percent of the total regional revenue of provinces in Sumatra is used for indirect expenditure. The average degree of fiscal decentralization of provinces in Sumatra from 2015 to 2017 has amount about 37 percent. It means that the fiscal decentralization degree was low, thus the budgeting performance was poor/bad. Financial independence level of provinces in Sumatra is 57.36 percent which means that the regions are considered to be independent enough in implementing regional autonomy. Financial dependence level of region is about 63.55 percent which means that the fiscal dependence of provinces in Sumatra is great enough and its budget performance is not so good. This shows that the region dependence in Sumatra on the aids from central funds is still very much felt and noticeable. INTRODUCTION The regional autonomy policy gives greater authority to regencies and cities to take responsibilities both in terms of regional revenue or regional expenditure. According to Altunbas & Thornton (2011), several economists have made the case for fiscal decentralization (regional autonomy) as a means to promote better governance Ideally, all the local government expenditures can be fulfilled with their Local Ownsource Revenue (PAD), so that the region fully becomes an autonomous region which means that the dependence of central government to local decreases. However, in reality, after implementing fiscal desentralization there was an increase in the role of transfer mechanism from central government through balancing fund (Mahi, 2005). In addition, Thornton (2007) stated that results from a cross section study of 19 OECD member countries suggest that when the measure of fiscal decentralization is limited to the revenues over which sub-national governments have full autonomy, its impact on economic growth is not statistically significant Mardiasmo (2002) stated that before the autonomy era, there was a high expectation from regional governments to be able to develop their region based on their own ability and motivation. However, the real condition is getting further from year to year. There are fiscal dependency and subsidies as well as central government support as the manifestation of the inability of PAD in financing regional expenses. Sumatra is one of the four biggest islands in Indonesia. This island has an area of 473,481 km 2 with 10 provinces and 55,700,000 inhabitants in 2017, which is the second largest population after Java. In terms of PAD in 2015, North Sumatra Province has the highest PAD at 4.883 trillion rupiah, followed by Riau Province at about 3.476 trillion rupiah, and South Sumatra Province at about 2.534 trillion rupiah. Bengkulu Province has the lowest Local Own-source Revenue in Sumatra at 701.330 billion rupiah. 
In 2016, the highest PAD was owned by North Sumatra at 4.630 trillion rupiah or it was decreasing by 5 percent from the earlier year, Riau Province's PAD was 3.495 trillion rupiah, and South Sumatra Province's PAD experienced an increase of 8 percent compared to the previous year. In terms of the total amount of balancing fund provided by the central government, Riau got the highest balancing fund in Sumatra in 2015 amounting to 2.548 trillion rupiah, followed by South Sumatra at 2.329 trillion rupiah and Aceh at 1.561 trillion rupiah. In 2016, the highest balancing fund was still given to Riau of 4.085 trillion rupiah, followed by South Sumatra at 2.713 trillion rupiah and West Sumatra at 2.649 trillion rupiah. In 2016, the biggest regional expenditure was owned by Aceh at about 12.874 trillion rupiah and North Sumatra at about 9.950 trillion rupiah. Based on the above conditions, it is considered that regional autonomy implementation will carry some consenquences to the regional financial capacity or PAD that will differ according to its region capabilities and potential. Regional expenditure that increases every year requires each region to increase their income. This case is also applied to provinces in Sumatra. From 10 provinces in Sumatra, it is found that revenues from PAD have not been able to finance region expenditure and the shortfall is covered by central government through balancing funds given to all provinces in Sumatra and from other regional revenues. As statement aforementioned, it can be concluded that the problem in this research are: 1) How is the budgeting allocation and regional expenditure of provinces in Sumatra?; 2) How is the financial capacity of the region, including degree of fiscal decentralization and financial independence/ dependence level of region? METHODS The data used in this study are secondary data, namely time series data from 2013-2017. Main data were obtained from relevant agencies, such as Directorate-General of Regional Fiscal Balance (Ministry of Finance of the Republic Indonesia), Indonesian Central Bureau of Statistics and Indonesian National Development Planning Agency. The descriptive statistics were used in analyzing the data. This research uses measurement of financial capacity that includes of: Degreee of fiscal desentralization calculated as: DDF = PAD / TPD where: DDF = degree of fiscal decentralization PAD = local own-source revenue TPD = total of regional income The measurement criteria are as in the Table 1. Fiscal capacity or degree of fiscal decentralization (DDF) is stated very low so that the budgeting performance is very low/bad as well. Fiscal capacity or degree of fiscal decentralization (DDF) is stated low, so that the budgeting performance is low as well. Fiscal capacity or degree of fiscal decentralization (DDF) is stated in medium, so that the budgeting performance is also medium or good enough. Fiscal capacity or DDF is stated high, so that the budgeting performance is stated high or good. The measurement criteria are as in the Table 2. The region is considered incapable The region is considered not independent enough The region is considered independent enough The region is considered independent The measurement criteria are as in the Table 3. Fiscal dependence is very little, which means that budgeting performance is excellent. Fiscal dependence is good enough, which means the budgeting performance is good enough. Fiscal dependence is great enough, which means the budgeting performance is not so good. 
Allocation of income and regional expenditure in Sumatra Prior to the era of regional autonomy, government expenditure consisted of routine expenditure and development expenditure, with routine expenditure greater than development expenditure. After the implementation of regional autonomy, routine expenditure of local governments still made a dominant contribution compared to development expenditure. Within routine expenditure, employee expenditure remained dominant, while within development expenditure the biggest role came from the transportation sector (Kusriyawanto, 2014). Before the implementation of regional autonomy in 2000, there were eight provinces in Sumatra. In terms of the proportions of direct and indirect expenditure to total regional income prior to regional autonomy, the allocation of direct expenditure was lower than that of indirect expenditure in 1997/1998. Regions whose direct expenditure allocation exceeded 50 percent were Jambi and Bengkulu, while the other regions allocated less than 50 percent to direct expenditure. In 1998/1999, even though the proportion of direct expenditure was still below 50 percent of total income, it increased slightly from the previous period, while indirect expenditure declined. The increase in the direct-expenditure proportion was due to a decrease in total income and in indirect expenditure (Table 4). The allocation of direct expenditure exceeded that of indirect expenditure in 1999/2000. Most provinces in Sumatra experienced an increase in direct expenditure, and the direct expenditure allocation in Sumatra reached 50 percent. Along with the increase in the direct expenditure allocation, the allocation of indirect expenditure declined. The increasing allocation of direct expenditure is a form of commitment between the central government and regional governments to further enhance regional development, and it also marked the start of the implementation of regional autonomy. In 2015-2016, there was an increase in the direct expenditure allocation of provinces in Sumatra; goods and services expenditure and capital expenditure accounted for half of direct expenditure. Regions with a large portion of direct expenditure were Aceh, Riau, Bengkulu, Bangka Belitung, and Riau Islands as developing provinces. The amount of direct expenditure illustrates the government's commitment to community development. Ideally, direct expenditure should take up 70 percent of total regional expenditure. Within the allocation of direct expenditure, goods and services expenditure has the largest share, followed by capital expenditure and employee expenditure. The decline in goods and services expenditure is accompanied by an increase in capital expenditure and employee expenditure; unfortunately, the increase in employee expenditure is greater than the increase in capital expenditure. Apart from direct expenditure, the other category of regional expenditure is indirect expenditure. The allocation of indirect expenditure is not equal across provinces in Sumatra and averages more than 50 percent of total income. Indirect expenditure in Sumatra was approximately 60.71 percent in 2015, 57.58 percent in 2016, and 58.8 percent in 2017.
This decline was due to a decrease in some components of indirect expenditure, such as grant expenditure and profit-sharing expenditure with provinces/regencies and cities. Even though three components of indirect expenditure declined, employee expenditure has the biggest portion of indirect expenditure and increases every year. The increase is caused by the growing number of employees, both regional civil servants and contract employees, which adds to employee expenditure. In terms of the percentage of direct expenditure to total regional income of provinces in Sumatra, less than 50 percent of regional income is used for direct expenditure. The large amount of indirect expenditure shows that the expenditure budget of each province in Sumatra is not yet on target. Ideally, the budget allocation for indirect expenditure is about 30 percent of the total expenditure allocation. The employee expenditure contained in the indirect expenditure allocation does not have a positive impact on regional expenditure because it does not contribute to community development. Regional expenditure should not only be spent on employee salaries and unproductive events, but should be directed towards activities that have a direct impact on the development of local communities. Degree of fiscal decentralization Prior to the implementation of fiscal decentralization in 1999-2000, local governments still relied on and mobilized the existing local revenues and expenditure budget to enhance economic development. This was due to the central government's role in regulating and controlling regional government budgeting. Entering fiscal decentralization in 2001-2004, the effect of decentralization on economic growth increased, because local governments were given the authority to utilize their own financial resources, supported by balancing funds from the central government (Kharisma, 2013). Ideally, through regional autonomy, each region is expected to be able to finance itself in accordance with its Local Own-source Revenues. Prior to regional autonomy, local own-source revenues in all provinces in Sumatra made a very small contribution. In 1997/1998, the average degree of fiscal decentralization of provinces in Sumatra was about 28.97 percent. It declined to 24.12 percent in 1998/1999 and to 22.85 percent in 1999/2000. During these years before the regional autonomy era, DDF declined due to low income from locally generated revenue (own-source revenue) such as regional taxes and retribution. The DDF of provinces in Sumatra before regional autonomy was very low, with a DDF ratio of less than 25 percent (Table 5), which means that the budgeting performance was poor and Local Own-source Revenues made a very small contribution to total regional income. The degree of fiscal decentralization after regional autonomy increased, from a very low to a low budgeting performance. From Table 2 it can be stated that the DDF of provinces in Sumatra during 2015-2017 averaged 37 percent, which means that the DDF is low and the budgeting performance is still low. This can be interpreted to mean that the performance of provinces in Sumatra in implementing autonomy is still very low. The DDF was about 40.28 percent in 2015, 36.37 percent in 2016, and 33.44 percent in 2017. This means that Local Own-source Revenues were still much lower than total regional income.
Even though the DDF of provinces in Sumatra increased after regional autonomy, the budgeting performance is still low, which means that Local Own-source Revenue still makes a low contribution to total regional income. As stated by Setiaji & Hadi (2007), the contribution of Local Own-source Revenue to regional expenditure during the regional autonomy era is not much better than before the autonomy era, because of the strong dependence of local governments on the central government. However, the growth of PAD during the autonomy era compares favorably with the period before autonomy. Financial independence level of regions in Sumatra The financial independence level of a region is the ratio between Local Own-source Revenue and the total transfer revenue from the central government. The bigger the value of TKD, the more independent the region is. Local Own-source Revenue before regional autonomy was smaller than in the early period of regional autonomy, which means that PAD increased considerably in the era of regional autonomy. The level of regional independence of provinces in Sumatra before regional autonomy was considered not independent enough. This can be seen from the average ratio of financial independence, which was about 47.78 in 1997/1998, 35.15 in 1998/1999, and 35.30 in 1999/2000. A TKD value of less than 50 percent indicates that Local Own-source Revenue is still very small compared to the transfer funds from the central government. The support from the central government is used by the regional government to finance the portion of total regional expenditure that cannot be met by Local Own-source Revenue (Table 6). The level of financial independence of provinces in Sumatra during 2015-2017 (after regional autonomy) on average declined every year: it was 66.35 percent in 2015, 56.79 percent in 2016, and 48.94 percent in 2017. The TKD value shows that the regions are considered independent enough, which means that the role of the central government has been decreasing and PAD's ability to finance regional development is good enough. Based on the financial independence level in 2015, there were two provinces with a TKD value of more than 100 percent, North Sumatra and Riau, which were categorized as independent. This is because the PAD of both provinces can support their own regional development. The less independent regions are Bengkulu, Bangka Belitung, and Aceh. These three provinces could not relinquish their fiscal dependence on central government transfer funds because their PAD is smaller than the transfer funds from the center. After implementing regional autonomy, the financial independence of provinces in Sumatra grew positively and is categorized as independent enough. The growth in financial independence is caused by the increase in the Local Own-source Revenue of provinces in Sumatra. This finding is in line with the research conducted by Frediyanto & Purwanti (2010), which concluded that there was a significant difference in regional income before and after regional autonomy, except for the PAD ratio. After the implementation of regional autonomy, local governments tried to increase their PAD (Local Own-source Revenue) by raising taxes and retribution. Local governments in the regional autonomy era are able to increase their Local Own-source Revenues; nevertheless, the increase in PAD does not make a higher contribution to the APBD.
Prior to regional autonomy, it was found that most regions (88.57 percent) had low financial capacity and still relied on funding from the central government to finance their capital expenditure. This condition persisted even after the implementation of regional autonomy, and the number of regions with low financial capacity even escalated (from 88.57 percent to 91.43 percent). Financial dependence level of regions in Sumatra Before the implementation of regional autonomy, financial dependence on the central government was clearly felt; the centralistic financial system did not motivate provinces to explore the large potential of their regions that could be used as Local Own-source Revenue (Halim, 2001). From Table 7, local fiscal dependence on the central government was still strong, as can be seen from the average TKtD, which was between 51 and 75 percent. In 1997/1998, the regional fiscal dependence ratio was 64.52 percent; it increased to 70.25 percent in 1998/1999, and in 1999/2000 the ratio was 69.82 percent. The increase in the ratio shows that the budgeting performance of local governments kept decreasing. Almost all provinces in Sumatra had a strong financial dependence on the central government, with TKtD values above 50 percent. This was caused by the absence of motivation from local governments to explore and make use of their regional potential. After the implementation of regional autonomy, local financial dependence during 2015-2017 increased every year. Almost all provinces in Sumatra relied on transfer funds from the central government as their source of income. In 2015, the TKtD value increased to 60 percent, rose to 63.62 percent in 2016, and reached 67.05 percent in 2017. This means that the fiscal dependence in Sumatra is fairly high and the budgeting performance is not so good. The high level of regional financial dependence on transfer funds from the central government shows that regional income sourced from PAD could not make a large contribution to total regional income. Kuncoro (2007), Amril, Erfit, & Safri (2015), and Ekawarna (2017) provided empirical evidence of the flypaper effect phenomenon, namely the high financial dependence of local governments (regencies/cities) on income from the central government in the form of balancing funds (DAU, DAK, and DBH). The small amount of PAD of provinces in Sumatra is not solely the fault of local governments, because the sources of PAD that can be used are still limited. Sources of potential income are managed directly by the central government, while on the other hand the effort to increase PAD through taxes or regional retribution is not effective because it becomes a burden on the community (Adi, 2012). Conclusion The implementation of regional autonomy is expected to increase the efficiency, effectiveness, and accountability of the public sector in Indonesia. It provides a great opportunity for regions to improve their financial capability. Regions are required to look for alternative sources of development budgeting, without lowering expectations of aid and revenue sharing from the central government, and to use public funds according to their priorities and community aspirations. Fiscal decentralization is the granting of authority to the regions to explore sources of revenue, the right to receive transfers from higher levels of government, and the power to decide on routine expenditure and investment. The most important factor in fiscal decentralization is the extent to which the regional government is given the authority to decide on its own expenditure allocation.
Another factor is the region's capacity to improve its PAD. An increase in PAD as the budgeting source for the implementation of regional autonomy will determine the success of regional development in the future. The research findings show that: (1) the percentage of direct expenditure to total regional income of provinces in Sumatra is less than 50 percent on average; (2) based on the degree of fiscal decentralization, during 2015-2017 provinces in Sumatra averaged 37 percent, which means that the degree of fiscal decentralization is low and the budgeting performance is low. The local financial independence level of provinces in Sumatra was 57.36 percent, which means that the regions are considered independent enough in implementing regional autonomy. The financial dependence was about 63.55 percent, which means that the fiscal dependence of provinces in Sumatra is fairly high and the budgeting performance is not so good. RECOMMENDATION Provincial governments in Sumatra have to be more careful in managing regional finances, especially in allocating funds between direct and indirect expenditure. Ideally, the portion of direct expenditure is about 70 percent of total expenditure, particularly for capital expenditure, while indirect expenditure should be focused more on revenue-sharing expenditure and financial aid to provincial, regency, city, and village governments so that regional development is successful and equitable. In the era of regional autonomy, local government dependence on the central government should decrease. Provincial governments in Sumatra must be able to gradually reduce their dependence on higher levels of government. Efforts must be made to increase revenues by exploring their own sources of PAD, managing natural resources well, and promoting investment (in collaboration with outside parties investing in regional development), including human resource development. Investment activities are expected to make a very large and positive contribution to local tax revenues in particular and PAD income in general.
Uncertainty Analysis for RadCalNet Instrumented Test Sites Using the Baotou Sites BTCN and BSCN as Examples : Vicarious calibration and validation techniques are important tools to ensure the long-term stability and inter-sensor consistency of satellite sensors making observations in the solar-reflective spectral domain. Automated test sites, which have continuous in situ monitoring of both ground reflectance and atmospheric conditions, can greatly increase the match-up possibilities for a wide range of space agency and commercial sensors. The Baotou calibration and validation test site in China provides operational high-accuracy and high-stability vicarious calibration and validation for high spatial resolution solar-reflective remote-sensing sensors. Two sites, given the abbreviations BTCN (an artificial site) and BSCN (a natural sandy site), have been selected as reference sites for the Committee on Earth Observation Satellites radiometric calibration network (RadCalNet). RadCalNet requires sites to provide data in a consistent format but does not specify the required operational conditions for a RadCalNet site. The two Baotou sites are the only sites to date that make spectral measurements for their continuous operation. One of the core principles of RadCalNet is that each site should have a metrologically rigorous uncertainty budget which also describes the site’s traceability to the international system of units, the SI. This paper shows a formalized metrological approach to determining and documenting the uncertainty budget and traceability of a RadCalNet site. This approach follows the Guide to the Expression of Uncertainty in Measurement. The paper describes the uncertainty analysis for bottom-of-atmosphere and top-of-atmosphere reflectance in the spectral region from 400 to 1000 nm for the Baotou sites and gives preliminary results for the uncertainty propagating this to top-of-atmosphere reflectance. Introduction Vicarious calibration methods-where satellite sensor observations are calibrated or monitored using ground observations of surface and atmospheric properties-have been used for radiometric satellite sensors operating in the solar-reflective spectral region (400-2500 nm) since the 1980s [1]. The Baotou site is a comprehensive calibration test site with many different types of target for radiometric and geometric calibration and validation of both spectrally-reflective sensors and synthetic aperture radar (SAR) sensors. On the Baotou site there are four targets set up with permanent instrumentation to allow for continuous measurement of ground spectral reflectance (along with atmospheric parameters). Three of these targets are artificial targets, separately made of black, grey, and white gravel, and the fourth target is a natural target within a desert environment. The grey permanent artificial target was chosen to be one of the initial four RadCalNet sites in 2014 and the natural sandy site went through a formal process during 2019 and was accepted as a new RadCalNet site in 2020. Grey Permanent Artificial Target and Sandy Site To meet the requirements of wide dynamic range and stability of the ground targets in sensor radiometric calibration, multi-greyscale permanent artificial targets were built in the Baotou site. Figure 1 shows the image (from Google Earth) of the Baotou site area. In the permanent target region, the grey-scale artificial target is composed of two white, one grey, and one black uniform gravel squares, each of which covers an area of 48 × 48 m. 
The grey target has been incorporated into RadCalNet as BTCN (marked by the red square in Figure 1a) [8]. A sandy site (300 × 300 m), marked by the blue square in Figure 1a, has been incorporated as BSCN. Site Instrumentation for RadCalNet At the Baotou site, the surface-reflected radiance is measured by several automatic observation systems of ground-reflected radiance (one on the grey target and two on the sandy site), based on commercial Colorimetry Research (CR) series spectrometers produced by Colorimetry Research, Inc. The systems cover the spectral region from 380 to 1080 nm with a spectral resolution (i.e., full width at half maximum, FWHM) of 2 nm, have a nominal 3° field of view, and are mounted at a height of 2.5 m (BSCN) or 2.0 m (BTCN) (see Figure 2). They observe the ground at nadir every 2 min. Aerosol and water vapor atmospheric parameters are obtained from the AERONET [9] sun photometer, which is near the BTCN grey target and 1800 m away from the BSCN sandy site. The sun photometer is a Cimel CE318 instrument operated by the Baotou site operator, and atmospheric data are available from the AERONET website [10]. An all-sky imager has been deployed at the Baotou site to acquire cloud amount, cloud images, and cloud height. A Metrological Approach It is core to the RadCalNet philosophy that the sites are SI traceable and have uncertainty budgets determined in a metrologically rigorous manner. The metrological approach is needed to ensure inter-site interoperability, to provide long-term stability, and to enable the sites to be compared with the satellite observations. Metrological traceability relies on three core principles: 1) an unbroken chain of calibrations back to the reference SI units, 2) the propagation of uncertainties through that chain, and 3) comparisons performed to validate uncertainty statements. Here, we follow metrological best practice with our analysis. The uncertainty analysis is performed according to the GUM [7] and its supplements, with the Law of Propagation of Uncertainties used to propagate uncertainties for the laboratory calibration and field operation of the instrument, and Monte Carlo methods (GUM Supplement 1; [7]) providing uncertainties associated with the atmospheric radiative transfer. As these sites operate continuously, we also need to consider the error correlation between different observations. The error is the unknown difference between the measured value and the conceptual true value. While this error is unknown, we can evaluate the associated uncertainty (the dispersion, around the measured value, of values that could reasonably be attributed to the measurand), and we can evaluate the error correlation, i.e., to what extent the unknown errors are common or independent between two observations. It is important to understand this to appropriately combine multiple observations. For RadCalNet data we need to consider error correlations between the two instruments on the BSCN site and between those and the instrument on the BTCN site, and also to understand the error correlation between multiple measurements by the same instrument. Finally, we need to consider the error correlation between the measured values in different spectrometer spectral channels. Here we adapt a framework developed for documenting the uncertainty associated with satellite observations [11].
That approach, developed from metrological principles, involves the following steps: 1) We define the traceability chain and the measurement function for the observed measurand; 2) we present an "uncertainty tree diagram" that shows the effects (sources of uncertainty) that influence our measured value and how these propagate to the measured value; 3) for each effect we create an 'effects table' that documents the uncertainty, the error correlation structures on different dimensions, and the sensitivity coefficient (that translates the uncertainty associated with that effect into an uncertainty associated with the measurand), and 4) we provide a combined standard uncertainty associated with all effects, with independent effects (no error correlation), with common effects (full error correlation), and with structured effects (partial error correlation). With the satellite sensors considered in [11], the error correlation structures were provided for within scanlines, between scanlines, and between orbits. Here we consider error correlation structures between observations of a single spectrometer, between spectrometers, and between wavelengths. Principles of Radiative Transfer To convert the observed radiance into reflectance, and to propagate reflectance to TOA, atmospheric radiative transfer must be considered. Figure 3 gives a simplified (ignoring multiple scattering) visualization of the relevant light paths. The red lines represent the illumination of the Baotou site. Some light illuminates the Baotou site directly from the sun. A proportion of this is lost through atmospheric scattering on the way to the site. The site is also illuminated by the sky through light that is scattered in the atmosphere onto the site (sky diffuse) and light which has reflected off nearby locations and scattered onto the site (sky scatter). We use the radiative transfer model MODTRAN-5 [12] to estimate both the direct solar irradiance (direct incoming beam minus the lost irradiance) and the sky irradiance (the combination of sky diffuse and sky scatter). When propagating to top of atmosphere we must consider light that reaches the satellite directly, the loss of radiation through scattering on the upward path, and light that has scattered into the beam from the sun (without reaching the ground) or from other parts of the ground background. Again, MODTRAN-5 is used for this evaluation. Note that as a default MODTRAN-5 assumes that the surrounding ground has the same reflectance as the area of interest. Where this is not the case (e.g., for the grey site BTCN, which is an artificial target), there is an "adjacency effect" that must be separately considered (see Section 4.6). Overview of Traceability The observational spectrometers at the Baotou RadCalNet sites measure upwelling radiance from the surface. This upwelling radiance is converted to ground reflectance using the measurement equation: ρ_gnd(θ, t; λ) = π L_gnd(t, λ) / (E_sun(θ, t; λ) + E_sky(t, λ)) + 0 (1) Here ρ_gnd(θ, t; λ) is the observed ground reflectance (which is actually a hemispherical-conical reflectance factor [13]) at time t, for solar zenith angle θ, and at wavelength λ. This is calculated from the measured ground radiance L_gnd(t, λ) and the calculated solar, E_sun(θ, t; λ), and sky, E_sky(t, λ), irradiances. The "plus zero" term follows the approach described in [11], as representing the extent to which this measurement model approximates reality.
Here it represents the assumption that for a spectrometer we can ignore the spectral bandwidth of observation and assume the radiance is measured "at a wavelength" rather than "integrated within a spectral band". It also represents the assumption that the downwelling irradiance can be simply calculated as the sum of the solar and sky irradiances calculated from the MODTRAN-5 model. This ground reflectance and its associated uncertainty are the main product provided to RadCalNet. RadCalNet performs its own propagation to top-of-atmosphere reflectance and provides that to users through the portal. Separately, AIR provides a top-of-atmosphere reflectance product matched to the spectral bands of desired satellite sensors to commercial and state customers. In this paper we concentrate on the evaluation of uncertainty for the ground reflectance; however, we give some indication of how this is propagated to top-of-atmosphere in Section 4.6. Traceability to SI comes from the calibration of the spectrometers used to measure ground radiance. These were calibrated at the Chinese national metrology institute, NIM, against primary standards. As a national metrology institute, NIM participates in the "Mutual Recognition Arrangement" [14]. The Mutual Recognition Arrangement is an agreement between the world's metrology institutes to ensure global consistency of the SI. Institutes participate in regular international comparisons that are operated under strict procedures (e.g., results are submitted to a pilot, who is the only institute that sees all results before publication) and are either formally accredited by standards agencies or peer reviewed by equivalent international institutes through formal audits of measurement procedures, analysis protocols, and uncertainty budgets. In this case, NIM calibrated the spectrometers against a radiance source that had been created from an FEL lamp (FEL is a designation for the type of lamp and not an abbreviation) illuminating a diffuser panel. The spectral irradiance of the lamp, E_FEL(λ), had been calibrated by direct comparison with a high-temperature blackbody. The spectral radiance factor of the diffuser, β(λ), for a 0°/45° geometry (that is, when the lamp illuminates it at 0° incidence angle and the spectrometer views it at 45° observation angle) was calibrated on NIM's primary reflectance facility. At each wavelength, the gain of the spectrometer was calculated as: G(λ) = L_source(λ) / S_cal(λ) + 0 (2) where G(λ) is the gain of the spectrometer, L_source(λ) is the radiance of the source made from the lamp and diffuser, and S_cal(λ) is the measured count signal during calibration. The plus-zero term again represents the approximations inherent in this model, here including the assumption that the spectrometer is linear and the assumption that the spectrometer's spectral bands are sufficiently narrow to be treated as though the measurement was at a single wavelength. When this instrument is used in the field, the measured field radiance is given by: L_gnd(λ) = G(λ) S_gnd(λ) + 0 (3) where G(λ) is the gain obtained in Equation (2) and S_gnd(λ) is the signal when observing the ground. Here the plus zero again represents the linearity and monochromatic assumptions. It also assumes that the gain in the field is the same as the gain during calibration. This means that there is an assumption that the gain has not changed due to instrument ageing, transportation vibrations, or operational temperature differences.
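The following is a minimal numerical sketch of the measurement chain in Equations (1)-(3) as reconstructed above. All values (counts, source radiance, modelled irradiances) are invented placeholders rather than Baotou calibration data, and the unit conventions are assumptions.

```python
import numpy as np

# Minimal numerical sketch of Equations (1)-(3) as reconstructed above.
# All numbers are illustrative placeholders, not Baotou data.

wavelength = np.array([500.0, 600.0, 700.0])          # nm
L_source   = np.array([0.060, 0.070, 0.075])          # known source radiance, W m^-2 sr^-1 nm^-1 (assumed units)
S_cal      = np.array([12000.0, 15500.0, 16000.0])    # counts measured while viewing the source

# Equation (2): radiometric gain (radiance per count)
gain = L_source / S_cal

# Equation (3): field radiance from the counts measured over the target
S_gnd = np.array([9000.0, 13000.0, 14500.0])          # counts in the field
L_gnd = gain * S_gnd

# Equation (1): ground reflectance factor from the modelled downwelling irradiances
E_sun = np.array([1.10, 1.20, 1.05])                  # W m^-2 nm^-1, placeholder for MODTRAN-5 output
E_sky = np.array([0.25, 0.18, 0.12])
rho_gnd = np.pi * L_gnd / (E_sun + E_sky)

print(np.round(rho_gnd, 3))
```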
In practice, a gain difference has been observed due to temperature, which means that the error does not have an expected value of zero, and therefore Equation (3) has been modified to: L_gnd(λ) = G(λ) S_gnd(λ) f(T; λ) + 0 (4) where the temperature-correction factor is modelled as a polynomial in the temperature difference, f(T; λ) = a_1(λ) + a_2(λ)(T − T_ref) + a_3(λ)(T − T_ref)² (5) for a temperature T. The coefficients a_i(λ) (i = 1, 2, 3) have been established through an empirical fit to experimental data, and T_ref is the reference temperature, i.e., the temperature maintained during calibration, here 25°C. More details on this fit process are given in Section 4.3.1. The sun and sky downwelling irradiance terms in Equation (1) are calculated using MODTRAN-5 from the observation time and location, which determine the solar zenith angle, θ, and from the atmospheric parameters measured by the AERONET station, in particular the aerosol optical thickness and the water vapor column. MODTRAN-5 calculates the solar irradiance as: E_sun(θ, t; λ) = (cos θ / d²) ∫ E_0(λ′) τ(λ′) S_MDTN(λ − λ′) dλ′ (6) where E_0(λ) is the solar spectral irradiance at 1 astronomical unit based on the Thuillier solar irradiance model [15], d is the sun-earth distance at the time of observation, θ is the solar zenith angle, τ(λ) is the calculated atmospheric transmittance based on the atmospheric parameters, and S_MDTN(λ) is the assumed (normalized to unit area) spectral bandpass function within MODTRAN-5 (here a triangle with a base of 20 cm⁻¹). The sky irradiance is similarly calculated in MODTRAN-5 from the same solar irradiance model, the same atmospheric parameters, and the same spectral integral. Uncertainty Tree Diagram The uncertainty tree diagram is a conceptual diagram introduced in [11] which shows the origin of each term in the primary measurement equation (here Equation (1)). The diagram shows the sources of uncertainty that affect each term and gives the sensitivity coefficients, that is, the conversion factors that convert an uncertainty associated with an input quantity into the uncertainty associated with the measurand. Where an input quantity is itself calculated from its own input quantities, this is shown through additional sensitivity coefficients and equations. The sensitivity coefficients in any "branch" of the uncertainty tree can be multiplied together (chain rule) to provide the sensitivity of the primary measurand to the input quantity twig. Each source of uncertainty identified as a twig on this diagram should be evaluated both for magnitude and to understand the error correlation forms. The uncertainty tree diagram for the Baotou measurements of ground spectral reflectance is given in Figure 4, which is reproduced at larger scale in Figure A1 (see Appendix A). Uncertainty Associated with Laboratory Calibration of the Field Spectrometer: Spectral Calibration The spectrometers used at BTCN and BSCN were calibrated for wavelength accuracy and bandwidth using a mercury line source. The spectral radiance of the Hg source was determined, and from this, for "clean" lines with sufficient separation from other lines, a Gaussian distribution was fitted to the observed radiance. This provided an estimate of the peak wavelength, which was compared to the Hg line wavelengths in air, and an (approximate) estimate of the instrument bandwidth. Note that future work is planned to characterize the spectrometer's performance using a tunable laser system (see also Appendix B). This will provide improved information about the bandwidth and wavelength accuracy, which are limited here by the very small number of available cleanly separated spectral lines from the Hg source.
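As an illustration of the Hg-line analysis described above, the sketch below fits a Gaussian to a synthetic line profile to estimate the peak wavelength and an approximate bandwidth; the profile values are invented, and the use of SciPy's curve_fit is an illustrative choice, not necessarily the routine used at NIM.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the Hg-line fit: a Gaussian is fitted to the counts measured around a
# clean mercury line to estimate the peak wavelength (wavelength accuracy) and the
# standard deviation (an approximate bandwidth). The "measured" profile below is
# synthetic, for illustration only.

def gaussian(wl, amplitude, centre, sigma, offset):
    return amplitude * np.exp(-0.5 * ((wl - centre) / sigma) ** 2) + offset

hg_line_air = 546.07                    # nm, Hg green line (reference wavelength in air)
wl = np.arange(538.0, 552.0, 0.5)       # spectrometer wavelength grid, nm
counts = gaussian(wl, 9500.0, 544.4, 2.7, 120.0) + np.random.normal(0.0, 30.0, wl.size)

popt, pcov = curve_fit(gaussian, wl, counts, p0=[9000.0, 545.0, 2.0, 100.0])
amplitude, centre, sigma, offset = popt

wavelength_error = centre - hg_line_air   # negative => spectrometer reads short, cf. the ~1.7 nm offset reported below
print(f"fitted centre = {centre:.2f} nm, bandwidth sigma = {sigma:.2f} nm, "
      f"wavelength error = {wavelength_error:.2f} nm")
```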
Across the spectrometer, the bandwidth (the standard deviation of the Gaussian fitted to the measured values) was consistently between 2.5 and 3 nm. This is wider than the MODTRAN-5 bandwidth assumption used in the solar and sky irradiance calculations. MODTRAN-5 uses a triangular bandpass function with a half base width of 10 cm⁻¹, corresponding to 0.2 nm at 400 nm and 1 nm at 1000 nm. Note that the official RadCalNet product integrates over a 10 nm bandwidth. The wavelength error of the spectrometer was approximately 1.7 nm (the spectrometer measured a wavelength that was 1.7 nm shorter than the true wavelength). No correction has been made until now for this wavelength error. To account for the uncertainty associated with the measured reflectance due to the wavelength error, we need to consider both the impact on the calibration of the spectrometer gain using the calibration source and the impact on the field measurement. In both cases, the error can be approximated by ΔS(λ) ≈ (dS/dλ) Δλ, where the first derivative can be estimated numerically. Using a balanced numerical derivative, dS/dλ ≈ [S(λ + δλ) − S(λ − δλ)] / (2δλ), we could estimate the error on each measured spectrum. We then calculated the ratio of the field measurement to the laboratory measurement with and without correcting this error. The results are shown in Figure 5. To account for the period when this was not corrected, an uncertainty was applied to the measured signal that is 1.5% from 500 to 700 nm and from 780 to 900 nm, and ranges from 2% to 5% elsewhere. An uncertainty is used rather than an error correction because the exact shape of the curve in Figure 5 will depend on the specific spectrum measured for each observation. Uncertainty Associated with Laboratory Calibration of the Field Spectrometer: Radiometric Calibration The laboratory calibration of the field spectrometer is described by Equation (3). It is calibrated against a known-radiance source made from a lamp-illuminated diffuser panel. There are uncertainties associated with each term in the measurement equation, and additional uncertainties associated with assumptions implicit in the measurement equation. These assumptions include assumptions about source uniformity and instrument linearity. Lamp-Diffuser Panel Radiance The lamp was calibrated for spectral irradiance on NIM's primary facility through a direct comparison with a blackbody source. The uncertainties associated with this calibration have been documented previously and validated through comparisons as part of the Mutual Recognition Arrangement of national metrology institutes [14]. As is common for national metrology institute calibrations, the calibration certificate for the lamp provides uncertainties at the 95% confidence level. The standard uncertainty is obtained by dividing the certificate uncertainty by the certificate-provided coverage factor, here k = 2, for each wavelength value in turn. The lamp calibration was performed in 50 nm spectral steps and was interpolated to intermediate wavelengths using a cubic spline; the uncertainty was similarly interpolated. There are additional uncertainties associated with the lamp spectral irradiance at the time of the spectrometer calibration due to lamp ageing since calibration and the lamp current setting. The uncertainty associated with these quantities was estimated to be between 0.17% and 0.32% for all wavelengths, based on the annual stability of the standard lamp and the current setting of the power supply.
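A small sketch of how the lamp contribution can be assembled into a standard uncertainty, following the steps just described (dividing the k = 2 certificate uncertainty, cubic-spline interpolation, and quadrature combination with the ageing/current-setting component), is given below; the certificate values are invented placeholders, not NIM data.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sketch of assembling the standard uncertainty of the lamp irradiance at the
# spectrometer wavelengths. All certificate values and uncertainties are placeholders.

cert_wl       = np.arange(400.0, 1001.0, 50.0)                  # certificate wavelengths, nm (50 nm steps)
cert_irr      = np.linspace(0.08, 0.28, cert_wl.size)           # certified irradiance, arbitrary placeholder values
cert_u_k2_pct = np.linspace(1.4, 0.9, cert_wl.size)             # expanded (k = 2) uncertainty from the certificate, %

# Convert the expanded uncertainty to a standard uncertainty (divide by k = 2)
cert_u_std_pct = cert_u_k2_pct / 2.0

# Interpolate irradiance and uncertainty to the spectrometer wavelength grid with a cubic spline
wl = np.arange(400.0, 1000.1, 2.0)
irr_interp = CubicSpline(cert_wl, cert_irr)(wl)
u_cal_pct  = CubicSpline(cert_wl, cert_u_std_pct)(wl)

# Combine (in quadrature) with the lamp-ageing / current-setting component (0.17-0.32 %)
u_age_current_pct = np.linspace(0.32, 0.17, wl.size)
u_lamp_pct = np.sqrt(u_cal_pct**2 + u_age_current_pct**2)

print(f"standard uncertainty of lamp irradiance: {u_lamp_pct.min():.2f}% to {u_lamp_pct.max():.2f}%")
```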
The diffuser was calibrated for spectral radiance factor on NIM's reflectance facility as an absolute calibration; again the 95% confidence level uncertainty provided for each wavelength was converted to a standard uncertainty. The spectral radiance factor is defined as the "ratio of the radiance due to reflection of the medium in the given direction to the radiance of a perfect reflecting diffuser identically irradiated" in the International Lighting Vocabulary [16] There are additional uncertainties associated with the distance setting and angular alignment of the lamp-diffuser pair. The lamp was set at 500 mm from the diffuser, which was the calibration distance for the lamp irradiance. However, because the diffuser is a bulk diffuser, the exact distance is difficult to set, and a residual distance uncertainty of 0.5 mm is assumed (from the inverse square law, a 0.5 mm uncertainty in 500 mm corresponds to a 0.20% uncertainty in irradiance). Similarly, the diffuser was calibrated for radiance factor for a 0°/45° geometry, and a spectrally-flat uncertainty of 0.20% was included to account for angular differences from this condition. Source Non-Uniformity for Spectrometer Calibration A source produced by a lamp-diffuser combination shows significant non-uniformity. Centrally with the lamp's filament, the diffuser is at its brightest and this drops away from that central position (towards the outside and corners of the diffuser). This bright area may not align with the defined optical axis. The radiance of the panel is calculated from the irradiance of the lamp-which has in turn been measured over a small area around the optical axis-and the reflectance of the panel. The spectrometer used at Baotou has a field of view of 3°, and was set at 250 mm from the panel giving an observational area with diameter 13 mm. The non-uniformity of the lamp-illuminated diffuser panel was measured at NIM. A CS-2000 spectroradiometer was used to measure the non-uniformity of diffuser, and the full field of view (FOV) of the spectroradiometer is 0.2°. The spectroradiometer at the same distance was used to scan the spectral radiance of the diffuser at 1 mm intervals over its surface. The measurement distance between the spectrometer and the diffuser is 250 mm, so the diameter of the field of view on the diffuser is 0.9 mm. The non-uniformity was calculated as the difference between an average radiance over the 13 mm area observed by the Baotou spectrometer and the point observation for brightest radiance. This was 0.29%. Combined Uncertainty Associated with Source Radiance The uncertainty associated with the source radiance comes from a combination of all the factors described above. These uncertainty effects are independent of each other and therefore inter-term error correlation does not need to be considered. However, we do still need to consider other error correlations. A single spectrometer is calibrated once a year against the lamp-diffuser panel radiance source. All uncertainties associated with that calibration will create an unknown calibration error that will be common for all measurements by the spectrometer in the field until the next calibration. Because the same lamp and diffuser panel were used to calibrate all three spectrometers in position at BTCN and BSCN, many uncertainty effects will also lead to a common error between those spectrometers. An error in the lamp irradiance or diffuser reflectance will be common to all measurements by all spectrometers. 
However, an error due to the alignment of the lamp-diffuser panel combination will be different for each spectrometer, as the system was entirely realigned between spectrometers. We also need to consider error correlation scales from wavelength to wavelength. Uncertainties associated with alignment, distance, and stray light (all geometrical) create errors that are fully correlated between wavelengths. Any error in the lamp current setting will also be fully correlated from wavelength to wavelength, although as shorter wavelengths have a larger sensitivity to current setting, the uncertainty will be larger at shorter wavelengths (see Section 3.3.3 in [11] for a similar example). The lamp and diffuser calibrations at NIM create partially correlated errors from one wavelength to the next, as some aspects of the primary calibration (e.g., alignments, reference blackbody temperature) are fully correlated and others (e.g., noise during calibration) are not correlated. As NIM repeated the calibration multiple times and took care to reduce the uncertainty associated with random effects as far as possible, the error correlation for these is "mostly fully correlated". The uncertainties associated with the reference source radiance are listed in Table 1, which acts as an "effects table" (to use the term introduced in [11]); for each effect the table identifies the term of the measurement model that it affects (the lamp irradiance, the diffuser radiance factor, or one of the +0 terms) and its error correlation structure. Notes to Table 1: (1) Lamp ageing is likely to have an error that is fully correlated between spectrometers as long as the lamp is not operated for a long time between the two spectrometer calibrations; the current setting is independent between spectrometers as it is dominated by short-term effects. (2) Source non-uniformity is dominated by effects that are common to all calibrations, as the spatial distribution stays basically constant for all alignments and the spectrometers have near-identical fields of view. (3) The lamp irradiance calibration and the diffuser calibration were performed on NIM's primary facilities; the uncertainty budget for these has components that are fully correlated between wavelengths and components that are independent between wavelengths. Spectrometer Noise During its Calibration The uncertainty associated with noise in the calibration signal was estimated from the relative standard deviation of 20 successive measurements of the stable calibration source. The uncertainty associated with spectrometer noise is very close to zero at most wavelengths and was estimated to be less than 0.1%. Spectrometer Nonlinearity Equations (2) and (3) assume that the spectrometer is linear. We determine a single radiance calibration factor (radiometric gain) from a calibration against a single source at one radiance level. However, spectrometers often suffer from nonlinearity [17], usually caused by processes in the read-out electronics. There are two types of nonlinearity that must be considered: radiance-level nonlinearity (the gain is not constant with changing radiance) and integration-time nonlinearity (the measured counts are not proportional to the integration time). To evaluate these nonlinearities, the spectrometer was tested using a special integrating sphere system designed by NIM. The spectrometer remained on its "auto ranging" setting, so the integration time varied dynamically. The integrating sphere is illuminated by two lamps, each with an adjustable aperture in front of it.
In this way the overall radiance of the sphere could be varied and each lamp in turn could be closed off. The sphere's spectral radiance was varied so that at 800 nm it ranged from 0.005 to 0.25 W m⁻² sr⁻¹. The normal calibration radiance at 800 nm is 0.071 W m⁻² sr⁻¹, and the grey target radiance at 800 nm ranges between 0.005 W m⁻² sr⁻¹ (lowest typical values) and 0.1 W m⁻² sr⁻¹ (highest typical values). The nonlinearity factor σ of the spectrometer is calculated as σ(λ) = R_AB(λ) / (R_A(λ) + R_B(λ)), where R_X(λ) is the response of the spectrometer for radiance level X, and X denotes the source with only lamp A, with only lamp B, or with lamps A and B operating simultaneously (a short numerical sketch of this check is given below). The results are shown in Figure 6 and show an increase in nonlinearity at the shortest and longest wavelengths. Because nonlinearity is not corrected in the measurements during calibration or in the field, we treat the observed nonlinearity as an uncertainty component. At the shortest wavelengths it is difficult to separate nonlinearity from noise, but taking into account that nonlinearity is unlikely to be significantly different from pixel to pixel, an uncertainty curve has been approximated using a local standard deviation. At longer wavelengths there is a noticeable trend, and uncertainty bounds have been drawn to include this trend. From this we assume that the uncertainty associated with nonlinearity is 0.28% from 540 nm to 850 nm, increasing linearly to 0.6% for longer and shorter wavelengths. Stray Light during Calibration and Field Operation Equation (3) assumes that the measured count signal comes from the source radiance, both in the laboratory and in the field environment. In practice there are several effects that alter this assumption. First, there may be an electronic bias, a "dark count", that is present when there is no illumination. This dark count is likely to be temperature sensitive and thus will change over time. In the laboratory the dark count can be measured by closing the input optics. In the field, such measurements are not routinely possible, but some tests can be performed manually. Beyond 500 nm the ratio of dark signal to light signal is less than 0.1%. Second, there may be external stray light that reaches the instrument sensors from outside the main field of view. In the laboratory such stray light will come from any reflections of the source into the beam. In the field, it will come from the bright surrounding area. This is discussed in Appendix B1. The uncertainty associated with this stray light has been estimated to be 0.1% for all wavelengths. Third, there is internal stray light. Internal stray light comes from light that is scattered onto the "wrong" pixel from the wrong wavelength. Appendix B discusses how the stray light was evaluated using cut-on filters. The error associated with not correcting for stray light is 25% at 400 nm, dropping to 10% at 420 nm, 5% at 440 nm, and 3% at 460 nm. While Appendix B provides a method that could be used to correct for stray light, in practice this has not been applied to data collected at BTCN and BSCN to date. The RadCalNet WG has agreed to operate with a "collection" process, where data that are on the portal can only be updated retrospectively at defined times, known as "collections", and the next collection is likely to be during 2021. Therefore, at present there is no stray light correction and a correction cannot be performed until 2021. Thus, the uncertainty is at its full magnitude.
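Returning to the nonlinearity evaluation above, the two-lamp superposition check can be sketched numerically as follows; the count values are synthetic placeholders rather than the NIM measurements.

```python
import numpy as np

# Sketch of the two-lamp superposition (flux-addition) nonlinearity check: if the
# spectrometer were perfectly linear, the response to both lamps together would equal
# the sum of the responses to each lamp alone, and sigma = 1. The responses below are
# synthetic placeholders, not the NIM measurements.

wavelength = np.array([400.0, 600.0, 800.0, 1000.0])   # nm
R_A  = np.array([1520.0, 8100.0, 12050.0, 3010.0])     # counts, lamp A only
R_B  = np.array([1490.0, 7950.0, 11980.0, 2950.0])     # counts, lamp B only
R_AB = np.array([3040.0, 16010.0, 24100.0, 5995.0])    # counts, lamps A and B together

sigma = R_AB / (R_A + R_B)                             # nonlinearity factor
nonlinearity_pct = 100.0 * np.abs(sigma - 1.0)         # departure from linearity, %

for wl, nl in zip(wavelength, nonlinearity_pct):
    print(f"{wl:6.0f} nm: |sigma - 1| = {nl:.2f} %")
```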
For the internal stray light, by the next collection we hope to have calibrated the instruments using tunable lasers so that the full correction algorithm discussed in Appendix B3 can be applied. Repeatability of the Calibration/Transportation Stability The repeatability of the calibration and the sensitivity of the spectrometer to transportation were tested using a stable laboratory source at NIM. The spectrometer was calibrated on three occasions: twice while realigning but not transporting the instrument, and a third occasion where the instrument was transported and then realigned. The relative differences of the second and third calibrations compared to the first calibration are shown in Figure 7. This suggests that the dominant effect is realignment. For the purposes of this analysis, the uncertainty associated with transportation is less than 0.5%. Combined Uncertainty Associated with the Laboratory Calibration of the Spectrometer The combined uncertainty associated with the laboratory calibration of the spectrometer is a combination of the effects given in this section. Table 2 lists the different effects, the magnitude of their uncertainty, and the error correlation structures. The uncertainty associated with source radiance is obtained by combining the different components in Table 1. The wavelength accuracy is a property of the individual spectrometer; the associated error is therefore fully correlated for all observations with that spectrometer and independent between spectrometers. Because the wavelength error changes across the spectrum, we consider its error partially correlated from wavelength to wavelength; however, the error is dominated by a common component. Noise is naturally entirely independent from observation to observation, and because external stray light is a property of the calibration set-up, it is considered to have a fully correlated error. Nonlinearity and internal stray light are properties of the spectrometer and have a stable spectral feature. Finally, calibration repeatability, which includes the stability of the instrument on transfer to the field, has an error considered to be fully correlated for all measurements by that spectrometer but independent between spectrometers. Table 2. Uncertainties associated with laboratory calibration of the spectrometer. Uncertainty Associated with the Field Measurement of Radiance In this section we consider uncertainties associated with the measured field radiance obtained from Equation (4). Uncertainties associated with the calibration of the instrument, described above, affect the gain term; noise affects the measured in-field signal; and the temperature correction has uncertainties related to the uncertainty in the in-field instrument temperature and to the uncertainty associated with our knowledge of the coefficients and form of the correction, Equation (5). Uncertainties associated with the stability of the instrument gain have already been considered in the "calibration repeatability" term in the section above and are not included here, to avoid double-counting. Spectrometer Temperature Stability Equation (5) gives an empirically determined model for the temperature correction. This was obtained by calibrating the instrument against a stable source while its temperature was varied from 10 to 40°C, and the results were compared to the reference calibration at 25°C. The fitted curves as a function of wavelength are shown in Figure 8. To work out the uncertainty associated with this correction, there are two components.
First, the uncertainty associated with the gain correction due to an uncertainty in the instrument temperature of 2°C can be estimated by applying the law of propagation of uncertainty to the temperature correction, giving u(G(T))|u(T) = |∂G(T)/∂T| u(T) (10). Here the notation u(G(T))|u(T) describes the uncertainty associated with the gain at a temperature T due to the uncertainty associated with that temperature, u(T). This notation is used because there are other uncertainties associated with the gain that would also need to be considered in a full uncertainty analysis of the gain. Second, there is an uncertainty associated with the plus-zero term in Equation (5). This is the uncertainty associated with the suitability of the model to represent the true temperature sensitivity of the instrument. This uncertainty was estimated from the residual of the original measurement points with respect to the model (Figure 9). The uncertainty associated with the temperature correction includes both the propagated uncertainty (due to a 2°C uncertainty associated with the temperature of the instrument in the field) and the model uncertainty given here. Figure 9. Residual from the measurements to the fit model at different wavelengths. The black curve gives the assumed uncertainty associated with the model (the plus-zero term in Equation (5)). Noise in the Field Measurements During field measurements a single reading is taken every two minutes with an integration time determined automatically. The uncertainty associated with noise on this measurement cannot therefore be estimated from operational data. However, to complement the laboratory measurements of repeatability described above in Section 4.2.3, 20 measurements were taken in the field within 30 s. The standard deviation of those 20 measurements was slightly higher than the 0.1% seen in the laboratory and ranged from 0.4% at 400 nm to 0.1% at 600 nm. Combined Uncertainty Associated with the Field Measurement of Radiance The combined uncertainty associated with the field radiance measured by the spectrometer is obtained by combining the different components in Table 3. We have not performed that combination at this stage because we want to combine uncertainties with different error correlation structures separately. The measurement noise during a single field measurement leads to an error that is entirely independent from one measurement to the next, from one spectrometer to the next, and from one wavelength to the next. The uncertainty associated with the temperature sensitivity coefficient can be considered fully correlated for all measurements by a single spectrometer (the effect is stable) and independent between spectrometers, because there is a separate thermometer in each spectrometer, they are mounted in separate housings, and the effect was separately calibrated for each spectrometer. The wavelength-to-wavelength error correlation structure is mixed. Figure 9 above shows that the residual has almost no spectral pattern (except perhaps for central wavelengths, where it is small), and therefore it is reasonable to assume that this error is uncorrelated. On the other hand, the propagation from an uncertainty associated with temperature to that associated with scene radiance is fully correlated between wavelengths, as it is related through an algebraic expression. Note to Table 3: (1) The error due to temperature sensitivity is partly correlated from wavelength to wavelength: the component that relates to the uncertainty in the model is independent, while the component that comes from the propagation of temperature uncertainty to radiance uncertainty is fully correlated.
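A minimal sketch of the two temperature-related components described above is given below. It assumes the polynomial form of the correction factor reconstructed in Equation (5), which is itself an assumption, and uses invented coefficient and residual values.

```python
import numpy as np

# Sketch of the two temperature-related components, assuming the polynomial
# correction factor f(T) reconstructed in Equation (5). The coefficient values,
# the 2 degC temperature uncertainty and the model residual are placeholders.

T_ref = 25.0          # reference (calibration) temperature, degC
T     = 31.0          # in-field instrument temperature, degC
u_T   = 2.0           # standard uncertainty of the in-field temperature, degC

# Assumed fitted coefficients of f(T) = a1 + a2*(T - T_ref) + a3*(T - T_ref)**2 at one wavelength
a1, a2, a3 = 1.000, 8.0e-4, 2.0e-5

dT = T - T_ref
f_T = a1 + a2 * dT + a3 * dT**2

# Law of propagation of uncertainty: u(f)|u(T) = |df/dT| * u(T)
df_dT = a2 + 2.0 * a3 * dT
u_f_from_T = abs(df_dT) * u_T

# Model (plus-zero) uncertainty taken from the fit residuals (placeholder value)
u_f_model = 0.002

# Combine the two components in quadrature and express relative to f(T)
u_f_total_rel = np.sqrt(u_f_from_T**2 + u_f_model**2) / f_T
print(f"f(T) = {f_T:.4f}, relative uncertainty from temperature correction = {100*u_f_total_rel:.2f} %")
```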
Atmospheric Measurements from the AERONET Sun Photometer The ground-measured radiance is converted into ground reflectance using Equation (1). This equation requires an estimate of the solar irradiance as transmitted through the atmosphere and the sky irradiance as scattered by the atmosphere. Both quantities are evaluated using MODTRAN-5, which requires atmospheric parameters as inputs. These atmospheric parameters are measured using the Cimel CE318 sun photometer at the Baotou site. The sun photometer forms part of AERONET and has been calibrated according to the AERONET procedures [9], originally at NASA and more recently at Beijing XMWK Technology Co. Ltd., China. The measurement data are transmitted to the AERONET data processing center, automatically processed, and made available at the AERONET website. The aerosol optical depth (AOD) at 550 nm is then calculated via logarithmic interpolation from the AOD in the 440, 670, 870, and 1020 nm channels. The measurements in the 936 nm channel of the solar radiometer are used to calculate the water vapor column (WVC), using a modified version of the Langley algorithm [18]. We have taken the uncertainties associated with AOD and WVC from the literature, in particular [19]. These are estimated as less than 0.01 (absolute uncertainty) and less than 12% (relative uncertainty), respectively. Sensitivity Analysis of MODTRAN-5 to Atmospheric Conditions Monte Carlo (MC) analysis techniques were used to determine the sensitivity of the MODTRAN-5 sky irradiance and solar irradiance calculations to uncertainties in the input parameters. Errors drawn from Gaussian distributions with standard deviations of 12% (relative) and 0.01 (absolute) were added randomly to the WVC and AOD, respectively, for a range of realistic conditions for the Baotou site. Each combination of parameters was tested separately, with, for each test, either the AOD or the WVC varied and the other component kept constant. The corresponding total downwelling irradiance, E_sun(θ, t; λ) + E_sky(t, λ) in Equation (1), was simulated 1000 times by MODTRAN-5. The standard deviation of each set of 1000 simulated irradiances gives the uncertainty associated with the downwelling irradiance due to AOD or WVC uncertainties. A further test was performed where these two parameters were varied together (in a way that was consistent with the fact that they were derived together, so including any error correlations), and this gave the same uncertainty as the combined uncertainty from the two separate components. Figure 10 shows the relative uncertainty associated with the total downwelling irradiance due to WVC uncertainties of 12%. Similarly, Figure 11 shows the relative uncertainty associated with the total downwelling irradiance due to AOD uncertainties of 0.01. These graphs are for two of the representative conditions studied; similar results were obtained for other conditions. In the MC simulations, the mean value of AOD is set to 0.2, the mean value of WVC is set to 0.5 g/cm², and the solar zenith angle (SZA) and view zenith angle (VZA) are set to 30° and 0°, respectively.
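The structure of this Monte Carlo analysis can be sketched as follows. MODTRAN-5 cannot be called here, so a smooth stand-in function represents the total downwelling irradiance, and, for brevity, both parameters are perturbed together rather than separately as in the tests described above; only the perturbation magnitudes (0.01 absolute for AOD, 12% relative for WVC) follow the text.

```python
import numpy as np

# Structural sketch of the Monte Carlo sensitivity analysis. `downwelling_irradiance`
# is a smooth stand-in with qualitatively plausible behaviour; it is NOT the radiative
# transfer model used in the paper. Perturbation magnitudes follow the text:
# u(AOD) = 0.01 (absolute), u(WVC) = 12 % (relative).

rng = np.random.default_rng(1)

def downwelling_irradiance(aod, wvc, wavelength_nm):
    """Placeholder for the MODTRAN-5 total downwelling irradiance (arbitrary units)."""
    rayleigh = np.exp(-0.009 * (wavelength_nm / 400.0) ** -4)
    aerosol  = np.exp(-aod * (wavelength_nm / 550.0) ** -1.3)
    water    = 1.0 - 0.05 * wvc * np.exp(-0.5 * ((wavelength_nm - 940.0) / 25.0) ** 2)
    return 1.2 * rayleigh * aerosol * water

wavelengths = np.arange(400.0, 1001.0, 50.0)
aod_mean, wvc_mean = 0.2, 0.5          # mean conditions used in the simulations (WVC in g/cm^2)
n_trials = 1000

# Perturb AOD and WVC with the stated uncertainties and collect the simulated irradiances
aod_samples = aod_mean + rng.normal(0.0, 0.01, n_trials)
wvc_samples = wvc_mean * (1.0 + rng.normal(0.0, 0.12, n_trials))
E_samples = np.array([downwelling_irradiance(a, w, wavelengths)
                      for a, w in zip(aod_samples, wvc_samples)])

# Relative standard deviation of the 1000 simulations = uncertainty due to AOD and WVC
rel_u_pct = 100.0 * E_samples.std(axis=0) / E_samples.mean(axis=0)
for wl, u in zip(wavelengths, rel_u_pct):
    print(f"{wl:6.0f} nm: u(E_down) = {u:.2f} %")
```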
Other Uncertainties Associated with the MODTRAN-5 Processing In addition to the uncertainties associated with the input parameters to MODTRAN-5 (the AOD and WVC), the downwelling irradiance calculated by MODTRAN-5 has uncertainties associated with other assumptions in the MODTRAN-5 model. MODTRAN-5 makes assumptions on the heights of different atmospheric layers and defines a set of "atmospheric models" to describe these conditions. It also makes assumptions on the type and size of aerosols and defines a set of "aerosol models" to describe different assumptions. For Baotou two atmospheric models are used: the one defined as "mid-latitude summer" is used from April to September, and the "mid-latitude winter" model is used from October to March. The aerosol model used is the "rural" type. This was chosen because the site is 70 km from the nearest city, Baotou, China. It is possible that the "desert" aerosol type may be more appropriate for some periods of the year. To understand the difference between the "desert" and "rural" aerosol models for the site, data from one measurement were processed using both aerosol models. The relative difference between the models is given in Figure 12. The uncertainty associated with aerosol type is taken to follow a rectangular distribution within this range; thus the standard uncertainty is D/(2√3), where D is the full difference (Figure 12). Figure 12. Relative difference between the calculated total downwelling irradiance for a real data set at Baotou assuming a desert model and a rural model. The overall accuracy of the MODTRAN-5 evaluation of downwelling irradiance also depends on the uncertainty associated with the radiative transfer model within MODTRAN-5 and the uncertainty associated with the solar spectral irradiance model used. Uncertainties associated with the solar zenith angle and solar distance (θ and d, respectively, in Equation (6)) are considered negligible. In the future, this uncertainty estimate could be evaluated using in-field measurements, but that has not yet been done. The uncertainty associated with the MODTRAN-5 radiative transfer model is described in the literature [20] as being 1%-2% for radiance predictions. The solar spectral irradiance model used is the Thuillier model [15,21], which has a standard uncertainty of 1.5% at 450 nm, 0.9% at 650 nm, 1.1% at 850 nm, and 0.8% at 1550 nm. As the spectral shape of this uncertainty is not known, the values at 450, 650, 850, and 1550 nm were linearly interpolated to provide uncertainties at intermediate wavelengths. Combined Uncertainty Associated with Ground Spectral Reflectance Overall, ground reflectance is calculated using Equation (1). As we have considered the total downwelling irradiance E_sun + E_sky as a single quantity in the previous subsections, we also do not separate these terms in our combined uncertainty table. The combined uncertainty associated with ground reflectance is obtained by combining the different components in Table 4. Figure 13 shows the uncertainty associated with the bottom-of-atmosphere (BOA) reflectance from 400 nm to 1000 nm. The error correlation structures due to the solar and sky irradiances are considered partially correlated from one observation to the next with a single spectrometer; the assumption is that these change on longer time scales (hours to days) but are constant for shorter timescales. This assumption could be refined by assuming a triangular or bell-shaped correlation structure, but that has not yet been analyzed.
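Two of the evaluations above reduce to a few lines of code: the rectangular-distribution treatment of the aerosol-model difference, u = D/(2√3), and the linear interpolation of the Thuillier solar-model uncertainty to intermediate wavelengths. A minimal sketch, with the caveat that np.interp holds the end values constant outside 450-1550 nm, which is an assumption, and the wavelength grid is illustrative:

```python
import numpy as np

def rect_standard_uncertainty(full_difference):
    """Standard uncertainty for an effect bounded by a rectangular distribution of
    full width D (here, the desert-vs-rural aerosol difference): u = D / (2*sqrt(3))."""
    return np.asarray(full_difference) / (2.0 * np.sqrt(3.0))

# Solar irradiance (Thuillier) model uncertainty, linearly interpolated in wavelength.
ref_wavelengths = np.array([450.0, 650.0, 850.0, 1550.0])   # nm
ref_uncertainty = np.array([1.5, 0.9, 1.1, 0.8])            # percent, from the text
wavelengths = np.arange(400, 1001, 10)
solar_model_u = np.interp(wavelengths, ref_wavelengths, ref_uncertainty)
print(rect_standard_uncertainty(0.02), solar_model_u[wavelengths == 500])
```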
Between spectrometers the error correlation is considered fully correlated-the three spectrometers are all on the same super-site and the same AERONET CIMEL instrument is used to obtain corrections for all three instruments. To evaluate the error correlation structure between wavelengths, we analyzed the Pearson correlation coefficient for the errors obtained in the Monte Carlo simulation. This showed a very high error correlation (above 0.95) for almost all wavelengths. We therefore treat this as fully correlated from wavelength to wavelength. The same analysis has not been done for the solar irradiance model, but we assume that it is also fully correlated for the purposes of a precautionary analysis (fully correlated components are not reduced by averaging). Uncertainty Associated with the RadCalNet BOA Reflectance Product The BOA reflectance measurement uncertainty established in the previous section is for a measurement with a single spectrometer at a single time at the native wavelengths of the spectrometer. These measurements are every 2 min on site. For the official RadCalNet BOA reflectance product we (a) increase the wavelength step to 10 nm, (b) provide a product for the full site area (45 × 45 m for BTCN, 300 × 300 m 2 for BSCN), and (c) average readings to create a value every 30 min. Figure 13. The combined uncertainty associated with ground reflectance. Wavelength Sampling To obtain data at 10 nm intervals there are two options. One option is to provide the narrow bandwidth information at 10 nm intervals. The other option is to perform a spectral integral to combine data over a 10 nm bandwidth. At present, the Baotou 10 nm data are obtained by picking the data at the 10 nm intervals. Therefore, this does not alter the uncertainty analysis. Representativeness (Site Homogeneity) The measurement of the site reflectance by the spectrometers is at one point (for BTCN) or two points (BSCN). However, the satellite observations will be averaged over the larger area of the defined site. To understand the representativeness of a single/two measurement(s) for the site as a whole, tests were performed where a portable spectrometer was used to make measurements at multiple points across each target. The spectrometers have a field of view of 3° and therefore see an area with a diameter of ~104 mm for BTCN (2 m above ground), and ~131 mm for BSCN (2.5 m height). For BSCN, the site is a natural sand with a ~1 mm grain size. For BTCN, the site has pebbles which are typically ~15 mm across. The portable spectrometer used for this analysis imaged a similar area to the permanent instruments. The uncertainty associated with the representativeness of the point measurement was estimated based on the uniformity of the whole target, which is defined using the following equation: 30 Minute Temporal Averages To obtain BOA reflectance every 30 min, the reflectance averages of 15 min before and after a given time (i.e., 9:00, 9:30, 10:00, etc.) are used as the final BOA reflectance. Since the measurement interval of spectrometer is 2 min, 15 BOA reflectance values are averaged. The variation in those readings comes from two origins: first, we expect variability due to all the effects listed in the tables above that have "independent" as their reading-to-reading error correlation structure and second, we expect variability due to the change in solar zenith angle over the 30 min averaging period. 
To validate the uncertainty associated above with "independent" effects we compared the standard deviation of the 15 readings with the combined uncertainty and obtained close agreement at noon. At 9 am and 3pm, the standard deviation was higher than the combined uncertainty of the "independent" effects, because of the change in the solar zenith angle. However, at present no uncertainty has been introduced to account for solar zenith angle changes as the time period of averaging is symmetrical about the declared time and the difference between the measured mean and a mean corrected for solar zenith angle is negligible. Uncertainty Associated with the BOA Reflectance The combined uncertainty associated with the BOA reflectance is a combination of the effects given in Section 4.1 to Section 4.5. For BTCN and BSCN for the initial period when only one spectrometer was in operation, the official RadCalNet product is the uncertainty associated with the temporal mean of 15 measurements with a single spectrometer. To establish the uncertainty associated with this mean, we combined uncertainties that were associated with errors that were "partially" or "fully" correlated with time (any partial correlation is likely to be extremely high over a 30 min window) and separately combined uncertainties associated with errors that were "independent" in their error correlation structure. The uncertainty associated with independent effects could be reduced by the square root of the number of independent observations, here √15, and this was then added in quadrature (square root of the sum of the squares) with the uncertainties associated with correlated effects. For the calculation of the uncertainty associated with the mean of the two spectrometers at BSCN, the uncertainty components were separated into (a) fully correlated for both time and between spectrometers, (b) fully independent for both time and between spectrometers, (c) correlated between spectrometers but independent from measurement to measurement, and (d) correlated between measurements but independent between spectrometers. Then, we separately calculated the uncertainty associated with each of these. The uncertainty associated with the mean of two spectrometers based on 15 measurements each was calculated by dividing (a) by 1, (b) by √30, (c) by √15, and (d) by √2. Finally, the combined uncertainty was calculated as the square root of the sum of the squares of these four parts. For this combination, we have assumed that the atmospheric effects are fully correlated over the 30 min of the averaging. Figure 14 shows the uncertainty associated with the BOA reflectance. A summary for the uncertainty budget of BTCN and BSCN BOA reflectance is shown in Table A1 (see Appendix C). Uncertainty Associated with Propagation to TOA Comparing a satellite sensor observation with the Baotou calibration site requires TOA reflectance. As discussed in Section 3.1, propagation to TOA requires the determination of direct light from the site, light lost through scattering between the site and the satellite, and light gained from scattering in the atmosphere, both directly and having reflected off nearby ground locations. The influence on the measured radiance of light that has reflected off nearby ground locations (background radiation) is described by the adjacency effect [22]. AIR has performed a detailed analysis on the adjacency effect when calculating the TOA reflectance from the BOA reflectance. 
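The combination scheme for the 30 min, two-spectrometer BSCN product described above can be written compactly. This is a sketch of the arithmetic only; the component magnitudes in the example are invented.

```python
import numpy as np

def uncertainty_of_mean(u_a, u_b, u_c, u_d, n_readings=15, n_spectrometers=2):
    """Combine uncertainty components for the mean of `n_spectrometers` instruments,
    each contributing `n_readings` measurements, following the grouping in the text:
      (a) fully correlated in time and between spectrometers     -> divide by 1
      (b) independent in time and between spectrometers          -> divide by sqrt(N*M)
      (c) correlated between spectrometers, independent in time  -> divide by sqrt(N)
      (d) correlated in time, independent between spectrometers  -> divide by sqrt(M)
    Inputs are per-wavelength standard uncertainties (scalars or arrays)."""
    n, m = n_readings, n_spectrometers
    parts = [np.asarray(u_a) / 1.0,
             np.asarray(u_b) / np.sqrt(n * m),
             np.asarray(u_c) / np.sqrt(n),
             np.asarray(u_d) / np.sqrt(m)]
    # root sum of squares of the four separately reduced parts
    return np.sqrt(sum(p ** 2 for p in parts))

# Example with hypothetical component magnitudes (percent).
print(uncertainty_of_mean(u_a=2.0, u_b=0.4, u_c=1.0, u_d=1.5))
```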
A method considering the adjacency effect has been proposed and detailed in another paper [23]. In this method, a local atmospheric point spread function (PSF) for the Baotou site was constructed, and this was used to calculate an effective background reflectance that was used in MODTRAN-5 simulation as the reflectance of the surrounding area. The TOA reflectance calculated using this method with the consideration of adjacency effect was compared with several satellites' observations. The uncertainty associated with the effective background reflectance was obtained by considering several uncertainty components, such as the constructed atmospheric PSF, the error of AOD, and the seasonal change of the surrounding area. Then, the uncertainty associated with the TOA reflectance simulated using the effective background reflectance was also estimated. However, for the official RadCalNet product, the propagation to TOA is performed through the RadCalNet processing. This does not have the option to include the correction of the adjacency effect. Thus, we determined the uncertainty introduced by not correcting for the adjacency effect, based on the constructed effective background reflectance of Baotou site in the previous study. This was evaluated by running MODTRAN-5 for the different targets assuming first the background reflectance discussed above and then a background reflectance that is the same as the target reflectance. The difference in the TOA reflectance between correcting and not correcting the adjacency effect in MODTRAN-5 simulations for BTCN and BSCN targets are shown in Figure 15. In addition to the uncertainties associated with the BOA reflectance measurements, uncertainties associated with the radiative transfer modelling to TOA performed by MODTRAN using the input atmospheric conditions, such as AOD, WVC, aerosol type, and MODTRAN model, are also considered. To determine the uncertainty associated with propagation to TOA reflectance, a similar approach to that used for the determination of the uncertainty associated with sun ( , ; ) + sky ( , ) was used (Section 4.4). The TOA reflectance is calculated using MODTRAN-5 from solar spectral irradiance, sun-earth distance factor and the solar zenith angle. MC analysis techniques were also used to determine the uncertainty associated with the TOA radiance due to WVC and AOD uncertainties. The uncertainties associated with the assumption of aerosol type, solar irradiance model, and MODTRAN-5 radiative transfer model were also considered. According to the analysis results, the total uncertainty associated with the TOA reflectance propagated from the BOA reflectance is given in Figure 16. Discussion The uncertainty of BOA reflectance (strictly "hemispherical-conical reflectance factor") measured by a single spectrometer for BTCN (about 6%) is greater than that for BSCN (about 4.8%), which is to a large extent caused by the surface uniformity. Because there are two spectrometers deployed in BSCN, the uncertainty for BSCN is further decreased to about 3% across most of the wavelength range (see Figure 14). Thus, the combined uncertainty of BSCN associated with propagation to top-of-atmosphere reflectance is almost always better than 5% within the spectral range 450-1000 nm, and in about half of the spectral range the uncertainty is less than 4%. This result indicates that the high uncertainty due to the poor surface uniformity for the BTCN site could be reduced by using multiple observation devices. 
At the shortest wavelengths (below 450 nm) the uncertainty is dominated by internal stray light effects within the spectrometer and reaches ~20% at 400 nm. This paper focuses on the uncertainty analysis method applied to the official RadCalNet reflectance products using the standard RadCalNet processing, without adjacency effect correction and at 10 nm intervals. Reduced uncertainties are available from a processing to TOA that includes adjacency effect correction. Performing a thorough uncertainty analysis always identifies areas of potential improvement. Here we have identified that the wavelength error could be reduced and that stray light is the dominant source of uncertainty for the shortest wavelengths and therefore needs a more thorough investigation and the establishment of a matrix-based correction algorithm. We anticipate that these investigations will lead to an improved uncertainty by the time of the next RadCalNet Collection (2021) and some of the corrections will be able to be applied retrospectively (particularly for stray light). The uncertainty associated with homogeneity is also a significant uncertainty for BTCN and could, perhaps, be improved using an instrument with a wider field of view, however, such an instrument would be more difficult to calibrate. The uncertainty budget allows us to consider such trade-offs in future improvements. Conclusions This paper showed an example of a metrological uncertainty analysis for a RadCalNet site. For the first time, we considered the error correlation structures for each uncertainty component. These error correlation structures enabled us to perform a robust propagation to a product that averages two spectrometers and 15 measurements by each spectrometer. In the future, we will use the wavelength-to-wavelength error correlation structure to obtain a reliable estimate of the uncertainty associated with a spectrally integrated product (e.g., to match the spectral response function of a satellite sensor). The temporal and spectrometer-to-spectrometer error correlation structures will also be needed to determine a robust uncertainty associated with the comparison to a satellite that makes multiple overpasses of the sites during a comparison period. To perform that analysis completely, we would also need to account for the error correlation structures for the satellite product. The paper that we used as the basis of the method described here [11] has shown how this was done for the Advanced Very High Resolution Radiometer (AVHRR) sensors. A simplified version of this has also been done for Sentinel-2 [24]. As an example of the metrological uncertainty analysis, this paper described the determination of ground reflectance and TOA reflectance for the Baotou BTCN and BSCN sites. We analyzed the sources of uncertainty from the laboratory calibration of the spectrometer, through to field observations of radiance and the calculation of ground reflectance, and to the propagation to TOA reflectance. The uncertainty tree diagram for the Baotou measurements of ground spectral reflectance was drawn, and every uncertainty component was considered and analyzed based on tests in the laboratory and field, and Monte Carlo modelling of the atmospheric corrections. The preliminary results for the uncertainty propagating to the TOA reflectance were analyzed. 
The uncertainty associated with the TOA reflectance propagated from BOA reflectance measured by a single observation is approximately estimated as 6% for BTCN and 4.8% for BSCN if the adjacency effect can be corrected for. Since the RadCalNet BOA product for BSCN is generated from the observations of two spectrometers, its uncertainty is reduced to about 3%. The uncertainties of the official RadCalNet TOA product (which does not consider the adjacency effect) for BTCN and BSCN are estimated to be approximately 7% and 4%-5%, respectively. Appendix B.1. External Stray Light during Calibration The stray light during calibration includes both light that reaches the diffuser panel having scattered off a surface (e.g., walls, optical bench), and light that enters the spectrometer from angles outside its field of view. In the field, it includes light that enters the spectrometer from larger angles. Because the field source is very large (the ground extends in all directions), while in the laboratory the bright panel is surrounded by a dark laboratory, this creates differences between calibration and use. Stray light illuminating the diffuser panel in the laboratory was estimated by blocking the direct beam from the lamp using a small screen and observing the signal on the spectrometer with and without this beam block. The signal with the block, as a percentage of the signal without the block, is given in Figure A2. The results in Figure A2 show that external stray light is small during calibration (less than ~0.1%). When the instrument is used in the field, the stray light signal will be much higher because the surrounding area is brightly illuminated. For now, we consider the uncertainty associated with external stray light to be 0.1%, recognizing that this is a rough estimate. Further tests are required with sources of differing sizes and with in-the-field stray light testing in order to refine this estimate. Appendix B.2. Internal (Spectral) Stray Light Internal stray light is light that scatters within the spectrometer and therefore reaches a spectrometer pixel for the "wrong" wavelength. This is particularly problematic at short wavelengths, where the desired signal is small and the detector has a higher sensitivity to light of longer wavelengths than it does to light of the desired wavelength. These two effects combine to mean that a small fraction of light from the much higher radiance at longer wavelengths can significantly impact the measured signal at short wavelengths. This internal spectral stray light was estimated using long-pass filters, which do not transmit light below a cut-on wavelength and have a high transmittance for longer wavelengths. The signal with the filter was compared to the signal without the filter for the shorter wavelengths. Figure A3 shows the signal with and without a 550 nm cut-on filter. At the shortest wavelengths (below 440 nm), the signal measured with the filter rises and is almost half of the (low) signal measured without the filter. Figure A3. Source measured radiance with and without a 550 nm cut-on filter (left vertical axis) and the transmittance of that filter (right vertical axis, logarithmic scale). Appendix B.2.1. Correction for Internal Stray Light In order to evaluate the impact of this internal stray light, and if necessary to correct for it, we need to determine the effect of this stray light both on the field observations and on the calibration. We define: V_nf,source(λ): signal (V for volts, but in practice digital numbers) with no filter, for a given source; V_f,source(λ): signal with a filter and the source; V_true,source(λ): the signal that would be measured if the instrument had no stray light response; τ_f(λ): transmittance of the filter; λ: wavelength (general concept); ℓ: wavelength of a short-wavelength pixel that is sensitive to the stray light; λ_min: the shortest wavelength that is considered "long wavelength"; c(ℓ): the fraction of the long-wavelength light that reaches pixel ℓ. Therefore, the measured signal at short wavelength ℓ for the calibration source, without a filter, is: V_nf,cal(ℓ) = V_true,cal(ℓ) + c(ℓ) Σ_{λ ≥ λ_min} V_true,cal(λ). (B1) The measured signal at short wavelength ℓ for the calibration source, with a filter, is: V_f,cal(ℓ) = V_true,cal,f(ℓ) + c(ℓ) Σ_{λ ≥ λ_min} τ_f(λ) V_true,cal(λ). In practice, we obtain the numerator from the measurement at the shorter wavelength and the denominator from integrating the measured signal at longer wavelengths (as we assume there is no stray light in the longer-wavelength signal). A correction can then be applied for any spectral observation (again under the assumption that the longer-wavelength signal is insensitive to stray light). Thus, the corrected signal in the field can be calculated from the measured signal at longer wavelengths and the c(ℓ) determined in the laboratory as: V_true,field(ℓ) = V_nf,field(ℓ) − c(ℓ) Σ_{λ ≥ λ_min} V_true,field(λ). Experimental Values The method described in the previous section was applied to one of the spectrometers that are used in the field. The 550 nm cut-on filter results were used to evaluate c(ℓ) for wavelengths up to 500 nm and the 650 nm cut-on filter for wavelengths from 502 to 590 nm. A 780 nm cut-on filter was used from 592 to 710 nm and a 900 nm cut-on filter for wavelengths from 712 to 870 nm. Wavelengths above 870 nm were not corrected. The stray light signal, as a percentage of the measured signal, is given in Figure A4, along with the measured source radiance levels. The stray light is less significant in the field than in the laboratory because in the field the radiance at long wavelengths is of a similar magnitude to that at short wavelengths, while in the laboratory the source is considerably brighter at longer wavelengths. Because the calibration source is used to evaluate the instrument gain and then the instrument gain is multiplied by the in-field observation, the critical quantity to evaluate is the ratio between the field and laboratory measurements with and without stray light correction. Figure A5 shows this ratio with and without correction and the error introduced if the correction is not made. It has not yet been possible to perform the necessary spectral measurements with a tunable laser for the BTCN and BSCN spectrometers. However, initial results were obtained with two lasers and are shown in Figure A6. These results show that the shape of the stray light changes with the wavelength of the source (and therefore measurements are required for more laser wavelengths) and that there is stray light at longer wavelengths as well as at shorter wavelengths. Figure A6. Relative response on each pixel (relative to the central wavelength response) to a monochromatic 632.8 nm laser and a monochromatic 405 nm laser.
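A numerical sketch of this stray-light estimation and correction is given below. It follows the equations above under simplifying assumptions (a single cut-on filter, and the filtered short-wavelength signal taken to be entirely stray light); it is an illustration, not the instrument's processing code.

```python
import numpy as np

def estimate_stray_fraction(v_cal_filtered, v_cal_unfiltered, filter_transmittance,
                            short_mask, long_mask):
    """Estimate c(l): the fraction of the summed long-wavelength signal reaching each
    short-wavelength pixel, from a cut-on-filter measurement of the calibration source.
    Assumes the filtered short-wavelength signal is entirely stray light and that the
    long-wavelength signal itself is free of stray light."""
    long_signal = np.sum(filter_transmittance[long_mask] * v_cal_unfiltered[long_mask])
    return v_cal_filtered[short_mask] / long_signal

def correct_field_signal(v_field, c_short, short_mask, long_mask):
    """Apply V_true(l) = V_meas(l) - c(l) * sum of the long-wavelength field signal."""
    long_signal = np.sum(v_field[long_mask])
    corrected = v_field.astype(float).copy()
    corrected[short_mask] = v_field[short_mask] - c_short * long_signal
    return corrected
```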
14,198.8
2020-05-26T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Target Recognition and Trajectory Planning of Apple Harvesting Robot considering Color Multimedia Image Segmentation Algorithm For the purpose of significantly reducing the processing time of the apple harvesting robot during the harvesting process, it is highly necessary to carry out corresponding studies on methods for rapid recognition and trajectory planning. Through the comprehensive application of information relevance, the image processing area can be reduced. For image recognition and trajectory planning, the template matching algorithm based on removing the mean value and normalizing the product can be adopted, and segmentation methods based on different threshold values can be used to realize the effect. Subsequently, comparative experiments are carried out to verify the effectiveness of the method used. Introduction The apple harvesting robot is the product of the continuous advancement of science and technology. It is a high-tech confrontation platform that stands for a kind of human wisdom and also a contest of comprehensive technology between two parties [1,2]. The robot can make judgments on the highly complex trends in the field based on the corresponding rules so as to guide itself to make the optimal decision and then take the corresponding actions that are conducive to defeating the opponent [3]. During the contest between the two parties, the apple harvesting robot relies closely on the vision system to carry out a comprehensive search for the related environmental information such as goals and balls. Subsequently, the laser, infrared, and other ranging systems are used to conduct global positioning of the robot and comprehensive detection of obstacles, and the information obtained is then integrated based on certain rules to form action instructions, which are executed accordingly [4,5]. Hence, the requirements for real-time performance, robustness, and accuracy are very high. When the robot gradually brings the target fruit to the center of the image, it usually needs to recognize the image multiple times and implement the trajectory planning multiple times before it can accomplish the task [6]. In the past, the process of acquiring each image was carried out by executing a certain recognition algorithm repeatedly. The recognition time spent on each image was usually almost the same, and the final overall recognition time was the sum of the recognition times for all images [7]. In the whole harvesting process, the recognition time is an integral part. If the recognition time can be reduced, the harvesting speed of the robot can be significantly increased, and the gap between manual harvesting and machine harvesting can be further narrowed so as to improve its practical value tremendously. Therefore, in the study of this paper, the author has comprehensively applied the information relevance of the images to investigate the rapid recognition method and the corresponding trajectory planning method. There is a particularly evident color difference between the apple fruit and the background. A set of pictures of an orchard is taken in a natural environment, focusing closely on the background and the apple fruit area. The background mainly refers to the branches, the sky, the green leaves, and other areas where the apple fruit is located. The values of the color factors R, G, and B are subject to comprehensive statistical and other related analyses.
After statistical analysis, we can see that, regardless of whether the color difference R-G or 2R-G-B is used, the apple fruit can be distinguished effectively from the background color [8]. In normal circumstances, the R-G calculation is not only relatively convenient but also extremely simple. Hence, the color difference R-G is used as the color feature value in the image segmentation process of this paper. The color difference curve corresponding to R-G is shown in Figure 1. As shown in Figure 1, we can determine a fixed threshold value and then use the value obtained to segment the fruit image with a fixed threshold. However, a large number of experiments show that the segmentation method based on the fixed threshold value still has some defects. The main reason is that it is not particularly adaptable to changes in light. On this basis, the author has applied the OTSU method, which is often referred to as the segmentation method based on a dynamic threshold value. As a dynamic threshold segmentation method, it has excellent performance. In the process of acquiring the dynamic threshold value, the method considers the target class and the background class in the image and selects the threshold that minimizes the within-class variance (equivalently, maximizes the between-class variance). After the image segmentation is completed, there are still some relatively isolated small holes, tiny dots, and minor burrs. These noises can have a serious influence on the recognition. Thus, we need to take relevant measures to eliminate the influence of the noise [9]. In this paper, we have applied the "corrosion-remove-expansion" method. First, the corrosion (erosion) calculation is carried out on the segmented image; its purpose is to eliminate the target boundary points so that the regions shrink gradually inward. Subsequently, the small-area removal operation is applied, the purpose of which is to eliminate some remaining small areas. Finally, the expansion (dilation) calculation is carried out, the purpose of which is to expand some of the points of contact and then merge them into the target. Through the processing described above, we can segment the image into two parts: the first part is the background, and the second part is the fruit. Identification of the Target Fruit. In the process of harvesting fruits, if a single-manipulator harvesting robot is used, it can only harvest the fruits one by one. If there are multiple fruits in the image, it is necessary to identify the target fruit to be harvested before the robot can complete the harvesting work successfully. The processed image needs to be labeled using the 8-neighborhood labeling method, and then the labeled fruit area is selected as the basic object to obtain the two-dimensional centroid coordinates. The specific equation is x_o = (1/N) Σ_{(i,j)∈Ω} i, y_o = (1/N) Σ_{(i,j)∈Ω} j, where i and j are the horizontal and vertical coordinates of the pixels of the fruit image, N is the total number of pixels of the fruit image, and Ω is the set of pixels that belong to the same fruit image. At the same time, it is also necessary to calculate the corresponding side length.
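The segmentation and noise-removal pipeline described above (R-G color difference, OTSU dynamic threshold, and the "corrosion-remove-expansion" cleanup) can be sketched with OpenCV. This is an illustrative sketch, not the authors' code; the minimum-area value is an assumed parameter.

```python
import cv2
import numpy as np

def segment_apples(bgr_image, min_area=200):
    """Threshold the R-G colour difference with Otsu's method, then clean the mask
    with erosion ("corrosion"), small-area removal, and dilation ("expansion")."""
    b, g, r = cv2.split(bgr_image)                  # OpenCV loads images as BGR
    diff = cv2.subtract(r, g)                       # R-G colour difference, clipped at 0
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)    # shrink boundaries inward

    # remove remaining small connected components
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            mask[labels == i] = 0

    mask = cv2.dilate(mask, kernel, iterations=1)   # restore the fruit extent
    return mask
```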
Finally, we can identify the target fruit by applying the principle that the target is the closest to the image center. The specific formula for the calculation of the distance is d = sqrt((x_o − x_c)² + (y_o − y_c)²), where x_o and y_o are the coordinates of the centroid of the fruit and x_c and y_c are the coordinates of the center of the image. Extraction of the Recognition Area. The target fruit information in the previous frame image plays a relatively evident role; in particular, it serves as a prominent reference for the target fruit in the next frame image. That is to say, when the area of the next frame image is processed, we need to take the previous frame image as a basis [10]. In normal circumstances, the centroid coordinates of the fruit are acquired while gradually approaching the center of the image. Hence, once the first image has been acquired, the processing area can be reduced continuously in the subsequent processing. After the processing described above, the image processing time will be reduced significantly, so that the overall harvesting time can be further cut, and finally the rapidity can be enhanced in the most effective way [11]. The specific steps are described as follows: (1) For the acquired images, after certain processing with the relevant methods, it is necessary to identify the harvesting target fruit based on the principle that the fruit centroid is the closest to the image center (x_c, y_c). Its side lengths l and m should also be identified. Finally, the coordinates of the top left corner of the minimum horizontal bounding rectangle (x_t, y_t) should be determined, as shown in Figure 2. (2) We segment the image into four areas, that is, areas A, B, C, and D, and determine which area the fruit is in through the center of the fruit and the center coordinates of the image. If it is located in area A, the coordinates (x_t, y_t) and (x_c + l/2, y_c + m/2) are taken as the base points to determine the rectangular processing area, and the details are shown in Figure 3; if it is located in area B, the coordinates (x_t + l, y_t) and (x_c − l/2, y_c + m/2) are taken as the base points to determine the rectangular processing area; if it is located in area C, the coordinates (x_t, y_t + m) and (x_c + l/2, y_c − m/2) are taken as the base points to determine the rectangular processing area; if it is located in area D, the coordinates (x_t + l, y_t + m) and (x_c − l/2, y_c − m/2) are taken as the base points to determine the rectangular processing area. (3) The image is acquired, and the rectangular processing area determined in step (2) is selected as the basic object. The corresponding processing is carried out based on the method in step (1). The space beyond the area is filled with white, and the details are shown in Figure 4. There is one difference, however: the target fruit center coordinates obtained from equation (1) are expressed in the coordinate system of the rectangular processing area and must be converted to the coordinate system of the acquired images.
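The centroid computation of equation (1) and the nearest-to-center target selection described above can be expressed compactly. A minimal sketch operating on a labelled mask (for example, the output of 8-neighborhood connected-component labeling):

```python
import numpy as np

def fruit_centroids(labels):
    """Centroid (x_o, y_o) of each labelled fruit region: the mean of its pixel coordinates."""
    centroids = {}
    for k in np.unique(labels):
        if k == 0:                      # 0 is the background label
            continue
        ys, xs = np.nonzero(labels == k)
        centroids[k] = (xs.mean(), ys.mean())
    return centroids

def pick_target(centroids, image_shape):
    """Select the fruit whose centroid is closest to the image center (x_c, y_c)."""
    h, w = image_shape[:2]
    xc, yc = w / 2.0, h / 2.0
    return min(centroids,
               key=lambda k: np.hypot(centroids[k][0] - xc, centroids[k][1] - yc))
```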
For the centroid coordinates in the processing area where the target fruit is located, which we denote (x_g, y_g), conversion to the corresponding coordinate system of the acquired images gives the centroid coordinates (x_g′, y_g′) of the target fruit. (4) It is necessary to determine the rectangular processing area in the next frame of the acquired image based on the method in step (2) and then use the method in step (3) for the related processing. In this way, the loop processing can be carried out continuously until the centroid coordinates and the coordinates of the image center overlap with each other. Selection of Color Space. In image processing, a variety of color spaces, such as RGB and HSV, are commonly used. The camera input system of the vision system adopts a CCD, and its most representative output mode is the RGB color space. In normal circumstances, if we need to segment a set of color graphics, the first method that comes to mind is often RGB [8,12]. The RGB space has very significant advantages: it is not only simple but also intuitive, it does not require any conversion or classification in the process of application, and its speed is relatively high. However, it still has some defects, which are mainly manifested in two aspects. Firstly, the RGB space is a color display space, which in general does not match human visual characteristics. Secondly, under different conditions the distribution of the measured RGB color values can be scattered; in this case, it is difficult to determine the RGB value of a specific object. In addition, it is particularly prone to including some color objects that are not designated, and it is also possible to miss some objects that should be recognized [13]. In normal circumstances, different positions on the field may also differ greatly in light intensity; as a result, the RGB values of a color can vary significantly in different positions. For the several reasons described above, RGB is not suitable for color classification [14]. For this purpose, a point (r, g, b) in the RGB space is converted to a point (h, s, v) in the HSV space. It is assumed that m = max(r, g, b) and n = min(r, g, b), in which r, g, and b are values in the normalized RGB color space. Image Segmentation Test. The OTSU segmentation algorithm and the fixed threshold segmentation algorithm are compared. Two different apple fruit images taken under different light are selected; the details are shown in Figures 5(a) and 5(b). The image in Figure 5(a) was formed under strong light irradiation, and the image in Figure 5(b) was formed under weak light irradiation. Figures 5(c) and 5(d) show the fruit images obtained after segmentation based on a fixed threshold; the segmentation threshold values of the two are the same. Although there is a certain amount of noise from the branches and leaves in Figure 5(c), this type of noise is relatively small. In Figure 5(d), there is a more evident phenomenon of over-segmentation. Thus, it can be observed that segmentation based on the fixed threshold value is not very adaptable to changes in light.
Figures 5(e) and 5(f) show the fruit images segmented by the OTSU algorithm based on the dynamic threshold value. From the segmentation result, it can be observed that the fruit segmentation effect is relatively good. Compared with the segmentation based on the fixed threshold value, its adaptability to light is much stronger as well. Matching Recognition Test. To verify whether the match is correct, we need to use the matching probability to carry out the corresponding test. An image of apple fruit taken in an environment with natural light is selected. In order to increase the difficulty of matching and substantially increase the magnitude, we choose as complex a background as possible, and there should be multiple apple fruits in the image as well. In addition, we also select 10 apples manually, the purpose of which is to use these 10 apples as the target fruits and the template images at the same time. The details are shown in Figure 6(a). The value calculated based on equation (4), the R-G color difference value, and the 2R-G-B color difference value are used as the image pixel gray value in this study, and the fast mean-removed and normalized product (correlation) algorithm is used for matching recognition. It can be found through observation of Figures 6(b)-6(d) that when the R-G color difference value is used, target fruits (1), (4), and (5) show matching errors. In addition, whether we use the gray scale calculated based on equation (4) or the 2R-G-B color difference, the success rate can reach 100%. Interference Recognition Test. In the process of gradually approaching the center of the images, due to different shooting angles and various lighting effects, some changes may occur in both contrast and brightness. Hence, it is particularly necessary to test the matching algorithm with the relevant methods. In general, there are two methods to adjust the brightness of images: the first is known as the nonlinear method, and the second is known as the linear method. When the nonlinear method is used to adjust the brightness of images, it can easily lead to a large loss of image information, and the adjusted image will look relatively flat without a solid sense of hierarchy. On the contrary, when the linear method is used to adjust the brightness, the adjusted image often shows a strong sense of hierarchy and is relatively realistic, vivid, and natural. Hence, the author adopts Photoshop to adjust the brightness of the image in Figure 6 and then carries out the corresponding matching after the adjustment. There is a certain correlation between the brightness change and the matching probability; the specific relationship is shown in Figure 7. In this figure, a negative value on the horizontal axis stands for a gradual decrease in brightness, and a positive value stands for a gradual increase in brightness. It can be found through observation of the graph that, as long as the change in brightness is within the interval [−35, 40], the matching probability can reach up to 100%. When the brightness adjustment goes beyond this range, matching errors may occur and the matching probability is reduced significantly.
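The mean-removed, normalized product matching used in the recognition tests above corresponds to zero-mean normalized cross-correlation, available in OpenCV as TM_CCOEFF_NORMED. A minimal sketch; the acceptance threshold is an assumed value.

```python
import cv2

def match_target(gray_image, gray_template, threshold=0.6):
    """Locate the template by zero-mean normalised cross-correlation
    (mean removal and product normalisation); return the best match location
    and score, or None if the score falls below the assumed threshold."""
    result = cv2.matchTemplate(gray_image, gray_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return (max_loc, max_val) if max_val >= threshold else None
```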
Since the relevant capture work is often completed within a very short time in the process of gradually approaching the image center, the changes in brightness are not particularly evident. Hence, the basic requirements can be met effectively. For the adjustment of the contrast, the author still carries out the relevant processing by using Photoshop and then performs the matching recognition to relate the changes in the matching probability to the contrast. The specific relationship is shown in Figure 8. A negative value herein refers to a gradual decrease in the contrast, whereas a positive value refers to a gradual increase in the contrast. It can be found through observation of the graph that the changes in the contrast do not affect the matching recognition, and the recognition is relatively accurate in all cases. Algorithm Comparison Test. The three algorithms described below are compared to verify the rapidity of recognition. (1) Algorithm 1: the OTSU recognition algorithm is applied for the processing of each frame in the dynamic images. (2) Algorithm 2: the OTSU recognition algorithm is applied for the processing of each frame in the dynamic images, and the related information of the images is used in the image processing of the subsequent frame. (3) Algorithm 3: the OTSU recognition algorithm is used for the processing of each frame in the dynamic images, the related information of the image is applied in the image processing of the next frame, and subsequently the fast mean-removed and normalized product matching algorithm is adopted. We assume that, in the process of approaching the image center, the number of dynamic images acquired by the video sensor is 4 frames and the specific pixel size is 320 × 240, and 10 sets of pictures are analyzed and compared. Finally, the corresponding recognition time is obtained. The mean time spent by Algorithm 1 is about 1.15 seconds, and the mean time spent by Algorithm 2 is 0.95 seconds. It can be observed that the application of associated information can reduce the processing time by up to 17%. The time spent by Algorithm 3 is 0.74 seconds. Thus, it can be seen that the application of the fast mean-removal and normalization algorithm can shorten the processing time, with a particularly obvious effect. Compared with Algorithm 1, the reduction in processing time reaches 36%. From the above comparison, it can be seen that the design method adopted in this paper has great advantages and can improve the harvesting speed of the robot significantly (Table 1). Conclusions Through real-time image information processing, comprehensive monitoring of the changes in the lighting of the environment can be fully implemented, and the color threshold value can also be adjusted accordingly at the same time. In this way, the image segmentation can be carried out with a certain accuracy, so that the image information thus obtained is more accurate and objective and has relative adaptability. We have obtained the implementation effects of the segmentation algorithm based on different threshold values. Compared with the old methods, the method used in this paper is superior in effectiveness, and the recognition time is also reduced by an impressive 36%. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest The authors declare that they have no conflicts of interest.
4,879
2021-09-15T00:00:00.000
[ "Computer Science" ]
DEVELOPMENT OF A METHOD FOR IMPROVING THE STABILITY OF THE METHOD OF APPLYING DIGITAL WATERMARKS TO DIGITAL IMAGES A method of applying digital watermarks into digital images is presented. A technique for increasing the stability of methods for applying digital watermarks into digital images, based on pseudo-holographic coding and additional filtering of the digital watermark, has been developed. The technique described in this work, using pseudo-holographic coding of digital watermarks, is effective for all types of attacks that were considered, except for image rotation. The paper presents a statistical indicator for assessing the stability of methods for applying digital watermarks. The indicator makes it possible to comprehensively assess the resistance of the method to a certain number of attacks. An experimental study was carried out according to the proposed method. This technique is most effective when part of the image is lost. When pre-filtering the digital watermark, the most effective is the third filtering method, which is averaging over a cell with subsequent binarization. The least efficient is the first method, which is binarization and finding the statistical mode over the cell. For an affine-type attack, which is an image rotation, this technique is effective only when the rotation is compensated. To estimate the rotation angle, an affine transformation matrix is found, obtained from a consistent set of corresponding ORB descriptors. Using this method allows the digital watermark to be extracted accurately over the entire range of angles. A comprehensive assessment of the technique for increasing the stability of the method of applying a digital watermark based on wavelet transforms has shown that this method counteracts various types of attacks 20% better. Introduction Reliability, invisibility, and applying capacity are prerequisites for any watermarking technique. However, research has concluded that these requirements are difficult to achieve at the same time. Steganography techniques are used not only for the covert transmission of messages but also to protect copyright or property rights in a digital image, photographs, or other digitized works of art. Therefore, various measures of an organizational and technical nature are being developed to protect information. One of the most effective technical means of protecting multimedia information is to apply invisible tags - digital watermarks - into the protected content. Digital watermarks can contain a lot of useful information: when the file was created, who owns the copyright, contact information about the authors, and more. All entered data can be considered strong evidence when considering issues and litigation about authorship or to prove the fact of illegal copying and is often decisive. Attacks on applied digital watermarks (filtering, overmodulation, lossy compression, etc.) act against the applied message; that is, they aim at destroying or damaging the digital watermark by manipulating the tagged image. At the same time, it is rather difficult to develop methods for introducing digital watermarks that are resistant even to minor filtering; such methods usually cause significant distortion of the container image, which is not acceptable.
Thus, an urgent task is to develop methods and approaches that increase the stability of digital watermarks and do not introduce significant distortions into the container image. This paper [7] proposes a robust watermarking technique that combines the features of the discrete wavelet transform (DWT), the discrete cosine transform, and singular value decomposition. In this technique, DWT is used to decompose color images into different frequency and time scales. According to the results, the combination of DWT-DCT features with SVD technology provides reliability against image processing and geometric attacks in the YIQ color model. However, this technique turned out to be unstable against other types of attacks. A stable hybrid double watermarking method is discussed in [9]. But when the digital watermark applying rate is increased to achieve a higher level of robustness, minor artifacts are observed in the container image. The main problems in the implementation of methods for ensuring copyright protection in images representing open steganosystems are the significant distortion or destruction of digital watermarks at high image compression ratios, affine transformations, and other types of attacks, as well as the associated noticeable deterioration of the image quality. Therefore, studies aimed at developing methods and approaches that increase the stability of digital watermarks and do not introduce significant distortions into the container image are relevant. The aim and objectives of research The aim of this research is to develop a technique for increasing the stability of methods for applying digital watermarks to digital images. This will enable the further use of methods for applying digital watermarks in commercial projects, while ensuring an acceptable level of stability. To achieve the aim, the following objectives were set: -to develop a functional model of the process of ensuring increased stability of methods for applying digital watermarks in images; -to propose an indicator for assessing stability; -to conduct an experimental study according to the proposed method. Materials and methods of research Modern research on creating an effective watermarking system uses various methods to improve and balance characteristics such as stability, invisibility, and reliability. Let's note that the work does not impose any restrictions on the type of attacks; therefore, it is required that the proposed method of steganography be resistant to the loss of a part of the image to which the watermark is added. A direction for solving this problem is provided by the so-called holographic metaphor - a distributed form of digital image presentation, which is resistant to interference [10][11][12][13][14]. The idea of the proposed transformation is quite transparent: the digital image is unfolded into a one-dimensional sequence so that "distant" points of the image become "close" numbers in the one-dimensional sequence. In this case, each point with coordinates (m, n) on the image is associated with a certain number k, which determines the number of this point in the pseudo-holographic sequence.
Literature review and problem statement The authors of the article [1] have developed a technique for marking color images with a digital watermark using decision tree induction in the domain of the discrete cosine transform. The method uses discrete cosine transform domains to transform the container image and watermark, and the decision tree induction method is used to hide the watermark. But since a color image has three channels, in which the intensity will differ, a different threshold for selecting blocks for applying the digital watermark would have to be chosen for each channel. It is precisely the use of a decision tree that makes it impossible to use this method of applying a digital watermark universally, since the thresholds for selecting the applying blocks would need to be calculated for each image separately. In [2], the authors present geometrically invariant watermarking of images based on affine covariant regions (ACR), which provides a certain degree of stability. To further improve reliability, a new watermarking scheme is used based on work [3], which is insensitive to geometric distortions as well as general image processing operations. This scheme consists mainly of three components: 1) a feature selection procedure based on a graph-theoretic clustering algorithm is used to obtain a set of stable, non-overlapping ACRs; 2) for each selected ACR, local normalization and orientation alignment are performed to create a geometrically invariant region that can improve the robustness of the proposed watermarking scheme; 3) in order to prevent image quality degradation caused by normalization and reverse normalization, indirect inverse normalization is applied to achieve a good trade-off between stealth and reliability. However, this method is resistant only to geometric distortions of images. The authors of [4] have developed a watermarking algorithm using a singular matrix representation and a genetic algorithm. The method uses a singular vector to insert a watermark into the container. In addition, the genetic algorithm technique is used to improve the efficiency of the proposed scheme. But the computational complexity that arises when using a genetic algorithm makes it impossible to use this approach in practice. Wavelet-based watermarks are presented in [5]. The method uses a scale factor to modify a singular vector of the container image. In addition, multipurpose particle swarm optimization is used to optimize the balance between conflicting watermarking factors. But there are still unresolved issues related to image distortions in which a significant part of the information is lost (for example, a large percentage of noise or loss of part of the image). In [6], a technique for applying watermarks based on human perception of color is proposed. It provides a new visual model that can accurately assess the degree of noticeable distortion in the human visual system. However, the work does not explain how to select the desired area for applying, and it does not provide an opportunity to assess the stability of this technique. When the sequence is scanned and recorded, a "pseudo-hologram" is formed. Such a transformation allows reconstructing a reduced copy of the original image from an arbitrary connected fragment of the resulting sequence (or, using interpolation methods, reconstructing a full-scale approximation of the original image).
That is, a fragment of the one-dimensional sequence, like an analog hologram, contains enough information about the entire image as a whole. Such a "holographic" representation of images is resistant to data corruption: even if some of the image information is lost, the image can be recovered with a certain accuracy, depending on the size of the loss. Thus, it is proposed to carry out a pseudo-holographic coding procedure for the watermark image, which consists in mixing the image pixels using a known pseudo-random permutation [15]: w_perm = p(w), where w_perm is the result of pixel shuffling and p is the known pseudo-random permutation. To obtain such a permutation, it is convenient to use an algorithm that consists in generating a pseudo-random uniformly distributed sequence x, which is then sorted in ascending order; the indices into the sorted sequence are taken as the permutation p. Let's note that it is advisable to consider only global permutations; the use of block permutations requires the fulfillment of a condition on the block size, which must be greater than the correlation radius of the image (in this case, it is commensurate with the size of the QR code) [16]. When adding digital signs (watermarks) to images, it is proposed to use wavelet transforms (Discrete Wavelet Transform, DWT) [17][18][19]. In this case, the container image is converted using the DWT into four sub-bands: low-high (LH), high-low (HL), high-high (HH), and low-low (LL) [20]. It is possible to formally write this as [LL, HL, LH, HH] = DWT(f), where f is the container image, DWT() is a function that implements the DWT, and [LL, HL, LH, HH] are the corresponding wavelet transform sub-bands. Most of the known types of DWT can be used; Daubechies wavelets were used in this work [21]. The watermark multiplicatively modifies the LL sub-band, in which the main information about the picture is concentrated: LL_w = LL ∘ (1 + α·w), where w is the watermark image, LL_w is the modified LL sub-band, α is a parameter, and (∘) denotes element-wise matrix multiplication. Let's note that the watermark image must be half the size of the container image. The original image (with an attached watermark) is created using the inverse wavelet transform: f_w = DWT⁻¹([LL_w, HL, LH, HH]), where f_w is the watermarked container image and DWT⁻¹() is the inverse DWT function. To extract digital watermarks, the above procedure is performed in reverse order: 1) similarly to the embedding step, the wavelet transform is carried out: [LL', HL', LH', HH'] = DWT(f_w), where [LL', HL', LH', HH'] are the corresponding wavelet transform sub-bands; 2) an estimate of the digital watermark w' is found as the difference between the LL sub-bands of the watermarked image and the container image: w' = LL' − LL; 3) since the estimate of the digital watermark w' will be modulated by LL (expressions (3) and (5)), and taking into account the presence of noise, it is proposed to filter the image w' to obtain w''. In an important special case, when the digital watermark is a binary matrix code (for example, a QR code), the following filtering procedures, performed for each cell of the matrix code, can be used: -binarization and finding the statistical mode over the cell, where w_q1 denotes the filtering result for the first method, mode() is a function that returns the value of the statistical mode, and t_1 is the binarization threshold; -averaging of the binarized values over a cell and further binarization, where w_q2 denotes the filtering result for the second method, mean() is the averaging function, and t_2 is the binarization threshold; -cell averaging and further binarization, where w_q3 denotes the filtering result for the third method and t_3 is the binarization threshold.
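The embedding and extraction procedure described above can be sketched with NumPy and PyWavelets. This is a minimal illustration under stated assumptions: the multiplicative form LL·(1 + α·w) is taken for the LL modification, mode="periodization" is used so that the LL sub-band is exactly half the container size, and the extraction step also demodulates by dividing by α·LL (the text instead filters the raw LL difference).

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

def embed(container, watermark, alpha=0.1, wavelet="db2"):
    """Shuffle the watermark with a known pseudo-random permutation, then apply
    LL_w = LL * (1 + alpha * w_perm). The watermark must be half the container size."""
    perm = rng.permutation(watermark.size)                    # the known permutation p
    w_perm = watermark.ravel()[perm].reshape(watermark.shape)
    LL, (HL, LH, HH) = pywt.dwt2(container.astype(float), wavelet, mode="periodization")
    LL_w = LL * (1.0 + alpha * w_perm)
    marked = pywt.idwt2((LL_w, (HL, LH, HH)), wavelet, mode="periodization")
    return marked, perm

def extract(marked, container, perm, alpha=0.1, wavelet="db2"):
    """Recover the shuffled watermark estimate from the LL difference (demodulated
    by alpha*LL, which is assumed non-zero for natural images), then undo the permutation."""
    LL, _ = pywt.dwt2(container.astype(float), wavelet, mode="periodization")
    LL_w, _ = pywt.dwt2(marked.astype(float), wavelet, mode="periodization")
    w_perm = (LL_w - LL) / (alpha * LL)
    w = np.empty_like(w_perm.ravel())
    w[perm] = w_perm.ravel()
    return w.reshape(w_perm.shape)
```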
The binarization thresholds t 1,2,3 are found using the Otsu's algorithm [22] or using adaptive binarization [23]. This study takes into account the main factors and new techniques used by potential researchers to create a reliable system for applying DW to digital images. 1. Functional model of the process of ensuring the enhancement of resilience The functional model of the process of ensuring the enhancement of the sustainability of methods for applying digital watermarks in digital images is shown in Fig. 1. In Fig. 1 the following notation is used: f − container image, w − digital watermark, p -known pseudo-random permutation, w perm − mixed digital watermark, f' -container image with added watermark, perm w′ − extracted mixed digital watermark, w' − restored digital watermark with distortion. The technique described in Fig. 1 includes the following steps: 1. Mixing the pixels of the digital watermark. The essence of this stage is that a sequence of indices l={l 1 , l 2 , …, l n×m }is formed using a pseudo-random number generator, where n, m is the size of the watermark w in pixels. Then the k-pixel of the watermark is moved to the place of the pixel with the index l k . Thus, let's obtain a digital watermark (w perm ) mixed with a known sequence. 2. Applying a mixed digital watermark (w perm ) into a digital container image (f). At this stage, using any method of applying a digital watermark, application (w perm ) occurs. In this work, a method using wavelet transforms was used to apply a digital sign. Let's use Daubechies wavelets [21] to represent a container image (f) and a mixed digital watermark (w perm ). Then, using the LL coefficient and a certain coefficient α using formulas (3), (4), the frequency spectrum of the mixed digital watermark (w perm ) is added to the frequency spectrum of the container image (f). 3. Extraction of the mixed digital watermark (w perm ) from the container image with a digital watermark (f'). At this stage, using formula (3), the wavelet transforms and images are represented in the frequency spectrum. Using formula (6), an estimate of the digital watermark w' is found as the difference between the LL -sub-bands of the watermarked image and the container image. 4. Restoring the normal sequence of digital watermark pixels. This step is the reverse of the procedure presented in the first step, after which let's obtain a normal sequence of digital watermark pixels. 5. Using digital watermark filtering. In this step, various image filtering methods are used to improve the digital watermark. In this work, let's use three methods of image filtering described by formulas (7)- (9). In this technique, due to pseudo-holographic coding, the digital watermark is converted, which is resistant to different types of distortions. This, in turn, in combination with the methods of filtering images after separating the digital watermark and restoring the normal distribution of the digital watermark pixels makes it possible to achieve a high level of stability of the methods of applying digital watermarks during various attacks. 2. Indicator for assessing the sustainability of methods for applying digital water The stability of the digital watermark applying method can be assessed in a statistical sense, the following assumptions were made. 
The digital watermarking method W can be defined by a pair of functions F and G that describe, respectively, the embedding and the extraction of a digital watermark on the set of all input data; E is the set of data required for the embedding and extraction method to work. For simplicity, let us assume that the input data set E includes the container image Im and the digital watermark Wm, so that embedding produces the watermarked container F(Im, Wm) and extraction produces the estimate Wm′ = G(F(Im, Wm)). Extraction is considered correct when the extracted watermark matches the original within an acceptable limit,

d(Wm′, Wm) ≤ ε,   (12)

and false otherwise,

d(Wm′, Wm) > ε,   (13)

where d(·, ·) is a measure of the correspondence between two images; all cases satisfying (13) are called false. As the criterion of correspondence in (12) and (13), other assessments of the similarity of two images can also be used, for example those presented in [24, 25]. An attack on the watermarked container is a transformation of the form

Im′ = At_j(F(Im, Wm)),  Wm′ = G(Im′),   (14)

which results either in the correct reading of the digital watermark from the container (12) or in a false result (13). Thus, the probability that, after the j-th attack At_j on the watermarked container, extraction of the digital watermark leads to an erroneous result (13) is equal to the probability that the input data set E_i used in the j-th attack belongs to the set E_l of data sets that produce false results. Let n_l,j be the number of different input data sets contained in E_l for the j-th attack and N the total number of input data sets. Then Q_j = n_l,j / N is the probability that executing the sequence of functions (14) on a data set E_i chosen from E uniformly at random results in a false extraction of the digital watermark. Accordingly, P_j = 1 − Q_j = 1 − n_l,j / N is the probability that, during the j-th attack on an element E_i randomly selected from E, the extracted digital watermark stays within the acceptable limits of expression (12). Since different attacks are independent events, the probability that the digital watermark remains within acceptable limits (12) after every attack is equal to the product of the per-attack probabilities:

P = ∏_j P_j = ∏_j (1 − n_l,j / N).

It is this product of probabilities that is used to assess the reliability of the digital watermarking method.

3. Experimental study of the method of applying digital watermarks

For the experiments, the grayscale test image Cameraman was used as the container image (Fig. 2, a). The digital watermark is a binary QR-code image, a 29×29 matrix encoding the message 'KHARKIV NATIONAL UNIVERSITY OF RADIO ELECTRONICS' (Fig. 2, b). With the QR-code image rendered at 464×464 pixels (one cell is 16×16 pixels), Cameraman was rescaled to 928×928. To obtain the digital watermark, the pixels of the QR-code image were mixed using the procedure described above (Fig. 2, c). The result of adding the digital watermark (parameter value α = 0.1) is shown in Fig. 2, d.

During the experiment, the influence of the following types of attacks was investigated:
- addition of normally distributed noise with a given mean and variance;
- addition of "salt-and-pepper" noise with a given density;
- rotation by a given angle;
- extraction of a part of the image of a given size;
- JPEG compression with a specified quality parameter.

For each type of attack, the total number of errors in the QR-code matrix obtained from the extracted digital watermark was determined. The influence of normally distributed additive noise with mean μ = 0:0.001:0.05 and variance σ² = 0:0.001:0.05 was investigated; the results are shown in Fig. 3-7. From these results (Fig. 3), graphs of the dependence of the number of errors on the noise parameters can be built (Fig. 7-9).
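The five attack types listed above are standard image degradations and can be reproduced with NumPy and OpenCV. The helpers below are a sketch under that assumption; the function names, parameter handling and the reading of "extraction of a part of the image" as blanking a square are illustrative choices, not taken from the paper's code. They operate on 8-bit grayscale images such as the rescaled Cameraman container.

import numpy as np
import cv2

rng = np.random.default_rng(1)

def add_gaussian_noise(img, mean, var):
    # additive normally distributed noise with given mean and variance (image scaled to [0, 1])
    noisy = img.astype(np.float64) / 255.0 + rng.normal(mean, np.sqrt(var), img.shape)
    return np.clip(noisy * 255.0, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, density):
    # "salt and pepper" noise with the given density of corrupted pixels
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2.0] = 0
    out[mask > 1.0 - density / 2.0] = 255
    return out

def rotate(img, angle_deg):
    # rotation by a given angle about the image centre
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def remove_square(img, size):
    # one reading of "extraction of a part of the image": blank a size x size square at the centre
    out = img.copy()
    h, w = img.shape[:2]
    out[(h - size) // 2:(h + size) // 2, (w - size) // 2:(w + size) // 2] = 0
    return out

def jpeg_compress(img, quality):
    # JPEG compression with the specified quality parameter
    ok, buf = cv2.imencode(".jpg", img, [int(cv2.IMWRITE_JPEG_QUALITY), int(quality)])
    return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)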
Next, the effect of "salt-and-pepper" noise with density ρ = 0:0.01:0.5 was investigated. The results are shown in Fig. 10-13; from them, graphs of the dependence of the number of errors on the noise density can be built (Fig. 14).

The study of the effect of image rotation on the digital watermark was carried out in two stages. At the first stage, the influence of image rotation by angles φ = −2°:0.1°:2° was investigated. The results are shown in Fig. 15-18; for each filtering method, graphs of the dependence of the number of errors on the rotation angle are given in Fig. 19. Without compensation, all filtering methods proved equally ineffective against this type of attack.

The second stage investigated the use of rotation compensation before extracting the digital watermark from the container image. To estimate the rotation angle, the affine transformation matrix between the original and the rotated container images is found. To do this, feature points are detected on each of the images; ORB (Oriented FAST and Rotated BRIEF) descriptors are used for this purpose [24]. Note that, in order to detect a sufficient number of descriptors, the images must first be smoothed with a Gaussian filter with σ = 3 (Fig. 20). The correspondences between ORB descriptors are made consistent using the RANSAC algorithm [25]; RANSAC (RANdom SAmple Consensus) is an iterative method for estimating the parameters of a mathematical model from observed data containing outliers (Fig. 21). Knowing the rotation angle, the image is rotated in the opposite direction (Fig. 22). The described compensation method can easily be generalized to other coordinate transformations and is a promising direction for further research.

The next attack, extraction of a part of the image, was investigated for different sizes of the extracted square; for each filtering method, graphs of the dependence of the number of errors on the size of the extracted square are shown in Fig. 30.

Further, the influence of image compression with the JPEG algorithm was investigated as a function of the quality parameter q = 1:100. The results are shown in Fig. 31-34, from which the graphs in Fig. 35 can be built.

Finally, the stability of the proposed technique was assessed. Five types of attacks were used: addition of normally distributed noise with a given mean and variance; addition of "salt-and-pepper" noise with a given density; rotation by a given angle; extraction of a part of the image of a given size; and JPEG compression with a specified quality parameter. Each type of attack included 500 different variations. The test results are presented in Table 1.
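The rotation-compensation step described above (ORB keypoints on Gaussian-smoothed images with σ = 3, RANSAC-consistent matching, estimation of an affine matrix) maps directly onto OpenCV primitives. The sketch below is one possible realisation, not the authors' code; the function name and the use of estimateAffinePartial2D are assumptions made here.

import numpy as np
import cv2

def estimate_rotation_deg(reference, rotated, sigma=3.0):
    """Estimate the rotation angle between two grayscale images from ORB correspondences."""
    # smooth both images so that enough ORB (Oriented FAST and Rotated BRIEF) keypoints are found
    ref = cv2.GaussianBlur(reference, (0, 0), sigma)
    rot = cv2.GaussianBlur(rotated, (0, 0), sigma)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(ref, None)
    kp2, des2 = orb.detectAndCompute(rot, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC-consistent partial affine (rotation + translation + scale) between the point sets
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))

# usage: angle = estimate_rotation_deg(f, f_attacked); rotate f_attacked back by -angle
# with cv2.warpAffine before extracting the watermark.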
Discussion of the results of the study of methods for increasing the stability of applying digital watermarks

The pseudo-holographic coding of digital watermarks described in this work is effective against all considered types of attacks except image rotation, and it is most effective when part of the image is lost. Analysis of the graphs in Fig. 7-9 shows that the third filtering method is the most effective against additive Gaussian noise. As can be seen from the graph in Fig. 14, the second and third filtering methods are effective against "salt-and-pepper" noise and give comparable results.

The investigation of the rotation attack was carried out in two stages. At the first stage, the influence of image rotation by angles φ = −2°:0.1°:2° was investigated; for each filtering method, graphs of the dependence of the number of errors on the rotation angle are shown in Fig. 19. Two conclusions follow. First, without rotation compensation, all filtering methods are equally ineffective against this type of attack: if the image is rotated by an angle greater than 0.2°, correct extraction of the digital watermark becomes impossible. Second, with the proposed compensation method, all three filtering methods work without errors over the entire investigated range of angles φ = −10°:10°.

For the attack that extracts a part of the image, the graph in Fig. 30 shows that the third filtering method is the most effective. For JPEG compression (Fig. 31-34), Fig. 35 shows that all three methods give comparable results, with the third filtering method again the most effective.

Table 1. Reliability assessment results

Method for applying digital watermarks                                        Reliability assessment
Classic digital watermarking based on wavelet transforms                      0.6348
Wavelet-based digital watermarking using the proposed technique               0.8344

As the test results show, pseudo-holographic coding spreads the digital watermark pixels over the container image rather than keeping them localized. This increases the resistance of the method to the loss of part of the digital watermark pixels, while the filtering methods restore lost information on the basis of statistical criteria. Based on the results in Table 1, the proposed technique increased the reliability of the method by approximately 0.2 (from 0.6348 to 0.8344, about 20 percentage points). A further advantage is that this increase in stability is obtained regardless of the particular method used to embed the digital watermark. For the development of the proposed technique, further research is planned on pseudo-holographic coding methods based on chaos theory.

Conclusions

1. A functional model of the process of increasing the stability of methods for applying digital watermarks to digital images has been developed, based on pseudo-holographic coding and additional filtering of the digital watermark. The pseudo-holographic coding described in the work is effective against all considered types of attacks except image rotation. A comprehensive assessment of the technique applied to the wavelet-based watermarking method has shown that its use improves resistance to the various types of attacks by about 0.2 in the reliability indicator.

2. An indicator for assessing the stability of digital watermarking methods has been presented; it takes all types of attacks into account and allows a comprehensive assessment of the stability of a digital watermarking method.

3. An experimental study was carried out according to the proposed technique. The technique is most effective when part of the image is lost. Of the digital watermark filtering methods, the third method, averaging over a cell followed by binarization, is the most effective.
The least effective is the first method, binarization followed by the statistical mode over the cell. It is advisable to perform the binarization with Otsu's algorithm. For an affine attack in the form of image rotation, the technique is effective only when the rotation is compensated. To estimate the rotation angle, an affine transformation matrix is found from a consistent set of corresponding ORB descriptors. With this compensation, the digital watermark is identified accurately over the entire range of investigated angles.
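The reliability indicator of Section 2 reduces to a product of per-attack success probabilities. A minimal sketch is given below; the error counts are hypothetical placeholders standing in for the 500 variations per attack type used in the experiments.

import numpy as np

# number of trial variations per attack type (the paper uses 500)
N = 500

# hypothetical counts of trials in which the extracted watermark fell outside the
# acceptable limits (expression (13)), one entry per attack type:
# Gaussian noise, salt-and-pepper, rotation, part extraction, JPEG compression
n_false = np.array([40, 55, 60, 20, 35])

P_j = 1.0 - n_false / N          # per-attack probability of a correct extraction
P = np.prod(P_j)                 # overall reliability indicator of the watermarking method
print(P_j, P)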
6,253.8
2021-06-30T00:00:00.000
[ "Computer Science" ]
LMdist: Local Manifold distance accurately measures beta diversity in ecological gradients Abstract Motivation Differentiating ecosystems poses a complex, high-dimensional problem constrained by capturing relevant variation across species profiles. Researchers use pairwise distances and subsequent dimensionality reduction to highlight variation in a few dimensions. Despite popularity in analysis of ecological data, these low-dimensional visualizations can contain geometric abnormalities such as “arch” and “horseshoe” effects, potentially obscuring the impact of environmental gradients. These abnormalities appear in ordination but are in fact a product of oversaturated large pairwise distances. Results We present Local Manifold distance (LMdist), an unsupervised algorithm which adjusts pairwise beta diversity measures to better represent true ecological distances between samples. Beta diversity measures can have a bounded dynamic range in depicting long environmental gradients with high species turnover. Using a graph structure, LMdist projects pairwise distances onto a manifold and traverses the manifold surface to adjust pairwise distances at the upper end of the beta diversity measure’s dynamic range. This allows for values beyond the range of the original measure. Not all datasets will have oversaturated pairwise distances, nor will capture variation that resembles a manifold, so LMdist adjusts only those pairwise values which may be undervalued in the presence of a sampled gradient. The adjusted distances serve as input for ordination and statistical testing. We demonstrate on real and simulated data that LMdist effectively recovers distances along known gradients and along complex manifolds such as the Swiss roll dataset. LMdist enables more powerful statistical tests for gradient effects and reveals variation orthogonal to the gradient. Availability and implementation Available on GitHub at https://github.com/knights-lab/LMdist. Introduction Beta diversity is a standard component of microbial and community ecology analysis.Ordination plots can be found in many microbiome and ecology publications.Geometric anomalies like the arch or horseshoe effects are prevalent in ordination plots of datasets containing an ecological gradient.Ecological gradients could include geochemistry, temperature, altitude, time, or other continuous factors.The arch effect displays samples along the gradient in a curved formation during ordination such that the gradient no longer appears 1D.The horseshoe effect, also referred to as the Guttman effect, is the more extreme relative of the arch effect and displays the samples along an ecological gradient in, as the name suggests, a horseshoe-like shape.Thus, in the horseshoe effect, ends of the gradient appear attracted to one another (Camiz 2005).In the early 1980s, ecologists hypothesized the extreme arch was a mathematical artifact obscuring ecological gradients (Hill and Gauch 1980), adopting the term "horseshoe" from popular political theory discourse. The arch and horseshoe effects are of concern for beta diversity analyses because they misrepresent the true ecological distances between samples lying along ecological gradients.Figure 1 illustrates an arch in soil samples taken along a pH gradient (Lauber et al. 
2009).The pH gradient is clearly visible along the arched display of samples, but the arch makes the ends of the gradient (black squares for acidic pH, white diamonds for basic pH) appear almost as closely related to each other as to the middle of the pH gradient.It has been shown previously that this arch phenomenon is not caused by the ordination or visualization (Morton et al. 2017), but is instead a problem with the pairwise dissimilarities generating the ordination.Pairwise dissimilarities are the foundation of distance-based statistical analyses for beta diversity, so statistical outcomes may be weakened by pairwise measures misrepresenting sample relationships. Despite a relatively high frequency of arches and horseshoes present in beta diversity analyses, these artifacts remain relatively underexplored and misunderstood in microbial ecology literature.A limited body of prior work has sought to explain and exemplify the problem as it appears.Ecologists Podani and Miklo ´s explored the tendency of various distance and dissimilarity measures to result in a horseshoe shape in PCoA plots (Podani and Miklo ´s 2002).Their findings indicated that researchers should select distance and dissimilarity measures which do not result in the arch, but their work lacks specificity about the causes or a notable solution.Diaconis et al. (2008) indicate some arches may be true representations, but that PCoA is limited in portraying them, and that manifold learning methods for ordination are better suited to data portraying an arch or horseshoe.Still, reasoning about the origin of the arch artifact was lacking until the work of Morton et al. (2017) pointed to sparsity of microbiome datasets as a possible source.Morton et al. (2017) demonstrate the limitations of distance and dissimilarity metrics in sparse datasets, although their solution, the Earth Mover Band Aware Distance (EMBAD) metric, is designed to rely upon known metadata in adjusting the ordination.EMBAD starts by sorting the samples according to a predetermined environmental variable, then simulating the flow of the taxonomic features between the samples along this ordered list.The EMBAD metric resolves the arched geometry with this adjustment but requires foreknowledge of the ecological gradient and direct use of this gradient in evaluating pairwise distances. Building on this existing body of work, we hypothesize that distance and dissimilarity metrics have a bounded dynamic range in describing pairwise relationships of a dataset, leading to an oversaturation of large pairwise measures.This effect is caused by samples that are sufficiently separated along the ecological gradient such that they have few or no species in common.Pairwise relationships are more reliable with many species in common, so samples from opposite ends of the gradient will instead have less informative pairwise distances to distinguish them.Upper bounds are known for some popular measures, such as Bray-Curtis and Jaccard dissimilarity which are restricted between zero and one.In these measures, counts of a subset of features will never exceed the total feature counts for a sample.Distance metrics, among them Euclidean distance, Aitchison distances, and UniFrac distances, are unbounded but may oversaturate in the same manner as bounded measures when there are few features in common.Euclidean distances are already known to be a poor choice for sparse datasets, common for microbial ecology (Aggarwal et al. 
2001).When these large pairwise distances or dissimilarities become oversaturated, ordination algorithms may be limited in differentiating samples in a reduced dimensional space, leading to arches and horseshoes. Given our hypothesis that the bounded dynamic range of beta diversity measures cause arch and horseshoe effects, we propose Local Manifold distance (LMdist) for adjusting distances and dissimilarities to overcome the limited range of ecological distance measures in sparse datasets.LMdist borrows concepts from graph theory and statistics to better represent the high dimensional geometries of gradient-based datasets.LMdist not only corrects arches and horseshoes in ordination plots but improves the reliability of the underlying pairwise measures for downstream analyses such as statistical hypothesis testing. Local Manifold distance (LMdist) algorithm The LMdist algorithm represents the high dimensional beta diversity matrix as a graph, then measures distances between points along the graph-defined manifold.Figure 2 visually depicts the algorithm on a small simulated gradient dataset.Beginning with an undirected graph of nodes, representing samples, edges are drawn between sample nodes and their parameter-defined neighbors.Each edge is weighted by the distance/dissimilarity between samples at either end, compatible with any pairwise beta diversity measure.Some pairwise edges will be omitted from the graph if they are above the parameterized maximum value, called the neighborhood radius.The graph is then traversed to determine adjusted pairwise values.These adjusted values become the new input for ordination and subsequent statistical analysis, resolving the arch in ordinal plots and enabling more powerful linear regression. Input pairwise measures form a connected graph The only required input for the LMdist algorithm is the original pairwise beta diversity matrix; the radius parameter allows for further fine-tuning of the algorithm.The algorithm is compatible with any pairwise measure, and LMdist will adjust these measures to produce a new set of pairwise distances.The neighborhood radius parameter determines which input pairwise values will be included in forming the undirected graph (Fig. 2B and C).For example, given a radius value of 0.25, only pairwise values under 0.25 will be used to connect sample nodes in the graph.Therefore, we can think of the radius parameter as the radius of a sphere in the original beta diversity space, centered at any given sample node.All other sample nodes falling within the bounds of this sphere will be connected to the centroid node as they are considered "neighbors."Each sample node will be the centroid for this process, until all nodes have been considered and connected to any neighbors to form edges in the graph (Fig. 2D). After the neighborhood edges have been determined per node using a given radius, the resulting graph may be a disconnected graph.This is expected when samples have not been collected evenly along a gradient, or if the chosen radius is small.If disconnected, a minimum spanning tree (MST) is computed using original pairwise distances for all sample nodes, which traverses all nodes in the shortest possible path.Comparing this MST to the graph of neighbor edges, we can borrow minimal edges from the MST to make the graph fully connected.As a result, we will have a fully connected graph 2 Hoops and Knights where the minimum possible number of edges exceed the radius parameter, as needed to fully connect the graph. 
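The neighborhood-radius graph and its MST-based completion can be sketched with SciPy's sparse graph routines. This is an illustrative re-implementation of the idea, not the authors' R code; for simplicity it merges in every missing MST edge, whereas LMdist borrows only the minimal set needed to connect the graph.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def radius_graph(D, radius):
    """Weighted adjacency of the neighborhood graph built from a pairwise distance matrix D."""
    D = np.asarray(D, dtype=float)
    A = np.where(D <= radius, D, 0.0)          # keep only edges within the neighborhood radius
    np.fill_diagonal(A, 0.0)

    # minimum spanning tree on the original distances, used to reconnect the graph if needed
    mst = minimum_spanning_tree(csr_matrix(D)).toarray()
    mst = np.maximum(mst, mst.T)               # symmetrize

    A = np.where((A == 0.0) & (mst > 0.0), mst, A)   # borrow MST edges absent from the graph
    return csr_matrix(np.maximum(A, A.T))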
Recalculate pairwise distances by traversing the graph This connected graph is used to recalculate pairwise distances/dissimilarities between samples.We calculate a single pairwise distance by traversing from one sample node along any available edges to the other node, in the shortest possible path (Fig. 2E).The adjusted pairwise value is then the sum of all edges traversed in this shortest path between two nodes.This traversal is completed for all pairs of sample nodes, such that we have adjusted pairwise values for any sample relationships exceeding the trusted, radius-defined neighborhood. Parameters can fine-tune results of LMdist Fine-tuning of LMdist can be achieved by adjusting the radius parameter, calibrating optimization criteria, or smoothing results via a Gaussian weighting.A large radius, close to the maximum distance, results in a very small adjustment to the original distance matrix.Conversely, a very small radius will cause the largest adjustment, because distant nodes will be connected in the graph only by traversing many intermediate nodes. By default, the algorithm calculates LMdist using 50 different radii, picking the largest radius which meets two optimization criteria.First, any radius which results in a graph with an average degree, the average number of connections per node, <10% of the number of samples (parameterized as "phi" with default 0.10) will be excluded from the results of LMdist.This limitation is intended to avoid overfitting, resulting in a distorted graph that might exaggerate the distances between samples.The second criterion is a multi-objective function which allows LMdist to guess the best radial value for a given dataset.This multi-objective optimization maximizes both the radius and a correlation between the LMdistadjusted distances and the Euclidean distances in resulting ordination (PCoA) space, using the first n dimensions, up to 10, accounting for >80% variance.To give priority to trusting more original distances, a correlation for a smaller radius must exceed the previous best correlation by a parameterized value of epsilon (default 0.05). The multi-objective function works for many cases, but still places confidence in a single radius value.We therefore implemented an optional parameter ("smooth" defaulted to False) for smoothing results of multiple radii.If smoothing is preferred, a Gaussian weight centered at the chosen radius is applied to the results of up to 50 radii, such that many radial values will influence the results, generating a smoothed version of the corrected pairwise distances. LMdist produces an adjusted distance matrix The result of the LMdist algorithm, implemented in R (R Core Team 2021), is an adjusted pairwise distance object, the same format as the output of the vegdist function in the vegan LMdist: Local Manifold distance package (Oksanen et al. 2022).These adjusted values can replace the original pairwise distance matrix in subsequent beta diversity analysis.In ordination, the adjusted values become the new input for ordinal methods accepting pairwise measures, such as PCoA or NMDS.Results in a simulated dataset and case studies have shown that LMdist reduces the arched appearance of the data, and if an environmental gradient is present LMdist better resolves the gradient, revealing variation orthogonal to the gradient in other ordinal dimensions. 
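Given the connected graph, the adjusted pairwise values are simply all-pairs shortest-path lengths over its edges. Continuing the sketch above (and reusing the radius_graph helper from it), the adjustment could look like the snippet below; the toy gradient and the fixed radius are assumptions made for illustration, and the radius search, phi/epsilon criteria and optional smoothing of the full algorithm are omitted.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import shortest_path

def lmdist_like(D, radius):
    """Adjust a pairwise distance matrix by traversing the radius-defined neighborhood graph."""
    G = radius_graph(D, radius)                           # helper from the previous sketch
    D_adj = shortest_path(G, method="D", directed=False)  # Dijkstra over edge weights
    return (D_adj + D_adj.T) / 2.0                        # guard against floating-point asymmetry

# toy usage: 30 samples spaced along a one-dimensional gradient
grad = np.linspace(0.0, 1.0, 30)
D = squareform(pdist(grad.reshape(-1, 1)))
D_adj = lmdist_like(D, radius=0.15)   # feeds PCoA/NMDS or distance-based tests downstream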
Use cases have oversaturated distances LMdist can be used with any pairwise distance matrix but is best utilized for uncovering gradients.While some gradients may already be well represented, an arch or horseshoe in ordination tends to appear when the pairwise distance matrices become oversaturated.We define oversaturated distances as a large left skew in the distribution of pairwise distances, because large distances are relatively oversaturated compared to the smaller pairwise distances.This oversaturation indicates a disagreement between the sampled gradient and the pairwise sample distances, as is demonstrated in implementation with a simulated dataset to follow (Fig. 4).Therefore, we propose LMdist is most appropriately used in studies with a long underlying gradient or oversaturated distances, as these samples will be better represented when accounting for the underlying manifold.Case-control studies may not need adjustment by LMdist, most importantly if the pairwise distances are not oversaturated with large values. Simulated dataset For initial validation of this algorithm, we created a simulated dataset of taxonomic features evenly distributed along an artificial gradient, mimicking a gradient-driven ecosystem.The final simulated dataset contains 50 samples and 231 taxonomic features.Noisy samples added to the dataset illustrate the sensitivity of the algorithm, which has remained stable in uncovering the gradient. Coenoclines emulate species abundance along a gradient In creating a simulated dataset we utilized coenoclines, representations of species response functions along some gradient (Palmer 2013).We presumed a unimodal response curve for each species along the gradient, supported by further exploration of the soil dataset (Supplementary Fig. S1).For our simulated gradient, we distributed artificial species features evenly along a gradient, using Gaussian coenoclines (Fig. 3A).For an example gradient length of 1000 (arbitrary units), the artificial species distributions are simulated as Gaussian curves centered 10 steps apart with a standard deviation of 150, allowing for density of overlap between species.Samples are obtained from these coenoclines by sampling values from the densities of the Gaussian curves at a given point, attributing the amounts of every artificial species at that point in the gradient to the sample (Fig. 3A).A total of 50 samples are taken, evenly distributed along the full gradient length.The absolute heights of the Gaussian coenoclines are arbitrary because samples are normalized to relative abundances before analysis. Sampling noise from a Dirichlet distribution To add noise to the simulated dataset, each mixture of sampled species values is used as the prior for a Dirichlet distribution, the generative distribution of the multinomial distribution and the multivariate generalization of the Beta distribution (Ng et al. 2011), producing 50 additional samples as noise, mimicking the original samples (Fig. 3B).We chose the Dirichlet distribution because it allows for fine-tuning the degree of noise and because microbial ecology samples have been modeled previously using a Dirichlet distribution (Harrison et al. 2020). 4 Hoops and Knights Simulated gradient Figure 4 shows the simulated dataset after application of the LMdist algorithm to adjust distances.In the original Bray-Curtis distances, we can see from Fig. 
4B that the Bray-Curtis distances on the y-axis are distributed with the opposite skew of the known gradient distances on the x-axis, resulting in a plateau rather than the expected y ¼ x line.This oversaturation of large distances results in the samples forming a horseshoe in PCoA ordination (Fig. 4A).After applying LMdist with a radius of 0.4, the gradient distances and adjusted distances now resemble one another (Fig. 4D) and the gradient is resolved in PCoA along the first principal component (Pearson correlation 0.999, improved from 0.967 prior to LMdist) such that the second axis may now be accounting for noise (Levene's test of variance, P < .001)(Fig. 4C). Simulated Swiss roll The Swiss roll dataset has been used to exemplify an obvious low-dimensional manifold structure in demonstrating manifold learning approaches.Figure 5 shows 300 samples in a Swiss roll, created using the scikit-learn python package (Pedregosa et al. 2011), in the original three dimensions.We then compare to dimensionality reduction by principal component analysis (PCA), t-distributed Stochastic Neighbor Embedding (t-SNE) (Van der Maaten and Hinton 2008), Isomap (Balasubramanian and Schwartz 2002), and LMdist.Fine-tuning of parameters was necessary for all algorithms to obtain the best possible flattening of the manifold in 2D space. As expected, the manifold learning approaches and LMdist outperform PCA, due to their parameter-defined perplexity and ability to recover some of the underlying structure of the LMdist: Local Manifold distance data.LMdist proves particularly useful in two respects compared to these manifold learning approaches.First, LMdist appears to most reliably uncover sample relationships, as it is not overly sensitive to hyperparameters and a series of transformations as is the case of t-SNE, nor does LMdist assume neighborhoods of equal sample sizes as in Isomap.The neighborhood radius used by LMdist therefore resolves the manifold, but also more clearly resolves the variation along the manifold in the second dimension.Secondly, LMdist is advantageous as the only approach to outputs an actual distance matrix, which can then be used for distance-based statistical testing.Typically, the adjusted distance matrix is only implicit within manifold learning algorithms, precluding their use in downstream analysis such as statistical testing of beta diversity. Case studies demonstrating the arch effect To exemplify the usefulness of the LMdist algorithm, we have gathered datasets from a variety of microbiome and ecological studies (Fig. 6).As previously described, oversaturation of the distance and dissimilarity measures is often driven by underlying ecological gradients in the dataset.Therefore, datasets were selected which have a measurable environmental gradient and which exhibited an arch or horseshoe in ordination. First, we return to the soil samples from across North and South America comprising a pH gradient (Fig. 6A).Lauber et al. found these bacterial communities to be significantly correlated with soil pH (ANOSIM, P ¼ .001;Mantel test Spearman, r ¼ 0.73), largely driven by the presence and absence of the phylum Acidobacteria, known to favor acidic environments, as well as the phyla Bacteroidetes and Actinobacteria which prefer comparatively basic environments (Lauber et al. 
2009). After applying LMdist, we can see that the arch is largely resolved in the visualization, with no change in significance of distance-based testing (ANOSIM, P = .001; Mantel test Spearman, r = 0.74). Interestingly, though, we find that PC2 now resolves some variation in the data orthogonal to the pH gradient in PC1, correlating with amount of silt clay (Pearson cor = 0.51, P < .001; PERMANOVA, P < .001) and annual season temperature (Pearson cor = 0.49, P < .001; PERMANOVA, P < .001). While silt clay and annual season temperature were significant prior to LMdist using PERMANOVA (P < .05), F-scores for silt clay, annual season temperature, and pH all increased, implying the effect size is better represented after application of LMdist. This new finding with silt clay and annual season temperature orthogonal to the pH gradient was not mentioned in the original publication and might have been obscured by the arch in the original analysis. Sampling increasing depths of the hypersaline microbial mats of Guerrero Negro in Baja California Sur provides another gradient analysis on a smaller dataset. The authors confirm that the depths of microbial mats are phylogenetically stratified, as light and oxygen produced by photosynthesis rapidly diminish with depth (Harris et al. 2013). Each depth from the top of the microbial mat to a depth of 34 mm is sampled twice and sequenced with 454 pyrosequencing technology. We can clearly make out that the depths follow a horseshoe pattern in the original ordination (Fig. 6B). Using LMdist, we can resolve the horseshoe while trusting many distances (neighborhood radius 0.758 for Jaccard dissimilarity is chosen by the default algorithm) (Fig. 6B). The correlation of the gradient with the first principal component (Spearman rho = 0.941, P < .001) improved after application of LMdist without changing significance (Pearson cor = 0.987, P < .001). Our next case study is a longitudinal, observational experiment comparing the turkey gut when raised in an isolated hatch brood system as opposed to a commercial brood pen (Fig. 6C) (Miller et al. 2021). Ignoring the first day samples since they formed a separate cluster, we can resolve the longitudinal changes using LMdist. The authors found that beta diversity significantly separated the samples by the collection date (PERMANOVA, F-score = 23.8, P < .001) and upbringing groups (Mantel test, r = 0.38, P = .001). After applying LMdist, we found the collection day gradient improved the effect in PERMANOVA found between collection day and pairwise measures (F-score = 57.63, P < .001) without changing the significance found for raising group (PERMANOVA, P = .007). Finally, we evaluate the usefulness of LMdist applied to herb and shrub community data collected by Robert Whittaker in the Siskiyou Mountains along an elevation gradient (Fig. 6D) (Whittaker et al. 2022). Restricting to just the communities in diorite soil, the elevation gradient forms an arch in PCoA of Bray-Curtis dissimilarities. Again, the effect size of the elevation gradient (PERMANOVA, F-score = 36.8, P = .001) is improved after applying LMdist (PERMANOVA, F-score = 92.4, P = .001).
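Distance-based analyses of the kind reported in these case studies can be run on the adjusted matrix with standard tooling. The snippet below is a sketch using scikit-bio (the analyses above were presumably done with the R implementation), with a synthetic gradient standing in for pH, depth or collection day.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from skbio import DistanceMatrix
from skbio.stats.ordination import pcoa
from skbio.stats.distance import mantel

rng = np.random.default_rng(0)
gradient = np.linspace(0.0, 1.0, 40)                 # synthetic environmental gradient
ids = [f"s{i}" for i in range(40)]

# stand-in for an LMdist-adjusted community distance matrix (gradient distance plus noise)
noisy = np.abs(pdist(gradient.reshape(-1, 1)) + rng.normal(0.0, 0.01, 780))
dm_community = DistanceMatrix(squareform(noisy), ids=ids)
dm_gradient = DistanceMatrix(squareform(pdist(gradient.reshape(-1, 1))), ids=ids)

ordination = pcoa(dm_community)                      # PCoA on the adjusted distances
pc1 = ordination.samples["PC1"].to_numpy()           # first principal coordinate scores

# Mantel test of association between community distances and gradient distances
r, p_value, n = mantel(dm_community, dm_gradient, method="spearman", permutations=999)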
These case studies exhibiting gradient-driven arches and horseshoes demonstrate how LMdist may better resolve gradient effects.Resolving gradients along one principal component enables more powerful linear statistical tests of gradient impact.LMdist also uncovers other variation orthogonal to the gradient, presenting some novel findings previously obscured by horseshoes.The optional gaussian smoothing parameter in LMdist minimally alters output for these case studies (Supplementary Fig. S2), implying LMdist with both a single radius and smoothing present similar, reasonable solutions. In cases where an underlying gradient is not present, such as a clustered dataset, we have found that in both simulated and real clustered data LMdist does not resolve an arch unless the original pairwise beta diversity values are truly oversaturated (Supplementary Fig. S3) (Whittaker 1956, Yatsunenko et al. 2012, Vangay et al. 2018).One can test for oversaturation by evaluating if the data presents a left skew in the distribution of pairwise distances.If no left skew is present, we expect there is no long underlying manifold, nor oversaturation of pairwise values.When we apply LMdist to simulated (Supplementary Fig. S3C) practical (Supplementary Fig. S3D and E) examples without oversaturation, we find LMdist chooses not to adjust pairwise distances as they are already the best representation of sample relationships.These experiments indicate LMdist is robust to overfitting and will not adjust already reliable pairwise values. Discussion We have demonstrated the usefulness of LMdist in a simulated gradient and in case studies of real environmental gradients across a range of published datasets.The simulated gradient can be nearly perfectly resolved with LMdist-adjusted distances.In various case studies, we find that LMdist can not only visually resolve the arched geometry of the data but can improve statistical outcomes in distance-based tests.LMdist is best used when pairwise beta diversity measures are oversaturated, presenting a left skew in the distribution of distances.These cases are typical of sampling a gradient or finding an arch in ordination, not as common for case-control or cohort studies.However, LMdist is also designed to resist overfitting, only correcting oversaturated pairwise values of which there may be none in a clustered dataset (Supplementary Fig. S3).In several gradient-based case studies, LMdist-adjusted distances improved the correlation of a known continuous gradient with the first axis of ordination, enabling more reliable use of linear regression approaches. LMdist: Local Manifold distance By adjusting distance/dissimilarity measures with LMdist to decrease saturation of large pairwise distance values, and to increase trustworthiness of distances, ordination plots more accurately portray global relationships in the data.We have also demonstrated that LMdist reveals new information driving variation in the data orthogonal to the gradient, such as the second principal component in the soil dataset (Fig. 6A).Prior to LMdist, the arch had occupied two or more dimensions instead of one, therefore leaving this orthogonal variation obscured.LMdist is therefore a powerful analysis technique in determining other interactions beyond the main environmental drivers in a study, allowing researchers to dive deeper into new explanatory variables.An important feature of this approach is that LMdist is completely unsupervised, not relying on foreknowledge of the gradient driving variation. 
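The left-skew criterion mentioned above is easy to check before deciding whether an adjustment is warranted. A small sketch, assuming SciPy and a square pairwise matrix as input; the sign-of-skewness rule is a simple heuristic stand-in for a formal test.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import skew

def looks_oversaturated(D):
    """Heuristic check: a left (negative) skew in the pairwise values suggests saturation."""
    condensed = squareform(np.asarray(D, dtype=float), checks=False)  # upper-triangle values
    return skew(condensed) < 0.0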
LMdist is an efficient, data-driven algorithm for resolving arches and horseshoes driven by underlying gradients.With proper utilization, LMdist has shown promise in enabling more powerful statistical tests between environmental gradients and high-dimensional microbiome datasets.Analysis with LMdist-corrected distances may uncover new ecological variation that is orthogonal to environmental gradients, expanding the findings of previously published works. Figure 1 . Figure 1.PCoA with Jaccard distances of soil samples taken along a geochemical pH gradient.An arch in the data is clearly visible, making the ends of the pH gradient appear closer than expected. Figure 2 . Figure 2. Visual representation of the LMdist algorithm.(A) Simulated samples appearing in an arch during PCoA with Bray-Curtis distances, colored by a simulated ecological gradient.(B) We represent these same samples as isolated nodes and define a neighborhood size by a radius parameter, then (C) add undirected edges between nodes within the parameter-defined neighborhood.(D) Continue to add edges between nodes in common neighborhoods until the graph is fully connected, all nodes can be reached by all other nodes.(E) This graph of nodes and edges is traversed to create adjusted pairwise values, which become the new input for ordination.(F) The resulting ordination can resolve the linear gradient in PCoA. Figure 3 . Figure 3. Creation of simulated dataset.(A) Coenoclines representing roughly every 10th taxonomic feature distributed along the simulated gradient, annotated to show how sample 10 would be collected.Sample 10 would be comprised of the amounts of each feature as seen in the visual cross section of the feature coenoclines.(B) Heatmap of samples collected, displayed in order with the noise samples preceding each original sample. Figure 4 . Figure 4. LMdist applied to the simulated gradient.(A) PCoA with Bray-Curtis distances of 100 simulated samples, colored by approximate position on gradient.(B) Comparison plot of community distances (Bray-Curtis) and gradient distances (Euclidean).We can see the community distances plateau at the top of the gradient, causing a mismatch between the density of community and gradient distances.(C) PCoA with LMdist-adjusted distances, where the horseshoe is largely resolved along the first principal component.(D) The same comparison plot between the adjusted community distances and gradient distances, the distribution of which is better balanced to one another after LMdist is applied. Figure 5 . Figure 5. LMdist applied to the Swiss roll dataset.(A) The Swiss roll dataset with 300 samples in the original 3 dimensions, created with the scikit-learn package in Python.(B) PCA of the Swiss roll dataset.(C) t-SNE of the Swiss roll dataset using perplexity 60. (D) Isomap of Swiss roll dataset with neighborhood size 5. (E) LMdist of Swiss roll dataset with neighborhood radius 0.62. Figure 6 . 
Figure 6.PCoAs before and after adjusting pairwise values with LMdist on default settings, displayed with fixed square perspective.(A) Soil samples along a pH gradient, Jaccard dissimilarity and LMdist with radius 0.952.(B) Microbial mat samples at various depths, Jaccard dissimilarity and LMdist with radius 0.758.(C) Longitudinal Turkey cecum samples, Bray-Curtis dissimilarity and LMdist with radius 0.599 (note: changed epsilon to 0.01 since the default epsilon was too small).(D) Herbs and shrubs community data collected by Robert Whittaker in the Siskiyou Mountains, Bray-Curtis dissimilarity and LMdist with radius 0.93.
6,274.8
2023-12-01T00:00:00.000
[ "Environmental Science", "Biology" ]
A Random Compressive Sensing Method for Airborne Clustering WSNs In order to reduce the energy consumption of the cluster members in WSNs, this paper proposes a random compressive sensing data acquisition scheme for airborne clustering WSNs. In this scheme, hardware resource limited cluster members sample the input signals with random sampling sequence and then transmit the sampling signals to the cluster head or Sink to reconstruct. Aimed at improving the reconstruction performance of this scheme, this paper puts forward a new MP reconstruction method based on composite chaotic-genetic algorithm, which combines the excellent local searching characteristics of chaos theory with the powerful global search ability of genetic algorithm. The experimental result shows that this scheme is very suitable for the hardware resource limited clustering WSNs. On the one hand, the reconstruction precision of the composite chaotic-genetic MP method can reach a magnitude of 10−15, and the average search speed is about 37 time that of the MP reconstruction method, which can effectively improve the reconstruction performance of the cluster head or Sink; on the other hand, by diminishing the sampling frequency to 1/8 of the original sampling frequency, the random compressive sensing technique can dramatically reduce the sampling quantity and the energy consumption of the cluster members, with the reconstruction precision reaching a magnitude of 10−7. Introduction Airborne Clustering WSNs System.Recently, the research on airborne data acquisition system based on wireless sensor networks (WSNs) has attracted increasing attention in the world [1][2][3][4][5][6][7].As we know, subsystems such as the engine, fuel, and cockpit environment in the existing general aircraft are distributed into their respective regions, so airborne WSNs should use clustering network architecture [4].Each subsystem or respective region of aircraft forms one or more clusters.Cluster head and sensor nodes in each cluster use star topology.Airborne data acquisition system based on clustering WSNs is shown in Figure 1. Airborne WSNs provide a flexible, lightweight, and reliable data collection means for aircraft condition monitoring.It has the following features: firstly, the sensor nodes in the physical space are vicinity arranged according to the sensor layout scheme.This means that the locations of a majority of, even all of, the sensor nodes in the monitoring network are relatively fixed.Secondly, as the relatively fixed position, all the cluster heads should be continuously supplied with the airborne power system and can configure high performance storage, processing, and communication devices.Thirdly, because of the limited physical space, some cluster member nodes, for example, the embedded sensor nodes in the engine monitoring system cannot be supplied with the airborne power system.These cluster member nodes have limited energy, processing, and communication capabilities.They have to collect and transmit large amounts of raw sensing data collected by the Nyquist sampling rate to the cluster head.This leads to the manifest reduction of cluster members' service life and the overall network's performance.Obviously, this "asymmetric" data acquisition mode is unreasonable. 
The Application of CS in WSNs.Compressive sensing (CS) technology utilizes signal sparsity, sampling signal far below the Nyquist sampling rate.It can shift the complex signal processing from the data collection terminal to the decoder, reduce the energy consumption of the data collecting side, and improve the performance requirements of the decoder.This fits well with the frame characteristics of WSNs, because, on the one hand, a large number of hardware resource limited cluster members achieve low-rate sampling and, on the other hand, the cluster head or Sink with sufficient energy, strong data storage, and processing capabilities realizes the complex signal reconstruction process, which can provide new ways for the realization of practical wireless sensor networks. Currently, the research on the application of CS technology in WSNs has three main directions: the application of CS technology in WSNs data fusion [8,9]; the application of CS technology in WSNs data acquisition and reconstruction [10][11][12][13]; and the application of CS technology in WSNs data transmission and routing [14,15].These studies lack the practical consideration of the hardware implementation difficulty and simply apply CS theory to the process of WSNs data acquisition, processing, and transmission.The reason is that the realization of basic compressive sensing technology is harder than the traditional sampling methods on the hardware requirements [16].Aiming at this issue, [17,18] proposed a new random compressive sensing method that can realize the compressive sensing techniques in hardware resource limited WSNs. Contributions and Paper Organization.In this paper, we have two unique contributions.The first one is the data acquisition scheme based on random compressive sensing for airborne clustering WSNs.The second contribution is a new CS reconstruction method based on the composite chaoticgenetic MP algorithm. The remaining part of this paper is organized as follows. In Section 2, we introduce the basic theory about compressive sensing technology.In Section 3, we show the principle of random compressive sensing and present the specific steps of random compressive sensing based clustering WSNs signal acquisition method.In Section 4, through combining chaos theory with genetic algorithm, we present a composite chaotic-genetic MP reconstruction method.In Section 5, we prove the effectiveness of our scheme through experiment.Finally, we provide the conclusions and future works in Section 6. Overview of Compressive Sensing The traditional signal acquisition process is shown in Figure 2(a).The full information acquisition method needs to transfer large amounts of sensory data, resulting in high computation and communication load.So it is unfit for node hardware resource limited WSNs.Compressive sensing theory suggests that as long as the signal is sparse or can be sparse representation in some kind of transformation, the original high dimensional sequences can be projected onto a low dimensional space by a measurement matrix which is irrelevant to the sparse transformation basis.Then, the original data can be reconstructed from a small amount of projection with high probability by solving an optimization problem [19].Figure 2(b) is the signal acquisition process of CS.Compressive sensing theory includes three main parts: the sparse representation of the signal; the measurement matrix design; and the signal reconstruction method [20,21]. Sparse Representation. 
The prerequisite of compressive sensing is that the signal is sparse or can be sparsely represented under some transformation. Common signals are generally nonsparse in the time domain. Therefore, before applying compressive sensing technology to a specific signal, we must select the most suitable sparse transform domain for the best sparse representation. Let x = [x_1, x_2, ..., x_N]^T ∈ R^N be a sparse or compressible signal that is K-sparse (1 ≤ K < N and K ≪ N) in an orthogonal basis. Then x can be expressed as

x = Ψθ = ∑_{i=1}^{N} θ_i ψ_i,

where θ = [θ_1, θ_2, ..., θ_N]^T is the sparse coefficient sequence of x in the sparse transformation matrix Ψ, and the number of nonzero elements of θ is K.

Measurement Matrix Design. After sparse transformation, the signal x can be linearly transformed by an M × N (M ≤ N)-dimensional measurement matrix Φ, so that the original N-dimensional signal x is reduced to M dimensions; Φ is unrelated to Ψ; that is,

y = Φx = ΦΨθ.   (1)

We can see that y is the linear projection of x under the measurement matrix Φ and contains sufficient information to reconstruct x. Therefore, designing a suitable measurement matrix can not only achieve optimal compression but also ensure that the signal can be accurately reconstructed. Candès et al. proved that the measurement matrix Φ must satisfy the restricted isometry property (RIP): for any K-sparse signal x,

(1 − δ_K) ‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_K) ‖x‖₂²,

where δ_K ∈ (0, 1) is the isometry constant [22].

Signal Reconstruction. Recovering the original signal x from the known y and the observation matrix Φ amounts to solving an underdetermined system of equations, which cannot uniquely determine x from y. However, as the signal x is sparse or compressible and the observation matrix Φ satisfies the RIP condition, x can be accurately reconstructed by solving the minimum l_0-norm problem; namely,

min ‖θ‖₀  subject to  y = ΦΨθ.   (2)

It is an NP-hard problem to solve (2) directly. The solution approaches for this problem include greedy pursuit algorithms, convex relaxation methods, and combinatorial algorithms. The most representative is the matching-pursuit-(MP-)like family of algorithms; the idea of finding nonzero coefficients iteratively at each step provides an effective approximate solution of the minimum l_0-norm problem [19].

Compared with the conventional sampling method of Figure 2(a), compressive sensing technology can compress and sample data at the same time, which makes it more suitable for WSNs in which the node hardware resources are constrained.

Random Compressive Sensing Technique for Clustering WSNs

By adopting an effective dimension-reducing projection of a sparse signal, compressive sensing technology can realize compressive sampling at a much lower frequency than classical Nyquist sampling. Although this method reduces the sampling rate, it increases the demands on hardware resources, because compressive sensing must generate a set of random numbers before signal collection and the random-number generator needs to work at the Nyquist frequency; thus it is inapplicable to the clustering WSNs considered in this paper. The random compressive sensing technique instead samples the original signal x randomly according to a sampling sequence, as shown in Figure 3(a). Compared with traditional equal-interval sampling, shown in Figure 3(b), the A/D sampling matrix of random compressive sensing is not the standard unit (identity) pattern but contains nonzero values at the positions determined by a random sampling matrix Sample (Sample is the matrix expression of the sampling sequence) [17].
Random compressive sensing process in clustering WSNs is that the timer in the cluster member controls the / sampling according to the sampling sequence; then the cluster members sent a small number of sampling signals to the cluster head or Sink.Finally, according to the sampling sequence, random projection and signal reconstruction can be completed in the cluster head or Sink.The principle of random compressive sensing in clustering WSNs is shown in Figure 4. To configure random number register in the cluster members can avoid the problem in the traditional compressive sensing that needs high frequency random number generator.The sampling sequence is calculated by correlation and sent to the cluster members by cluster head or Sink.After sampling by the sampling sequence, x may lose some information.However, the sequence is calculated by correlation and it is representative, so the sampling signal can provide enough information for final signal reconstruction. In contrast to the classical compressive sampling method in (1), random compressive sensing method can be expressed by (3).In [16], the author validates that the random compressive sensing meets the RIP and irrelevance property by simulation.Consider The specific steps of clustering WSNs signal acquisition method based on random compressive sensing are as follows. Step 1. Determine the sparsity degree based on the prior information. Step 2. Calculate the number of random samples . Step 4. Send the generated random sampling sequence to the cluster members and storage. Step 5. Cluster members sample the signal randomly according to the sampling sequence (random sampling frequency is the ratio of sampling number to the time required to complete this sample); then send the ×1-dimensional sample x () to the cluster head or Sink. Step 6.In the cluster head or Sink, expand the × 1dimensional sample x () to the × 1-dimensional sample x () in accordance with the sampling sequence, and then process random projection to reduce the dimension of x according to the × -dimensional measurement matrix Φ which satisfies the Gaussian random distribution; finally obtain y. Step 7. Reconstruct the signal in the cluster head or Sink. As can be seen from these 7 steps, the cluster members only need to receive and store sampling sequence, complete / sampling, and send × 1-dimensional sample x () to cluster head.As the amount of data transmitted by wireless reduced greatly, thus, the energy consumption of the cluster members can be reduced excessively.Compared with the sampling data, the data quantity of random sampling sequence is very small, so its communication energy consumption is also very small. In [18], the authors proposed using the existing reconstruction algorithms to reconstruct signal; they did not explain the specific reconstruction method, but the most widely used method currently is the MP-kind algorithms.These algorithms which either have poor accuracy or are slow cannot meet the practice requirement.Therefore, Section 4 in this paper proposes an improved matching pursuit reconstruction method based on chaotic-genetic algorithm. 
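Steps 3-6 of the acquisition scheme amount to picking M random sampling instants, transmitting only those samples, and re-expanding and projecting them at the cluster head. The sketch below illustrates that flow with NumPy; the test signal, the dimensions and the Gaussian measurement matrix are assumptions made here for a self-contained example, and the reconstruction step (the MP-type algorithm of Section 4) is left out.

import numpy as np

rng = np.random.default_rng(0)
N, M = 512, 64                                   # original length and number of random samples
t = np.arange(N)
x = np.sin(2 * np.pi * 5 * t / N) + 0.5 * np.sin(2 * np.pi * 12 * t / N)   # sparse in frequency

# cluster head / Sink: generate and distribute the random sampling sequence (steps 3-4)
sample_idx = np.sort(rng.choice(N, size=M, replace=False))

# cluster member: A/D-sample only at the chosen instants and transmit M values (step 5)
x_sampled = x[sample_idx]

# cluster head / Sink: zero-expand according to the sampling sequence and project with a
# Gaussian measurement matrix Phi (step 6); y is then passed to the reconstruction stage
x_expanded = np.zeros(N)
x_expanded[sample_idx] = x_sampled
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_expanded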
Composite Chaotic-Genetic MP Reconstruction Method ⟨x, 0 ⟩ is the inner product of x and 0 , represents every atom in atoms dictionary , and we choose Gabor atoms dictionary.After orthogonal projection, x can be decomposed into the best projection ⟨x, 0 ⟩ 0 and the residual 1 x.Afterwards, continue to project the residual 1 x in the atoms dictionary ; then obtain x = ⟨ x, ⟩ + +1 x after + 1 iterations.As long as +1 x is less than a predetermined threshold, x is decomposed into atoms: Gabor atom () = (1/√)(( − )/) cos(] + ), () = − 2 is a Gaussian window function, and = (, , ], ) is the time-frequency atomic parameter.The atomic frequency V, the atomic phase , the scale , and the displacement form the atoms dictionary , as each iteration of MP algorithm requires residual x to do the inner products with every atom in the atoms dictionary , which results in a very large amount of computation. Genetic algorithm (GA) is an adaptive global optimization search algorithm.It only needs the optimized object to provide the calculation standard and the parameters bound of the objective function, and then it can seek the optimum parameters in the global space quickly to meet the requirements.MP algorithm has already given the range of discretization atomic parameter = (, , ], ) and the calculation formula of optimal atoms before finding the optimal atom.Therefore, applying GA to the MP algorithm can change the searching process of finding the optimal atom from the whole dictionary to the randomly generated smaller subset of the atomic dictionary, which can reduce the amount of computation and improve the matching speed of the optimal atom greatly [23]. However, on the one hand, the less the difference of GA initial individual fitness value, the lower the search speed at the later period of GA algorithm; on the other hand, the great difference of GA initial individual fitness value will lead to the "premature" phenomenon.Chaos has the capability of initial value sensitivity, the ergodicity, and the randomness.The "randomness" here is caused by the internal characteristics of the system.It can traverse all states within a certain range without repetition according to their own regularity.What is more, not only can it have high efficiency but also it can avoid the local optimal effectively.Therefore, to combine excellent local searching characteristic of chaos with powerful global searching capability of GA can improve the search ability of the system effectively [24,25]. Most of existing chaotic-genetic algorithms use the Logistic mapping in the genetic algorithm to generate chaotic sequences as the initial group or add chaos random disturbance in mutation operation phase to improve the performance of the algorithm.However, they still have the shortage of large searching blind area and slow convergence speed [26].This paper puts forward a composite chaotic mapping method based on Tent mapping and Logistic mapping.This composite process can improve the randomness and sensitivity of the chaotic mapping, remedy the deficiency of low accuracy, and slow speed by only using Logistic mapping efficiently. Composite Chaotic Searching Algorithm. Logistic mapping is defined as The distribution character of this iterative sequence is "high in two poles, low in the middle." To solve the optimization problem, the efficiency of the algorithm will drop when the optimal value of target function falls in the middle part. 
Tent mapping is defined as The iterative speed of Tent mapping is faster than that of Logistic mapping, but its iterative sequence is easy to fall into cycle in small period and unstable periodic points. Lyapunov index can describe the separation speed of adjacent points effectively in the projection or the sensitivity of the orbit to initial conditions in the strange attractor.The greater Lyapunov index indicates that the mapping is more sensitive to initial conditions.It is defined as Calculating the Lyapunov index of Logistic mapping and Tent mapping, respectively, by (8), we knew that the Lyapunov index of Tent mapping has the maximum value while = 2, and the chaotic trajectory is most sensitive to initial conditions at this time.Therefore, considering the characteristics of Logistic mapping and Tent mapping, we insert = 2 into (6) to get iterative sequence +1 and put +1 as the initial values of ( 7) and then get a new composite mapping: The composite mapping in ( 9) is similar to parabolic type.Only when 1 ≤ ≤ 2, the composite mapping is the single full mapping with bounded sequences and can enter the chaotic state.Calculated by (8), we get that the composite mapping has the largest Lyapunov index in these three mappings while = 2, which means that it has better sensitivity to initial conditions and stronger local search ability.Insert = 2 into (9); we can obtain the composite mapping equation: The basic idea of chaotic search is to map the optimization variable into chaotic variable through the chaotic mapping and then use the ergodicity of chaotic variable to search the optimal solution and finally convert the optimal solution to the original optimization space by a linear transformation. Set (10) as the constraint condition of the n-dimensional optimization problem: max ( 1 , 2 , 3 , . . ., ), x as the th dimension decision variable, and x min, < x < x max, ; the composite chaotic searching process is as follows. Step 1.Let = 0, mapping the jth dimension decision variable Step 2. Set () as the initial value of (10); calculate the next generation chaotic variable Step 3. Map chaotic variable Step 4. Evaluate the quality of decision variable (4).Now, we use the proposed composite chaotic-genetic algorithm to optimize MP reconstruction method.As atoms are generated from = (, , ], ), we set as the optimization parameter and set the absolute value of inner product between signal (residual signal) and atom ⟨ x, ⟩ as the fitness function.In this method, the combination of composite chaotic algorithm and genetic algorithm is mainly embodied in two stages.In the initial population generation stage of genetic algorithm, we use the intrinsic correlation of composite chaotic sequence to optimize the generation of initial population, because the variables that are generated randomly often distribute irrationally and could lead to "prematurity." In the late searching stage of genetic algorithm, the powerful local searching ability of composite chaotic algorithm can be used to improve the search performance. MP Reconstruction Method Based on Composite Chaotic-Genetic Algorithm. MP algorithm has a large amount of calculations because every step of this algorithm should complete the optimization problem in The specific steps of composite chaotic-genetic MP reconstruction method are as follows. Step 1. Get signal x or residual signal x; initialize population size N, iteration times G, crossover probability P , mutation probability P , and the residual threshold T. Step 3. 
Generate the initial population P0 of the genetic manipulation in the "expanding" range according to (10), taking advantage of the ergodicity of the composite chaotic algorithm. Step 4 (calculate the fitness value). The MP reconstruction process seeks the maximum value of ⟨r^k x, g⟩, so we set |⟨r^k x, g⟩| as the fitness function. After decoding, calculate the fitness value of each individual according to this fitness function. Step 5 (selection). We directly replace the minimum-fitness individuals with the maximum-fitness individuals and then generate a new population P1. Step 6 (crossover and mutation). After applying crossover and mutation to population P1, we obtain a new population P2. We define the crossover probability and mutation probability as Pc and Pm, respectively. They adjust automatically as the iteration number increases: in the initial evolution stage, a large Pc and a small Pm help to speed up convergence due to the large population differences; in the later evolution stage, a small Pc and a large Pm help to prevent "prematurity." The crossover and mutation probabilities are defined accordingly, where gen represents the current generation and max gen represents the maximum generation. When gen < max gen, repeat the calculation; when gen = max gen, end the iteration process. Set the individual with the maximum fitness value as the optimal output. Step 7. Perform a chaotic disturbance on the former l individuals with the largest fitness values in population P2, using the composite chaotic search algorithm in Section 4.2. Then we obtain a new population P3. After that, we obtain the optimal atom g_k by inserting the best individual of P3 into the atom expression. Step 8. Project the residual r^k x onto g_k; we obtain the component ⟨r^k x, g_k⟩ g_k and the residual r^{k+1} x. If the residual r^{k+1} x is less than the threshold T, the algorithm terminates. Otherwise, take r^{k+1} x as the new input signal and return to Step 1. Step 9. According to the results of each iteration, we obtain the optimal reconstruction signal x̂ = Σ_{k=0}^{m−1} ⟨r^k x, g_k⟩ g_k in the form of (5). The process of the composite chaotic-genetic MP reconstruction method is shown in Figure 5. Composite Chaotic-Genetic MP Reconstruction Algorithm Performance Simulation. The configuration of the experimental computer is as follows: AMD Athlon (tm) II X2 255 processor at 3.11 GHz, 2 GB RAM, Windows XP SP3, programmed in Matlab 7.10. The length of the original signal is 512. The signal is the superposition of four single-frequency signals: 50 Hz, 100 Hz, 200 Hz, and 400 Hz. The sampling frequency is 800 Hz, as shown in Figure 6. The observation matrix is a random Gaussian matrix. The parameters of our composite chaotic-genetic MP reconstruction method are as follows: the original population size N = 30, the maximum iteration number G = 100, the initial crossover probability Pc = 0.6, the initial mutation probability Pm = 0.05, = 5, l = 3, and the threshold T = 0.0001. The definition of the reconstruction error is given in (12), where x is the original signal and x̂ is the reconstructed signal. (Figure 5, flow chart: initialize N, G, Pc, Pm, T, l; encode γ = (s, u, v, w); generate the initial population P0 according to (10); selection yields P1; crossover and mutation yield P2; the composite chaotic search yields P3; calculate g_k and the residual r^{k+1} x = r^k x − ⟨r^k x, g_k⟩ g_k; stop when the residual falls below T.)
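For concreteness, the sketch below spells out textbook forms of the Logistic and Tent maps (their defining equations were lost in the extracted text) and one plausible reading of the composite mapping of Section 4.2, together with a minimal chaotic routine of the kind used to seed the initial population (Step 3) and to perturb the best individuals (Step 7). All parameter values are illustrative, and the exact composite equation of Eqs. (9)-(10) may differ from this reading.

```python
import numpy as np

def logistic(x, mu=4.0):
    """Textbook Logistic map: x_{n+1} = mu * x_n * (1 - x_n)."""
    return mu * x * (1.0 - x)

def tent(x, mu=2.0):
    """Textbook Tent map: x_{n+1} = mu*x_n if x_n < 0.5, else mu*(1 - x_n)."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def composite(x):
    """One reading of the paper's composite map: feed the Logistic iterate (mu = 2)
    into the Tent map with mu = 2; the true Eqs. (9)-(10) may differ."""
    return tent(logistic(x, mu=2.0), mu=2.0)

def chaotic_population(n, lo, hi, z0=0.37):
    """Seed a GA population from a chaotic orbit mapped onto [lo, hi]."""
    z, pop = z0, []
    for _ in range(n):
        z = composite(z)
        pop.append(lo + z * (hi - lo))
    return np.array(pop)

def chaotic_disturb(value, lo, hi, steps=20):
    """Perturb a good individual by a short chaotic orbit around its normalized value."""
    z = (value - lo) / (hi - lo)
    for _ in range(steps):
        z = composite(z)
    return lo + z * (hi - lo)

pop = chaotic_population(30, 0.0, np.pi)         # e.g. candidate atom phases
print(pop[:5], chaotic_disturb(pop[0], 0.0, np.pi))
```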
Figure 7 is the reconstruction result of the original signal using our composite chaotic-genetic MP reconstruction method; the iteration number is 161.The average reconstruction error calculated by ( 12) is approximately 1.3776 × 10 −15 .We can find that the reconstruction error is very small. Signal reconstruction To analyze and verify the performance of our new reconstruction method, we compared with the performance of these five reconstruction methods: Method 1 is the basic MP reconstruction method, Method 2 is the MP reconstruction method based on genetic algorithm (GA-MP), Method 3 is the chaotic-genetic MP reconstruction method based on Logistic mapping (L-GA-MP), Method 4 is the chaoticgenetic MP reconstruction method based on Tent mapping (T-GA-MP), and Method 5 is our composite chaotic-genetic MP reconstruction method based on Logistic mapping and Tent mapping (LT-GA-MP).Table 1 shows the iteration number and the relative speed of these five methods when the reconstruction error = 0.0001.Considering the randomness of the algorithm, the result is the mean value after the experiment was performed 100 times. Seen from Table 1, the reconstruction effect of the basic MP reconstruction method is the best; its average iteration number is only 9 when the reconstruction error = 0.0001, but it takes the longest time.Since the GA-MP method only searches the subset of the atoms' dictionary, it can reduce the amount of computation largely and improve the matching speed greatly.However, its high speed is at the expense of matching accuracy.Its average iteration number is about 35 when the reconstruction error = 0.0001, and the average search speed is about 26 times of the MP reconstruction method.Our LT-GA-MP method has the highest speed in these five methods.Because the composite chaotic search algorithm participates in the genetic algorithm at the initial population generation stage and the later search stage, which can improve the overall performance of the LT-GA-MP method greatly, its average iteration number is 20 when the reconstruction error = 0.0001, and the average search speed is about 37 times of the MP reconstruction method. 
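The extracted text does not reproduce Eq. (12) itself; a common choice, consistent with the very small error magnitudes quoted here, is the relative l2 error, sketched below on the four-tone test signal of Section 5.1 (the FFT-thresholding step is purely an illustrative stand-in for a reconstruction).

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Relative l2 error, one common realization of Eq. (12); the paper's exact
    normalization may differ."""
    return np.linalg.norm(x - x_hat) / np.linalg.norm(x)

t = np.arange(512) / 800.0
x = sum(np.sin(2 * np.pi * f * t) for f in (50, 100, 200, 400))   # test signal of Section 5.1

# Crude stand-in for a sparse reconstruction: keep only the 8 largest Fourier coefficients.
X = np.fft.fft(x)
keep = np.argsort(np.abs(X))[-8:]
X_sparse = np.zeros_like(X)
X_sparse[keep] = X[keep]
x_hat = np.real(np.fft.ifft(X_sparse))
print("relative error:", reconstruction_error(x, x_hat))
```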
Random Compressive Sensing Experiments for Clustering WSNs.The experiment process of the random compressive sensing for clustering WSNs is as follows: original signal is sampled by the cluster members which are equipped with CC2430 chip.And then the cluster members send the samples to the cluster head or Sink which is composed of one computer and one coordinator.Computer computes the projection matrix according to prior information and generates the random sampling sequences (Sample) by Matlab.After that, send the random sampling sequence to the cluster members by the coordinator in wireless mode; the latter can store the sampling sequence.When the cluster members receive the signal acquisition command sent by the coordinator, their timers control / complete random sampling according to the random sampling sequence and then send the collected data to computer via the coordinator.signal reconstruction task is completed in the computer according to our composite chaotic-genetic MP reconstruction method.The coordinator and the cluster members complete the data transmission task under the drive of Z Stack protocol stack.The process of our experiment is shown in Figure 8.We set a sine signal generated by the signal source as the original signal; the frequency is 1 kHz.The length of the signal is 512, as shown in Figure 9.After calculation, we set the random sampling number = 64, the random sampling frequency is 3.1 kHz, and the sampling frequency of the reconstructed signal is 25 kHz.The random sampling results of the original signal are shown in Figure 10.Based on the random sampling results, the reconstruction results are obtained by our composite chaotic-genetic MP reconstruction method, as shown in Figure 11.The values of relevant parameters accord with Section 5.1.After calculation with (12), the average reconstruction error is about 1.8265 × 10 −7 . As we can see, the signal acquisition scheme based on random compressive sensing technique in this paper is fit for the hardware resource limited clustering WSNs.On the one hand, the cluster members only need sample 1/8 of the original signal data quantity, namely, 64 points, to greatly reduce the amount of data which is sent to the cluster head, saving the finite energy of cluster members enormously. On the other hand, the sampling frequency of the cluster members is only 1/8 of the original sampling frequency, which greatly reduces its hardware resource requirements.From the experiment results, we find that although there is a gap of the reconstruction error between our random compressive sensing scheme (a magnitude of 10 −7 ) and the classical compressive sensing technique (a magnitude of 10 −15 ), it can still meet the actual requirements.Compared with the reconstruction error about a magnitude of 10 −5 in [18], the reconstruction error of our method can reach a magnitude of 10 −7 , which is improved obviously. Energy Consumption Analysis. 
In order to compare the communication energy consumption between our random compressive sensing scheme and the traditional sampling scheme in WSNs, we set the experiment as follows: the traditional sampling mode is a 5 kHz equal interval sampling.The wireless communication energy consumption is , the sending energy consumption is , and the receiving energy consumption is .The total length of sending data is ; and the total length of receiving data is .The instantaneous sending and receiving currents of cluster member node are = 29mA and = 24mA, respectively.Sending or receiving one byte data need = 32 s.The setting of other experiment accords with Section 5.2.The communication energy consumption of one single jump in WSNs can be expressed as In Z Stack, the longest length of PHY protocol data frame is 128 B. Therein, the data length of synchronized frame head, frame tail, and frame structure is 11 B; the data length of order frame is 5 B; and the rest 112 B is the available length of PHY protocol data frame. Suppose there are 512 double byte pieces of data in each sampling; the traditional sampling scheme needs 10 times to transmit these data.As the cluster members only need sample 1/8 data quantity in every sampling, namely, 64 double byte pieces of data, our random compressive sensing scheme only needs 2 times to transmit these data.Calculated with (13), we can get that the traditional sampling scheme needs 55.5 × 10 −3 mAh to sample 512 double byte pieces of data; therein, the sending energy consumption of the cluster member is 30.5 × 10 −3 mAh; our random compressive sensing scheme needs 7.5 × 10 −3 mAh to sample 64 double byte pieces of data; therein, the sending energy consumption of the cluster member is 4.1 × 10 −3 mAh.We can see that the former is nearly 7.4 times the latter. The simulation result of the communication energy consumption is shown in Figure 12.We can see, on condition that these two schemes have the same transmission distance, along with the increasing sampling times, the communication energy consumption of the traditional sampling scheme is much larger than our random compressive sensing scheme. Figure 13 is the local magnifying effect of Figure 12.We can see that the communication energy consumption of our random compressive sensing scheme is not 0 at the beginning, because the cluster member node should receive the random sampling sequence from the cluster head. Conclusion and Future Work Finite energy of cluster members is one of the most important factors to restrict the development of airborne clustering WSNs.In order to reduce the energy consumption of cluster members, we put forward a kind of random compressive sensing scheme.Aiming at the low signal reconstruction accuracy in [18], we propose a composite chaotic-genetic MP reconstruction method based on Logistic mapping and Tent mapping.The experiment results show the following. (1) Our composite chaotic-genetic MP reconstruction method combines the excellent local searching characteristics of chaos theory with the powerful global search ability of genetic algorithm, which can realize the complementary advantages and greatly improve the overall performance of the algorithm.Compared with [18], our method highly improves the reconstruction accuracy.What is more, the average search speed is about 37 times as fast as that of the MP reconstruction method. 
(2) Our random compressive sensing scheme may lose some useful information, but because the sampling sequence is derived from the correlation of the prior information, the reconstruction error can still reach a magnitude of 10^−7. Our method reduces the amount of sampled data and the sampling frequency of the cluster members at the same time, and thereby directly lowers the hardware resource requirements of the cluster members. The communication energy consumption of the traditional sampling scheme is nearly 7.4 times that of our random compressive sensing scheme. Therefore, our random compressive sensing scheme is well suited for airborne clustering WSNs. Due to length limitations, this paper does not consider the noise problem, which will be studied in future work.

Figure 3: The principle of (a) random compressive sensing and (b) traditional equal-interval sampling.
Figure 4: The principle of random compressive sensing in clustering WSNs.
Figure 5: Flow chart of the composite chaotic-genetic MP reconstruction.
Figure 6: The original signal wave in (a) the time domain and (b) the frequency domain.
Figure 7: Reconstruction result of the original signal using the composite chaotic-genetic MP reconstruction method.
Figure 12: Comparison of communication energy consumption between the traditional sampling scheme and our random compressive sensing scheme.
Figure 13: Local magnification of Figure 12.
Table 1: The performance of different reconstruction algorithms when the reconstruction error = 0.0001.
7,345.8
2015-08-01T00:00:00.000
[ "Computer Science" ]
A Discontinuous Galerkin Model for Fluorescence Loss in Photobleaching Fluorescence loss in photobleaching (FLIP) is a modern microscopy method for visualization of transport processes in living cells. This paper presents the simulation of FLIP sequences based on a calibrated reaction–diffusion system defined on segmented cell images. By the use of a discontinuous Galerkin method, the computational complexity is drastically reduced compared to continuous Galerkin methods. Using this approach on green fluorescent protein (GFP), we can determine its intracellular diffusion constant, the strength of localized hindrance to diffusion as well as the permeability of the nuclear membrane for GFP passage, directly from the FLIP image series. Thus, we present for the first time, to our knowledge, a quantitative computational FLIP method for inferring several molecular transport parameters in parallel from FLIP image data acquired at commercial microscope systems. Analysis of protein mobilities within living cells heavily relies on quantitative fluorescence microscopy. The protein of interest is either tagged with a green fluorescent protein (GFP) or its color variants. Alternatively, linkage tags are introduced genetically (as HaLo or SNAP tags) for subsequent labeling with suitable organic dyes [1][2][3] . The intracellular dynamics of such tagged proteins can be followed and quantified by three principal approaches (a) measurement of fluorescence fluctuations in the steady state, as in fluorescence correlation spectroscopy and its imaging variants 4,5 , (b) single molecule tracking (SMT) to gather an ensemble of trajectories of individual molecules 6,7 and (c) local disturbance of the steady state by photobleaching followed by measurement of establishing a new steady state 2,8 . Here, we are concerned with the last approach only. The disturbance by localized photobleaching can be singular in time, as in fluorescence recovery after photobleaching (FRAP), continuous, as in continuous photobleaching (CP) or repeatedly pulsed, as in fluorescence loss in photobleaching (FLIP). In FRAP and CP, the fluorescence dynamics is typically only monitored at the site of bleaching 8,9 . Accordingly, only one temporal profile of fluorescence change is gathered in conventional FRAP and CP and can be used for subsequent modeling of binding and diffusion processes. This comes at the risk of parameter uncertainty and overfitting 8 , which is why more recent FRAP studies include the whole spatiotemporal profile involved in the bleach and recovery [10][11][12][13][14] .
In FLIP, the whole cell is automatically monitored, i.e., inside and outside the bleached domain, thereby naturally providing a temporal fluorescence profile (i.e., fluorescence loss) at each pixel position. Thus, FLIP provides comprehensive quantitative data on fluorescence dynamics for the whole cell as a precondition for reliable data modeling. However, only a few attempts have been made so far, to infer transport parameters from FLIP image data [15][16][17] . Luedeke et al. used a compartment model in their FLIP data modeling, in which a Heaviside function was used to describe the FLIP cycle of bleaching and scanning 16 . This lead to a non-linear ordinary algebraic-differential equation system, which was solved numerically. Diffusion was not explicitly included in this model. Gruebele and colleagues (2014) performed numerical simulations of the underlying reaction-diffusion model, in which the reaction term described the localized bleaching process 15 . They discretized the whole cellular domain into a few subdomains and fitted the experimental fluorescence loss in each subdomain to several diffusion models. To include the complete spatiotemporal fluorescence loss profile, we presented previously a quantitative FLIP model using a pixel-by-pixel analysis with an empirical fitting function available as a plugin to the popular image analysis program ImageJ 17,18 . This analysis method allowed for detecting local heterogeneities in fluorescence loss kinetics, but the underlying causes could not be inferred from the empirical model used. In 19 we presented a reaction-diffusion compartment model for intracellular transport observed in FLIP images, which can describe both diffusion, nucleo-cytoplasmic transport, and local binding mechanistically. We Fluorescence Loss in Photobleaching In FLIP a selected cell-area is repeatedly bleached using the intense laser beam of a confocal microscope. In between the bleaches, an image scan is made to observe the transport process, see Fig. 1 for illustration. The bleaching induces a decrease in fluorescence, not only in the bleaching area but in the whole cell due to the transport processes towards the repeatedly bleached area. This in principle allows evaluating the transport in the cell and between the intracellular compartments. Thus, any delayed fluorescence loss in a particular cellular region outside the bleach spot indicates hindrance to molecular transport, either due to steric barriers to diffusion (for example the nuclear membrane separating cytosol from the nucleus), due to binding or because of crowding. The latter has been shown to cause excluded volume effects and, in the case of the nucleus, fractal diffusion as a consequence of the complex DNA folding and topology 17,20,28 . A reaction-diffusion model in segmented FLIP images The PDE model of the FLIP process is a reduced version of the system in 19 defined on two compartments, namely nucleus and cytoplasm. To obtain a realistic simulation, the compartment boundaries are found via segmentation of the FLIP images. For this and later references, we will use Ω as a notation for the whole cell domain, and ∂Ω denotes the boundaries. Furthermore, we let Γ M represent the nuclear membrane, Ω N and Ω C represent nucleus and cytoplasm respectively. In this paper we let the bleaching area be located within the cytoplasm Ω B ⊂ Ω C , see Fig. 2. The segmentation of the FLIP images is produced by the Chan-Vese active contours algorithm 29 . 
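As a rough stand-in for this segmentation step, scikit-image ships a Chan-Vese implementation; the synthetic frame and parameter values below are placeholders rather than the authors' settings, and the real pipeline runs on frame 1 (cell, bleach spot) and frame 45 (nucleus).

```python
import numpy as np
from skimage.segmentation import chan_vese

# Synthetic stand-in for the first FLIP frame: a bright "cell" on a dark background.
yy, xx = np.mgrid[0:128, 0:128]
frame = np.exp(-((xx - 64) ** 2 + (yy - 60) ** 2) / (2 * 30 ** 2)).astype(float)
frame += 0.05 * np.random.default_rng(4).standard_normal(frame.shape)

# Chan-Vese active contour; mu weights the length (smoothness) term of the energy functional.
mask = chan_vese(frame, mu=0.25, lambda1=1.0, lambda2=1.0)
print("segmented pixels:", int(mask.sum()))
```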
The algorithm is based on level set functions where the goal is to minimize the Chan-Vese energy functional by activating the level set function through an artificial time-like parameter. By minimizing the energy functional one minimizes the total deviation from the average gray-levels in for-and background, respectively. The energy functional also takes the length and thereby the smoothness of the curve into account. The implementation and further description of the Chan-Vese algorithm can be found in 19,30 . In this paper, the algorithm is applied to localize boundaries of the cell, nucleus and bleaching area in our FLIP images. Here the cell and bleaching area are segmented from the first FLIP image, while the nucleus is segmented from frame number 45, where the nucleus geometry is clearest. A reaction-diffusion model with hindrance. Inspecting the FLIP images, one of the most conspicuous things would be the architecture of especially the nucleus. There is currently put a lot of research effort on characterizing spatial heterogeneities in intracellular diffusion and transport processes. Especially within the nucleus, it is observed that molecular crowding hinder GFP's diffusion in dense nuclear compartments 20,28 . GFP is considered as minimally interacting protein, such that specific binding to intracellular structures can likely be ignored. However, the spatial heterogeneity of GFP distribution, which we observed especially in the nucleus, indicates that the mesoscopic cellular organization together with non-specific interactions of eGFP can cause local enrichment or depletion of this protein. Such locally varying heterogeneous distribution of eGFP can be the consequence of protein partitioning into aqueous nuclear phases with differing properties 31 . Alternatively, it is the result of the fractal organization of diffusion barriers, for example stemming from the nuclear DNA content 20,28 . Such barriers to diffusion have been detected in the nucleus by pair correlation analysis of intensity fluctuations of eGFP 19 . Similarly, the heterochromatin-euchromatin border has been shown to form a barrier for protein diffusion 32 . In 17 the pixel-wise FLIP analysis shows a negative correlation between DNA content and the fluorescence intensity and fluorescence loss kinetics of GFP in the nucleus. The computational FLIP model therefore needs to account for the uneven distributions of nuclear proteins. We model the spatially varying eGFP distribution using rate constants and classical mass-action kinetics. It should be emphasized that this is a significant simplification, as diffusion of eGFP in the bounded state is ignored, and the underlying causes of local protein enrichment are not explicitly considered. However, as they are only partly understood, and we find good agreement of our simulation results with the experimental FLIP data, we use this pragmatic modeling approach here. More complicated modeling approaches including confined or anomalous diffusion will be discussed in section 6, below. Thus the model consists of both hindered and free fluorescence proteins and we define the observed fluorescence intensity as: where u and u b is the intensities of the free and hindered molecules, respectively. The high-intensity areas are the areas in which we find that GFP is hindered in its motion. Thus, in these areas, u has been transformed into u b , in contrast to areas of low intensity. 
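One plausible way to turn the first denoised frame into a spatial k+ map, anticipating the equilibrium relation formalized in the next subsection; the background level, the k- value, and the exact normalization are assumptions, since the form of Eq. (4) is not reproduced in the extracted text (only the fitted proportionality factor gamma = 0.319 is reported later).

```python
import numpy as np

rng = np.random.default_rng(7)
frame0 = 0.3 + 0.7 * rng.random((64, 64))   # stand-in for the first (denoised) FLIP frame

u0 = float(frame0.min())                    # uniform free-GFP background u_0 (an assumption)
ub0 = frame0 - u0                           # excess intensity carried by hindered molecules
gamma, k_minus = 0.319, 1.0                 # gamma as reported in the paper; k_minus illustrative

# At equilibrium u_b = (k_plus / k_minus) * u, so one plausible spatial map is
k_plus = gamma * k_minus * ub0 / u0
print("max k_plus:", float(k_plus.max()))
```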
This is described by the reversible, first order reaction mechanism: b k k where k + and k − are spatially resolved positive reaction constants; i.e. we account for the above mentioned diffusion barriers by a mean field approach using reaction rate constants k + and k − . Assuming diffusive transport of the free (but not the hindered) GFP-tagged molecules according to Fick's law, the time-dependent PDE model reads: where α is the diffusion coefficient for free GFP molecules, b is the intrinsic bleaching rate constant, q is the equilibrium constant for the reaction between the ground and excited state for a fluorophore 33 , thus is the total rate at which the fluorophores are bleached inside the bleaching area Ω B . Further, θ and χ Ω B are both characteristic functions, θ is time-dependent and simulates when the high-intensity laser bleaches, χ Ω B is space dependent and ensures that bleaching only occurs in the bleaching area: Ω At the initial time, before bleaching, the system is in equilibrium and the free molecules are uniformly distributed u 0 = const. Any higher fluorescence intensity is due to accumulation of hindered molecules Thus, the initial intensity of free molecules is the uniform background of the observed initial intensity The equilibrium state for (2) is given by It is reasonable to model k + = k + (x) to be positive where increased fluorescence intensity indicates the presence of hindered molecules Compartment model with semipermeable membrane. To obtain a realistic FLIP simulation at least two compartments are needed, i.e., the cytoplasm and nucleus. These compartments are separated by the nuclear membrane. According to Fick's first law the diffusive flux is anti-proportional to the gradient α = − ∇u J . To model diffusive transport across a semipermeable membrane interface where u may jump, we integrate Fick's law across the membrane to obtain = − − ) . Here, p denotes the solute permeability of the membrane measured in μm/s. The membrane separates the domain into two compartments labeled by ± superscripts. In our cell model see Fig. 2 for example, the nucleus Ω N is the minus-compartment and the cytoplasm Ω C is the plus-compartment. The outward unit normal vectors n ± along the common interface point into the opposite compartment. If the concentration outside is greater than inside u + > u − , then the flux points back into the minus-compartment resulting in a damping effect in agreement with Fick's law. As the outward normals along a common interface are opposite, the flux may be written as a jump bracket Despite the fact that biological transport across a membrane may be complex, it is common practice to approximate the permeability experimentally by dividing the measured flux by the jump in concentration 34,35 . At this point, we are ready to summarize the mathematical model. The complete PDE model. The fluorescence intensities of both free and hindered molecules are governed by the reaction diffusion system α χθ The reaction rates are taken from (4) and (5). 
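Before the membrane and DG machinery are introduced, the bulk two-species model can be illustrated with a crude 1-D explicit finite-difference sketch; all parameter values are invented, the nuclear-membrane interface condition of the next paragraph is omitted, and bleaching acts on the free pool only.

```python
import numpy as np

N, L = 200, 20.0                     # grid points, domain length in micrometres (illustrative)
dx = L / N
alpha, beta = 30.0, 5.0              # diffusion coefficient (um^2/s) and bleaching rate (1/s), made up
kp, km = 0.5 * np.ones(N), 1.0       # spatially varying k+ and constant k-
dt = 0.4 * dx**2 / alpha             # stable explicit time step

u = np.ones(N)                       # free GFP, uniform initial state
ub = kp / km * u                     # hindered GFP at equilibrium
bleach = (np.arange(N) * dx > 4) & (np.arange(N) * dx < 6)   # bleach region Omega_B

def step(u, ub, laser_on):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]            # crude zero-flux outer boundary
    react = -kp * u + km * ub                    # exchange between free and hindered pools
    du = alpha * lap + react - (beta * u if laser_on else 0.0) * bleach
    return u + dt * du, ub + dt * (-react)

for n in range(20000):
    u, ub = step(u, ub, laser_on=(n * dt) % 2.6 < 0.8)   # 0.8 s bleach within a 2.6 s frame
print("remaining total fluorescence:", float(np.sum(u + ub) * dx))
```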
Along the membrane, the diffusive flux (6) is expressed as interface condition Focusing on the intracellular architecture and diffusive transport of GFP, we may assume there is no transport of GFP across the cell membrane ∂Ω The normalised initial intensity 0 ≤ c(0, x) ≤ 1 is extracted from the first FLIP image and A discontinous Galerkin method with internal interface condition To effectively simulate the abrupt change in fluorescence intensity as seen in FLIP images, it is desirable that the numerical method can represent discontinuous functions. The Discontinuous Galerkin (DG) method was first introduced by Reed and Hill 36 in 1973 to resolve shocks in hyperbolic conservation laws. Independently, Babuska 37 , Wheeler 38 and Arnold 39 developed interior penalty discontinuous Galerkin (IPDG) methods for elliptic and parabolic problems. Since then the interest and the development of DG methods have been growing. The interested reader is referred to 40 where the history of their development until 1999 can be found. In this paper, the interface condition along the nuclear membrane (8) is implemented into the IPDG method based on 39,41 . To describe the method, we introduce some notation. Let  h denote the discretization of Ω into disjoint open elements K T ∈ h . In connection, let Γ denote the union of the boundaries of all . Note that the mesh should be constructed such that Γ ⊂ Γ M . Further we decompose Γ into three disjoint subsets . Further, let u + and u − denote a single valued function on two adjacent elements +  and  − . As usual, n ± denote the outward unit vectors on along  ∂ ± . Then average and jump term are defined as {u} = (u Note that the jump of a scalar gives a vector, while the jump of a vector is a scalar, moreover Consider the div-grad operator α ∇ ⋅ ∇u ( ) on two adjacent elements  ± . By partial integration (Green's first identity) where v denotes a suitable test function. Along the common edge Summing up over all elements ∈ h K T we thus find where both the membrane flux condition (6) and the zero flux boundary condition (9) have been used. The IPDG method enforces continuity across internal edges by a penalty term 37,39,41 . Using (11) the formula is symmetric and consistent for continuous solutions Here h denotes the average diameter of two adjacent elements, and σ is the Nitsche parameter 42 . The bilinear form for the div-grad operator based on the IPDG method reads The last integral in (12) is the internal penalty; a large enough Nitsche parameter enforces continuity across internal edges 37,39,41,42 . Let v and w be discontinuous, piecewise bilinear test-functions for u and u b respectively. The semi-discrete PDE with boundary condition and interface conditions (8) and (9) reads FEniCS implementation Applying a backward Euler time step to (13) results in the weak form for the time step This weak form is conveniently implemented using the automated Finite Element package FEniCS 27 . For faster execution, it is recommended to pre-assemble the system matrix, which FEniCS can do automatically based on the given mesh and weak formulation. The pulsating laser is realized by pre-assembling two systems, with and without the bleaching term B(u, v). To resolve the effect of bleaching, the bleaching interval is a multiple of the time step: . To use FEniCS a high-level Python script is written, where the weak formulation is expressed in the UFL form language. 
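A minimal sketch of what such a script can look like: one backward-Euler step of a DG(1) diffusion problem with symmetric interior-penalty terms, written in legacy DOLFIN/UFL syntax. Mesh, initial condition and coefficients are illustrative, this is not the authors' script, and the membrane term is indicated only in a comment.

```python
from dolfin import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "DG", 1)
u, v = TrialFunction(V), TestFunction(V)
n, h = FacetNormal(mesh), CellDiameter(mesh)
h_avg = avg(h)
alpha_d, sigma, dt = Constant(30.0), Constant(10.0), Constant(0.2)

u_old = interpolate(Expression("exp(-50*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)))", degree=2), V)

# Symmetric interior-penalty discretization of the div-grad operator (cf. Eq. (12)).
diff = alpha_d * dot(grad(u), grad(v)) * dx \
     - alpha_d * dot(avg(grad(u)), jump(v, n)) * dS \
     - alpha_d * dot(jump(u, n), avg(grad(v))) * dS \
     + alpha_d * sigma / h_avg * dot(jump(u, n), jump(v, n)) * dS
a = u * v * dx + dt * diff                 # backward Euler: mass term plus dt * diffusion
L_rhs = u_old * v * dx
# Along facets marked as nuclear membrane, the penalty would be replaced by the
# interface condition (8), i.e. a p*jump(u)*jump(v)*dS term restricted to those facets.

u_new = Function(V)
solve(a == L_rhs, u_new)
print("mass before/after:", assemble(u_old * dx), assemble(u_new * dx))
```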
UFL is a domain specific language for defining weak formulations in a notation close to the one presented in this paper 44 . DOLFIN then interprets the script and passes the UFL to the Variational Form Compiler (FFC). Then Instant (build on top of SWIG) turns it into a C++ function callable from Python. In the end, the linear systems are solved by the UMFPACK sparse, direct solver via PETSc 45,46 . Optional iterative and parallel solvers are available. A test on the given mesh and the system from this paper showed that the iterative generalized minimal residual method with PETSc algebraic multigrid preconditioner was overall 20-30% slower than the direct solver. Calibration and simulation of FLIP images The discontinuous Galerkin method approximates the solution to the PDE model (7)-(10) as a piecewise bilinear and possibly discontinuous function defined on a triangulation of the cell. The discrete geometry from the segmented FLIP images is written into a .geo geometry file. By Gmsh 47 the mesh is constructed on the segmented cell geometry found in the geo file and displayed in Fig. 3. It consists of 1523 triangles; 991 located in the cytoplasm, 503 in the nucleus and 29 in the bleaching area. The initial fluorescence intensity 0 ≤ c(0, x) ≤ 1 is extracted from the first FLIP image. The original images are affected by some noise, however. Therefore, the FLIP images are preconditioned by Gaussian smoothing (with a radius of one pixel = 0.05467326 μm) within the cell domain. The intensity of free and hindered molecules is initialized according to (10) i.e., the intensity pattern as seen in the first blurred FLIP image is carried by the hindered molecules. The bleaching time interval was Δt b = 0.8 s followed by a recovery phase of 1.8 s, resulting in a total frame rate of 2.6 s. For the simulation the discrete time step is set to be Δt = 0.2 s. Not yet defined model parameters are: the diffusion coefficient α, the bleaching term β = + b q q 1 both appearing in the PDE model (7), the proportionality factor γ in reaction rates (4) and (5), as well as the permeability constant p in the interface condition (8). Calibration. The remaining parameters are identified by calibrating the simulation to observed FLIP images. To this end, a misfit functional is minimized with respect to the parameters. At discrete times t i = 2.6(i − 1) + 2.0 seconds i = 1, 2, 3, …, 50 we measure the difference between the simulated intensity and the preconditioned (blurred) FLIP images represented as a piecewise linear finite element function on the mesh. For tests regarding the number of FLIP images used, see Supplementary S.3. Thus, the misfit functional is expressed as i i b i g i 1 50 2 where c g denotes the intensity of the goal function. Squaring the deviation puts a strong penalty on outliers and results in a more even distribution of residuals. The PDE constrained calibration problem reads: , , ) argmin ( , , , ), where u and u b solve the PDE model (7). To perform the optimization, we apply the Nelder-Mead downhill simplex algorithm 48 which is part of the SciPy library 49 . It calls the semipermeable membrane FLIP model (13) implemented as a FEniCS function. Initially the Nelder-Mead search constructs with five initial guess vectors ξ k = (α k , β k , γ k , p k ) forming a four dimensional simplex. The misfit functional (14) is evaluated in all five vertices E k = E(ξ k ) and the vortices are renumbered in ascending order < < <  E E E 1 2 5 . 
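The calibration loop itself reduces to a standard SciPy Nelder-Mead call; in the sketch below the forward FLIP solver is mocked by a placeholder function so that the optimization scaffolding is runnable on its own (function and variable names are hypothetical, not the authors' code).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
goal_frames = rng.random((50, 16, 16))      # stand-in for the 50 preconditioned goal images

def flip_forward(params):
    """Placeholder for the DG FLIP simulation run with (alpha, beta, gamma, p)."""
    alpha, beta, gamma, p = params
    return goal_frames * np.exp(-1e-3 * (alpha + beta + gamma + p))

def misfit(params):
    """Squared deviation summed over all frames, in the spirit of Eq. (14)."""
    return float(np.sum((flip_forward(params) - goal_frames) ** 2))

x0 = np.array([30.0, 5.0, 0.3, 0.1])        # initial guess for (alpha, beta, gamma, p)
res = minimize(misfit, x0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6})
print(res.x, res.nfev)
```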
The least optimal simplex vector ξ 5 is replaced by a (hopefully) better approximation. The iteration stops if both the progress in the optimal parameters ξ ξ − The Nelder-Mead algorithm can call the FLIP solver multiple times per iteration, here resulting in 231 function evaluations in form of forward solutions of the PDE system (7)- (10). The calibration process takes approximately 3 hours on an Intel Core i5 processor at 3.2 GHz with 8 GB memory running Ubuntu 14.04.5. Simulation and visualisation. With the optimized parameters (15) our FLIP model as stated in Section 2.3 is completely determined. Recall that reaction rates k ± as well as initial intensities are extracted from the first (denoised) FLIP image. A sequence of FLIP images in McArdle RH7777 cells is displayed in the top row of Fig. 6(a-d). Green fluorescent protein (GFP) was repeatedly bleached with full laser power at a 30 pixel (1.64 μm) diameter circular region in the cytoplasm (green circle), in a temperature controlled (35 ± 1 °C) environment of a Zeiss LSM 510 confocal microscope using the 488-nm line of an Argon laser. The entire images were scanned with 0.5% laser power between each bleach. The total frame rate inclusive bleaching was 2.6 s and the image area is approximately 15 × 15 μm. As mentioned earlier, we use Gaussian blur with radius 1 pixel to denoise the FLIP image. The blurred FLIP sequence is presented in the second row of Fig. 6(e-h). The first blurred FLIP image is used to create k + and the subsequent is used to generate goal functions. A goal function is a piecewise linear discontinuous Galerkin function defined on the mesh, based on the pixel values from the blurred FLIP images. The goal functions displayed in the third row of Fig. 6(i-l) were used to calibrate the FLIP model. Finally, the simulation results of our calibrated FLIP model can be seen in the lowest row of Fig. 6(m-p). The structure established in the simulation mainly originates from the reaction kinetics given in (2). In Fig. 7 k + is illustrated based on the estimated proportional factor γ = . 0 319. One can clearly see that the spatial map of k + resembles the structure from the first intensity image as stated in (4). Hindrance to free diffusion is clearly higher in the nucleus compared to the cytoplasm, which is in accordance with earlier studies 10,20 . Discussion and Conclusion To compare the spatiotemporal profile of fluorescence loss between experiment and simulation, we make use of our previously developed method, namely to fit a stretched/compressed exponential (StrExp) function to each pixel position in the data and simulation outputs 17 . This function is an extension of the exponential function, as it can be considered as the sum of exponentials with a distribution of rate constants, rather than a single rate constant. This leads to a time-dependent rate coefficient, suitable for modeling delays and long-tail kinetics, not addressable using a single exponential decay function. The StrExp function is widely used for modeling physico-chemical processes and is used here to provide an independent assessment of the quality of our FLIP model. 
The StrExp function provides an accurate description of fluorescence loss kinetics and reads with amplitude map I 0 (x), time constant map, τ(x), heterogeneity map, h(x) and a background term, The heterogeneity parameter describes the shape of the intensity decay with 0 ≤ h < 1 modeling a delayed (compressed) exponential and 1 < h ≤ 2 modeling a stretched exponential, which is faster than exponential initially and slower for long times compared to τ. For h = 1, one recovers a mono-exponential function. We showed previously, that the StrExp function can accurately model diffusional transport in FLIP simulations, both in 2D and in 3D. We found that the shape of the fluorescence loss profile is well approximated with 1 < h ≤ 2 inside the bleach spot and a gradient of h-values as function of distance from the bleaching spot in the range 0.5 ≤ h < 1 outside the bleached region 17 . We demonstrated also that binding/release-dominated transport can be fitted with a StrExp function as well. Finally, we found that local heterogeneity in the h-map between neighboring pixels for GFP FLIP experiments indicates deviation from classical diffusional transport with space-invariant diffusion constant in living cells. In fact, we found for exactly the same experimental FLIP sequence used in the current study, that pixel-to-pixel variation of h-values, either larger or smaller than one exist in the cytoplasm and in the nucleus (see Figs 4 and 5 in 17 ). This can be seen particularly clearly when calculating the rate coefficient map, which is defined as: Here, I n (x, t) = exp(−(t/τ(x)) (1/h(x)) ) refers to the intensity decay normalized to the initial fluorescence given an amplitude equal to one 17,52 . For a stretched decay, the rate coefficient decreases over time, while for a compressed decay, the rate coefficient increases, indicating respective slowing and accelerating fluorescence loss kinetics at a given position 17 . We fitted this function to the experimental and calibrated FLIP sequence using a plugin, which we presented previously to the popular image analysis program ImageJ 18 named PixBleach 55,56 . As shown in Fig. 8, the outcome of the FLIP simulation and calibration coincides nicely with the experimental FLIP data including spatially heterogeneous amplitude and time constant maps. As for the experimental data, fluorescence loss in the nucleus is significantly slowed, and the nucleus shows spatially varying fluorescence loss kinetics in experiment and FLIP simulation. From that, we conclude that our model, using spatially varying binding/release rate constants can accurately describe the experimentally known heterogeneity of nuclear diffusion of GFP, even though, we do not explicitly model spatially varying diffusion (i.e., we kept D spatially invariant and varied local binding affinities to unknown subcellular structures) 17,23 . The spatially varying intensity of GFP is observed at steady state in living McArdle cells and has been reported in many other studies as well 23,57 . Local differences in diffusion of GFP have been measured by fluorescence correlation spectroscopy (FCS) in the nucleus of HeLa cells, ranging from D ≈ 10 μm 2 /s to D ≈ 35 -50 μm 2 /s, but those differences in diffusion were not correlated with GFP intensity in the same regions 23 . It is likely that the compact nuclear DNA creates local barriers to diffusion 28 , which we detect as locally delayed fluorescence loss profiles 17 . 
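A pixel trace can be fitted to this StrExp form with a few lines of SciPy; the synthetic trace, initial guesses and noise level below are illustrative, and the PixBleach plugin itself is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def strexp(t, I0, tau, h, bg):
    """Stretched/compressed exponential: I = I0*exp(-(t/tau)**(1/h)) + bg (h < 1 compressed, h > 1 stretched)."""
    return I0 * np.exp(-(t / tau) ** (1.0 / h)) + bg

t = np.arange(50) * 2.6 + 2.0                           # frame times used in the calibration
rng = np.random.default_rng(6)
pixel_trace = strexp(t, 1.0, 60.0, 1.4, 0.05) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(strexp, t, pixel_trace, p0=(1.0, 50.0, 1.0, 0.0))
I0, tau, h, bg = popt
rate_coeff = (1.0 / h) * t ** (1.0 / h - 1.0) / tau ** (1.0 / h)   # k(t) = -d ln(I_n)/dt
print(tau, h)
```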
As long as diffusion barriers are penetrable for GFP on the time scale of its cellular turnover by synthesis and degradation no spatial gradients of this protein should be expected. In other words, barriers can cause protein confinement on a short time scale but should lead to normal diffusion on a long time scale and therefore to a complete exploration of the three-dimensional nuclear space. As a consequence, any concentration gradients will be smoothened out and a homogeneous nuclear intensity of GFP would be expected. If on the other hand, the affinity of GFP for various nuclear subregions varies, a heterogeneous steady state distribution can be expected. Coexisting phases due to differences in polyelectrolyte concentration and properties have been proposed to contribute to the nuclear organization 31 , and GFP could show different affinities for such nuclear domains. Thus, our simplified mass-action model, while ignoring intradomain diffusion, emphasizes exchange of GFP between nuclear areas of different affinities for this protein. The same is true, though to a lower extent, for the cytoplasm. Similarly, the nuclear membrane can be seen as a barrier to diffusion, detectable by a variant of FCS 58 . The time constant map inferred from fitting the StrExp function to the experimental FLIP sequence or to the calibrated FLIP model data changes abruptly at the nuclear membrane, demonstrating that our computational FLIP model can detect barriers to diffusion as well (Fig. 8c). Also, the heterogeneity map and the maps of rate coefficients indicate delayed fluorescence loss in the nucleus for the experimental and calibrated FLIP sequence (compare Fig. 8b and e,f). This delay, characterized by a compressed StrExp function with increasing rate coefficients as function of time is a direct consequence of the presence of two effects: i) the nuclear membrane, acting as stringent barrier to diffusion and ii) hindrance to diffusion combined with partitioning preference of GFP in domains in the nucleus, which also causes the higher overall accumulation of GFP in that compartment compared to the cytoplasm. Both, the comparable shape of the fluorescence loss kinetics and the nuclear accumulation of GFP despite passive permeation across the nuclear membrane, are important validations of our reaction-diffusion FLIP model. Interestingly, on a smaller spatial scale (i.e., in the range of a few microns) the heterogeneity map is more structured for the experimental FLIP data than for the calibrated model (Fig. 8b). This leads to a larger spatial variation of the bleaching rate coefficients in the experimental FLIP sequence compared to the FLIP model (compare Fig. 8e and f, especially in the nucleus). It is likely that this minor discrepancy is a result of anomalous diffusion processes, which are not taken into account in our model 59 . For further validating our model of passive permeation across the nuclear membrane, we made use of the data by Mohr et al. 24 , who compared the size dependence of nuclear permeation of various inert and spherical probe molecules 24 . The passive (i.e. not receptor mediated) influx of each studied molecular species followed first order kinetics, and the measured influx rate constant in permeabilized HeLa cells could be used to estimate the membrane permeability as p = k · V/A (nuclear volume, V = 1130 μm 3 and nuclear area, A = 540 μm 2 ). 
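The two conversions used here and in the following paragraph, p = k·V/A and the Stokes-Einstein relation, are simple enough to spell out; the water-like viscosity and the example influx rate constant are assumptions, so the printed numbers are illustrative rather than values from the cited measurements.

```python
import math

KB, T, ETA = 1.380649e-23, 308.15, 1e-3    # J/K, 35 degC, water-like viscosity in Pa*s (assumption)

def stokes_einstein_D(radius_nm):
    """Free diffusion coefficient D = kB*T / (6*pi*eta*r), returned in um^2/s."""
    r = radius_nm * 1e-9
    return KB * T / (6 * math.pi * ETA * r) * 1e12

for name, r in [("Fl-Cys", 0.67), ("Ubq", 1.69), ("MBP", 2.85), ("GFP", 2.42)]:
    print(name, round(stokes_einstein_D(r), 1), "um^2/s")

def permeability_from_influx(k_per_s, V_um3=1130.0, A_um2=540.0):
    """p = k * V / A, with the nuclear volume and area quoted from Mohr et al."""
    return k_per_s * V_um3 / A_um2

print(permeability_from_influx(0.05), "um/s for an illustrative k = 0.05 1/s")
```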
With these values and the Stokes-Einstein relation, we have performed a forward simulation of a FLIP experiment with selected probe molecules of very different Stokes radius (Supplemental Fig. S8). Clearly, increasing the Stokes radius from 0.67 nm for Fluorescein-tagged cysteine (Fl-Cys), over 1.69 nm for Ubiquitin (Ubq) to 2.85 nm for maltose-binding protein (MBP) had a dramatic effect on the fluorescence loss kinetics in the nucleus. While the nuclear membrane presented not much of a barrier for the nucleocytoplasmic exchange of Fl-Cys, permeation of MBP was strongly hindered. On the same time scale, lateral diffusion of all three probe molecules to the bleached area caused complete fluorescence loss in the cytoplasm (Supplemental Fig. S8). Together, these simulation results are in line with the experimental findings of Görlich and colleagues 24 , and shows the potential of our reaction-diffusion FLIP model to study nuclear transport and intracellular diffusion of other cargo molecules than GFP. The simulation results of the calibrated FLIP model agree very well with the goal function and even the FLIP images in Fig. 6. The internal structure of the cell is accurately reproduced by the remarkably simple reactiondiffusion model. It might be worth noting that the FLIP images and hence also the goal function reflect a time interval of 1.8 s what it takes the confocal microscope to scan the image during the recovery phase after bleaching. The simulated images, however, display snapshots at discrete times t = 0, 26, 52 and 104 seconds. By applying a discontinuous Galerkin method, it is possible to model the nuclear membrane as an internal interface instead of resolving the internal membrane dynamics as in 19 . As a consequence not only the DG mesh consists of 147 times fewer triangles, but also the PDE model is simpler replacing the internal membrane dynamics by the interface condition (8). The typical runtime for the simulation of a FLIP sequence is about 108 times faster than for the continuous Galerkin method. Also, this result exceeds the expectation formulated in the introduction. One reason is that the PDE model (7) consists of only two equations instead of four as in 19 . In the literature, one can find several papers using a semipermeable membrane model, see 34,51,60 . Peters 51 measures the permeability constant for a liver cell with a different size of dextrans. The article presents results for dextrans with a molecular mass of 19.5, 39.0 and 62.0 kDa. Although only three measurements are presented, it is clear that the correlation between the mass of the molecules and the respective measured permeabilities 0.705, 0.027 and 0.0036 μm/s is nonlinear. As we only have three data points, a fit would be strongly biased by the error in the data. For GFP with its estimated Stokes radius of 2.42 nm and a molecular mass of 27 kDa however, one may expect a permeability in the lower range of the interval (0.027, 0.705). The estimated permeability for GFP of p = 0.111 clearly matches with that expectation. It is also possible to model active transmembrane dynamics in the framework of a discontinuous Galerkin method. In that case, the semipermeable membrane condition (8) will be replaced by an active membrane condition based on reaction kinetics. A follow-up article is in preparation. Data availability. Experimental FLIP sequences, simulated images and program code will be made available by the authors upon request.
7,681.8
2018-01-23T00:00:00.000
[ "Computer Science" ]
Effects of Acetylated Veneer on the Natural Weathering Properties of Adhesive-Free Veneer Overlaid Wood‒Plastic Composites. The purpose of this study is to investigate the natural weathering properties of unmodified and acetylated veneer overlaid wood‒plastic composites (vWPCs) manufactured by one-step hot press molding. The results show that the water absorption and thickness swelling of vWPC with acetylated veneer were lower than those of unmodified vWPC. In addition, the surface tensile strength of vWPC increased with increasing weight gain of acetylated veneer, and the flexural properties of vWPC were not significantly different. Furthermore, the results of natural weathering demonstrated that not only the photostability but also the modulus of elasticity (MOE) retention ratio and surface tensile strength of vWPC with acetylated veneer were significantly higher than those of vWPC with unmodified veneer. Thus, better dimensional stability, surface tensile strength, and weathering properties can be achieved when the vWPC is made with acetylated veneer, especially those containing veneers with a higher weight percent gain. Introduction In the past few decades, wood-plastic composites (WPCs) have been used in various fixtures, such as window framing, fencing, roofing, decking, and siding [1]. The global WPC market has experienced significant growth in North America and Europe [2]. Additionally, WPCs have been increasingly the focus of research interest [3,4]. However, WPCs are composed of synthetic polymers and wood particles (or wood fiber), which are subjected by photodegradation upon exposure to sunlight, especially ultraviolet (UV) light. Therefore, the color fading and strength weakening of WPCs can be caused by weathering and restrict the WPCs to certain outdoor applications. It has long been shown that lignin is liable to photodegrade among constituents of wood, and this leads to radical-induced depolymerization of lignin, hemicellulose, and cellulose at the wood surface [5,6]. Furthermore, the strength losses of wood after weathering are caused by wood swelling and shrinkage after moisture effects [7]. It is well known that the dimensional stability, hydrophobicity, and weatherability or durability of wood can be improved by acetylation [8][9][10]. On the other hand, according to Altenbach [11], efficient load bearing of conventional polymer composites with homogeneous single-layered structures could be achieved when polymer composites Polymers 2020, 12, 513 2 of 10 have multilayered structures, thereby making polymer composites more valuable. It has been shown that layered particleboard and fiberboard have higher flexural strength and stiffness than homogeneous counterparts at the same density level. Hsu et al. [4] reported that the specific flexural properties of three-layered bamboo-plastic composite (BPC 3L ) were higher than those of homogeneous single-layered BPC. Furthermore, Najafi et al. [12] and Adhikary et al. [13] indicated that recycled plastic is usually suitable for manufacturing WPCs. Therefore, to improve the aesthetic appearance, flexural strength, and weathering properties of WPCs, unmodified and various acetylated veneers were applied to the surface of WPCs to manufacture adhesive-free veneer overlaid wood-plastic composites (vWPCs) by one-step hot press molding in this study. 
Consequently, the physicomechanical and weathering properties of vWPCs with unmodified and acetylated veneers were compared to evaluate the effectiveness of acetylation as a means of improving the weatherability of vWPCs for outdoor applications.

Materials Taiwan red pine (Pinus taiwanensis Hayata), a fast-growing wood species, was sampled from the experimental forest of the National Chung Hsing University in Nan-Tou County. Wood particles were prepared by hammer milling and sieving; particles between 16 and 24 mesh were selected and used in this study. Defect-free rotary-cut radiata pine (P. radiata D. Don.) veneer sheets with a thickness of 2 mm were purchased from Wan Tsai Industry Co., Ltd. (Chiayi, Taiwan). Recycled high-density polyethylene (rHDPE; MFI: 4.20 g/10 min; density: 940 kg/m³) was kindly supplied by Horng Gee Co., Ltd. (Changhua, Taiwan). All plastic pellets were ground in an attrition mill to reduce their particle size to less than 20 mesh before composite processing. The chemicals and solvents used in this experiment were purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA).

Acetylation Treatments Veneers were acetylated with acetic anhydride (AA) using the vapor-phase reaction method [14] at a solid/liquid ratio of 2 g/mL. All reactions were conducted at 140 °C for 2–8 h to obtain acetylated veneers with different degrees of modification. At the end of the reaction, the acetylated veneers were washed with distilled water for 24 h to remove the reagent residues and byproducts (i.e., acetic acid). Finally, the acetylated veneers were dried at 105 °C for 12 h, and the weight percent gain (WPG) was calculated as follows: WPG (%) = 100(M1 − M0)/M0, where M0 and M1 are the oven-dried weights of the veneer before and after acetylation, respectively.

Composite Processing The flat-platen pressing process was applied to the manufacture of adhesive-free vWPCs according to our previous studies [15,16]. The weight ratio of oven-dried wood particles (moisture content less than 3%) to rHDPE powder was 50/50 for the WPC core. The manufacturing process of vWPCs is shown in Figure 1. Two pieces of 2 mm thick veneer were used for the surface layers on both sides of the WPC core mat, and the longitudinal grain directions of the surface veneers were parallel to each other. The expected density of the vWPCs was 800 kg/m³. The formed sandwich panels (300 mm × 200 mm with 12 mm thickness) were hot pressed at 180 °C and 2.5 MPa for 8 min and then cold pressed until the core temperature of the vWPCs decreased to 40 °C.

Natural Weathering Test For the natural weathering test, composite specimens were exposed facing south, inclined at a 45° angle, for a period of 1185 days at the campus of National Chung Hsing University (24°07'25.7'' N, 120°40'30.7'' E). During the exposure period, the temperature ranged from 7.7 to 36.6 °C, and the average relative humidity and annual precipitation were 77.0% and 6494 mm, respectively. The exposed samples were periodically removed, and their properties were measured regularly.

Determination of vWPC Properties To determine the properties of the vWPCs, several determinations, including density, water absorption, thickness swelling, flexural properties, and surface tensile strength, were made according to the Chinese National Standard (CNS) 2215. In brief, specimens with dimensions of 230 mm × 50 mm × 12 mm were used to evaluate the flexural properties by the three-point static bending test with a loading speed of 10 mm/min and a span of 180 mm.
The surface tensile strength of the vWPC was determined on samples with dimensions of 50 mm × 50 mm × 12 mm at a tensile speed of 2 mm/min. All samples were conditioned at 20 °C and 65% relative humidity for 2 weeks prior to testing. The retention ratios of the modulus of elasticity (MOE) and modulus of rupture (MOR) of the vWPCs after natural weathering were determined as follows: MOE retention ratio (%) = 100(MOEt/MOE0); MOR retention ratio (%) = 100(MORt/MOR0), where the subscript indices 0 and t denote the vWPC data before and after weathering for a time t, respectively.

ATR-FTIR Spectral Measurements Attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectra of all specimens were recorded on a Spectrum 100 FTIR spectrometer (Perkin-Elmer, Buckinghamshire, UK) equipped with a deuterated triglycine sulfate (DTGS) detector and a MIRacle ATR accessory (Pike Technologies, Madison, WI, USA). The spectra were collected by coadding 32 scans at a resolution of 4 cm−1 in the range from 650 to 4000 cm−1. Three spectra were acquired at room temperature for each composite.

Measurement of Surface Color The color of the composite surface was measured by a color and color difference meter (CM-3600d, Minolta, Tokyo, Japan) under a D65 light source with a test window diameter of 8 mm. The color parameters L*, a*, and b* of all specimens were obtained directly from the colorimeter. Based on the CIE L*a*b* color system, L* is the value on the white/black axis, a* is the value on the red/green axis, b* is the value on the yellow/blue axis, and the ΔE* value is the color difference (ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^1/2).
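The WPG, retention-ratio, and CIE L*a*b* color-difference formulas defined in the sections above are simple arithmetic; the sketch below applies them to made-up example values (illustrative only, not data from the study).

```python
import math

def weight_percent_gain(m0: float, m1: float) -> float:
    """WPG (%) = 100 * (M1 - M0) / M0, oven-dried weights before (M0) and after (M1) acetylation."""
    return 100.0 * (m1 - m0) / m0

def retention_ratio(value_t: float, value_0: float) -> float:
    """Retention ratio (%) = 100 * value_t / value_0, used for both MOE and MOR."""
    return 100.0 * value_t / value_0

def delta_e(L0, a0, b0, Lt, at, bt) -> float:
    """CIE L*a*b* color difference: dE* = [(dL*)^2 + (da*)^2 + (db*)^2]^(1/2)."""
    return math.sqrt((Lt - L0) ** 2 + (at - a0) ** 2 + (bt - b0) ** 2)

# Hypothetical example values, for illustration only (not data from this study)
print(f"WPG = {weight_percent_gain(50.0, 58.0):.1f}%")                    # 16.0%
print(f"MOE retention ratio = {retention_ratio(2.0, 4.6):.1f}%")          # 43.5%
print(f"dE* = {delta_e(70.0, 8.0, 25.0, 55.0, 12.0, 20.0):.1f}")          # 16.3
```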
Analysis of Variance All results are expressed as the mean ± standard deviation (SD). The significance of differences was calculated by Scheffé's test or Student's t-test, and p values < 0.05 were considered significant.

The Physical and Flexural Properties of vWPCs The various physical and flexural properties of the vWPCs with different WPGs of acetylated veneers are shown in Table 1. The densities of all vWPCs were approximately 785–824 kg/m³, and there were no significant differences among them. In addition, after 24 h of immersion in water, the water absorption and thickness swelling decreased with increasing WPG of the veneer. Of these, the vWPC with WPG 16 acetylated veneers exhibited the lowest water absorption (9.9%) and thickness swelling (1.6%). This behavior may be attributed to the acetylation of the hydroxyl groups of the veneer cell wall with AA, which decreases the content of hydrophilic hydroxyl groups and results in a more hydrophobic surface [17,18]. Another possible explanation for the reduced volume swelling of the acetylated vWPC is that part of the volume of the veneer cell wall is occupied by the added chemicals (bonded acetyl groups), which reduces the additional swelling of the modified veneer upon exposure to water soaking [19,20]. In addition, Table 1 also shows that there were no significant differences in the modulus of rupture (MOR) and modulus of elasticity (MOE) between unmodified and acetylated vWPCs, even at a WPG of 16%. The values of MOR and MOE for all vWPCs were approximately 46.2–51.9 MPa and 4.1–4.6 GPa, respectively. This result indicates that the flexural properties of the vWPC were not influenced by the acetylation of the overlaid veneers. According to the reports of Rowell and Banks [21] and Birkinshaw and Hale [22], acetylation with AA does not noticeably affect the mechanical properties of modified wood. Therefore, the acetylated veneer did not significantly affect the flexural properties of the vWPC. In contrast, the surface tensile strength of the veneer for vWPC increased with increasing WPG of the veneer. The strength increased from the original 490 kPa to 1153 kPa when the WPG of the acetylated veneer reached 16%. It is well known that the surface tensile strength of vWPCs depends on the bonding strength between the WPC core and the overlaid veneer. The interfacial adhesion between the veneer and the hydrophobic WPC core can be enhanced by veneer acetylation [18,23,24]. Thus, better stress transfer from the surface veneer to the WPC core through the interface results in a high surface tensile strength. (Table 1 note: Values are the mean ± SD (n = 5 for 24 h soaking and flexural properties; n = 3 for surface tensile strength). Different letters indicate significant differences within a column (p < 0.05).)
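The significance testing described above (Student's t-test with p < 0.05 as the threshold) can be reproduced with standard statistics tooling; the replicate values below are hypothetical (n = 5, as in the soaking tests), not the study's data.

```python
from scipy import stats

# Hypothetical 24 h water-absorption replicates (%) for two groups, illustration only
unmodified = [21.5, 22.1, 20.8, 21.9, 22.4]
acetylated_wpg16 = [9.7, 10.2, 9.9, 9.6, 10.1]

t_stat, p_value = stats.ttest_ind(unmodified, acetylated_wpg16)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, significant at 0.05: {p_value < 0.05}")
```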
Characteristics of the vWPCs During Natural Weathering 3.2.1. Appearance Characteristics of the vWPCs During Natural Weathering The appearance characteristics of all vWPCs changed significantly during 1185 days of natural weathering. As shown in Figure 2, the surface color of all vWPCs darkens as the natural weathering time increases. In addition, visible cracks developed remarkably at the surface of the veneer of the vWPC with unmodified veneer after natural weathering for 32 days. Afterward, the number and size of cracks in the unmodified veneer increased with increasing exposure time up to 1185 days of natural weathering. Similar results were also reported by Evans et al. [25]. The explanation given is that the unmodified veneer swelled and shrank after absorbing and desorbing moisture, resulting in cracks in the veneer. In contrast, the vWPCs with acetylated veneers showed almost no crack formation after 1185 days of natural weathering. In other words, the weatherability of the vWPC with acetylated veneers is better than that of the vWPC with unmodified veneers.

Color Changes of the vWPCs During Natural Weathering The color variation of vWPCs with unmodified and acetylated (WPG 6, WPG 11 and WPG 16) veneers during 1185 days of natural weathering was evaluated using the CIE L*a*b* color system. As shown in Figure 3, there was no significant difference in the L*, a*, and b* values of all vWPCs before natural weathering.
In addition, the L* value of the vWPC with unmodified veneer decreased during natural weathering (Figure 3a). This result differs from that of Stark [7], who reported that lightening of wood flour-plastic composites occurred during accelerated weathering. However, the L* value of all vWPCs with acetylated veneers increased with increasing exposure time during the first 8 days. Afterward, the value decreased with increasing exposure time. Compared to the unmodified vWPC, the L* value of the acetylated vWPCs was higher than that of the unmodified vWPC after weathering for periods of up to 250 days. In addition, the b* value of all vWPCs showed no significant difference (Figure 3c), but the a* value of the unmodified vWPC was higher than that of the acetylated vWPCs for the same period of time (Figure 3b). After 1185 days of natural weathering, the a* and b* values of all vWPCs showed no significant differences. These results revealed that the surface color of the unmodified vWPC was darker and redder than that of the acetylated vWPCs. Meanwhile, Figure 3d shows that all vWPCs were more sensitive to color change at the initial stage of natural weathering, and the ΔE* values of vWPCs with unmodified, WPG 6, WPG 11, and WPG 16 acetylated veneers were 9.5, 5.6, 6.9, and 6.9, respectively, after weathering for 8 days. Then, the value decreased with increasing exposure time after weathering for periods of up to 32 days. Afterward, the value increased with increasing exposure time until 512 days of natural weathering and then leveled off. The ΔE* values of vWPCs with unmodified, WPG 6, WPG 11, and WPG 16 acetylated veneers were 34.5, 36.8, 33.8, and 37.3, respectively, after natural weathering for 1185 days. This result demonstrated that the unmodified vWPC was more susceptible to photooxidation than the acetylated vWPCs, since the acetylated veneers retarded the photodegradation process during the initial period of natural weathering. It is well known that among the constituents of wood, lignin is most susceptible to photodegradation [26]. Most of the coloring substances generated by photooxidation of lignin come from further reactions between the intermediary phenoxy radicals and oxygen, resulting in the browning process of wood [27].
Therefore, the a* and b* values of the vWPC with unmodified veneer increased with increasing exposure time in the first 8 days. However, the vWPCs with acetylated veneers retarded the browning process during natural weathering, and similar results were observed for acetylated veneer [8] and esterified wood [28]. Meanwhile, Ohkoshi [29] and Mitsui [30] reported that acetylated wood is subject to photobleaching upon exposure to UV light, which explains why the L* value of all vWPCs with acetylated veneers increased with increasing exposure time in the first 8 days. These results suggest that the acetylation of wood can play a major role in controlling the natural weathering process of wood and wood composites.

Mechanical Properties of the vWPCs During Natural Weathering The changes in flexural properties and surface tensile strength of the various vWPCs during weathering are shown in Tables 2 and 3, respectively. Table 2 shows that the MOE retention ratios of vWPCs with unmodified and acetylated veneers decreased significantly during natural weathering. Similar results were reported for WPC weathering by Stark [7]. On the other hand, the MOE retention ratio of the vWPCs with acetylated veneer (WPG 6, 11 and 16) remained at 83.6%–89.5% after natural weathering for 64 days. In contrast, the MOE retention ratio of the vWPC with unmodified veneer decreased significantly to 48.7% after 64 days of natural weathering. The explanation given is that photodegradation occurs mainly in the lignin on the veneer surface, leading to a cellulose-rich surface. As a result, wood cell walls swell when penetrated by water, facilitating deeper light penetration and providing sites for further degradation, resulting in the deterioration of the mechanical properties of the veneers [31]. Meanwhile, the unmodified veneer swelled and shrank after absorbing and desorbing moisture. Such cyclic dimensional changes could result in cracks in the veneer (Figure 2), leading to a reduction in the MOE of the veneer and the vWPC. However, the dimensional stability and hydrophobicity of wood can be remarkably improved by acetylation [8][9][10], thus resulting in high strength retention for the acetylated composites during natural weathering, especially for composites with higher WPG. After 1185 days of natural weathering, the MOE retention ratios of the vWPCs with various degrees of acetylation of the veneers decreased in the following order: WPG 16 (43.3%), WPG 11 (37.1%), WPG 6 (32.9%), and unmodified (15.8%).
Of these, the vWPC with WPG 16 acetylated veneer retained the greatest strength over the weathering period, while the vWPC with unmodified veneer retained the least. Similar to the trend observed for the flexural strength, the MOR retention ratios of the vWPCs were WPG 16 (40.2%) > WPG 11 (30.4%) > WPG 6 (21.0%) > unmodified (17.1%) after 1185 days of natural weathering. A similar result was observed for acetylated Scots pine by Evans et al. [9]. Furthermore, as shown in Table 3, the surface tensile strength of all vWPCs generally decreased with increasing natural weathering time. Among them, the veneers of all unmodified vWPCs had peeled off from the WPC core after weathering for 512 days; thus, the surface tensile strength of the unmodified vWPC could no longer be determined. The explanation for this observation is that the interfacial adhesion between the unmodified veneer and the WPC core is poor, and the cyclic dimensional changes of the veneer during weathering then lead to the surface veneer peeling off. However, the interfacial adhesion between the veneer and the WPC core and the dimensional stability of the veneers can be enhanced through acetylation. Therefore, the surface tensile strength of the vWPCs with acetylated veneer remained at 331–349 kPa after 1185 days of natural weathering. These results demonstrate that the mechanical strength of vWPCs for outdoor application can be improved by veneer acetylation. (Tables 2 and 3 note: Values are the mean ± SD (n = 5). Different lowercase and capital letters indicate significant differences within a row and a column (p < 0.05), respectively. p < 0.01 (one-tailed test) compared to the "unmodified" group.)

ATR-FTIR Analysis of vWPCs During Natural Weathering In this study, ATR-FTIR spectroscopy was used to monitor the specific reactions of vWPCs during natural weathering. As shown in Figure 4, the intensity of the absorption bands corresponding to the C=C groups in the aromatic rings (1510 and 1600 cm−1) of lignin decreased with increasing exposure time. After 4 days of natural weathering, these absorption bands of the unmodified veneers and of the acetylated veneers with WPG 6 and 11 had almost disappeared. However, the acetylated veneer with WPG 16 retained some of the absorption bands of lignin. Similar results have been reported by Evans et al. [9], who suggested that acetylation of wood to lower WPGs had no protective effect on lignin and even increased the susceptibility of lignin to degradation during weathering. In addition, the absorption band of the carbonyl group (1731 cm−1) of the vWPC with unmodified veneer increased significantly in the first 4 days of weathering. Afterward, the absorption band of the carbonyl group decreased or even disappeared as the exposure time increased. Accordingly, an explanation for this phenomenon is that photodegraded products of lignin located on the surface of the veneer are leached during weathering, causing the absorption band of the carbonyl groups to decrease. Furthermore, the absorption bands at 1737 (–OCOCH3, C=O), 1371 (–OCOCH3, C–H), and 1237 cm−1 (–OCOCH3, C–O) also decreased significantly with increasing weathering time for all acetylated vWPCs. This result indicated that deacetylation or partial hydrolysis of these groups occurred during weathering. Similar results have also been shown for some esterified woods [32]. Accordingly, these results revealed that the effect of acetylation on improving the photostability of the vWPCs was not significant.
6,547.8
2020-02-27T00:00:00.000
[ "Materials Science" ]
Compact fusion energy based on the spherical tokamak Tokamak Energy Ltd, UK, is developing spherical tokamaks using high temperature superconductor magnets as a possible route to fusion power using relatively small devices. We present an overview of the development programme including details of the enabling technologies, the key modelling methods and results, and the remaining challenges on the path to compact fusion.

Introduction Since the mid-1980s the spherical tokamak (ST) has been recognized as an important device for fusion research [1][2][3][4]. Such devices demonstrate all the main features of high aspect ratio tokamaks but are relatively small and inexpensive to construct. Moreover, research has shown that they have beneficial properties, such as operation at high beta [2] and at higher elongation [3,4], and possibly higher confinement [4], although more data are needed at higher field and lower collisionality to determine this important aspect. Early attempts to design reactors based on STs did not produce convincing designs, and until recently STs have been seen mainly as useful research devices and possibly as neutron sources for component testing. However, recent advances in both tokamak physics and superconductor technology have changed the situation, and relatively small STs operating at high fusion gain are now considered possible. The key physics step is the realization that the power and the device size needed for high fusion gain may be considerably less than previous estimates, while the key technological step is the advent of ReBCO high temperature superconductors (HTS). In addition to operating at relatively high temperatures, HTS can also produce and withstand relatively high magnetic fields: both of these properties are beneficial in the design of magnets for fusion devices, especially for STs where space is limited in the central column. Sorbom et al [5] have considered the application of HTS to tokamaks of conventional aspect ratio and produced a design for ARC, a fusion power plant slightly greater in size than JET and at considerably higher field. In this paper, we describe the Tokamak Energy (TE) programme to develop an alternative route to fusion power based on STs constructed using HTS magnets, and the modelling and concept work underway to determine the optimum power and size of an ST/HTS fusion module. This work identifies key aspects in the physics and technology that significantly affect the size, power and feasibility of such a module. In parallel, experimental work is underway addressing these aspects, including the construction and operation of a series of STs. In this paper, we present recent new results and the status of the development programme, and we outline the intended next steps. The paper is divided into five main sections. In section 2 we summarise briefly our earlier modelling work that indicates that there is potentially a solution for a high fusion performance device at relatively small major radius and low aspect ratio. In section 3, we give an overview of the TE development programme; we include a brief description of the STs operated, presently under construction and planned at TE. Our predictions of the performance of a candidate ST fusion module are extended and updated in section 4. Possibilities for modular fusion are discussed briefly in section 5. A summary is given in section 6.
Power and size of tokamak pilot plants and reactors Recent modelling with a system code based on an established physics model has shown that, when operated at reasonable fractions of the density and beta limits, tokamak pilot plants and reactors have a power gain, Q fus, that is only weakly dependent on size; mainly it depends on P fus and H, where P fus is the fusion power and H is the confinement enhancement factor relative to empirical scalings [6]. Frequently the ITER reference scaling (IPB98y2) is used and H is defined relative to that. When expressed in dimensionless variables this scaling has a significant inverse dependence on the plasma beta (β^−0.9). However, dedicated experiments on several devices, in which the dependence of the confinement time on beta has been probed directly, have shown that the confinement time is almost independent of beta; alternative beta-independent scalings have been developed, for example that by Petty [7]. These scalings are arguably more appropriate because they give consistency between single-device and multi-device experiments. Modelling with the system code has shown that the power needed for a given fusion gain is a factor of two to four lower with these scalings (figure 1) [6]. The dependence on P fus implies that it is principally engineering and technological aspects, such as wall and divertor loads, rather than physics considerations, that determine the minimum device size. The lower power requirement arising from the beta-independent scalings is especially advantageous. Using the system code, a wide parameter scan was undertaken to establish possible regions of parameter space that could potentially offer high Q fus with acceptable engineering parameters. In addition to the high aspect ratio, large tokamak solution, a region of parameter space at low aspect ratio and relatively small major radius, and hence small plasma volume, has been identified (figure 1). The physics advantages (such as high beta) of low aspect ratio potentially enable a compact ST module to achieve a high fusion gain at a modest toroidal field (TF) of around 4 T, whereas a compact conventional aspect ratio tokamak requires a very high field on axis of ~12 T to achieve high fusion gain, as evidenced by Ignitor [15]. A candidate device (ST135) with a major radius (R 0) of 1.35 m, aspect ratio (A) of 1.8 and magnetic field on axis (B T0) of 3.7 T operating at P fus = 185 MW with a Q fus of 5 was suggested. The study that led to this proposal was mainly a physics study; engineering aspects were not investigated. Some important engineering and technological aspects are currently being developed and key results are presented in this paper (section 4).

HTS Use of conventional low temperature superconductor (LTS) for an ST fusion device appears impractical because thick shielding (⩾1 m) would be needed to prevent neutrons heating the superconductor to above 4 K. With shielding of this thickness on the inner central column, the device would be very large.

Figure 1. P fus as a function of R 0 at constant Q fus = 30, H = 1.5 for both the IPB98y2 scaling and beta-independent scalings, for A = 3.2 and A = 1.8. The values of some key engineering parameters are given in the text in the figure, including the field at the conductor on the inboard side, B cond. P div is the transported loss power that has to be handled in the divertor after allowance for radiation losses. Details are given in [6].
The conventional large tokamak solution (left) and the potential low-A solutions (right) are indicated. The circled areas show that, with the beta-independent scaling, the wall loads and divertor loads for a relatively small (R = 1.4 m), low aspect ratio (1.8) device would be in the range of 3.5 MW m−2 and 45 MW m−1, respectively, for the same Q fus. This is challenging (although proposals to reduce the divertor load in an ST are described in section 4.2) but in the region of those likely to have to be dealt with in much larger and more powerful devices. The value of B cond is high in the case of the low-A approach but potentially achievable using magnets made with HTS (sections 3 and 4).

The advent of HTS, however, potentially provides a solution. HTSs were discovered in the late 1980s, and the 2nd generation ReBCO (where Re = yttrium or gadolinium) tapes have very promising properties; in particular, they are able to carry high currents under very high magnetic fields. Although superconductivity occurs at around 91 K in zero magnetic field, far better performance is achieved when the tape is cooled to around 20-40 K. Thus, for constructing tokamaks, HTS has potentially two advantages relative to LTS: an ability to carry more current at high field, and less demanding cryogenics [8].

ST25(HTS) To gain experience with constructing tokamaks using magnets made from HTS, TE constructed a small but complete tokamak (figure 2). This provided the world's first demonstration of a tokamak in which all the magnets are made from HTS. All coils (toroidal and poloidal) are wound from YBCO HTS tape. The 6-limb TF cryostat is cooled 'cryo-free' to ~20 K using a single Sumitomo cold head, seen above the vessel, with thermal conduction from the HTS tape provided by copper strips; the two poloidal field (PF) coils are cooled by He gas to 20-50 K. A 29 h run was obtained in June 2015, with an RF discharge in hydrogen (figure 2). The TF magnet in ST25(HTS) used a continuous length of 12 mm wide YBCO tape of 48 turns in each of 6 limbs, which when operated at 400 A would provide a TF at R = 0.25 m of ~0.1 T, chosen to permit current drive (CD) via 2.45 GHz microwave sources. This simple design is prone to single point failure (particularly at any of the several soldered joints), was not designed to tolerate quenches, and was operated considerably below the critical current, which is ~1 kA at 20 K and in the low self-field, which is <1 T at the inner TF limb. A high performance fusion ST will need a TF of 3-4 T, which requires the development of high current HTS cables. TE is currently developing the HTS cable, joint and quench management technologies required to build and operate a larger device that will operate at higher field and with very large stored energy (section 3.4). A major challenge is the design of the central column, and this is being addressed in the design work for ST135 (section 4.1).

ST40 To date STs have operated at TFs of less than 1 T. For high fusion performance, devices operating at 3 T or above will be needed. To construct an ST that can operate at fields at this level, innovative engineering solutions will be needed, especially for the central column. To develop and demonstrate solutions to the key engineering aspects, TE is constructing a device (ST40) with copper magnets that is intended to operate at fields up to 3 T. Beyond this device TE is planning high field STs using HTS.
ST40 (figure 3) will have a design field of B T0 = 3 T at a major radius of R 0 = 0.4 m, and a centre-rod current of 6 MA. Use of copper for the TF coil (as in all existing STs, except ST25(HTS) at TE) has the advantage of combining structural strength with good conductivity (especially when cooled to liquid nitrogen temperature). Whereas existing STs have operated typically at 0.3-0.5 T, with the recent MAST, Globus-M and NSTX upgrades striving for 1 T, innovative design features are employed to enable ST40 to operate at up to 3 T. Principal amongst these is the use of Constant Tension Curve TF limbs, specially designed so that over the permitted temperature rise (whether starting from ambient or from liquid nitrogen temperature) the expansions of the centre post and the return limbs are matched, so that minimal movement occurs at the critical top and bottom joints, with a simple robust flexi-joint provided to accommodate the movement. At fields of 3 T, stresses are high, and an external support structure based on two steel rings (shown in grey above and below the magnet) accommodates in-plane and out-of-plane forces, such as those arising from tolerance errors in the radial position and the J×B twists arising from TF-PF and TF-solenoid interactions. The ST40 mechanical design was analysed extensively by a series of electromagnetic analyses using Opera [10], which simulated the forces expected in operational scenarios, including Vertical Displacement Events. These forces were then used in mechanical Finite Element Analyses of major components, using Ansys [11]. For example, the central column of the TF magnet, formed from 24 twisted wedge-shaped conductors, exhibits the highest stress at the inner edge. For the maximum wedge current of 0.25 MA required to produce a field of 3 T at a plasma major radius of 0.4 m, this stress is ~100 MPa in the copper. The copper is half hard, with a yield stress of around 180 MPa. Comparing this to the Von Mises yield criterion gives a factor of safety on yield of 1.8. An important aid to obtaining such a high field is the use in ST40 of a minimal solenoid, made possible by the merging-compression (MC) process for plasma start-up. This should produce hot plasmas with currents of up to 2 MA without use of the central solenoid, which is only needed to maintain the flat-top current, assisted by the high bootstrap fractions expected and by CD from NBI or RF. Hence, the solenoid is considerably smaller than in MAST and NSTX and their upgrades. This reduces J×B twisting stresses, allows more copper for the TF column, which reduces TF resistance and heating, and provides a stronger TF post. The centre post is constructed from 24 wedges, each twisted by 15 degrees over their length, thus obviating the need for a TF compensating coil. The TF, solenoid and PF coils are powered by 'supercapacitors', such as the Maxwell 125 V, 63 F, 0.5 MJ transport module, providing a very economical power supply that can be charged from laboratory power supplies. Each unit has a limiting fault current of ~7 kA even under dead-short conditions, providing safety, which is an important consideration in a 100 MJ capacitor bank. The plasma pulse length is limited by the temperature rise in the centre post; initial operations with a water-cooled TF magnet will provide a TF of 1-2 T at R 0 = 0.4 m with a flat top of 1-3 s; operation with liquid nitrogen cooling considerably reduces resistance and hence heating and enables longer pulses, and should permit operation at up to 3 T with a flat top of ~1 s.
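Two of the numbers quoted above can be checked with one-line calculations; the sketch below (a plausibility check using only the values stated in the text, not a design calculation) evaluates the stored energy of one supercapacitor module and the factor of safety on yield for the centre-post copper.

```python
# Stored energy of one supercapacitor module: E = 0.5 * C * V^2
capacitance_F = 63.0      # F, Maxwell transport module as quoted
voltage_V = 125.0         # V
energy_MJ = 0.5 * capacitance_F * voltage_V ** 2 / 1e6
print(f"Module energy ~ {energy_MJ:.2f} MJ")      # ~0.49 MJ, consistent with the quoted 0.5 MJ

# Factor of safety on yield for the centre-post copper
von_mises_stress_MPa = 100.0    # peak stress from the FEA quoted above
yield_stress_MPa = 180.0        # half-hard copper yield stress
print(f"Factor of safety ~ {yield_stress_MPa / von_mises_stress_MPa:.1f}")   # ~1.8
```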
The MC coils (indicated in figure 3) operated successfully in START and MAST, and extrapolation to ST40 is discussed in [12]. The MC process involves the formation of plasma rings around each of the MC coils shown in figure 3 by rapid discharge of high voltage capacitor banks. These plasma rings attract each other and merge on the midplane, followed by an adiabatic compression of the plasma to the desired major radius of ~0.4 m. It is shown that the plasma current immediately after merging increases with TF and linearly with the MC coil current. In ST40 the TF and MC coil current are increased over those in MAST by factors of up to 6 and 2, respectively. Extrapolation indicates that ST40 should have a plasma current after merging of around 1 MA; the subsequent adiabatic compression phase halves the radius in ST40 and should approximately double the plasma current, assisted by the significant reduction in inductance of the plasma ring as it takes up the highly shaped ST form. The MC scheme will be operated at the highest performance permissible, to produce the highest possible plasma currents and plasma temperatures. The final design features MC coil currents of 600 kAt in each coil, produced by an 11 kV, 28 mF capacitor bank with a downswing time of ~10 ms, which induces the plasma rings, and uses very slender support legs (to minimise interference with the plasma rings, which have changing helicity). The MC coil mounting structure was analysed in Ansys, and a prototype was tested to 10 000 cycles at the design load of 73 kN and then finally pulled to destruction. Final failure occurred at approximately 3 times the design load. In addition to the original objective of providing a high vacuum version of the pioneering START ST at a tenfold increase in TF, the specification has been extended: indeed, it is expected that the MC scheme will provide up to 10 keV plasmas in ST40, the plasma being heated by the rapid conversion of magnetic field energy into plasma kinetic energy during the merging. Full details of its predicted performance, and of the expected evolution of the electron and ion temperature profiles, are provided in [12], based on extensive studies on both MAST and Japanese STs in collaboration with Y Ono and his team [13]. The ST40 device is currently under construction and is expected to begin operation in 2017.

Future development programme As mentioned above, the intention is to combine the experience gained with the low field HTS device ST25(HTS) with that obtained with the high field copper device ST40 to design and construct high field STs using HTS for the magnets. The objectives will be to develop physics understanding of a high field ST, to test HTS cable technology, and to establish HTS performance under DT fusion conditions. ST40 should provide valuable information to determine the energy confinement scaling in a high-field ST. TE is designing a high field HTS magnet, using cable technology similar to that described in section 4, to establish the engineering viability. Research is advancing rapidly on these topics, both in-house and worldwide, and the precise DT fusion experiments are still under consideration. As shown in section 4.3, these can range in size from small, short-pulse research devices to steady-state devices of major radius ~2 m.

(Figure caption fragment: growth rates of the highly elongated (κ ~ 2.6) plasma shown can be limited to e-fold times of ~20 ms by the passive plates (indicated), and can be stabilized by internal active feedback coils [9].)
Conceptual design of a prototype fusion power module: ST135 While from a physics perspective it seems that a compact fusion module may be possible (section 2), the feasibility of such a device depends critically on there being satisfactory engineering solutions in a few critical areas. Three important components are the central column, where the stress must be handled while accommodating the HTS TF magnet; the divertor, where high power loads must be handled; and the inboard shielding, which is needed to protect the HTS tape from bombardment by high energy neutrons so that it has an acceptable lifetime, and also to reduce the neutron heating to a level that can be handled with a reasonable cryogenic system. Possible solutions for these components are under study and development within TE, and are outlined in the following sections.

HTS central column design One possible arrangement (figure 4) utilises two significant features of HTS tape: namely, operation at 20-30 K, which gives sufficient current carrying capability at high magnetic field but at much lower cryogenic cooling cost than operation at 4 K, and the property that tape aligned parallel to the local magnetic field can carry several times more current than non-aligned tape. In this simple model the individual HTS tapes are bonded into multi-layer cables, and for the initial calculations we assume that the entire structure has the strength of half-hard copper. Towards the geometric centre of the column the magnetic field reduces, and in consequence the current carrying capacity of the HTS tape increases. This makes it possible to reduce the number of tapes. For this simplified design, in which the HTS cables are arranged to produce a uniform current density over the central HTS magnet, we can derive a simple expression for the peak stress, which occurs at r = 0, as follows. The current density in the centre rod magnet is J_cc = I_cc/(π R_cc²), where I_cc (MA) is the total centre rod current and R_cc (m) is the radius of the magnet. Since we are assuming constant current density in the central column, the TF in tesla at any radius r (m) within it is B(r) = 0.2 r I_cc/R_cc². If we neglect hoop stress and integrate the J × B force from r to R_cc, we can obtain the inward force at any radius within the central column. We find that the peak compressive stress (σ_cc) occurs at the column axis, and is

σ_cc = (B_T0 R_0)²/(μ0 R_cc²),  (1)

where we have used Ampere's law B_T0 = 0.2 I_cc/R_0 to replace I_cc, and where B_T0 is the TF in tesla at the plasma major radius R_0 (m). For the reference ST135 design, R_0 = 1.35 m, R_cc = 0.25 m, plasma current = 7.2 MA, B_T0 = 3.7 T, A = 1.8 and elongation κ = 2.64, so that the peak field at the edge of the HTS magnet is 20 T and the central column current is 25 MA. With a neutron shield thickness of 0.35 m, the calculated peak radial stress is 320 MPa. This is high, but it is in the form of uniform hydrostatic compression when an axial compressive stress of the same order is provided (below). A finite element analysis of the centre column with a Young's modulus of 90 GPa and a Poisson's ratio of 0.35 gives a peak stress of 255 MPa. This lower figure reflects the support provided by tangential stiffness. However, a practical centre column containing cooling channels and various materials with different mechanical properties is likely to have higher localised stress. We find expression (1), although approximate, useful for scoping studies.
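The simplified column model above can be checked numerically; the short sketch below evaluates the uniform-current-density relations and the reconstructed expression (1) for the ST135 reference parameters (a consistency check only, not the TE system code), reproducing the quoted ~25 MA, ~20 T and ~320 MPa to within rounding.

```python
import math

MU0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m

def central_column_estimates(B_T0_T, R_0_m, R_cc_m):
    """Centre-column current, edge field and peak compressive stress for the simplified uniform-J model."""
    I_cc_MA = B_T0_T * R_0_m / 0.2                             # from B_T0 = 0.2 * I_cc / R_0 (I_cc in MA)
    B_edge_T = 0.2 * I_cc_MA / R_cc_m                          # field at the edge of the HTS magnet
    sigma_MPa = (B_T0_T * R_0_m / R_cc_m) ** 2 / MU0 / 1e6     # expression (1), peak stress at r = 0
    return I_cc_MA, B_edge_T, sigma_MPa

I_cc, B_edge, sigma = central_column_estimates(B_T0_T=3.7, R_0_m=1.35, R_cc_m=0.25)
print(f"I_cc ~ {I_cc:.0f} MA, B(R_cc) ~ {B_edge:.1f} T, sigma_cc ~ {sigma:.0f} MPa")
```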
Expression (1) shows that the forces increase as the square of the TF but reduce as the square of the central column radius. Hence, for example, a 0.05 m addition (20%) to the HTS core radius (accompanied by a 0.05 m decrease in shield thickness, if it is desired to maintain the aspect ratio of 1.8) reduces the field at R_cc to 16.7 T and the peak stress to 205 MPa, whilst maintaining a field of 3.7 T at R_0 = 1.35 m. Other stresses are also important: in particular, stresses arising from axial loads at the inboard TF leg are considerable and can be the limiting stresses [14], depending on the device design. These stresses are not yet included in our analysis. The compact radial build of an ST module, however, should make it feasible to include an external mechanical structure to apply a pre-load compression to the centre rod. If this can be accomplished successfully, then the compressive stress would dominate. As a point of comparison, we note that Ignitor has developed a design solution along these lines [15]. In that case, the necessary mechanical strength has been obtained by designing the copper coils and their steel structural elements (C-clamps, central post, bracing rings) in such a way that the entire system, with the aid of an electromagnetic press when necessary, can provide the appropriate degree of rigidity to the central legs of the coils to handle the electrodynamic stresses, while allowing enough deformation to cope with the rapid

Whereas it is conventional to twist superconductor cables to minimize AC losses, these will not be significant in the TF magnet of an ST power plant, as this magnet will have a slow rise to a constant peak current. With a suitable design, use can be made of the substantial increase in performance afforded by aligned operation, giving a corresponding reduction in cost.

Divertor loads High power plasmas in relatively small devices would impose high divertor loads if operated in the single-null (SN) configuration, especially in an ST, where the inner strike point is at low radius and space to mitigate the power load by angled strike points or long divertor legs is limited. However, the use of double-null divertor (DND) operation, as studied extensively on the START and MAST STs [16], can considerably improve the loading, as the DND configuration is very favourable for the ST concept. Firstly, the inner SOL is now (largely) isolated from the outer, and it is found that most scrape-off-layer (SOL) power escapes through the outer segment and so is incident on the outer strike points; the inner/outer power ratio varies widely, depending on plasma conditions. During ELMs the ratio can be over 20 times higher; during inter-ELM periods, when the core heating is partially retained, the ratio can fall to 4, approximately the ratio of the inner and outer SOL areas; but the average ratio is typically taken as 10 in MAST [17]. Full analysis of the divertor performance requires exact specification of the machine parameters and the detailed divertor design. These details are not available at the present, pre-conceptual phase of the ST135 project. Instead, it is instructive to compare our divertor with the FNSF design [3], which is similar to ST135. FNSF is particularly relevant because an HTS version of the (copper magnet) FNSF series was developed as a joint study between TE and PPPL, and is presently used as a concept design for ST135, as reported in [18].
The study of divertor loads in an R_0 = 1.7 m version of FNSF [3] estimates the peak divertor loads for both the inner and outer DND strike points to be less than 10 MW m−2. The load on the divertor target is roughly P_div/S_w, where P_div = P_SOL − P_rad is the power delivered to the targets and S_w ∝ R_trg × f_x × λ_q is the effective wetted area. Here P_SOL is the power entering the scrape-off layer (SOL); P_rad is the power spread over the side walls, mostly by radiation; R_trg is the radius of the strike point; f_x is the flux expansion from the midplane to the target; and λ_q represents the width of the SOL at the midplane as given by the Eich scaling [19]. In reality, S_w also includes flux broadening and non-proportional power dissipation in the divertor, but for first estimates one can consider them proportional to λ_q. ST135 is designed to have P_fus = 200 MW and Q_fus = 5, whereas the FNSF design envisages P_fus = 160 MW and Q_fus = 2. The heat entering the plasma is the combination of alpha heating and auxiliary heating, making a total of 112 MW in FNSF and 80 MW in ST135, and after radiation losses due to impurities, bremsstrahlung and cyclotron radiation this will enter the SOL. The strike points R_trg are at about 20% larger radius in FNSF, and the expansion f_x should be very similar. The Eich scaling predicts λ_q varying as B_pol^−1.2, and B_pol is a factor of 1.4 higher in FNSF, so λ_q is a factor of 1.5 larger in ST135. Overall, we conclude that the strike areas should be similar; and since P_SOL is approximately 30% less, the peak power on each outer divertor in ST135 should be ~7 MW m−2, compared to ~10 MW m−2 in FNSF. This estimate suggests that the power loading of the divertor targets in ST135 should be tolerable. However, the effectiveness of DND operation in limiting inner strike point loads, especially if fast transients such as ELMs are present, is important and requires further experimental results on position control, ELM mitigation, timescales of load transients, etc. Experiments on ST40 are planned to deal with some of these aspects. Other key engineering aspects, such as the parallel heat flux and the manufacturing and installation accuracies of the divertor tiles, also need investigation.
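The 80 MW and 112 MW heating powers quoted above follow from simple D-T bookkeeping (alpha power is one fifth of the fusion power, auxiliary power is P_fus/Q_fus); the sketch below reproduces these totals and the rough outer-target load comparison, using only the ratios stated in the text (it is not a divertor model).

```python
def heating_power_MW(P_fus_MW: float, Q_fus: float) -> float:
    """Heating entering the plasma: alpha power (P_fus / 5 for D-T) plus auxiliary power (P_fus / Q_fus)."""
    return P_fus_MW / 5.0 + P_fus_MW / Q_fus

p_heat_st135 = heating_power_MW(200.0, 5.0)   # -> 80 MW
p_heat_fnsf = heating_power_MW(160.0, 2.0)    # -> 112 MW
print(f"ST135 heating ~ {p_heat_st135:.0f} MW, FNSF heating ~ {p_heat_fnsf:.0f} MW")

# Outer-target peak load, assuming similar wetted areas (as argued in the text) and
# scaling the quoted FNSF value of ~10 MW/m^2 by the ratio of heating powers
fnsf_peak_load_MW_m2 = 10.0
st135_peak_load_MW_m2 = fnsf_peak_load_MW_m2 * p_heat_st135 / p_heat_fnsf
print(f"ST135 outer-target peak load ~ {st135_peak_load_MW_m2:.0f} MW/m^2")   # ~7 MW/m^2
```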
Shielding, energy deposition, neutron flux and damage in the central core An extensive investigation of candidate materials for the inner shield has been carried out, and tungsten carbide with water cooling has been identified as a promising material [20]. MCNP Monte Carlo code [21] calculations of the attenuation due to this shield have been carried out. The attenuation of the neutron flux, and the associated heat deposition in the central core, as a function of shield thickness have been parameterised and included in the TE system code [20]. The heat deposited will have to be removed actively with a cryoplant, and an estimate of the power requirement is also included. To determine the optimum shield thickness, several factors have to be taken into account simultaneously. For a device of given Q_fus, H factor and aspect ratio A, it is necessary to consider each of the attenuation due to the shield, the magnetic field on the HTS tape and the radial stress in the central column. The TE system code has been extended so that these different aspects can be considered simultaneously. It was found that, in order to keep the peak radial stress around its limiting value of 320 MPa as the major radius increased, the radius of the superconducting core also needed to increase, but less rapidly than the shield thickness. The extra space in the radial build as the major radius increases is used to increase both the thickness of the shield and the radius of the HTS core in the ratio 92% to the shield thickness T_shield and 8% to the HTS core radius R_cc, which approximately maintains constant stress. As an example, for a reference plasma (Q_fus = 5, P_fus = 201 MW, H(IPB98y2) = 1.9, A = 1.8, κ = 2.64, β_N = 4.5), we present in figure 5 the variation of key parameters with major radius. We see that at the reference major radius for ST135 (R_0 = 1.35 m), the shield thickness is 0.31 m, the field on the conductor is 20.2 T, the plasma current is 7.2 MA, the neutron heating of the central column is 97.7 kW, and the wall load is 1.88 MW m−2. To handle this level of neutron heating, we estimate that a cryogenic plant of 3.0 MW wall-plug power would be needed. It is clear from the figure that as the shield thickness increases, the heating of the central column reduces rapidly. The crosses in figure 5 show computations of the energy deposition into the superconducting core made using the MCNP code. It is seen that the fit to the system code prediction is good over a wide range of radius without the need for any change in parameters. The left of the figure corresponds to the limit of zero shield thickness, and it is seen that only here, for shield thicknesses below a few cm, does the computed deposited power fall significantly below the simple exponential dependence of the form 10² × exp[−6.61(R_0 − 1.35)] kW (where R_0 is in m). A key aspect not yet included is any change in tape performance due to irradiation by neutrons. The neutron flux across the outer surface of the superconducting core has been calculated using MCNP. The full triangles in figure 6 show the neutron flux above 0.1 MeV at the outer surface of the superconducting core, as measured in the central mid-plane region (8.6% of the total core height), where the flux is highest. The flux variation with major radius, fitted at radii above 1 m, is shown by the dashed lines to decay exponentially, appreciably faster than that of the power deposition mentioned earlier, with the form 3.54 × 10¹⁷ exp[−7.08(R_0 − 1.35)] n s−1 m−2. It is seen that for major radii below 1 m the flux is rather lower than predicted from this exponential decay. Indeed, for zero shield thickness, which occurs at a major radius of 0.592 m, the flux is only a fraction 0.283 of its expected value. This is modelled, as shown by the full lines, by subtracting from the above function 5.45 × 10¹⁹ exp[−14.56(R_0 − 0.592)] n s−1 m−2.

Figure 5. Heating power deposited in the superconducting core, and other key parameters, as a function of plasma major radius. The scan has been performed with a constant H_IPB98y2 = 1.9, the central temperature adjusted to give 0.8 of the Greenwald density limit, and the TF adjusted to give 0.9 of the beta limit. The extra space made available by increasing the major radius has been divided in the ratio 92% to the shield thickness T_shield and 8% to the HTS core radius R_cc across the plot. The circles show the reference design at 1.35 m major radius. The crosses show the energy deposition calculated independently using the MCNP code.

Inevitably the HTS performance will degrade, but information on the extent of the degradation is limited.
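The fitted flux expressions quoted above can be evaluated directly; this sketch restates the fits numerically (it is not an MCNP calculation) and checks the quoted flux fraction at zero shield thickness.

```python
import math

def fast_neutron_flux(R0_m: float):
    """Fast (>0.1 MeV) neutron flux at the core surface from the fits quoted in the text, in n s^-1 m^-2."""
    base = 3.54e17 * math.exp(-7.08 * (R0_m - 1.35))                # dashed-line fit, valid at larger radii
    corrected = base - 5.45e19 * math.exp(-14.56 * (R0_m - 0.592))  # full-line fit with the low-radius correction
    return corrected, base

flux_zero_shield, expected = fast_neutron_flux(0.592)   # zero shield thickness
print(f"Flux / uncorrected fit at R_0 = 0.592 m: {flux_zero_shield / expected:.3f}")   # ~0.28, cf. the quoted 0.283

flux_ref, _ = fast_neutron_flux(1.35)
print(f"Flux at the reference R_0 = 1.35 m: {flux_ref:.2e} n s^-1 m^-2")
```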
Eisterer's work on HTS tapes [22] irradiated in a fission reactor has suggested that the tape lifetime corresponds to a total neutron fluence of about 10^23 m^-2. The open diamonds in figure 6 show the seconds of continuous running assuming this fluence limit. For many scientific objectives, the actual running time is likely to be composed of many relatively short pulses. The measurements by Eisterer were made at ambient temperatures rather than the ~30 K expected during operation. They were made using a reactor flux whose energy dependence may be quite different from that expected behind the neutron shield of a fusion plant. Gamma radiation damage has not been included and may be important. Raising the temperature of the tape temporarily (annealing) may restore tape performance. In this case also, information is limited and dedicated R&D is needed. A modular power plant If a relatively small fusion module is feasible, then a possible alternative supply of fusion power based on a modular concept may be available. Compared to ST135, a higher Q_fus would be needed, ~10-20, and the tritium breeding ratio would need to be >1. To meet these requirements, the device would probably have to be somewhat larger than ST135, but still small relative to the large DEMOs considered for the single-device approach. The energy confinement in STs at high field, and the thickness of shielding needed to protect the HTS, especially on the central column, have a strong impact on the minimum size. It is expected that within the next few years better estimates in both cases will be available through dedicated R&D, and it will then be possible to optimise the size and power of an ST fusion module. The economics and operational advantages of a modular concept, utilizing perhaps 11 small 100 MW units (10 working and 1 undergoing maintenance), have already been outlined [23]. The advantages include improved availability; cyclic maintenance; the need for only a relatively small hot cell; a sharing of start-up and energy conversion facilities; the possibility of varying plant output in time by switching individual modules; and the economics of mass production. STs can exhibit the combination of high bootstrap fraction and high beta, which is important both for maximizing power gain and for obtaining and maintaining the plasma current, especially in the absence of a central solenoid. In this latter respect, recent predictions that RF techniques can provide full plasma current initiation and ramp-up [24] are encouraging; an initial tokamak-like plasma can be formed by using electron Bernstein wave (EBW) start-up alone [25]. EBW current drive (CD) may then be used for the plasma current ramp-up because of its relatively high efficiency η = R_0 n_e I_CD/P_RF ≈ 0.035 (10^20 A W^-1 m^-2) [26] at low electron temperatures; a short numerical illustration of this figure is given at the end of this article. EBW CD efficiency remains high even in over-dense (ω_pe > ω_ce) plasma [27]. At the reactor level of temperatures, ~10 keV, the EBW CD efficiency η ≈ 0.1 would become comparable with other CD methods, so a combination of different CD techniques with different accessibilities to the plasma may become beneficial. Summary The TE programme is aimed at developing the ST as a future power source. Areas that have a high leverage on the feasibility of this approach have been identified and are under study in current R&D. Two such areas are the energy confinement scaling at high field (3-4 T), and the impact of fusion neutron irradiation on the properties of HTS rare-earth tape at 20-30 K.
Both are under investigation and the data should be available in the near future. Favourable results could lead to economic fusion based on modular high-gain STs of relatively small size (R_0 < 1.5 m); less favourable results could lead to larger, but still economic, ST fusion power plants of around 1.5-2 m major radius. In either case, the small scale of the fusion modules should lead to rapid development and make possible the resolution of the remaining key outstanding physics and technology steps that are needed for the realisation of fusion power.
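As a closing numerical note on the EBW current-drive efficiency quoted in the modular power plant discussion above: only the figure η ≈ 0.035 (10^20 A W^-1 m^-2) comes from the text, while the RF power, density and major radius below are placeholder assumptions chosen for illustration.

```python
def ebw_driven_current_MA(p_rf_MW, n_e_1e20, r0_m, eta=0.035):
    """I_CD from eta = R0 * n_e * I_CD / P_RF, with n_e in units of 10^20 m^-3 and
    eta in 10^20 A W^-1 m^-2 (the low-temperature EBW value quoted in the text)."""
    return eta * (p_rf_MW * 1e6) / (r0_m * n_e_1e20) / 1e6

# Hypothetical ramp-up scenario: 10 MW of EBW power at n_e = 0.5e20 m^-3, R0 = 1.35 m.
print(f"~{ebw_driven_current_MA(10, 0.5, 1.35):.2f} MA of EBW-driven current")
```

At the reactor-level efficiency of η ≈ 0.1 quoted above, the same power would drive roughly three times this current.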
8,498
2017-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Digital Tools to Facilitate the Detection and Treatment of Bipolar Disorder: Key Developments and Future Directions Bipolar disorder (BD) impacts over 40 million people around the world, often manifesting in early adulthood and substantially impacting the quality of life and functioning of individuals. Although early interventions are associated with a better prognosis, the early detection of BD is challenging given the high degree of similarity with other psychiatric conditions, including major depressive disorder, which corroborates the high rates of misdiagnosis. Further, BD has a chronic, relapsing course, and the majority of patients will go on to experience mood relapses despite pharmacological treatment. Digital technologies present promising results to augment early detection of symptoms and enhance BD treatment. In this editorial, we will discuss current findings on the use of digital technologies in the field of BD, while debating the challenges associated with their implementation in clinical practice and the future directions. Introduction Bipolar disorder (BD) is a chronic and recurrent mental illness that affects 2.4% of the worldwide population [1]. BD usually manifests in early adulthood, with the median age at onset found to be 33 years of age and a peak age at onset of 19.5 years of age [2]. BD presents a profound negative impact on individuals' lives, with high rates of disability [3]. According to the Global Burden of Disease Study (2019), BD is the 12th leading cause of years lived with disability among young adults aged 15 to 24 years [4]. Digital health technologies have been studied in the context of BD and are showing promising results in the early detection of the disorder [5,6] and of depressive or manic episodes among individuals with the disorder [7], as well as the promotion of a better prognosis [8,9]. To understand this progress, we will review 3 promising and innovative areas of work (Figure 1). In this editorial, we will discuss the role of (1) machine learning techniques, (2) digital phenotyping, and (3) mobile health (mHealth) apps to enhance BD care. Additionally, the challenges and future directions for the implementation of digital health technologies in BD will be considered.
Early Detection of BD and Reducing Misdiagnosis: Insights From Machine Learning Studies A potential contributor to the disease burden in BD is the delay in obtaining an accurate diagnosis, which consequently delays the appropriate management and treatment of the disorder. A recent systematic review showed that the median delay in help seeking was 3.5 years, the median delay in diagnosis was 6.7 years, and the median duration of untreated BD was 5.9 years [10]. Another recent study found that the rate of misdiagnosis in BD was 76.8%, and most of those cases received a misdiagnosis of major depressive disorder (MDD) [11]. Despite the similarities in the clinical presentation of a depressive episode in MDD and BD, the treatment strategies recommended for each disorder are different, with antidepressants being the main pharmacological strategy in MDD [12] and mood stabilizers being recommended for BD [13]. Thus, having strategies for the early detection of BD is crucial to reduce misdiagnosis rates and to provide the proper treatment early in the course of the disorder. In this section, we will describe machine learning studies aimed at (1) predicting mood disorder misdiagnosis, (2) predicting BD onset, and (3) differentiating BD from unipolar disorder. Finally, we will discuss the challenges of translating these findings into clinical practice. A scoping review aimed at investigating the use of machine learning techniques for the detection of BD found that the majority of the studies used classification models (eg, random forest), included a sample size of fewer than 300 individuals, and included clinical data in the model [5]. The potential of new machine learning methods to better understand factors associated with misdiagnosis was exemplified in a recent study that reported a misdiagnosis rate of 50.97% [14]. In this study, any mismatch between the self-reported diagnosis and the clinical interview diagnosis was considered a misdiagnosis. The investigators used machine learning techniques to identify the predictors of misdiagnosis, and the mean accuracy of the predictive model was 70% [14]. This study showed that more severe depressive symptoms and unstable self-image were the strongest predictors of mood disorder misdiagnosis among the 1045 variables evaluated [14]. These results may be explained by the fact that patients usually seek treatment when they are severely depressed and that they may underreport hypomanic symptoms during a severe depressive episode. Consequently, they might be misdiagnosed with major depression instead of receiving the correct diagnosis of BD. Another recent study highlights how a correct diagnosis may be made earlier. The clinical predictors of BD were described in a large birth cohort study including 3748 subjects assessed at birth and at 11, 15, 18, and 22 years of age [6]. The study used machine learning techniques and showed that the presence of suicide risk, generalized anxiety disorder, parental physical abuse, and financial problems at 18 years of age were the strongest predictors of a BD diagnosis at 22 years of age, with a balanced accuracy of 75% [6]. Additionally, the high-risk subgroup for BD showed a high frequency of drug use and depressive symptoms [6].
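To make the kind of model described above concrete, the following is a minimal, hypothetical sketch of a random-forest classifier evaluated with balanced accuracy (the metric reported for the birth-cohort predictors). The data are synthetic placeholders rather than any study's dataset, and the pipeline omits the feature selection, imbalance handling, and external validation that a real study would require.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))        # e.g., 300 participants, 20 clinical variables
y = rng.integers(0, 2, size=300)      # synthetic binary outcome (e.g., BD at follow-up)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
model.fit(X_tr, y_tr)
print("Balanced accuracy:", balanced_accuracy_score(y_te, model.predict(X_te)))
```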
Several machine learning studies used digital phenotyping to classify BD and unipolar disorder [15,16]. In one study, daily smartphone-based self-assessments of mood, together with passively collected data on smartphone usage from the same period, were assessed for 6 months [15]. The main findings indicate that patients with BD in a euthymic state had a lower number of incoming phone calls per day compared to patients with unipolar depression also experiencing euthymia. In addition, during depressive states, patients with BD had a lower number of incoming and outgoing phone calls per day compared with patients with unipolar depression [15]. BD was classified with an area under the curve (AUC) of 0.84 (overall; when mood state was not taken into consideration), 0.86 (during a depressive state), and 0.87 (during a euthymic state) in this study. However, when applying the leave-one-out cross-validation approach, the AUC for all models dropped (AUC = 0.48 for the overall model, AUC = 0.42 for the depressive state model, and AUC = 0.46 for the euthymic state model), indicating that changes in combined smartphone-based data were highly individual [15] (a minimal illustration of such subject-wise validation is sketched below). Another digital phenotyping study, using the mindLAMP app to collect geolocation, accelerometer, and screen-state data, reported an AUC of 0.62 for classifying patients with MDD or bipolar I/II disorders [16]. The differing results noted above are common in machine learning research, especially where the underlying data and technology differ between studies. A task force discussing the scientific literature related to machine learning and big data-based studies showed that machine learning studies have included a variety of data to predict BD, including neuroimaging, genetics, electroencephalogram, neurophysiological data, blood biomarkers, text, facial expressions, speech, and ecological momentary assessments [17]. The task force emphasized that some limitations should be addressed to allow these findings to be translated to clinical practice, in particular the lack of external validation of the predictive models [17]. Digital Phenotyping to Detect Mood Symptoms and Mood Episodes in BD The development of digital phenotyping is quickly evolving and expanding in the field of BD. Digital phenotyping involves collecting data (eg, location, activity, sleep, speech patterns), typically from smartphones, to monitor behavior, cognition, and mood [18]. Digital phenotyping may help facilitate the early detection of potentially problematic mood changes, therefore facilitating early intervention. Before digital phenotyping can be applied in usual care for BD, we must develop an understanding of which of the multitude of digital data collected by smartphones and wearable sensors can reliably and validly detect early warning signs of mood episodes. Importantly, while several studies have shown that digital phenotyping is a promising technique, it faces several challenges that need to be robustly addressed [19], which will be discussed in this section.
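Picking up the leave-one-out finding above: the drop in AUC is what subject-wise (leave-one-patient-out) validation is designed to expose, because the model must generalize to people it has never seen rather than to new days from the same people. The sketch below is illustrative only, using synthetic placeholder features and a generic logistic regression rather than any study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_subjects, n_days = 20, 30
groups = np.repeat(np.arange(n_subjects), n_days)   # one group per patient
X = rng.normal(size=(n_subjects * n_days, 5))        # e.g., daily call/usage features
y = rng.integers(0, 2, size=n_subjects * n_days)     # synthetic daily labels

# Hold out one subject at a time, then pool predictions before scoring.
probs, labels = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    probs.extend(clf.predict_proba(X[test_idx])[:, 1])
    labels.extend(y[test_idx])
print(f"Pooled leave-one-subject-out AUC: {roc_auc_score(labels, probs):.2f}")
```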
A systematic review describing the evidence on the use of portable digital tools for detecting BD, mood states, and mood symptoms found 62 studies assessing such tools in terms of four main areas: (1) smartphone apps designed to collect active (eg, mood self-assessments) or passive (eg, recording geolocation, step counts, call and text logs, sleep, etc) data; (2) wearable sensors for the monitoring of electrocardiography and actigraphy; (3) audio-visual recordings for the analysis of speech or facial expressions and upper body movements; and (4) multimodal tools, combining 2 or more of the above [7]. Two-thirds of the included studies applied machine learning approaches to classify BD versus healthy controls, to identify mood states, or to predict the severity of symptoms. They achieved mixed results, yielding fair to excellent classification performances, with accuracy globally ranging from 60% to 97% [7]. A recent review assessing the application of digital tools for major depressive episodes described the following digital phenotype for BD: (1) speech alterations during a depressive episode, including decreased speech pause and reduced fundamental frequency, while these speech features were increased during a hypomanic episode; (2) irrespective of the mood state, heart rate variability was reduced, but the change in heart rate variability in the interepisodic phases remained unclear; and (3) an electrodermal hypoactivity in a depressive episode was reported, which increased when patients were euthymic [20]. Regarding the challenges related to digital phenotyping, it is important to note that any data collected using digital devices are prone to bias and need to be standardized to ensure accuracy not only across populations but also across different devices. Moreover, concerns relating to privacy, ethics, data security, and consent must be addressed. User comfort in sharing data differs depending on the data type (eg, users are more comfortable sharing health data than personal data such as location, communication logs, and social activity) and the recipient (eg, users have greater comfort sharing data directly with clinicians than having these entered into their electronic health record), and this may impact willingness to use digital phenotyping platforms [21]. As user engagement is essential for the success of any digital phenotyping tool [22], it is necessary to account for discrepancies in access, equity, and distribution of resources. Finally, more in-depth longitudinal studies are required to ascertain the relationship between biomarkers and long-term outcomes of health and well-being.
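To make the active-versus-passive distinction in the review above more tangible, here is a minimal, hypothetical sketch of how a passively logged event stream (a call log in this example) might be aggregated into daily features of the kind these studies analyze. The column names and values are invented for illustration; real sensing platforms differ.

```python
import pandas as pd

# Hypothetical raw call log: one row per event, as a passive-sensing stream might provide.
calls = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 09:10", "2024-01-01 17:45",
                                 "2024-01-02 11:02", "2024-01-02 11:30",
                                 "2024-01-03 20:15"]),
    "direction": ["incoming", "outgoing", "incoming", "incoming", "outgoing"],
    "duration_s": [120, 340, 60, 15, 200],
})

# Daily features: counts of incoming/outgoing calls and total talk time per day.
daily = (calls
         .assign(date=calls["timestamp"].dt.date)
         .groupby(["date", "direction"])
         .agg(n_calls=("duration_s", "size"), talk_time_s=("duration_s", "sum"))
         .unstack(fill_value=0))
print(daily)
```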
Smartphone-Based Interventions for BD Psychosocial therapies and education in self-management strategies can improve outcomes in BD [23] and are recommended complements to pharmacological interventions in guidelines for BD treatment. However, access to these forms of care remains suboptimal, with less than 50% of individuals in treatment for BD receiving therapy with a psychologist, social worker, or self-help support group [24]. Smartphone apps have the potential to provide psychoeducation and facilitate several of the core components of psychosocial therapies (eg, self-monitoring, detecting and responding to mood episodes, stabilizing daily routines, improving emotion regulation, encouraging medication adherence, etc) [25]. Encouragingly, individuals with BD report high levels of access to smartphones and a willingness to receive psychosocial interventions via apps [26,27]. Several app-facilitated interventions have been developed and evaluated for BD, variously integrating self-monitoring, psychoeducation, cognitive-behavior therapy, and skills training, and targeting both symptoms and patient-valued outcomes such as functioning and quality of life [9]. However, evidence for their feasibility and efficacy is still preliminary, and interventions are yet to fully leverage the capabilities of apps for intervention personalization. Two recent systematic reviews and meta-analyses investigated the role of smartphone-based interventions to improve clinical outcomes in BD and found conflicting results [8,9]. Liu et al [8] included 10 studies in their systematic review (7 randomized controlled trials and 3 single-arm trials) and concluded that smartphone-based interventions were effective in reducing manic and depressive symptoms, both in between-group comparisons (against controls) and in within-group comparisons (symptoms from baseline to postintervention in the intervention group). Anmella et al [9] included 13 studies in their qualitative synthesis of the findings and 5 studies in their meta-analysis. The meta-analyses comparing the pre-post change in depressive and (hypo)manic symptom severity, functioning, quality of life, and perceived stress between smartphone interventions and controls did not reach statistical significance for any outcome assessed [9]. A potential explanation for the conflicting findings is that the eligibility criteria differed between the two reviews. The most important difference is that Liu et al [8] included not only smartphone-based apps but also phone calls from specialists to facilitate therapy and website interventions in the intervention group, whereas Anmella et al [9] excluded interventions not delivered through smartphones (eg, interventions delivered exclusively by phone calls, phone messaging, SMS text messaging, or computer) from the intervention group. Another difference is that Liu et al [8] did not restrict the inclusion criteria to individuals with BD and included a few studies that recruited a more heterogeneous population (eg, serious mental illness, mood disorders), while Anmella et al [9] only included studies where the participants were diagnosed with BD.
Given the heterogeneity of BD both between and within individuals, effective psychotherapy involves appropriately tailoring intervention content and delivery to the challenges and goals of a specific individual at a specific time. However, apps are yet to fully capitalize on the potential of smartphones to personalize intervention delivery in response to changes in clinical state. One app program, SIMPLe, personalizes content using ecological momentary assessment to identify potential prodromal mood changes and adapts the delivery of psychoeducation messages in response [28,29]. To advance our understanding of how to tailor just-in-time adaptive interventions for BD, microrandomized trials can be used to evaluate the immediate impact of diverse types of intervention prompts. For example, an evaluation of mobile acceptance and commitment therapy used this trial design to evaluate different categories of intervention and found that awareness-focused prompts paradoxically increased symptoms [30]. Beyond the clinical utility of personalization, this feature is also highly prioritized by individuals with BD themselves [31], who have expressed a desire for apps that make meaningful use of their data to customize intervention delivery and facilitate proactive support. Improving the Dissemination and Uptake of Apps for BD Although research-led studies have developed and evaluated mobile apps for BD, the dissemination and uptake of these apps in real-world contexts must be considered to reach the target population and maximize their impact. A recent web-based survey investigating the use of mobile apps to support mood and sleep self-management among individuals with BD found that 41.6% of participants reported using a self-management app related to mood and/or sleep [32]. The most nominated app for mood monitoring was Daylio, and the most reported app for sleep monitoring was Fitbit. Since these apps are designed to support the public with well-being concerns, this raises questions about why apps specifically designed for BD are not reaching this population. Two possibilities emerge: (1) apps designed for BD are not sufficiently acceptable or engaging in the eyes of the target audience, and (2) individuals with BD may not be adequately supported to select the app that is best suited to their needs. To facilitate research-led apps reaching and impacting users with BD, we must consider their ability to create and sustain user engagement. Further, we must consider effective dissemination pathways, targeting both patients and the health care providers involved in the provision of care to people with BD (eg, clinicians, nurses, allied health professionals, and case managers).
Poor engagement is endemic to mental health apps in general, extending beyond just those aimed at BD [33], with most users of publicly available apps disengaging within 30 days. Specific to BD, a systematic review showed that adherence data in research trials were infrequently reported; of the 13 studies providing engagement data, the activity rate ranged from 58% to 91% [34]. The failure to consider the needs and goals of the targeted population likely contributes to startlingly poor levels of uptake and adherence. Involving users in the design of apps can help ensure their design, content, and feature selection are relevant, acceptable, and engaging. However, a recent review investigated the level of user involvement in the design of self-monitoring apps for BD [35] and found that 36% of the apps did not mention user involvement in the design, while 9% reported low, 36% reported medium, and 18% reported high user involvement. This review highlights the importance of including an appropriate sample size capable of adequately capturing users' needs so that technology can be better designed. Finally, it is recommended that users are involved early in the design process, and their involvement should not be limited solely to design but should extend to all aspects of the research, ensuring end-to-end involvement. Case studies of apps using a co-design framework include the quality of life-focused LiveWell and PolarUs apps, both of which consulted people with BD throughout development [36,37]. Figure 2 depicts 2 screens from the PolarUs app: on the home screen (left image), users are prompted to engage in quality of life, sleep, and mood self-monitoring, and are provided with relevant resources [37]. Users can review their self-monitoring data over time (right image). Individuals with BD provided input into the app design (including icons, color scheme, and layout), navigation, features, and content. Looking ahead, as more apps for BD are developed and made available to the public, patients with BD and health care providers will likely require support to navigate the digital health landscape, as research-led apps will compete for attention with commercial offerings that may have limitations in their privacy protections and efficacy [38]. Educational interventions to enhance digital health literacy may help individuals with BD to select the appropriate apps for their self-management goals. While, in general, levels of digital health literacy among people with BD are comparable to those of the general population, a study found that individuals with BD who are younger, have completed less education, or are less familiar with mental health apps may require extra support to safely and productively navigate web-based health resources [39]. Recent steps have been taken to address the needs of these groups: a brief informational video describing strategies to select safe, effective, and engaging mental health apps for BD was created [40], incorporating the perspectives of people with lived experience in the script and design. A still image from this video is presented in Figure 3 [40]. This resource was later expanded upon to create a web-based module [41], depicted in Figure 4, which contains additional information and resources to support people in evaluating app privacy policies, inclusion of evidence-based strategies for BD, and motivational techniques. Other resources, like mindapps.org, can help facilitate informed decision-making about mental health apps [38].
Health care providers are an important source of information and advice on smartphone apps, yet a web survey found that only 48.8% of health care providers reported discussing or recommending health apps to patients with BD [42]. Most of the apps recommended were related to core symptoms of BD, including mood and sleep. Among the health care providers who did not discuss health apps with patients with BD (51.2%), the predominant reason mentioned was the lack of familiarity with credible and suitable apps tailored for BD. The resources discussed above are also appropriate for use by clinicians wishing to learn more about appropriate and effective apps for BD [38,41]. These findings emphasize the importance of providing training aimed at increasing clinician self-efficacy in using mobile apps with patients, a strategy that should be considered by researchers developing new mHealth tools. Conclusion The evidence available to date indicates that digital technologies may help in the early detection of BD and mood episodes, as well as in enhancing treatment, improving health outcomes, and consequently promoting a better prognosis for individuals with BD. However, there are important limitations that need to be addressed before these technologies can be translated to clinical practice, including the following: (1) external validation of the machine learning models developed to date, (2) the need for well-designed prospective cohort studies to validate findings about digital phenotyping and the early detection of BD and mood symptoms, (3) involvement of individuals with lived experience in the development of mobile apps, and (4) dissemination of the available technology among health care providers and directly to people with BD. Finally, adequately powered randomized controlled trials are still needed to evaluate the efficacy of mental health apps for BD. Additionally, there is a need to advance our understanding of how to tailor app-based interventions based on the valuable insights generated by digital phenotyping. Figure 1. Digital tools for BD: key developments and future directions. BD: bipolar disorder; ML: machine learning. Figure 3. Choosing a bipolar disorder app that works for you (reproduced from [40], with permission from Erin Michalak). Figure 4. Additional information and resources to support people in evaluating app privacy policies, inclusion of evidence-based strategies for BD, and motivational techniques (reproduced from [41], with permission from Erin Michalak). BD: bipolar disorder.
4,708.2
2024-03-20T00:00:00.000
[ "Medicine", "Psychology", "Computer Science" ]
Ruminal Microbiome Manipulation to Improve Fermentation Efficiency in Ruminants The rumen is an integrated, dynamic microbial ecosystem composed of enormous populations of bacteria, protozoa, fungi, archaea, and bacteriophages. These microbes ferment the feed organic matter consumed by ruminants to produce beneficial products such as microbial biomass and short-chain fatty acids, which form the major metabolic fuels for ruminants. The fermentation process also involves end products that are inefficient for both the host animal and the environment, such as ammonia, methane, and carbon dioxide. Under typical conditions of ruminal fermentation, the microbiota does not produce an optimal mixture of enzymes to maximize plant cell wall degradation or synthesize maximum microbial protein. A well-functioning rumen can be achieved through microbial manipulation, that is, by altering the composition of the rumen microbiome to enhance specific beneficial fermentation pathways while minimizing or altering inefficient ones. Manipulating ruminal fermentation is therefore useful to improve feed conversion efficiency, animal productivity, and product quality. Understanding rumen microbial diversity and dynamics is crucial to maximize animal production efficiency and mitigate the emission of greenhouse gases from ruminants. This chapter discusses genetic and nongenetic rumen manipulation methods to achieve better rumen microbial fermentation, including improvement of fibrolytic activity, inhibition of methanogenesis, prevention of acidosis, and balancing rumen ammonia concentration for optimal microbial protein synthesis. Introduction The rumen harbors several microbial populations, that is, bacteria, protozoa, fungi, bacteriophages, yeasts, and methanogens, living symbiotically. These populations are very dynamic, plastic, and redundant in function as diets change, although a core microbiota persists, which has probably evolved through host-microbiota interaction under evolutionary pressure over thousands of years [1]. A symbiotic relationship exists between rumen microbes and the host animal in which both provide desirable substrates to each other, mainly in three ways: 1) physical breakdown of feed particles by mastication and rumination expands their surface area for microbial attachment and degradation, and the microbes in turn secrete various enzymes for dietary substrate degradation; 2) ruminal movements bring microbes into contact with the dietary substrate by mixing the digesta, producing fermentation products (e.g., H2, CO2, ammonia, and short-chain fatty acids (SCFAs)); and 3) utilization (absorption and consumption) of the fermentation products keeps ruminal conditions (e.g., pH) optimal for microbial growth and microbial protein synthesis [2]. Therefore, because of the interactive nature of the rumen ecosystem, any modification to one component of this system has several effects on other components. The fermentation end products of any diet are incorporated into the final animal products (meat or milk). Thus, manipulation of the ruminal fermentation pathways is the most effective approach to improve ruminant health and production efficiency without exaggerated increases in nutrient supply. This should particularly help smallholder livestock keepers in developing countries to sustain production.
The literature has explored various manipulation strategies, including enhancing or inhibiting the growth or metabolic activity of specific rumen microbiota (e.g., archaea for methanogenesis) and/or shifting ruminal fermentation toward specific pathways (e.g., decreasing H2 production and increasing short-chain fatty acid production) [3,4]. Extensive literature supports the supplementation of various rumen modifiers; however, efforts are still underway to find appropriate methods that simultaneously improve livestock production and reduce greenhouse effects on the environment. The most common methodologies for modifying the ruminal microbiome and fermentation characteristics are discussed in this chapter under the following aspects. Enhancing fibrolytic activity and short-chain fatty acid production Lignocellulose (complex polymers of cellulose, hemicellulose, pectin, and lignin) makes up the majority of the ruminant diet. Generally, forages, including crop residues, provide the main source of nutrition to ruminants and contribute to the food security and primary income source of smallholder farmers in developing countries [5-7]. This is also true where grazing animals are common in developed countries; indeed, forage is virtually the only source of nutrition in the main beef-producing regions of northern Australia and North and South America [8]. Although ruminants can digest fibrous feedstuffs, dietary cell wall polysaccharides are rarely completely degraded in the rumen. Less than 50% of the plant cell wall of most forage grasses is digested and utilized. This is attributed to the combination of biochemical and physical barriers present in the ingested fibrous feedstuffs and to limits on the retention time of ingested dietary substances in the rumen [9], resulting in excessive nutrient excretion, low nutrient intake, and a significant loss of dietary energy in the form of CH4 emission [10]. Therefore, enhancing the ability of the rumen microbiota to degrade plant cell walls usually improves animal productivity. Ruminants cannot degrade lignocellulose themselves. An involved community of fibrolytic microorganisms catalyzes the degradation of plant cell walls in the rumen. The major classical fibrolytic bacteria involved in fiber degradation are Fibrobacter succinogenes, Ruminococcus albus, Ruminococcus flavefaciens, Butyrivibrio, and Prevotella spp. [11]. Anaerobic fungi also contribute to degrading cell wall components and play a special role in degrading low-quality forages. Fungi are able to penetrate plant tissue as a result of their filamentous growth and can degrade up to 34% of the lignin in plant tissues [12]. Fungi (i.e., Neocallimastix sp.) have a broad range of highly active fibrolytic enzymes and are the only known rumen microorganisms with exo-acting cellulase activity [11]. Cellulolytic activity is present in many rumen protozoa species, and the most efficient cellulose degraders are Epidinium ecaudatum, Eudiplodinium maggii, and Ostracodinium dilobum [13]. There are various well-established procedures that can be used to improve forage utilization, including modifying ruminal microbial fermentation toward more fiber degradation. These include mechanical and chemical processing of forages and genetic engineering of plants for cell wall composition. However, we will focus on ruminal fibrolytic microorganisms and their products in the following sections of the chapter.
Genetically engineered fiber-degrading bacteria The manipulation of genes in genetically engineered organisms can produce a product with novel specific characteristics that may have significant value. This concept was exploited in developing genetically modified fiber-degrading bacteria, with the aim of optimizing their activity by producing the correct mixture of fibrolytic enzymes to maximize plant cell wall degradation. Ruminococcus and Fibrobacter strains were the most targeted fiber-degrading bacteria for genetic modification because they cannot produce exocellulases that are active against crystalline cellulose; therefore, altering this activity would make them more potent [11]. The genome sequences of F. succinogenes, R. albus, and Prevotella ruminicola strains are available [11]. As early as 1995, Miyagi et al. [14] suggested that inoculation of genetically marked R. albus into the goat rumen might benefit rumen function, but they found that the inoculant usually disappears from the goat rumen after 14 days. One of the reasons for this is that bacteria reproduce within the physiological and ecological limits of the rumen ecosystem, in which cooperative networks exist among ruminal microorganisms: some organisms cleave specific bonds, others utilize particular substrates, while others produce inhibitors [11]. Scientists' attention then turned to Butyrivibrio species because they are among the rumen bacteria most capable of hemicellulose degradation and are regarded as ecologically robust [15]. Gobius et al. [16] reported the successful transformation of a diverse range of eight strains of Bu. fibrisolvens with a xylanase (family 10 glycosyl hydrolases) from the rumen fungus Neocallimastix patriciarum. Glycosyl hydrolase family 10 was selected because it is different from family 11, which typically exists in Bu. fibrisolvens, and because this family is characterized by high specific activity and resistance to proteolysis. The transformation was functionally successful, and in vitro fiber digestibility measurements revealed an improvement in plant fiber degradation by the recombinant xylanase; however, this still does not allow these strains to compete with the far more fibrolytic species Fibrobacter and Ruminococcus [11]. Another genetically engineered bacterium, Bacteroides thetaiotaomicron, was inoculated at approximately 1% of the total population into in vitro dual-flow continuous culture fermenters and persisted for at least 144 h with relative abundances of 0.48-1.42%, increasing fiber digestion, particularly of the hemicellulose fraction [17]. Generally, most of the experiments that used modified fibrolytic bacteria were in vitro trials. However, it should be taken into consideration that the in vitro fermenters did not express the full complement of rumen microorganisms (particularly protozoa). Moreover, this microbial manipulation approach seems to be costly, especially for small livestock holders in developing countries.
Direct-fed microbials The concept of direct-fed microbials (DFM) is different from the term probiotics. Probiotics were defined as any live microbial feed additive that may beneficially influence the host animal upon ingestion by improving the microbial balance in the intestine [18]. Viable microbial communities, enzyme preparations, culture extracts, or combinations of those products were included in the concept of probiotic supplements [19]. DFM has a narrower definition than probiotics, as it refers to a source of live, naturally occurring microorganisms that improve the digestive function of livestock. DFM include three main categories: bacterial, fungal, and a combination of both [20]. DFM must be alive to impact ruminal fermentation; thus, the viability and number of organisms fed must be ensured at the time of feeding. Lactic acid-producing and lactic acid-utilizing bacterial species of Lactobacillus, Bifidobacterium, Streptococcus, Bacillus, Enterococcus, Propionibacterium, Megasphaera elsdenii, and Prevotella bryantii, and yeasts such as Saccharomyces and Aspergillus, are the principal microbes in most DFM for livestock production [21]. DFM can grow under ruminal conditions and manipulate the microbial ecosystem. Various factors may affect the activity of DFM, including the microbial strains, time of feeding, feeding system, treatment period, physiological conditions, and dosage [20,22]. The microbial strain seems to be the main influence: DFM containing mainly lactic acid-producing and lactic acid-utilizing bacteria, for example M. elsdenii, can promote the growth of microorganisms adapted to lactic acid in the rumen while preventing drastic pH drops [19]. DFM of Propionibacterium species can shift the fermentation pathways toward a greater molar proportion of propionate [20,23]. Propionibacterium is naturally found in high numbers in the rumen ecosystem and is known to ferment lactate to propionate, providing more substrate for lactose synthesis in early-lactation dairy cows and improving energy efficiency for growing ruminants by reducing methane emission [20,23]. Direct-fed microbials based on fungal cultures mainly contain Saccharomyces cerevisiae and Aspergillus oryzae, which can remove oxygen from the surfaces of freshly ingested feed particles to maintain the ruminal anaerobic conditions required for the growth of cellulolytic bacteria [22,24]. Moreover, the end metabolites of yeasts in the rumen can provide the ruminal microbiota with growth factors (i.e., rumen acetogens, digestive enzymes, anti-bacterial compounds, organic acids, and vitamins), resulting in stimulation of ruminal cellulolytic bacteria and maintenance of a pH suitable for optimal fiber degradation, and consequently greater production performance [21,22]. Due to the low cost of DFM compared to other commercial feed additives, they can be counted among the suitable solutions for manipulating ruminal fiber degradation in smallholder livestock sectors.
Exogenous fibrolytic enzymes Products of exogenous fibrolytic enzymes (EFE) that contain primarily cellulolytic and xylanolytic activities can manipulate ruminal fiber degradation and improve feed conversion efficiency, thus leading to enhanced productive efficiency of ruminants [9]. The published literature suggests that the mode of action of EFE products is likely different from that of DFM products. The activities introduced to the rumen by EFE are not novel to the ruminal ecosystem, as they act upon the same sites of the feed substrate particles as endogenous fibrolytic enzymes [25]. The release of reducing sugars by EFE is probably an essential mechanism by which EFE operate [26]. The degree of sugar release depends on the substrate type as well as the type of enzyme. The released sugars can attract secondary ruminal microbial colonization, or remove barriers to microbial attachment to substrate feed particles by cleaving the linkage between phenolic compounds and polysaccharides [9]. As a result, the most significant effects of EFE probably occur in the interval between the arrival of feed particles in the rumen and their colonization by ruminal microorganisms, as only the rate, not the extent, of cell wall degradation has been improved [25]. EFE can also manipulate the rumen fibrolytic microorganisms by enhancing their endogenous fibrolytic activities. Genes from ruminal fungi encoding cellulases, xylanases, mannanases, and endoglucanases have been successfully isolated. Protein bioengineering has been employed to improve the catalytic activity and substrate diversity of fibrolytic enzymes from ruminants. This has resulted in fibrolytic enzymes with up to 10 times higher specific activity, improved pH and temperature optima, and enhanced fiber-substrate binding activity compared with the original enzymes [27]. This, together with the low manufacturing cost, has led to more recent developments in the enzyme production industry, and as a result, a wide range of commercial EFE products is now available. Frequently, the manufacturers' recommended doses of most commercial EFE products have been determined under wide ranges of pH (4.2-6.5) and temperature (40-57°C), which are not always close to typical ruminal conditions. Moreover, most commercial EFE products for ruminants are often referred to as xylanases or cellulases. However, none of these products comprise single enzymes; secondary enzyme activities are invariably present, namely proteases, amylases, or pectinases [9]. A wide variety of feed substrates can be targeted by a single EFE product. Thus, the random addition of these products to ruminant diets, without consideration of specific rumen conditions (pH 6.0-6.5 and 39°C) and without testing their efficiency for a specific substrate, will produce unpredictable effects and thus discourage the adoption of EFE technology [28,29]. In general, enhancing the rumen microbiota to degrade dietary fiber through the strategies discussed above may accelerate energy production in the form of short-chain fatty acids (SCFAs) and/or microbial protein synthesis. At the same time, it may also produce high amounts of CO2 and CH4.
Decreasing methanogenesis and increasing propionate production Ruminal fermentation is the primary source of CH4 emission from livestock; CH4 is one of the most potent greenhouse gases, characterized by a short atmospheric mean lifetime. Furthermore, a significant proportion of the ingested feed energy is lost as CH4 [40]. Methane is produced by methanogens mainly by reduction of CO2 through the hydrogenotrophic pathway. Formic acid and methylamines produced by other ruminal bacteria are also reduced to CH4 by some methanogens. Therefore, methanogens interact with other ruminal microorganisms (e.g., protozoa, bacteria, and fungi) through interspecies H2 transfer [4]. Thus, maximizing metabolic H2 flow away from CH4 and toward SCFA production could improve production efficiency in ruminants and decrease environmental impact. There are various direct and indirect strategies to manipulate rumen methanogenesis; among these options, inhibiting the growth or the metabolic activity of methanogens seems to be the most effective approach. The efficiency of these strategies depends mainly on where the methanogens reside, as can be seen from the smaller number of archaeal 16S rRNA gene sequences recovered from protozoa than from ruminal content or fluid (461 vs. 8162) [4]. Free methanogens are mainly integrated into the biofilm on the surfaces of feed particles, where H2-producing bacteria actively produce H2. These methanogens, protected by the biofilm, may not be inhibited by anti-methanogenic inhibitors to the same extent as their free-living peers [4]. Methanogens can also be inhibited indirectly through inhibiting rumen ciliate protozoa. Based on fluorescence in situ hybridization analysis, about 16% of rumen ciliate protozoa contain methanogens inside their cells [30]. Most rumen ciliate protozoa have hydrogenosomes, unique membrane-bound organelles producing H2 by malate oxidation; therefore, these organelles can attract some species of methanogens as endosymbionts [4]. Methane formation comprises three main steps: transfer of the methyl group to coenzyme M (CoM-SH), reduction of methyl-coenzyme M with coenzyme B (CoB-SH), and regeneration of the heterodisulfide CoM-S-S-CoB [4,31]. Thus, obstruction of any of these steps may reduce CH4 production. A wealth of literature on rumen CH4 manipulation strategies in ruminants has been published recently, but relatively few studies have emphasized mitigation strategies suitable at the farm level [32]. Each method has some potential advantages and limitations. The principal interest of animal producers is income, as they usually do not take CH4 mitigation strategies or climate change into account. Thus, any strategy to mitigate greenhouse gas emission will only be of practical interest if gains in the efficiency of animal production can be obtained. This can be achieved through rumen CH4 modifiers that enhance the production of SCFAs and/or reduce proteolysis. The following part addresses some of these microbial modifiers.
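Before turning to the individual modifiers, the hydrogen bookkeeping behind the argument above can be sketched with the classical stoichiometric approximation for ruminal hexose fermentation, in which acetate and butyrate formation release metabolic H2, propionate formation consumes it, and roughly 4 mol of H2 are required per mol of CH4. The SCFA molar proportions below are illustrative assumptions, not measured values, so the output is only a qualitative indication of why a shift toward propionate lowers the CH4 potential.

```python
def methane_potential(acetate, propionate, butyrate):
    """Mol CH4 per mol of SCFA mixture, from net H2 = 2*Ac - 1*Pr + 2*Bu and CH4 = H2/4
    (classical stoichiometric approximation; an idealized bookkeeping, not a prediction)."""
    net_h2 = 2.0 * acetate - 1.0 * propionate + 2.0 * butyrate
    return max(net_h2, 0.0) / 4.0

base = methane_potential(0.65, 0.20, 0.15)     # forage-type fermentation pattern (assumed)
shifted = methane_potential(0.55, 0.35, 0.10)  # pattern shifted toward propionate (assumed)
print(f"Relative CH4 reduction from the propionate shift: {(1 - shifted / base):.0%}")
```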
Ionophores Ionophores are polyether antibiotics that act as inhibitors of hydrogen-producing bacteria. They are widely used as successful growth promoters in the livestock industry due to their ability to modulate rumen fermentation toward propionate production, thereby decreasing CH4 production. Since propionate and CH4 are both terminal acceptors for metabolic H2, any increase in propionate production may be accompanied by reduced CH4. In addition, ionophores positively affect ruminal fermentation through inhibition of deamination relative to proteolysis and inhibition of the hydrolysis of triglycerides and of the biohydrogenation of unsaturated fatty acids, while enhancing the trans-octadecenoic isomers (cited from [33]). From the literature, monensin and lasalocid are the most well-known ionophore-type antimicrobials used as rumen modifiers. They mainly inhibit Gram-positive bacteria; however, they can also inhibit some Gram-negative bacteria. Ionophores decrease CH4 production by inhibiting H2-producing bacteria through penetration of the bacterial cell membrane. They act as H+/Na+ and H+/K+ antiporters, dissipating the ion gradients required for the synthesis of ATP, transport of nutrients, and other essential cellular activities in bacteria, resulting in retardation of cell growth and cell death [4,34]. Monensin can decrease the total number of methanogens in cattle and can also alter the community composition of methanogen species; for example, monensin decreased the population of Methanomicrobium spp. while increasing that of Methanobrevibacter spp. [4]. Unfortunately, ionophores have only a temporary impact on ruminal fermentation due to the adaptation of the microorganisms to these inhibitors. Ionophores are now restricted because of the possible resistance of pathogenic microorganisms to antibiotics [33]. Recently, the global scenario has shifted interest toward plant-based alternatives [35,36]. Moreover, the type of dietary feed affects the efficiency of ionophores, with the better effect observed in high-starch diets [33]. Thus, this approach seems to be less effective for small livestock holders in most developing countries, since forages are the main ingredient in the diets. Plant secondary compounds Numerous plant secondary compounds (PSC), including tannins, flavonoids, saponins, essential oils (EOs), and organosulfur compounds, have been recognized as having the potential to modulate ruminal microbial fermentation [37-39]. Plant secondary compounds are natural phytochemicals with the potential ability to manipulate rumen fermentation without causing microbial resistance or leaving noxious residues in animal products [3]. Unlike ionophores, the different active components found in plant extracts may manipulate the ruminal microbiota through more diverse mechanisms of action (e.g., antimicrobial and antioxidant), which may avoid the risk of losing activity over time [40]. Tannins Tannins are polyphenolic compounds with molecular weights ranging from 500 to 5000 Da [41]. Tannins are classified into two major groups, that is, condensed (CT) and hydrolyzable tannins (HT). CT are proanthocyanidins consisting of oligomers or polymers of flavan-3-ol subunits. They act through binding with dietary proteins and carbohydrates, forming strong complexes at ruminal pH [41-43]. Therefore, they are the plant secondary metabolites most studied in terms of rumen modulation pathways.
The literature reports quite variable effects of CT supplementation with regard to CH4 mitigation [38]. Some studies suggest a direct effect of CT on methanogens, by binding with the proteinaceous adhesin or parts of the cell envelope, which impairs the establishment of the methanogen-protozoa complex, decreases interspecies H2 transfer, and inhibits growth [44]. Other studies suggest an indirect effect of CT through an anti-protozoal action. However, the reported effects of CT on rumen protozoal activity vary, probably because some CTs have a direct effect on rumen methanogenic archaea that are not associated with protozoa. Tannins can also indirectly reduce CH4 per unit of animal product through tannin-protein or tannin-organic matter complexes formed under ruminal conditions; the protein from these complexes is released post-ruminally, making it available for gastric digestion under abomasal and small intestinal conditions, thereby enhancing animal productivity [43]. Another theory is that tannins can act as an H2 sink, reducing the availability of H2 for the reduction of CO2 to CH4, implying that 1.2 mol CH4 is produced per mol of catechin [44]. Tree foliages are good feed resources for small ruminants; they are rich in protein and perform catalytic functions in improving ruminal fermentation, especially in low-quality forage-based diets in developing countries [45]. Nutritionists have paid great attention to tanniferous legumes and tree foliages as alternative cheap feed resources (especially in drought conditions and in arid and semi-arid regions) and as a means to achieve CH4 mitigation goals in developing countries [46]. Many plants have been investigated in the literature; however, the results are highly variable among studies. Soltan et al. [43] studied various tanniniferous browse species and found that some plants (i.e., Prosopis and Leucaena) modulate ruminal fermentation much as ionophores do, by decreasing the acetate to propionate ratio, CH4, and NH3-N, while Acacia reduced CH4 through decreasing fiber degradation, although it had a CT concentration similar to that of Leucaena. Thus, it seems that not only tannin concentration plays a role in the modulation of ruminal fermentation; tannin type and molecular weight are also important in determining tannin potency in modulating rumen fermentation patterns. The presence of HT and other plant secondary metabolites (mimosine in Leucaena) together with CT can also interact with the action of CT [44,47]. Saponins Saponins are a group of plant secondary metabolites consisting of high molecular weight glycosides in which a sugar is linked to a hydrophobic aglycone. They can be generally classified as steroidal or triterpenoid [48,49]. The effects of saponins on rumen fermentation modulation have been reviewed extensively [49]. The main biological effect of saponins is on the cell membranes of bacteria and protozoa. Saponins are highly toxic to protozoa compared with bacteria because saponins can form complexes with sterols present on the protozoal membrane surface, disrupting membrane function [49]. Thus, they can indirectly affect the methanogenic archaea through their symbiotic relationship with rumen protozoa [38]. However, some literature assumes that the effects of saponins on rumen protozoa could be transient due to the ability of ruminal bacteria to degrade saponins into sapogenins; the sapogenin compounds do not affect protozoa [50].
Essential oils Essential oils (EO) are volatile aromatic complexes obtained from different plant volatile fractions by steam distillation. They can be obtained from various plant parts, including leaf, stem, fruit, root, seed, flower, bark, and petal. EO contain numerous bioactive substances, the most important of which are terpenoids (monoterpenoids and sesquiterpenoids) and phenylpropanoids. Due to the lipophilic properties of these components, EO act against various rumen bacteria by interacting with the cell membrane [3]. Several EO compounds, either in pure form or in mixtures, have antioxidant and anti-bacterial properties; therefore, they can modulate ruminal fermentation pathways [51]. Unlike ionophores, EO do not alter ruminal microbial activities through a single specific mode of action. Therefore, EO may have more diverse mechanisms of action and may be less likely to lose their effectiveness over time. Soltan et al. [40] suggested two mechanisms to explain how the combination of phenylpropane and terpene hydrocarbon components in EO mixtures works together to enhance additive antimicrobial activity: 1) phenolic compounds may increase cell membrane permeability through the action of the hydroxyl group, thus facilitating the transport of terpene hydrocarbons into the microbial cells, where they combine with proteins and enzymes; 2) phenolic compounds could increase the size, number, or duration of existence of the pores created by the binding of terpene hydrocarbons to proteins in cell membranes. The effects of EO on rumen fermentation are variable, depending on concentration, type, diet, and adaptation period, but most EO are found to have anti-methanogenic properties [35,52]. Patra and Yu [52] studied various EO with different chemical structures (clove, eucalyptus, origanum, peppermint, and garlic oil) in vitro at three different concentrations (0.25, 0.50, and 1.0 g/L) for their effect on CH4 production and archaeal abundance and diversity, and they found that all of these EO suppressed CH4 production, but the extent of CH4 inhibition and the effects on ruminal fermentation differed among the EO. Further studies are needed to understand the interactions of the active compounds with the dietary ingredients; their activity against specific methanogens should be identified without adverse effects on fermentation patterns and rumen fiber degradability, and the appropriate doses for each EO need to be established. Attention also needs to be paid to palatability, as some EO may adversely affect palatability and dry matter intake due to the aroma they add to the ration. Therefore, many encapsulated EO products are available in commercial forms, but this raises the question of the suitability of these products as feed additives at the farm level in developing countries.
Propolis Propolis is a mixture of resinous substances collected by honeybees from the buds of deciduous trees and from crevices in the bark of coniferous and deciduous trees, combined with bee secretions [53,54]. The bees use propolis to fill cracks, cover hive walls, and embalm invading intruder insects or small animals [55,56]. The literature reports that the chemical composition of propolis is highly variable by bee collection site, since geographical location plays an important role [54]. The most bioactive components belong to the groups of isoflavones, flavonoids, and fatty acids, which have been reported to be biologically active [53]. Recently, bee propolis has been recognized as a natural alternative feed additive to antibiotics in ruminant diets [54]. Compared to ionophores (e.g., monensin), different propolis sources can reduce CH4 production while improving organic matter digestibility and total SCFAs in vitro and in vivo [53,57]. Morsy et al. [58] reported that the CH4 reduction caused by propolis supplementation is accompanied by increases in urinary allantoin and total purine derivatives and enhancement of individual and total SCFAs. Thus, they suggested that propolis can help redirect ruminal organic matter degradation from CH4 production toward microbial synthesis and SCFAs. From a practical point of view, propolis can be a promising feed additive in regions where it is produced in large amounts, such as Brazil. Plant oils Fats are usually used as energy sources for dairy cattle. The addition of fats is a promising approach for modulating rumen microbial communities and the fermentation process. Fats are known to inhibit microbial activity; however, supplementing fats at up to 6% of dry matter has shown no adverse effects on total nutrient digestibility and total SCFAs [59]. A meta-analysis suggests that methane emissions can decline by 0.66 g/kg DM intake with each percentage-unit increase in dietary fat, within dietary fat concentrations of 1.24-11.4% [59]. Fats containing high levels of C12:0, C18:3, and polyunsaturated fatty acids, at up to 6% of the diet, may be considered for CH4 mitigation without compromising productivity in dairy cattle [59]. Plant oil supplements can reduce CH4 directly by inhibiting rumen protozoa and methanogens, while enhanced biohydrogenation of polyunsaturated fatty acids (PUFA) acts as a ruminal hydrogen sink for the hydrogen produced by rumen microorganisms, and reduced fiber degradation results in less H2 production in the rumen [60]. The literature shows variable effects of plant oils on CH4 emission and rumen fermentation; this might be related to the oil type (free oil or whole seed), diet composition (forage-to-concentrate ratio), and the fatty acid types (short-chain or PUFA) present in the diet [59]. Generally, whether vegetable oil supplementation to lower CH4 emission is worthwhile may depend upon the cost and the expected effect on animal productivity.
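As a simple worked use of the meta-analysis relationship just quoted (about 0.66 g less CH4 per kg of dry matter intake for each percentage-unit increase in dietary fat, within roughly 1.24-11.4% dietary fat), the sketch below estimates a daily saving. The baseline fat level and dry matter intake are assumptions chosen for illustration only.

```python
def ch4_saving_g_per_day(fat_increase_pct_units, dmi_kg_per_day, slope=0.66):
    """Estimated daily CH4 saving (g/day) for a given increase in dietary fat (% of DM),
    using the meta-analysis slope quoted in the text."""
    return slope * fat_increase_pct_units * dmi_kg_per_day

# e.g., raising dietary fat from 3% to 6% of DM in a cow eating 20 kg DM/day
print(f"~{ch4_saving_g_per_day(3, 20):.0f} g CH4/day less")   # about 40 g/day
```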
Chitosan

Chitosan is a natural polycationic polymer that is nontoxic, biocompatible, and biodegradable; thus, it is safe for human as well as animal consumption [61]. It is a linear polysaccharide composed of two repeated units, D-glucosamine and N-acetyl-D-glucosamine, linked by β-(1-4) linkages [61]. It can be found in the structural exoskeleton of insects, crustaceans, and mollusks, in the cell walls of fungi, and in certain algae, but it is mainly obtained from marine crustaceans [62]. It is characterized by anti-inflammatory, antitumor, antioxidative, anticholesterolemic, hemostatic, and analgesic effects. Moreover, it has a high antimicrobial affinity against a wide range of bacteria, fungi, and protozoa; therefore, it has recently been tested as a rumen fermentation modulator and is considered a promising natural agent with CH₄-mitigating effects [61]. The antimicrobial mechanism of chitosan can include interactions at the cell surface and outer membranes through electrostatic forces, the replacement of Ca²⁺ and Mg²⁺ ions, destabilization of the cell membrane, leakage of intracellular substances, and cell death. The antimicrobial properties of chitosan can also include its chelating capacity for various metal ions and the inhibition of mRNA and protein synthesis [61].

Chitosan activity seems to depend on the diet type as well as the ruminal pH. The literature suggests that the maximum effect of chitosan is observed when grain (starch) is incorporated in the ration at low pH values, shifting the fermentation pattern toward more propionate production, which could be explained by the higher sensitivity of Gram-positive than Gram-negative bacteria to chitosan [61,63]. This type of change in ruminal fermentation by chitosan results in reductions in CH₄ production. Moreover, supplementation of chitosan alters the rumen bacterial communities related to fatty acid biohydrogenation, that is, the Butyrivibrio group and Butyrivibrio proteoclasticus, leading to increased concentrations of milk unsaturated fatty acids and cis-9,trans-11 conjugated linoleic acid [64].

Chemical feed additives

Numerous chemical additives have been used to modulate rumen microbial activity for optimizing animal productivity, namely defaunating agents and anti-methanogenic agents to reduce CH₄ emission. Patra et al.
[4] reported the most promising anti-methanogenic agents, which effectively lower CH₄ without adverse effects on rumen degradability or SCFA production and which, because they work through different modes of action, can additively decrease CH₄ production when added together. These include halogenated sulfonated compounds (e.g., 2-bromoethanesulfonate, 2-chloroethanesulfonate, and 3-bromopropanesulfonate), 3-nitrooxypropanol (3NOP), ethyl-3NOP, and nitrate, which are used to inhibit methyl-CoM reductase activity, the final limiting step of the methanogenesis pathway. Halogenated aliphatic compounds with 1 or 2 carbons can impair corrinoid enzyme function and inhibit cobamide-dependent methyl group transfer in methanogenesis, or may serve as terminal electron (e−) acceptors. Some agents, namely lovastatin and mevastatin, were found to inhibit 3-hydroxy-3-methylglutaryl coenzyme A reductase, which is essential in the mevalonate pathway that forms the isoprenoid alcohols of methanogen cell membranes [4]. The addition of nitrate has two benefits: it inhibits methanogenesis and acts as a nonprotein nitrogen source, which could be useful in low-quality basal diets [65].

Control of acidosis

Diets containing high amounts of rapidly fermenting soluble carbohydrate result in a pH drop due to excessive production of lactate or VFA or a combination of both, which may take the form of subacute ruminal acidosis (pH between 5.0 and 5.5) or acute acidosis (pH below 5.0) and may be acute or chronic in duration [66]. The consequences of acidosis range widely, from lower productivity (most importantly in subacute ruminal acidosis) to death [66,67]. A decrease in ruminal pH leads to inhibition of the rumen cellulolytic bacteria. Therefore, maintaining ruminal pH in the normal range (5.8-7.2) is essential to balance the rumen microorganisms between acid producers and acid consumers. In this context, buffering and alkalizing reagents (e.g., sodium bicarbonate, magnesium oxide, and calcium magnesium carbonate), direct-fed microbials, and malate supplementation may increase ruminal pH and production when ruminants are fed high-grain diets [66,68]. Malate supplementation can stimulate Selenomonas ruminantium, which converts lactate to VFA [69]. Marden et al. [70] reported that the inclusion of 150 g of sodium bicarbonate increased total ruminal VFA concentration by 11.7% compared with the control diet fed to lactating cows. The addition of sodium bicarbonate, magnesium oxide, and calcium magnesium carbonate reduced the time during which ruminal pH persisted below 5.8 in lactating dairy cows fed a high-starch (342 g/kg DM) diet and increased milk and fat yield and milk fat concentration, but reduced milk trans-fatty acid isomers [71]. The efficacy of the acid-neutralizing capacity of the alkalizers depends on physical and chemical properties that influence their solubility under ruminal conditions. However, under developing-country conditions, acidosis problems are usually less severe, as ruminants are mostly fed roughage-based diets.
Enhancing ruminal microbial protein synthesis

Microbial protein synthesized in the rumen (RMP) accounts for between 50 and 90% of the protein entering the duodenum and supplies the majority of the amino acids required for growth and milk protein synthesis [72]. Therefore, increasing RMP synthesis is important for improving animal productivity. Moreover, increasing RMP synthesis is an effective strategy to decrease protein (i.e., nitrogen) excretion in livestock, since dietary protein, unless utilized properly by ruminal microorganisms, is degraded to ammonia in the rumen; the ammonia is absorbed from the rumen, metabolized to urea in the liver, and excreted in urine, causing environmental nitrogen pollution [10,73].

There are many factors affecting RMP synthesis, including dry matter intake, the type of ration fed (forage-to-concentrate ratio), the flow rate of digesta in the rumen, and the sources and synchronization of nitrogen and energy supply [74]. Among these, the amount of energy supplied to rumen microbes was found to be the main factor affecting the amount of nitrogen incorporated into RMP. Substrate-level phosphorylation and electron-transport-level phosphorylation are the two significant mechanisms of energy generation within microbial cells [75]. Based on 10 reconstructed pathways associated with energy metabolism in the ruminal microbiome, Lu et al. [75] found that an energy-rich diet increased the total abundance of substrate-level phosphorylation enzymes in glucose fermentation and of the F-type ATPase of the electron transport chain more than a protein-rich diet did. Therefore, they concluded that energy intake induces a higher RMP yield than protein intake does. In this context, any factor affecting the amount of soluble carbohydrate available to rumen microbes will affect the efficiency of RMP synthesis. Therefore, most of the previously mentioned rumen modifiers (e.g., plant secondary metabolites, dietary oil) may affect RMP synthesis; however, most studies have not determined RMP.

Maximizing RMP synthesis seems to be the most effective approach for small livestock holders in most developing countries, since microbial protein sometimes becomes the only protein source for animals fed poor-quality forage diets with little or no concentrate supplementation. Balancing the diets of these animals by supplementing legume leaves, urea-molasses multinutrient blocks, urea in slow-ammonia-release form, and other nonprotein nitrogen sources has been found to be favorable for RMP synthesis [8,10,29,73]. It has been recognized that true proteins fed at high levels (the most expensive ingredients in the ruminant diet) are utilized by ruminal bacteria in much the same way as ammonia from nonprotein nitrogen (e.g., urea). The optimum ruminal ammonia concentrations for maximal RMP synthesis are about 50-60 mg/L in vitro and 27-133 mg/L in vivo [73].

Reduction in CH₄ production can enhance RMP synthesis. Soltan et al.
[10,29] observed that inclusion of Leucaena in sheep diets at up to 35%, with or without polyethylene glycol, enhanced RMP and body nitrogen retention while reducing CH₄ emission; they suggested that optimizing microbial growth efficiency might help to redirect degraded organic matter from CH₄ formation to RMP synthesis. Plants or feed additives containing phytochemicals with high antioxidant activity can make more nutrients available for microbial uptake, enhancing RMP synthesis, while reducing CH₄ emission by lessening ruminal oxidative stress [36,53].

Reduction of ruminal protein degradation and ammonia production

From an economic view, dietary protein concentrates increase production costs, especially for developing countries. Furthermore, the microbial population in the rumen has a high proteolytic capacity to degrade dietary protein. Therefore, nutritionists are interested in formulating diets with ruminally undegradable protein sources. Protein degradation in the rumen depends mainly on three processes: proteolysis, peptidolysis, and deamination. Many protein-degrading bacteria are naturally found under ruminal conditions, that is, Ruminobacter amylophilus, P. ruminicola, Butyrivibrio fibrisolvens, S. ruminantium, Streptococcus bovis, and P. bryantii. There are also many amino acid-fermenting bacteria, that is, Clostridium sticklandii, Clostridium aminophilum, M. elsdenii, B. fibrisolvens, P. ruminicola, S. bovis, and S. ruminantium [73]. Increased ruminal ammonia concentration is an indicator of high degradation of dietary protein. Many factors can affect ruminal protein degradation and ammonia concentration, such as the type of dietary protein, the energy sources, the predominant microbial population, the rumen passage rate, and rumen pH [35]. The ruminal bacteria can utilize ammonia for the synthesis of amino acids required for their growth. The optimal ammonia concentration needed to maximize RMP synthesis ranges from 88 to 133 mg/L [76].

Several inhibitors of ruminal microbial protein degradation and ammonia production have been reported in the literature. Condensed tannins, slow-release urea products, encapsulated nitrate, clays (e.g., bentonite and zeolite, which act through their cation exchange capacity), and biochar have been found to reduce the rapid increase in ammonia production and to maintain ruminal pH. The ruminal urea pool is supplied by urea in the diet and by urea recycled through saliva and the ruminal wall. The urease enzyme produced by the ruminal microbiota rapidly degrades urea to ammonia, causing ammonia toxicity and inefficient urea utilization when urea is used in excessive amounts [73]. Inhibitors of urease may reduce the risk of ammonia toxicity and allow more efficient utilization of urea and other nonprotein nitrogen compounds [77].
Enhancing functional values of milk and meat

Ruminant-derived foods (milk and meat) contain a high amount of saturated fatty acids, which are associated with human health concerns. Therefore, improving the functional value of ruminant products by increasing the content of beneficial fatty acids (FAs) and decreasing detrimental ones, specifically decreasing the content of saturated FAs and increasing n-3 FAs and conjugated linoleic acids (e.g., cis-9,trans-11 C18:2, also called rumenic acid), has been of great interest among researchers [78]. Manipulating ruminal biohydrogenation of polyunsaturated fatty acids (PUFAs) has been the main target for increasing the rumenic acid and vaccenic acid content of meat and milk, as both compounds are major intermediates in biohydrogenation. To elevate the rumenic acid content of products, inhibition of the last step of biohydrogenation needs to be attempted without affecting the lipolysis, isomerization, and reduction of linoleic and linolenic acid to rumenic acid and vaccenic acid. Alternatively, to elevate PUFAs in meat and milk, in particular n-3 FAs, inhibition of the early steps of biohydrogenation should be targeted. Secondary compounds such as tannins, saponins, or essential oils rich in terpenes present in plants and forages, or supplementation with vegetable oil, can improve some aspects of meat and milk quality, including n-3 FAs, conjugated linoleic acids, and antioxidant properties [73,79-81].

Conclusions

The ruminal fermentation end products are typically the outputs of several interactive reactions among the rumen microbial populations. Manipulating rumen microbial fermentation toward enhanced fiber digestibility, SCFA production, and outflow of microbial biomass, while reducing ammonia and CH₄ emission, is the most promising way to improve animal productivity. Numerous rumen fermentation modifiers have been studied during the last few decades; however, their positive effects are sometimes associated with undesirable side effects or high costs (e.g., ionophore antibiotics, anti-methanogenic chemical feed additives, or essential oils). Moreover, most of these modifiers have shown inconsistent efficacy in the literature, mainly because of variability in animal age, breed, diet formulation, physiological status, rumen microbial resistance, and adaptation. Despite the long history of studies on rumen modifiers, most measurements are made during the treatment period, and knowledge is still limited on animal responses in later life and on impacts on human health and growth. However, there is unanimous agreement that an ample array of drought-tolerant plants containing effective bioactive compounds, natural feed additives with potential abilities to modulate CH₄ emission
9,011.8
2021-12-31T00:00:00.000
[ "Agricultural And Food Sciences", "Biology" ]
Meteorite Chelyabinsk: Features of Destruction

A space object exploded near the city of Chelyabinsk on February 15, 2013. Meteorite fragments reached the Earth's surface, and accordingly we may consider this space object to have been a meteorite. However, this event showed a number of features not corresponding to the destruction of a meteorite. The space object began to disintegrate at an altitude of 70 km, when the pressure (dynamic load) on its front surface was ~6.7 × 10³ N·m⁻². The substance from the object's surface was not blown off as drops, as in ablation, but was ejected by jets over a distance of up to 1 km. The trail of this space object visually resembled a jet aircraft's contrail, which is made up of water. But there is not enough water at altitudes of 30-70 km. It may be assumed that the object itself delivered water to these altitudes. The calculation of gas rise over the trail showed that the temperature in some parts of this trail was about 900 K. Heating of such large masses of gas can be explained not only by the release of the kinetic energy of the space object, but also by combustion processes of its substance. Thus, it was concluded that the meteorite could have been delivered by a comet.

INTRODUCTION

A space object given the name "Chelyabinsk meteorite" exploded in the Earth's atmosphere on February 15, 2013. This object flew along a sloping trajectory with an angle of inclination to the Earth's surface equal to ~17°. The speed at which the object entered the Earth's atmosphere was determined as 19 km/s. An energy of ~500 kilotons of TNT was released during the explosion, which made it possible to estimate the mass of this space object as ~10⁷ kg (equating the released energy of ~2 × 10¹⁵ J with the kinetic energy mV²/2 at V = 1.9 × 10⁴ m/s gives m ≈ 10⁷ kg). Fragments of the Chelyabinsk meteorite, which were identified as ordinary chondrite, landed on the ground. This space object is a priori considered a meteorite with an initial diameter of ~19 m. However, a number of inconsistencies were discovered in the process of studying this fall that make it possible to doubt the meteorite nature of this space object. It was hypothesized [1,2] that the Chelyabinsk meteorite was only a small part of the space object, and that the main body was a comet. Here we discuss some features of the destruction of this space object which allow us to draw a conclusion about the nature of this body.

THE BEGINNING OF THE OBJECT'S DESTRUCTION

The object's trail started to form at an altitude of 70 km. According to Borovička et al. [3], the width of the trail left by this space object along its entire length was about 2 km. Therefore, particles flew away from the parent body to a distance of ~1 km. It is unlikely that this is due to the movement of evaporated meteorite molecules or molten droplets. According to eyewitnesses [4], the space object rotated very quickly during its flight, and substance was ejected by jets in all directions. It is considered that splitting of space bodies begins when the dynamic load P_s on the front surface of the body exceeds the strength σ_cb of the substance of this body. The dynamic load can be written as
P_s = C_D·ρ_a·V², where C_D ~ 1 is the drag coefficient (for a sphere), ρ_a is the atmospheric density at this altitude, and V is the speed of the object. From here we find the object's strength at the beginning of its destruction to have been σ_cb ≈ 6.5 × 10³ N·m⁻². According to numerous observations, destruction of stone meteorites [5], to which the Chelyabinsk meteorite belongs, begins at dynamic loads of 3 × 10⁵-10⁶ N·m⁻². The strength of the Chelyabinsk meteorite was ~10⁶ N·m⁻². This was determined from the scattering ellipse of meteorite substance, which begins below the trajectory altitude of ~35 km [2], where the dynamic loads on the Chelyabinsk meteorite were ~10⁶ N·m⁻².

Thus, in the object which exploded over Chelyabinsk there could have been two components with different strengths. Emission of substance as jets is characteristic of comets; hence, it is possible to assume that at an altitude of 70 km it was a comet with low strength (~6.5 × 10³ N·m⁻²) that began to be destroyed, and its destruction ended in an explosion at an altitude of 30 km. The meteorite, which had a strength of ~10⁶ N·m⁻², began to be destroyed at an altitude of ~35 km.

APPEARANCE OF THE TRAIL OF THE SPACE OBJECT

The appearance and form of the trail left by the space object (Figure 1) was similar to trails of objects containing water, that is, the contrail of a jet plane, and to clouds. The trail of a jet plane consists of ice crystals condensed on dust particles ejected from the plane. Clouds also consist of water droplets or ice crystals. The light orange color of the trail in the Chelyabinsk event is associated with oxides of nitrogen formed during ionization of the atmosphere. Moreover, it is important to note that this trail had a luminous edge (Figure 1). This luminosity is due to the refraction of sunlight by ice crystals. We can also see that this trail reflects sunlight like ice crystals. Therefore, it can be assumed that the trail of the space object contained water in the form of ice crystals.

It should be noted that at altitudes of 30-70 km there is not enough atmospheric water to form the dense cloudy trail which we observe in Figure 1. The Earth's atmosphere at an altitude of 50 km is so "dry" that clouds do not form there and, accordingly, are never observed. At an altitude of 30 km, clouds are observed very rarely; they represent a thin transparent veil. The significant decrease in atmospheric humidity with altitude can be seen in Figure 2. If the space object were a meteorite, its trail would be a dust cloud. Dust was thrown to an altitude of 50 km as a result of the explosion of the Krakatau volcano, and it was noted that the dust blanket absorbed sunlight. In the Chelyabinsk event, the cloud reflects the light of the Sun; therefore, it can be assumed that the water at an altitude of 30-70 km was brought by the space object itself. This means that part of this space object was a comet.

SIMULATION OF CLOUD RISING OVER A BRIGHT RED SPOT

Red (hot) spots (Figure 3) appeared at the places where the ejected matter was slowed down. The origin of the glow in these spots is related to processes taking place in this substance. The clouds rising over the red spots are the result of heating of the atmosphere by the energy released in these processes. We can suppose that the process which caused this heating was not only the liberation of kinetic energy but also combustion. Combustion of the cosmic object's substance is possible only if this object was a comet.

Figure 2.
The standard atmosphere in the altitude range 0-30 km: 1 - air density; 2 - water content according to Lazarev [6].

A model of the cloud rising should meet the following conditions. The core of a primary red spot rose by ~450 meters in the first five seconds after the time of the object's flight [3]. We assume that the heated gas of a red spot formed a sphere-shaped cloud with an initial radius r ~ 1000 m located at an altitude of z = 25.5 km. The dependence of the atmospheric temperature on altitude near Chelyabinsk for February 15, 2013 was taken from Miller et al. [7]. Atmospheric pressure at different altitudes was calculated using the barometric formula.

The rise of the cloud takes place under the action of forces [8-10]: F_up = F_A − F_G − F_R, where F_up is the lifting force, F_up = M·w, with M = ρ_gas·V the mass of the rising gas of density ρ_gas (which depends on the temperature T), V the cloud volume, and w the acceleration of the lift; F_A is the Archimedes force, F_A = g·V·ρ_at, where ρ_at is the atmospheric density at the given altitude; F_G = M·g is the gravitational force acting on the cloud; and F_R is the resistance force, F_R = C_x·S·ρ_at·v²/2, where S is the cross-section area of the cloud, v is the cloud lift speed, and C_x is the dimensionless coefficient of aerodynamic drag, which depends on the shape of the moving body. This coefficient determines which part of the ambient air starts moving together with the rising object. For a spherical object this coefficient was experimentally determined as C_x = 0.24. However, as was shown by observations of clouds from nuclear explosions, as well as of rising balloons [8], deformations occur in the course of the rise and this coefficient can increase by a factor of ~2.

The calculations showed that a cloud can rise by 450 m in 5 seconds if its initial temperature is T ~ 900 K. In other words, the temperature of the cloud of mass M ~ 2.5 × 10⁷ kg exceeds the atmospheric temperature at this altitude by ~700 K.

DISCUSSION AND CONCLUSION

One more discrepancy with the meteorite nature of the space object concerns the fragments that reached the Earth's surface. Investigation of the meteorite fragments showed that some of them were directly irradiated by streams of solar cosmic rays [11]. Therefore, these fragments were located on the surface of this space object. The remaining fragments belonged to near-surface layers of the space object. These fragments were located no deeper than 2.5 m from the surface [12], in spite of the fact that the radius of the Chelyabinsk meteorite was determined as 8.5 m. It is difficult to explain the complete evaporation of the inner part of the meteorite when fragments from the surface layers survived. According to all theories of the destruction of meteorites, the outer layers of large bodies evaporate first. All questions are removed if we are dealing with a situation where a comet was united with the Chelyabinsk meteorite. At first the less strong comet disintegrates, and then meteorite fragments reach the Earth's surface. So, we can suggest that in the Chelyabinsk event the space object was a comet merged with a meteorite.
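The following is a minimal numerical sketch of the cloud-rise estimate described in the simulation section above: it integrates M·w = F_A − F_G − F_R for a hot spherical cloud. The radius (1000 m), altitude (25.5 km), cloud temperature (900 K), and the doubled drag coefficient are taken from the text; the exponential pressure profile, the constant cloud radius, and the ambient temperature of 220 K are simplifying assumptions of mine, so the script only reproduces the order of magnitude of the rise (a few hundred meters in 5 s), not the authors' exact numbers.

import math

# Rough sketch of the cloud-rise estimate; values marked "from text" are quoted
# in the article, everything else is a simplifying assumption.
g, R_air = 9.81, 287.0                        # gravity (m/s^2), gas constant of air (J/kg/K)
r, z0 = 1000.0, 25_500.0                      # cloud radius (m), altitude (m) -- from text
T_cloud = 900.0                               # cloud temperature (K) -- from text
Cx = 2 * 0.24                                 # drag coefficient, doubled as in the text
V = 4.0 / 3.0 * math.pi * r**3                # cloud volume (m^3)
S = math.pi * r**2                            # cloud cross-section (m^2)

def ambient(z):
    """Assumed ambient state: isothermal 220 K, exponential pressure profile."""
    T = 220.0
    p = 101_325.0 * math.exp(-z / 7_000.0)    # pressure (Pa)
    return T, p / (R_air * T)                 # temperature, density

T_at, rho_at = ambient(z0)
rho_gas = rho_at * T_at / T_cloud             # hot gas density (ideal gas, same pressure)
M = rho_gas * V                               # cloud mass (kg)

v, rise, dt = 0.0, 0.0, 0.01
for _ in range(int(5.0 / dt)):                # integrate the first 5 seconds
    _, rho_at = ambient(z0 + rise)
    F_A = g * V * rho_at                      # Archimedes (buoyancy) force
    F_G = M * g                               # weight of the hot gas
    F_R = 0.5 * Cx * S * rho_at * v * abs(v)  # aerodynamic resistance
    v += (F_A - F_G - F_R) / M * dt
    rise += v * dt

print(f"cloud mass ~{M:.1e} kg, rise after 5 s ~{rise:.0f} m")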
2,319.8
2018-11-30T00:00:00.000
[ "Physics", "Geology" ]
Understanding models understanding language

Landgrebe and Smith (Synthese 198(March):2061–2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence, perhaps more widely known as natural language processing: the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis, and present what I take to be a more adequate analysis of the ability of Transformer models to learn natural language semantics. To avoid confusion, I distinguish between inferential and referential semantics. Landgrebe and Smith (2021)'s analysis of the Transformer architecture's expressivity and generalization concerns inferential semantics. This part of their diagnosis is shown to rely on misunderstandings of technical properties of Transformers. Landgrebe and Smith (2021) also claim that referential semantics is unobtainable for Transformer models. In response, I present a non-technical discussion of techniques for grounding Transformer models, giving them referential semantics, even in the absence of supervision. I also present a simple thought experiment to highlight the mechanisms that would lead to referential semantics, and discuss in what sense models that are grounded in this way can be said to understand language. Finally, I discuss the approach Landgrebe and Smith (2021) advocate for, namely manual specification of formal grammars that associate linguistic expressions with logical form.

Introduction

Cross-disciplinary investigations, such as when philosophers put artificial intelligence under scrutiny, are healthy, if not crucial. Any discipline has its blind spots, and sometimes it takes a new set of eyes to push research horizons onward. Needless to say, cross-disciplinary investigations require considerable knowledge of at least two scientific fields, and it is both brave and praiseworthy when researchers embark on such endeavors. Landgrebe and Smith (2021) present a very critical analysis of contemporary language-centric artificial intelligence (natural language processing), in particular of models based on the Transformer architecture (Vaswani et al., 2017). Their article has two parts: in Sects. 1 and 2, they present their analysis of Transformer models; in Sect. 3, they present an alternative approach to modeling language. I will focus mostly on Sects. 1 and 2, but also briefly discuss the approach advocated for in Sect. 3. In these sections, Landgrebe and Smith (2021) argue that Transformer models are insufficiently expressive, exhibit poor generalization, and will never acquire linguistic semantics, never 'understand' language. There are many reasons to be critical of recent developments in artificial intelligence, but in this paper I will argue that the diagnosis presented by Landgrebe and Smith (2021) is misleading in important respects, and I will show why, on the contrary, there are reasons to believe that Transformers suffer from none of the above weaknesses.
Understanding transformers

The most widely used models in natural language processing today rely on the Transformer architecture (Vaswani et al., 2017). This includes most popular pretrained language models, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), and GPT-3 (Brown et al., 2020), but Transformer models are also used across a wide range of downstream applications, including syntactic parsing (Mohammadshahi & Henderson, 2021), summarization (Gu et al., 2019), and semantic parsing (Shiv & Quirk, 2019). We first present a rough outline of how Transformer models work, and then review how they are presented in Landgrebe and Smith (2021).

Transformer models are deep neural networks, typically comprised of dozens of layers. Layers are commonly referred to as Transformer blocks. The neurons of each layer connect to the neurons of the next layer, leading to models in which learning amounts to carefully adjusting millions of numerical knobs. The input to a Transformer is typically a short text. The first step is so-called tokenization of the text, a translation of the original input into a series of more meaningful entities, called tokens. There are many ways to do this, but the meaningful units roughly correspond to what we understand as words or morphemes. It is important to note that already at this point, the Transformer architecture injects an inductive bias by segmenting the input into words and morphemes. This inductive bias is a linguistically motivated bias: while learned tokenizations do not always align perfectly with how linguists would break a text into meaningful units, they correlate much better with a gold-standard segmentation than a random segmentation would.

The input tokens are then translated into vectors that represent their meaning out of context. The token vectors are combined with vectors that represent where in the short text the different tokens were located. These vectors are called position encodings. It is the combined vectors, which thus represent the different tokens and where they are in the text, that are sent through the stacked Transformer blocks.

What, then, is a Transformer block? A Transformer block is first and foremost a way of combining information about different tokens that takes into account that tokens may be more or less important in a particular context and with a specific purpose in mind. The word good has a particular meaning in the sentence Huawei's new phone is good, and is particularly important if we were to decide whether the review provides a rationale for buying the phone or not. Consider now the sentence Many say that Huawei's new phone is good, but I think it is average. The word good obviously has the same meaning, but is less important in deciding the sentiment of the sentence. The importance of the word good depends on the other words in the sentence, and the Transformer architecture presents a particular way of combining the encodings of the different words with a specific purpose in mind.
This is where the calculations in Transformer blocks get a little complicated. In brief, the vectors that represent situated tokens are multiplied into three different number matrices. This gives us three new vectors u_i, v_i, w_i for each token t_i. These are now combined with the vectors of other tokens. For each token t_i, the first vector u_i is multiplied by the second vector v_j of the other tokens, giving us a scalar value that is used to weight the third vector w_j of the second token. Each token t_i is now represented by the sum of these weighted vectors w_j. This summed vector now contains not only information about the original word, but also about the context in which it appeared. In each stacked Transformer block, the same operation is repeated. Each layer contains more and more abstract vector representations of the original text, and the various vector representations have been found to contain useful information for a wide range of applications in natural language processing.

Misunderstanding Transformers

Landgrebe and Smith (2021), in their criticism, focus on what they see as the limited capacity of Transformer models, but their description of the capacity of Transformer models is factually wrong. They describe, for example, how a Transformer model 'encodes each single-sentence input into an encoding vector of 1024 real numbers.' This is simply not true. In a Transformer, the input is a series of, say, 512 or 1024 tokens, each represented as, say, 512- or 1024-sized vectors, and then combined through stacked Transformer blocks of multi-headed self-attention into complex sentence representations. Landgrebe and Smith (2021) also claim that 'these sentence embeddings lose relations between words within the sentence.' As my presentation of the inner workings of Transformer blocks just showed, this is also false: Transformer models were specifically designed to model the relations between words. Note also that the dimensionalities of neural networks are hyper-parameters that can be modified, and that the capacity of networks is only practically bounded by computational resources. Landgrebe and Smith (2021) also point to a more serious limitation of (some) Transformer models, namely 'the discarding of all information pertaining to the contexts of the input sentences.' As already mentioned, some applications of the Transformer architecture model sentences independently of each other. This clearly is an important limitation, preventing resolution of inter-sentential anaphora and bridging, or disambiguation based on preceding discourse. However, most applications of the Transformer architecture do model context, when relevant, typically in one of three ways: either by (a) simply processing larger chunks of texts sequentially (Liu et al., 2019); (b) conditioning on context representations (Wang et al., 2020); or (c) applying Transformer models hierarchically (Ging et al., 2020). In sum, Landgrebe and Smith (2021) thus misrepresent Transformers in three ways: by claiming Transformers have limited expressivity, fail to capture relations between words, and fail to model inter-sentential context. None of these claims are true. Note how all three pertain to inferential semantics.
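To make concrete how a Transformer block models relations between words, here is a minimal numpy sketch of single-head self-attention along the lines described above, with the u, v, and w vectors corresponding to what the literature calls queries, keys, and values; the dimensions and random inputs are arbitrary placeholders, not values from any actual model.

import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 5, 16                       # toy sequence length and vector size
X = rng.normal(size=(n_tokens, d))        # situated token vectors (token + position)

# Three learned projection matrices (random here, for illustration only).
P_u, P_v, P_w = (rng.normal(size=(d, d)) for _ in range(3))
U, V, W = X @ P_u, X @ P_v, X @ P_w       # u_i ("queries"), v_j ("keys"), w_j ("values")

scores = U @ V.T / np.sqrt(d)             # scalar u_i . v_j for every token pair
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax: importance of token j for token i

Z = weights @ W                           # each token becomes a weighted sum of the w_j
print(Z.shape)                            # (5, 16): contextualized token representations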
Transformers' understanding

The main argument presented by Landgrebe and Smith (2021) against using models such as Transformers in natural language processing has to do with referential semantics, and is reminiscent of Searle (1980) and similar thought experiments in philosophy of mind. Landgrebe and Smith (2021) claim that a Transformer model is necessarily shallow because 'the vector space it uses is merely morpho-syntactical and lacks semantic dimensions.' This is clearly a false statement under most definitions of morpho-syntax and semantics, since Transformer models obtain very good results on tasks such as topic classification and machine translation. If Transformer models only encoded morpho-syntactic information, they would not be able to distinguish between I just ate an apple and I never painted a lion, making topic classification and machine translation blind guessing.

Here is what I think they mean, though: Transformer models at best learn inferential semantics, not referential semantics (Marconi, 1997). Landgrebe and Smith (2021) define understanding language as seeing its relevance to actions and thoughts, and argue that this is what Transformer models cannot do. Seeing the relevance of words and phrases to actions and thoughts seems to decompose into the following two properties: (a) lexical representations are grounded in representations of the (physical, social, and mental) world; and (b) the agent is aware or conscious of such grounding.

Inferential semantics refers to the part of semantics that is concerned with valid inferences. In lexical semantics, this involves establishing relations of synonymy, antonymy, hyponymy, etc. The output of such lexicographic exercises is often a database, which is best thought of as a graph with lexemes as nodes and with edges corresponding to lexical relations. The situation gets more complex at the sentence or discourse level, revolving around discourse relations such as entailment, contrast, consequence, and explanation. Referential semantics is the part of semantics that concerns denotation, whether in terms of truth conditions, mental representations, or situations in which using a word is deemed appropriate.

Both (a) and (b) clearly concern referential semantics. Marconi (1997) remarks that Searle's Chinese Room only applies to referential semantics, not inferential semantics, since category effects, word associations, etc., are unconscious processes. Landgrebe and Smith (2021) thus seem to agree with Searle (1980) on the inherent limitations of Transformer models: lack of proper grounding and lack of consciousness. In Sect. 4, I will discuss handwritten grammars, which Landgrebe and Smith (2021) claim do not suffer from such limitations. (Searle, however, would.)

Grounding

The grounding problem (Harnad, 1990; Jackson & Sharkey, 1996) is the problem of learning a mapping from words and phrases to the objects and events they refer to, or to cognitive representations thereof. How can deep neural network models such as Transformer models learn to ground their representations in this way? Inferential semantics, i.e., relations between words and sentences, is induced implicitly by most learning objectives used to train these models. The most commonly used learning objective today is perhaps masked language modeling (Devlin et al., 2019), but the same holds for the translation objective in Vaswani et al. (2017), for example. Transformer models can therefore be straightforwardly evaluated as models of inferential semantics. How can Transformer models encode referential semantics, though?
Well, the details of this will depend on what we think it is that linguistic expressions refer to. Let us, for example, assume linguistic expressions refer to vectors in an embedding space of neural activations or (fMRI/EEG) images thereof. If mental imagery is defined broadly enough, this should be compatible with some forms of internalist semantics (Rapaport, 1994; Schank & Colby, 1973), but note that the vector space could also be a perceptual or physical space. Referential semantics or grounding now amounts to learning a mapping between the Transformer model vector space and this target space.

But why, you may ask, would language model vector spaces be isomorphic to representations of our physical, mental, and social world? After all, language model vector spaces are induced merely from higher-order co-occurrence statistics. I think the answer is straightforward: words that are used together tend to refer to things that, in our experience, occur together. When you tell someone about your recent hiking trip, you are likely to use words like mountain, trail, or camping. Such words, as a consequence, end up close together in the vector space of a language model, while also being intimately connected in our mental representations of the world. If we accept the idea that our mental organization maps (is approximately isomorphic to) the structure of the world, the world-model isomorphism follows straightforwardly (by closure of isomorphisms) from the distributional hypothesis.

There is plenty of evidence that Transformer-based language models encode words in ways that are near-isomorphic to where neural activation occurs when listening to or reading these words (Pereira et al., 2018; Caucheteux & King, 2021), to how our perceptual spaces are organized (Abdou et al., 2021; Patel & Pavlick, 2022), as well as to how physical spaces are organized (Liétard et al., 2021). Such global similarities can also be induced from local ones: Wu et al. (2021) show how brain activity patterns of individual words are encoded in a way that facilitates analogical reasoning, the same analogical reasoning that language models facilitate. Such a property would in the limit entail that brain encodings are isomorphic to language model representations (Peng et al., 2020). To see this, consider an example of analogical reasoning: 'Berlin is to Germany what Copenhagen is to ____'. In a language model, this is computed by subtracting the vector for Berlin from the sum of the vectors for Germany and Copenhagen, and returning the nearest neighbor of the resulting vector. For today's language models, the result would most likely be the vector for Denmark. If you can compute all possible analogies using vector offset this way, you have induced a structure that is isomorphic to (the current geopolitical) reality. If you can compute the same analogies by offset of brain imaging vectors, these two spaces must be near-isomorphic. And language models can thus be grounded in brain imaging spaces.
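A small sketch of the vector-offset computation behind the Berlin/Copenhagen example; the four toy vectors below are stand-ins, not real language-model embeddings.

import numpy as np

# Toy embeddings; in practice these would come from a trained language model.
emb = {
    "Berlin":     np.array([0.9, 0.1, 0.3]),
    "Germany":    np.array([0.8, 0.7, 0.2]),
    "Copenhagen": np.array([0.3, 0.1, 0.9]),
    "Denmark":    np.array([0.2, 0.7, 0.8]),
}

query = emb["Germany"] - emb["Berlin"] + emb["Copenhagen"]   # vector offset

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nearest neighbor of the offset vector, excluding the words in the query.
candidates = {w: v for w, v in emb.items() if w not in {"Berlin", "Germany", "Copenhagen"}}
answer = max(candidates, key=lambda w: cosine(query, candidates[w]))
print(answer)   # "Denmark" with these toy vectors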
To flesh out my argument that Transformers (and similar neural network architectures) can learn grounding, I present the Color Radio thought experiment, about the grounding of color terms: Consider a common AM/FM radio receiver tuned in to a talk radio channel. The engineer who built the receiver augmented the device with a pattern recognition module or a modern language model, as well as a one-pixel camera. The radio wants nothing more than to learn the meaning of color terms. It therefore starts to consider the linguistic contexts in which these terms occur. Since the talk radio channel signal is not aligned with the input of its camera, it cannot use co-occurrence statistics to ground these terms in its color perception. Notice also that the problem of grounding color terms, in the eyes of Searle (1980), should be as impossible as learning to understand language in general. The representations of the receiver's language model were induced 'in a vat', so to speak. Pursuing its goal nevertheless, the radio notices how terms such as yellow and turquoise occur in slightly different contexts, but also how other color terms such as violet and purple occur in very similar contexts. Technically, it computes the co-occurrence statistics of these color terms and embeds them in a low-dimensional vector space. After years of practice, it learns to represent colors in a way that is near-isomorphic to how humans perceive colors; because its language model is contextualized, it even learns to correct for possible reporting biases. It has now learned the inferential semantics of color terms. The radio wants more, though. It also wants to learn the referential semantics of color terms, i.e., the mapping of color terms onto pixel values. However, if the color term representation is isomorphic to the camera's representation of colors, it follows that, unless the color terms lie equidistantly on a sphere, we can induce a mapping, even in the absence of supervision, by straightforward methods that humans also seem to be endowed with. The Color Radio thought experiment is designed to suggest the plausibility of unsupervised grounding, and is as such intended as a rebuttal of both Searle (1980) and Landgrebe and Smith (2021).

In sum, my argument for why (unsupervised) grounding of Transformers is possible goes as follows:
Premise (P1): 'Transformer language model vector spaces are near-isomorphic across languages and often also with brain imaging, perceptual, and physical spaces.'
Premise (P2): 'Two near-isomorphic vector spaces can be aligned with minimal supervision, and often without supervision.'
Conclusion: 'Transformer language model vector spaces can be aligned with minimal supervision, and often without supervision.'
Both premises have empirical support, and the conclusion is derived by a simple application of modus ponens.
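A minimal sketch of the alignment step this argument relies on: given two near-isomorphic vector spaces (say, language-model vectors and perceptual color representations), a linear map between them can be recovered from a handful of anchor pairs via orthogonal Procrustes. The data below are synthetic, and the fully unsupervised variants mentioned later (e.g., adversarial alignment) are not shown.

import numpy as np

rng = np.random.default_rng(1)

# Synthetic "language-model" space and a near-isomorphic "perceptual" space:
# the second is a rotated, slightly noisy copy of the first (an assumption).
X = rng.normal(size=(50, 8))                       # e.g., color-term embeddings
Q_true, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # hidden rotation between the spaces
Y = X @ Q_true + 0.01 * rng.normal(size=X.shape)   # e.g., perceptual representations

# Orthogonal Procrustes: find the rotation W minimizing ||X W - Y||_F.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# If the spaces really are near-isomorphic, the learned map grounds points well.
print(np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))   # small relative error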
Awareness

Landgrebe and Smith (2021)'s definition of understanding as seeing the relevance of words and phrases to actions and thoughts was shown to decompose into grounding and awareness. I will argue, with Dennett (1987), that seeing awareness as a prerequisite for understanding rests on a category mistake (Ryle, 1938). The category mistake of Searle, as well as of Landgrebe and Smith (2021), is to assume that language understanding can be equated with what we experience when we are aware of our language understanding. Understanding language, we argue, or linguistic meaning, if you prefer, does not belong to the category of private, conscious experiences, but to categories of processes that are orthogonal to consciousness. It is generally easy to conflate these, because our introspection suffers from a severe sampling bias: when we think of instances of our own language production, we naturally tend to think about instances in which we were conscious of our language production. Now ask yourself this: does linguistic meaning really imply awareness of linguistic meaning? Does understanding really imply awareness of understanding? Dennett (1987) argued that Searle (1980) conflates understanding and awareness of understanding. Leibniz already emphasized the importance of processes of understanding and reflection that we are unaware of. It certainly seems possible to produce semantically fluent sentences in the absence of conscious thought, e.g., during sleep or under anesthesia (Webster, 2017). Patients who are unconscious, as defined by the Glasgow Coma Scale, reportedly react to and can remember verbal communication, even if they are not able to respond. Comatose patients also seem to comprehend language. Van den Bussche et al. (2009) present several experiments that suggest the possibility of unconscious language understanding, even when participants are fully awake. One of them is a lexical decision task, in which participants were exposed to sequences of letters and asked to classify these as words or non-words. Subliminal primes preceded the exposure. Some primes were semantically related, while others were completely unrelated. Semantically related primes were shown to lead to faster and more accurate responses. In another experiment, participants were asked to read target words aloud, and related subliminal primes were again shown to facilitate reading. All of this suggests that meaning does not require conscious reflection on relevance and attribution. And if so, machines simply do not need consciousness to acquire linguistic meaning.

That we are prone to this category mistake is unsurprising: when we recollect memories of communicating with others, memories of understanding what others were saying, we almost by definition recall events in which we were in fact aware of this process of understanding. It is much easier to recall events you were conscious of than events you were not. Our introspection thus suffers from severe sampling bias, so to speak. This holds true for things we do. Consider the common experience of unconscious driving. You jump on your bike or get into your car to drive to work, but quickly find yourself immersed in thoughts. Perhaps you are preparing yourself for a meeting later that day, or you are thinking about the movie you saw last night. Moments later you park in front of your office, with no recollection of how you made it there. Presumably you navigated through crossings and roundabouts, stopped at traffic lights, etc., but none of this required conscious effort. Nevertheless, if you were asked what it feels like to bike to work, you would likely recall events in which you were conscious of biking to work.
My argument for why awareness is irrelevant to the ability of Transformers to learn referential semantics is simply that awareness is irrelevant to this pursuit. This follows directly from the empirical observation that language understanding can be unconscious.

Approximation

Transformer models are induced from finite amounts of data and are hence approximative. If trained on more (representative) data, they will likely learn to better approximate the inferential and referential aspects of semantics. Landgrebe and Smith (2021) find this disturbing and write: 'Even at their very best, they remain approximative, and so any success they achieve is still, in the end, based on luck.' This, though, is a fallacy. Mastery of archery or talent for counseling is also approximative, but not a matter of luck. While no one, neither a chess computer nor a grand champion, is able to compute the optimal next chess move in real time, because of the doubly exponential search space, a skilled chess player will nevertheless win over me a hundred games in a row. While their craft is in the same sense approximative, any attempt to reduce the difference between us to luck would be ridiculous. Human language acquisition, by the way, is also approximative. This was the most prominent counter-argument against another classical argument for the impossibility of machine understanding of language, namely Gold's Theorem (Gold, 1967). In fact, Gold's Theorem seems to provide some motivation for saying that approximation is necessary for language understanding. This follows from the fact that language is a moving target, and that members of a linguistic community exhibit a great deal of variation, speaking slightly different dialects, sociolects, and idiolects. A learning algorithm that would iterate through all possible grammars and only discriminate between (exactly) correct and incorrect ones would never terminate in the face of such variation.

That is: I argue that the approximative nature of Transformer models, like their possible lack of awareness, is orthogonal to their ability to learn referential semantics. This follows from two relatively uncontroversial assumptions, namely that language exhibits drift and inter-speaker variation, and that this makes it impossible to identify a language exactly:
Premise (P1): 'Language is a moving target, over time and between speakers.'
Premise (P2): 'Moving targets can be approximated, not modeled exactly.'
Conclusion: 'Language models can only approximate language.'

The Robustness of Transformers

In a final point of criticism, Landgrebe and Smith (2021) also suggest that Transformer models will quickly become invalid if the input-output relationship changes on either side, even in some minor way. This is because, they claim, the model does not generalize: once fed with input data that do not correspond to the distribution it was trained with, the model will fail. Landgrebe and Smith (2021) are here concerned with the robustness of deep neural networks, e.g., Transformer models, under distributional shift. This is an important subfield of artificial intelligence, and many researchers have devoted their careers to learning good models under distributional shift. Sometimes this literature is referred to as domain adaptation or transfer learning (Søgaard, 2013). While domain adaptation remains a challenge, language models based on Transformers are among the most robust models in artificial intelligence, and it is certainly false to say that they become invalid if the input-output relationship changes moderately.
In other words, the claim that Transformers generally exhibit poor generalization and low performance is inconsistent with empirical observations. Hsieh et al. (2019), for example, show how Transformer models tend to be much more robust than earlier models, including so-called recurrent neural networks, across tasks such as sentiment analysis and textual entailment. Hendrycks et al. (2020) make similar observations for more downstream applications. Landgrebe and Smith (2021) seem underwhelmed by the performance of Transformer architectures in general. They note that Vaswani et al. (2017) report so-called BLEU scores of 28.4 for English-German and 41.8 for English-French and write that 75-85 could be achieved in theory and would correspond to the translation abilities of an average bilingual speaker. The Transformer scores are, in contrast, 'low', in their view. Human translators do not exhibit much better BLEU scores, however. In the original paper introducing the BLEU metric (Papineni et al., 2002), the BLEU scores reported for two human translators were 19.3 and 25.7, respectively.

In Sect. 2, I showed how Landgrebe and Smith (2021) misrepresented how inference works in Transformers in three ways. In this section, I have discussed three other claims by Landgrebe and Smith (2021), pertaining to their learning capacity: Landgrebe and Smith (2021) claim Transformers cannot acquire referential semantics and cannot learn to generalize outside of their training data. I presented a mixture of arguments and empirical evidence in an attempt to refute both claims. Moreover, along the way, I also discussed how awareness is generally not a prerequisite for understanding, and how the fact that machine learning models, including Transformer models, are approximate by nature by no means disqualifies them as models of language. I summarize my discussion of Landgrebe and Smith (2021)'s critique of Transformers in the table below.

Handwritten grammars

Earlier critiques of Transformers and related architectures in natural language processing focused on showing that language understanding is unlearnable from raw text (Bender & Koller, 2020), i.e., in the absence of supervision, that language models based on Transformers are uninterpretable (Boge, 2021), or that they tell us nothing about linguistic competencies (Dupre, 2021). Landgrebe and Smith (2021) argue that language understanding is unlearnable for Transformers, even with supervision. They are not interested in interpretability or the ability to distill linguistic theories of competence, merely the learning of inferential and referential semantics. This section briefly discusses the alternative proposed by Landgrebe and Smith (2021) to such deep neural learning architectures: handwritten grammars mapping sentences to logical form. I will argue that if language understanding is out of reach for deep neural network architectures, it must also be out of reach for handwritten grammars with logical form. The approach of Landgrebe and Smith (2021) is a pipeline approach. They first use a shallow form of syntactic analysis called part-of-speech tagging to induce the syntactic categories of the input words in context. The authors then rely on a 'proprietary AI-algorithm chain that uses world knowledge in the form of a dictionary of lexemes and their word forms along with associated rules, relating, for example, to the transitivity and intensionality of verbs'.
This proprietary algorithm chain maps the input to logical form, a process which 'requires world knowledge, for example about temporal succession, which is stored in the computer using ontologies'.

How would this approach to text processing or text generation be more meaningful than Transformer models? One argument that perhaps it is not runs as follows: Assume a handwritten grammar g, following the pipeline approach of Landgrebe and Smith (2021). Assume also that g 'understands' language. The Transformer architecture is Turing-complete (Pérez et al., 2019). This means that there is a translation function τ from any handwritten grammar that can be implemented as a Turing machine into an isomorphic Transformer, i.e., τ(g) = t. If g 'understands' language, so does t. So for any handwritten grammar that understands language, there exists a Transformer model that also understands language. Q.E.D.

In fact, the steps of the pipeline approach in Landgrebe and Smith (2021) have (all) been modelled by Transformer architectures. Probing experiments suggest that even moderate-sized Transformer-based language models learn similar pipelines from just doing masked language modeling at scale (Tenney et al., 2019). Transformers could also be trained specifically to simulate the pipeline approach of Landgrebe and Smith (2021). Since this form of teacher-student training (Fan et al., 2018) can be done on raw text, the Transformer model would in the limit become functionally indistinguishable from the pipeline system. For Searle (1980), none of these steps would capture linguistic meaning. For Landgrebe and Smith (2021), it seems the trouble is that you cannot have it both ways: if you think a grammar mapping sentences into logical form can capture linguistic meaning, you have to admit that the same is possible for Transformer models and other forms of deep neural networks.

Somewhat surprisingly, Landgrebe and Smith (2021) do not discuss the fact that the classical arguments of Searle and Dreyfus against the possibility of machine understanding of language were presented with such handwritten grammars in mind. I think Transformers and related neural architectures present real advantages over handwritten grammars. These advantages have nothing to do with expressivity, word-word interactions, and context-sensitivity, but with their explanatory power. Transformers can be used to make theories of learning testable, while handwritten grammars cannot. Consider, for example, the hypothesis that the semantics of directionals is not learnable from next-word prediction alone. Such a hypothesis can be falsified by training Transformer language models and seeing whether their representation of directionals is isomorphic to directional geometry; see Patel and Pavlick (2022) for details. Transformers and related architectures, in this way, provide us with practical tools for evaluating hypotheses about the learnability of linguistic phenomena.
Concluding remarks

I have argued that Transformers and related architectures seem able to learn both inferential and referential semantics. Clearly, you can do more with language than inferential and referential semantics, and some of these things are well beyond what you can ask a language model to do. If I ask you to walk like a penguin, I ask you to do something that language models cannot do. What we do with language is to many an important part of its meaning, and if so, language models learn only part of the meaning of language. Many linguists and philosophers have tried to distinguish between referential semantics and such embedded practices. Wittgenstein (1953), for example, would think of referential semantics, or the ability to point, as a non-privileged practice. While Wittgenstein does not give special attention to this 'pointing game', it has played an important role in psycholinguistics and anthropology, for example. Language models play many language games better than us, e.g., writing poetry or jokes, translating or summarizing texts, or spotting grammatical errors, but the pointing game has been the litmus test for machine understanding of language since Searle's Chinese Room, and it is widely used to probe for lexical semantics.

Language models have other limitations: you cannot encode the precise semantics of second-order quantifiers like 'most of' in vector space. For a finite set of pairs of sets, a model can learn the right inferences, e.g., that most members of A are also members of B, but only for a limited set of cases. So what do we make of a language model that can do the pointing game, as well as the other games just mentioned, but can only decide whether most members of A are also in B if A and B are sufficiently small? My answer is: well, what would we make of, say, a 14-year-old child with the same skills? If a 14-year-old child can point to the referents of Italian nouns, translate Italian sentences into another language, and summarize documents written in Italian, but can only decide whether la maggior parte delle A sono B for small A and B, would you not say this child still speaks Italian? The requirement that you can apply all words correctly in all cases is a very high bar for saying someone understands a language, just as knowing a strawberry is a nut is not generally seen as a test of one's ability to understand English. Also, recall that Landgrebe and Smith (2021) are not claiming that Transformers have insufficient levels of referential semantics. Rather, they claim Transformers have no referential semantics. In other words, any sign of referential semantics would challenge their claims.

I have other, more serious quarrels with Transformers: they are slow and costly to train, with terrible carbon footprints, and they exhibit slow inference times. They generally require GPUs, which are inaccessible in many parts of the world. The word segmentation algorithms and positional encoding schemes typically used in conjunction with the Transformer architecture are biased toward fusional (mostly Indo-European) languages. Each of these points is a reason to consider alternatives to Transformer models. The arguments put forward by Landgrebe and Smith (2021) against Transformer models, however, are problematic.
One contribution of this work was the defense of Transformers and related neural architectures against a series of false claims, i.e., that they exhibit limited expressivity, are unable to capture word-word dependencies, are not sensitive to context, and do not generalize well. Another contribution was an in-depth discussion of another claim presented by Landgrebe and Smith (2021), namely that Transformer models are incapable of understanding language, in the sense of 'latching on' to the world. I introduced a distinction between inferential and referential semantics, originally presented by Marconi (1997), making it clear that this argument only concerns referential semantics. I then pointed to a recent finding in the artificial intelligence literature: the observation that unsupervised alignment of isomorphic representations enables grounding of language models in mental representations or representations of the physical world. This observation makes referential semantics in neural networks possible, under very permissive assumptions. All that such grounding requires is learning a linear projection into the mental or physical space. This is sufficient, since language model vector spaces have been shown to be near-isomorphic to mental, perceptual, and physical spaces. Projections into such spaces can easily be learned when supervision is available, using point-set registration or graph alignment algorithms, but it has also been shown that this can even be done in the absence of supervision, e.g., with generative adversarial networks. I provided a thought experiment called the Color Radio to give some intuition for how such grounding could be obtained in practice. In Sect. 4, I addressed the hybrid pipeline approach to natural language understanding advanced by Landgrebe and Smith (2021), showing that any of the components of their pipeline could be replaced by Transformers without changing the underlying function. Finally, I discussed other limitations of modern-day language models: They are slow and costly to train, have terrible carbon footprints, exhibit slow inference, and require costly GPUs. This is all orthogonal to my discussion of Landgrebe and Smith (2021), of course. There are also obvious limitations to what you can cram into a vector, e.g., the semantics of second-order quantifiers. The question is whether this is a decisive limitation; I have argued above that it is not.
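To make the grounding-by-linear-projection idea above more tangible, here is a minimal sketch that aligns a 'language' space with a 'perceptual' space using orthogonal Procrustes from SciPy. The two spaces, their dimensionality, the anchor pairs, and the nearest-neighbour 'pointing' evaluation are all illustrative assumptions of mine, not the setup of any cited study.

```python
# Minimal sketch: ground a "language" space in a "perceptual" space with a
# linear (here orthogonal) projection, assuming the two spaces are near-isomorphic.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)

n_concepts, dim = 300, 32
perceptual = rng.normal(size=(n_concepts, dim))   # stand-in for perceptual/physical representations

# Construct a language space that is a rotated, noisy copy of the perceptual one,
# which is exactly the near-isomorphism assumption made in the text.
rotation, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
language = perceptual @ rotation + rng.normal(scale=0.05, size=(n_concepts, dim))

# Supervised case: a handful of anchor pairs suffices to learn the projection.
anchors = slice(0, 50)
R, _ = orthogonal_procrustes(language[anchors], perceptual[anchors])

grounded = language @ R   # project language vectors into the perceptual space

# Evaluate "pointing": does each projected word land nearest to its own percept?
dists = cdist(grounded, perceptual)
accuracy = float((dists.argmin(axis=1) == np.arange(n_concepts)).mean())
print("pointing accuracy after alignment:", accuracy)
```

In the unsupervised case, the anchor-based Procrustes step would be replaced by adversarial or distribution-matching alignment, as mentioned in the text; the pointing evaluation stays the same.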
7,879.2
2022-10-27T00:00:00.000
[ "Linguistics", "Computer Science", "Philosophy" ]
Bias Adjustment Methods for Analysis of a Non-randomized Controlled Trials of Right Heart Catheterization for Patients in ICU : Kaplan-Meier estimate or proportional hazards regression is commonly used directly to estimate the effect of treatment on survival time in randomized clinical studies. However, such methods usually lead to biased estimate of treatment effect in non-randomized or observational studies because the treated and untreated groups cannot be compared directly due to potential systematical difference in baseline characteristics. Researchers have developed various methods for adjusting biased estimates by balancing out confounding covariates such as matching or stratification on propensity score, inverse probability treatment weighting. However, very few studies have compared the performance of these methods. In this paper, we conducted an intensive case study to compare the performance of various bias correction methods for non-randomized studies and applied these methods to the right-heart catheterization (RHC) study to investigate the impact of RHC on the survival time of critically ill patients in the intensive care unit. Our findings suggest that, after bias adjustment procedures, RHC was associated with increased mortality. The inverse probability treatment weighting outperforms other bias adjustment methods in terms of bias, mean-squared error of the hazard ratio estimators, type I error and power. In general, a combination of these bias adjustment methods could be applied to make the estimation of the treatment effect more efficient. Introduction In randomized clinical studies, the effect of treatment on patients' survival time can be estimated by comparing treated and untreated subjects directly. In this case, Kaplan-Meier estimate or proportional hazards regression is used directly to estimate the effect of treatment on survival time. However, it is not easy to materialize a randomized study in daily life. There is an increasing number of nonrandomized studies in recent years. In an observational (or nonrandomized) study, the treated and untreated groups cannot be compared directly because they may systematically differ at baseline characteristics. For example, the patients' health condition and medical history are essential factors when doctors make a diagnosis. The treatment assignment to a patient is dependent on covariates like age, gender, health condition, and medical history, etc. As a result, the effect of medical treatment on patients' survival time may be confounded by their baseline covariates. Therefore, systematic differences in baseline characteristics between the treated and untreated groups must be considered in assessing the impact of treatment on survival time in observational studies. The propensity score plays an important role in balancing the treated and untreated subjects to make them comparable. Rosenbaum and Rubin proposed that propensity score is the conditional probability assignment to a particular treatment given a vector of observed covariates [1][2]. They indicated that adjustment for the scalar propensity score contributes to control all confounders and eliminate bias due to observed covariates. Propensity Score is a scalar function of the covariates that includes the information required to achieve the balance of distribution of baseline covariates. The most common methods based on propensity score are matching, stratification, regression adjustment, and probability weighting [3][4]. 
With the application of the propensity score, the treated and untreated patients who have similar propensity scores will have a similar distribution of observed background covariates. Therefore, the effect of treatment will be unrelated to confounders, as a result of which the treated and untreated subjects become comparable, much as they would be in a randomized study. The dataset that motivated this paper pertains to day 1 of hospitalization, and the treatment variable "swang1" is whether or not a patient received a Right Heart Catheterization (RHC), also called the Swan-Ganz catheter, on the first day on which the patient qualified for the SUPPORT study [5]. RHC is a test used to see how well the heart is pumping (how much blood it pumps per minute) and to measure the pressures in the heart and lungs. In an RHC, the doctor guides a special catheter (a small, hollow tube) to the right side of the heart and then passes the tube into the pulmonary artery. The doctor observes blood flow through the heart and measures the pressures inside the heart and lungs. A sensitivity analysis provided some evidence that patients receiving RHC had decreased survival time, but it also indicated that any unmeasured confounder would have to be somewhat strong to explain away the results. Our goal is to estimate the effect of RHC treatment on the patients' survival time after reducing the confounding bias. However, systematic differences between patients in the two groups may exist, and these differences could lead to a biased estimate of the treatment effect, which is known as the causal effect in a non-randomized study. As mentioned before, the distributions of baseline covariates between treatment 0 and treatment 1 subjects are quite different. Moreover, as we will see in the matching methods, the distributions of the propensity score in the two treatment groups are different, which reveals the systematic difference between the two groups and the problem of confounding. The remainder of the article focuses on the application and comparison of the following three methods. Section 2 introduces the matching-on-propensity-score method. Section 3 introduces the stratification-on-propensity-score method. Section 4 introduces the inverse probability of treatment weighting method. We apply each method to the Right Heart Catheterization study to compare the survival time of the RHC-treated group and the control group. The article concludes with a discussion of the choice of methods under different scenarios in Section 5. Matching on Propensity Score The propensity score is present in both randomized trials and observational studies. In randomized trials, the true propensity score is known and defined by the study design. In observational studies, true propensity scores are generally not known but can be estimated from the data [6]. The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates [1]: e(x) = Pr(Z = 1 | X = x), where the treatment indicator Z is binary, Z = 1 corresponds to the RHC treatment and Z = 0 to control, and x is the observed value of the vector of covariates [6]. Propensity scores are generally calculated using one of two methods: logistic regression or classification and regression tree analysis [6]. In practice, the propensity score is most often estimated using a logistic regression model, in which treatment status is regressed on observed baseline characteristics [7].
The estimated propensity score is the predicted probability of treatment derived from the fitted regression model [8]: logit{e(x)} = α + β'x, where the parameters α and β are estimated by maximum likelihood logistic regression. Matching is a commonly used method to select "matched" pairs on background covariates that we believe need to be controlled. Even though it seems difficult to find patients who are similar on all important covariates, especially when there are many covariates of interest, propensity score matching solves this problem by allowing us to control for as many covariates as we want simultaneously by matching on a single scalar variable [9]. Rosenbaum and Rubin introduced three techniques for constructing a matched sample: (i) nearest available matching on the estimated propensity score; (ii) Mahalanobis metric matching including the propensity score; and (iii) nearest available Mahalanobis metric matching within calipers defined by the propensity score. Therefore, once the propensity scores are estimated by the logistic regression method, we apply the nearest available matching approach to reduce the confounding bias in the RHC study. In this method, the absolute difference between the estimated propensity scores for the control and treated groups is minimized [6]. Given randomly ordered control and treated subjects, the first treated subject is selected along with a control subject whose propensity score is closest in value to it [10]. Generally, if a treated subject and a control subject have the same propensity score, the observed covariates are automatically controlled for [6]. Therefore, any differences between the treatment and control groups will be accounted for and will not be a result of the observed covariates. To confirm the effect of the propensity score matching method on reducing systematic differences, it is necessary to compare the covariates between treatment 0 and treatment 1 before and after matching. Our goal is to reduce the difference in the mean of each individual covariate between treatment 0 and treatment 1 after matching. To decide whether there is a significant difference in the mean of an individual covariate between treatment 0 and treatment 1, visualizations such as box plots and bar plots are produced first, and then a two-sample t-test is applied to compare the results statistically. Because there are 50 covariates in the dataset, it is too complicated to summarize the changes induced by matching, and according to the variable description, not all of the covariates are useful in the model; analyzing the results of matching without any variable selection may therefore be misleading. The Lasso method was tried first for variable selection in the Cox model; the LASSO can be computed via the R package glmnet [11]. However, the final results showed that 42 covariates remained in the model with nonzero coefficients, and it is not convenient to carry out a comparison across all 42 covariates. We then used stepwise selection, which provides a final model with 28 covariates. This is still not ideal, even though it yields a much simpler model, so we perform a further selection from the final 28 covariates. Based on the matching-adjusted Cox model with the selected covariates, Table 1 compares the P-values of the matching-adjusted Cox model with the full set of covariates and with the 28 covariates from the variable selection.
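As a concrete illustration of the two steps just described, estimating e(x) by logistic regression and then performing nearest available matching, here is a minimal Python sketch on synthetic data. The paper's own analysis was carried out in R (glmnet, stepwise selection); the scikit-learn/NumPy version below and all variable names and toy covariates are my assumptions, included only to show the mechanics.

```python
# Sketch: estimate propensity scores e(x) = Pr(Z = 1 | X = x) by logistic
# regression, then greedily match each treated subject to the control subject
# with the nearest estimated score (nearest available matching, without replacement).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n, p = 2000, 5
X = rng.normal(size=(n, p))                              # toy baseline covariates
true_logit = -0.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1]        # treatment depends on covariates
Z = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))       # toy treatment assignment

# Step 1: propensity scores from a logistic regression of Z on X.
ps = LogisticRegression(max_iter=1000).fit(X, Z).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-available matching on the estimated score
# (assumes there are fewer treated subjects than controls).
treated = np.flatnonzero(Z == 1)
controls = list(np.flatnonzero(Z == 0))
pairs = []
for t in treated:
    j = int(np.argmin(np.abs(np.asarray(ps[controls]) - ps[t])))  # closest remaining control
    pairs.append((t, controls.pop(j)))

matched_t = np.array([a for a, _ in pairs])
matched_c = np.array([b for _, b in pairs])
print("mean |score difference| within pairs:",
      float(np.mean(np.abs(ps[matched_t] - ps[matched_c]))))
```

Greedy matching without replacement, as sketched here, mirrors the 'nearest available' description; caliper or Mahalanobis variants would change only the distance used inside the argmin.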
The majority of the 28 covariates in the stepwise final model have smaller P-value, which means the corresponding covariates are more significant in this model. Meanwhile, the P-value of some covariates increases relatively. Therefore, those covariates whose P-value becomes smaller while are less than 0.05 before and after variable selection are reasonable to represent the most significant ones. It is more convenient to concentrate on these 9 covariates and compare the mean of them after the matching method. In order to confirm the effects of matching, first of all, we draw the boxplots and bar plots of these covariates chosen from stepwise before and after matching. Here we use the boxplot of "surv2md1", "das2d3pc" and the results of propensity score and bar plots of "hema", "chfhx", "meta", "chrpulhx", "psychhx", "dnr1Yes", "renal". Although plots are showing the approximate equivalence between treatment 0 and 1, in favor of unbiased estimate of treatment effect before matching, it is not statistically significant at the 0.05 level of significance. As a result, it is not enough to conclude the effect of matching only by the plots of covariates. Further statistical steps are necessary. To be specific, a two-sample ttest is applied here to test whether the difference of a covariate's mean in treatment 0 and treatment 1 is zero. Table 2 indicates the mean and standard deviation of covariates in subsets under treat 0 and treat 1 before matching. Among the 9 significant covariates, there are 8 covariates with P-value less than 0.05, which is sufficient evidence to reject the null hypothesis and conclude that the confounding bias exists. Similarly, the visualization and twosample t-test are conducted for relevant data after matching. It can be seen from the box plot of the PS's before and after matching that the unbalance has been reduced a lot after matching. Also, the test statistics and P-value in table 3 revealed that the differences between covariates under treatment 0 and 1 decreases, since most covariates' P-value are larger than 0.05. Even though the P-value of "survmd1" and "dnr1" is still less than 0.05, the significance becomes less with the P-value increasing much more. Since the systematic differences between the patients in treatment 0 group and treatment 1 group have been greatly reduced, the effect of treatment on survival time could be compared directly. Figure 3 is the comparison plot of the Kaplan-Meier estimates before and after matching. The log-rank test statistic is 19.35 with P-value 1.00e-05 before matching, and 23.65 with P-value 1.20E-06 after matching. In other words, the result of the treatment effect (P-value 1.00e-05) is not accurate statistically without matching adjustment. The results provided evidence that the difference of survival functions between the two groups is more significant at significance level 0.05 after propensity score matching and the patients who received RHC had lower survival time than those who did not receive RHC. Stratification on Propensity Score Stratification on propensity score can also ameliorate the confounding effects of covariates. Each observation for the subject is classified into a propensity quantile based on the propensity score [12]. According to Rosenbaum and Rubin's results, creating five strata based on a continuous variable like the propensity quantile with the stratum boundaries determined by its distribution in the exposed and the comparison group combined eliminates approximately 90% of measured confounding [13]. 
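The balance checks and survival comparison described in this passage boil down to two standard tests, sketched below on synthetic data. The use of SciPy and the lifelines package is my choice for illustration; the paper's computations were done in R, and none of the numbers below correspond to the RHC data.

```python
# Sketch: (i) a two-sample t-test for covariate balance between arms, and
# (ii) a log-rank test comparing two survival curves. Toy data only.
import numpy as np
from scipy.stats import ttest_ind
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# (i) Balance check: a covariate that differs between groups before matching.
cov_treated = rng.normal(loc=0.4, size=300)
cov_control = rng.normal(loc=0.0, size=300)
stat, pval = ttest_ind(cov_treated, cov_control)
print(f"balance t-test: t = {stat:.2f}, p = {pval:.4f}")   # small p suggests imbalance remains

# (ii) Survival comparison: exponential survival times with random censoring.
t1 = rng.exponential(scale=150.0, size=300)    # treated durations
t0 = rng.exponential(scale=200.0, size=300)    # control durations
e1 = rng.binomial(1, 0.8, size=300)            # 1 = event observed, 0 = censored
e0 = rng.binomial(1, 0.8, size=300)
res = logrank_test(t1, t0, event_observed_A=e1, event_observed_B=e0)
print(f"log-rank: chi2 = {res.test_statistic:.2f}, p = {res.p_value:.4f}")
```

Running the balance test on matched pairs only, and the log-rank test on the matched sample, reproduces the before/after comparison structure described above.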
Therefore, the patients can be assigned to five strata using the propensity score quintiles as cut-offs. Within each stratum, the treated and untreated patients will have roughly similar propensity score values and also a similar distribution of the measured baseline covariates. The effect of the treatment can be estimated by comparing the outcomes directly between subjects with treatment 0 and subjects with treatment 1 within a stratum, provided the propensity score has been estimated correctly [7]. To confirm that the systematic difference has been reduced after stratification, it is necessary to compare the covariates' means under treatment 0 and treatment 1 before and after stratification. The same problem occurs here as with matching: with 50 covariates in the dataset, it is too complex to conclude whether the stratification makes a difference, so variable selection is performed again as before. Similarly, the Lasso method was tried for variable selection, but 32 covariates remained in the final result with nonzero coefficients. Therefore, we again apply stepwise selection, aiming to obtain a simpler model, and 28 covariates are selected from the stepwise procedure with stratification. A further selection is carried out as before. Based on the stratification-adjusted Cox model with the selected covariates, Table 4 compares the P-values of the stratification-adjusted Cox model with the full set of covariates and with the 28 covariates from the variable selection. The majority of the 28 covariates in the final stepwise model have smaller P-values, which means the corresponding covariates are more significant in this model, while the P-values of some covariates increase. Therefore, those covariates whose P-values become smaller and remain below 0.05 both before and after variable selection are chosen to represent the most significant ones. It is reasonable to concentrate on these 8 covariates and compare their means after stratification. To confirm the effects of stratification, a two-sample t-test is applied to test whether the difference in a covariate's mean between treatment 0 and treatment 1 is zero, which amounts to testing whether the systematic difference in the covariates has been reduced. Table 5 shows the mean and standard deviation of the corresponding covariates in the subsets under treatment 0 and treatment 1 before stratification. All 8 covariates have P-values less than 0.05, which is sufficient evidence to reject the null hypothesis and conclude that confounding bias exists and that the stratification adjustment is necessary when evaluating the effect of treatment on survival time. Similarly, the two-sample t-test is conducted on the relevant data after stratification. The test statistics and P-values in Table 6 show that the systematic differences between covariates under treatment 0 and treatment 1 decrease, since most covariates' P-values increase and the significance of the difference in means between treatment 0 and treatment 1 decreases. Even though the P-values of some covariates are still less than 0.05, the significance becomes weaker as the P-values increase. The reason for a reported P-value of zero is that the stratified two-sample t-test function reports extremely small P-values as zero.
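A minimal sketch of the quintile stratification and within-stratum balance check might look as follows. The data frame, the single covariate, and the library choices (pandas, SciPy) are hypothetical stand-ins for illustration; the real analysis used the RHC covariates in R.

```python
# Sketch: stratify subjects into five propensity-score quintiles (pandas.qcut)
# and compare a covariate between treatment arms within each stratum.
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "ps": rng.beta(2, 3, size=n),           # stand-in for estimated propensity scores
    "treat": rng.binomial(1, 0.4, size=n),  # stand-in treatment indicator
    "age": rng.normal(60, 12, size=n),      # one toy baseline covariate
})

df["stratum"] = pd.qcut(df["ps"], q=5, labels=False)   # codes 0..4, quintile boundaries

for s, grp in df.groupby("stratum"):
    a = grp.loc[grp["treat"] == 1, "age"]
    b = grp.loc[grp["treat"] == 0, "age"]
    stat, pval = ttest_ind(a, b)
    print(f"stratum {s}: n = {len(grp):4d}, t = {stat:5.2f}, p = {pval:.3f}")
```

In the actual analysis the loop would run over each of the selected covariates, mirroring Tables 7 and 8 discussed next.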
To compare the means of the selected covariates between subjects under treatment 0 and treatment 1 more accurately, Tables 7 and 8 report the mean of each covariate in each stratum and use two-sample t-tests, respectively, to test whether there is a significant difference in the covariate means between treatment 0 and treatment 1 after stratification. Most of the P-values are larger than 0.05, so we fail to reject the null hypothesis, which illustrates that the systematic difference and the confounding bias have been reduced. Since the systematic differences between the patients in the treatment 0 group and the treatment 1 group have been greatly reduced, the effect of treatment on survival time can be compared directly. Figures 4 and 5 show the Cox proportional hazards regression results for treatment 0 and treatment 1 after stratification; it is clear that the balance of the covariates is better achieved after stratification. Figure 6 shows the comparison plots of the Kaplan-Meier estimates between treatment 0 and treatment 1 in each stratum. As we can see from the five plots, the survival time of patients after RHC treatment is relatively decreased, which leads to the same conclusion as propensity score matching. Inverse Probability of Treatment Weighting The Kaplan-Meier estimator is widely used in clinical studies to compare survival time between different treatment groups. However, if certain covariates corresponding to low survival rates are more strongly represented in one group than in another, that is, over-represented, the survival estimated by the Kaplan-Meier method from one group will appear worse than the survival estimated from the other group. Another approach for reducing confounding effects was proposed by Xie and Liu in 2005 [14]. They developed the adjusted Kaplan-Meier estimator (AKME) using the inverse probability of treatment weighting (IPTW). The estimated propensity score, the probability of being treated in a certain group conditional on a set of covariates, is used to construct the weights for the subjects. A weight is assigned to each individual as the inverse of the propensity score. For example, a subject with a higher propensity score, which is considered over-represented, is assigned a lower weight; on the other hand, subjects with a lower propensity score, considered under-represented, are given a higher weight [15]. They also proposed a weighted log-rank test for statistical comparison of the survival functions of the two groups. As with matching and stratification, we apply the IPTW method to the Right Heart Catheterization study. The propensity score of each patient is estimated using logistic regression in the same way. Then the Kaplan-Meier estimators of both the treatment group and the control group are adjusted with weights given by the inverse of the propensity score. If the propensity score is estimated correctly, the sampling bias will be removed after the weighting adjustment. Figure 7 shows the Kaplan-Meier estimates of the survival functions of the two groups before and after the weighting adjustment. We can see from the plot that the survival curve of the subjects with treatment 1 is lower than that of the subjects with treatment 0, and this becomes more obvious after adjustment. We also perform the log-rank test for statistical comparison of the survival functions. Table 9 shows the comparison of the hazard ratio estimates with and without the IPTW procedure.
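The weighting step itself is compact enough to sketch. Below, the IPTW weights are written in the standard form 1/e(x) for treated subjects and 1/(1 - e(x)) for controls, which is how this adjustment is usually implemented; the synthetic data and the use of the lifelines package are my assumptions, not the paper's R workflow.

```python
# Sketch: IPTW-adjusted Kaplan-Meier curves on toy data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 2000
ps = rng.uniform(0.1, 0.9, size=n)            # stand-in for estimated propensity scores
treat = rng.binomial(1, ps)                   # confounded treatment assignment
time = rng.exponential(scale=np.where(treat == 1, 120.0, 160.0))
event = rng.binomial(1, 0.85, size=n)         # 1 = event observed, 0 = censored

# Standard IPTW weights: 1/e(x) for treated, 1/(1 - e(x)) for controls.
weights = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))

km_treated, km_control = KaplanMeierFitter(), KaplanMeierFitter()
km_treated.fit(time[treat == 1], event_observed=event[treat == 1],
               weights=weights[treat == 1], label="RHC (weighted)")
km_control.fit(time[treat == 0], event_observed=event[treat == 0],
               weights=weights[treat == 0], label="no RHC (weighted)")

print(km_treated.median_survival_time_, km_control.median_survival_time_)
```

Feeding the same weights into a Cox model (for instance via a weights column) would give the weighted hazard ratio of the kind reported alongside Table 9.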
The log-rank test statistic without weighting is 19.35 with P-value 1.00E-05, while the weighted log-rank test statistic is 75.45 with a P-value less than 2.00e-16. We conclude that the difference in survival functions between the two groups is more significant at significance level 0.05 after weighting with the inverse of the propensity score. Moreover, the plot shows that the survival time of subjects with treatment 1, who received the RHC, tends to be lower than the survival time of those not receiving RHC. Discussions and Conclusions In this paper, we discussed three bias adjustment methods for causal inference in non-randomized clinical trials. According to the application results from three bias adjustment methods on the Right Heart Catheterization study, we conclude from the Cox proportional-hazards regression that patients receiving RHC had decreased survival time. Moreover, the difference in survival time between the two groups becomes more significant at significance level 0.05 after reducing the confounding bias. Matching on propensity score is a good method for removing the bias between the treated group and the control group on the background covariates. It is preferred when the sample size of the control group is much large than the sample size of the treatment group. Stratification is preferred when the sample size is large enough since the estimation would be unreliable if there are not enough patients in each stratum. The IPTW method showed better performance in general. One may consider matching or stratification when the control group variance is much larger than the variance of the treatment group. Overall, a combination of the methods could be applied to make the estimation of the treatment effect more efficient.
4,855.2
2021-07-19T00:00:00.000
[ "Mathematics" ]
A Secured Intrusion Detection System for Mobile Edge Computing : With the proliferation of mobile devices and the increasing demand for low-latency and high-throughput applications, mobile edge computing (MEC) has emerged as a promising paradigm to offload computational tasks to the network edge. However, the dynamic and resource-constrained nature of MEC environments introduces new challenges, particularly in the realm of security. In this context, intrusion detection becomes crucial to safeguard the integrity and confidentiality of sensitive data processed at the edge. This paper presents a novel Secured Edge Computing Intrusion Detection System (SEC-IDS) tailored for MEC environments. The proposed SEC-IDS framework integrates both signature-based and anomaly-based detection mechanisms to enhance the accuracy and adaptability of intrusion detection. Leveraging edge computing resources, the framework distributes detection tasks closer to the data source, thereby reducing latency and improving real-time responsiveness. To validate the effectiveness of the proposed SEC-IDS framework, extensive experiments were conducted in a simulated MEC environment. The results demonstrate superior detection rates compared to traditional centralized approaches, highlighting the efficiency and scalability of the proposed solution. Furthermore, the framework exhibits resilience to resource constraints commonly encountered in edge computing environments. Introduction The rapid proliferation of mobile computing has revolutionized the way users interact with information technology, ushering in an era of unprecedented connectivity and pervasive computing.With the advent of resource-constrained devices and the burgeoning demand for low-latency applications, mobile edge computing (MEC) has emerged as a paradigm-shifting technology, pushing computational capabilities closer to end-users [1][2][3].While MEC offers numerous benefits, its unique characteristics also pose distinct security challenges, necessitating innovative solutions to safeguard the integrity and confidentiality of data processed at the edge.In this context, the focus of this paper is on the development and exploration of a cutting-edge solution: a secured edge-based intrusion detection framework tailored explicitly for mobile computing environments. 
Securing intrusion detection systems (IDSs) in the context of mobile edge computing (MEC) introduces a myriad of challenges and considerations.The convergence of mobile devices with edge computing resources creates a dynamic environment where traditional security measures may fall short.One notable concern is the increased attack surface due to the distributed nature of MEC, making it imperative to safeguard communication channels and data exchanges.Mobile devices, being inherently vulnerable to diverse cyber threats, amplify the need for robust intrusion detection mechanisms.Additionally, the reliance on wireless communication within MEC introduces potential vulnerabilities, emphasizing the necessity for secure transmission protocols and encryption.The integration of edge resources in processing and storing sensitive information calls for stringent access controls and authentication mechanisms to thwart unauthorized access.Furthermore, the seamless integration of signature-based and anomaly-based detection in MEC's IDS introduces • This paper proposes a secured edge computing-based intrusion detection system (SEC- IDS) approach to intrusion detection, acknowledging the significance of real-time responsiveness and low-latency decision-making in mobile computing scenarios. • By distributing intrusion detection tasks closer to the data source, the framework aims to reduce the impact of latency, enhance the scalability of the system, and improve overall network security. • Furthermore, our proposed framework incorporates a hybrid detection approach, combining signature-based and anomaly-based techniques.This amalgamation enables the system to detect both known attack patterns and previously unseen threats, thereby providing a comprehensive defense against a diverse range of cyber threats.• Additionally, a dedicated secure communication layer is integrated into the framework to mitigate potential attacks on the intrusion detection system itself, ensuring the overall robustness of the proposed solution. Through this research, we strive to contribute to the ongoing discourse on securing mobile computing environments, emphasizing the critical role that intrusion detection plays in fortifying the integrity and confidentiality of data processed at the edge.The subsequent sections will delve into the intricacies of our secured edge-based intrusion detection framework, its design principles, implementation details, and comprehensive evaluation, shedding light on its efficacy in addressing the evolving security challenges within the realm of mobile computing. The rest of the paper is organized in such a way that Section 2 describes the comprehensive literature background and related state-of-the-art methods.Section 3 explains the types of attacks on mobile edge computing discussed in this study.Further, Section 4 presents the SEC-IDS proposed framework in detail.Section 5 discusses the implementation, results, and comparative analysis with the existing methods.Lastly, we conclude our study in Section 6. Background In this section, we introduce the background studies of used technologies for the proposed study.Further, these technologies are evaluated as well. A. 
Edge Computing The rapid evolution of computing paradigms, coupled with the pervasive integration of Internet of Things (IoT) devices [5], has led to a paradigm shift in network architecture.Edge computing, with its emphasis on decentralized data processing at the network periphery, has emerged as a transformative approach to address the challenges posed by the massive influx of data and the demand for low-latency, high-throughput applications.However, the distributed nature of edge computing introduces new security concerns, necessitating innovative solutions to safeguard critical data and infrastructure.In the context of edge computing, the security architecture must evolve to address the unique characteristics and challenges associated with this decentralized paradigm [6].Traditional security models, primarily designed for centralized architectures, may prove insufficient to protect the diverse and dynamic edge computing environments.Intrusion detection systems (IDSs) play a pivotal role in identifying and mitigating potential threats, ensuring the integrity, confidentiality, and availability of data and services. B. IDS (Intrusion Detection System) Intrusion detection systems play a pivotal role in identifying and mitigating cybersecurity threats, offering organizations a proactive defense mechanism.However, understanding their limitations is paramount in designing a comprehensive security strategy that addresses the evolving nature of cyber threats [7].As cybersecurity landscapes continue to advance, the need for innovative approaches and the integration of complementary technologies becomes imperative to enhance the overall resilience of network defenses.Edge-based IDS security architecture represents a crucial advancement in the domain of cybersecurity, tailoring intrusion detection mechanisms to the specific requirements and constraints of edge computing environments.Unlike conventional IDS that operate within centralized data centers, edge-based IDS is strategically positioned at the network periphery, closer to the data sources and endpoints.This proximity not only reduces latency but also enables timely detection and response to security incidents, a critical consideration in the context of emerging applications such as autonomous vehicles, smart cities, and industrial IoT.The architecture of edge-based IDS encompasses a range of components and functionalities designed to fortify the security posture of edge computing environments.This includes the deployment of intrusion detection sensors, distributed detection engines, and secure communication protocols [8].The dynamic and resource-constrained nature of edge devices necessitates the optimization of detection algorithms and the efficient utilization of computing resources.Consequently, the security architecture must strike a delicate balance between detection accuracy and minimal impact on system performance. 
Related Work The fundamental idea behind the Internet of Things (IoT) centers on the proliferation of intelligent nodes seamlessly integrated into our daily social interactions [9].This underscores the imperative for cutting-edge intrusion detection methods specifically tailored to address the unique challenges posed by IoT and EDGE computing networks, emphasizing the importance of adopting approaches grounded in artificial intelligence.In the realm of IoT, digital devices interconnected via the internet aim to establish connections for individuals through smart IoT applications, resulting in network-distributed environments characterized by limited power, storage, and memory capacities. Intrusion detection systems (IDSs) assume a critical role in identifying and responding to intrusive actions and behaviors, prompting administrators to take automated actions [10].Employing signature methods, IDSs detect intrusions by comparing signatures to predefined intrusive events stored in the database [11].While this ensures swift detection and diminishes false alarms, a significant drawback is evident: only known intrusions can be identified.Anomaly detection treats all intrusive activities as anomalous, flagging any activity deviating from standard treatment as a potential intrusion.Anomaly-based detection offers a substantial advantage in detecting zero-day attacks and variations of known attacks.Numerous existing approaches leverage traditional machine learning environments for intrusion detection.Robust anomaly detection methods utilizing artificial neural networks (ANN) and deep learning surpass the limitations of conventional approaches [12][13][14][15].The adaptability of ANN features renders them applicable across diverse domains, with a specific focus on enhancing intrusion detection.These advanced approaches prove immensely beneficial in the realms of modern computing and EDGE computing. In [16], a deep belief network tailored for the Edge of Things (EoT) is presented, offering the capability to detect intrusive activities within the EoT network.The proposed framework comprises modules for data collection, feature extraction, and classification. However, the computational demands and associated costs of this model are notably high.Addressing the critical security concerns in the Internet of Things (IoT) network, ref. [17] introduces a robust intrusion detection system (IDS) incorporating a multi-agent system, blockchain, and deep learning algorithms.While this approach demonstrates high efficiency, the amalgamation of three diverse techniques introduces complexity and increases response time. For mobile edge computing, ref. [10] proposes a network IDS that captures tcpdump packets, extracts and analyzes features, and forwards legitimate packets into the network.The model employs a topic model to learn normal behavioral patterns, but its detection accuracy is compromised when new types of packets enter the network.In addition, ref. [18] introduces a data-driven mimicry and game theory-based IDS for edge computing networks, investigating new attacks based on game income and balance points.Efforts are made to reduce the IDS cost.Also, ref. [19] suggests a traffic inspection and classificationbased distributed attack model for IoT applications, leveraging the flexibility of cloudbased architectures with edge computing.However, relying on a traffic classification-based mechanism may yield inaccurate results with new network traffic. 
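Since the signature-versus-anomaly distinction drawn in this section recurs throughout the paper, a toy sketch may help fix ideas. Everything below is hypothetical: the byte-pattern signatures, the three traffic features, and the use of scikit-learn's IsolationForest as the anomaly detector are my illustrative choices, not components of any of the systems cited above.

```python
# Toy hybrid IDS: a packet is flagged if it matches a known signature (fast,
# low false-alarm, but only catches known attacks) OR if its feature vector is
# scored anomalous relative to a model of normal traffic (catches novel attacks).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Known-bad signatures (hypothetical byte patterns).
SIGNATURES = {b"\x90\x90\x90\x90", b"' OR 1=1 --"}

def signature_match(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

# Train the anomaly detector on features of benign traffic only
# (e.g., packet size, inter-arrival time, destination-port entropy).
normal_features = rng.normal(loc=[500.0, 0.05, 2.0], scale=[80.0, 0.01, 0.3], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_features)

def inspect(payload: bytes, features: np.ndarray) -> str:
    if signature_match(payload):
        return "alert: known signature"
    if detector.predict(features.reshape(1, -1))[0] == -1:   # -1 means anomalous
        return "alert: anomalous behavior"
    return "pass"

print(inspect(b"GET /index.html", np.array([480.0, 0.05, 2.1])))       # expected: pass
print(inspect(b"payload ' OR 1=1 --", np.array([480.0, 0.05, 2.1])))   # known signature
print(inspect(b"GET /index.html", np.array([9000.0, 0.50, 6.0])))      # anomalous behavior
```

The trade-off described in the text is visible here: the signature branch never fires on novel payloads, while the anomaly branch can, at the cost of a tunable false-alarm rate (the contamination parameter).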
Further, authors in [20] proposed the ZBIDS model, a security framework designed to enhance network protection by logically dividing the network into distinct zones, each with specific security requirements.This hierarchical architecture allows for tailored intrusion detection mechanisms in each zone, ensuring a more effective and flexible approach to network security.ZIDS employs pattern recognition and signature-based methods to identify known attack patterns or anomalies in network behavior.Each zone is equipped with zone-specific intrusion detection modules and customized rulesets, enabling targeted threat detection.The system generates real-time alerts upon detecting suspicious activities, triggering automated responses based on the severity of the threat.Security policies are defined for each zone, guiding the acceptable and unacceptable activities within that segment.ZIDS enforces these policies to ensure compliance with predefined rules and regulations.The model maintains comprehensive logs of network activities and detected intrusions, providing detailed reports for post-incident analysis and continuous security improvement.The ZIDS model is scalable, making it adaptable to diverse network environments and varying security needs.Overall, ZIDS offers a robust and hierarchical approach to intrusion detection, enhancing the security posture of networked systems. Similarly, authors in [21] proposed the EEACK-IDS (enhanced energy-aware clustering and key management-based intrusion detection system) model as an innovative intrusion detection framework designed for wireless sensor networks (WSNs).It combines energy-efficient clustering and key management techniques to enhance the overall security and energy efficiency of WSNs.EEACK-IDS employs an energy-efficient clustering approach to organize sensor nodes into clusters, minimizing energy consumption and prolonging the network's operational lifetime.The model incorporates robust key management mechanisms to secure communication within and between clusters.Key distribution and updating strategies enhance the resilience of the network against potential attacks.EEACK-IDS integrates an intrusion detection system that continuously monitors network activities to identify and respond to potential security threats.The model introduces enhanced security measures to protect against various types of attacks, including data tampering, eavesdropping, and node compromise.EEACK-IDS undergoes performance evaluation to assess its effectiveness in terms of intrusion detection accuracy, energy consumption, and network lifetime.The model aims to achieve a balance between security and energy efficiency, making it suitable for resource-constrained WSNs. 
Another security-related IDS model, SAZIDS (smart ant colony-based zone intrusion detection system), was proposed in [22].The model is an innovative intrusion detection framework designed for wireless sensor networks (WSNs).It leverages the principles of ant colony optimization to create an intelligent and adaptive system for detecting intrusions in WSNs.The model organizes the WSN into distinct zones, each with its own set of security requirements.Ant agents patrol these zones, monitoring network activities and identifying potential intrusions.SAZIDS prioritizes energy efficiency, a critical consideration for resource-constrained WSNs.The intelligent ant agents optimize their routes and activities to minimize energy consumption while maintaining effective intrusion detection.The decentralized nature of SAZIDS enhances its scalability and resilience [23].Ant agents operate autonomously, contributing to the robustness of the intrusion detection system.SAZIDS undergoes performance evaluation to assess its effectiveness in terms of intrusion detection accuracy, energy efficiency, and adaptability to changing network conditions. Types of Attacks on Mobile Edge Computing Routing Information Protocol (RIP) [24] is a dynamic routing protocol commonly used within computer networks to facilitate the exchange of routing information between routers.While RIP is a widely adopted protocol, its simplicity can make it vulnerable to various types of attacks. Route Flapping: In this attack [25], a malicious actor advertises false or poisoned routing information to routers in the network.The attacker may advertise unreachable or undesirable routes, leading routers to make incorrect routing decisions.Route poisoning can disrupt normal network operations by causing routers to forward traffic along incorrect paths, leading to connectivity issues and potential data interception.Route flapping involves continuously and rapidly advertising and withdrawing routes.This activity can consume network resources and cause instability in the routing tables of neighboring routers.Route flapping can lead to network congestion, increased bandwidth usage, and potential service disruptions as routers struggle to adapt to frequent changes in routing information. Denial of Service (DoS): A Denial-of-Service attack [26] on RIP can involve overwhelming the RIP routers with excessive traffic or malformed packets, causing them to become unresponsive or leading to degraded performance.A successful DoS attack can result in a loss of network connectivity, rendering the RIP routers incapable of providing routing services and potentially disrupting overall network functionality. Spoofing Attacks: [27] Malicious entities may attempt to spoof RIP packets by forging the source address to appear as a trusted router within the network.This can allow the attacker to inject false routing information.Spoofed RIP packets can mislead routers into accepting unauthorized routing updates, potentially leading to traffic interception, rerouting, or other security compromises.Table 1 presents an overview of all these attacks as follows. Route Flapping Route flapping is a network instability issue where a route alternates between available and unavailable states in a rapid and repetitive manner. Denial of Service (DoS) A Denial-of-Service attack aims to disrupt the normal functioning of a system or network by overwhelming it with a flood of traffic, rendering it incapable of providing routing services. 
Spoofing Attacks Spoofing attacks involve the impersonation of a legitimate entity or source by falsifying information. Unauthorized Access Unauthorized access refers to gaining entry or privileges to a system or network without proper authorization.Attackers exploit vulnerabilities to bypass security mechanisms and access sensitive information or resources. Malware Injection Attack A malware injection attack involves inserting malicious code or software into a system or application.This code can compromise the integrity of the system, steal sensitive information, or perform unauthorized actions. Unauthorized Access: Unauthorized access [28] to routers running RIP can result in an attacker gaining control over the routing tables and configurations.This can lead to the manipulation of routing information.Unauthorized access allows attackers to modify routing tables, redirect traffic, or cause network outages, compromising the overall integrity and security of the network. Malware Injection Attack: A malware injection attack [29][30][31][32], also known as a code injection attack, is a type of cybersecurity threat where malicious code is inserted into a legitimate application or system with the intent of compromising its functionality, stealing sensitive information, or gaining unauthorized access.This form of attack exploits vulnerabilities in software, allowing the attacker to execute arbitrary code and manipulate the targeted system for malicious purposes. Proposed SEC-IDS Framework In the evolving landscape of ubiquitous connectivity, the increasing prominence of mobile edge computing (MEC) underscores the critical need for robust security measures within edge environments.This paper addresses this imperative by presenting a groundbreaking intrusion detection system (IDS) framework tailored explicitly for MEC.The SEC-IDS framework stands out by leveraging a certificate authority (CA) infrastructure, introducing a novel approach to enhance the security landscape of mobile edge networks.At its core, the framework employs CA principles to validate and authenticate communication channels between edge devices and services operating within the MEC ecosystem.By integrating this CA-based approach, the SEC-IDS not only establishes a foundation for trusted communication but also reinforces the overall integrity and confidentiality of data exchanged at the edge.This innovative framework represents a significant stride in fortifying the security posture of MEC environments, offering a reliable and scalable solution to address the unique challenges posed by the dynamic and distributed nature of edge computing. Figure 1 presents the proposed CA-based SEC-IDS for edge computing. 
The key components of the framework include: Certificate Management System: A robust system for issuing, distributing, and managing digital certificates for edge devices within the MEC environment. Certificates play a pivotal role in establishing secure communication channels. Equation (1) defines the verification function used to check digital signatures: Verify(cd, p), (1) where cd denotes a digital certificate and p a network packet whose origin and integrity are being checked. In the proposed model for mobile edge computing (MEC) with the SEC-IDS (Secured Edge Computing Intrusion Detection System) framework, the certificate management system (CMS) plays a crucial role in enhancing the security and trustworthiness of communication channels within the MEC ecosystem. The CMS serves as the central component responsible for the issuance, distribution, renewal, and revocation of digital certificates, adding an additional layer of authentication and validation to the communication process. The primary functions of the certificate management system in the proposed model include: • Certificate Issuance: The CMS generates digital certificates for entities within the MEC environment, such as edge devices and services. These certificates serve as cryptographic credentials, attesting to the authenticity and legitimacy of the entities involved in communication. • Public Key Distribution: The CMS facilitates the distribution of public keys associated with the issued certificates. This is essential for enabling secure communication through encryption and ensuring that only authorized entities can decrypt and access sensitive information. • Certificate Renewal and Revocation: The CMS manages the lifecycle of digital certificates, overseeing their renewal to maintain up-to-date cryptographic credentials. Additionally, it handles the revocation of certificates in cases of compromised security or changes in entity authorization status, thereby promptly mitigating security risks. • Authentication and Trust Establishment: By relying on the CA infrastructure, the CMS contributes to the establishment of a trusted communication channel within the MEC ecosystem. Digital certificates issued and managed by the CMS serve as trusted indicators, allowing entities to verify the authenticity of their counterparts before engaging in data exchange. • Integrity and Confidentiality Assurance: Through the issuance and verification of digital signatures, the CMS ensures the integrity of data transmitted within the MEC environment. It also plays a vital role in maintaining the confidentiality of communication by managing the encryption keys associated with the certificates. • Policy Enforcement: The CMS enforces security policies related to certificate usage, ensuring that entities adhere to predefined security standards and access controls. This contributes to a consistent and well-regulated security posture within the MEC infrastructure. Intrusion Detection Module: An advanced IDS module that monitors network traffic, analyzes communication patterns, and detects anomalous behavior within the MEC infrastructure. The IDS is designed to identify potential security threats and trigger alerts for timely response.
Behavioral Analytics: Integration of behavioral analytics and machine learning algorithms to enhance the detection capabilities of the IDS. This allows the system to adapt and evolve, identifying both known and novel intrusion patterns. Real-time Response Mechanism: A real-time response mechanism that allows the IDS to take immediate action upon detecting suspicious activities. This may include isolating compromised devices, blocking malicious communication, or triggering notifications to administrators. The proposed CA-based framework addresses the unique security challenges of MEC environments by providing a trusted foundation for communication and implementing proactive intrusion detection measures. By combining the principles of CA with state-of-the-art intrusion detection technologies, the proposed framework aims to establish a secure and resilient MEC ecosystem, safeguarding critical data and services at the network edge. The step-by-step procedure of the proposed CA-IDS approach is presented in Algorithm 1 as follows. Initialization: Initialize the certificate authority (CA) and intrusion detection system (IDS) components. 6. Continuously repeat steps 3, 4, and 5 to adapt to evolving network conditions. Security analysis of the proposed SEC-IDS framework Let C be the set of digital certificates issued by the CA, T be the set of network traffic data, and Verify(cd, p) be the certificate verification function for network packet p using certificate cd. The lemma states that if the verification process is successful for all network packets in T based on the digital certificates in C, then the communication between devices in the MEC network is considered authenticated. Mathematically, the lemma can be expressed as: ∀p ∈ T, ∀cd ∈ C: Verify(cd, p) = True ⟹ Communication is Authenticated. In this lemma, the quantifier ∀ denotes universal quantification, stating that the verification condition holds for all network packets and certificates. The implication (⟹) indicates that if the verification process is true for all combinations of packets and certificates, then the communication is authenticated in the MEC network. This lemma captures the essential property of the algorithm, emphasizing the importance of successful certificate verification for secure communication.
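The verification function Verify(cd, p) in the lemma above can be made concrete with a short sketch. The paper does not prescribe an implementation, so the following assumes RSA keys, PKCS#1 v1.5 padding, SHA-256, a self-signed certificate standing in for one issued by the CA, and Python's cryptography package; treat it as one possible reading, not the authors' code.

```python
# Minimal sketch of Verify(cd, p): check a packet's signature against the public
# key carried in a device certificate. Requires the 'cryptography' package.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# --- Setup: a device key pair and a (self-signed) certificate 'cd' ---
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"edge-device-01")])
cd = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed here; the CA would sign in SEC-IDS
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(device_key, hashes.SHA256())
)

# --- A "packet" p consisting of a payload and the sender's signature over it ---
payload = b"sensor-reading:42"
signature = device_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

def verify(cd: x509.Certificate, payload: bytes, signature: bytes) -> bool:
    """Return True iff the signature on the payload checks out under cd's public key."""
    try:
        cd.public_key().verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(verify(cd, payload, signature))                   # True
print(verify(cd, b"tampered-payload", signature))       # False
```

A deployment-grade Verify would additionally check that cd chains to the CA root, is within its validity period, and has not been revoked, exactly the lifecycle duties assigned to the certificate management system above.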
Implementation and Results To assess the efficacy and functionality of the proposed framework, we employed NS-2 (Network Simulator 2) as the chosen tool for implementation. NS-2 is a versatile and widely adopted discrete event network simulator that provides a platform for the creation and evaluation of complex network scenarios. In this context, we specifically utilized NS-2 to implement a certificate authority (CA)-based intrusion detection system (IDS) tailored for mobile edge computing (MEC) environments. The utilization of NS-2 allows for the simulation and thorough examination of both security and performance aspects within a controlled and replicable environment. This choice of simulation tool facilitates the modeling of diverse network conditions and scenarios, enabling comprehensive testing of the CA-based IDS for MEC. Through NS-2, we can emulate various intrusion scenarios, assess the system's responsiveness to security threats, and evaluate its overall performance under different conditions. The use of NS-2 ensures a robust and versatile platform for the validation and refinement of the proposed framework, contributing to a more thorough understanding of its capabilities and limitations in the context of MEC security. The parameters in Table 2 were used during the evaluation. Figure 2 demonstrates the superior performance of the proposed SEC-IDS system in accurately identifying attacks when compared to existing detection strategies like ZBIDS, EEACK-IDS, and SAZIDS. SEC-IDS shows a 3.45% improvement by delivering the content with a ratio of 900 in 3 ms, which is a large improvement over SAZIDS, a significant 12.14% enhancement over the current ZBIDS strategy, and an 8.28% higher detection rate than EEACK-IDS. Similarly, we also evaluated the detection rate and compared it with existing approaches, as shown in Figure 3. Figure 4 displays the assessment of the false alarm rate. The proposed SEC-IDS demonstrates its effectiveness in reducing the false alarm rate when contrasted with established methods like ZBIDS, EEACK-IDS, and SAZIDS. SEC-IDS shows a notable 20.18% improvement over SAZIDS, a significant 16% enhancement over EEACK-IDS, and an overall performance improvement of 9.95% by reducing the FAR to 1.2, which is negligible relative to the compared strategies.
The intrusion detection system (IDS) designed for edge computing and leveraging a certificate authority introduces a robust security framework. This system capitalizes on certificate authority mechanisms to enhance the security posture of edge computing environments. The use of certificates ensures the authenticity and trustworthiness of entities within the edge network. By employing certificate-based validation, the IDS can effectively identify and respond to potential intrusions, safeguarding the integrity and confidentiality of edge computing resources. This approach establishes a secure foundation for edge computing operations, addressing the unique security challenges posed by decentralized and distributed computing environments. Further, the proposed IDS model has some pros and cons as follows: Pros of the proposed SEC-IDS model: Comprehensive Detection Mechanisms: The integration of both signature-based and anomaly-based detection mechanisms enhances the overall effectiveness of intrusion detection. This comprehensive approach allows the system to detect known attack patterns as well as abnormal behaviors that may indicate novel threats. Adaptability: The SEC-IDS framework is designed to be adaptable to evolving threats. The combination of signature-based and anomaly-based detection mechanisms ensures flexibility in identifying both known and unknown intrusion attempts, making it resilient to emerging attack vectors. Edge Computing Utilization: Leveraging edge computing resources is a significant advantage. By distributing detection tasks closer to the data source within the MEC environment, the framework minimizes latency. This not only improves the real-time responsiveness of the intrusion detection system but also optimizes resource utilization.
Reduced Latency: The distribution of detection tasks at the edge reduces latency in the intrusion detection process. This is crucial for MEC environments, where low latency is essential for ensuring timely responses to security threats. Realistic Validation: The extensive experiments conducted in a simulated MEC environment provide a realistic validation of the SEC-IDS framework. Simulating MEC conditions allows for a controlled and replicable assessment of its performance under various scenarios. Cons of the proposed SEC-IDS model: Simulation Limitations: While the simulated MEC environment offers controlled experiments, it may not fully replicate the complexities and nuances of a live MEC deployment. Real-world conditions, such as network variability and dynamic user behavior, could introduce factors not accounted for in the simulation. Resource Intensiveness: Implementing both signature-based and anomaly-based detection mechanisms may demand significant computational resources. In resource-constrained MEC environments, this could potentially lead to performance bottlenecks or increased energy consumption. Dependency on Edge Infrastructure: The effectiveness of the SEC-IDS framework is contingent on the availability and reliability of edge computing resources. In scenarios where the edge infrastructure is limited or unstable, the system's performance may be compromised. Ongoing Maintenance: To stay effective against evolving threats, the SEC-IDS framework may require regular updates and maintenance. Keeping signature databases up to date and refining anomaly detection models could introduce operational overhead. Limited Generalization: The framework's performance may be optimized for the specific conditions of the simulated MEC environment, and its generalization to diverse MEC deployments may require additional validation and customization. Conclusions This study has illuminated a multitude of challenges in intrusion detection that disrupt the operation of mobile edge networks and threaten availability, integrity, and confidentiality. Conventional firewalls and established machine learning-based methods struggle to discern new or unfamiliar intrusive traffic. In response to these challenges, this paper introduces the Secured Edge Computing Intrusion Detection System (SEC-IDS), designed for mobile edge computing environments. The proposed framework incorporates distinct detection modules geared towards identifying unknown or novel attacks with a minimal false alarm rate (FAR) within the mobile edge infrastructure. The implementation results underscore the effectiveness of the SEC-IDS framework, which achieves an accuracy of 95.25% and an exceptionally low FAR of 1.1%. In contrast, ZIDS demonstrates an accuracy of 86.04% with a FAR of 8.4%, while SAIDS attains an accuracy of 86.94% with a FAR of 2.1%. Compared with prior works, these findings signify a substantial enhancement in performance, with accuracy elevated by up to 10.78% and FAR reduced by up to 93%. The paper further presents a security analysis grounded in game-theory principles. Figure 1. CA-based secured IDS for mobile-based edge computing.
Algorithm 1: CA-Based IDS for Mobile Edge Computing. Inputs: D - set of all edge devices in the MEC environment; C - set of digital certificates; T - set of network traffic data; M - machine learning model for anomaly detection; A - set of intrusion alerts. Output: C - set of digital certificates issued by the CA; A - set of alerts indicating potential intrusions; Act - actions taken based on the alerts. Process: Figure 2. Data delivery ratio of the proposed model compared with existing models. Figure 3. Intrusion detection rate of the proposed model compared with existing models. Figure 4. False alarm rate of the proposed model compared with existing models. Figure 5 illustrates the assessment of the link change rate, revealing that SEC-IDS adeptly reduces this rate compared with the established methods ZBIDS, EEACK-IDS, and SAZIDS: SEC-IDS shows a significant 16.16% improvement over SAZIDS, a substantial 45.8% enhancement over ZBIDS, and a notable 44% better performance than EEACK-IDS. Figure 5. Link change rate of the proposed model compared with existing models. Funding: King Abdulaziz University (DSR) & Ministry of Education: IFPDP-269-22. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Data is contained within the article.
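Algorithm 1 above lists the inputs and outputs of the CA-based IDS, but its process steps are not reproduced in this text. The sketch below is therefore only an assumed interpretation of how certificate-based validation and the anomaly-detection model M could be combined to produce the alert set A and the action set Act; every class and function name here is hypothetical and not taken from the paper.

```python
# Assumed interpretation of Algorithm 1 (the paper's process steps are not
# listed here); all identifiers below are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class EdgeDevice:
    device_id: str
    certificate: Optional[str] = None  # certificate issued by the CA (set C)

@dataclass
class TrafficRecord:
    source_id: str
    features: List[float] = field(default_factory=list)  # input to model M

def issue_certificate(ca_registry: Dict[str, str], device: EdgeDevice) -> None:
    """CA issues a certificate for an edge device and records it."""
    cert = f"CERT::{device.device_id}"
    ca_registry[device.device_id] = cert
    device.certificate = cert

def certificate_is_valid(ca_registry: Dict[str, str], device: EdgeDevice) -> bool:
    """Certificate-based validation against the CA's registry."""
    return (device.certificate is not None
            and ca_registry.get(device.device_id) == device.certificate)

def anomaly_score(record: TrafficRecord) -> float:
    """Stand-in for the trained anomaly model M; returns a score in [0, 1]."""
    return min(1.0, sum(record.features) / (len(record.features) or 1))

def run_ids(ca_registry: Dict[str, str],
            devices: Dict[str, EdgeDevice],
            traffic: List[TrafficRecord],
            threshold: float = 0.8) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
    """Produce the alert set A and the action set Act from traffic T."""
    alerts: List[Tuple[str, str]] = []
    actions: List[Tuple[str, str]] = []
    for rec in traffic:
        device = devices.get(rec.source_id)
        if device is None or not certificate_is_valid(ca_registry, device):
            alerts.append((rec.source_id, "invalid-certificate"))
            actions.append((rec.source_id, "isolate-device"))
        elif anomaly_score(rec) >= threshold:
            alerts.append((rec.source_id, "anomalous-traffic"))
            actions.append((rec.source_id, "raise-alert"))
    return alerts, actions
```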
Table 1. Overview of possible attacks for IDS in mobile edge computing. Table 2. Simulation parameters and perspective values.
7,445.2
2024-02-09T00:00:00.000
[ "Computer Science", "Engineering" ]
Reflections from the Application of Different Type of Activities : Special Training Methods Course * Abstract: The aim of this study is to reveal the benefits gained from the "Special Training Methods II" course and the problems prospective mathematics teachers encountered with it. The case study method was used in the study. The participants in the study were 34 prospective mathematics teachers studying at a Primary School Mathematics Education Department. The data collection tools were a form composed of open-ended questions and semi-structured interviews. Descriptive analysis of the qualitative data was carried out. In the "Special Training Methods II" course, beginning in the spring term of the 2015-2016 academic year, teaching activities on "multiple intelligences", "discovery", "group work", "problem-solving", "history of mathematics" and "computer-assisted teaching" were developed and implemented. It was concluded that these activities helped students like mathematics more, understand the importance of helping each other and cooperation, and have more enjoyable lessons, as well as aiding their cognitive, social and emotional development. It was also found that through these activities the participants improved their belief in themselves and increased their confidence regarding teaching mathematics. The participants also faced some difficulties during the application process. They mostly mentioned that preparing worksheets was time-consuming, finding a school to perform the activity was hard and students were reluctant. Introduction Activity may be defined as a planned task which aims to provide students with the gains in the curriculum (Bransford, Brown and Cocking, 2000) and as applications which allow students to use mathematical expressions and symbols, create models and engage in reasoning and abstraction (Baki, 2008). In other words, an activity may be defined as a task which attracts the interest of the student, is a part of everyday life and puts the student in the center (Bukova-Guzel & Alkan, 2005). An activity should be interesting and educational, selected from daily events, have a defined purpose, enable students to interact and collaborate, allow students to construct their knowledge by using their previous experiences and preliminary learning, make efficient use of time, motivate students and encourage them to think, discuss and predict (Dreyfus & Tsamir, 2004; Doolittle, 2000; Epstein & Ryan, 2002; Ishi, 2003; Kerpic, 2011; Ozmantar & Bingolbali, 2009; Saunders, 1992; Watson, 2008). On the other hand, Ainley, Pratt and Hansen (2006) emphasize that the principles of purpose and applicability are important when developing an activity.
Mathematical tasks are given great importance in the United States in order to improve the quality of mathematics education and support the learning of a certain concept (Simon and Tzur, 2004). The National Council of Teachers of Mathematics (NCTM) highlights the importance of student-centered mathematics education through the application of various activities (NCTM, 2000). In the updated middle school mathematics curriculum and textbooks in Turkey, it is noted that subjects should be taught through activities (Ministry of National Education [MNE], 2013). The curriculum aims to create situations where students make discoveries on their own through learning-based activities and easily learn by understanding (Bulut, 2008). Considering that the curriculum expects subjects or concepts to be taught through activities (MNE, 2005), it seems that there is a conflict instead of a common perception as far as the application of the curriculum through activities goes, and therefore there are problems with the quality and the implementation style of activities (Bozkurt, 2012). On the other hand, some studies reveal that teachers are not able to develop activities or perform developed activities in their classes (Duatepe-Paksu and Akkus, 2007) or are not interested in and willing to perform activities due to certain reasons (Bal, 2008; Ozpolat, Sezer, Isgor and Sezer, 2007). The content of the Special Training Methods (STM-I and STM-II) course involves field-specific basic concepts and the relation of these concepts to teaching in the field; general objectives of teaching in the field; methods, techniques, tools and materials used; and review and assessment of the relevant curriculum and textbooks. In addition, the course requires teaching of problem-solving, numbers and operations, algebra, geometry, measurement, data processing and probability, and involves planning, presentation and assessment activities (CoHE, 2007). Therefore, prospective teachers are expected to be informed about the strategies, methods and techniques required by the STM course which they take at undergraduate level and to be able to apply these strategies, methods and techniques when they start their service. However, it is reported in the literature that teachers prefer teaching methods and techniques such as direct instruction or question-answer, are not sufficiently equipped (Okur-Akca, Akcay & Kurt, 2016) and usually use the question-answer technique, the expository teaching strategy and discussion and direct instruction methods (Temizoz & Ozgun-Koca, 2008). Teachers cite the busy curriculum and the concern of not being able to keep up with the schedule as the reasons behind this situation and express that they do not use different teaching methods in different classes (Temizoz, 2005). Although studies on activities are available in the literature, to the best of our knowledge there is no study in which prospective teachers apply activities which they developed in the STM-II course to middle school students and which points out the resulting situations. For this reason, this study is significant in that it reveals opinions of prospective teachers about the applicability of activities which they developed and determines potential gains of middle school students via these activities. In this context, prospective teachers developed mathematical activities in accordance with principles specified in the curriculum and the study addressed how these activities were applied within the process. Thus, the purpose of this study is to reveal outcomes achieved
through and problems encountered during the application of activities related to multiple intelligences, discovery, group work, problem-solving, history of mathematics, mathematical rules and computer-assisted teaching. To this end, the study seeks to answer the following questions: 1. What are the difficulties encountered by prospective mathematics teachers during the "STM-II" course? 2. What are the opinions of prospective mathematics teachers about the gains provided by activities developed in the "STM-II" course for middle school students? 3. What are the gains provided by activities developed in the "STM-II" course for prospective teachers? Methodology The study utilizes the qualitative research approach. The qualitative research method is used for the systematic examination of meaning derived from the experiences of individuals participating in the research (Ekiz, 2013). Qualitative studies have important characteristics such as creating an awareness of the natural environment, adopting a holistic approach, revealing the perceptions of participants, being flexible and performing an inductive analysis (Yildirim & Simsek, 2011). Taking these characteristics into account, the present study was designed as a case study, an approach which requires the use of multiple data collection tools by the researcher to gather detailed and in-depth information about real-life events, a certain situation, a certain time period or limited situations within a group (Creswell, 2013; Yildirim & Simsek, 2011). The present study focuses on a limited situation and attempts to gather in-depth information, and thus employs the case study method. Participants The study was carried out with 34 prospective teachers attending the fourth year of the Elementary School Mathematics Teaching Department, Faculty of Education, in a state-owned university. 28 of the participants were female and 6 were male. The prospective teachers participating in the study were coded as P1, P2, P3, P4, P5, ..., P34 in accordance with research ethics. Data Collection and Analysis A form consisting of open-ended questions was used in this study in order to reveal opinions of participants about gains achieved through and problems encountered during the mathematical activities which they constructed and applied in the STM-II course. The form included questions regarding gains achieved through and problems encountered during the application of activities related to multiple intelligences, discovery, group work, problem-solving, history of mathematics, mathematical rules and computer-assisted teaching. The questions in the form used in the study were prepared with the help of the literature, and the opinions of two experts on the subject were received. The content and prediction validity of the questions in the form was ensured by receiving the opinions of three faculty members specializing in the field. Lastly, the comprehensibility of the questions was examined by a Turkish language professor and the form took its final shape.
Descriptive analysis was applied to the qualitative findings obtained using the form. Tables were created based on common views of the participants. Frequency values were used when creating the tables. It is of great importance in terms of validity in qualitative research to report the data collected in detail, include direct quotes from the participants and present the results obtained (Yildirim & Simsek, 2005). For this reason, direct quotes were used in this study to reflect the opinions of the participants and present findings to the reader in an organized and interpreted manner. Quotes from prospective teachers were included with each code. The Application Process of The Activities Constructed by The Participants The prospective teachers made use of the experiences of the researcher and various studies in the literature when developing the activities given in the table below. The title, the purpose and the participant count of each activity carried out in the STM-II course can be seen in the table below. It should be mentioned that the prospective teachers received help from experts to develop the activities. 12. Safety in Numbers: To allow students to collaborate toward a common goal and gain confidence. 13. From The Specific to The General, from The General to The Most General: To perform activities aimed at examining number and shape patterns and arithmetic sequences and expressing the rule of the sequence using a variable (e.g., n). Pinocchio and The Money Pouch: To ensure active participation and effective communication through group work. 15. Working with Whole Numbers: To compare and sort whole numbers. A Basket of Apples and The Car Dealer: To solve a higher-order problem using Polya's problem-solving steps. As shown in the table above, the participants prepared 16 "Group Work" activities dealing with different subjects. To arouse interest in and willingness toward learning, increase learning responsibility and improve versatile thinking skills in the decision-making process. As shown in the table above, the participants prepared 17 "Problem-solving" activities dealing with different subjects. Activity's Name; Activity's Purpose; Number of Participants. 1. Sieve of Eratosthenes: To allow students to learn the historical development of mathematics and value mathematics. Ataturk and Geometry: To allow students to learn about the history of mathematics by showing the importance placed by and contributions of Ataturk to mathematics. Fractals in Our Lives: To raise awareness in students by pointing out the place of fractals in the history of mathematics. Ancient Egyptian Mathematics: To help students understand place values of digits in the decimal number system and the reason behind the carry in addition. Euclid's Algorithm: To allow students to discover Euclid's cathetus relation. Dealing with The Sieve of Eratosthenes: To help students find prime numbers up to 100 using the sieve of Eratosthenes. 7. Getting to Know Pythagoras and His Relation: To help students establish the Pythagorean relation and solve problems by teaching them the place of Pythagoras in the history of mathematics. Guess and Find: To help students explain and share their mathematical ideas in a logical way by using mathematical terminology and language correctly. 9. Leonardo Da Fibonacci: To allow students to discover that mathematics exists in nature and everywhere and to realize the beauties of mathematics. 10. Getting to Know al-Khwarizmi: To inform students about the history of mathematics by introducing al-Khwarizmi.
11. Pearls from Sierpinski: To introduce the famous Polish mathematician Waclaw Sierpinski and his contributions to science. 12. Sino-Japanese Numbers: To examine the development processes of mathematics in different civilizations. Solving an Eratosthenes Puzzle: To explain Eratosthenes' contribution to the history of mathematics. To allow students to see relations between concepts, reach generalizations, make estimations based on the rule and improve inductive and mathematical thinking skills. Studying Fractions with Smurfs: To teach how to compare unit fractions, make denominators equal and recognize equivalent fractions. 12. Not Without Rules: To point out the importance of the mathematical rules which we use in everyday life. Whole Numbers in My Mind: To choose the right strategy for mental addition and subtraction with natural numbers. 14. Think About It: To teach students the rules of division and help them transfer these rules to new situations or associate the rules with everyday life. 15. How About Working with Cylinders?: To calculate the volume of the cylinder and find the pattern between the volumes of two cylinders whose diameters are doubled. 16. Party Hat: To give examples of the use of the cone in everyday life and help students find the volume and area of a cone. As shown in the table above, the participants prepared 16 "Rule Teaching" activities dealing with different subjects. To identify diagonals and interior and exterior angles of polygons and calculate the sum of interior and exterior angles. 2. Finding Formulas: To construct new knowledge using the preliminary knowledge of students and thus show them how formulas are derived. 3. My Sugar Cube: To establish the volume relation through models, considering that the cube is a special case of the rectangular parallelepiped. Angles in My Body: To name and draw polygons and recognize main elements of polygons such as the edge, interior angle, corner and diagonal. Linear Equations: To draw graphs of linear equations and express how two variables with a linear correlation change depending on each other via tables, graphs and equations. 6. Brain Storming: To form structures whose drawings from different perspectives are given. 7. My Absolute Value: To teach how to determine the absolute value of a whole number. The Discovery of The Day: To find the general pattern by finding the relation between the edge length of a square drawn in a circle, whose radius changes at each step, and the area of the isosceles right triangle in each square. 9. Learning The Square Root: To teach students how to determine the relation between square natural numbers and the square roots of these numbers. Let's Discover Together: To create the image of a planar shape created as a result of successive displacements and reflections. 11. Vulture Circle: To measure the length of a circle and the arc of a circle and the area of a circle and a circle segment. Let's Play with Legos: To calculate the volume of a shape by counting unit cubes. I Found A Model: To associate a percentage with a fraction or decimal notation corresponding to the same magnitude and show conversions between these notations via a model. 14. My Exponential Numbers: To find and show the square and the cube of a natural number. Percentages on Windows: To allow students to calculate the amount corresponding to a certain percentage of a quantity and express a quantity as a percentage of another quantity.
Acute, Right and Obtuse Angles: To teach students to form acute, right and obtuse angles and to recognize acute, right and obtuse angles. 17. Let's Make Lemonade: To teach students liquid measuring units and conversions between these units and help them make comparisons. 18. Let's Find The Perfect Square and The Difference of Two Squares: To teach students the perfect square and the difference of two squares. As shown in the table above, the participants prepared 18 "Discovery" activities dealing with different subjects. As shown in the table above, the participants prepared 9 "Computer-assisted Mathematics" activities dealing with different subjects. The participants were given three weeks to determine the activity types, the content and their group mates. At the end of the three weeks, the participants presented their activities in the classroom. The names of the activities were determined as a result of the cooperative work of each group. The participants were asked to form groups consisting of at least two and at most four members. Groups were created in the study since it is stated in the literature that activities should be designed in a way that allows students to work in groups (Kayaaslan, 2006) and should also involve situations requiring both group and individual work (Baki and Gokcek, 2005; Baki, 2008). The participants were asked to take photos during the performance of the activities and to complete the activities two weeks prior to the end of the semester. The participants were interviewed during class hours each week and asked to prepare reports for each activity. The reports prepared by the participants were examined and feedback was given on how to perform the activity in the next class. Once the activities were completed, the groups made presentations about their activities in the last class of the semester. As an example, the application processes of four different activities (history of mathematics, theory of multiple intelligences, teaching rules and problem-solving activities) are presented below: History of Mathematics Activity Figure 1 shows reflections from the activity "Napier's Bones". The aim of this activity was to show students how mathematics was transferred from the past to the present day and how mathematical operations were performed in the past. It was also explained to the students that mathematics is man-made and not sent from the heavens. Before the application, the appropriate ones were selected among the worksheets on "History of Mathematics" prepared for 5th graders, depending on the desired gains and the subject. After consulting with the responsible teacher of the school providing the internship program, the purposes of the worksheets were explained to the students. The activities were distributed to 18 students. After wishes of good luck, the students were told that they could ask the participants for help should they have any difficulties. The application was carried out on an individual basis by 14 students due to the small class size. The students were asked to read the text at the beginning of the worksheet, which involved two activities, each having seven questions. The students were told to answer the questions in accordance with the instructions. In a nutshell, the text involved a story about how to multiply numbers using the Napier's bones method. It was observed that three students answered all of the questions correctly; however, the students generally had difficulty in answering the questions in the activities.
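The worksheet's Napier's-bones story is not reproduced above, so the following Python sketch is only an illustrative rendering of the idea behind the activity: each digit's partial product with the single-digit multiplier is written as a tens/units cell (one rod per digit), and the cells are summed along the diagonals, carrying where necessary.

```python
def napier_multiply(number: int, digit: int) -> int:
    """Multiply `number` by a single digit the way Napier's bones lay it out:
    each rod cell holds (tens, units) of digit*d; diagonals are then summed
    with carries, exactly as read off the physical rods."""
    cells = [divmod(d * digit, 10) for d in map(int, str(number))]  # (tens, units) per rod
    # Diagonal sums: the units of cell i align with the tens of cell i+1.
    diagonals = [cells[-1][1]]                       # rightmost diagonal
    for i in range(len(cells) - 1, 0, -1):
        diagonals.append(cells[i][0] + cells[i - 1][1])
    diagonals.append(cells[0][0])                    # leftmost diagonal
    result, carry = [], 0
    for d in diagonals:                              # right to left, with carries
        carry, out = divmod(d + carry, 10)
        result.append(str(out))
    if carry:
        result.append(str(carry))
    return int("".join(reversed(result)))

print(napier_multiply(425, 6))  # 2550
print(napier_multiply(78, 9))   # 702
```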
Theory of Multiple Intelligences Activity Figure 2 shows reflections from the "Discovering Ourselves" activity. The purpose of this activity was to adopt an approach which considers the individual differences of students and regulates the teaching process according to these individual differences, and to help students realize these differences and value mathematics and also themselves. Prior to the application, the participants prepared a worksheet for 8th graders about the "Theory of Multiple Intelligences". After consulting with the responsible teacher of the school providing the internship program, the purposes of the worksheets were explained to the students. The application was carried out in groups of two with the participation of 18 eighth-grade students. In the beginning, the students showed a prejudiced attitude toward the activity and thought that they could not answer the questions. It was observed that these prejudices diminished once the students reviewed the worksheet. As stated in Gardner's "Theory of Multiple Intelligences", the worksheets were prepared to consider the individual differences of students and regulate the teaching process according to these individual differences. The students stated that they found the activity to be fun, and it was observed that they had a good time because they were offered learning experiences appealing to all the senses and given the opportunity to play an active role in learning (Baki, Gurbuz, Unal & Atasoy, 2009). It was found as a result of the activity that the students identified the intelligence domain which suited them best and drew pictures and wrote stories and poems accordingly. The students realized their capacity to create a product, their ability to come up with effective and efficient solutions for real-life problems, and their ability to solve new and complex problems which need to be addressed, and thus discovered themselves. Moreover, the activity attracted the attention of the middle school students since it helped them get to know themselves, and the students expressed that they discovered their intelligence type at the end of the application. Rule Teaching Activity Figure 7 shows reflections from the "How About Working with Cylinders?" activity. The prospective teachers who supervised the activity aimed to have students calculate the volume of the cylinder and find the pattern between the volumes of two cylinders whose diameters were doubled. Prior to the application, the students were reminded how to calculate the volume of the cylinder. The students found the subject to be fun and enjoyable. The participants guided them in cases where the students had difficulties in understanding the subject. In spite of the guidance provided by the participants regarding the performance of the activity, 14 students asked the prospective teachers for help on how to perform the activity. The reason why the students had difficulties might be that they could not discover the relation from the pattern. After the application, the worksheets completed by the students were evaluated and it was detected that they had difficulties in finding the rules and relations using the operational steps. Figure 7. Reflections from the "How About Working with Cylinders?" activity.
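The pattern that the cylinder activity targets can be checked directly; the short Python sketch below is not part of the original worksheet and uses illustrative values only. It computes the volume of a cylinder and the ratio between the volumes of two cylinders of equal height whose diameters differ by a factor of two.

```python
# Doubling the diameter (and hence the radius) of a cylinder of the same
# height multiplies its volume by four, since V = pi * r^2 * h.
import math

def cylinder_volume(radius: float, height: float) -> float:
    return math.pi * radius ** 2 * height

h = 10.0
small = cylinder_volume(radius=3.0, height=h)
large = cylinder_volume(radius=6.0, height=h)   # diameter doubled
print(f"small: {small:.1f}, large: {large:.1f}, ratio: {large / small:.1f}")  # ratio: 4.0
```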
Problem-solving Activity The aim of the "Whole Numbers/All Numbers" activity was to identify the problem situation and look for solutions. Prior to the application, the participants prepared a worksheet for 8th graders about problem-solving. The group work method was used when performing the problem-solving activity. The application was performed with 16 eighth-grade students assigned to 8 groups of two. The worksheet related to whole numbers was introduced to the students prior to the application. After the necessary explanations, the students were handed the worksheet consisting of 8 questions and asked to read the instruction at the beginning of the worksheet. The students were asked to solve the given problem in the first activity, to write down the problem steps in the second activity and to form a problem in the third activity. The worksheets completed by the students were evaluated after the application and it was found that most groups participating in the activity answered all of the questions correctly, whereas the group with the least number of correct answers had 3 correct answers. At the end of the application process, it was seen that almost all of the students answered the problems in the activities successfully and without difficulty. Findings / Results The findings obtained according to the sub-problems of the study are given below in tables. It is difficult to find a school to perform the activity (4). Findings Related to The Difficulties Encountered by Prospective Mathematics Teachers During The "STM-II" Course Some of the participant opinions regarding codes derived from the "Prospective Teacher", "Student" and "Time" themes given in Table 8 can be found below. "The students... did not want to participate in the application process and made a fuss about it… (K19)" "It took me more time than I expected to prepare the worksheet. (P3)" "We encountered problems while arranging a class in the school which we visited to perform the activity. There were teachers who did not want to give their course hour because they did not want to fall behind in their schedule. But we were able to perform the activity in a fifth-grade class in the end by asking one of the teachers. (P11)" Findings Related to The Opinions of The Participants About Gains Provided by Activities Developed in The "STM-II" Course for Middle School Students Table 9. The gains provided by activities developed in the "STM-II" course according to prospective teachers. "We have also decided to review our class management and focus on our shortcomings. (P1)" Discussion and Conclusion This study focused on the difficulties encountered by prospective mathematics teachers while carrying out activities, the opinions of prospective teachers about what middle school students gained from the activities, and what the participants gained from the activities. Thus, an attempt was made to reveal the opinions of prospective teachers regarding the benefits achieved through and the problems encountered during the application of activities related to multiple intelligences, discovery, group work, problem-solving, the history of mathematics, mathematical rules and computer-assisted teaching.
Regarding difficulties encountered by the participants during the application process of the mathematical activities, the participants mostly mentioned that it was time-consuming to prepare worksheets, it was difficult to find a school to perform the activity and students were reluctant toward the activity. Some of the findings obtained from the "Prospective Teacher", "Student", "Time" and "School" themes show similarities with some studies in the literature (Bal, 2008; Ozpolat, Sezer, Isgor & Sezer, 2007). This shows that students and teachers are not accustomed to carrying out mathematics classes with activities. Regarding the gains of the "Discovery-Group Work-Multiple Intelligences-Problem-solving-Rule Teaching-Computer-assisted Mathematics" activities for students, the participants expressed that students had increased interest and curiosity in mathematics, classes became more enjoyable, prejudices toward mathematics were eliminated, participation in class increased, students helped each other more and permanent and meaningful learning was ensured. These findings are consistent with those of Elbers (2003), who reported that activities encouraged students to study and discover mathematical learning processes, allowed them to gain experience and develop new strategies. In addition, these findings show parallelism with those obtained by Yildiz and Baki (2016a, 2016b) in their study on the history of mathematics education. This leads to the idea that the activities developed greatly contributed to both the cognitive and the affective skills of the prospective teachers. Regarding the gains of the participants related to "Professional and Personal Development", it was revealed that the prospective teachers learned how to make mathematics classes more interesting, gained experience about how to manage the class and realized their lack of knowledge about certain subjects. In this context, Bozkurt (2012) found how perceptions of participants regarding the activity are reflected in the application to be a remarkable situation. These findings are consistent with the findings of the study. From this point, we can say that the activities allowed the prospective teachers to gain preliminary experience related to teaching.
Almost all of the participants included in the study expressed that the activities which they developed in the "STM-II" course and applied to middle school students improved their beliefs and confidence in their ability to teach mathematics. Thanks to this course, the prospective teachers found the opportunity to go out into the field outside the faculty environment and perform the activities. The prospective teachers stated that they believed these activities which they performed with students contributed a lot to their social-emotional and professional skills as well as their cognitive skills. The participants better understood the importance of the teaching profession thanks to the beneficial learning processes which took place as a result of the activities. From this, we may conclude that the course helped the participants become prospective teachers who experienced the excitement and joy of teaching students mathematics through various activities. A literature review reveals that there are numerous studies conducted with the idea that developing activities will contribute to mathematics education and therefore teacher education (Herbst, 2008; Kerpic, 2011; Ozmantar, Bozkurt, Demir, Bingolbali & Acil, 2010; Ugurel, Bukova-Guzel, 2010). Therefore, the findings of the study show that the activities have positive reflections on teacher training. The participants carried out the necessary research using the curriculum, textbooks and various studies in the literature under the guidance of the researcher in the activity development stage and therefore were well prepared and placed the necessary importance and value on the activities, which ensured that the activities were beneficial and effective. Similarly, Ersoy (2006) found that teachers' high level of knowledge, increased awareness and sensitivity toward their duties allowed activities to be beneficial and effective. Considering the importance of guidance offered by the teachers and clues provided on how to learn in activities performed with primary school students, the importance of in-class activities is better understood (Ozmantar et al., 2010). In this context, students who perform or are encouraged to perform activities within the scope of in-class applications will become individuals who are accustomed to activities, able to understand the purpose of activities (Saglik, 2007; Yalvac, 2010) and eager to perform activities. Therefore, students will see that mathematics is actually an engaging course and that it is possible to enjoy mathematics if they crack its secret as they perform activities. From this point, it seems that activities are effective in enabling students to view mathematics as an engaging course rather than a scary one and like mathematics better, understand the importance of cooperation and collaboration in group work (Baki et al., 2010), understand mathematics better and enhance their cognitive and social-emotional development. In this context, students should be encouraged to find their own solutions and make generalizations from their solutions while performing activities (Olkun & Toluk, 2003).
Researchers have concluded that activities in almost all mathematics textbooks undervalue efficient use of time and the preconditioned behaviors of students and do not include activities related to the use of computer technologies other than the calculator (Arslan & Ozpinar, 2009; Kerpic & Bozkurt, 2011). Similarly, the results of the Trends in International Mathematics and Science Study indicate that activities developed and performed in the process are not implemented efficiently (Sisman, Acat, Aypay & Karadag, 2011). These results contrast with the findings of the present study, because the present study shows that the activities were quite effective. It was concluded that the "STM-II" course allowed prospective teachers to think mathematically (Arslan & Yildiz, 2010; Yildiz, 2016), solve problems (Taskin, Yildiz, Kanbolat & Baki, 2013), learn by doing and experiencing, reason, make connections and achieve permanent learning when learning concepts through the activities developed and performed within the scope of the course. In addition to the cognitive dimension mentioned above, considering the affective dimension, the activities improved prospective teachers' belief and self-confidence, their ability to communicate with students and teachers at the school, their class management skills, their ability to cooperate with students and their sense of responsibility, and allowed them to feel like teachers. The following recommendations are presented considering the results of the study: 1. The participants stated that they had difficulties in finding a school to perform the activities. Teachers and administrators serving in middle schools may try to help prospective teachers who will soon be in service to solve this problem. Also, considering that activities have an important place in student achievement in mathematics, the awareness level of teachers may be increased on this matter. 2. Some participants expressed that students were reluctant to perform the activities. It may be beneficial that teachers perform activities, especially those included in the 2009 curriculum and textbooks, more frequently in their classes and ensure students get accustomed to performing activities, to overcome or reduce this problem. Also, learning-teaching activities in the 2013 curriculum may be enriched. In-service training may be organized to provide teachers with adequate knowledge and skills regarding activities included in the middle school mathematics curriculum in order to improve the situation. 3. The participants expressed that students had increased interest and curiosity in mathematics, classes became more enjoyable, prejudices toward mathematics were eliminated, participation in class increased, students helped each other more and permanent and meaningful learning was ensured. Considering the interest of students in the activities, teachers and prospective teachers may be informed about mathematics teaching through activities. Thus, the increase in the interest of students in mathematics will be sustainable. 4. Almost all of the participants included in the study expressed that the activities which they developed in the "STM-II" course and applied to middle school students improved their beliefs and confidence in their ability to teach mathematics. In all major field courses received by prospective teachers at undergraduate level, prospective teachers may be given the opportunity to develop activities with more tangible, clear and rich material support in order to achieve the goals specified in the curriculum. 5.
Interviews may be held with prospective teachers to investigate participant opinions about how to improve and apply activities in more depth. Also, observations may be conducted in order to examine how teachers develop activities and perform them in their classes and to reveal the difficulties which they encounter. To summarize, it is recommended that mathematical activities are given more weight in schools, that students are familiarized with activities and that the awareness of teachers and prospective teachers regarding mathematical activities is raised. Activities in the updated curriculum and textbooks may be enriched in a way that all mathematical gains from primary education level to secondary education level are emphasized. Figure 8. Reflections from the "Whole Numbers/All Numbers" activity. To enhance the communication between students; To improve students' motivation; To improve students' self-confidence; To teach how to work in cooperation; To create a sense of responsibility; To enable students to see their deficiencies. Discovery: To create an interest and curiosity toward mathematics (31); To ensure permanent and meaningful learning (10); To enhance the communication between students; To improve students' self-confidence; To have students discover mathematical concepts; To allow students to learn through brainstorming; To offer different perspectives; To raise a generation which produces information. Table 1. Multiple Intelligences Activity. To improve communication skills and use problem-solving, reasoning and logical thinking skills efficiently. 10. Let's Do It Together: To name and classify polygons. 11. Brand New Ideas: To help students suggest new mathematical ideas. Table 3. Problem-solving Activity. Time to Solve Problems: To interpret the time chart given and understand the concept of time by solving problems. 14. Get The Frog out of The Well: To solve problems using Polya's problem-solving steps. Table 4. Continued. As shown in the table above, the participants prepared 18 "History of Mathematics" activities dealing with different subjects. 4. De Moivre's Calculation: To help students realize the ability to measure time. Table 6. Discovery Activity. Table 7. Computer-assisted Mathematics Activity. Table 8. Difficulties encountered by prospective teachers. Table 9. Continued. Some of the participant opinions regarding the "Discovery-Group Work-Multiple Intelligences-Problem-solving-Rule Teaching-Computer-assisted Mathematics" themes given in Table 9 can be found below. Findings Related to The Gains Provided by Activities Developed in The "STM-II" Course for Prospective Teachers Table 10. The gains provided by activities developed in the "STM-II" course for the participants. Some of the participant opinions regarding codes derived from the "Professional Development" and "Personal Development" themes given in Table 10 can be found below.
7,677.6
2017-04-15T00:00:00.000
[ "Mathematics", "Education" ]
Patient-Derived Orthotopic Xenograft (PDOX) Models of Melanoma † Metastatic melanoma is a recalcitrant tumor. Although "targeted" and immune therapies have been highly touted, only relatively few patients have had durable responses. To overcome this problem, our laboratory has established the melanoma patient-derived orthotopic xenograft (PDOX) model with the use of surgical orthotopic implantation (SOI). Promising results have been obtained with regard to identifying effective approved agents and experimental therapeutics, as well as combinations of the two, using the melanoma PDOX model. PD-1/PD-L1 immunotherapy has shown promise with melanoma, but is limited by tumor infiltration of activated T cells [5], and has not increased the survival rate [2]. Stage III and IV melanoma is almost never curable, due to a lack of effective drugs, resistance to immunotherapy and tumor heterogeneity [10]. Chemotherapy and radiotherapy of melanoma are also limited by melanin [11]. Individualized and precision therapy is needed for melanoma. The present report reviews our laboratory's experience with PDOX models of melanoma, and the ability of the PDOX models to identify effective currently used, as well as experimental, therapeutics. S. typhimurium A1-R Was Highly Effective on the Patient-Derived Orthotopic Xenograft (PDOX) Melanoma in Nude Mice S. typhimurium A1-R, expressing green fluorescent protein (GFP), extensively targeted the tumor, with very few GFP-expressing bacteria found in other organs (i.e., demonstrating high tumor selectivity). S. typhimurium A1-R strongly inhibited the growth of the melanoma (Figure 1). S. typhimurium A1-R, cisplatinum (CDDP), and a combination of S. typhimurium A1-R and CDDP were all highly effective on the melanoma PDOX (Figure 2). PDOX Model of a BRAF-V600E Mutant Melanoma A BRAF-V600E mutant melanoma PDOX was established. Vemurafenib (VEM), temozolomide (TEM), trametinib (TRA) and cobimetinib (COB) were all effective against it. TRA treatment caused tumor regression (Figure 3). The PDOX was expected to be sensitive to VEM, since VEM targets the BRAF-V600E mutation. However, in this case, TRA was much more effective than VEM [55]. This result shows that the BRAF-V600E mutation is probably not a major factor in promoting this melanoma, and that genomic profiling by itself is insufficient to direct therapy. In a subsequent study with this BRAF-V600E mutant melanoma PDOX, TEM combined with S. typhimurium A1-R was significantly more effective than either S. typhimurium A1-R or TEM alone [34] (** p < 0.01, compared with the untreated control group),
causing regression of the tumor (Figure 4). Confocal microscopy showed that S. typhimurium A1-R could directly target the melanoma PDOX and cause tumor necrosis [56]. In a subsequent study, VEM, S. typhimurium A1-R, COB, VEM combined with COB, and VEM combined with S. typhimurium A1-R were all effective against the BRAF-V600E mutant melanoma PDOX compared to the untreated control. VEM combined with S. typhimurium A1-R was the most effective of these therapies (Figure 5). Tumor necrosis was more extensive in the group treated with VEM combined with S. typhimurium A1-R [9]. In another study, TEM combined with S. typhimurium A1-R, and VEM combined with S. typhimurium A1-R, were significantly more effective than S. typhimurium A1-R alone on the BRAF-V600E mutant melanoma PDOX (Figure 6). Both VEM and TEM significantly increased the tumor targeting of S. typhimurium A1-R, compared to S. typhimurium A1-R alone, as observed by high-resolution confocal microscopy (Figure 7A,B).
These results suggested that S. typhimurium A1-R increases the efficacy of chemotherapy, and chemotherapy increases the tumor targeting of S. typhimurium A1-R, in the melanoma PDOX model [57]. Methionine dependence is a general metabolic defect in cancer. It has been demonstrated that methionine starvation induces a tumor-selective S/G2-phase cell-cycle arrest of tumor cells [58-61]. Methionine dependence is due to the excess use of methionine in aberrant transmethylation reactions, termed the Hoffman effect, which is analogous to the Warburg effect for glucose in cancer [62-67]. The excessive and aberrant use of methionine in cancer is strongly observed in [11C]-methionine PET imaging, where the high uptake of [11C]-methionine results in a very strong and selective tumor signal compared with the normal-tissue background. [11C]-methionine is superior to [18F]-fluorodeoxyglucose (FDG) for PET imaging, suggesting that methionine dependence is more tumor-specific than glucose dependence [68,69]. A purified methionine-cleaving enzyme, methioninase (METase), from Pseudomonas putida has previously been found to be an effective antitumor agent in vitro as well as in vivo [70-73]. For the large-scale production of METase, the gene from P. putida has been cloned in Escherichia coli and a purification protocol for recombinant methioninase (rMETase) has been established with high purity and low endotoxin release [74-77].
The combination therapy of TEM and rMETase had significantly better efficacy than either therapy alone on the BRAF-V600E mutant melanoma PDOX (Figure 8). Post-treatment L-methionine levels in tumors treated with rMETase alone, or along with TEM, were significantly decreased compared to untreated controls (data not shown). These results showed that this melanoma is methionine dependent and that rMETase thereby suppresses the melanoma PDOX [77]. This review indicates that the melanoma PDOX is a promising, although still developing, technology, able to identify effective therapy for patients, both approved and experimental. Future studies will investigate further advantages of the melanoma PDOX model. Please see references [78,79] for reviews of melanoma PDX models. Future studies will also address molecular changes in the treated melanoma PDOX models described in the present report. Mice Athymic (nu/nu) nude mice (AntiCancer Inc., San Diego, CA, USA) were used in these studies in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals under Assurance Number A3873-1. Animals were anesthetized with a ketamine mixture via subcutaneous injection of a 0.02 mL solution of 20 mg/kg ketamine, 15.2 mg/kg xylazine and 0.48 mg/kg acepromazine maleate for all surgeries [9,55-57,77]. Patient-Derived Tumors The PDOX models from the University of California Los Angeles (UCLA) were established from a 75-year-old female patient with a melanoma of the right chest wall. The melanoma had a BRAF-V600E mutation.
Tumor resection was performed in the Department of Surgery, UCLA. The tumor was provided for PDOX establishment after written informed consent was provided by the patient, and after approval was granted by the Institutional Review Board (IRB) [55]. Another patient melanoma was obtained from a patient at UCSD under IRB approval and informed patient consent [34]. Establishment of PDOX Models of Melanoma by Surgical Orthotopic Implantation (SOI) Resected melanoma tissue was immediately transported to AntiCancer Inc. on ice. The BRAF-V600E mutant melanoma tumor fragments (3 mm 3 ) were transplanted to the chest wall of nude mice to mimic the site from which they were resected from the patient [9,[55][56][57]77]. The melanoma from UCSD was directly implanted subdermally and passaged in the back skin of nude mice [34]. All surgeries were performed under ketamine anesthesia. Tumor Histology The original tumor tissue and PDOX tumor tissue were fixed in 10% formalin. The fixed tumors were embedded in paraffin and then sectioned and stained. Standard bright-light microscopy was used for histopathological analysis [55]. Intratumor L-Methionine Levels After the completion of rMETase treatment, each tumor was sonicated for 30 s on ice and centrifuged at 12,000 rpm for 10 min. Supernatants were collected and protein levels were measured using the Coomassie Protein Assay Kit (Thermo Scientific, Rockford, IL, USA). L-methionine levels were determined using a high-pressure liquid chromatography (HPLC) procedure we developed previously [81,82]. Methionine levels were normalized to tumor protein by standard procedures. Conflicts of Interest: The author declares no conflict of interest.
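The final normalization step described above (L-methionine from HPLC, protein from the Coomassie assay) reduces to a per-milligram-protein calculation. The sketch below is only an illustration of that arithmetic; the numerical values are hypothetical placeholders, not data from the study.

```python
def methionine_per_mg_protein(methionine_nmol, protein_mg):
    """Normalize intratumor L-methionine to total tumor protein.

    methionine_nmol : L-methionine in the supernatant aliquot (nmol), e.g. from HPLC
    protein_mg      : protein in the same aliquot (mg), e.g. from a Coomassie assay
    """
    if protein_mg <= 0:
        raise ValueError("protein content must be positive")
    return methionine_nmol / protein_mg

# Illustrative (made-up) numbers only:
control = methionine_per_mg_protein(methionine_nmol=12.0, protein_mg=1.5)  # untreated tumor
treated = methionine_per_mg_protein(methionine_nmol=3.0, protein_mg=1.4)   # rMETase-treated tumor
print(f"control: {control:.1f} nmol/mg, treated: {treated:.1f} nmol/mg")
```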
3,264.8
2017-08-31T00:00:00.000
[ "Biology", "Medicine" ]
Metasomatized lithospheric mantle for Mesozoic giant gold deposits in the North China craton The origin of giant lode gold deposits of Mesozoic age in the North China craton (NCC) is enigmatic because high-grade metamorphic ancient crust would be highly depleted in gold. Instead, lithospheric mantle beneath the crust is the likely source of the gold, which may have been anomalously enriched by metasomatic processes. However, the role of gold enrichment and metasomatism in the lithospheric mantle remains unclear. Here, we present comprehensive data on gold and platinum group element contents of mantle xenoliths (n = 28) and basalts (n = 47) representing the temporal evolution of the eastern NCC. The results indicate that extensive mantle metasomatism and hydration introduced some gold (<1–2 ppb) but did not lead to a gold-enriched mantle. However, volatile-rich basalts formed mainly from the metasomatized lithospheric mantle display noticeably elevated gold contents as compared to those from the asthenosphere. Combined with the significant inheritance of mantle-derived volatiles in auriferous fluids of ore bodies, the new data reveal that the mechanism for the formation of the lode gold deposits was related to the volatile-rich components that accumulated during metasomatism and facilitated the release of gold during extensional craton destruction and mantle melting. Gold-bearing, hydrous magmas ascended rapidly along translithospheric fault zones and evolved auriferous fluids to form the giant deposits in the crust. INTRODUCTION The subcratonic lithospheric mantle (SCLM) underneath Archean crust mostly formed by high degrees of partial melting (Griffin et al., 2009). The SCLM is thus refractory and strongly depleted in incompatible elements and many metals like Au, reducing its potential as a source for later giant deposits. However, magmas and fluids derived from the convecting mantle, and particularly subducted materials, may have metasomatized and replenished the SCLM in volatiles, metals, and other elements (e.g., Lorand et al., 2013; O’Reilly and Griffin, 2013). The metasomatized SCLM is often assumed to be anomalously enriched in Au and to represent the source for the formation of large Au provinces (Hronsky et al., 2012; Griffin et al., 2013; Tassara et al., 2017), including Carlin-type Au deposits (sediment-hosted disseminated gold deposits) (Muntean et al., 2011). Giant Au deposits in the North China craton (NCC), which are globally noteworthy for their large-scale reserves (>5000 tons), are likely the best case in the world to clarify this model. The lithospheric mantle of the NCC was intensely metasomatized and hydrated over 2 billion years by partial melts and subducted components of different ages (Paleozoic, Triassic, Jurassic) before its extensive destruction at ca. 130–120 Ma (Zhu et al., 2012; Wu et al., 2019). The cratonic destruction was essentially coeval with the eruption of mantle-derived magmas and the formation of giant lode Au deposits in the eastern NCC (Li et al., 2012; Zhu et al., 2015). These hydrothermal deposits are mostly hosted in amphibolite- to granulite-facies metamorphic rocks and in Mesozoic felsic plutons. They are difficult to designate as crustal metamorphism-related orogenic Au deposits because they formed well over a billion years after the high-grade metamorphism of the crust, which would have left the crust strongly depleted in gold and fluids (Goldfarb and Santosh, 2014; Goldfarb and Groves, 2015).
Instead, it is assumed that the lithospheric mantle of the NCC metasomatized by subducted materials may have played a key role in the large-scale Au mineralization (Goldfarb and Groves, 2015; Li et al., 2012; Zhu et al., 2015). However, the extent of gold enrichment in the SCLM after metasomatism and the mechanism and scope of its contribution to giant Au deposits have rarely been directly tested. Here, we present Au and platinum group element (PGE) contents of the peridotite xenoliths and basalts in the NCC, which reflect different evolutionary episodes of the mantle from the Archean to Cenozoic. This allows us to fully assess the impact of metasomatism on the Au contents of the SCLM and define the links among mantle metasomatism, mantle-derived hydrous magmas, and the origin of giant Au deposits in the NCC. SAMPLES Primitive alkaline picrites and high-Mg basalts with Mg# of 71–75 erupted coevally with (125–119 Ma; Fig. 1), or slightly earlier than, the peak period of Au mineralization (Gao et al., 2008; Liu et al., 2008). They have been extensively studied and are characterized by high volatile contents (e.g., 2–4 wt% water), arc basalt–like trace element patterns, and radiogenic Sr-Nd-Hf-Os isotopic compositions (referred to hereafter as 130–120 Ma basalts; Zhang et al., 2002; Gao et al., 2008; Liu et al., 2008; Xia et al., 2013; Meng et al., 2015; Huang et al., 2017; Geng et al., 2019a, 2019b). They are widely accepted to have mainly originated from metasomatized, hydrated, and isotopically enriched SCLM with insignificant input of crustal contamination (see details in the GSA Data Repository¹). We analyzed the Au and PGE contents of many 130–120 Ma basalts, and younger basalts that erupted after the formation of the gold deposits. The younger basalts erupted later than 110 Ma and are melts derived from the asthenosphere (Liu et al., 2008; Meng et al., 2015). We used these basalts as a measure for the fraction of gold released from the asthenospheric mantle compared to the 130–120 Ma basalts, which were mainly from the metasomatized SCLM. Mantle xenoliths with Archean to Paleoproterozoic (Hebi and Mengyin) and Phanerozoic (Shanwang) Re depletion model ages (Zheng et al., 2005; Chu et al., 2009; Liu et al., 2011) were also analyzed to assess temporal changes in the Au contents of the SCLM. We obtained the gold and PGE contents of bulk rocks of mantle xenoliths (n = 28, three locations; Fig. 1), and 130–120 Ma and <110 Ma basalts (n = 47, seven locations), after Carius tube digestion in reverse aqua regia and chromatography separation (Cheng et al., 2019). The PGE contents were determined by isotope dilution methods, and gold contents were determined by internal standardization to platinum and/or the standard addition method (Tables DR1–DR2 in the Data Repository). Reference materials and sample replicates indicated 10%–15% (2 standard deviations) uncertainty for Au, with blanks of 5 ± 5 pg (Figs. DR1–DR3). Such low blanks are essential for analyzing low-Au samples. The Au and PGE contents and other information about the analyzed samples are given in Tables DR1–DR3, and the main results are shown in Figures 2 and 3.
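For the standard-addition step mentioned above, the unknown concentration is recovered from the x-intercept of a linear fit of instrument response against the amount of analyte spiked into the sample. The sketch below illustrates only that generic calculation; the spike levels and signal values are hypothetical, and the actual calibration details of this study are given in the Data Repository.

```python
import numpy as np

def standard_addition(added, signals):
    """Estimate the analyte amount originally present by standard addition.

    added   : analyte spiked into each aliquot (e.g., pg Au)
    signals : measured instrument response for each spiked aliquot
    Returns the magnitude of the x-intercept of the linear fit, i.e., the
    amount already present in the unspiked sample (same units as `added`).
    """
    slope, intercept = np.polyfit(added, signals, 1)  # ordinary least squares
    return intercept / slope

# Hypothetical spikes (pg Au) and ICP-MS intensities (counts/s):
added = np.array([0.0, 10.0, 20.0, 40.0])
counts = np.array([1500.0, 2400.0, 3300.0, 5100.0])
print(f"Au in unspiked aliquot ≈ {standard_addition(added, counts):.1f} pg")
```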
LIMITED RE-ENRICHMENT OF GOLD IN METASOMATIZED MANTLE Gold is more incompatible, and also more mobile, in fluids than Pd and other PGEs (Maier et al., 2012; Pokrovski et al., 2013), and so melt and/or fluid metasomatism should elevate the Au/Ir and Pd/Ir ratios of the refractory SCLM, and also Au/Pd, which is a well-documented feature of peridotites (e.g., Fischer-Gödde et al., 2011; Maier et al., 2012). The Mengyin mantle xenoliths hosted by 480 Ma kimberlites, and the Hebi mantle xenoliths hosted by 4 Ma basalts, represent the relics of Archean–Paleoproterozoic SCLM (low initial ¹⁸⁷Os/¹⁸⁸Os of 0.1089–0.1164, high Mg# of >92; Fig. 2). They have undergone extensive metasomatism, as indicated by highly enriched light rare earth elements (REEs; Zheng et al., 2005), radiogenic initial ⁸⁷Sr/⁸⁶Sr, and unradiogenic initial ¹⁴³Nd/¹⁴⁴Nd (Zhang et al., 2008; Chu et al., 2009). The Mengyin harzburgite xenoliths contain 140–510 ppm S, which is much higher than that for refractory peridotites (Chu et al., 2009), and variably elevated Au/Pd(N) (normalized to the primitive mantle [PM]), indicating the addition of sulfides and gold during metasomatism. However, these samples still contain relatively low Au contents of 0.06–0.50 ppb, as well as low Pd and Cu contents compared to the PM (Fig. 2). This is also true for the Hebi peridotites (La/Yb(N) of 16–38 and Au of 0.03–0.11 ppb; Fig. 2). These results indicate that metasomatism introduced S, but only limited Au, into the SCLM from the Archean to 480 Ma, and even until 4 Ma in the central NCC (Hebi). A similar pattern of Au enrichment also occurred in the Finsch and Venetia peridotites in the Kaapvaal craton (e.g., high S contents of 280–1240 ppm and high Au/Pd(N) of >1–13, but Au <0.9–1.4 ppb; Maier et al., 2012). The Shanwang peridotite xenoliths hosted in 18 Ma basalts represent juvenile lithospheric mantle that formed after the destruction of the NCC and Mesozoic Au mineralization (Chu et al., 2009). They contain 0.02–1.8 ppb Au and show a correlation with PGE contents, similar to refertilized massif-type peridotites representative of Phanerozoic lithospheric mantle (Figs. 2 and 3; Fig. DR4).
¹GSA Data Repository item 2020048, methods, data quality, supplementary notes, Figures DR1–DR10, and Tables DR1–DR3, is available online at http://www.geosociety.org/datarepository/2020/, or on request from <EMAIL_ADDRESS>.
Figure 1. Sample locations on a simplified map of the North China craton (NCC). Analyzed mantle xenoliths and basalts (130–120 Ma and <110 Ma) are shown with eruption ages. Both mantle xenoliths and basalts from Hebi and Shanwang are included. Also shown are the major districts of Early Cretaceous lode gold deposits and the translithospheric Tanlu fault in the eastern North China craton (modified from Zhu et al., 2015).
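The Au/Pd(N) values quoted in this section are simply the sample Au/Pd ratio divided by the Au/Pd ratio of the primitive mantle. A minimal sketch of that normalization is shown below; the primitive-mantle reference values depend on the PM model adopted and must be supplied by the reader, and the sample values used here are illustrative rather than measured data.

```python
def pm_normalized_au_pd(au_sample_ppb, pd_sample_ppb, au_pm_ppb, pd_pm_ppb):
    """Primitive-mantle-normalized Au/Pd, i.e. (Au/Pd)_sample / (Au/Pd)_PM."""
    return (au_sample_ppb / pd_sample_ppb) / (au_pm_ppb / pd_pm_ppb)

# Illustrative low-Au harzburgite against placeholder PM reference values:
ratio_n = pm_normalized_au_pd(au_sample_ppb=0.3, pd_sample_ppb=0.5,
                              au_pm_ppb=1.7, pd_pm_ppb=7.1)  # PM values are placeholders
print(f"Au/Pd(N) ≈ {ratio_n:.1f}")
```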
Previously determined Au contents of mantle xenoliths from the NCC, mainly by the NiS fire assay method, showed a large range of 0.5–38 ppb, with a mean value of 5 ppb; this is distinctly different from other mantle domains worldwide, including those with strong mantle metasomatism (Fischer-Gödde et al., 2011; Saunders et al., 2018). Our high-precision new Au data are far lower than the previous values (Figs. 2 and 3; Fig. DR5), including those from the same localities (3–13 ppb Au; Zhang et al., 2008; Zheng et al., 2005).
Such a discrepancy most likely reflects data quality rather than sample heterogeneity (see the Data Repository). The new data are consistent with Cu and PGE contents in the same samples as well as other peridotites of variable fertility worldwide (Fig. 2; see the Data Repository). Therefore, the mantle xenoliths, reflecting metasomatized ancient SCLM and juvenile SCLM beneath the NCC, indicate no substantial enrichment of Au, Cu, or Pd contents. The NCC is thus unlikely to have been inherently rich in Au, and mantle metasomatism and replacement by juvenile lithospheric mantle may not have led to strong enrichment of Au in the SCLM. The 130–120 Ma basalts from the northern and southern margins of the NCC contain a mean value of 2.2 ppb Au, with a maximum of 4.3 ppb (0.4–4.3 ppb Au, n = 24; Figs. 3 and 4). Despite the much longer history of metasomatism, the 130–120 Ma basalts mostly display Au/Pd(N) of 3–5, i.e., only slightly higher than those for the <110 Ma basalts (2–4) and the fertile mantle (0.5–2). The Au contents and Au/Pd(N) of the 130–120 Ma basalts thus reflect limited enrichment of Au relative to PGEs and the low Au contents of the lithosphere beneath the eastern NCC (Fig. 3). This observation is remarkable because the northern and southern cratonic margins of the NCC were considered to have been strongly affected by multiple periods of subduction from 480 Ma to 130 Ma (Zhu et al., 2012; Wu et al., 2019) and to be essential for the Au deposits (Goldfarb and Groves, 2015; Zhu et al., 2015). All of the new data for the mantle xenoliths and basalts of the eastern NCC consistently indicate that there is no significant Au enrichment in the SCLM, despite extensive metasomatism and hydration. Mantle metasomatism and hydration did replenish a fraction of Au to the highly depleted SCLM, as reflected by the elevated Au/Pd(N), but the amount must have been limited (Figs. 2 and 3). EFFICIENT RELEASE OF GOLD INTO HYDROUS BASALTS Although the metasomatized SCLM does not show anomalous enrichment of Au, the hydrous 130–120 Ma basalts derived from the SCLM contain 2–3 ppb Au on average, which is 3–4 times higher than the values of asthenosphere-derived, <110 Ma basalts (Figs. 3 and 4). This is remarkable given the similarly low Au contents (<1–2 ppb) of the mantle source, as indicated by similar Au/Pd(N) (Figs. 3 and 4). Metasomatism led to high H₂O contents (>1000 ppm), S, C, and other volatiles, and elevated oxygen fugacity in the SCLM beneath the NCC (Geng et al., 2019a, 2019b; Xia et al., 2013). The 130–120 Ma basalts contain high MgO of 11–14 wt% and high water contents, which resulted from high degrees of partial melting of the metasomatized SCLM. Increasing H₂O contents and oxygen fugacity of the basalts from the metasomatized source could lead to a preferential transfer of Au into the magma (Botcharnikov et al., 2011). These basalts show enhanced Au and PGE contents (e.g., 0.1–0.5 ppb Os-Ir), and particularly radiogenic initial ¹⁸⁷Os/¹⁸⁸Os values (Huang et al., 2017). These data support the selective and efficient release of metals from the fusible fraction of the metasomatized SCLM, irrespective of the specific rock types, including the possible gold-rich veins in metasomatized peridotites (Tassara et al., 2017). High-degree hydrous melting thus promoted the release of the fusible components of the SCLM, together with Au, into the 130–120 Ma basalts.
In contrast, after the NCC destruction, the juvenile lithospheric and asthenospheric mantle was relatively volatile-poor (Zhu et al., 2012; Xia et al., 2017), and so <110 Ma basalts have low Au and PGE contents, like mid-ocean-ridge basalts (Figs. 3 and 4). We conclude that the relatively high Au contents of the 130–120 Ma, volatile-rich basalts mainly resulted from efficient extraction from metasomatized SCLM. A similar process likely also occurred for the giant Lihir gold deposit in Papua New Guinea, where the adjacent mantle source was strongly modified by subduction but metal contents remained low, e.g., only 0.04–1.29 ppb Au and 9–40 ppm Cu (McInnes et al., 1999), comparable to the metasomatized SCLM of the NCC. IMPLICATIONS FOR GIANT GOLD DEPOSITS IN THE NCC The 130–120 Ma basalts in the NCC provide key insights into the links in timing and mechanism among the metasomatized SCLM, hydrous mantle magmas, and ore-forming fluids that record a strong signature of mantle-derived volatiles (Fig. 4). The extension-induced thinning of the SCLM beneath the NCC and its conversion into juvenile lithosphere by upwelling asthenospheric mantle at ca. 130–120 Ma (Zhu et al., 2012; Wu et al., 2019) triggered the high degree of melting of the metasomatized SCLM. This resulted in the formation of hydrous S-, C-, Cl-, and Au-bearing, high-Mg# magmas, as reflected by the 130–120 Ma basalts that erupted almost coeval with the peak of gold mineralization. The rapid ascent and emplacement of the magmas along preexisting lithosphere-scale weak zones such as the Tanlu fault (Zhao et al., 2016) were facilitated by the preconcentration of hydrous mineral assemblages in these weak zones (Foley, 2008). With further magmatic-hydrothermal evolution, the gold that was initially in the hydrous magmas would have favorably partitioned into the exsolved fluids and would have been significantly enriched in the fluids by a factor of hundreds (Pokrovski et al., 2013). The fluids then preferentially transported Au, S, Cl, C, and noble gases along the translithospheric faults to second-order fault systems, where Au was deposited. This model explains the strong spatial and temporal association of Au deposits with metasomatized lithospheric sources of mafic dike swarms, the major association between Au deposits and extensional fault systems (Goldfarb and Santosh, 2014), and the substantial inheritance of mantle-derived volatile S, C, H, O (Mao et al., 2008) and noble gases (Zhu et al., 2015; Tan et al., 2018) in the auriferous fluids of ore bodies. Primitive basaltic magmas with 2–3 ppb Au, similar to those of the parental magmas (1.5–4 ppb Au) of the giant Bingham Canyon (Utah, USA) Cu-Au porphyry deposit (Grondahl and Zajacz, 2017), could have led to the formation of giant Au deposits such as those observed in the eastern NCC. Consequently, significant Au pre-enrichment in the SCLM is not a prerequisite for the formation of giant Au deposits. Extensive mantle metasomatism plays a key role in that it enables the efficient extraction of Au during subsequent melting of the metasomatized mantle. The metasomatic components are probably also essential to produce later auriferous fluids that are exsolved and lead to the giant Au deposits. The present work highlights the importance of mantle-derived, Au-bearing hydrous magmas in the origin of giant Au deposits. Further understanding of the detailed magmatic-hydrothermal evolution is required for a complete picture of the enrichment stages of Au.
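To illustrate why 2–3 ppb Au in primitive magmas can plausibly feed a >5000 t gold province, a rough, back-of-the-envelope mass balance is shown below. This calculation is not part of the original study; the extraction efficiency and melt density are assumed values chosen purely for illustration.

```python
def magma_volume_km3(gold_tonnes, melt_au_ppb, extraction_efficiency, density_g_cm3=2.8):
    """Magma volume needed to supply a given Au endowment.

    gold_tonnes           : Au contained in the deposits (t)
    melt_au_ppb           : Au content of the parental magma (ppb = ng/g)
    extraction_efficiency : assumed fraction of magmatic Au that ends up in ore
    density_g_cm3         : assumed melt density
    """
    gold_g = gold_tonnes * 1e6                                    # t -> g
    magma_g = gold_g / (melt_au_ppb * 1e-9 * extraction_efficiency)
    return magma_g / density_g_cm3 / 1e15                         # cm^3 -> km^3

# >5000 t Au, 2.5 ppb melts, and (assumed) 10% of the magmatic Au captured:
print(f"~{magma_volume_km3(5000, 2.5, 0.10):.0f} km^3 of magma")
```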
ACKNOWLEDGMENTS We thank Zhaochu Hu, Haihong Chen, Kang Chen, and Tao He for support in the laboratory; Zhuyin Chu and Jingao Liu for provision of some mantle xenoliths; and Jianwei Li and Xinfu Zhao for discussion. This project was based on previous contributions by numerous Chinese colleagues working on the evolution of the North China craton and gold deposits. The study was supported by the Chinese National Key Research and Development Program (2016YFC0600103), and National Natural Science Foundation of China (41722302, 41673027). We appreciate Jon Hronsky and two anonymous reviewers for constructive comments, and Chris Clark for careful editorial handling.
5,503.4
2020-02-01T00:00:00.000
[ "Geology" ]
Evaluation of Child–Computer Interaction Using Fitts’ Law: A Comparison between a Standard Computer Mouse and a Head Mouse This study evaluates and compares the suitability for child–computer interaction (CCI, the branch within human–computer interaction focused on interactive computer systems for children) of two devices: a standard computer mouse and the ENLAZA interface, a head mouse that measures the user’s head posture using an inertial sensor. A multidirectional pointing task was used to assess the motor performance and the users’ ability to learn such a task. The evaluation was based on the interpretation of the metrics derived from Fitts’ law. Ten children aged between 6 and 8 participated in this study. Participants performed a series of pre- and post-training tests for both input devices. After the experiments, data were analyzed and statistically compared. The results show that Fitts’ law can be used to detect changes in the learning process and assess the level of psychomotor development (by comparing the performance of adults and children). In addition, meaningful differences between the fine motor control (hand) and the gross motor control (head) were found by comparing the results of the interaction using the two devices. These findings suggest that Fitts’ law metrics offer a reliable and objective way of measuring the progress of physical training or therapy. Introduction Child-computer interaction (CCI) is the branch within human-computer interaction (HCI) that studies the design, implementation and use of interactive computer systems for children [1]. Research in this area has gained traction since the 1990s, mainly driven by interest in the use of technology in schools, for educational and communication purposes. An essential goal when designing human-computer interfaces for children is to make them "child friendly", by developing interactions designed to provide a natural feel and the sensation of control to children. These design requirements are even more relevant when the interfaces are focused on children with special needs. Augmentative and alternative communication (AAC) is the field that studies hardware and software resources that bridge the gap between computers or personal devices and people with disabilities. Cerebral palsy (CP) is the most common physical disability in childhood, affecting 2-3 per 1000 live births [2]. CP is an umbrella term that describes motor disorders caused by a lesion in the immature brain [3]. These disorders, which affect movement and posture, are often accompanied by cognitive or perceptive disorders that greatly hamper daily life activities, limiting the capabilities for communication and social relationships among children. According to population-based studies, one out of two children with CP has a speech disorder [4]. Therefore, alternative and augmentative interfaces could give such children the opportunity to improve their interaction with their physical and social environment. Children with CP often use alternative and augmentative input devices as substitutes for a standard computer mouse; trackballs, mechanical switches, joysticks or adapted keyboards are the most common devices [5]. Recently, touchscreen tablets have introduced a very cost-effective human-computer interface for children with motor disorders [6]. Other emerging technologies, such as gaze trackers or head mice, have become popular as a solution for many users with CP [7].
The ENLAZA interface is an alternative input device that replaces a standard computer mouse. It is based on a wearable inertial sensor that measures the head posture, which is translated into mouse pointer movements. This interface has been previously used for different purposes: input device [8], physical rehabilitation [9], and biomechanical assessment [10]. One of the first works that analyzed the ENLAZA interface used Fitts' law to extract the throughput (TP) for healthy adult subjects [11]. Fitts' law (1954) is the first mathematical model that describes the trade-off between speed and accuracy during reaching tasks [12]. It is defined as shown in Equation (1): ID = log2(2A/W), (1) where ID is the index of difficulty, and A and W are the distance between targets (measured from the center of the targets) and the width of each target, respectively. This law was later adapted according to the Shannon formulation by Mackenzie [13], and was improved regarding its accuracy following Crossman's research [14], giving rise to the current model. The main metric, the TP, is calculated as Equation (2): TP = ID_e/MT, (2) where MT is the movement time averaged over a sequence of trials, in seconds (s), and ID_e is the effective index of difficulty of the selected task, in bits, obtained according to Equation (3): ID_e = log2(A/W_e + 1), (3) where W_e is the effective target width (it replaces W from the original formulation of Fitts' law) and is calculated from the standard deviation (SD) in the selection coordinates gathered over a sequence of trials for a particular A-W condition, as shown in Equation (4): W_e = 4.133 × SD. (4) In addition, the linear relationship between MT and ID_e can be expressed as stated in Equation (5): MT = a + b × ID_e, (5) where a and b are constants that depend on the choice of the input device and are usually determined empirically by regression analysis. Constant a defines the intersection on the y axis and can be interpreted as a delay. Constant b is a slope describing the acceleration. Both parameters show the linear dependency in Fitts' law. Finally, it is important to highlight the index of performance (IP), defined as Equation (6): IP = 1/b, (6) which allows for the comparison of different pointing devices; the higher its value, the less MT is affected by increases in ID_e. Over the last 30 years, Fitts' law and its subsequent adaptations have been widely used in HCI to evaluate the performance of input devices [13], as well as in other research fields, including kinematics and motor behavior [15,16]. In the field of HCI, the standard ISO 9241-9 "Ergonomic requirements for office work with visual display terminals (VDTs)-Part 9: Requirements for non-keyboard input devices" (revised by ISO 9241-400:2007) [17] indicates that the TP is a useful factor to assess the usability of input devices. In fact, this metric has been widely used to evaluate alternative input devices. Bernardos et al. [18] obtained a TP of 2.04 bits/s for a head-based interaction using a Kinect device. Roig et al. [19] evaluated a system based on head tracking using a mobile device's front camera, obtaining a TP of 1.42 bits/s. Raya et al. [20] studied the TP of the ENLAZA device, obtaining a value of 1.8 ± 0.4 bits/s. These results agree with those presented in other works in the literature [13,21,22]. In all of these studies, the participants were adults. In the case of children, Fitts' law has mainly been used to investigate how children control their movements [23].
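Putting Equations (2)–(4) together, the throughput for one A-W condition can be computed directly from the endpoint coordinates and movement times of a sequence of trials. The sketch below assumes one-dimensional endpoint deviations along the task axis and uses illustrative input arrays; it is not the FittsStudy implementation.

```python
import math
import statistics

def throughput(A, movement_times_s, endpoint_devs_px):
    """Effective throughput (bits/s) for one A-W condition.

    A                : nominal movement amplitude (px)
    movement_times_s : movement time of each trial (s)
    endpoint_devs_px : signed deviation of each selection from the target centre (px)
    """
    sd = statistics.stdev(endpoint_devs_px)
    w_e = 4.133 * sd                         # Eq. (4): effective target width
    id_e = math.log2(A / w_e + 1)            # Eq. (3): effective index of difficulty
    mt = statistics.mean(movement_times_s)   # mean movement time over the sequence
    return id_e / mt                         # Eq. (2)

# Illustrative trial data for A = 256 px:
mts = [0.9, 1.1, 1.0, 0.95, 1.05]
devs = [5.0, -8.0, 3.0, -2.0, 6.0]
print(f"TP ≈ {throughput(256, mts, devs):.2f} bits/s")
```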
The proposed task usually consists of performing rapid pointing movements between two targets with different combinations of distances and widths (which determine the index of difficulty of the task). These tasks can be performed in either the real world or a simulated one, which is why Fitts' law is useful for evaluating motor behavior and HCI. In both scenarios, Fitts' law defines a linear relation between MT and ID_e, whose slope usually quantifies the amount of information processed per second. Recent studies have investigated the user experience of children using touchscreen tablets, reporting a TP of around 2.3-2.5 bits/s [24], and complemented these findings with some recommendations for professionals who use these devices for teaching [25]. In the field of human behavior, it has been proven that Fitts' law can be used to evaluate the maturity of the central nervous system (CNS), which processes the information needed for motor skills and drives the motor system [26,27]. The quicker this information is processed, the better the movements can be executed. Therefore, measuring the time and spatial accuracy of movements together provides relevant information about the development of sensorimotor control. Hertzum et al. [28] studied the effects of age on pointing performance with mouse and touchscreens in groups of young people (12-14 years), adults (25-33 years) and elderly people (61-69 years). They showed that adult participants performed better than both young and elderly participants, but they did not provide the TP for every group. Regarding the use among people with disabilities, Gump et al. [29] applied Fitts' law to individuals with CP to characterize motor behavior during aiming tasks, and Bertucco et al. [30] characterized dystonia of children with CP in reaching tasks using an iPad™. Hay et al. [23] demonstrated that the trade-off between speed and accuracy improves as children grow up, based on analysis of children aged 5 to 11. Schneiberg et al. [31] concluded that children between 8 and 10 years old had outcome measures similar to adults, whereas younger children showed immature patterns during reaching tasks. The results arising from the aforementioned studies suggest that ages between 0 and 8 years deserve special attention when designing and using HCIs. The objective of this study is to evaluate two CCI devices, a standard computer mouse versus the ENLAZA interface, by measuring the motor performance and the learning process of children in early stages of fine motor development performing a multidirectional pointing task. This evaluation is based on the interpretation of the metrics derived from Fitts' law: TP (mainly), MT and IP. Despite the computer mouse being the gold standard in HCI, only a few studies [32] have measured the TP when the users are children. According to the previously cited literature, the slope of the linear equation of MT versus ID_e quantifies the amount of information processed per second. For this study, we found that the linear relation between MT and ID_e is maintained, but we also obtained meaningful differences between those slopes for hand control and head control, since the former uses fine motor control, and the latter uses gross motor control. Finally, the study explores some conclusions that could be drawn from the metrics associated with Fitts' law regarding the performance of a pointing task.
These conclusions obtained for children can then be compared with those previously published for adults [13,[18][19][20][21][22]. Materials and Methods This study compares two CCI devices, a standard computer mouse versus the previously described head mouse (ENLAZA interface), by measuring the motor performance and the learning process while performing multidirectional pointing tasks. Participants Ten healthy children, who were randomly recruited from among all the students in their first years of primary school from the Colegio CEU Montepríncipe in Madrid, participated in the study, in order to obtain a representative sample. Ages ranged from 6 to 8 years. All the children had previous experience in the use of a standard mouse, but no one had experience in the use of any kind of head mouse. All of the children's parents and the head office of the school gave consent for participation. Experimental Setting Participants sat in a chair in front of a table, where there was a 17-inch screen laptop lying perpendicular, centered in front of their heads. The laptop had a standard mouse connected, which lay on top of the table. The head mouse was fixed to the forehead of the participants thanks to a wide rubber band that firmly held the device to prevent it from moving relative to their heads during each experiment. All of the participants were asked to sit comfortably but with their back straight, trying not to change their trunk position during the whole experiment. In addition, the room where the experiments took place was brightly lit and quiet enough to prevent participants from becoming distracted during the tasks. Test ISO 9241-411 [33] describes a multi-directional test that evaluates point-select actions in different directions through a series of pointing tasks. For these tasks, the user must move the cursor, trying to reach several circular targets (of diameter W each) which were equally spaced in a circular layout of diameter A (so each movement's amplitude is A) (see Figure 1). The user must start the task with the cursor in the center of the circular layout defined by the set of targets, and the next target will be determined by a change in color (grey to blue), chosen so it can be easily perceived by people with color blindness.
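For reference, the multidirectional layout described above (targets of width W equally spaced on a circle of diameter A) can be generated with a few lines of trigonometry. This is only a sketch of the geometry, not the FittsStudy code, and the target-ordering convention may differ from the one used in the experiments.

```python
import math

def circular_targets(n_targets, A, centre=(0.0, 0.0)):
    """Centres of n targets equally spaced on a circle of diameter A (px)."""
    cx, cy = centre
    r = A / 2.0
    return [(cx + r * math.cos(2 * math.pi * k / n_targets),
             cy + r * math.sin(2 * math.pi * k / n_targets))
            for k in range(n_targets)]

# Ten targets on a 512 px layout (one of the A conditions reported later):
for x, y in circular_targets(10, 512):
    print(f"({x:7.1f}, {y:7.1f})")
```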
This test, set up and played on the laptop screen using the open source software FittsStudy [34], was performed for a standard mouse and the head mouse, in two consecutive executions. It was randomly determined whether the standard mouse or the head mouse was used first. Each test displayed a different sequence of 10 targets, randomly determined. The next target to be reached is highlighted and, if the user misses it, i.e., if the target was not correctly selected, the highlight color changes to red. Each sequence (test) was repeated 13 times: the first 3 times for practicing, and the last 10 for testing. This set of 13 sequences was performed 4 times, each with different A-W conditions (which led to 4 different indexes of difficulty, ID), one after another. The order in which the 4 conditions were chosen was randomly determined (see Table 1). Participants were asked to select targets as quickly and accurately as possible, but were also told to slow down after more than 3 consecutive targets were missed in the same test. They could rest as much as they wanted between tests. To study the influence of the learning process when carrying out the described test, each participant's data were recorded from the first session (called pre-training in the analysis that follows, when the participants had never performed the described test before) to after 3 days of training (called post-training). All participants undertook the same training regime. Data Analysis and Statistics After conducting the described experiments, the associated data, previously generated by the software FittsStudy, were exported and analyzed, using both a custom spreadsheet and the RStudio software [35]. The variables analyzed after the experiments were TP, MT, IP, and error rate. The analysis was mainly focused on detecting the differences in the users' performance for each input device (standard mouse and head mouse) and studying the effect of the learning process before and after the training. First, to test Fitts' law, the least-squares method was used to carry out linear regression. The bivariate correlation (Pearson correlation coefficient) was also calculated for each case. In addition, TP, MT, and IP were calculated and compared for paired samples of each input device for the pre-training and post-training sessions.
On the one hand, the appropriate mean values and their standard deviation were calculated. On the other hand, each comparison started with a series of normality tests on the differences (pre-training and post-training) for both the standard mouse and the head mouse (Shapiro-Wilk test, skew S and kurtosis K standardized parameters, and Q-Q plots with 0.95 confidence intervals). Once the assumption of normality was checked, a t-test for paired samples was performed in each case. Moreover, error rate comparisons were performed with a Wilcoxon signed-rank test for paired samples, as the data were not normally distributed in this case. Finally, the error rate separated for both W and A values was also calculated and compared for each input device. TP mean values (in bits/s) and their corresponding standard deviations for each device are presented in Figure 3. For the standard mouse, the results show values of 2.73 ± 0.27 and 3.09 ± 0.48, for pre- and post-training, respectively (blue bars); and 0.89 ± 0.21 and 1.08 ± 0.22 for the head mouse for pre- and post-training, respectively (red bars). The comparison study of the learning effect for TP (pre-training vs. post-training) showed a statistically significant result for both the standard mouse (p = 0.02) and the head mouse (p = 0.0004). The difference was much more remarkable in the case of the head mouse. Regarding the MT evolution, presented in Figure 4, the obtained results (in ms) for the standard mouse were 1014.25 ± 146.91 and 973.90 ± 165.52 for pre- and post-training, respectively (blue bars); and 2910.12 ± 738.53 and 2588.83 ± 709.95 for the head mouse for pre- and post-training, respectively (red bars). Although a slightly decreasing tendency in MT (greater in the case of the head mouse) can be seen in Figure 4, there were not statistically significant differences, either in the standard mouse (p = 0.47) or in the head mouse (p = 0.1) comparisons.
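The pre- versus post-training comparisons described above follow a standard paired-samples workflow: a normality check on the differences, then a paired t-test, or a Wilcoxon signed-rank test when normality fails (as for the error rates). The minimal SciPy sketch below illustrates that workflow only; it is not the authors' RStudio analysis, and the arrays are placeholders for the ten per-participant values.

```python
from scipy import stats

def compare_pre_post(pre, post, alpha=0.05):
    """Paired comparison of pre- vs post-training scores for one device/metric."""
    diffs = [b - a for a, b in zip(pre, post)]
    p_norm = stats.shapiro(diffs).pvalue                 # normality of the differences
    if p_norm > alpha:
        res, test = stats.ttest_rel(pre, post), "paired t-test"
    else:
        res, test = stats.wilcoxon(pre, post), "Wilcoxon signed-rank"
    return test, res.statistic, res.pvalue

# Placeholder TP values (bits/s) for ten participants:
tp_pre  = [2.5, 2.8, 2.6, 2.9, 2.7, 2.4, 2.8, 2.9, 2.6, 2.7]
tp_post = [3.0, 3.2, 2.9, 3.4, 3.1, 2.7, 3.2, 3.3, 2.8, 3.0]
print(compare_pre_post(tp_pre, tp_post))
```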
The results for IP, presented in Figure 5, show very few changes in this parameter for both the standard mouse and the head mouse comparing pre- and post-training sessions. Numerically, the results (in bits/s) were as follows: for the standard mouse 5.10 ± 2.40 and 4.93 ± 1.15 for pre- and post-training, respectively (blue bars), and for the head mouse 0.86 ± 0.22 and 0.99 ± 0.47 for pre- and post-training, respectively (red bars). There were not statistically significant differences, either in the standard mouse (p = 0.68) or in the head mouse (p = 0.35) comparisons. Error rate results, presented in Figure 6, show large standard deviation values, which suggests large data dispersion. Nevertheless, the error rate means (in %) for the standard mouse, 10.00 ± 10.27 and 2.50 ± 4.08 for pre- and post-training, respectively (blue bars), are clearly smaller than those for the head mouse, 20.75 ± 10.80 and 13.25 ± 10.34 for pre- and post-training, respectively (red bars).
Regarding the evolution of this parameter for each device comparing pre- and post-training sessions, and although the downward trend is clearly observed, a statistically significant difference only shows for the head mouse (p = 0.09 for the head mouse versus p = 0.12 for the standard mouse). Finally, in Figure 7, we present the comparison of the error rate according to both W and A values. As expected, these results show that, for both input devices, the error rate is higher as W decreases and A increases. The percentage of error rate is smaller for the standard mouse in any case, which matches with the results obtained in the previous comparison (see Figure 6). Standard deviations are also remarkably large in this case, and so is the consequent data dispersion. Results (in %) for the standard mouse and the head mouse were: 7.25 ± 13.20 and 19.75 ± 9.49, respectively, for W = 32 px (blue bars, Figure 7a), 3.00 ± 8.83 and 14.00 ± 16.61, respectively, for W = 96 px (red bars, Figure 7a), 4.75 ± 10.86 and 9.00 ± 9.82, respectively, for A = 256 px (blue bars, Figure 7b), and 5.50 ± 11.97 and 24.75 ± 20.13, respectively, for A = 512 px (red bars, Figure 7b). Discussion Research on CCI has grown considerably in the last three decades, mainly motivated by interest in educational applications.
Different studies state that the usability of computers is strongly dependent on motor skills, which suggests that the children's needs must be carefully studied and considered. New interactive systems (i.e., touchscreens, tablets, or smartphones) have introduced scenarios with a greater usability, facilitating the use of computers by children. Over the decades, Fitts' law has been used to evaluate HCI. However, there are a lack of studies evaluating CCI using Fitts' law and the tests proposed in the standard ISO 9241. Fitts' law states that the amount of time required to move a pointer to a target area is a function (linear dependence) of the index of difficulty, which, in turn, depends on the distance to the target and the target size. Regarding our results, Figure 2 depicts the values of the MT (mean values ± standard deviation) versus the ID e using a standard mouse and the head mouse. The subsequent linear regression adjustment demonstrates that the performance for all conditions follows the described Fitts' model. Therefore, it can be concluded that the metric proposed by Fitts' law, the TP, is reliable to evaluate the use of a standard mouse and the head mouse by children with ages ranged between 6 and 8. In addition, Figure 2 also shows that the slopes of the regression lines for the tests using the head mouse are higher than those for a standard mouse. Since the slope of that linear equation usually quantifies the amount of information processed per second, this result makes possible the identification of differences between both motor behavior and control of hand and head. As expected, the former is more precise than the latter. This result suggests that the slope of the linear equation might be a useful metric to evaluate the motor performance of a training regime or physical therapy. Therefore, the TP evaluation might complement gross motor functional scales, by enabling the detection of finer changes in motor performance. According to Gallahue, who deeply studied the psychomotor developmental stages [36], children around 6-7 begin to acquire fine motors skills (fine and precise movements) and it is only around 14, once those fine motor skills are well established, that they begin to develop special skills. This insight could explain why the mean TP values obtained for a standard mouse in our experiments (2.7 and 3.1 bits/s for the pre-and post-training sessions, respectively, as shown in Figure 3) are smaller than those expected for healthy adults (from 3.7 to 4.5 bits/s) [21]. An interesting approach for subsequent studies could be to analyze changes in the TP values across all age ranges of psychomotor development (as described by Gallahue) since, although the normal ranges for the TP parameter are widely studied and established for adults, there is a lack of scientific evidence describing this parameter in children. However, the obtained mean TP values for the head mouse, starting at 0.89 bits/s and climbing to 1 bit/s after the training process, are slightly low, but are quite close to those expected in healthy adults using a similar input device (0.92 to 1.93 bits/s) [20,21]. This result could lead us to conclude that, as head movements are not usually classified as fine motor skills (mostly related with the synchronization of hands and fingers), but as gross motor skills instead, they are mainly developed among children between 3 and 6 years of age [36]. Therefore, the performance of children older than this age range could be similar to that of healthy adults. 
It is important to highlight that, as indicated by the results, the values of TP are statistically significantly higher after training for both the standard mouse and the head mouse. The evolution was more noticeable in the case of the head mouse, as the children had no previous experience in the use of this device. However, these differences are not clearly observable for the MT and the IP. These findings suggest that the TP is sensitive and therefore a good metric for measuring and characterizing (not only qualitatively, but also quantitatively) changes in motor behavior after physical training or therapy focused on improving motor control. The W and A values used for the error-rate analysis were selected according to the gold standard [37,38]. As expected, the error rate is higher as W decreases and A increases, and the mean values of the error rate are clearly higher for the head mouse. In fact, even a small increase in the ID of the movement (a narrower W or a larger A) is enough to greatly increase the error rate. Taking this into account, it would be reasonable to adjust W and A to obtain error rates similar to those of the gold standard. These changes would mean a decrease in the ID of the movements, which, predictably, would lead to performance improvements. Regarding the limitations of the study, it should be noted that both the number of participants and their age range were small. Nevertheless, the sample was broad enough to achieve our goals. Future research directions should be focused on (1) adjusting A and W values according to the previous proposal, with the aim of achieving performance improvements, and (2) designing a test similar to the one proposed in this paper, but using only the ENLAZA device to evaluate motor performance and the learning process of children with CP undertaking physical therapy sessions. Funding: This research was funded by the Spanish Government (FEDER/Ministry of Science and Innovation/AEI), grant number RTI2018-097122-A-I00. Informed Consent Statement: All the children, their parents or legal guardians, and the head office of Colegio San Pablo CEU Montepríncipe gave consent for participation in the study.
7,613.2
2021-05-31T00:00:00.000
[ "Computer Science" ]
Ellipsometric Study on the Uniformity of Al:ZnO Thin Films Deposited Using DC Sputtering at Room Temperature over Large Areas Al-doped ZnO combines high transparency and conductivity with abundant and non-toxic elements, making it suitable for optoelectronic devices with large-scale applications. In order to check the quality of the material deposited over large areas, spectroscopic ellipsometry is a powerful technique that allows the determination of various optical and electrical parameters by applying suitable oscillator models. This technique is used here to obtain sheet resistance and visible transmittance data at several equidistant points of Al:ZnO thin films deposited using DC sputtering on 15 cm × 15 cm glass substrates. Independent measurements using other optical (spectrophotometry) and electrical (four-point probe) methods show analogous visible transmittance but somewhat higher resistance values than those obtained with ellipsometry, which is explained by the contribution of grain-boundary scattering compared to the in-grain properties provided by ellipsometry. However, the mapping of the data gives a similar spatial distribution for the different types of measurement, therefore proving the capacity of ellipsometry to study, with a single tool, the uniformity of the optical and electrical characteristics over large areas. Introduction ZnO is a transparent conductive oxide (TCO), showing high optical transmittance in the visible range and low electrical resistivity in its native structure [1]. The increment in n-type conductivity is relatively easy to achieve using excess Zn or doping with group-III elements (Al, Ga, and In) as Zn substituents [2]. Aluminum doping is of particular interest, because it results in a highly conductive material (Al:ZnO or AZO) constituted of abundant and non-toxic elements, making it a suitable TCO alternative for diverse optoelectronic devices with large-scale deployment, such as tunable color filters [3], smart windows [4] and photovoltaic solar cells [5]. Typically, the above applications demand a visible transmittance of T_V = 80-90% and a sheet resistance of R_s = 10-30 Ω/sq [6], with a figure of merit defined as ϕ = T_V^10/R_s [7], which is used to compare different TCOs. AZO thin films have been prepared using various chemical and physical techniques: electrodeposition [2], sol-gel [8], spray pyrolysis [9], evaporation [10], sputtering [11], etc. More concretely, direct current (DC) magnetron sputtering can produce transparent and conductive AZO layers at room temperature [11] on heat-sensitive substrates [12], while most of the other techniques require high substrate temperatures [9,10] or thermal post-treatment above 300 °C [2,8] to achieve a good-quality material. Previous works have shown the influence of process parameters on the characteristics of AZO thin films prepared using DC sputtering on unheated glass substrates [13,14].
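As a quick numerical illustration of Haacke's figure of merit introduced above, the following minimal sketch evaluates ϕ for the target values quoted later in this work (T_V ≈ 85%, R_s ≈ 20 Ω/sq); it is only an illustrative calculation, not part of the original analysis.

```python
def haacke_figure_of_merit(T_V, R_s):
    """Haacke figure of merit phi = T_V**10 / R_s, in Ohm**-1.

    T_V : visible transmittance as a fraction (0.85 for 85%)
    R_s : sheet resistance in Ohm/sq
    """
    return T_V ** 10 / R_s

# Target values quoted for the AZO films in this work
print(f"phi = {haacke_figure_of_merit(0.85, 20):.4f} Ohm^-1")  # roughly 0.0098 Ohm^-1
```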
Another important technical challenge is to achieve high uniformity of the AZO characteristics over a large area. Typical characterization requires profilometry to determine surface roughness and film thickness; spectrophotometry to determine optical transmittance; and four-point probe (FPP) electrical measurements to obtain sheet resistance [13]. Spectroscopic ellipsometry is another characterization technique that has been used to map the film thickness on different substrate areas [11,15], also allowing the simultaneous determination of several optical and electrical parameters by applying suitable oscillator models [16]. In this work, AZO thin films have been deposited using DC sputtering on unheated glass with a 15 cm × 15 cm area, and their uniformity has been analyzed with variable-angle spectroscopic ellipsometry performed on several points throughout the glass substrate. The optical constants (n, k) and electrical parameters (free carrier concentration, mobility) have been accurately obtained using a combination of the Drude oscillator model and the Bruggeman effective medium approximation, including the simulated substrate and a rough top layer in the optical model, as reported by other authors [4,15]. The ellipsometric data maps are in good agreement with the analogous maps made with visible transmittances determined using spectrophotometry and sheet resistances given by FPP electrical measurements. Therefore, we show the potential of using ellipsometry and the proposed optical models to map the optical and electrical characteristics (and thus the figure of merit) of AZO coatings for large-area applications.

Materials and Methods AZO thin films were prepared on 15 cm × 15 cm × 2 mm soda lime glasses (SLG) with DC magnetron sputtering at room temperature, using a homemade vacuum deposition system. The substrate was placed in a vertical stainless steel frame and moved in front of a rectangular target (45 cm high, 13 cm wide, 6 mm thick) consisting of 98 wt% ZnO and 2 wt% Al2O3. After the chamber was evacuated below 1 × 10⁻⁴ Pa, high-purity Ar and O2 were introduced until they reached a process pressure of 4 × 10⁻¹ Pa. Then, a DC power was applied to the target (set to 1.7 W/cm²) for 7 min to obtain a film thickness of 0.80 ± 0.04 µm, as measured after deposition with a Dektak 3030 profilometer (Veeco, Herzogenrath, Germany). These preparation parameters were selected according to previous works [12,13], in order to obtain AZO layers with the desired visible transmittance (T_V ~85%) and sheet resistance (R_s ~20 Ω/sq). Optical transmittance was measured with unpolarized light at normal incidence, using an Avantes AS-5216 spectrophotometer (Avantes, Apeldoorn, The Netherlands). Sheet resistance was obtained with a Signatone FPP head, combined with a Keithley 2450 Source-Measurement Unit (Keithley Instruments, Germering, Germany). These optical and electrical data are compared with the information extracted from a Semilab SE-2000 Spectroscopic Ellipsometer [17] (Semilab Inc., Prielle Kornéli, Hungary). To evaluate uniformity, all the measurements were performed at several points placed 3 cm from each other on the 15 cm × 15 cm sample area.
The ellipsometric parameters (Ψ, Δ), defined as the ratio of the reflection coefficients for p- and s-polarizations [16], R_p/R_s = tan(Ψ)exp(iΔ), were acquired in the wavelength range λ = 300-2100 nm at three incident angles Φ_0 = 55, 60 and 65°. In order to extract useful information, the structure of the sample and the corresponding optical dispersions must be modeled, taking into account that the ellipsometric model must distinguish the glass substrate from the AZO features, and that AZO depth heterogeneities can be considered by introducing different sublayers in the simulation [18]. The model supposes a parallel multilayer structure consisting of homogeneous, isotropic phases represented by their respective thickness d_j and complex refractive index N_j = n_j − ik_j, with n_j as the real part of the refractive index and k_j the extinction coefficient. During the simulation, the theoretically calculated ratios (R_p/R_s)_cal are fitted to the measured ratios (R_p/R_s)_meas, which depend on the incident angle Φ_0, the photon energy E = hc/λ, and implicitly on the characteristics of the phases used in the model: tan(Ψ)exp(iΔ) = f(Φ_0, E, n_j, k_j, d_j). The model parameters are thus obtained by minimizing the error function between the calculated and measured ratios [19]. The quality of the fit is assessed using the coefficient of determination (r²) [17], which measures the percentage of variance in the dependent variables that the independent variables explain collectively: r² = (variance explained by the model)/(total variance).

Results and Discussion Figure 1 shows the ellipsometric parameters (Ψ, Δ) acquired at three incident angles of 55, 60 and 65° on a point of the sample. These angles gave a high R_s/R_p ratio, below the Brewster angle at which R_p reaches zero [20]. Previously, we had measured and modeled the bare SLG substrate to obtain suitable n_SLG and k_SLG values to feed into the simulation of the SLG/AZO system. Next, the AZO characteristics were modeled assuming a compact layer and a rough top layer, with a combination of the Drude oscillator model (for the AZO compact layer) [15] and the Bruggeman effective medium approximation (for the rough layer, considered as a 50/50 vol% mixture of AZO and void) [18]. The coefficient of determination for this two-layer model was r² = 91%. Subsequently, the same experimental data were simulated assuming some depth heterogeneity of the AZO characteristics, with a first compact layer close to the substrate (AZO1), a second compact layer (AZO2) and the rough top layer (AZO2 + void), which gave a better r² = 96%. Although a somewhat higher coefficient of determination could be achieved with a four-layer model (r² = 98%), the results do not make physical sense due to abnormally high conductivity values (σ > 10⁶ S/cm) at some points. Therefore, the three-layer model is considered optimum. It should be noted that the simulated spectra start at λ = 400 nm because these models do not take into account the fundamental absorption that occurs at wavelengths below the semiconductor bandgap, which is located around 350 nm [13].
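To make the relation tan(Ψ)exp(iΔ) = R_p/R_s and the forward model behind the fit more concrete, the following minimal sketch computes the ellipsometric angles of a single homogeneous film on a substrate using the standard Fresnel/Airy formulas; the refractive indices, thickness and wavelength are placeholder values and not the fitted AZO parameters or the Semilab model.

```python
import numpy as np

def fresnel(N_i, N_t, cos_i, cos_t, pol):
    """Fresnel reflection coefficient for a single interface (p or s polarization)."""
    if pol == "p":
        return (N_t * cos_i - N_i * cos_t) / (N_t * cos_i + N_i * cos_t)
    return (N_i * cos_i - N_t * cos_t) / (N_i * cos_i + N_t * cos_t)

def psi_delta(N0, N1, N2, d_nm, wl_nm, phi0_deg):
    """Ellipsometric angles (degrees) of an ambient/film/substrate stack (Airy formula)."""
    phi0 = np.deg2rad(phi0_deg)
    cos0 = np.cos(phi0)
    # Snell's law, written for possibly complex refractive indices
    cos1 = np.sqrt(1 - (N0 * np.sin(phi0) / N1) ** 2)
    cos2 = np.sqrt(1 - (N0 * np.sin(phi0) / N2) ** 2)
    beta = 2 * np.pi * d_nm * N1 * cos1 / wl_nm   # film phase thickness
    rho = {}
    for pol in ("p", "s"):
        r01 = fresnel(N0, N1, cos0, cos1, pol)
        r12 = fresnel(N1, N2, cos1, cos2, pol)
        rho[pol] = (r01 + r12 * np.exp(-2j * beta)) / (1 + r01 * r12 * np.exp(-2j * beta))
    ratio = rho["p"] / rho["s"]                   # tan(Psi) * exp(i * Delta)
    return np.degrees(np.arctan(np.abs(ratio))), np.degrees(np.angle(ratio))

# Placeholder optical constants: air / AZO-like film / glass-like substrate
psi, delta = psi_delta(N0=1.0, N1=1.8 - 0.01j, N2=1.5 + 0j, d_nm=800, wl_nm=600, phi0_deg=60)
print(f"Psi = {psi:.1f} deg, Delta = {delta:.1f} deg")
```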
The optical parameters derived from the three-layer model are illustrated in Figure 2. It can be seen that the refractive index is above 1.7 and the extinction coefficient is below 0.02 in the visible region (λ = 400-800 nm), as reported for other AZO thin films [16,21].
Both n and k vary slowly up to λ = 1200 nm, but at larger wavelengths n decreases and correspondingly k increases sharply. The increment of k in the near infrared indicates the onset of reflection from the free carrier plasma [21], where the material enters into a metallic-like regime. The higher values of n and k obtained for AZO1, with respect to AZO2, indicate that the material initially grows with a better quality (denser and more conductive) in the region closest to the glass substrate, lowering somewhat its density and conductivity when the deposition time increases.

The Drude oscillator is used to describe the electrical conduction of free carriers in semiconductor materials [22]. From the analysis of the ellipsometric data, the dimensionless real and imaginary parts of the dielectric function corresponding to each conductive phase (j) are formulated as follows [3,17]:

ε1,j(E) = ε∞ − E_P²/(E² + E_Γ²) and ε2,j(E) = E_P² E_Γ/[E(E² + E_Γ²)],

where E_P (eV units) and E_Γ (eV units) are the plasma energy and the broadening, which is connected with the scattering frequency. The electrical conductivity, free carrier concentration and mobility are obtained using:

σ_j = ε0 E_P²/(ħ E_Γ), N_j = ε0 m* E_P²/(ħ² e²), and µ_j = e ħ/(m* E_Γ),

where ε0 (55.263 × 10⁶ e V⁻¹ m⁻¹) is the free-space permittivity, ħ (6.582 × 10⁻¹⁶ eV s) is the reduced Planck constant, e is the electron charge and m* denotes the scalar effective mass of carriers, which is assumed to be 0.25 m_e (m_e = 0.511 × 10⁶ eV/c², the electron rest mass) for AZO according to the literature [23]. Furthermore, taking the simulated layer thickness for each point (d_j), the sheet resistance (Ω units) is calculated as R_j = 1/(σ_j d_j).
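A minimal sketch of the Drude-parameter conversion and parallel sheet-resistance calculation described above, written in SI units; the plasma energy and broadening used in the example are plausible values for AZO chosen for illustration, not the values fitted in this work.

```python
# Physical constants (SI units)
EPS0 = 8.854e-12   # vacuum permittivity, F/m
HBAR = 1.055e-34   # reduced Planck constant, J s
Q    = 1.602e-19   # elementary charge, C
ME   = 9.109e-31   # electron rest mass, kg

def drude_to_transport(E_P_eV, E_G_eV, m_eff=0.25):
    """Convert fitted Drude parameters (plasma energy E_P and broadening E_G, in eV)
    into conductivity (S/cm), carrier concentration (cm^-3) and mobility (cm^2/Vs)."""
    m_star = m_eff * ME
    omega_p = E_P_eV * Q / HBAR                  # plasma frequency, rad/s
    gamma = E_G_eV * Q / HBAR                    # scattering rate, rad/s
    n_carr = EPS0 * m_star * omega_p**2 / Q**2   # carriers per m^3
    mobility = Q / (m_star * gamma)              # m^2/(V s)
    sigma = n_carr * Q * mobility                # S/m
    return sigma * 1e-2, n_carr * 1e-6, mobility * 1e4

def sheet_resistance(layers):
    """Sheet resistance (Ohm/sq) of conductive layers connected in parallel,
    given as (conductivity in S/cm, thickness in cm) pairs."""
    return 1.0 / sum(sig * d for sig, d in layers)

# Plausible (not fitted) example for a single AZO layer of 0.8 um thickness
sigma, n_carr, mobility = drude_to_transport(E_P_eV=1.0, E_G_eV=0.18)
print(f"sigma = {sigma:.0f} S/cm, N = {n_carr:.2e} cm^-3, mu = {mobility:.1f} cm^2/Vs")
print(f"R_s = {sheet_resistance([(sigma, 0.8e-4)]):.1f} Ohm/sq")
```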
The evolution of conductivity and mobility with the carrier concentration is represented in Figure 3, which shows the data obtained for the two conductive phases (AZO1 and AZO2, in the three-layer model) at various points in the sample. Despite the considerable dispersion of values, it is observed that the mobility seems to be proportional to N^(−2/3) and the conductivity proportional to N^(1/3), which corresponds to scattering by ionized impurities [24]. This is reasonable considering that optical measurements essentially provide in-grain properties, and that scattering by ionized impurities is an intrinsic property of the material. In contrast, electrical methods include inter-grain properties, measuring only the free carriers that can overcome the potential barrier at grain boundaries [23].

For each measured point in the sample, the ellipsometric fit for the three-layer model gives the thickness values that are plotted in Figure 4 as a function of the respective total thickness. The figure includes a contour map of the total thickness on the 15 cm × 15 cm sample area. The thickness variation is 6%, of the same order as reported for other TCOs on smaller substrate areas [15,18]. The map shows that the total thickness is greater at the upper and lower edges, due to a greater thickness of AZO2 and especially of the top layer (AZO2 + void), which denotes an increase in surface roughness. Such an increase in thickness and roughness occurs at the edges of the sample that are attached to the substrate-holder frame. It may be due to some reflection of the sputtering plasma on the metallic frame, because the reflected particles reach the substrate with a lower energy, which reduces their diffusion during film growth [25]. In contrast, the lateral edges of the sample remain free of the frame and show better homogeneity.

Taking the conductivity and thickness data determined by the optical dispersion model, the overall sheet resistance is calculated assuming the parallel connection of the conductive phases [18]: R_op = 1/(σ_AZO1 d_AZO1 + σ_AZO2 d_AZO2). It is mapped in Figure 5, as is the sheet resistance acquired using FPP electrical measurements (R_el) at the different points of the sample. Higher resistances are observed at the horizontal edges, in relation to the greater thickness of AZO2 and the top layer (AZO2 + void), which have a lower conductivity than AZO1, based on the data in Figure 3.
The top layer thickness represents the surface roughness, and it is known that increased surface roughness contributes to increased resistivity [26]. At the same sample point, the values of R_op and R_el differ due to a discrepancy in the conductivities determined using optical or electrical measurements. Figure 6 shows these data (σ_op = 1/(R_op d_total) and σ_el = 1/(R_el d_total)) as a function of the total film thickness. It is noted that both conductivities tend to decrease as the film thickness increases, which is related to the increase in roughness (i.e., the AZO2 + void thickness) evidenced in Figure 4. The electrical conductivity is around 650 S/cm, in agreement with previous studies performed on AZO thin films deposited without substrate heating [12,13], but values above 900 S/cm are obtained optically. Such a discrepancy is usually found in the literature, due to the effect of grain boundaries.
Electrical measurements give the number of free carriers that can overcome the potential barrier at the grain boundaries, whereas optical measurements also include the carriers with lower energy than the potential barrier at the grain boundaries. Therefore, the free carrier concentration and mobility determined optically are often somewhat higher than those given by electrical methods [22,27]. The ratios calculated in Figure 6 (σ_op/σ_el ~ 1.6) are consistent with those found by other authors, indicating a similar ratio R_DL ~ 1.6 [22], defined as the ratio between the resistance to electron transport due to defects such as grain boundaries and neutral impurities (1/µ_D) and the resistance to electron transport inside the lattice (1/µ_L). This is because the change in conductivity is mainly related to the carrier mobility [22,28], so σ_op/σ_el ~ R_DL = µ_L/µ_D.

Regarding optical transmittance, the global spectrum at each sample point has been simulated (T_s) [29] and compared with the corresponding spectrum measured using spectrophotometry (T_m), as illustrated in Figure 7. For long wavelengths (λ > 1400 nm), the difference is practically zero (T_s = T_m), as expected when using the Drude oscillator model, which optimally reproduces the behavior of the material in the zone of optical absorption by free carriers [22]. In the visible region (around λ = 600 nm), the simulated transmittance is somewhat lower than that obtained with spectrophotometry (T_s < T_m), but in any case both increase or decrease proportionally when moving from one point to another on the sample.
For each sample point, the AZO visible transmittance (both simulated, T_Vs, and measured, T_Vm) has been calculated as the average value in the range λ = 400-800 nm from the respective SLG/AZO spectra, discounting the SLG substrate. These values are mapped in Figure 8, where it can be seen that higher visible transmittances are obtained at the upper and lower edges of the sample, related to a greater thickness of AZO2 and of the top layer (AZO2 + void) in Figure 4, and corresponding also with higher resistivities in Figure 5. This shows that the increase in roughness (i.e., the AZO2 + void thickness) contributes to an increase in both electrical resistivity and visible transmittance [6]. Finally, the figure of merit defined by Haacke [7] has been calculated with the respective data obtained using the simulation (ϕ_s = T_Vs^10/R_op) and from independent electrical and spectrophotometric measurements (ϕ_m = T_Vm^10/R_el), which are represented in Figure 9.
Although the range of variation of the visible transmittance is narrow, both for the simulated values (84-88%) and when directly measured (85-88%), it dominates the figure of merit because it enters as the tenth power. Therefore, higher quality is obtained in the more transparent regions, despite the fact that their resistance is also somewhat higher. On the other hand, the simulated figure of merit is in general higher (ϕ_s > ϕ_m) because the optically determined resistance is lower (R_op < R_el). In fact, the ratio between the maximum values, ϕ_s/ϕ_m = 0.022/0.014 = 1.6, is analogous to that established for the respective conductivities (σ_op/σ_el ~ 1.6) in Figure 6. It should be noted that the ϕ data presented here are calculated considering average values of transmittance instead of the maximum transmittance at a particular wavelength, which exceeds 90%. Even so, the figure of merit is always above 0.010 Ω⁻¹, higher than that reported for other TCOs grown at high substrate temperature [30-32].

Figure 1. Ellipsometric data measured (symbols) and fitted (lines) for three incidence angles (Φ_0) at one point of the SLG/AZO sample. The dashed lines correspond to the two-layer model and the solid lines to the three-layer model.
Figure 2. Optical parameters obtained from the data in Figure 1 simulated with the three-layer model.
Figure 3. Electrical conductivity (σ), carrier concentration (N) and mobility (µ) values obtained for the two conductive phases (AZO1 and AZO2 in the three-layer model) at several equidistant points in the 15 cm × 15 cm sample area.
Figure 4. Thickness values provided using the three-layer model at sixteen equidistant points marked on the 15 cm × 15 cm sample area. The contour map represents the total thickness at each point.
Figure 5. Contour map of the sheet resistance values obtained using optical simulations (R_op) and electrical measurements (R_el) on the 15 cm × 15 cm sample area.
Figure 6. Conductivity values determined using optical simulations (σ_op, squares) and electrical measurements (σ_el, circles) at several equidistant points in the 15 cm × 15 cm sample area, plotted as a function of the respective total thickness. The black arrow indicates the right y-axis for the ratio (triangles).
Figure 7. Transmittance spectra simulated using the three-layer model (T_s) and measured with spectrophotometry (T_m) at point 8 (x = 13, y = 6) and point 16 (x = 13, y = 13), on the SLG/AZO sample of 15 cm × 15 cm area. The measured transmittance for the bare SLG substrate is included for comparison.
Figure 8. Contour map of the visible transmittance simulated with the three-layer model (T_Vs) and measured using spectrophotometry (T_Vm) on the 15 cm × 15 cm sample area.
7,841.4
2023-10-01T00:00:00.000
[ "Physics" ]
Differential Microbial Composition of Monovarietal and Blended Extra Virgin Olive Oils Determines Oil Quality during Storage Extra virgin olive oil (EVOO) contains a biotic fraction, which is characterized by various microorganisms, including yeasts. The colonization of microorganisms in the freshly produced EVOO is determined by the physicochemical characteristics of the product. The production of blended EVOO with balanced taste, which is obtained by blending several monovarietal EVOOs, modifies the original microbiota of each oil due to the differential physico-chemical characteristics of the blended oil. This study aimed to evaluate the effect of microbial composition on the stability of the quality indices of the monovarietal and blended EVOOs derived from Leccino, Peranzana, Coratina, and Ravece olive varieties after six months of storage. The yeasts survived only in the monovarietal EVOOs during six months of storage. Barnettozyma californica, Candida adriatica, Candida diddensiae, and Yamadazyma terventina were the predominant yeast species, whose abundance varied in the four monovarietal EVOOs. However, the number of yeasts markedly decreased during the first three months of storage in all blended EVOOs. Thus, all blended EVOOs were more stable than the monovarietal EVOOs as the abundance and activity of microorganisms were limited during storage. Introduction Extra virgin olive oil (EVOO) is produced by directly subjecting the olive fruit to mechanical extraction without any further refining process. Globally, EVOO is one of the oldest vegetable oils known for its sensory and nutritional value [1]. According to the European Food Safety Authority (EFSA), the phenols in virgin olive oil protect the blood lipids from oxidative stress [2]. The health benefits of EVOO are attributed to its abiotic fraction, which is characterized by phytochemicals, such as tocopherols, carotenoids, and phenolic compounds [3][4][5]. Previous microbiological studies have demonstrated that freshly produced olive oil contains a biotic fraction, which is characterized by several microorganisms, including yeasts [6]. The yeasts in the EVOO are mainly derived from the carposphere of the olives [7]. Additionally, the yeasts can be derived from the mill plant during the extraction process [8]. Some yeast species in freshly produced EVOO do not have a long lifespan, whereas other species survive and become the predominant microbiota in the olive oil. Several yeast species in the freshly produced EVOO can remain active during the storage period and can improve or deteriorate olive oil quality, depending on their metabolic activity [9]. Recent studies have demonstrated that the presence of some yeast species, such as Candida adriatica, Nakazawaea wickerhamii, and Candida diddensiae may deteriorate the sensory attributes of olive oil during storage. However, the sensory attributes of EVOO containing specific C. diddensiae yeast strains do not deteriorate even after four months of storage [10]. Yeast population density, strain, and enzymatic activity are reported to determine EVOO chemical composition [11,12]. However, the chemical composition of EVOO can influence the survival of some yeast species during storage [13]. The high concentration of polar phenolic compounds in EVOO negatively affects the survival of some yeast species, such as Candida parapsilosis [14]. The fatty acid and triglyceride contents in EVOO can also inhibit the growth of several yeast strains. 
Several yeast species, such as Meyerozyma guilliermondii, C. parapsilosis, and C. diddensiae, are reported to exhibit concentration-dependent sensitivity to linoleic acid [15]. Oil producers generally produce the following three types of olive oils: monovarietal EVOO, blended EVOO, and olive oil mixed with other vegetable oils, such as sunflower seed oil and grape seed oil. The flavor of monovarietal EVOOs is determined by the genetic characteristics of the olive tree and the pedo-climatic factors of the production area [16]. These oils meet the needs of the niche market in countries like Italy, where more than 700 varieties of olives are found. However, the blended EVOOs are produced by trained blenders by combining the aromatic profiles of various oils. Additionally, the blended EVOOs are produced in sufficient quantities with a balanced taste to meet the demands of the international market. Most supermarket brands of EVOO are blended with oils from many different cultivars, regions, and even countries. The comparative microbiological analysis of EVOOs extracted from a single olive variety and EVOOs extracted from multiple olive varieties revealed the prevalence of single yeast species only in the monovarietal EVOOs [17]. However, some oil producers blend the EVOOs extracted from different varieties of olive fruits to obtain a consistent taste profile. The effect of the yeast species from each monovarietal EVOO on the quality of the blended oil is not well understood. Changing the physicochemical characteristics of the monovarietal EVOOs during blending can affect the composition of the oil microbiota. Conversely, the microbial metabolic processes during storage may differentially affect the quality of blended and monovarietal EVOOs. This study aimed to analyze the abundance of yeasts in monovarietal and blended EVOOs and to evaluate their effect on oil quality during storage. Production of Monovarietal EVOO for Blending Monovarietal EVOOs were extracted from the ancestral Leccino, Coratina, Peranzana, and Ravece olive varieties, which have a several-hundred-year history of use in Central-Southern Italy. During the experimental season, the olives were not subjected to insecticide treatment. The Bactrocera oleae (Rossi) infection rate among the fruits harvested from all the varieties was in the range of 10-20%. Homogeneous masses of approximately 300 kg of healthy olives from the same rural area were separately processed within 12 h of harvesting. The leaves and other materials were removed and the olives were washed in fresh tap water. The fruits were crushed in a grinder at 2000 rpm. The paste was subjected to malaxation for 20 min at 27 °C. Next, the paste was moistened using a small amount of tap water. The oil was separated from the other fruit components by double extraction through horizontal (decanter) and vertical centrifugation. The fresh EVOOs (50 L) extracted from each variety were stored separately in four batches. The EVOOs were immediately subjected to physical (suspended solid and water contents) and microbiological analyses. The four batches of monovarietal oils were allowed to settle for 30 days in a dark place at 12-13 °C before blending. Physicochemical Analysis Physicochemical analysis was performed to determine the suspended solid and water contents of the freshly extracted olive oil and the phenolic profile of the monovarietal EVOOs. The suspended solid content was assessed using 50 g of olive oil sample.
The sample was filtered under reduced pressure through a 0.45 µm pre-weighed and oil-wetted nitrocellulose filter (Minisart NML-Sartorius, Göttingen, Germany). Each analysis was repeated four times. The water content of the olive oil samples was assessed following the protocol described by Ciafardini and Zullo [9], using the 37858 HYDRANAL-Moisture Test Kit (Sigma-Aldrich, Seelze, Germany) according to the manufacturer's instructions. The phenolic compounds in the monovarietal EVOOs extracted from the Leccino, Peranzana, Coratina, and Ravece olive varieties were evaluated by high-performance liquid chromatography (HPLC) analysis. The HPLC analysis was performed in an Agilent 1200 liquid chromatographic system equipped with a diode array UV detector and a C18 column (4.6 mm i.d. × 250 mm; particle size 5 µm) (Phenomenex, Torrance, CA, USA) coupled to a C18 guard column (4 × 3.0 mm; Phenomenex). The mobile phases used in the HPLC analysis were water/acetic acid (97:3, v v −1) (solvent A) and methanol/acetonitrile (50:50, v v −1) (solvent B). The elution was performed at a flow rate of 1.0 mL min −1. The solvent gradient was changed as follows: from 95% (A) and 5% (B) to 70% (A) and 30% (B) in 25 min; 65% (A) and 35% (B) in 10 min; 60% (A) and 40% (B) in 5 min; 30% (A) and 70% (B) in 10 min; and 100% (B) in 5 min, followed by 5 min of maintenance. The chromatograms were acquired at wavelengths of 240 and 280 nm. The compounds were identified and quantified based on the retention time and absorption at different wavelengths. The analysis was repeated three times for each olive oil sample. Microbiological Analysis of EVOOs during Storage Microbiological analysis was performed using EVOOs extracted from the Leccino, Peranzana, Coratina, and Ravece olive varieties immediately after extraction. The blended EVOO samples were analyzed at the beginning of experimentation (zero time), and after 3 and 6 months of storage at 12 °C in a dark place. Briefly, 20 mL of each oil sample was micro-filtered through a 0.45 µm sterile nitrocellulose filter. The nitrocellulose filter used to capture each sample was then transferred into a 25-mL sterile beaker and homogenized using a Turrax model T25 homogenizer (IKA, Milan, Italy) in a sterile physiological solution. Finally, the initial weight of each sample was reconstituted through the addition of a sterile physiological 0.9% (w v −1) NaCl solution. The solution was then subjected to 10-fold serial dilution. The bacteria were analyzed in the plate count agar standard (PCAS) medium (Oxoid, Basingstoke, Hampshire, England). The samples (0.2 mL of the 10-fold serially diluted solution) were plated on the PCAS medium and incubated aerobically for 3 days at 28 °C. The molds were evaluated in the oxytetracycline glucose yeast extract agar medium (Oxoid) supplemented with 100 µg mL −1 gentamicin and 100 µg mL −1 chloramphenicol. The molds were counted after 7 days of incubation at 28 °C. The yeasts were analyzed in the MYGP agar medium, whose composition is as follows: 3 g yeast extract (Biolife, Milan, Italy), 3 g malt extract (BBL, Cockeysville, MD, USA), 5 g phytone powder (BBL), 10 g D-glucose (Merck, Darmstadt, Germany), and 1000 mL distilled water, pH 7 [18]. The MYGP agar medium was supplemented with tetracycline (20 mg L −1) to inhibit bacterial growth. The serially diluted sample (0.2 mL) was spread-plated onto the MYGP agar medium for colony counting in triplicate.
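As a small worked example of how the plate counts obtained from these dilutions translate into the log CFU mL⁻¹ values reported in the Results, the sketch below uses the 0.2 mL plating volume stated above together with an invented colony count and dilution factor.

```python
import math

def log_cfu_per_ml(colonies, dilution_factor, plated_volume_ml=0.2):
    """Convert a plate count into log10 CFU per mL of the reconstituted sample.

    colonies        : number of colonies counted on the plate
    dilution_factor : total dilution of the plated sample (e.g. 1e-3 for the third 10-fold dilution)
    plated_volume_ml: volume spread on the plate (0.2 mL in this protocol)
    """
    cfu_per_ml = colonies / (dilution_factor * plated_volume_ml)
    return math.log10(cfu_per_ml)

# Invented example: 51 colonies on the 10^-3 dilution plate,
# which corresponds to about 5.41 log CFU/mL (the same order as the values reported below)
print(f"{log_cfu_per_ml(51, 1e-3):.2f} log CFU/mL")
```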
The yeast colonies were counted after 5 days of incubation at 30 °C and recorded as colony forming units (CFU). The colonies were then transferred onto several MYGP agar plates (master plates) and used for further analysis. Dynamics of EVOO Yeast Species during Storage The yeast strains isolated from the EVOO samples were identified by screening a high number of colonies grown on a specific chromogenic medium. Based on the physiological properties of the isolated yeasts, colored compounds are formed around the yeast colonies. All yeast colonies isolated from the master plates were inoculated into the CHROMagar Candida medium (BBL, cod. 4354093, Heidelberg, Germany). The colony morphology of approximately 3000 colored yeast colonies was assessed after 7 days of incubation at 30 °C [19]. All yeast colonies inoculated in the chromogenic medium were divided into homogeneous chromogenic groups as follows: red bordeaux center with a white exterior; uniform red; fire red center with a white exterior; uniform brown; uniform white; and uniform bluish. From each chromogenic yeast colony group, 20 isolates were randomly chosen and used for subsequent identification tests. Identification of Yeast Species The selected yeast colonies belonging to the different chromogenic groups were subjected to genetic analysis. The yeast strains were identified at the species level by sequencing the D1/D2 region (approximately 600 bp) of the large (26S) ribosomal subunit gene using the NL1 and NL4 primers, following the protocols described by Kurtzman and Robnett [20]. The ribosomal gene sequences of the yeast strains obtained with the NL1 primer were compared with published yeast sequences available in the public sequence databases (GenBank + EMBL + DDBJ + PDB) using a BLAST search on the National Center for Biotechnology Information (NCBI) website (http://www.ncbi.nlm.nih.gov/blast). Enzymatic Activity in the Predominant Yeasts in the EVOO Enzymatic tests were performed using 20 yeast strains belonging to the following four different species isolated from the EVOOs and identified by sequencing the D1/D2 region of the ribosomal subunit (26S) gene: Barnettozyma californica, Candida adriatica, Candida diddensiae, and Yamadazyma terventina. All enzymatic tests were performed in triplicate, following the protocols described by Ciafardini and Zullo [21] with minor modifications. The β-glucosidase activity was evaluated using two different substrates. The master plates of yeasts belonging to the four species were prepared using the MYGP agar medium supplemented with 0.1% (w v −1) esculin (Sigma-Aldrich, Milan, Italy) and 0.03% (w v −1) FeCl3 (Carlo Erba, Milan, Italy). After 48 h of incubation at 30 °C, β-glucosidase activity was monitored visually based on the presence or absence of a dark halo around the colony, which was compared to that of the non-inoculated control plate. Each yeast was assigned a code based on the color of the halo, which indicates the enzymatic activity level, as follows: no halo: N (no activity); light gray: L (low activity); black: H (high activity). The β-glucosidase activity of the same yeast cultures was confirmed in a 96-well microplate using the synthetic substrate p-nitrophenylglucopyranoside (p-NPG) (Sigma-Aldrich, Milan, Italy). The microbial culture (100 µL; O.D. 600 = 0.70) in each well of the microplate (Falcon-Fisher Scientific, Milan, Italy) was incubated with 150 µL of 0.1 M phosphate buffer (pH 7) supplemented with 0.4% (w v −1) p-NPG at 30 °C for 180 min.
The control included all reagents, except p-NPG. The absorbance of the reaction mixture was measured at 410 nm using a microplate reader (Fisher Scientific, Milan, Italy). The yeasts that exhibited enzymatic activity on both substrates were recorded as β-glucosidase producers. The esterase activity was evaluated in a 96-well microplate using the 4-nitrophenyl acetate (4-NPA) substrate (Sigma-Aldrich). The yeast culture (70 µL; O.D. 600 = 0.8) in each well of the microplate was incubated with 70 µL of 0.5% (w v −1) 4-NPA prepared in methanol, and 70 µL of 0.1 M phosphate buffer (pH 7) at 30 °C for 180 min. The positive control was prepared by replacing the microbial cultures with 70 µL of porcine esterase (Sigma-Aldrich; 50 U mL −1 of phosphate buffer). The negative control lacked both yeast and esterase. The absorbance of the reaction mixture was measured at 410 nm using a microplate reader. The esterase analysis was also performed using the MYGP agar medium supplemented with NaCl (5 g L −1), CaCl2 (0.1 g L −1), and Tween 20 (5 mL L −1). The MYGP agar medium enriched with NaCl and CaCl2 was sterilized at 121 °C for 20 min and allowed to cool to 55 °C. Next, sterilized Tween 20 (Sigma-Aldrich) was added and mixed before the medium was poured into the plates. The plates inoculated with yeast strains were incubated at 30 °C for 10 days. The cultures were monitored daily for the presence of a cloudy halo around the colonies. The yeasts that exhibited enzyme activity in both tests were recorded as esterase producers. The lipase activity assay was performed as described by Ciafardini et al. [22]. Briefly, 5 mL of the overnight stock culture of each yeast strain (O.D. 600 adjusted to ca. 0.8) was subjected to centrifugation at 9000× g for 5 min. Next, the culture pellet was suspended in 2 mL of 0.1 M phosphate buffer (pH 6) and incubated with 6 mL of filter-sterilized (Minisart NML-Sartorius, Göttingen, Germany) virgin olive oil. The negative control included all the components, except the yeast. Three repetitions were performed for each yeast strain. The samples were incubated at 30 °C for 7 days and vortexed daily for 1 min. The lipolytic activity was assessed through the titrimetric method for the determination of the olive oil free fatty acid content, according to European Community Regulation 1348/2013 [23]. The phenoloxidase activity was assessed using 7.5 mL of the overnight stock culture of each yeast strain (O.D. 600 adjusted to ca. 0.8). The overnight culture was subjected to centrifugation at 9000× g for 5 min. The culture pellet was suspended in 2 mL of 0.1 M phosphate buffer (pH 7) containing 100 mM pyrocatechol (Sigma-Aldrich). The control included all the components, except the yeast culture. The mixture was vortexed for 1 min and incubated at 30 °C for 60 min. The dark color intensity of the test group was visually compared with that of the control group. To evaluate the peroxidase activity, 7.5 mL of the overnight stock culture (O.D. 600 adjusted to ca. 0.8) was pelleted. Next, the pelleted cells were incubated with 5 mL of a reaction mixture containing 0.30 mL of 4% (w v −1) pyrogallol (Sigma-Aldrich), 0.30 mL of 1% (v v −1) H2O2, and 4.4 mL of 0.1 M phosphate buffer (pH 7). The mixture was vortexed for 1 min and incubated at 30 °C for 60 min. The dark color intensity of the test group was compared with that of the control group.
The catalase activity was evaluated in a 96-well plate using yeast culture (150 µL) grown in MYGP broth overnight at 30 °C. The enzymatic activity was evaluated by adding 50 µL of 3% (v v −1) H2O2 to each well. The bubble production during 20 min of incubation at 30 °C was visually evaluated and recorded as low or high catalase activity. Analytical Indices The free fatty acid concentration, peroxide values, and UV spectrophotometric indices (K232, K270, and ΔK, calculated from the extinction coefficients K266, K270, and K274) of the monovarietal and blended EVOO samples were evaluated at the beginning of the experimentation (zero time) and after 3 and 6 months of storage, to assess their merceological (commercial) class. All parameters were measured in triplicate for each sample, according to European Community Regulation 1348/2013 [23]. Sensory Analysis Sensory analysis was performed on the EVOOs at the beginning of the experimentation (time zero) and after 3 and 6 months of storage by a fully trained analytical taste panel, recognized by the International Olive Oil Council (IOC). The panel test was established using an IOC standard profile sheet method [24]. Each panel member analyzed all samples during three different sessions, with three olive oil samples from each group analyzed simultaneously by each panelist per session. The sample sets were randomly distributed among 10 assessors. The median values of the sensory data were calculated and the test supervisor chose a significance level of 5%. Statistical Analyses A one-way analysis of variance, followed by Tukey's HSD (honest significant difference) test, was performed using the Statgraphics computer program (Statgraphics, version 6, Manugistics, Inc., Rockville, MA, USA). The difference was considered statistically significant when the p-value was less than 0.01. Microbiological and Physicochemical Characteristics of the Freshly Produced EVOOs The microbiological analysis of the four freshly produced monovarietal EVOOs revealed a high abundance of yeasts and bacteria and a low abundance of mold in the EVOOs derived from the Coratina and Ravece varieties. The number of yeasts varied from 3.70 log CFU mL −1 (Leccino EVOO) to 5.41 log CFU mL −1 (Coratina EVOO), whereas that of bacteria varied from 2.28 log CFU mL −1 (Ravece EVOO) to 4.65 log CFU mL −1 (Coratina EVOO) (Table 1). The suspended solid content was high in the EVOOs extracted from the Coratina and Peranzana varieties. The water content in the four EVOOs varied from 0.22% (w w −1) (Leccino EVOO) to 0.45% (w w −1) (Coratina EVOO) (Table 1). This indicated that the high abundance of yeasts and bacteria in the Coratina EVOO was due to its high suspended solid and water contents, which support the growth of microorganisms. (In Table 1, values reported in the same column with different letters are significantly different from one another at p < 0.01.) Physicochemical Analysis of the Monovarietal EVOOs used for Blending The physicochemical analysis revealed that the suspended solid and water contents of the EVOOs analyzed were similar to those of unfiltered veiled oils. The suspended solid content of the freshly produced monovarietal EVOOs (Table 1) ranged from 0.070% (w w −1) to 0.092% (w w −1). However, the suspended solid content decreased by about 50% after the samples were subjected to sedimentation for 30 days (Table 2). The water content of the freshly produced EVOOs (Table 1) decreased when they were subjected to sedimentation for 30 days (Table 2).
The physicochemical analysis performed at the beginning of the blending experiment (zero time) indicated that the EVOO extracted from the Leccino variety had the lowest water content (0.15%; w w−1), while the EVOO extracted from the Coratina variety had the highest water content (0.36%; w w−1) (Table 2). Ciafardini and Zullo [9] reported that the low water content observed in the EVOO derived from the Leccino variety prevents the deterioration of EVOO quality. Other studies have reported that a high water content can adversely affect the shelf life of the product [12]. The chemical analysis of the monovarietal EVOOs performed at the initial phase of the blending indicated that the phenolic profiles vary depending on the olive variety from which the EVOOs were extracted. The EVOOs were grouped based on the total phenolic content as follows: low (Leccino EVOO with 179 mg tyrosol kg−1 oil), medium (Peranzana and Ravece EVOOs with 221 and 247 mg tyrosol kg−1 oil, respectively), and high (Coratina EVOO with 329 mg tyrosol kg−1 oil) (Table 2). This indicates that some Italian olive varieties, including Coratina, normally produce EVOOs with a bitter-pungent taste and a high content of polar phenols, which are used to increase the phenolic content of other EVOOs through blending. When monovarietal EVOOs with a low phenolic content (Leccino EVOO) are blended with those with a high phenolic content (Coratina EVOO), the shelf life of the product increases because the enhanced phenolic content increases the antioxidant activity. The consumer acceptance of EVOOs can also be enhanced by blending EVOOs with a high phenolic content and a strong bitter and spicy taste (Coratina EVOO) with EVOOs of low phenolic content, as the blended oil acquires a more balanced flavor. The monovarietal EVOOs used for blending are listed in Table 2. The monovarietal EVOOs extracted from the Leccino and Peranzana varieties, with a medium-low phenolic content, were blended with those extracted from the Ravece and Coratina varieties, with a medium-high phenolic content, and stored for a period of six months. The microbial dynamics and the quality index stability of the EVOOs were assessed by analyzing the samples collected at the beginning of the experimentation (zero time) and after 3 and 6 months of storage. Microbiological Analysis The microbiological analysis of the four freshly produced monovarietal EVOOs revealed a high abundance of yeasts and bacteria and a low abundance of mold in the EVOOs extracted from the Coratina and Peranzana varieties (Table 1). The microbiological analysis of the EVOO samples collected at the initial phase of blending (zero time) revealed a marked reduction in the abundance of bacteria and yeasts in all monovarietal EVOOs and a complete lack of mold (Table 3). The laboratory blending was performed using monovarietal EVOOs that had been subjected to sedimentation for 30 days. The reduction in the abundance of microbes can be mainly attributed to the sedimentation of micro-drops of vegetation water and solid particles, which are rich in microorganisms, toward the bottom of the canisters. This was consistent with the results of previous studies [6]. However, the microbiota composition of the blended EVOOs was markedly different from that of the monovarietal EVOOs during storage. The number of bacteria markedly decreased during the first three months of storage. Bacteria were no longer detected in any of the EVOOs, except the monovarietal EVOOs extracted from the Peranzana and Coratina varieties.
The reduction in the yeast population during the first three months of EVOO storage was smaller than that in the bacterial population in all samples, except the blended EVOOs obtained from the Coratina and Ravece varieties. However, the microbiological analysis of the EVOOs stored for six months revealed that the yeasts survived in all monovarietal EVOO samples, but not in the blended EVOOs (Table 3). The survival of yeasts in the blended EVOOs was lower than that in the monovarietal EVOOs, which can partly be attributed to the environment of the blended oils, whose depleted chemical content and other conditions limit microbial activity. A previous study demonstrated the survival of yeasts both in monovarietal oils and in oils obtained by milling blends of olives of the same varieties [2]. The comparative analysis of the results reported in Tables 1-3 suggested that the microbiota characteristics of the oil freshly extracted from the fruits are determined at the beginning of the storage period. This is because of the presence of vegetation water and nutrients, which are favourable for microbial growth, during this period. In the case of blended EVOOs subjected to sedimentation, the growth of the microbiota from each monovarietal EVOO is limited as the microorganisms are exposed to harsh conditions, such as low water content and nutrient depletion. Dynamics of Yeast Species Population during EVOOs Storage The ribosomal (26S) D1/D2 region sequencing analysis of the most representative yeasts isolated from the EVOOs during storage allowed the identification of the following four yeast species: Barnettozyma californica, Candida adriatica, Candida diddensiae, and Yamadazyma terventina. These yeast species were identified in all EVOO samples analyzed at the beginning of the experiment (zero time). During storage, the yeast species were identified in the monovarietal EVOOs but not in the blended EVOOs, because no colonies were available for examination (Table 4). The maximum number of yeast species identified in the EVOO samples was as follows: two in the EVOOs extracted from the Leccino and Ravece varieties, three in the EVOO extracted from Peranzana, and four in the EVOO extracted from Coratina. The prevalence of yeast species in the EVOOs varied from 60% to 98% depending on the olive variety. B. californica and C. adriatica were the most abundant yeast species in the monovarietal EVOOs derived from Leccino and Ravece, respectively, during the entire storage period. Contrastingly, B. californica and C. diddensiae were the most abundant yeast species in the EVOOs derived from Peranzana and Coratina, respectively, during the first months of storage, whereas Y. terventina was abundant in both EVOOs at the end of the storage period. The results reported in Table 4 are consistent with those of our previous study, which demonstrated the predominance of single yeast species in monovarietal oils [17]. Enzymatic Activity of the Yeast Species The activities of β-glucosidase, esterase, lipase, peroxidase, phenoloxidase, and catalase were evaluated in the yeast isolates identified at the species level (Table 5). These enzymes are reported to be involved in the reduction of phenols and the production of new compounds, which determine the oil quality [25,26]. The proportion of β-glucosidase-producing yeast strains varied according to the species as follows: B. californica (34%), C. diddensiae (90%), and C. adriatica and Y. terventina (100%).
Additionally, the proportion of β-glucosidase-producing strains was highest among the Y. terventina strains (Table 5). β-glucosidase is an important enzyme in olive oil as it hydrolyzes oleuropein, a bitter-tasting glucoside, into aglycone and glucose, which improves the sensory profile of bitter oils [10]. The production of esterase was observed in all the C. diddensiae strains, among which 70% exhibited strong enzymatic activity. The lipase activity was observed in 90%, 80%, 67%, and 12% of the Y. terventina, C. diddensiae, B. californica, and C. adriatica strains, respectively. Strong lipase activity was exhibited by 40% of the C. diddensiae and 20% of the Y. terventina strains. Lipase (triacylglycerol acylhydrolase) and esterase (carboxylic-ester hydrolase) hydrolyze hydrophobic long-chain and short-chain carboxylic acid esters, respectively. Lipase catalyzes the hydrolysis of ester bonds at the interface between an insoluble substrate phase (olive oil) and the aqueous phase, whereas esterase catalyzes the hydrolysis of ester bonds of water-soluble substrates. Esterase is involved in the debittering process of the oil through the hydrolysis of the aglycones derived from the hydrolysis of oleuropein, which partly preserve the bitter taste of the starting compound. Lipase hydrolyzes the EVOO triglycerides and raises the free fatty acid (FFA) level [27,28]. The lipase of some yeast strains isolated from EVOOs promotes lipolytic activity during storage [29]. One of the major factors that affect olive oil quality is acidity, which affects EVOO stability [30]. European Community Regulation 1348/2013 [23] requires that EVOO contain less than 0.80% of total FFAs (expressed as oleic acid). The oxidase activity mediated by peroxidase and phenoloxidase was most evident in the C. diddensiae, Y. terventina, and B. californica strains. However, the proportion of C. adriatica strains exhibiting peroxidase and phenoloxidase activities was low. In contrast to the other enzymes, catalase activity was observed in all the yeast strains (Table 5). The oxidase enzymes, including peroxidases and polyphenol oxidases, oxidize phenolic compounds and polyphenols and affect the sensorial quality of EVOOs during storage [10]. Quality Evolution during EVOOs Storage The quality of the EVOOs during storage was assessed by comparing the results acquired from the analytical indices and the sensorial tests with the limit values established by European Community Regulation 1348/2013 for the EVOO class [23]. All the samples of monovarietal oils used in the blending tests were initially EVOO, although some analytical indices at the beginning of the experimentation (zero time) were high, partly due to the presence of Bactrocera oleae (Rossi) [28]. This made it possible to investigate the time course of quality index stability and the ability of the EVOOs to maintain, during storage, the high initial analytical parameters required to remain in the EVOO merceological class. The chemical analysis associated with the quality indices indicated that the quality of all monovarietal EVOOs, except the Ravece EVOO, deteriorated and did not satisfy the recommended merceological parameters of EVOO after the third month of storage. In contrast, the analytical indices of the blended EVOOs remained stable, and these oils could thus still be classified as EVOO after six months of storage (Table 6).
The sensorial analysis results of the monovarietal EVOOs were consistent with the analytical index results, indicating that the EVOOs extracted from the Leccino, Peranzana, and Coratina varieties exhibited the muddy sediment defect (Table 7). The comparative analysis of these results and the results of previous studies suggested that blended EVOOs exhibit better stability than monovarietal EVOOs. This can be attributed to the microbiota composition and to the total polar phenolic compound and water contents. The comparative analysis of the quality index results (reported in Tables 6 and 7) and the other results indicated that the low stability exhibited by the monovarietal EVOO extracted from Leccino may be due to its low phenolic content (Table 2) and to the high esterase and oxidase activities of the predominant yeasts, including the C. diddensiae strains (Tables 4 and 5) [7,8]. The quality deterioration of the monovarietal EVOOs extracted from Peranzana and Coratina may be due to the enzymatic activities of the C. diddensiae and Y. terventina yeast strains and partly to the presence of bacteria (Table 3) [31]. The data reported in Tables 2 and 6 indicated that the high phenolic content of the EVOO extracted from Coratina could not inhibit the enzymatic activity, which increased the levels of FFAs during storage. This may be due to the high water content, which promotes the activity of lipase derived from microorganisms or from the fruits. The lipase activity of some oil-borne yeasts is reported to be maximal when more than 0.25% of water is present in the olive oil [29]. Among the monovarietal EVOOs, only the Ravece EVOO was stable during storage. This may be due to the medium-high content of polar phenolic compounds and the low water content in the Ravece EVOO (Table 2). Additionally, the low abundance of yeasts (mainly C. adriatica) may also have contributed to the stability of the Ravece EVOO during storage, as the enzymatic activity of this species does not adversely affect oil quality (Tables 3-5). Conclusions Blending is a widespread practice in EVOO manufacturing companies. It is used for the production of sufficient quantities of EVOO with a balanced taste to meet the demands of the global market. In this study, we demonstrated that the microbiota established in the EVOOs immediately after extraction survives well in the monovarietal EVOOs but not in the blended EVOOs prepared after one month of storage. Several oil-borne yeasts derived from healthy olives are desirable as they improve some sensory characteristics of the oil. However, some yeasts are often derived from damaged olives and can impair the oil quality under favourable conditions, such as high water content and low phenolic concentration. These observations indicate that the blended EVOOs are more stable than the monovarietal EVOOs due to the limited number of microorganisms and their low metabolic activity during storage. Conflicts of Interest: The authors declare no conflict of interest.
7,541.8
2020-03-01T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Autoarrangement System of Accompaniment Chords Based on Hidden Markov Model with Machine Learning Accompaniment production is one of the most important elements of a musical work, and chord arrangement is the key step of accompaniment production; it usually requires considerable musical talent and profound knowledge of music theory. In this article, a machine learning model is used to replace the manual arrangement of accompaniment chords, providing an automatic, computer-based means to perform and assist accompaniment chord arrangement. Through music feature extraction, automatic chord label construction, and model construction and training, the resulting system is able to arrange accompaniment chords automatically for a main melody. Based on research into automatic chord label construction and the characteristics of the MIDI data format, a chord analysis method based on interval differences is proposed to construct chord labels for the whole track and thereby realize automatic chord label construction. In this study, the hidden Markov model is built over the chord types; its input features are the improved theme PCP features proposed in this paper, and its labels are the label data set constructed by the automated method proposed in this paper. After training is completed, the improved PCP features of the theme to be predicted are input to generate the accompaniment chords of the final arrangement. Compared with chords generated by the traditional method based on PCP features and a template-matching model, the system designed in this paper improves the matching accuracy of the generated chords. Introduction With the increasingly vigorous development of the modern Internet, music has gained new media and carriers, and more and more music products are being derived from it. Digital music has been popularized and spread through Internet information channels, which greatly enriches people's spare time. The development of intelligent, Internet, virtual reality, and other technologies has blurred the boundary between the real world and the virtual world, allowing paintings, art, and music to be presented to people in a highly genuine form. With the improvement in computer performance and the diversification of Internet functions and products, the threshold for learning music has been greatly reduced. People no longer need rich music theory knowledge and deep musical literacy to engage in music-related activities such as music creation, music adaptation, and music retrieval. In the field of computing, more and more scholars and experts try to solve and simplify problems related to music learning and creation by combining music theory with audio signal processing and by researching specific features and algorithms. Artificial intelligence, including deep learning and machine learning, covers all walks of life and extends to music, which is also developing in the direction of intelligence. Computers are beginning to assist, or even replace, professional workers in completing musical work [1]. Computer arranging is the process by which a computer algorithm looks for a suitable set of chords for a whole melody. A pop song is usually composed of two parts, the vocal part and the instrumental accompaniment; the melody is a series of single notes forming a continuous musical line, and it constitutes the theme of the music. Therefore, people creating or re-creating a popular song often start with the creative part, the main melody.
Orchestrating harmonious chords for melodic lines can be a daunting task for amateur music lovers. For those who are interested in music creation, it is therefore of great practical significance to study computer-based automatic music accompaniment. Because chord tension and the supporting accompaniment play an important role in the emotional character of a piece, the automatic accompaniment system generates a matching chord accompaniment for the main melody; finally, a complete music file containing both melody and chord accompaniment is output. Music with automatic accompaniment generated by computer algorithms can be used for entertainment and can also serve music creators as a theoretical reference. In the process of automatic music accompaniment, the accompaniment part is completed entirely by the computer. By inputting the main melody, the creator obtains a complete new musical work with chord accompaniment; using the computer for composition and accompaniment also enriches and expands the research field of computer algorithms. Automatic arrangers can provide a variety of possibilities for the creation of musical forms and styles. To a certain extent, the study of automatic music accompaniment systems enriches musical innovation and also provides music creators with reference accompaniment chords. Most musicians regard music as an extremely emotional, subjective, auditory form of art; many segments of a composition arise from the composer's fragmentary and discontinuous bursts of inspiration. Because inspiration is fragmented and random, it is difficult for a fixed computer algorithm to replicate and re-create it, and it is therefore difficult to use computers to help us compose music. However, as more and more computer algorithms are introduced into the field of music composition, through hidden Markov models, stochastic processes, genetic algorithms, artificial neural networks, and so on, algorithmic composition is becoming easier to apply to current musical forms; computers can be used to simulate music of all kinds of styles and forms [2]. This makes music more accessible to people who are interested in music creation but lack the relevant musical knowledge, removes barriers to music creation, and brings seemingly distant music creation within reach. The harmony of music is the core of accompaniment. To match a harmonious accompaniment to any given melody, the coordination problem of automatic accompaniment must be solved [3], which leads to another type of automatic accompaniment system that matches harmonious chords to the input melody. Lee and Marsic put forward an automatic accompaniment system suited to a particular style; they constructed a system that uses neo-Riemannian transformations to build chord paths for a melody from MIDI input, stores alternative chord paths in a binary-tree-like structure, and then uses a Markov chain with learned probabilities to statistically optimize the chord matching probabilities. In the process of studying PCP features, Emilia Gómez incorporated the influence of harmonic frequencies into the feature description, considered the maxima at specific frequencies, and constrained and normalized the feature weights of the related frequency bands [4].
The improved HPCP features reduce the influence of intensity and of different timbres to some extent. Yang et al. produced software that can convert an arbitrary input audio signal into a chord sequence corresponding to a harmonious accompaniment [5]. In the process of studying PCP features, Wu et al. also included the influence of harmonic frequencies in the feature description, considered the maxima at specific frequencies, and constrained and normalized the feature weights of the related frequency bands; the improved HPCP features reduced the influence of intensity and of different timbres to a certain extent [6]. By introducing a maximum likelihood criterion decision tree algorithm, Xue et al. calculated the likelihood coefficients between all single notes and counted the occurrences of adjacent intervals at different times; the chord sequence obtained by combining the most frequently occurring single notes with the largest likelihood coefficients was taken as the final matching result [7]. Therefore, solving the automatic arrangement of music chords has become a hot research direction in computing at the present stage. Music Theory. Rhythm is the combination, according to a certain law, of sounds of different lengths into a musical form. Rhythm lives in the beat and cannot be separated from it. The beat is a cyclically recurring rhythmic pattern with a rule of strong and weak accents [8]. Beats are expressed in fractional form in musical notation. Melody is the soul of music. The pitch of the notes, the speed of the rhythm, and the dynamics give the melody different colours. Different pitches are connected to form the pitch contour of the melody, which can be abstracted into a melodic curve. The distance between different points on the curve represents the interval relationship between pitches. In general, the basic patterns of melodic motion can be summarized as horizontal progression, upward progression, downward progression, and wave-like progression. Melody is the basis of a musical part. Monophonic music has only one melody, whereas multipart music contains multiple melodies that revolve around a certain main melody; each melody is independent yet interacts with the others. Generally speaking, the progression of a two-part melody can be divided into simultaneous progression, parallel progression, contrary progression, and oblique progression. Temperament is the law of musical pitch; it normalizes the relationships between musical sounds through artificial constraints so that they take a form in line with human aesthetics and cognition. At present, there are three main systems of temperament: pure (just) temperament, the temperament generated by successive fifths, and twelve-tone equal temperament. The pure fifth interval relation is the key element: starting from a given pitch, the interval relation of a pure fifth, that is, a frequency ratio of 3 : 2, is used as the constraint, and the remaining tone values are deduced from it [9]. The tone relations obtained in this way constitute the temperament generated by successive fifths. From the point of view of signal processing, the characteristic of pure temperament is that the frequency ratios between the tone levels are simple integer ratios. Under pure temperament, the overall harmony of the tone levels is very high, and the result is comfortable and three-dimensional from the perspective of the human hearing experience.
Therefore, in modern applications, pure temperament is generally used in symphonic performance, especially in the case of multipart and multi-instrument ensembles, where it gives good harmony. Fundamentals of Music Signal Analysis. The Musical Instrument Digital Interface (MIDI) is one of the most common structured symbolic representations. The contents of a MIDI file are a series of instructions that define what the instrument will play and when. Because no audio waveforms are stored, MIDI files take up little storage space and the stored contents can be modified flexibly; these characteristics make MIDI widely used in music creation, music recording, music analysis, and other applications. Music notation records music in written form and includes notations for pitch and for fingering: the simplified score and the staff record pitch, whereas the six-line tablature used for guitar performance records fingering. The short-time Fourier transform (STFT) is a steady-state analysis of a signal based on the assumption that the signal is stationary over a short time. Piano music can therefore be assumed to be short-time stationary and analysed by the STFT. The STFT can be written as X(m, k) = \sum_{n} x(n)\, w(n - m)\, e^{-j 2\pi k n / N}, where x represents the discrete music signal, w stands for the window function, and X represents the spectrum around time m. In the STFT, the length of the window determines the time resolution and the frequency resolution. The longer the window, the longer the intercepted signal, the lower the time resolution, and the higher the frequency resolution; conversely, the shorter the window, the shorter the intercepted signal, the higher the time resolution, and the lower the frequency resolution [8][9][10]. In practice, the source signal is divided into columns (frames), and the number of frames is determined by the signal length and the window length. Therefore, in the STFT, time resolution and frequency resolution are contradictory, and the window length should be chosen according to the actual situation. The constant-Q transform (CQT) is another method of frequency-domain analysis; it can be written as X^{CQT}(k) = \frac{1}{N_k} \sum_{n=0}^{N_k - 1} w(k, n)\, x(n)\, e^{-j 2\pi Q n / N_k}, where k is the index of the spectral line and Q is the quality factor, whose value equals the ratio of the centre frequency to the bandwidth. Because the centre frequencies follow an exponential distribution, Q is a constant; N_k is the window length of the window function for spectral line k, and w(k, n) are its values. The window lengths and centre frequencies satisfy N_k = Q f_s / f_k and f_k = f \cdot 2^{k/B}, where f_s is the sampling frequency, f is the lowest frequency of the music signal, and f_k is the frequency of the k-th spectral line. B is the number of spectral lines within an octave; because an octave is divided into twelve semitones in twelve-tone equal temperament, B is generally 12 or a multiple of 12. The frequency of each spectral line then corresponds exactly one-to-one with a scale frequency. Because the CQT spectral frequencies and the scale frequencies follow the same exponential distribution law, the CQT is well suited to the analysis and processing of music signals. However, the most important problem of the CQT is that it is slow to compute. One reason is that, for each spectral line index k, the corresponding window length must be calculated before evaluating formula (2), resulting in a large amount of overall computation. The other reason is that the spectral line frequencies are not linearly distributed, so the Fast Fourier Transform (FFT) cannot be called directly, which further slows down the calculation. In addition, according to the experimental results, the short-time Fourier transform is the most suitable for analysing audio signals.
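As a minimal illustration of the short-time Fourier analysis and the CQT-style logarithmic frequency axis discussed above, the sketch below frames a signal with an overlapping window and computes a magnitude spectrum per frame. The window length, hop size, test tone, and lowest CQT frequency are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of short-time Fourier analysis with overlapping frames, plus the
# CQT-style logarithmic centre frequencies f_k = f_min * 2**(k/B). Window length,
# hop size, the test tone, and f_min are illustrative assumptions.
import numpy as np

fs = 11025                          # sampling rate used in the preprocessing step
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440.0 * t)   # a pure A4 tone as a stand-in for a melody signal

win_len = 1024                      # longer window: better frequency, worse time resolution
hop = 512                           # 50% overlap between consecutive frames
window = np.hanning(win_len)

frames = []
for start in range(0, len(x) - win_len + 1, hop):
    frame = x[start:start + win_len] * window
    frames.append(np.abs(np.fft.rfft(frame)))    # magnitude spectrum of this frame

stft = np.array(frames)             # shape: (num_frames, win_len // 2 + 1)
freqs = np.fft.rfftfreq(win_len, d=1 / fs)
peak = stft.mean(axis=0).argmax()
print(f"{stft.shape[0]} frames, dominant frequency ~ {freqs[peak]:.1f} Hz")

# CQT-style centre frequencies: one bin per semitone over four octaves
f_min, B = 32.7, 12                 # hypothetical lowest frequency (C1), 12 bins/octave
cqt_freqs = f_min * 2.0 ** (np.arange(4 * B) / B)
print("first CQT centre frequencies:", np.round(cqt_freqs[:5], 1))
```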
Neural Network. A neural network is an operational model whose basic unit is the neuron. In a neural network, neurons are connected by weights, and these interconnections transmit and activate information [11]. Let x_i represent the input signals and w_i the weights of the connections between the inputs and the neuron. The weighted summation of the input signals with these weights gives formula (4): z = \sum_i w_i x_i + b, where b represents the bias term. Taking z as the input of a nonlinear activation function then yields equation (5): a = y(z), where y(z) represents the activation function. A nonlinear function is usually selected as the activation function; its role is to introduce nonlinearity into the neural network so that the network can solve nonlinear mapping problems. The most commonly used activation function is the tanh function, defined as \tanh(z) = (e^{z} - e^{-z}) / (e^{z} + e^{-z}). Generally, a neural network can have multiple layers; apart from the input layer and the output layer, the other layers are known as hidden layers. Each hidden layer contains multiple neurons, and the output of the neurons in one layer is the input of the neurons in the next layer; this kind of connection constitutes the basic structure of a neural network [12] and is also the foundation of information transmission in the network. The specific structure of a neuron is shown in Figure 1. In Figure 1, except for the input layer, the neurons of each layer are connected with the neurons of the previous layer, and each connection carries a weight value. As the number of layers increases, the output of each layer of the neural network can be expressed as Z^{(l)} = W^{(l)} X^{(l)} + b^{(l)}, A^{(l)} = y(Z^{(l)}), where W^{(l)} is the weight matrix of layer l, X^{(l)} is the input of layer l, and Z^{(l)} is the weighted sum of the inputs; the output A^{(l)} of layer l, obtained through the nonlinear mapping, is then taken as the input of layer l + 1. Continuing forward in this way is known as forward propagation. After the forward propagation of the neural network, a predicted result is obtained. When the predicted result differs from the actual result, an error is generated, which can be quantified through the loss function; the quantified result is called the loss. The purpose of training the neural network is to reduce the loss [13]. In the process of reducing the loss, it is necessary to start from the last output layer and, based on the chain rule, calculate the gradients of the weight parameters of each layer in reverse; this reverse process is called back propagation, and for a network with N layers it is described by formula (8). There are many layers in the neural network, and the functions of each layer are different. The basic neural network includes an input layer, hidden layers, and an output layer, which is similar to the state transition network in an HMM. Every neuron in a hidden layer is connected to the previous layer, and each connection has a weight value acting as a constraint. Each layer is obtained from the weighted sum of the previous layer's neuron outputs and the corresponding weights, and after the nonlinear mapping it becomes the input of the next layer. During this recursion, the back propagation algorithm propagates the computed partial derivatives backwards to update the weights of each layer and the network parameters. Repeated learning in this way yields stable parameters and a mature neural network model.
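A minimal numpy sketch of the forward pass described by formulas (4)-(7) follows; the layer sizes and random weights are placeholders, and only the forward direction is shown (the back propagation of formula (8) is omitted).

```python
# Minimal sketch of the forward pass in formulas (4)-(7): z = W x + b, a = tanh(z).
# Layer sizes and random weights are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W, b):
    z = W @ x + b           # weighted sum of the previous layer's outputs (formulas 4 / 7)
    return np.tanh(z)       # nonlinear activation (formulas 5-6)

# A toy network: 12-dimensional input (e.g. a PCP vector), one hidden layer, 7 outputs
W1, b1 = rng.normal(size=(16, 12)), np.zeros(16)
W2, b2 = rng.normal(size=(7, 16)), np.zeros(7)

x = rng.random(12)                  # hypothetical input feature vector
hidden = dense_layer(x, W1, b1)     # output of layer l becomes the input of layer l + 1
output = dense_layer(hidden, W2, b2)
print("network output:", np.round(output, 3))
```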
Data Preprocessing. Considering that music signals are short-time stationary, the signal is usually divided into frames. At the same time, in order to ensure the smoothness and continuity of the signals between frames after segmentation, overlapping segmentation is adopted so that local calculations can be carried out between frames. In this article, the source files used in the preprocessing and frame segmentation are audio recordings of the main theme in WAV format, and the sampling rate of all audio is set to 44.1 kHz to ensure a unified standard [13,14]. The processed audio signal is then downsampled to 11025 Hz for normalization. If the overlapping frame information obtained by segmentation does not achieve the desired effect, the overlapping segmentation is applied again, taking into account spectral energy leakage and the sliding window function. Frame segmentation is shown in Figure 2. Improved PCP Feature Extraction. The principle of PCP feature calculation is based on the frequency relationships of twelve-tone equal temperament in music theory and a mapping calculation. A change of pitch between different notes in music corresponds, in the signal, to a change in frequency value [15]. Two pitches an octave apart belong to the same pitch class, and their frequencies are in the ratio 2 : 1. In twelve-tone equal temperament, the frequencies of adjacent semitones differ by a factor of 2^{1/12}; therefore, in the music signal, frequency grows exponentially with pitch. Mapped into three-dimensional space, the change of pitch corresponds to a frequency change climbing an upward spiral, which gives a more intuitive picture of the stepwise frequency change. The most distinctive advantage of the PCP feature is that its processing attaches musical characteristics to the spectral energy of the audio signal, so when processing audio data related to music signals, the musical characteristics of the audio can be displayed better [16,17]. The centre frequencies are set to the frequency values of the twelve semitones in twelve-tone equal temperament. The weights of the frequency values of all the notes in twelve-tone equal temperament are retained, and the weights of irrelevant frequency values are filtered out. This effectively suppresses low-frequency noise and high-frequency overtone interference; at the same time, the weights of the fundamental frequencies in the low-frequency band are retained, which overcomes the problem of blurred note values to a certain extent (Figure 3). Figure 3 shows the spectrum corresponding to the frequency range of the note A4 after Gaussian filtering. It can be observed that 440 Hz has the largest amplitude, that is, its position corresponds to the centre frequency, while the left amplitude boundary lies between 420 Hz and 430 Hz and the right amplitude boundary between 450 Hz and 460 Hz. The frequency values calculated for neighbouring notes lie outside these boundaries, so the frequencies of valid note values are not blocked, which gives a very good filtering effect. Figure 4 shows the PCP feature spectrogram improved by the set of Gaussian filters and logarithmic compression. The spectral energy in the pitch-class part of the feature is more coherent, and the pitch-class structure of each time segment can be seen clearly. This piece of audio is a melody WAV file of the song Little Star, recorded by the author. From the spectrogram obtained by the improved PCP feature extraction method in this article, the melody notes C, G, A, G, F, E, D, and C can be clearly identified (Figure 4).
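The following is a rough sketch of a pitch class profile computation in the spirit of the feature described above: spectral energy is folded onto the twelve pitch classes of equal temperament with a Gaussian weighting around each semitone. The weighting width and the A4 = 440 Hz reference are assumptions for illustration and do not reproduce the exact procedure of the paper.

```python
# Rough sketch of a pitch class profile (PCP/chroma): fold spectral energy onto
# the 12 pitch classes of equal temperament. The Gaussian weighting width and
# the A4 = 440 Hz reference are assumptions for illustration.
import numpy as np

def pcp(frame, fs, f_ref=440.0, sigma_semitones=0.25):
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1 / fs)
    profile = np.zeros(12)
    valid = freqs > 27.5                        # ignore DC and sub-audio bins
    # distance of each bin from the nearest semitone, in semitones
    midi = 69 + 12 * np.log2(freqs[valid] / f_ref)
    nearest = np.round(midi)
    weight = np.exp(-0.5 * ((midi - nearest) / sigma_semitones) ** 2)
    for pc in range(12):
        mask = (nearest.astype(int) % 12) == pc
        profile[pc] = np.sum(spectrum[valid][mask] * weight[mask])
    return profile / (profile.max() + 1e-12)    # normalise to [0, 1]

fs = 11025
t = np.arange(0, 0.5, 1 / fs)
frame = np.sin(2 * np.pi * 440.0 * t)           # A4 test tone
chroma = pcp(frame, fs)
print("strongest pitch class index:", int(chroma.argmax()))  # 9 corresponds to A (C = 0)
```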
Design of the Chord Arrangement System Based on the HMM Model [18][19][20]. The input melody is segmented, the modes of input melodies from different songs are unified, and each single-melody song is transposed to the standard C major without changing the internal pitch-group structure of the melody itself; this greatly facilitates the arrangement of chords. Feature fragments of the main melody are extracted and, according to these features, a machine learning algorithm is used to obtain from sample songs of different styles the probabilities with which melodies match chords; a matching choice is then made in the accompaniment chord knowledge database, giving a suitable chord for the fragment. These steps are repeated until the chord accompaniment with the optimal matching probability is obtained, and the relevant probability parameters are recorded and updated [21][22][23]. The characteristic notes of a melody are defined by the weight relationship of the proportions in which notes appear in the piece: the notes that appear most frequently in a piece of music are defined as its characteristic notes. When the simplified score of a sample piece is entered, the input score is screened and the characteristic notes are extracted segment by segment; the optimal chord is then matched to each characteristic note. To refine the single melody notes and their sequence, an internal chord-construction algorithm is further designed; according to the matched chords and the best chord sequence obtained by the matching combination, it can generate full, vivid chord structures with a clear sense of motion. Finally, the combined chord sequence and the single notes of the main melody are played at the same time, producing a melody with a harmonic accompaniment. 4.2. The Framework of the Chord Arrangement System. The automatic chord matching system designed here is mainly divided into two parts: one is the music feature extraction part, that is, the improved PCP feature extraction described above; the other part is the model part, which includes the collection of model chord labels and the model training and prediction. As shown in Figure 5, the chord automatic matching system is mainly divided into two parts. The dashed frame on the left is the music feature extraction module, which adopts the improved PCP feature. The other module, in the dashed box on the right, is the model module, which mainly involves the HMM model and the construction of automatic chord labels.
The model first removes, from the musical information recorded as symbolic events, the channel in which the percussion is located; it then analyses the musical characteristics of each track, retains the note with the lowest pitch, deletes the other notes, and thereby obtains the accompaniment track. The data set of accompaniment tracks is stored in the form of event messages, and the source data are in MIDI format, from which it is very convenient to extract and collect music-related indicators (Figure 5). Automated Chord Tag Construction. Unlike the common WAV audio storage format, MIDI stores a series of musical information in a file as event messages, by means of symbolic event recording. It is therefore very convenient to use this format as the source data from which to extract and collect music-related index characteristics [18]. This article uses the accompaniment track portion of such files as the source data set for the automatic chord label construction, so the following describes how the accompaniment track and its MIDI music information are obtained (Figure 6). Figure 6 is a schematic diagram of the treble contour line over a time series. The horizontal axis represents time; in different time segments there are different notes, and each note corresponds to a pitch on the vertical axis. On the premise of multi-tone overlap, the skyline algorithm extracts the notes on the highest contour line as the main-melody notes, namely the red highlighted part in the figure. The collection of these highest-contour notes forms the main melody track. As described above, the input melody is segmented, the modes of input melodies from different songs are unified, and each single-melody song is transposed to the standard C major without changing the internal pitch-group structure of the melody; characteristic notes, defined as the notes that occur most frequently in a piece, are then extracted segment by segment, and the optimal chord is matched to each characteristic note [19,20].
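The skyline-style melody extraction described above can be sketched as follows; the (onset, pitch, duration) tuples are a simplified stand-in for real MIDI note events.

```python
# Sketch of a skyline-style extraction: at each onset time keep only the
# highest-pitched note, yielding a monophonic main-melody line.
# The (onset, pitch, duration) tuples are a simplified stand-in for MIDI events.
from collections import defaultdict

notes = [
    (0.0, 60, 0.5), (0.0, 64, 0.5), (0.0, 67, 0.5),   # C major chord, melody on top
    (0.5, 62, 0.5), (0.5, 65, 0.5),
    (1.0, 64, 1.0), (1.0, 55, 1.0),
]

by_onset = defaultdict(list)
for onset, pitch, dur in notes:
    by_onset[onset].append((pitch, dur))

melody = [(onset, *max(events)) for onset, events in sorted(by_onset.items())]
print(melody)   # [(0.0, 67, 0.5), (0.5, 65, 0.5), (1.0, 64, 1.0)]
```

The accompaniment-track variant mentioned earlier, which keeps the lowest-pitched note instead, is the same idea with min in place of max.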
The aim is to obtain, through machine training and learning, estimates of the transition matrix probabilities A_ij, the observation (emission) matrix probabilities, and the initial state probabilities I of the hidden Markov model of the music accompaniment. The probabilities in the hidden Markov model of music are defined as follows: the transition matrix probabilities are estimated by A_ij, with i = 1, 2, 3, 4, 5, 6, 7 and j = 1, 2, 3, 4, 5, 6, 7. According to the melody characteristic tones and the accompaniment sequences obtained above, the parameters of the accompaniment hidden Markov model are updated and the statistics are accumulated to obtain the training results for the corresponding sample songs, namely the state transition matrix probabilities, the emission matrix probabilities, and the initial matrix probabilities. In the algorithm of the hidden-Markov-model-based automatic accompaniment chord system, the intermediate state transition probability at each moment is obtained from that of the previous step, which is a recursive calculation. Chord prediction is a decoding problem: the optimal path in the state transition network must be found so that the probability of the corresponding path is maximized. On the premise that the system knows the PCP features of the main melody as the observation sequence and the parameters of the hidden Markov model, the accompaniment chord sequence most likely to correspond to the main melody is obtained. Here the chord states are indexed 1, 2, . . . , 7. Formula (11) describes the mathematical solution of the decoding: it represents, for known HMM parameters, the maximum probability over all partial state sequences that reach a given state at a given time point. According to this equation, the optimal value at the next moment can be obtained recursively, and the recursion terminates in the final state; the optimal path is then recovered by tracing backwards, repeatedly solving equation (14). Finally, the set of selected states constitutes the optimal chord selection path.
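The decoding step outlined above corresponds to a standard Viterbi recursion over the chord states. The toy sketch below uses invented transition, emission, and initial probabilities for a 7-state model; in the paper these parameters are estimated from the automatically constructed chord labels.

```python
# Toy Viterbi decoding over 7 chord states, in the spirit of formulas (11)-(14).
# Transition (A), emission (B), and initial (pi) probabilities are invented here.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_obs, T = 7, 12, 8            # 7 chords, 12 discrete observation symbols, 8 frames

A = rng.dirichlet(np.ones(n_states), size=n_states)    # state transition matrix
B = rng.dirichlet(np.ones(n_obs), size=n_states)       # emission matrix (discrete obs.)
pi = np.full(n_states, 1.0 / n_states)                  # initial state distribution
obs = rng.integers(0, n_obs, size=T)                    # stand-in for PCP-derived symbols

log_delta = np.log(pi) + np.log(B[:, obs[0]])
backptr = np.zeros((T, n_states), dtype=int)
for t in range(1, T):
    scores = log_delta[:, None] + np.log(A)             # best way to reach each state
    backptr[t] = scores.argmax(axis=0)
    log_delta = scores.max(axis=0) + np.log(B[:, obs[t]])

# Backtracking: recover the most probable chord (state) sequence
path = [int(log_delta.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(backptr[t, path[-1]]))
print("decoded chord states:", path[::-1])
```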
Experiment and Result Analysis. The improved theme PCP feature vector proposed here is used as the feature extracted for the model and as the observation vector of the HMM. The number of model states is set to 6; except for the initial and termination states, all the others are active states. Each active state uses a single-Gaussian observation function with a diagonal covariance matrix, consisting of a mean vector and a variance vector. After the model training is completed, 5 files are randomly selected from the test data set as test objects; their improved PCP feature vectors are extracted and input to the model for chord prediction, and the chord sequences obtained are recorded, as shown in Figure 7. As can be seen from the comparison results in the figure, compared with the original traditional PCP features, the improved PCP features used in this article improve the accuracy of chord arrangement to a certain extent. With the improved PCP features proposed in this article, the accuracy of chord arrangement for Vacation, Better Hurry Up, and Holiday Door Time increased by 6.65%, 6.58%, and 6.14%, respectively, while for Cool Hun Day and Better Door Us the accuracy increased by 2.89% and 3.01%, respectively. In general, the improved PCP feature proposed in this article gives a better chord arrangement than the traditional PCP feature. Conclusion Based on a MIDI music data set, a chord recognition model built on the hidden Markov model, combined with the improved PCP music features as input vectors and with the automated chord label construction method, forms a complete chord arrangement system; it provides an automated, computer-based way to arrange accompaniment chords for a main melody and helps to meet the need for musical accompaniment and chord arrangement. The proposed automatic chord label construction is based on the MIDI data format, so it may be difficult to recover 100% of the constructed chord sequence. In this article, a method of automatic chord label construction is proposed: based on the characteristics of the MIDI symbolic data format, a chord analysis method based on interval differences is proposed, the accompaniment chords of each bar are obtained by matching against binary chord templates constructed in advance, and the automatic chord label construction is thus realized. This article designs a chord arrangement system based on the hidden Markov model, elaborates the mathematical principles and technical points of the hidden Markov model in detail, and explains each step and process of the model in combination with the improved PCP features of the main theme. In the training process, the observation vectors input to the model are the improved PCP feature vectors, and the labels of the model are extracted from the training data using the automatic chord label construction method. After the model training is completed, the improved PCP features of the themes to be predicted in the test set are extracted and input to the HMM model for prediction, generating the final arranged chords. Compared with the traditional PCP feature and the template-matching model, the improved PCP feature and the proposed HMM model are found to give a better chord matching effect and higher accuracy. Although the system built in this study successfully realizes automatic chord arrangement and performs better than previous methods, there is still much room for improvement: in music theory, chords are not merely broken chords of single notes strung together but also change according to the needs of the song itself. Data Availability The data used to support the findings of the study are included within the article. Conflicts of Interest The authors declare that they have no conflicts of interest.
7,592.2
2021-10-13T00:00:00.000
[ "Computer Science" ]
η meson physics with WASA-at-COSY In recent years, the η meson has been a focal point of research for the WASA experiment at the Cooler Synchrotron COSY of the Research Center Jülich. Production experiments using nucleon-nucleon and nucleon-nucleus collisions have been performed, studying the η − N interaction in various configurations. A better understanding of this interaction is a key aspect in the ongoing search for η-nuclear bound states. In addition, the η meson itself represents an ideal laboratory for precision studies of the strong and electromagnetic interactions as well as for searches for beyond Standard Model physics. Large datasets were assembled using the WASA experiment to enable studies on rare and forbidden decay modes. An overview over recent highlights of the WASA η meson physics programme was given. Introduction The η meson has been an integral part of the WASA-at-COSY hadron physics programme over the last decade. Due to its unique properties, it represents an ideal gateway to study two very distinct research areas. Already in 1985, Haider, Liu and Bhalerao [1,2] found an attractive interaction between the η meson and nucleons. Although the original model suggested binding between nuclei and the η meson only for mass numbers A ≥ 12, extensive experimental effort was put into the investigation of η-nucleon and η-nucleus interactions in the search for η-nucleus bound states also in systems as light as the deuteron. While there is, to date, no unequivocal evidence for such a bound state, strong signs of an attractive final state interaction have been found in production experiments off light nuclei such as the deuteron [3][4][5], 3 He [6][7][8][9][10][11][12][13] and 4 He [14,15]. One of the most prominent examples is the pd → 3 Heη reaction. Here, the total cross section σ rises steeply from zero at threshold to a plateau of around σ = 400 nb within 1 MeV of excess energy [6][7][8][9]. It has been discussed elaborately that such a behaviour can best be explained in the context of a strong final state interaction that might support a (quasi-)bound state below the production threshold [16][17][18]. While considerable effort was put in the investigation of the near threshold region in the search for potential η − 3 He bound state, the main production process of η mesons in the pd → 3 Heη remains relatively poorly understood. Angular distributions at larger excess energies display strong asymmetries that are not reproduced by any of the available theoretical models. However, a relatively shallow database in combination with large systematic effects between the measurements by different experimental facilities have thus far hindered a detailed study of the importance of higher partial waves as a function of the available excess energy. A new, high statistics dataset gathered with the WASA-at-COSY experiment [19] for the first time allows such a detailed study over a large range of excess energies. For the theoretical interpretation of the η-nucleus interaction in complex systems featuring multiple nucleons it is of great importance to understand the elementary η − N interaction. Here, spin can serve as a valuable tool to study both possible mesons in a meson-exchange model and the importance of nucleon resonances in the production process. A new high statistics measurement using a polarised proton beam has recently been made available [20], studying the role of higher partial waves at two different excess energies by determination of the analysing power A y . 
Apart from these production studies, the η meson represents an ideal scenario for various decay studies. It is an eigenstate of P-, C-, and G-parity with isospin I = 0, so that all strong and electromagnetic decays are forbidden to first order. Consequently, there is no strongly dominating decay mode. Thus, the η meson is seen as an excellent gateway to study rare decays or to search for symmetry-violating forbidden decay modes. Hadronic decays can give access to isospin violation and thus to the difference in the up- and down-quark masses, whereas radiative decays can be used to study quantum anomalies. Another major part in the studies of rare and forbidden decays is played by leptonic and semi-leptonic decays. These give access to the form factor of the η meson and can be used to study symmetry violations and to search for beyond Standard Model physics. Making use of two large, dedicated datasets for decay studies of 30 × 10 6 η mesons in pd → 3 Heη and 500 × 10 6 η mesons in pp → ppη, respectively, the search for rare and forbidden η decays is an integral part of the WASA-at-COSY η physics programme. The WASA-at-COSY experiment The WASA experiment is an internal, fixed-target 4π experiment located at the COoler SYnchrotron COSY at Research Center Jülich. The accelerator provides beams of both protons and deuterons in a momentum range of 0.3 GeV/c ≤ p beam ≤ 3.7 GeV/c with a momentum resolution of around ∆p/p ≈ 10 −3 [21]. The beam is brought to collision with a pellet target providing frozen pellets of either hydrogen or deuterium. A Central Detector, consisting of a drift chamber, a solenoid, a plastic scintillator and a calorimeter, is built around this interaction point. Due to the fixed-target geometry, particles experience a large boost in the forward direction. Consequently, a dedicated detection device is used to reconstruct heavy, forward-going ejectiles (mainly protons, deuterons and He nuclei). This Forward Detector consists of various layers of scintillation detectors and a proportional chamber, which is used to reconstruct the azimuthal and polar scattering angles of charged particles emitted in the forward direction. For more information regarding the WASA detector setup the reader is referred to [22]. η meson production Using the combination of the accelerator COSY and the WASA detector, the production of η mesons can be studied in proton-proton, proton-deuteron (or deuteron-proton) and deuteron-deuteron collisions using either a polarised or an unpolarised beam. During the talk, recent results on η meson production in proton-proton collisions, using a polarised proton beam, and in proton-deuteron fusion were discussed. The pp → ppη reaction The measurement was performed at two different beam momenta of p p = 2026 MeV/c and p p = 2188 MeV/c, corresponding to excess energies of Q η = 15 MeV and Q η = 72 MeV, respectively. In addition, two different spin orientations were used to be able to better estimate systematic effects related to the beam polarisation. The polarisation was determined by measuring the asymmetry A(θ p , φ p ) in the number of elastically scattered protons for the two spin orientations. This asymmetry is related to the analysing power by A(θ p , φ p ) = P · A y (θ p ) cos φ p , where P is the polarisation. For the reaction pp → pp, the analysing power was previously measured [23,24], so that the polarisation could be determined as given in [25]. In the same way, the asymmetry is measured for both polarisation states and for both beam momenta, and thus the analysing power is determined.
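As a hedged illustration of how the polarisation might be extracted from the measured cos φ modulation, the sketch below fits A(φ) = P · A_y · cos φ to binned asymmetry values, assuming a known analysing power for elastic pp scattering at a fixed polar angle. All numerical values are invented for illustration and are not taken from the measurement.

```python
# Sketch of extracting the beam polarisation P from a measured asymmetry
# A(phi) = P * A_y * cos(phi), assuming a known analysing power A_y for pp
# elastic scattering. All numbers are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

phi = np.linspace(-np.pi, np.pi, 12, endpoint=False) + np.pi / 12   # bin centres
A_y_known = 0.25                    # hypothetical analysing power at this polar angle

# Hypothetical measured asymmetry values with statistical scatter
rng = np.random.default_rng(2)
asym_meas = 0.6 * A_y_known * np.cos(phi) + rng.normal(0, 0.01, phi.size)
asym_err = np.full(phi.size, 0.01)

def model(phi, P):
    return P * A_y_known * np.cos(phi)

popt, pcov = curve_fit(model, phi, asym_meas, sigma=asym_err, absolute_sigma=True)
print(f"fitted polarisation P = {popt[0]:.3f} +/- {np.sqrt(pcov[0, 0]):.3f}")
```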
Here, the η meson is reconstructed in its two main decay modes involving neutral particles (η → γγ and η → π 0 π 0 π 0 ). As argued in [20], the product of analysing power and differential cross section can be expanded in terms of the coefficients G y0 1 , H y0 1 and I y0 1 , which can be related directly to the (Ps * Pp)-, (Pp) 2 - and (S s * S d)-partial waves, respectively. Using a fit of the type A_y(\theta_\eta)\, d\sigma/d\Omega = C_1 \sin\theta_\eta + C_2 \cos\theta_\eta \sin\theta_\eta, it was found that the data at Q η = 15 MeV are consistent with s-wave production, whereas a clear signal for higher partial waves was observed at Q η = 72 MeV. The findings at Q η = 15 MeV are in contradiction with earlier predictions from meson exchange models based on pseudo-scalar [26] and vector [27] meson exchange. More detailed information can be found in [20]. The pd → 3 Heη reaction As previously discussed, the near-threshold region of the pd → 3 Heη reaction has been studied in great detail in recent years (see [6][7][8][9]). It was found that a strong final state interaction, potentially involving a (quasi-)bound η − 3 He state, is responsible for the sudden rise of the cross section directly at the production threshold [16][17][18]. With little knowledge about the production mechanism itself, for which thus far two-step models [28][29][30] and a meson exchange model involving the S 11 (1535) resonance [31] have been proposed but have found little support in higher energy data, strong assumptions on the production amplitude have to be made in order to extract final state interaction parameters. In 2014, a new measurement was performed using the WASA-at-COSY experimental facility. Data were taken at 15 different beam momenta between p p = 1.60 GeV/c and p p = 1.74 GeV/c with a step size of 10 MeV/c, corresponding to an excess energy region of 13.6 MeV ≤ Q η ≤ 80.9 MeV. Detecting the 3 He nuclei in the Forward Detector, a missing mass analysis was performed, and both total cross sections and detailed angular distributions could be determined for all 15 excess energies. A large structure was observed in the excess energy range between Q η ≈ 10 MeV and Q η ≈ 40 MeV, potentially hinting at a region in which the final state interaction loses its importance and is superseded by the free production amplitude. The strongly forward-peaked angular distributions extracted in [19] will for the first time allow a study of the behaviour of individual components with rising excess energy, and will thus serve as valuable input to future theoretical model calculations. For more detailed information, we refer to [19]. η meson decays In recent years, two large datasets have been accumulated with the WASA-at-COSY experiment, specifically dedicated to the study of η meson decays: 30 × 10 6 η mesons in the pd → 3 Heη process as well as 500 × 10 6 η mesons in the reaction pp → ppη. Various analyses have been performed or are ongoing, targeting a wide variety of physics topics (see, e.g., [32]). In all these analyses, the near-4π coverage of the WASA detector in combination with its excellent particle identification capabilities is exploited for a fully exclusive event reconstruction. During the conference, a selection of recent highlights was presented. 4.1 The η → γe + e − and η → e + e − e + e − decays Dalitz and double-Dalitz decays of pseudo-scalar mesons are of high interest as they allow the determination of the electromagnetic form factor F(q_1^2, q_2^2) for (q_1^2 > 0, q_2^2 = 0) and (q_1^2 > 0, q_2^2 > 0), respectively.
These form factors play a key role in the determination of the hadronic light-by-light contribution to the muon anomalous magnetic moment, as it, to date, represents one of the largest contributions to the total uncertainty in the theoretical calculation of g µ − 2. Using the proton-deuteron fusion dataset containing 30 × 10 6 reconstructed η mesons, around 14 × 10 3 η → γe + e − have been observed and a branching ratio of BR(η → γe + e − ) = (6.72 ± 0.07 stat. ± 0.31 sys. ) × 10 −3 [32] was determined. The value obtained in [32] is compatible with the PDG average [33]. In [34], the form factor was extracted and shown to be consistent with other measurements [35,36]. In addition, a search for the double-Dalitz decay η → e + e − e + e − yielded 18.4 ± 4.9 signal events and a branching ratio of BR(η → e + e − e + e − ) = (3.2 ± 0.9 stat. ± 0.5 sys. ) × 10 −5 . Again, the branching ratio is compatible with the PDG average within the quoted uncertainty. With 18.4 ± 4.9 signal events, a determination of the two-dimensional form factor F q 2 1 , q 2 2 was not feasible. The analysis of the pp → ppη dataset, containing roughly 500 × 10 6 η mesons is ongoing and was discussed in another contribution. The In the radiative decay η → γπ + π − , M1 and E2 transitions are allowed, whereas the E1 transition would be CP-violating. However, in order to distinguish between the different transitions, a measurement of the polarisation of the photon in the final state would be needed. Instead, the decay η → π + π − e + e − with an intermediate virtual photon creating the e + e − pair allows the search for CP-violation. If CP-violation were to be found in this η meson decay, an asymmetry in the distribution of the the dihedral angle φ, the angle between the decay planes of the π + π − and the e + e − pairs, would be seen. Based on the 30×10 6 pd → 3 Heη events, a total of 215±17 η → π + π − e + e − was observed, resulting in a branching ratio of BR(η → π + π − e + e − ) = (2.7±0.2 stat. ±0.2 sys. )×10 −4 [32]. Also here, the resulting branching ratio is consistent with the PDG average within the uncertainties. The asymmetry in the dihedral angle was determined to be A φ = (−1.1 ± 6.6 stat. ± 0.2 sys. ) × 10 −2 [32], which is consistent with zero and thus does not signal any CP-violation. For further details, see [32]. In the future, more strict limits on a potential CP-violation in this decay can be set, using the larger dataset in pp → ppη. The semi-leptonic decay of an η meson into a neutral pion and an electron-positron pair created by a single virtual photon is forbidden by C-parity conservation. Standard Model predictions based on a two-virtual-photon exchange range from BR(η → π 0 e + e − ) ≈ 10 −11 to BR(η → π 0 e + e − ) ≈ 10 −8 [37][38][39]. Thus, this particular decay channel presents an ideal opportunity to search for C-parity violation in the Standard Model. Also, beyond Standard Model particles, such as the hypothetical dark photon, could create the e + e − pair in the η → π 0 e + e − , thus potentially increasing the branching ratio to an observable value. In a recent analysis based on the pd → 3 Heη dataset no signal was found. A new, world's best upper limit on the branching ratio was determined. Depending on the hypothesis for the e + e − invariant mass distribution, the limit is given by [40] BR virtual (η → π 0 e + e − ) < 7.5 × 10 −6 , BR PS (η → π 0 e + e − ) < 9.5 × 10 −6 . 
( Here, the subscript virtual signals the case in which the e + e − invariant mass distribution was calculated based on a vector meson dominance model, whereas the subscript PS signals the case, where the photon energy distribution and thus also the e + e − invariant mass distribution is given by evenly distributed events in three-particle phase-space. In both cases, the limit is a significant improvement over the previous best limit of BR PDG (η → π 0 e + e − ) < 4 × 10 −5 [33]. With an order of magnitude more data available in the pp → ppη reaction, a further improvement by a currently ongoing analysis seems feasible. More detailed information on these decays as well as on further studies that have been performed using the large η meson datasets gathered with the WASA-at-COSY setup can be found in the respective publications [32,40]. Summary In this contribution, an overview was given over some selected topics within the η meson physics programme with the WASA-at-COSY experiment. In production experiments, the η − N interaction is studied. Recent results on the analysing power in the pp → ppη reaction were presented. Additionally, a new dataset was presented concerning the pd → 3 Heη reaction away from the reaction threshold, aiming at a better understanding of the underlying reaction process in a region that is not dominated by the strong final state interaction. With two large, dedicated datasets a wide variety of η meson decays is studied with WASA-at-COSY. Selected topics included the Dalitz-and double-Dalitz-decays in the quest for the η meson electromagnetic form factor, a search for CP-violation in the decay η → π + π − e + e − and a recent new best upper limit on the C-parity violating decay η → π 0 e + e − . Results presented during the conference are based on the smaller pd → 3 Heη dataset containing around 30 × 10 6 η mesons, while multiple studies on η meson decays in a dataset of 500 × 10 6 η mesons in the pp → ppη reaction are ongoing.
Perceptual thresholds for differences in CT noise texture Abstract. Purpose The average (fav) or peak (fpeak) noise power spectrum (NPS) frequency is often used as a one-parameter descriptor of the CT noise texture. Our study develops a more complete two-parameter model of the CT NPS and investigates the sensitivity of human observers to changes in it. Approach A model of CT NPS was created based on its fpeak and a half-Gaussian fit (σ) to the downslope. Two-alternative forced-choice staircase studies were used to determine perceptual thresholds for noise texture, defined as parameter differences with a predetermined level of discrimination performance (80% correct). Five imaging scientist observers performed the forced-choice studies for eight directions in the fpeak/σ-space, for two reference NPSs (corresponding to body and lung kernels). The experiment was repeated with 32 radiologists, each evaluating a single direction in the fpeak/σ-space. NPS differences were quantified by the noise texture contrast (Ctexture), the integral of the absolute NPS difference. Results The two-parameter NPS model was found to be a good representation of various clinical CT reconstructions. Perception thresholds for fpeak alone are 0.2  lp/cm for body and 0.4  lp/cm for lung NPSs. For σ, these values are 0.15 and 2  lp/cm, respectively. Thresholds change if the other parameter also changes. Different NPSs with the same fpeak or fav can be discriminated. Nonradiologist observers did not need more Ctexture than radiologists. Conclusions f peak or fav is insufficient to describe noise texture completely. The discrimination of noise texture changes depending on its frequency content. Radiologists do not discriminate noise texture changes better than nonradiologists. Introduction The visual appearance of medical images is influenced by both their noise magnitude and noise texture.In fact, it is known that both can affect the detectability of small and low-contrast lesions. 1 Although multiple factors affect noise magnitude, the noise texture in CT imaging is mainly influenced by the reconstruction method and reconstruction kernel used.3][4][5] This appearance is usually associated with a shift of the noise power spectrum (NPS) toward the lower frequencies compared with that of images obtained with filtered-back projection (FBP). 6This shift downward in the noise frequencies is mainly due to these algorithms achieving a reduction in the image noise by increasing the spatial correlation across voxels, especially in low-dose conditions.As expected, this increase in correlation can lead to a lowering of the spatial resolution of the image. 7,8epending on the contrast, newly developed, deep-learning-based reconstruction (DLR) algorithms seem to be able to decouple this usual relationship between spatial resolution and noise texture from each other to a larger extent than that existing in current iterative reconstruction methods. 9This may allow for new opportunities to manipulate noise texture during reconstruction, improving the detectability of low-contrast lesions. 
Therefore, with the increasing use of iterative reconstruction algorithms in CT and especially with the advent of deep-learning-based postprocessing, it is of interest to better understand the phenomenon of noise texture changes during reconstruction and postprocessing in CT.With this knowledge, it could be feasible to tune some of these algorithms to optimize the resulting image noise texture while maintaining the spatial resolution.This can be achieved by studying only the shape of the NPS, independently of its magnitude.However, to make these insights clinically relevant, it is necessary to first determine what changes in noise texture are actually perceptible by a human observer.Given the complexity of the human visual system, it is not immediately clear how sensitive humans are to noise texture differences.Therefore, it is of interest to characterize the minimum changes in the NPS shape that are needed for a human observer to detect a change in the image texture. To be able to systematically study noise texture changes, a simple and continuous parametric model that describes the NPS change, and therefore noise texture, is needed.It is common to summarize the information of CT NPSs with one parameter, the frequency at which the NPS peaks (f peak ), or alternatively, the average NPS frequency (f av ).However, it is clear that one parameter can provide only limited information on the frequency distribution of the noise texture.In other words, multiple different NPSs, all resulting in different noise textures, could have the same f peak and/or f av .To overcome this, a more complete parametric representation of the CT NPS shape is needed. Therefore, the purpose of this study is to introduce and validate a more complete parametric model of the NPS in CT and use that model to determine the detectability of changes in noise texture for human observers. Materials and Methods To investigate the perceptual thresholds for noise texture changes, we created and evaluated a simple and continuous two-parameter model that describes the shape of the NPS of CT images.This model was then used in forced-choice psychophysical experiments using adaptive staircase methods to estimate the observer thresholds as a function of changes in the two parameters.To understand if these perceptual thresholds might be different for radiologists compared with nonradiologists, a limited version of the study was repeated with radiologists.Finally, to determine if the threshold changes varied based on differences in the reference NPS, these experiments were performed using two different reference noise textures, one for body and the other for lung reconstruction kernels. Modeling the Noise Texture In CT, an NPS usually has a ramp dominating the lower frequencies and an apodization part dominating the higher frequencies. 
10Previously, to model the full NPS, a six-parameter model of NPS was suggested: 11 E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 1 ; 1 1 4 ; 1 3 8 In this model, the parameter a controls the magnitude of the noise, and the other parameters primarily determine the shape of the NPS.However, having six parameters that can change alone or together results in a large number of possible changes and is impractical for use in an observer study.Therefore, we propose the simplification of the model to a three-parameter one.By evaluating the resulting NPS fits from one manufacturer, the values of b, c, and d were empirically determined and fixed to 1 for b and c and 2 for d: The applicability of this model for clinically available reconstruction kernels and reconstruction methods in CT for various vendors is tested. If an NPS is described using Eq. ( 2), the peak frequency of this NPS is derived analytically as follows: E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 3 ; 1 1 7 ; 3 5 9 To characterize the NPS independently of the fitting model used, we propose two parameters: one parameter that describes the upslope and one that describes the downslope.For the description of the upslope, we used f peak because NPSs are supposed to monotonically increase to f peak and this parameter is already often used to describe the NPS.For the downslope, we used the standard deviation (σ) of a half-Gaussian that is fitted through the downslope of the NPS, i.e., for all frequencies equal to or higher than f peak , resulting in E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 4 ; 1 1 7 ; 2 3 8 where a 0 determines the magnitude of the Gaussian and σ is its width.So σ was used as a single parameter to describe the NPS downslope.Because we are modeling only the shape of the NPS and not its overall magnitude, all modeled NPSs were set to unit area under the curve.An example NPS and its resulting parameterizations are shown in Fig. 1. Information on the testing on the applicability of this model is given in Appendix A. Generation of Patches with Various Noise Textures Given a specific f peak and σ, a continuous distribution of NPSs can be generated using Eq.(1) or Eq. ( 2).For a detailed description of the procedure used, see Appendix A. From the NPS resulting from these equations, a two-dimensional NPS (NPS 2D ) was created assuming that the NPS 2D is radially symmetric.To be able to generate a specific noise texture, the generated NPS 2D is applied to white noise as follows: E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 5 ; 1 1 4 ; 7 1 2 Nðμ; σÞ where N is the resulting colored-noise image, F is the FFT operator, and n is a realization of white Gaussian noise with a mean value of μ and a standard deviation of σ.For the observer study, noise patches of 256 × 256 pixels were created. Noise Texture Contrast If two noise textures and their corresponding NPSs are considered, the noise texture contrast (C texture ) is calculated based on the contrast that an ideal observer is able to see.For the derivation of the ideal observer, see Appendix B. 
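As a concrete illustration of the patch-generation step, the sketch below applies a radially symmetric NPS to white Gaussian noise. The ramp-plus-half-Gaussian shape stands in for the analytic model of Eq. (2), whose exact form is not reproduced here, and the pixel size and normalisation are simplifying assumptions.

```python
# Sketch of generating a noise patch with a prescribed texture by filtering white
# Gaussian noise with the square root of a radially symmetric 2D NPS.
import numpy as np

def nps_1d(f, f_peak=1.89, sigma=1.28):
    """Simplified two-parameter NPS shape: linear ramp up to f_peak, half-Gaussian beyond."""
    return np.where(f <= f_peak,
                    f / max(f_peak, 1e-9),
                    np.exp(-(f - f_peak) ** 2 / (2 * sigma ** 2)))

def colored_patch(n_pix=256, pix_size_cm=0.05, f_peak=1.89, sigma=1.28, seed=None):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n_pix, d=pix_size_cm)        # spatial frequencies in lp/cm
    FX, FY = np.meshgrid(fx, fx)
    fr = np.hypot(FX, FY)                            # radial frequency -> radially symmetric NPS
    nps2d = nps_1d(fr, f_peak, sigma)
    nps2d /= nps2d.sum()                             # shape only, unit "area"
    white = rng.standard_normal((n_pix, n_pix))
    return np.fft.ifft2(np.fft.fft2(white) * np.sqrt(nps2d)).real

patch = colored_patch()                              # 256 x 256 patch with the body-like texture
```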
Effectively, the ideal observer looks at the absolute differences between the two NPSs.Therefore, the noise texture contrast is calculated from the NPS 2D as E Q -T A R G E T ; t e m p : i n t r a l i n k -; e 0 0 6 ; 1 1 4 ; 5 5 5 Observer Study To investigate the detectability of differences between two colored noise textures, a two alternative forced choice observer study was performed.For each realization of the experiment, three noise patches, created in real time, were shown to the observer.One was labeled the "reference" noise patch, one patch had another noise realization with the same NPS as the reference patch, and the third noise patch originated from an NPS with a different f peak and/or a different σ.All noise patches were shown with a window level equal to the mean gray level and a window width of 10 (i.e., 10 times the SD).The task for the observer was to identify the patch that had the same noise texture as the reference noise patch (Fig. 2).After the observer made a choice, the correct patch was highlighted for one second and then the next trial was shown. To determine the parameter values to use for the noise patches of the next trial, a staircase method was applied with a step size of 15% of the current value.The difference in the parameter values between the different NPSs was decreased after three correct responses and increased after one incorrect response. 12The trials were stopped after 12 reversals, and every series was executed 6 times.Per repetition, the geometrical mean over the trials from the last eight reversals was determined, which is an estimate of the 80% correct point on the psychometric curve. 13The average value of this 80% correct point from the last five repetitions was used as the detectability threshold. For this study, two reference NPSs were chosen, one from a body kernel and one resulting from a lung kernel, determined from images of a 320 mm water phantom on a clinical wide-area Fig. 2 Screenshot of a trial shown to the observer.The task of the observer was to select the alternative noise patch (option 1 or option 2) that has the noise texture most comparable to the one of the reference noise patch. CT system (Aquilion One PRISM edition, Canon Medical Systems Corporation, Otawara, Japan) using the dose determined by the automatic tube current modulation and a hybrid iterative reconstruction (HIR) method (AIDR 3D, Canon Medical Systems Corporation).The reference NPSs are shown in Fig. 3.The f peak and σ values are 1.89 and 1.28 lp∕cm for the body kernel and 4.64 and 1.83 lp∕cm for the lung kernel. During the observer study, the two reference NPSs were approached from eight directions, involving a change in f peak only, a change in σ only, and simultaneous changes in both (all from higher and lower values) (see Fig. 4).The starting test values were determined by what was a clearly visible difference in noise texture for one of the investigators.Initially, five nonradiologist Fig. 
3 Reference noise power spectra (NPSs).The body NPS was obtained using an HIR (Hybrid-IR) method with body settings, and for the lung NPS, a Hybrid-IR with lung settings was used.The f peak and σ values are 1.89 and 1.28 lp∕cm for the body kernel and 4.64 and 1.83 lp∕cm for the lung kernel.observers (PhD students in imaging science and medical physics trainees) completed studies to evaluate all 8 directions for both reference NPSs in multiple sessions.A maximum of 2 directions was performed in one session to prevent fatigue.All observers were able to complete each twodirection session within 1 h. To investigate if radiologists are able to detect more subtle differences in noise texture, the experiment was repeated with 27 radiologists at the Medical Imaging Perception Lab at the European Congress of Radiology (ECR) 2023 and afterward with five radiologists from the Radboud University Medical Center.Due to the limitation in available time per radiologist, each radiologist only performed one of the reference NPS and direction combinations.Each series was performed only five times, of which the last four were used for the calculation of the geometrical mean.This led to results from two radiologists for each direction. All experiments were performed in dimmed lighting conditions, comparable to diagnostic reading room conditions.The noise patches were shown on a DICOM GSDF calibrated diagnostic monitor (for the nonradiologist: Barco MDMC-12133 and for the radiologists: Barco MDNC-3321, Barco, Kortrijk, Belgium). Analysis of the Results For each reference NPS, the threshold f peak and σ values per nonradiologist observer and the average threshold over all nonradiologist observers were calculated for each direction.A threshold detectability boundary ellipse was fitted through the eight average threshold values using a least squares method. The threshold noise texture contrast for each threshold condition, as well as their 95% confidence interval, was calculated per observer.For the two radiologists, only the limiting noise texture contrast was calculated.The radiologists were assumed to perform the same as the other observers if their threshold noise texture contrast was within the 95% confidence interval of the nonradiologist observers. Results In Fig. 5, the f peak and σ limiting values for each nonradiologist observer and the overall average are shown for both reference NPSs.Also the detectability threshold ellipse is shown.For the body NPS (f peak : 1.89 lp∕cm and σ: 1.28 lp∕cm), the ellipse has the center close to the reference value, with f peak ¼ 1.86 lp∕cm and σ ¼ 1.30 lp∕cm.The major radius of the ellipse makes an angle of 143 deg with the f peak axis.Based on this elliptical fit, the detectability threshold f peak is 0.2 lp∕cm.Of course, this value changes if σ is changed simultaneously.For the lung NPS (f peak : 4.64 lp∕cm and σ: 1.83 lp∕cm), the ellipse center is at f peak ¼ 4.30 lp∕cm and σ ¼ 2.46 lp∕cm, and the major radius makes an angle of 120 deg.The corresponding threshold f peak is 0.4 lp∕cm.Therefore, the detection threshold for a change in f peak is higher when using the lung NPS as the reference compared with the body NPS as reference. The background of the two graphs in Fig. 
5 shows the noise texture contrast (C texture ) compared to the reference NPS.The lighter the color is, the higher the contrast is.For a changing f peak , less C texture is needed to be perceptible to a human observer compared with changing the downslope.To make a change in texture perceptible, the most C texture is needed in the direction of lowering f peak combined with increasing σ, or vice versa.The iso-f av line through the reference NPS shows that the average frequency is a better estimator for the visibility of noise texture changes than f peak because the iso-f av line is more parallel to the major axis of the threshold ellipse, whereas the iso-f peak line runs more closely to the minor axis of the ellipse.However, NPSs with the same average frequency can still be distinguishable from each other. The noise texture contrast thresholds determined with radiologists show that radiologists have a noise texture contrast threshold within the 95% confidence interval of the nonradiologist observer results in 17 of the 32 experiments.In 12 cases, the radiologists had a noise texture contrast threshold above the 95% confidence interval of that from the nonradiologist observers.A detectability threshold could not be determined for two experiments because the observers would have needed a larger difference to be able to detect the correct noise texture than would be possible (σ would become negative).Table 1 and Fig. 6 show the individual radiologist results and the average results of the nonradiologist observers. Discussion Because noise texture influences detectability of lesions and new deep learning-based CT methods can more easily modify noise texture, it is of interest to study the effect of noise texture changes on the detectability of lesions.In this research, we focused on the detectability of noise texture changes itself, hypothesizing that if an observer cannot detect differences in noise texture, then lesion detectability across these noise textures would be unaffected.We found the thresholds for detectability with varying f peak and σ for two commonly used reconstruction kernels and showed that radiologists do not perform better than nonradiologist observers in detecting these differences.This may suggest that the sensitivity to changes in NPS is related to the human visual system.However, depending on the direction of the change, the intraobserver variability of this detectability threshold can be large.This is especially true in the direction of the major axis of the ellipse. The change in f peak and σ needed for the human observer to detect the difference varies for both conditions and for the directions within each condition.However, the average noisetexture contrast needed is roughly equivalent, except for the lung reference NPS in the direction of a higher f peak and lower σ.In this case, the noise texture differences are concentrated at high spatial frequencies.This result may reflect limitations in human observers at high spatial frequencies.5][16] However, we should note that these models have been developed for images that look very different from CT noise textures.Therefore, further validation is needed to determine if generalization to CT noise is applicable. 
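The thresholds discussed above come from the adaptive staircase described in the observer-study section; a toy simulation shows how the 80%-correct point is obtained from the reversal values. The simulated observer below is a placeholder psychometric function, not real observer data.

```python
# Toy simulation of the 3-down/1-up staircase with a 15% step size; the threshold
# estimate is the geometric mean of the last eight reversal values.
import numpy as np

rng = np.random.default_rng(0)

def p_correct(delta, threshold=0.3, slope=4.0, guess=0.5):
    """Hypothetical 2AFC psychometric function (Weibull-like), for illustration only."""
    return guess + (1 - guess) * (1 - np.exp(-(delta / threshold) ** slope))

delta, step = 1.0, 0.15
correct_run, reversals, last_dir = 0, [], None
while len(reversals) < 12:
    trial_ok = rng.random() < p_correct(delta)
    correct_run = correct_run + 1 if trial_ok else 0
    direction = None
    if not trial_ok:
        direction = +1                        # increase the difference after one error
    elif correct_run == 3:
        direction, correct_run = -1, 0        # decrease it after three correct responses
    if direction is not None:
        if last_dir is not None and direction != last_dir:
            reversals.append(delta)           # a change of direction is a reversal
        last_dir = direction
        delta *= (1 + direction * step)

threshold_80 = np.exp(np.mean(np.log(reversals[-8:])))
print(f"estimated 80%-correct threshold: {threshold_80:.3f}")
```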
A slightly higher detectability threshold was found for radiologists compared with the nonradiologist observers.This might be caused by the fact that the evaluation with radiologists was performed in only one change direction, so they were less used to the task compared with the nonradiologists, who did all directions.In addition, radiologists only performed five repetitions, and the nonradiologist observers performed six.However, we did not find that the last repetition of the nonradiologist observers was better than the first five. This research is a first step in the investigation of the effect of noise texture from nonlinear reconstruction methods on the perception of lesions in clinical CT images.Further research is needed to include the visibility of lesions with different noise textures and, eventually, with the inclusion of anatomical background.The latter is needed not only due to its interference with No value available for the second radiologist for directions 3 (abdomen) and 8 (lung) because, for these observers, the difference had to be larger than possible for σ (σ would become negative). the detectability of lesions but also because the noise texture from nonlinear reconstructions is probably different from that in homogeneous backgrounds, potentially also breaking the assumption that the noise is radially symmetrical.However, in this first initial study, we aimed to determine what differences in noise texture, as characterized by differences in NPS, are actually detectable by the human visual system, so the follow-up studies could be performed with meaningful noise texture differences. Our study has several limitations.First, the number of observers was limited.Future research might involve increasing the number of observers to better estimate the average thresholds as well as their variability for the various directions.Also evaluating only eight directions of change is quite limited considering that six parameters are needed to describe an ellipse.Hence, evaluating more directions could provide a better estimation of the limiting ellipse.In addition, we used only two reference NPSs.Although these NPSs are used often for lung and body exams, acquisitions for bone and brain result in NPSs having different f peak and σ.Also other reconstruction techniques, such as model-based iterative reconstructions (MBIR) or DLR, as well as reconstructions from other vendors, will result in different NPS shapes.Finally, the underlying noise distribution used was Gaussian, although in recent studies, we are seeing that nonnormal CT noise distributions can be discriminable from NPS-matched normal noise distributions. 17o further study this effect, similar studies as this one are needed; however, these should not change the NPS but change the underlying noise distribution.Next, just as a follow-up for this study, the effect on lesion detection should be studied. Conclusions Human observers showed different sensitivity to changes in CT noise textures based on peak frequency (f peak ), and the downslope of the NPS (σ) alone and in combination.Radiologists did not detect these textural changes any better than nonradiologist observers.Describing NPS using only the f peak or the f av alone was insufficient to describe perceived differences in CT noise texture.The presented model using f peak and σ can serve as a starting point to better describe noise texture and to further study the impact of CT noise texture on human task performance. 
Appendix A: Verification of the NPS Model in CT To obtain a wide representation of CT NPS curves, water phantom images were acquired using CTs from four vendors and different reconstruction techniques.These NPSs were modeled and parameterized, and the goodness-of-fit for the models and appropriateness of the parameters was determined.6.1 NPS Acquisition NPS data were acquired on four CT systems from four different vendors (Canon Medical Systems, GE HealthCare, Philips healthcare, Siemens Healthineers).A 320-mm diameter water phantom was imaged using the settings used clinically for the abdomen protocol at the corresponding site and with a lower dose setting.The acquisitions were reconstructed using the clinically used kernel for abdomen, lung, brain, and bone, for the following reconstruction methods (if available): FBP, HIR, MBIR, and DLR.The slice thicknesses used were 0.5 mm for Canon, 0.625 mm for GE, and 1.0 mm for Philips and Siemens.For all but the FBP reconstruction, three strength settings were used, leading to a maximum of 80 reconstructions per CT system, if all reconstruction methods were available [2 dose levels, 4 kernels, 10 reconstruction methods (1 FBP + 3 strengths × 3 methods)].Each NPS 1d was calculated in a central ROI of 128 × 128 pixels using the method described by Boedeker et al. 18 From each acquisition, a stack of at least 100 slices was used for the NPS calculation.For each slice, the NPS was calculated and averaged over all slices.This average NPS 1d was normalized to have a unit area under the curve. NPS Data Analysis The acquired NPS 1d was fitted using the six-parameter model [Eq.( 1)] and the three-parameter model [Eqs.( 2) and ( 3)] by a least-squares method.To determine the f peak , the NPS 1d was first filtered using a low-pass filter at 4% of the full bandwidth.This prevented small local peaks from affecting the determination of f peak .Finally, the three-parameter NPS 1d was described with a two-parameter model [with Eq. ( 4)], and the f av was calculated. To generate the two-parameter parameterized NPS, a procedure in Python was written using the curve_fit and minimize_scalar functions from the scipy.optimizepackage.The three parameterized NPSs were compared to the original NPS 1d using the relative sum of absolute differences (RSAD): Results All acquired NPS shapes are shown in Sec.6.4.Of the 194 acquired NPSs, 152 (78%) NPSs have a shape that has a ramp dominating the low frequencies and an apodization part that dominates the higher frequencies.14 (7%) NPSs only have a ramp (no downslope), and 28 (14%) NPSs have a different shape altogether (e.g., multiple peaks).In Tables 2-5, all values of f peak , σ, f av , and the RSADs between the parameterized NPSs and the acquired NPSs are given.For several NPSs, there is no σ value because the NPS has no downslope within the Nyquist frequency.Two NPSs with the same f peak or the same f av are shown in Fig. 7 together with their corresponding noise textures.As can be seen, the noise textures are clearly discernible, whereas f peak or f av is the same.The combination of f peak and σ do differ for these situations. 
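A condensed sketch of the parameterisation and goodness-of-fit step described above is given below; the synthetic input NPS is a placeholder for a measured NPS_1d, and the relative-sum form of the RSAD is an assumption, since the exact expression did not survive extraction.

```python
# Sketch of the downslope parameterisation (half-Gaussian fit beyond f_peak) and the
# RSAD goodness-of-fit measure; nps_meas stands in for an NPS_1d from a phantom scan.
import numpy as np
from scipy.optimize import curve_fit

f = np.linspace(0, 10, 200)                               # lp/cm
nps_meas = f * np.exp(-(f / 2.5) ** 2)                    # hypothetical acquired NPS
nps_meas /= np.trapz(nps_meas, f)                         # unit area under the curve

f_peak = f[np.argmax(nps_meas)]                           # upslope descriptor

def half_gauss(f, a0, sigma):
    return a0 * np.exp(-(f - f_peak) ** 2 / (2 * sigma ** 2))

mask = f >= f_peak                                        # fit the downslope only
(a0, sigma), _ = curve_fit(half_gauss, f[mask], nps_meas[mask], p0=[nps_meas.max(), 1.0])

nps_model = np.where(mask, half_gauss(f, a0, sigma), nps_meas)
rsad = np.sum(np.abs(nps_meas - nps_model)) / np.sum(nps_meas)
print(f"f_peak = {f_peak:.2f} lp/cm, sigma = {sigma:.2f} lp/cm, RSAD = {rsad:.3f}")
```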
The six-parameter NPS model can fit the acquired NPSs with a RSAD smaller than 10% for FBP, HIR, MBIR, and DLR in 100%, 92%, 71%, and 100% of the cases, respectively.For the three-parameter model, this drops to 88%, 69%, 45%, and 54%, respectively.For a more elaborate overview, see Table 6.The NPSs modeled by the parameters f peak and σ using the threeparameter model yielded a modeled NPS that, over all manufacturers, resulted within 20% RSAD for FBP, HIR, MBIR, and DLR in 69%, 78%, 54%, and 67% of all NPSs, respectively (Table 6).To avoid possible differences in appearance due to differences in higher order statistics in the original reconstructions, the noise textures were generated by applying the NPS to a realization of white noise. Table 6 Percentages of NPS parameterizations with a relative sum of absolute differences with the acquired NPS below 10%, between 10% and 20%, and above 20%, per manufacturer and reconstruction type.The first number in each cell is the percentage for the six-parameter model and the second number is for the Three-parameter model.Some reconstruction types were not available for some manufacturers.Notes RSAD: Relative sum of absolute differences The following reconstruction techniques were not available: Siemens HIR and DLR.GE MBIR and DLR.For Philips MBIR Bone, the hospital uses the same reconstruction kernel as for MBIR Lung. Appendix B: Ideal Observer We consider a binary discrimination task to discriminate between two classes of Gaussiandistributed images defined by different power spectra.Let g represent the image pixel values as a column vector.The two hypotheses for the discrimination task are E Q -T A R G E T ; t e m p : i n t r a l i n k -; s e c 7 ; 1 1 4 ; 6 0 7 where the textural differences are entirely represented in the image covariance matrix (Σ 1 versus Σ 2 ) and there is no difference in the image mean (μ). Ideal Observer The ideal observer test statistic is based on the log-likelihood ratio.The likelihood of a hypothesis (given an image stimulus) is determined from a multivariate normal distribution: E Q -T A R G E T ; t e m p : i n t r a l i n k -; s e c 7 .1 ; 1 1 4 ; 5 1 2 where M is the number of pixels in the image and j j represents the determinant of the matrix argument.Calling this a likelihood (instead of a probability density function) means that we consider g to be given (i.e., the independent variable) and H i to be unknown (the dependent variable).The log-likelihood ratio is then given as E Q -T A R G E T ; t e m p : i n t r a l i n k -; s e c 7 .1 ; 1 1 4 ; 4 2 8 The last line is equivalent to the log-likelihood ratio, with removing the terms that do not affect the performance. To evaluate the ideal observer in this case, we need to be able to compute the inverse of the class covariance matrices.This is where textures defined by an NPS can make the computations much easier. 
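As a concrete illustration of why the NPS makes this tractable, the following sketch evaluates the frequency-domain test statistic derived in the next subsection, together with the texture contrast, for two hypothetical power spectra; the zero-mean assumption and the regularisation constant ε are simplifications.

```python
# Sketch of the Fourier-domain ideal observer statistic and the texture contrast for
# two hypothetical radially symmetric 2D power spectra S1 and S2 (placeholder shapes).
import numpy as np

def radial_nps2d(n, f_peak, sigma, d=0.05):
    fx = np.fft.fftfreq(n, d=d)
    fr = np.hypot(*np.meshgrid(fx, fx))
    s = np.where(fr <= f_peak, fr / max(f_peak, 1e-9),
                 np.exp(-(fr - f_peak) ** 2 / (2 * sigma ** 2)))
    return s / s.sum()

n = 128
S1 = radial_nps2d(n, f_peak=1.89, sigma=1.28)     # body-like texture
S2 = radial_nps2d(n, f_peak=4.64, sigma=1.83)     # lung-like texture
eps = 1e-8                                        # regularises near-zero spectral elements

def log_likelihood_ratio(g):
    """lambda(g) ~ (1/M) * sum_k |g_hat_k|^2 * (1/(S1_k+eps) - 1/(S2_k+eps)), zero-mean g."""
    g_hat = np.fft.fft2(g)
    return np.sum(np.abs(g_hat) ** 2 * (1.0 / (S1 + eps) - 1.0 / (S2 + eps))) / g.size

C_texture = np.sum(np.abs(S1 - S2))               # discrete stand-in for the integral of |NPS difference|
```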
Frequency Domain Computation

If the different image textures may be considered to be realizations of a stationary random process, then their covariance matrices are diagonalized by the Fourier basis:

Σ_i = (1/M) F* S_i F,

where S_i is a diagonal matrix representing the noise power spectrum and F is the finite (usually 2D) Fourier transform matrix. So the product Fg would be the FFT of image g. Because of the properties of the FFT, we have F⁻¹ = (1/M) F*, where the superscript * means the transpose conjugate (sometimes called the Hermitian or adjoint operator). So we have

Σ_i⁻¹ = (1/M) F* S_i⁻¹ F.

We can use the spectral decomposition of the covariance matrices in the likelihood ratio to recast the ideal observer formula in the Fourier transform domain. Let the caret represent a Fourier transform (i.e., ĝ = Fg). We write g − μ as F⁻¹(ĝ − μ̂); note that (g − μ)ᵀ may be rewritten as (g − μ)*, which is appropriate because the quantity is real, and therefore the Hermitian is equivalent to the transpose. Based on the formula for the inverse spectrum above, we get

λ(g) = (1/M) (ĝ − μ̂)* (S₁⁻¹ − S₂⁻¹) (ĝ − μ̂).

Because the power-spectrum matrices are diagonal, their inverses are as well, so this quadratic form can be written as a sum over frequency components:

λ(g) = (1/M) Σ_k |ĝ_k − μ̂_k|² (1/S₁,k − 1/S₂,k).

This sum can be problematic if any of the spectral elements are 0, which usually happens for power spectra estimated from samples. It may be advisable to regularize the power-spectrum inversion by replacing S_i,k with S_i,k + ε, where ε represents the variance of discretization "noise."

Fig. 1 Example of NPS parameterization. The original acquired NPS (in gray) is fit using the three-parameter model [Eq. (2), in dark blue]. The peak frequency (f_peak) is used to describe the first section of the NPS (in light blue). A half-Gaussian is fit through the section beyond f_peak [Eq. (4), dashed yellow]. The σ of that Gaussian describes the apodization part of the NPS (light gray).
Fig. 4 All eight directions of changes in f_peak and/or σ that were investigated. Compared to the reference NPS, four directions change f_peak or σ only, and four directions involve a change in both f_peak and σ.
Fig. 5 Results from the observer study. The 80% threshold limits for each observer, as well as the average values and the fitted detectability threshold ellipse, are shown. Results from the (a) body reference NPS and (b) lung reference NPS. The color of the background indicates the noise texture contrast.
Fig. 6 (a), (b) Results from the radiologists shown in conjunction with the average of the nonradiologist observers. In 17 of the 32 radiologist experiments, the results are within the 95% confidence intervals of the nonradiologist results.
Fig. 7 NPSs with clearly different noise textures but with the same (a) peak frequency (1.26 lp/cm) and (b) average frequency (1.64 lp/cm). To avoid possible differences in appearance due to differences in higher order statistics in the original reconstructions, the noise textures were generated by applying the NPS to a realization of white noise.

6.4 NPS Shapes
NPS shapes for the images obtained for the various manufacturers, reconstruction techniques, and kernels. The acquired NPS and the fitted three-parameter model are shown below.

Table 1 80% detection thresholds for both body and lung NPS as a reference for the nonradiologist observers. Also the noise texture contrast (C_texture) for the first and second radiologists is shown. If the C_texture of the radiologist is in bold, this value is outside the 95% interval of the nonradiologist observers. n.a.: not applicable (in this direction, this parameter does not change compared to the reference).
Table 2 Parameters and appropriateness of fit of the various NPSs acquired on a Canon Aquilion One PRISM Edition.
Table 3 Parameters of the various NPSs acquired on a GE Discovery CT750 HD.
Table 4 Parameters of the various NPSs acquired on a Philips iCT 256.
Table 5 Parameters of the various NPSs acquired on a Siemens Somatom Force.
RSAD: relative sum of absolute differences between the acquired and the modeled NPS. Modeled NPS: NPS determined based on the two-parameter model, using f_peak and σ from the acquired NPS.

Disclosures: Ioannis Sechopoulos has research agreements with Siemens Healthcare, Canon Medical, ScreenPoint Medical, Sectra Benelux, Volpara Healthcare, Lunit, and iCAD. He also has a speaker agreement with Siemens Healthcare. Craig Abbey is an occasional consultant for Canon Medical Systems USA and Izotropic Corporation LLC, where he also holds stock options. Kirsten Boedeker and Daniel Shin are both employees of Canon Medical Systems Corporation.
A discussion paper on stigmatizing features of diabetes Abstract Aim This manuscript aims to describe stigmatizing features of diabetes. Design This article presents a narrative review of literature pertaining to stigma surrounding diabetes in different contexts. Methods A literature search was conducted in CINAHL, PubMed and Web of Science for qualitative studies published between 2007–2017. The search was completed using various combinations of diabetes, T1DM, T2DM, stigma, social/public stigma, internalized/self‐stigma, stigmatization and diabetes‐related stigma in English. The reviewers then independently reviewed the eligible studies (N = 18) to extract data. Results From the 18 studies included in this narrative review, seven features related to stigma in diabetes were identified. People with diabetes were most notably considered and stigmatized as being “sick,” “death reminder,” “rejected marriage candidate,” “self‐inflicting,” “contagiousness,” “requiring a dietary modification” and “drunk or drug abuser.” Stigma, a discrediting attribute minimizing a person's value, is a multi-dimensional construct including interpersonal and intrapersonal experiences (Goffman, 1963). It is defined as discriminatory behaviours directed towards people with the stigmatized condition (Bogart et al., 2008), although it is not limited only to the behaviours. Weiss, Ramakrishna, and Somma (2006) have suggested stigma is typically a social process, experienced or anticipated, characterized by exclusion, rejection, blame or devaluation that result from experience, perception or reasonable anticipation of an adverse social judgement about a person or group. This judgement is based on an enduring feature of identity conferred by a health problem or health-related condition, and the judgement is in some essential way medically unwarranted (p.279). Stigma in chronic illnesses such as HIV/AIDS has received considerable attention, but there has been limited attention given to stigma and diabetes (Browne et al., 2014). A small body of research exists related to understanding stigma as a social construct in different cultures. Culture affects how people exhibit alternate thinking, feeling and behaving processes that may affect stigmatization and discrimination towards people with diabetes. Such differences may affect the definition and manifestation of stigma (Weiss et al., 2006). A comprehensive understanding of stigma surrounding diabetes is important for informing policy and practice to improve the quality of care and quality of life for those living with diabetes (Schabert et al., 2013). The literature review about stigma in diabetes aimed to describe stigmatizing features of diabetes in different countries around the world. The review of findings may provide a foundation for future research related to stigmatization in living with diabetes. | Design This article presents a narrative review of literature related to stigma in diabetes. | Search strategy Each search was completed using various combinations of these search words: diabetes, T1DM, T2DM, stigma, social/public stigma, internalized/self-stigma, stigmatization and diabetes-related stigma. | Inclusion criteria Qualitative studies were included in this review of literature. Articles had to focus on stigmatization against people diagnosed with T1DM, T2DM or both. Studies describing the stigmatized perception of people without diabetes towards those living with diabetes also were included. 
Studies that were excluded were not peer-reviewed, did not provide enough information about stigmatized features of diabetes or described insufficient data related to stigma in diabetes for data extraction. Nineteen qualitative manuscripts were identified for inclusion in the review. Figure 1 shows the PRISMA flow diagram of this review. | Data extraction Two reviewers (S.A. and M.D.I.) evaluated abstracts to identify articles meeting the inclusion criteria. Then, eligible studies and full text of relevant articles to stigma in diabetes were carefully read by each reviewer independently. A data extraction form was adapted from the literature. Discrepancies between the two reviewers in the extracted data were resolved in consensus discussion. | Ethical statement The research team comprehensively reviewed all the relevant work and judged research quality and relevance. All the references also were acknowledged and fully cited. | RESULTS Description of studies: Eighteen qualitative studies were analysed in this narrative review. Ten studies included T1DM participants, eight studies included T2DM and four studies included participants without diabetes. Five studies were conducted in an Asian population; two studies in Africa; and the remaining studies were conducted in the United States, Australia and the UK. Study characteristics can be found in Table 1. The literature review highlighted that diabetes-related stigma is a complex issue. Some themes are interrelated and could not be separated. In these manuscripts, people with diabetes were mostly stigmatized as "sick and disabled," "death reminder," "rejected marriage candidate," "self-inflicting," "contagious," "requiring dietary modification" and "drunk or drug abuser." | Sick Seven studies in different countries (the United States, Canada, Australia, India, Iran and Palestine) reported that people with diabetes are stigmatized as being sick. The designation of "being sick" affects an individual's ability to experience a normal independent life, and is a common diabetes-related stigma in Australia (Browne et al., 2014). One study in Iran found that young adults with diabetes perceive the social stigma of diabetes as being sick and disabled (Abdoli, Abazari et al., 2013). A similar result was found in Palestinian children with T1DM, who perceived diabetes as a stigmatizing condition that spoiled their identity as a healthy individual, making them feel like an outsider and not a normal person (Elissa et al., 2016). A study performed in a U.S. Arab American community found that individuals often viewed diabetes as a weakness or breakdown (DiZazzo-Miller et al., 2017). Indian mothers of children with diabetes experienced diabetes-related stigma when other people labelled their child as a "sick kid" (Verloo, Meenakumari, Abraham, & Malarvizhi, 2016). This finding is similar to Weiler's (2007) study and Weiler and Crist's (2009) study where Mexican American participants with diabetes experienced stigmatization as "being sick" and referred to the stigma as "The Big D." | Death reminder In three studies (Tajikistan, Iran and Soweto), individuals with diabetes were stigmatized as a "death reminder." Being a "death reminder" has a strong connection of being stigmatized as "sick." Children with T1DM in Tajikestan described their experiences of how people predict their premature death by saying, "You are very sick! 
You will die soon; you will not have a long life" (Haugvik, Beran, Klassen, Hussain, & Haaland, 2016), which is similar to (Abdoli, Abazari et al., 2013) in Iran. Some participants in Mendenhall and Norris's (2015) study also indicated how some people feel diabetes is a "death panel" by whispering about amputations due to diabetes and negative stories surrounding diabetes. | Marriage rejected candidate Diabetes-related stigmatization is considerably greater for younger, unmarried women, particularly in Asian countries. Delayed marriage is reported in people with diabetes in different countries such as Iran and India (Abdoli, Abazari et al., 2013). Iranians believe that women with diabetes are not suitable candidates for marriage due to high-risk pregnancies, the potential of having a child with diabetes, and the role of a woman in the Iranian family (Abdoli, Doosti Irani et al., 2013). In a similar study in the UK, the South Asian community described public perception that views diabetes as a sign of physical inadequacy to traditional marriage (Singh et al., 2012). An unmarried Arab male in Australia described diabetes as a "disaster," which makes both males and females with diabetes less desirable candidates for marriage due to a perceived connection between diabetes, erectile dysfunction and the passing of diabetes to their children (Abouzeid, Philpot, Janus, Coates, & Dunbar, 2013). Marriage in India is a source of stress for individuals with diabetes and their families. Some Indian adolescents, especially girls with diabetes, experienced social stigmatization and were not wanted for marriage (Hapunda et al., 2015). This is also true for Indian mothers, who consider diabetes as a barrier for their daughters getting married (Verloo et al., 2016). Individuals with diabetes in London are thought to be unable to conceive or to have a normal pregnancy (Winkley et al., 2015). A 2014 Australian study noted that participants experienced the termination (or threat of termination) of a romantic relationship due to diabetes. Fear of the negative impact of diabetes on their relationship was one of the main reasons highlighted by participants. They were worried about disclosing their diabetes to their partners or potential partners. It also was mentioned as a marriage barrier by some participants (Browne et al., 2014). | Self-inflicting Nine studies have noted that the community's perception about the cause and nature of diabetes can be stigmatizing. In several countries such as Iran (Abdoli, Abazari et al., 2013), Australia (Browne, Ventura, Mosely, & Speight, 2013), Taiwan (Lin, Anderson, Hagerty, & Lee, 2008), Ireland (Balfe et al., 2013) and the United States (Vishwanath, 2014), individuals with diabetes are considered to be self-inflicting the disease. There are two common beliefs about diabetes that can be stigmatizing for people with diabetes: (i) diabetes is an illness of over-indulgence with food (Lin et al., 2008) and (ii) diabetes is a result of an individual's own actions Vishwanath, 2014). For example, the findings of Vishwanath's (2014) U.S. study suggested that most participants described diabetes as a disease that affects children who are lazy, unhealthy, fat, obese, lacking exercise and having an eating disorder (p. 516). Overweight people, particularly in T2DM, are stigmatized for getting diabetes because of their lack of self-control. In some cultures such as Hispanic or Latino, diabetes is seen as a punishment from God. 
Weiler (2007) wrote that the punishment ideology imposed a self-associated stigmatization, which is similar to the Abdoli, Doosti Irani et al., study (2013) in Iran and the Browne et al. (2014) study in Australia. Insulin injections can be misunderstood as drug abuse in Iran (Abdoli, Doosti Irani et al., 2013), Taiwan (Chen, Tseng, Huang, & Chuang, 2012;Lin et al., 2008) and Australia (Browne et al., 2014). Tajukestani's children expressed being stigmatized as drug abusers while trying to inject insulin in public places (Haugvik et al., 2016). Australian participants also described being worried about, or having experienced, being mistaken for a drug abuser while injecting insulin. This was particularly the case for those who injected insulin with a vial and a syringe before the advancement of insulin pens and pumps (Browne et al., 2014). Participants with T2DM in Kuala Lumpur also expressed their feelings of stigmatization as a barrier for insulin injection, which can be misunderstood or stigmatized as drug abuse (Abu Hassan et al., 2013). | Requiring dietary modification Reviewed articles referred to the stigmatization of people with diabetes due to life modifications, especially dietary modifications and restrictions. The required treatment regimen for diabetes management includes actions that often are noticeable by others. This includes eating at specified times, which may be associated with some degree of stigma (Chatterjee & Biswas, 2013;Fukunaga, Uehara, & Tom, 2011). | Having a contagious disease A few of the reviewed articles indicated people without diabetes may stigmatize those living with diabetes as being contagious. For example, the Lin et al. (2008) study on Taiwanese individuals with T2DM found that some people believe diabetes is an infectious disease, and they stigmatize people with diabetes as contagious. Hapunda et al. (2015) noted that in Zambia there is a fear of getting diabetes in a social setting. Therefore, some children who participated in a study mentioned that the community perceived them as "infectious" and some of their peers would deny playing with them because they may catch diabetes (Hapunda et al., 2015). | Limitations The limitation of this manuscript is having a retrospective review of previously published manuscripts chosen at the authors' discretion and selected electronic databases. | DISCUSSION Individuals with diabetes are stigmatized as sick and disabled (Browne et al., 2014;Weiler, 2007), which can be the underlying foundation of most of the stigma surrounding diabetes (Shestak, 2001;Weiler & Crist, 2009). Being stigmatized as sick and disabled is itself a stigma in some cultures (Kesavadev et al., 2014). This feature of stigma has the ability to make people dependent on others throughout their life and impose a financial burden on family and society (Abdoli, 2011). It also leads to a greater burden for people with diabetes in certain population sub-groups such as young adults and women, particularly in Asian countries (Abdoli, Abazari et al., 2013;Doosti Irani, 2014). Some Asian countries view diabetes as a sign of physical inadequacy rooted in being sick and disabled. This perspective leads to a disproportionate burden of diabetes on young adults, particularly women, and affects their marriage potential (Ahmadi, MaslakPak, Anoosheh, Hajizadeh, & Rajab, 2009;Maslakpak, Anoosheh, Fazlollah, & Ebrahim, 2010;Patel, Eborall, Khunti, Davies, & Stone, 2011). 
People in Asian countries assume that those living with diabetes cannot perform duties as a mother or as a marital partner as they are considered "sick and disabled" (Abdoli, Doosti Irani et al., 2013). Individuals with or without diabetes think that women with diabetes are infertile or at a high risk for pregnancy (Abdoli, 2011). Women are thought to transmit diabetes to their child, who will inevitably suffer foetal death or be born with other congenital disorders. Men are considered to be sexually dysfunctional due to diabetic impotency. The financial burden of diabetes medication and associated complications is of great concern to men and women affected with diabetes (Browne et al., 2014). Even in the 21st century, communities are not aware of diabetes aetiology and some consider diabetes a punishment or a result of one's lack of self-control Caban, & Walker, 2006;Hjelm, Bard, Nyberg, & Apelqvist, 2003;Lin et al., 2008;Vishwanath, 2014). Individuals also do not feel safe to inject insulin in public places because they might be misunderstood as a drug abuser or drunk while they are experiencing symptoms of hypoglycaemia (Abdoli, Doosti Irani et al., 2013;Browne et al., 2014;Ho & James, 2006;Lin et al., 2008). | CONCLUSION This review of articles indicates the issue of stigmatization for people with diabetes has been an ongoing significant psychosocial issue associated with diabetes globally. Although an increasing number of declarations and laws are aimed at health equality of people with diabetes, discrimination and stigmatization is still broadly diffused (Benedetti, 2014
A Non-inductive Coil Design Used to Provide High-Frequency and Large Currents Currently, cutting-edge, high-frequency current sources are limited by switching devices and wire materials, and the output current cannot take into account the demands of a high peak and low rise time at the same time. Based on the output demand of a current source, a non-inductive coil for providing high-frequency, high current sources with low rise times is designed. The coil is appropriately designed according to the principle of the ampere-turn method, where several turns of wire are utilized to linearly synthesize the current to obtain high-frequency currents with amplitudes up to 30 kA. However, the inductance formed after winding the coil could possess a hindering effect on the high-frequency current. In the present investigation, based on the law of energy conservation and utilizing the principle of transformer coupling, the inductor’s hindering effect on high-frequency currents is appropriately eliminated by consuming the stored energy of the inductor innovatively. Theoretical calculations and practical tests show that the inductance of a two-layer 28-turn coil is 42 times smaller than that of a two-layer, 28-turn perfect circular spiral PCB coil. The measured inductance is only 6.69 μH, the output current amplitude is calculated to be up to 33 kA with a rise time of 20 ns, and the output waveform corresponding to a 1 MHz square wave is not remarkably distorted. This effective design idea could be very helpful in solving the problem of high peak values and low rise times in high-frequency, high-current source output design. Introduction High-power pulsed power technology originated in the 1930s from research on generating rays using capacitors, and the first publication of microsecond pulsed X-rays from a high-voltage pulsed power discharge point was made by Kingdon and Tanis in 1930 [1]. Among the pulsed current sources, the adjustable high-frequency, high-current pulse source, as a new type of controllable pulsed current source, plays a central role in the field of parameter calibration of high-precision, high-volume range AC measurement equipment [2].Additionally, the calibration current source is currently in the stage of continuous improvement [3].In general, there are two main research directions in the present mainstream electrical energy design research. The first is how to generate high-frequency, high-current sources with amplitudes exceeding the kilo Ampere class.Kovalchuk et al. [4] described the design and testing of a pulsed source with an adjustable setting which was capable of delivering currents of 400 to 600 kA in a 200 to 800 ns rise time.The complete system reaches the order of tons and has dimensions of 1 × 0.6 × 0.81 m 3 (with a pumping system).However, the generator equipment was too bulky and complex to be put into practical use.Bazanov et al. [5] created a destructive current source based on the bursting mode with a current amplitude of up to 10 MA and a rise time of 120 ns.This current source is disposable and the current waveform is not controllable.In addition, magnetic pulse compression (MPC) systems have been designed to achieve a rise time of 307 ns and a peak power of 30.2 MW [6], but the waveforms are reproducible only up to 7 kHz.The second one is how to control the accuracy of the output waveform and generate the waveform without distortion.Carvalhaes-Dias et al. 
[7] designed a proportional-to-absolute-temperature (PTAT) current source with a maximum nonlinearity of 0.44 ppm/°C. Wei et al. [8] designed a current source with a ripple ratio of 0.1% and an amplitude of 100 A based on the requirements of Hall-element detection; the measured waveforms were almost sinusoidal, and the reproduction of other waveform shapes was not discussed further.

It can be seen that even the leading high-frequency pulse current source designs cannot simultaneously deliver an output current of kiloamperes and a rise time of tens of nanoseconds; this calls for new design ideas to meet the special requirement of a large amplitude with a short rise time.

In this paper, based on the law of energy conservation and the principle of transformer coupling, an approach is proposed to eliminate the hindering effect of inductance on high-frequency currents by depleting the energy stored in the inductor. To this end, a non-inductive coil for outputting high-frequency, high-current pulses with low rise times is designed according to the output requirements of a current source. Following the principle of the ampere-turn method, the coil uses several turns of wire connected in parallel to linearly synthesize multiple small currents into a high-frequency current with an amplitude of up to 30 kA. However, the parasitic inductance of the coil would hinder this current, which is mainly reflected in a delayed rise time and waveform distortion.

Non-Inductive Coil Design Preparation

In order to meet the output requirements of a high-frequency, high-current, single-cycle pulse source built by a Chinese laboratory, the other electromagnetic induction equipment in the laboratory was tested to determine the extreme parameters. The output limit design requirements for this current source are presented in Table 1.

Table 1. High-frequency current source output limit design requirements.
Amplitude current (I_MAX): 30 kA
Maximum frequency (f_MAX): 1 MHz
Rise time: 25 ns

Since the requirements on output current amplitude and rise time are extremely demanding, conventional current output schemes cannot meet them. Mature commercial switching devices can achieve switching speeds within 25 ns, but their switching current amplitudes do not exceed 100 A, more than a hundred times below the 30 kA target, so the amplitude cannot be increased simply by paralleling switching devices. Here, we propose a new output design approach to meet the design specifications.

Development of a Theoretical Model for Eliminating Inductive Effects

Consider first how the current amplitude target of the current source is reached. According to the magnetic circuit Ohm's law and magnetic circuit Kirchhoff's law [9],

∑_j N_j I_j = ∑_j R_j Φ_j    (1)

where N_j represents the j-th turn of the coil, I_j denotes the current flowing through that turn, R_j is the magnetic resistance on the j-th turn, and Φ_j represents the changing magnetic flux passing through the j-th turn.
This formula shows that the algebraic sum of the ampere-turns around a closed path in a magnetic circuit is equal to the algebraic sum of the products of magnetoresistance and magnetic flux. Therefore, the magnetic field of a single turn carrying a high current can be reproduced by superposing the magnetic paths of multiple turns carrying a small current. Let the total target current be I_0, the current passing through a single wire be I_1, and the number of turns of the winding be N. The corresponding formula reads as follows [9]:

I_0 = N · I_1    (2)

This is the basic principle of the ampere-turn approach. By winding a wire into a coil and treating one side of the coil as a whole, a current pulse several times larger than the current flowing through a single wire is obtained. The equivalent circuit of the high-current pulse source is schematically presented in Figure 1.

In Figure 1, R denotes the resistance of the coil and L represents the equivalent inductance of the coil. This paper focuses on the hindering effect of the coil's equivalent inductance on high-frequency currents. The mathematical formulation of the inductance can be introduced from Maxwell's equations (Equation (3)).
Equation (3) gives the essential reason why the inductor hinders high-frequency current: a high-frequency current causes drastic changes in the electromagnetic field, and part of the energy of the current is stored in the magnetic field and released when the current becomes stable, causing obstruction. The storage and release of inductive energy can therefore be regarded as the essence of the inductor's obstruction of high-frequency current. This has a major negative impact on the operation of the high-frequency circuit and is also the main source of distortion of the high-frequency output waveform, reflected in the lag of the rising edge of the square-wave pulse, as illustrated in Figure 2 [9,10]. Therefore, to achieve the design requirement of a rise time of tens of nanoseconds, the hindering effect of the equivalent inductance of the output coil on the high-frequency current must be eliminated.

In this paper, from the perspective of energy conservation, the coil structure is designed so that the energy associated with changes of the magnetic field is converted and consumed in advance; the energy stored in the inductor is thereby reduced or even removed, weakening or eliminating the inductor's obstruction of the high-frequency current.

The mutual inductance of two coils obeys the following relation [10]:

M = k · sqrt(l_1 · l_2)    (4)

where M denotes the mutual inductance and k the coupling coefficient, whereas l_1 and l_2 represent the self-inductances of the two coils. Assuming that this air-core transformer is ideal, i.e., that the coupling is complete and the two coils are identical, k = 1 and l_1 = l_2.

For the primary side of the connected circuit, the voltage induced by the mutual inductance is opposite to that induced by the self-inductance. At this point, the inductance of the coil at the primary side can be regarded as practically eliminated, and only the dynamic resistance reflected through the transformer coupling needs to be taken into account, thus virtually removing the effect of the coil inductance on the dynamic circuit. In essence, the energy that the inductor would store to oppose changes in the electromagnetic field is transferred to and consumed in the secondary coil through the transformer coupling, eliminating, in the engineering sense, the impedance of the inductor to the high-frequency signal.
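To make the cancellation argument concrete, the short Python sketch below evaluates the effective primary inductance of a transformer whose identical, short-circuited secondary reflects back onto the primary, using the standard circuit-theory result L_eff = L1·(1 − k²). This formula is not quoted in the paper and is used here only as a consistency check; the 64 µH value is the single-layer inductance quoted later, and the set of k values is illustrative.

import math

def mutual_inductance(l1, l2, k):
    # Equation (4): M = k * sqrt(l1 * l2)
    return k * math.sqrt(l1 * l2)

def effective_primary_inductance(l1, k):
    # Effective inductance seen at the primary when an identical, coupled
    # secondary is short-circuited: L_eff = L1 * (1 - k^2)
    # (standard circuit-theory result, used here only as a cross-check).
    return l1 * (1.0 - k ** 2)

L1 = 64e-6  # single-layer coil inductance, H (value quoted later in the paper)
for k in (0.0, 0.9, 0.995, 1.0):
    m_uH = mutual_inductance(L1, L1, k) * 1e6
    leff_uH = effective_primary_inductance(L1, k) * 1e6
    print(f"k = {k:5.3f}  M = {m_uH:6.2f} uH  L_eff = {leff_uH:6.3f} uH")

As k approaches 1, the effective primary inductance goes to zero, which matches the argument that a fully coupled, identical secondary leaves only the resistive impedance at the primary.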
According to the impedance calculation formula of the transformer [10],

Z_1 = n² · Z_2    (5)

where n is the turns ratio of the primary to the secondary coil. Considering that the secondary and primary coils are completely identical here, n is obviously equal to one, and thereby

Z_1 = Z_2    (6)

When Z_2 is unloaded, only the impedance of the secondary coil itself remains, and the resistance R_T on the secondary coil is clearly just the resistance generated by the wire. The corresponding high-frequency circuit schematic is shown in Figure 3.

However, taking the measurement requirements into account, once the coils are fully coupled, the magnetic energy is, in theory, completely recycled; it can then produce no magnetic field changes and no electromagnetic output signal, and the coil loses its purpose as an output coil. Therefore, the coil's own inductance cannot be completely offset, and the part of the primary winding that serves as the test point is not compensated. The final equivalent circuit of the non-inductive coil is illustrated in Figure 4.

Coil Electromagnetic Parameter Design Model

For a coil used to output high-frequency current, there are three core electromagnetic parameters: resistance, inductance, and capacitance. The modeling of these three parameters is discussed in turn below.

A Model for Calculating the Coil Resistance

The resistance of the coil is simply the resistance of the wire and can be obtained directly from the defining formula of resistance:

R = ρ · l_length / (w · h_L)    (7)

where ρ denotes the resistivity of the wire, l_length represents the length of the wire, w is the width of the copper-clad trace, and h_L is the thickness of the copper-clad trace.
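As a quick illustration of Equation (7), the sketch below computes the DC trace resistance for a rectangular copper trace; the resistivity of copper is a textbook value, and the trace geometry is a placeholder rather than the parameters of Table 2.

RHO_CU = 1.72e-8  # resistivity of copper, ohm*m (textbook value)

def trace_resistance(length_m, width_m, thickness_m, rho=RHO_CU):
    # Equation (7): R = rho * l / (w * h) for a rectangular trace.
    return rho * length_m / (width_m * thickness_m)

# Hypothetical trace geometry for illustration only.
length = 3.0        # total wire length, m
width = 0.5e-3      # trace width, m
thickness = 35e-6   # copper foil thickness, m (a common 1 oz foil)
print(f"R = {trace_resistance(length, width, thickness):.3f} ohm")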
A Model for Calculating the Coil Inductance

For the unmodified PCB helical coil, the approximate inductance formula of Equation (8) is given according to the work of Zhao [11]. By establishing electromagnetic field simulation models of spiral coils with various shapes, Zhao [11] used the electromagnetic model and the principle of electromagnetic induction to give several approximate formulas for the mutual inductance between spiral PCB coils; spiral coils with specified shapes were then built, their mutual inductances measured, and the validity of the fitted formula verified. Under certain conditions, the formula reduces to the mutual inductance formula of a spiral tube, confirming its theoretical value and physical meaning. In Equation (8), µ_0 is the vacuum permeability, N is the number of turns, and D_AVG denotes the arithmetic mean of the inner and outer diameters. The factors C_1-C_4 are fitting coefficients determined by the outer shape of the coil, and p is the filling ratio defined in Equation (9).

At this point, the coil is employed as the primary coil of the transformer, the corresponding secondary coil is added, and the secondary coil is shorted with no load. According to the theoretical analysis above, the inductance will then be completely converted into the static impedance of the secondary coil. This, however, assumes that the secondary coil and the primary coil are fully coupled. In fact, the part of the primary coil that forms the measurement window is not coupled with the secondary coil, so only the inductance of this uncoupled part, and the impedance it contributes, needs to be calculated. For ease of calculation, this part can be treated as the inductance of a straight wire.

Zhong [12] derived an approximate formula (Equation (10)) for the inductance of a single wire on a PCB based on the definition of inductance and the principle of electromagnetic induction. The derivation of Equation (10) rests on the fact that the thickness of a PCB trace is much smaller than its width and length, so the trace can be regarded as an ideal conducting plate. Based on electrodynamics, an electromagnetic field model of the conducting plate was established and the stored magnetic field energy calculated; since the essence of inductance is the storage and release of magnetic field energy, the inductance formula follows from the stored energy.
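Equations (8) and (9) themselves are not reproduced in the extracted text. The sketch below therefore uses the standard current-sheet approximation for a planar spiral, whose coefficient set for circular coils matches the values C1 = 1, C2 = 2.46, C3 = 0, C4 = 0.2 quoted later in the Materials and Parameters subsection; the exact form of Zhao's fitted formula and the example geometry are assumptions.

import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def fill_ratio(d_out, d_in):
    # Assumed form of Equation (9): p = (d_out - d_in) / (d_out + d_in)
    return (d_out - d_in) / (d_out + d_in)

def spiral_inductance(n_turns, d_out, d_in, c1=1.0, c2=2.46, c3=0.0, c4=0.2):
    # Current-sheet approximation assumed for Equation (8):
    # L = (mu0 * N^2 * D_avg * C1 / 2) * (ln(C2 / p) + C3 * p + C4 * p^2)
    d_avg = 0.5 * (d_out + d_in)
    p = fill_ratio(d_out, d_in)
    return (MU_0 * n_turns ** 2 * d_avg * c1 / 2.0) * (
        math.log(c2 / p) + c3 * p + c4 * p ** 2)

# Hypothetical circular-coil geometry for illustration only.
for n in (10, 28, 56):
    L = spiral_inductance(n, d_out=0.10, d_in=0.04)
    print(f"N = {n:2d}  L ~ {L * 1e6:7.2f} uH")

The sketch shows the rapid growth of inductance with the number of turns that Figure 6 plots, which is the trade-off the turn-count selection below has to balance.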
The mutual inductance between wires carrying current in the same direction adds to their self-inductances through the mutual inductance coefficient between each pair of wires. However, with so many wires, the individual coefficients are difficult to determine directly. To simplify the theoretical calculation, the maximum possible total inductance of this layer is estimated by setting every mutual inductance coefficient to its maximum value of one. In that case, the total mutual inductance acting on one wire is the sum of the inductances of all the wires, and the total received by all wires follows accordingly; that is, it is taken as the product of the square of the single-wire inductance and the number of wires. Denoting the number of wires by x, the corresponding expression is given in Equation (11).

A Model for Calculating the Coil Capacitance

The capacitance of the coil refers to the parasitic capacitance that appears after the coil is fabricated. Generally speaking, it arises from two main sources: the interlayer capacitance of the PCB and the capacitance between adjacent traces. Here, the parasitic capacitance is essentially that between the layers of the circuit board and their facing conductors.

The primary and secondary coils can be approximated as a parallel-plate capacitor whose facing area is the area of the secondary coil. Since the board carries two such pairs of coils, the interlayer parasitic capacitance multiplied by two can be regarded as the parasitic capacitance of the whole board. Based on the parallel-plate capacitance formula, the coil parasitic capacitance is therefore

C = 2 · ε_0 · ε_r · S / h_B    (12)

where ε_0 is the vacuum permittivity with a value of 8.85 × 10⁻¹² F/m, ε_r denotes the relative permittivity, S is the facing area, and h_B denotes the distance between the plates.

PCB Design

The elimination of inductive effects in the present work relies on an extremely high degree of coupling between the two coils. This requires a highly regular coil shape and precise alignment, which is difficult to achieve with ordinary coils wound from enameled wire. According to the design requirements, printed circuit board coils (PCB coils) are therefore used to guarantee the regularity of the coils.

The PCBs used as coils need the following characteristics, which conventional FR-4 material can provide. The thicker and wider the copper foil, the lower its resistance, but increased thickness and width can lead to poor adhesion between the copper foil and the substrate, causing the foil to peel and the board to delaminate. Likewise, the thinner the substrate, the better the coupling, but also the lower the mechanical strength of the PCB. These parameters must therefore be chosen by weighing the design requirements against practical process constraints.

After consultation with PCB manufacturers, the PCB processing parameters are presented in Table 2.

Design and Evaluation of the Coil Parameters

In the present design, the set of PCB processing parameters is presented in Table 2.
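Returning to the capacitance model above, a minimal sketch of the parallel-plate estimate of Equation (12) follows. The facing area and layer spacing are taken from the Table 3 values quoted later (1.22 × 10⁻⁴ m² and 1.5 mm), while the relative permittivity is a placeholder typical of FR-4; the result therefore does not reproduce the 1.74 pF obtained later from the Table 2 parameters.

EPS_0 = 8.85e-12  # vacuum permittivity, F/m

def coil_parasitic_capacitance(eps_r, facing_area_m2, layer_gap_m):
    # Equation (12): C = 2 * eps0 * eps_r * S / h_B
    # (factor 2 because the board carries two primary/secondary pairs).
    return 2.0 * EPS_0 * eps_r * facing_area_m2 / layer_gap_m

# eps_r is a placeholder value for illustration only.
C = coil_parasitic_capacitance(eps_r=4.4, facing_area_m2=1.22e-4, layer_gap_m=1.5e-3)
print(f"C ~ {C * 1e12:.2f} pF")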
When designing PCBs to deliver high-power outputs, the relevant safety checks must first be performed on the board. The main consideration is whether, at the design limits, the temperature rise of the PCB exceeds the safe temperature of its materials and causes damage.

Because the coil traces designed here carry a high-frequency current, the usual steady-state current safety check is not appropriate. It is only necessary to ensure that the heat generated during the passage of the high-frequency current remains within the safe range, so the short-circuit allowable current formula was used to evaluate the limiting current.

Yang [13], in estimating PCB circuit temperatures, gave the formula for the allowable current of PCB copper foil under a short-circuit condition (Equation (13)). Further calculation yields a maximum limiting current of 880.87 A for the PCB copper foil.

Coil Shape Design

Common PCB coils are round, square, or other regular polygons. According to an investigation by Zhu [14] on planar spiral inductors with polygonal shapes, the inductance increases with the number of sides for the same inner and outer diameters. In particular, the inductance of a circular spiral coil with a diameter of 200 mm is 1.24 times that of a square coil with a side length of 200 mm when all other factors are the same [14].

However, square coils still have major drawbacks. Yang [13], analyzing the interference of PCB routing with high-frequency signals, explicitly noted that right-angle routing on printed circuit boards harms the formation of high-frequency signals: the corner acts approximately as a capacitive load on the transmission line, which lengthens the rise time, and the right-angled tip generates additional electromagnetic interference (EMI). Therefore, right angles in the coil shape should be minimized.

Combining both findings, the coil was designed as a runway-type (stadium-shaped) coil with a square body and rounded corners. This combination of the two conventional shapes avoids the shortcomings of both, reduces inductance and external interference, and minimizes impediments to the generation of high-frequency, high-current signals.

The coil uses a helical runway-type winding connected to the high-frequency circuit for current transmission, called the primary coil. As illustrated in Figure 5, the other two layers are printed with corresponding coils to counteract the inductance of the primary coil, called secondary coils. As can be seen in Figure 5, a portion of the main coil is left uncovered; this portion is the test window where the coil is available for external testing.
Materials and Parameters

After completing the coil design, the coil parameters should be chosen appropriately. For this purpose, the coil is approximated as a circular helical coil according to the reference values given by Zhao [11]: C1, C2, C3, and C4 are set to 1, 2.46, 0, and 0.2, respectively. Taking the number of turns as the independent variable and the inductance as the dependent variable, and substituting into the formula, yields the curve of Figure 6 (the relationship between the number of turns and the inductance).

It can be seen that the inductance rises steeply with the number of turns, so from the point of view of reducing inductance, fewer turns are better. Nevertheless, the fewer the turns, the greater the burden on a single wire when the output reaches the design amplitude current.

In engineering applications, when the value of a function should be neither too large nor too small, the point at which the slope of its tangent is closest to 1 can be chosen as the working point. A tangent slope of 1 means that the dependent and independent variables change at the same rate there, which avoids values that are either too large or too small [9]. In this case, the corresponding point is 28, so 28 turns are selected for the single coil. At this number of turns, however, the current carried by each turn at the design limit is I_Singlecoil−MAX = I_MAX/28 ≈ 1.07 kA, which far exceeds the allowable current of the copper foil and is clearly unsuitable.
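A small sketch of this per-turn current check, using the 880.87 A copper-foil limit computed earlier, is shown below for both the 28-turn coil and the doubled 56-turn variant introduced next; reading the exceeded quantity as the foil limit, rather than as I_MAX itself, is an interpretation of the garbled sentence in the source.

I_TARGET = 30e3      # design output amplitude, A
FOIL_LIMIT = 880.87  # allowable short-circuit current of the copper foil, A

def per_turn_current(total_current, n_turns):
    # Ampere-turn method: each of the N parallel turns carries I_total / N.
    return total_current / n_turns

for n in (28, 56):
    i1 = per_turn_current(I_TARGET, n)
    verdict = "OK" if i1 <= FOIL_LIMIT else "exceeds foil limit"
    print(f"N = {n:2d}: {i1:7.1f} A per turn -> {verdict}")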
Let us improve the coil by connecting two pairs of primary and secondary coils in series on one PCB. For this purpose, a 4-layer PCB is used and the number of turns on a single board is increased to 56. The per-turn current at the design limit then falls to I_MAX/56 ≈ 536 A, below the 880.87 A allowable current, so the PCB coil can be assumed to work safely and normally at the limiting index.

To minimize unwanted leakage and loss of magnetic energy, an additional winding of wire was added around the outside of the coil. A grounding layer was also placed in the coil bore in front of the test window to provide electromagnetic shielding. These measures complete the overall design, as presented in Figure 7.

After completing the design and measuring the relevant dimensions, the parameters to be used in the calculations are obtained (see Table 3); in particular, the distance between the two layers of coils is H = 1.5 mm and the coil facing area is S = 1.22 × 10⁻⁴ m².

Results

After the structural design and theoretical verification were completed, the coil was fabricated as illustrated in Figure 8. The inductance and impedance of the PCB coil were measured directly with an LCR bridge (model VICIOR 4092E) under a sine wave at a frequency of 1 MHz, as shown in Figure 9.

It should be noted that, because parasitic capacitance and other parasitic parameters affect the measurement of passive components, the inductance reading may take negative values; this indicates that the inductance of the device under test is much smaller than its own capacitance, i.e., its capacitive character dominates its inductive character.

As can be seen in Figure 9, the measured inductance value is −6.86 µH: the spiral coil shows capacitive behaviour in the bridge measurement. This proves that the inductance of the coil is much smaller than its own parasitic capacitance, and that the parasitic capacitance does not influence the formation and output of high-frequency currents, in line with the design goal of reducing the inductive obstruction to high-frequency, large currents. Additionally, the measured impedance is approximately 6.37 Ω, which is similar to the theoretical calculation, confirming the validity and reference value of the theoretical calculations.
To verify whether the non-inductive coil meets the design specifications in actual use, a current source test platform was constructed for the output coil; its structure is shown schematically in Figure 10.

The test platform uses an AC voltage regulator to raise the 220 V AC input to 380 V AC, which charges the storage capacitor. A signal generator controls the high-frequency current parameters by switching, through an IGBT, the capacitor voltage released across the coil according to the programmed waveform. The current is then multiplied through the coil to obtain a high-frequency current.
The single-turn current output from the test platform is measured by shunt sampling, and the waveform sampled from the shunt can be regarded as the true waveform output by this current source. At the same time, to check the quality of the high-frequency current output, the output current is captured at the output power coil with a Rogowski coil, converted to a voltage signal through an integrator, and displayed on an oscilloscope. The actual construction of the platform is illustrated in Figure 11.

After construction was completed, a 1 MHz single-cycle square wave was generated by the signal generator, the pulse was output through the test platform and the test coil, and the pulse waveform was sampled through the shunt after 1000-fold attenuation. The test results are illustrated in Figure 12.

The blue curve of channel 1 of the oscilloscope in Figure 12 shows the current waveform of the voltage signal measured by the shunt, and the red curve of channel 2 shows the voltage signal measured by the Rogowski coil at the output.

The low value of the voltage waveform before the rising edge represents the zero-input state of the system when no current flows. When the current pulse appears, the waveform exhibits a rising edge; the shorter the rise time, the smaller the parasitic inductance of the output coil. After that, the waveform reaches a plateau, and the difference between the high and low values represents the amplitude of the current pulse flowing through the sampling resistor.

From Figure 12, it can be seen that for the 1 MHz square wave the rising edge is about 20 ns. Comparing the signal sampled at the current source with the signal measured by the Rogowski coil at the output, the two have the same waveform trend, the delay of the rising edge is not obvious, and the waveform plateau does not exhibit a noticeable slope.

The shape of the waveform and the rising edge of the test results demonstrate that the inductive reactance of the non-inductive coil is low, its obstruction of the high-frequency signal is not obvious, the fidelity of the output high-frequency, high-current pulse waveform is good, and no obvious distortion or loss of waveform information is detected.

Let us calculate the current output amplitude for Figure 12. The current amplitude for this test rig can be formulated as in Equation (16),
where U_amplitude represents the amplitude of the high-frequency voltage reading, which according to Figure 12 is about 6.02 mV; R_Sample denotes the value of the sampling resistance, namely 1.091 mΩ; 56 is the number of turns of the coil; and 1000 is the attenuation of the sampling channel relative to a single-turn coil. According to the measurement results in Figure 12, the voltage amplitude collected by the sampling resistor is about 6.16 mV, and the current amplitude calculated according to Equation (16) is about 33 kA. The main parameters of the waveforms measured by the sampling resistor and the Rogowski coil in Figure 12 are summarized in Table 4.

Since the high-precision sampling resistor is connected to the output port of the pulse current source and the current enters the PCB coil after passing through it, the measured amplitude can be considered the true value of the output current amplitude of the current source. The current waveform measured by the Rogowski coil is regarded as the measured value of the coil output current, and the absolute value of the difference between the two is taken as the uncertainty of the coil. We adjusted the current source so that the output ranged between 29 kA and 33 kA, repeated the experiment 400 times, grouped the results into bins of 0.4 kA, averaged the measured uncertainty into 10 characteristic points, and plotted the result in Figure 13.
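The binning procedure just described can be sketched as follows; the arrays are synthetic stand-ins for the 400 repeated shunt and Rogowski readings, so only the grouping logic, not the numbers, reflects the experiment.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for 400 repeated measurements (A).
true_current = rng.uniform(29e3, 33e3, size=400)           # shunt (reference) readings
rogowski = true_current + rng.normal(0.0, 30.0, size=400)   # Rogowski readings with noise

uncertainty = np.abs(rogowski - true_current)  # per-shot |difference|, A

# Group into 10 characteristic points: 0.4 kA bins spanning 29-33 kA.
edges = np.arange(29e3, 33e3 + 1, 400.0)
bin_idx = np.digitize(true_current, edges) - 1
for b in range(10):
    sel = bin_idx == b
    if sel.any():
        print(f"{edges[b] / 1e3:4.1f}-{edges[b + 1] / 1e3:4.1f} kA: "
              f"mean |diff| = {uncertainty[sel].mean():6.2f} A, "
              f"std = {rogowski[sel].std(ddof=1):6.2f} A")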
The plotted results in Figure 13 reveal that the maximum output uncertainty of the coil is 16.72 A and the maximum standard deviation is 34.12 A. For the 30 kA design range, the output error at the various amplitudes is less than 0.06%, and the output stability error is less than 0.12%. The coil is therefore robust and repeatable across the output range.

With reference to Table 1, the output coil meets the specifications: a rising edge within 25 ns, an output current amplitude of 30 kA, and a standard 1 MHz square output waveform.

Comparison of Theoretical Calculation Effects

Based on theoretical calculation, the difference in inductance between the coil without the new structural design and the coil with it is discussed, with all other parameters kept the same. Suppose that the transformer mutual-coupling design is not adopted and the main coil is connected directly to the circuit. First, the inductance of a single layer of the main coil is calculated: referring to Equation (8) and Table 3, the inductance of a single-layer coil (L_layer) is evaluated to be about 64 µH.

The two layers of the main coil on the same PCB will inevitably produce mutual inductance between the coils. Since the current flows in the same direction in the two coils, the voltage induced by the mutual inductance adds to that induced by the self-inductance, and together they hinder the high-frequency circuit, so the mutual inductance must be included in the calculation.

According to Equation (4), the two layers of the main coil are completely identical, so their self-inductances are the same and the formula simplifies to L_M = kL, where k is the coupling coefficient.

For the inductance between PCB coils, Zhao [11] provided an engineering estimation formula for the coupling coefficient k of the mutual inductance between multiple layers of PCB spiral coils with fewer than 20 turns. According to the literature, this formula retains reference value as an approximation when the outer coil diameter is much greater than the distance between the two coil layers, even when the number of turns exceeds 20 [4,7].
The engineering estimation formula for k given by Zhao [11] depends on the number of coil turns N and the distance X between the two layers. According to the design, the distance between the two main coils is 1.54 mm, and the calculated value of k is approximately 0.9950. The total inductance of the two-layer coil then evaluates to 191.68 µH.

For the uncoupled part, as in Equation (10), the test window can be viewed as the inductance generated by a straight wire; substituting into the calculation gives a total inductance of 1528.8 nH. The inductance of one board coil, constituted of one board with two layers, is then calculated from Equation (15) as 4.58 µH.

Comparing this value with 191.68 µH, the inductance is reduced by a factor of about 42, indicating that this design can substantially reduce the coil inductance.

From Equation (7), the resistance of a single-layer coil is approximately 2.13 Ω. Because the two coils are similar, the primary coil itself also has a resistance of 2.13 Ω. Unlike an ideal secondary coil, the real secondary coil does not include the resistance of the test window but carries the additional resistance of the wires connecting the coil circuitry; comparing the trace lengths of layer 1 and layer 2, the test gap makes layer 1 shorter than layer 2 by 0.77 m. Calculation shows that the resistance of this part is only 0.21 Ω, resulting in an AC impedance of 4.05 Ω, and the impedance of the double-layer circuit board is only 8.10 Ω.

The parasitic capacitance and its impedance are also evaluated with a simple calculation. Substituting the values from Table 2 into Equation (12) gives a parasitic capacitance of 1.74 pF. The impedance contribution of this capacitance is negligible compared with the coil resistance and inductance, so the interlayer path can be approximated as insulating and the effect of the parasitic capacitance can reasonably be ignored.

According to the measurements (Figure 11), the theoretically calculated inductance of 4.58 µH is of the same order of magnitude as the measured 6.69 µH, and the theoretically calculated impedance of 8.10 Ω is of the same order of magnitude as the measured 6.96 Ω. The theoretical calculations are therefore close to the actual measurement results and have reference significance.

Conclusions

This paper presents the design of a non-inductive coil for high-frequency current output. The output current scheme based on the ampere-turn method is designed, and the mechanism by which coil inductance arises is discussed. The structures of the main and secondary coils are designed on the basis of the principle of inductance and the law of conservation of energy so as to eliminate the effect of the inductance.
After the theoretical model was established, the shape and dimensional parameters of the coil were determined according to the design requirements, and the key electromagnetic parameters of the coil were calculated. The theoretical calculation shows that the structure of the non-inductive coil is reasonable: its inductance is 42 times smaller than that of a coil without the inductance-cancelling design, reduced from 191.68 µH to 4.58 µH. In the physical verification, the actual inductance of the coil measured with the LCR bridge is 6.69 µH, and the coil can output a 1 MHz current with an amplitude of 33 kA. Connected to a high-frequency current source, the coil outputs a 30 kA current pulse with a rise time of 20 ns. Moreover, since the influence of inductance on the high-frequency current is eliminated, the generated high-frequency current signal exhibits good smoothness and fidelity, which is convenient for equipment diagnosis.

Future Works

The theoretically calculated electromagnetic parameters of the output coil are in good agreement with the physical measurements. However, the design is limited by the fact that the instantaneous current carried by the PCB copper foil can hardly exceed 1 kA, so the output amplitude is much lower than that of leading current sources. Moreover, a sufficient electromagnetic explanation of the coil's parasitic parameters and of its capacitive behaviour at high frequencies is still lacking; this part of the work requires further investigation in the future.

Figure 1. High-frequency current equivalent circuit of the coil.
Figure 2. Graphical representation of the voltage, current, and energy of an inductor as a function of time [10].
Figure 3. Equivalent diagram of a high-frequency circuit for canceling inductance.
Figure 4. High-frequency circuit equivalent diagram of a non-inductive coil.
Figure 5. Design of the secondary coil covering the main coil.
Figure 6. Plot of the inductance in terms of the number of turns.
Figure 7. Design of each layer of the 4-layer PCB coil.
Figure 8. Physical image of the non-inductive coil.
Figure 9. Inductance and impedance test setup of the LCR bridge.
Figure 10. Structure of the test platform building.
Figure 11. Actual construction of the test platform.
Table 2. The PCB processing parameters.
Table 3. Main parameters pertinent to the coil design.
11,576.6
2024-03-22T00:00:00.000
[ "Engineering", "Physics" ]
A Novel Knowledge Base Question Answering Method Based on Graph Convolutional Network and Optimized Search Space

Knowledge base question answering (KBQA) aims to answer natural language questions from information in the knowledge base. Although many methods perform well when dealing with simple questions, two challenges remain for complex questions: a huge search space and information missing from the structure of the query graphs. To solve these problems, we propose a novel KBQA method based on a graph convolutional network and an optimized search space. When generating the query graph, we rank the query graphs by both their semantic and structural similarities with the question and keep only the top k for the next step. In this process, we extract the structural information of the query graphs with a graph convolutional network while extracting semantic information with a pre-trained model, which enhances the method's ability to understand complex questions. We also introduce a constraint function to optimize the search space, and we use the beam search algorithm to reduce the search space further. Experiments on the WebQuestionsSP dataset demonstrate that our method outperforms several baseline methods, showing that the structural information of the query graph has a significant impact on the KBQA task.

Introduction

A knowledge graph is a heterogeneous multi-digraph, which means it is directed and multiple edges can exist between two nodes. An agent generates knowledge by relating elements of the graph to real-world objects and actions. A knowledge graph (KG), also known as a knowledge base (KB), is a structured representation of facts that describes a collection of interlinked entities, relationships, and semantic descriptions of entities [1]. Knowledge bases store a large amount of factual knowledge about the real world, and many large KBs, such as DBPedia [2], Freebase, YAGO [3], and NELL [4], have been built to serve downstream tasks. Knowledge base question answering (KBQA), which aims to answer natural language questions over knowledge bases, has received a lot of attention as an important research direction [5-8]. Figure 1 shows the process of finding the answer to a question using the knowledge in a KB.

Semantic parsing-based methods (SP-based methods) are one of the mainstream approaches to KBQA [9,10]. SP-based methods first convert natural language questions into symbolic logical forms; the answers are then obtained by executing these forms against the knowledge base [11]. Such methods make the reasoning process visible, so the results are highly interpretable, but they rely heavily on the design of logical forms and parsing algorithms.

Some works combine graph structures with SP-based methods [12,13]. These methods transform question answering into a query graph generation process and show powerful expressiveness on the complex KBQA task. However, such approaches still face two problems. (1) The number of query graphs grows exponentially with the size of the knowledge base and the complexity of the questions [14].
(2) Most works only consider the semantic information of the query graphs and ignore their natural graph structure features, although the latter information is also useful for selecting the correct query graphs [15]. Therefore, how to reduce the number of candidate query graphs and how to precisely select the correct query graphs remain the key challenges of current KBQA work. In this paper, we focus on addressing these two challenges. For challenge 1, we observe that the correct answer to a complex question usually cannot be found in a single pass over the large search space; we therefore use staged queries to decompose a complex question into multiple simple questions. In addition, a complex question has more than one constraint, which can be used to further reduce the search overhead. We note that some approaches use graph structure information to improve results in other Natural Language Processing (NLP) tasks but not in KBQA [16,17]; in fact, the structure of the query graph is also useful for KBQA. Therefore, for challenge 2, we extract the structural information of the query graphs to enhance the ability of our method to select the correct answer.

Based on the motivation above, we propose a novel KBQA method based on a graph convolutional network and an optimized search space. We transform the process of answering complex questions into a hierarchical process of generating query graphs. We extract a constraint function from the complex question and use it to reduce the number of candidate query graphs. After that, we design a novel Ranker that scores the candidate query graphs using two components: semantic similarity matching and graph structural similarity. Finally, the beam search algorithm selects the top k highest-rated query graphs from the candidates. Owing to the graph structure similarity module, our method can select query graphs more accurately. Our main contributions are as follows:

1. To reduce the huge search space of KBQA, we use a constraint function as well as the beam search algorithm to limit the number of candidate query graphs and reduce the computational overhead.
2. To improve the selection of correct query graphs, we add structural information to the semantic information of the query graphs and score the query graphs from multiple perspectives, which enhances the model's ability to understand complex questions.
3. Experimental results on the publicly available KBQA dataset WebQuestionsSP show that our method achieves good results compared to the baseline methods.

Related Work

2.1. Semantic Parsing-Based Methods for KBQA

Semantic parsing-based methods are the most dominant class of KBQA methods; they aim to parse natural language utterances into logical forms [18,19]. Specifically, this category of methods first encodes the question through semantic and syntactic analysis. The encoded questions are then converted into logical-form statements (e.g., SPARQL Protocol and RDF Query Language (SPARQL) or Structured Query Language (SQL)) by a logical parsing module. Finally, the obtained logical-form statements are executed on the knowledge base to retrieve the answers [20,21].

The earlier methods [22,23] handle simple questions well. However, on subsequent large-scale knowledge bases, these traditional methods are no longer adequate for complex questions with complex semantics and syntax involving multiple entities.
Query Graph-Based Methods for KBQA

The concept of the query graph was first proposed by Yih et al., 2015 [12] as a new idea to simplify traditional semantic parsing-based methods [13,14]. Query graph-based methods introduce the semantic information formed by entities and relations in the knowledge base during the parsing of a question. They transform the semantic understanding of a question into a query graph generation process, which exposes the semantic matching process more intuitively and thus has very good interpretability.

However, the query graph generation process usually relies on predefined manual rules, which do not scale to the large number of complex questions over a large-scale knowledge base. To alleviate this, Ding et al., 2019 [24] used the substructures of frequently occurring queries to assist query graph generation; Abujabal et al., 2017 [25] automatically generated templates from question-answer pairs to reduce manual work; and Hu et al., 2018 [26] applied aggregation operations and coreference resolution techniques to accommodate complex questions.

In addition, earlier methods only consider the degree of predicate matching between the natural language question and the query graph, using the core query path of the query graph to measure its similarity to the question [12,27]. These methods omit much useful information and filter the query graphs less accurately. Building on this, Lan et al., 2020 [28] made fuller use of the nodes, relations, and constraints arising during query graph generation: they transformed the query graph into a serialized form containing nodes, relations, and constraints before measuring semantic similarity, which strengthens the matching of the correct query graph. However, the serialization process splits nodes that are adjacent in the query graph, distorting part of the semantic information and destroying the graph structure information that the query graph naturally carries.

Overview of the Method

Task description: A KB stores knowledge as triples K = {(h, r, t)}, where r ∈ R (the set of relations) and h, t ∈ E (the set of entities). For a given natural language question q, the KBQA task is to find the answer a, where a ∈ E.

Method overview: We propose a novel KBQA method based on a graph convolutional network and an optimized search space. We formalize the KBQA task as maximizing the probability distribution p(a|K, q). Instead of reasoning directly over K, we retrieve a query graph g from K and infer a on g. Since g is unknown, we treat it as a latent variable and rewrite p(a|K, q) as

p(a|K, q) = Σ_g p(a|g, q) · p(g|K, q)    (1)

To obtain the query graph g, our method starts from the topic entity in question q and generates the query graph hierarchically using the extend and constrain operations described in Section 3.2.

We assume that the correct query graph has a high degree of similarity to the question q, which we use to select the correct path from the generated candidate query graphs. To measure this similarity, we design a Ranker (described in Section 3.3) that selects candidate query graphs based on semantic matching and structural similarity of the graphs.
Specifically, we use the pre-trained language model RoBERTa to measure the semantic similarity between the question q and the candidate query graphs.At the same time, we use a graph convolutional network to encode the semantic and structural information of the candidate query graphs together, after which we can measure the similarity of these candidate query graphs.Finally, we combine a constrain function and a beam search algorithm to select the query graphs with high similarity for the next step.The beam search algorithm improves the greedy search algorithm by selecting beam − size candidates from the set of candidates generated by each search as the starting point for subsequent searches.Therefore, we can select the beam − size query graphs with high similarity scores from all candidate query graphs, which largely reduces the number of query graphs to optimize the search space. We repeat the above generation-ranking operation until we find the correct answer or reach the maximum hop count limit.An example in Figure 2 shows the process of our method to find the correct answer to a question.The orange circles represent the constraint function to reduce the search space.Ranker is used to select the path with a higher score after ranking the candidate paths, such as the path made up of the orange arrow and the lambda variable X. Query Graph Generation This module uses two actions: extend and constrain to generate query graphs. The extend action extends the core relational path by adding relations (selected by Ranker) to the query graph.Specifically, we connect the relation r chosen by Ranker to the lambda variable X (or the topic entity e t ).After the connection, the original lambda variable X becomes an intermediate variable y (the topic entity e t remains unchanged), while the other end of r becomes the new lambda variable X. Referring to Luo et al., 2018 [29], we generate a constraint function by matching the keywords (e.g., first, last, biggest, etc.) in the question.The constrain action attaches the detected constraint function to the lambda variable X or an intermediate variable connected to X.In the example in Figure 2, when our method detects the keyword 'first', it generates a constraint function argmin, which limits the search to the nodes around which it is connected.Such a constraint helps the model limit the search to a certain range, which reduces the search space. This module starts with the topic entity (topic entity linking results are from the paper [28]) and uses the extend action or constrain action to generate the query graph step by step.Some previous methods [12,27] place the process of adding constraints after the core path is fully generated.However, such methods are too simple and have a limited reduction in the number of candidate query graphs.Therefore, our method performs the constrain action before the extend action, which reduces the number of candidate query graphs. Query Graph Ranker For the method that uses enumeration to search [12], the number of candidate query graphs approaches k n , where k is the core path length and n is the average number of single-hop candidate paths.For complex questions, k n varies from thousands to millions.Such an order of magnitude cannot be handled with current methods. 
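To make the extend and constrain actions more tangible, here is a small, purely illustrative data-structure sketch; the field names and the triple representation are our assumptions, not the authors' code. `extend` attaches a relation to the lambda variable and moves the lambda variable to its far end (the old lambda variable becomes an intermediate variable), while `constrain` attaches a detected constraint function such as argmin to the current lambda variable.

```python
# Illustrative query-graph object with extend / constrain actions.
# The triple representation and field names are assumptions.

class QueryGraph:
    def __init__(self, topic_entity):
        self.triples = []                  # (subject, relation, object)
        self.lambda_var = topic_entity     # initially the topic entity
        self.constraints = []              # e.g. [("argmin", "?x1", "from_year")]
        self._var_count = 0

    def extend(self, relation):
        """Attach `relation` to the lambda variable; its far end becomes
        the new lambda variable X, the old one an intermediate variable."""
        self._var_count += 1
        new_var = f"?x{self._var_count}"
        self.triples.append((self.lambda_var, relation, new_var))
        self.lambda_var = new_var

    def constrain(self, constraint, argument=None):
        """Attach a constraint function (e.g. argmin for the keyword
        'first') to the current lambda variable."""
        self.constraints.append((constraint, self.lambda_var, argument))

# Hypothetical example: "Who first coached team T?"
g = QueryGraph("team_T")
g.extend("coached_by")
g.constrain("argmin", "from_year")
```

Even with only these two primitives, the number of reachable graphs still grows roughly like the k^n count quoted above, which is what the pruning strategy described next addresses.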
Therefore, to prevent the number of candidate query graphs from growing exponentially with the number of query steps, we use a beam search algorithm to limit the number of query graphs obtained at each step. Further, in order to select query graphs associated with the correct answers, we design a scoring function that ranks the query graphs from both the semantic and the graph-structure perspective, together with some simple features. Figure 3 shows the structure of the Ranker.

Semantic Similarity Measure This module measures the semantic similarity between the natural language question q and the query graph g. It starts from the topic entity in the question and transforms the query graph into a sequence form g' containing entities and relations, following the query graph generation process. Specifically, we compose the question q and the query graph sequence g' into a sentence pair as the input to RoBERTa (robustly optimized BERT approach) [30]. Their semantic similarity score(q, g') is then obtained as H_qg = RoBERTa_CLS(q, g') and score(q, g') = LINEAR(H_qg), where RoBERTa_CLS denotes the (CLS) representation of the concatenated input (Figure 4), and LINEAR is a projection layer reducing that representation to a scalar similarity score.

Graph Structure Similarity Measure The semantic similarity module lacks the structural information of the query graph. Furthermore, the sequence transformation splits nodes that are adjacent in the query graph. Therefore, in addition to the semantic information discussed in Section 3.3.1, this module also parses the query graph from the viewpoint of its structure. First, the module vectorizes a node and its type as N_e (using Global Vectors for Word Representation (GloVe)). After that, N_e is fed into a Bi-directional Long Short-Term Memory (Bi-LSTM) network, and the hidden state h_e of the last time step of the Bi-LSTM is taken as the final encoding of the node. At this point an initial description of each node in the query graph is obtained, but each node still contains only its own information and lacks any description of its neighbors. Therefore, this module uses a Graph Convolutional Network (GCN) to represent the query graph g. The GCN hierarchically aggregates each node with its neighbor representations; after several aggregations, the nodes carry more information about their neighborhoods. Then h_g, the final representation of the graph g, is obtained by averaging over all node representations. In these update equations, N(i) is the set of neighbor nodes of node h_i; h_j^(l) is the representation of node h_j in the l-th iteration; W^(l) is the parameter matrix of each layer's linear transformation; b^(l) is the bias of each aggregation; and V denotes the set of nodes in graph g. Finally, the graph structure similarity score(q, g) is measured by the cosine similarity: score(q, g) = cos(h_q, h_g) (8), where h_q is the vector representation of the question q (obtained from RoBERTa).
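The node-encoding and aggregation equations referred to above are not fully legible in this extraction. A standard GCN-style form consistent with the surrounding description is given below; this is our reconstruction, and the authors' exact normalization (e.g., degree scaling or the treatment of self-loops) may differ.

```latex
h_e = \operatorname{Bi\text{-}LSTM}(N_e), \qquad
h_i^{(l+1)} = \sigma\!\Big(\sum_{j \in N(i)\cup\{i\}} W^{(l)} h_j^{(l)} + b^{(l)}\Big), \qquad
h_g = \frac{1}{|V|}\sum_{i \in V} h_i^{(L)},
```

where L is the number of GCN layers and the graph-level readout h_g is the quantity compared against h_q through the cosine similarity in Eq. (8).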
Candidate Query Graph Selection We design a scoring function that uses the previously obtained semantic similarity and structure similarity as well as some simple features as evaluation criteria to rank the candidate query graphs, the formulas are as follows: where F answer is the number of candidate answers; F topic is the topic entity score; F cons is the number of constraints; and W and b parameters are to be learned during model training.Finally, we use the beam search algorithm to select the top K candidate query graphs for the next iteration. Datasets We conduct experiments using the WebQuestionsSP (WebQSP [31]) dataset to evaluate the effectiveness of our method.WebQuestionsSP is a widely used publicly available dataset containing 4737 questions based on Freebase KB.Following Sun et al., 2018 [32], we partitioned the dataset into the training/validation/testing sets with the number of 2848/250/1639 questions. Methods for Comparison We have selected several methods in related fields within the last few years as baseline methods.First, we compare the method proposed by Lan et al., 2019 [33], which considers the complexity of multi-hop relational paths but does not use set searches or constraints to reduce the search space.After that, we compare the method of Chen et al., 2019 [34], who transforms the extraction of multi-hop relationships into multiple single-pick extractions, thus reducing the search space.We also compare the method that uses additional information: Han et al., 2020 [35] take textual information as hyper-edges and update entity states using GCN.Next, we compare the method of Yan et al., 2021 [36] that uses auxiliary tasks to enhance the pre-trained model.Then, we compare the method of Qin et al., 2021 [37], who use the relational graph to reduce the search space of the query graph.Finally, we compared some of the latest methods [7,8,14,38].Among them, Zhang et al., 2022 [7] composed subgraphs from multiple entities.Chen et al., 2022 [14] used abstract query graphs to enhance query graph accuracy.Ye et al., 2022 [8] and Hu et al., 2022 [38] used generative methods to find answers. Results The results of our method compared with the baseline methods on WebQuestionsSP are shown in Table 1. The method of Qin et al., 2021 [37] reduces the number of candidate query graphs but does not extract the graph structure information of query graphs.Although Han et al., 2020 [35] use GCN to extract graph structure information, they ignore the matching of semantic information.Yan et al. 2021 [36] reformulate the retrieval-based KBQA task to make it a question-context matching form and propose three auxiliary tasks for relation learning, namely relation extraction, relation matching, and relation reasoning, which gives the best results (Hit@1-score) among all baseline methods.Due to the clear supervised signal, these supervised models show excellent performance.In particular, the method of Ye et al., 2022 [8] achieved a surprising F1-score of 76.6. 
In contrast, our method not only extracts semantic information by using a pre-trained model but also uses GCN to extract graph structure information.Furthermore, we also combine the beam search algorithm and constraint function to enhance the performance of our method.Thus, our method achieves competitive performance on the WebQSP dataset compared to other baseline methods.* denotes supervised methods that use gold SPARQL (or ground truth logical form) as a supervised signal.Our method uses only question-answer pairs, which is a weakly supervised method.The bolded scores represent the highest scores. Ablation Study In order to verify the validity of each component in the model, we performed an ablation study.Table 2 shows the experimental results.We use Gate Recurrent Unit (GRU) to replace RoBERTa in the model.The performance of the model decreased by 6.0% due to the prevalence of missing links in the knowledge base.For example, 71% of the person entities in Freebase are missing birthplace information [39].This leads to the fact that two logically related nodes are not linked in the knowledge base, which reduces the likelihood of finding the correct answer.However, the pre-trained model contains knowledge of many open domains and can make predictions about the missing links in the KB. Variant 2 (w/o GCN): We remove part of the graph structure similarity measure.The performance of the model decreases by 2.2%, which confirms that for query graphbased KBQA methods, extracting the graph structure of the query graph is important.The query graph cannot be filtered well by semantic information matching alone. Variant 3 (w/o Other features): We remove the simple features of the candidate query graph selection module.This variant has the lowest performance degradation of 0.6%.This proves that these simple features are much less capable of filtering query graphs than semantic matching as well as graph structure matching.Furthermore, in order to evaluate the impact of the model components more extensively, we continued our experiments based on Variant 1.The results are shown in Table 3.The results of these two variants demonstrate that both graph structure and simple features can have some improvement on the KBQA task under different settings. We also compared the change in the F1-score score during training for each variant (excluding the variant with simple features removed since the difference in their effectiveness was not significant).As can be seen from Figure 5, although the graph structure metric makes the model fluctuate more sharply in the early stages, which makes the model less effective than the variants without considering the graph structure at some point, it also gives the model a higher upper limit. The ablation study results prove that each module in our model improves the effectiveness of the model.Moreover, the above variants still outperform some of the baseline models, which proves that the effectiveness of our method comes not only from the individual modules but also depends on the overall process design of the model. 
Conclusions In this paper, we propose a novel KBQA method based on graph convolutional networks and an optimized search space. By constraining the search process, the model is able to handle complex questions with multiple hops. It addresses the lack of graph structure information in previous query graph-based KBQA methods, and the results show that the addition of the graph structure matching module improves model performance by 2.2% (F1-score). Experiments on the WebQSP dataset show that our method performs very well.

Limitations: Detecting constraint functions from keywords may suffer from ambiguity. In addition, while graph structures improve model performance, they also increase training time. Further, large-scale pre-training of the model implies a large resource overhead.

Future work: We plan to optimize the model, reduce the resource overhead, and resolve the ambiguity in the constraint function. We also intend to study the effect of different dataset partitions on the experiments.

Figure 1. An example of a KBQA task. For the question "In which stadium did Player A's team win the 1998 World Championship?", the orange circle and the orange line represent the inference process from Player A (the topic entity) to Stadium B (the answer).

Figure 2. An example of the query process. Starting from a topic entity in the question, the extend and constrain operations are applied to generate the query graphs and eventually find the answer. The orange circles represent the constraint function used to reduce the search space. The Ranker selects the path with a higher score after ranking the candidate paths, such as the path made up of the orange arrow and the lambda variable X.

Figure 3. The structure of the query graph Ranker, illustrated with the example question "When did the author of 'The Cricket by the Fireside' write his first book?".

Figure 4. The input and output of RoBERTa for measuring the semantic similarity.

Figure 5. Comparison of the F1-score for each variant. (a) Comparison of our method and its variants. (b) Comparison of variant 1 and its variant.

Table 1. Experimental results for comparison with baselines.

Table 2. Experimental results of the ablation study.

Table 3. Ablation study for Variant 1. Variant 1-a (w/o GCN): we removed the graph structure extraction and matching module from Variant 1, so the model uses only semantic similarity to select candidate query graphs; its results decreased by 2.4%, which demonstrates that graph structure matching substantially improves the performance of the query graph-based KBQA model. Variant 1-b (w/o Other features): we removed the simple features from Variant 1; the model performance is reduced by 0.4%.
5,483.8
2022-11-25T00:00:00.000
[ "Computer Science" ]
A note on the entanglement entropy of primary fermion fields in JT gravity In this paper we analyze and discuss 2D Jackiw-Teitelboim (JT) gravity coupled to primary fermion fields in asymptotically anti-de Sitter (AdS) spacetimes. We obtain a particular solution of the massless Dirac field outside the extremal black hole horizon and find the solution for the dilaton in JT gravity. As two dimensional JT gravity spacetime is conformally flat, we calculate the two point correlators of primary fermion fields under the Weyl transformations. The primary goal of this work is to present a standard technique, called resolvent, rather than using CFT methods. We redefine the fields in terms of the conformal factor as fermion fields and use the resolvent technique to derive the renormalized entanglement entropy for massless Dirac fields in JT gravity. I. INTRODUCTION 2 Two dimensional JT gravity [1−3] is a model of 2D dilaton gravity that admits AdS holography [4]; it is also the simplest nontrivial theory of gravity. In recent years, JT gravity has provided a simple and meaningful toy model for the study of the black hole information loss problem. In particular, it has been able to describe the Page curve of black hole entropy, which is a key step toward solving the black hole information paradox [5−7]. All these works suggest that after the Page time, there is a configuration in which the entanglement wedge of Hawking radiation includes an island inside the black hole interior, and the island configuration is the key to reproducing the Page curve. Therefore, verifying the validity of the island configurationis of great significance. This has motivated several recent proposals to show the existence of the island by proposing ways to extract information from the island to the radiation [8−11]. One of them is achieved by making use of the modular Hamiltonian and modular flow in entanglement wedge reconstruction and the equivalence between the boundary and bulk modular flow [12]. As a concrete example, extremal black holes with modular flow in JT gravity were considered coupled to baths; it is claimed that the explicit information extraction process can be observed in the case where the bulk conformal fields contain free massless fermion fields [12]. While the proposal in [12] shows a promising way to extract information from the island configuration in JT gravity, the details of this process have not been fully specified in the literature. In particular, the modular flow of the free massless fermion field considered in [12] is in two dimensional Minkowski spacetime. More details are needed regarding how to apply this flow to conformally flat spacetime. Therefore, in this paper, we aim to fill this gap in the literature by providing detailed calculations of the entanglement entropy for massless fermion fields with the help of the resolvent technique. Our goal is to provide a clear and comprehensive understanding of the proposed method and its implications for the black hole information paradox. This paper is organized as follows. In Sec. II, we obtain the equations of motion in the background of JT gravity coupled to primary fermion fields and find the particular solution of the wave function outside the extremal black hole horizon, and we also solve for the dilaton in JT gravity. In Sec. III, we calculate the two point correlators of primary fermion fields under Weyl transformations by the CFT method. In Sec. 
IV, we review the standard resolvent technique to derive the entanglement entropy in n disjoint intervals for a massless Dirac field in two dimensional vacuum Minkowski spacetime [13,14]. Accordingly, we redefine the fields in terms of the conformal factor as fermion fields and use the resolvent technique as described in two dimensional vacuum Minkowski spacetime to derive the renormalized entanglement entropy for massless Dirac fields in JT gravity. GRAVITY BACKGROUND The JT gravity model consists of 2D gravity coupled to a scalar ϕ called the dilaton, with a classical bulk term action in the Lorentzian signature on an asymptotically AdS spacetime, where R is the Ricci scalar and we have set the AdS length . The JT gravity action originates from a dimensional reduction of the four dimensional near extremal magnetic charged black hole [15−17], and the twodimensional JT model is obtained by reduction of the spherically symmetric metric, where is the 2D part with coordinates the dilaton ϕ plays the role of the radius of the 2-sphere that we want to reduce, and is a constant proportional to the extremal entropy of the higher-dimensional black hole geometry. Ψ(x) In this paper, we consider the coupling of a massless Dirac field to JT gravity. The massless Dirac field, also called the primary field, satisfies conformal invariance under conformal transformations in the CFT method. The action of primary fermion fields in 2D curved spacetime is [18−22]: where is the spinor covariant derivative, and the spin connection is 1) . Note that in Eq. (3), is not real, so we should choose as the Dirac Lagrangian, where operates on , and is different from We adopt the metric signature and the anticommutator of the Dirac gamma metric is . The Dirac gamma matrices have this property: and ; we choose The Dirac adjoint in Eq. (3) is defined as , and , where is the vierbein. We define α as the strength of the coupling between the massless Dirac field and JT gravity, and we also define , whereupon the total action functional is By varying the total action (5) with respect to the metric field, we obtain the classical equations of motion (see Appendix A): where is defined as , and is defined as , with . A. Massless Dirac fields outside the extremal black hole In a generic conformal coordinate system , the metric in two dimensional gravity is given by x ± = t ± z In this paper we consider a zero temperature black hole in the two-dimensional Jackiw-Teitelboim gravity, and we can use the Poincaré coordinates to describe the extremal black hole (see Fig. 1 for more details). The metric in the Poincaré patch is 085106-2 The boundary of AdS spacetime is at , the future horizon of the JT extremal black hole is at , and the past horizon is at . S D By varying the Dirac action with respect to the Dirac field, we obtain the massless Dirac field equation in two dimensional conformally flat spacetime We can write the 2-component massless Dirac spinor Ψ as As any two dimensional spacetime is conformally flat, the massless Dirac field equation in the conformal gauge can be written 1) The wave function in JT gravity spacetime must satisfy the following two boundary conditions: The wave function is zero at the AdS spacetime boundary and finite at the past event horizon or the future event horizon of the extreme black hole in JT gravity. Combining the two boundary conditions and Eq. (11), we find a particular solution of the wave function distribution beyond the extremal black hole horizon: B. 
The dilaton In the conformal gauge, using the general metric in two dimensional gravity in Eq. (7), from Eq. (6) we finally have 2) , (1) For the metric g +− : (2) For the metric g ++ : (3) For the metric g : As the direction of the tetrad can be arbitrarily selected, we choose and . We then obtain the expression for the connection and the matrix in the conformal gauge: 1) A tetrad is a set of four linearly independent vectors that the direction can be arbitrarily selected, four vierbeins are constrained by three equations in light cone coordinates: , , . We choose , and . In conformal gauge , we use the the following identities to get the equations of motion. 085106-3 Next, we substitute the 2-component massless Dirac spinor (10) into the right hand side of Eq. (14), Eq. (15), and Eq. (16). Using Eq. (17) and Eq. (18), we then have Substituting the particular solution of the 2-component massless Dirac spinor (12) back into the right hand side of Eq. (20) and Eq. (21), we find Finally, the equation of motion for the dilaton becomes We can solve the equation for the dilaton where a, b, and c are constants that determine the dilaton of JT gravity. In particular, the dilaton diverges at the conformal boundary, and the location of this physical boundary is imposed by the boundary condition [23]: where u is the physical boundary time, with ε the UV cutoff. S L(2, R) S L(2, R) The metric in JT gravity has isometry. For the extreme black hole in JT gravity, under the transformation the dilaton profiles can be recast as III. THE TWO POINT CORRELATORS A. The primary fermion field correlator in two dimensional Minkowski spacetime We consider a free Dirac field in two dimensions. It satisfies the Dirac equation and the canonical anticommutation relations : (26) where x and y lie on the Cauchy surface with t = constant. The two point field correlator in two dimensional Minkowski spacetime is 1) : The integral of the two point field correlator in Eq. (28) is [13]: where is the standard modified Bessel function; in the massless limit this gives the two point correlator for the primary fermion field in two dimensional flat spacetime: B. The primary fermion field correlator in JT gravity In general, the metric in 2D conformally flat spacetime is: Chang-Zhong Guo, Wen-Cong Gan, Fu-Wen Shu Chin. Phys. C 47, 085106 (2023) 1) Note that in [13,14] the authors used instead of in their computation for the two point field correlator. There exists local Lorentz boost transformations in spacetime, for which is invariant for fermions, the vacuum expectation value of called Feynman propagator is defined as in QFT. In contrast, in [13,14] they defined the two point field correlator as in order to calculate the entanglement entropy of a massless Dirac field with the correlator trace formula (38). 085106-4 where is the conformal factor. Two dimensional JT gravity is locally AdS spacetime with the conformal factor . In the CFT method, the two point correlation function for primary operators on a curved manifold with Weyl rescaled metric in terms of those with metric g satisfies the following transformation relation under Weyl transformations [5,24]: where Δ is the scale dimension for the twist field and is the two point correlation function for primary operators in two dimensional flat spacetime. The free massless fermion field is also the primary field with the scale dimension . Combining Eq. (30) and Eq. 
(32), we obtain the two point correlators of the primary fermion fields in JT gravity when Weyl transformed from to : IV. ENTANGLEMENT ENTROPY The entanglement entropy (von Neumann entropy) provides us with a convenient way to measure the degree of entanglement between two quantum systems in QFT. We choose the total quantum system as a pure quantum state with the density matrix . The reduced density matrix for the subsystem A is , which is obtained by taking a partial trace over the subsystem B of the total density matrix (see Fig. 2). The entanglement entropy for the subsystem A is the corresponding von Neumann entropy: . For the 1+1 dimensional quantum system at criticality, the continuum limit is a conformal field theory with central charge c. The renormalized entanglement entropy of a single interval in vacuum state in flat spacetime can be calculated by the Cardy formula [25,26]: ℓ where is the length of the interval on the line in vacuum. After Weyl transformation from to , the entanglement entropy in 2D conformally flat spacetime is transformed as [5,27]: The entanglement entropy is related to the reduced density matrix of the region V; hence, the problem of finding an explicit expression for the local density matrix is equivalent to solving the resolvent of the two point correlators in the massless case. Resolvent is a standard technique in complex analysis; the use of the resolvent technique for free massless fermions was first introduced in [13] to study the entanglement entropy in vacuum on the plane, and subsequently for the entanglement entropy of a chiral fermion on the torus [28−30]. In this section we first review the derivation of the entanglement entropy for a massless Dirac field in two dimensional vacuum Minkowski spacetime in terms of the resolvent technique, and we then obtain the entanglement entropy of a single interval for a massless Dirac field in 2D conformally flat JT gravity by redefining the field in terms of the conformal factor as the Fermion field. A. Entanglement entropy for a massless Dirac field in two dimensional vacuum Minkowski spacetime The two point function is related to the reduced density matrix of the region V by the condition: ∂A Fig. 2. (color online) A continuum QFT has been spatially divided into two components on a Cauchy slice Σ. Region B is the complement of region A, and the red curve is the entangling surface, which is a spacetime codimension-2 surface. A note on the entanglement entropy of primary fermion fields in JT gravity Chin. Phys. C 47, 085106 (2023) 085106-5 The expression for the entanglement entropy of the region V can then be given by a propagator trace formula (see Appendix D) [13,14,31]: The resolvent of the two point function is defined as: Combining the the expression for the resolvent (39), the entanglement entropy can be rewritten as: In Eq. (39), the inverse of an operator for the propagator is understood in the sense of a kernel that satisfies the following equation: Substituting (30) into (41) yields a singular integral equation [32]: Fortunately, we can solve the resolvent for this integral operator inside a region formed by n disjoint intervals by the Plemelj formulae [32] in the theory of singular integral equations (see Appendix B). The resolvent of the two point function (see Appendix C): where the function is Substituting (43) into (40), we have . 
Integrating over ξ first, we obtain the entanglement entropy in n disjoint intervals for a massless Dirac field in two dimensional vacuum Minkowski spacetime: where is a distance cutoff introduced in the last integration, and the Virasoro central charge of the primary fermion field is . For a single interval in 2D vacuum flat spacetime on the plane, we verify the Cardy formula for the renormalized entanglement entropy . JT gravity In this subsection, we apply the resolvent technique to 2D conformally flat spacetime. We begin by redefining the field in terms of the conformal factor as the Fermion field 1) . Let us consider the rescaling field, which is given by: Ψ(⃗ x) Using this rescaling field, we can use the same approach as described in the previous subsection and obtain the same results as in Eq. (46). After performing the calculations using the original field , one finally finds Chang-Zhong Guo, Wen-Cong Gan, Fu-Wen Shu Chin. Phys. C 47, 085106 (2023) 1) We would like to thank Yiming Chen for bringing this point to our attention. 085106-6 The renormalized entanglement entropy for a massless Dirac field of a single interval in JT gravity is 1) : where the Virasoro central charge of the massless Dirac field is . V. CONCLUSION AND DISCUSSION In this paper we obtain the particular solution of the wave function outside the extremal black hole horizon in JT gravity, which is very important for research on the extraction of extremal black hole information with modular flow in JT gravity. The specific expression for the modular flow of 2D free massless fermions depends on the wave function. Other papers have derived the modular flow formula for 2D free massless fermions, but did not report the specific expression for the wave function [12,28,33,34]. 2 In CFT methods, a convenient way to compute entropies of intervals is by using the replica trick to compute the Rényi entropy for integer index n: (49) n → 1 Taking the limit , we can derive the entanglement entropy of the primary fermion fields [5,25,26]. The resolvent technique is a simpler way to derive the entanglement entropy for 2D free massless fermions than the CFT method called the replica trick. In this paper we calculate the two point correlators of primary fermion fields in JT gravity under Weyl transformations and redefine the fields in terms of the conformal factor as the fermion fields, then use the resolvent technique as described in two dimensional vacuum Minkowski spacetime to derive the renormalized entanglement entropy for massless Dirac fields in JT gravity. In this work, we have calculated the wave function and derived the entanglement entropy for the primary fermion fields outside the extremal black hole horizon in JT gravity. We only consider the quantum entanglement between free massless fermions outside the extremal black hole horizon. For the entanglement between free massless fermions inside and outside the horizon, however, we should regard the entirety of spacetime as a total quantum system composed of the extremal black hole and Hawking radiation outside the horizon. The degrees of freedom for the free massless fermions located inside the horizon represent the degrees of freedom of the extremal black hole, and the degrees of freedom for the free massless fermions located outside the horizon represent the degrees of freedom of Hawking radiation particles. 
In order to calculate the entanglement entropy for the free massless fermions both inside the horizon and outside the horizon, we should consider the entanglement island inside the extremal black hole interior in JT gravity. We may calculate the fine grained entropy of the extremal black hole and Hawking radiation via the semiclassical method called the island rule. We leave the full analysis of this for future work. ACKNOWLEDGMENTS We thank Hong-An Zeng for helpful discussions on the resolvent of the primary fermion correlator in 2D vacuum Minkowski spacetime. TO PRIMARY FERMION FIELDS The total action functional for JT gravity coupled to primary fermions is given by Eq. The variation of (3) with respect to the frame vector indices is [18]: . We use . By the variation of the metric , Eq. (52) can be written: where we have used the following contractions in (53), γ a e a ν = γ ν , e a ρ e a ν = δ ρ ν . (A4) For the classical bulk term action of JT gravity (1), using the standard relations [35], A note on the entanglement entropy of primary fermion fields in JT gravity Chin. Phys. C 47, 085106 (2023) 2D 1) In dilaton gravity, the generalized entropy of Hawking radiation is given by 085106-7 By varying the metric in 2D spacetime, we obtain: In 2D gravity, we can easily calculate that the Einstein tensor is zero. In the last term in Eq. (56), we have . Eq. (56) then becomes Finally, substituting (53) and (57) into (51) yields the classical equation of motion in JT gravity coupled to primary fermion fields: For the entire complex plane (see the Fig. 3), we obtain the integral formula of the function using Cauchy's integral formula [32]: From the Eq. (59) we easily see: Equations of the type are called singular integral equations. We define the following functions: Substituting Eq. (62) into Eq. (61), we have Fig. 3. (color online) L is a line segment with two endpoints a and b, and is the midpoint of the line segment L. is the blue semicircle in the counterclockwise direction, and is the red semicircle in the clockwise direction. represents the contour that contains , and represents the contour that does not contain . represents a complete circle in the counterclockwise direction. (B14) Eq. (72) is also the Plemelj formulae, and the corresponding solutions are from which we obtain Eq. (42) can then be written We define a homogeneous equation: By taking logarithms, we obtain with the corresponding solution: For a single interval , the solution to is Combining Eq. (76) and Eq. (77) yields the Plemelj formulae: A note on the entanglement entropy of primary fermion fields in JT gravity Chin. Phys. C 47, 085106 (2023) R(x, y) From this we obtain the solution to the resolvent 1) : Substituting (80) into (85), we obtain the expression for the resolvent of a single interval : When L contains n disjoint intervals, where , the resolvent of the primary fermion correlator in multicomponent subsets of the L in two dimensional vacuum Minkowski spacetime can be written as R(x, y) = where the function is RELATOR TRACE FORMULA The creation and annihilation operators and for primary fermion fields satisfy the anticommutation relations: . The two point correlators are then given as The reduced density matrix of the fermion system can be written in the exponential form [14]: is the modular Hamiltonian of the system and K is the normalization constant, which satisfies . 
The two point correlators in the region V of space are related to the reduced density matrix by the following equation: We can diagonalize the exponent by the Bogoliubov transformation with unitary operator U to maintain the anticommutation relation . We choose U such that is a diagonal matrix and is the eigenvalue of Hermitian matrix H. Using the normalization condition and the Bogoliubov transformation, the reduced density matrix can be rewritten The relation between H and C can then be rewritten We define as the eigenvalues of the matrix , giving In terms of the definition of the von Neumann entropy (34), the entanglement entropy for primary fermion fields of the region V can be written as |0⟩ |1⟩ where we have traced two quantum states such as and for primary fermion fields in the second line.
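The correlator trace formula derived above lends itself to a quick numerical check on a lattice. The following sketch is our own illustration (not part of the paper): it computes the entanglement entropy of a single interval of a free-fermion (tight-binding) chain at half filling from the restricted correlation matrix, using S = -Tr[C ln C + (1 - C) ln(1 - C)]. The logarithmic growth with the interval length reproduces the expected S ≈ (c/3) ln ℓ behaviour with c = 1.

```python
import numpy as np

def interval_entropy(ell):
    """Entanglement entropy of `ell` contiguous sites of an infinite
    free-fermion chain at half filling, from the restricted correlation
    matrix C_ij = <psi_i^dag psi_j> via the trace formula
    S = -Tr[C ln C + (1 - C) ln(1 - C)]."""
    i, j = np.meshgrid(np.arange(ell), np.arange(ell), indexing="ij")
    d = i - j
    with np.errstate(divide="ignore", invalid="ignore"):
        C = np.where(d == 0, 0.5, np.sin(np.pi * d / 2) / (np.pi * d))
    nu = np.linalg.eigvalsh(C)
    nu = np.clip(nu, 1e-12, 1 - 1e-12)      # avoid log(0)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

for ell in (10, 40, 160):
    # Expect roughly S ~ (c/3) ln(ell) + const with c = 1.
    print(ell, interval_entropy(ell), np.log(ell) / 3)
```

Here the lattice cutoff plays the role of the distance cutoff ε appearing in the continuum formulas above; the comparison column (c/3) ln ℓ differs from the computed entropy only by an ℓ-independent constant.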
5,038.6
2023-01-01T00:00:00.000
[ "Physics" ]
Approximated least-squares solutions of a generalized Sylvester-transpose matrix equation via gradient-descent iterative algorithm This paper proposes an effective gradient-descent iterative algorithm for solving a generalized Sylvester-transpose equation with rectangular matrix coefficients. The algorithm is applicable for the equation and its interesting special cases when the associated matrix has full column-rank. The main idea of the algorithm is to have a minimum error at each iteration. The algorithm produces a sequence of approximated solutions converging to either the unique solution, or the unique least-squares solution when the problem has no solution. The convergence analysis points out that the algorithm converges fast for a small condition number of the associated matrix. Numerical examples demonstrate the efficiency and effectiveness of the algorithm compared to renowned and recent iterative methods. Introduction In differential equations and control engineering, there has been much attention for the following linear matrix equations: AX + XB = C : Sylvester equation, AXB + CXD = E : a generalized Sylvester equation, AXB + CX T D = E : a generalized Sylvester-transpose equation, X + AXB = C : Stein equation, X + AX T B = C : Stein-transpose equation. These equations are special cases of a generalized Sylvester-transpose matrix equation: where, for each t = 1, . . . , p, A t ∈ R l×m , B t ∈ R n×r , for each s = 1, . . . , q, C s ∈ R l×n , D s ∈ R m×r , E ∈ R l×r are known matrices whereas X ∈ R m×n is the matrix to be determined. These equations play important roles in control and system theory, robust simulation, neural network, and statistics; see e.g. [1][2][3][4]. A traditional method of finding their exact solutions is to use the Kronecker product of a matrix and the vectorization to reduce the matrix equation to a linear system; see e.g. [5,Ch. 4]. However, the dimension of the linear system can be very large due to the Kronecker multiplication, so that the step of finding the inversion of the associated matrix will result in excessive computer storage memory. For that reason, iterative approaches have received much attention. The conjugate gradient (CG) is an interesting idea to formulate finite-step iterative procedures to obtain the exact solution at the final step. There are variants of CG method for solving linear matrix equations, namely, the generalized conjugate direction method (GCD) [6], the conjugated gradient least-squares method (CGLs) [7], generalized product-type methods based on a bi-conjugate gradient (GPBi) [8]. Another interesting idea to create an iterative method is to use Hermitian and skew-Hermitian splitting (HSS); see e.g. [9]. A group of methods, called gradient-based iterative methods, aim to construct a sequence of approximated solutions that converges to the exact solution for any given initial matrices. These methods are derived from the minimization of associated norm-error functions using gradients, and the hierarchical identification. Such techniques have stimulated and have played a role in many pieces of research in a few decades. In 2005, Ding and Chen [10] proposed a gradient-based iterative (GI) method for solving Eqs. (3), (4), and (6). Ding et al. [11] proposed the GI and the least-squares iterative (LSI) methods for solving p j=1 A j XB j = F which includes Eqs. (1) and (4). Niu et al. [12] developed a relaxed gradient-based iterative (RGI) method for solving Eq. (3) by introducing a weighted factor. 
The MGI method, developed by Wang et al. [13], is a half-step-update modification of the GI method. Zhaolu et al. [14] presented two methods for solving Eq. (3). The first method is based on the GI method and called the Jacobi gradient iterative (JGI) method. (2). See more algorithms in [16][17][18][19][20][21][22][23][24]. The developed iterative methods can be applied to state-space models [25], controlled autoregressive systems [26], and parameter estimation in signal processing [27]. Let us focus on gradient-based iterative methods for solving Eqs. (5) and (8). A recent gradient iterative method for Eq. (5) is AGBI method, developed in [28]. The following two methods were proposed to produce the sequence X(k) of approximated solutions converging to the exact solution X * of Eq. (8). A conservative choice of the convergence factor μ is In this work, we introduce a new iterative algorithm based on gradient-descent for solving Eq. (8). The techniques of gradient and steepest descent let us obtain the search direction and the step sizes. Indeed, our varied step sizes are the optimal convergence factors that guarantee the algorithm to have a minimum error at each iteration. Our convergence analysis proves that, when Eq. (8) has a unique solution, the algorithm constructs a sequence of approximated solutions converging to the exact solution. On the other hand, when Eq. (8) has no solution, the generated sequence converges to the unique least-squares solution. We provide the convergence rate to show that the speed of convergence depends on the condition number of the associated certain matrix. In addition, we have an error analysis that gives an error estimation comparing the current iteration with the preceding and the initial iterations. Finally, we provide numerical simulations to guarantee the efficiency and effectiveness of our algorithm. The illustrative examples show that our algorithm is applicable to both Eq. (8) and its certain interesting special cases. The organization of this paper is as follows. In Sect. 2, we recall the criterion for the matrix equation (8) to have a unique solution or a unique least-squares solution, via the Kronecker linearization. We propose the gradient-descent algorithm to solve Eq. (8) in Sect. 3. The proof of convergence criteria, convergence rates, and error estimation for the proposed algorithm are provided in Sect. 4. In Sect. 5, we present the comparison of the efficiency of our proposed algorithm to well-known and recent iterative algorithms. In the remainder of this paper, all vectors and matrices are real. Denote the set of n columns vectors by R n and the set of m × n matrices by R m×n . The (i, j)th entry of a matrix A is denoted by A(i, j) or a ij . To perform a convergence analysis, we use the Frobenius norm, the spectral norm, and the (spectral) condition number of A ∈ R m×n , which are, respectively, defined by Exact and least-squares solutions of the matrix equation by the Kronecker linearization In this section, we explain how to solve the generalized Sylvester-transpose matrix equation (8) directly using the Kronecker linearization. Recall that the Kronecker product of A = [a ij ] ∈ R m×n and B ∈ R p×q is defined by A ⊗ B = [a ij B] ∈ R mp×nq . The vector operator Vec(·) turns each matrix A = [a ij ] ∈ R m×n to the vector Lemma 2.1 (e.g. [5]) For compatible matrices A, B, and C, we have the following properties of the Kronecker product and the vector operator. 
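The individual properties collected in Lemma 2.1 do not render legibly here. The standard identities presumably intended, stated for compatible real matrices, are reproduced below; in particular the first one is the identity referred to later as Lemma 2.1(ii).

```latex
\operatorname{Vec}(AXB) = (B^{T} \otimes A)\,\operatorname{Vec}(X), \qquad
(A \otimes B)(C \otimes D) = (AC) \otimes (BD), \qquad
(A \otimes B)^{T} = A^{T} \otimes B^{T}.
```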
Recall also that there is a permutation matrix P(m, n) ∈ R mn×mn such that This matrix depends only on the dimensions m and n and is given by where E ij has entry 1 in (i, j)th position and all other entries are 0. Now, we can transform Eq. (8) to an equivalent linear system by applying the vector operator and utilizing Lemma 2.1(ii) and the property (9). Indeed, we get the linear system where Thus Eq. (8) has a (unique) solution if and only if Eq. (10) does. We impose the assumption that Q is of full column-rank, or equivalently, Q T Q is invertible. If Eq. (8) has a solution, then we obtain the exact (vector) solution to be If Eq. (8) has no solution, then we can seek for a least-squares solution, i.e. a matrix X * that minimizes the squared Frobenius norm Q Vec(X) -Vec(E) 2 F . The assumption on Q implies that the least-squares solution for Eq. (8) is uniquely determined by the solution of the associated normal equation, and it is also given by Eq. (12). In this case, the leastsquares error is given by We denote both the exact and the least-squares solutions of Eq. (8) by X * . Gradient-descent iterative solutions for the matrix equation This section is intended to propose a new iterative algorithm for creating a sequence {X k } of well-approximated solutions of Eq. (8) that converges to the exact or least-squares solution X * . This algorithm will be applicable if the matrix Q is of full column-rank, no matter Eq. (8) has a solution or not. Our aim is to generate a sequence {x k }, starting from an initial vector x 0 , using the recurrence where x k is the kth approximation, τ k+1 > 0 is the step size, and d k is the search direction. To obtain the search direction, we consider the Frobenius-norm error p t=1 A t XB t + q s=1 C s X T D s -E F which is then transformed into Qx -Vec(E) F via Lemma 2.1(ii) and x = Vec(X). Let f : R mn → R be the norm-error function defined by It is easily seen that f is convex. Hence, the gradient-descent iterative method can be shown as the following recursive equation: To find the gradient of the function f , the following properties of the matrix trace will be used: By lettingẽ = Vec(E), we compute the derivative of f as follows: Thus, we have the new form of the iterative equation as follows: The above equation can be transformed into matrix form via Lemma 2.1(ii), i.e., We differentiate φ k+1 by using the properties of a matrix trace and obtain It is obvious that the second-order derivative of φ k+1 is QQ T (ẽ -Qx k ) 2 F which is a positive constant. So when d dτ φ k+1 (τ ) = 0, we get the minimizer of φ k+1 , i.e. s . An implementation of the gradient-descent iterative algorithm for solving Eq. (8) is given by the following algorithm where the search direction and the step size are taken into account. To terminate the algorithm, one can alternatively set the stopping rule to be R k Fδ < where > 0 is a small error and δ is the least-squares error described in Eq. (13). Convergence analysis of the proposed algorithm In this section, Algorithm 1 will be proved to converge to the exact solution or the unique least-squares solution. Recall the next lemma. Algorithm 1: The gradient-descent iterative algorithm for Eq. (8) The following definition is an extension of the Frobenius norm and will be used in the convergence analysis. . Then X k Q → X * Q for any initial matrix X 0 . Here, · Q is the Q-weighted Frobenius norm defined by Eq. (17). 
Proof Since x * = Vec(X * ) is the optimal solution of min x∈R mn f (x), we denote the minimum value, inf x∈R mn f (x) = f (x * ) as δ. Note that δ is equal to the least-squares error determined by Eq. (13) and is zero if X * is the unique exact solution. If there exists k ∈ N such that ∇f (x k ) = 0, then X k = X * and the result holds. To investigate the convergence of the algorithm, we assume that ∇f (x k ) = 0 for all k. Considering the strong convexity of f , we have from Eq. (14) ∇ 2 f (x k ) = Q T Q. Let λ min (λ max ) be the minimum (maximum) eigenvalue of Q T Q, respectively. Since Q T Q is symmetric, we have Thus, f is strongly convex. From (15), substituting y = x k+1 and x = x k yields We minimize the RHS by taking τ = 1/λ min , so that Since the above equation is true for all y ∈ R mn , we have Similarly, from (16), we have Minimizing the RHS by taking τ = 1/λ max yields Subtracting each side of (19) by δ and combining with ∇f ( Putting α := 1λ min /λ max , we have By induction, we obtain Since Q T Q is assumed to be invertible, Q T Q > 0, it follows that λ min > 0 and hence 0 < α < 1. Consider the case of X * is the unique exact solution, i.e., δ = 0. We have f (x k ) → 0, or equivalently Qx k -Vec(E) → 0 as k → ∞. Now, the assumption that Q is of full columnrank implies that Therefore, X k = Vec -1 (x k ) → X * as k → ∞. The other case is that X * is the unique least-squares solution, i.e., δ > 0. We have f (x k ) → δ or 1 2 Qx k -Vec(E) 2 F → Vec(E) 2 F -Vec(E) T Qx * . Then We omit some algebraic operations and hence immediately write Therefore, X k Q → X * Q as k → ∞. We denote the condition number of Q by κ = κ(Q). Observe that α = 1-κ -2 . The relation between the quadratic norm-error f (x k ) and the norm of residual error R k is given by Making use of Lemma 2.1(ii), the inequalities (20) and (21) become the following estimation: In the case of Eq. (8) having a unique exact solution (δ = 0), the error estimations (22) and (23) reduce to (24) and (25), respectively. Since 0 < α < 1, it follows that, if R k-1 F are nonzero, then The above discussion is summarized in the following theorem. Theorem 4.4 Assume that Q is of full column-rank. (i) Suppose Eq. (8) has a unique solution. The error estimation R k F compared with R k-1 F (the preceding iteration) and R 0 F (the initial iteration) are given by (24) and (25), respectively. Particularly, the relative error R k F gets smaller than the preceding (nonzero) error, as in (26). (ii) When Eq. (8) has a unique least-squares solution, the error estimation (22) and (23) hold. In both cases, the convergence rate of Algorithm 1 (regarding the error R k F ) is governed by Remark 4.5 The relative errors (22) and (23) do not seem to decrease every step of iteration since the terms 2δκ -2 and 2δ(1α k ) are positive. However, the inequality (19) implies that { R k F } ∞ k=1 is a strictly decreasing sequence converging to δ. We recall the following properties. Theorem 4.7 Suppose that Q is of full column-rank and Eq. (8) has a unique exact solution. We have the error estimation X k -X * F compared with the preceding iteration and the initial iteration of Algorithm 1 are provided by Particularly, the convergence rate of the algorithm is governed by Proof Utilizing (25) and Lemma 4.6, we have As the limiting behavior of X k -X * F depends on (1κ -2 ) k 2 , the convergence rate for Algorithm 1 is governed by √ 1κ -2 . Similarly, using (24), it follows that and hence (28) is obtained. Theorem 4.8 Suppose Q is of full column-rank and Eq. 
(8) has a unique least-squares solution. The error estimation X k -X * 2 F compared to the preceding iteration and the initial iteration of Algorithm 1 are provided by Proof The proof is similar to that of Theorem 4.7 and carried out by (22) and (23). We, therefore, omit the proof. Consequently, our convergence analysis indicates that the proposed algorithm always converges to the unique (exact or least-squares) solution for any initial matrices and small condition numbers. Moreover, the algorithm will converge fast when the condition number is close to 1. Numerical experiments for the generalized Sylvester-transpose matrix equation and its special cases In this section, we provide numerical results to show the efficiency and effectiveness of Algorithm 1. We perform the experiments in the following cases: • a large-scaled square generalized Sylvester-transpose equation, • a small-scaled rectangular generalized Sylvester-transpose equation, • a small-scaled square Sylvester-transpose equation, • a large-scaled square Sylvester equation, • a moderate-scaled square Lyapunov equation. Each example contains some comparisons of the proposed algorithm (denoted by TauOpt) with the mentioned existing algorithms as well as the direct method Eq. (12). CT stands for the computational time (in seconds) and is measured by the tic toc function in MATLAB. The relative error R k F is used to measure error at the kth step of the iteration. All iterations have been evaluated by MATLAB R2020b, on a PC (2.60-GHz intel(R) Core(TM) i7 processor, 8 Gbyte RAM). We choose an initial matrix X 0 = zero(100), where zero(n) is the n × n zero matrix. In fact, this equation has the unique solution X * = tridiag(0.293, 0.152, 0.905). Table 1 shows that the direct method consumes a big amount of time to get the exact solution, while Algorithm 1 produces a small-error solution in a small time (0.1726 sec- We find that 4 = rank Q = rank[Q Vec(E)] = 5, i.e., the matrix equation does not have an exact solution. However, the size of Q is 9 × 4, i.e., Q is of full-column rank. Hence, according to Theorem 4.3, Algorithm 1 will converge to the least-squares solution in which the least-squares error (13) is equal to 0.0231. We choose an initial matrix X 0 = zero(2). Algorithm 1 is compared with GI (Method 1.1), LSI (Method 1.2) and the direct method Eq. (12). In this case, we consider the error X * -X k F where X * is the least-squares solution. Figure 2 displays the error plot, and Table 2 shows the errors and CTs for TauOpt, GI, LSI and the direct method. We see that the errors converge monotonically to zero, i.e., the approximate solutions X k generated by Algorithm 1 converge to X * . Moreover, Algorithm 1 consumes less computational time than other methods. Next, we will consider the Sylvester-transpose equation (5) which is a special case of the generalized Sylvester-transpose equation (8). From Algorithm 1, the optimal step size τ is described by We report the comparison of Algorithm 1 with GI (Method 1.1), LSI (Method 1.2), AGBI ( [28]) and the direct method Eq. (12) by Fig. 3 and Table 3. Both of them imply that Algorithm 1 outperforms other algorithms. Next, we will consider the Sylvester equation (3) which is also a special case of Eq. (8). For this equation, the optimal step size τ is described by Figure 3 Relative errors for Ex. 5.3 where W k = A T R k + R k B T and R k = C -AX k -X k B. where A, B, C ∈ R 100×100 . We choose an initial matrix X 0 = zero(100). 
Here, the symmetric exact solution is given by X * = tridiag(1, -5, 1), so that AGBI algorithm can be applicable. We compare Algorithm 1 with GI (Method 1.1), AGBI ( [28]), RGI [12], MGI [13], JGI [14], and AJGI [14]. Although Table 4 tells us that our algorithm takes a slightly more time than some other algorithms, Fig. 4 illustrates that Algorithm 1 reaches the fastest convergence. The last example presents another special case of Eq. (8) that is the Lyapunov equation (2). The optimal step size τ is described by Figure 4 Relative errors for Ex. 5.4 where W k = A T R k + R k A and R k = B -AX k -X k A T . In conclusion, Algorithm 1 takes a slightly more computational time than some other algorithms but still outperforms distinctly in performance of convergence. Concluding remarks We properly establish a gradient-descent iterative algorithm for solving the generalized Sylvester-transpose matrix equation (8). We show that the proposed algorithm is useful and applicable for wide range of problems, even though the problem has no solution, as long as the associated matrix Q, defined by Eq. (11), is of full column-rank. If the problem has the unique exact solution, then the approximate solutions converge to the exact solution. In the case of a no-solution problem, we have X Q → X * Q where X * is the unique least-squares solution. The convergence rate is described in terms of κ, the matrix condition number of Q, that is, √ 1κ -2 . Moreover, the analysis shows that the sequence of errors generated by our algorithm is monotone decreasing. Numerical examples are provided to verify our theoretical findings.
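As a closing illustration, the iteration of Algorithm 1 can be written entirely in matrix form. The sketch below is our reconstruction from the description in the paper (it is not the authors' code): the residual is R_k = E − Σ_t A_t X_k B_t − Σ_s C_s X_k^T D_s, the search direction is the matricized gradient G_k = Σ_t A_t^T R_k B_t^T + Σ_s D_s R_k^T C_s, and the step size that minimizes the next residual norm is τ_{k+1} = ||G_k||_F^2 / ||L(G_k)||_F^2, where L denotes the forward operator. The test problem at the end is a small, deliberately well-conditioned synthetic Sylvester equation, not the example matrices used in Section 5.

```python
import numpy as np

def forward(X, A, B, C, D):
    """L(X) = sum_t A_t X B_t + sum_s C_s X^T D_s."""
    return sum(At @ X @ Bt for At, Bt in zip(A, B)) + \
           sum(Cs @ X.T @ Ds for Cs, Ds in zip(C, D))

def adjoint(R, A, B, C, D):
    """Matricized gradient: sum_t A_t^T R B_t^T + sum_s D_s R^T C_s."""
    return sum(At.T @ R @ Bt.T for At, Bt in zip(A, B)) + \
           sum(Ds @ R.T @ Cs for Cs, Ds in zip(C, D))

def gradient_descent_sylvester(A, B, C, D, E, X0, tol=1e-10, max_iter=10000):
    X = X0.copy()
    for _ in range(max_iter):
        R = E - forward(X, A, B, C, D)          # residual R_k
        G = adjoint(R, A, B, C, D)              # search direction
        denom = np.linalg.norm(forward(G, A, B, C, D)) ** 2
        if denom == 0.0 or np.linalg.norm(G) < tol:
            break
        tau = np.linalg.norm(G) ** 2 / denom    # optimal step size
        X = X + tau * G
    return X

# Usage on a plain Sylvester equation A1 X + X B1 = C0, a special case of
# Eq. (8) with p = 2, q = 0.  Shifting by 2n*I keeps the problem
# well-conditioned so the iteration converges quickly.
rng = np.random.default_rng(0)
n = 20
A1 = rng.standard_normal((n, n)) + 2 * n * np.eye(n)
B1 = rng.standard_normal((n, n)) + 2 * n * np.eye(n)
Xtrue = rng.standard_normal((n, n))
C0 = A1 @ Xtrue + Xtrue @ B1
Xhat = gradient_descent_sylvester([A1, np.eye(n)], [np.eye(n), B1], [], [],
                                  C0, np.zeros((n, n)))
print(np.linalg.norm(Xhat - Xtrue))
```

The same routine handles the transpose terms of Eq. (8) by passing non-empty lists C and D, and it reduces to the Lyapunov case by choosing B1 = A1^T.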
4,830
2021-05-21T00:00:00.000
[ "Mathematics" ]
Problems with Higgsplosion A recent calculation of the multi-Higgs boson production in scalar theories with spontaneous symmetry breaking has demonstrated the fast growth of the cross section with the Higgs multiplicity at sufficiently large energies, called"Higgsplosion". It was argued that"Higgsplosion"solves the Higgs hierarchy and fine-tuning problems. In our paper we argue that: a) the formula for"Higgsplosion"has a limited applicability and inconsistent with unitarity of the Standard Model; b) that the contribution from"Higgsplosion"to the imaginary part of the Higgs boson propagator cannot be re-summed in order to furnish a solution of the Higgs hierarchy and fine-tuning problems. 1 The amplitude behaviour with the large scalar multiplicity One of the flaring questions for the modern elementary particle physics is the question about the energy scale of new physics. All current experiments are in excellent agreement with the Standard Model (SM). Moreover, the Higgs mass m H ≃ 125 GeV means that all the couplings of the theory are small above the electroweak scale, and perturbative calculations in non-abelian QFT, which is the core of the SM, should provide a consistent approach. Most of the coupling constants of the theory become smaller with increasing energy. The only two couplings which grow with the energy scale are the U (1) hypercharge coupling constant and the Higgs self coupling λ . However, the scale of new physics related to this coupling evolution with the energy -the Landau pole -is proportional to exp(1/λ ) and significantly exceeds the Planck scale. Therefore, it is normally assumed that SM can be trusted as a perturbative QFT at all energies that can, even hypothetically, be probed in collisions. The only scale that may appear in the SM framework is the one associated with the metastability of the EW vacuum, but this scale, even if present, is very large ∼ 10 10 GeV. At the same time, it has long been known that theories of self-interacting scalars (which also include the Higgs boson of the SM) have problems with the application of perturbation theory at high energies. The first observations of subtleties in the scalar multi-particle production demonstrated that at the tree level, owing to the large number of contributing diagrams, the n-particle amplitudes have factorial dependence on the number of particles. [1,2,3,4,5] A tree This factorial growth of the amplitude indicates the breakdown of the usual perturbative calculations for n λ −1 . It was found [6,7,8] that the corresponding 1 → n cross-section can be written in exponential form where ε ≡ (E − nm H )/nm H is the average kinetic energy of the final-state Higgs particles. The function F(λ n, ε) was obtained by following a specific semiclassical approach [8] valid in the limit λ → 0, n → ∞, with fixed λ n, ε. Moreover, there is a conjecture [7], that to exponential precision the result does not depend on the details of the initial state, given that the initial number of particles is small and therefore, without loss of generality, one can focus on calculation of 1 → n process, even though the initial particle is off-shell. For small λ n ≪ 1 and small energies of the final particles ε ≪ 1 the exponent of the cross-section is [6,7,8] . As λ n → 0, F(λ n, ε) → −∞ and the cross-section Eq.(2) is exponentially suppressed, whilst in the opposite regime for large λ n the cross section grows exponentially, thereby contradicting the unitarity of the theory, at least at the level of perturbation theory. 
The expression Eq. (4) for F(λn, ε) is valid for λn ≪ 1, ε ≪ 1. The logarithmic and lowest-order terms correspond to tree-level contributions, while the term of order O(λ²n²) is the first radiative correction. Note that in the range of validity of Eq. (4) the function F(λn, ε) is negative. At tree level (for λn ≪ 1) the energy dependence for arbitrary energies ε was found in [11,12] and again leads to an exponentially suppressed result. However, the problem of finding the expression for arbitrarily large λn and ε is still open. Recently, the authors of [9,10] have extended the thin-wall approximation of [13] and have found the cross-section in the opposite limit, λn ≫ 1, given in Eq. (5). An important feature of this solution is the increase of F(λn, ε) at sufficiently large λn for a fixed value of ε. This result was then used to argue that at large multiplicities (or, equivalently, large energies E ∼ n(ε + m_H)) the 1 → n width grows exponentially. One should note that the thin-wall semiclassical solution leading to Eq. (5) exists only in the λφ⁴ theory with spontaneous symmetry breaking in 3+1 dimensions. We would like to stress, however, that a non-vanishing ε is required for the result of Eq. (5) to be positive, since at zero ε the logarithmic term is infinitely negative, which gives zero cross-section at the threshold. At the same time, the contribution 0.85√(λn) in Eq. (5) was obtained at the kinematical threshold, that is, for ε → 0. This is a subtle point. One should also note that the full result of Eq. (5) is obtained from a combination of the large-λn contribution with the tree-level result, which has a factorized form. This form is valid at tree level and at one loop (cf. Eq. (4)). We would now like to point out that higher-order quantum corrections are expected to contain terms which depend both on ε and λn, e.g. terms like O(λ²n²ε) in Eq. (4). Such terms could play an important role. We argue here that without the knowledge of these terms it is not possible to determine the validity region of the result Eq. (5) with respect to the value of ε. We discuss this in detail in the next section. Such mixed terms may prevent the exponential growth of the cross-section. The exponential growth of the 1 → n width was suggested to be by itself a solution to the hierarchy problem in [14], where the authors conclude that such exponential growth of the self-energy leads, after resummation, to an exponential suppression of the scalar propagators at high energies. In this paper we review in detail the validity and consequences of such fast-growing amplitudes in the context of a unitary, local and Lorentz-invariant quantum field theory. Unitarity and 1PI resummation It has been known for many years [2] that exponentially growing amplitudes lead to a violation of unitarity. In [14] the authors have proposed a mechanism to recover unitarity through the effect of the off-shell 1 → n amplitude on the re-summed scalar Feynman propagator. The authors suggested that if the two-point function falls off faster with energy than the amputated 1 → n matrix element, unitarity can be restored via the so-called Higgspersion mechanism. However, this argument requires a propagator which falls off faster than the amputated 1 → n matrix element. In other words, we require the two-point function to decay exponentially with energy. This is a peculiar form of the two-point function that is known to cause problems with unitarity [15].
However, it has been proposed [14] that this form appears in a theory with exploding amplitudes. The problem we see here is the following. An exponentially decreasing propagator has been obtained in [14] because the authors have used perturbation theory to sum up one-particle irreducible (1PI) Green's functions, which is a valid procedure only for a convergent geometric series. Namely, it has been claimed that the exact two-point function ∆_F(p²) can be obtained from the 1PI Green's function Σ(p²) in the re-summed form ∆_F(p²) = i/(p² − m_0² − Σ(p²) + iε), where m_0 is the bare mass of the theory. However, if Σ is exponentially growing with p², at sufficiently large p² this series is no longer convergent. In this case, one may not use the re-summed form of the above expression. Since resummation is not valid, instead of falling exponentially with p², ∆_F will grow uncontrollably with p². This leads to unitarity violation of the Higgsploding theory, assuming that Eq. (5) is valid for large λn values and non-vanishing ε. Under this assumption, one may ask whether the aforementioned problem is related to the application of perturbation theory where it is not valid. It is instructive to examine the functional form of the two-point function using the non-perturbative "language" of dispersion relations. In this procedure we closely follow [16]. Consider the momentum-space Feynman propagator, defined as the Fourier transform of the time-ordered two-point correlator ⟨0|T{φ(x)φ(0)}|0⟩, where we anticipate that ∆_F is Lorentz invariant and hence only a function of p². Using the integral representation of the θ-function, one splits the time-ordered product into two terms. Setting the variable of integration x → −x in the second term and using translation invariance of the vacuum, both terms can be brought to the same form. Now we insert a complete set of (in or out) states. In the language of [14], this corresponds to a kinematically unique one-particle state, plus a continuum of multi-particle states. We let σ_n denote all the internal quantum numbers of an n-particle state, including its phase space. Assuming that |⟨0|φ(0)|n, σ_n⟩|² is Lorentz invariant, one obtains a spectral sum over states, where p_n is the total four-momentum of the n-particle state. In the case of [14], complications will arise due to the divergence of this integrand. To see how difficulties appear, let us consider the scenario where ∑_{n,σ} |⟨0|φ(0)|n, σ_n⟩|² is a polynomial of order N in p². Exchanging p′_0 → −p′_0 and x^µ → −x^µ in the second term, combining both terms in the curly bracket and making the iε prescription implicit, one arrives at the representation of Eq. (13). At this point, one might be tempted to swap the order of integration and perform the x-integral. However, the remaining integrand would be an order N − 1 polynomial in p′². This integrand is not convergent at p′_0 = ±∞, so a straightforward change of the integration order is not valid here. Before we can swap the order of integration, we must perform N subtractions of the form given in Eq. (14). In this way, Eq. (13) may be written as (p_0²)^N times a convergent integral, plus an order N − 1 polynomial in p_0² whose coefficients are functions of ∆_F(0). For example, the first term in Eq. (14) simply gives ∆_F(0). The contribution from the convergent integral is given in Eq. (15), where p′^µ ≡ (p′_0, p) and we recognize the term in the curly brackets as the Källén-Lehmann spectral function ρ(p′²). Given that p is fixed, one may change the variable of integration from p′_0² to p′², giving Eq. (16). From this form it is evident that if ρ(p²) is an order-N polynomial in p², knowledge of ρ(p²) only defines the two-point function up to some order-N polynomial.
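Since the display equations (13)-(16) referred to above were lost in extraction, the following is a minimal sketch, in standard textbook notation rather than the paper's own, of the spectral representation and of the subtraction ambiguity the argument relies on; overall factors of i and the metric convention are suppressed, and the subtraction constants c_k are generic placeholders.

% Kallen-Lehmann representation (schematic) and the effect of N subtractions:
\begin{align}
  \Delta_F(p^2) &\;\propto\; \int_0^{\infty} ds\, \frac{\rho(s)}{p^2 - s + i\epsilon}, \\
  \Delta_F(p^2) &\;\propto\; \sum_{k=0}^{N-1} c_k\,(p^2)^k
    \;+\; (p^2)^N \int_0^{\infty} ds\, \frac{\rho(s)}{s^N\left(p^2 - s + i\epsilon\right)} .
\end{align}
% After the subtractions needed for convergence, the same spectral density rho
% fixes Delta_F only up to a polynomial in p^2 with undetermined coefficients c_k.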
The functional form of ∆_F(p²) is allowed to change dramatically without any change in the amputated 1 → n matrix element. In the case of [14], the situation is even more extreme. In this case, the spectral function ρ(p²) is a sum of terms proportional to the multi-particle rate R(p²), where M_h is the Higgs mass, Π_n is the n-particle phase-space element and M(1 → n) is the matrix element for 1 → n Higgs decay. If one assumes that R(p²) is exponentially growing in p², all predictive power for ∆_F from ρ(p²) is lost, due to the infinite number of subtractions required for a convergent integral in Eq. (13). Although one may know the exact form of ρ(p²), one may add an arbitrary analytic function to both the left-hand side and the right-hand side of Eq. (16), such that the Feynman propagator is allowed to change its functional form wildly without having any apparent effect on the multi-particle rate R(p²). This feature is just a statement that for a function g(z) which grows like an order-N polynomial and has a branch cut with discontinuity ∆g(z) along the real axis beginning at z_0, one can recover g(z) by integrating ∆g(z) via contour integration. In order to discard the contribution from the |z| → ∞ curve, one performs N subtractions such that the dispersive integral converges; the last term in the resulting expression is the residue at the x = y pole [17]. The price one pays for convergence is the addition of an order-N "polynomial of integration" which must be fixed by extra conditions of the theory. Returning to the Higgspersion scenario, we would like to stress that, given that ∆_F(p²) may include an arbitrary analytic function of p², there is no reason why it should fall off exponentially with p² in the high-energy limit. In fact, Eq. (16) suggests precisely the opposite: that the two-point function should grow uncontrollably in this limit. The discrepancy between the amplitude growth with p² we observe and the exponential fall proposed in [14] arises because the latter was calculated using perturbation theory. Namely, the one-particle irreducible (1PI) Green's function Σ(p²) was summed as a geometric series in order to put Σ into the denominator of ∆_F(p²). However, if Σ grows exponentially with p², at sufficiently large p² this series is no longer convergent and one must instead use the unresummed series ∆_F = ∆_0 + ∆_0 Σ ∆_0 + ∆_0 Σ ∆_0 Σ ∆_0 + …, with the free propagator ∆_0(p²) = i/(p² − m_0² + iε), where m_0 is the bare mass of the theory. In this form ∆_F will grow uncontrollably with p², in agreement with Eq. (16). In this way, the Higgspersion mechanism only compounds the unitarity violation in the Higgsploding theory. Conclusions We have explored the Higgsplosion effect and the related Higgspersion mechanism in detail and have found their limitations and problems. In particular, assuming the correctness of Eq. (5) for F(λn, ε) derived for the 1 → n process in [9,10] beyond the thin-wall approximation, we have found that the amplitude for the 1 → n process increases exponentially at sufficiently high energies, rather than decreasing as stated in [14]. We have found this effect and the respective discrepancy because one cannot use the resummation of the self-energy insertion when that self-energy grows exponentially. Since the respective series is divergent for sufficiently large momentum, one cannot re-sum it into a correction of the propagator. Previously [14] it was argued that such a correction will play a crucial role in "shutting off" the propagator at sufficiently large energies and in solving the hierarchy problem.
In light of our findings we would like to state that such a resummation is not possible and that, assuming Eq. (5) is correct, the 1 → n amplitude will grow exponentially, thereby violating unitarity. The fact that Eq. (5) implies unitarity violation leads us to conclude that this equation is likely not generic enough and that additional higher-order cross terms of the form O(λ²n²ε) in Eq. (4) are expected to play an important role in the restoration of unitarity. Indeed, unitarity should be restored, since it was present in the theory in the first place by virtue of the hermiticity of the Hamiltonian. If some theory has a real unitarity problem (which, however, is not the case for the SM framework we discuss here), one of the natural solutions could be a composite nature of the Higgs boson, which at certain characteristic energy scales would cure the non-unitary growth via the respective form factor and the related new-physics sector. In the case of the Standard Model we conclude that the 1 → n multi-scalar final-state amplitude should be consistent with unitarity, but that, in any case, if it grows exponentially it cannot be re-summed. Such behaviour is not consistent with unitarity and does not provide a solution to the hierarchy problem. We believe that the correct evaluation of the 1 → n amplitude for multi-scalar final states above the threshold requires an extension of Eq. (5) and remains an open and very non-trivial problem.
3,740.8
2018-08-16T00:00:00.000
[ "Physics" ]
Proceedings of the 9th International Conference on Surface Plasmon Photonics (SPP9) https://doi.org/10.1515/nanoph-2019-0532 The research field of plasmonics is concerned with the interaction of light with free electrons in conducting media, thus having a natural emphasis on metal nanostructures while now also being explored in several other novel material systems ranging from macromolecules and two-dimensional materials (such as graphene) to doped semiconductors. The field of plasmonics is bridging fundamental research and diverse applications, embracing traditional topics such as sensing as well as emerging ones such as localized heating and hot-electron generation. The synergy of light with nanotechnology is opening a range of application areas important to society. After two decades of explosive growth, plasmonics is still going strong: according to "2019 Research Fronts", the topic "Plasmonic properties of metal nanostructures" belongs to the top 10 research fronts in physics [1]. The International Conference on Surface Plasmon Photonics (SPP) is a biennial independent and non-profit conference series widely regarded as the premier series in the field of plasmonics. The most recent conference, SPP9, was held in Copenhagen (May 26-31, 2019), exploring the breadth of fascinating topics and new directions that are emerging from plasmonics, including metasurfaces, graphene and other 2D materials, strong-coupling phenomena, topological plasmonics, quantum plasmonics, and hot-electron phenomena (Figure 1). Enabling deeply subwavelength electromagnetic field confinement, plasmonics epitomizes one of the key research areas within nanophotonics represented extensively at SPP9. This special issue includes a selection of invited papers from this conference. SPP9 opened with a plenary talk by Thomas W. Ebbesen on polaritons in material science, including perspectives on strong-coupling phenomena in plasmonics as a new state of matter. In this special issue, this topic is elaborated in the paper by Thomas et al., considering ground state chemistry under vibrational strong coupling [2].
Xiong et al. investigate ultrastrong coupling in gold nanocubes coated with quantum emitters, positioned on a gold film [3]. Heilmann et al. experimentally explore strong coupling of dye molecules to dielectric lattice resonances [4]. Calvo et al. theoretically consider ultra-strong coupling phenomena in molecular cavity quantum-electron dynamics [5]. Baranov et al. experimentally explore cavity plasmon-polaritons in the context of circular dichroism [6]. Neuman et al. theoretically explore surface-enhanced resonant Raman scattering of molecules in plasmon-exciton systems in the strong-coupling regime [7]. In the area of 2D materials, Galiffi et al. [8] theoretically explore the nonlocal plasmon response in graphene with the aid of singular metasurfaces. Device aspects of hybrid graphene-plasmon systems are considered by Ding et al., combining graphene with plasmonic waveguides to enable large-bandwidth photodetectors [9]. Zhao et al. combine GeSe nanosheets with gold metal surfaces to enable surface-plasmon resonance sensors with enhanced sensitivity [10]. Ramazani et al. explore exciton-plasmon coupling and hot-carrier generation in boron nitride 2D layers [11]. Spreyer et al. experimentally study second harmonic generation in hybrid plasmonic metasurfaces and monolayers of WS2 [12]. Within the context of plasmonic metasurfaces, Engelberg et al. exploit a Huygens nanoantenna-based metalens for outdoor photographic/surveillance applications in the near infrared [13]. Ding et al. exploit gap-plasmon metasurfaces for vortex-beam generation in the near infrared [14]. Going beyond the common temporal harmonic response of matter, Pacheco-Peña and Engheta theoretically propose a temporally effective medium concept in metamaterials with the potential to create a medium with a desired effective permittivity [15]. Concerning the advancement of nanofabrication processes for plasmonic structures, Hahn et al. explore helium focused ion beam milling as a resist-free, maskless, direct-write method [16]. Gittinger et al. exploit a sketch-and-peel technique to define plasmonic dimer resonators [17]. On the topic of optical emission from plasmonic nanostructures, Buret et al. explore the effects of quantization in atomic-sized point contacts [18], while Krasavin et al. explore the tunneling regime of nano-gap dimers [19]. Kang et al.
review work on quantum plasmonic effects in Angstrom-scale gap structures driven by terahertz radiation [20]. Khurgin reports fundamental limits to hot carrier injection from metals across metal-semiconductor interfaces in plasmonic nanostructures [21]. Within the context of plasmonic antennas, Sanders and Manjavacas theoretically explore parity-time symmetric plasmonic antennas for enhanced light-matter interaction with subwavelength emitters [22]. Pedrueza-Villalmanzo et al. offer a perspective on plasmonic nanoantennas for nanoscale chiral chemistry and advancing molecular magnetism [23]. On the subwavelength probing of plasmons, Esmann et al. demonstrate a near-field based spectroscopy method to quantitatively map the projected local optical density of states of a nanostructured sample with 10-nm spatial resolution [24]. Kaltenecker et al. use scanning near-field microscopy to investigate interference patterns caused by surface plasmon polaritons on mono-crystalline gold platelets with ultra-smooth surfaces [25]. Finally, as applications of plasmonic resonances, Bauer and Giessen tailor Fano resonances in metallic nanostructures for optical sensing [26], while Jia et al. explore gap plasmon resonances to produce plasmonic colors that can be viewed under dark-field illumination [27]. This special issue provides a perspective on recent research efforts and developments within the dynamic field of plasmonics, illustrating its breadth and, we hope, also serving to inspire new work and to attract new researchers to the field. One of the important SPP9 conference outcomes was awarding Carlsberg Foundation Scholarships to 12 excellent young researchers, including the first authors of Refs. [3,8,13]. The next conference in the series, the 10th International Conference on Surface Plasmon Photonics (SPP10), will be held in Houston (May 23-28, 2021). For more information, see SPP10.rice.edu. Figure 1: The breadth of research topics and directions within plasmonics, illustrated by a word cloud compiled from the book of abstracts of the 9th International Conference on Surface Plasmon Photonics (SPP9) held recently in Copenhagen (SPP9.dk).
1,670.2
2020-02-01T00:00:00.000
[ "Physics" ]
Technology Acceptance of a Machine Learning Algorithm Predicting Delirium in a Clinical Setting: a Mixed-Methods Study Early identification of patients with life-threatening risks such as delirium is crucial in order to initiate preventive actions as quickly as possible. Despite intense research on machine learning for the prediction of clinical outcomes, the acceptance of the integration of such complex models in clinical routine remains unclear. The aim of this study was to evaluate user acceptance of an already implemented machine learning-based application predicting the risk of delirium for in-patients. We applied a mixed methods design to collect opinions and concerns from health care professionals including physicians and nurses who regularly used the application. The evaluation was framed by the Technology Acceptance Model assessing perceived ease of use, perceived usefulness, actual system use and output quality of the application. Questionnaire results from 47 nurses and physicians as well as qualitative results of four expert group meetings rated the overall usefulness of the delirium prediction positively. For healthcare professionals, the visualization and presented information was understandable, the application was easy to use and the additional information for delirium management was appreciated. The application did not increase their workload, but the actual system use was still low during the pilot study. Our study provides insights into the user acceptance of a machine learning-based application supporting delirium management in hospitals. In order to improve quality and safety in healthcare, computerized decision support should predict actionable events and be highly accepted by users. Supplementary Information The online version contains supplementary material available at 10.1007/s10916-021-01727-6. Introduction Artificial intelligence (AI) and particularly machine learning (ML) for supporting healthcare have been a constant in medical informatics research over decades [1,2]. Health-related prediction modelling has gained much attention since well-known companies have been developing prediction models for different clinical outcomes [3]. This has given rise to various prediction models with high predictive performance in retrospective data sets. However, few of these models have ever been adopted to support healthcare professionals in clinical routine [4,5]. Several barriers and concerns have been raised for the implementation of ML-based predictive models in clinical decision support systems [5][6][7][8]. As the final decision is always the responsibility of the user, it is crucial to open the often criticized black box of ML decisions so that healthcare professionals can detect bias or error [9]. While the simplicity of a system and education tailored to its use facilitate the uptake of a new technology, increasing workload and threats to the doctor/nurse-patient relationship might hinder it [10]. The fear of losing control over decision-making is a potential barrier [11], and alerts and recommendations might be ignored by clinicians if they are overwhelmed by them [12]. Two recent studies reported on the acceptance of ML-based applications by clinicians. Brennan et al. [13] evaluated the application MySurgeryRisk [14] in a clinical setting and compared the judgment of clinicians with the algorithm's prediction of postoperative complications.
Although physicians' risk assessment significantly improved after interaction with the algorithm, only five out of ten physicians reported that the application helped them in decision-making. Five physicians reported that they would use the application for counselling patients preoperatively, and eight found it easy to use. Ginestra et al. [15] assessed clinical perceptions of the Early Warning System 2.0 [16], a tool that predicts sepsis in non-ICU patients. Two hundred eighty-seven nurses and physicians completed a survey after an alert by the system. Overall, physicians criticised missing transparency of relevant predictors, too late alerts and that the system triggered mostly for already known abnormalities. We recently implemented an ML-based application predicting the occurrence of delirium in an Austrian hospital, and prospectively evaluated its performance in a routine clinical setting [17]. Delirium is a syndrome of acute confusional state with an acute decline of cognitive functioning [18]. Delirium patients have an increased risk of morbidity and mortality. High occurrence rates of delirium do not only increase length of stays and financial costs [19], but present a high burden for nursing. Identifying patients with highest risk is especially beneficial for nursing, because delirium can be prevented by non-pharmacological interventions [20,21]. During a pilot study of seven months, the performance of the algorithm had achieved a specificity of 82% and a sensitivity of 74% [17]. As much as an algorithm excels in prospective prediction, it is crucial to know how users and domain experts perceive it. A well-known model for evaluating new technologies is the Technology Acceptance Model (TAM) [22,23], often referred to as a gold standard for explaining IT acceptance [24]. Based on the theory of reasoned action [25], TAM assumes that a behavioural intention acts as best determinant for the actual use of an innovation in technology, influenced by perceived ease of use and perceived usefulness of an innovation. In the extended model TAM2, perceived usefulness is further influenced by several more factors including the output quality of the system, i.e. how well the system performs [26]. Validity and robustness of TAM have been shown for the field of healthcare [27], but minor adaptions of the items are recommended when evaluating health IT applications [24]. The overall goal of our study was to gain knowledge of the uptake, user acceptance and concerns regarding a ML-based prediction application designed to improve patient safety in a clinical setting. The evaluation targeted perceptions by healthcare professionals on the use case delirium prediction and included domain experts and users who had been using the application regularly in their daily work. Material and methods The delirium prediction application Starting in spring 2018, the delirium prediction application has been implemented in a hospital of Steiermärkische Krankenanstaltengesellschaft (KAGes), the regional public care provider in Styria, Austria. Prior to implementation in the hospital information system (HIS), we had performed various training sessions for healthcare professionals and had promoted the application throughout all participating departments. For every patient admitted to one of the departments, a random forest-based algorithm automatically predicts the delirium risk based on existing EHR data [17]. 
The predicted outcome is an ICD-10-GM (International Classification of Diseases -Tenth Revision -German Modification) coded diagnosis F05 (Delirium due to known physiological condition) or mentions of delirium in the text of a patient's discharge summaries. In addition, domain experts stated the need to include a second model that predicts the diagnosis F10.4 (alcohol withdrawal delirium). Although this type of delirium is quite distinct from the condition coded by F05 in terms of aetiology and pathophysiology, experts found it crucial to include both types because of their similarity in signs, symptoms and consequences. The algorithm predicts delirium risk with both models separately. Based on the higher risk score, every patient is stratified into a risk group: low risk, high risk or very high risk. An icon symbolizing the risk group is presented within the user interface of the HIS (Fig. 1a). With a click on the icon, a web application (Fig. 1b) opens up revealing details on the ML prediction supporting clinical reasoning [17,28]: The application displays patient specific information used for modelling, e.g. ICD-10 codes, laboratory results or procedures. Predictors are ranked by (1) evidence-based risk factors of delirium known from literature and (2) the highest impact on the ML prediction using established feature importance functions. Study design In this study, we evaluated the delirium prediction application integrated in a HIS. This included the visualization in the user interface of the HIS (Fig. 1a) as well as a web application (Fig. 1b), which opens up from the HIS. We used a convergent parallel design for the mixed methods study (Fig. 2). For both quantitative and qualitative methods, TAM [22] constituted the evaluation framework. The factor output quality from TAM2 [26] was considered highly relevant for the application of complex machine learning models in healthcare and was thus added to the original TAM framework for evaluation. In the qualitative assessment, two authors collected comments from healthcare professionals during four expert group meetings before and during the pilot phase. After the last meeting, one author assigned all comments to the factors of TAMperceived ease of use, perceived usefulness, output quality and actual system use. Output quality was defined as the perceived correctness of delirium risk prediction. Besides sharing their experience with the application, the expert group suggested improvements for visualization in the HIS and new functionalities for the algorithm. In the quantitative assessment, we evaluated the user acceptance of the application using questionnaires seven months after implementation. One author formulated items for the TAM factors based on original examples [26] and, as recommended in the literature [24], slightly adjusted to the context of healthcare and delirium prediction. After an expert discussion with two more authors, a total of 16 items were selected for the final questionnaire. A pilot test on understandability with two hospital staff members (not otherwise involved in the study) resulted in minor adoptions of item formulation. Responses for all items were measured using a five point Likert-type response scale (strongly disagreestrongly agree), apart from one item assessing the absolute frequency of use per month in numbers. The final questionnaire included 16 TAM items. User comments were assessed in a free text field at the end of the questionnaire (see Supplementary File Fig. S1). 
Finally, quantitative and qualitative assessment results were interpreted in conjunction in order to obtain a detailed picture of the uptake of the application in the clinical setting. Participants Printed questionnaires were distributed to five out of eight participating departments. Physicians and nurses from all levels of experience were encouraged to participate in the assessment, which was on a voluntary basis. We received completed questionnaires from ten out of 21 physicians (47.6%) and 37 out of 67 nurses (55.2%, see Table 1). For the expert group meetings, the head of the department nominated experts from their field before the implementation. Depending on the clinical roster, up to five senior physicians Data analysis All quantitative analyses were conducted in R Version 3.6. For all questionnaire items, heat maps facilitated the analysis of the results. For each participant the median was calculated for all item responses of each TAM factor, and then the mean of the medians of all participants was calculated for each factor. Two items measuring perceived ease of use had been formulated negatively and had to be recoded (see Fig. 3). In order to assess the internal consistency of the TAM factors in the questionnaire, we calculated the mean of the items for each factor and Cronbach's alpha using the R package ltm [29] (see Supplementary File Table S2). Technology acceptance questionnaire A heat map of the results from all 47 users on the questionnaire is shown in Fig. 3. Thirty-two users (68.1%) agreed or strongly agreed that the application provided them with additional information. Seven users (14.9%) did not believe that the application is a useful support for delirium prevention, and seven did not believe that the application can be used to detect delirium at an early stage. Opinions about the application's usefulness for their own work were mixed: 17 users (36.2%) reported the application to be useful for their work, while 15 users (31.9%) did not find it useful. Only two users did not find the purpose of the application understandable and three users reported that the presented information was not understandable. For 42 users (89.4%) the use of the application did not increase the workload. However, 18 users (38.3%) were not yet able to integrate the application successfully into their clinical routine, and 14 users (29.8%) reported that they were not sufficiently prepared to use the application at time of implementation. Five users (10.6%) reported that the calculated delirium risk matched their own estimations only rarely or very rarely, and nine users (19.1%) reported that they frequently or very frequently estimated the risk higher than the application. Considering actual system use, nine users (19.1%) strongly agreed or agreed that they considered the output of the application in their clinical decisions. Thirteen users (27.7%) reported that they had been using the application regularly, and the median for use per month was 3 times (min = 0, max = 20). Overall, users rated the perceived ease of use and perceived usefulness rather positive, the output quality neutral, and the actual system use rather poor (see Table 2). Two users left a comment in the free text field. User A described the application as "an excellent instrument for delirium screening that allows managing prevention". User B commented that "there was a more frequent use on the part of the physicians". 
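As a small illustration of the scoring and reliability analysis described above (per-participant medians for each TAM factor, their means across participants, and Cronbach's alpha), the sketch below re-implements the idea in Python on invented data; the authors' actual analysis was done in R 3.6 with the ltm package, and the item-to-factor assignment shown here is purely hypothetical.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) array of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def factor_scores(responses, factor_items):
    """Per-participant median for each TAM factor, then the mean of those medians."""
    scores = {}
    for factor, columns in factor_items.items():
        per_participant_median = np.median(responses[:, columns], axis=1)
        scores[factor] = per_participant_median.mean()
    return scores

# Toy data: 6 participants, 8 items on a 1-5 Likert scale (invented numbers).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(6, 8)).astype(float)
# Negatively formulated (reverse-coded) items would first be recoded as 6 - x.
factors = {"perceived_usefulness": [0, 1, 2], "perceived_ease_of_use": [3, 4, 5],
           "output_quality": [6], "actual_use": [7]}
print(factor_scores(responses, factors))
print("Cronbach's alpha, usefulness items:", cronbach_alpha(responses[:, [0, 1, 2]]))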
Perceived usefulness The consensus of the expert group on perceived usefulness was that the application offered great support in the early recognition of delirium risk patients and helped to reduce resources for screening. "The application gives good support; I am convinced of its usefulness." "Due to the delirium prediction application, we were already able to prevent the sliding into a strong delirium with simple interventions." "I see the application as a benefit, as we are able to reduce the time for delirium screening." It provided support in the assessment of patients under sedation at admission, and it was used to confirm existing presumptions on delirium risk. "It is especially an added value if patients are not responsive during admission." "The prediction helps to corroborate my own estimation when seeing a patient." "Also, the prediction helps us when we are not quite sure about the delirium risk." The application also supported the targeting of patients with a delirium diagnosis in a previous stay. "Especially patients with a diagnosis of delirium in the past are being targeted earlier now." Perceived ease of use The common impression for the perceived ease of use was highly positive. The expert group appreciated that there was no need for additional data entry and that the prediction was available within a few seconds in the user interface of the HIS. As illustrated in Fig. 1a, high risk patients were presented with a yellow symbol and very high risk patients with a red symbol. The experts appreciated this. "I like the presentation with the traffic light symbol." The visualization in the web application (Fig. 1b) sparked much enthusiasm, because it provided a comprehensive view of a patient, supporting healthcare quality even beyond delirium prevention. However, during the first month the risk of delirium had been visualised using percentages. This was criticized by the experts, as the interpretation of the percentages was not clear to them. As a solution, we replaced the percentages by a bar chart visualizing the three risk categories and an arrow indicating the location of a patient on the risk dimension. "The bar representing the range of delirium risk helps us to identify patients at the border to another risk group." Output quality Within the expert group, the predictive accuracy of the algorithm was perceived as very high. "The system has almost 100 % accuracy." "There are not too many patients in the very high risk group; it seems correct." Actual system use One senior physician raised concerns about the frequency of use among other physicians: "I absolutely want to continue with the application. Now the question is how to bring it closer to the users; many don't know much about it yet."
Finally, there was a broad agreement of the expert group members to continue with the application in clinics, and to recommend the implementation of the algorithm to other hospital departments or hospital networks. "The application is successful. It should be continued in any case." Discussion In this study, mixed methods with a convergent parallel study design were used to evaluate an ML-based application predicting delirium in in-patients from a user-centric perspective. The study provides significant insights to user acceptance with an ML application that uses EHR-based risk prediction to increase patient safety. A well-established theory, the Technology Acceptance Model [26], was used to frame the evaluation process and to guide the assessment of perceived usefulness, perceived ease of use, output quality and actual system use. A group of clinical experts provided regular feedback for qualitative analyses, and supported the improvement of visualization and algorithm functionality. After seven months of implementation, the majority of users believed that the application was useful for the prevention of delirium or its early detection. They appreciated the visualization using yellow and red icons in the user interface of the HIS, and a detailed summary of the risk prediction in a web application. The automatic and fast prediction without the need of manual data entry presented a great value to them. However, not everyone was able to integrate the application into their clinical routine and the actual system use was low. Studies of implemented ML applications are rare [4], and few studies have focused on the evaluation of user acceptance and technology uptake. A study of Brennan et al. [13] assessed user acceptance of an ML application as a secondary aim. However, their study sample was small and homogeneous including the feedback of ten physicians only. Ginestra et al. [15] included a bigger, heterogeneous sample, but the ML application was evaluated rather poorly due to missing transparency and late alerts. In order to avoid a black-box scenario, we enhanced clinical reasoning using a web application presenting relevant features from ML-modelling (Fig. 1b). The presented information was understandable and the application provided users with additional information, e.g. highlighting previous diagnoses of delirium. Enabling interpretability and transparency of complex ML models facilitates clinical decision making and the appraisal of risk predictions, and thus remains an important task for ML developers [8]. Potential extra workload has been identified as a barrier of implementation [10], a result that might be essential for a successful uptake of ML-based applications in general. Users reported that the application did not increase their workload. However, further research is needed to determine whether a too high number of false positives might lead to additional preventive actions and increase the clinical workload unnecessarily. Known barriers of hospital-based interventions such as staff workload and changes in roster [30] also limited our study. Questionnaires were kept as short as possible, and half of the staff members from five departments participated in the quantitative assessment. However, only 28% of the participants reported that they had regularly used the application, and the expert group concluded that more promotion and more training sessions were needed. Participation or non-response bias, e.g. 
people more positive towards an application are more likely to participate, might have affected the results [31]. A major limitation of our study is the questionnaire used for the quantitative assessment. Although TAM is extended by several factors in TAM2, we included only quality output. The need of a short and informative questionnaire limited number of items, and quality output seemed to be at highest importance to us. However, several factors are relevant for the usability of clinical decision support systems, which are not included in TAM nor TAM2 such as reaction speed or system errors [32]. Although the HIS of KAGes is known for its high stability, future studies should address the technical quality of the delirium prediction application integrated in the HIS. Due to the limited sample size, psychometric analyses including factor analyses were not feasible and we analysed internal consistency for the TAM factors using Cronbach's alpha only. The internal consistency was acceptable, but further analyses on the questionnaire are needed in future. The aim of the expert group was to receive a broad feedback without restrictions to specific questions, and we did thus not conduct any structured interviews. Comments made by the expert group were documented and all of them could be assigned to the TAM factors chosen for evaluation. However, biases could occur for selecting questions and comments. The last limitation to be mentioned is the rather short evaluation period restricted to one hospital only. Depending on clinical departments, staff members and predicted outcomes, feedback and evaluation results might vary, even with a stable performance of the underlying algorithm. Thus, ongoing monitoring and surveillance of the system as well as a continuous feedback loop with users is essential to determine the application's usefulness and safety in the long term. Conclusion The results of our study are unique, as we are among the first to implement a ML-based prediction application using electronic health records into clinical routine. The combination of quantitative and qualitative methods in the user-centric evaluation enriches our previously conducted evaluation of the performance of the algorithm during seven months of implementation. The high accuracy of the delirium prediction algorithm presented by us recently [17] is now supported by a positive technology acceptance by physicians and nurses. In future, similar applications providing reliable risk predictions and enhancing clinical reasoning will help targeting clinical resources for pharmacological and non-pharmacological preventive actions. We believe that the acceptance of a highly complex algorithm by healthcare professionals is an essential component for a successful implementation in a clinical setting. Without their belief in the usefulness of the application and their support during the whole implementation process, including the communication of existing opinions and concerns, an application is doomed to failure. Only ML algorithms that achieve high accuracy, predict actionable events and are highly accepted by healthcare professionals will be able to improve healthcare quality and hence patient safety. Declarations Ethics approval The study received approval from the Ethics Committee of the Medical University of Graz (30-146 ex 17/18). Consent to participate Not applicable. Consent for publication Not applicable. 
Conflict of interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,369
2021-03-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Characterization of the salivary microbiome in patients with pancreatic cancer Clinical manifestations of pancreatic cancer often do not occur until the cancer has undergone metastasis, resulting in a very low survival rate. In this study, we investigated whether salivary bacterial profiles might provide useful biomarkers for early detection of pancreatic cancer. Using high-throughput sequencing of bacterial small subunit ribosomal RNA (16S rRNA) gene, we characterized the salivary microbiota of patients with pancreatic cancer and compared them to healthy patients and patients with other diseases, including pancreatic disease, non-pancreatic digestive disease/cancer and non-digestive disease/cancer. A total of 146 patients were enrolled at the UCSD Moores Cancer Center where saliva and demographic data were collected from each patient. Of these, we analyzed the salivary microbiome of 108 patients: 8 had been diagnosed with pancreatic cancer, 78 with other diseases and 22 were classified as non-diseased (healthy) controls. Bacterial 16S rRNA sequences were amplified directly from salivary DNA extractions and subjected to high-throughput sequencing (HTS). Several bacterial genera differed in abundance in patients with pancreatic cancer. We found a significantly higher ratio of Leptotrichia to Porphyromonas in the saliva of patients with pancreatic cancer than in the saliva of healthy patients or those with other disease (Kruskal–Wallis Test; P < 0.001). Leptotrichia abundances were confirmed using real-time qPCR with Leptotrichia specific primers. Similar to previous studies, we found lower relative abundances of Neisseria and Aggregatibacter in the saliva of pancreatic cancer patients, though these results were not significant at the P < 0.05 level (K–W Test; P = 0.07 and P = 0.09 respectively). However, the relative abundances of other previously identified bacterial biomarkers, e.g., Streptococcus mitis and Granulicatella adiacens, were not significantly different in the saliva of pancreatic cancer patients. Overall, this study supports the hypothesis that bacteria abundance profiles in saliva are useful biomarkers for pancreatic cancer though much larger patient studies are needed to verify their predictive utility. INTRODUCTION In the United States, approximately 40,000 people die every year from pancreatic adenocarcinoma, making it the fourth leading cause of cancer related death. Patients diagnosed in the early stage of pancreatic cancer have a 5-year survival rate of 24%, compared to 1.8% when diagnosed in the advanced stage (Li et al., 2004). Clinical manifestations of pancreatic cancer do not appear until after the cancer has undergone metastasis (Holly et al., 2004), emphasizing the need for early detection biomarkers. The etiology of pancreatic cancer remains elusive, with cigarette smoking being the most established risk factor (Vrieling et al., 2010;Nakamura et al., 2011;Fuchs, Colditz & Stampfer, 1996;Zheng et al., 1993), although links have also been made to diabetes (Haugvik et al., 2015;Liu et al., 2015), obesity (Bracci, 2012), and chronic pancreatitis (Malka et al., 2002). Recent research has also shown that men with periodontal disease have a two-fold greater risk of developing pancreatic cancer after adjusting for smoking, diabetes, and body mass index (Michaud et al., 2007). The human oral cavity harbors a complex microbial community (microbiome) known to contain over 700 species of bacteria, more than half of which have not been cultivated (Aas et al., 2005). 
Researchers have identified a core microbial community in healthy individuals (Zaura, Keijser & Huse, 2009) and shifts from this core microbiome have been associated with dental carries and periodontitis (Berezow & Darveau, 2011). The composition of bacterial communities in saliva seems to reflect health status under certain circumstances (Yamanaka et al., 2012), making the analysis of salivary microbiomes a promising approach for disease diagnostics. A study by Mittal et al. (2011) found that increases in the numbers of Streptococcus mutans and lactobacilli in saliva have been associated with oral disease prevalence, while another study showed that high salivary counts of Capnocytophaga gingivalis, Prevotella melaninogenica and Streptococcus mitis may be indicative of oral cancer (Mager et al., 2005). A recent study by Farrell et al. (2012) suggested that the abundances of specific salivary bacteria could be used as biomarkers for early-stage pancreatic cancer. Using the Human Oral Microbe Identification Microarray (HOMIM), researchers observed decreased levels of Neisseria elongata and Streptococcus mitis in patients with pancreatic cancer compared with healthy individuals, while levels of Granulicatella adiacens were significantly higher in individuals with pancreatic cancer (Farrell et al., 2012). The HOMIM's ability to detect 300 of the most prevalent oral bacterial species has made it a suitable method for assessing community profiles at the phylum level as well as many common taxa at the genus level. However, the HOMIM microarray method fails to detect approximately half of the bacterial species commonly present in saliva (Ahn et al., 2011). In this study, we applied high-throughput sequencing (HTS) of the bacterial smallsubunit ribosomal RNA (16S rRNA) genes to determine the salivary profiles of patients with and without pancreatic cancer. The use of HTS to sequence 16S rRNA bacterial genes from entire salivary microbial communities allows for a more comprehensive profile of the microbiome in health and disease (Kuczynski, Lauber & Walters, 2011). During this study, we collected 146 saliva samples from patients at the UCSD Moores Cancer Center. HTS was used to characterize the salivary microbiome of patients with pancreatic cancer and compare them to patients with other diseases (including pancreatic disease, non-pancreatic digestive disease/cancer and non-digestive disease/cancer) as well as non-diseased (healthy) controls. This allowed us to test the hypothesis that patients with pancreatic cancer may have a distinct microbial community profiles compared to non-diseased controls and to other forms of digestive and non-digestive diseases. Our results demonstrated that patients with pancreatic cancer had a significantly higher abundance ratio of particular bacterial genera. Sample collection and patient information This study was approved by the University of California San Diego (UCSD) and San Diego State University (SDSU) joint Institutional Review Board (IRB Approval #120101). Patients recruited for the study were being clinically evaluated at the UCSD Moores Cancer Center or were undergoing endoscopy procedures by UCSD Gastroenterologists in the Thornton Hospital Pre-Procedure Clinic between May 2012 and August 2013. All patients were required to fast for 12-hours prior to cancer evaluation and endoscopy procedures. To avoid bias during enrollment, the research coordinator responsible for recruiting participants was unaware of patient diagnosis at time of sample collection. 
Consenting participants were provided with IRB-approved consent forms, and HIPAA forms, as well as an optional, voluntary written survey in which they could share relevant information about antibiotic, dental and smoking history. All participants gave informed consent and their identities were withheld from the research team. Each subject was free to withdraw from the study at any time. Participants were asked to give a saliva sample into a 50 mL conical tube. If the amount of saliva exceeded 55 uL, 10 uL was transferred into tube containing Brain-Heart Infusion media (BHI) and glycerol for future culturing. The remaining saliva was broken up into 55 uL aliquots and stored in sterile cryovials. Both BHI and saliva samples were then immediately stored at −80 • C until further processing. Of the 146 participants, three subjects voluntarily withdrew and seven were not included in the study due insufficient production of saliva (<55 uL) leaving 136 saliva samples. After sample collection, the research coordinator accessed the participants' medical records electronically for patient diagnosis information that was included under a novel subject ID number. Diagnosis was used to determine health status and assess the stage of disease when each sample was taken. The various diagnoses were grouped into the following categories: pancreatic cancer, other disease (including pancreatic disease, non-pancreatic digestive disease/cancer and non-digestive disease/cancer), and healthy (non-diseased) controls. Healthy individuals were defined as participants with no documented chronic digestive or non-digestive disease, and a 5-year resolution of any previously documented digestive or non-digestive disease. Exclusion criteria included participants undergoing active chemotherapy or radiation therapy or use of antibiotics two weeks prior to saliva collection as well as invasive surgery in the past year. DNA isolation, PCR and 16S rRNA sequencing Bacterial DNA was extracted directly from 50 uL of patient saliva using the MoBio PowerSoil DNA Extraction Kit (Catalogue 12888-05, Mo Bio Laboratories, Carlsbad, CA, USA) following the manufacturer's protocol. Genomic DNA was quantified using the NanoDrop TM Spectrophotometer and stored at −20 • C. The 16S ribosomal RNA (rRNA) amplicon region was amplified using barcoded 'universal' bacterial primer 515F (5 ′ -AATGATACGGCGACCACCGAGATCTACAC TATGGTAATT GT GTGCCAGCMGCCGCGGTAA-3 ′ ) and 806R (5 ′ -CAAGCAGAAGA CGGCATACGAGAT XXXXXXXXXXXX AGTCAGTCAG CC GGACTACHVGGGTWT CTAAT-3 ′ ) (X's indicate the location of the 12-bp barcode) with Illumina adaptors used by the Earth Microbiome Project (http://www.earthmicrobiome.org/emp-standardprotocols/16s). The barcoded primers allow pooling of multiple PCR amplicons in a single sequencing run. PCR was carried out using the reaction conditions outlined by the Earth Microbiome Project. Thermocycling parameters were as follows: 94 • C for 3 min (denaturing) followed by amplification for 35 cycles at 94 • C for 45 s, 50 • C for 60 s and 72 • C for 90 s, and a final extension of 72 • C for 10 min (Caporaso et al., 2011). PCR amplicons were then sequenced on the Illumina MiSeq platform at the Argonne National Laboratory Core sequencing facility (Lemont, IL). Sequence analysis 16S rRNA sequences were de-multiplexed using the Quantitative Insights Into Microbial Ecology (QIIME v.1.8.0, http://www.qiime.org) pipeline. 
Sequences were grouped into operational taxonomic units (OTUs) at 97% sequence similarity using the Greengenes reference database. OTUs that did not cluster with known taxa at 97% identity or higher in the database were clustered de novo (UCLUST (Edgar, 2010). Representative sequences for each OTU were then aligned using PyNast (Caporaso, Bittinger & Bushman, 2010), and taxonomy was assigned using the RDP classifier (Version 2.2) (Cole et al., 2003). A phylogenetic tree was built using FastTree (Price, Dehal & Arkin, 2009). Before performing downstream analysis, patient samples were rarefied to 100,000 sequences per sample, singletons and OTUs present in <25% of samples were removed prior to rarefaction. Chimeric sequences were identified using ChimeraSlayer in QIIME, as well as with DECIPHER (Wright, Yilmaz & Noguera, 2012), and subsequently removed. Alpha diversity metrics were computed using QIIME. Beta diversity distance between samples (weighted and unweighted UniFrac) were computed and used to account for both differences in relative abundance of taxa and phylogeny (Vázquez-Baeza et al., 2013). Beta diversity comparisons were done using analysis of similarities (ANOSIM). We also tested whether there were significant differences in abundance ratios of particular genera between our different categories with GraphPad Prism version 6.0 using the Kruskal-Wallis test followed by Dunn's multiple comparison correction. Statistical significance was accepted at a p < 0.05. Analysis and identification of potential contaminants was done using SourceTracker (Knights et al., 2011). Quantitative PCR (qPCR) Leptotrichia abundance was determined using qPCR. Briefly, for each sample we estimated Leptotrichia abundance using Leptotrichia specific 16S primers and normalized their values to overall bacterial abundance estimated using qPCR with universal bacterial 16S primers (5 ′ -TCCTACGGGAGGCAGCAGT-3 ′ forward primer, and 5 ′ -GGACTACCAGGGTATCTAATCCTGTT-3 ′ reverse primer) developed by Nadkarni et al. (2002). qPCR was performed on a Bio-Rad CFX96 Touch TM Real-Time PCR Detection Instrument. The maximum C t (threshold cycle) for the universal 16S primers was set to 35 cycles and C t levels above this threshold were considered background noise. Genus-specific primers for amplification of Leptotrichia were designed using 16S rRNA sequences obtained from the RDP classifier (Version 2.2) (Cole et al., 2003). Primer3 online software was used for primer selection, and conditions were settled following the recommendations of Thornton & Basu (2011). The Leptotrichia forward primer sequence (5 ′ -GGAGCAAACAGGATTAGATACCC-3 ′ ) and the Leptotrichia reverse primer sequence (5 ′ -TTCGGCACAGACACTCTTCAT-3 ′ ) generated an amplicon of 87 bp. The PCR reaction contained 1 uM of both forward and reverse Leptotrichia primers with thermocycling parameters of 50 • C for 2 min, 95 • C for 10 min and 40 cycles of 95 • C for 15 s and 62.5 • C for 1 min. The amplification reactions for the universal primers and Leptotrichia primers were carried out in at least duplicate using 25 uL of SYBR Green Master Mix (Bio-Rad) and 0.85 ng/uL of extracted DNA as template. Various online tools, including In silico PCR Amplification (Bikandi et al., 2004) and Ribosomal Database Project (Cole et al., 2003) were used to check the specificities of the oligonucleotide primer sequences for the target organism. 
A saliva sample was sequenced (Eton Bioscience, San Diego, CA) using our novel primers, and primer specificity was further confirmed with a 16S rRNA database BlastN search. RESULTS Salivary microbial diversity profiles were generated for a total of 108 patients. Eight patients were diagnosed with pancreatic cancer (P), 78 were diagnosed with other diseases (including cancer) (O), and 22 were considered healthy (non-diseased) controls (H). An analysis of potential sample contamination using SourceTracker (Knights et al., 2011) identified some evidence of human skin and/or environmental contamination. The sequences associated with OTUs identified as contaminants, mostly Staphylococcus (skin) and Cyanobacteria (chloroplasts), were removed from all subsequent analyses. From these data, we identified a total of 12 bacterial phyla and 139 genera. Proteobacteria, Actinobacteria, Bacteroidetes, Firmicutes, and Fusobacteria were the 5 major phyla, accounting for 99.3% of oral bacteria (Fig. 1). The mean relative abundance of Proteobacteria was lower in pancreatic cancer patients relative to other sample categories, while Firmicutes tended to be higher, though these differences were not significant after adjusting for multiple comparisons (FDR). The pancreatic cancer group also had higher levels of Leptotrichia, as well as lower levels of Porphyromonas and Neisseria (Fig. 2). In general, multi-level taxonomic profiles of the healthy group resembled the 'other' disease group, while the pancreatic cancer group was readily distinguishable (Fig. S1). However, there were no significant differences among the three main groupings (H, O, and P) in either beta diversity (ANOSIM; P = 0.1) or alpha diversity (Chao1, K-W test; P = 0.6; Faith's PD, K-W test; P = 0.56). As in previous studies by Farrell et al. (2012) and Lin et al. (2013), we saw lower relative abundances of Neisseria and Aggregatibacter, although these differences were not significant (K-W test; P = 0.07 and P = 0.09, respectively). (Figure 2: Mean relative abundances of particular genera in pancreatic cancer patients (P) compared to healthy (H) and other disease (O) patient groups. Relative abundances of genera in oral communities from 108 patients. Arrows point to specific genera that showed interesting trends across diagnosis groups.) Bacteroides was more abundant in pancreatic cancer patients compared to healthy individuals, similar to what Lin et al. observed, although this too was not significant (K-W test; P = 0.27). We did not see any difference in the relative abundance of Streptococcus or Granulicatella, which were shown to differ in a prior pancreatic cancer study (Farrell et al., 2012). Additional analytical targets were based on a preliminary study consisting of our first 61 saliva samples (including 3 from pancreatic cancer patients), which showed significantly higher Leptotrichia and lower Porphyromonas in pancreatic cancer patient saliva. The abundance ratio of Leptotrichia, specifically two OTUs (arbitrarily named OTU 31235 and OTU 4443207), to Porphyromonas was significantly higher in pancreatic cancer patients (Fig. 3). A BLAST comparison of these OTUs to the 16S sequences in the Human Oral Microbiome Database (Chen et al., 2010) (HOMD RefSeq Version 13.2) found OTU 31235 to be 100% similar to Leptotrichia sp. oral taxon 221, while OTU 4443207 was 99.3% similar to Leptotrichia hongkongensis.
We found a strong positive correlation (Pearson's correlation r = 0.903, P = 0.0000001) between Leptotrichia abundances obtained from 16S rRNA sequencing (OTU relative abundances) and from real-time qPCR (Fig. 4). DISCUSSION Our analysis of salivary microbial profiles supports prior work suggesting that salivary microbial communities of patients diagnosed with pancreatic cancer are distinguishable from salivary microbial communities of healthy patients or patients with other diseases, including non-pancreatic cancers. At the phylum level, pancreatic cancer patients tended to have higher proportions of Firmicutes and lower proportions of Proteobacteria (Fig. 1). At finer taxonomic levels, we observed differences in the mean relative abundances of particular genera in pancreatic cancer patients compared to other patient groups (Fig. 2). For instance, there was a higher proportion of Leptotrichia in pancreatic cancer patients, while the proportions of Porphyromonas and Neisseria were lower in these patients. The most striking difference we found between the microbial profiles of pancreatic cancer patients and other patient groups was in the ratio of the bacterial genera Leptotrichia and Porphyromonas (LP ratio) (Fig. 3). The LP ratio had been identified as a potential biomarker from a preliminary analysis, and an analysis of the full dataset found a significantly higher LP ratio in pancreatic cancer patient saliva than in other patient groups. (Figure 4: Correlation between Leptotrichia abundance from 16S rRNA sequences and from real-time qPCR. Cross validation of total Leptotrichia OTU abundance using real-time qPCR. After using 16S rRNA as a reference gene for normalization of the levels of the Leptotrichia genus, data were normalized by fold change to three healthy controls with relatively low Leptotrichia OTU abundance. Each symbol represents a patient: P = 6, and O = 12. Leptotrichia OTU abundance was correlated with qPCR fold change according to Pearson's correlation (r = 0.903).) To verify these differences using another method, we cross-validated the relative abundances of Leptotrichia (Fig. 4). Interestingly, during the analysis of the 16S rRNA data, we successfully used the LP ratio to reclassify one of the patients in the non-pancreatic cancer disease group. This particular individual had been initially diagnosed as having an unknown digestive disease, but the patient's high LP ratio suggested pancreatic cancer (Fig. 3). Subsequently, the patient was re-evaluated and diagnosed with pancreatic cancer, supporting the notion that the LP ratio may serve as a pancreatic cancer biomarker. Despite the small cohort of patients in this study, we believe our results are especially noteworthy because we were able to distinguish between patients with pancreatic cancer and patients with a variety of other diseases (including non-pancreatic cancer), in addition to healthy controls. Other researchers have previously proposed the use of ratios of bacterial taxa. Galimanas et al. (2014) suggested using salivary bacteria abundance ratios as a means for differentiating between healthy and diseased patients. Taxonomic ratios have been used to differentiate between subjects in studies of obesity (Lazarevic et al., 2012), diabetes (Zhang & Zhang, 2013), and periodontal disease (Moolya et al., 2014). Ratio comparisons also help to control for high levels of taxonomic variability among individuals (Ding & Schloss, 2014; Segre, 2012; Schwarzberg et al., 2014; Wang et al., 2013).
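To make the ratio and cross-validation analyses concrete, the following minimal sketch computes a per-sample Leptotrichia-to-Porphyromonas (LP) ratio from genus-level relative abundances and correlates sequencing-derived Leptotrichia abundance with qPCR fold change, as in Figure 4. The table layout and numbers are invented for illustration and are not the study's data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical genus-level relative abundances per sample (fractions of total reads).
genus = pd.DataFrame({
    "sample":        ["s1", "s2", "s3", "s4", "s5"],
    "Leptotrichia":  [0.080, 0.020, 0.010, 0.060, 0.015],
    "Porphyromonas": [0.010, 0.030, 0.040, 0.020, 0.050],
})

# Leptotrichia/Porphyromonas (LP) ratio; a pseudocount guards against division by zero.
eps = 1e-6
genus["lp_ratio"] = (genus["Leptotrichia"] + eps) / (genus["Porphyromonas"] + eps)

# Cross-validation of sequencing abundances against qPCR fold change (cf. Figure 4).
qpcr_fold_change = [5.1, 1.2, 0.8, 3.9, 1.0]   # hypothetical values
r, p = pearsonr(genus["Leptotrichia"], qpcr_fold_change)

print(genus[["sample", "lp_ratio"]])
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```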
A review of the literature revealed that Leptotrichia's role in oral health remains elusive. However, these bacteria have been found in the bloodstream of immune-compromised patients (Eribe & Olsen, 2008) and co-occur significantly with colorectal tumors (Warren et al., 2013). Leptotrichia have been isolated from cardiovascular and gastrointestinal abscesses and from systemic infections, and are thought to be pathogenic (Han & Wang, 2013). With regard to Porphyromonas, antibodies to Porphyromonas gingivalis have been directly associated with pancreatic cancer (Michaud et al., 2012). A European cohort study measured plasma antibodies to 25 oral bacteria in pre-diagnostic blood samples from 405 pancreatic cancer patients and 416 matched controls and found a >2-fold increase in risk of pancreatic cancer among those with higher antibody titers to a pathogenic strain of P. gingivalis (Michaud et al., 2012). At first glance, it appears contradictory that individuals with higher Porphyromonas antibody titers would have lower oral abundances. However, studies of systemic immunization of animals against particular periopathogens, including Porphyromonas, have shown reduced colonization of these bacteria in the mouth and a reduction of periodontitis (Evans et al., 1992; Persson et al., 1994; Clark et al., 1991). Similarly, higher Porphyromonas antibody titers in individuals with pancreatic cancer may decrease their oral abundance, though this connection needs to be formally tested. Shifts in salivary microbial diversity could also be a systemic response to pancreatic cancer. Pancreatic cancer is known to weaken the immune system (Von Bernstorff et al., 2001), which could lead to overgrowth of oral bacteria and a shift towards systemically invasive periodontal pathogens. The proliferation of bacterial pathogens could assist cancer progression through systemic inflammation (El-Shinnawi & Soory, 2013) or immune distraction (Feurino, Zhang & Bharadwaj, 2007). Thus, an initial increase in Porphyromonas might be followed by a decrease due to systemic invasion and antibody production. Indeed, inflammation is thought to play a significant role in the development of pancreatic cancer (Farrow & Evers, 2002). We also compared the relative abundances of several other bacterial genera that were indicated as potential biomarkers in previous work by Farrell et al. (2012). Like Farrell et al., we found a lower proportion of Neisseria in pancreatic cancer patient saliva compared with the healthy and other disease categories, though this trend was not significant. However, we did not find the same results as Farrell et al. for the other bacterial genera they identified. Our data also showed an increase in Bacteroides and a decrease in the abundance of the bacterial genus Aggregatibacter in patients with pancreatic cancer, supporting the results of a pilot study by Lin et al. (2013), though neither trend was significant. Methodological differences between our study and the Farrell et al. study in particular may partially explain our divergent results. For instance, the inability of the V4 region of the 16S rRNA gene to discriminate Streptococcus mitis from other Streptococcus species may have prevented us from detecting differences in this species' abundance (Farrell et al., 2012). Additionally, our study had a broader array of patient categories, and cancers were not always confined to the pancreas at the time of sampling. Interestingly, since the completion of our study, Mitsuhashi et al.
(2015) reported the detection of oral Fusobacterium in pancreatic cancer tissue. A retrospective review of our abundance data also found a lower relative abundance of Fusobacterium in pancreatic cancer patients compared to other patient categories (Fig. 2; K-W test, P = 0.03 prior to FDR correction), suggesting the processes driving differences in Fusobacterium may be similar to our proposed mechanism for Porphyromonas. Although the result was not significant after adjusting for multiple comparisons (FDR), we suggest Fusobacterium abundance should be considered as a potential biomarker target for future studies with larger patient cohorts. Overall, our study suggests that members of the salivary microbiome have promise as potential pancreatic cancer biomarkers, and we may have uncovered an important new prospect in this regard (i.e., the LP ratio). However, our relatively small number of samples from pancreatic cancer patients and the discrepancies between our findings and previous work indicate that much larger patient cohorts will be needed to determine whether salivary biomarkers are diagnostically useful. Future studies should focus on improved metadata collection, including diet and oral health information (i.e., periodontal disease), which would make it possible to run statistical analyses that control for multiple factors involved in shaping oral microbial diversity. It will also be important to sample the same individual's saliva over time to assess whether we can distinguish between disease stages and also to control for intra-individual variation. Further, it is possible that single biomarkers may never be able to consistently distinguish pancreatic cancer patients from those with other conditions. Thus, we may need more complex metrics that combine the abundances of multiple salivary bacteria, metabolite profiles, and detailed patient metadata. Effective diagnostic biomarkers for pancreatic cancers have been difficult to find, but are sorely needed and have the potential to save thousands of lives each year.
5,425
2015-11-05T00:00:00.000
[ "Medicine", "Biology" ]
EXISTENCE OF A SOLUTION OF A FOURIER NONLOCAL QUASILINEAR PARABOLIC PROBLEM ABSTRACT The aim of this paper is to give a theorem about the existence of a classical solution of a Fourier third nonlocal quasilinear parabolic problem. To prove this theorem, Schauder's theorem is used. The paper is a continuation of papers [1]-[8] and a generalization of some results from [9]-[11]. The theorem established in this paper can be applied to describe some phenomena in the theories of diffusion and heat conduction with better effects than the analogous classical theorem about the existence of a solution of the Fourier third quasilinear parabolic problem. 1. INTRODUCTION In paper [7], the author studied the uniqueness of solutions of parabolic semilinear nonlocal-boundary problems in a cylindrical domain. The coefficients of the nonlocal conditions had values belonging to the interval [-1,1] and, therefore, the problems considered were more general than the analogous parabolic initial-boundary and periodic-boundary problems. In this paper we study, in a cylindrical domain, the existence of a classical solution of a Fourier third nonlocal quasilinear parabolic problem, which possesses tangent derivatives in the boundary condition. The coefficients of the nonlocal condition from this paper can belong not only to the interval [-1,1] but also to intervals containing the interval [-1,1]. Therefore, a larger class of physical phenomena can be described by the results of this paper than by the results of paper [7]. (Received: February, 1991. Revised: April, 1991. Permanent address: Institute of Mathematics, Krakow University of Technology, Warszawska, 31-155 Krakow, POLAND.) Moreover, the fundamental theorem of the paper, about the existence of the solution of the nonlocal problem, can be applied in the theories of diffusion and heat conduction with better effects than the analogous classical theorem. To prove this fundamental theorem, Schauder's theorem is used. This paper is a continuation not only of paper [7] but also of papers [1]-[6] and [8]. The main result of the paper is a generalization of Pogorzelski's result (see [11], Section 22.11), and of results of Chabrowski (see [9]) and Friedman (see [10], Section 7.4). The notation, assumptions and definitions from this section are valid throughout this paper. Let n be any integer greater than 2. Given two points x = (x_1,...,x_n) ∈ R^n and y = (y_1,...,y_n) ∈ R^n, the symbol |x − y| denotes the Euclidean distance between x and y. The Euclidean distance between two points P_1 and P_2 belonging to R^n is also denoted by ρ(P_1, P_2). To prove a theorem about the existence of a classical solution of a Fourier's third nonlocal quasilinear parabolic problem, some assumptions will be used. Assumptions: I. D := D_0 × (0, T), where 0 < T < ∞ and D_0 is an open and bounded domain in R^n such that the boundary ∂D_0 satisfies the following Lyapunov conditions: i) For each point belonging to ∂D_0 there exists the tangent plane at this point. ii) For each pair of points P_1 and P_2 belonging to ∂D_0, the angle ∠(n_{P1}, n_{P2}) between the normal lines n_{P1} and n_{P2} to ∂D_0 at the points P_1 and P_2 satisfies the inequality ∠(n_{P1}, n_{P2}) ≤ const·[ρ(P_1, P_2)]^{h_L}, where h_L is a constant satisfying the inequalities 0 < h_L ≤ 1. iii) There exists δ > 0 such that for every point P belonging to ∂D_0, each line ℓ parallel to the normal line to ∂D_0 at the point P has the property that ∂D_0 ∩ K(P, δ) ∩ ℓ (where K(P, δ) is the ball of radius δ centered at the point P) contains at most one point.
|F(x, t, z_0,..., z_n) − F(x̄, t̄, z̄_0,..., z̄_n)| ≤ C(D̄)·(|x − x̄|^{h_F} + |t − t̄|^{h̄_F}) + C_F·Σ_{i=0}^{n} |z_i − z̄_i|   (2.2) for all (x, t), (x̄, t̄) ∈ D̄_0 × (0, T], z_i, z̄_i ∈ R (i = 0, 1,..., n), where D̄_0 is an arbitrary closed subdomain of D_0; M_F, M̄_F, C_F, h_F, h̄_F, l_F, p are constants which do not depend on D̄ and satisfy the inequalities M_F, M̄_F, C_F > 0, 0 < h_F ≤ 1, 0 < h̄_F ≤ 1, 0 ≤ l_F < 1, 0 ≤ p < 1, and C(D̄) is a positive constant that depends on D̄, where M_f is a positive constant and p is the constant from Assumption V. Moreover, the set of points x belonging to D̄_0 at which f is continuous is nonempty. Moreover, we shall need the following Assumption X: for y ∈ ... (j = 1, 2,..., k), the functions F, f, K are given by formulae (2.20), (2.21) and (2.7), respectively, and Γ is the fundamental solution of the associated homogeneous parabolic differential equation. In the paper, Z_1 denotes the set of functions w belonging to Z such that the derivatives ∂w/∂x_1,..., ∂w/∂x_n are continuous in D. 3. DEFINITION OF A FOURIER'S THIRD NONLOCAL QUASILINEAR PARABOLIC PROBLEM The Fourier's third nonlocal quasilinear parabolic problem considered in the paper is formulated in the following form: for the given domain D satisfying Assumptions I, II and for the given functions a_ij, b_i (i, j = 1, 2,..., n), c, F, G, g, f, K satisfying Assumptions III-X, the Fourier's third nonlocal quasilinear parabolic problem in D consists in finding a function u belonging to Z_1 that satisfies the quasilinear parabolic differential equation (3.1), whose right-hand side is F(x, t, u(x, t), ∂u(x, t)/∂x_1,..., ∂u(x, t)/∂x_n), the nonlocal condition (3.2), and the boundary condition (3.3) for (x, t) ∈ ∂D_0 × (0, T], where for each t ∈ (0, T] the symbol du/dν denotes the boundary value of the transversal derivative of the function u at the point x, and for each t ∈ (0, T] the symbols du/dt_x(i) (i = 1, 2,..., q) denote the boundary values of the derivatives of the function u in the tangent directions t_x(i) at the point x, respectively. A function u possessing the above properties is called a solution in D of the Fourier's third nonlocal quasilinear parabolic problem (3.1)-(3.3). 4. THEOREM ABOUT EXISTENCE In this section we prove a theorem about the existence of a solution of the Fourier's third nonlocal quasilinear parabolic problem (3.1)-(3.3), assuming that Assumptions I-X from Section 2 are satisfied. We shall find sufficient conditions under which an arbitrary point U = (u_0, u_1,..., u_n) belonging to the set E is transformed into a point V = (v_0, v_1,..., v_n) belonging to this set, and E is precompact.
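For reference, the fixed-point theorem invoked in the existence proof is Schauder's theorem; a standard statement (not quoted from the extracted text of the paper) is:

```latex
% Schauder's fixed-point theorem, in a commonly used form:
\textbf{Theorem (Schauder).}
Let $E$ be a nonempty, closed, bounded and convex subset of a Banach space $X$,
and let $\mathcal{T}\colon E \to E$ be a continuous mapping such that
$\mathcal{T}(E)$ is relatively compact (precompact) in $X$.
Then $\mathcal{T}$ has at least one fixed point in $E$.
```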
1,456
1992-01-01T00:00:00.000
[ "Mathematics" ]
Three Convergence Results for Iterates of Nonlinear Mappings in Metric Spaces with Graphs: In 2007, in our joint work with D. Butnariu and S. Reich, we proved that if for a self-mapping of a complete metric space that is uniformly continuous on bounded sets all its iterates converge uniformly on bounded sets, then this convergence is stable under the presence of small errors. In the present paper, we obtain an extension of this result for self-mappings of a metric space with a graph. Introduction During the last sixty years, many results have been obtained in the fixed-point theory of nonlinear operators in complete metric spaces [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. The first result in this area of research is Banach's celebrated theorem [16], which shows the existence of a unique fixed point of a strict contraction. This area of research includes the analysis of the asymptotic behavior of (inexact) iterates of a nonexpansive operator and their convergence to its fixed points. This research is also devoted to feasibility, common fixed points, iterative methods and variational inequalities with numerous applications in engineering and the medical and natural sciences [17][18][19][20][21][22][23][24]. In our joint paper with D. Butnariu and S. Reich [5], we proved that if for a self-mapping of a complete metric space that is uniformly continuous on bounded sets all its iterates converge uniformly on bounded sets, then this convergence is stable under the presence of small errors. In our present work, we obtain an extension of this result for self-mappings of a metric space with a graph. We also obtain a convergence result for a contractive-type mapping in a metric space with a graph. The First and the Second Main Results Assume that (X, ρ) is a metric space. For every point u ∈ X and each nonempty set A ⊂ X, and for every point u ∈ X and each number r > 0, we use the standard distance d(u, A) and ball B(u, r). For every operator S : X → X, set S^0(u) = u for all u ∈ X, S^1 = S and S^{i+1} = S ∘ S^i for every nonnegative integer i. We denote the set of all fixed points of S by F(S). Assume that G is a graph such that V(G) ⊂ X is the set of all its vertices and the set E(G) ⊂ X × X is the set of all its edges. We also assume that ... . The graph G is identified with the pair (V(G), E(G)). Fix θ ∈ X. Assume that A : X → X is a mapping and that the following assumptions hold: (A1) There exists a unique point x_A ∈ X satisfying A(x_A) = x_A. (A2) A^n(x) → x_A as n → ∞ uniformly over all bounded subsets of X. (A3) A is bounded on bounded subsets of X. (A4) For each ε, M > 0 there exists δ > 0 such that for each x, y ∈ B(θ, M) satisfying (x, y) ∈ E(G) and ρ(x, y) ≤ δ the relations (A(x), A(y)) ∈ E(G) and ρ(A(x), A(y)) ≤ ε are valid. The next result is proved in Section 3. Theorem 1. Assume that K is a nonempty bounded subset of X and that ε > 0. Then, there exist δ > 0 and a natural number N such that for each integer n ≥ N and each sequence {x_i}_{i=0}^{n} ⊂ X which satisfies ... for each integer i ∈ {0, . . ., n − 1}, the inequalities ... hold. Since Theorem 1 holds for any positive ε, it easily implies the following result. Corollary 1. Assume that {x_i}_{i=0}^{∞} is a bounded sequence such that ... . The next result is also proved in Section 3. Theorem 2. Assume that ε > 0. Then, there exists δ > 0 such that for each sequence satisfying ... for each integer i ≥ 0, the inequality ρ(x_i, x_A) ≤ ε holds for each integer i ≥ 0.
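As a purely numerical illustration of the stability asserted in Theorems 1 and 2 (not taken from the paper), one can iterate a strict contraction on the real line while injecting a small error at every step; the inexact orbit still settles, and stays, near the fixed point. A minimal sketch:

```python
import random

# A strict contraction on R with fixed point x_A = 2.0:
# |A(x) - A(y)| <= 0.5 |x - y|.
def A(x: float) -> float:
    return 0.5 * x + 1.0

x_A = 2.0          # the unique fixed point, A(2) = 2
delta = 1e-3       # bound on the per-step computational error
random.seed(0)

x = 50.0           # start far from the fixed point
for _ in range(200):
    error = random.uniform(-delta, delta)
    x = A(x) + error            # inexact iterate: x_{i+1} is A(x_i) up to delta

# For a contraction with constant q, the inexact orbit settles within
# roughly delta / (1 - q) of x_A (here about 2e-3).
print(abs(x - x_A))
```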
It should be mentioned that our results are obtained for a large class of operators. They cover the case when E(G) = X × X, which was considered in [5], and the class of nonexpansive mappings A : X → X on a metric space X with a graph satisfying ρ(A(x), A(y)) ≤ ρ(x, y) for each (x, y) ∈ E(G). It also contains the class of monotone nonexpansive mappings [35,36] and the class of uniformly locally nonexpansive mappings [37]. Let us complete the proof of Theorem 1. Assume that n ≥ N is an integer and that the sequence {x_i}_{i=0}^{n} ⊂ X satisfies ... and, for every i ∈ {0, . . ., n − 1}, ... If n ≤ 2N, then the assertion of Theorem 1 follows from Lemma 1. Therefore, we may assume without loss of generality that n > 2N. We prove that ρ(x_j, x_A) ≤ ε, j = N, . . ., n. Assume the contrary. Then, there exists an integer q ∈ (2N, n] such that ... By (21) and (22), we may assume without loss of generality that ... Define ... We show that {z_i}_{i=0}^{2N} satisfies the assumptions of Lemma 1. By (20) and (24), we need only to show that z_0 ∈ K. In view of (21), (23) and (24), Lemma 1 and (24) imply that ... This contradicts (22). The contradiction we have reached completes the proof of Theorem 1. Proof of Theorem 2. Proof. We may assume that ε < 1. Set K = B(x_A, 4). Theorem 1 and the continuity of A at x_A imply that there exist δ ∈ (0, ε/2) and a natural number N such that the following property holds: (a) For each integer n ≥ N and each sequence {x_i}_{i=0}^{n} ⊂ X that satisfies ... for each integer i ∈ {0, . . ., n − 1}, the inequalities ... hold. The Third Main Result Assume that (X, ρ) is a complete metric space and G is a graph such that V(G) ⊂ X is the set of all its vertices and the set E(G) ⊂ X × X is the set of all its edges. We also assume that the space X is bounded. Assume that Q is a natural number such that the following assumption holds: (A5) For each x, y ∈ X there exist x_0, . . ., x_q ∈ X such that q ≤ Q, ... and that the following assumption holds: (A6) For all x, y ∈ X, if (x, y) ∈ E(G), then (A(x), A(y)) ∈ E(G) and ρ(A(x), A(y)) ≤ φ(ρ(x, y))·ρ(x, y). We prove the following result. Theorem 3. There exists x_A ∈ X such that A^n(x) → x_A as n → ∞ uniformly for x ∈ X. Moreover, if A is continuous at x_A, then A(x_A) = x_A. Proof. Let ε ∈ (0, 1). In order to prove our theorem, it is sufficient to show that there exists a natural number p such that for each x, y ∈ X, ρ(A^p(x), A^p(y)) ≤ ε. Conclusions In this paper, we study the behaviour of inexact iterates of a self-mapping A of a metric space with a graph. Assuming that A is bounded on bounded sets and that its iterates converge uniformly on bounded sets to a unique fixed point, we show that this convergence is stable under the presence of computational errors. A prototype of our results for self-mappings of a metric space without graphs was obtained in our joint paper with D. Butnariu and S. Reich [5]. It should be mentioned that our results are obtained for a large class of operators. They cover the case when E(G) = X × X, which was considered in [5], and the class of nonexpansive mappings A : X → X on a metric space X with a graph satisfying ρ(A(x), A(y)) ≤ ρ(x, y) for each (x, y) ∈ E(G). This also contains the class of monotone nonexpansive mappings [35,36] and the class of uniformly locally nonexpansive mappings [37].
1,845.2
2023-09-13T00:00:00.000
[ "Mathematics" ]
Nondestructive classification of saffron using color and textural analysis. Abstract Saffron classification based on machine vision techniques as well as the expert's opinion is an objective and nondestructive method that can increase the accuracy of this process in real applications. The experts in Iran classify saffron into three classes, Pushal, Negin, and Sargol, based on apparent characteristics. Four hundred and forty color images of saffron from the three different classes were acquired using a mobile phone camera. Twenty-one color features and 99 textural features were extracted using image analysis. Twenty-two classifiers were employed for classification using the mentioned features. The support vector machine and Ensemble classifiers were better than the other classifiers. Our results showed that the mean classification accuracy was up to 83.9% using the Quadratic support vector machine and Subspace Discriminant classifiers. Errors of subjective assessment can be avoided using an objective approach such as image processing (Pourreza, Pourreza, Abbaspour-Fard, & Sadrnia, 2012). Advances in machine vision technology have made accurate, robust, and low-cost machine vision systems possible, making them suitable for detecting food quality, and so this technology can be used to determine the quality of saffron (Kiani & Minaei, 2016). Kiani, Minaei, and Ghasemi-Varnamkhasti (2018) proposed the use of E-nose, E-tongue, and CVS systems to evaluate saffron quality and replace sensory recognition by human assessors (Kiani et al., 2018). Minaei, Kiani, Ayyari, and Ghasemi-Varnamkhasti (2017) demonstrated that the combination of a computer vision system (CVS) and a multilayer perceptron (MLP) is a simple tool for evaluating the quality of saffron samples based on color strength. The performance of the MLP model for saffron color recognition was better than PLS and MLR, and the success rate of classification (CSR) was 96.67% (Minaei et al., 2017). Today, color computer vision systems are used in various food industries and agricultural product sorting systems because they are reliable, fast, and inexpensive (Donis-González & Guyer, 2016). Color computer vision is used to categorize or recognize the quality of agricultural products and various types of foods, including dates (Muhammad, 2015), pistachios (Omid, Firouz, Nouri-Ahmadabadi, & Mohtasebi, 2017), apple (Paulus & Schrevens, 1999), pizza (Sun, 2016), and wheat (Pourreza et al., 2012). The computer vision system is trained based on specific patterns extracted from a set of color images provided for different classes, such as texture, geometry, and color properties. Then, the computer vision system determines which particular category a new image belongs to (Faucitano, Huff, Teuscher, Gariepy, & Wegner, 2005). The first step involves extracting a large number of features from classified images. The features must then be able to separate the classes correctly so that, after training, the system can automatically categorize new images. Classification is performed by statistical algorithms and different clustering methods by assigning each image to the corresponding class (Donis-González & Guyer, 2016). The purpose of this study was to design a machine vision technique for detecting different types of saffron (Sargol, Negin, and Pushal) using images taken with mobile phones from bulk samples. Texture properties, color properties, and the percentage of foreign matter (based on color) of saffron were obtained.
| Saffron samples A total of 440 samples of different saffron kinds on the market, without any additives as fraud, were prepared from various cities of Khorasan Province: Gonabad, Bajestan, Roshtkhar, Sabzevar, Mashhad, Torbat Heydarieh, and Kakhk, and then the samples were coded. Four experts who had a long history of saffron trading were selected. They divided the specimens into three classes: Sargol, Negin, and Pushal. Samples' information was recorded in a database (Zheng & Lu, 2012; Donis-González & Guyer, 2016; Zhang, Lee, Lillywhite, & Tippetts, 2017). | Image acquisition Image acquisition was done with a cellphone camera (Samsung Galaxy S7 Edge SM-G935FD Dual SIM 32GB mobile phone), which was placed on an imaging chamber at a distance of 9 cm from the sample. In the lighting system, SMD LED strip lights (4014 SMD LED modules) were used in the upper part of the imaging chamber. A diffuser was installed under the lamps for uniformity of the light. A black background color was used to create the best contrast. The shutter speed was 1/500 s without flash, and the lens focal length, aperture value, and ISO were 4.2 mm, f/1.7, and 100, respectively. Images were captured at their maximum resolution (3024 × 4032 pixels) and were saved in "JPG" format. The images were transferred to a laptop equipped with MATLAB software (R2017b, ver. 9.3). The images were given to the expert individuals to classify the samples into three classes: Sargol, Negin, and Pushal. (Figure 1: Different types of saffron, including Negin, Sargol, Pushal, and Daste.) Based on the average view of the experts, the 440 different samples were divided into three categories: 195 samples Pushal, 129 samples Negin, and 116 samples Sargol. In this case, the average views of the experts were selected as the criteria for tagging the samples. | Image preprocessing The original sample image is presented in Figure 2a. In the first step, in order to remove noise and smooth it, the image is filtered using a low-pass filter. The result is shown in Figure 2b. The foreground of the image is selected by choosing the pixels having intensity greater than 20. Results are shown in Figure 2c. Small objects are removed from the foreground binary image by a morphological opening operation in which all connected components (objects) that have fewer than 3,000 pixels are removed. Further, the image is eroded and dilated by a morphological structuring element with a 5-pixel radius. The final foreground of the image is shown in Figure 2d. The saffron part of the image is cropped by selecting the area that has nonzero values. For this purpose, the projections of the image onto the vertical and horizontal axes are calculated and the area between the minimum and maximum values is cropped. For example, for the sample image, the area between the two vertical and two horizontal lines shown in Figure 2e is selected. In general, four virtual lines are generated for defining the cropped area. The cropped area image is then used for further processing. | Textural algorithm Texture analysis is one of the most important characteristics used in identifying regions of interest in an image and has been widely used in image processing. Textural features are defined as attributes representing the spatial arrangement of the gray levels of pixels in a region of a digital image, which provide measures of some properties of a region such as smoothness, coarseness, and regularity (Wang, Zhang, & Wei, 2019).
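Before the texture features are computed, each image passes through the preprocessing steps described above (low-pass filtering, thresholding at an intensity of 20, removal of connected components smaller than 3,000 pixels, erosion and dilation with a 5-pixel structuring element, and projection-based cropping). The following Python sketch, using OpenCV and scikit-image, mirrors that pipeline; it is an illustrative re-implementation under those assumptions, not the authors' MATLAB code.

```python
import cv2
import numpy as np
from skimage import morphology

def preprocess(path: str) -> np.ndarray:
    """Rough re-creation of the preprocessing pipeline described in the text."""
    img = cv2.imread(path)                          # BGR image from the phone camera
    img = cv2.GaussianBlur(img, (5, 5), 0)          # low-pass filter to remove noise
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Foreground: pixels with intensity greater than 20 (black background).
    fg = gray > 20

    # Remove small objects (< 3000 px), then erode and dilate with a 5-pixel disk.
    fg = morphology.remove_small_objects(fg, min_size=3000)
    selem = morphology.disk(5)
    fg = morphology.binary_erosion(fg, selem)
    fg = morphology.binary_dilation(fg, selem)

    # Crop to the nonzero area using projections onto the two axes.
    rows = np.where(fg.any(axis=1))[0]
    cols = np.where(fg.any(axis=0))[0]
    return img[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

# Example call (hypothetical file name):
# cropped = preprocess("saffron_sample_001.jpg")
```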
To analyze the textures, the features extracted from the image are the local entropy of the grayscale image (entropy), the local standard deviation of the image (STD), local binary patterns (LBP), and the gray level co-occurrence matrix (GLCM). Features extracted from the GLCM include contrast, homogeneity, correlation, and energy, and the mentioned features were extracted from the images. The contrast shows the intensity of the gray variation in the image. The correlation describes the linearity and dependence between two different pixel values; in this case, μ is the mean value of the matrix and σ_i, σ_j the variances. The energy represents the order of the image (repetition of pixel pairs) and in fact represents the smoothness and uniformity of the sample surface. Homogeneity describes the similarity of a pixel with neighboring pixels and reflects the uniformity of the image. Specifications extracted from the entropy, standard deviation, and local binary patterns were calculated according to Table 1. In addition, the histogram is a graphical representation of the number of pixels for each brightness level in the input image. We defined 25 bins in this study, and the pixel counts falling within each interval were accumulated in the corresponding bin. Finally, 120 features were extracted from each image. | The local binary patterns (LBPs) The local binary pattern is a texture analysis operator that labels each pixel by comparing it with its neighborhood and producing a binary result. The main advantage of LBP in practical applications is its robustness to gray-level changes and its computational efficiency, which allows processing images in complex real-time environments. In a basic LBP, each 3 × 3 neighborhood is thresholded by the value of the central pixel. Then, the thresholded neighborhood values are multiplied by the weights given to the corresponding pixels and summed. | Classification model The features outlined in the above sections were used for classification. Twenty-two different classifiers were used, including: | Decision trees classifiers Decision tree (DT) is a machine learning algorithm which classifies the training data recursively at each node in order to maximize the separation of the data. The decisions in the tree proceed from the root node down to a leaf node to predict a response. The leaf node contains the response (Kamiński, Jakubczyk, & Szufel, 2018). The types of models used in this group include Fine Tree, Medium Tree, and Coarse Tree. | Support vector machine classifiers Support vector machine (SVM) is an effective modeling tool for classification and has been used for regression, pattern classification, prediction, and problem detection (Nasirahmadi et al., 2019). In SVM, the data input space is mapped into a high-dimensional feature space through a kernel function using minimal training data (Huang, Tang, Yang, & Zhu, 2016). The types of models used in this group include Linear SVM, Quadratic SVM, Cubic SVM, Fine Gaussian SVM, Medium Gaussian SVM, and Coarse Gaussian SVM. | Nearest neighbor classifiers The nearest neighbor classifier is a good predictor in low dimensions. However, it may not retain this capability on a large scale.
In this classifier, samples that are neighbors of, or similar to, a well-known instance in the training set are identified, and the classification is then done based on the training set (Xie, Yang, & He, 2017). | Ensemble classifiers An ensemble is a supervised learning approach, such as bagging, boosting, and variants, that uses multiple models to obtain better predictive performance than could be obtained from any of the constituent models (Dutta et al., 2015). The types of models used in this group include Boosted Trees, Bagged Trees, Subspace Discriminant, Subspace KNN, and RUSBoost Trees. | Validation and performance evaluation indices A fivefold stratified cross-validation technique was used to validate the classification. In k-fold cross-validation, the original sample is randomly divided into k equal-sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimate. The advantage of this method over repeated random subsampling is that all observations are used for both training and validation, and each observation is used for validation exactly once (Siedliska, Baranowski, & Mazurek, 2014). Accuracy, the confusion matrix, the true-positive rate (TP rate), the false-negative rate (FN rate), the positive predictive rate (PP rate), and the false discovery rate (FD rate) were calculated (Xie et al., 2017). Also, the receiver operating characteristic (ROC) was computed in MATLAB based on true-positive and false-negative rates. The area under the ROC curve, which ranges from 0.5 (no discrimination ability) to 1 (best discrimination ability), was also calculated (Nasirahmadi et al., 2019). One-way analysis of variance (ANOVA) and Duncan's test were used to determine significant differences between the accuracies of the classifiers. Statistical analysis was performed using SPSS software (IBM Statistics version 23). | RESULTS AND DISCUSSION The 440 color photographs from different samples of saffron, including 195 samples of Pushal, 129 Negin, and 116 Sargol, were used in this study. The feature set defined for the classifiers, including 21 color features and 99 texture features, was extracted from the 440 samples. The classifiers were then evaluated using fivefold cross-validation. In the cross-validation, the original samples were randomly partitioned into five groups. Four groups were used as training data for developing the model, and the remaining group was retained as validation data for testing the classifier. The process was repeated five times, with each of the groups used once as the validation data (Kuo, Chung, Chen, Lin, & Kuo, 2016). | Classification when features of color were used in the classifiers The average accuracy of these four classifiers did not differ significantly (p < .05). For the Linear SVM classifier, the classification accuracy was 82.23% (±0.66%). Figure 3 shows the confusion matrix for the seven classifiers mentioned. Also, a detailed accuracy analysis is reported in Table 5. | Classification when combinations of all features were used in the classifier A high value of the TP rate and PP rate, and a low value of the FN rate and FD rate, mean the classification model is good. These values for Pushal saffron were better than for the other classes of saffron.
The FN rate and FD rate showed that the classification error for Sargol and Negin is greater than for Pushal. These errors happen when the values are close to each other, and it is hard to classify them. In terms of appearance, Negin and Sargol are very similar, and the distinction between them is difficult. In Pushal, the three filaments of the stigmas are connected and retain a bit of style at the end, but in Negin and Sargol the three filaments of the stigmas are separated. The receiver operating characteristic (ROC) was an additional method for evaluating the performance of the classification models. An ROC graph illustrates the relative trade-offs between true-positives and false-positives; its x-axis is the false-positive rate, whereas the y-axis is the true-positive rate of the model (Siedliska et al., 2014). The area under the ROC curve (AUC) is an important statistical parameter for evaluating classifier performance. Figure 4 shows the ROC curves. (Table 4: Average classification accuracies (%) for 10 runs of fivefold cross-validation using 120 color and texture features for saffron classification.) | CONCLUSIONS In summary, these results showed that the visual texture and color indices could be good indices for separating Pushal, Negin, and Sargol saffron. The saffron samples were collected from the cities of Khorasan Province. A commercially available mobile phone was used to capture the saffron images. The images were given to expert individuals to classify the samples into three classes: Sargol, Negin, and Pushal. A total number of 120 features were extracted from the saffron images. ACKNOWLEDGMENT This study was supported by the Vice President for Research and Technology, Ferdowsi University of Mashhad, I.R. Iran. CONFLICT OF INTEREST None declared. AUTHORS' CONTRIBUTIONS The first author was responsible for the accomplishment of most of the work, searching the literature data, and writing up the paper. The second author also contributed to the manuscript preparation and standardized the paper, as well as supervising the whole research work. The third and fourth authors also contributed to the manuscript preparation. All authors approved the final manuscript for publication. ETHICAL STATEMENT This study does not involve any human or animal testing.
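Taken together, the workflow of this paper is: extract color and texture descriptors (GLCM statistics, LBP, local entropy, local standard deviation, and color histograms) from each preprocessed image, then train and evaluate classifiers with fivefold stratified cross-validation. The condensed sketch below, using scikit-image and scikit-learn (recent scikit-image naming, graycomatrix/graycoprops, is assumed), illustrates the procedure with a deliberately reduced feature set; a polynomial-kernel SVC of degree 2 stands in for MATLAB's "Quadratic SVM", so it should not be expected to reproduce the reported 83.9% accuracy.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def texture_features(rgb: np.ndarray) -> np.ndarray:
    """A reduced set of GLCM, LBP and histogram descriptors for one image."""
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)

    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, prop)[0, 0]
                  for prop in ("contrast", "homogeneity", "correlation", "energy")]

    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, density=True)

    color_hist, _ = np.histogram(rgb, bins=25, density=True)   # 25-bin histogram
    return np.concatenate([glcm_stats, lbp_hist, color_hist])

def evaluate(images, labels):
    """images: list of RGB arrays; labels: 'Pushal' / 'Negin' / 'Sargol'."""
    X = np.array([texture_features(im) for im in images])
    y = np.array(labels)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="poly", degree=2))   # "quadratic" SVM analogue
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv).mean()
```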
3,530.8
2020-02-27T00:00:00.000
[ "Materials Science" ]
Compact Interrogation System of Fiber Bragg Grating Sensors Based on Multiheterodyne Dispersion Interferometry for Dynamic Strain Measurements Dual-comb multiheterodyne spectroscopy is a well-established technology for the highly sensitive real-time detection and measurement of the optical spectra of samples, including gases and fiber sensors. However, a common drawback of dual-comb spectroscopy is the need for a broadband amplitude-resolved absorption or reflection measurement, which increases the complexity of the dual comb and requires the precise calibration of the optical detection. In the present study, we present an alternative dispersion-based approach applied to fiber Bragg grating sensors in which the dual comb is compacted by a single dual-drive-unit optical modulator, and the fiber sensor is part of a dispersion interferometer. The incident dual comb samples a few points in the spectrum that are sensitive to Bragg wavelength changes through the optical phase. The spectral reading is improved due to the external interferometer and is desensitized to changes in the amplitude of the comb tones. The narrow-band detection of the fiber sensor dispersion changes that we demonstrate enables the compact, cost-effective, high-resolution multiheterodyne interrogation of high-throughput interferometric fiber sensors. These characteristics open its application both to the detection of fast phenomena, such as ultrasound, and to the precise measurement at high speed of chemical-/biological-sensing samples. The results with a low-reflectivity fiber Bragg grating show the detection of dynamic strain in the range of 215 nε with a 30 dB signal-to-noise ratio and up to 130 kHz (ultrasonic range). Introduction A dual optical frequency comb (DOFC) [1][2][3] is a useful measurement tool that permits us to perform spectroscopy techniques [4] to study a certain interval of the spectrum simultaneously [5][6][7][8][9]. It behaves as a broad source composed of discrete optical tones that are coherent and, at the same time, map unambiguously onto a lower-frequency electrical domain where they are more easily detected. Fiber Bragg grating (FBG) sensors are in-fiber diffractive-pattern devices that can be used to sense physical magnitudes [10][11][12]. They are used for temperature and strain measurements since they act as an optical filter whose central wavelength, or Bragg wavelength, depends on the strain and temperature variation to which the sensor is applied. Typical sensitivities are about 1 pm/µε and 10 pm/°C, respectively [10]. The simplest approach to interrogate an FBG sensor relies on measuring the reflection spectrum. A broadband source is filtered with an FBG sensor and the center wavelength of the reflection is tracked with wavelength-sensitive detection, and therefore temperature and strain can be recovered. This can be done with an optical spectrum analyzer, and it can achieve a resolution of 1 pm in optical wavelength, which is equivalent to 1 µε of strain. Principle of Measurement Strain is a primary magnitude of mechanical sensing. It reveals displacement or elongation, ε = Δl/L_0 (1), that can change dynamically as a result of the transduction of vibrations, acoustic emission or ultrasounds, where Δl is the variation of the length experienced by the element under study and L_0 is its initial length. A practical optical gage to measure strain is an FBG sensor, in which the reflected wavelength changes with the strain, Δλ_B = K_B·λ_B·ε (2), with a sensitivity of about 1 pm/µε.
where λ_B is the Bragg wavelength (center of the reflected spectrum), Δλ_B is the change of the Bragg wavelength with the strain, and K_B is the gage factor that accounts for the strain-optic coefficient. Considering a practical gage factor of 0.78, the sensitivity is 1.21 pm/µε for the 1550 nm wavelength and 1.03 pm/µε for the 1310 nm wavelength. The better the measurement of the wavelength change (pm), the better the measurement of the strain (µε) with this sensor. For example, an optical spectrum analyzer with a 20 pm resolution allows the direct detection of 20 µε. Electro-Optic Dual Optical Frequency Comb The dual optical frequency comb is a high-performance architecture for optical spectra interrogation, and therefore it is applicable to reading FBGs. The working principle consists of generating two optical frequency combs at different frequency rates. Both combs come from the same optical source and are mutually coherent. We can merge them to obtain a multiheterodyne interference signal on a photodetector. The resulting electrical signal is a set of equally frequency-spaced tones, each one corresponding to the beat of two optical tones. This method allows the independent recovery of both the amplitude beat and phase of the optical tones. A typical architecture implies an electro-optic generation of sidebands and an acousto-optic frequency shift [15]. The schematic of Figure 1 shows the architecture of an electro-optic DOFC connected to a photodetector. The laser seed of frequency f_0 splits into two arms, each one corresponding to an optical frequency comb generated by an electro-optic phase modulator (EOM). The frequency applied to EOM_1 is slightly different from the frequency applied to EOM_2. The optical frequency of one arm is shifted by an acousto-optic modulator (AOM). Both arms are combined to beat the two optical frequency combs and, as a result, a multiheterodyne interferometer is obtained. The signal revealed on a photodetector is a replica of the optical frequency comb around f_0 (frequency spacing: f_pm1 ≅ f_pm2) that is downshifted to a frequency comb around f_shift (frequency spacing: f_pm1 − f_pm2). Note that f_0 >> f_pm1, f_pm2, f_shift and that the two combs are coherent because they come from the same highly coherent seed.
Figure 1: EOM, electro-optic phase modulator; signal generators of the EOMs at frequencies f_pm1 and f_pm2; AOM, acousto-optic modulator; signal generator of the AOM at frequency f_shift; PD, photodetector. The DOFC generates a multimode multiheterodyne optical signal that has an injective mapping from the optical spectrum probe (THz frequency, µm wavelength) to the electrical spectrum analyzer within a moderate bandwidth of a photodetector (kHz-MHz). Figure 2 shows the principle of electro-optic DOFC generation and detection. Figure 2a shows the non-shifted optical frequency combs generated by the two EOMs to illustrate the frequency difference among the generated tones. Figure 2b shows the AOM-shifted optical frequency combs that allow an unambiguous beating of each tone pair at a unique frequency. Note that f_pm1 − f_pm2 << f_pm1, f_pm2 and f_shift << f_pm1, f_pm2. Finally, the DOFC read by a photodetector is shown in Figure 2c. The tone of frequency f_shift corresponds to the optical central frequency f_0 (wavelength λ_0 = c/f_0, where c is the speed of light in vacuum), and the f_pm1 − f_pm2 frequency spacing corresponds to the f_pm2 optical frequency spacing (wavelength spacing: Δλ = (λ_0)²·f_pm2/c ≅ (λ_0)²·f_pm1/c). Figure 2. Electro-optic dual optical frequency comb spectra: (a) two optical frequency combs of slightly different frequencies applied to the phase modulator; (b) two optical frequency combs with additional frequency shifts; and (c) photo-detected comb as the beat response of the dual comb of (b).
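As a quick numerical check of this mapping, using the 1310 nm seed and the ≈656 MHz modulation frequency quoted later in the experimental set-up (so the numbers are illustrative rather than taken from this point of the text), the wavelength spacing of the optical comb teeth and the Bragg shift per microstrain can be computed directly:

```python
c = 299_792_458.0          # speed of light in vacuum, m/s

lambda0 = 1310e-9          # seed wavelength, m
f_pm = 656e6               # phase-modulation frequency (comb tooth spacing), Hz
K_B = 0.78                 # gage factor including the strain-optic coefficient

# Wavelength spacing of adjacent comb teeth: delta_lambda = lambda0^2 * f_pm / c
delta_lambda = lambda0**2 * f_pm / c
print(f"comb tooth spacing: {delta_lambda * 1e12:.2f} pm")          # about 3.8 pm

# Bragg wavelength shift per microstrain: delta_lambda_B = K_B * lambda0 * strain
strain = 1e-6
print(f"Bragg shift per microstrain: {K_B * lambda0 * strain * 1e12:.2f} pm")  # about 1.02 pm
```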
Compact Dual-Drive Electro-Optic Dual Optical Frequency Comb The generation of the DOFC is itself a worthy paradigm, and state-of-the-art research is striving to improve the characteristics of the resulting spectra in terms of bandwidth, flatness, coherence and stability. The electro-optic DOFC of Figure 1 is portable and has been implemented in practical applications. However, this set-up is a fiber interferometer where the stability of the generated optical combs depends on the fiber arms. Instead, we propose to use a dual-drive Mach-Zehnder modulator (DD-MZM) with the modulation scheme shown in Figure 3 to obtain a DOFC.
We have also three signals to generate the DOFC. Two of them are provided by two signal generators at frequencies f_pm1 for the first comb and f_pm2 for the second comb, as in Figure 2a. In this case, the frequency shift is obtained through a phase-generated carrier (PGC) by applying an additional generator at frequency f_PGC to one input of the DD-MZM. In addition, a bias input that can adjust the steady-state point of operation of the interferometer is available. As before, frequencies f_pm1 and f_pm2 are slightly different and much higher than the frequency f_PGC. If the signal generators of f_pm1 and f_pm2 are switched off, the scheme is essentially a pseudo-heterodyne interferometer with a phase-generated carrier [16]. The bias input adds another degree of freedom. It is used for choosing the phase difference between the first and second arms of the interferometer. This parameter, in addition to the set-up compactness, is important to improve the amplitude stability of the generated optical frequency combs and the interference between them. The PGC technique in interferometry is based on generating an electrical carrier through phase modulation [21]. It can also be achieved by modulating the wavelength of the laser to generate a pseudo-heterodyne signal for a given optical path difference [22]. In our case, we used an additional phase-modulation sine signal on one input of the DD-MZM. This non-linear process generates several carriers [22] and each one provides a pseudo-heterodyne detection. The resultant DOFC frequency shift is equivalent to that provided by the AOM. The particularity, in this case, is that, instead of a frequency comb centered on f_shift (frequency spacing: f_pm1 − f_pm2), we obtain, on the photodetector, a set of frequency combs: the first and principal one is centered on f_PGC (frequency spacing: f_pm1 − f_pm2) and the others on 2·f_PGC and 3·f_PGC. Note that each of these combs has the same frequency spacing of f_pm1 − f_pm2 and is an injective mapping of the DOFC. The underlying concept in this process is the three pure-phase modulation stages performed in an interferometer. From the basic interferogram equation [23], we can consider three pure-phase modulations, two for the sidebands of the optical combs and one for the PGC approach. Although this is simple, this approach enables us to understand the process of sideband generation and, at the same time, the PGC process [24][25][26][27] as a simple Equation (3). It provides an expression for the detected intensity of the DOFC based on the PGC, where A_1² is the power of one of the DD-MZM arms and A_2² is the power of the second arm of the DD-MZM. β_i is the modulation depth of the i-th modulation and ω_i is the angular frequency of the i-th modulation. For convenience, the β_3·sin(ω_3·t) term is associated with a pure-phase-modulation phase-generating carrier signal whose angular frequency ω_3 is very small in comparison to the sideband-generating angular frequencies ω_2 and ω_1. As an example, in our practical case, 656.5 MHz and 656 MHz for ω_1/2π and ω_2/2π, respectively, and 4 MHz for ω_3/2π. φ_1(x) − φ_2(x) represents the phase difference between the arms of the DD-MZM, and it is physically controlled by the bias signal. The Fourier decomposition of (4) provides a useful insight. It enables us to understand the relationship between the optical combs and the photo-detected comb.
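The displayed equations (3) and (4) did not survive the text extraction. Based on the surrounding description (three pure phase modulations in a two-arm interferometer and the A_1·A_2·J_m(β_3)·J_k(β_2)·J_k(β_1) terms discussed below), a plausible reconstruction, offered here only as a sketch and not as the paper's verbatim equations, is:

```latex
% Hedged reconstruction of the interferogram and its low-frequency expansion.
I_{\mathrm{DOFC}}(t) \;\propto\; A_1^2 + A_2^2
  + 2 A_1 A_2 \cos\!\big[\beta_1\sin(\omega_1 t) - \beta_2\sin(\omega_2 t)
  + \beta_3\sin(\omega_3 t) + \varphi_1(x) - \varphi_2(x)\big]
% Keeping only the terms that fall inside the photodetector bandwidth
% (k\omega_1 and k\omega_2 harmonics neglected; constant, tone-dependent
% phase offsets omitted for clarity):
I_{\mathrm{DOFC}}(t) \;\approx\; A_1^2 + A_2^2
  + 2\sum_{m}\sum_{k} A_1 A_2\, J_m(\beta_3)\, J_k(\beta_2)\, J_k(\beta_1)
  \cos\!\big[\big(k(\omega_1-\omega_2) + m\,\omega_3\big)\,t
  + \varphi_1(x) - \varphi_2(x)\big]
```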
where the "m-th" index sum is associated with the PGC-carrier generation of modulation depth β 3 and angular frequency ω 3 . Accordingly, the "k-th" index is associated to each homolog pair of tones that are mapped to the detector bandwidth. kω 1 and kω 2 harmonics are assumed to lie outside the detected bandwidth, and therefore they can be neglected. Therefore, just the carriers corresponding to ω 3 and their multiheterodyne sidebands remain detectable. Otherwise, the residual harmonics can be easily removed with low-pass filtering. Each A 1 A 2 J m (β 3 )J k (β 2 )J k (β 1 ) term refers to the optical power for the "k-th" homolog tones measured in the "m-th" PGC carrier. Therefore, the relationship between the electrical domain (I DOFC ) FFT and the optical domain is injective (one to one mapping). Multiheterodyne Dispersion Interferometer The scheme of the proposed multiheterodyne dispersion interferometer is shown in Figure 4. A compact DOFC based on a coherent laser seed and a DD-MZM (as in Figure 3) is injected into a Michelson interferometer with an FBG sensor in one arm. The central wavelength of the optical comb is aligned with the Bragg wavelength of the FBG sensor. bands remain detectable. Otherwise, the residual harmonics can be easily removed with low-pass filtering. Each ‖ ‖‖ ‖ term refers to the optical power for the "k-th" homolog tones measured in the "m-th" PGC carrier. Therefore, the relationship between the electrical domain ( ) FFT and the optical domain is injective (one to one mapping). Multiheterodyne Dispersion Interferometer The scheme of the proposed multiheterodyne dispersion interferometer is shown in Figure 4. A compact DOFC based on a coherent laser seed and a DD-MZM (as in Figure 3) is injected into a Michelson interferometer with an FBG sensor in one arm. The central wavelength of the optical comb is aligned with the Bragg wavelength of the FBG sensor. The light mix of the interferometer is composed of a variable amplitude and phasesignal correspondent with the reflection from the FBG sensor and a constant amplitude and phase-signal correspondent to the reference arm of the Michelson interferometer that is partially reflective. The optical phase difference of the photo-detector depends on the source frequency: where is the optical phase difference between the two paths of the interferometer, d is the optical path difference (OPD) that considers both the length and the refractive index of the fiber, is the optical frequency, c is the speed of light in vacuum and is the optical wavelength. In this case, each tone of the optical comb is detected with a different optical phase, and the phase difference of each tone with the central tone of frequency (wavelength ) can be expressed as in (6). The light mix of the interferometer is composed of a variable amplitude and phasesignal correspondent with the reflection from the FBG sensor and a constant amplitude and phase-signal correspondent to the reference arm of the Michelson interferometer that is partially reflective. The optical phase difference of the photo-detector depends on the source frequency: where φ is the optical phase difference between the two paths of the interferometer, d is the optical path difference (OPD) that considers both the length and the refractive index of the fiber, ν is the optical frequency, c is the speed of light in vacuum and λ is the optical wavelength. 
In this case, each tone of the optical comb is detected with a different optical phase, and the phase difference of each tone from the central tone at frequency f_0 (wavelength λ_0) can be expressed as in (6):
φ(λ) ≈ φ_0 − 2π·(d/λ_0^2)·(λ − λ_0),    (6)
where φ_0 is the optical phase difference at frequency f_0 (wavelength λ_0), which corresponds to the laser wavelength, and (λ − λ_0) << λ_0. Note that if the OPD is zero, then the optical phase of each and every tone is the same. The optical phase difference between adjacent tones is 2π·(d/c)·f_pm, where f_pm is the modulation frequency applied to the phase modulators (see Figure 2). In Figure 5a, we reproduce the output of the interferometer, substituting the DOFC-and-PD pair with a low-coherence broadband source and an optical spectrum analyzer (OSA), as an example of the interferometer output as a function of the wavelength [28]. The interferometer output contains the amplitude profile of the FBG reflected spectrum and also a sinusoidal modulation with the wavelength. When the reflected Bragg wavelength changes with the strain, the sinusoidal modulation shifts. Therefore, to obtain the strain, we can sample specific wavelengths that are representative for an optical phase read-out, as represented in Figure 5b. In particular, we use the two tones f_0 + f_pm and f_0 − f_pm, with a π rad optical phase difference between them, while the DOFC central wavelength is aligned with the FBG Bragg wavelength. In Figure 5a, an asymmetric spectral structure is observed. The representation is a combination of the envelope presented by the filtered/reflected spectrum of the FBG and the phase change with the wavelength of the FBG (i.e., delay) detected with a dispersion interferometer. The delay in this FBG is constant with the wavelength (minimum dispersion), but an excess is typical on the passband sides of an FBG without apodization [11]. In this case, zero path imbalance is obtained for a wavelength other than the Bragg wavelength, and the phase-wavelength profile is shifted from the amplitude-wavelength profile. Furthermore, the shorter and longer wavelengths do not reflect at exactly the same point on the FBG, so the path imbalance slightly changes for different wavelengths.
Figure 5. Principle of measurement based on a multiheterodyne dispersion interferometer to interrogate an FBG: (a) example of the photo-detected output of a dispersion interferometer with an FBG, obtained with an SLED and an optical spectrum analyzer (OSA); (b) sampling of specific wavelengths with a DOFC for an optical phase read-out.
Experimental Set-Up
The experimental set-up reproduces the scheme of Figure 4 for the interrogation of a weak FBG with a multiheterodyne dispersion interferometer. The DOFC reproduces the scheme of Figure 3 for a compact and stable implementation. The system is driven with a laser at a 1310 nm wavelength (Santec Tsl-210 tunable laser). The DOFC is generated with a DD-MZM (model MZDD-LN-10-PD-P-P-FA-FA, iXblue, Saint-Germain-en-Laye, France); one input is driven by a signal of 656 MHz, the other by both a signal of 656.5 MHz and a signal of 4 MHz, the latter for the PGC. An example of the optical output of the DOFC is shown in Figure 6, detected with an OSA (model Yokogawa AQ6370B). In this case, the modulation frequencies applied to the DD-MZM are higher in order to distinguish the different tones with the limited resolution of the OSA (20 pm resolution). The wavelength of the laser seed was about 1311.5 nm and the power was greater than −30 dBm. The central wavelength of the comb was the same (1311.5 nm) and the power was slightly less than −30 dBm. The amplitude of the harmonics was larger in the comb trace than in the laser trace, since the EO modulator generated sidebands, but the total comb power was less than the laser power due to insertion losses. The fiber was SMF-28. The fiber-optic coupler was a 50:50 dual-wavelength coupler (1310 nm, 1550 nm). The FBG sensor was a weak FBG of 2 cm, with a low back reflectivity of 12.5% and a 1308.2 nm reference Bragg wavelength. The central wavelength of the optical comb was aligned with the Bragg wavelength of the FBG sensor by adjusting the wavelength of the laser. During the experiments, it was adjusted before the measurements to obtain an identical amplitude of the first sidebands. In practice, the effect of the thermal drift of the Bragg wavelength can be compensated through the tunability of the laser by implementing a low-bandwidth closed loop. Furthermore, if 5 optical-phase sample points (5 comb lines) are obtained [28], the optical phase change can be reconstructed over a range of more than 2π rad, so vibrations and drift can be measured simultaneously in order to compensate for the latter. The tones of the comb were spaced at 656 MHz to have a phase difference of π/2 rad between two adjacent tones, so the optical path difference of the fiber interferometer was 11.43 cm.
Since the light traveled along the fiber forward and backward (two passes), and considering an effective refractive index of 1.4676 at 1310 nm, the length difference between the reference fiber and the FBG reflection was 3.90 cm. The frequency difference between the first and the second comb was set to 0.5 MHz, which was the limit for the FBG sensor's detected bandwidth without interference among the tones at the photodetector. This frequency difference is far less than the modulation frequency of 656 MHz, so both optical combs sampled essentially the same point of the optical spectrum (the phase difference was approximately 381 ppm of π rad). The PGC was modulated at 4 MHz, and its amplitude was chosen to be 2·Vπ of the phase modulator, which was approximately 7 V. This means that, for an amplitude equal to Vπ = 3.5 V, we obtained π radians of optical phase modulation; therefore, for 2·Vπ, we obtained a whole period of modulation for each period of the phase modulation. In this case, with a frequency shift of 4 MHz and a frequency difference of 0.5 MHz, up to 8 tones can be read unambiguously on the photodetector.
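As a quick numerical cross-check of the values quoted above (a sketch with our own variable names, not code from the original work):

```python
import math

c = 299_792_458.0      # speed of light in vacuum (m/s)
f_pm = 656e6           # comb line spacing (Hz)
delta_f = 0.5e6        # frequency difference between the two combs (Hz)
f_pgc = 4e6            # phase-generated-carrier frequency (Hz)
n_eff = 1.4676         # effective refractive index of SMF-28 at 1310 nm

# A pi/2 phase step between adjacent comb tones, 2*pi*(d/c)*f_pm = pi/2,
# fixes the optical path difference d:
d = c / (4 * f_pm)
print(f"optical path difference: {d * 100:.2f} cm")           # ~11.43 cm

# Double pass (forward and backward) through fiber of index n_eff:
delta_L = d / (2 * n_eff)
print(f"physical length difference: {delta_L * 100:.1f} cm")  # ~3.9 cm

# Tones that fit between consecutive PGC carriers without overlapping:
print(f"unambiguous tones: {f_pgc / delta_f:.0f}")             # 8

# Phase offset between the sampling points of the two combs:
offset = 2 * math.pi * d * delta_f / c
print(f"inter-comb offset: {offset / math.pi * 1e6:.0f} ppm of pi rad")  # ~381
```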
As can be observed in Figure 6, the DOFC has 7 tones within 30 dB of relative amplitude, and the other tones are negligible. The photo-detecting stage is a self-made bank of photodetectors with 35 dB of gain that can operate at the 1310-1550 nm wavelengths. We chose the 1310 nm wavelength for the tunable laser and the FBG samples; other wavelengths, such as 1550 nm, can be used with this configuration [28]. An advantage is the reduced dispersion of the fiber cables at 1310 nm, which implies that a model in which the optical phase is constant with the wavelength is representative. Regarding the comb line spacing, 656 MHz was chosen to accurately place the two main sidebands f_0 + f_pm and f_0 − f_pm at a specific optical phase of ±π/2 rad with respect to the reference (Figure 5b). Furthermore, this frequency was moderate, so it satisfied f_shift << f_pm1, f_pm2 (4 MHz << 656 MHz, 656.5 MHz) and f_pm1 − f_pm2 << f_pm1, f_pm2 (0.5 MHz << 656 MHz, 656.5 MHz). As previously mentioned, multiple carriers were generated with the PGC, each one mapping the optical comb to a comb on the photodetector. Therefore, the optical comb was mapped as a comb with a frequency spacing of 0.5 MHz centered on 4 MHz, plus equivalent combs (0.5 MHz frequency spacing) centered on the harmonics of 4 MHz (such as 8 MHz and 12 MHz). This injective mapping from the DOFC to the photodetector signal can be observed in Figure 7. The PGC characteristic of the modulation leads to multiple carriers; therefore, the same spectra are injectively mapped onto the spans of 2-6 MHz, 6-10 MHz, 10-14 MHz and 14-18 MHz for the first, second, third and fourth order, respectively. In this case, 5 tones of the optical comb were clearly detected in the principal and secondary carriers at 4 MHz and 8 MHz, respectively, and 3 tones of the optical comb were detected in the higher-order harmonics of 4 MHz.
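To illustrate how the PGC maps the optical comb onto several RF combs, the following sketch evaluates the Bessel-product amplitudes discussed above for the detectable tones at m·f_PGC + k·(f_pm1 − f_pm2). The modulation depths and tone ranges are illustrative guesses, not the values used in the experiment:

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

f_pgc = 4e6               # PGC carrier frequency (Hz)
delta_f = 0.5e6           # comb-to-comb frequency difference f_pm1 - f_pm2 (Hz)
beta1, beta2 = 1.2, 1.2   # sideband modulation depths (assumed)
beta3 = 2.4               # PGC modulation depth (assumed)

tones = []
for m in range(1, 5):          # PGC carrier orders 1..4 (4, 8, 12, 16 MHz)
    for k in range(-4, 5):     # homolog sideband pairs of the two combs
        freq = m * f_pgc + k * delta_f
        amp = abs(jv(m, beta3) * jv(k, beta1) * jv(k, beta2))
        tones.append((freq, amp))

# print the strongest detectable tones, grouped by RF frequency
for freq, amp in sorted(tones):
    if amp > 1e-4:
        print(f"{freq / 1e6:6.1f} MHz   relative amplitude {amp:.4f}")
```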
Demodulation
The demodulating process can be explained for one wavelength and then generalized to each comb tone on the photodetector (PD). The vibration tone resulting from the wavelength λ_i can be extracted from the PD by mixing the signal of the PD (SPD) with a reference mixing signal (SR) and filtering the output with a low-pass filter. In this case, we used an analog mixer whose output was proportional to the SPD times the mixer reference signal SR. If the SR has a constant amplitude and the same frequency as the SPD harmonic under study, we obtain an electrical output amplitude that is proportional to the amplitude modulation of the optical tone lying in the FBG reflected spectrum. Thus, the fluctuation of this amplitude will be at the same rate as the frequency of the mechanical vibration. The vibration information was extracted from the principal carrier with differential measurements of the amplitudes of the harmonics at 3.5 MHz and 4.5 MHz. By applying the lock-in technique to those two tones and subtracting them, we obtained a value of the phase shift of the interferogram. This simplification is extremely important in the dynamic-strain measurement with the FBG. In Figure 8, we can observe the implementation of the analog demodulation stage. The PD signal is split and injected into analog mixers that are driven with the secondary signals of 3.5 MHz and 4.5 MHz, respectively. The output of each mixer contains the beat at DC and at twice the frequency. Thus, at the output of the low-pass filter, we obtain the electrically encoded vibration signal, whose amplitude around DC is proportional to the mechanical vibration signal.
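A minimal numerical sketch of the differential lock-in demodulation described above (mixing the photodetector signal with the 3.5 MHz and 4.5 MHz references, low-pass filtering and subtracting); the sampling rate, filter order and the toy tone amplitudes are our assumptions, chosen only for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50e6                        # sampling rate (assumed)
t = np.arange(0, 2e-3, 1 / fs)   # 2 ms record
f_vib = 20e3                     # mechanical vibration frequency

# toy photodetector signal: two comb tones at 3.5 and 4.5 MHz whose amplitudes
# are modulated in anti-phase by the vibration (tones on opposite slopes of
# the interferogram)
vib = 0.1 * np.sin(2 * np.pi * f_vib * t)
s_pd = (1 + vib) * np.cos(2 * np.pi * 3.5e6 * t) + (1 - vib) * np.cos(2 * np.pi * 4.5e6 * t)

def lock_in(signal, f_ref):
    """Mix with a unit-amplitude reference and keep the low-frequency component."""
    mixed = signal * np.cos(2 * np.pi * f_ref * t)
    b, a = butter(4, 100e3 / (fs / 2))   # 100 kHz low-pass filter
    return filtfilt(b, a, mixed)

# differential read-out: oscillates at 20 kHz with amplitude proportional to the vibration
out = lock_in(s_pd, 3.5e6) - lock_in(s_pd, 4.5e6)
print(out.max() - out.min())   # ~0.2, i.e. twice the 0.1 modulation amplitude
```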
Calibration
The calibration is performed with an independent heterodyne interferometer. Both sensing parts were mechanically attached to the same point and in the same way, to ensure that the same strain and vibration stimulus was applied to both the calibration interferometer and the main measurement system. Each system was driven with a different laser: the main measurement system was driven with a tunable laser (Santec Tsl-210) and the calibration system was driven with a laser diode (QDFB-LD-1550-20). A heterodyne interferometer purely phase-modulated in one of its arms generates electrical sidebands around the carrier signal. The spacing of the sidebands equals the sinusoidal excitation frequency. As the measurement arm of the calibrating interferometer and the FBG sensor were mounted in the same assembly, both parts experienced the same mechanical displacement (Figure 9). The number and amplitude of the sidebands generated in the interferogram of the calibrating interferometer depend on the amplitude of the mechanical vibration signal. We adjusted the mechanical set-up to ensure that the FBG sensor and the total sensing path of the calibrating interferometer were the same, so we could easily obtain an absolute amplitude for the mechanical vibration that was independent of the main system. This allowed us to measure the minimum resolution of the main system from the output of the calibration system at PD2.
The normalized optical-fiber path-length change with the strain, Equation (7), was in the range of the relative Bragg wavelength change of the FBG:
∆(nL)/(nL) = K_F·(∆L/L),    (7)
where nL is the optical path length of the sensing piece of fiber, n is the effective refractive index, ∆nL is the change of this path length and K_F is the gage factor of the optical fiber, which accounts for the strain-optic coefficient. Considering a practical gage factor of 0.78 and a refractive index of 1.4682 at the 1550 nm wavelength in the SMF-28 silica fiber, the optical path elongation was about 1.16 times the physical elongation of the fiber. A change of ∆nL equal to the laser wavelength produces a 2π rad optical phase change. The calibration process was very simple and allowed us to measure the resolution limits of the system for the dynamic strain. The algorithm was explained in [29,30]. The idea is to measure the sideband attenuation between the zero- and first-order Bessel functions of the heterodyne interferometer. This attenuation is proportional to the modulation depth and therefore to the ratio between the wavelength of the laser diode (LD2) and the actual amplitude of the mechanical vibration. To obtain a calibration level that shows a value of applied strain, we applied a known amplitude to the PZT. The resulting strain generated sidebands over the calibrating signal that were easily transformed into absolute strain. A particular strain value is shown in Figure 10 as the output response of the calibrating system for 20 V applied at a 20 kHz frequency. A ratio of about 25 dB between the zero and the first harmonic was achieved. It corresponds to 0.1 rad over a 10 cm length of fiber, that is, 215 nε, measured with the heterodyne interferometer at 20 kHz and 20 V of amplitude.
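The step that converts the measured carrier-to-sideband ratio into an absolute strain can be sketched as follows. The small-signal inversion of the J1/J0 Bessel ratio and the single-pass phase-to-strain conversion are our reading of the procedure in [29,30]; the gage factor, refractive index, wavelength and fiber length are the values quoted above:

```python
import math
from scipy.special import j0, j1
from scipy.optimize import brentq

ratio_db = 25.0                    # measured carrier-to-first-sideband ratio (dB)
ratio = 10 ** (-ratio_db / 20)     # first-sideband / carrier amplitude ratio

# invert J1(m)/J0(m) = ratio for the phase-modulation depth m (radians)
m = brentq(lambda x: j1(x) / j0(x) - ratio, 1e-6, 1.0)
print(f"phase-modulation depth = {m:.3f} rad")   # ~0.11 rad (quoted as ~0.1 rad in the text)

lam = 1550e-9    # calibration laser wavelength (m)
L = 0.10         # sensing fiber length (m)
n = 1.4682       # effective refractive index
K_F = 0.78       # fiber gage factor (includes the strain-optic coefficient)

# single-pass phase-to-strain conversion (assumed geometry):
# delta_phi = 2*pi*n*K_F*L*strain / lambda
strain = m * lam / (2 * math.pi * n * K_F * L)
print(f"dynamic strain amplitude = {strain * 1e9:.0f} nstrain")
# ~240 nstrain here; the text quotes ~215 nstrain using the rounded 0.1 rad value
```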
Results
The set-up compactness achieved with a single dual-drive optical modulator is important to improve the stability of the generated optical frequency combs and the interference between them. The mapping quality is better in the dual-drive implementation than in a discrete-component arrangement. This is because the impact of the temperature instabilities that arise in interferometric implementations is reduced to a minimum, as the DD-MZM is intrinsically more stable than a fiber implementation. Therefore, fewer unwanted fluctuations of the RF mapping amplitude were achieved. To support this conclusion, we analyzed the amplitude stability over a total period of 100 min. We registered the amplitude of two free-running DOFCs: an EOM-AOM implementation, following the scheme of Figure 1, and a DD-MZM implementation, following the scheme of Figure 3. The measurements of both systems were made at the same time, under controlled laboratory conditions, on the same table and in a similar environment. The results of both DOFCs can be seen in Figure 11. From this, we can extract that the maximum fluctuation of the discrete implementation is bigger than that of the DD-MZM-based implementation. In the case of the DD-MZM, the maximum value is about 0.5 dB, while in the case of the discrete implementation, it is almost 1.5 dB.
This implies that better amplitude stability of the optical source leads to a higher quality in the measurement of the output signal. On the other hand, we measured the noise levels for different resolution bandwidths in order to determine the quality of the signal under study. We can observe in Figure 12 that, for a higher sampling rate (x-axis), we obtained a lower noise, reaching levels of 20 pV²/Hz in the case of the Santec laser and about 10 pV²/Hz in the case of the QDFB-LD-1550-50. Finally, we applied small-amplitude, high-frequency signals. Therefore, a small-signal approach was used for very fast mechanical excitation, and a linear amplitude modulation of the optical comb tones by the FBG sensor was assumed. Figure 13a shows the calibrated measurement at 30 kHz (ESA resolution bandwidth of 300 Hz). A dynamic strain in the range of 215 nε was detected with an SNR of 30 dB. The vibration signal was measured for several different mechanical excitation frequencies to determine the maximum operating bandwidth, as shown in Figure 13b. In our case, this was about 130 kHz, defined as the maximum frequency of vibrations detectable with an SNR of 10 dB. The y-axis represents the electrical amplitude and the x-axis the frequency of the mechanical excitation; each color represents a different mechanical excitation frequency. It can be observed in Figure 13a that the bandwidth of the detected mechanical vibrations is widened with respect to the excitation frequency. As can be seen in Figure 13b, the bandwidth of the detected vibrations is similar for the different frequencies, so the same amplitude/phase modulation of the main mechanical frequency is observed in all cases, which represents a common fluctuation. The phase noise of the laser seed (5 MHz bandwidth, FWHM) and the amplitude jitter of the reference comb can be considered the sources of error.
However, a continuous wave, rather than a burst, was used to excite the acoustic actuator, which could have contributed to this effect.
Conclusions
In this work, a dispersion reading system to interrogate low-reflectivity FBG sensors and measure dynamic strain with high resolution was presented. It was based on dual optical frequency comb (DOFC) generation and allowed a compact set-up that operated with increased amplitude stability compared with classical discrete architectures. The DOFC was generated with a single device, a dual-drive Mach-Zehnder modulator (DD-MZM), and provided a compact and usable alternative to the discrete architecture implementation. We used sideband generation and the phase-generated carrier technique for reading dispersion variations due to sinusoidal vibration in a low-reflectivity FBG sensor. We detected a dynamic strain with amplitudes in the range of 215 nε and with a signal-to-noise ratio of 30 dB, calibrated independently with a heterodyne interferometer. The main system also reached a maximum detectable frequency of 130 kHz with a signal-to-noise ratio of almost 10 dB. As future work, we can point to the application of the dispersion reading to distributed grating measurements, that is, the measurement of the intra-grating strain, as shown in [14,31,32], and of the intra-grating temperature [33].
These ideas rely on the fact that the dispersion may depend on the intra-grating position of the FBG sensor and, therefore, information about the position can be extracted for distributed sensing of the strain. This information about the position is proportional to the first derivative of the phase with respect to wavelength, which is a magnitude that can be extracted with our proposed technique. In addition, the narrow-band detection of fiber-sensor dispersion changes will enable the compact, cost-effective and high-resolution interrogation of high-throughput interferometric fiber sensors, such as integrated Mach-Zehnder interferometer (MZI) sensors and long-period grating (LPG) sensors. This also opens up its application to the precise measurement of chemical/biological sensing samples at high speed. Data Availability Statement: The datasets of the current study are available from the corresponding author upon reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
13,032.6
2022-05-01T00:00:00.000
[ "Physics" ]
Radiometric Inter-Consistency of VIIRS DNB on Suomi NPP and NOAA-20 from Observations of Reflected Lunar Lights over Deep Convective Clouds: The Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) is capable of observing reflected lunar radiances at night with its high gain stage (HGS), and the radiometric calibration is traceable to the sun through gain transfer from the low gain stage (LGS), calibrated near the terminator with the solar diffuser. Meanwhile, deep convective clouds (DCC) are known to have a stable reflectance in the visible spectral range. Therefore, the reflected lunar radiance at night from the DCC provides a unique dataset for the inter-calibration of the VIIRS DNB on different satellites, such as Suomi National Polar-orbiting Partnership (NPP) and NOAA-20, as well as for quantifying the lunar radiance as a function of lunar phase angle. This study demonstrates a methodology for comparing nighttime Suomi NPP and NOAA-20 VIIRS DNB measured DCC-reflected lunar radiance at various phase angles, using data from July 2018 to March 2019 with an 86-second sampling interval, and for comparing Suomi NPP VIIRS DNB measured lunar radiances with lunar model predictions. The result shows good consistency between these two instruments on the two satellites, although a low bias in the NOAA-20 VIIRS DNB of ~5% is found. Also, the observed lunar radiance from the VIIRS DNB on Suomi NPP is found to be consistent with model predictions within 3% ± 5% (1σ) for a large range of lunar phase angles. However, discrepancies are significant near full moon, due to lunar opposition effects and limitations of the lunar models. This study is useful not only for monitoring the DNB calibration stability and consistency across satellites, but may also help validate lunar models independently.
Introduction
The moon has been recognized as a very stable reflectance reference for satellite radiometer calibration, owing to its intrinsically stable surface reflectance properties, lack of atmosphere and related weathering processes, and its availability to most satellite radiometers. Although most modern satellite radiometers are equipped with a solar diffuser (SD), and in many cases together with a solar diffuser stability monitor (SDSM), studies [1][2][3] have shown that, even with the best practices of utilizing the SD and SDSM onboard, residual degradations may not all be accounted for, due to uncertainties in the characterization of various optical components. As a result, lunar calibration is recognized as a
Background and Previous Studies
The Suomi NPP satellite was launched in October 2011, and the data became available a month later. However, the VIIRS DNB focal plane array was not cooled to its nominal operating temperature until 20 January 2012, when the cryo-radiator cooler door was opened. The VIIRS sensor data record (SDR) (also known as Level 1b calibrated and geolocated radiance data) has gone through extensive calibration/validation, and the data achieved beta maturity on 2 May 2012, provisional maturity on 13 March 2013, and validated maturity on 17 April 2014. There have been several changes in the VIIRS DNB SDR, including the spectral response shift due to the Rotating Telescope Assembly (RTA) mirror degradation (Figure 1) [15,16], geolocation improvements including terrain correction, and changes in the methodology using onboard vs. dark ocean data for the calibration offset since 12 January 2017 [17].
Therefore, to avoid complications due to these changes, we focus on the more recent data collected in 2018-2019. The VIIRS DNB is well known for its ability to detect low lights at night [18][19][20][21]. It has three gain stages, which allow for a wide measurement range spanning from 1 × 10⁻¹¹ to 0.02 W/(cm²·sr). The high gain stage can detect radiances typically ranging from 1 × 10⁻¹¹ to 10⁻⁶ W/(cm²·sr), although the actual range depends on the scan angle from nadir. This is because 32 zones are used between nadir and the end of scan (referred to as aggregation zones), and each zone uses a different number of subpixels in the aggregation to derive one pixel, in order to achieve a constant spatial resolution across the scan [22]. The DNB HGS noise is around 2.5 × 10⁻¹¹ W/(cm²·sr) (1σ) at aggregation zone 1 (as of 21 February 2012), and had increased by about 8% by the end of 2016. The noise is higher at high aggregation zones, up to 10 times higher at the end of the scan than at nadir. By comparison, the lunar radiance reflected from the DCC has a nominal range of 1 × 10⁻⁹ W/(cm²·sr) at half moon to > 50 × 10⁻⁹ W/(cm²·sr) near full moon. Therefore, the VIIRS DNB is able to measure reflected lunar radiances from the DCC with the high gain stage, which is essential for this study. The illuminating capabilities of the VIIRS DNB for nighttime remote sensing are well demonstrated in [18][19][20]. A comprehensive review of the instrument characteristics and early on-orbit performance is given in [19], which also included an early validation result using the vicarious site at Railroad Valley. The result showed that the DNB measurements are in agreement with vicarious calibration on the order of 15%. This was followed by other ground-based vicarious studies [21,23,24]. A major drawback of the ground-based vicarious studies is that the validation relies on in situ measurements with atmospheric correction, and is limited to a short time period, or in some cases just a single event. Also, since the VIIRS DNB can detect very small signals (as small as the light from a 1 kW light bulb within a 1 km² area), the earth's atmosphere becomes a significant source of uncertainty for ground-based validation, as discussed in [19]. In this paper, the term uncertainty refers to a "parameter associated with the result of a measurement that characterizes the dispersion of the values that could reasonably be attributed to the measurand" [25]. Uncertainty plays an important role in International System of Units (SI) traceability, which is defined as the "property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty" [26]. To reduce the uncertainties due to the atmosphere, researchers explored alternative techniques such as Dome C, DCC, and stars [23,27,28].
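To put the numbers quoted above in perspective, a back-of-the-envelope estimate (using only the values stated in the text) of the signal-to-noise ratio of DCC-reflected lunar radiance against the DNB HGS noise floor:

```python
noise = 2.5e-11       # DNB HGS noise at aggregation zone 1, W/(cm^2*sr)
half_moon = 1e-9      # nominal DCC-reflected lunar radiance at half moon, W/(cm^2*sr)
full_moon = 50e-9     # nominal DCC-reflected lunar radiance near full moon, W/(cm^2*sr)

print(f"SNR at half moon:   ~{half_moon / noise:.0f}")   # ~40
print(f"SNR near full moon: ~{full_moon / noise:.0f}")   # ~2000
```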
The stability of the DCC reflectivity in the visible spectrum and its use for calibration have been studied by the science community for many years. Extensive studies of vicarious calibration using DCC have been performed in the past for many satellite instruments [9][10][11][12][13][14]. However, most of the previous studies were for daytime DCC observations with reflected solar radiance, for example, for the VIIRS reflective solar bands (RSB) [11,12], or for the VIIRS DNB but again at daytime with its low gain stage (LGS) [10]. A more relevant study was found in [29], in which vicarious calibration of the Suomi NPP VIIRS DNB was performed using DCC at night under lunar illumination. It demonstrated the viability, advantages and disadvantages of this approach. It concluded that, for a one-year period from August 2012 to July 2013, the DNB calibration bias is −4.9% with a ±8.8% uncertainty range "near full moon." While this study demonstrated the value of DCC for vicarious calibration of the VIIRS DNB, there are several limitations. First, it uses daily averaged data based on "a 2.5 by 2.5 latitude by longitude grid for four seasonal months" for comparisons.
Second, the comparison was performed in reflectance instead of radiance, requiring the conversion of all data to reflectance, which depends on models and their associated uncertainties, while both the original DNB product and the lunar model output are in irradiance/radiance. Third, previous studies have shown that lunar irradiances "near full moon" have greater uncertainties due to opposition effects [30]. For example, the GIRO lunar irradiance model is unable to generate irradiance near full moon due to large uncertainties [31]. Fourth, the lunar model used in that study is known to have uncertainties on the order of 7-12% [7]. Thus, the authors of [29] concluded that their study is not an absolute calibration, but "more of a consistency check between the simulated values and the DNB calibration." In contrast to the previous efforts, the current study makes new advances in the following areas: (1) A unique dataset was developed in our study with a much finer temporal sampling interval, every 86 seconds (or 48 VIIRS scan lines) as the satellite passes over the DCC. This allows us to investigate the lunar-phase-angle-dependent calibration. Given the fact that the lunar phase angle changes only 0.012 degrees at this sampling rate, the uncertainties associated with lunar irradiance changes due to temporal sampling are eliminated. This DNB-DCC lunar dataset can potentially be available continuously for the next decades, with the planned launch of two more VIIRS DNB instruments, for a variety of studies; (2) Our study compared the lunar-phase-angle-dependent radiances between the VIIRS DNB on two satellites, Suomi NPP and NOAA-20, which had not been done before; (3) We compared the Suomi NPP VIIRS DNB measurements with the latest state-of-the-art GIRO model predictions, and also characterized its differences from MT2009 by lunar phase angle. The range of radiances in our current comparison study is between ~0.5 and ~60 nW/(cm²·sr) (for reference, the minimum radiance specification (Lmin) for the VIIRS DNB is 3 nW/(cm²·sr) [19]). With this unique dataset and the methodology presented in this paper, several fundamental science questions can be asked: Can the VIIRS DNB measure the lunar radiances as a function of lunar phase angle? What physical/mathematical functions does this relationship follow? How does the bi-directional reflection of the DCC affect the functions? Is the function dependent on the waning and waxing phases of the moon? How does the function compare to existing lunar models? Finally, can the long-term observations from the VIIRS DNB be used to improve the GIRO as well as the DCC bi-directional reflectance models? To verify the stability of the VIIRS DNB calibration for the period of our study, we also examined the DNB long-term stability during daytime with reflected solar radiance over DCC in its low gain stage (LGS). Figure 2 shows the time series of the Suomi NPP (and NOAA-20) daytime monthly DCC reflectance. First, the Suomi NPP VIIRS DNB observations over DCC have been very stable after mid-2013. Second, the DCC reflectance was increasing from 2012 to mid-2013 (0.83 to 0.92). This was due to the spectral shift of the DNB caused by the Suomi NPP VIIRS RTA mirror degradation, which modulated the spectral response function of the DNB, as discussed earlier.
Since mid-2013, the RTA mirror degradation has stabilized and, more importantly, the DNB spectral response function was updated in the operational calibration [32], which corrected the bias and led to the stable trend after mid-2013 shown in Figure 2. The estimated calibration variability after mid-2013 is on the order of 0.5% in the DCC time series, which is very stable. We also see that the stability in 2018-2019 is as good as that in earlier periods. Figure 2 also shows that the NOAA-20 VIIRS DNB LGS calibration, with a much shorter time series, became stable after the latest on-orbit calibration update on 27 April 2018. In contrast to a solar diffuser typically used for the onboard calibration of satellite radiometers, DCC do not have a fixed location or shape, and have variability in reflectance due to a number of parameters. However, statistically the DCC reflectance is very stable given a large enough number of samples; for example, each data point in Figure 2 is aggregated from one month of DCC data, which consists of millions of individual samples. In the statistical analysis of DCC, it is typical to use the mode value of the samples, as opposed to the mean value, to filter out extreme values that are out of range and to obtain smaller variability in the trend.
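A small illustration of why the mode is preferred over the mean for DCC statistics (the toy radiance distribution and bin count are our assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy granule: most DCC pixels near 10 nW/(cm^2*sr), plus a few extreme outliers
radiance = np.concatenate([rng.normal(10.0, 0.5, 5000), rng.normal(40.0, 5.0, 50)])

counts, edges = np.histogram(radiance, bins=200)
i = counts.argmax()
mode = 0.5 * (edges[i] + edges[i + 1])
print(f"mean = {radiance.mean():.2f}, mode = {mode:.2f}  [nW/(cm^2*sr)]")
# the mean is pulled up by the outliers, while the mode stays near 10
```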
Methodology and Data
The VIIRS DNB calibration is performed using the low gain stage (LGS) with the onboard solar diffuser and incident sunlight; the calibration is then transferred to the medium gain stage (MGS) and high gain stage (HGS) using observations near the terminator, where data from all gain stages are available. Given the fact that the low gain stage data are stable and traceable to solar calibration, it can be assumed that the high gain stage calibration is also relatively stable. It is known that the absolute radiometric accuracy of the high gain stage is significantly reduced due to the calibration transfer; but since we rely on the stability (as opposed to the absolute accuracy) of the calibration in analyzing the lunar-phase-dependent reflected radiances from the DCC, the most critical factor in this study is the high gain stage radiometric stability. To further assure the stability of the HGS, we also analyzed the gain changes of the DNB HGS over time. We found that the gain changed at a rate of about 1% per year from 2016 to 2018, based on analysis of onboard calibration data and monthly observations of the dark ocean [8]. Therefore, after the operational routine calibration update, we validated that the stability of the HGS radiance data is at a fraction of a percent for the period of study, as a residual of the gain change correction in the operational calibration.
DCC Data Sampling from VIIRS DNB Pixels
DCC are typically identified based on their brightness temperature at 11-12 µm (< 205 K) on a pixel-by-pixel basis [9,10]. In this study, the VIIRS DNB DCC pixels are selected by identifying pixels in the 12 µm band (M15) with brightness temperatures corresponding to DCC and finding the collocated DNB pixels at night. Samples are collected globally within ±25 degrees latitude, which is in the Inter-Tropical Convergence Zone (ITCZ). Since the lunar radiance depends on the time, location, lunar phase angle, lunar zenith angle, and DCC bidirectional reflection, the samples are kept individually at pixel level in the first step when processing each "granule," which is the smallest unit in data processing, i.e., data with 48 VIIRS scan lines acquired in 86 s. Then, the following filtering criteria are applied to each granule: (1) Only near-nadir pixels (scan angle within ±17 degrees of nadir) are kept and used in this study. This corresponds to view zenith angles of less than 20 degrees (Figure 3), according to Equation (1):
θ = arcsin[((R_e + H_s)/R_e)·sin α],    (1)
where α is the scan angle, R_e is the Earth radius (6378 km), H_s is the satellite altitude (829 km), and θ is the view zenith angle.
By focusing on near-nadir samples, we greatly simplify the problem and avoid complex effects (such as pixel size differences between the DNB and M15 at high scan angles, the atmospheric path length effect, and increased uncertainties in the bidirectional reflection effect at high scan angles), while preserving enough samples. (2) Pixels with lunar zenith angles greater than 90 degrees are removed due to large uncertainties in radiances in those cases. (3) Granules with fewer than a threshold number of DCC pixels are excluded from the analysis to avoid effects of cloud fraction. There is a large variation in the number of DCC pixels in each granule (nominally between ~10 and ~250,000). The threshold is somewhat arbitrary and is a tradeoff between the number of samples and the variability in the radiance values. In the long-term analysis in Section 4.3, a threshold of 400 pixels was used (each pixel covers an area of 750 m × 750 m). The remaining pixels are aggregated for each granule, and the mode and mean radiance values for each granule are computed. Since the lunar phase angle for each granule is practically constant, as discussed earlier, the important variables of lunar phase and zenith angles vs. mode radiance are generated for each granule and then used in subsequent analysis. At the end of this sampling process, one "near-nadir-sample" is created for each granule satisfying the above criteria.
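To make the granule-level sampling concrete, the following sketch shows one way the filtering and aggregation described above could look in code. It is an illustration only, not the operational processing: the array and function names and the 200-bin histogram used to estimate the mode are assumed choices, while the 205 K brightness temperature threshold, the ±17-degree scan-angle limit, the 90-degree lunar zenith cutoff, and the 400-pixel minimum follow the description in the text.

```python
import numpy as np

def near_nadir_sample(dnb_rad, m15_bt, scan_ang, lunar_zen, lunar_phase,
                      bt_max=205.0, scan_max=17.0, min_pixels=400):
    """Build one near-nadir-sample (mode radiance, phase, zenith) for a granule."""
    # Equation (1): view zenith angle from scan angle for a spherical Earth
    r_e, h_s = 6378.0, 829.0                                  # km
    view_zen = np.degrees(np.arcsin((r_e + h_s) / r_e *
                                    np.sin(np.radians(scan_ang))))

    keep = m15_bt < bt_max                   # DCC pixels: cold 12-um brightness temperature
    keep &= np.abs(scan_ang) <= scan_max     # near nadir, i.e. view zenith below ~20 degrees
    keep &= lunar_zen <= 90.0                # moon above the horizon
    if keep.sum() < min_pixels:              # criterion (3): enough DCC pixels in the granule
        return None

    rad = dnb_rad[keep]
    hist, edges = np.histogram(rad, bins=200)        # mode estimated from a histogram
    imax = np.argmax(hist)
    return {"mode_radiance": 0.5 * (edges[imax] + edges[imax + 1]),
            "mean_radiance": float(rad.mean()),
            "lunar_phase": float(np.median(lunar_phase[keep])),
            "lunar_zenith": float(np.median(lunar_zen[keep])),
            "view_zenith_max": float(view_zen[keep].max())}
```

A quick check of Equation (1) with these constants gives a view zenith angle of about 19.3 degrees for a 17-degree scan angle, consistent with the stated 20-degree cutoff.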
Data Processing and Analysis

The following steps are used in the data processing: (1) For each lunar cycle, a scatterplot is generated by pairing the lunar phase angle (LPA, or moon phase angle, MPA, used interchangeably in this study) with the mode of the VIIRS DNB DCC reflected lunar radiances for each near-nadir-sample aggregated from a granule. (2) The scatterplot shows a correlation between the DCC reflected lunar radiances and the lunar phase angle; this relationship is further quantified by fitting a polynomial to the data. (3) The measurements from Suomi NPP and NOAA-20 are compared. (4) Lunar irradiance models are used to generate predicted lunar irradiances so that the DNB observations can be compared with the model-predicted values. (5) The agreement between the models and the DNB observations is evaluated, and uncertainties in both the DNB observations and the models are discussed. The results are presented in the next section.

The lunar irradiance models generate the spectral lunar irradiance at the given time and observation geometry at the top of the atmosphere for the DNB. In the case of MT2009 [7], the generated hyperspectral lunar irradiance is convolved with the DNB spectral response function and converted to the top-of-atmosphere radiance in units of nW/(cm^2·sr), which matches the radiance unit of the VIIRS DNB. The following equation is used to compute the radiance:

L = I cos(ω) ρ(ω, θ, φ) / π, (2)

where L is the lunar radiance at the top of the DCC (nW/(cm^2·sr)); I is the model-predicted lunar irradiance for the DNB band (W/cm^2); ω is the lunar zenith angle (degrees); π is the solid angle of the hemisphere (sr); ρ(ω, θ, φ) is the angle-dependent DCC reflectance from the Angular Distribution Model (ADM) [33]; θ is the view zenith angle (degrees); and φ is the relative azimuth angle (degrees). The physical process described by Equation (2) is as follows. The lunar irradiance (I) incident on the DCC at lunar zenith angle ω is reflected with reflectance ρ(ω, θ, φ) into the hemisphere (π sr solid angle) and observed by the VIIRS DNB. At near nadir, ρ is mainly a function of ω: the reflectivity of the DCC depends primarily on the lunar zenith angle and, to a lesser degree, on the view zenith angle, because the view is near nadir. Note that in Equation (2) the lunar zenith angle has two separate effects: the cosine effect, which determines the lunar irradiance incident per unit area of the DCC surface, and the bidirectional reflectance effect of the DCC, which can be estimated using the ADM [33]. Similarly, the GIRO model takes as inputs the date, time, location, and VIIRS DNB spectral response function. It generates band-averaged spectral lunar irradiance in units of W/(m^2·µm), which is then converted to radiance in units of nW/(cm^2·sr) given the DNB band-equivalent width. In this study, the same data processing and analysis procedures are used for both NOAA-20 and Suomi NPP VIIRS DNB observations.
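The conversion in Equation (2) can be sketched in a few lines. This is a minimal illustration assuming the model irradiance has already been convolved with the DNB spectral response and converted to nW/cm^2, and that the ADM reflectance ρ(ω, θ, φ) has been looked up separately for the observation geometry; the function and argument names are illustrative, not taken from any operational code.

```python
import numpy as np

def model_radiance_toa(irr_dnb, lunar_zen_deg, adm_reflectance):
    """Convert DNB-band lunar irradiance to DCC-reflected radiance, as in Equation (2)."""
    omega = np.radians(lunar_zen_deg)
    # cosine effect of the lunar zenith angle, then reflection into the pi-sr hemisphere
    return irr_dnb * np.cos(omega) * adm_reflectance / np.pi
```

As an aside, the cosine factor alone makes an observation at a 40-degree lunar zenith angle about 2.2 times brighter than one at 70 degrees, which is the size of the waxing/waning difference discussed below.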
Results and Discussion

The scatterplot between the VIIRS DNB observed reflected lunar radiance from deep convective clouds and the lunar phase angle is the primary result of this study. In this section, we examine the results in detail in three separate subsections. Figure 4 provides an example of the relationship between VIIRS DNB observed lunar radiance reflected from DCC and lunar phase angle. The data shown here are from 7 November 2018 to 7 December 2018 (also referred to as the November-December 2018 lunar cycle), which covers a full lunar cycle from new moon to full moon and back to new moon. The figure shows that the lunar radiance increases during waxing as the lunar phase angle decreases from new moon (lunar phase angle ~180 degrees) to full moon (lunar phase angle approaching 0 degrees), while in the waning phase the lunar radiance decreases from full moon towards new moon. The change in radiance is approximately two orders of magnitude, from ~50 × 10^-9 W/(cm^2·sr) near full moon down to ~0.5 × 10^-9 W/(cm^2·sr) at a lunar phase angle of 110 degrees.

In the analysis, we experimented with different methods, such as using mean vs. mode radiance in generating the near-nadir-samples. Our findings confirmed previous studies that the mode values represent the granule better than the mean values, because the mean is sensitive to a few extreme, out-of-range values arising from the DCC reflectance variability discussed earlier. Therefore, in the analysis we only use the near-nadir-samples with mode radiance values for the lunar radiance study.

It is also noted that few data points are available for lunar phase angles near zero, i.e., full moon, although this varies month by month. This is because SNPP and NOAA-20 have an orbital period of ~101 minutes and the DCC samples are taken only in the tropical regions, while the lunar phase angle changes by ~0.51 degrees per hour. As a result, not all months capture the DCC during full moon (within 2 degrees lunar phase angle). However, we found that when full moon data were present in the near-nadir-samples, the curve near the full moon becomes extremely nonlinear, again due to the lunar opposition surge [30]. Therefore, in this study we selected the November-December 2018 lunar cycle, which does not have such full moon samples, and focused on the data points between 7 and 90 degrees in lunar phase angle.

Figure 4 also shows that although there is a general relationship between VIIRS DNB observed radiance and the lunar phase angle, the waxing and waning phases appear to follow different patterns. A more detailed analysis reveals that this large difference between waxing and waning lunar radiances is primarily due to the lunar zenith angle changes that occur in the Suomi NPP orbit between the waxing and waning phases of the moon. It is understood that there are lunar irradiance differences between the waxing and waning phases according to a previous study, in which a separate correction had to be made to the MT2009 model for waxing and waning effects [27], but that difference is much smaller than what we see here, as further discussed in Section 4.3. Figure 5 shows that, at the same lunar phase angle, the lunar zenith angles at satellite nadir can be very different between the lunar waxing and waning phases.
For example, at a 50-degree lunar phase angle, the lunar zenith angle is about 70 degrees in the waxing phase, while it is about 40 degrees during the waning phase. According to Equation (2), the lunar zenith angle plays a large role in the reflected radiances. In this specific case, the reflected radiance in the latter is 2.24 times that in the former (the ratio of the cosine of 40 degrees to the cosine of 70 degrees), even if the lunar irradiances are the same.

Another feature shown in Figure 5 is that during the waxing phase the lunar zenith angle change is in sync with the lunar phase angle change, whereas in the waning phase the lunar zenith angle change progressively lags behind as the lunar phase angle increases. This may have an impact on the lunar radiance comparisons with models, as discussed in Section 4.3. It is also noted that for NOAA-20 the relationship between lunar zenith angle and lunar phase angle is nearly identical to that of Suomi NPP, as shown in Figure 5 (the data points overlap), since the two satellites are in sun-synchronous orbits with the same orbital plane and the same local equator crossing time, separated by ~50 minutes.
As a result, near the equator, the lunar phase angle (as well as the lunar zenith angle) differences in the near-nadir-samples between Suomi NPP and NOAA-20 VIIRS DNB are within 0.43 degrees (given the lunar phase angle rate of 0.51 degrees per hour). This makes the direct comparison of the near-nadir-samples between Suomi NPP and NOAA-20 VIIRS DNB viable. It should be noted that, due to Earth rotation, the two satellites do not observe the same DCC cloud at nadir near the equator. It is therefore assumed that all DCCs have the same reflectance characteristics, and any difference contributes to the scatter in the plot.

VIIRS DNB Direct Lunar Radiance Comparison between Suomi NPP and NOAA-20

Based on the discussion in the previous section, the Suomi NPP and NOAA-20 VIIRS DNB observations of DCC reflected lunar radiance can be directly compared as a function of lunar phase angle, and the results can be quantified. It is assumed that this function is relatively stable, due to the stability of both the deep convective cloud reflectance and the Suomi NPP VIIRS DNB instrument calibration, as discussed earlier for the period of study. Changes in this function would be primarily due to the lunar irradiance changes from month to month and the lunar zenith angle variations at the time of observation. It is also noted that this lunar-DCC curve can not only be used to quantify the calibration biases between the VIIRS DNB on different satellites, but can also potentially be used to monitor calibration changes over time for the same instrument once a long-term time series over several years is established.

Figure 6 compares the lunar-DCC radiance curves from the VIIRS DNB on both Suomi NPP and NOAA-20. The data used here are from 7 November 2018 to 6 December 2018. Visual inspection of Figure 6 suggests that the VIIRS DNB calibrations of Suomi NPP and NOAA-20 agree very well at all lunar phase angles if the data are grouped by waxing and waning phases separately. There are two separate patterns depending on the lunar phase, as discussed in Section 4.1, primarily due to the lunar zenith angles in each phase. A further quantitative comparison is performed by fitting a polynomial regression to each data set with Equation (3), and the coefficients of the polynomials are provided in Table 1:

L = C_0 + C_1 x + C_2 x^2 + C_3 x^3 + C_4 x^4, (3)

where L is the DNB measured lunar radiance (nW/(cm^2·sr)), x is the lunar phase angle, and C_0 to C_4 are coefficients (see Table 1).
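The polynomial regression of Equation (3) and the inter-satellite bias estimate shown in Figure 7 can be sketched, for example, with NumPy as below. This is a minimal illustration, not the authors' processing code; the function names and the choice of np.polyfit are assumptions, and, as described in the text, the waxing and waning branches would be fitted separately.

```python
import numpy as np

def fit_lunar_dcc_curve(phase_angle, mode_radiance, order=4):
    """Fit Equation (3): a 4th-order polynomial of mode radiance vs lunar phase angle.

    Returns coefficients C0..C4 (lowest order first) for one satellite and one
    waxing or waning branch of a lunar cycle.
    """
    # np.polyfit returns the highest order first; reverse to match C0..C4
    return np.polyfit(phase_angle, mode_radiance, order)[::-1]

def bias_ratio(coeffs_n20, coeffs_snpp, phase_angles):
    """NOAA-20 / Suomi NPP radiance ratio at selected lunar phase angles (Figure 7 style)."""
    x = np.asarray(phase_angles, dtype=float)
    l_n20 = sum(c * x**i for i, c in enumerate(coeffs_n20))
    l_snpp = sum(c * x**i for i, c in enumerate(coeffs_snpp))
    return l_n20 / l_snpp

# Hypothetical usage: fit each satellite's waxing-branch samples, then evaluate
# the bias at representative phase angles (published values appear in Table 1).
# coeffs_n20  = fit_lunar_dcc_curve(phase_n20, radiance_n20)
# coeffs_snpp = fit_lunar_dcc_curve(phase_snpp, radiance_snpp)
# print(bias_ratio(coeffs_n20, coeffs_snpp, [10, 20, 30, 40, 50]))
```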
It should be noted that although the 4th-order polynomial functional form remains valid from month to month, the coefficients do not stay the same, because the lunar irradiance varies from month to month due to factors such as lunar phase angle and lunar zenith angle variations relative to the sun-synchronous satellite orbit, and Earth-Moon distance variations. This is further discussed in Section 4.3, where a time series is shown. Based on the above coefficients, the radiometric biases of the VIIRS DNB on NOAA-20 relative to Suomi NPP can be calculated and plotted, as shown in Figure 7.

Figure 7 shows that the lunar radiance measured by the VIIRS DNB on NOAA-20 is, in general, lower (by up to ~5%) than that of Suomi NPP for both waxing and waning phases of the moon, especially at low lunar phase angles. Several factors might have contributed to the biases. First, the half-orbit separation between NOAA-20 and Suomi NPP leads to small lunar phase and lunar zenith angle differences when observing the DCC pixels. However, this effect is mitigated by the polynomial interpolation, which accounts for the observation time differences. Also, since Suomi NPP always trails NOAA-20, the lunar phase angle effect on the near-nadir-samples would have opposite signs in the bias during the waxing and waning phases, which contradicts the consistent bias shown in Figure 7. The lunar zenith angle difference at the time of observation between SNPP and NOAA-20 might introduce biases through the bidirectional reflection of the DCC, as discussed in Section 4.3; however, Figure 5 shows that the two satellites have nearly identical lunar phase vs. zenith angle patterns, which is further confirmed through orbital simulations. Identifying the root cause of the bias is beyond the scope of the current study. However, we have identified two sources of bias, both of which lead to a high in-band solar irradiance in the Suomi NPP VIIRS DNB operational data production: a ~2% bias due to spectral response differences (Figure 1) between the VIIRS DNB on NOAA-20 and Suomi NPP, and a 1.1% bias due to differences in the solar irradiance spectrum used in the operational data processing systems (an outdated version is used for Suomi NPP processing) [34].
The remaining ~2% bias is not well understood, except that the low bias in the NOAA-20 VIIRS DNB is consistent with other independent studies, which concluded that the NOAA-20 VIIRS solar band calibration is systematically lower than that of Suomi NPP by ~2%, likely due to prelaunch characterization uncertainties according to recent investigations [35-37]. The absolute radiance differences become smaller with increasing lunar phase angle, mainly because the lunar radiance itself decreases significantly, e.g., down to 1-2 nW/(cm^2·sr) at a 90-degree lunar phase angle, as shown in Figure 6. As a result, a small difference in radiance can be a significant percentage of the radiance, which leads to larger percentage uncertainties at very low radiances. Analysis of other months from July 2018 to March 2019 showed similar results, all following a 4th-order polynomial functional form despite differences in the coefficients.

Comparison between Observed and Lunar Irradiance Model Predicted Radiances

Despite the good agreement between the VIIRS DNB measurements from the two satellites, Suomi NPP and NOAA-20, the comparison made in the previous section gives no indication of the absolute accuracy of the radiance values from the DNB measurements. To address this issue, in this section we compare the Suomi NPP VIIRS DNB measured radiances with lunar irradiance model outputs. It is recognized that there are uncertainties in both the lunar irradiance model predictions (GIRO and MT2009) and the VIIRS DNB observations.
For the GIRO model, no official uncertainty statement is available, but previous studies have demonstrated that the uncertainty in the absolute lunar irradiance produced by the model is within about 10% [6], while its stability is much better, on the order of <1%. For the DNB observations, the uncertainty is mainly caused by a number of factors in the Earth-view observations, including but not limited to: (1) variability in the DCC reflectance, which depends on the view geometry, cloud optical thickness, cloud fraction, and small regional differences in cloud properties; and (2) lunar zenith angle variations, which lead to bidirectional reflectance effects.

A brief review of the two lunar models is necessary here to clarify the differences between them. The GIRO is a joint effort among different institutions, including the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), the United States Geological Survey (USGS), the Centre National d'Etudes Spatiales (CNES), the Japan Aerospace Exploration Agency (JAXA), and the National Aeronautics and Space Administration (NASA). The effort is led by EUMETSAT, with the objective of making the lunar model publicly available, verified and validated, with characterized uncertainties, and of establishing traceability for the predicted irradiances [5,38]. There is also discussion of further improvements beyond the baseline model, based on new observations such as those from the CNES Pleiades-HR (High Resolution) satellite constellation [39]. In contrast, MT2009 [7] is an independent model developed for a different application: it was originally developed to compute the lunar spectral irradiance for the VIIRS DNB for geophysical retrievals under lunar illumination. The model produces 1-nm resolution irradiance spectra over the spectral interval 0.3-1.2 µm for a given date and time and is based on multiple sources of lunar observations. It takes into account the lunar phase angle and the Sun-to-Moon and Moon-to-Earth distances. The model output is interpolated from lunar irradiance samples in a pre-calculated database, for lunar phase angles ranging from 0 to 180 degrees. However, neither lunar libration nor waxing/waning effects are considered in this model. The uncertainty of the MT2009 model is estimated to be 7-12% [7]. The purpose of using this model in the current study is to provide a comparison with the VIIRS DNB observed lunar radiances that is independent of the GIRO, so that any differences or discrepancies are uncorrelated between the two models. In addition, since MT2009 does not treat the waxing and waning phases separately, it helps diagnose the differing patterns between the waxing and waning phases in the VIIRS DNB data used in this study.

The lunar models generate lunar irradiance at the top of the atmosphere given the time, location, and view geometry of the observation. The lunar irradiance values are then converted to DNB in-band radiance by convolving with the DNB spectral response function and applying Equation (2). For both models, the units were converted to make the comparison consistent. Several observations can be made from Figure 8. First, the predicted lunar radiances match the DNB observations very well. Both the cosine factor and the bidirectional reflection correction applied to the GIRO predicted values in Equation (2) play important roles, as shown by tests in which each term was omitted individually.
Second, similar to the VIIRS DNB observed radiances, the GIRO model predicted radiances can be fitted with a 4th-order polynomial function, as shown in Figure 8. Third, the GIRO predictions do not agree well at very small lunar phase angles near full moon, where the observed values are higher than the model predictions, although there are only a few such data points. Finally, the polynomial coefficients, together with those presented in Table 1 for the DNB observed functions, are used to generate Table 2, which quantifies the differences between the model predicted and the DNB observed values at representative lunar phase angles from 10 to 80 degrees.

The DNB-to-GIRO radiance ratio in Table 2 quantifies the agreement between observations and model predictions. A value of 1.0 would indicate perfect agreement, while the deviation from 1.0 gives the percent difference. The table shows that during the waxing phase the agreement between the DNB observations and the GIRO model predictions is within ±5% (0.96 to 1.05) for lunar phase angles from 10 to 50 degrees. Note that according to Figure 8, the radiance at a 50-degree lunar phase angle is about 3 DNB radiance units (nW/(cm^2·sr)), which is at the DNB minimum radiance specification L_min. The large differences at 60 degrees are likely due to the very small radiance value and artifacts of the polynomial fit in that region, while at 70 and 80 degrees the radiance values decrease below L_min according to Figure 8. For the waning phase, the agreement is within ±5% over the lunar phase angle range between 10 and 70 degrees, with the best agreement, around 1%, at low lunar phase angles.

As with any statistical analysis, the polynomial fits to both the DNB observations and the GIRO model predictions have uncertainties. The standard error of the estimate using the 4th-order polynomial fit is on the order of 1.19 radiance units (nW/(cm^2·sr)), which is about 3% at a radiance level of 40 nW/(cm^2·sr) but can become significant in percentage terms at low radiances such as L_min. On the other hand, the R-squared, or goodness of fit, of our polynomial curve fitting is typically better than 0.98.
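The Table 2 comparison and the fit-quality numbers quoted above can be reproduced in outline from fitted coefficients. The sketch below is illustrative only: it assumes Equation (3) coefficients are available for both the DNB observations and the GIRO predictions (e.g., from the fitting sketch shown earlier) and evaluates their ratio at representative phase angles, along with the standard error and R-squared of a fit; the function names are assumptions.

```python
import numpy as np

def eval_poly(coeffs, x):
    """Evaluate Equation (3) with coefficients C0..C4 (lowest order first)."""
    x = np.asarray(x, dtype=float)
    return sum(c * x**i for i, c in enumerate(coeffs))

def dnb_to_giro_ratio(coeffs_dnb, coeffs_giro,
                      phase_angles=(10, 20, 30, 40, 50, 60, 70, 80)):
    """DNB-observed / GIRO-predicted radiance ratio at representative phase angles (Table 2 style)."""
    return eval_poly(coeffs_dnb, phase_angles) / eval_poly(coeffs_giro, phase_angles)

def fit_quality(coeffs, phase_angle, radiance):
    """Standard error of the estimate and R-squared for a fitted curve."""
    radiance = np.asarray(radiance, dtype=float)
    resid = radiance - eval_poly(coeffs, phase_angle)
    se = np.sqrt(np.sum(resid**2) / (len(radiance) - len(coeffs)))
    r2 = 1.0 - np.sum(resid**2) / np.sum((radiance - radiance.mean())**2)
    return se, r2
```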
It was found that the ADM is very useful in accounting for the DCC BRDF effects and reducing the biases. However, the ADM was developed for daytime applications under solar illumination, and its fitness for lunar illumination needs further evaluation despite the positive results in this study. It is possible that the ADM can be further refined with long-term VIIRS DNB observations of the reflected lunar radiance over DCC.

To ensure the consistency of our study results, we also performed a comparison between the Suomi NPP observed and GIRO predicted time series using the near-nadir-samples (with an 86-second sampling interval) and the ADM correction to account for DCC bidirectional reflectance effects. The major inputs to the ADM are the lunar zenith angle from the DNB data and a view zenith angle of 10 degrees (half of the view zenith angle range used in the DCC granule-level sampling), corresponding to a scan angle of about ±8.8 degrees from nadir. In this part of the analysis, an additional filtering criterion of >400 DCC pixels per granule is applied, which reduces the standard deviation of the biases. The results are presented in Figure 9.

In this figure, the lower panel (left vertical axis) shows the observed vs. GIRO predicted radiances. The maximum lunar radiance changes month by month, leading to different coefficients for the polynomial curve fits, as discussed earlier. We also found that for some months, such as January 2019, a few granules contained data near full moon (lunar phase angle < 2 degrees), in which case the GIRO failed to generate a result because such cases are outside the range and capability of the model. The upper panel of the figure (right vertical axis) shows the radiance ratio (DNB observed/GIRO predicted).
A statistical analysis of the ratio shows that the mean ratio is ~0.97 with a standard deviation of 0.05, which suggests that the DNB observations are lower than the model predicted values by ~3%, with an uncertainty of ±5%. This result is slightly better than, but consistent with, what we found earlier in this section. Another observation from the ratio plot is that the spread in the ratio values becomes larger with increasing phase angle (away from the peak radiance), as expected from the lower radiances at high lunar phase angles.

Finally, we also compared sample outputs from the GIRO and MT2009 models for the November-December 2018 lunar cycle. Figure 10 shows the ratio between the two model-generated irradiances (ratio = MT2009/GIRO) for this time period. The figure shows that the predictions from GIRO and MT2009 agree within ±7% over a large range of lunar phase angles. However, the differences grow in the waning phase, up to ~17% for lunar phase angles beyond 50 degrees. MT2009 underestimates the lunar irradiance near full moon and overestimates it at lunar phase angles above 10 degrees in this particular study. This is expected given the intended application and accuracy estimates of MT2009 as stated in [7].

Conclusions

The VIIRS DNB on Suomi NPP and NOAA-20 has great sensitivity to low-light radiances, which allows us to measure lunar radiances reflected by deep convective clouds from full moon to quarter moon on a monthly basis with no significant temporal gap. It has been shown that the DNB calibration is very stable during daytime over deep convective clouds with reflected solar radiances, and it is expected, based on analysis of calibration data, that the nighttime DNB calibration stability is comparable. This study shows that the VIIRS DNB measured lunar radiance reflected from deep convective clouds is primarily a function of lunar phase angle and lunar zenith angle.
There is good consistency in the DNB observations between NOAA-20 and Suomi NPP, although a bias of up to 5% is found between them, with the NOAA-20 DNB biased lower. The Suomi NPP VIIRS DNB observations match the lunar model predictions from GIRO within 3%, with a standard deviation of ±5% (1σ), based on analysis of data from October 2018 to March 2019, which significantly outperforms the VIIRS DNB instrument specification. The two lunar irradiance model outputs are found to agree within ±7% over a large range of lunar phase angles, although the difference can be up to 17% at high lunar phase angles, where radiances are low. The GIRO was unable to produce predictions for near-full-moon cases, for which VIIRS DNB observations are occasionally available. The unique dataset and methodology presented in this paper provide a viable technique for evaluating and ensuring the consistency of VIIRS DNB calibration across satellites for low-light observations. They also potentially provide an independent approach for validating both the lunar irradiance models and the ADM bidirectional reflectance model for improved accuracy, as well as for building confidence in their use for satellite radiometer calibration.

Author Contributions: C.C. conceived of and designed the study, analyzed the data, and wrote the paper; Y.B., W.W., and T.C. supported the study with data acquisition, processing, model prediction, and analysis, as well as graphic drawing.

Funding: This study is partially funded by the NOAA Joint Polar Satellite System (JPSS) Program with a grant to the University of Maryland.
14,338.8
2019-04-17T00:00:00.000
[ "Environmental Science", "Physics" ]
Pairwise hybrid incompatibilities dominate allopatric speciation for a simple biophysical model of development

Understanding the origin of species is, as Darwin called it, "that mystery of mysteries". Yet how the processes of evolution give rise to non-interbreeding species is still not well understood. In an empirical search for a genetic basis, transcription factor DNA binding has been identified as an important factor in the development of reproductive isolation. Computational and theoretical models based on the biophysics of transcription factor DNA binding have provided a mechanistic basis for such incompatibilities between allopatrically evolving populations. However, gene transcription by such binding events occurs embedded within gene regulatory networks, so the importance of pairwise interactions compared to higher-order interactions in speciation remains an open question. Theoretical arguments suggest that higher-order incompatibilities should arise more easily. Here, we show using simulations based on a simple biophysical genotype-phenotype map of spatial patterning in development that biophysics provides a stronger constraint, leading to pairwise incompatibilities arising more quickly and being more numerous than higher-order incompatibilities. Further, we find for small, drift-dominated populations that the growth of incompatibilities is largely determined by sequence entropy constraints alone; small populations give rise to incompatibilities more rapidly because the common ancestor is more likely to be slightly maladapted. This is also seen in models based solely on transcription factor DNA binding, showing that such simple models have considerable explanatory power. We suggest the balance between sequence entropy and fitness may play a universal role in the growth of incompatibilities in complex gene regulatory systems.

Introduction

The detailed genetic mechanisms by which non-interbreeding ... that the hybrid binding energies change diffusively (Khatri and Goldstein 2015a,b). However, real gene regulatory systems are more complex than a single TF binding to DNA, so again the question arises: do these predictions hold for more complex gene regulatory systems with more realistic fitness landscapes?

Although there has been much progress in understanding evolution in terms of selection, mutation and genetic drift, the majority of this work has relied on phenomenological fitness landscapes, which encompass in a heuristic manner smoothness, epistasis and neutrality (Higgs and Derrida 1992; Kauffman and Levin 1987). In recent years, the question of the structure ...

In this paper, we will use a slightly modified version of the spatial patterning model in Khatri et al. (2009), which has an explicit sequence representation of each locus, to examine the growth of Dobzhansky-Muller incompatibilities in allopatry as a function of population size and under stabilising selection in each lineage. Our results show that smaller populations develop incompatibilities more quickly and in a manner mostly predicted based solely on simple models of transcription factor DNA binding, showing the power of these simple approaches (Khatri and Goldstein ...).

Genotype-Phenotype map

The genotype-phenotype map we use is a slight modification of ... denoted by E and protein-protein denoted by δE.
More specifically, gene regulation is controlled by two non-overlapping binding sites, the promoter P and an adjacent binding site B, together with two protein species, the morphogen M and RNA ...

Figure 1. An overview of the genotype-phenotype map. The gene regulatory module takes as input a morphogen gradient [M](x) across a 1-dimensional embryo of length L and outputs a transcription factor TF(x). Gene regulation of TF using a morphogen and RNAP (R) is controlled in a bottom-up manner, by binding to its regulatory region, consisting of a promoter P and adjacent binding site B; E represents the binding free energies of proteins to one of the two binding sites of the regulatory region of the transcription factor T, and δE represents protein-protein free energies that aid the co-operative binding of paired protein complexes. Each energy is calculated from the number of mismatches ρ (Hamming distance), shown in red, between the relevant binary sequences, together with mismatch energies pd and pp for protein-DNA and protein-protein interactions, respectively. Transcription of T is controlled by the probability of RNAP being bound to the promoter, p_RP.

Given [M](x, α) as a function of the position of embryonic cells, x, and a fixed concentration [R] of RNAP in each cell, we follow Shea and Ackers (1985) to calculate the TF concentration profile [TF](x), which assumes the steady-state concentration profile is simply proportional to the probability of RNAP being bound to the promoter: [TF](x) ∝ p_RP(G, R, M(x, α)). The proportionality constant is given by the ratio of the rate of transcription and translation to the rate of degradation of TF, which is not important in our study, since we are only interested in the shape or contrast of [TF](x) that can be achieved.

Monte Carlo Scheme for speciation simulations

We use a kinetic Monte Carlo scheme to simulate a Wright-Fisher evolutionary process for the genome G and α on two independent lineages, as detailed in Khatri and Goldstein (2015b). The rates of fixation of one-step mutants are calculated based on Kimura's probability of fixation (Kimura 1962) ... where W* is related to the threshold for inviability and κ_F is the strength of selection for the trait represented by the spatial patterning process; when the hybrid's log-fitness drops below F* = κ_F log(W*), an incompatibility or DMI arises. We choose W* = 0.2 to give a reasonable number of incompatibilities in a simulation, where typically the maximum of W ≈ 0.6. Note that although the exact form of the fitness used here is slightly different from the one used in Khatri et al. (2009), the qualitative behaviour is the same (Supporting Information).

The speciation simulations consist of two replicate simulations starting with the same common ancestor and with the same fitness function. We draw the common ancestor from the equilibrium distribution for G and α. Our genome is composed of 4 loci: 1) RNAP (whose sequence ... 3) the regulatory region of TF (g_T = [t_P, t_B]), and 4) the morphogen gradient steepness α. Hybrids between the two lines are constructed by independent reassortment of these loci, assuming complete linkage within each locus.
We define a hybrid genotype by a 4-digit string where each digit corresponds to one of the loci defined above and takes one of two values, corresponding to the allele from the 1st or the 2nd line; for example, the hybrid rMTa corresponds to the R locus carrying the allele from the 1st lineage, the M locus the allele from the 2nd lineage, the T locus the allele from the 2nd lineage, and the α locus the allele from the 1st lineage. Note that the underlying sequence of each hybrid changes as different substitutions are accepted in each lineage; the notation only refers to alleles fixed at any point in time. As α is a continuous ...

The total number of n-point DMIs is (2^n − 2) C(L, n): there are C(L, n) combinations of n loci amongst L total loci and, considering a binary choice of alleles across both lines, 2^n allelic combinations or states, 2 of which are the fit combinations in which all alleles come from one lineage or the other, giving 2^n − 2. For example, between each pair of loci there are 2^2 − 2 = 2 mismatching combinations of alleles (e.g., rM and Rm) that could give DMIs, and C(L, 2) = L(L − 1)/2 = 6 pairwise interactions. A similar argument gives a total of 24 3-point DMIs, since there are 2^3 − 2 = 6 mismatching combinations of alleles at 3 loci (e.g., excluding rmt and RMT) and C(L, 3) = 4 3-point interactions; similarly, 14 C(L, 4) = 14 for 4-point interactions. In total, the maximum number of DMIs is I_max = Σ_{n=2}^{L} (2^n − 2) C(L, n) = 3^L + 1 − 2^{L+1}, which for L = 4 loci is I_max = 50.
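The counting argument above is easy to verify numerically. The short check below is an illustrative aside, not part of the original analysis; it enumerates the mismatching allelic combinations for every subset of n ≥ 2 loci and confirms the closed form 3^L + 1 − 2^(L+1), giving 50 for L = 4.

```python
from math import comb

def max_dmis(L):
    """Maximum number of n-point DMIs over all n >= 2 for L loci."""
    return sum((2**n - 2) * comb(L, n) for n in range(2, L + 1))

for L in range(2, 7):
    closed_form = 3**L + 1 - 2**(L + 1)
    assert max_dmis(L) == closed_form
    print(L, max_dmis(L))   # L = 4 gives 50, as quoted in the text
```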
Evolutionary properties of the genotype-phenotype map on each lineage

The properties of this genotype-phenotype map have been previously explored (Khatri et al. 2009). An important property of this genotype-phenotype map is that only a single mechanism of patterning is found, which is that RNAP (R) binds with ...

As discussed in the model section, the DMIs shown in Fig. 4 will have contributions from many different fundamental incompatibility types, which can be 2-point, 3-point and 4-point in nature. Using the method described above to decompose DMIs into fundamental types, we plot the total number of each type of DMI versus divergence time in Fig. 5, where the panels correspond to different scaled effective population sizes from 2Nκ_F = 0.1 to 2Nκ_F = 100. We see clearly that pairwise DMIs are dominant at all population sizes and divergence times, though the difference is diminished at larger population sizes. These results show that, contrary to Orr's prediction that higher-order DMIs should be easier to evolve, higher-order DMIs evolve more slowly and are less numerous than pairwise DMIs. As mentioned in the introduction, the Orr model also predicts that n-point DMIs should increase as ∼ t^n. Here, we find that ... which has the asymptotic form I(t) ∼ t^γ for t ≫ τ and t ≪ T ... Table 1. We see that the total number of DMIs and the 2-point DMIs have a power-law exponent close to γ = 2, which is con... ...we would expect P_I(t) ∼ (µt)^{K*}, and so, given that at least n substitutions are needed for an n-point incompatibility, we would expect K* ≥ n. It is possible the inconsistency here could be resolved by more ...

At larger population sizes (2Nκ_F ≥ 50), Fig. 5 ...

Table 2. Values of the parameters characterising the sub-diffusive growth of DMIs for large scaled population sizes; β = 1 corresponds to normal diffusive motion, β < 1 to sub-diffusion and β > 1 to super-diffusion, while K* corresponds roughly to the number of substitutions required to reach the inviable region.

In Fig. 6 we have plotted the number of 2-point ... we will see that the analysis of these DMIs is not so clear. Examining Fig. 6, we see at small population sizes, 2Nκ_F ≤ 1, that all the DMIs grow approximately quadratically at short times, with a saturating form at long times, as also seen in Fig. 5. In addition, we see that when population sizes are small, the ... sequence length, as phenotypes coded by longer sequences evolve more quickly, so it is possible these two effects could confound each other. Here, for example, E_MB has a stronger selective constraint than δE_RM, but a longer sequence length, as each DNA-protein interaction interface has 10 binary digits versus 5 for each protein-protein interaction interface. Also, it is not clear how a single inviability threshold F* effectively maps to these pairwise incompatibilities, complicating the picture further. However, as the scaled population size increases, we see that the time for I_mt incompatibilities to arise sharply increases, while the time for I_rm increases less rapidly and I_rt even less rapidly. This is consistent with the simple model of transcription factor DNA binding described in Khatri and Goldstein (2015b) and with the hybrid DMIs observed in Fig. 4, as E_MB, which contributes most to I_mt, is under the greatest selection pressure ...

There is still very little understood about the underlying genetic basis that gives rise to reproductive isolation between lineages. Gene expression divergence is thought to be a strong determi... ... sizes), which is as predicted by Orr's framework (Orr 1995), the underlying reason is very different in these models and arises because the common ancestor is likely to be close to the inviable region that gives non-functional binding (Khatri and Goldstein 2015b). ... slower speciation rate for animals with large ranges or population sizes (Mayr 1954, 1970; Rubinoff and Rubinoff 1971; Cooper and Penny 1997). In addition, there is more direct evidence from the net rates of diversification (Coyne and Orr 2004) ...

... sizes has a characteristic negative curvature on a log-log plot, predicted theoretically by Khatri and Goldstein (2015a), indicating that hybrid traits randomly diffuse; a simple model of diffusion does not fit the simulation data well. Instead, a model of sub-diffusion, which would arise if there are a number of kinetic traps giving a broad distribution of substitution times, does fit the data well. This is consistent with the finding that the genotype-phenotype map has a rough fitness landscape, which is only revealed at sufficiently large population sizes (Khatri et al. 2009). These predictions can be tested empirically by more de...

There is an inherent simplicity to our gene regulatory module for spatial patterning, which requires only two proteins to bind to a regulatory region to turn on transcription; a key direc...
3,210.2
2017-01-01T00:00:00.000
[ "Biology" ]
On Estimating Fluxes due to Small-Scale Turbulent Convection in a Rotating Star

The way in which turbulent fluxes are usually represented in computations of large-scale flow in the convection zones of the sun and other stars is briefly described. A model of an ensemble of eddies that is capable of generalization to circumstances more complicated than the usual essentially spherically symmetrical convection zone is outlined. Generalization usually requires the introduction of new postulates, and, in so doing, also lays bare some of the assumptions, often implicit, in the usual mixing-length formalisms.

Introduction

Computations of large-scale flow in stellar convection zones, be it a meridional circulation or simply rotation, require some representation of the fluxes of heat and momentum by turbulent motion on scales too small to be resolved. It is not uncommon in astrophysics to adopt the attitude that since the theory upon which any such representation is based (usually mixing-length theory) cannot be trusted, it is hardly necessary to exercise care in deriving the formulae for the fluxes. Readers can thus be faced with the prospect of learning of the results of complicated numerical computations, yet being unable to know precisely what those results mean.

The main point I wish to make is that the situation can be improved. The so-called theory used for calculating the turbulent fluxes can be refined, and the numerical machinery that has been built to model the large-scale flow can be used experimentally to test the refinements.

First I shall describe the broad principles behind many of the computations of turbulent fluxes in astrophysics. Then I shall outline a procedure whereby the representations of the small-scale motion might be developed in a coherent manner.

The Turbulent Fluxes

The first step in all computations of large-scale astrophysical flow, whether it is stated explicitly or not, is to imagine a separation of scales. Motion with characteristic length and time scales greater than some value, usually implicitly prescribed by the numerical technique, is treated by solving the governing equations directly, using some parametrized approximation to describe the motion with characteristic scales smaller than this value; ignoring the small-scale motion completely is a trivial form of such a parametrization. The large-scale fields of velocity and temperature, say, may be considered to derive from the total fields v and T via the application of an averaging procedure, which I denote by an overbar. Thus I shall refer to fields such as v and T as mean quantities; the fields u and ϑ are fluctuations. In this account, I have in mind particularly the convective envelopes of cool stars, in which turbulent motion is driven predominantly by buoyancy.
ISRN Astronomy and Astrophysics For simplicity of presentation I refer the fields to rectangular Cartesian coordinates x i , with x 3 vertical, and I shall ignore the curvature of the convection zone.This is a fair approximation if the scales to be parametrized are small enough.Equations of motion may then be written, in an obvious notation, where ρ and p are density and pressure, and h are specific internal energy and enthalpy (viz., internal energy and enthalpy per unit mass), g is the magnitude of the gravitational acceleration g i , F i is the radiative heat flux, t is time, and δ i j is the Kronecker delta.Throughout my discussion I adopt the summation convention for repeated indices except where one of them is enclosed in parentheses.The total energy equation ( 3) is derived by combining the kinetic-energy equation, obtained by taking the (scalar) product of (2) with v i to relate the rate of change of the kinetic energy to the rate of working, with the thermal-energy equation which is a statement of the first law of thermodynamics.In (5), c v is the specific heat at constant volume.These equations must, of course, be supplemented by an equation of state and an equation determining F i .Viscous terms have been ignored; so too have perturbations to g. To obtain equations for the large-scale flow the averaging procedure is applied to the full equations of motion.Usually some approximations are made in representing the fluctuations, the most common being the neglect of density fluctuations, except where they appear in the enthalpy flux.This constitutes part of the Boussinesq approximation, which has been justified by Spiegel and Veronis [1] for the case in which v i = 0, when the fluctuations are generated by buoyancy alone, and when the length scales of the fluctuations are all much smaller than the scale heights of density and pressure of the averaged state.Despite the fact that these conditions are believed not to be satisfied in most stars, I too shall nevertheless adopt the practice, obtaining The only fluxes arising from the small-scale turbulent motion that need to be calculated for closing these equations are evidently the Reynolds stress R i j = ρ u i u j , the turbulent heat flux F ci = ρ h u i , where the prime denotes a turbulent fluctuation, and the turbulent kinetic-energy flux In the absence of a large-scale mean flow the equations for the fluctuations are given in Boussinesq approximation by [1] where c p is the specific heat at constant pressure and is the superadiabatic lapse rate; δ = −(∂ ln p/∂ ln ρ) T is the inverse dimensionless isothermal compressibility.In what follows I shall assume that the perturbations are optically thick, so, ignoring, for simplicity, fluctuations in the thermal conductivity, where K = 4acT 3 /3χ ρ, in which a is the first radiation constant, c is the speed of light, and χ is the Rosseland mean opacity.Various procedures have been suggested for treating optically thin perturbations (e.g., [2][3][4][5]); I do not discuss them here other than to suggest the procedure recommended by Gough [6], which is consistent with the procedure adopted here for dealing with the other fluctuation equations and which, unlike the other procedures, provides a smooth transition from optically thick to optically thin fluctuations. 
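As a small worked illustration of the optically thick conductivity quoted above, the following sketch evaluates K = 4acT^3/(3χρ) and the corresponding thermal diffusivity κ = K/(ρc_p); the numerical inputs are assumed, not taken from the paper.

# Minimal sketch of the radiative conductivity K = 4 a c T^3 / (3 chi rho)
# defined in the text, and the thermal diffusivity kappa = K / (rho c_p).
# The numerical values below are illustrative only.
a_rad = 7.5657e-15     # first radiation constant [erg cm^-3 K^-4]
c_light = 2.9979e10    # speed of light [cm s^-1]

def radiative_conductivity(T, rho, chi):
    """K for optically thick perturbations; chi is the Rosseland mean opacity."""
    return 4.0 * a_rad * c_light * T**3 / (3.0 * chi * rho)

def thermal_diffusivity(T, rho, chi, c_p):
    return radiative_conductivity(T, rho, chi) / (rho * c_p)

# conditions loosely representative of a subphotospheric layer (assumed values)
print(thermal_diffusivity(T=1.0e4, rho=1.0e-6, chi=1.0, c_p=3.5e8))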
Rough Estimates of the Turbulent Fluxes In most cases some kind of diffusion approximation based on mixing-length theory is used to compute the turbulent fluxes. Let us consider first the convective heat flux. Since the Boussinesq approximation has already been used to derive (6), one might as well continue doing so and, in particular, neglect the contribution to the convective heat flux from pressure fluctuations. That flux can then be written (It is perhaps worth pointing out that for a perfect un-ionizing gas the pressure fluctuation makes precisely no contribution to h when the latter is expressed in terms of the temperature fluctuation.) In this and all subsequent equations overbars are omitted from mean quantities, except where there is risk of confusion. If there were no large-scale motion the star would be spherically symmetrical in the mean (I am ignoring the possibility of there being a substantial magnetic field) and only the vertical components β and F c of the superadiabatic lapse rate and the convective flux would be nonzero. Moreover, the Reynolds stress would be diagonal; I denote its (3,3) component, the only component that enters hydrostatics when the spherical geometry is ignored, by p t. In that case the simplest mixing-length estimates for the magnitudes of typical velocity and temperature fluctuations can be obtained by first ignoring the time derivatives and also ignoring the role of the pressure gradient in (7), so that the convective motion is considered to be purely vertical, with velocity w. Then one replaces ∂/∂x i by ℓ⁻¹, where ℓ, called the mixing length, is some characteristic lengthscale of the largest turbulent eddies. For upwardly directed turbulent motion, this leads to where κ is the thermal diffusivity: κ = K/ρc p. In obtaining these estimates it was assumed that ℓ is much less than the scales of variation, H, of all the other variables appearing in (13), including β. That assumption is the basis of so-called local theories and allows fluctuations at any point to be expressed in terms of the mean state at that point. It is violated in any of the usual mixing-length models of stellar convective envelopes, for which the calibration of ℓ, usually to reproduce the solar radius, leads to a value of ℓ/H in excess of unity (e.g., [7,8]). To estimate the average wϑ one simply multiplies the estimates of w and ϑ from (13), and introduces a scaling factor α H to account for the imperfect correlation between velocity and temperature fluctuations, giving where S is the product of the Prandtl number and a locally defined Rayleigh number based on the lengthscale ℓ. The factor α H depends on the way in which the mixing-length ideas have been employed to describe the dynamics; in a description of the turbulence more realistic than those assuming that all motion is vertical, it depends also on the supposed geometry of the turbulent eddies. The (3,3) component p t of the Reynolds stress tensor R i j can be estimated similarly: in which α R plays a similar role to α H in (14). The turbulent kinetic-energy flux is usually ignored; it is proportional to ρw³, with a constant of proportionality that depends on the (uncertain) spatial correlations of the turbulent velocity field. Deep in stellar convection zones S is very large; in the solar convection zone it exceeds 10²⁰. This implies that the coherent convective motion is essentially adiabatic. Microscopic diffusion of heat is negligible, and Then which, of course, are independent of the thermal conductivity K.
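The scalings behind these estimates can be put into a few lines of code. The sketch below uses the generic adiabatic-limit forms w ~ (gδβ/T)^(1/2) ℓ and ϑ ~ βℓ with free correlation factors α_H and α_R; it is meant only to show how F_c and p_t depend on β and ℓ, and does not reproduce the paper's exact equations (13)–(18). All numerical inputs are assumed, order-of-magnitude values.

# Rough sketch of adiabatic-limit mixing-length scalings: w ~ sqrt(g*delta*beta/T)*ell,
# theta ~ beta*ell, F_c ~ alpha_H*rho*c_p*w*theta, p_t ~ alpha_R*rho*w^2.
# Prefactors and the calibration constants alpha_H, alpha_R are left free.
import math

def mixing_length_estimates(beta, ell, g, delta, T, rho, c_p,
                            alpha_H=1.0, alpha_R=1.0):
    w = math.sqrt(g * delta * beta / T) * ell     # typical vertical velocity
    theta = beta * ell                            # typical temperature fluctuation
    F_c = alpha_H * rho * c_p * w * theta         # convective heat flux ~ K_c * beta
    p_t = alpha_R * rho * w * w                   # (3,3) Reynolds stress ("turbulent pressure")
    return w, theta, F_c, p_t

# assumed, order-of-magnitude solar-convection-zone numbers (cgs units)
print(mixing_length_estimates(beta=1.0e-9, ell=5.0e9, g=2.7e4,
                              delta=1.0, T=1.0e6, rho=1.0e-2, c_p=3.5e8))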
The formula (18) for F c has been written as the product of a gradient β and a transfer coefficient K c .The reason for so doing is that it is this form that has been the most common basis for generalization to situations that are no longer spherically symmetrical in the mean.What is sometimes done (e.g., [9]) is to assume that ( 18) is the 3-component of a vector equation and that the transfer coefficient K c is a scalar, even though it appears to depend on the vectors β i and g i .Thus one obtains with the turbulent heat flux being parallel to the superadiabatic temperature gradient.One might more realistically envisage trying to generalize the formula to something like since generally, as will become apparent later, one would not expect F ci and β i to be parallel.Accepting (19) exposes the difficulty of principle in deciding how the coefficient K c should be computed.Perhaps the simplest hypothesis would be to leave g as it stands and consistently replace β by the magnitude (β i β i ) 1/2 of β i in the formula (18) for K c .But in practice even cruder approximations are often adopted, which at best are based on the value of K c computed from an initial trial model in which v i is constrained to be zero, even though the resulting available heat energy advected by the large-scale flow in the final model may turn out to be comparable with F ci . The treatment of the Reynolds stress in the presence of shear is also motivated by the desire to reduce the formula to a linear Fick law.In most cases the usual scalar viscous law is adopted, with a turbulent shear viscosity derived in some way from the product of and a turbulent-velocity estimate such as that in (17). It is convenient to consider the Reynolds stress tensor R i j to be divided into two parts: a part R 1i j that vanishes when v i = 0 and the part R 0i j that does not.In the absence of a magnetic field or other perturbing agent, R 0i j is diagonal and acts as a (generally anisotropic) turbulent pressure p ti .It is usually ignored in computations of stellar structure.This may be dangerous, because the presence of any anisotropy in p ti can modify large-scale flow (e.g., [10,11]). 
In modelling the remaining terms in the Reynolds stress, once again an assumption of localness is usually adopted, so that it may be presumed that the mean flow may be replaced by a truncated Taylor expansion about any point of interest. Thus one can write There is no term depending on undifferentiated mean velocities v i because the formula must be invariant under Galilean transformation. Gross simplifications of the physics have always been made in attempts to evaluate the viscosity tensor. For example, Wasiutyński [12] assumed fluid elements to travel with constant acceleration, relative to an inertial frame, from initial velocities having uncorrelated components. He obtained a formula which in Cartesian coordinates would have been, had spherical geometrical terms been ignored, where M nk is a symmetrical tensor with principal axes in the east-west, north-south, and vertical directions. Such a form, simplified further by assuming axisymmetry about the vertical, was adopted in some of the early discussions of the sun's equatorial acceleration (e.g., [10,12,13]). The assumptions upon which the derivation is based imply, for example, that the convective velocities are not influenced significantly by the solar rotation. Some investigators assume complete isotropy of the turbulent motion, and no resistance to dilatation of the mean flow, which yields where η is a scalar. This is the ordinary shear viscosity law, which, unlike the anisotropic formulae, has a (no doubt erroneous) tendency to push the mean flow towards a state of rigid rotation. The Eddy-Ensemble Approach In order to refine the estimates of the turbulent fluxes one must consider more carefully the dynamics of the turbulent eddies. I adopt here an eddy-ensemble approach developed originally for pulsating stars by Gough [6,14], who subsequently discussed a generalization to accommodate rotating flows [15], which this paper extends. The model representing the turbulent motion is much more carefully defined than in the previous section, and it is correspondingly more complicated to evaluate. Yet in the simple circumstance of convection in a fluid with no mean background flow it yields essentially the same results. The merit in the approach is that it lays bare the underlying assumptions, thereby making it clear how the theory can be extended to more complicated situations and what additional assumptions, if any, are needed to do so. It is safer and simpler to compute the fluxes directly from the formulae ρ u i u j and ρ c p ϑ u i rather than to attempt an evaluation of transport coefficients such as K ci j and M i jkm or generalizations of them, as is more common (e.g., [16,17]). The reason is that (20) and (21) depend on assumptions additional to those of the basic turbulence theory and must therefore predict fluxes that are less soundly based and more difficult to test. Moreover, the tensors K ci j and M i jkm are of higher rank than the fluxes they determine, so one risks exacerbating the complexity of the calculation unnecessarily. Axially Symmetric Convection. In order to establish the procedure, I consider first the simple case of a star that is spherically symmetrical. Then the turbulence is (statistically) axially symmetrical about the vertical, and R i j = p t(i) δ i j has only two independent components.
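For reference, the ordinary scalar shear-viscosity closure mentioned above can be sketched as follows. The deviatoric form (with the dilatation removed) is one plausible reading of the "no resistance to dilatation" statement, not the paper's own equation; η and the example velocity gradient are assumed values.

# Sketch of a scalar shear-viscosity closure for the shear part of the Reynolds
# stress: proportional to the mean rate of strain with the dilatation removed.
import numpy as np

def shear_stress_scalar(grad_v, eta):
    """grad_v[i, j] = d v_i / d x_j for the mean flow; eta is a scalar turbulent viscosity."""
    S = 0.5 * (grad_v + grad_v.T)                   # mean rate-of-strain tensor
    S_dev = S - (np.trace(S) / 3.0) * np.eye(3)     # remove isotropic dilatation
    return -2.0 * eta * S_dev                       # shear part of the Reynolds stress

grad_v = np.array([[0.0, 1.0e-6, 0.0],              # assumed mean velocity gradient [1/s]
                   [0.0, 0.0,    0.0],
                   [0.0, 0.0,    0.0]])
print(shear_stress_scalar(grad_v, eta=1.0e12))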
The approach I adopt here assumes that an eddy comes into being spontaneously (as a result of the convective instability acting on some random fluctuation in the medium) and subsequently grows in accordance with equations of motion linearized about the mean, background, state, and at all times there is some likelihood that it breaks up due to internal shear instability.The disruption is regarded as being instantaneous and occurs stochastically with probability proportional to the rms rate of strain in the eddy.It is assumed that the smaller-scale motion so generated contributes negligibly to the fluxes.I shall describe here only a local theory, using the Boussinesq approximation with β i constant within any eddy.The discussion is based on the review by Gough [18], where further elaborations and refinements, such as an approximate account of the smaller-scale motion, can be found. To render the notation more transparent I rename the x i coordinates (x, y, z), use vector notation where convenient, and denote the components u i of the fluctuating velocity field u by (u, v, w).The convective flow is represented by a superposition of eigenfunctions of the linear problem, each of which is confined between two horizontal planes a distance ±(1/2) from its central level z 0 .I adopt stress-free isothermal boundary conditions, which yield the simplest formulae.Thus for z 0 − (1/2) ≤ z ≤ z 0 + (1/2) , a convectively unstable eddy created at time t 0 with an initial vertical velocity amplitude W 0 may be represented by in which where the planform f (x, y) satisfies and where k h = (k x , k y , 0) is the horizontal component, with magnitude k h , of a wavenumber k characterizing the eddy, having magnitude k; k v = π/ is its vertical component.Also f x := ∂ f /∂x, and so forth, ∇ 2 h is the horizontal Laplacian operator; the overbar denotes horizontal, or ensemble, average.Furthermore, the growth rate q of the mode is given by in which and now (e.g., [19]), which differs from the definition (15) by just a constant geometrical factor.Where |z − z 0 | > (1/2) , u = 0 and ϑ = 0 (although, of course, there are other eddies there).The formulae can easily be generalized to account for optically thin eddies [6].Note that formally stable eddies also exist, with ϑ = −βq −1 w, but their direct contribution to the fluxes is not significant; they formally contribute indirectly via a filling factor f to be introduced below.The geometrical factor Φ and the planform f define the degree of anisotropy of the turbulence.An eddy is presumed to come into existence as a residue of the turbulent cascade from the breakup of previously existing eddies.The instant at which it is considered to have an existence of its own is not completely specified, although to be unstable it must have a temperature fluctuation that has a (randomly achieved) positive correlation with the vertical velocity.The actual criterion that is adopted to define the initial conditions is absorbed into the definition of the scaling factor Λ appearing in (35) and (37) for the turbulent fluxes, which in practice is subsequently calibrated by comparison with astronomical observation. 
The turbulent fluxes can now be computed at height z by first constructing the appropriate fluxes due to a single eddy and then averaging over all eddies that intersect the horizontal plane through z, weighting the average by the probability that the eddy has not been disrupted.The difficulties in performing this average arise from the computation of that probability, from assigning the distribution of initial amplitudes and the rate at which eddies are created, and for assigning a probability distribution to the eddy geometry characterized by the parameter Φ and, in the case of rotating flow discussed in the next subsection, the function f . Let us consider first the computation of the disruption probability.Eddy disruption is thought to occur predominantly as a result of shear instability within the eddy; its probability of occurring is therefore proportional to the magnitude of that shear.The coefficient of proportionality depends on the geometry of the eddy and is difficult to estimate because shear instability is ill understood.Perhaps the most plausible first guess is to take as the measure of the internal shear the square root e = e 0 exp [q(t − t 0 )] of the total squared rate of strain e 2 = e i j e i j , where e i j = (1/2)(∂u i /∂x j + ∂u j /∂x i ) and the angular brackets denote average over the volume occupied by the eddy.The ansatz is plausible, for at least it has the correct form in the limits of large and small k v /k h when the eddy motion approaches rectilinear shear.From the structure (25) of the velocity field, the disruption probability per unit time can then be evaluated to be where γ is some (constant) parameter to be calibrated later.The probability that an eddy created at time t 0 still exists is therefore where λ = γq −1 e 0 .It is an important assumption of the theory that the eddy dynamics is dominated by its growth against the unstably stratified environment, which can be well approximated by linear theory.Then the duration of the process of eddy disruption can be regarded as being short compared with the mean eddy lifetime τ, which permits disruption to be considered to be instantaneous.This implies that the mean eddy lifetime is given by the final, approximate, expression having been obtained as the leading term in an expansion for small λ. We come now to the distribution of W 0 and the eddy creation rate.I shall adopt the common practice of assuming that the turbulent convective flow is dominated at any point by eddies with a unique size and shape.Since the flow in a spherically symmetrical (nonrotating) star is statistically axisymmetrical, all orientations of the planform f are equally probable.Furthermore, since the theory is local and, for the present, the convection zone is considered to be time independent in the mean, the formula for the creation rate and the initial amplitude W 0 depend on neither z 0 nor t 0 at any given value of z. If n(z 0 ) is the rate of creation of eddies centred at z = z 0 per unit eddy-volume, then the proportion of the volume occupied by eddies at time t is whence n = f τ −1 ; f is essentially the filling factor of eddies with positive linear growth rates. 
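The survival bookkeeping described above lends itself to a short numerical check. The sketch below integrates the survival probability P(t) = exp[−λ(e^{qt} − 1)] that follows from a disruption rate γe₀e^{qt} with λ = γe₀/q, and compares the resulting mean lifetime with the small-λ estimate q⁻¹ ln(1/λ); it does not reproduce the paper's closed-form expressions, and the values of q and λ are assumed.

# Numerical check of the eddy survival probability and mean lifetime sketched above.
import numpy as np

def mean_lifetime(q, lam, t_max_factor=50.0, n=200001):
    t = np.linspace(0.0, t_max_factor / q, n)     # time since eddy creation
    P = np.exp(-lam * np.expm1(q * t))            # survival probability exp(-lam*(e^{qt}-1))
    dt = t[1] - t[0]
    return np.sum(0.5 * (P[1:] + P[:-1])) * dt    # trapezoidal integral of P over time

q, lam = 1.0e-5, 0.05                             # assumed growth rate [1/s] and lambda
print(mean_lifetime(q, lam), np.log(1.0 / lam) / q)   # compare with ~ q^-1 ln(1/lam)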
One is now in a position to evaluate the convective heat flux by integrating the contributions from all eddies.Formally Only eddies that exist at height z contribute to the integral over eddy locations z 0 , so the outer integral in this simple theory extends only from z − /2 to z + /2.Moreover, consistent with the Boussinesq approximation, when evaluating that integral the spatial variation of n can be ignored.Note that if the horizontal and vertical wavenumbers are of the same order of magnitude, this expression for the probability is similar to that derived from the more literal interpretation of the mixing-length annihilation hypothesis suggested by Spiegel [20] in terms of vertically rising and falling fluid elements, which yields a breakup probability per unit time of w/ , which is proportional to k v W 0 .It is evident from (30) that the two expressions are rather different, especially for the highly elongated eddies having k v k h that are sometimes favoured by that literal interpretation [6], particularly if only those eddies with the greatest growth rates are considered, as Spiegel suggested.According to the formula proposed here, such eddies are prone to rapid disintegration by the shear in the highly elongated turbulent flow.Nevertheless, this approach is tantamount to no more than a sophisticated mixing-length formalism.It is evident that the manner in which the mixing-length hypothesis is interpreted affects the predicted anisotropy. With these assumptions the averages can easily be performed.The analysis is essentially the same as that presented by Gough [6,18].The resulting fluxes are which is essentially of the same form as (14).In this formula, which demonstrates how the filling factor f , the amplitude at which the eddy is considered to come into existence (which determines the value of e 0 ), and the scaling factor γ defining the eddy destruction rate combine together into a single (calibrateable) parameter.Note that if the assumption that is proportional to a scale height H of the background state is made: = αH, the constant of proportionality α combines similarly.Equation ( 35) also demonstrates that the fluxes depend only weakly on λ, and therefore only weakly on when one considers the eddy to have come into existence.One can evaluate the mean-squared velocities for determining the Reynolds stress similarly: Equations ( 35)-( 37) with ( 29) are the same as ( 14)-( 16), except for the constants multiplying the functional forms of F c , p t , and S, and they show the relation between α H and α R , and Λ and Φ. Finally one can evaluate the flux of kinetic energy, K t , likewise: where It requires yet another independent assumption to specify C. If, for example, it is assumed that γ = 1 and that every horizontal plane is tessellated with Christopherson hexagons (e.g., [21]), which can be represented by six wavenumbers k h of equal magnitude k orientated uniformly in a circle, then 0.078 (cf.[22,23]); if instead the plane is filled with closely packed cylinders with stagnant fluid in the interstices, as Böhm-Vitense [2] might prefer, then C 0.034. The analysis has not yet provided a method for computing the anisotropy parameter Φ.The simplified formalism presented here is incomplete, and it exposes important issues that must be resolved in order to close what remains a rather naive eddy-ensemble approach.However, before attempting a generalization, an observation of the structure of the formulae (35)-(37) is perhaps not out of place. 
Although it has been assumed that turbulent eddies with a given value of Φ dominate the flow, it is appreciated that actually this Φ is merely meant to be representative of a spectrum of eddy shapes.What is meant by an eddy that dominates the flow is one that contributes maximally to a particular flux.Thus w 2 is maximized at fixed when Φ = 1.5, and u 2 is maximized at Φ = 3.If these values are used separately for the two fluxes, which is not an implausible procedure if the distribution of eddy shapes is rather flat, one finds This relation is also a property of a single eddy with Φ = 2 (which is close to the geometric mean of the two stressmaximizing values, namely 3/ √ 2 = 2.1).So perhaps this intermediate value of Φ should be used to describe the representative eddy for computing the stress tensor.It is interesting, though perhaps merely coincidental, that the eddy shape that maximizes F c at fixed has Φ = 5/3, which does not differ substantially from 2. Convection in a Rotating Fluid.Generalization of the procedure outlined in the previous subsection in the presence of a general background mean flow is not straightforward, because it requires additional hypotheses.Nevertheless, some progress can be made. The large-scale currents in stars envisaged here include both meridional circulation and zonal flow.I continue to assume that a meaningful separation of the scales of motion is possible, so that a local theory provides a fair first approximation to the turbulent fluxes.As usual only the leading terms in the Taylor expansion of v about a point will be considered. Note first that the mean motion in the vicinity of a point can be decomposed into a translation, a rotation, a pure shearing motion, and an isotropic dilatation.The first does not affect the turbulent motion; the last modifies the turbulent pressure but does not add to it off-diagonal terms; it can be dealt with by generalizing the analysis by Gough [6] for sinusoidally pulsating stars.Rotation modifies the turbulence via the Coriolis forces, and pure shearing motion stretches the eddies; both of these do generate off-diagonal terms.Here I consider the special case where only the local rotation is appreciable; it is the simplest of the steady flows to deal with because there is no shear, and the modifications to the eigenfunctions that the mean flow imposes are relatively straightforward to calculate.Steps towards accounting for rectilinear shearing flow are being taken by Smolec et al. [24,25]. In order to proceed I shall make two additional simplifying assumptions.Firstly, I assume that surfaces of constant potential temperature (or entropy) are horizontal.This may not be strictly the case, as Durney [9] has pointed out, although the absence of a large pole-equator temperature difference in the sun suggests that at least in that case the assumption may not be a bad first approximation, at least in the upper parts of the convection zone.Secondly, I assume that the vorticity of the mean flow is much less than the eddy growth rate, which (almost) circumvents the issue of how axisymmetry is broken by rotation.This assumption too is easily satisfied in the upper parts of the solar convection zone, but not so well for the slowest eddies that occupy much of the zone beneath.Thus, as with the (sometimes implicit) assumption that the mixing length is much less than the background scale heights, the formulae I derive are not strictly valid, and one should be aware of that when they are used in larger-scale calculations. 
As before, the linear eigenfunctions of a plane Boussinesq layer are computed, but now the layer is presumed to rotate with angular velocity Ω = (1/2) curl v.The Cartesian axes are orientated such that Ω lies in the x − z plane.In view of the assumption Ω := | Ω | q μ, there is no question of the rotation stabilizing the convection; the growth rate of the convective mode and its eigenfunction can be expanded in powers of ε := Ω/μ, here up to second order: The leading-order solution is given by which are equivalent to the formula (25) with q 0 given by ( 27)-( 29) for q.Thermal diffusion is important only in the immediately subphotospheric layers of the star.There the growth rate is high, ε is very small, and therefore so typically is the influence of the rotation.Deeper down where rotation may matter, the motion is close to being adiabatic.Therefore in calculating the corrections to the structure of the eddies it is expedient to assume adiabaticity, approximating q 0 by μ and omitting some terms in the formulae for the eigenfunctions, in order to maintain a degree of lucidity.It would be quite straightforward to retain the nonadiabatic corrections were the necessity to arise, but I refrain from doing so here because the added complexity of the formulae would render them less easy to appreciate.The rotational modification to the convective mode can accordingly be represented by where θ is the angle of inclination of Ω to the vertical in the positive x direction. One can now go through the procedure described in Section 4.1 to evaluate the turbulent fluxes.That is quite straightforward except for one issue: the determination of the potential horizontal anisotropy of the turbulent velocity amplitudes resulting from the existence of a preferred direction introduced by the rotation.The assumption that the growth of the eddies satisfies linearized dynamics implies that anisotropy formally arises only from an anisotropy of initial conditions.Since the creation of the unstable convective eddy perturbations is not directly addressed by the formalism, an additional assumption must therefore be made.The assumption that I adopt here is simply that the filling factor f is isotropic.When ISRN Astronomy and Astrophysics it was introduced in the discussion in Section 4.1 of axisymmetric turbulence I implicitly presumed it to be constant.But now that axisymmetry is lost, that presumption must be relaxed.Here, I adopt the principle of detailed balance, by applying (33) separately in every horizontal direction.That implies that the creation rate n is proportional to q and therefore shares with q the same O(ε 2 ) anisotropy.However, it transpires that the anisotropy it imparts to the velocity field is even smaller, reducing the O(ε 2 ) terms by an O(ln 1/ε) factor; it can therefore be neglected.After carrying out the averaging, there results in which and where F c and p t are given by ( 35) and (37).The general formula for K t is algebraically complicated, but in the case when the planform is composed of randomly orientated harmonic hexagons it simplifies to where K t is given by (39).The most obvious influence of the rotation on the convective fluxes is to rotate them principally out of the plane of g and Ω, namely, in the y direction in the coordinate system adopted here.The angle of rotation is proportional to ε = Ω/μ.There are additional O(ε 2 ) components generated in the plane of g and Ω, and also the overall magnitudes of the fluxes are reduced somewhat, also by O(ε 2 ).The rotation of the momentum 
fluxes generates off-diagonal terms in the Reynolds-stress tensor. Discussion Many assumptions have been necessary to arrive at a tractable procedure.Most notably, it has been assumed here that the turbulent medium can be represented as an ensemble of eddies that not only do not interact directly with each other-they do so implicitly via the background (mean) stratification, however-but that nonlinear advection processes and nonlinearities in the thermodynamics within an eddy (the latter being a consequence of the assumption that the theory is local, which is intimately linked with the Boussinesq assumption) can be ignored throughout almost the entire lifetime of each eddy.The only explicit nonlinearity is the eddy-disruption process, which is assumed to occur on a timescale short compared with the eddy lifetime, which is essential, although not sufficient, for the validity of the linearization assumption: each eddy is presumed to grow via the linear influence of the background state, although there is an implicit nonlinear backreaction on the mean state via the turbulent fluxes.That reaction has been presumed to preserve the alignment of the superadiabatic temperature gradient with gravity, although one can, with additional assumption about the preservation (or otherwise) of eddy shape, take a departure formally into account.Relating that to the fluxes, however, would require a subsequent global calculation. The formalism is basically a mixing-length approach, which itself is also characterized by linear reasoning, either explicitly when describing the energy exchange with the background state or implicitly via the preservation of eddy size and shape, which is not predicted by the simplest forms of the theory; in any case, it always requires admitting the necessity of an additional assumption about eddy shape.Even though in linear theory the harmonic relation ( 26)˜is sufficient for determining the growth rate q and the z and t dependence of the eigenfunctions, more detailed specification is necessary for relating the kinetic-energy flux to the heat flux and the Reynolds stress, in particular for determining the correlation constant C. Eddy size is basically the mixing length, which is not an integral part of the theory.Therefore nor is any procedure to define how it changes with changing vorticity (and, in more general circumstances, the changing shear) in the mean flow.Here I have assumed that the "filling factor" of eddies, a fundamental property of the eddy correlation, is unaffected, leaving the way clear to determine the effect on the eddy shape from the linearized dynamics.As always in such a theory, the value of the vertical extent of the eddy is left to other considerations.In particular, no attempt has been made to address how it might change with the properties-in the simple case considered in this paper, the value of the local rotation-of the background flow. 
There remains much work to do before an even modestly reliable general theory emerges.Relaxing the restrictions that were imposed in the discussion above not only complicates the analysis but also requires one to face new problems whose resolution may depend on introducing new hypotheses.Accepting that rotation is not small compared with eddy growth rates, for example, raises the issue of the dependence of eddy creation on the orientation of the eddy.Imagining constant entropy surfaces to deviate from the horizontal introduces the possibility of baroclinic instability.How do these compete with the convective modes?It is easy to invent a set of superficially plausible hypotheses for tackling these problems, but some other no less implausible models may yield rather different results.The simple formulae ( 14)-( 16), for example, result from a relatively wide variety of physical pictures of the turbulent motion, but the more subtle results such as (44)-(47) are more sensitive to the details of the assumptions.Can one even hope to develop a theory on the basis of linear eigenmodes?Or do nonlinearities play an essential role during the growth of an eddy?How inaccurate is the process of separating scales?It is certainly quite evident that assuming a unique eddy lengthscale at any location can be a poor recipe, especially near the edges on convection zones where local stability changes sense.One might improve the description of the small-scale turbulence by using a nonlocal theory, but how can one develop a procedure that is both tractable and realistic?Can the mixing-length formalism in any guise be expected to provide an adequate description of the turbulence? It has been argued that because mixing-length theory is even less reliable at predicting fluxes in complicated situations than it is in the simplest of cases, the complexities of its consequences should be ignored and that it is not worthwhile attempting to generalize the theory.I disagree with this point of view.The mixing-length theory is still the only practical tool available for modelling stellar convection zones for evolutionary calculations-although work such as that by Trampedach and Augustson [26] may change that, but perhaps not before mixing-length theory has been calibrated more precisely [27]-and it should be utilized to fullest advantage until it is superseded.Why prefer a formula such as (23), which is surely incorrect, to one that attempts to include the effects of phenomena that are known to exist?Maybe the results obtained with a more intricate theory will at first be numerically no better at reproducing astronomical observations, but only by studying the consequences of rotation, shear, baroclinicity, and also of magnetic fields on the turbulence can some idea of the importance of these factors be acquired.Ogilvie [28], for example, has made some progress with the latter, using a rather different approach from the one adopted here.The introduction of each new factor tends to require new hypotheses, or to bring into prominence already existing hypotheses upon which the theory did not previously depend in an important way.With these come new parameters which need to be determined.In principle we have at our disposal laboratory experiments and numerical experiments performed with the solar modelling programmes to calibrate the formulae.Both should be utilizedindeed, some have already (e.g., [24,25,29]), but only to a limited extent-although eventually care must also be taken to ensure that sufficient checks remain to test 
the predictive power of the theory.
8,842.4
2012-02-29T00:00:00.000
[ "Physics", "Environmental Science" ]
Entropy Analysis of the Flat Tip Leakage Flow with Delayed Detached Eddy Simulation In unshrouded turbine rotors, the tip leakage vortices develop and interact with the passage vortices. Such complex leakage flow causes the major loss in the turbine stage. Due to the complex turbulence characteristics of the tip leakage flow, the widely used Reynolds Averaged Navier–Stokes (RANS) approach may fail to accurately predict the multi-scale turbulent flow and the related loss. In order to effectively improve the turbine efficiency, more insights into the loss mechanism are required. In this work, a Delayed Detached Eddy Simulation (DDES) study is conducted to simulate the flow inside a high pressure turbine blade, with emphasis on the tip region. DDES results are in good agreement with the experiment, and the comparison with RANS results verifies the advantages of DDES in resolving detailed flow structures of leakage flow, and also in capturing the complex turbulence characteristics. The snapshot Proper Orthogonal Decomposition (POD) method is used to extract the dominant flow features. The flow structures and the distribution of turbulent kinetic energy reveal the development of leakage flow and its interaction with the secondary flow. Meanwhile, it is found that the separation bubble (SB) is formed in tip clearance. The strong interactions between tip leakage vortex (TLV) and the up passage vortex (UPV) are the main source of unsteady effects which significantly enhance the turbulence intensity. Based on the DDES results, loss analysis of tip leakage flow is conducted based on entropy generation rates. It is found that the viscous dissipation loss is much stronger than heat transfer loss. The largest local loss occurs in the tip clearance, and the interaction between the leakage vortex and up passage vortex promotes the loss generation. The tip leakage flow vortex weakens the strength of up passage vortex, and loss of up passage flow is reduced. Comparing steady and unsteady effects to flow field, we found that unsteady effects of tip leakage flow have a large influence on flow loss distribution which cannot be ignored. To sum up, the current DDES study about the tip leakage flow provides helpful information about the loss generation mechanism and may guide the design of low-loss blade tip. Introduction In a shroudless turbine blade, the pressure difference from the pressure side to the suction side causes a part of the flow from the pressure side passing through the tip clearance to the suction side, thereby the tip leakage flow forms. Because of the large lateral velocity gradient of the leakage flow, it mixes with the secondary flow and causes large loss. The tip leakage flow could account for about 30% of the overall loss in turbine stage [1]. Meanwhile, as the inlet temperature continues to rise, the tip leakage flow could also overheat and damage the blade. Understanding the loss mechanism and developing the loss control method are of great significance for guiding the design of low-loss blade tip and increasing the efficiency of turbine. Regarding the study of the tip leakage aerothermal performance, there are plenty of mechanism studies of tip leakage flow. Moore and Tilton [2] developed an analytical model to investigate the aerothermal performance of an axial turbine blade, which is based on the potential flow theory. Yaras and Sjolander [3] studied the tip clearance loss by kinetic energy. 
Tallman and Lakshminarayana [4] studied the effects of tip clearance height and its flow mechanism. Zhou and Zhou [5] proposed a triple-vortices-interaction kinetic model, and a one-dimensional mixing model proposed to explain the vortex interaction. However, the prediction of the mixing loss from the one-dimensional model was lower than Computational Fluid Dynamics (CFD) simulation. With the understanding of the tip leakage flow, there is plenty of experimental and numerical research focusing on the tip geometry. For the cavity tip, Li et al. [6] investigated the effect of cavity depth and the thickness of the squealer rim, and found that the tip leakage flow was enhanced with increasing the thickness of squealer rim. Kang and Less [7] studied experimentally the effects of squealer rim height-to-span ratio on heat/mass transfer rates, and found that, when the squealer rim height-to-span ratio increased, the averaged heat/mass transfer rate on the cavity floor began to decrease steeply and then decreased slowly. For the squealer tip, Yang and Feng [8] studied numerically the tip leakage flow in the first stage rotor blade. They found that the tip gap and groove depth had significant effects on flow and heat transfer. With the groove depth increases to 3% of blade span, leakage flow would be weakened. Senel et al. [9] revealed the influence of squealer width and height on the aerothermal performance of a high pressure turbine blade by numerical calculation. They found that a proper squealer width and height was critical to reduce aerodynamic loss and heat transfer, which is the same as Yang and Feng's [8]. For the tip with winglet, Joo and Lee [10] employed the naphthalene sublimation technique to investigate the heat/mass transfer characteristics on the winglet top surface for cavity squealer tip and found that the winglet top surface had a lower averaged heat/mass transfer rate than the plane tip with no winglet. In recent years, as the temperature of the gas turbine has increased, the design of the tip cooling method and the cooling structure has also been the research focus. Park et al. [11] measured the heat/mass transfer coefficients and film cooling effectiveness on the tip and inner rim surfaces of blade with a squealer rim, and found that the high film cooling effectiveness was observed in the middle region of tip surface. Ma et al. [12,13] investigated the cooling-base flow interaction in a transonic turbine rotor blade tip. They found that the injection of coolant significantly altered the flow distribution in the tip clearance, and the heat transfer rate is changed by more than 50%. He [14] investigated the heat transfer coefficient and adiabatic film cooling effectiveness on a blade squealer tip with cooling holes. The squealer tip with both tip and pressure-side holes had a higher adiabatic film cooling effect than that with only tip cooling holes. Since aerodynamics and heat transfer are mutually influenced by each other, there is some research about optimization of blade tips by combining aerodynamics and heat transfer. Caloni and Shahpar [15] applied conjugate analyses to resolve fluid dynamics and thermal distribution for shroudless turbine blade with Thermal Barrier Coating. They found that the tip with opening trailing edge on the suction side provided 0.4% improvement of adiabatic efficiency compared to flat tip. Caloni et al. [16] adopted a multi-objective design optimisation to improve the squealer tip. 
They found that the combination of leading edge and trailing edge openings showed a significant improvement of aerodynamic performance and the heat load, which compared with a closed squealer tip. The mechanism study of tip leakage flow is the key point and foundation to understand the flow physics and enhance turbine performance as well as cooling effects. For the unsteadiness of tip leakage flow, there are still many aspects which we need to study further. In order to obtain more insights into turbulence characteristics and the loss mechanism in tip leakage flow, the flow field must be analyzed in more detail. The RANS method is less expensive and serves as the current main simulation method. Du et al. [17] simulated the blade tip by Unsteady Reynolds Averaged Navier-Stokes (URANS) in a high pressure turbine stage. However, with internal flow mechanism study of the turbine, the calculation accuracy of RANS method is not enough to capture the flow details of complex flow, which makes the study of the complex flow mechanism limited. Thus, the high fidelity and accurate numerical simulation is necessary. Kelly et al. [18] adopted the Very Large Eddy Simulation method (a hybrid URANS/LES method) to analyze a squealer tipped axial turbine stage. The VLES results showed a significant improvement to predict the adiabatic efficiency of the turbine stage. That means the hybrid RANS/LES method can obtain a richer flow field structure than RANS. In addition, compared with Large Eddy Simulation (LES) method, the hybrid RANS/LES method is relatively accurate and much cheaper. In order to further analyze the flow physics, the flow field needs to be decomposed. The POD method is one of the model reduction methods to extract dominant flow structures. The POD method has been widely used in science and engineering, including image processing, data compression, signal analysis, modeling and control of chemical reaction system, turbulence models, and coherent structures. Since Lumley [19] introduced the POD method into turbulence research in 1967, the POD plays an important role in flow analysis. The POD method has been applied in some simple flow studies, such as flat boundary layer flow [20], cylinder flow [21], Couette flow [22], flame combustion [23], turbulence jet in cross flow [24], airfoil flow in wind turbine [25] and so on. However, for some complex flows, such as tip leakage flow, there is a lack of using the POD method to analyze the unsteady turbomachinery flows. At present, the research on the tip leakage flow mostly adopts RANS or URANS approach, and the in-depth study of tip leakage flow using a high fidelity numerical method is seldom reported. In addition, existing analysis about tip leakage flow focus on the distributions of pressure, temperature and other parameters, while novel analysis methods like POD should provide new perspective about the flow physics. This paper uses the DDES, a hybrid RANS/LES approach to investigate the mechanism of tip leakage flow inside a high pressure turbine blade with a tip gap of 1% height, which has been proved to be superior in capturing a high accuracy and detailed leakage flow. With the POD method, dominant flow modes governing the unsteady evolution in the tip region are successfully obtained. The use of DDES approach as well as POD analysis brings an interesting view of what can be done to better understand the leakage flow with modern methods for computation and post processing. 
The time-averaged entropy generation rate reveals the overall loss of tip leakage flow, and the instantaneous entropy generation rate shows the local loss. In addition, the unsteady loss obtained by decomposing the loss reveals that the unsteady effects caused by tip leakage flow cannot be neglected. This paper is organized as follows: firstly, the numerical method and modeling are introduced as follows; secondly, an overview of the computational domain and mesh resolution are given; in the following parts, the detailed numerical results are analyzed; the last section concludes this work. Governing Equations The three-dimensional Navier-Stokes equation is expressed as follows: In the above equation, F, G and H represent the fluxes where scalar H is the total enthalpy, κ is the thermal conductivity, τ xy , τ yx , τ xz , τ zx , τ yz , τ zy , τ xx , τ yy , τ zz represent the viscous stress tensor component, respectively. U = (ρ, ρu, ρv, ρw, ρE) T is the conservative variable in the flow field, where ρ is the fluid density, u, v and w represent the velocity components in the Cartesian coordinate system, respectively, and E represents the total internal energy (E = c v T, γ = c p /c v , k = c p µ/(Pr), µ can be calculated by Sutherland's law). The state equation p = ρRT is used to close the system of equations, where R is the gas constant and T is the temperature. Turbulence Model The one-equation model, the Spalart-Allmaras model, is simple and has a relatively small amount of calculation. It behaves excellently in wall-bounded flow and is widely used in the aerospace community and the internal flow field. The standard Spalart-Allmaras equation [26] has the form of: where ν is the kinematic viscosity, and d is the distance from the wall surface.S is the production term, defined bỹ The equation for f w is given by and constant parameters used in the turbulence model are Delayed Detached Eddy Simulation In order to overcome the insufficient modeling stress of the Detached Eddy Simulation (DES) method, Spalart et al. [27] updated the DES method to the DDES method, aided by the low-dissipation numerical methods [28][29][30][31]. The DDES method uses a similar equation to the Shear Stress Transport (SST) model proposed by Menter [32] to limit the length scale in the DES method. It can ensure that the switching from RANS mode to LES mode is grid-independent. The DDES method is constructed by modifying the parameter r in S-A model as below: where U i,j is the velocity gradient. This modification can be applied to any eddy viscosity model. The parameter r d is mainly used to construct the following restriction equation: where the C DES is 0.65 [33], and ∆ is the maximum length scale of mesh element. This equation can effectively limit the "grey area" [27] between the RANS and LES regions, which can overcome the problem of insufficient modeling stress. Flow Solver An in-house CFD code, which is based on the Message Passing Interface (MPI)-parallel multi-block structured finite volume method [34] is used in this work. The governing equation of this code is the integral form of the Navier-Stokes equation as below: The first term on the left side of the equation represents the pseudo time term. In the time marching solution, a three-stage Runge-Kutta/implicit scheme with a multigrid method is used. The second term represents the physical time term, which controls the unsteady evolution. 
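The DDES modification sketched above (the parameter r_d and the length-scale restriction with C_DES = 0.65) is not written out in this excerpt; the following sketch follows the standard formulation of Spalart et al. (2006), which the text cites, and the input values are assumed.

# Sketch of the DDES length-scale blending following Spalart et al. (2006):
# r_d is built from the velocity gradient, f_d is the shielding function, and the
# blended scale recovers RANS (wall distance d) near walls and LES (C_DES*Delta) outside.
import numpy as np

KAPPA = 0.41        # von Karman constant
C_DES = 0.65        # value quoted in the text

def ddes_length_scale(nu_t, nu, grad_u, d, delta_max):
    """grad_u[i, j] = dU_i/dx_j; d is wall distance; delta_max is the maximum cell size."""
    shear = np.sqrt(np.sum(grad_u * grad_u))                  # sqrt(U_ij U_ij)
    r_d = (nu_t + nu) / max(shear * KAPPA**2 * d**2, 1e-20)
    f_d = 1.0 - np.tanh((8.0 * r_d) ** 3)                     # shielding function
    return d - f_d * max(0.0, d - C_DES * delta_max)          # modified length scale

grad_u = np.array([[0.0,   0.0, 0.0],
                   [1.0e4, 0.0, 0.0],                         # assumed simple shear [1/s]
                   [0.0,   0.0, 0.0]])
print(ddes_length_scale(nu_t=1.0e-3, nu=1.5e-5, grad_u=grad_u, d=5.0e-4, delta_max=2.0e-3))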
In the unsteady calculation, a dual-time-step method with an optimized, second-order, backward difference temporal scheme (BDF2opt) [35] is adopted. For high resolution of the small-scale structures, the ideal numerical method for the convective term should be non-dissipative and discretely conserve quantities like kinetic energy or entropy; however, this poses a severe challenge to the stability of the simulation. In this work, to reduce the numerical dissipation, a high-accuracy 5th order Weighted Essentially Non-Oscillatory (WENO) reconstruction scheme [28,29,36] is adopted and the modified Roe scheme with low Mach number preconditioning [37] is used to compute the convective flux. A 4th order centered scheme is used to discretize the viscous term. In the turbulence equation, the source term of the Spalart-Allmaras equation dominates and the discretization methods for the convective and viscous terms have a marginal effect. For stability, a second order upwind method and a second order centered scheme are used for the convective and viscous terms in the turbulence model, respectively. Detailed validation of the DDES method in this code has been conducted by Lin et al. [31], and the same computational approach is used in this paper. For the isentropic Mach number distributions, it is found that the DDES results agree well with the experimental results, as shown in Figure 1. Computational Setup In this paper, the rotor blade in a single stage axial turbine is studied. The relevant parameters of the blade are shown in Table 1. The computational domain is shown in Figure 2a,b, and it is mainly divided into inlet, rotor and outlet blocks. The length of the inlet block is the same as the axial chord length, and the outlet block is twice the length of the axial chord, which ensures the downstream flow is fully developed. The commercial software Numeca AutoGrid5 8.9-1 is used to generate the multi-block structured mesh. H-Grid topology is applied in general, and HO-Grid topology is applied for the tip gap. The number of overall mesh points is about 9.11 million, with 53 layers of mesh in the tip clearance and 113 layers spanwise. The minimum orthogonality among all mesh blocks is 62.1 degrees. The maximum y+ of the first layer of off-wall mesh is 0.156, which ensures a good resolution of the viscous sublayer. Away from the wall, the mesh element is nearly isotropic, with ∆x+ ≈ 110 and ∆z+ ≈ 20, which are in agreement with the recommended values for LES predictions [38]. In order to facilitate parallel computation, the mesh is split into 58 blocks, with an average of 0.16 million mesh points per block. The periodic boundary condition is set along the pitchwise direction. The surface, tip, shroud and hub of the blade are set as no-slip solid surfaces. The inlet total temperature is 289 K and the inlet total pressure is 100,819 Pa. The outlet pressure is 97,080 Pa. To minimize the numerical reflections at the boundaries, the non-reflecting treatment [39] is adopted at the inlet and outlet surfaces. Validation and Comparison of DDES and URANS In the DDES method, sub-grid scale modeling is used away from the wall. The mesh convergence study of DDES is difficult compared to RANS because the modeled part of the turbulence is closely related to the length scale of the mesh element. As a result, refining the off-wall mesh would change the contribution from the subgrid scale (SGS) modeling.
Besides checking the distributions of δx+, δy+ and δz+, a useful and widely adopted approach is to check the ratio between the resolved and the modeled part of the turbulence kinetic energy. As defined in the following equation, the Index of Quality (IQ) is defined to measure whether the mesh elements resolve the majority of the turbulence kinetic energy [40]: where ν_SGS denotes the viscosity defined by the sub-grid scale modeling, and C ≈ 100 [41], which evaluates the modeled turbulent kinetic energy, E_modeled; E_resolved is the resolved turbulent kinetic energy. Generally, IQ = 0.2 corresponds to an LES that resolves more than 80% of the turbulent kinetic energy [40], so that most of the turbulent components can be resolved. It can be seen from Figure 3 that IQ for the region away from the wall is less than 0.2, which indicates that the mesh resolution is sufficient and satisfies the LES requirement. In the unsteady simulation, the physical time step is taken as 1.2 × 10^−6 s, and 20 pseudo iterations are employed for convergence. After performing a Fourier transform on the data captured by the monitoring points, the velocity power spectra are obtained. As shown in Figure 4, an inertial subrange agrees with the Kolmogorov −5/3 law, which indicates that the inertial region of the turbulence is correctly resolved. There are multiple peaks in the power spectral density curve of the same monitoring point, indicating that various kinds of vortices pass through this monitoring point. In addition, the frequencies of the vortices corresponding to different monitoring points are different. Figure 5 shows the results obtained by the DDES method and the URANS method, which are represented by the Q criterion and rendered by the total pressure coefficient (Cp0). The Q criterion is defined in the form of: Q = (Ω_ij Ω_ij − S_ij S_ij)/2, where Ω_ij is the vorticity tensor and S_ij is the shear strain tensor. The Cp0 is defined in the form of: Comparison of URANS and DDES Results Both methods capture the tip leakage vortex and the up passage vortex. However, the vortex structure of the tip leakage flow and the secondary flow captured by the URANS method is basically steady, and the interaction of the two vortex structures is not observed. The DDES method captures the development and interaction process of the tip leakage flow and the secondary flow more precisely, which is very helpful for the analysis of the tip leakage flow mechanism in the flow field. It can be seen that RANS is not suitable for the tip leakage flow, mainly because of its failure to predict the interaction between the tip leakage flow and the main flow, which can be traced back to the isotropic and equilibrium assumptions in the currently used linear eddy-viscosity RANS model. In the DDES model, RANS can be considered as the wall-model for the LES simulation. The RANS region is restricted to the boundary layer and, in the current case, the boundary layer is free of separation. As a result, a RANS model like Spalart-Allmaras or SST behaves very well in the boundary layer and performs better than using LES in the whole domain. Thus, the RANS approach is not suitable for the tip leakage flow analysis in the off-wall region. Instantaneous Cp0 contours are shown in Figure 6. It can be seen from the Cp0 distribution at different slices that URANS results are basically consistent with the DDES results for the first four slices, and the same distribution is also observed in Figure 7.
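The Q criterion quoted above is straightforward to evaluate from a velocity-gradient tensor, as in the following sketch; the example gradient is an assumed, nearly solid-body rotation rather than data from the simulation.

# Sketch of the Q criterion, Q = (Omega_ij Omega_ij - S_ij S_ij)/2, evaluated from a
# velocity-gradient tensor; positive Q marks rotation-dominated regions such as the
# leakage and passage vortices.
import numpy as np

def q_criterion(grad_u):
    """grad_u[i, j] = d u_i / d x_j at a point."""
    S = 0.5 * (grad_u + grad_u.T)          # shear strain (symmetric) part
    Omega = 0.5 * (grad_u - grad_u.T)      # vorticity (antisymmetric) part
    return 0.5 * (np.sum(Omega * Omega) - np.sum(S * S))

grad_u = np.array([[0.0,  -2.0e3, 0.0],    # assumed gradient of a nearly solid-body vortex [1/s]
                   [2.0e3,  0.0,  0.0],
                   [0.0,    0.0,  0.0]])
print(q_criterion(grad_u))                 # > 0: rotation dominates strain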
That is because there is no strong interaction between tip leakage vortex and up passage vortex initially. However, as the tip leakage vortex and the up passage vortex move downstream, the instantaneous results of two models show a clear discrepancy in tip leakage flow. It can be seen that the Cp0 distributions of URANS appear spatially smoothly and nearly no interaction between tip leakage vortex and passage vortex. On the contrary, the DDES method predicts small scale flow structures in tip leakage flow. Since the URANS method does not capture the interaction between tip leakage vortex and up passage vortex, the influence of such interaction on the flow field is neglected. From Figure 7, it can be found that the maximum deviation is about 8.27% between DDES and RANS results. Thereby, URANS underestimates the loss of tip leakage flow. Analysis of the Flow Structures The vortex structures obtained with DDES method are represented by Q criterion and rendered by Mach number, as shown in Figure 8. It has to be noted that, in order to clearly distinguish the vortex structures, different Q criterion values are used to display the flow structures. It can be seen that there is a horseshoe vortex (HV) close to the end wall, and as the horseshoe vortex develops downstream, the boundary layer of the wall surface is continuously entrained. Because of the lateral pressure difference, a passage vortex (PV) is formed and corner vortex (CV) is induced near the blade surface. Similarly, the horseshoe vortex and passage vortex can also be found near the shroud. In order to distinguish the passage vortex near shroud or hub, the passage vortex near shroud is referred to as up passage vortex (UPV). Meanwhile, the wake vortex is also observed at the trailing edge downstream. At the blade tip, due to the pressure difference between the suction side and the pressure side, the fluid flows from the pressure side to the suction side, and then the tip leakage flow is formed. Due to the effect of the tip leakage vortex (TLV), the up passage vortex moves beneath the tip leakage vortex. The tip leakage vortex has stronger turbulence intensity than the up passage vortex. These two structures interact and affect each other. Such multi-scale turbulence structure is difficult to be predicted accurately by the RANS method. Figure 9 shows the details about the formation and development of the leakage vortex. Six slices are taken along the direction perpendicular to the main flow, which are 10% Ca, 25% Ca, 50% Ca, 75% Ca, 90% Ca, and 100% Ca. Projection of the vorticity vector ∇ × u onto the main flow direction u/|u| is displayed on these slices. It shows that at 10% Ca, in the tip clearance, there is separation bubble (SB) near the pressure side. This separation bubble blocks the fluid flows from the pressure side to the suction side. Therefore, there is almost no leakage flow on the suction side. At the same time, the up passage vortex has not yet formed. As moving downstream, the pressure difference between the suction side and the pressure side gradually increases, so that the fluid on the pressure side has more kinetic energy passing through the tip clearance, and then the tip leakage vortex forms at the suction side. Since the energy of the tip vortex is lower in the upstream, the up passage vortex cannot be completely moved by the leakage vortex. As the leakage vortex continues to grow up and develop, the leakage vortex splits the up passage vortex into two parts. 
One part is close to the end wall and is referred to as the endwall vortex (EV). The second part lies below the leakage vortex and is referred to as the passage vortex. As the leakage vortex moves downstream, the passage vortex and the leakage vortex gradually merge. A schematic diagram of the formation and development of the tip leakage flow can be used to demonstrate this process, as shown in Figure 10. Different slices are taken along the spanwise direction, and the isentropic Mach number distribution on the blade surface is obtained, as shown in Figure 11. It can be seen that the load on the blade surface changes significantly between slices. At 95% H, before 55% Ca, the leakage flow moves onto the suction side, which increases the pressure on the suction side and decreases the pressure on the pressure side. At 50% H, the isentropic Mach number on the suction side is further reduced, while on the pressure side the isentropic Mach number increases. On the contrary, the lateral pressure gradient in the passage between adjacent blades decreases the pressure on the suction side and increases the pressure on the pressure side. In addition, the isentropic Mach number on the suction side further increases and decreases on the pressure side. Similarly, at 98% H, due to earlier contact with the tip leakage flow, the isentropic Mach number on the suction side decreases before 35% Ca and then increases. On the pressure side, the isentropic Mach number increases until it is basically the same as the isentropic Mach number at 50% H. Since the velocity of the fluid in the tip clearance is lower than that of the mainstream, the isentropic Mach number at 99% H is higher than at the other slices and the pressure is higher than in the mainstream. The distributions of turbulent kinetic energy at several slices along the axial direction are given in Figure 12, where the black solid lines represent streamlines of the main flow near the suction side, red solid lines represent the tip leakage flow streamlines, and green solid lines are the streamlines of the main flow near the pressure side. The turbulence kinetic energy is high in the region traversed by the tip leakage flow, and it is also high where the passage flow passes. Moving downstream, the turbulence kinetic energy first increases and then gradually diffuses and decreases. This is because the intensity of the tip leakage vortex is strong upstream; subsequently, the interaction between the tip leakage vortex and the passage vortex causes part of the kinetic energy to dissipate during downstream transport. This shows that the tip leakage vortex is the main source of turbulent fluctuations.

Proper Orthogonal Decomposition Analysis

The POD method, also known as Karhunen-Loève decomposition or Principal Component Analysis (PCA), is a mathematical tool for analyzing multidimensional data. Applying the POD method to the flow field helps identify the main structures and flow characteristics and also yields the modal information of different orders [42]. The key of the POD solution is to find a set of optimal orthogonal basis functions {ϕ_1, ϕ_2, ϕ_3, ..., ϕ_n} of the function or field space {v_n(x) ∈ Ω}. v_n(x) is assumed to be well approximated by this orthogonal basis, so it can be represented as

v_n(x) ≈ Σ_{i=1}^{n} a_i ϕ_i(x),

where ϕ_i(x) is an eigenfunction of v_n(x), a_i is the POD coefficient and n is the number of eigenfunctions. In order to solve for ϕ_i(x), samples of v_n(x) are needed.
It is assumed that there are k linearly independent samples v_n(x). The dimension is reduced by an optimal method, and the dimension-reduction process is equivalent to solving an extreme-value problem. That is, the average projection of all elements of the set V = {v_i(x)}, i = 1, ..., k, onto the orthogonal basis is maximized:

max_ϕ (1/k) Σ_{i=1}^{k} |(v_i, ϕ)|²  subject to  (ϕ, ϕ) = 1.   (13)

The constrained problem in Equation (13) can be transformed into an unconstrained problem by using the Lagrange multiplier method and then treated by the variational method. Define the functional

J[ϕ] = (1/k) Σ_{i=1}^{k} |(v_i, ϕ)|² − λ((ϕ, ϕ) − 1).   (14)

The basis function ϕ_i(x) then needs to satisfy

∫_Ω K(x, x′) ϕ(x′) dx′ = λ ϕ(x),   (15)

where K(x, x′) = (1/k) Σ_{i=1}^{k} v_i(x) v_i(x′) is called the kernel function and is a semi-positive self-correlation matrix. The optimization problem in Equation (13) is thus transformed into solving for the eigenvectors and eigenvalues of the kernel function K in Equation (15). Conducting POD analysis for the whole flow field would consume too much memory for storing the required snapshots, so in the current work the key area of the flow field, the upper half of the blade span, is extracted. Eighty instantaneous flow fields are collected, which contain about 14 vortex-shedding cycles. Eighty modes and their eigenvalues λ are obtained, as shown in Figure 13. Mode 0 represents the time-averaged flow field, and the others are unsteady modes. The eigenvalues decrease rapidly at first and then fall slowly after mode 10. Mode 0 contains most of the energy, accounting for 99% of the total. The remaining unsteady modes occur in pairs. In order to facilitate the analysis, mode 0, mode 1, mode 3, mode 5 and mode 7 are selected for study. In addition, slices at 25%, 50%, 75% and 110% of the axial chord are extracted. The energy of the time-averaged mode 0 is the highest, which indicates that most of the flow in the flow field is steady. From Figure 14, the main flow structures in mode 0 are the tip leakage vortex and the up passage vortex, and there are no other obvious structures. The tip leakage vortex gradually becomes larger as it moves downstream, and its intensity is stronger than that of the up passage vortex. In the time-averaged flow field, the tip leakage vortex is the dominant influencing factor. Figure 15 shows the flow structures of mode 1 compared with mode 0. An unsteady flow structure can be seen on the suction side, and its position coincides with the tip leakage vortex. This indicates that the tip leakage vortex has unsteady characteristics and is also a source of unsteady loss in the flow field. At the 110% axial chord slice, there are three kinds of vortices, EV, TLV and PV, which is consistent with Figure 10. These unsteady structures are created by the interaction between the tip leakage vortex and the up passage vortex, which reveals that this interaction has strongly unsteady flow characteristics. In mode 3, as shown in Figure 16, the number of unsteady structures in the flow field increases, and their size gradually decreases. This indicates that the POD method captures small-scale structures in the flow field at higher-order modes. These small structures contain very little energy, so they have less effect on the flow field. In mode 5 and mode 7, as shown in Figures 17 and 18, there are no obvious unsteady structures on the suction side at the 25%, 50% and 75% axial slices. However, there is a visible structure in the tip clearance, which is caused by the separation bubble in the tip clearance.
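The mode extraction described above can be sketched with the method of snapshots. The following minimal example (hypothetical array shapes, NumPy only) builds the snapshot matrix, subtracts the time-averaged field (mode 0) and obtains the unsteady modes and their energies via an SVD, which is algebraically equivalent to solving the kernel eigenvalue problem of Equation (15).

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD of k instantaneous fields sampled at m points.

    snapshots : array of shape (k, m).
    Returns the time-averaged field (mode 0), the spatial modes, their
    energies (eigenvalues) and the temporal coefficients a_i(t).
    """
    mean_field = snapshots.mean(axis=0)        # mode 0: time-averaged field
    fluctuations = snapshots - mean_field      # unsteady part of each snapshot
    # Thin SVD of the fluctuation matrix: the rows of vt are the spatial POD
    # modes, and the squared singular values are proportional to the modal
    # energies (the eigenvalues of the snapshot correlation/kernel matrix).
    u, s, vt = np.linalg.svd(fluctuations, full_matrices=False)
    energies = s**2 / snapshots.shape[0]
    coeffs = u * s                             # temporal coefficients, shape (k, k)
    return mean_field, vt, energies, coeffs

# Hypothetical usage: 80 snapshots of one flow variable at 10,000 grid points.
rng = np.random.default_rng(0)
snaps = rng.normal(size=(80, 10_000))
mean_field, modes, energies, coeffs = snapshot_pod(snaps)
print(energies[:5] / energies.sum())           # energy fraction of leading modes
```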
This means that the separation bubble is also a source of unsteady losses in the flow field. The unsteady structure on the 110% axial chord slice becomes smaller; these small-scale structures contain less energy, and their turbulence intensity is small. Using POD, the main unsteady flow structures are identified, namely the tip leakage vortex, the up passage vortex, the separation bubble, and the vortex generated by the interaction of the tip leakage and up passage vortices. Among them, the strongest unsteady characteristic belongs to the tip leakage vortex, which indicates that the tip leakage vortex is the main source of the unsteady effects in the flow field.

Loss Analysis

Loss coefficients are generally used to evaluate the flow loss in turbine design, such as the energy loss coefficient, the enthalpy loss coefficient, and the total pressure loss coefficient. Denton [43] studied the enthalpy loss coefficient and the total pressure loss coefficient and found that these loss coefficients were not satisfactory for evaluating the loss of the turbine; he suggested using the entropy loss coefficient as the evaluation index. However, these loss coefficients are only global values, the sum of all losses, and they cannot provide information about local losses in the flow field. The rich numerical simulation results of the flow field now make it possible to quantify and visualize factors which could not be measured or visualized before. Therefore, the loss coefficient and the loss source can be linked directly, which is very advantageous for studying the loss mechanism. According to the second law of thermodynamics, the entropy generation rate is a reasonable quantitative measure of irreversibility loss. The entropy generation rate provides detailed information about where the loss occurs, and a direct physical interpretation of the loss can be made from it, which loss coefficients cannot provide. Analysis methods based on entropy and the entropy generation rate have been applied in basic research on heat exchangers [44], diffusers [45], microscopic flow channels [46], etc. Recently, Jin and Herwig [47] used a second-law loss-analysis method to study the physical mechanism of the influence of wall roughness on turbulence. Lin et al. [31] applied the entropy generation rate to analyze a high-pressure turbine. The entropy generation rate is used in this paper to investigate the tip leakage loss. From thermodynamics, the irreversibility in the flow field is the source of loss. There are two main types of irreversibility in the flow field, caused by fluid viscosity and by heat transfer. The entropy balance can be written as

∂(ρs)/∂t + ∇·(ρs u + q/T) = (λ/T²)(∇T)² + Φ/T,

where the left-hand side equals the total entropy generation rate (EGR). The first term on the right-hand side is the irreversible loss caused by heat transfer, defined as Sthe; the second term is the irreversible loss caused by viscous dissipation, defined as Svis. Figure 19 shows the time-averaged and instantaneous normalized total entropy generation rate, where EGR values below 0.5 are cut off in order to display the flow loss clearly. It can be seen that the entropy is generated mainly in the tip clearance and in the region of the tip leakage vortex, while the up passage vortex and its interaction with the leakage vortex generate little entropy. This indicates that the tip leakage vortex interacts with the up passage vortex and weakens it, so that the loss of the up passage flow is reduced.
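As a rough illustration of how the two entropy generation terms above might be evaluated pointwise from a resolved (or time-averaged) flow field, the following sketch uses the incompressible-flow form of the dissipation function and assumes that velocity-gradient and temperature-gradient arrays, the dynamic viscosity mu, the thermal conductivity lam and the temperature T are already available; all names and shapes are hypothetical.

```python
import numpy as np

def entropy_generation(grad_u, grad_T, T, mu, lam):
    """Pointwise viscous (Svis) and thermal (Sthe) entropy generation rates.

    grad_u : (..., 3, 3) velocity-gradient tensor, grad_u[..., i, j] = du_i/dx_j
    grad_T : (..., 3)    temperature gradient
    T      : (...)       static temperature
    mu, lam              dynamic viscosity and thermal conductivity
    """
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))         # strain-rate tensor
    phi = 2.0 * mu * np.sum(S * S, axis=(-2, -1))            # dissipation Phi = 2 mu S:S
    s_vis = phi / T                                          # Svis = Phi / T
    s_the = lam * np.sum(grad_T * grad_T, axis=-1) / T**2    # Sthe = lam |grad T|^2 / T^2
    return s_vis, s_the

# Hypothetical single-point usage:
grad_u = np.array([[0.0, 1000.0, 0.0],
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])      # simple shear du/dy = 1000 1/s
grad_T = np.array([0.0, 50.0, 0.0])       # 50 K/m
s_vis, s_the = entropy_generation(grad_u, grad_T, T=300.0, mu=1.8e-5, lam=0.026)
print(s_vis, s_the)                       # viscous contribution dominates for strong shear
```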
There are some differences between the instantaneous and time-averaged total entropy generation rates. The loss caused by viscous dissipation in this case is much larger than the loss caused by heat transfer: Svis accounts for more than 99%, as shown in Figure 20.

Distribution of Entropy Generation Rate

The entropy generation is highest in the tip clearance because of the friction dissipation caused by the large shear stress of the fluid inside the gap. The loss of the tip leakage vortex is greatest at the 50% Ca slice and then gradually decreases. The loss in the tip leakage vortex core is high due to its strong intensity and high initial energy. As the tip leakage vortex interacts with the up passage vortex during downstream transport, the tip leakage vortex intensity becomes lower and the energy in the vortex core decreases as well. Meanwhile, the loss caused by the tip leakage flow is lower in the area where the tip leakage vortex and the up passage vortex interact with each other. The mass-flow-averaged entropy generation rate at 110% Ca, along the upper half span, is given in Figure 21. There are three large losses in the flow field, namely the endwall loss, the tip leakage flow loss and the up passage flow loss. The up passage flow loss is lower than the tip leakage flow loss in both the time-averaged and the instantaneous results. The difference in up passage flow loss between the time-averaged and instantaneous results is quite small. However, the difference for the tip leakage flow is very large, which means that the unsteady loss of the tip leakage flow is larger than that of the passage flow. The time-averaged results for the tip leakage flow are larger than the instantaneous results; this indicates that the turbulence in the tip leakage flow is weakened by the interaction with the up passage flow and dissipates a large amount of energy during transport. Therefore, if losses are analyzed only from the time-averaged flow, the loss evaluation is over-predicted.

Losses Due to Steady and Unsteady Effects

In addition to classifying losses by physical source, the losses caused by the actual flow process can also be decomposed into two parts: losses due to steady effects (the contribution of the time-averaged flow field, EGR(U)) and losses due to unsteady effects (the instantaneous flow field minus the time-averaged flow field, EGR(u′) = EGR − EGR(U)). With a steady-state simulation using the RANS method, the unsteady evolution of the flow field is not available. For cases with simple flow structures this yields satisfying loss predictions; however, the steady simulation method based on RANS often ignores or underestimates the influence of unsteady effects on the flow field when analyzing the losses. Unsteady flow effects enhance the irreversibility of the flow field, especially large-scale periodic unsteady effects. In order to understand this problem more clearly, the steady and unsteady results are selected for further analysis. Figure 22 shows the instantaneous, steady and unsteady results at 98% of the blade span. Losses in the instantaneous field occur in the tip leakage flow, the up passage flow and the wake area, and are especially caused by the interaction between the tip leakage flow and the up passage flow. The losses due to unsteady effects are given by the difference between the instantaneous loss and the steady loss.
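A minimal sketch of this decomposition is given below, assuming that time series of the gradient and temperature fields are available with a leading snapshot axis and reusing the hypothetical entropy_generation() helper sketched earlier; the decomposition simply contrasts the mean of the instantaneous entropy generation with the entropy generation of the time-averaged field.

```python
import numpy as np

def decompose_losses(grad_u_series, grad_T_series, T_series, mu, lam):
    """Split the mean entropy generation rate into steady and unsteady parts.

    *_series arrays carry a leading time axis (n_snapshots, ...).
    EGR(U)   : entropy generation evaluated from the time-averaged field.
    EGR(u')  : mean instantaneous EGR minus EGR(U), the loss due to unsteadiness.
    """
    # Mean of the instantaneous entropy generation rate.
    s_vis_t, s_the_t = entropy_generation(grad_u_series, grad_T_series,
                                          T_series, mu, lam)
    egr_mean = (s_vis_t + s_the_t).mean(axis=0)

    # Entropy generation rate of the time-averaged flow field (steady part).
    s_vis_m, s_the_m = entropy_generation(grad_u_series.mean(axis=0),
                                          grad_T_series.mean(axis=0),
                                          T_series.mean(axis=0), mu, lam)
    egr_steady = s_vis_m + s_the_m

    egr_unsteady = egr_mean - egr_steady   # contribution of the fluctuations
    return egr_steady, egr_unsteady
```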
Figure 22 indicates that the unsteadiness of the tip leakage flow along the flow direction cannot be neglected, since the losses caused by unsteadiness reach about 14% near the leading edge and 4.5% near the trailing edge on the suction side. These unsteady losses are mainly caused by the unsteadiness of the tip leakage flow and its interaction with the passage flow. Figure 23 displays the loss distribution at 110% of the axial chord. The instantaneous results show that losses mainly occur in the tip leakage vortex, the up passage vortex and the wake area. The largest losses take place close to the end wall, and the loss of the up passage vortex is smaller than that of the tip leakage vortex. The differences between the steady and unsteady contributions appear in the tip leakage vortex, and the maximum loss caused by unsteadiness accounts for 10%, which reveals that the tip leakage flow has strong unsteady characteristics. From the distributions of the steady and unsteady losses in the flow field, it can be seen that the tip leakage flow has strong unsteady characteristics in both the streamwise and spanwise directions, indicating that the tip leakage flow has large-scale turbulent and three-dimensional flow characteristics. Unsteady losses in the tip leakage flow cannot be ignored.

Conclusions

This work investigates the flow phenomena and the loss mechanism of tip leakage flow. The rich and detailed flow fields show the generation and development of the tip leakage flow, and a schematic diagram is proposed to explain the flow field clearly. The flow field is decomposed by the POD method, which helps to resolve the main unsteady structures. The loss obtained from the entropy generation rate is helpful for understanding the loss caused by the tip leakage flow and its interactions. POD is used to identify the dominant energy-containing modes in the flow; the main flow structures near the shroud in the time-averaged flow are the tip leakage vortex and the passage vortex, and, apart from these structures, there is the separation bubble in the unsteady flow field. The strongest unsteady characteristic occurs on the 110% axial chord slice, where the tip leakage vortex interacts with the up passage vortex; this is the main source of unsteady losses in the flow field. In the tip leakage flow, the irreversible loss caused by viscous dissipation is the dominant factor. Meanwhile, the main losses generated in the flow field are due to the separation bubble in the tip clearance, the tip leakage vortex and the endwall vortex formed from the up passage vortex. The friction dissipation caused by the separation bubble in the tip clearance produces the local maximum loss. The loss of the tip leakage vortex has its strongest influence at 50% Ca. As the up passage vortex interacts with the tip leakage vortex, the loss caused by the tip leakage flow in the flow field is reduced. In addition, the unsteady effects of the tip leakage flow have a large influence on the loss distribution and cannot be ignored.
Impacts of Air Pollution on Colour Fading and Physical Properties of Wool Yarns Dyed with Some Natural Dyes in Residential Site

Introduction

During the last few decades, the revival and use of natural dyes has gained a great deal of attention. A renewed international interest has arisen in natural dyes due to a remarkable increase in awareness of the environmental and health hazards associated with the synthesis, processing and use of synthetic dyes. Cochineal (red dye), turmeric (yellow dye) and madder (red dye) are extracted from the bug Dactylopius coccus, the plant Curcuma longa L. (rotunda L.) and the plant Rubia tinctorum, respectively. When these natural dyes are used for dyeing wool yarns, the light and washing fastness are very poor. The fastness properties of wool yarns can be enhanced with the help of mordants, usually metallic salts. Mordants have an affinity for both the natural colouring matter and the fibre. After the wool yarns were impregnated with such mordants, they were dyed with the different natural dyes (in the case of pre-mordanting). In the case of the post-mordanting method, the wool yarns were treated with the different mordants after the dyeing procedure. After combining with the dye in the fibre, these metallic salts (mordants) form an insoluble precipitate; thus both the dye and the mordant become fixed, which improves the light fastness to some extent [1-14]. Generally, concentrating residential and commercial activities without an air quality management policy leads to complex mixtures of all types and sizes of uncontrollable air pollution sources.

Results and Discussion

Air pollution: Both the gaseous and particulate components of an atmospheric aerosol contribute to the deterioration of air quality.
Dust-fall samples give a general indication of the atmospheric particulate concentration. Particles larger than 20 µm have appreciable settling velocities and relatively short atmospheric residence times. The annual mean rate of deposited particulate matter over the Kalla region during the year of study is given in Table 1. Table 1 shows that the Kalla region (residential site) is highly polluted with suspended dust compared to Helwan city (industrial area), while Helwan city is more polluted with fine particles and SO2 than the Kalla region. Table 1 also indicates that the annual mean rate of deposited dust was 16.93 g/m²·month. According to the Pennsylvania guidelines for dust-fall [29], these values are considered heavy deposition rates. The annual average concentration of suspended particulates in the atmosphere of the residential site (Kalla region) was 951.04 μg/m³; this concentration is about 19 times higher than the 50 μg/m³ limit of the US National Ambient Air Quality Standards [28], which is the same value recommended by the UK Expert Panel on Air Quality Standards [30]. It is also about 13 times higher than the maximum allowable concentration given by the Egyptian Environmental Law No. 4, 1994 (70 μg/m³) [31]. Reactive acids such as sulphuric acid cause degradation of wool, especially at high temperatures, transforming it into amino acids and peptides; in addition, wool fibres release sulphur gases. It was noticed that the concentrations of suspended particulate matter varied from one season to another during the period of the study, with the maximum concentration recorded during autumn. The amounts of deposited and suspended dust were also affected by the location of the study, the Kasr El-Jawahara Museum in the Kalla region, which is near both the Moqattam Mountain and the downtown area; acidic particles from vehicle exhaust and large particles from the Moqattam Mountain are carried to the site by the wind.

Dyeing: The dye and dissolved salts were thoroughly mixed into the dye liquor. The wool yarns were entered into the bath and the dyeing process was continued for 30 minutes at 90°C; the yarns were then rinsed and washed [23-25].

Preparation and exposure of samples to the ambient atmosphere: The dyed wool samples were placed on the roof of a building at the investigated site. The wool samples were exposed for a period of one year. Five samples of each type were removed after each three-month exposure interval and taken to the laboratory for measurement. Unexposed samples were used as controls.

Air pollution determination: The area under investigation is a residential site (Kalla region). It is located in the middle of Cairo and is characterized by a dense population, mixed commercial and residential activities, and heavy traffic. The present investigation was undertaken to study the air pollution at the residential site (Kalla region) through determination of the suspended and deposited particulate matter and the sulphur dioxide concentrations.

Determination of deposited particulate matter: Deposition rate values for settled particulate matter were determined according to standard methods [26]. Dust-fall collectors were used for collecting dust-fall samples, as previously used in Egypt [27]. The collectors consist of cylindrical glass beakers 17 cm in height and 8 to 9.5 cm in diameter.
The cylindrical glass beakers were half filled with distilled water to avoid re-entrainment of the collected dust and were mounted on iron tripods at a height of 50 cm above roof level to avoid the collection of surface dust. The monthly collected samples were transferred quantitatively and carefully to a dry, clean, weighed beaker using successive washings with distilled water and a policeman until the inside of the jar was clean. Successive drying and weighing of the beaker was carried out until constant weight was reached. The differences in weight represent the amount of dust deposited during the corresponding month at each site. Particulate deposition was calculated and expressed as g/m²·30 days.

Determination of suspended particulate matter: The filtration technique was used for collecting atmospheric suspended particulate matter [28].

Determination of sulphur dioxide: The West and Gaeke method was used for the determination of SO2 [26,28]. Air was aspirated (one litre per minute) through a glass bubbler sampler containing 50 ml of absorbing solution (0.1 M sodium tetrachloromercurate). A non-volatile dichlorosulfitomercurate ion is formed when the sulphur dioxide in the ambient air is absorbed in the 0.1 M sodium tetrachloromercurate. Addition of acid-bleached pararosaniline and formaldehyde to the complex ion produces red-purple pararosaniline methyl sulphonic acid, which is determined spectrophotometrically at a wavelength of 560 nm.

Colour measurements: The total colour difference ΔE* (CIE L*, a*, b*) between two colours, each given in terms of L*, a*, b*, is calculated from

ΔE* = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2),

where the ΔE* value is a measure of the perceived size of the colour difference between standard and sample and cannot indicate the nature of that difference; the ΔL* value indicates any difference in lightness, (+) if the sample is lighter than the standard and (−) if darker; and the Δa* and Δb* values indicate the relative positions in CIELAB space of the sample and the standard, from which some indication of the nature of the difference can be seen.

Although there is a certain level of dust in the air at all times, the amount and type of dust vary considerably and depend on many factors including source, climate, wind direction, and traffic. Dust is generated from man-made and natural sources and may be made up of soil, pollen, volcanic emissions, vehicle exhaust, smoke or any other particles small enough to be suspended or carried by the wind. The stronger the wind, the larger the particles lifted and the more dust carried.

Sulphur dioxide: SO2 is a prominent anthropogenic pollutant and contributes to the formation of sulphuric acid, the formation of sulphate aerosols, and the deposition of sulphate and SO2 at the ground surface. Seasonal and annual concentrations of sulphur dioxide in the atmosphere of the Kalla region are given in Table 1. From this table it can be noted that the sulphur dioxide concentrations varied greatly from one season to another; the maximum concentration of 32.6 μg/m³ was recorded during winter, while the annual mean concentration reached 19.71 μg/m³. These concentrations are lower than the value of 60 μg/m³ set by the US Ambient Air Quality Standard and by the Egyptian limit for the annual concentration of SO2 [31]. They are also lower than the primary US National Ambient Air Quality Standard (80 μg/m³) for SO2 [32,33]. Sulphur in the atmosphere originates either from natural processes or from anthropogenic activity [34]. Fuel combustion as well as metal production is the dominant source of SO2 emissions into the atmosphere [35].
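As a small illustration of how the colorimetric quantities discussed below (ΔE*, and the chroma C* and hue angle H referred to later) might be computed from measured CIELAB coordinates, the following sketch uses hypothetical L*, a*, b* values for a standard and an exposed sample:

```python
import math

def delta_e(lab_standard, lab_sample):
    """CIE76 colour difference between a standard and a sample in CIELAB."""
    dL, da, db = (s - t for s, t in zip(lab_sample, lab_standard))
    return math.sqrt(dL**2 + da**2 + db**2)

def chroma_hue(lab):
    """Chroma C* and hue angle h (degrees) of a CIELAB colour."""
    L, a, b = lab
    chroma = math.hypot(a, b)
    hue = math.degrees(math.atan2(b, a)) % 360.0
    return chroma, hue

# Hypothetical values: unexposed (standard) v. 12-month exposed madder-dyed yarn.
standard = (42.0, 38.5, 14.2)   # L*, a*, b*
exposed = (45.3, 31.0, 10.8)

print(delta_e(standard, exposed))   # total colour change, dE*
print(chroma_hue(standard))         # (C*, h) before exposure
print(chroma_hue(exposed))          # (C*, h) after exposure
```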
Measurements of colorimetric data (CIE L*, a*, b*) in the residential site (Kalla region)

Effect of exposure to air and light on the change of colour (ΔE values) of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: Tables 2-4 give the values of the colour change (ΔE) for the three dyes used. The colour change (ΔE) increases with increasing period of exposure to air and light. Good light resistance was observed for fabrics dyed with natural colouring matter extracted from cochineal, turmeric and madder using the different salts. This is due to the formation of a complex with the metal, which protects the chromophore from photolytic degradation. From Table 2 it can be observed that the colour change (ΔE) for the mordanted wool samples dyed with natural colouring matter extracted from cochineal using different kinds of mordants follows the order: alum salt > tin salt > iron salt > copper salt. Table 3 shows that the colour change (ΔE) for mordanted wool samples dyed with turmeric using different kinds of mordants follows the order: alum salt > tin salt > iron salt > copper salt. Finally, Table 4 indicates that the colour change (ΔE) for mordanted wool samples dyed with madder using different kinds of mordants follows the order: tin salt > alum salt > copper salt > iron salt.

Effect of exposure to air and light on the lightness (L* values) of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: Tables 2-4 give the L* values for the three dyes used. From Table 2, it can be concluded that the mordanted wool samples dyed with natural colouring matter extracted from cochineal become lighter compared to the standard sample when iron or copper salts are used, but become darker after exposure to air and light when alum or tin salts are used. From Table 3, it can be observed that the mordanted wool samples dyed with natural colouring matter extracted from turmeric become darker compared to the standard sample after exposure to air and light when iron, copper, alum or tin salts are used. It is clear from Table 4 that the lightness of the mordanted wool samples dyed with natural colouring matter extracted from madder shows no change compared to the standard sample when iron or copper salts are used, but the colour becomes slightly darker after exposure to air and light when alum or tin salts are used.

Effect of exposure to air and light on the nature of colour (a* values) of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: Tables 2-4 give the a* values for the three dyes used. Table 2 shows that the a* values for the wool samples dyed with natural colouring matter extracted from cochineal using iron salts indicate a slight change of colour towards the green region of the CIE (L*, a*, b*) space; with copper salts the green colour concentration increases, while with alum and tin salts the red colour concentration decreases. From Table 3 it can be observed that the a* values for wool samples dyed with natural colouring matter extracted from turmeric using iron, copper, alum or tin salts show no change of colour after the full exposure time (12 months).
It is observed from Table 4 that the a* values for wool samples dyed with natural colouring matter extracted from madder using iron or copper salts show a change of colour towards the green region of the CIE (L*, a*, b*) space, while with alum or tin salts the red colour concentration decreases.

Effect of exposure to air and light on the b* values: Tables 2-4 give the b* values for the three dyes used. It is observed from Table 2 that the b* values for wool samples dyed with natural colouring matter extracted from cochineal using iron or copper salts show a decrease in blue colour concentration; with tin salt the blue colour concentration increases, and with alum salt there is no change in colour. It can be seen from Table 3 that the b* values for wool samples dyed with turmeric using iron, copper, alum and tin salts show a change of colour towards the blue region of the CIE (L*, a*, b*) space after exposure to air and light compared with the blank. Table 4 shows that the b* values for wool samples dyed with madder using tin salt show a change of colour from the yellow region towards the blue region of the CIE (L*, a*, b*) space; with alum salt the blue colour concentration increases, while with iron and copper salts there is no change of colour after exposure to air and light.

Hue of colour (H values): Table 2 shows that the hue values (H) for wool samples dyed with natural colouring matter extracted from cochineal and mordanted with iron salt are almost the same during all periods of exposure to air and light. However, when mordanted with copper salt, the H values shift towards the green axis. This may be because copper ions cause a bathochromic shift of the long-wavelength absorption bands of cochineal. When mordanted with alum or tin salt, the H values shift towards the blue axis. From Table 3, it can be observed that the H values for wool samples dyed with natural colouring matter extracted from turmeric and mordanted with iron or alum show no changes after exposure. On the other hand, when copper or tin salt is used, the H values shift towards the blue axis. This may be because the ultraviolet-visible spectra of turmeric show significant changes in the absorbance of the longest-wavelength band in the presence of copper or tin ions, and these changes are characteristic of copper and tin ions; copper and tin ions also cause a bathochromic shift of the long-wavelength absorption bands of turmeric. It is observed from Table 4 that the H values for wool samples dyed with natural colouring matter extracted from madder and mordanted with alum show a shift towards the blue axis. The reason could be that alum causes a bathochromic shift of the long-wavelength absorption bands of madder. However, when iron, copper or tin salt is used, the H values are almost the same during all periods of exposure to air and light.

Chroma (C values): It can be seen from Table 2 that the chroma values (C) for all wool samples dyed with natural colouring matter extracted from cochineal and mordanted with iron, copper, alum and tin salts decrease slightly as the exposure time to air and light increases. It can be seen from Table 3 that the C values for wool samples dyed with natural colouring matter extracted from turmeric and mordanted with iron, copper, alum and tin salts increase slightly as the exposure time to air and light increases.
Moreover, for almost all samples, the concentration of the chromophores decreased as the colour faded. Table 4 shows that the C values for wool samples dyed with natural colouring matter extracted from madder decrease with increasing exposure time to air and light when tin salt is used; when mordanted with iron, copper or alum salt, the C values are almost the same during all periods of exposure to air and light.

Physical measurements: The physical measurements performed on the dyed wool samples indicated that a severe decline in tensile strength of the mordanted wool samples dyed with cochineal, turmeric and madder occurs with increasing time of exposure to air and light.

Effect of time of exposure to air and light on the tensile strength (B-Force) of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: From Figure 1, it can be seen that almost complete loss of tensile strength occurred with copper salt after 9 months of exposure to air and light. A complete loss of tensile strength also occurred for the mordanted samples dyed with cochineal using iron, alum and tin salts after 12 months of exposure to air and light compared to the standard. It is observed from Figure 2 that a severe decline of tensile strength occurred with all mordants used (iron, copper, alum and tin salts) after exposure to air and light for 12 months, and a complete decline occurred in the tensile strength of the mordanted samples dyed with turmeric using iron and alum salts after exposure for 9 months. Figure 3 shows that a severe decline of tensile strength occurred with copper salt for mordanted wool samples dyed with madder after the full exposure time (12 months). Figure 3 also shows that good tensile strength results were obtained when iron, alum and tin salts were used for mordanted wool samples dyed with madder after exposure to air and light for 12 months, while a gradual decline occurred in the tensile strength of the mordanted samples dyed with madder using iron, copper, alum and tin salts after 12 months of exposure compared to the standard. However, the decline in tensile strength of mordanted wool samples dyed with madder is smaller than that observed with cochineal and turmeric over the exposure time (12 months). Finally, the tensile strength of mordanted wool samples dyed with madder using different kinds of mordants follows the order: copper salt > tin salt > iron salt > alum salt.

Effect of time of exposure to air and light on the tenacity of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: Figure 4 shows that a severe decline in the tenacity of the mordanted wool yarns dyed with cochineal using the different kinds of salts (iron, copper, alum and tin) occurs with increasing time of exposure. A complete loss of tenacity occurred for the mordanted wool yarns with all mordants used over the exposure time (12 months), and a complete decline of tenacity occurred for the cochineal-dyed yarns mordanted with copper salts after 9 months of exposure to air and light compared to the standard.
It can be observed from Figure 5 that a severe decline in the tenacity of the mordanted wool yarns dyed with turmeric using the different kinds of salts (iron, copper, alum and tin) occurs with increasing time of exposure to air and light, and a complete loss of tenacity occurred for the mordanted yarns dyed with turmeric using tin salts after 9 months of exposure compared to the standard. From Figure 6, it can be concluded that a severe decline in the tenacity of the mordanted wool samples dyed with madder using copper salt occurs with increasing time of exposure compared to the standard. Good tenacity results were obtained for the mordanted wool samples dyed with madder using the different kinds of salts (iron, copper, alum and tin) with increasing time of exposure compared to the standard, although a gradual loss of tenacity occurred for the mordanted yarns dyed with madder with all salts (iron, copper, alum and tin) over the exposure time (12 months). Finally, the tenacity of mordanted wool samples dyed with madder using different kinds of mordants follows the order: alum salt > tin salt > iron salt > copper salt. To sum up, the tenacity of mordanted wool yarns dyed with madder is the best among the mordanted wool yarns dyed with cochineal or turmeric, owing to its small decline during all periods of exposure to air and light.

Effect of time of exposure to air and light on the elongation of mordanted wool yarns dyed with natural colouring matter extracted from cochineal, turmeric and madder using different salts in the Kalla region: Figure 7 shows that a severe decline in the elongation of the mordanted wool samples dyed with cochineal using the different mordant salts (iron, copper, alum and tin) occurs with increasing time of exposure to air and light compared to the standard, and a complete loss of elongation occurred for the mordanted wool yarns dyed with cochineal using iron and copper salts after 9 months from the beginning of exposure compared to the standard. Moreover, the elongation of mordanted wool yarns dyed with cochineal using different mordants follows the order: alum salt > tin salt > iron salt > copper salt during all periods of exposure to air and light. Figure 8 shows that a severe decline in the elongation of the mordanted wool yarns dyed with turmeric using the different mordant salts (iron, copper, alum and tin) occurs with increasing time of exposure to air and light compared to the standard, and a complete loss of elongation occurred for the mordanted wool samples dyed with turmeric using copper and iron salts after 9 months from the beginning of exposure compared to the standard. In addition, among the mordants used, alum salt gives the highest elongation for the mordanted wool samples dyed with turmeric after the full exposure time (12 months). From Figure 9, it can be seen that a severe decline in the elongation of the mordanted wool yarns dyed with madder using the different mordant salts (iron, copper, alum and tin) occurs with increasing time of exposure to air and light compared to the standard.
Moreover, a complete loss of elongation occurred for the mordanted wool samples dyed with madder using copper salt after 12 months from the beginning of exposure to air and light compared to the standard, and a gradual loss of elongation, reaching complete loss, occurred for the mordanted wool yarns dyed with madder using tin and copper salts after the full exposure time (12 months). The elongation of mordanted wool samples dyed with madder using different kinds of mordants follows the order: iron salt > alum salt > tin salt > copper salt during all periods of exposure to air and light. Finally, the elongation of mordanted wool yarns dyed with madder is the best among the mordanted wool yarns dyed with cochineal or turmeric, owing to its gradual decline during all periods of exposure to air and light.

Conclusion

The Kalla region (residential site) is highly polluted with suspended dust compared to Helwan city (industrial area) [36], while Helwan city (industrial area) is more polluted with fine particles and SO2 than the Kalla region (residential site). The values of the colour change (ΔE) of mordanted wool yarns dyed with madder are the lowest among the yarns dyed with cochineal or turmeric. From these results, it can be concluded that the light fastness of wool yarns dyed with madder is the best among the wool yarns dyed with cochineal or turmeric. Using different mordants as well as different sources of dyes for mordanted dyed wool yarns gives a beautiful and wide range of colourful hues.
The geoengineering approach to the study of rivers and reservoirs

Abstract This Special Publication contains contributions from two meetings held to explore the links between geoscience and engineering in rivers and reservoirs (surface and subsurface). The first meeting was held in Brazil and, as a result, the volume contains many contributions from Brazil. The second was held in Edinburgh, and produced contributions from modern rivers in the USA, China, India and Scotland. The geological record from Carboniferous to Recent is represented. A range of outcrop techniques is presented, along with statistical techniques used to identify patterns in the time-series and spatial sense. The book is intended to cover the cross-disciplinary interest in rivers and their sediments, and will interest geologists, geomorphologists, civil, geotechnical and petroleum engineers, and government agencies. Some of the papers collected here demonstrate longer-term impacts of human activity on rivers and how these might change the future geological record and, more importantly in the short term, impact on the UN Global Sustainability Goals.

The first workshop was held in Brazil in 2015. The Geological Society of London held the second workshop as part of the Geological Society's 2016 Year of Water events, undertaken to promote 'debate of current research … on how the planet works and how we can live sustainably on it'. This volume compiles papers and abstracts presented at both workshops. These workshops brought together scientists working from very short engineering timescales to long geological timescales, with the focus on understanding key controls on river behaviour and the resulting deposits. Combining the study of ancient analogue systems with modern-day processes gives a greater awareness of the impact that longer-term boundary conditions have on the modelling and prediction of fluvial systems, their role as stores, and their geological products. Engineers and geologists currently work at different scales and different time steps, and this introduction will highlight these aspects and help to inform future hydrologists, hydrogeologists, geologists and civil engineers in addressing the Sustainable Development Goals set by the United Nations to challenge traditional thinking. Initiatives such as Water and the Energy Industry (Simmons 2015) encourage participation from the water and traditional subsurface energy industries in addressing these goals; rivers and reservoirs and their interplay are the focus of this contribution. The Geological Society's Special Publications have a long track record of compiling research developments in fluvial systems. The economic importance of fluvial sedimentology was included in the early Volume 18 (Brenchley & Williams 1985) and followed up with a focus on the downstream deltaic portion of the system in Volume 41 (Whateley & Pickering 1989), driven by the discovery of major oil reserves in the Brent Group reservoirs of the North Sea, celebrated in Volume 61 (Morton et al. 1992). The complexity of fluvial reservoirs was recognized by reservoir geologists in Volume 63 (Ashton 1993) and was the subject of a key Volume 75 on the specific issues of braided fluvial systems (Best & Bristow 1993). The interdisciplinary approach to the floodplain was the subject of Volume 163 (Marriott & Alexander 1999). The importance of fluvial reservoir architecture for reservoir compartmentalization was included in Volume 347 (Jolley et al. 2010), with a focus on geometry and heterogeneity in Volume 387 (Martinius et al. 2014).
Increasing interest in off-planet fluvial systems resulted in Volume 440 (Ventra & Clarke 2018). These volumes chart the developing research and approaches in rivers, their sediments, and the resulting complex reservoirs. This current volume is essentially an update on progress since Volumes 75 and 163 and brings together rivers and reservoirs with a global cross-disciplinary perspective. Because of the initial workshop, this volume includes a strong Brazilian component. The volume has been arranged in a geoengineering framework, collected into three broad sections - Architecture and properties; Modelling and simulation; and Management - as used in subsurface studies linking geoscience and engineering (Corbett 2009). In this way, we have tried to avoid the geologist v. engineer split of working practices that seems to exist in the study of hydrology and hydrogeology (from what folk say anyway; Helen Reeves pers. comm.; and we can all find our own anecdotal evidence). Multidisciplinary working is very much the way in which the UN Global Challenges will need to be tackled, and this book hopes to stimulate such cross-disciplinary interests.

Architecture and properties

In this section a number of studies on modern systems (Blum 2019; Sinclair et al. 2016; Nicholson et al. 2019) are compared with ancient counterparts, from the Cenozoic (Stow et al. 2016) and Mesozoic (Dal' Bó et al. 2018; Yeste et al. 2018; Fambrini et al. 2019) through to the Paleozoic (Reesink 2016; Ellen et al. 2018). In reconciling these two timescales, it is necessary to find common ground between studies which focus on the plan-view (top-down) evolution of fluvial systems (often the focus of modern studies) and the cross-sectional view often employed by studies of ancient relict landforms in the geological record. One such approach is through the understanding of bedform evolution (Best & Fielding 2019). A more detailed understanding of the properties - the characteristics of lithologies and sedimentary structures linked to the depositional architecture - of modern braided, meandering and anastomosing rivers allows information from the geological record to be inferred. Conversely, by studying the preserved bedforms in the geological record, we may critically evaluate our current thinking and models of multichannel river systems. Best & Fielding (2019) examine dune bedforms, perhaps the dominant small-scale alluvial bedform across a wide range of large alluvial channels, and show their morphology, and especially their leeside angle, to be very different to classic angle-of-repose dunes. They provide a unique dataset that quantifies these characteristics, which is used to discuss possible controls on such dune morphology. In particular, they focus on what we might expect to see when examining such bedforms in the ancient sedimentary record and the implications of these characteristics for flow-depth reconstruction from cross-set thickness. Very-low-angle, almost horizontal, laminae are characteristic of lee slopes in mixed-load rivers, and bedform stability diagrams will require modification to take into account the influence of fines on the bedform angle. Reesink (2016) highlighted the lack of fines in most pre-vegetation fluvial systems, and noted that such systems may have different geobody geometries and structures as a result of the lack of clay, but also that they may, in some cases, have a strong microbial influence instead.
The alluvial architecture of the Río Bermejo in Argentina, a highly active meandering river that transports a large fine-grained sediment load, is also considered. Best & Fielding (2019) contribute their thoughts on the current distributive fluvial system model debate (Weissmann et al. 2010; Owen et al. 2015, 2016, 2017; Hartley et al. 2016) with a call for linking short-term processes with consideration of longer-term processes in modern systems. They caution the reader against overuse of planform geometry in isolation from wider considerations. The importance of storey surfaces in the sectional view is noted (Owen et al. 2017). Continuing a focus on the world's largest rivers as unique analogues for geological records, Blum (2019) considers how the Mississippi drainage system has organized and reorganized itself through time. The Holocene-modern Mississippi delta was constructed by aggradation and progradation over a period of 7000 years, as global sea-level rise decelerated and reached the present highstand position. Land gain and land loss are natural in deltaic systems, but loss rates accelerated in the last century due to: (a) an acceleration of global sea-level rise; (b) dam construction that reduced sediment load (supply-limited); and (c) a continuous levee system that limits dispersal to the delta plain (transport-limited). More than 11 000 km² of the Mississippi delta region is less than 0.5 m in elevation and is likely to drown by 2100; there is only enough sediment to sustain about 25% of the delta surface area. The Colorado River in the SW USA is one of the Earth's few continental-scale rivers with an active-margin delta (Nicholson et al. 2019). Deformation along this transform margin, as well as associated intra-plate strain, has resulted in significant changes in sediment routing from the continental interior and post-depositional translation of older deltaic units. The oldest candidate delta deposits, fluvial sandstones of the Eocene Sespe Formation, are now exposed in the Santa Monica Mountains, 300 km to the north of the modern-day Colorado. Sedimentological and mineralogical evidence from the earliest (c. 5.3 Ma) unequivocally Colorado-River-derived sediments in the Salton Trough provides evidence for a rapid transition from locally derived sedimentation. The lack of evidence for a precursor phase of suspended-load sediment suggests that drainage capture took place in a proximal position, favouring a 'top-down' process of lake spillover. Following drainage integration, significant changes in the detrital mineralogy of fluvio-deltaic sediments document the progressive incision of strata on the Colorado Plateau from the Miocene to the present. The Colorado is an example of a river responding to major long-term and dramatic disruptions to its boundary conditions (the sink changing from a convergent margin to a transform margin, the opening of the Gulf of California, and climate changes during the Miocene) with significant implications for its morphology and sedimentation. The paper on the Nhecolândia wetlands in central Brazil (Oliveira et al. 2018) presents a unique insight into a modern intra-continental fluvial environment where human interaction and climate impact are closely related. Formed of thousands of distinct saline and non-saline lakes, the Nhecolândia region is unique in its modern hydrological regime.
Insight into the intrinsic linkages between this unique wetland hydrology, the associated ecosystem provision, and the potential for land-use and climate change is provided by Oliveira et al. (2018). This paper demonstrates the potentially significant impacts that small changes in the water supply, as a result of future climate or land-use change, may have on the region, compared to the major changes seen in the Colorado over its lifetime. Traditionally, studies of distributive fluvial systems rely primarily on understanding the stacking of architectural elements in space, controlled by the radial distribution of channels away from an apex or apices located at the basin margin (Weissmann et al. 2010). This approach centres on the downstream dynamics of the fluvial network. In order to understand the vertical dynamics and organization of distributive fluvial systems, Dal' Bó et al. (2018) analysed deposits from the proximal part of a distributive fluvial system on the NE margin of the Bauru Basin (Upper Cretaceous, SE Brazil). The fluvial succession records the deposition of a semi-arid distributive fluvial system, enabling understanding of the vertical dynamics of channels and associated floodplain deposits in these systems. The stratigraphic alternation between channel types of drier and more humid climatic regimes reveals high-frequency climate-induced cycles influencing the organization of the fluvial deposits. Furthermore, palaeosols, which constitute the stratigraphic boundaries of the studied succession, reveal a superimposed longer-term geomorphological cycle marking variations in the recurrence time of avulsions of the channel system. This work describes how climatic and geomorphological factors act together as the most likely controlling mechanisms for the vertical organization of this distributive fluvial system. Calcrete development in the system can provide a strong restriction to vertical permeability in a reservoir context. Fambrini et al. (2019) describe how the Barbalha Formation (Aptian) represents the initial sedimentary record of the post-rift stage of the Araripe Basin, NE Brazil, which consists predominantly of sandy fluvial facies with reddish and yellowish pelitic intercalations and thin layers of conglomerates and lacustrine bituminous black shales. This study differentiates two main fluvial sequences, separated by lacustrine bituminous shales. The first is represented by a braided-style fluvial association constituted of orange-yellow, micaceous, friable, coarse to fine sandstones with planar and trough cross-stratification, thin layers of fine conglomerates, and interlaminations of shales and mudstones. The upper sequence represents a meandering-style fluvial association that is also sandy, but with disseminated mudstones and shales. It consists of thin sandstones of yellowish to grey colour, reddish mudstones and siltstones, and greyish to black shales deposited under low-energy conditions. Conglomeratic facies tend to be thin and the sequence fines upwards. At the base of the second sequence there is an occurrence of thin conglomerates, denoting an erosive unconformity. These sequences are interpreted primarily as a braided continental fluvial system resulting from tectonic subsidence due to post-rift reactivation of faults, in close association with a lacustrine basin.
Other Cretaceous fluvial sandstones in the São Sebastião Formation and the Marizal Formation (Tucano Basin) in NE Brazil have also been subject to reservoir characterization studies (Carrera 2015; Janikian 2015). A Triassic sandstone geobody from the Triassic red beds of the Iberian Meseta (TIBEM) Formation in south-central Spain has been studied by Yeste et al. (2018) and interpreted to record the sedimentary dynamics of a fluvial braidplain. This particular example has been highlighted in the literature as an outstanding outcrop analogue for productive reservoirs such as the Algerian TAGI (Trias Argilo-Gréseux Inférieur) (Yeste et al. 2018). The analysis of architectural elements in outcrop allows differentiation of the sub-environments of deep perennial channel, compound bar, cross-bar channel and bar-tail elements. Of the total geobody population, 80% is represented by sandy compound bars up to 1000 m long and 500 m wide, developed in four building phases. At the base, each compound bar starts with a thick set of planar cross-stratification corresponding to a transverse unit bar. On top of this, several thinning-upwards sets of planar and trough cross-bedding suggest that the subaerial accommodation space progressively decreased to become an island of rippled sands with intercalated clay and silt layers. Petrophysical analysis shows that the lithofacies association within each sub-environment constitutes the main control on permeability baffle distribution throughout this reservoir analogue. The occurrence of clay drapes within the compound bars will have a significant impact on the vertical permeability (as also noted by Best & Fielding 2019). This study incorporates a range of techniques to aid the 3D investigation. A total of six wells, with core recovery and well-logging (natural and spectral gamma ray, optical and acoustic televiewer) data acquisition, have been drilled behind the outcrop targeting the main sub-environments. The combination of these subsurface data with a georadar survey, and their comparison with the outcrop analysis, has allowed the development of stochastic and deterministic models that accurately reproduce the distribution of reservoir heterogeneities. These models show significant lateral variability within a laterally extensive, 17 m-thick, sheet-like system, which can then be used to improve operational strategies during enhanced oil recovery in this type of fluvial reservoir. Ellen et al. (2018) describe in detail an outcrop of Lower Carboniferous fluvial sandstones from SW Scotland. The Spireslack surface coal mine offers outstanding, nationally to internationally significant, nearly complete exposure from the Lawmuir Formation (Brigantian) through to most of the Upper Limestone Formation (Arnsbergian). It shows all the nationally recognized marine limestones and marine bands up to and including the Calmy Limestone. It is the most continuous Mississippian Sub-Period section in Scotland and also in the Muirkirk (East Ayrshire) Coalfield basin, where it also provides a reference section for the locally named coal and ironstone seams. Relationships between the architecture of the large fluvial sand bodies within this stratigraphic framework are described. Two sandstone bodies are interpreted as having been deposited in a low-sinuosity, sand-dominated palaeovalley of significant relief. This setting could provide an analogue for a more restricted, stratigraphically isolated, incised reservoir trapping system.
The outcrop also shows the interaction of the sedimentary response with syndepositional tectonic activity: features that, if present in the subsurface, will only add to reservoir complexity and potential compartmentalization. Modelling and simulation In the previous section a number of authors generated 2D panels (Stow et al. 2016; Fambrini et al. 2019) and developed 3D reconstructions of outcrop architecture (Ellen et al. 2018; Yeste et al. 2018). These models and the data collected in the field can be used as training images and statistical datasets for the building of 3D reservoir models (Martinius et al. 2014; Beaumont et al. 2016). Models of fluvial systems for the purpose of simulation range from geostatistical through to finite difference/finite element approaches. Stochastic modelling is used in hydrology (Patidar et al. 2018) to constrain 2D simulations (Xia & Liang 2016) and 3D simulations (Wang 2016), depending on the complexity of the problem. Machine learning, from Google Earth images (Ahmed et al. 2016; Russell et al. 2016) through to core datasets (Demyanov et al. 2019), can help to quantify uncertainty and improve geological classifications. Understanding 3D responses in the subsurface can utilize valuable engineering data in support of the geological models (Corbett & Duarte 2018). The interaction between understanding the process, relating it to geology, collecting quantitative data and building 3D models for simulation is important in the understanding of preserved fluvial systems. Statistical analysis of modern rivers might assist in the development of more process-based models, which can also improve the workflow. In recent years, the application of machine learning and statistical techniques has improved our ability to forecast and predict extreme behaviour in natural systems. A range of mathematical tools and software, mainly based on the use of a single N-year extreme flow or rainfall event, is available to conduct a thorough assessment of fluvial flood risk and various related aspects of flood risk management (FRM) projects. Utilizing multiple realizations of flow sequences can ensure a robust approach for attaining long-term sustainability of FRM projects. Previous studies have been shown to generate reliable results (multiple realizations of daily streamflow sequences) through successful application of stochastic modelling approaches such as the hidden Markov model (HMM) coupled with the generalized extreme value (HMM-GEV) and generalized Pareto (HMM-GP) distributions. The HMM-GP model has been rigorously assessed for its ability to capture the various statistical characteristics and stochasticity of the simulated flow sequences. Models have been robustly validated across four hydrologically distinct catchments in the UK (the rivers Don, Nith, Dee and Tweed) and demonstrate excellent performance (Patidar et al. 2018). These models might be further extended from civil engineering into the geological domain, to run at geological timescales with changing boundary conditions as a result of base-level variations (either climatic or tectonic or both) and history-matched with geological systems. Demyanov et al. (2019) apply machine-learning techniques to objectively classify the information encapsulated in sedimentary logs from two modern braided rivers: the Río Paraná in Argentina and the South Saskatchewan River in Canada. They apply various data-classification techniques, such as self-organizing maps, for unsupervised clustering of sedimentary logs. 
Early results show that machine-learning classification does indeed have the potential to reveal interpretable sedimentological information by grouping well logs according to consistent sedimentological patterns. A statistical framework for the interpretation of such data (even if these are expert interpretations) might enable the calibration of models back to at least recent geological time. These data-mining approaches could then be applied to older and more complex systems. Extending the modelling of physical processes and responses from the Earth's surface into the subsurface could yield particularly useful results in the future. Translating what is visible in a vertical well profile to what is happening laterally at inter-well distances in fluvial reservoirs and aquifers is a challenge facing the subsurface community exploring ancient fluvial systems (Corbett & Duarte 2018). Producing fluids from a well and then shutting it in and observing the pressure build-up provides a signal from the near-wellbore region. Interpretation of that build-up response in fluvial reservoirs can be challenging, as the response is most sensitive to the highest-permeability regions connected to the wellbore. Fluvial reservoir rocks have some of the most variable, composite architectural arrangements, leading to a myriad of pressure responses. What the pressure 'sees', or responds to, in 3D is very difficult for geologists and engineers to comprehend easily (often because of the lack of a common language). At the present time, engineers have no single analytical model to cover the myriad of fluvial reservoir pressure responses. 'Endpoint', relatively simple, models exist. Complex geological models can be built from detailed Google Earth images, as can be seen in the case study from Colombia. Exploration of the possible responses by geo/flow modelling, and comparison of synthetic geotype curves with real build-up data, can help to constrain the more appropriate architectural scenarios. The clay drapes internal to the compound bars described by Yeste et al. (2018) and the calcretes described by Dal' Bó et al. (2018) will reduce vertical permeability, conditions whereby the lateral connectivity will become the critical aspect of pressure support. The model of Yeste et al. (2018, fig. 9) illustrates nicely the lateral stacking pattern implied by the well-testing models. Further work is required to link the vertical and lateral stacking implied by the distributive fluvial model (A and C, B and D in Fig. 1) (Owen et al. 2017) with the Type I, II and III well test models (Corbett & Duarte 2018). Management Water supplies play into many of the key United Nations Sustainable Development Goals (Fig. 2). In securing water supplies, Blum (2019) shows how trapping sediment in the dams along the Mississippi has reduced the dominance of the river and influenced the downstream 'health' of the delta. Management of water through periods of drought or flooding (Germano & Castilho 2016) is a major challenge in a country such as Brazil (Kuwajima et al. 2019). How will longer-term climate trends (from climate models) deal with the likelihood of increasing extreme events? What can the geological record tell the forecasters? The Nhecolândia wetlands in central Brazil (Oliveira et al. 2018) present a unique insight into an intra-continental fluvial environment where regional rainfall and regional-scale climatic cycles control the water regimes in this semi-arid region. 
Small changes in water sources in this delicate location can have large impacts on society at a much broader scale. The history of changes along major river courses reflects the impact mankind has had on rivers over short timescales (Skelton 2015), overprinted on longer-timescale palaeoclimatic changes (Hill 2015). An effective water resource management strategy is important to meet multiple objectives such as water supply, navigation, hydroelectricity generation, environmental obligations and flood protection. By implementing a predictive control approach over a short-term forecast horizon, it is possible to foresee stress conditions or peak flow events and support decision-makers in taking action before these events happen, thus minimizing their impacts. In the case of flood events, this technique enables the operators to pre-release water from a reservoir to allocate additional storage before the flood event occurs, in order to mitigate flood damage along downstream river reaches. Such a scenario could have prevented the flooding of housing stock during Hurricane Harvey in the Greater Houston area in August 2017. For that purpose, a robust and fast routing model is required in order to obtain quick and reliable estimates of downstream flow conditions related to release changes of the reservoir. The novel short-term optimization approach consists of the reduction of ensemble forecasts into scenario trees as an input into a multi-stage stochastic optimization (Kuwajima et al. 2019). The damming of the San Sebastian River in NE Brazil, the subject of water management strategies (Silva 2015), which caused a rise in water level and flooding of a fluvial system, presents an interesting analogue of the effect of rising sea level on coastal systems (Fig. 3).
Fig. 3. The drowning of the San Sebastian River system, NE Brazil, by the Lago Sobradinho as a result of a dam being built provides a small-scale analogue to the processes seen in fluvial systems due to rising sea levels and illustrates the various aspects of changing river systems addressed in this volume (© 2018 Google Earth CNES/Airbus). The largest field of view is 1000 × 600 km, the smallest is 40 × 30 km.
The same situation occurs in the Mississippi (Blum 2019) and all other rivers where artificial reservoirs trap sediment. Monsoons can also wreak havoc on fluvial systems in coastal tropical areas (Hackney 2016). GEOENGINEERING OF RIVERS AND RESERVOIRS Water resource management is one of the greatest challenges facing many communities in the face of changing climates and increased demand. Hedging (the holding back of water for drier periods) is universally recognized as a useful practice for redistributing water shortages so as to avoid occasions of large, crippling shortages during surface-water reservoir operation (Adeloye & Soundharajan 2018). However, when based on zones of available reservoir storage, hedging has traditionally been static, in that the rationing ratio (i.e. the supply/demand ratio) is constant from one period to another. Given the seasonal variations in inflows into reservoirs, it should be expected that certain periods or months of the year will require less hedging and, hence, be able to supply more water than others, further enhancing the effectiveness of hedging as a system performance enhancer. In this study, Adeloye & Soundharajan (2018) examine the effect of dynamically varying hedging policies on the performance of the Pong reservoir on the Beas River in Himachal Pradesh, India. 
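To make the contrast between static and dynamically varying hedging more concrete, the following Python sketch applies a month-dependent rationing ratio within a simple monthly reservoir mass balance. It is only an illustration of the idea under stated assumptions: the inflow series, storage capacity, trigger zone and rationing ratios are invented and are not taken from the Pong reservoir study.

import numpy as np

def simulate(inflows, demand, capacity, ratio_by_month):
    """Monthly reservoir mass balance with zone-triggered hedging.

    ratio_by_month gives the rationing ratio (release/demand) applied
    whenever storage falls below a trigger zone; 1.0 means no hedging.
    """
    storage = 0.5 * capacity            # assumed initial storage
    trigger = 0.4 * capacity            # assumed hedging trigger zone
    supplied = []
    for month, inflow in enumerate(inflows):
        ratio = ratio_by_month[month % 12] if storage < trigger else 1.0
        release = min(ratio * demand, storage + inflow)
        storage = min(storage + inflow - release, capacity)
        supplied.append(release)
    return np.array(supplied)

rng = np.random.default_rng(1)
inflows = rng.gamma(shape=2.0, scale=30.0, size=240)   # 20 years of synthetic monthly inflow
demand, capacity = 55.0, 600.0

static = simulate(inflows, demand, capacity, [0.8] * 12)   # constant rationing ratio
dynamic = simulate(inflows, demand, capacity,              # season-dependent ratios
                   [0.9, 0.9, 0.8, 0.7, 0.6, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 0.9])

for name, s in (("static", static), ("dynamic", dynamic)):
    shortfall = np.clip(demand - s, 0, None)
    print(name, "worst monthly shortfall:", round(shortfall.max(), 1),
          "total shortfall:", round(shortfall.sum(), 1))

The point of the comparison is simply that allowing the rationing ratio to vary by month redistributes shortages across the year rather than eliminating them.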
The impact on aquifers in Brazil is also a significant water resource issue (Hirato et al. 2015). The geotechnical interaction of water in the soil (Barreto 2015) impacts water run-off and storage, as well as slope stability. Civil engineering models are able to simulate the processes (Fan 2015; Guan 2016; Liang 2016). Statistical models can take data over limited time periods at high resolution and extrapolate to longer periods, as in the study of various Scottish rivers described in this volume (Patidar et al. 2018). These models can be further constrained by longer-term climate and usage changes (Fan 2015). There is also a role for government and government agencies in managing water resources (Jenkins 2015; Reeves et al. 2015) in the face of the challenges mentioned, and this volume might encourage a more holistic approach to scientific study to meet society's needs. Concluding remarks This volume is intentionally ambitious in spread and has pulled together a wide range of contributions across the geoscientific and civil engineering domains. These domains are conventionally addressed in single, separate volumes. The reader can select the part of the domain of most interest and absorb related issues from closely related domains. The importance to, and interest of, governing and government agencies is highlighted. Understanding the context of fluvial models continues to dominate the geological literature, with the impact of the more recent distributive fluvial systems model building on older braided/meandering/anastomosing models; both are used in this volume. Emphasis on knowledge of the processes of deposition, and on evolving three-dimensional architectural models, will help in the interpretation of sparse subsurface records. Useful outcrop-based sand-body descriptions with channel dimensions and stacking patterns, with compound bars, associated calcretes and clay drapes, are presented from the Carboniferous to Recent. These are of interest to the subsurface reservoir characterization community. Subsurface 3D modelling has proven the importance of lateral and vertical connectivity (the former controlled by lateral amalgamation and the latter controlled by clay drapes, calcretes and, more generally, intercalated floodplain deposits). The impact of modern rivers on society is immense, and climate change will doubtless change the geomorphology of rivers, as it has done in the past. Understanding the context of palaeorivers will potentially improve our understanding of how our modern rivers will change. For certain systems, tectonic controls exert an equally strong or even stronger influence on the river systems than glacio-eustatic climate change drivers. A full range of modern data-mining techniques will surely be deployed on Google Earth images, ancient geological records and modern river data to improve the statistical characterization and modelling of fluvial systems and the uncertainty predictions thereof.
6,358.8
2019-09-24T00:00:00.000
[ "Geology" ]
Family Involvement and Firm Governance: In the View of Socioemotional Wealth Protection Family business is a form of the past, a mode of the present, and a model for the future. Based on data for 1,420 private companies listed in China over 7 years (2006-2012), statistical analysis found that, year by year, the degree of separation between the two rights (ownership and control rights) in private enterprises has been falling. As the institutional environment changes, more members of the family are becoming involved inside the enterprise, and their sources are also diversifying. Empirical tests on 7 years (2006-2012) of data for 717 family enterprises listed in mainland China showed that family members' involvement in the enterprise is advantageous to the preservation of the family firm's socioemotional wealth, and that the core family has a direct relationship with the family enterprise's socioemotional wealth behaviour. Improvement of the external institutional environment is also advantageous to the preservation of the family enterprise's socioemotional wealth, and the external environment can also change the influence that family members involved in the enterprise have on socioemotional wealth preservation behaviour. The outbreak of the financial crisis eases the contradictions between family members through a shared sense of crisis, and shows that brothers, relatives and friends connected to the core family by marriage make a greater contribution to the preservation of the family enterprise's socioemotional wealth than the second direct generation. Introduction Since the 3rd Plenary Session of the 11th Central Committee of the Communist Party of China (CPC) in 1978, along with the "Reform and Opening Up" policy, the potential of marketization of China's economic system has become more and more obvious, and the potential of the private economy has been released as well. Benefiting from policy advantages, private firms in China have become a main component of the national economy. Recently, the 3rd Plenary Session of the 18th CPC Central Committee, held in November 2013, not only reiterated the importance of the market economy in China, but also strengthened the confidence of China's private entrepreneurs. What should be mentioned is that the divergence between ownership and control rights has not been weakened by the market economy reform, but has been reinforced by family involvement. Therefore, this paper calculated the proportion of actual control rights and the proportion of actual ownership based on 7 years (2006-2012) of data on 1,420 private listed firms in China from the China Stock Market & Accounting Research (CSMAR) database. The fluctuations of the two proportions can be seen in Figure 1, which also shows the fitted curves of the two lines from 2006 to 2012. 
Figure 1 demonstrates that from 2006 to 2012, both the proportion of actual control rights and the proportion of actual ownership went up. In fact, the data confirm that the divergence of ownership and control rights declined by 0.7% from 2006 to 2012. Although this change might have many secondary causes, the principal social cause remains the most significant and relevant one. Firstly, once family members are embedded into the family enterprise, the divergence between ownership and control rights is more than likely to be undermined. (In reality, according to the Chinese Family Business Report, 85.4% of private firms in 2011 were family firms.) In addition, while the market economy reform liberated the development potential of private firms, it also intensified their sense of crisis at the same time, especially for private family firms, which eventually pushes family members to participate in their own family business. Therefore, in the following research, this paper addresses three questions. Which family members have participated in the private firms? What effects has family involvement brought to firm governance? Could changes in the external environment have any impact on family members' behaviours? Family Firm Research and Family Involvement Motivation The phenomenon of overlapping terminology and contradictory empirical results is common in family firm research [1]. Asaba, S. (2012) argues that family firms can reduce the agency conflicts between control rights and ownership that are constantly seen in non-family firms. If the firm can integrate control rights and ownership, it will cut down costs and establish competitive advantages. In contrast, Wasserman, N. (2006) argues that there are "principal-principal" agency problems in family firms. The person who dominates the firm might take advantage of information asymmetry in order to encroach upon other owners' benefits. From the perspective of embeddedness, Le Breton-Miller, I., Miller, D. & Lester, R. H. 
[2] reconciled the two contrary arguments. However, there remain certain unresolved concerns in explaining firms' behaviour and family members' actions. This raises the question: why have more and more family members become employees of family firms? In situations like this, socioemotional wealth theory reveals its vitality. According to recent studies, socioemotional wealth should include at least the family's power and influence in the firm, family members' needs for a sense of belonging and intimacy, the continuation of family values, altruism among family members, the family's social capital and the family firm's heritage [3]. When conducting strategic planning, family firms should take socioemotional wealth into consideration as a key component. What is more, if the family loses control rights, its socioemotional wealth will be challenged by non-family members. Indeed, compared with control rights, ownership is a more basic power. Thus, socioemotional wealth is linked not only to control rights but also to ownership. Therefore, another question arises: is it ownership or control rights that ultimately influences socioemotional wealth? The loss of socioemotional wealth will lead to the weakening of affection among family members, a drop in the family's social status, disappointment of members' original expectations, and so on. As a result, we consider that protecting socioemotional wealth is not only the ultimate aim for family members, but also an important strategic action for family enterprise operation. Family Involvement and Socioemotional Wealth Following Le Breton-Miller, L. & Miller, D. [2], we also believe that socioemotional wealth affects the whole operating process of the family enterprise. In their research, as the family business grows, ownership and control rights shift from the individual founder to the founder's family, and are finally dominated by "Cousin Consortia". Exactly as they pointed out, family involvement is the root of the transformation of ownership and control rights. We do indeed find evidence for this among the 1,420 private-listed firms in China. Based on the data from CSMAR, this paper classifies the family members embedded in the family firms according to different types of GUANXI (literally meaning relationship), which can be seen in Figure 2. 
Figure 2 illustrates that, of all the family members involved in the business, the first generation of direct folks occupies the most seats, followed by the second generation, the first generation of affinity and the first generation of the collateral line. Only a few members of the third generation participate in the business. Actually, when the firm was established, only a few people from the core family were embedded in the firm. As the firm progresses, the status of the founders is confirmed. Then, a board is elected to meet the requirements of legitimacy, especially when the firm goes public. As the family business expands, collisions between the first generation and the second generation intensify, and conflicts among different relatives worsen at the same time. Thus, how to manage a large family enterprise becomes a thorny issue. On the one hand, in order to balance interests, ease conflicts and maintain harmonious relationships, people outside the core family are needed to add some diversity to the Board, in the form of specialized committees. On the other hand, as the family enterprise expands, its impact and image become particularly eye-catching to the public, and pressure from social responsibility rises as well. As a result, non-family members are called to be introduced to the board. We name them "conciliators". However, the "conciliators" do not have economic priority but only provide supporting assistance. Based on the above theoretical analysis, we propose: Hypothesis 1a: Family involvement is positively related to socioemotional wealth protection; Hypothesis 1b: Different family members perform distinct behaviours in socioemotional wealth protection. External Environment Change and Family Involvement The quality of the external environment reflects the degree of systemic risk in business. Here, we focus external environment change on the institutional environment. Based on agency cost (AC) theory and the resource-based view (RBV), previous studies argue that there is a substitutive relationship between the institutional environment and family involvement. From the AC perspective, the better the institutional environment, the lower the family involvement. The RBV explains this phenomenon as follows: the decline is caused by the advantages of social capital being replaced by business resources during the process of market reform. La Porta, R., et al. [4] reason that if changes in the institutional environment are not conducive to protecting investors, the number of family firms will gradually decline. Considering socioemotional wealth theory, the family firm's attitude and behaviour depend on the nature of future risk [5]. After all, if the risk materializes later, the whole family will be affected [6]. They may lose the property that embodies all of the family's painstaking efforts. In contrast, some studies have questioned the arguments of AC and RBV. They find that during the period of economic transition, institutional environment improvement and economic development attract more family members [7]. Here, we have the following to support Chen's point, which shows a positive correlation between institutional environment improvement and family involvement. Besides, there is a neutral statement proposed by Miller, D. et al. [2]: changes in the external environment play an important role as a regulating mechanism, adjusting the effects of family involvement on firm governance. Benefiting from the economic reform in China, the external environment has kept a staggering growth trend in the past 35 years. Therefore, we propose the following: Hypothesis 2: The improvement of the external environment is positively related to socioemotional wealth protection; Hypothesis 3: Changes in the external environment can adjust the behaviours of family members in socioemotional wealth protection. Data Collection It is easy to define a firm; however, it is complicated to define a firm as a family firm. A family firm may become a non-family firm under stringent standards. In this paper, we define the family firm under a loose concept: if the firm is controlled by a family or a natural person, it is defined as a family firm. Based on CSMAR, we collected 717 firms that met this notion as of December 31st, 2009. 
Family Involvement How should family involvement be measured, especially the degree of family involvement? If having only one person embedded in the firm is enough to constitute family involvement, then all of our sample firms are family involvement firms. We move beyond the limitation of previous studies and attempt to distinguish the different degrees of family involvement among family firms. We construct this variable through 103 words which represent Chinese GUANXI. Firstly, we distinguish four basic types of GUANXI: couples (gf), parentage (gs), brothers (kf) and relatives (ks). Here, "g" represents the direct folks; "k" represents the collateral line; "f" represents the first generation; and "s" represents the second generation. We must further clarify that "ks" also includes relationships of affinity. It should be pointed out that we do not consider the third generation in our empirical study (because there are only 3 observations in our sample). Family Firm Governance In the theoretical analysis, we consider socioemotional wealth protection as the motivation of family involvement and, as a result, family involvement impacts family firm governance. In previous studies, the measurement of family business governance mainly focuses on the internal structure of the Board and the structure of the top management team. Common measurement indicators include the size of the Board, the independence of the Board, CEO duality, etc. [8]. In our study, we consider three variables: the sum of squares of the shareholding ratios of the top 10 circulating-stock shareholders (herfindahl_10), the sum of squares of the shareholding ratios of the different strategic shareholders (shrhfdm) and the separation of ownership and control rights (vc). These variables can reflect the comprehensive governance level of family firms and are closely related to socioemotional wealth. Meanwhile, we assume that the higher the herfindahl_10 and the shrhfdm, and the lower the vc, the better for socioemotional wealth preservation. External Environment Change In past research, external environment change is usually reflected by Fan Gang's (Chinese: 樊刚, one of China's most prominent economists) edition of the marketization index. Notably, many survey-based indicators in this index are not re-investigated when confronting new situations, which makes the index lose its original meaning as a measurement. Meanwhile, due to GDP data adjustments by the National Bureau of Statistics and some incomplete data, the reliability and representativeness of Fan's edition of the marketization index are debatable. Considering that data quality may influence the research, we adopt "local family firms' operating income in the current year" and "all local firms' operating income in the current year" to construct a new index, namely the privatization process, or provrevenue. In the empirical test, we utilize this index to reflect external environment change. Additionally, we define the time before 2008 as the pre-crisis period (PC), the time from 2008 to 2009 as the crisis period (IC), the time after 2009 as the later period (LC) and the time from 2006 to 2012 as the research interval (P). 
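As a minimal illustration of how the governance and environment variables defined above could be computed, the following Python/pandas sketch builds herfindahl_10, vc and provrevenue from toy shareholder-, firm- and province-level tables. The column names and numbers are hypothetical; the actual CSMAR field names and the authors' own calculations are not reproduced here.

import pandas as pd

# Hypothetical shareholder-level table: one row per (firm, year, shareholder),
# with the circulating-stock shareholding ratio in percent.
holdings = pd.DataFrame({
    "firm":  ["A"] * 3 + ["B"] * 3,
    "year":  [2009] * 6,
    "ratio": [20.0, 10.0, 5.0, 40.0, 2.0, 1.0],
})

def herfindahl_top10(ratios):
    # Sum of squared shareholding ratios of the largest 10 shareholders.
    top = ratios.sort_values(ascending=False).head(10) / 100.0
    return float((top ** 2).sum())

herfindahl_10 = (holdings.groupby(["firm", "year"])["ratio"]
                 .apply(herfindahl_top10)
                 .rename("herfindahl_10"))

# Hypothetical firm-level table: control rights and ownership (cash-flow rights)
# of the actual controller, both as proportions; vc is their separation.
firms = pd.DataFrame({
    "firm": ["A", "B"], "year": [2009, 2009],
    "control_rights": [0.45, 0.55], "ownership": [0.30, 0.50],
})
firms["vc"] = firms["control_rights"] - firms["ownership"]

# Hypothetical province-level table: provrevenue is local family firms'
# operating income divided by all local firms' operating income that year.
province = pd.DataFrame({
    "province": ["X"], "year": [2009],
    "family_income": [120.0], "all_income": [400.0],
})
province["provrevenue"] = province["family_income"] / province["all_income"]

print(herfindahl_10)
print(firms[["firm", "vc"]])
print(province[["province", "provrevenue"]])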
Control Variables When family business performance is beyond expectations, the owners are more likely to think about business sustainability [9]. Based on this fact, the variables representing business performance are treated as control variables. In this paper, the control variables include: performance (t40401); the listed age of the firm (age) and its square (age_2); the size of the firm (empnum, the number of employees); and the industry (indcd_c/indcd_m/indcd_j/indcd_o). The descriptive statistics of the variables used to test the hypotheses, and the correlations among them, can be seen in Table 1. Method Based on 7 years of panel data, we choose random effects models. There are several legitimate reasons for this choice. Firstly, panel data contain more information than cross-sectional data and time-series data, and can make up for the omitted-variable bias which often occurs in instrumental variables regression. Secondly, the individual effect should generally be treated as random [10]. Last but not least, compared with the number of firms (717), the number of years (7) is tiny; in order to preserve degrees of freedom, it is better to avoid the fixed effects model. We also run the Breusch-Pagan test after each regression in the process of the empirical research, and the results of the B-P tests support our choice. Models In order to examine Hypothesis 1a and Hypothesis 1b, we build model 1. For Hypothesis 2, we build model 2. For Hypothesis 3, we take the interaction variables into the models and regress model 1 and model 2 under the condition of controlling for external environment change (crisis). Results Tables 2-4 present the results of our estimation. We show three different model specifications: in (1) the regression contains the whole dataset, in (2) the regression only includes the PC-period data, and in (3) the regression only involves the LC-period data. Each regression controls for the other variables. Limited by space, the results for the control variables are not presented in each table. The standard errors of each main variable are given below the coefficients. The asterisks represent the significance of each variable: * p < 0.1, ** p < 0.05, *** p < 0.01. Interpretations are as follows. Table 2. FI and firm governance. Family Involvement and Firm Governance We first examine whether family involvement is positively related to socioemotional wealth protection (Hypothesis 1a) and whether different family members perform distinct behaviours in socioemotional wealth protection (Hypothesis 1b) at the firm governance level. The results offer some support for Hypothesis 1a and Hypothesis 1b, as can be seen from Table 2. Table 4. External environment change, family involvement and firm governance (2). 
Consistent with Hypothesis 1a, ks involvement is positively related to herfindahl_10 (see Table 2), and socioemotional wealth gains benefits from an increase in herfindahl_10. Although some of the variables are not significant, the coefficients of 4 out of the 8 types of GUANXI differ from each other (see Table 2). This is consistent with Hypothesis 1b. Regarding the influence that family involvement imposes on shrhfdm, where significant, the involvement of various family members contributes to ownership dispersion. In Table 2, only gs is negatively related to vc in the PC period (but it is not significant). The results confirm that the effect of gf is unlike that of ks in socioemotional-wealth-protecting behaviours at the level of firm governance, and that, in accordance with Hypothesis 3, the crisis can change family members' actions to protect socioemotional wealth (by comparing the LC data regressions with the PC data regressions). External Environment Change, Family Involvement and Firm Governance For the improvement of the external environment (Hypothesis 2), we find that during the P/LC periods, the higher the provrevenue, the greater the herfindahl_10 (p < 0.01); during period P, the higher the provrevenue, the smaller the shrhfdm (p < 0.01); during the LC period, the higher the provrevenue, the greater the shrhfdm (p < 0.1); and the effect of provrevenue on vc is not considerable. When the provrevenue variable is taken into the models, the ks coefficients change in both magnitude and significance. This result suggests that family involvement does affect family firm governance in terms of the shareholding structure, which is closely tied to socioemotional wealth. Finally, we draw the conclusion that the amelioration of the external environment is positively related to socioemotional wealth protection. In order to test whether the relationship between external environment change and family involvement is substitutive or complementary, we rely on the sign and significance of the coefficients for family firm governance (herfindahl_10/shrhfdm/vc). The results (reported in Table 4) show that only a few interaction terms are significant (p < 0.1). The mostly negative coefficients imply a substitutive relationship in which higher provrevenue predicts less family involvement. As most of the coefficients are not significant, we have to accept the argument that improvement of the external environment has the function of adjusting family members' behaviour in socioemotional wealth protection (Hypothesis 3), yet we cannot confirm a substitutive relationship between external environment change and family involvement. Although this paper does not depict an exact relationship between the institutional environment and family involvement, it interrogates the previous theories, both AC and RBV. 
In the empirical tests, we also investigated the interaction between family involvement and external environment change, finding that even though some of the original variable signs changed, they are still not that considerable. Additionally, we carried out several inspections to test the robustness of the results. First of all, we obtain similar results when re-running the regressions while controlling for heteroskedasticity and autocorrelation. Next, we build a marketization index (the ratio of private listed enterprises to all local enterprises) and repeat the regression tests; the results are basically the same. Finally, we examine different family members' behaviours during the IC period, finding results similar to those in the LC period. The difference is that family involvement has a greater impact on shrhfdm, with more prominent coefficients. 
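As a rough sketch of the random-effects panel estimation strategy described in the Method section, the following Python example fits a random-effects model with the linearmodels package on a simulated firm-year panel. The variable names (ks, gf, provrevenue, empnum, herfindahl_10) mirror the paper's, but the data are simulated, the control set is abbreviated, and the Breusch-Pagan LM step is omitted; this is not the authors' code.

import numpy as np
import pandas as pd
from linearmodels.panel import RandomEffects

rng = np.random.default_rng(0)

# Simulated firm-year panel standing in for the 717-firm, 7-year sample.
n_firms, years = 50, range(2006, 2013)
idx = pd.MultiIndex.from_product([range(n_firms), years], names=["firm", "year"])
df = pd.DataFrame({
    "ks":          rng.poisson(1.5, len(idx)),      # relatives involved (toy)
    "gf":          rng.poisson(0.8, len(idx)),      # spouses involved (toy)
    "provrevenue": rng.uniform(0.0, 1.0, len(idx)),
    "empnum":      rng.lognormal(6.0, 1.0, len(idx)),
}, index=idx)

# Toy outcome loosely following the paper's governance variable.
df["herfindahl_10"] = (0.02 * df["ks"] + 0.01 * df["provrevenue"]
                       + rng.normal(0.0, 0.05, len(idx)) + 0.2)

exog = df[["ks", "gf", "provrevenue", "empnum"]].assign(const=1.0)
result = RandomEffects(df["herfindahl_10"], exog).fit()
print(result.summary)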
Our study utilizes both cross-sectional and time-series data, subdivides family involvement within the family enterprise, and investigates different members' attitudes towards socioemotional wealth preservation before and after the economic crisis. Nevertheless, in the empirical tests some key variables, which should affect the family corporate governance variables, are still not that significant. We also look into the effect of the institutional environment on family involvement, but the interaction model is not that striking. Ultimately, our study is based on the theory of socioemotional wealth, and the empirical results back up Miller's opinion. This is merely a beginning. In our study, we only inspect the impact of family involvement on family business governance behaviour, while the impact on the daily activities of family enterprises still needs further verification. Furthermore, the measurement of socioemotional wealth and its allocation among different family members, and different family members' attitudes and behaviours in different periods, will be the direction of our future efforts. Figure 1. The proportion of the actual control rights and the proportion of the actual ownership. Figure 2. The FI of Chinese private-listed firms (2006-2012). fd: the first generation of direct folks; sd: the second generation of direct folks; fc: the first generation of the collateral line; sc: the second generation of the collateral line; fa: the first generation of affinity; sa: the second generation of affinity; t: the third generation. The 7 types of relationship are distinguished by 103 words which can represent Chinese GUANXI. These words are carefully drawn by us from the CSMAR.
5,171.6
2015-08-28T00:00:00.000
[ "Economics" ]
ICARUS v3, a massively scalable web server for single-cell RNA-seq analysis of millions of cells Abstract Motivation In recent years, improvements in the throughput of single-cell RNA-seq have resulted in a significant increase in the number of cells profiled. The generation of single-cell RNA-seq datasets comprising >1 million cells is becoming increasingly common, giving rise to demands for more efficient computational workflows. Results We present an update to our single-cell RNA-seq analysis web server application, ICARUS (available at https://launch.icarus-scrnaseq.cloud.edu.au), that allows effective analysis of large-scale single-cell RNA-seq datasets. ICARUS v3 utilizes the geometric cell sketching method to subsample cells from the overall dataset for dimensionality reduction and clustering that can then be projected to the large dataset. We then extend this functionality to select a representative subset of cells for downstream data analysis applications including differential expression analysis, gene co-expression network construction, gene regulatory network construction, trajectory analysis, cell–cell communication inference, and cell cluster associations to GWAS traits. We demonstrate analysis of a single-cell RNA-seq dataset of 1.3 million cells using ICARUS v3, completed within the hour. Availability and implementation ICARUS is available at https://launch.icarus-scrnaseq.cloud.edu.au. Introduction With the increased throughput of single-cell RNA-seq technologies in recent years, the necessity for large-scale data analysis is becoming increasingly important. Single-cell RNA-seq datasets and data from aggregated sources now include millions of cells (Almanzar et al. 2020, Tabula Sapiens Consortium et al. 2022), which has increased the need for efficient computational analysis. We have previously introduced ICARUS, an interactive web server application for single-cell RNA-seq analysis (Jiang et al. 2022, 2023). ICARUS utilizes the Seurat R workflow to perform preprocessing, dimensionality reduction, and clustering. Recently released Seurat v5 (https://cran.r-project.org/web/packages/Seurat/index.html) and the BPCells R package (https://github.com/bnprks/BPCells) introduce methods to store large datasets on-disk whilst utilizing geometric sketch-based methods to identify a subpopulation of representative cells from the overall dataset to store in memory for rapid and iterative exploration. This drastically lowers computational processing time whilst retaining power to detect heterogeneity across the data. Our update to ICARUS v3 harnesses this methodology to perform dimensionality reduction and clustering, as well as utilizing this population of sketched cells to perform common downstream data analysis including co-expression network analysis (Song and Zhang 2015), regulatory gene network construction (now updated to use the SCENIC+ regulatory motif database) (Bravo González-Blas et al. 2023), trajectory analysis (Cao et al. 2019), cell-cell signaling (Jin et al. 2021), and examination of cell cluster association with GWAS traits (Jiang et al. 2022, 2023). Applications and implementation 2.1 Exceptional computational processing speed ICARUS v3 implements the 'geometric sketching' method of sampling a subset of representative cells in the overall dataset. This method was first introduced by Berger and colleagues (Hie et al. 
2019) and recently incorporated into Seurat v5. Geometric sketching involves an approximation of the geometry of a single-cell RNA-seq dataset by employing equal-volume boxes within the multidimensional space that each cell occupies, defined by its gene expression profile. These boxes are positioned to encompass all cells in the dataset, ensuring that each box contains at least one cell. Cells are then sampled at random from these boxes, ensuring that both rare cell types and common cell types that occupy a similar volume of transcriptomic space are equally represented in the 'sketched' dataset (Hie et al. 2019). Once a subset of sketched cells is determined, this heavily reduced dataset is stored in memory while the larger overall dataset is stored on-disk using the BPCells R package (https://github.com/bnprks/BPCells). As detailed by Berger and colleagues (Hie et al. 2019), dimensionality reduction and clustering can then be performed on the sketched dataset at efficient speed, and the resultant clusters from the sketched dataset are projected back onto the overall dataset stored on disk (ProjectData functionality of Seurat v5). We demonstrate efficient clustering of a dataset comprised of 1.3 million cells completed within the hour (Fig. 1). Cell-type annotation against single-cell atlases Another major update introduced in ICARUS v3 is the incorporation of large single-cell RNA-seq atlases comprising millions of cells for cell cluster labelling. ICARUS now supports cell label transfer utilizing the SingleR method (Aran et al. 2019) for atlases including Tabula sapiens (Jones et al. 2022), Tabula muris senis (Almanzar et al. 2020), Human Brain Cell Atlas v1.0 (Siletti et al. 2023), Human Lung Cell Atlas (Sikkema et al. 2023), Asian Immune Diversity Atlas (AIDA) (https://chanzuckerberg.com/science/programs-resources/single-cell-biology/ancestry-networks/immune-cell-atlas-of-asian-populations), the developing human immune system (Suo et al. 2022), healthy human liver (Andrews et al. 2022), and adult human retina (Liang et al. 2023). Furthermore, datasets from the Chan Zuckerberg CELLxGENE (CZ CELLxGENE) database may be directly loaded into ICARUS and cell-type labels transferred using the SingleR methodology. ICARUS additionally retains the functionality to perform cell cluster labelling through sctype, an R package that congregates cell-type-specific markers from the CellMarker (http://biocc.hrbmu.edu.cn/CellMarker) and PanglaoDB (https://panglaodb.se) databases. To achieve efficient cell-type annotation, the subset of sketched cells is first annotated against the reference datasets using SingleR or sctype and then projected back onto the larger overall dataset. Doublet detection at scale We have previously introduced the DoubletFinder (McGinnis et al. 2019) methodology for identifying cell multiplets that may arise during single-cell RNA-seq library generation. However, the computational speed of DoubletFinder during artificial k-nearest-neighbour (pANN) simulation is not efficient for large datasets (Xi and Li 2021). ICARUS v3 utilizes the subset of sketched cells to perform pANN generation at a small scale, which is then projected back to the overall dataset (ProjectData functionality of Seurat v5) to enable approximation of multiplets at a large scale. 
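The ICARUS v3 implementation itself relies on Seurat v5's SketchData and ProjectData in R; purely as a conceptual illustration of the sketch-then-project idea, the following Python sketch uses the original geosketch package of Hie et al. (2019) together with Scanpy, clustering only the sketched cells and then transferring the cluster labels to all cells with a nearest-neighbour classifier. The PBMC3k demo dataset, the sketch size of 500 cells and the neighbourhood sizes are arbitrary choices for illustration, and the leiden step assumes the leidenalg dependency is installed.

import scanpy as sc
from geosketch import gs
from sklearn.neighbors import KNeighborsClassifier

# Small demo dataset standing in for a million-cell matrix.
adata = sc.datasets.pbmc3k()
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)
sc.pp.pca(adata, n_comps=50)
X = adata.obsm["X_pca"]

# 1) Geometric sketching: pick a subset of cells that covers transcriptomic space.
n_sketch = 500
sketch_idx = gs(X, n_sketch, replace=False)

# 2) Cluster only the sketched cells (cheap even for huge datasets).
sketch = adata[sketch_idx].copy()
sc.pp.neighbors(sketch, n_neighbors=20, use_rep="X_pca")
sc.tl.leiden(sketch)          # requires the leidenalg package

# 3) Project the sketch cluster labels back to every cell via nearest
#    neighbours in the shared PCA space.
knn = KNeighborsClassifier(n_neighbors=15)
knn.fit(X[sketch_idx], sketch.obs["leiden"])
adata.obs["projected_cluster"] = knn.predict(X)
print(adata.obs["projected_cluster"].value_counts())

In ICARUS itself the same pattern is executed in R with Seurat v5, and the full matrix is kept on-disk via BPCells rather than in memory.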
Streamlined incorporation of 10× Genomics and Anndata hdf5 files We also introduce an easier method of data input, with support for the 10× Genomics hdf5 and Anndata hdf5 file formats (Virshup et al. 2021). Users may now upload multiple hdf5 files at once for streamlined integration. Integration of datasets may be performed using anchor-based CCA integration (Stuart et al. 2019), anchor-based RPCA integration (Stuart et al. 2019), harmony (Korsunsky et al. 2019), or fastMNN (Zhang et al. 2019). Summary Our latest update to ICARUS provides users with the capability to process, at speed, large datasets that previously could not be effectively processed. To our knowledge, ICARUS is currently the only publicly accessible web server that supports in-depth analysis of large-scale single-cell RNA-seq data. Moreover, users can take advantage of ICARUS's built-in save and load feature, which has also been updated to leverage on-disk storage to streamline analysis and minimize the computational time spent on repeated analyses requiring resource-intensive steps. ICARUS will continue to receive ongoing updates as new methodologies are developed, ensuring that users have access to a cutting-edge resource for making novel discoveries. Benchmarking For benchmarking, a sketched dataset of 10 000 cells was generated using the SketchData function from Seurat whilst the large overall dataset was stored on disk using the BPCells write_matrix function. The sketched dataset was then scaled and log normalized, and dimensionality reduction was performed using 2000 variable features (Seurat::FindVariableFeatures) and the first 50 PCA dimensions. Graph-based clustering was performed using the Louvain algorithm with a k-nearest value of 20. Benchmarking was performed on Linux Ubuntu 22.04.2 LTS with 64 GB RAM and an AMD EPYC-Milan processor with 16 CPU cores. Benchmarking was also assessed on a Windows 11 machine with 16 GB RAM running a 16-core 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz. 
Figure 1. Efficient single-cell RNA-seq analysis with ICARUS v3. (A) Efficient computational speed is achieved in ICARUS v3 through the use of a 'geometric sketching' method of sampling a subset of representative cells whilst the larger overall dataset is stored on-disk. Dimensionality reduction and clustering is performed on the sketched dataset and then projected back onto the overall dataset. (B) Benchmarking of ICARUS v3 dimensionality reduction and clustering for various datasets of increasing cell numbers (dataset list available in Supplementary 1). A sketched dataset of 10 000 cells was taken. Each sketched dataset was scaled and log normalized, and dimensionality reduction was performed using 2000 variable features (Seurat::FindVariableFeatures) and the first 50 PCA dimensions. Graph-based clustering was performed using the Louvain algorithm with a k-nearest value of 20. Benchmarking was performed on Linux Ubuntu 22.04.2 LTS with 64 GB RAM and an AMD EPYC-Milan processor with 16 CPU cores. Benchmarking was also assessed on a Windows 11 machine with 16 GB RAM running a 16-core 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz. (C) ICARUS v3 introduces cell-type annotation against large single-cell atlases including Tabula sapiens, Tabula muris senis, the human lung cell atlas, and others publicly available in the Chan Zuckerberg CELLxGENE database. (D) The geometric sketched dataset is leveraged to perform common downstream data analysis including co-expression network analysis, gene regulatory network construction, trajectory analysis, cell-cell signaling, and examination of cell cluster association with GWAS traits.
2,182.6
2023-11-21T00:00:00.000
[ "Computer Science", "Biology" ]
The Educational Semantic Web: Visioning and Practicing the Future of Education Abstract: I (Terry) first became interested in the semantic web from reading Berners-Lee's original works and following first generation developments of semantic web technologies in information science, e-business and health fields. I then began including the ideas in talks I gave at various conferences and forums in 2003. Naturally, I became curious about what other educators were doing with the semantic web and so Googled the term, "education semantic web". Much to my surprise and disappointment, I found that most of the references were to my own admittedly introductory and visionary comments made in these speeches. Where was the real work, innovation and actual prototype development? Fortunately, we were able to locate this type of work and we believe that most of the leading researchers in the area of the educational semantic web have contributed to this special issue. Of course, if we have missed your work, we welcome comments and URLs in the discussion areas of the special issue. Editors: Terry Anderson and Denise Whitelock. Introduction The "Semantic Web" is a term coined by Tim Berners-Lee to refer to a vision of the next dramatic evolution of web technology. He envisions forms of intelligence and meaning being added to the display and navigational context of the current World Wide Web (web). The Semantic Web is a long-range development that is being built in stages by groups of researchers, developers, scientists and engineers around the world through a process of specification and prototypes instantiating these interoperable specifications. Semantic Web based applications are being developed in all disciplines and professions, including education. Both formal and informal education are integral to all forms of human development. The information age, with its emphasis on knowledge growth and multiple forms of communication, is dependent upon citizens being able to learn effectively. The speed and incessant demand for change is forcing formal and informal educational opportunities to become more effective and efficient. Moreover, the social costs of neglecting education exacerbate schisms between those with opportunities for learning and those without. The "have" and "have not" effects are social costs that individuals, as well as society as a whole, can ill afford. The Semantic Web provides a long-term vision of opportunity for educational provision that is unbounded by geographic, temporal or economic distance. But is this vision attainable? If so, is the effort required to realize this vision commensurate with the potential gain? I (Terry) first became interested in the semantic web from reading Berners-Lee's original works and following first generation developments of semantic web technologies in information science, e-business and health fields. I then began including the ideas in talks I gave at various conferences and forums in 2003. Naturally, I became curious about what other educators were doing with the semantic web and so Googled the term, "education semantic web". Much to my surprise and disappointment, I found that most of the references were to my own admittedly introductory and visionary comments made in these speeches. Where was the real work, innovation and actual prototype development? Fortunately, we were able to locate this type of work and we believe that most of the leading researchers in the area 
of the educational semantic web have contributed to this special issue. Of course, if we have missed your work, we welcome comments and URLs in the discussion areas of the special issue (see below). Format of this Special Issue The Educational Semantic Web provides a theme around which many futures and technological applications can be crafted. This Special Issue of JIME is an interactive, peer and publicly reviewed exposé, in academic terms, of the future of the Educational Semantic Web. The format of the Special Issue builds upon the work of a 2003 JIME issue in which chapters from the book, "Reusing Online Resources: A Sustainable Approach to eLearning", were publicly reviewed by an international group of experts. The reviews sparked further commentary between reviewers, authors and the general readership. This Special Issue will feature nine papers by invited, internationally renowned authors who have previously written about the effect of technology on education, learning and scholarship. Their interests and writing span distance education, higher education and lifelong learning. Each has shown the capacity to write with vision and clarity that has garnered international attention. They were asked to create original articles that envision the future decade of education and learning based on their current work and interests in respect of the emergence of a global and intelligent Semantic Web. The second component of the Special Issue is devoted to reactions to the articles, written by some of the world's foremost educational practitioners with acknowledged leadership and competence in building educational systems based on the use of new technologies. Although the distinction between the two groups may not always be easy to discern, the authors of the commentaries were asked to review and comment upon one of the selected articles. The goal of the commentaries was to review the article with a critical eye towards practicality, training and support issues, cultural and economic barriers, implicit assumptions, and other issues related to the adoption of innovation. Visions of the Educational Semantic Web The Educational Semantic Web is a developing and futuristic vision. As such, it has many enthusiastic proponents and an equal number of sceptics. In this introduction to the Special Issue, we highlight the promise of these technologies and conclude with the major arguments of the Semantic Web sceptics. The Educational Semantic Web is based on three fundamental affordances. The first is the capacity for effective information storage and retrieval. The second is the capacity for nonhuman autonomous agents to augment the learning and information retrieval and processing power of human beings. The third affordance is the capacity of the Internet to support, extend and expand the communications capabilities of humans in multiple formats across the bounds of time and space. Advocates of the Semantic Web envisage its use to create very powerful new applications in nearly all disciplines and social and economic endeavors. However, little has been written to date expanding on the promise and the current progress that applies these powerful affordances to educational contexts, challenges and opportunities. Thus, the rationale for this special issue. 
1. Information Storage and Retrieval

We have rapidly become accustomed to a network in which search engines provide potential hits numbering in the tens or hundreds of thousands for many relevant and important terms. Daily, tens of thousands more web pages of information are added to the net. Yet our capacity to find and retrieve, much less manipulate and organize, this material is only at a very rudimentary state. The Semantic Web deals with this challenge by ostensibly allowing content to become aware of itself. This awareness allows humans and agents to query and infer knowledge from information quickly and in many cases automatically. Through the use of metadata organized in numerous interrelated ontologies, information is tagged with descriptors that facilitate its retrieval, analysis, processing and reconfiguration.

For example, a simulation could be created for the Semantic Web that tracks the cargoes of ships arriving with relief supplies for a famine-struck country. The cargo manifests are placed on the web as they arrive in a port. Linkages to daily commodity markets, consumption needs, transportation availability and other data can be read in real-time by development workers and students around the world. Different scenarios can be played out, informed by real-time interventions including environmental or political vagaries. These scenarios then become artefacts of the Semantic Web themselves, providing content for future students of history, geography, development or logistics.

The capacity of the Semantic Web to add meaning to information, stored such that it can be searched and processed, provides greatly expanded opportunities for education, simulation and real-time action anywhere on the distributed network. Critics have argued that the creation of a single network of semantically related mark-up is foolishly ambitious, and unworkable beyond small and centrally coordinated communities, a characteristic that is anathema to the current web. Work in this area requires the development of appropriately scaled ontologies, systems that relate and map different ontologies to each other, and systems that learn and mine ontology connections through use and the development of working prototype systems.
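To make the idea of ontology-based descriptors concrete, the following is a minimal sketch of tagging a learning resource with Dublin Core metadata using Python's rdflib library; the resource URI and property values are invented for illustration and do not come from the article.

# A minimal sketch of semantic tagging: a hypothetical learning resource is
# described with Dublin Core properties so that humans and agents can query it.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC

g = Graph()
EX = Namespace("http://example.org/resources/")   # invented namespace

lesson = EX["famine-logistics-simulation"]        # invented resource URI
g.add((lesson, DC.title, Literal("Relief Shipping Simulation")))
g.add((lesson, DC.subject, Literal("logistics")))
g.add((lesson, DC.subject, Literal("development studies")))
g.add((lesson, DC.description, Literal(
    "Tracks cargo manifests of relief ships against commodity and transport data.")))

# Any agent that understands Dublin Core can now retrieve this resource by subject.
print(g.serialize(format="turtle"))

The point of the sketch is not the particular vocabulary but the pattern: once descriptors live in a shared, machine-readable schema, retrieval, analysis and reconfiguration no longer depend on keyword matching alone.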
2. Agents

Agents are Internet-based computer programs that are created to act relatively autonomously for extended periods of time. The Educational Semantic Web utilizes a variety of student, teacher and content agents to enhance the teaching/learning processes. For example, a teacher agent operating on the Semantic Web might undertake many of the routine administrative tasks that currently consume large amounts of teacher time. They communicate with individual student agents, tracking student progress, providing automated lists of resources such as tutorials and remedial help, and assisting scheduling and time allocation tasks. They schedule personal time between teachers and students to maximize the effect and affect of these interactions. Teacher agents will track professional interests of teachers relating to their field of subject expertise and developments in new pedagogies, with active evaluation and testing of pedagogical interventions. Teacher agents will assist teachers in routine marking tasks, record keeping, and document control for assessments requiring manual effort. Student agents will assist learners in working collaboratively, finding sources of expertise and assisting students in documenting and archiving their learning products. A further capacity of the Semantic Web is realized when agents extract information from one application and subsequently utilize the data as input for further applications. In this way, agents create greater capacity for large-scale automated collection, processing and selective dissemination of data.

However, these agents can only operate because the information on the web is endowed with semantic meaning in formats that can be read and processed by both agents and humans. Critics have noted that such personal agents have been "just around the corner" for over twenty years. Indeed, agents are the least developed of the three primary technologies of the Semantic Web, but continuous increases in processing power, coupled with increasingly automated tagging and organizing of content through information extraction techniques, give promise for near-future development of these technologies.

3. Communication

Despite the capabilities of agents, human-to-human communication will always be a major component of the educational experience. Proponents of the Semantic Web argue that this communication will be even less constrained by barriers of time or place when the Educational Semantic Web is functional. We have had access to long-range and instantaneous communications since the invention of the telegraph in the 1850s. Further developments have added voice, video, and multi-point features to synchronous communications. All of these technologies have now converged on the web. Educational Semantic Web scenarios envisage the capacity to store, search, filter and otherwise process these human interactions. This allows interactions to be used and reused in a variety of educational applications. For example, students can process the content of commercial television advertisements to deduce strategic markers used to influence consumer behaviours. Furthermore, the Educational Semantic Web could add to our concepts of virtual presence by defining and structuring virtual reality environments and net-based enhancements to real work and study contexts.
Developments referred to as "social computing" allow humans to make connections with others of like interest; coordinate activities; filter and recommend; and otherwise assist fellow learners in acquiring and building new knowledge. Finally, semantic tagging of individuals and utterances will allow for shifting and sorting of appropriate individuals and content to filter and focus interactions.

Despite the capacity and promise of the Educational Semantic Web, there continues a debate regarding the capacity, efficacy and even desirability of using such technologies in educational contexts (Noll, 2002). Fears of privacy intrusions and questions of the value, costs and desirability arise. Questions relating to the pedagogical value and necessity of extensive human interaction as a component of the educational process are largely unanswered or the subject of more epistemological debate than empirical research.

4. Challenges to the Educational Semantic Web

Like any expansive technological vision, the Semantic Web has attracted both valid criticisms and unsubstantiated denigration. These criticisms range from concerns with practicality and implementation to more fundamental challenges concerning the epistemological capacity of machines and humans to deal effectively with the same set of meaning-filled signs. Furthermore, concerns have been expressed relating to the interpretive power that can be shared across all human and machine cultures.

Beginning first with the practical issues, we note that the Semantic Web is much more complicated and difficult to implement than its HTML-based web precursor. I recall my first experience with web creation, working with a group of gifted high school students during an afternoon in 1994. At the end of the session we had created and posted multimedia pages from a yearbook to the Internet, despite the fact that none of us had ever created a web page before. By contrast, after four years of work by the W3C (World Wide Web Consortium) and other global collaborations there are as yet no complete practical or commercial applications of the Semantic Web, much less a "killer application." The networked world of the 21st century is much more diverse than that to which Tim Berners-Lee presented the original web in 1994. Now, ventures in competing technologies such as web services and huge financial investments in systems such as .Net serve to fragment development efforts in competing systems and standards. Building the Semantic Web is much more complicated than just developing sites for the original display-orientated web. The comment found on a developer's discussion list that "either RDF is dumb, or I am" captures the frustration of many who see the vision but have not been equipped with the tools or techniques to allow them to exploit that capacity.

The means by which the Semantic Web will be created often spawns acrimonious debate and discussion. Harking back to Raymond's (2001) pervasive differentiation between construction of an emergent and self-organizing bazaar as opposed to an architected cathedral, Jack Schofield (2003) comments:

For Microsoft and IBM, it's like designing a giant metropolis, laying out the roads, agreeing on traffic regulations, putting in plumbing, and so on. For the hackers, it's more like "let's build a city: everybody bring a brick."
Educators certainly no longer have the power or the will to create global information systems, and thus we are hostage to emergent technologies. However, it is unlikely that the Educational Semantic Web will be made useful unless and until its end-user applications become simple enough to support useful learning experiences and activities controlled and created by ordinary teachers and students.

The vision of the Semantic Web is based on the capacity for machines to accurately locate, read, interpret and process data created by hundreds of thousands of different individuals and organizations. It has proven to be an extremely challenging task to develop data structures that impose enough structure to ensure programmability without losing data or unduly confining the ways in which humans can express themselves. Prerequisite to the effective functioning of the Semantic Web is the existence of systems for defining, creating and deploying sets of identifiers or tags that describe and in some cases constrain the content on the Internet. These tags are organized and related to each other in the form needed for formally structured ontologies. The tags are used by both humans and agents to retrieve, process and otherwise manipulate information found on the Internet. It is becoming apparent from early work on large systems (such as Cyc) that it is unlikely that there will be a single unifying ontology under which all information can be classified. Fundamental questions related to cultural understanding, contextual variations, as well as semantic and ontological underpinnings of information, make the quest for such systems quixotic. However, work by groups such as the W3C's WebOnt group (http://www.w3.org/2001/sw/WebOnt/charter) to develop languages for creating multiple ontologies, and systems to translate between systems based on common features of ontologies, gives promise of a workable system.

Beyond the technology is the human motivation for tagging and making knowledge accessible. In a scathing essay entitled "Metacrap: Putting the torch to seven straw-men of the meta-utopia," Cory Doctorow (2001) argues that people lie, are lazy, are stupid, have very little self-insight and work in environments where there are many legitimate yet different ways to describe or tag anything. Thus, the challenge of tagging everything on the Internet in a set of coherent schemas is immense and obviously will not be done by professional cyber-librarians employed to catalogue books. Rather, systems are needed that allow tags to be acquired through use, that allow multiple tags to describe the same data, and systems that harvest and capture schema and tagging systems automatically. Of course, this need is somewhat tautological in that a system of agents capable of doing this tagging would need an existing Semantic Web in order to carry out their task. Thus, the Semantic Web is described and defended as a multi-year, if not a multi-decade, project. As hoped for, articles in this special issue (notably McCalla and Downes) point to ways that the meta-tagging problem may yet be resolved by increases in both automated and human input metadata.
For all the reasons cited above and others, there exists scepticism about the utility of the Semantic Web vision. This suspicion is especially pronounced in educational contexts, where for many the educational transaction is an intensely human experience. For some, education is more accurately described as an artistic social interchange rather than one waiting for enhancement and possible substitution by a human-machine interaction. Nonetheless, the capacity to create powerful learning opportunities, accessible anywhere/anytime, that maximize the use of content, social interaction and machine support is equally compelling to educators. Thus, this Special Issue was created to stimulate the debate and broaden the vision regarding the role of advanced networking in education through the development of the Semantic Web.

Our hope is that educators around the globe will take the time to seriously read the articles and the responses in this special issue. Second, that you will take the time to respond with your own visions and concerns or post an appropriate question that will further our discussions. A final thank you to all the authors and the respondents for an effort that we believe is of critical importance on the road to creation of more accessible, high quality education and training opportunities for each of us.

Overview of the articles and commentaries

An overview of the semantic web and the special issue by Athabasca University's Terry Anderson and Denise Whitelock from the Open University of the United Kingdom.

Arthur Stutt and Enrico Motta, Semantic Learning Webs: Stutt and Motta from the Open University of the UK begin their exposition of applications of the educational semantic web quite appropriately by detailing learner needs. Besides the obvious necessity for structure, authenticity and support, they note the need for structural organization of the context of learning on the net. From there we move to explication of the critical role of argumentation that grounds both formal scholarship and informal learning. Can the semantic web help us make and defend our arguments? With the help of graphic knowledge browsers and other tools being developed at the Open University, Stutt and Motta show us how global communities will build knowledge neighbourhoods and charts that document, share and stimulate their current and evolving knowledge base.
Australia's Rod Sims focuses on the practical in his commentary: if (and when) we build the educational semantic web, will it make a difference? Sims notes that Stutt and Motta's knowledge neighbourhoods must do more than present knowledge; they must engage not only the highly motivated but the learner who is learning for a variety of reasons, many not directly associated with intrinsic interest in the subject. This variety of interest and engagement requires that we not assume that learners will create the type of knowledge communities that the technology can support. Sims's commentary ends with a warning not to just build systems that support and virtualize the types of educational interactions and cognition that have defined education to date. Rather, we have to build for a world in which cognition and interaction with machines is fundamentally different from that which has marked our evolutionary history.

Gord McCalla: The Ecological Approach to the Design of E-Learning Environments: Purpose-based Capture and Use of Information about Learners. McCalla summarizes his extensive experiences and those of his colleagues at the University of Saskatchewan in creating artificial intelligence applications for educational use. In the article he presents a potential solution to the meta-tagging dilemma that confronts all those working with educational objects. Just how will all of the essential metatags be created and maintained, and is there any way that these tags can be rich enough to meet the diverse and ever changing needs of thousands of potential users? McCalla outlines an ambitious plan to create an 'ecological approach' to advanced e-learning applications in which content is tagged automatically in response to its use by users and, furthermore, how these 'evergreen' manifests can be matched to create personalized learning contexts. Creating McCalla's model will be complex and technically challenging, but it promises an educational semantic web that dynamically grows in response to practical uses and applications of real users. McCalla's article provides an insightful introduction and vision of a semantic educational web that builds on 30 years of development of educational applications by serious computer scientists and maximizes the advantages of the emerging distributed tools of the web.

In their response, Leonie Ramondt, Tom Smith and Pete Bradshaw from the Anglia Polytechnic University's UltraLab describe how the type of living, ecological tagging and annotation of learning objects described by McCalla needs the commitment and ownership of end users, who add the necessary affective commitment to the learning process. This sense of collaborative and group commitment is seen as necessary to any sustainable vision of the educational semantic web. They also briefly describe the way human discussions can be re-used as learning objects, using development tools for capturing and annotating discussion and classroom interaction.
Betty Collis and Allard Strijker, Technology and Human Issues in Reusing Learning Objects: Betty Collis and Allard Strijker, from the University of Twente, highlight two major issues which they consider affect the reuse of learning objects. These not surprisingly fall into the realms of technological constraints and social or human interactions with learning object repositories. They suggest that discussions surrounding the wonders of the Semantic Web, as a change agent for teaching and learning, assume that if the labelling or meta-tagging and other problems associated with the selection of learning objects are solved, then real progress will be made. However, they suggest that a number of other components in their 'life cycle' of learning objects merit attention, as they too present a number of pedagogical problems that can unwittingly be passed on to the user. Collis and Strijker welcome the development of intelligent agents, which will enhance the automation of the Semantic Web, but warn that learning objects are only a tool and that human sharing and collaboration take precedence in any meaning making process.

Terry Evans, who is a key player in the current debate about the role of globalization, technology and distance education, responds to the notion of object repositories as a form of 'instructional industrialism', a notion he has developed with Darryl Nation which describes a 'behaviorist-inspired didacticism'. Evans suggests that learning objects may be viewed as the currency of this instructional industrialism. A sober thought, but he does not go on to tell us where this leads us. He does warn of the dangers associated with the colonizing potential of new learning systems with their learning objects, such as the Semantic Web. Perhaps this is an issue that should be debated in this JIME special issue?

Rob Koper: Use of the Semantic Web to Solve Some Basic Problems in Education. Rob Koper is best known for the groundbreaking work he led at the Open University of the Netherlands in creating an educational modeling language that was incorporated in the IMS learning design specification. In this article he reviews seven of the most important technologies of the semantic web, thus providing a technical primer and overview of the technologies of the educational semantic web. He goes on to map these technologies to current problems (and opportunities) in education, and finally overviews his current work that moves "beyond the course" to envision self-organizing lifelong learning webs and communities.

In his response, the University of Waterloo's Tom Carey challenges some of the promises (after all, we've heard many before), and notes that a learning design needs to be more than a finished, static product if it is to capture and express the dynamic knowledge of those who create it. He also urges caution in overestimating the knowledge and understanding of learners that can be extracted from the tracings left by their progress through learning environments. It isn't quite as bad as interpreting the future by examining the entrails of birds, but both methods can produce error when we assume that actions equate to cognitions.
Stephen Downes: Resource Profiles. In this in-depth article, Stephen Downes from Canada's National Research Council explores the manifold problems, and at the same time the compelling need for, metadata to help us find, annotate and effectively use learning resources. Rather than taking the traditional tack of trying to standardize on a particular type and specification of metadata, Stephen argues for a much broader and more distributed system of meta-tagging in which a resource is described by many people for many uses. He also points to ways in which this distributed system of meta-tagging can and will be implemented across the web, creating an organic and self-organizing semantic web. Regrettably, we were forced to reduce the length of Stephen's article to fit the format of a journal article. Extensions to the ideas presented here are available at http://www.downes.ca/files/resource_profiles.htm.

David Wiley from the University of Utah is perhaps the world's leading expert on the use, classification and re-usability of learning objects. He comments that Downes has done the field a favor by renaming learning objects (a term that continues to elude a consensus definition) as more general educational resources. Wiley also notes the inherent problems of reliability and falsehood that arise when multiple metadata descriptions are attached by multiple authors and users to any educational resource. As Downes notes, one meta-description is far too few, but how do we delete those that are obviously false, inaccurate or devised for selfish pecuniary reasons?

Heidrun Allert: Coherent Social Systems for Learning: An Approach for Contextualised and Community-Centred Metadata. Heidrun Allert, from the University of Hanover, continues the debate about metadata and the Educational Semantic Web. She proposes a new form of metadata, which is based upon the concept of a 'Learning Role.' This notion of role has been introduced to facilitate a dynamic modelling approach. Learning roles are indeed described as meta-roles, which in turn specify roles, together with the interaction between roles, and the properties that describe a role type. Allert's vision for the Semantic Web is based on a system that recognises the patterns and developmental pathways forged by these meta-roles. She acknowledges that the Learning Roles presented in this paper are 'far from complete', which leads to the question of what is a formal definition of a 'Learning Role'? In his response, Brna asks: if it is recursive, then which pathway can be identified through the learning roles by a model of this nature? Brna goes on to examine the strengths of Allert's model, which he suggests lie in its diversity, based on the acceptance that different communities of practice view events/things differently. He does however point out that the consequences of such a premise lead to a postmodern view of the world where we need many different ways of scrutinising events. This observation leaves us with his interesting deduction that the 'educational semantic web community may be following a path similar to that described by Perry (1970) on the development of students in higher education!'

Kendall Clark, Bijan Parsia and Jim Hendler: Will the Semantic Web Change Education? In this article Clark and his colleagues from the University of Maryland outline the way the Semantic Web enhances the powerful hyperlinking of the original web to enhance both the research and the pedagogical functions of education systems. Many of us have heard the exuberant claims for the semantic web, but few of us understand just exactly how a machine can function to deliver these promises. The introduction to the semantic web technologies of RDF and OWL provides a technical yet understandable overview of the current tools being used to create the educational semantic web. The result, the authors claim, will be a technological environment in which everyone can become a 'hyperkrep (hypertextual knowledge representation) hacker'.

In his response, notable distance education author and teacher Greg Kearsley counters Clark et al.'s claims and notes that the average, very busy educator has many priorities beyond intrinsic interest in becoming a 'hyperkrep hacker'. He doubts that ordinary education systems will be changed by any technology that is more complicated than simple unidirectional web links. In combination these two articles force us to look at the future, while at the same time noting how stuck in the past education systems remain, a dilemma that challenges this whole special issue and calls for continuing efforts to reduce the implementation gap, and leaves us wondering if we will live to see the educational semantic web in our lifetimes.
Bernd Simon, Peter Dolog, Zoltán Miklós, Daniel Olmedilla and Michael Sintek: Conceptualising Smart Spaces for Learning. Bernd Simon and his European colleagues conceptualise 'Smart Spaces for Learning' in a workplace learning context. In this context many educational services and resources must be made available to, and customizable to the individual needs of, a diverse and distributed workforce. Such a challenge calls for interoperability across firms and learning designs (a common ontology) and a capacity for these diverse resources to respond to learners based upon their unique learner profiles. The result is a prototype personal learning assistant that attempts to search for and deliver electronic learning content and activities customized to a particular learner's needs and interests.

Rory McGreal from Athabasca University notes that personal learning agents cannot work in an environment which is not formally defined by a series of interconnected standards. He notes, with examples from his own work, the challenges, yet the indispensability, of common or at least commonly discoverable specifications for detailing activities critical to supporting online learning. These activities range from standards to identify and describe learning resources, to those that dynamically describe learner profiles and ways to adapt content display in response to unique learner needs.

Diana Oblinger: The Next Generation of Educational Engagement. Diana Oblinger's paper rounds off this special issue by drawing our attention to the young learners who will be using the Semantic Web. Diana Oblinger, the Vice President of EDUCAUSE, highlights the fact that the Net Generation is digitally aware and is exposed to a number of media that affect their expectations of e-Learning materials. In the United States playing computer games is part of college life, but nearly two thirds of the cohort surveyed by Jones (2003) had little experience of the use of games as a teaching vehicle. Oblinger mentions the role of simulations in the teaching of Business Studies, but there is also an increasing role for the use of simulations in the teaching of Science. One of the important features of gaming scenarios that she mentions is that they are performance-based environments, which she asserts stimulate the learning-by-doing approach that spills over into other fields of enquiry. So what fun and games should we expect on the educational Semantic Web of the future? Robin Mason from the UK's Open University notes that gaming builds on the skills acquired during informal learning. She encourages educators to capitalise on the growth of informal learning 'sparked off primarily by the Web', but warns against the costs of the development of high-quality multimedia learning materials. There is also a note of caution about the types of games the Net Gen'ers are playing, some of which are mindless and violent in nature. She does however make a strong claim for the skills and the approaches to learning that are acquired by the best game-users, which she suggests reflect the new 'learning to e-learn framework' that will underpin the Semantic Learning Web.
5. References

Doctorow, C. (2001). Metacrap: Putting the torch to seven straw-men of the meta-utopia. Retrieved June 10, 2003 from http://www.well.com/~doctorow/metacrap.htm#0
Noll, A. M. (2002). Technology and the future university. In W. Dutton & B. Loader (Eds.), Digital Academe (pp. 35-38). London: Routledge.
Raymond, E. (2001). The cathedral and the bazaar. Cambridge: O'Reilly.
Schofield, J. (2003, May 29). The third era starts here. The Guardian.
Standage, T. (1998). The Victorian Internet: The remarkable story of the telegraph and the nineteenth century's on-line pioneers. New York: Berkley Books.
9,616.6
2004-05-21T00:00:00.000
[ "Computer Science" ]
Virtual experimentations by deep learning on tangible materials

Abstract: Artificial intelligence relying on structure-property databases is an emerging powerful tool to discover new materials with targeted properties. However, this approach cannot be easily applied to tangible structures, such as plastic composites and fabrics, because of their high structural complexity. Here, we propose a deep learning computational framework that can implement virtual experiments on tangible structures. Structural representations of complex carbon nanotube films were produced by multiple generative adversarial networks trained on scanning electron microscope images at four levels of magnification, enabling deep learning prediction of multiple properties such as electrical conductivity and surface area. 1716 virtual experiments were completed within an hour, a task that would take years for real experiments. The data can be used as a versatile database for materials science, in analogy to the databases of molecules and solids used in cheminformatics. Useful examples include the investigation of correlations between electrical conductivity and specific surface area, wall-number phase diagrams, economic performance, and inversely designed supercapacitors. Artificial intelligence may significantly accelerate the discovery of new materials but is not easily applicable to non-periodic structures. Here, a deep learning framework is proposed to predict properties of tangible carbon nanotube films by generating virtual structures at different scales and compositions.

Results and discussion

Tangible and hierarchical structures. Although our computational framework can be applied to various tangible material systems, in this work we featured carbon nanotube (CNT) films (Fig. 1b). The CNT films were fabricated through vacuum filtration of CNTs dispersed in solution, and the surface structure was characterized by scanning electron microscopy (SEM) at multiple magnifications. In addition, the electrical conductivity and specific surface area of the films were measured (Fig. 1a). Films of CNTs represent excellent examples of hierarchical structures because CNTs spontaneously entangle through strong van der Waals interactions and form complex networks from the nanometer to the millimeter length scale due to their one-dimensional structure and exceptionally high aspect ratios. The structural hierarchy of the CNT films is shown by a series of SEM images taken at increasing magnifications (Fig. 1c): at the lowest magnification, a superhighway network of large CNT bundles; at higher magnification, mid-sized bundles with visible internal structure; and at the highest magnification, a random network of fine CNT bundles. We would like to note that the structural hierarchy, such as the porosity of the fine network structure and the tortuosity of the dendritic-like structure, governs the electrical conductivity. Although referenced identically by the name CNT, these structures vary widely in length, diameter, crystallinity, and wall number [18][19][20]. This diversity makes CNTs an excellent test-bed material for this work because of the unique opportunity to fabricate an assortment of CNT films with different structures and properties.
Here, we chose seven commercial CNTs (eDips, SG, Tuball, JEIO, Knano, Nanocyl, Cnano) that encompass a wide range of structures, with wall numbers varying from 1 to 8.9, diameters varying from 1.8 to 15 nm, crystallinities (estimated by the Raman G-band/D-band ratio) varying from 0.5 to 59, and lengths varying from <1 μm to more than 100 μm (Supplementary Table 1).

Structural representation by conditional GANs. Our approach to predicting properties of tangible materials by virtual experiments is based on training multiple GANs on real SEM images, making possible the creation of fake images of CNT films at various scales and compositions. Generation of a fake image by a GAN includes the effects of dispersion, filtration, and SEM observation, executed on the computer. Each GAN is made from two ANNs, i.e., a generator and a discriminator (Supplementary Figs. 1 and 2), that are simultaneously trained by an adversarial learning process: the generator generates fake SEM images of CNT films, and the discriminator differentiates fake from real 11,21 (a minimal sketch of one such training step is given below). By introducing the conditional GAN architecture, generation of fake structures of arbitrary composition can be controlled by inputting a one-hot composition vector into the generator. The training protocol of the GAN was decided by the balance among the conflicting criteria of the generator and discriminator (see also Supplementary Fig. 3). Because images on the order of 10,000s are required for each training point, cropped real SEM images were used to increase the image number. In addition, the pixel resolution of the images must be minimized to reduce memory usage and enable learning convergence within a reasonable time. Furthermore, the pixel resolution must be sufficiently high to capture the characteristic structural features of the CNT film for accurate prediction of properties. Taking these factors into account, for 17 CNT films (seven types of CNTs and ten mixtures of CNTs; see also Supplementary Table 2) we took ~300 SEM images (960 × 1280 pixels) at a fixed magnification, divided them into 128 × 128 pixel crops, and rotated each by 90 degrees four times, thus obtaining 17 sets of ~12,000 training data. For each conditional GAN model, we implemented 50,000 iteration steps of the adversarial learning process, and snapshots (Supplementary Fig. 4a) at different iteration levels show how the fake image of the generator evolved from random noise to structures that resemble the training data. This process was repeated to train four GANs at the four different SEM magnifications (×2k, ×20k, ×50k, and ×100k). The potential of the multi-scale GAN framework was shown by a side-by-side comparison of the fake and real images of the 28 CNT films (Fig. 2a; seven types of CNTs at four different scales). Although the real CNT film images showed a wide variation in structure, exhibiting the hierarchy as well as the differences in CNT structures, exceptional similarity can be seen in all 28 pairs of fake and real images. This demonstrated the versatility of conditional GANs to process the wide range of structures of these tangible materials (see also Supplementary Figs. 5 and 6).
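The adversarial training loop described above is compact in code. The following is a minimal sketch of one training step using the hinge loss employed by the BigGAN framework on which these models are based; the network objects, shapes, and the omission of orthogonal regularization are illustrative assumptions, not the authors' released code.

# One adversarial step for a class-conditional GAN with hinge losses.
# G(z, labels) -> fake images; D(imgs, labels) -> realness logits (assumed interfaces).
import torch

def gan_step(G, D, real_imgs, labels, opt_g, opt_d, z_dim=120):
    # Discriminator: push real logits above +1 and fake logits below -1.
    z = torch.randn(real_imgs.size(0), z_dim)
    fake_imgs = G(z, labels).detach()          # detach: no generator gradient here
    d_loss = (torch.relu(1.0 - D(real_imgs, labels)).mean()
              + torch.relu(1.0 + D(fake_imgs, labels)).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: raise the discriminator's logit on freshly generated fakes.
    z = torch.randn(real_imgs.size(0), z_dim)
    g_loss = -D(G(z, labels), labels).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

Iterating this step while sampling class labels over the 17 film types is, in essence, what drives the generator from random noise toward images resembling the training SEM data.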
Image analysis was implemented to quantitatively assess the similarity between the fake and real CNT film images. Two descriptors (Fig. 2b), i.e., the width of the CNT bundle structure and the void diameter, were chosen to characterize the structure of the CNT films. Specifically, ~200 real and fake images of CNT films for each of the 21 cases (seven types and three scales; ref. 18) were binarized (Fig. 2b), where white/black pixels represent CNTs/voids, respectively. Frequency histograms of the CNT width and void diameter were calculated from the binary images, and two examples (Cnano at ×50k, JEIO at ×20k) are shown in Fig. 2c. While the frequency histograms differed significantly between Cnano at ×50k and JEIO at ×20k, the agreement between the fake and real images was excellent for both cases, which illustrates the ability of a conditional GAN to reproduce highly diverse structures among CNT films. Furthermore, correlation plots (Fig. 2d) between the fake and real mean values of the CNT width and void diameter, calculated from the histograms, provided correlation coefficients of 0.78 and 0.84, respectively, demonstrating the potential of conditional GANs to accurately create fake images of various CNT films.
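The two descriptors lend themselves to a compact implementation. Below is a minimal sketch of computing bundle width and void diameter from a binarized SEM image; the paper's exact thresholding and width estimator are not specified here, so Otsu thresholding and a distance-transform width proxy are our assumptions for illustration.

# Extract (mean CNT bundle width, mean void diameter), in pixels, from one image.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def descriptors(gray):
    """gray: 2D grayscale SEM image as a numpy array."""
    binary = gray > threshold_otsu(gray)            # True = CNT (bright), False = void

    # Crude width proxy: twice the Euclidean distance from each CNT pixel to the
    # nearest void pixel; a skeleton-based estimate would be sharper.
    dist = ndimage.distance_transform_edt(binary)
    widths = 2.0 * dist[dist > 0]

    # Void diameter: equivalent circular diameter of each connected void region.
    voids = label(~binary)
    diameters = [r.equivalent_diameter for r in regionprops(voids)]

    return float(widths.mean()), float(np.mean(diameters))

Running such a function over ~200 real and ~200 fake images per case and histogramming the results is all that is needed to reproduce comparisons of the kind shown in Fig. 2c.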
The properties of the CNT film were predicted by a structure-property ANN (Supplementary Fig. 7); however, our approach differs substantially from previous efforts because the training and prediction processes were accomplished entirely on fake images created by conditional GANs, and fake images representing different scales were tiled and merged into one to provide information about the hierarchical structure (Fig. 2e). By this approach, computational prediction of properties of tangible materials possessing structural hierarchy was made possible. Our structure-property ANN was trained to predict the electrical conductivity and specific surface area of CNT films (see also Supplementary Fig. 8). Specifically, the training was carried out on the 17 CNT films (seven types and ten mixtures of CNTs); for each training point, 256 images (128 × 128 pixels) were created from each of the four GANs representing different magnifications, divided into four 64 × 64 pixel images, and rotated by 90 degrees four times, providing 4000 images, and combinations of tiling further increased the dataset to 12,000, of which 90% was used for training and 10% for validation. Once trained, the values of 256 predictions were averaged to provide the properties of a specific film. Predicted versus experimental values of the electrical conductivity and specific surface area (Fig. 2f), for the validation and test sets, spanned a wide range, with prediction reliabilities (R²) of 0.99/0.85 for electrical conductivity and 0.99/0.42 for specific surface area, respectively. These results demonstrated the potential of our deep-learning-based computational framework to implement virtual experiments. Overall, this level of accuracy was sufficient to predict the properties of the CNT films for studying relationships among structures and properties, as demonstrated later. It should be mentioned here that previous research has elucidated that the electrical conductivity and specific surface area of CNTs are strongly correlated with their length, crystallinity, and wall number 19,20; however, the resolution of SEM is unable to resolve all of these CNT features even at the highest magnification, and thus they are not directly included here. The influence of these undetected but important structures of individual CNTs on the properties must have been reflected indirectly through differences in the network structure and hierarchy of the CNT films. The significance of using the tiling technique (Fig. 2f) for accurate property prediction is demonstrated by the comparison of the prediction reliabilities of the electrical conductivity from tiled multiple-magnification images against those of a single magnification for the 17 CNT films (Fig. 2g). Whereas the reliabilities show a large variation among the films and magnifications, the prediction reliabilities based on tiled images were consistently high.

Virtual experiments of CNT films. The potential of predicting properties of tangible materials by virtual experiments cannot be overstated. To highlight this aspect, we predicted the electrical conductivities and specific surface areas of 1716 CNT films composed of diverse compositions (Supplementary Data 1) and plotted the results in an Ashby map of surface area against electrical conductivity (Fig. 3a). We note that the 1716 virtual experiments were executed independently. However, the plots in the Ashby map show smooth transitions in color, which means continuous changes in composition with property, indicating that our naive deep-learning framework could capture the property trends with respect to material composition. We also stress that constructing this map experimentally would require years; in our case, once trained, these virtual experiments were completed within an hour. This speed underscores the great advantage of deep learning. The results of the 1716 virtual experiments can serve as a database for research on tangible materials, analogous to the databases of molecules and solids used in cheminformatics. To this end, we present several examples. First, the predicted properties of the CNT films formed a triangular domain that covers a very wide region within the Ashby map. This originates from the wide structural variation of the CNTs. Moreover, an inverse relationship between the specific surface area and the electrical conductivity was observed, which means no single CNT film possesses both the highest levels of surface area and electrical conductivity. Second, the plots in the Ashby map were recolored (Fig. 3b) to show the weighted average wall number of the CNT films, from which a wall-number phase diagram (Fig. 3c) was constructed. The phase diagram showed that both properties rapidly decrease with increasing wall number, implying that wall number is a crucial structural parameter of CNT films. The films exhibiting the highest conductivities and specific surface areas were both composed of single-walled carbon nanotubes (SWNTs), but of different types: one with higher crystallinity and smaller diameter, giving higher conductivity, and the other with lower crystallinity and larger diameter, giving higher specific surface area. The former/latter SWNTs formed tightly/loosely packed bundles, resulting in low/high specific surface areas. This is the origin of the inverse relationship between electrical conductivity and specific surface area. Moreover, considerable overlap among the wall-number domains was observed, e.g., the upper boundary of the double-walled carbon nanotube (DWNT) domain is nearly equivalent to that of the SWNTs. This means that the influence of a small addition of multi-walled carbon nanotubes (MWNTs) in a SWNT film is negligible and the properties are dominantly determined by the SWNT network. Third, the database is not only useful for science but is also invaluable for developing practical applications. Frequently, the mission of developing real applications is to pursue the most cost-effective mixture of materials while satisfying the target properties.
Currently, highly functional SWNTs are hundreds of times more expensive than MWNTs, and to examine economical solutions, we defined (Supplementary Methods) and calculated the economic performance of the CNT films (larger value = more cost-effective) and plotted it against the electrical conductivity (Fig. 3d) and specific surface area (Fig. 3e), respectively. Both figures clearly show that an increase in properties results in a drop in economic performance. The CNT films at the upper boundary (highlighted by yellow stars) represent the economic Pareto optimal solutions, i.e., the most economical CNT film at that specific property. The compositions of the Pareto optimal solutions vary with the electrical conductivity level (Fig. 3f). For example, they change from large-diameter MWNT mixtures (Cnano-Nanocyl (~10 S cm⁻¹), Cnano-Knano (~30 S cm⁻¹)), to medium-diameter, fewer-walled mixtures (Tuball-Knano (~120 S cm⁻¹)), to smaller-diameter, longer-length mixtures (Tuball-JEIO (~240 S cm⁻¹)), to small-diameter, highly crystalline mixtures (eDips-Tuball (~330 S cm⁻¹), eDips (~360 S cm⁻¹)). In contrast, the compositions of the Pareto optimal solutions follow a different course for specific surface area (Fig. 3g), beginning from large-diameter MWNT mixtures (Cnano-Nanocyl (~240 m² g⁻¹), JEIO-Knano (~300 m² g⁻¹)), to large-diameter, few-wall CNTs (JEIO-Nanocyl (~580 m² g⁻¹), JEIO (~660 m² g⁻¹)), to longer, SWNT-rich mixtures (SG-JEIO (~970 m² g⁻¹), SG (~1020 m² g⁻¹)). The compositions of the optimal solutions thus differ for electrical conductivity and specific surface area. These predicted trends are valuable in developing real applications and would be difficult to obtain experimentally.

Inverse design of supercapacitors. The database can be used to seek the compositions of CNT films that possess an intended property, and this ability opens opportunities for the inverse design of applications. For example, CNT films are known to be well-suited for supercapacitor electrodes due to their high surface area and electrical conductivity 22,23. From the Ashby map we selected several CNT films with varying, targeted properties (Supplementary Fig. 9) and constructed two-electrode electric double-layer capacitor (EDLC) cells using a H2SO4 electrolyte. The galvanostatic discharge curves and impedance spectra (Supplementary Fig. 10) were measured, from which the energy densities and relaxation time constants (a measure of the speed of operation) were calculated and plotted against the specific surface area (Fig. 4a) and electrical conductivity (Fig. 4b). The specific surface area and electrical conductivity showed strong and monotonic correlations with the energy density and relaxation time constant, respectively. These results mean one can determine the mixture of CNTs for EDLC cell electrodes that meets a target energy density and relaxation time constant and, by combining this with the economic Pareto optimal solutions, can even identify the most economical electrode.

Conclusion

In summary, we proposed a general deep-learning-based computational framework that predicts properties by using conditional GANs to create fake structural images (SEM) of materials (CNT films) with hierarchical, non-definable, and non-periodic structures.
Our approach is not limited to CNT films but is readily extendable to other tangible materials, provided structural information is experimentally available and the properties are sufficiently correlated with the structure; thus we believe our approach represents an important step forward in expanding the scope of materials to which artificial intelligence can be applied.

Methods

CNT film fabrication. Seven commercial CNTs were used in this study. Four MWNTs (FloTube 9000, NC7000, K-nanos 100p, JC142) and three SWNTs (SG-CNT HT, Meijo eDIPS EC2.0, Tuball) were purchased or received from JiangSu CNano Technology Co. Ltd., Nanocyl SA., Kumho Petrochemical Co. Ltd., JEIO Co. Ltd., Zeon Corporation, Meijo Nano Carbon Co., Ltd. and OCSiAl. Details of the CNTs are described in our previous work (ref. 18) and Supplementary Table 1. Each CNT was dispersed in methyl isobutyl ketone by a bead-milling process. Mixtures of CNTs were prepared by mixing multiple CNT dispersions at arbitrary vol%. Then each CNT dispersion mixture was filtered through a PTFE membrane filter and dried at 300°C in vacuum.

Fig. 2 Potential of the generative adversarial network (GAN) to create fake CNT film SEM images, and accuracy of ANN property prediction. a Comparison of real (experimental) and fake (created by conditional GAN) SEM images of CNT films made of seven types of CNT at four different scales. b Binarization procedure of the image analysis showing the two descriptors. c Frequency histograms of the CNT width and void size compared between real and fake images. d Median values of the CNT width and void size calculated from real versus fake CNT film images (seven types of CNT, eDips (red), SG (orange), Tuball (magenta), JEIO (green), Knano (purple), Nanocyl (light blue), Cnano (blue), at three magnifications). e Four fake images of a CNT film at different magnifications tiled together to serve as the input of the ANN for predicting properties. f Measured versus predicted values of electrical conductivity and specific surface area (blue: validation set, red: test set). g Prediction reliabilities of electrical conductivity (upper) and specific surface area (lower) predicted from tiled images versus single magnifications.

Fig. 3 Predicted properties of 1716 CNT films. a Ashby map of predicted electrical conductivity against predicted specific surface area. The colors of the points represent the content of the CNT mixtures as determined by the hue average of the contributing CNT types (eDips (red), SG (orange), Tuball (magenta), JEIO (green), Knano (purple), Nanocyl (light blue), Cnano (blue)). b, c Wall-number phase diagrams of CNT films (single-wall (red), double-wall (green), triple-wall (magenta), and multi-wall (blue) carbon nanotubes). d, e Economic performance of CNT films against electrical conductivity (d) and specific surface area (e). Yellow stars show the economic Pareto optimal solutions, i.e., the most economical CNT film at the specific property. A difference of one in economic performance is equivalent to a ten-fold difference in cost. f, g Contents of CNT mixtures at Pareto optimal solutions at specific electrical conductivity (f) and specific surface area (g).

SEM characterization. SEM measurements were carried out using a field-emission SEM (Hitachi, SU8220) under an acceleration voltage of 5 kV and an emission current of 10 μA.

Electrical conductivity measurement.
Electrical conductivities of the CNT films were measured by a four-point probe method using a Loresta-GP MCP-T610 resistivity meter (Mitsubishi Chemical Analytech).

Surface area measurement. To estimate the Brunauer-Emmett-Teller (BET) specific surface areas of the CNT films, N2 adsorption isotherms at 77 K were measured by BELSORP-mini and -max surface area and pore size distribution analyzers (MicrotracBEL) after preheating at 300°C for 3 h.

Fabrication and characterization of CNT supercapacitors. After vacuum drying of the CNT films (100 µm thick), supercapacitor electrodes were prepared by assembling a current collector (Pt mesh)/CNT film/separator (porous cellulose filter) structure. 1 M sulfuric acid was used as the electrolyte solution. Electrochemical characteristics were measured by a VMP3 galvanostat/potentiostat/frequency response analyzer (Princeton Applied Research). The galvanostatic discharge curves and impedance spectra of Fig. 4 are shown in Supplementary Fig. 10.

Generation of virtual fake CNT images by GAN. The structure of the conditional GAN models was based on the BigGAN framework 21. In this study we used open-source BigGAN code from ref. 24. The basic structures of the generator and discriminator of the conditional GAN used in this study are shown in Supplementary Figs. 1 and 2. For the generator (Supplementary Fig. 1), inputs of latent values as well as a one-hot class vector were used to generate fake images of a demanded class. The one-hot class vector was introduced to the class-conditional batch normalization layers in each ResBlock (including the skip connection of upsampling). For the discriminator (Supplementary Fig. 2), the input image is assessed by the model, and the output is the logit of the truth of the structure, conditioned on the one-hot class vector of the input image. The training protocol of the generator and discriminator of the current conditional GAN is shown in Supplementary Fig. 3. Fake images generated by the generator from latent and class inputs were assessed by the discriminator to obtain fake logits. A real image was also evaluated by the discriminator to obtain a real logit. These logits of real and fake images are used to calculate the losses of the generator and discriminator. In the BigGAN architecture, orthogonal regularization of the model weights was introduced to the total optimization loss.

The procedure for training the conditional GAN models from real SEM images is as follows. SEM images of CNT films were cropped into small images of 128 × 128 pixels. Each image was rotated by 90 degrees four times to increase the variation of the data. A generative model of fake CNT films was constructed based on the conditional GAN. Four conditional GAN models of CNT films at different magnifications (×2k, ×20k, ×50k, ×100k) were trained for 50,000 iterations until both the generator of fake images and the discriminator of generated images were well converged (Supplementary Fig. 4). For each iteration, 1200 images for each kind of CNT (seven types and ten mixtures of CNTs) were randomly sampled and used for training. Generation of fake images at an arbitrary composition is described in Supplementary Fig. 5. The product of the one-hot class vectors and the composition vector was calculated to represent intermediate structures of different classes (in other words, mixtures of multiple CNTs). The intermediate composition vector is introduced to the class-conditional batch-norm layers in each ResBlock.
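A minimal sketch of this composition conditioning is shown below; the class count, embedding dimension, and names are illustrative assumptions rather than the released implementation (the trained models actually use 17 film classes).

# Composition-weighted class conditioning: the conditioning vector for a CNT
# mixture is the composition-weighted combination of the pure-class one-hot
# vectors, mapped through a class embedding and fed to conditional batch norm.
import torch

NUM_CLASSES = 7          # assumption: one class per pure CNT type
EMBED_DIM = 128          # assumption: illustrative embedding size

class_embedding = torch.nn.Embedding(NUM_CLASSES, EMBED_DIM)

def condition_vector(composition):
    """composition: (NUM_CLASSES,) vol-fraction tensor summing to 1."""
    onehots = torch.eye(NUM_CLASSES)             # rows are one-hot class vectors
    mixed_class = composition @ onehots          # weighted one-hot product, made explicit
    return mixed_class @ class_embedding.weight  # (EMBED_DIM,) conditioning vector

# Example: a 70/30 binary mixture of CNT classes 0 and 2.
comp = torch.zeros(NUM_CLASSES)
comp[0], comp[2] = 0.7, 0.3
cond = condition_vector(comp)                    # goes to class-conditional batch norm

Because the weighting is linear, the generator interpolates smoothly between the pure-class structures, which is what allows fake images of compositions never seen during training.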
In addition to the one-hot vector, latent values parameterized by a truncated normal were also introduced to the generator in the BigGAN model. 256 fake images for each desired kind and content of CNT mixture were generated by varying the latent values (see also Supplementary Fig. 6 for generated images of CNT mixtures at arbitrary combinations). Example codes and trained weights of the conditional GAN models (four models for the different magnifications) are available in ref. 25 (see also Supplementary Note 1). To validate the morphology of real and fake CNT images, the void size and CNT width were compared. To calculate the void size and CNT width, both real and fake images were subjected to binarization to separate the domains of CNTs and voids.

Prediction of properties of CNT films using tiling of images and deep learning. To include multi-scale information on the hierarchical structure, we proposed a method for virtual experimentation based on tiling images of four different magnifications. First, the generated fake images were cropped to 64 × 64 pixels and compressed to 32 × 32 pixels. Second, the compressed fake images of the four magnifications were randomly selected and rotated (0, 90, 180, or 270 degrees). Finally, the four fake images were joined into a single image by arranging the ×2k image into the upper left, ×20k into the upper right, ×50k into the lower left, and ×100k into the lower right (a sketch of this procedure is given after this subsection). To predict the properties, including electrical conductivity and specific surface area, a convolutional neural network (CNN) was introduced. The architecture of the CNN used in this study is described in Supplementary Fig. 7. The structure of the CNN is standard, consisting of convolution layers and fully connected layers with several max-pooling and dropout layers. The architecture of the CNN can also be referred to in ref. 26. To investigate the effect of tiling on the prediction accuracies of properties, five CNN models were constructed, including a model for tiled images and four models for the individual magnifications (×2k, ×20k, ×50k, ×100k). The images for the 17 CNT films (Supplementary Table 2) were gathered and randomly shuffled, then divided into training and validation sets at a 90%/10% ratio. A test set of 12 additional CNT films (Supplementary Table 3) was used to evaluate the prediction reliabilities of the CNN models. All CNN models were trained for 100 epochs to converge sufficiently (see also Supplementary Fig. 8). Prediction reliabilities of the CNN models were calculated from the coefficients of determination between the measured and predicted properties of the CNT films.

Virtual experimentation of CNT films. In total, 1716 conditions of CNT compositions, including binary and ternary CNT mixtures, were analyzed by the conditional GAN and CNN models. Although ternary CNT mixtures were not included in the training/testing of the models, leading to somewhat lower accuracy compared to pure CNTs and binary mixtures, the tendency of the relationship between the structures and the properties can be sufficiently analyzed from the current virtual experimentation. 256 images for each condition were generated by the conditional GAN models. After creating the tiled images, the electrical conductivity and specific surface area were calculated. In total, 1716 pairs of electrical conductivity and specific surface area were plotted in an Ashby map with electrical conductivity on the horizontal axis and specific surface area on the vertical axis (Fig. 3).
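The tiling procedure just described maps directly to a few lines of array manipulation. Below is a minimal sketch under our own naming; the random crop and rotation choices mirror the description above, while the use of scikit-image for resizing is an assumption.

# Tile four generated images (one per magnification) into a single 64x64 CNN input:
# x2k upper-left, x20k upper-right, x50k lower-left, x100k lower-right.
import numpy as np
from skimage.transform import resize

def tile_magnifications(img_2k, img_20k, img_50k, img_100k, rng=np.random):
    def prep(img):
        y = rng.randint(0, img.shape[0] - 64 + 1)      # random 64x64 crop
        x = rng.randint(0, img.shape[1] - 64 + 1)
        crop = img[y:y + 64, x:x + 64]
        crop = resize(crop, (32, 32), anti_aliasing=True)  # compress to 32x32
        return np.rot90(crop, k=rng.randint(4))        # random 0/90/180/270 rotation
    a, b, c, d = (prep(i) for i in (img_2k, img_20k, img_50k, img_100k))
    return np.block([[a, b], [c, d]])                  # (64, 64) tiled input

Because each tile carries structure from a different length scale, a single forward pass of the CNN sees the film's hierarchy at once, which is what the single-magnification baseline models lack.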
To classify the 1716 points of virtual experimentations, the wall number and economic performance of each condition were calculated. All data plotted in the Ashby map is attached in Supplementary Data 1. Classification of the Ashby map by wall number was conducted by drawing four lines of the convex hull of averaged wall number. Four classes, single (<2), double (<3), triple (<4), and multi-walled, were visualized in the Ashby map. Data availability The datasets of the trained models of the conditional GAN in this paper are available in figshare, https://doi.org/10.6084/m9.figshare.14872146. Additional supporting data generated during the present study are available from the corresponding author upon reasonable request. Code availability The example codes for generating tangible structures using conditional GAN models are accessible through figshare, https://doi.org/10.6084/m9.figshare.14872146. Additional information is provided by the corresponding author upon reasonable request.
Summarizing Medical Conversations via Identifying Important Utterances Summarization is an important natural language processing (NLP) task in identifying key information from text. For conversations, the summarization systems need to extract salient contents from spontaneous utterances by multiple speakers. In a special task-oriented scenario, namely medical conversations between patients and doctors, the symptoms, diagnoses, and treatments could be highly important because the nature of such conversation is to find a medical solution to the problem proposed by the patients. Especially consider that current online medical platforms provide millions of public available conversations between real patients and doctors, where the patients propose their medical problems and the registered doctors offer diagnosis and treatment, a conversation in most cases could be too long and the key information is hard to be located. Therefore, summarizations to the patients’ problems and the doctors’ treatments in the conversations can be highly useful, in terms of helping other patients with similar problems have a precise reference for potential medical solutions. In this paper, we focus on medical conversation summarization, using a dataset of medical conversations and corresponding summaries which were crawled from a well-known online healthcare service provider in China. We propose a hierarchical encoder-tagger model (HET) to generate summaries by identifying important utterances (with respect to problem proposing and solving) in the conversations. For the particular dataset used in this study, we show that high-quality summaries can be generated by extracting two types of utterances, namely, problem statements and treatment recommendations. Experimental results demonstrate that HET outperforms strong baselines and models from previous studies, and adding conversation-related features can further improve system performance. Introduction Applying natural language processing (NLP) techniques to the medical field is a prevailing trend nowadays and has great potential in many applications, such as key information extraction in medical literature (Kim et al., 2011;Dernoncourt et al., 2017;Ševa et al., 2018), risk factor identification in electronic health records (Chang et al., 2015;Cormack et al., 2015;Cheng et al., 2016), and medical question answering (Pampari et al., 2018;Tian et al., 2019). As the demand for healthcare services increases greatly in the past decades, 2 it is urgent to improve the quality and efficiency of healthcare, reduce workload and mental stress of health providers and increase patient satisfaction. Recently, Internet-based healthcare platforms such as online doctor systems and doctor-patient cyber communities have been increasingly used by patients and health professionals with the hope that they would alleviate the ever-increasing demands for healthcare services and reduce the inaccessibility of services caused by geographical and socio-economic barriers. In such platforms, a patient can start a conversation to a registered doctor by typing their medical problems and then the doctor may ask the patients to specify his/her problem (e.g., symptoms, treatment has been taken, etc.). Since the conversation is asynchronous, it is possible that one speaker (either the Figure 1: An example of a conversation and its different types of summaries. 
P and D stand for speaker roles, i.e., patient and doctor; PD, DT, and OT in the last column refer to the utterance tags for problem description, diagnosis or treatment, and others, respectively. SUM1 is a summary of the medical problem from the patient; SUM2 is a summary of the diagnosis and treatment from the doctor. The English translation is not part of the corpus and is added only as a reference. Either the patient or the doctor may type multiple lines (utterances) before the other speaker responds. Through this process, all key information regarding a medical problem, as well as its diagnosis and medical recommendations, is recorded in the entire conversation. Once the platforms make all such conversations publicly available, other patients with similar medical problems can search relevant conversations and find potentially helpful solutions. However, when a conversation is too long or the key information is scattered in it, one can hardly find the essential contents, or may misread them in many cases. As a result, summarization of the conversation, especially of the problem statement and treatment recommendations, is an important task to help new patients locate useful information to address their medical concerns. Due to the nature of medical conversation, i.e., a task that seeks solutions and provides medical recommendations for particular health problems, it is possible to perform the task by identifying important utterances in such conversations. In this study, important utterances refer to the utterances that contain key information on the medical problem or the treatment. Therefore, our focus differs from existing studies on utterances in conversations, which pay more attention to assessing utterances with respect to their functionalities in the conversations, such as analyzing automatically generated utterances regarding their suitability within particular conversational contexts (Inaba and Takahashi, 2016; Lison and Bibauw, 2017), evaluating human conversational performance on readability, sensibility, and social involvement (Dascalu et al., 2010), and identifying segments of an utterance that are produced with more emphasis for certain interactional purposes (Takeuchi et al., 2007). Little research has been done to identify important utterances that contribute to a specific outcome of a conversation, which in this study refers to the content about the patient's problem and the recommended treatment. To conduct the medical conversation summarization task, in this paper we propose a new benchmark dataset in Chinese, which has over 40K cases covering nearly 2K disease types. Table 1a illustrates the overall statistics, and Table 1b reports the number of conversations with SUM2-A only or with both SUM2-A and SUM2-B. In each case, there is a medical conversation between a patient and a doctor, and two summaries: one for the problem statement and the other for treatment recommendations. Figure 1 shows an example conversation with the two types of summaries: "SUM1" for the problem statement and "SUM2" for treatment recommendations. SUM2 has two types, i.e., type A and type B, which will be explained in the next section. Besides, we propose a hierarchical encoder-tagger (HET) model for extractive summarization to tag each utterance in a medical conversation with regard to whether it is a problem statement or a treatment recommendation.
We further enhance the model with end-to-end memory networks (Sukhbaatar et al., 2015) to incorporate the information in relevant utterances in the conversation. We use BERT (Devlin et al., 2019) as the token-level encoder and try several utterance-level encoders and taggers. Experimental results show that HET outperforms strong baselines as well as models from previous studies on this dataset. Analyses are also conducted to better understand our findings from the results. A Corpus of Medical Conversations Medical conversation is a type of task-oriented conversation. Different from ordinary conversations in which topics are often fluid, in task-specific conversations, participants interact to accomplish a projected set of goals and sub-goals (Litman and Allen, 1987;Drew and Heritage, 1992). Specifically, for conversations in the medical domain from online medical platforms, the projected goal is for the doctor to diagnose and offer treatment recommendation for the patient's problem (Drew and Heritage, 1992;Robinson, 2012;Wang et al., 2020). Particularly in China, many platforms make such medical conversations publicly available so that new patients with similar problem can search relevant conversations and find helpful information from them. Therefore, summarization of the patient's problem and doctor's recommendations in a conversation could be highly important because such summaries can help the new patients locate the key information, especially when a conversation is too long. To conduct such summarization, a straightforward solution is to identity the important utterances that contain key information for problem statements or treatment recommendations. However, limited corpus can be found to train such summarization model, especially for Chinese. Therefore, we develop a corpus in Chinese for medical conversation summarization and illustrate the details in the following text. The Raw Data The original data are crawled from one of the most well-known online health provider platforms 3 in China, under a section called "Frequently Inquired Health Problems." 4 In these conversations, patients consult registered doctors 5 about some health problems; doctors help them to determine the nature of the problems, provide treatment recommendations, and/or advise them to seek further medical attention from other health facilities. Instead of isolated question-answer segments or part of the conversations, this data contain full conversations between patients and doctors, covering the entire interaction process. In addition to dialogues, each conversation contains meta information such as the type of disease and the corresponding hospital department, as well as the speakership of the utterances in conversation. 6 Many (but not all) conversations include a summary added by doctors after the conversations are conducted. The summary has two parts: SUM1 describes the medical problem that the patient has; SUM2 summarizes the doctor's diagnosis or treatment recommendations. SUM2 is of two types: Type A (we denote it as SUM2-A) is the concatenation of a few utterances in the conversation, whereas Type B (we denote it as SUM2-B) is a more concise summary written by the doctor and may contain text that does not appear in the conversation. In all, we crawled 109,850 conversations from 23 hospital departments or sub-divisions, and the conversations cover 1,839 disease types, which forms our raw corpus. Among them, only half of them contains both SUM1 and SUM2. 
This again emphasizes the necessity of this summarization task, because if we can automatically generate the missing summaries for problem statement and treatment recommendations, new patients may have more references when they search conversations that are relevant to their problem. Data Processing To facilitate the task of conversation summarization, we process the raw corpus by only reserving the conversations that have both SUM1 and SUM2, and further clean the resulted data by removing duplicates and those conversations containing only one utterance. The cleaned data contain both input and output for the summarization task. Particularly, SUM1 and SUM2-A are the concatenation of selected utterances in the conversation that provide key information for problem statement and treatment recommendations. Therefore, the important utterances identified in a conversation are those likely to appear in the summary. In detail, following Nallapati et al. (2017) and Chen and Bansal (2018), we use ROUGE scores to measure the overlap between an utterance and a summary, and label the utterances accordingly; that is, we break the summary into segments, 7 and then for each segment, find the closest utterance in the conversation according to ROUGE-1 score. If the score is greater than a threshold, we label that utterance as "PD" if the summary is SUM1 and "DT" if it is SUM2. For all other utterances, we label them with "OT". We call those resulting "PD", "DT", and "OT" as silver-standard labels. Table 1 shows the statistics of the processed dataset, where Table 1a reports the overall statistics of all data and the train/test splits (we use 80% for training and 20% for testing), and Table 1b illustrates the number of conversations where SUM2 is SUM2-A only or has both SUM2-A and SUM2-B. A few points are worth mentioning. First, on average, each dialogue includes 19.0 UTTERANCE (and about half of them are by the doctor), but only 4.5 of them are tagged with label "DT", which demonstrates that more than half of doctors' utterances are not included in the summary. Such utterances can be greetings, symptom inquiry, etc. Second, the conversations between the patient and the doctor are asynchronous: either party can type some messages, walk away, and later come back to continue the discussion. This property makes the corpus different from other benchmark corpora (such as AMI (McCowan et al., 2005)) consisting of dialogues during in-person meetings. Third, for SUM2, all conversations have SUM2-A, and only a small portion (around 7.5% in the training and testing sets) have both SUM2-A and SUM2-B. Therefore, for the conversations with both SUM-2A and SUM-2B, we use their concatenation to compute the average length reported in Table 1a. Forth, while this paper focuses on summarization, the corpus can be used for other NLP tasks such as question answering and dialogue analysis. Summarization via Tagging To model conversation, a common approach is to use a two-level hierarchical sequential model (Serban et al., 2016), in which a conversation may be modeled as a sequence of utterances, and each utterance is modeled as a sequence of words or characters. Using such hierarchical models, conventional studies mainly focused on conversation generation (Sordoni et al., 2015;Serban et al., 2016;Serban et al., 2017), where a decoder is employed to generate responses conditioning upon the vectors encoded from the hierarchical modeling of previous utterances. 
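The silver-standard labeling procedure described in the Data Processing subsection above can be sketched as follows. This is a minimal illustration under stated assumptions: a character-level ROUGE-1 F score is computed directly rather than with the official ROUGE code cited later, and the threshold value of 0.5 is a placeholder, since the excerpt does not state the actual cutoff.

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Character-level ROUGE-1 F score between two Chinese strings."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

def silver_labels(utterances, sum1, sum2, threshold=0.5):
    """Assign PD/DT/OT labels to utterances from the reference summaries.

    Each summary is split into segments at the full-width comma; for every
    segment, the closest utterance by ROUGE-1 is labeled PD (for SUM1) or
    DT (for SUM2) if the score exceeds the threshold; all others remain OT.
    """
    labels = ["OT"] * len(utterances)
    for summary, tag in ((sum1, "PD"), (sum2, "DT")):
        for segment in summary.split("\uFF0C"):  # full-width comma delimiter
            scores = [rouge1_f(u, segment) for u in utterances]
            best = max(range(len(utterances)), key=lambda i: scores[i])
            if scores[best] > threshold:
                labels[best] = tag
    return labels
```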
6 In this dataset, we only include typed messages for our research. 7 The summary in this corpus often uses a full-width comma (U+FF0C) as a delimiter, and we use this delimiter to break a summary into segments.

For our dataset, there is a big overlap between the utterances and the summaries; for instance, as shown in Table 1b, SUM2 in the majority of the conversations (92.5% in the training and 92.3% in the test set) is of type SUM2-A only, and the rest contain both SUM2-A and SUM2-B, where SUM2-A is generated by concatenating several utterances in the conversation. To take advantage of such a property, we treat summarization as a tagging task; that is, we generate the summaries by first labeling the utterances with the PD, DT, and OT tags and then concatenating the labeled utterances to form summaries. We define the input utterance sequence as U = u_1, u_2, ..., u_i, ..., u_n, with each u_i represented as a sequence of basic tokens (e.g., words or characters) u_i = w_{i,1}, w_{i,2}, ..., w_{i,l_i}. To model the input, our model follows the typical hierarchical structure in which the tokens and utterances are encoded by separate, hierarchically stacked encoders. A tagger is then attached at the utterance level to predict the PD/DT/OT labels. Afterwards, we concatenate the utterances labeled PD and DT to generate the summaries of the medical problem and the doctor's diagnosis, respectively. To further enhance our model, we adopt memory networks (Sukhbaatar et al., 2015) to incorporate the information from relevant utterances in the conversation. Therefore, our model is a hierarchical encoder-tagger (HET) with the memory module applied between the token-level and utterance-level encoders, as illustrated in Figure 2. It is also worth noting that our method can generate the two types of summaries simultaneously, since they come directly from the predicted PD/DT/OT labels. In the following text, we first introduce the memory module and then elaborate the whole hierarchical tagging process with the memories. Utterance Memories As discussed above, we regard our summarization task as an utterance tagging process. Similar to other tagging tasks in which contextual information is highly helpful in determining the output tags (Song and Xia, 2012; Marcheggiani and Titov, 2017; Higashiyama et al., 2019; Tian et al., 2020a; Tian et al., 2020b), for each utterance u_i in the conversation, relevant utterances in the same conversation also provide useful information to determine whether a particular utterance is important. To exploit the information from relevant utterances, we adopt end-to-end memory networks (Sukhbaatar et al., 2015), which (as well as their variants) have been demonstrated to be useful in many tasks (Miller et al., 2016; Tian et al., 2020c), to learn from them to facilitate important utterance tagging. In doing so, we first map all utterances [u_1, ..., u_j, ..., u_n] in the conversation into their memory vectors and value vectors. The memory vectors (denoted by m_j for u_j) are directly copied from the utterance representations obtained from the token encoder; the value vectors (denoted by v_j for u_j) are obtained by a BiLSTM encoder. Specifically, the memory vectors m_j are used to compute the similarity with the input utterance, while v_j carries u_j's encoding information for generating the final memory output.
Then, for each utterance u_i with its representation h_i, we use it to address relevant utterances through the memory, which is formalized as

p_{i,j} = δ_{i,j} · exp(h_i · m_j) / Σ_{j'} δ_{i,j'} · exp(h_i · m_{j'})

Here, δ_{i,j} ∈ {0, 1} is a binary activator which equals 1 if the speaker of u_j is identical to that of u_i and equals 0 otherwise; m_j = h_j because the memory vectors are copied from the utterance representations obtained from the token encoder (TE); p_{i,j} is the weight measuring the relevance between u_j and u_i. Afterwards, the value vectors v_j of u_j are weighted by p_{i,j} and summed by

a_i = Σ_j p_{i,j} · v_j

where a_i is the vector representing the information from relevant utterances via a weighted sum operation. The Hierarchical Encoder-Tagging with Memories To obtain the representation of each input utterance u_i, we apply BERT (Devlin et al., 2019) as our token-level encoder (TE), and use the encoded hidden vector of "[CLS]" 8 as h_i to represent the utterance u_i. Once a_i is obtained from the memory module, we concatenate it with h_i and get the resulting utterance representation for the utterance-level encoding 9 by

h̃_i = h_i ⊕ a_i

Then, an utterance-level encoder (UE) is applied to model the utterance representations in a sequential way. For example, if we use an LSTM for the UE, the utterance-level encoding is formulated as

o_i = f_LSTM(h̃_i, o_{i-1})

where o_i is the step-wise state for utterances and h̃_i is used as the input to the UE at each time step. Note that, in addition to the LSTM, there are many other choices for the UE, e.g., a BiLSTM; herein we use the LSTM as an example for the sake of simplicity. On top of the encoder is the tagger layer performing the identification task, where a trainable matrix W and a bias vector b are used to align o_i to the output space:

s_i = W · o_i + b

Afterwards, a softmax or conditional random field (CRF) (Lafferty et al., 2001) algorithm is applied to s_i to obtain the output tags. Finally, we concatenate all utterances with the labels PD and DT to generate the summaries of the patient's problem (SUM1) and the doctor's diagnosis (SUM2), respectively. Settings We experiment with our HET model, with and without the memory, on our corpus. For the model implementation, at the token-level encoder (TE) we use the Chinese versions of BERT 10 and ZEN (Diao et al., 2019) 11 with their default settings, where for both BERT and ZEN we use 12 layers of multi-head attention with the dimension of hidden vectors set to 768. For the utterance level, we first run experiments with no encoder; then, following previous studies such as Kalchbrenner and Blunsom (2013) and Kumar et al. (2018), we experiment with two recurrent neural network models (namely, LSTM and BiLSTM) to encode the utterance sequence of each conversation, where the dimension of hidden states is set to 300 for the LSTM and 150 for the BiLSTM encoder. In the memory module, the embedding matrix and the BiLSTM encoder for obtaining the value vectors v_j for u_j are applied directly to the Chinese characters in the utterance. All parameters in the embedding matrix and the BiLSTM encoder in the memory module are initialized randomly, with the dimensions of embeddings and hidden states set to 768 and 384, respectively (which allows the dimension of v_j to match that of the hidden vector of BERT and ZEN). For the tagger, we run two types, i.e., softmax and CRF, in order to test whether there is a strong dependency between the importance labels of adjacent utterances.
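Putting the memory module and the utterance-level tagging together, the sketch below shows one way the forward pass could look in PyTorch. It is a minimal sketch under our assumptions (dot-product scoring with a hard speaker mask, a BiLSTM utterance encoder, a softmax tagger, and single-conversation batches), not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceMemory(nn.Module):
    """Speaker-gated memory over utterances.

    h:        (n, d) utterance vectors from the token encoder ([CLS] of BERT),
              reused directly as the memory vectors m_j.
    values:   (n, d) value vectors v_j from a separate BiLSTM encoder.
    speakers: (n,) speaker ids used to build the binary activator delta_{i,j}.
    """
    def forward(self, h, values, speakers):
        scores = h @ h.t()                                    # h_i . m_j
        same = (speakers.unsqueeze(0) == speakers.unsqueeze(1)).float()
        scores = scores.masked_fill(same == 0, float("-inf"))  # delta mask
        p = F.softmax(scores, dim=-1)                          # p_{i,j}
        return p @ values                                      # a_i

class HETTagger(nn.Module):
    """Utterance-level BiLSTM encoder + linear layer for PD/DT/OT logits."""
    def __init__(self, d=768, hidden=150, n_tags=3):
        super().__init__()
        self.memory = UtteranceMemory()
        self.ue = nn.LSTM(2 * d, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, h, values, speakers):
        a = self.memory(h, values, speakers)
        x = torch.cat([h, a], dim=-1).unsqueeze(0)             # (1, n, 2d)
        o, _ = self.ue(x)
        return self.out(o).squeeze(0)                          # (n, n_tags)
```

A softmax (cross-entropy) or a CRF layer would then be applied to these per-utterance logits to produce the final PD/DT/OT tags.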
Table 2: The results of HET using BERT and ZEN as the token encoder with and without the memory module (M). We also try different combinations of utterance encoders (UE) (i.e., none, LSTM, and BiLSTM) and taggers (i.e., softmax and CRF). PD and DT are the two tags for important utterances; P, R, and F are the precision, recall, and F scores of the predicted labels when compared with the silver-standard PD/DT/OT labels; R-1, R-2, and R-L are the ROUGE-1, ROUGE-2, and ROUGE-L scores of the generated summaries when compared with the gold references in the corpus (i.e., SUM1 and SUM2).

We use cross-entropy and negative log-likelihood as the loss functions for softmax and CRF, respectively. For evaluation, we use F scores for the tagging results 12 and ROUGE-1, ROUGE-2, and ROUGE-L scores 13 to evaluate the generated summaries, using SUM1 and SUM2 in the dataset as the gold standard. (12 We use the code in the sklearn framework: https://scikit-learn.org/stable/modules/classes.html. 13 The code is from https://github.com/google-research/google-research/tree/master/rouge.) If the SUM2 of a conversation includes both SUM2-A and SUM2-B, we treat the concatenation of SUM2-A and SUM2-B as the gold standard for SUM2 in all the experiments, except for the results in Table 4. Basic HETs The first experiment explores how the HET models perform under different settings on the proposed dataset, where models with and without the memory module, configured with different token encoders (BERT and ZEN), UEs (no UE, LSTM, and BiLSTM), and taggers (softmax and CRF), are tested. Tables 2(a) and 2(b) show the results of utterance tagging (in terms of precision, recall, and F scores) and summarization (in terms of ROUGE-1, ROUGE-2, and ROUGE-L) for both the problem statement (SUM1) and the treatment recommendation (SUM2) when the BERT and ZEN token encoders are used. Table 3: Experimental results of our runs of models from previous studies as well as our best HET (with BiLSTM UE, softmax tagger, and the memory module). Several observations are in order. First, the overall results demonstrate that the method of generating summaries via tagging works well on our dataset. In most cases, models that perform well on tagging (F scores) also perform well on summarization (ROUGE scores). Second, for both the BERT and ZEN encoders, the HET model works well with different combinations of UEs and taggers, which illustrates the validity of our approach. Among the different settings, the one using the BiLSTM UE outperforms the others, suggesting that the sequential organization of utterances plays an important role in identifying important utterances in conversations. Third, compared with models without the memory module, models with memories achieve greater improvements on the doctor's diagnoses (SUM2). However, the effect of memories is not as pronounced for the problem description (SUM1). One possible explanation could be that the information from other utterances is more useful for determining whether an utterance should be tagged for SUM2 than for SUM1; the memory module can appropriately model such information, and thus including the memory module in HET is more helpful for SUM2 than for SUM1. Comparison with Previous Studies On our dataset, we compare our approach with two previous extractive summarization models. The first is SummaRuNNer (Nallapati et al., 2017) and the other is a contextualized extractive method (CEM) proposed by Wang et al. (2019). 14
Since these models are originally designed for document summarization and cannot generate summaries for the patient's problem and the doctor's diagnosis simultaneously, in our experiments we directly concatenate all utterances to form a document as the input (i.e., the conversation utterances are regarded as document sentences) and train the models for SUM1 and SUM2 separately. For both models, we apply the Chinese character embeddings from Tencent Embedding 15 and select the top-ranked 7% and 24% 16 of the utterances (sentences) as the summaries of the patient's problem and the doctor's diagnosis, respectively. Table 3 shows the best results of the two reference models as well as our model using BERT and ZEN with the best setting (i.e., BiLSTM UE, softmax tagger, and the memory module); our approach outperforms both reference systems on both SUM1 and SUM2, and the model with ZEN obtains the best results. SUM2-A vs. SUM2-B as Gold Standard As shown in Table 1b, 7.7% of conversations in the test set contain both SUM2-A and SUM2-B. So far, for those conversations, we have used the concatenation of SUM2-A and SUM2-B as the gold standard (see Tables 2-3). Table 4(a) shows the performance of the four systems (i.e., Ref-1 from Nallapati et al. (2017) and Ref-2 from Wang et al. (2019)) and our model using BERT and ZEN as TE under the best setting (i.e., BiLSTM UE, softmax tagger, with the memory module) on the entire test set, but with SUM2-A as the gold standard. Not surprisingly, for all systems, the performances with SUM2-A as the gold standard are higher than those with the concatenation of SUM2-A and SUM2-B as the gold standard (see the last three columns in Table 3). Table 4: Experimental results of the reference models (Ref-1 (Nallapati et al., 2017) and Ref-2 (Wang et al., 2019)) and our best model (with BiLSTM UE, softmax tagger, and the memory module), where different parts (i.e., SUM2-A or SUM2-B) of SUM2 are regarded as the gold standard. Table 4(b) reports the results on the 697 conversations in the test set that have both SUM2-A and SUM2-B, with either SUM2-A or SUM2-B as the gold standard. For all systems, ROUGE scores with SUM2-B as the gold standard are much lower than those with SUM2-A, indicating that generating summaries that are similar to manually crafted summaries is still a challenging task. HETs with Meta-Information In addition to the utterances, each conversation in the dataset has three major types of meta-information, namely, speaker role (patient or doctor) (SR), hospital department (HD), and disease name (DN). We experiment with adding such meta-information on top of our model using BERT and ZEN as TE under the best setting. To incorporate the meta-information, we use a single-layer neural network to transform each of them into a vectorized representation and concatenate it to the corresponding encoder layer. Specifically, SR is added to TE; HD and DN are added to UE. 17 Table 5 reports the performance of our HET models with different combinations of the meta-information, where the results without using any meta-information are shown in the first row (which is identical to the last row in Table 3). Compared to the baselines, models with meta-information achieve better performance in most cases. Specifically, adding SR yields larger improvements than HD and DN. One possible explanation could be that the utterances from the patient and the doctor could be more important in generating the problem statement (SUM1) and the treatment recommendation (SUM2), respectively.
Table 5: Results of our models using BERT and ZEN TE under the best setting (with BiLSTM UE, softmax tagger, and the memory module). "SR", "HD", and "DN" stand for the meta-information of speaker roles, hospital departments, and disease names, respectively.

Therefore, adding SR would help our model to focus more on the utterances of the patients and the doctors when predicting the PD and DT labels for SUM1 and SUM2, respectively. Extractive Summarization As a line of research directly related to our work, extractive summarization aims to extract important sentences from the input and use them to form a summary. Most previous studies focused on document summarization (Nallapati et al., 2017; Narayan et al., 2018; Xiao and Carenini, 2019; Luo et al., 2019), while some focused on summarization of meeting transcripts (Riedhammer et al., 2010; Singla et al., 2017); their problem settings and data preparation are different from ours. Specifically, compared with document summarization, our task of conversation summarization is more challenging because utterances in a conversation are less formally written and there are speaker role changes throughout the conversation; compared with summarization of meeting transcripts, where the summary is similar to a short meeting log, our task requires generating more informative summaries that provide useful information to potential patients on the online platform. General extractive approaches to summarization always face the challenge of redundancy when they use extracted sentences to generate an informative and readable summary within a length limit, and additional modeling is required to address it even with powerful neural models, e.g., BiLSTMs (Nallapati et al., 2017), transformers, and attention (Xiao and Carenini, 2019). On the contrary, in our work this challenge may not be an issue, because the redundancy in the original input is limited, and directly concatenating the selected utterances in their original order in the conversation does not lead to unreadable summaries in most cases. Therefore, to achieve good performance in conversation summarization in the medical domain, task-specific designs of the summarization model are needed. Utterance Modeling in Conversations Studies on dialogue systems have drawn much attention recently, and many of them concern utterance modeling in human-human conversations (Wang et al., 2018a; Liu et al., 2019). In these studies, one stream of utterance modeling focuses on dialogue act classification, which aims to assign one of a set of predefined acts to each utterance in a conversation (Lee and Dernoncourt, 2016; Liu et al., 2017; Kumar et al., 2018; Wang et al., 2018b; Raheja and Tetreault, 2019). Another stream focuses on the assessment of utterances in terms of their quality in various aspects, such as sentiment analysis (Inaba and Takahashi, 2016; Lison and Bibauw, 2017; Misra et al., 2019). Our study on extractive summarization for conversations can be regarded as belonging to the latter stream of evaluating utterances in human-human conversations, where little research has been done on assessing utterances by their importance to the pragmatic outcomes (i.e., summaries of the problem statement and treatment recommendations in our study) of the conversations.
Based on the real data from a Chinese online medical service provider, a hierarchical encoder-tagger model (HET), which is enhanced by the memory module, was proposed to tag each utterance in a conversation with problem statement or treatment recommendation. The labeled utterances are then concatenated to form summaries. The experimental results demonstrate the validity of our approach to medical conversation summarization via identifying important utterances on the proposed dataset. For future work, we plan to perform further key information extraction on the conversation summaries from similar medical problems, so that we can obtain relevant information such as symptoms and treatment recommendations to a particular medical problem and help new patients to locate more precise references that are covered in many cases.
Modified Pulsatillae decoction inhibits DSS-induced ulcerative colitis in vitro and in vivo via IL-6/STAT3 pathway Background Ulcerative colitis (UC) is a chronic inflammatory disorder of the colon and rectum, which is positively correlated with the occurrence of IBD-related colorectal cancer (IBD-CRC). Conventional therapies based on drugs such as corticosteroids, mesalamine, and immunosuppressants have serious side effects. Pulsatillae decoction (PD), a classical prescription for the treatment of colitis in China, has been shown to exert prominent curative effects with good safety. Based on clinical experience and our own modification, we added an extra herb to this classical prescription, but its therapeutic effect on UC and the underlying mechanism are still unclear. Results We first demonstrated the curative effect of modified PD on dextran sodium sulfate (DSS)-treated NCM460 cells. Then, C57BL/6 mice were administered DSS to induce UC in order to evaluate the therapeutic effect of modified PD. The results showed that modified PD alleviated the inflammatory injury, as reflected in body weight, colon length, and the disease activity index, together with histological analysis of colon injury. Transcriptomic sequencing indicated that modified PD treatment downregulated the IL-6/STAT3 signaling pathway and reduced the levels of p-NF-κB, IL-1β, and NLRP3, which were confirmed by western blot. Conclusions Collectively, our results indicate that modified PD could efficiently relieve the clinical signs and inflammatory mediators of UC, providing evidence of the anti-colitis effect of modified PD. This might offer novel strategies for therapeutic intervention in UC and may be applied to the prevention of IBD-CRC. Background Inflammatory bowel disease (IBD), including ulcerative colitis (UC) and Crohn's disease (CD), is a chronic and recurrent inflammatory disorder of unknown etiology [1]. Its duration and severity are positively correlated with the occurrence of IBD-related colorectal cancer (IBD-CRC) [2]. A recent comparative study in China and Canada showed that, compared with CD, the proportion of UC in China was significantly higher among patients with IBD-CRC [3]. The global burden of UC is rising, including the associated healthcare and societal costs. According to US data, the national annual direct and indirect costs related to UC are estimated to be $8.1 billion-$14.9 billion with a prevalence of 238/10000 [4,5]. Studies of the Chinese population have shown that the incidence of UC has recently increased with economic development and improved living standards, reaching 1.09-1.64 per 100,000 [2,3]. Although the precise cause of UC remains unknown, recent research indicates that an individual's genetic susceptibility, external environment, and commensal microflora are all involved and functionally integrated in the pathogenesis of UC [4,5]. It is reported that patients with UC express high levels of immunocytokines such as TNF-α and IL-6 [6]. The goal of clinical treatment is to achieve disease remission, prevent disease-related complications, and improve patients' quality of life. Over the past decades, the conventional therapies for UC have been based on the use of corticosteroids, mesalamine, and immunosuppressive drugs. Unfortunately, nearly one-third of the patients who are prescribed steroids require repeated dosing or persist with refractory disease [7].
Currently, biological therapies, especially tumor necrosis factor (TNF) inhibitors such as infliximab (IFX), adalimumab (ADA), certolizumab pegol, and golimumab, are becoming increasingly important. However, anti-TNF therapy has been accompanied by a number of side effects, including the risk of serious infections and the occurrence of fatal T cell lymphoma, owing to a rapid decrease of the T-cell population in the gut tissue [8]. Therefore, there is an urgent need to develop safe and effective therapies for treating UC. Herbal medicine, the most common modality of complementary treatment, exhibits bacteriostatic, anti-inflammatory, and anticancer activities, and has been used to treat diseases in China since the third century B.C. In recent years it has emerged as an alternative treatment for inflammatory diseases, including UC. Several studies have shown that herbal medicine and its extracts exert anti-UC effects in vitro and in vivo [9][10][11]. Pulsatilla decoction (PD) was first prescribed by the ancient Chinese physician Zhang Zhongjing in his medical book "Shang Han Lun", approximately 1800 years ago. Studies have indicated that PD has multiple therapeutic functions, including anti-C. albicans, anti-diarrheal, and anti-inflammatory activities [12][13][14]. However, our previous experiments showed that the therapeutic effect of PD on UC was unsatisfactory. In Traditional Chinese Medicine, the pathogenesis of UC is a "combination of excess and deficiency", so its treatment should combine "dispelling pathogenic factors and strengthening vital energy". The effect of the original formulation of Pulsatillae decoction is to clear away heat, detoxify, and cool the blood. Based on clinical experience and our own modification, we added an extra herb, Rhizoma Atractylodis Macrocephalae, to strengthen vital energy, but its therapeutic effect on UC and the underlying mechanism are still unknown. In this study, the dextran sulfate sodium (DSS)-induced colitis mouse model, which is well characterized morphologically and biochemically, was utilized to investigate the therapeutic effect of modified PD, providing evidence for the use of modified PD in treating UC [10]. Furthermore, the mechanistic study also elucidated that the IL-6/STAT3 signaling pathway is involved in the action of modified PD in alleviating UC. Modified PD alleviated DSS-induced injury in NCM460 cells To explore the effect of modified PD on DSS-induced UC, we first utilized the NCM460 cell line to evaluate the effect of DSS on cell viability. Cells were treated with a series of concentrations of DSS for 24 h and 48 h, and the cell viability was determined by MTT assay. As shown in Fig. 1A, we found that DSS significantly decreased cell viability and exerted its most deleterious effect at a minimum concentration of 0.2 μg/mL. Meanwhile, to assess the therapeutic effect of modified PD against the cell damage induced by DSS, we exposed NCM460 cells to modified PD and DSS simultaneously. The results showed that modified PD treatment restored cell viability, with the strongest effect at 100 μg/mL (Fig. 1B). These data indicated that modified PD alleviated DSS-induced injury in NCM460 cells. Modified PD relieved DSS-induced UC in vivo To further determine the protective role of modified PD against DSS-induced UC, we then explored whether modified PD could exert a similar effect in vivo. Modified PD was given to an animal model of UC induced by DSS.
As presented above, C57BL/6 mice were given 2.5% DSS (w/v) in their drinking water for the induction of acute UC and were simultaneously administered modified PD at doses of 3.185, 6.37, and 12.74 g/kg for 10 days (Fig. 2A). Body weight was recorded every day, and the animals were sacrificed on the 11th day, after which their colons were stored in formalin. As shown in Fig. 2B, the greatest weight loss was observed in the group treated with DSS alone, while the body weight in the control group was almost unchanged. Mice given modified PD experienced less weight loss than those given DSS alone, in a dose-related manner. The colon length and pathological grading also supported this conclusion (Fig. 2C-E). Additionally, H&E staining of the colon sections indicated that modified PD significantly abolished the immunological injury in DSS-treated colon tissues (Fig. 2F). Taken together, these results showed that modified PD ameliorated DSS-induced UC in vivo. Transcriptome sequencing (RNA-seq) analysis hinted at the potential signaling pathway involved in DSS-induced UC with modified PD treatment To determine the underlying mechanism by which modified PD mitigates DSS-induced UC, transcriptome sequencing (RNA-seq) analysis was performed to detect differential expression profiles in the colons of the NC, DSS, and modified PD groups. MA plots and volcano plots of the fold change in gene expression for the comparison between each pair of groups are shown in Fig. 3A, in which we found 2573 differential genes between the normal and DSS groups, 2019 differential genes between the DSS and modified PD groups, and 1062 differential genes between the normal and modified PD groups; genes with |fold change| ≥ 2 and p value ≤ 0.05 were deemed significant. Figure 3B displays a more concrete comparison, showing heatmaps and cluster analysis between each pair of groups. These preliminary analyses reveal that modified PD indeed elicits therapeutic effects on DSS-induced UC. To identify the transcriptomic pathways affected by DSS and modified PD, gene ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were utilized (Fig. 3C). As shown in Fig. 3D, DSS treatment led to activation of the Jak/STAT signaling pathway, whereas modified PD reversed this tendency. Additionally, we noticed that the level of IL-6 varied with DSS and modified PD treatment. Therefore, these genes and pathways might play important roles in DSS-induced UC and in the therapeutic effect of modified PD. Modified PD inhibits DSS-induced activation of the IL-6/STAT3 signaling pathway in vitro and in vivo Previous studies have shown that inhibiting or blocking the activation of the IL-6/STAT3 signaling pathway can attenuate the colon injury and inflammation in DSS-induced colitis, and the RNA-seq analysis also indicated that the underlying mechanism may be related to the IL-6/STAT3 pathway. To confirm this, we first used NCM460 cells for validation. As shown in Fig. 4A&B, western blot showed that DSS treatment significantly increased the expression of NLRP3, IL-1β, IL-6, and TNF-α; meanwhile, the phosphorylation levels of several marker proteins of the IL-6/STAT3 signaling pathway (STAT3 and NF-κB) also showed a remarkable upward trend compared with matched controls. However, modified PD could reverse this increase.
In addition, modified PD treatment significantly inhibited the elevated mRNA expression of VEGF, IL-6, and TNF-α in colonic tissues of DSS-treated mice, which illustrated that the effects of modified PD might be mediated by the IL-6/STAT3 pathway (Fig. 5A). Moreover, the protein levels in colonic tissues were consistent with those in NCM460 cells (Fig. 5B&C). Taken together, modified PD could downregulate the DSS-activated IL-6/STAT3 signaling pathway and suppress inflammatory cytokine production. These findings indicated that IL-6/STAT3 plays an important role in DSS-induced acute UC, and that modified PD could alleviate oxidative injury in the DSS-induced mouse model of UC. Discussion DSS, a low-molecular-weight sulfated polysaccharide, is utilized to induce epithelial damage and an inflammatory response in the colon in experimental mouse models, producing signs of acute colitis including weight loss, bloody stools, and diarrhea [15]. Besides, given the wide application of the murine colitis model in acute colitis studies, we chose C57BL/6 mice to perform the experiments [16][17][18][19]. As previously mentioned, the cause and underlying mechanisms of UC remain unclear, but what can be determined is that the chronic relapsing-remitting inflammatory condition recruits proinflammatory cytokines such as interleukin-6 (IL-6), IL-1β, TNF-α, and interferon-γ (IFN-γ), thereby resulting in severe colon injury [20]. The conventional treatments for UC, including aminosalicylates, corticosteroids, and immune modulators such as sulfasalazine and glucocorticosteroids, induce remission in only half of patients [21]. However, these chemotherapies can cause serious side effects such as vomiting, anemia, and generalized edema, which can be life-threatening [10]. Due to its obscure etiology, high risk of recurrence, and poor prognosis, UC has become a clinical challenge in terms of treatment. Therefore, studies on alternative therapies for chronic inflammatory diseases, especially UC, have recently attracted great interest. Traditional Chinese Medicine (TCM) is one of the most developed branches of herbal medicine, which uses plants and/or plant extracts for medical treatment. It has been recorded that a number of natural products exhibit effectiveness for the treatment of UC, including curcumin, cannabinoids, Andrographis paniculata, and Tripterygium wilfordii, among others [22][23][24]. Pulsatilla decoction, which consists of Radix Pulsatillae, Rhizoma Coptidis, Cortex Phellodendri, and Cortex Fraxini, is a TCM formulation derived from the medical book "Shang Han Lun" written by the ancient Chinese physician Zhang Zhongjing about 1800 years ago. Several trials have shown that the prescription exerts prominent anti-inflammatory effects, especially on enteritis and bacillary dysentery [25]. Based on the classical prescription and our clinical experience, we adjusted the proportions of the ingredients in Pulsatilla decoction and added another herbal plant, roasted Rhizoma Atractylodis Macrocephalae. In our study, we examined whether modified PD alleviated the severity of tissue damage in colitis; the results showed that treatment with modified PD reduced the extent of damage in colons affected by UC. Besides, transcriptome sequencing indicated that the IL-6/STAT3 signaling pathway may be involved in the anti-inflammatory mechanism of modified PD. Western blot verified the decreased expression of IL-6 and p-STAT3 in colon tissue.
Meanwhile, the expression levels of NF-κB, NLRP3, IL-1β, and TNF-α were diminished. Through the IL-6/STAT3 signaling pathway, modified PD could inhibit the increased inflammatory response and reduce the severity of colitis lesions (Fig. 6). The multi-target effect of TCM formulations leads us to wonder whether modified PD also functions by influencing other signaling pathways. The KEGG pathway classification from transcriptome sequencing reveals that TLR4/MyD88 signaling is one of the most significantly enriched pathways, which plays a vital role in mediating the inflammatory response. TLR4 mainly recognizes pathogen-associated molecules; after being stimulated, TLR4 recruits and activates downstream IRAK, IRAK2, and TRAF6, eventually regulating MAPK, IRF5, and NF-κB, and, as a result, terminal inflammatory factors IL-1β and TNF-α are released [26]. Several studies have shown that various TCM formulations and compounds can alleviate inflammatory bowel diseases through the TLR4/MyD88 pathway [27][28][29]; nonetheless, whether modified PD acts through this signaling pathway still needs further investigation. Our work indicated the anti-colitis potential of modified PD in vitro and in vivo, shedding light on UC intervention using modified PD. Further animal experiments show that modified PD can significantly improve the symptoms of stool with pus and blood and weight loss in the acute stage of inflammation, and significantly reduce the occurrence of IBD-CRC when used to treat AOM/DSS-induced IBD-CRC. The mechanism remains to be further elucidated. Conclusions In summary, we demonstrated that modified PD could inhibit the increased inflammatory response and reduce the severity of colitis lesions through the IL-6/STAT3 signaling pathway. Meanwhile, modified PD decreased the expression of NF-κB, NLRP3, IL-1β, and TNF-α. These results suggest the anti-colitis potential of modified PD in vitro and in vivo, shedding light on UC intervention using modified PD. Cell viability assay The cells were seeded into 96-well plates and treated with the drug at a series of concentrations for 24 h and 48 h. After treatment, cells were incubated with MTT (Solarbio, Shanghai, China) for 4 h at 37°C. Then the supernatant was removed, and the formazan crystals were dissolved in 200 μL dimethyl sulfoxide. Finally, the optical density was measured at 570 nm with a microplate reader (Thermo Fisher Scientific, Inc., USA). Mouse model Eight-week-old female C57BL/6 mice were purchased from Nanjing Qinglongshan Animal Breeding Base (Nanjing, China). All animals were reared in an SPF-level laboratory (temperature 24-25°C, humidity 70-75%, with a 12 h light/dark lighting regimen) and were fed a standard diet of pellets and water ad libitum. Animal welfare and experimental procedures were carried out strictly in accordance with the Guide for the Care and Use of Laboratory Animals (National Institutes of Health, the United States) and the IACUC protocols of our university (approval no. ACU170501). Mice were randomly assigned to 5 groups, 9 mice per group and 45 mice in total: negative control group (mice received drinking water and saline), model group (mice received DSS in drinking water only), low-dose modified PD group (3.185 g/kg together with DSS), medium-dose modified PD group (6.37 g/kg together with DSS), and high-dose modified PD group (12.74 g/kg together with DSS).
Colitis was induced by providing 2.5% DSS (w/v) to mice in drinking water for 7 days, and the mice were simultaneously given modified PD at different doses from the first day [10]. Body weight was measured daily, and the animals were sacrificed at day 11 by cervical dislocation, and their colons were collected. The colon was fixed in 10% formalin for at least 24 h for further histopathological assessment. Total RNA was extracted from colon tissues using TRIzol reagent (Invitrogen, US) according to the manufacturer's procedural guidelines, and cDNA was synthesized using the HiScript RT SuperMix for qPCR kit (Vazyme, China). Quantitative real-time PCR was then performed using the iTaq SYBR Green Supermix with ROX kit (Bio-Rad, US). The specific primers used for detecting the genes are listed in Table 2. All quantitative real-time PCR experiments were performed on a LightCycler 96 system (Roche, Germany). Relative expression of target genes was normalized to GAPDH, analyzed by the 2^-ΔΔCt method, and given as a ratio relative to the control. The primer sequences are shown as follows. High-throughput transcriptomic sequencing RNA samples were sent to Weifen Biotech (Anhui, China) for RNA-seq. Briefly, total RNAs were isolated from triplicate colon tissues of the control, DSS, and modified PD groups (here we selected the high-dose modified PD group). Three biological repeats were included for each group. mRNA was extracted from the total RNA after removing 16S and 23S rRNAs and then pooled together for cDNA synthesis and sequencing. After generating the clusters, library sequencing was performed on an Illumina HiSeq2000 platform to create paired-end reads with a length of 150 bp. Gene ontology and KEGG pathway analyses were performed using DAVID. Western blot Whole cellular or tissue proteins were extracted with Pierce RIPA Buffer (Thermo Scientific, US) and protease inhibitor cocktail (Yeasen, Shanghai, China); lysates were then transferred to a 1.5-ml microcentrifuge tube and centrifuged at 12,000 rpm for 20 min at 4°C. The supernatant was retained, and the protein concentration was normalized to equal levels using the BCA Protein Assay kit (Thermo Fisher, USA). The extracts were separated by SDS-PAGE and then transferred to 0.45 μm Immobilon-P transfer membranes (Millipore, Bedford, MA). Membranes were blocked with 5% skim milk for 1 h, followed by incubation with a primary antibody at 4°C overnight. They were then washed and treated with an HRP-labeled secondary antibody at 37°C for 2 h. Immunoblots were visualized with the High-sig ECL Western Blot Substrate (Tannon, Shanghai, China). Histological analysis Mouse colon tissue was fixed with a sufficient amount of 10% formalin for 24 h. Tissue sections were prepared by material extraction, dehydration, wax dipping, embedding, sectioning, HE staining, and coverslipping. The sections were scanned with a digital scanning system, and the electronic slides were stored and analyzed on a computer.
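As a small worked example of the 2^-ΔΔCt calculation used in the quantitative real-time PCR analysis above, the sketch below computes a fold change normalized to GAPDH. The Ct values shown are hypothetical and for illustration only.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative mRNA expression by the 2^-ddCt method.

    ct_target, ct_gapdh:           mean Ct of the target gene / GAPDH in a treated sample
    ct_target_ctrl, ct_gapdh_ctrl: mean Ct of the same genes in the control sample
    Returns the fold change of the target gene relative to the control,
    normalized to the GAPDH reference gene.
    """
    d_ct_sample = ct_target - ct_gapdh            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. IL-6 in a DSS-treated colon vs. control (hypothetical Ct values)
fold = relative_expression(24.1, 18.0, 26.6, 18.2)   # ~4.9-fold increase
```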
The specific scoring method for histological damage is as follows: normal intestinal mucosa in the visual field, 0 points; mild inflammation and edema of the mucosal layer with loss of one-third of the basal crypts, 1 point; moderate inflammation of the mucosal layer with loss of two-thirds of the basal crypts, 2 points; moderate inflammation of the mucosal layer with complete loss of the crypts but an intact epithelial layer, 3 points; severe mucosal inflammation extending into the stroma, submucosa, and muscular layer, with loss of the crypts and epithelium, 4 points [30][31][32]. Statistical analysis All data are presented as means ± standard error of the mean (SEM). One-way analysis of variance (ANOVA) was used to evaluate differences between experimental groups. For all analyses, p < 0.05 was considered statistically significant. Analyses were performed using GraphPad Prism software (GraphPad Software Inc., CA, USA).
The Blood Gene Expression Signature for Kawasaki Disease in Children Identified with Advanced Feature Selection Methods Kawasaki disease (KD) is an acute vasculitis, accompanied by coronary artery aneurysm, coronary artery dilatation, arrhythmia, and other serious cardiovascular diseases. So far, the etiology of KD is unclear; it is therefore necessary to study the molecular mechanism and related factors of KD. In this study, we analyzed the expression profiles of 75 DB (identified bacterial infection), 122 DV (identified viral infection), 71 HC (healthy control), and 311 KD (Kawasaki disease) samples. 332 key genes related to KD and pathogen infections were identified using a combination of advanced feature selection methods: (1) Boruta, (2) Monte-Carlo Feature Selection (MCFS), and (3) Incremental Feature Selection (IFS). The number of signature genes was narrowed down step by step. Subsequently, their functions were revealed by KEGG and GO enrichment analyses. Our results provide clues to potential molecular mechanisms of KD and may be helpful for KD detection and treatment. Introduction Kawasaki disease (KD) is an acute vasculitis, accompanied by coronary artery aneurysm, coronary artery dilatation, arrhythmia, and other serious cardiovascular diseases [1,2]. It was first described by the Japanese doctor Kawasaki in the late 1960s and has since been reported around the world with an increasing incidence [3,4]. According to a recent survey, Japan has the highest incidence of KD, with 265 cases per 100,000 children under the age of five [5]. KD initially manifests as high fever, cervical lymphadenopathy, and mucocutaneous inflammation [6]. Aspirin therapy and intravenous immunoglobulin (IVIG) injection play a key role in the effective treatment of KD, reducing the incidence of coronary artery complications from 25% to 5% [7]. KD occurs not only in infancy and childhood but also in adolescence. The young age of onset suggests that susceptibility may be related to the maturity of the immune system [8]. So far, the etiology of KD is unclear, but epidemiological features indicate that there may be a connection between it and as-yet-undefined pathogen infections. In the surveys of Uehara and Belay, the incidence of KD reached a peak in winter and spring, similar to that of many respiratory diseases. This seasonal pattern suggests that KD may be caused by one or several pathogens related to respiratory diseases [2,8,9]. According to statistics, 8-42% of patients were associated with respiratory virus infection and 33% with bacterial infection [10][11][12][13]. Viral infection leads to abnormal lymphocyte subsets and inflammation, which are positively correlated with the occurrence of vascular inflammation in KD [14]. Rowley et al. detected upregulated expression of interferon-stimulated genes in acute KD lung tissue, which illustrated the presence of a cellular immune response after viral infection. They also observed that coronary artery inflammation in KD was characterized by an antiviral immune response, including the upregulation of type I interferon-induced genes and the activation of cytotoxic T lymphocytes [15][16][17]. A related study suggested that some common respiratory viruses, such as enteroviruses, adenoviruses, coronaviruses, and rhinoviruses, were associated with KD cases [11]. It is reported that, among these viruses, human coronavirus (HCoV)-229E may be involved in the occurrence of KD [18].
All of these strongly support the hypothesis that infection by viruses and bacteria may be related to KD. To date, there is no specific clinical diagnostic test for KD, and the diagnosis is still highly dependent on the symptoms and ultrasound imaging results [19]. Therefore, it is still necessary to study the molecular mechanism and related factors of KD. In this study, we analyzed the expression profiles of DB (identifying bacteria), DV (identifying virus), HC (healthy control), and KD (Kawasaki disease) samples. By comparing their expression differences, we obtained 332 key genes related to KD and pathogen infections. Subsequently, their functions were revealed by KEGG and GO enrichment analysis. Our study provides a direction for the study of the potential molecular mechanisms of KD occurrence. The expression profiles were measured on the HumanHT-12 V4.0 expression beadchip. Only the common 25,159 genes were analyzed. We performed quantile normalization to make sure that samples from different batches were comparable, using the R function "normalize.quantiles" in the package preprocessCore (https://bioconductor.org/packages/preprocessCore/). Boruta Feature Filtering. Since there were many genes and most of them were not associated with KD, we applied Boruta feature filtering [21] to detect all the relevant genes first. Boruta feature filtering is an advanced feature selection method wrapped around a random forest. First, shuffled copies of the real features were added to the dataset. Then, the importance of each feature was calculated. The features with real importance scores significantly higher than those of the shuffled copies were kept. Iteratively, all relevant features were selected. With Boruta feature filtering, we obtained a much smaller number of features for further analysis. We used the Python package Boruta (https://pypi.org/project/Boruta/) to apply the Boruta feature filtering. Monte-Carlo Feature Selection. We adopted Monte-Carlo Feature Selection (MCFS) [22] to rank the relevant features. It generated a number of randomly selected feature sets and then constructed many classification trees [23][24][25]. By ensembling these classification trees, the importance of each feature was calculated. In general, a feature was important if it had been selected by many classification trees. Suppose d is the total number of relevant features selected by Boruta; m features (m << d) are randomly selected s times, and t trees are constructed for each of the s subsets, giving s·t classification trees in total. The relative importance (RI) of feature g was \(RI_g = \sum_{\tau=1}^{s \cdot t} (wAcc)^u \sum_{n_g(\tau)} IG(n_g(\tau)) \left( \frac{\text{no. in } n_g(\tau)}{\text{no. in } \tau} \right)^v\), where wAcc was the weighted classification accuracy of decision tree τ, IG(n_g(τ)) was the information gain of node n_g(τ), which was a decision rule of feature g, (no. in n_g(τ)) was the number of samples under node n_g(τ), (no. in τ) was the number of samples in decision tree τ, and u and v were adjustable parameters. Based on RI, the features were ranked as \(F = [f_1, f_2, \ldots, f_N]\), where N was the total number of relevant features, and the feature with the smaller index had the greater RI. Incremental Feature Selection. After the features were ranked by MCFS, it was still difficult to decide how many features should be selected. To avoid arbitrarily chosen cutoffs, we applied Incremental Feature Selection (IFS) [26][27][28][29][30]. For the selected and ranked feature list F, we created a series of feature subsets by iteratively adding top-ranking features to the previous feature subsets and then evaluated their performance by building SVM classifiers and applying leave-one-out cross-validation (LOOCV). The feature subset with the highest LOOCV accuracy was selected.
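The sketch below illustrates the kind of pipeline described above: Boruta filtering followed by an incremental feature selection loop with an SVM evaluated by LOOCV. It is a minimal illustration, not the authors' code; the expression matrix X, the label vector y, and the MCFS-style ranking ranked_idx are placeholders the reader must supply, and all parameter settings are assumptions.

```python
# Minimal sketch of Boruta filtering followed by incremental feature selection
# (IFS) with an SVM and leave-one-out cross-validation. X (samples x genes) and
# y (class labels: DB, DV, HC, KD) are assumed to be provided as numpy arrays;
# ranked_idx stands in for the MCFS ranking of the Boruta-relevant features.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def boruta_filter(X, y, random_state=1):
    """Return the column indices judged relevant by Boruta."""
    rf = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=random_state)
    selector = BorutaPy(rf, n_estimators='auto', random_state=random_state)
    selector.fit(X, y)
    return np.where(selector.support_)[0]

def incremental_feature_selection(X, y, ranked_idx, max_features=None):
    """Add top-ranked features one at a time and keep the subset with the
    highest LOOCV accuracy of a linear SVM."""
    max_features = max_features or len(ranked_idx)
    best_acc, best_k = 0.0, 0
    for k in range(1, max_features + 1):
        subset = ranked_idx[:k]
        clf = SVC(kernel='linear')
        acc = cross_val_score(clf, X[:, subset], y, cv=LeaveOneOut()).mean()
        if acc > best_acc:
            best_acc, best_k = acc, k
    return ranked_idx[:best_k], best_acc

# Example usage (X, y, and ranked_idx must be defined by the reader):
# relevant = boruta_filter(X, y)
# signature, acc = incremental_feature_selection(X, y, ranked_idx)
```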
Results and Discussion 3.1. The Irrelevant Genes of Kawasaki Disease Were Filtered by Boruta. The genome-wide expression measurements of genes provided a powerful way to understand the molecular functions of Kawasaki disease. However, most of the genes were not associated with KD and acted as noise in sophisticated bioinformatics analysis. Therefore, we applied the Boruta algorithm to filter out the irrelevant genes and kept the relevant genes. After performing Boruta, the number of genes was reduced from the original 25,159 to 1,485. 3.2. The Genes Were Ranked Based on Their Importance in Kawasaki Disease. For the 1,485 KD-relevant genes, we wanted to know how strongly each was associated with KD. To rank them based on their importance, we used the MCFS method. It can rank the genes based on their contributions in a series of classification trees. Since it is an ensemble learning method, the results are reliable and robust. The ranked genes and their relative importance are listed in Table S1. The top 663 genes were marked as "top ranking" genes by MCFS. 3.3. The Kawasaki Disease Signature Genes Selected with the IFS Method. The number of genes, 663, was still too large for a gene signature. To further reduce the number of genes, we applied the IFS procedure to the top 663 genes in Table S1. We tried different numbers of top-ranked genes and calculated their SVM LOOCV accuracy. The IFS curve is shown in Figure 1. The highest LOOCV accuracy was 0.933 when 332 genes were used. Therefore, these 332 genes were selected as the final Kawasaki disease signature genes. The confusion matrix of the 332 genes is shown in Table 1. It can be seen that most samples were correctly classified. Among the four groups, only DB had a relatively poor performance. The other three groups all had excellent performance. 3.4. The Biological Significance of the 332 Selected Genes. We found that some genes have been confirmed to be associated with KD. For example, Haptoglobin (HP), an acute-phase protein synthesized by the liver, responds to inflammatory cytokines and has been thought to be associated with vascular disease [31,32]. Huang et al. made a comprehensive evaluation of the acute-phase reactants in patients with KD. It was found that the level of serum HP in KD cases was significantly higher than that in other febrile diseases. The ratio of HP/apolipoprotein A-I could accurately distinguish KD from other febrile diseases and could be used as an auxiliary laboratory index in the acute phase of KD [33]. The early diagnosis and treatment of KD are very important for a better prognosis and better survival rate in children. By studying the relationship between HP phenotype and coronary artery abnormality (CAA) formation in patients with KD, Lee et al. found that the clinical symptoms of HP-1 patients were delayed or incomplete, and the late diagnosis of KD was related to the haptoglobin phenotype [34]. BAX is an essential mediator of intrinsic apoptosis through permeabilization of the mitochondrial outer membrane [35]. In the study of Tsujimoto et al., they measured the expression levels of the antiapoptotic protein A1 and the proapoptotic protein BAX and the ratio of A1/BAX in the viral infection group, bacterial infection group, KD group, and healthy children group. The results showed that the ratio of A1/BAX in patients with acute KD was significantly increased, suggesting that spontaneous apoptosis of polymorphonuclear neutrophils (PMN) was inhibited in patients with acute KD [36].
To comprehensively study the biological functions of these 332 selected genes, we enriched them against KEGG pathways and GO terms using a hypergeometric test. The enrichment results with an FDR smaller than 0.05 are given in Table 2. The results of the KEGG enrichment analysis showed that the key genes were significantly correlated with influenza A, Epstein-Barr virus (EBV) infection, hepatitis C, systemic lupus erythematosus, and measles. Many previous studies have shown that KD is associated with influenza, coronavirus, and EBV. A case report by Wang et al. described a case of incomplete KD (IKD) concurrent with influenza A (H1N1) pdm09 virus infection, suggesting that influenza infection may be a potential cause of KD [14]. In addition, a study from Korea showed that the monthly incidence of KD had a significant correlation with the monthly overall viral detection, including human bocavirus, enterovirus, and influenza [37]. A case-control study showed that specimens of respiratory secretions from 8 of 11 children with KD and from 1 of 22 control subjects tested positive for New Haven coronavirus (HCoV-NH). These data suggest that HCoV-NH infection is associated with KD [38]. Unfortunately, in another study in Taiwan, researchers did not find any association between HCoV-NH infection and KD [39]. A recent study reported an unusually high incidence of Kawasaki disease in children in a French centre for emerging infectious diseases: 17 cases in 11 days. In 82% of the cases, IgG antibodies against SARS-CoV-2 were detected, suggesting an association between the virus and this syndrome in children [40]. As for the correlation between KD and EBV, Huang et al. found that EBV sequences were detected in 83% of repeatedly tested KD patients within 3 months after onset; this proportion is much higher than that of the control group [41]. These virological studies indicate that an unusual EBV-cell interaction may occur in KD. Besides, the prevalent ages at onset for KD and EBV infection are known to be similar in Korea and Japan [42]. Pavone et al. suggested in a case report that KD is caused by a new virus that may cross-react with EBV. Therefore, in febrile children with EBV infection or similar conditions, the possibility of Kawasaki disease should be considered, and a differential diagnosis is necessary in order to start intravenous immunoglobulin therapy in time [43].
Figure 1: The IFS curve of Kawasaki disease signature gene selection. The x axis is the number of top genes and the y axis is the LOOCV accuracy. The highest LOOCV accuracy was 0.933 when 332 genes were used; therefore, these 332 genes were selected as Kawasaki disease signature genes.
Lee et al. found that patients with a history of KD appeared to be infected with EBV later than those with no history of KD [44]. It has been reported that in patients with a history of KD, the occurrence of a second autoimmune disease should also be considered. In addition, the initial manifestations of lupus may mimic KD [45]. Furthermore, the GO enrichment analysis found that these key genes were enriched in functions related to cytoplasmic components, immune response, enzyme activity, and molecular binding. There is a lot of evidence that the innate immune system plays an important role in initiating and mediating host inflammation. The acute phase of Kawasaki disease is characterized by inhibitory T cell deficiency. The marked activation of T cells, B cells, and monocytes is related to the increase of cytokines secreted by these immune effector cells.
This immune activation promotes the injury of vascular endothelial cells in Kawasaki disease. The light and electron microscopic studies of the antigens in the ciliated bronchial epithelium of acute KD by Rowley and Shulman showed that the KD-associated antigens were located in cytoplasmic inclusions consistent with aggregates of viral proteins and associated nucleic acids [46]. Conclusions To sum up, we analyzed the gene expression profiles of KD samples and identified the blood gene signature of KD. The functional analysis of these KD signature genes suggested that the correlation between KD and pathogen infection, especially the new influenza virus H1N1, should attract more attention. In addition, the potential mechanism of KD mediated by virus infection is also worthy of further study, which may provide a scientific basis and new insights into the pathogenesis of KD. Our study also provides a direction for the study of the etiology of KD in the future.
3,291.4
2020-06-28T00:00:00.000
[ "Medicine", "Biology" ]
Sevoflurane upregulates neuron death process-related Ddit4 expression by NMDAR in the hippocampus Postoperative cognitive dysfunction (POCD) is a serious and common complication induced by anesthesia and surgery. Neuronal apoptosis induced by general anesthetic neurotoxicity is a high-risk factor for POCD. However, a comprehensive analysis of general anesthesia-regulated gene expression patterns and further research on molecular mechanisms are lacking. Here, we performed bioinformatics analysis of gene expression in the hippocampus of aged rats that received sevoflurane anesthesia in GSE139220 from the GEO database, found a total of 226 differentially expressed genes (DEGs) and investigated hub genes according to the number of biological processes in which the genes were enriched and by screening with 12 algorithms with cytoHubba in Cytoscape. Among the screened hub genes, Agt, Cdkn1a, Ddit4, and Rhob are related to the neuronal death process. We further confirmed that these genes, especially Ddit4, were upregulated in the hippocampus of aged mice that received sevoflurane anesthesia. NMDAR, the core target receptor of sevoflurane, rather than GABAAR, mediates the sevoflurane regulation of DDIT4 expression. Our study screened sevoflurane-regulated DEGs and focused on the neuronal death process to reveal DDIT4 as a potential target mediated by NMDAR, which may provide a new target for the treatment of sevoflurane neurotoxicity. INTRODUCTION More than 300 million operations are performed worldwide each year, and a continued increase is observed in all economic environments [1]. Postoperative cognitive dysfunction (POCD) is a common cognitive impairment in patients during the perioperative period and is mostly found in elderly patients, with a reported incidence ranging from 15% to 60% [2,3]. POCD is mainly characterized by progressive postoperative memory impairment, cognitive decline, and executive dysfunction. In addition, the effects of POCD may not be temporary and can lead to neurological dysfunction years after surgery [4,5], which is associated with an increased risk of life-threatening illness and death. Neuronal apoptosis, a high-risk factor inducing POCD, leads to decreased neurogenesis, impaired synaptic plasticity, neuroinflammation, and oxidative stress in POCD patients [6][7][8]. Sevoflurane can induce POCD-related behaviors in animals, such as mice [9] or rats [10,11]. However, the neurobiological basis of sevoflurane neurotoxicity remains largely unknown. General anesthetic neurotoxicity has been extensively examined in recent years [12][13][14]. An increasing number of studies have shown that inhaled anesthetics may cause neurotoxicity, leading to hippocampal neuronal damage and apoptosis, which result in cognitive dysfunction [15][16][17]. Sevoflurane, the most commonly used inhalation anesthetic, induces neuronal apoptosis [18][19][20][21][22]. Sevoflurane enhanced the production of lactate in aged marmoset brains [23]. Lactate accumulation can induce neuronal apoptosis or even acidosis in critically ill patients [24][25][26]. Sevoflurane was shown to activate the gamma-aminobutyric acid subtype A receptor (GABAAR) to induce apoptosis of immature dentate granule cells in mice [27]. Apoptosis is regulated by multiple pathways, among which the mechanism of neuronal apoptosis induced by sevoflurane through related signaling pathways has attracted increased attention [28][29][30].
Sevoflurane inhibits the ERK1/2 signaling pathway by antagonizing the N-methyl-D-aspartate receptor (NMDAR) and upregulates the expression of the apoptotic proteins caspase-3 and Bax in mitochondria, resulting in apoptosis of hippocampal neurons [31]. Additionally, sevoflurane promotes the expression of the apoptotic factor connexin 43 (Cx43) and leads to neuronal apoptosis by activating the JNK/c-Jun/AP-1 signaling pathway [32]. However, a comprehensive analysis of the differentially expressed genes (DEGs) regulated by sevoflurane and further investigation of the molecular mechanism are lacking. Transcriptomic analysis has identified comprehensive gene expression patterns to help reveal potential mechanisms of various neurological diseases [33,34] and has shown that inhaled anesthetics are associated with neurological damage [35]. Here, we applied multistage comprehensive bioinformatics methods to explore the possible pathogenesis of sevoflurane neurotoxicity. We focused on the potential key genes with different expression levels in the hippocampus of aged rats after sevoflurane anesthesia, established the functional annotation of their potential target genes and used gene enrichment analysis to reveal the role of DEGs associated with the cell death process in sevoflurane anesthesia. We further performed an in vivo experiment in aged mice that received sevoflurane exposure to confirm the pattern of upregulation of these genes in the hippocampus and further found that NMDAR mediates the sevoflurane regulation of DDIT4 expression. This work showed the molecular mechanism of sevoflurane-induced neuronal apoptosis and provided a new potential target for sevoflurane toxicity. DEG identification A total of 10,032 unique genes were annotated with the SwissProt database. The expression boxplot of all genes for each sample is shown in Figure 1A after normalization with the rma function using the oligo package. The differential expression analysis identified 194 upregulated and 32 downregulated genes after treatment with 2.5% sevoflurane in 100% oxygen for 4 hours in an anesthetizing chamber, with the criterion of a P value less than 0.05. Among all DEGs, 170 DEGs (153 upregulated DEGs and 17 downregulated DEGs) could be annotated in Metascape; they are listed in Table 1, and the heatmap of the DEGs between the two groups is displayed in Figure 1B. Functional enrichment analysis of DEGs Biological processes and KEGG annotation were applied to explore the function of the DEGs. The DEGs played a significant role in localization, signaling, metabolic process, developmental process, and positive regulation of biological process (Figure 2A). Twenty-four biological process terms were filtered with a P value less than 0.001, and the DEGs were significantly enriched in the regulation of neuronal apoptosis (Figure 2B). Next, we screened the biological processes associated with neuron death using the keywords "neuron" and "death", and 5 biological processes (positive regulation of cell death, regulation of neuron death, negative regulation of neuron death, regulation of neuron apoptotic process and negative regulation of neuron apoptotic process) were dysregulated by sevoflurane (Figure 3A). Most of the genes enriched in the disordered biological processes associated with cell death were upregulated after sevoflurane inhalation (Figure 3B-3F).
A total of 10 KEGG pathways, such as peroxisome, AGE-RAGE signaling pathway in diabetic complications, inositol phosphate metabolism, vascular smooth muscle contraction, Rap1 signaling pathway, and glycerophospholipid metabolism, were enriched by the DEGs (Figure 2C). Protein-protein interaction network construction and hub gene selection A total of 57 nodes and 97 interactions of the DEGs were identified in STRING and were visualized in Cytoscape (Figure 4). We calculated the number of genes enriched in biological process terms, and the genes that were enriched in at least 10 terms are listed in Table 2. The cytoHubba application identified 58 hub genes with 12 algorithms, including 29 genes that were identified by at least five different methods as candidate hub genes (Table 3). Six hub genes (Agt, Cdkn1a, Ddit4, Pdgfra, Rapgef3, and Rhob) were selected by both methods (Figure 5A). The six hub genes were upregulated after sevoflurane inhalation (Figure 5B). We further validated the expression of the six hub genes in vivo. We found that 4 h of 3% sevoflurane treatment increased the mRNA levels of Agt, Cdkn1a, Ddit4, Pdgfra, Rapgef3, and Rhob in the mouse hippocampus (Figure 5C). Among the 6 hub genes, 4 genes (Agt, Cdkn1a, Ddit4, and Rhob) were also enriched in biological processes associated with neuronal death (Figure 5D). Sevoflurane upregulated the expression of Ddit4 DDIT4, whose encoded protein is regulated by development and DNA damage and participates in various pathological processes, was significantly enriched in the regulation of neuron death and positive regulation of cell death (Figure 6A). NMDAR and GABAAR are considered important targets of sevoflurane [36][37][38]. Therefore, we further explored whether DDIT4 is regulated by NMDAR or GABAAR. We found that activation of GABAAR by injection of the GABAAR agonist muscimol (1.25 μg) into the mouse hippocampus did not cause a significant change in Ddit4 expression (Figure 6B). However, after injection of the NMDAR antagonist MK-801 (0.25 μg) into the mouse hippocampus (injection coordinates: AP −2.1 mm, ML 1.5 mm, DV −2.1 mm) by brain stereotactic injection, the mRNA expression of Ddit4 was increased (Figure 6C). While using sevoflurane for anesthesia treatment, we injected 0.5 μg NMDA into the hippocampus (AP −2.1 mm, ML 1.5 mm, DV −2.1 mm) of mice and found that the increased expression of Ddit4 caused by sevoflurane could be rescued by NMDA, indicating that the effect of sevoflurane on the expression of DDIT4 might occur through the NMDA receptor (Figure 6D). The western blot results showed that the DDIT4 protein level was elevated after sevoflurane treatment but decreased after NMDA supplementation (Figure 6E). DISCUSSION As a common perioperative neurological impairment in elderly patients, POCD strongly affects rapid recovery and long-term quality of life and places a heavy burden on patients' families and society [39]. Neuronal apoptosis induced by sevoflurane is one of the possible factors leading to POCD [10,40,41]. Sevoflurane may lead to neuronal death or neuroinflammation to induce cognitive impairment [42]. In this study, we comprehensively analyzed a total of 170 DEGs, 153 upregulated genes and 17 downregulated genes, in the hippocampus of aged rats after sevoflurane anesthesia, and 4 hub genes (Agt, Cdkn1a, Ddit4, and Rhob) were critically related to the biological process of cell death.
We further confirmed the upregulation of these genes, especially Ddit4, in the hippocampus of the aged mice that received 4 hours of sevoflurane anesthesia. NMDAR, the core target receptor of sevoflurane, rather than GABAAR, mediates the sevoflurane regulation of DDIT4 expression. We screened the DEGs from the hippocampus of rats, which is closely related to cognitive function [43] and may play an important role in the pathogenesis of POCD [44,45]. Agt encodes angiotensinogen, an angiotensin precursor protein that functions in the renin-angiotensin system (RAS). In addition to the liver, Agt is also expressed in the brain. Increasing evidence has shown that the brain RAS plays a key role in Alzheimer's disease, stroke, alcoholism, and depression [46]. Angiotensin regulates iron homeostasis in dopaminergic neurons and microglia through type 1 receptors, thus affecting neurodegenerative diseases such as Parkinson's disease [47]. The interruption of angiotensinogen synthesis in astrocytes in the rat brain affects the function of the locus coeruleus, which may be responsible for cognitive, behavioural, and sleep disorders [48]. In this study, we found that Agt participates in both the positive and negative regulation of neuronal apoptosis. This evidence suggests that the overexpression of Agt in the hippocampus of aged rats after sevoflurane anesthesia may lead to dysfunction of the brain RAS system by affecting neuronal apoptosis. Cdkn1a encodes cyclin-dependent kinase inhibitor 1A, which is mainly involved in cell cycle regulation. Several studies have shown that cell cycle-related molecules and pathways play a variety of important roles in influencing neuronal function. In some brain diseases, it is thought that cell cycle arrest may increase the susceptibility to cell death [49]. The failure of cell cycle regulation leads to neuronal dysfunction and cell death, which may be the underlying cause of several neurodegenerative diseases and the ultimate common pathway of other neurodegenerative diseases [50,51]. Our study confirmed that the Cdkn1a gene is enriched in the biological process of positive regulation of cell death, and the overexpression of Cdkn1a after sevoflurane treatment may disrupt the normal cell cycle and accelerate neuronal death in the hippocampus. Similarly, the small GTPase RHOB, encoded by Rhob, is an important regulator of cytoskeletal organization and of vesicle and membrane receptor transport. Researchers have found that RHOB is highly expressed in the hippocampus and may be essential for synaptic plasticity in the hippocampus [52]. Moreover, Rhob plays a key role in the apoptotic response, and its deletion affects the apoptotic response of tumor cells to DNA damage [53]. Therefore, both Cdkn1a and Rhob may be part of the pathological basis of sevoflurane neurotoxicity. In this study, we found that Ddit4 is the only key gene enriched in both neuronal death and unidirectional regulation of apoptosis. Ddit4, also known as REDD1 and RTP801, encodes a protein that is regulated by development and DNA damage and participates in a variety of pathological processes. Suppression of DDIT4 expression decreases apoptosis in many kinds of cells [54][55][56]. Overexpression of DDIT4 promoted SUNE1 cell proliferation but inhibited apoptosis [57]. Here, we showed that sevoflurane upregulates DDIT4 expression, which suggests that neuronal apoptosis is induced by sevoflurane neurotoxicity.
The apoptosis-related neuronal death process regulated by sevoflurane and leading to cognitive impairment has been recognized. Inhalation of 2% sevoflurane for 5 hours can activate the NF-κB signaling pathway and promote neuronal apoptosis and the production of inflammatory factors, thus affecting learning and memory abilities [58]. Activation of the PI3K/Akt signaling pathway reduces hippocampal neuronal apoptosis and exerts a protective effect against sevoflurane-induced brain injury in aged rats [59]. We also confirmed that 3% sevoflurane treatment increased the mRNA levels of Agt, Cdkn1a, Ddit4, Pdgfra, Rapgef3, and Rhob in the mouse hippocampus. The expression of Ddit4 in the hippocampal CA1 region was significantly altered after chronic cerebral hypoperfusion, indicating that it may play an important role in neuronal injury [60]. Inhibition of DDIT4 could reverse metformin-induced cell cycle arrest and significantly protect against the deleterious effects of the drug on cellular transformation [61]. Inhibition of DDIT4 expression also exerted a neuroprotective effect after ischemia-reperfusion injury [62]. These results suggest that DDIT4 may be a key target for intervention in cell apoptosis induced by sevoflurane. General anesthetics play an anesthetic role mainly by inhibiting the target receptor NMDAR and activating GABAAR to regulate nerve signal transduction and can further induce a wide range of physiological effects through NMDAR and GABAAR to regulate downstream molecular signaling pathways [31,38,63,64]. Inhibition of NMDAR by MK-801 leads to apoptosis of neurons [65,66]. MK-801 also inhibits proliferation and increases apoptosis in hippocampal neural stem cells [67]. We found that the upregulation of DDIT4 expression in the hippocampus by sevoflurane can be inhibited through the supplementation of NMDA in the hippocampus. The injection of MK-801 into the hippocampus of mice also significantly promoted the expression of DDIT4. However, GABAAR activation did not significantly affect the regulatory effect of sevoflurane on DDIT4 expression. This finding indicates that sevoflurane regulates the expression of DDIT4 through NMDAR rather than GABAAR. However, a limitation of our analysis was that we screened 4 hub genes while we only explored the mechanism of DDIT4 elevation after sevoflurane inhalation. Mechanisms of the change in the other three genes need to be explored in future experiments. Here we used the NMDAR antagonist MK-801 to determine that NMDAR mediates the sevoflurane regulation; gain- and loss-of-function studies of NMDAR subunits by RNAi will be performed in the future. In addition, we will further perform the overexpression or knockdown of DDIT4 in the hippocampus to investigate whether Ddit4 is involved in sevoflurane-induced neuron death. Our study comprehensively analyzed sevoflurane-regulated DEGs to indicate that Ddit4 may be a potential target of sevoflurane-induced neuronal apoptosis and determined that the NMDAR/DDIT4 pathway may be a potential target of sevoflurane neurotoxicity, which provides new possibilities for the prevention and treatment of sevoflurane neurotoxicity. Microarray data analysis GSE139220 expression profiles were retrieved from the NCBI-GEO website (https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE139220) [68].
The whole transcriptomic data of hippocampal tissue from 3 rats that received 100% oxygen at an identical flow rate for 4 h in an identical chamber and 3 rats that received 2.5% sevoflurane in 100% oxygen for 4 hours in an anesthetizing chamber were included. The raw data were normalized with the rma function using the oligo package on the R version 4.2.2 platform [69]. The expression data were annotated with the SwissProt database. If the target gene was annotated with two or more probes, the mean value was calculated. Then, the limma package for the R environment was used to detect the differentially expressed genes (DEGs) in hippocampal tissue between the control group rats and the sevoflurane-treated rats [70]; a conceptual sketch of this step is given at the end of this article. DEGs were identified based on a P value less than 0.05. DEG functional enrichment analysis Gene enrichment analysis of the DEGs was performed on the web-based portal Metascape (http://metascape.org/) [71] using the Gene Ontology biological processes and the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways [34]. The enrichment terms were visualized using the ggplot2 package in R. Protein-protein interaction network construction For all DEGs, a protein-protein interaction (PPI) network was constructed using the STRING database (https://cn.string-db.org/) [72]. Then, the network was visualized in Cytoscape software version 3.9.1, which can be freely downloaded from https://cytoscape.org/ and can be used to detect hub genes with the cytoHubba app [73,74]. Hub gene selection To explore the hub genes, we used two screening methods: one counted the biological processes in which a gene was involved, and the other screened hub genes with 12 algorithms using cytoHubba in Cytoscape; the genes that were identified by both methods were considered to play a critical role in sevoflurane neurotoxicity. The hub genes enriched in the neuron death process were considered to be involved in neurotoxicity. In vivo validation We performed in vivo tests to validate the results from the microarray data. Quantitative real-time PCR (qPCR) Total RNA from the hippocampus was isolated by using RNAiso Plus (TaKaRa, China). cDNA synthesis from mRNA was performed by using the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa). Then, the cDNA was used for qPCR detection by using Fast qPCR Mix (TaKaRa). Primers for the qPCR analysis of mRNA are shown as follows: Statistical analysis Statistical analysis was performed on the R version 4.2.2 platform. The quantitative data are presented as the mean ± SD. The microarray data and in vivo PCR validations are displayed with boxplots. An unpaired two-tailed Student's t-test was used to determine significant differences between the two groups. A P value less than 0.05 was considered significant. Data availability statement GSE139220 is available from the NCBI-GEO database at https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE139220. The datasets analyzed during this study are available from the corresponding author upon reasonable request. ACKNOWLEDGMENTS We are very grateful to the providers who submitted the transcriptomic data to the public databases. CONFLICTS OF INTEREST The authors declare no conflicts of interest related to this study.
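The following Python snippet is the conceptual sketch of the DEG-selection step referred to above. The published analysis used the limma package in R, so this is only an illustrative analogue based on per-gene t-tests; the expression matrix and sample column names are placeholders, not data from this study.

```python
# Conceptual analogue of the DEG-selection step (the original analysis used
# rma normalization with the oligo package and the limma package in R).
# expr is assumed to be a (genes x samples) pandas DataFrame of log2 intensities;
# control_cols and sevo_cols name the control and sevoflurane-treated samples.
import numpy as np
import pandas as pd
from scipy import stats

def select_degs(expr: pd.DataFrame, control_cols, sevo_cols, p_cutoff=0.05):
    """Return per-gene log2 fold change and t-test P value, filtered at p_cutoff."""
    ctrl = expr[control_cols].to_numpy()
    sevo = expr[sevo_cols].to_numpy()
    t, p = stats.ttest_ind(sevo, ctrl, axis=1)      # per-gene two-sample t-test
    log2fc = sevo.mean(axis=1) - ctrl.mean(axis=1)   # difference of log2 means
    result = pd.DataFrame({"log2FC": log2fc, "P": p}, index=expr.index)
    return result[result["P"] < p_cutoff].sort_values("P")

# Example usage (placeholder column names):
# degs = select_degs(expr, ["ctrl_1", "ctrl_2", "ctrl_3"], ["sevo_1", "sevo_2", "sevo_3"])
# up, down = degs[degs.log2FC > 0], degs[degs.log2FC < 0]
```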
4,179
2023-06-21T00:00:00.000
[ "Biology" ]
Pathogen killing pathogen: antimicrobial substance from Acinetobacter active against foodborne pathogens Introduction: Antimicrobial substances (AMS) produced by bacteria may reduce or prevent the growth of pathogenic and spoilage microorganisms in food. In this study, 16 isolates of the Acinetobacter baumannii/calcoaceticus (ABC) complex, previously obtained from reconstituted infant milk formula (IMF) samples and the preparation and distribution utensils from the nursery of a public hospital, were used to screen for AMS production. Methodology: Antimicrobial substance production and spectrum of activity assays were performed by agar-spot assay. Optimization of growth conditions for AMS production was also evaluated. Results: Three (18.7%) isolates, namely JE3, JE4, and JE6, produced AMS against the principal indicator strain Salmonella enterica subsp. enterica serotype Typhi ATCC 19214. JE6 was also able to inhibit strains of Klebsiella pneumoniae, Proteus vulgaris, and Bacillus cereus, a Gram-positive bacterium. Remarkably, JE6 was able to inhibit all the tested resistant and multidrug-resistant (MDR) strains of the ABC complex and Shigella dysenteriae associated with IMF and utensils, indicating a potentially valuable application. The AMS produced by JE6 does not appear to be affected by proteolytic enzymes, and the producer strain showed specific immunity to its own AMS. Conclusion: This study highlights an AMS produced by Acinetobacter with applications against MDR spoilage and foodborne pathogens, some of them agents of infectious disease, which, to our knowledge, has not been previously described. Introduction Most commercially available preservatives and antibiotics are produced by chemical synthesis, and their long-term consumption can cause harm to consumer health or lead to microbiota reduction in the gut. Therefore, natural foods without chemical additives have become increasingly popular owing to their health benefits [1]. The use of antimicrobial substances (AMS) with antagonistic properties has become the prime candidate in food safety and preservation research. In foods and beverages, the addition of antimicrobial compounds to processed products has become a potent weapon in food preservation. Bacteriocins form a small subgroup within AMS and have potential uses in food preservation. Bacteriocins are proteins or peptides produced by bacteria that have antimicrobial properties [2][3][4]. Bacteriocins differ from most therapeutic antibiotics because they have a biologically active protein component (being rapidly digested by the proteases of the human digestive system), are ribosomally synthesized, and have a narrower activity spectrum [5,6]. Biopreservation of foods using bacteriocins can be achieved by either the addition of bacteriocinogenic cultures or the direct addition of the purified substances. Bacteriocin-producing probiotic strains can establish a microbiota balance in the digestive tract, reducing gastrointestinal diseases. Alternatively, purified bacteriocins can be added directly to foods as a natural preservative [1]; this is the most plausible option when potentially pathogenic bacteria are the producers. Acinetobacter spp. have been studied for several years to determine possible clinical, industrial, and environmental applications (Table 1); however, according to Amorim and Nascimento [7], few studies have reported the association of Acinetobacter spp.
with food. Some studies cite their presence as bacteria that, along with others of different genera, contribute to the taste, odor, and texture of foods (especially dairy products), owing to their proteolytic and lipolytic activities [8,9]. Conversely, other researchers describe Acinetobacter as potential pathogens, but do not emphasize their role in food. However, studies such as those by Gurung and coworkers [10] and Dijkshoorn [11] report the isolation of Acinetobacter strains from dairy products and claim that these bacteria may be opportunistic pathogens associated with food. Acinetobacter strains have recently been associated with infant milk formula (IMF) and the utensils used in its preparation and distribution [12]. Surprisingly, the Acinetobacter baumannii/calcoaceticus (ABC) complex was the most frequently isolated group of bacteria (37.8%), followed by Enterobacter cloacae (26.7%) and other members of the Enterobacteriaceae family, which were expected to be the most commonly isolated microorganisms. This prevalence of Acinetobacter suggested that these isolates could be producers of AMS capable of inhibiting the growth of other common foodborne bacteria. Therefore, the present study aims to detect AMS production by ABC isolates from IMF and the utensils used, and to determine the optimal growth conditions for the production of AMS, which may have potential application against foodborne pathogens and/or food spoilage microorganisms. Bacterial strains and growth conditions Sixteen isolates of the Acinetobacter baumannii/calcoaceticus (ABC) complex (obtained in a previous study), from reconstituted IMF samples and sanitized utensils used in preparation/distribution, from the nursery of a public hospital in Rio de Janeiro, Brazil [12], were used as potential AMS producer strains in this study (Table 2). Antimicrobial substance production and spectrum of activity assays The agar-spot assay was performed as described by Giambiagi-deMarval et al. [13] with minor modifications. Each potential AMS-producing ABC strain was grown in 5 mL of Casoy broth for 18 hours at 37°C. Five microliters of culture (approximately 5.0 × 10⁶ cells) were spotted onto Casoy agar plates. After 18 hours at 37°C, the bacteria were killed by exposure to chloroform vapor and the plates were sprayed with the indicator strain culture Salmonella enterica serotype Typhi ATCC 19214 (0.3 mL of a previously grown culture in 3 mL of Casoy soft agar). This strain was chosen according to Damaceno and coworkers [14]. Plates were further incubated at 37°C for 18 hours and the diameters (in mm) of the inhibition zones were measured. To determine the spectrum of activity, Gram-negative and Gram-positive strains (Table 2) were used as indicators. The sixteen ABC isolates used as potential AMS producers were also used as indicators. Determination of proteinaceous nature The effects of the proteolytic enzymes pronase (Sigma-Aldrich, São Paulo, Brazil), proteinase K (Sigma-Aldrich, São Paulo, Brazil), and trypsin (Sigma-Aldrich, São Paulo, Brazil) on AMS activity were determined in accordance with Giambiagi-deMarval and coworkers [13], with minor modifications. The enzymes (1 mg/mL) were prepared in 0.05 M Tris (pH 8.0) with 0.01 M CaCl2, and 50 µL were applied around the producer spots after chloroform treatment. Plates were incubated at 37°C for 4 hours and then sprayed with the indicator strain.
Table 1 (fragment). Current usage and possible environmental and industrial applications of Acinetobacter spp. and their products:
- Bioremediation of industrial pollutants: bioremediation of effluents contaminated with heavy metals [32,36,37].
- Stabilization of oil-water emulsions, biosorption, and bioemulsans: for use in paper-making, incorporation in shampoos and detergents, emulsification of oil waste pollutants, and in food industry products [32,38-41].
- Other potential applications: production of carnitine, immune adjuvants, and glutaminase-asparaginase (for clinical use in cancer treatment); plant growth promoters and bio-control agents against phytopathogens; production of cellulolytic enzymes and alkaline lipase [31-33,41,42].
Absence of inhibition zones indicated that the AMS was of proteinaceous nature. To discard the possibility that the inhibition exhibited might have been due to acids produced by the producer strain during its metabolism, the antimicrobial substances were also treated with 0.2 N NaOH. Influence of growth conditions on AMS production To evaluate the effect of the culture medium on AMS production, the producer strain was grown as previously described, and 5 µL of culture were spotted onto the surface of plates containing 25 mL of the following solid media: brain heart infusion (BHI, Isofar, Rio de Janeiro, Brazil), Casoy (Isofar, Rio de Janeiro, Brazil), Müller-Hinton (Himedia, São Paulo, Brazil), and nutrient agar (Himedia, São Paulo, Brazil). The influence of the initial pH, growth temperature, and NaCl on AMS production was determined as described by Fleming and coworkers [15]. Because Casoy agar presented the best results (inhibition halos with the largest diameters), it was used in further experiments. The producer strains were spotted on Casoy agar plates and incubated at 37°C for 18 hours. The pH of the culture media was adjusted to achieve pH values of 5.0, 6.0, 7.0, and 8.0 with 1 N HCl or 1 N NaOH. The effect of the growth temperature was evaluated by incubating Casoy agar plates at room temperature (27°C), 37°C, and 42°C for 18 hours, and the influence of salt was determined by growing the producer strains on Casoy agar plates with different NaCl concentrations (0.5, 1.0, 2.0, and 3.0%). The effects of aeration conditions on bacteriocin activity were evaluated after incubation of the producer strains spotted on Casoy plates at 37°C, both aerobically and anaerobically. Anaerobic conditions were created using the AnaeroGen atmosphere generation system (Oxoid Ltd., Hampshire, England). For these experiments, S. Typhi, Bacillus cereus, and Proteus vulgaris were used as indicator strains. Statistical analysis One-way ANOVA with Tukey's post-hoc test was used to assess differences in the parameters. For all significance tests, p values < 0.05 were considered statistically significant. At least three replicates of each experiment were performed. Results In this study, sixteen isolates of the ABC complex were tested for AMS production. Three (18.7%), isolated from IMF preparation jars and named JE3, JE4, and JE6, were able to produce AMS against the indicator strain Salmonella Typhi, chosen as the main indicator strain due to its sensitivity to some antimicrobial substances produced by Gram-negative bacteria [14,16]. Isolate JE6 was able to inhibit S. Typhi, Klebsiella pneumoniae, Proteus vulgaris, and, remarkably, B. cereus (Figure 1), a Gram-positive bacterium. Therefore, JE6 was chosen for the subsequent experiments.
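As a minimal sketch of the statistical analysis described in the methods above (one-way ANOVA followed by Tukey's post-hoc test on inhibition-zone diameters), the following Python example uses scipy and statsmodels; the group names and measurement values are hypothetical placeholders, not data from this study.

```python
# Minimal sketch of one-way ANOVA with Tukey's post-hoc test on inhibition-zone
# diameters (mm). The numbers below are placeholders, not data from this study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate zone diameters for three growth conditions.
zones = {
    "pH_5": [14.0, 15.0, 14.5],
    "pH_6": [12.0, 12.5, 11.5],
    "pH_7": [11.0, 10.5, 11.5],
}

# One-way ANOVA across the groups.
f_stat, p_value = stats.f_oneway(*zones.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD to find which pairs of conditions differ (alpha = 0.05).
values = np.concatenate(list(zones.values()))
groups = np.repeat(list(zones.keys()), [len(v) for v in zones.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```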
JE6 was also able to inhibit the other 15 ABC isolates tested (12 of which were MDR), indicating the potential application of this AMS against these MDR pathogens. Assays were performed with proteolytic enzymes (protease and trypsin) to verify whether the activity of the JE6 AMS is affected; however, regardless of the indicator used, there was no inhibition of the AMS by the protease and trypsin enzymes.
(Table footnote: The numbers represent the means and standard deviations of the diameters of the inhibition zones (in mm) from three independent experiments; -, absence of an inhibition halo or a halo of less than 2 mm; a, b, c, different letters indicate statistically significant differences among the parameters; *, Salmonella enterica serotype Typhi ATCC 19214.)
With regard to NaOH, the inhibitory activity produced by the JE6 isolate was also unaffected, indicating that the antimicrobial action is not due to acid production by the producing bacteria. To determine suitable conditions for the maximum production of AMS, the producer strain JE6 was inoculated in four different media (Table 3). Müller-Hinton and nutrient agar media did not allow the production of AMS by strain JE6 when using the indicators S. enterica and P. vulgaris. However, there was inhibition of the B. cereus indicator when the JE6 strain was grown in these two culture media. No significant difference (at p < 0.05) in the diameters of the inhibition halos was detected between BHI and Casoy media; however, the JE6 strain visibly produced larger and clearer halos in Casoy medium, which was then used in subsequent experiments. The AMS of JE6 was produced at room temperature (27°C), 37°C, and 42°C; AMS production was slightly higher at 37°C when B. cereus was the indicator. However, using S. enterica and P. vulgaris as indicators, AMS production was only detected at 37°C, providing further evidence that more than one type of BLIS is produced by the JE6 strain. Apparently, the AMS that acts against B. cereus can be produced under less demanding nutrient conditions (Müller-Hinton agar and nutrient agar) and at different temperatures, whereas the AMS that inhibits S. enterica and P. vulgaris requires growth of the JE6 strain in richer media and at specific temperatures. Growth in Casoy medium with an initial pH ranging from 5.0 to 8.0 did not prevent the production of AMS by the JE6 isolate, although production was significantly higher at pH 5.0 for B. cereus and S. enterica, and at pH 6.0 for P. vulgaris. Varying the NaCl concentration (up to 3%) in Casoy agar had little effect on JE6 AMS production; the only significant difference observed was in relation to the inhibition of B. cereus, which was slightly higher without the addition of NaCl. Production of the JE6 AMS was also evaluated after aerobic and anaerobic growth of the producer strain. Inhibition zones for the three indicator strains were significantly reduced under anaerobic growth, suggesting that aerobic conditions are needed for production of the JE6 AMS. Thus, in general, the optimal conditions for the production of AMS by JE6 may be represented by growth in Casoy agar at 37°C, with an initial pH of 5.0, in the absence of NaCl, and under aerobic conditions.
Discussion AMS-producing microorganisms have a competitive advantage in a particular ecological niche. Consequently, the biotechnology industry is beginning to show an interest in the potential application of these microorganisms, and prospecting for such strains has begun. However, Gram-negative enteropathogenic bacteria such as Escherichia, Salmonella, Enterobacter, and Klebsiella are rarely inhibited by AMS from Gram-positive bacteria, such as bacteriocins [16,17]. In contrast, Gram-negative bacteriocins, including microcin C7, colicins E1 and Ib, and the bacteriocin produced by E. coli strain Nissle 1917, have already shown in vitro and in vivo inhibition of several Gram-negative pathogens [17][18][19]. In our previous study, four representatives of the ABC group were able to inhibit the indicator strains E. coli ATCC 25922 and S. enterica ATCC 19214 [14]. These two species of bacteria are among the major causes of foodborne diseases. As far as we are aware, the study by Damaceno and coworkers was the first to report the production of an AMS by Acinetobacter strains. In general, antimicrobial substances, such as bacteriocin-like inhibitory substances (BLIS) and bacteriocins produced by Gram-negative bacteria, have a narrow antimicrobial activity spectrum, a disadvantage that limits their industrial-scale application [2,20]. However, in this work, isolate JE6 was able to inhibit S. Typhi, Klebsiella pneumoniae, Proteus vulgaris, and, remarkably, B. cereus (Figure 1), a Gram-positive bacterium. Therefore, JE6 was chosen for the subsequent experiments. Gram-negative pathogens pose serious threats to global public health, as treatment options are limited owing to the spread of antibiotic resistance among Gram-negative bacteria [21]. These resistant bacteria are increasingly found in foods of different origins. Recently, resistant and multidrug-resistant (MDR) strains of Shigella sp. were isolated from dairy products and associated food preparation utensils [22,23]. JE6 was able to inhibit all five of the S. dysenteriae strains isolated from lactary utensils, four of which presented a typical MDR profile. Shigella dysenteriae has been associated with shigellosis, an acute enteric infection that poses an important public health problem in developing and underdeveloped countries and may have potentially devastating consequences for children and newborns [22]. According to our results, an AMS such as that produced by JE6 may have potential for application as a potent inhibitor of foodborne pathogens, including those resistant to antibiotics. To our knowledge, this is the first study reporting inhibition of this MDR species by an AMS produced by an ABC isolate. Curiously, JE6 is also an MDR strain, but when it was tested as both an indicator and a producer, no inhibition was observed, suggesting that this strain has a specific immunity mechanism to its own AMS, which is a characteristic of bacteriocins [1]. This suggests that the JE6 AMS is not a typical bacteriocin. Resistance of AMS to proteolytic enzymes is not uncommon. Studies performed with AMS produced by Bacillus species have demonstrated that these are insensitive to enzymes such as proteinase K and trypsin. Resistance to proteolytic enzymes may occur due to the unusual amino acids present in the structure of N-terminally or C-terminally protected bacterial peptides or cyclic peptides [24][25][26].
Production of AMS can be influenced by growth conditions [27,28]; therefore, determining the optimal conditions for maximum AMS production is very important for further studies on purification and cost efficacy. The production of AMS by strain JE6 was detected in all four culture media used in this study when the indicator strain was B. cereus. However, when the indicator strains were S. enterica and P. vulgaris, production of the JE6 AMS was not detected in Müller-Hinton and nutrient agar. Our results suggest that these two indicators are more resistant to the AMS produced in these 2 media or that JE6 produces more than one AMS. Production of one or more AMS by a single strain has already been reported in lactic acid bacteria isolated from malted barley, where Lactobacillus sakei was able to produce two new bacteriocins called sakacin 5X and sakacin 5T. The inhibitory spectrum of each purified bacteriocin was analyzed, and sakacin 5X was shown to inhibit a larger variety of microorganisms responsible for beer degradation than sakacin 5T [29]. The optimal conditions for the production of AMS by JE6 were similar to those found for the production of bacteriocin EC2, produced by E. coli, with inhibitory activity against other strains of E. coli and S. enterica [30]. The optimal conditions for EC2 production in Casoy medium were growth at 37°C, with an initial pH of 6.0, without the addition of NaCl. Studies with antimicrobial substances produced by Klebsiella ozaenae K and Raoultella terrigena L demonstrated that the best conditions for AMS production by these strains were also in Casoy agar, at 37°C, with an initial pH of 6.0, and with NaCl concentrations ranging from 0.5 to 3.0% [15]. Under these conditions, by replacing Casoy agar with Casoy broth, experiments were performed to verify whether the JE6 AMS could be obtained from the supernatant of the producing strain. The initial results were very promising and suggested that antimicrobial activity against the three main indicator strains - S. Typhi, B. cereus, and P. vulgaris - could be detected (data not shown). Subsequent experiments will assess the activity spectrum using preparations of reconstituted IMF as a food matrix artificially contaminated with the inhibited pathogens. Conclusion The present study elucidates a potentially novel and important application for the ABC complex, namely the AMS from isolate JE6. The antimicrobial substance reported in this work exhibited efficacy in controlling potentially pathogenic and food spoilage microorganisms, including those with MDR characteristics, offering an interesting approach for food safety. Additional studies are required to elucidate the nature, mechanisms, and genetics of the inhibition by this substance. Figure 1. Agar-spot assay demonstrating the inhibitory activity of the JE6 AMS, represented by the clear zones of inhibition against the indicator strain Bacillus cereus. Table 1. Current usage and possible environmental and industrial applications of Acinetobacter spp. and their products. Table 2. Bacterial strains used as producers and indicators of antimicrobial substances. ATCC, American Type Culture Collection; LMIFRJ, Collection of the Laboratory of Microbiology of the Instituto Federal do Rio de Janeiro; MDR, multidrug resistant. Table 3. Effect of growth conditions on antimicrobial substance production by Acinetobacter baumannii/calcoaceticus JE6.
4,102
2018-05-31T00:00:00.000
[ "Biology", "Medicine" ]
The rate of electrical energy dissipation (power) and the RC constant unify all electroporation parameters Electroporation parameters can be optimized by coupling RC constant values with the amount of electrical power dissipation in the electroporation medium. Electroporation efficiency increases more steeply with power at low power values. … taking into account cell size. This was indeed shown to be the case many years ago (Lurquin 1997). Yet, current articles reporting electroporation parameters almost invariably provide a "cookbook" approach to setting up these parameters, as if the process were largely of a trial-and-error nature. These reports, while valuable, usually make no attempts to correlate electrical parameters (resistance, voltage, pulse length, capacitance, and electric field strength) with one another (e.g., ref. 3). In fact, we showed earlier that electroporation conditions can be standardized if they are based on the amount of electrical energy dissipated into the electroporation cell and not on the above electrical parameters considered separately (Lurquin 1997, 2002; Chen et al. 1998). The importance of energy dissipation during electroporation was first established theoretically (Lurquin 1997), and soon demonstrated empirically using plant protoplasts (Chen et al. 1998). In addition to energy dissipation, the significance of the rate of energy dissipation was raised but not solved (Lurquin 1997). This issue is addressed in this paper. But first, it is necessary to review briefly the theoretical basis of electropore formation. As mentioned earlier, membrane breakdown in an electric field is dependent on cell size and is governed by the simplified Laplace equation V = 1.5·r·E, where V is the breakdown voltage (very approximately 1 V), r is the cell radius in centimeters, and E is the applied electric field strength in V/cm (Lurquin 1997). In practice, bacterial cells are porated around 12-16 kV/cm, microeukaryotes at about 1 kV/cm, and eukaryotic cells at about 0.3-0.7 kV/cm (Lurquin 1997). Large liposomes (2.5-20 µm in diameter) made of di-palmitoyl-phosphatidyl-choline or L-α-phosphatidyl-choline are efficiently electroporated at 1.5 kV/cm (Lurquin and Athanasiou 2000). Thus, the first rule of electroporation is to achieve membrane breakdown, which is dependent on cell size (Lurquin 1997). Cell survival must be determined empirically in all cases, but is expected to be excellent within the limits given above. For capacitor discharges, pulse length is determined by the RC constant (resistance × capacitance) expressed in seconds. We will see below that the pulse length also plays a role in electroporation efficiency. Electrical energy dissipation e = 0.5·C·V² is expressed in Joules (J) and is thus equal to 0.5 × capacitance (in µFarads) × voltage squared (in Volts). Electroporation efficiency is thus directly proportional to e, whose values typically range from about 80 J for Escherichia coli to about 25 J for yeast, to about 23 J for HeLa cells, to about 35 J for soybean (Glycine max) protoplasts (Lurquin 1997). More generally, for eukaryotic cells, adequate energy values for plant protoplasts are in the 30-50 J range while for mammalian cells the range is 10-30 J (Lurquin 1997). For example, the e value for HUVEC cells is 30.6 J, it is 15.6 J for SK-N-SH and CHO DG44 cells, and 11.25 J for K562 cells (calculated from Jordan et al. 2008).
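A small numerical sketch of the two relations just quoted (the simplified Laplace breakdown condition V = 1.5·r·E and the capacitor-discharge energy e = 0.5·C·V²) is given below; the cell radii, capacitance, and voltage used are illustrative placeholders, not measurements from the cited studies.

```python
# Sketch of the two relations quoted above: the breakdown field from the
# simplified Laplace equation V = 1.5 * r * E, and the dissipated energy
# e = 0.5 * C * V^2 for a capacitor discharge. Input values are placeholders.

def breakdown_field(radius_cm, breakdown_voltage=1.0):
    """Applied field E (V/cm) needed to reach the membrane breakdown voltage
    (~1 V) for a cell of the given radius (cm): E = V / (1.5 * r)."""
    return breakdown_voltage / (1.5 * radius_cm)

def dissipated_energy(capacitance_farads, voltage_volts):
    """Electrical energy e (Joules) dissipated by a capacitor discharge."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# A bacterium with r ~ 0.5 um (5e-5 cm) needs a field of roughly 13 kV/cm,
# in line with the 12-16 kV/cm range quoted for bacterial cells.
print(f"E (bacterium, r = 0.5 um): {breakdown_field(5e-5) / 1000:.1f} kV/cm")

# A mammalian cell with r ~ 10 um (1e-3 cm) needs roughly 0.7 kV/cm.
print(f"E (mammalian cell, r = 10 um): {breakdown_field(1e-3) / 1000:.2f} kV/cm")

# Energy for an illustrative 25 uF discharge at 2,500 V: ~78 J.
print(f"e (25 uF at 2.5 kV): {dissipated_energy(25e-6, 2500):.0f} J")
```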
What, then, is the importance of pulse duration (dictated by the RC constant) in the energy dissipation process, and what is its effect on electroporation efficiency? This issue is solved here by re-analyzing and computing older energy data obtained with plant (Asparagus officinalis) protoplasts as reported in (Chen et al. 1998). These authors noticed an effect of the RC constant on electroporation efficiency, but did not elaborate further. Electrical energy dissipation as a function of time is called electrical power (P = de/dt) and is measured in J/s. Figure 1 shows that electroporation efficiency and pulse length at various power levels are clearly correlated. Each data point corresponds to quadruplicate experiments with the same high statistical significance as in ref. 4 (Chen et al. 1998) (also see the legend of Fig. 1). Thus, for each pulse length, electroporation efficiency increases with power. Two further conclusions can be drawn from these computations: (1) electroporation efficiency increases more rapidly with power at longer pulse times. This is in accordance with the known fact that longer pulse times decrease the membrane breakdown voltage (Lurquin 1997), presumably increasing the number of electropores formed and possibly their stability. The correlation between electroporation efficiency and the RC constant can be quantified by calculating the former's rate of increase (its slope) versus power at each value of RC, thereby estimating (ΔEE/ΔP)_RC (Table 1). It can be seen that the increment in electroporation efficiency per unit power is proportional to the RC constant, being ten times higher at 100 ms than it is at 10 ms; (2) the same electroporation efficiency can be achieved at widely different power values, depending on pulse length. For example, Fig. 1 shows that ca. 40-45% electroporation efficiency is achieved at P = 242 J/s if RC = 100 ms, P = 484 J/s with RC = 50 ms, and P = 1,100 J/s with RC = 22 ms. But only about 24% electroporation efficiency is reached at P = 1,210 J/s with RC = 10 ms. This also means that optimization of electroporation conditions can be done within a much narrower range of P values at long pulse times. Interestingly, pulse length from 100 to 10 ms had no effect on cell viability (Chen et al. 1998). The correlation of electroporation efficiency and power at t = 10 ms has only two data points; it is shown for completeness. It should be noted that, by virtue of the cross-sectional nature of the present re-analysis, the slope at RC = 10 ms retains the high statistical significance of the data points as provided in (Chen et al. 1998). Typical P values for a variety of cell lines are as follows: about 15,000 J/s (at RC = 4.8 ms) for E. coli, 5,000 J/s (at RC = 4.5 ms) for yeast, 760 J/s (at RC = 45 ms) for soybean protoplasts, and 643 J/s (at RC = 35 ms) for HeLa cells (Chen et al. 1998). All data points preserve the original statistical significance of electroporation efficiency based on regression analysis and LSD 0.05 as in (Chen et al. 1998; Lurquin 1997). For large liposomes, efficient poration is seen at P = 1,700 J/s (at RC = 12 ms) (Lurquin and Athanasiou 2000). Since energy factors are the same for capacitor discharges and square pulses (albeit calculated differently) (Lurquin 1997; Lurquin 2002), it is likely that the correlation of power with pulse length leads to similar electroporation efficiency with both techniques. It is suggested that companies that build electroporation units include energy and power values in the settings on their instruments.
This would streamline the search for the best electroporation conditions.
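These relations lend themselves to a short computational check. The following Python sketch is not part of the original paper; all numerical settings in it are hypothetical and serve only to show how the dissipated energy e = 0.5 C V², the nominal power obtained by dividing e by the RC constant, and the breakdown field from V = 1.5 r E fit together.

```python
# Minimal sketch relating capacitor-discharge electroporation parameters:
# energy, nominal power, and the simplified Laplace breakdown condition.
# All numerical values below are illustrative assumptions, not the authors' data.

def dissipated_energy_J(capacitance_F: float, voltage_V: float) -> float:
    """e = 0.5 * C * V^2, in Joules (C in Farads, V in Volts)."""
    return 0.5 * capacitance_F * voltage_V ** 2

def nominal_power_W(energy_J: float, rc_constant_s: float) -> float:
    """Characteristic power, approximating P = de/dt by e / RC."""
    return energy_J / rc_constant_s

def breakdown_field_V_per_cm(breakdown_voltage_V: float, radius_cm: float) -> float:
    """From V = 1.5 * r * E  =>  E = V / (1.5 * r), with r in cm."""
    return breakdown_voltage_V / (1.5 * radius_cm)

if __name__ == "__main__":
    C = 25e-6      # Farads (hypothetical instrument setting)
    V = 1600.0     # Volts
    R = 2000.0     # Ohms, giving RC = 50 ms
    rc = R * C
    e = dissipated_energy_J(C, V)
    p = nominal_power_W(e, rc)
    # A ~20 µm protoplast: radius 10 µm = 1e-3 cm, breakdown voltage ~1 V
    E_needed = breakdown_field_V_per_cm(1.0, 1e-3)
    print(f"e = {e:.1f} J, RC = {rc * 1e3:.0f} ms, P = {p:.0f} J/s")
    print(f"field required for breakdown: about {E_needed:.0f} V/cm")
```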
1,589
2012-12-11T00:00:00.000
[ "Engineering", "Physics" ]
Telemedicine System Based on Medical Consultation Assistance Integration With the aging of the global population, providing effective telemedicine for elderly people, especially those with limited mobility, has become a very important issue. A complete telemedicine system would not only greatly improve medical efficiency but also reduce person-to-person contact and thereby avoid the medical risks caused by severe special infectious pneumonia (COVID-19). This paper focuses on the development of a high-efficiency telemedicine system platform that conforms to international standard data exchange formats. The platform can not only alleviate the shortage of medical staff but also free patients from outpatient time constraints, enable telemedicine at any time, and digitize the rules of the medical process to establish a complete online telemedicine system platform. Introduction With the development of medical technology, the average life expectancy of the global population continues to increase [1]. Improving the efficiency of social medical care has therefore become a topic that cannot be ignored. According to research by the United Nations Department of Economic and Social Affairs [2], the global support ratio for the elderly population will fall to 4 by 2050, indicating that the burden of supporting the elderly population is increasing. Without a comprehensive medical system, an unexpected outbreak could cause a national medical system to break down. With the continuous development of technology, new types of viruses have gradually affected the world; in recent years, COVID-19 has had a particularly heavy impact [3]. It affects not only global economic development but also human lifestyles. The current medical system can no longer cope with the medical demand of countries around the world. A real-time telemedicine system that transmits medical information without delay while patients are being consulted could reduce the volume of in-person medical visits, make treatment more convenient for the elderly with mobility impairments, and, for hospitals, reduce the chance of infection caused by crowd gathering. Even before the emergence of COVID-19, many countries actively promoted telemedicine systems. Portugal proposed the National Strategic Telehealth Plan [4] covering medical infrastructure, collaboration systems, and services to improve the regulatory framework. In addition, Germany proposed the Telemedical Maritime Assistance Service [5] to ensure that, when an emergency medical situation occurs, the medical hotline immediately provides radio medical treatment. After the outbreak of COVID-19 in 2019, the development of telemedicine systems accelerated, and such systems are gradually moving towards precision, immediacy, and complete research and development. According to the definition of the World Health Organization [6], telemedicine refers to medical practice and the transmission of health information using real-time video and data communication technology for diagnosis, consultation, and treatment. In addition, with the development of Industry 4.0, the new concept of smart medical care has been derived [7], and telemedicine is an important pioneer in the development of smart medical systems.
It not only brings revolutionary changes to traditional medical services; doctors can also use the rapidly accumulating data to assist medical decision-making, reduce errors, and avoid medical disputes, creating new opportunities for smart medical care. We designed and implemented a web-based telemedicine system that allows patients to conduct medical consultations remotely through various information devices. In addition to regularly feeding self-measured physiological data back to the system platform, it also allows medical staff to track the patient's physical condition over the long term while meeting non-essential-contact requirements; that is, the system platform lets the two parties exchange opinions. The biggest difference between our system and other systems is that we designed it as a real-time feedback system that conforms to the international standard data exchange format. Under this premise, we analyze the format of the data so that, even when the amount of data is huge, the system can still operate normally without affecting patients' rights or hindering doctors in treating patients. In this paper, we introduce a high-efficiency, real-time telemedicine system platform. The rest of the paper is organized as follows. Section 2 presents the related work. Section 3 describes the platform operation process. We then show the experimental results in Section 4 and conclude the paper briefly in Section 5. Telemedicine Communication Technology With the rise of 5th generation mobile networks [8] and Wi-Fi 6 [9], the real-time transmission of data in telemedicine is accelerated, especially in areas with inconvenient transportation. Telemedicine can not only overcome the spatial inconvenience of seeking medical care but also shorten the time needed to obtain treatment. It uses communication technology to lift the fixed-area restrictions of traditional medical care for patients, thereby improving medical quality and reducing medical costs. Communication technology is therefore one of the core technologies for developing traditional medical care into telemedicine. The increasing penetration of wireless communication networks, improvements in physiological measurement technology, and the rapid development of medical care equipment will drive the rapid growth of the telemedicine market. Health Level Seven International Health Level Seven International is the organization that defines the data exchange format used in telemedicine [10]. It formulates a complete electronic health information framework and related standards; the 7 in the name corresponds to the seventh layer of the Open Systems Interconnection model. This framework is expected to facilitate the exchange, integration, and sharing of patients' medical information and to improve patient privacy and the quality of medical care [11]. Fast Healthcare Interoperability Resources Fast Healthcare Interoperability Resources (FHIR) is an international medical data exchange standard format [12]. It is mainly used to describe the data format and data elements of Electronic Health Records and to provide standard system information through an Application Programming Interface [13]. It strengthens patient data exchange, covering not only medical institutions, medical records, hospital stays, and discharge and referral records, but also data exchange with insurance institutions.
FHIR's biggest aim is to accelerate the effective communication of medical information among medical units and to make medical information widely usable on computers, tablets, and smartphones so that hospitals and patients can quickly and easily access medical service information. FHIR is composed of a series of resource implementations; medical staff copy and fine-tune resources to solve medical practice and management problems. A resource is a set of Excel worksheets that can be organized and used to record data. At present, the FHIR R4 version of the standard [14] defines 145 types of resources. These resources can be divided into five categories: foundation, base, clinical, financial, and specialized. The fields contained in a resource are listed in a hierarchical directory, and the data description of each field is attached next to it. In this way, each hospital can develop a customized FHIR extended data format definition and publish it on the official website to provide everyone with a unified format standard reference source. Even after a patient is registered, the patient's personal information can be included in the Implementation Guide, and the relevant patient code and files can be attached for other hospitals to download and apply. Patient Medical Form We refer to the FHIR standard specification to design the medical form system and then exchange information with other medical units through cross-system interconnection functions. In our system, we design for three FHIR resource types [15]: Patient, Encounter, and CarePlan, as shown in Figure 1. In addition, we use RESTful requests to obtain or access the JSON- or XML-format text files of the hospital's internal medical record database. As shown in Table 1, the various RESTful operation types are exposed in the form of an Application Programming Interface so that users can use them more efficiently. System Interface We use the HTML, CSS, and JavaScript programming languages to design a web interface for user login and functional requirements, as shown in Figure 2, and finally a complete medical interface [16], as shown in Figure 3, so that medical staff can easily view patient material. Medical Information Browsing Interface Smartphones have become popular, and many hospitals have introduced tablets to replace computers for medical care. In response to these mobile communication devices, we propose a mobile communication device system environment for telemedicine. Because mobile communication devices have different screen sizes, we use responsive web design to solve the problem of cross-platform use. As shown in Figure 4, the system automatically detects the size of the user's Internet device and adjusts the graphic content of medical web pages for different screens to give users the best browsing experience. In addition, we integrate Software Development Kit tools with the web system to build a rich application framework on the Android and iOS systems. The application framework provided by the system can be adjusted according to the situation and can provide dedicated resources for different device settings. In turn, the efficiency of medical data transmission can be improved.
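To make the RESTful access pattern just described concrete, the following Python sketch is offered; it is not part of the original paper, and the server URL and patient identifier are hypothetical. It retrieves a Patient resource in JSON from a FHIR R4 endpoint using the standard GET [base]/Patient/[id] read operation.

```python
# Hypothetical sketch of a FHIR RESTful read; BASE_URL and PATIENT_ID are assumptions.
import requests

BASE_URL = "https://fhir.example-hospital.org/r4"   # assumed endpoint, not from the paper
PATIENT_ID = "12345"                                # assumed identifier

def read_patient(patient_id: str) -> dict:
    """GET [base]/Patient/[id] and return the parsed JSON resource."""
    resp = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    patient = read_patient(PATIENT_ID)
    # A FHIR Patient resource carries demographics such as name and birthDate.
    print(patient.get("resourceType"), patient.get("id"), patient.get("birthDate"))
```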
Telemedicine System Implementation We use Visual Studio as the development tool for the telemedicine system platform; it is largely compatible with other telemedicine systems and offers a complete set of cross-platform device development tools. Conversion of Medical Records to Electronic Form The primary task in establishing a telemedicine system platform is to convert paper medical records into electronic storage. We use the medical records of Chiayi Christian Hospital as a reference, as shown in Figure 5, and design a matching electronic data table in the database, as shown in Figure 6. When a patient's file is created for the first time, the corresponding data is stored in the FHIR database at the same time. Patient Consultation Process The proportion of elderly people in the world is gradually increasing, and the proportion of chronic conditions is also increasing year by year. Elderly patients with chronic conditions usually need to return to the hospital for treatment, which is a great physical burden for them. Therefore, through the telemedicine platform we have developed, the patient and the medical staff conduct a medical consultation, as shown in Figure 7; the medical data record is stored automatically, and the collected medical data serve as input for the data analysis of the telemedicine system. In this platform, the system automatically collects various medical data of patients. Through the medical record data collected in the HL7 FHIR format, various patient physiological data and medical history data are stored. Based on data such as blood pressure, weight, and heartbeat, the system can issue a reminder before the physiological data deteriorate, alerting patients in advance to precautions and related measures. In addition, doctors use platform analytics to understand how users use the platform and the characteristics of the user population, which provides directions for improving the system and for understanding the user experience. Telemedicine Login System Design According to the user's authority, users are divided into two identities: medical staff and patient. Figure 8 shows that the user chooses the login identity according to his or her own authority when logging in to the system. Medical Information Query The user can select the medical data of a patient according to the name and time of the visit. It is worth noting that the user is not limited to medical staff or the patient himself; rather, when querying medical data, results are filtered according to the user's login identity: medical staff can query all patient data, whereas a general user can only query his or her own medical information, as shown in Figure 9.
However, a small number of patients feel that the system interface is not friendly enough. This is where we need to improve in the future.
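As an illustration of the role-based medical information query described above, the following sketch (hypothetical code, not taken from the paper; record fields and role names are assumptions) filters records according to the login identity: medical staff see every patient, while a general user sees only his or her own data.

```python
# Hypothetical sketch of the role-based query restriction; the data layout is assumed.
from typing import Iterable, Optional

def query_records(records: Iterable[dict], role: str, user_id: str,
                  patient_name: Optional[str] = None,
                  visit_date: Optional[str] = None) -> list:
    """Medical staff may query all patients; a general user only their own records."""
    results = []
    for rec in records:
        if role != "medical_staff" and rec["patient_id"] != user_id:
            continue  # general users are restricted to their own data
        if patient_name and rec["patient_name"] != patient_name:
            continue
        if visit_date and rec["visit_date"] != visit_date:
            continue
        results.append(rec)
    return results

records = [
    {"patient_id": "P001", "patient_name": "Chen", "visit_date": "2021-05-01", "bp": "128/82"},
    {"patient_id": "P002", "patient_name": "Lin",  "visit_date": "2021-05-03", "bp": "141/90"},
]
print(query_records(records, role="patient", user_id="P002"))        # only P002's record
print(query_records(records, role="medical_staff", user_id="D100"))  # all records
```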
2,951.2
2021-10-08T00:00:00.000
[ "Medicine", "Engineering" ]
Perioperative and Oncological Outcomes of Percutaneous Radiofrequency Ablation versus Partial Nephrectomy for cT1a Renal Cancers: A Retrospective Study on Groups with Similar Clinical Characteristics Simple Summary Ultrasonography-guided percutaneous radiofrequency ablation is an attractive alternative treatment method for patients with small renal tumours. It has been compared to the current standard—partial nephrectomy—in several studies. Most of them, however, are limited by a selection bias. In this study, we evaluated the results of ultrasonography-guided percutaneous radiofrequency ablation and partial nephrectomy in patients who, due to tumour- and patient-related factors, were most suitable for both treatment methods. The oncological results of both methods were comparable, and all recurrent or residual tumours were successfully re-treated. Percutaneous ablation was associated with a significantly shorter procedure length and hospital stay, lower blood loss, and less analgesics used. Abstract Over the recent years, progress in imaging techniques has led to an increased detection of kidney tumours, including small renal masses. While surgery is still the standard of care, there is a growing interest in minimally invasive methods. Ultrasound (US)-guided percutaneous ablation is particularly attractive because it is a safe and relatively simple procedure. In this study, we investigated the results of US-guided percutaneous radiofrequency ablation (RFA) and partial nephrectomy (PN) in the treatment of cT1a renal cancers. Between August 2016 and February 2022, 271 patients with renal tumours underwent percutaneous RFA as initial treatment in our institution. In the same period, 396 patients with renal tumours underwent surgical tumour excision. For the purpose of this study, only patients with confirmed renal cancer with matched age and tumour characteristics (size, location) were selected for both groups. Thus, groups of 44 PN patients and 41 RFA patients were formed with the same qualification criteria for both. Parameters such as procedure length, blood loss, hospital stay, analgesics used, and pre- and post-procedural serum creatinine were compared between these groups. Patients were followed up with contrast-enhanced CT. There was no significant difference in age, tumour size, tumour location, or creatinine levels between these groups. All procedures were generally well tolerated. During a median follow-up of 28 months, two cases of recurrence/residual disease were found in each group. The overall survival was 100% in both groups, and all patients were disease-free at the end of observation. Percutaneous RFA was associated with a significantly shorter procedure length and hospital stay, lower blood loss, and less analgesics used than PN. In the selected group of renal cancer patients, US-guided percutaneous RFA was associated with a shorter hospital stay, less analgesics used, and a shorter procedure length than PN, without differences in the oncological results or kidney function.
Introduction Over the recent years, the progress in imaging techniques and the wide introduction of ultrasonography (US) and computed tomography (CT) imaging have led to an increased detection of renal tumours, including small renal masses (SRM, kidney tumours smaller than 4 cm) [1,2]. While surgery is still the standard of care, the efficacy of thermal ablation (TA) in the treatment of SRMs has already been demonstrated [3-6]. Ablative techniques, such as radiofrequency ablation (RFA), were initially suggested in older patients with significant comorbidities as an alternative to partial nephrectomy (PN) due to their lower burden compared with surgery [2,5-8]. However, recent data suggest the efficacy of TA in all patients with tumours < 3 cm [4,5,9,10]. Recently, some studies have compared the clinical outcomes of TA versus PN [11-16]. Nevertheless, a major limitation of these studies is a selection bias, with different qualification criteria resulting in significantly different patients with different tumours being treated with TA and PN. As far as we know, there is only a single prospective, randomised study comparing percutaneous TA to PN in the treatment of SRMs, published in 2023 [17]. In this study, we investigated the results of RFA and PN in the treatment of T1a renal cancers with exactly the same qualification criteria for both groups. Materials and Methods This retrospective observational study was approved by the institutional review board. We analysed patients with renal tumours who underwent percutaneous RFA or surgical tumour excision as an initial treatment in our institution between August 2016 and February 2022. Recurrent lesions were not included in this study. Medical records were retrospectively reviewed for patients' demographics, clinical data, and procedural details. Tumour anatomic features were evaluated in pre-procedural contrast-enhanced imaging (CT or MR). For each tumour, the size was measured and the location in the kidney was described as upper, central, or lower pole; lateral; medial-anterior; or medial-posterior. The tumour was described as exophytic (at least one-third exophytic) or non-exophytic (less than one-third). The inclusion criteria were as follows: age under 67, functional contralateral kidney, and no significant comorbidities that would be a contraindication to PN. The pre-procedural imaging was reviewed to include only patients with exophytic lesions and one of the following: not larger than 30 mm and located in the central part of the kidney; not larger than 30 mm and located in the lower pole of the kidney; not larger than 25 mm. Such tumour characteristics (size, location) were previously found to be associated with the highest success rate of US-guided percutaneous TA [5]. Patients without histopathologically confirmed renal cell carcinoma (RCC) or with missing biopsy data, with an inconclusive biopsy, lost to follow-up (no follow-up contrast-enhanced imaging available), or with missing diagnostic imaging data were excluded from the study. Patients were divided into two groups: those who had undergone percutaneous RFA and those who had undergone PN (laparoscopic or open). Parameters such as procedure length, blood loss, hospital stay, analgesics used, and pre- and post-procedural serum creatinine were compared between these groups. In this particular group of patients, the qualification for either RFA or PN was based mainly on their preference, without any specific criteria.
All patients had contrast-enhanced imaging, either CT or MR, before the procedure. All tumours undergoing RFA were biopsied, either during the ablation or before, as a separate procedure. All the pathological samples were evaluated by the same pathologists in a high-volume institution. All ablations were performed percutaneously under US guidance, under analgosedation and local anaesthesia, by MJ and JS, both experienced in TA and US-guided procedures. For each procedure, the Covidien Cool-tip™ RF Ablation System (Medtronic, Warszawa, Poland) was used. Ablation was performed with one probe, and the length of ablation and any probe repositioning were decided according to the size, shape, and characteristics of the lesion. All surgical excisions were performed either laparoscopically or as an open procedure, without clamping of renal vessels, by different surgeons in a high-volume institution. Patients were followed up with diagnostic imaging; contrast-enhanced CT or MR was performed at 3 months and 12 months after the procedure, then yearly (RFA), or at 6 months and 12 months after the procedure, then yearly (PN). Follow-up scans were evaluated to assess the outcome. The follow-up time was calculated from the procedure to the last diagnostic imaging available. Treatment failure (local relapse) was defined as follows: in the case of percutaneous TA, the presence of enhancing tissue at the margins of the ablation volume in the first follow-up scan (residual disease) or within the ablation zone after at least one contrast-enhanced follow-up study demonstrating absence of viable tissue within the target tumour and surrounding ablation margin (local progression); in the case of PN, the presence of abnormal, enhancing tissue next to the resection zone (local recurrence). We performed statistical analysis using Statistica 8.0 (StatSoft, Kraków, Poland) software. Differences between variables were assessed using the Mann-Whitney U-test. The chi-square test was employed to evaluate differences in qualitative variables; p < 0.05 was considered statistically significant. Results During the studied period, 271 patients with renal tumours were treated with percutaneous RFA as the initial treatment in our institution. A total of 70 patients were excluded from this study. From the remaining group of 201 patients, we selected only those meeting the inclusion criteria. Thus, a group of 41 patients with 'ideal tumours' who had undergone percutaneous RFA was formed. During the same period, 396 patients with renal tumours were treated with PN (open or laparoscopic) as the initial treatment in our institution. Three hundred twenty-three patients older than 67, without a functional contralateral kidney, with benign lesions, with lesions larger than 3 cm, or lost to follow-up (no follow-up contrast-enhanced imaging available) were excluded from this study. We selected a group of 44 patients with kidney tumours most suitable for percutaneous TA who had undergone PN (25 laparoscopic and 19 open). Thus, groups of 44 PN patients and 41 RFA patients were selected.
There was no significant difference in age, tumour size, tumour location, or creatinine level between these groups. The characteristics of the studied groups are presented in Table 1. The procedures were generally well tolerated. We registered four Clavien-Dindo grade I complications: three cases of fever (one in RFA, one in laparoscopic PN, one in open PN) and one case of wound haematoma (open PN). We also registered three Clavien-Dindo grade II complications: one grade II bowel injury treated conservatively in the RFA group and two cases of blood loss requiring transfusion in the PN group (one in laparoscopic, one in open). The mean follow-up time was 29 months, and the median was 28 months (range 3-71 months). During follow-up, one case of residual disease (enhancement in CT/MR 3 months after the procedure) and one case of local progression later than 3 months despite initially complete ablation (no enhancement in CT/MR 3 months after the procedure) were found in the RFA group. These cases were treated with repeated thermal ablation sessions (one additional procedure). Two cases of local recurrence were found in the PN group; one was treated with percutaneous RFA and one with surgical resection. The overall survival was 100% in both groups, and all patients were disease-free at the end of observation. There was a significant difference between PN and RFA in procedure length, hospital stay, blood loss, and analgesics used (Table 2). Blood loss during percutaneous ablation is negligible. There was no ischaemia in either group. Interestingly, when laparoscopic and open PN were compared, there were significant differences in blood loss and hospital stay but not in the analgesics used. Patients undergoing laparoscopic PN were younger, but the difference in tumour size was not significant (Table 3). There were two conversions from laparoscopic to open: one because of bleeding and one because of other technical difficulties. Even if compared to only the laparoscopic PN group, the percutaneous RFA group still has significantly lower blood loss, a shorter procedure length, a shorter hospital stay, and less analgesics used (Table 4). Discussion PN remains the gold standard in the management of SRMs, with established short- and long-term outcomes [13,18-20]. It is a well-known, thoroughly studied and described treatment method with known and established indications. It can be performed either as an open or an endoscopic (laparoscopic or robot-assisted) procedure. US- or CT-guided percutaneous TA has been developed over the last decades and is emerging as an alternative treatment with curative intent for SRMs [2,3,5,13,18,21,22]. It can be performed under analgosedation or general anaesthesia and is relatively well tolerated, which makes it a viable option for patients with comorbidities or who are unfit for surgery [2,4-6]. Indeed, the European Association of Urology recommends thermal ablation as an alternative for frail and/or comorbid patients with small renal masses [18].
Over the last few years, several studies comparing thermal ablation to PN have been published [13,14]. Although there is general agreement that thermal ablation is a safe and effective treatment, the details are not so consistent. While some authors reported equivalent outcomes [22-24], a similar overall survival and cancer-specific survival [25], and no statistically significant difference in local recurrence [12,24], others reported higher recurrence rates [11,26,27] and/or worse overall survival in patients treated with thermal ablation compared with surgery [13]. A meta-analysis from 2016 reported inferior local oncologic control in patients treated with TA compared with patients treated with PN; however, with retreatment, RFA was no longer inferior [28]. These somewhat conflicting results may be partially due to the fact that most of these studies are of limited quality and restricted by a significant selection bias [11,13,14,29]. The inclusion criteria differ, and there is a tendency to perform PN in younger, fit patients and thermal ablation in older and comorbid ones. Some studies also include SRMs with benign histopathology. As far as we know, there is no proper, randomised prospective trial comparing percutaneous thermal ablation to PN. There is a propensity score-matched analysis comparing PN to percutaneous ablation [11]. While that study is free of selection bias, it still includes patients with benign histopathology and non-diagnostic biopsy. There is also another possible aspect of bias: although the influence of patient-related factors (age, comorbidities, etc.) is generally recognised, the significance of tumour-related factors is much less discussed. Not all SRMs are equal, and there is a degree of 'treatment difficulty' related to tumour size and its location in the kidney [5]. This is rather well described for PN, with scores such as RENAL and PADUA, but much less so for percutaneous ablation [5,10,30]. The scores developed for PN are not necessarily suitable for ablation and vice versa, with tumour complexity influencing the risk of recurrence and complications differently for PN and ablation. Therefore, even if all patient-related factors are matched, there is still a possibility of bias related to tumour complexity. It is difficult to estimate this issue properly, as many studies do not report tumour complexity or use only scores developed for PN. To reduce the possible bias associated with patient selection, we decided to compare the treatment results in patients who were good candidates for both surgery and US-guided percutaneous ablation. To ensure uniform selection criteria, we excluded all patients who could be poor candidates for surgery due to age and/or comorbidities. Furthermore, the pre-procedural diagnostic imaging was reviewed by MJ, JS (experienced in TA and PN), and PW (experienced in PN and laparoscopic PN), and all patients who could be suboptimal for RFA due to tumour size and/or location were excluded. This allowed us to have no significant differences in the characteristics of patients and tumours between groups and to ensure that the only significant selection criterion was the patients' and surgeons' preference. Finally, we excluded all lesions with benign or unconfirmed histopathology.
Another issue that should be considered is the definition and criteria of cancer relapse. The diagnosis of local recurrence, both after PN and after TA, is based mainly on diagnostic imaging. We do not routinely biopsy local recurrences due to the high rate of non-diagnostic biopsies and the difficulty of interpreting what a negative result means in this situation. Therefore, we had a histopathological confirmation of local recurrence only in patients treated with surgical resection. There are, however, some significant differences between PN and TA. PN in most cases (all in this study) is a macroscopically complete resection, and the local recurrence is the presence of abnormal, enhancing tissue next to the resection zone. In the case of percutaneous TA, the definition of recurrence is more complex [27]. As the tumour tissue is not resected and it may sometimes be difficult to determine the macroscopic completeness of the treatment, residual disease (the presence of enhancing tissue in the ablation volume in the first follow-up scan) is more common than it is after PN. It is also possible to find enhancing tissue within the ablation zone after a previous contrast-enhanced study demonstrating no enhancement (local progression). Furthermore, there is currently no consensus on surveillance intervals after TA [27]. Several of our findings are remarkable. First, our study differs from most studies comparing the ablation of renal masses to PN in the aspect of patient selection. Because of the above-mentioned selection criteria, patients in our study are significantly younger than patients included in most other studies, at least in the ablation arms [12-14,16]. We have also excluded patients with significant comorbidities, who are often included in the ablation groups in other studies [12-14,16,29]. Second, due to these selection criteria, there was no other-cause mortality; all patients remained alive through the observation period. The oncological results were also good, with no systemic progression and four cases of local relapse successfully treated with repeated RFA or surgical resection. Such good oncological results, however, could be expected for relatively small, exophytic, and easy-to-manage tumours. In contrast to many other studies, there was no significant difference in local relapses between the RFA (4.8%) and PN (4.5%) groups. This can be explained by the fact that in this study we included only the tumours most suitable for percutaneous RFA, as described previously [5]. Third, while both procedures were well tolerated, with only 3 cases of grade II complications (3.5%) and no Clavien-Dindo grade ≥ III complications, there were significant differences in procedure length, hospital stay, blood loss, and analgesics used in favour of RFA (Table 2). Interestingly, we observed no deterioration of kidney function, even in the PN group. Tumour complexity and warm ischaemia were found to be associated with renal function loss [31]. Perhaps our results may be explained by the fact that all lesions were low-complexity ones, and all procedures were performed without ischaemia.
We also observed that laparoscopic PNs were associated with significantly lower blood loss and a shorter hospital stay than open procedures, which is consistent with other studies [4,32]. Thus, it may be more appropriate to compare the burden of percutaneous ablation to laparoscopic or robotic PN. However, even if compared to only the laparoscopic PN group, the percutaneous RFA group still has significantly lower blood loss, a shorter hospital stay, and less analgesics used. This is consistent with the results of recent studies comparing robot-assisted PN (RAPN) to percutaneous ablation in challenging situations (solitary kidney, endophytic tumour) [33,34]. In these studies, the authors found percutaneous ablation to be associated with a shorter stay, a shorter procedure length, and fewer complications than RAPN. There were, however, significant differences in tumour characteristics between RAPN and ablation patients. Both percutaneous RFA and PN have strengths and weaknesses. Percutaneous ablation is associated with significantly lower morbidity, which may be particularly attractive for older patients with comorbidities who have an increased risk of serious surgery-related complications. It also offers potentially better kidney function preservation, though possibly not significantly so in the case of the smallest, 'easiest' tumours. PN, on the other hand, has an oncological effect less sensitive to factors associated with tumour size and location, with a wider range of SRMs that can be treated without an increased risk of relapse. With the advancement of surgical techniques, the indications for RAPN are expanding to even more challenging tumours. Recently, RAPN was reported to offer encouraging results in the case of challenging perihilar masses [35]. While this study does not prove that percutaneous ablation is equivalent to PN in all SRMs, it shows that percutaneous RFA may offer reduced morbidity without sacrificing oncological results in a selected subgroup of SRMs. This study was focused on the least complex tumours, in which we could expect good oncological results both for TA and for PN, without a significant difference in this aspect. The reduced morbidity and shorter hospital stay were, however, in favour of TA. This may lead to the conclusion that there are kidney tumours for which percutaneous TA could be the preferred treatment. Further effort should be made to clearly identify the tumour-related qualification criteria for percutaneous ablation so that this method could also be offered to some of the younger and healthy patients without an increased risk of disease recurrence.
In order to provide better quality of evidence concerning TA as a treatment for SRMs, some aspects should be considered when designing new studies. First, to avoid patient-related bias, studies should preferably be prospective randomised trials with clear inclusion and exclusion criteria, or at least have uniform qualification criteria for all groups. Second, if a study is not a randomised trial, the groups should be matched for patient-related factors, such as age and comorbidities. Third, more attention should be given to the influence of tumour-related factors. To avoid bias in non-randomised trials, the groups should be matched for tumour size and location. All trials, even the randomised ones, should report tumour size and location in more detail, as certain tumour-related factors may favour one of the treatment methods. Finally, groups should be matched for, or not include, benign lesions; preferably, all lesions should be biopsied before inclusion, and only patients with a diagnostic biopsy result should be included. We have made an effort to overcome several limitations associated with many studies comparing renal tumour ablation to PN: we used uniform inclusion criteria, we only included confirmed RCC, and all patients were treated in the same period in one institution. Despite these strengths, this study is not free from limitations. First, it is still a retrospective study, and despite the uniform inclusion criteria, differences between groups and some form of selection bias may still exist. Second, the PN group was not uniform; it included both open and laparoscopic procedures, with some significant differences between them. Third, the median follow-up time was 28 months, which may be insufficient to fully assess the long-term outcomes. On the other hand, most residual disease/recurrences are detected within the first 2 years [36]. Finally, many patients were excluded because of missing data or being lost to follow-up. Conclusions In conclusion, in a selected subgroup of RCC patients, percutaneous RFA was associated with a significantly shorter procedure length and hospital stay, lower blood loss, and less analgesics used than PN, with no difference in oncological results or kidney function preservation observed. It must be stressed that these results should not be extrapolated to all SRMs, as the tumours included in this study do not reflect the entirety of SRMs. There is a need for a prospective randomised trial to solidify these findings and better define the role of percutaneous TA in the treatment of SRMs. Table 1. Characteristics of patients and tumours. * Two conversions from laparoscopic to open were included in the open PN group. PN, partial nephrectomy.
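The group comparisons reported above (Mann-Whitney U-test for continuous variables, chi-square test for qualitative ones, significance at p < 0.05) can be illustrated with a short sketch. The code below is not part of the study, and the numbers in it are invented; it only shows the form of the tests in Python with SciPy rather than Statistica.

```python
# Hypothetical illustration of the statistical comparisons described in the Methods;
# all values below are invented and do not reproduce the study's data.
from scipy.stats import mannwhitneyu, chi2_contingency

# e.g., hospital stay in days for two groups (made-up values)
rfa_stay = [1, 2, 1, 1, 2, 1, 3, 1]
pn_stay = [4, 5, 6, 4, 7, 5, 4, 6]
u_stat, p_val = mannwhitneyu(rfa_stay, pn_stay, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_val:.4f}")

# e.g., complications yes/no per group as a 2x2 contingency table (made-up counts)
table = [[2, 39],   # RFA: complications, no complications
         [5, 39]]   # PN:  complications, no complications
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

alpha = 0.05
print("significant" if p_val < alpha else "not significant")
```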
4,946.2
2024-04-01T00:00:00.000
[ "Medicine", "Engineering" ]
Supersymmetry, T-duality and Heterotic $\alpha'$-corrections Higher-derivative interactions and transformation rules of the fields in the effective field theories of the massless string states are strongly constrained by space-time symmetries and dualities. Here we use an exact formulation of ten dimensional ${\cal N}=1$ supergravity coupled to Yang-Mills with manifest T-duality symmetry to construct the first order $\alpha'$-corrections of the heterotic string effective action. The theory contains a supersymmetric and T-duality covariant generalization of the Green-Schwarz mechanism that determines the modifications to the leading order supersymmetry transformation rules of the fields. We compute the resulting field-dependent deformations of the coefficients in the supersymmetry algebra and construct the invariant action, with up to and including four-derivative terms of all the massless bosonic and fermionic fields of the heterotic string spectrum. Introduction At low energy, or small curvature, heterotic string theory reduces to ten dimensional N = 1 supergravity coupled to super Yang-Mills [1]. Successive terms in the α ′ -expansion may be expressed as higher-derivative interactions that are strongly constrained by the symmetries of string theory. There are several reasons to study the higher-order terms in the effective field theories of the massless string modes. They are needed to evaluate the stringy effects on solutions to the supergravity equations of motion [2,3], they play a central role in the tests of duality conjectures [4], in the microstate counting of black hole entropy [5] and in moduli stabilization [6].The swampland program [7] has revealed that the effective field theories of low energy physics and cosmology are limited by their couplings to quantum gravity [8], and together with the string lamppost principle [9], reinforces the interest in the restrictions imposed by string theory on the higher-derivative corrections to General Relativity. The first few orders of the heterotic string α ′ -expansion are known explicitly. The interactions of the bosonic fields up to O(α ′3 ) were originally determined from the computation of scattering amplitudes of the massless string states at tree [1,10] and one loop [11] levels in the string coupling and from conformal anomaly cancellations [12]. The contributions of the fermionic fields have been computed using supersymmetry and superspace methods [13]- [20]. Supersymmetry completely fixes the leading order terms [13] and it often provides an elegant underlying explanation of the higher-derivative corrections. But it holds iteratively in powers of α ′ and the transformation rules of the fields demand order by order modifications that are further restricted by other string symmetries and dualities. In particular, the effective field theories for the massless string fields exhibit a global O(n, n; R) symmetry when the fields are independent of n spatial coordinates. This continuous T-duality symmetry holds to all orders in α ′ [21] (see also [22]- [25]) and it has been explicitly displayed recently for the quadratic and some of the quartic interactions of the bosonic fields in [26,27]. This feature motivated the construction of field theories with T-duality covariant structures, such as double field theory (DFT) [28,29] and generalized geometry [30], which provide reformulations of the string (super)gravities in which the global duality invariance is made manifest. 
In the duality covariant frameworks, the standard local symmetries are generalized to larger groups: diffeomorphism invariance is extended to also include the gauge transformations of the two-form and the tangent space is enhanced with an extended Lorentz symmetry. Interestingly, the duality covariant gauge transformations completely determine the lowest order field interactions in string (super)gravities even before dimensional reduction (for reviews see [31] and references therein). Moreover, extensions of the duality group [32,33] as well as enhancings of the gauge structure of DFT [34,35] allowed to reproduce the four-derivative interactions of the massless bosonic heterotic string fields. Supersymmetry can be naturally incorporated in the duality covariant formulations [36]- [41]. A supersymmetric and manifestly O(10, 10 + n g ) covariant DFT reformulation of ten dimensional N = 1 supergravity coupled to n g abelian vector multiplets was introduced in [37,38]. Although it is formally constructed on a 20 + n g dimensional space-time, the apparent inconsistency of supergravity beyond eleven dimensions is avoided through a strong constraint that admits solutions removing the field dependence With the motivation to further understand the structure of the heterotic string α ′expansion, in this paper we perform a perturbative expansion of the formal exact construction of [41] and obtain the first order corrections to N = 1 supersymmetric DFT. Further parameterizing the duality multiplets in terms of supergravity and super Yang-Mills mul-tiplets, we show that the supersymmetric duality covariant generalized Green-Schwarz transformation completely fixes the first order deformations of the transformation rules of the fields. We also construct the invariant action with up to and including four-derivative terms of all the massless bosonic and fermionic fields of the heterotic string and up to bilinear terms in fermions. The paper is organized as follows. In section 2 we review the basic features of the N = 1 supersymmetric DFT introduced in [38] and we trivially extend it to incorporate non-abelian gauge vectors. In section 3, after briefly recalling the relevant aspects of the duality covariant mechanism proposed in [41], we extract the first order corrections to the transformation rules of the O(10, 10+n g ) generalized fields from those of the O(10, 10+k) multiplets, and obtain the manifestly duality covariant and gauge invariant N = 1 supersymmetric DFT action to O(α ′ ). We then parameterize the O(10, 10 + n g ) fields in terms of supergravity and super Yang-Mills multiplets in section 4 and find the relations between the duality and the local gauge covariant structures. We discuss the deformations induced from the generalized Green-Schwarz transformation on the transformation rules of the supergravity fields and compare with previous results in the literature. Finally, in section 5 we present the first order α ′ -corrections of the heterotic string effective action including up to bilinear terms in fermions. Conclusions are the subject of section 6. The conventions used throughout the paper and some useful gamma function identities are included in appendix A. Details of the proof of closure of the symmetry algebra on the duality multiplets are contained in appendix B. Finally, in appendix C we compute the deformed supersymmetry algebra on the supergravity multiplets and prove the supersymmetric invariance of the first order corrections in the heterotic string effective action. 
The leading order theory In this section we review the basic features of the DFT reformulation of N = 1 supergravity coupled to n g vector multiplets in ten dimensions that was introduced in [38], mainly to establish the notation. The frame formalism used in [42] is most useful to achieve a manifestly O(10, 10 + n g ) covariant rewriting of heterotic supergravity truncated to the Cartan subalgebra of SO (32) or E 8 × E 8 for n g = 16. Employing gauged DFT [43], we further include the full set of non-abelian gauge fields and recover the leading order terms of heterotic supergravity. The group invariant symmetric and invertible O(10, 10 + n g ) metric is H MN is also an element of O(10, 10 + n g ), constrained as It is convenient to define the projectors satisfying the usual properties and related with the generalized vielbein in the following way We use the convention that P AB , P AB and their inverse lower and raise projected indices. The generalized Lie derivative acts as where the partial derivatives ∂ M belong to the fundamental representation of O(10, 10+n g ) and the so-called fluxes or gaugings f MNP are a set of constants [42] verifying linear and quadratic constraints Consistency of the construction requires constraints which restrict the coordinate dependence of fields and gauge parameters. The strong constraint where · · · refers to products of fields, will be assumed throughout. This constraint locally removes the field dependence on 10 + n g coordinates, so that fermions can be effectively defined in a 10-dimensional tangent space 1 . The local O(9, 1) L × O(1, 9 + n g ) R double Lorentz symmetry is parameterized by an infinitesimal parameter Γ AB satisfying 9) in order to preserve the invariance of η AB and H AB . The two projections of a generic where the Γ A B and Γ A B components generate the O(9, 1) L and O(1, 9 + n g ) R transformations leaving P AB and P AB invariant, respectively, and δ Λ H AB = 0 implies Γ AB = 0. The fields transform under double Lorentz variations as where the O(9, 1) L gamma matrices can be chosen to be conventional gamma matrices in ten dimensions, satisfying Some useful identities for the product of gamma matrices are listed in Appendix A.1. The Lorentz and space-time covariant derivatives act on generic vectors as Only the totally antisymmetric and trace parts of ω ABC can be determined in terms of E M A and d, namely the latter arising from partial integration with the dilaton density for arbitrary V and V A . Only the combinations with the same projection on the last two indices are non-vanishing. 
The covariant derivatives of the (adjoint) gravitino and dilatino are The supersymmetry transformation rules are parameterized by an infinitesimal Majorana fermion ǫ transforming as a spinor of O (1,9) Putting all together, the generalized fields obey the transformation rules In Appendix B.1 we review the algebra of these transformations, and show that it closes up to terms with two fermions, with the following parameters where the C f -bracket is defined as The transformation rules (2.19) leave the following action invariant, up to bilinear terms in fermions, where L B is the generalized Ricci scalar, which can be written as up to terms that vanish under the strong constraint, and the fermionic Lagrangian is Using the Bianchi identity it is useful to rewrite The supersymmetry variation of the bosonic piece of the action gives where we have used and The supersymmetry transformation rules define the following Lichnerowicz principle and then, the supersymmetric variation of the fermionic piece of the action Parameterization and choice of section To make contact with ten dimensional N = 1 supergravity coupled to n g vector multiplets, we split the G and H indices as M = ( µ , µ , i) and A = (A, A), respectively with A = a, A = (a, i), µ , µ , a, a = 0, . . . , 9, i, i = 1, . . . , n g , and parameterize the generalized fields as follows: -Generalized frame where e a and e a are two vielbein for the same ten dimensional metric. To guarantee that the number of DFT and supergravity degrees of freedom agree, we gauge fix e µ a = e µ a , e µa = e µa , and identify e µ a , e µa with the supergravity vielbein e µ a , e µa , a, b = 0, . . . , 9, respectively, i.e. g µν = e µ a g ab e ν b , with g ab the Minkowski metric. C µν = b µν + 1 2 A i µ A νi , with A i µ being the gauge connection. For consistency, we also need to impose with e i i the (inverse) vielbein for the Killing metric of the SO(32) or E 8 × E 8 gauge group, η ij = e i i η ij e j j , as required for modular invariance of the heterotic string. 35) χ i being the standard gaugino field. The non-abelian gauge sector is trivially incorporated through the gaugings that deform the generalized Lie derivative (2.6a) as The γ-functions γ a = γ a δ a a verify the Clifford algebra {γ a , γ b } = 2g ab . The gauge fixing e µ a = e µ a implies δe µ a = δe µ a , and (2.11) lead to where Λ ab denotes the generator of O (1,9) transformations that parameterizes Γ ab . The additional gauge fixings δE i i = 0 and δE µ i = 0 lead respectively to where we have parameterized ξ M = (ξ µ , ξ µ , ξ i ) and Λ ai , Λ ij are introduced for convenience, as we will discuss in section 4. Solving the strong constraint in the supergravity frame, parameterizing (2.18) and using the non-vanishing determined components of the generalized spin connection listed in Appendix A.2, we recover the leading order supersymmetry transformation rules of the coupled ten dimensional N = 1 supergravity and Yang-Mills fields, namely where w (+) µab = w µab + 1 2 H µab is the spin connection with torsion given by the field strength of the b-field µνρ the Yang-Mills Chern-Simons form The Lorentz transformations of the supergravity and super Yang-Mills multiplets obtained from (2.11) are 42) and the gauge transformations derived from (2.6) are where the second term in the gauge transformation of the b-field is the gauge sector of the Green-Schwarz transformation required for anomaly cancellation. 
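For orientation before the first order α′-corrections are constructed, it may help to recall the schematic form of the Green-Schwarz-completed three-form familiar from heterotic supergravity. The coefficients c_g and c_L below are placeholders, since the normalization is convention dependent; the values actually used in this paper are fixed by its own equations (2.40)-(2.41) and by the first order terms derived in the following sections:

$$ \hat H \;=\; d b \;+\; c_g\,\omega_{3\mathrm{YM}}(A) \;+\; c_L\,\alpha'\,\omega_{3L}\big(w^{(\mp)}\big), \qquad d\hat H \;=\; c_g\,\mathrm{tr}\,F\wedge F \;+\; c_L\,\alpha'\,\mathrm{tr}\,R\wedge R, $$

where ω_{3YM} and ω_{3L} denote the Yang-Mills and Lorentz Chern-Simons three-forms, and the choice of torsionful spin connection w^{(±)} entering the Lorentz term is likewise convention dependent.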
Parameterizing the DFT action (2.22), using the fluxes listed in Appendix A.2, we get (2.44) We use standard notation defined in Appendix A. Both the action and the transformation rules match the corresponding ones in [16], with the field redefinitions specified in 3 The first order α ′ -corrections In this section we construct the first order corrections to N = 1 supersymmetric DFT, performing a perturbative expansion of the exact formalism developed in [41]. The duality structure of the first order α ′ -corrections to heterotic supergravity was originally considered in [32,33]. Exploiting a symmetry between the gauge and torsionful spin connections that exists in ten dimensional heterotic supergravity [15,16], the duality group was extended to O(10, 10+n g +n l ), with n g (n l ) the dimension of the heterotic gauge (Lorentz) group. In this construction, the gaugings in the generalized Lie derivative (2.6a) preserve a residual O(10, 10) global symmetry. Including one-form fields in the GL (10) parameterization of the generalized vielbein, the formalism reproduces the first order corrections to the interactions of the bosonic fields in the heterotic effective field theory. The lack of manifest duality covariance and the difficulties to incorporate higher orders of the α ′ -expansion in these formulations motivated the search of alternative frameworks. A deformation of the gauge structure of DFT was proposed in [34], introducing a gen- The generalized Bergshoeff-de Roo identification The theory has a global O(10, 10+k) symmetry, where k is the dimension of the O(1, 9+k) group. This differs from the construction of the previous section, where the duality group is O(10, 10 + n g ) and n g denotes the dimension of the SO(32) or E 8 × E 8 heterotic gauge group. In the construction of [41] instead the gauge sector encodes the higher derivatives. The vielbein E M A is an element of O(10, 10 + k), parameterized in terms of O(10, 10) The gauge freedom is used to set E αā to zero and the bijective map e α β relates the Cartan-Killing metrics of O(k), κ αβ and κ αβ , as The parameterization (3.1) preserves the constraint where η MN and η AB are the invariant metrics of O(10, 10 + k) and O(9, 1) The generalized O(10, 10 + k) gravitino splits as Ψ A = (0, Ψā, Ψ α ), where Ψ a is a generalized O(10, 10) gravitino and Ψ α is a gaugino of the O(1, 9 + k) R gauge group, that will later be identified with a function of the O(10, 10) generalized fields. The gamma matrices are γ A = (γ a , 0, 0), with γ a the O(9, 1) L gamma matrices verifying (2.12). The transformation rules of the O(10, 10 + k) fields have the same functional form as Equivalent constraints to (2.7) and (2.8) must be imposed, i.e. The gauge fixing δE α a = 0 implies and δe α α = 0 determines (3.14) The gauge generators (t α ) A B implement the map Parameterizing δE M a one gets In order to eliminate these extra degrees of freedom, it is convenient to define which allows to establish the generalized Bergshoeff-de Roo identification between the generalized gauge and spin connections 20) and to determine Ψ CD as the generalized gravitino curvature since both sides of (3.20) and (3.21) transform in the same way. The main steps of the demonstration can be found in [41]. We now proceed to extract the first order α ′ -corrections to the transformation rules of the O(10, 10 + n g ) generalized fields. 
Induced transformation rules on O(10, 10) multiplets The covariant transformation rules (3.7) induce higher derivative deformations on the transformations (2.19) of the O(10, 10 + n g ) fields. In this section, we work out the first order modifications, expanding the coefficients ( To simplify the presentation, we turn off the gauge sector of the O(10, 10 + n g ) multiplets, i.e. we take n g = 0, and obtain the induced transformation rules of the O(10, 10) fields. The gauge sector will be trivially included in the next subsection. It is convenient to first express the components of the generalized O(10, 10 + k) fluxes we get the first order deformations where we used (1−X R )g 2 , the superscripts (2) and (3) refer to the number of derivatives, and we defined The transformation rules (3.7) take the following form: − Vielbein The identification E M a = E M a implies δE M a = δE M a , and from (3.7a) we get Using the gauge fixing (3.13) and the following relation which holds for any function f , one gets The second term in the r.h.s. of this expression allows to identify T ab with the Γ ab component of the Lorentz parameter (2.9). The third term contains the deformation which is the leading order of the O(10, 10) covariant generalization of the Green-Schwarz transformation [34]. And finally, the last term in (3.28) contains the first order correction to the supersymmetry transformation rule (2.18a), namely Following a similar reasoning, one can see that the other projection transforms as where we have identified where we have kept the leading order terms in the O(10, 10 + k) gaugino identification (3.21). Note that there are two corrections to the Lorentz transformations. The first term in the right hand side can be interpreted as a generalized Green-Schwarz transformation and the second one depends on the gravitino curvature, that we now define. − Gravitino curvature To leading order in (3.21), the induced O(10, 10) gravitino curvature is, From (3.7c), we find that it obeys the transformation rule The first order corrections to the transformation rules of the generalized dilatino (2.19e) that are obtained from (3.7d) are Note that the transformation rules of the dilaton (2.19c) as well as the diffeomorphisms on all the fields are not corrected. Including the heterotic gauge sector It is now trivial to include the gauge sector of the O(10, 10 + n g ) formulation. We simply In Appendix B.2 we show that the algebra of these transformation rules closes, up to terms with two fermions, with the following field-dependent parameters First order corrections to N = 1 supersymmetric DFT The invariant action under the transformation rules (3.7) is clearly of the same functional form as (2.22) but it depends on the O(10, 10 + k) multiplets, namely Hence it contains higher derivatives of the O(10, 10 + n g ) multiplets. The transformation rules (3.7) define the following Lichnerowicz principle, and then the O(10, 10 + k) generalized Ricci scalar determines the corrections to the generalized Dirac operator. In terms of the O(10, 10 + n g ) generalized fluxes, the O(10, 10 + k) generalized Ricci scalar is, up to first order, where R was defined in (2.25). Replacing the expressions (3.23) with the overlined indices extended to include the gauge sector (i.e. c, d, ... → C, D, ...), R (1) may be written as Note that it depends on the generalized gravitino through F * aBC . 
Similarly, we may define where L F was introduced in (2.23) and the first order corrections are given by F , up to bilinear terms in fermions. We have explicitly verified that the action To find the relations between both sets of fields, it is convenient to first work out the parameterizations of the generalized fluxes and curvatures and their transformation rules. From the first order terms in the action (3.47), we see that only the leading order expressions are necessary. We denote the parameterization of F * aCD aŝ where the hats distinguish objects that contain fermions and the collective indices of the tangent space C = (c, i) include the gauge indices. In terms of supergravity and super Yang-Mills fields, the components arê with w The generalized gravitino curvature Ψ AB is parameterized as is the parameterization of the generalized flux component F ABi . The transformation rule (4.9) contains, other than the standard Lorentz transformations, the supersymmetry variation of the torsionful spin connection [15,16] δ ǫŵ the supersymmetry and gauge transformations of the Yang-Mills field strength, and δ ξFµci = f ijk ξ jF Similarly, from the transformation rule of the generalized gravitino curvature cdAB γ cd ǫ (4.12) we obtain where we have definedR which has componentŝ In particular, (4.13) contains the supersymmetry transformation rule of the supergravity gravitino curvature µνab is the two-form curvature computed from the torsionful spin connectionŵ [15,16]. Now we turn to the parameterization of the elementary fields. We start from the deformed transformation rules of the components E M a and E M a given in (3.37a) and (3.37b). Of course, different definitions lead to supergravity multiplets that obey different transformation rules. An interesting one is the following whereT ab =F aciFb ci andT =F i acF ac i . The quadratic terms in spin and gauge connections are known to be necessary in order to remove the non-standard Lorentz transformations of the supergravity vielbein e µ a and dilaton φ fields [34,35]. Together with the gauge covariantT terms, these parameterizations determine e µ a and φ fields that obey the leading order supersymmetry and Lorentz transformation rules (2.39a) and (2.42). To get this result, the gauge fixings e µ a = e µ a ≡ e µ a , δE i i = 0 and δE µ i = 0 are used to absorb several terms into the Lorentz parameters. As a consequence, the following parameterization is needed for the duality covariant gravitino Interestingly, these parameterizations induce a deformation of the gravitino supersymmetry variation (2.39c) that can be absorbed into the torsion of the spin connection through the following modification of the two-form curvature The Yang-Mills Chern-Simons form C (g) µνρ was defined in (2.41), the coefficient The gaugino bilinear terms in (4.22) may be absorbed into the first order deformation of the Yang-Mills Chern-Simons form replacing A i µ → µ jk , but this is not convenient for reasons that will become clear shortly. The modified three-form H µνρ (4.22) may be rewritten as the compact expression Likewise, a parameterization of the dilatino analogous to (4.21) also induces the replacement of the lowest order H µνρ by H µνρ in the supersymmetry transformation rule (2.39c), so that the combination ρ = 2 λ + γ a ψ a and its supersymmetry transformation rule are not deformed, i.e. ρ = ρ and δ ǫ ρ = δ (0) ǫ ρ. 
From δE µ i and δΨ i in (3.37), one can see that the gauge and gaugino transformation rules are not deformed and hence it is not necessary to redefine these fields. Finally, from the transformation rules of the components E µā or E µa , and using the parameterizations defined above, we get This compact expression contains information about the gauge, Lorentz and supersymmetry transformations of the b−field, which we now analyze separately. Expanding the first term in (4.27) one gets (4.28) The first term in the r.h.s. is the Lorentz sector of the Green-Schwarz transformation [44], which requires the Lorentz Chern-Simons form (4.24) in H µνρ . It cannot be eliminated through redefinitions of the b-field [34]. The bilinear fermionic terms inŵ in order to compare with standard results. With this redefinition (4.22) becomes (4.31) Finally the third term in (4.28) together with the second term in (4.27) contain the first order deformations of the supersymmetry transformation of b µν , i.e. (4.32) The first term in (4.32) was originally introduced in [14] to restore manifest Lorentz covariance to the supersymmetry variation of the b-field curvature. It was later reobtained in [15] as a consequence of the assumption that the Yang-Mills and torsionful spin connections should appear symmetrically in ten dimensional N = 1 supergravity coupled to super Yang-Mills. The second term in (4.32) reflects the ̺ deformation of the Killing metric (4.23) in the zeroth order supersymmetry transformation (2.39b). These two terms are the obvious analogs of the Lorentz and Yang-Mills Green-Schwarz transformations as already noticed in [14]. Here, these transformations follow directly from the manifestly duality covariant formulation of the theory. Interestingly, the second term in (4.27) can be obtained from the leading order transformation of the 2-form in (2.39b) with the identifications A i µ ↔Ω µ CD , χ i ↔ Ψ CD , i.e. a generalization of the symmetry A i µ ↔ŵ (−)cd µ , χ i ↔ ψ cd that was used in [15,16] Before turning to the construction of the invariant action under the modified transformations, we analyze the deformations that were proposed in references [15,16]. In particular, we wonder if there is a parameterization of the duality covariant vielbein in terms of a gauge covariant one that transforms as proposed in [15] or [16], i.e. respectively, written here in our conventions. Note that we only examine the gauge dependent terms since the gravitational sectors coincide up to the order we are considering. Specifically, we search for a quantity E µ a such that e µ a = e µ a + E µ a and δ (1) e µ a = δ (0) E µ a . (4.38) The most general expressions that can reproduce either one of (4.37) can be schematically written as or as where the terms between parenthesis refer to all possible contractions of indices and numbers of γ-matrices, numerated by the supraindex m, while ψ . and ψ .. denote the gravitino and gravitino curvature, respectively. We found that neither of (4.37) can be reproduced. It is a straightforward though heavy exercise to parameterize the action (3.47). Interestingly, using Bianchi identities and integrations by parts, the action of the theory to O(α ′ ) may be written in the following compact form: where we have taken b = α ′ and defined / H = γ µνρ H µνρ and As expected, the bosonic fields reproduce the expression obtained from the scattering amplitudes of the heterotic string massless fields up to first order in α ′ and field redefinitions [10], i.e. 
The supersymmetric invariance of the action (5.1) is shown in appendix C. It simply results from the observation that both the action and the transformation rules of the fields have the same structure as the corresponding ones in [16], albeit with collective indices, except for the terms contained in the parameter Λ ci = 1 2 √ 2ǭ γ c χ i , which cancel in the variation of the action. Outlook and final remarks In this paper we have obtained the first order corrections to N = 1 supersymmetric DFT performing a perturbative expansion of the exact supersymmetric and duality covariant framework introduced in [41]. The action has the same functional form as the leading order one constructed in [38], but it is expressed in terms of O(10, 10 + k) multiplets, where k is the dimension of the O(1, 9 + k) group. Decomposing the O(10, 10 + k) duality group in terms of O(10, 10 + n g ) multiplets, the theory contains higher derivative terms to all orders. We kept all the terms with up to and including four derivatives of the fields and bilinears in fermions. The transformation rules of the O(10, 10 + k) multiplets obey a closed algebra and induce higher-derivative deformations on those of the O(10, 10 + n g ) fields. In particular, they produce a supersymmetric generalization of the duality covariant Green-Schwarz transformation that was found in [34]. We showed that the algebra of deformations closes up to first order and constructed the invariant action with up to and including four derivatives of the O(10, 10 + n g ) multiplets and bilinears in fermions. To make contact with the heterotic string low energy effective field theory, we parameterized the duality covariant multiplets in terms of supergravity and super Yang-Mills fields. The inclusion of higher-derivative terms requires unconventional non-covariant field redefinitions in the parameterizations of the duality covariant structures. The definitions that reproduce the four-derivative interactions of the bosonic fields of the heterotic string effective action were found in [34,35]. Here, we worked with a set of fields related to the latter through gauge covariant redefinitions. Except for the two-form, the fields defined in section 4 obey the leading order transformation rules with a modification of the two-form curvature in the supersymmetry variations. The Lorentz and non-abelian gauge transformations of the two-form are deformed by the standard Green-Schwarz mechanism, as expected, and its supersymmetry transformations are deformed by Green-Schwarz-like terms plus some extra Yang-Mills dependent higher-derivative terms. The deformed transformations obey a closed algebra, which guarantees the existence of an invariant action. We constructed such action in section 5, by parameterizing the manifestly duality covariant expression (3.47) in terms of the fields that obey supersymmetry transformation rules with the minimal set of deformations. As expected, the interactions of the bosonic fields agree with the results obtained from the heterotic string scattering amplitudes [10], up to terms proportional to the leading order equations of motion. To our knowledge, the three-derivative low energy interactions involving fermions have not been constructed directly from string theory. The action and transformation rules that we have obtained follow from an exact supersymmetric and duality covariant formalism. Hence the theory avoids an iterative procedure which only guarantees consistency up to a given order. 
Moreover, supersymmetry is manifest to all orders and dimensional reductions will preserve the expected T-duality symmetry of the theory. Supersymmetric extensions of the Yang-Mills and Lorentz Chern-Simons forms have been constructed using the Noether method. In particular, a supersymmetric L(R) + L(R 2 ) invariant was obtained in [15,16] from the leading order action (2.44), using the symmetry between the gauge and torsionful spin connections. The three-derivative terms that are independent of the Yang-Mills fields in the action (5.1) coincide with those results. But not surprisingly, the Yang-Mills field-dependent terms disagree with the corresponding expressions of the L(RF 2 ) + L(F 4 ) invariants proposed in those references, since the deformations of the transformation rules differ by Yang-Mills field-dependent terms. The supersymmetric and T-duality covariant generalized Green-Schwarz transformation strongly restricts the modifications to the leading order supersymmetry transformation rules, and in particular, it does not allow the proposals of [15,16]. As argued in section 4 this does not imply that the latter are in conflict with string theory. In order to establish if they are compatible with the required T-duality symmetry, the corresponding invariant action should be dimensionally reduced. The effort employed in the construction of the higher-derivative fermionic sector of the heterotic string effective field theory is justified for various reasons. First of all, an intriguing consequence of the duality covariant formalism is the natural appearance of the generalized collective tangent space indices C, D, ..., which allows to include the higher-derivative Yang-Mills field-dependent terms into gravitational structures such aŝ R µνCD ,Ω µCD or Ψ CD . In particular, it leads to relatively mild modifications of the leading order supersymmetry transformation rules of the fields, which permits the use of the leading order Killing spinor equations to obtain classical solutions containing higher-derivative corrections [2]. These features not only simplify the construction of new supersymmetric solutions but also allow to easily extend the known solutions for the gravitational sector to the Yang-Mills sector. The fermionic contributions to the action are also relevant for applications to fourdimensional physics. Both the superpotential and D-terms can be more easily computed from the fermionic couplings [6] and the higher derivative corrections to these terms as well as to the Yukawa couplings could also have interesting consequences for string phenomenology and moduli fixing. An obvious natural extension of our work would be to determine further interactions beyond the first order. The quartic interactions of the Yang-Mills fields that we have reproduced are mirrored by corresponding quartic Riemann curvature terms [10]. Consequently, we expect that the higher orders of perturbation will reproduce these higherderivative corrections. It would be interesting to see if the generalized structures with capital indices persist to higher orders. If they do, the formulation would contain information about higher than four-point functions in the string scattering amplitudes. Nevertheless, there is another quartic Riemann curvature structure that has no analog in the Yang-Mills sector [10]. At tree level, these terms are proportional to the transcendental coefficient ζ(3). 
The analysis of the higher-derivative terms is technically more challenging but also more interesting, since further duality covariant structures, or even a more drastic change of scheme, seem to be necessary as advocated in [45]. Performing a generalized Scherk-Schwarz compactification of the sub-leading corrections to N = 1 supersymmetric DFT would be another promising line of research, as this would produce higher-derivative corrections to lower dimensional gauged supergravities [46,35]. We hope to return to these and related questions in the future. A Conventions and definitions In this appendix we introduce the conventions and definitions used throughout the paper. Space-time and tangent space Lorentz indices are denoted µ, ν, . . . and a, b, . . . , respectively. The covariant derivative acting on a gauge tensor G µ ci and on a spinor ǫ is, respectively, and the torsionful spin connection The commutator of covariant derivatives acting on gauge tensors and spinors is where the Riemann tensor is defined as and the Yang-Mills field strength is The Ricci tensor and scalar are (A.11) A.2 Leading order components of the generalized fluxes Using the parameterizations introduced in section 2 and solving the strong constraint in the supergravity frame, the non-vanishing determined components of the generalized spin connection are, to leading order, and f ijk are the structure constants of the SO(32) or E 8 × E 8 gauge groups. A.3 The leading order action and equations of motion Here we rewrite the zeroth order action (2.44) in terms of the dilatino λ of the supergravity multiplet and compare with the corresponding expression in [16]. We also list the leading order equations of motion of all the massless fields derived from it. Rewriting the generalized dilatino ρ = 2λ + γ µ ψ µ in terms of λ and ψ and integrating by parts, the action (2.44) takes the form It matches the corresponding expression in [16] with the following field redefinitions: The leading order equations of motion of all the massless fields, written in terms of ρ, B Algebra of transformations of O(10, 10 + n g ) fields In this appendix we show that the algebra of transformation rules closes, up to terms with two fermions. We first review the algebra of zeroth order transformations (2.19) and in B.2 we include the first order corrections. We define [δ 1 , δ 2 ] = −δ 12 . B.1 Leading order algebra We focus on the algebra determined by the leading order transformations (2.19) and show that it closes with the parameters (2.20). We split the algebra of transformations on the generalized fields into the following commutators: − Supersymmetry transformations of the dilaton where we have usedǭ 1 γ a ǫ 2 = −ǭ 2 γ a ǫ 1 and ǫ 1 γ abc ǫ 2 = ǫ 2 γ abc ǫ 1 , and defined − Diffeomorphisms on the dilaton − Mixed supersymmetry and double Lorentz transformations on the dilaton − Mixed diffeomorphisms and supersymmetry variations on the dilaton − Supersymmetry variations of the frame Projecting with E M C , we get where we have used (2.14) and ξ ′M 12 is the generalization of (B.2), i.e. Projecting with E M c we find Following similar steps, we get (B.14) − Diffeomorphisms and double Lorentz variations of the frame Note that ξ ′′M 12 in (B.4) does not contain the second and third terms in the r.h.s. of this expression, due to the strong constraint. 
− Mixed diffeomorphisms and supersymmetry variations of the frame − Mixed double Lorentz and supersymmetry variations of the frame − Mixed diffeomorphisms and supersymmetry transformations of the gravitino − Mixed supersymmetry and double Lorentz transformations of the gravitino − Diffeomorphisms and double Lorentz transformations of the gravitino − Mixed supersymmetry and double Lorentz transformations of the dilatino Summarizing we have found, up to bi-linear terms in fermions, The commutator of supersymmetry variations on the gravitino and dilatino as well as the missing terms δ ξ ′ 12 ρ and δ ξ ′ 12 Ψ A are not included as they are of higher order in fermions. B.2 First order algebra We now work out the algebra of first order transformations (3.37) and show that it closes with the parameters (3.38), up to terms with two fermions. Here we denote δ ≡ δ (0) + δ (1) and [δ 1 , δ 2 ] = δ 12 . We split the algebra as in the previous section. − Double Lorentz transformations on the generalized frame Repeating the procedure for E M a , we find defined in (B.31) and − Mixed supersymmetry and double Lorentz transformations on the generalized frame Using we get the first order contribution to the mixed transformation rules of The first two terms are a Lorentz transformation with parameter From the second line, only one term survives after commuting the gamma matrices, which corresponds to a first order supersymmetric variation with zeroth order parameter cd . In the same way, from the remaining terms we find a first-order supersymmetry pa- The first line is a zeroth order Lorentz transformation with parameter Commuting the gamma matrices in the first term of the second line, the second contribution in the fourth line is canceled and we get again a supersymmetry transformation with zeroth order parameter ǫ ′′ 12 = − 1 2 ǫ [1 γ cd Λ 2]cd . Finally, commuting the gamma matrices in the second term of the third line, various cancellations leave a supersymmetry transformation with first order parameter (B.37). − Supersymmetry variations on the generalized frame The first and last terms of the r.h.s. combine into a Lorentz transformation with while the other terms form a diffeomorphism with first order parameter The same result holds for E M C δ ǫ 1 , δ ǫ 2 E Ma , while − Mixed diffeomorphism and Lorentz variations of the generalized frame Recalling that diffeomorphisms are not deformed, we get to first order which is a first-order Lorentz transformation with a zeroth order parameter. We use the This case is similar to the previous one. We start with which is a first order supersymmetry transformation with a zeroth order parameter. It is straightforward to see that the same result holds for E Ma . − Double Lorentz variations on the generalized gravitino After some straightforward manipulations, we finally obtain Lorentz transformations with the following parameters CD − Mixed Lorentz and supersymmetry transformations on the generalized gravitino Acd γ cd ǫ 1 + δ Commuting the gamma matrices in the second term of the r.h.s, and combining it with the corresponding term in the (1 ↔ 2) operation, we recognize a supersymmetry transformation with zeroth order parameter ǫ ′ 12 = − 1 2ǭ [1 γ ab Λ 2]ab . 
The first term in the second line together with the corresponding term in the (1 ↔ 2) operation, gives a zeroth order supersymmetry transformation with first order parameter The remaining terms cancel and then we get In the second line (adding the (1 ↔ 2) operation) we recognize a Lorentz transformation with first and zeroth order parameters In equations (3.38) of the main text we collect the parameters that appear in this algebra of first order transformation rules. C Supersymmetry of heterotic string effective action In the first part of this appendix we prove that the higher-derivative deformations of the transformation rules of the supergravity fields satisfy a closed algebra up to O(α ′ ) and up to terms with two fermions. In the second part, we show that the action (5.1) is invariant under these supersymmetry transformations. C.1 Supersymmetry algebra It is well known that the algebra of leading order transformations of supergravity and super Yang-Mills fields closes. Moreover, the replacement H µνρ → H µνρ in the supersymmetry transformations of the gravitino and dilatino does not affect the leading order closure on any field except for the b-field. Hence we focus on the algebra of first order transformation rules on b µν . It is convenient to first look at the brackets acting on b µν = b µν + b 8 A k [µ χ i γ ν] χ j f ijk . Up to first order and bilinear terms in fermions, we need the following transformation rules: µab γ ab ǫ , (C.1a) We exclude the diffeomorphisms since it is trivial to see that all the transformation rules of b µν (i.e. Lorentz, supersymmetry, abelian and non-abelian gauge transformations) transform as tensors under diffeomorphisms and hence their commutators are trivial. Therefore, we compute the brackets The first term in the r.h.s. gives Adding both contributions, we get To see the algebra of transformations on b µν , note that and it is easy to see that the second term in the r.h.s. vanishes. Rewriting (C.5) in terms of supergravity and super Yang-Mills fields, the brackets that mix supersymmetry with Lorentz and abelian gauge transformations vanish, while the supersymmetry algebra gives (C.10)
9,914.8
2021-04-20T00:00:00.000
[ "Physics" ]
Gradient flows without blow-up for Lefschetz thimbles We propose new gradient flows that define Lefschetz thimbles and do not blow up in a finite flow time. We study analytic properties of these gradient flows, and confirm them by numerical tests in simple examples. Introduction To study properties of strongly-correlated many-body systems, numerical simulation provides us powerful tools.Exact diagonalization of the Hamiltonian gives the complete information of physical systems, however it requires an exponentially large amount of the computational cost as the number of particles increases.Monte Carlo simulation of the path integral circumvents this problem, and many physical systems of hadron and condensedmatter physics in thermal equilibrium have been successfully studied with this method [1][2][3][4]. Monte Carlo simulation is based on importance sampling, and thus the Boltzmann weight exp(−S) with the classical action S must be positive semi-definite.The Boltzmann weight in many interesting systems, however, takes on complex values, so that the idea of importance sampling cannot be applied, which is called the sign problem [5,6].The conventional solution of this problem is the reweighting method with phase quenching, but this procedure generally revives the exponential complexity and its use is very limited [7].In hadron physics, high-density cold nuclear matter gets a lot of attention because of its relevance to neutron-star physics [8], but finite-density quantum chromodynamics (QCD) suffers from the sign problem, and we do not have a technique for ab initio computations of this theory [9]. Recently, a new systematic approach to the sign problem has been developing based on the complexification of path-integral variables, and it is called the Lefschetz-thimble Monte Carlo method .If the integration variables are complexified, there is a great deal of freedom in the choice of integration contours thanks to Cauchy's theorem.Although expectation values of physical observables does not change under the continuous deformation of integration contours, the strength of the sign problem heavily depends on the choice of contours, so we can expect that there must be an optimal choice.For onedimensional integrals, it is given by the stationary-phase path, and its higher dimensional analogue is called the Lefschetz thimble.This technique has been developed in the context of the hyperasymptotics of multi-dimensional exponential integrals [37][38][39], and it is now applied in physics not only to the sign problem but also to the resurgence theory .Quite recently, the Lefschetz-thimble method is also discussed in the context of quantum cosmology [63,64] through its application to the real-time quantum phenomena [17,18,31]. In order to find Lefschetz thimbles, all we have to do is to analyze the gradient flow in the space of complexified field configurations z i = x i + iy i [40,41]: However, as we will see in this paper, solutions of the gradient flow (1.1) generically blow up within finite-time intervals, so one must treat them carefully in numerical computations to get the correct answer.Instead, we propose a new gradient flow equation, where S B is the bosonic classical action, D is the fermionic determinant, and the total classical action is S = S B − ln D. 
Here, Λ B and Λ F are positive real parameters to be tuned appropriately.This new flow equation (1.2) turns out to define the Lefschetz-thimble decomposition of the original integration as the conventional one (1.1)does, and all of its solutions do not show blow-up.This paper is organized as follows: Section 2 gives a brief review on the Lefschetzthimble method with the conventional gradient flow, and we argue that blow-up generically happens using simple examples.In Sec. 3, we introduce new gradient flows and justify their use for the Lefschetz-thimble method.Furthermore, we show that blow-up does not occur in that new flow equation.In Sec. 4, we numerically study the gradient flow in simple examples to see its behaviors and check that the sign problem is indeed equally solved compared with the conventional gradient flow.Concluding remarks are made in Sec. 5. Several technical details are worked out in two appendices.In appendix A, we give a mathematical proof on the equivalence among gradient flows.The deviation equation of the gradient flow to compute the Jacobian is given in Appendix B, and in Appendix C, we discuss some technical details on the complex structure to claim that our proposal works with gauge symmetries. Blow-up of conventional gradient flows for Lefschetz thimbles We first review the Lefschetz-thimble approach to the sign problem in Sec.2.1.In Sec.2.2, we explain that the gradient flow conventionally used blows up in a finite time by demonstrating it in simple two examples. Brief review of Lefschetz-thimble methods for sign problems Let us consider the following integral, where the action S(x) is a complex-valued polynomial in x.Since S(x) is complex valued, this integral becomes oscillatory and the sign problem appears.The Lefschetz-thimble method is a method to make the sign problem milder by deforming the integration contour R n to other n-dimensional submanifolds of the complexified space C n .Conventionally, such submanifolds are constructed by solving the following gradient flow , where z i is a holomorphic coordinate of C n : z i = x i + iy i .We can understand why (2.2) is called the gradient flow as follows: Let us pick up the standard (Kähler) metric Therefore, Re(S(z)) = 1 2 h(z, z) monotonically increases along the flow (2.2).Another important property of (2.2) is that Im(S(z)) is constant along the flow, because of the holomorphy of S(z). Let {z σ } σ∈Σ be the set of the saddle points, ∂S(z σ ) = 0. Using the gradient flow (2.2), we define the Lefschetz thimble and its dual by [10,14,17,40,41] respectively.The claim from the Picard-Lefschetz theory is that we can compute relative homologies as [37][38][39][40][41] H if S(z) satisfy certain properties.By imposing appropriate orientations to J σ and K σ , the intersection pairing satisfies and thus we can compute the homology class of the original integration cycle R n as where [J ] represents the homology class of the cycle J .As a result, we can rewrite the original integration as Since Im(S(z)) is constant along each Lefschetz thimble J σ , the sign problem of (2.10) can be absent or much milder than that of the original integral (2.1). 
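In standard Picard–Lefschetz notation, the decomposition referred to here as (2.10) takes the following schematic form, with n_σ the integer intersection numbers (orientation conventions as in the cited literature):

```latex
\int_{\mathbb{R}^n} \mathrm{d}^n x \; e^{-S(x)}
  \;=\; \sum_{\sigma \in \Sigma} n_\sigma \int_{\mathcal{J}_\sigma} \mathrm{d}^n z \; e^{-S(z)} ,
\qquad
n_\sigma \;=\; \langle \mathbb{R}^n , \mathcal{K}_\sigma \rangle \in \mathbb{Z} .
% Im(S) is constant on each thimble, so every term on the right carries a
% single overall phase e^{-i Im S(z_sigma)}; this is the sense in which the
% sign problem is absent or much milder after the decomposition.
```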
There is a practical way to realize the decomposition (2.10), and we introduce it following Refs.[30,31].Let z(t, x) be the solution of (2.2) with the initial condition z(0, x) = x.We fix the flow time T , and define the n-dimensional submanifold by (2.11) Thanks to Cauchy's theorem, we obtain (2.12) The first identity means that J (T ) belongs to the same homology class as R n .Furthermore, if T is sufficiently large, J (T ) would become almost identical to the sum of Lefschetz thimbles.Therefore, the last expression of (2.12) can be regarded as a realization of the Lefschetz-thimble decomposition (2.10) when T is large enough, which is useful for numerical computations. Blow-up of conventional flows In order to construct J (T ), we need to solve the gradient flow (2.2) numerically accurately, and thus it is quite important to understand its properties.Here, we would like to point out that the blow-up of solutions is a quite generic phenomenon for nonlinear differential equations.To be specific, let us consider the asymptotic behavior of the gradient flow (2.2) in simple examples, and we will show that the solutions of (2.2) blow up. The first example is a quartic potential S(x) = x 4 .One can regard this as a prototype of the scalar φ 4 field theory when the fields are quite large and the mass term is negligible. where we consider the case x is real.We can solve this equation with the initial condition x(0) = x 0 > 0 as We can readily see that x(t) → ∞ as t , and the solution blows up within a finite time even for this simple example.One must be careful with the treatment of blow-up when we apply the conventional gradient flow to construct Lefschetz thimbles numerically.Let us make it clear that this is quite a generic phenomenon.For that purpose, we set k = deg(S), then the flow equation for with some positive coefficient c.The qualitative behavior is hence given by r ∼ (t c − t) −1/(k−2) for some blow-up time t c .The only exception is the case when k = 2; the blow-up does not occur only if S is Gaussian. In order to avoid confusion, we emphasize that the blow-up does not violate the identity (2.12) if the equations are interpreted appropriately.Returning to the example Eq. (2.13), we now fix the flow time T , and regard x(T, x 0 ) as a function of the initial condition x 0 in the following way; (2.16) In this example, the formula (2.12) must be interpreted as which is true since it is obtained by change of variables.In this sense, (2.12) gives the correct answer. Let us give a heuristic argument why the formula holds true even with the blow-up.As we have seen, Re(S(z)) increases monotonically as the flow time becomes larger.Therefore, as the solution blows up, Re(S(z)) diverges to +∞, which implies that e −S(z) → 0. (2.18) The region where the solution blows up within the flow time T does not contribute because the Boltzmann weight vanishes. 1The rest in the original integration cycle, (−1/ √ 8T , 1/ √ 8T ), covers the whole J (T ) ⊂ C n , which gives the same value as the original integral.Therefore, the blow-up is not the problem of the formulation but requires a correct treatment in numerical computations. 
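The quartic example can be checked in a few lines. The following minimal sketch (ours, not code from the paper) integrates the conventional flow restricted to the real axis, dx/dt = 4x³, with a fourth-order Runge–Kutta stepper and compares it with the closed-form solution x(t) = x₀/√(1 − 8x₀²t), which diverges at t_c = 1/(8x₀²):

```python
import numpy as np

def rk4_step(f, x, dt):
    """One fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Conventional flow dz/dt = conj(dS/dz) for S(z) = z**4, restricted to real z.
flow = lambda x: 4.0 * x**3

x0 = 1.0
t_c = 1.0 / (8.0 * x0**2)          # analytic blow-up time
dt = 1.0e-4
x, t = x0, 0.0
while t + dt < 0.99 * t_c:         # stop shortly before the blow-up time
    x = rk4_step(flow, x, dt)
    t += dt

exact = x0 / np.sqrt(1.0 - 8.0 * x0**2 * t)
print(f"t = {t:.5f}   (t_c = {t_c:.5f})")
print(f"numerical x(t) = {x:.4e},  analytic x(t) = {exact:.4e}")
# Both values grow without bound as t -> t_c: the trajectory leaves any
# bounded region within a finite flow time.
```

The same scaling is behind the statement above that, at fixed flow time T, only initial conditions inside (−1/√(8T), 1/√(8T)) avoid the blow-up.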
Next, let us consider the following example, i.e., S(x) = x 2 − ln(1 − x 2 ).Here, one can think of factor (1 − x 2 ) as a toy fermion determinant.We will apply Lefschetz-thimble method to this case, and find that Lefschetz thimbles terminate not only at infinities of the configuration space but also at the zeros of the fermion determinant [19,20].In this example, there are no infinities, so let us consider the behavior near zeros of the fermion determinant and set x = 1 + δx and |δx| 1.The conventional flow equation for δx is Here, the ellipsis represents the nonsingular terms at δx = 0, and we neglect them.The solution with the initial condition δx(0) = δx 0 1 is given by and the flow again reaches the singular point δx = 0 (or x = 1) within the finite time t = 1 2 δx 2 0 .To interpret the formula (2.12) correctly, at flow time T we must exclude the region from the integral by noticing that Re(S) = +∞ in this region. 2 3 Proposal of new gradient flows without blow-up In Sec.2.2, we have seen that the blow-up of conventional gradient flows happens even for very simple examples.The formula for the Lefschetz-thimble integral is still correct 1 Strictly speaking, the discussion given here is slightly imprecise.When the flow blows up, the Jacobian factor det(∂z(T, x)/∂x) diverges.Therefore, the suppression (2.18) must be strong enough to ensure that det(∂z(T, x)/∂x)e −S → 0.Here we just point out that this is the case for S(x) = x 4 . 2 In the case of the bosonic action, we need require that det(∂z(T, x)/∂x)e −S → 0. In the fermionic case, this requirement is too strong and not necessarily satisfied.It is enough to require that det(∂z(T, x)/∂x)e −S is bounded around zeros of the fermion determinant because the region with blow-ups shrinks to a set of measure zero.This is actually the case for ] shrinks to the point {1} while det(∂z(T, x)/∂x)e −S remains finite. even when blow-up occurs, but we must carefully control that behavior when we perform numerical computations.In this section, we propose a new gradient flow, for some regularization parameters Λ B , Λ F ≥ 1.We here consider the case S(z) = S B (z) − ln D(z), where S B (z) is a polynomial that mimics the bosonic action and D(z) is a polynomial that mimics the fermion determinant (i.e., S F = − ln D is the effective action for fermions). Justification of new gradient flows In this section, we argue that the new gradient flow (3.1) also defines the Lefschetz-thimble decomposition.To make the argument applicable to more general cases, let us consider a regular Hermitian metric3 ds 2 = g ij (z, z)dz i ⊗ dz j .Especially, it should be noticed that for any z ∈ C n and v ∈ T z C n \ {0}.Using the Hermitian metric, we define the gradient flow as Here, g ij is the inverse of metric g ij .We obtain (3.1) by setting One can easily check that this metric is Hermitian on C n \ {D = 0}.We point out that any choice of the Hermitian metric defines an equivalent Lefschetzthimble decomposition of the exponential integral.Using the flow equation (3.3), we obtain Therefore, the two most important properties of the conventional flow equation are satisfied in the general case (3.3):(a) Re(S(z)) increases monotonically and stays constant only at saddle points, ∂S(z σ ) = 0. (b) Im(S(z)) is constant along the gradient flow.In Appendix A we will give a proof that all gradient flows define an equivalent Lefschetz thimble decomposition under certain conditions on S(z). 
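Both properties are easy to check numerically, since they hold for any real positive factor h(z, z̄) multiplying conj(∂S/∂z): along such a flow dS/dt = h |∂S/∂z|² is real and non-negative. The sketch below (a minimal illustration of ours, for the toy action S(z) = z² − ln(1 − z²) used above) monitors Re S and Im S along the flow; the particular factor h, which suppresses the flow near the zeros of D = 1 − z², is only an assumed stand-in in the spirit of Eq. (3.4), not the exact metric:

```python
import numpy as np

def S(z):
    """Toy action S(z) = z**2 - ln(1 - z**2) (complex logarithm)."""
    return z**2 - np.log(1.0 - z**2)

def dS(z):
    return 2.0 * z + 2.0 * z / (1.0 - z**2)

def h(z, lam_F=2.0):
    """Illustrative positive factor, small near the zeros of D = 1 - z**2.
    Any real h > 0 preserves properties (a) and (b); this specific form is
    an assumption, not the paper's Eq. (3.4)."""
    D2 = abs(1.0 - z**2)**2
    return lam_F**2 * D2 / (1.0 + lam_F**2 * D2)

def rk4_step(z, dt):
    f = lambda w: h(w) * np.conj(dS(w))
    k1 = f(z); k2 = f(z + 0.5*dt*k1); k3 = f(z + 0.5*dt*k2); k4 = f(z + dt*k3)
    return z + dt * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0

z = 0.3 + 0.2j                      # an arbitrary complexified starting point
dt, nsteps = 1.0e-3, 1000
re_S, im_S = [S(z).real], [S(z).imag]
for _ in range(nsteps):
    z = rk4_step(z, dt)
    re_S.append(S(z).real)
    im_S.append(S(z).imag)

print("Re S monotonically increasing:", bool(np.all(np.diff(re_S) >= -1e-10)))
print("max drift of Im S            :", float(np.max(np.abs(np.array(im_S) - im_S[0]))))
```

Up to integration error, Re S increases at every step while Im S stays at its initial value, independently of the chosen h.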
It would be more convincing to relate the new gradient flows (3.1) with the conventional gradient flow.We have introduced two positive parameters Λ B and Λ F in the metric of the gradient flow (3.1), and we can obtain the conventional flow equation by taking the limit Λ B , Λ F → ∞.In this sense, the new and conventional gradient flows are related by a continuous deformation without violating the most important properties of the conventional flow: d dt Re(S) ≥ 0 and d dt Im(S) = 0 for any Λ B , Λ F ≥ 1.Let us emphasize that our proposal is just a single example among the huge set of possibilities for the choice of g ij .Other choices, such as also satisfy all the above arguments, and we shall show that both choices work nicely to prevent blow-up in finite time. Proof of the absence of blow-up Let us check that the new gradient flows (3.1) does not blow up within a finite time.We assume that the action takes the form S = S B − ln D. We should notice that the flow diverges along the direction Re(S(z)) → +∞ because Re(S) monotonically increases.There are two possibilities to realize this divergence: or In the first case, |z| → ∞ since we assumed that S B (z) is polynomial.In the second case, z approaches a zero of D which are located in a bounded region.It is sufficient for our purpose to analyze the gradient flow in these limiting regions.We first consider the flow defined by (3.1).We can write the equation as .9) In the limit Re(S B (z)) → +∞, all factors on the right hand side except for e −2Re(S B ) shows a polynomial dependence on z or z.Since |z| 1 in this region, this implies that dz/dt is exponentially small, which means that z(t) → ∞ only logarithmically.In the other limit D(z) → 0, we have that dz/dt ∝ D(z) neglecting higher order corrections.Let λ be a zero of D(z) and write D(z) for some c > 0 when k = 1).In both limits, it an takes infinitely long time for the flow to reach the singularities. We next consider the gradient flow with the metric (3.6): The same analysis holds for the limits D(z) → 0, and we again obtain |z − λ| ∼ t −1/(2k−2) when z is close to a zero λ of D(z).We now consider the case Re(S B ) → ∞ as |z| → ∞. To be specific, let S B (z) is a polynomial of order n, then we obtain that dz/dt ∼ 1/z n+1 .Therefore, |z(t)| ∼ t 1/(n+2) and it again takes an infinitely long time for the flow to diverge.As a result, we have analytically shown that the blow-up within finite time can be evaded for certain choices of the Hermitian metric g ij on the complexified space C n , and we have constructed two specific examples in (3.1) and (3.6).Let us give an intuitive explanation of why the blow-up is prevented by introduction of the metric.For both choices, the metric g ij becomes quite small if Until the flow reaches this region, the new gradient flows show qualitatively the same behavior as the conventional one.However, once the flow reaches this region, the metric decelerates the flow sufficiently, and blow-up does not occur at a finite time. On the choice of Λ B and Λ F For practical use of our proposal in numerical computations, the appropriate choice of Λ B and Λ F is important.Let us write g ij = gδ ij , then the introduction of the metric effectively changes the discretization time ∆t to g∆t.If one uses the fourth-order Runge-Kutta method for solving the gradient flow, the error is given by O((g∆t) 4 ). 
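The contrast can be made concrete with the toy fermionic action S(x) = x² − ln(1 − x²) of Section 2.2. The sketch below (ours; the regulating factor is an assumed form Λ_F²|D|²/(1 + Λ_F²|D|²), consistent with the limiting behaviour described above but not the paper's exact metric) starts both flows from the same real point near the determinant zero at x = 1. The conventional flow hits the zero after a finite time of order (1 − x₀)²/2, while the regulated flow only approaches it asymptotically; note that the effective step of the regulated flow is h·Δt, in line with the O((gΔt)⁴) remark just made, so a much larger Δt is safe near the zero:

```python
import numpy as np

def dS(x):
    """dS/dx for S(x) = x**2 - ln(1 - x**2), real x in (-1, 1)."""
    return 2.0 * x + 2.0 * x / (1.0 - x**2)

def h(x, lam_F=2.0):
    """Assumed regulating factor: small near the zero of D = 1 - x**2."""
    D2 = (1.0 - x**2)**2
    return lam_F**2 * D2 / (1.0 + lam_F**2 * D2)

x0 = 0.99

# Conventional flow dx/dt = dS/dx: reaches the singularity x = 1 in finite time.
# Forward-Euler stepping is enough here and never evaluates dS beyond x = 1.
x, t, dt = x0, 0.0, 1.0e-7
while x < 1.0 and t < 1.0:
    x = x + dt * dS(x)
    t += dt
print(f"conventional flow hits x = 1 at t = {t:.2e}"
      f"  (estimate (1 - x0)**2 / 2 = {(1.0 - x0)**2 / 2.0:.2e})")

# Regulated flow dx/dt = h(x) * dS/dx: only approaches x = 1 asymptotically.
x, dt = x0, 1.0e-3
for _ in range(1000):                 # total flow time T = 1
    x = x + dt * h(x) * dS(x)
print(f"regulated flow after T = 1:  1 - x = {1.0 - x:.3e}  (stays positive)")
```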
It is quite natural from this point of view to require that g 1 while solving the flow starting from real configurations.This puts the constraints on Λ B as If we know the complex saddle points {z n } that have nonzero intersection numbers, then the flow reaches to the most dominant saddle points with a reasonable flow time by requiring It seems that there is no constraint for the upper bound of Λ B , so one can take a sufficiently large Λ B that satisfies these constraints.For any Λ F , the condition g < 1 is satisfied, and thus the lower bound is not given by this consideration.There is, however, an upper bound of Λ F for practical use.Let z * be a zero of the fermion determinant, and thus D(z) D (z * )(z − z * ).The flow equation (3.1) in the vicinity of z = z * reduces to The solution is given by In order to solve this exponentially fast convergence, we need to require that Λ 2 F |D (z * )| 2 ∆t 1 with the discretization time step ∆t.As a result, we obtain Although it is difficult to evaluate D (z * ) for realistic theories, the parametric dependence on ∆t of the upper bound of Λ F is obtained in this way. Im(z) Numerical tests in simple examples In this section, we compare behavior of the conventional and new gradient flow numerically for simple examples, the Airy integral, a toy model for a fermion determinant and a one-link U(1) model. Airy integral As a first example, we consider the Airy integration, The action of this theory is S(z) = S B (z) = −i z 3 3 + z , and it has two saddle points at z = z ± = ±i.The saddle point with non-zero intersection number is z + = i, and the classical action at that point is S B (z + ) = 2 3 .We numerically solve the gradient flow (3.1).Since the fermion determinant is absent, we set Λ F → ∞ and write Λ = Λ B : The other flows (3.6) give qualitatively similar behaviors, so we do not repeat our analysis.Let z Λ (t, x) be the solution with the initial condition z Λ (0, x) = x, and define the complex contour Figure 1 shows the behavior of the gradient flow using the trivial metric, i.e.Λ = ∞.The solid blue line is the Lefschetz thimble.The dashed lines show the flow lines starting from some real points, and other solid lines show the contours J Λ=∞ (T flow ) at When T flow = 0, then J Λ (0) = R and this reweighting factor vanishes for the Airy integral, R(T flow ) = 0.In order to get a better understanding of the qualitative behavior of R, let us comment on the semiclassical evaluation of R in the Lefschetz-thimble method.In this case, only the Lefschetz thimble at z = z + contributes.Therefore, in the semiclassical approximation, and we expect that R becomes slightly smaller in the exact computation due to the residual sign problem coming from the Jacobian factor ∂z Λ (T flow ,x) ∂x . In Fig. 3, we show the dependence of the reweighting factor on T flow between 0 < T flow ≤ 2.0 at Λ = 1, 2, 5, and 100.In all cases, R grows monotonically as T flow becomes larger.Since the choice Λ = 1 regularizes the flow too much as we have discussed, R is not saturated for T flow ≤ 2.0.When Λ 5 for the Airy integral, the additional factor e −2Re(S B )/Λ in the flow equation decelerates the flow only inside unimportant complex domains, and R shows dependence on Λ very weakly. 
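The Airy example is simple enough to reproduce end to end. The sketch below is our own illustration, not the paper's code; the regulator is assumed to enter as the overall factor e^{−2Re S/Λ} quoted above. It flows a grid of real points with a fourth-order Runge–Kutta stepper, integrates e^{−S} along the resulting contour J_Λ(T_flow), and compares with the exact value ∫dx e^{i(x³/3+x)} = 2π Ai(1); with these illustrative settings the two numbers should agree to a few digits:

```python
import numpy as np
from scipy.special import airy

def S(z):                       # Airy action S(z) = -i (z**3/3 + z)
    return -1j * (z**3 / 3.0 + z)

def dS(z):
    return -1j * (z**2 + 1.0)

def velocity(z, lam=5.0):
    # Regulated flow dz/dt = exp(-2 Re S / Lambda) * conj(dS/dz); the
    # exponential factor slows the flow once Re S ~ Lambda and prevents blow-up.
    return np.exp(-2.0 * S(z).real / lam) * np.conj(dS(z))

def flow(z, T=1.0, dt=1.0e-3):
    for _ in range(int(round(T / dt))):        # fourth-order Runge-Kutta
        k1 = velocity(z)
        k2 = velocity(z + 0.5 * dt * k1)
        k3 = velocity(z + 0.5 * dt * k2)
        k4 = velocity(z + dt * k3)
        z = z + dt * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0
    return z

x = np.linspace(-6.0, 6.0, 2401)               # real initial conditions
zT = flow(x.astype(complex))                   # the contour J_Lambda(T_flow)

# Integrate the holomorphic integrand along the piecewise-linear contour.
integrand = np.exp(-S(zT))
Z = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zT))

Ai, _, _, _ = airy(1.0)
print("flowed-contour integral:", Z)
print("exact 2*pi*Ai(1)       :", 2.0 * np.pi * Ai)
```

Because the integrand is holomorphic, the value of the contour integral does not depend on how close J_Λ(T_flow) is to the exact thimble; only the residual phase fluctuations, and hence the reweighting factor R, do.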
Gaussian model with fermion determinant We consider the following Gaussian integral: In this model, the bosonic action is S B (z) = 1 2 (z − iβ) 2 and the fermionic determinant is D(z) = (z + i(α − β)) p , so this model is a prototype of the sign problem with the fermion determinant.Properties of the sign problem of this model (at β = 0) have been studied with the complex Langevin method in Ref. [65].The saddle points of the action S = S B − ln D are given by For α < 2 √ p, which we refer to as case 1, both the saddle points contribute to the integral, and the complex Langevin method breaks down in generic cases [65].This failure can be understood as a result of different complex phases for those two Lefschetz thimbles (at least within the semiclassical regime) [29], which necessarily requires a polynomial tail of the complex Langevin distribution and violates assumptions in the formal proof of the complex Langevin method [65][66][67][68] (see also Refs.[69,70] for recent related analytical studies).For α > 2 √ p, both classical solutions are purely imaginary, and only one of the saddle points has non-zero intersection number.In this case, which we refer to as case 2, the complex Langevin method works [65]. Case 1 In the following, we set p = 2, α = 2, and β = 3 so that α 2 < 4p: The zero of the fermion determinant is located at We see from this expression that the two saddle points indeed have different phases. Since the Gaussian bosonic action S B = 1 2 (x − iβ) 2 does not cause the blow-up, we set Λ B → ∞ and concentrate studying the effect of the fermion determinant.We write Λ = Λ F , and the flow equation becomes We again solve this gradient flow for various Λ using the fourth-order Runge-Kutta method with the time step ∆t = 0.01, and obtain J Λ (T flow ). In Fig. 4, we show how J Λ (T flow ) develops as T flow and Λ are changed.In Fig. 4a, its T flow -dependence at Λ = 2 is shown, and J Λ (T flow ) approaches to the saddle point, z ± = ±1 + 2i as T flow becomes larger.Let us also pay attention to the behavior of flows in the vicinity of the zero of D, z * = i.Since the flow slows down around z = z * , the complex contours J Λ (T flow ) make a slight detour to evade that point.This feature can be more clearly seem by looking at the Λ-dependence of the contours.In Fig. 4b, the Λ-dependence is studied at T flow = 1.0, and the detour becomes smaller as Λ becomes larger.This is consistent with the previous analysis that the flow decelerates if |D| Λ −1 . Figure 5 shows the T flow -dependence of the reweighting factor R at Λ = 1, 2, and 100 for 0 < T flow ≤ 2.0.Since the partition function Z is negative with our setup, the reweighting factor R is also negative in this case.The reweighting factor of the conventional phase quenching, i.e.R(T flow = 0), is about −1.It is easy to find that 1/S (z ± )e −S(z ± ) = c exp ±i(1 + π 2 + π 8 ) for some c > 0. As a result, we get R(T flow → ∞) = cos(1 + π 2 + π 8 ) −0.98 in the semiclassical approximation, which is roughly consistent with the saturation value given in Fig. 5. Case 2 In the following, we set p = 2, α = 3, and β = 4 so that α 2 > 4p, and the saddle points are located at 12) The zero of the fermion determinant is at z = z * = i.The values of the classical action at z = z ± are These two classical actions have the same imaginary part, and the theory is indeed on the Stokes ray; two saddle points z ± are connected by the gradient flow.We again consider the gradient flow given by (4.10). 
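A minimal numerical sketch of this flow is given below (our code; the Λ_F regulator is assumed to multiply the conventional flow as |D|²/(|D|² + Λ_F⁻²), one concrete choice consistent with the limiting behaviours discussed in Section 3, not necessarily the exact Eq. (4.10)). With the Case 1 parameters it checks that the quoted saddles z_± = ±1 + 2i indeed satisfy ∂S = 0 and shows that the flowed contour keeps a finite distance from the determinant zero z_* = i; switching to the Case 2 parameters (p, α, β) = (2, 3, 4) runs the Stokes-ray situation discussed next:

```python
import numpy as np

p, alpha, beta = 2, 2.0, 3.0            # Case 1 parameters
lam_F = 2.0
zstar = -1j * (alpha - beta)            # zero of D(z) = (z + i(alpha-beta))**p, here z* = i

def dS(z):
    # S = S_B - ln D, with S_B = (z - i*beta)**2 / 2 and D = (z + i*(alpha-beta))**p
    return (z - 1j * beta) - p / (z + 1j * (alpha - beta))

def velocity(z):
    D = (z + 1j * (alpha - beta))**p
    hfac = np.abs(D)**2 / (np.abs(D)**2 + lam_F**-2)   # assumed regulator (see text)
    return hfac * np.conj(dS(z))

def rk4_flow(z, T, dt=0.01):
    for _ in range(int(round(T / dt))):
        k1 = velocity(z); k2 = velocity(z + 0.5*dt*k1)
        k3 = velocity(z + 0.5*dt*k2); k4 = velocity(z + dt*k3)
        z = z + dt * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0
    return z

# Saddle points quoted in the text for Case 1: z_pm = +-1 + 2i.
for zs in (1.0 + 2.0j, -1.0 + 2.0j):
    print(f"|dS({zs})| = {abs(dS(zs)):.2e}")

x = np.linspace(-6.0, 6.0, 1201).astype(complex)
zT = rk4_flow(x, T=1.0)
print("closest approach of J_Lambda(T) to z* :", float(np.min(np.abs(zT - zstar))))
# The flow decelerates near z* because |D|**2 -> 0 there, so the contour makes
# a detour around the zero instead of running into it.
```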
Im(z) This is a tricky example because J Λ (T flow ) does not converge to the contributing Lefschetz thimble J + although their homology classes are the same, [J Λ (T flow )] = [J + ] (J ± are Lefschetz thimbles associated with saddle points z ± ).To make the discussion on the intersection number well-defined, let us imagine that we add an infinitesimal imaginary part to parameters α, β so that the theory is off the Stokes ray.If one draws the dual thimbles K + and K − , one will find that K + intersects with R only once but that K − intersects with R twice with different relative orientations.Their intersection number with R can be computed as R, K + = 1 and R, K − = 1 − 1(= 0).As a result, the Lefschetz-thimble decomposition of the integral becomes We can now see why J Λ (T flow ) → J + as T flow → ∞ as manifolds: The construction of J Λ (T flow ) is sensitive to the cancellation of two intersections between K − and R, and thus the limit of J Λ (T flow ) roughly becomes lim (4.15) Since J + (−∞ + z + , z + + ∞), there are additional line segments, along which the integrals of holomorphic functions cancel.This observation is important when we discuss the reweighting factor, since the additional segments reduce the reweighting factor: R(T flow → ∞) −0.45. Since the Λ dependence is quite weak except in the vicinity of z = z * = i for Λ 10 as we have seen in Fig. 4b for slightly different parameters, let us show the numerical result only at Λ = 10. Figure 6 gives the T flow dependence of contours J Λ (T flow ) at Λ = 10 for 0 < T flow ≤ 5.0.We find that J Λ (T flow ) indeed approaches the contour given in (4.15) as T flow gets larger.Moreover, thanks to the metric in the gradient flow, J Λ (T flow ) detours evading the zero of the fermion determinant z = z * . Figure 7 shows the T flow dependence of the reweighting factor.Interestingly, the reweighting factor reaches the maximum around T flow 1.4 and overcomes the reweighting factor, R(T flow → ∞) −0.45, computed by using Lefschetz thimbles.It gradually decreases after that, and approaches to R(T flow → ∞) −0.45. U (1) one-link model The U (1) one-link model is given by The bosonic action is S B (z) = −β cos(z), and the fermion determinant is given by D(z) = (1+κ cos(z −iµ)).This model is considered in Ref. [16] in the context of Lefschetz thimbles.In order to control the blow-ups of this model at various values of the parameters, we need to introduce both Λ B and Λ F in the metric of the gradient flow.We shall see that our proposal to change the flow works well also for this situation.The structure of Lefschetz thimbles changes drastically as the parameter κ exceeds 1, so we consider the cases κ = 1/2 and κ = 2.We always set β = 1 and µ = 2. Small κ We take κ = 1/2, β = 1, and µ = 2 in (4.16), and we set Λ B = 5 throughout the analysis in this case.The relevant saddle points are approximately given by The values of the classical action at these saddle points are S 1 −1.9 and S 2 2.9, respectively, and thus the contribution is dominated by z 1 .The zeros of the fermion determinant are located at In Fig. 8a, the blue solid curve shows the Lefschetz thimble of z 1 that contributes to Z, and red squares show the zero z * − of the fermion determinant.We show in Fig. 8a how J Λ (T flow ) develops as T flow increases at Λ B = Λ F = 5.In Fig. 
8b, the Λ F dependence of J Λ (T flow ) is studied at T flow = 1.0, and the contours approach to z = z * − as Λ F becomes larger.In this parameter region, Λ B does not play a significant role, because the blow-up due to the bosonic action S B does not occur.Figure 9 shows the T flow -dependence of the reweighting factor at Λ F = 1 and 5.We find that the Λ F dependence of the reweighting factor is quite small even though the contours themselves strongly depend of Λ F as we have seen in Fig. 8b.In this case, the contribution to Z is dominated by one saddle point z 1 , and thus the reweighting factor becomes close to 1.For Λ F = 5, the reweighting factor reaches its maximum around T flow 1.6, and it slightly decreases after that.This is because the zero z * − obstructs the deformation of real cycle to the Lefschetz thimble shown by the blue solid curve as we have seen in the Gaussian model, and thus the residual sign problem becomes more severe when T flow becomes larger than a certain value. Large κ We take κ = 2, β = 1, and µ = 2 in (4.16), and we set Λ B = 10 throughout the analysis in this case.The relevant saddle points are approximately given by The values of the classical action at these saddle points are S 1 −2.9 and S 2,3 3.8 ± 5.7i, and thus the contribution is dominated by z 1 .The zeros of the fermion determinant are given by These zeros are shown with red squares in Fig. 10a, and the blue solid curves show the Lefschetz thimbles contributing to Z. In Fig. 10a we study the T flow -dependence of J Λ (T flow ) at Λ F = 5.In this case, Λ B prevents the blow-up in the direction z → π + i∞.The Λ F -dependence of J Λ (T flow ) is studied in Fig. 10b, and the contour becomes similar to Lefschetz thimbles as Λ F becomes larger. Figure 11 shows the T flow -dependence of the reweighting factor at Λ F = 0.5 and 5.The Λ F -dependence of the reweighting factor is quite small.Within the time interval in our computation, the reweighting factor monotonically increases for this parameter.Compared with other examples studied in this paper, this model is an important benchmark because both Λ B and Λ F are effective to prevent the blow-up in this parameter region.We have checked through this model that the reweighting factor behaves as we have expected even in such situations. Conclusions We have argued that the conventional gradient flow defining Lefschetz thimbles generically blows up, and thus one needs to monitor the divergence of gradient flows with great care in numerical computations.Instead of doing that, we propose a new gradient flow equation (3.1) that also defines Lefschetz thimbles and does not suffer from blow-ups.We show its theoretical foundation by providing a geometric interpretation of the change in the gradient flows, and also prove rigorously that our new flow equation does not have blowups.In some examples of one-dimensional integrals with a sign problem, we numerically construct the complex contours using the new gradient flow to see how it works in practice.By appropriately choosing the regularization parameters of the new gradient flow, we check that it solves the sign problem as the conventional flow equation does. 
One possible concern about our proposal would be the numerical cost, but we believe that it is not the problem for the following reasons.Since the computation of the metric needs the absolute value of the fermion determinant, it takes O(N 3 ) in the LU decomposition when N is the size of the fermion matrix.However, one needs to compute the inverse of the fermion matrix even without introducing the metric and it also costs O(N 3 ), so this additional computational cost would not be a severe problem.Moreover, if the precise evaluation of the determinant is too costly, we could use the stochastic estimation of the determinant that reduces the cost significantly. Let us also briefly discuss another possible remedy treating the blow-up and compare it with our proposal.A simple remedy that uses the conventional flow equation would be the following: Introducing a cutoff in the process of solving the conventional flow equation in order to estimate the blow-up, we throw away a trial configuration when it satisfies a preset blow-up criterion.The argument is that blow-up occurs when the action diverges and such configurations are suppressed anyway.However, for strongly-coupled field theories, the configurations with exponentially small Boltzmann weights can give a significant contribution because of the exponentially large entropy, and we must check the cutoff-independence of the results obtained using this simple remedy.In our proposal, although additional costs are required to compute the metric, we do not introduce any cutoffs in doing the Monte Carlo algorithm.Since the additional computational costs for the metric is at most the same order of the computation for original flow equations, the check of cutoff-independence and our proposal would be comparable.Another possible merit for introducing the metric is that the flow equations becomes more stable than the original ones because of the absence of a blow-up, which allows to increase the step-size of the flow equation reducing the computational cost. Our proposal (3.1) is only one possible way to introduce a metric in the gradient flow that prevents blow-up.We have shown that other choices define an equivalent Lefschetzthimble decomposition so long as the gradient flow takes the form (3.3).We therefore would like to comment that technical problems of solving gradient flow equations might be circumvented by choosing a different metric. It would be interesting to see how our proposal works for the path integral of more realistic systems with strong interaction.Toward the final goal of computing finite-density QCD, chiral random matrix is a good candidate to be tested.Indeed, previous studies [71][72][73][74][75][76][77][78][79][80][81] reveal that chiral symmetry breaking and associated charged pions are the origin of the difficulties for the numerical simulations of cold and dense QCD.Since chiral random matrix theory shares the same universality with regards to the Dirac-eigenvalue distributions, its systematic study with Lefschetz thimbles, which is partly done in Ref. [22], will provide us an important insight in this problem. 
We denote the solution of the gradient flow with the metric g as z (g) (t), and define the Lefschetz thimble and its dual by We can show that τ , and hence the above assumption on Im(S) implies J (g) σ and K (g) τ cannot intersect when σ = τ .We should notice that J (g) σ and K (g) σ intersects only at z σ since Re(S(z)) is monotonically increasing along the flow.Moreover, the same argument shows that even when J and K are defined by different metrics g, g . Since Re(S(z)) increases monotonically along the gradient flow, J (g) defines integration cycles of the form e −S(z) d n z.Therefore, Using the above property on the intersection pairing, we obtain the identity for the homology class, Similarly, we obtain This shows that the homology class of the Lefschetz thimble does not depend on the choice of Hermitian metric.It also shows that for any choice of g.Here, we emphasize again that the coefficients R n , K σ are independent of the choice of metric g.It partly comes from the fact that the intersection number is a topological quantity while the metric is a regular complex function.The intersection number thus jumps only when the Stokes phenomenon happens, and what we have shown here is that the change of the metric does not cause the Stokes jumping. There are a few remarks on this result.For the one-dimensional examples, the Lefschetz thimble is nothing but the stationary phase contour, that is characterized by Im(S(z)) being constant.Since this amounts to one constraint in the two-dimensional space C R 2 , the Lefschetz thimble is uniquely defined as a submanifold.Independence of the metric is trivial since the stationary phase condition does not use the metric.On the other hand, this is highly nontrivial if one considers higher-dimensional integrals.In this case, the stationary phase condition is insufficient to characterize the half-dimensional submanifolds in C n , and there are a lot of possible choices for steepest descent cycles.Therefore, J σ and J (g) σ can be different submanifolds of C n if n ≥ 2. Application of the Picard-Lefschetz theory ensures that all of them are "equivalent" in the sense that their homology class is the same. B Flow equation for the Jacobian matrix For the numerical computation of the Lefschetz-thimble Monte Carlo method, we do not only need the flow z(T, x) but also the flow of the Jacobian det ∂z i (T, x)/∂x j .We first derive the result for the most general expression (3.3), and then apply it to the cases (3.1) and (3.6). Let us consider two solutions with infinitesimally close initial conditions z(T, x) and z(T, x + ∆x), where |∆x| 1.We compute the deviation of the gradient flow (3.3) as Here, we introduced the shorthand notation ∂ i = ∂/∂z i and ∂ i = ∂/∂z i , and ∆z = z(t, x + ∆x) − z(t, x).By writing the Jacobian matrix by ∆z becomes ∆z i = J i j ∆x j .To compute the Jacobian, we consider a real-valued variation for ∆x, and thus we obtain by comparing coefficients of ∆x .If one assumes that g ij ∝ δ ij at the saddle points ∂ i S = 0, then one can solve this equation in the vicinity of a saddle point by applying Takagi's factorization to Let us restrict ourselves to diagonal metrics g ij (z, z) = g(z, z)δ ij .Then, we obtain For the conventional gradient flow (2.2) , we reproduce the well-known formula [10] by substituting g = 1. 
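In one variable and for the conventional flow (g = 1), this formula is simply dJ/dt = conj(S''(z) J) with J(0) = 1. The sketch below (ours) integrates z and J together for the Airy action and cross-checks J against a finite-difference estimate of ∂z(T, x)/∂x:

```python
import numpy as np

def dS(z):   return -1j * (z**2 + 1.0)      # Airy action S(z) = -i (z**3/3 + z)
def d2S(z):  return -2.0j * z

def rhs(y):
    z, J = y
    # Conventional flow: dz/dt = conj(dS/dz),  dJ/dt = conj(d2S/dz2 * J).
    return np.array([np.conj(dS(z)), np.conj(d2S(z) * J)])

def flow(x0, T=0.4, dt=1.0e-3):
    # Modest flow time T keeps this conventional flow away from blow-up.
    y = np.array([complex(x0), 1.0 + 0.0j])
    for _ in range(int(round(T / dt))):      # RK4 for the coupled (z, J) system
        k1 = rhs(y); k2 = rhs(y + 0.5*dt*k1)
        k3 = rhs(y + 0.5*dt*k2); k4 = rhs(y + dt*k3)
        y = y + dt * (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0
    return y

x0, eps = 0.7, 1.0e-5
zT, J = flow(x0)
zp, _ = flow(x0 + eps)
zm, _ = flow(x0 - eps)
print("J from the flow equation     :", J)
print("finite-difference dz(T,x)/dx :", (zp - zm) / (2.0 * eps))
```

The two numbers agree up to discretization error, which is the practical check that the Jacobian flow has been implemented consistently with the flow of z itself.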
For the proposed gradient flow (3.1), we need to compute ∂ k ln g for the metric (3.4): We obtain ∂ k ln g by taking the complex conjugation since g is real.As a result, the deviation equation for (3.1) is given by C Comment on the Hermitian and Kähler metric in the gradient flow In the original applications of Lefschetz thimbles to quantum gauge theories [40,41], the Kähler nature of the complexified field space was emphasized.Our proposal (3.1), however, introduces the Hermitian metric in the gradient flow, and it is not Kähler.In this appendix, we will justify the use of our proposal even for the sign problem of lattice gauge theories. C.1 Quick review on complex structure This is the brief summary of Hermitian and Kähler structures.In the following, we consider a 2n-dimensional smooth (real) manifold M .If there exists a bundle map J : T M → T M with J 2 = −1 and T M the tangent bundle of M , we call J an almost complex structure on M .By considering a complexification of the tangent bundle T M ⊗ R C, one can diagonalize J at each point p ∈ M with eigenvalues ± √ −1 and degeneracies n ± .If J = J ν µ ∂ ν ⊗ dx µ satisfies the integrability condition, we say J is a complex structure of M , and M is called a complex manifold.If M is a complex manifold, we can take local (holomorphic and anti-holomorphic) coordinates z i and z i = z i (i = 1, . . ., n), which diagonalizes J at each point and the transformation property among them is holomorphic, thanks to Newlander-Nirenberg theorem.More concretely, in such coordinates the complex structure looks like If one takes another coordinate patch w i and w i with the same property, then w i (z) are holomorphic and w i (z) are anti-holomorphic thanks to the integrability condition. C.1.1 Hermitian structure A Riemannian manifold with an (almost) complex structure (M, g, J) is called Hermitian if it satisfies Let us take a holomorphic local coordinate z i , then this condition implies That is, g ij = g i j = 0. Using the mixed component of the metric, one can define a nondegenerate 2-form, called the Hermitian form, Compared to the Hermitian case, the first condition is imposed additionally for the Kähler manifold.Let us take a holomorphic local coordinate z i again, then the second condition implies that g ij = g i j = 0, i.e., Using the mixed component of the metric, one can define a 2-form, called the Kähler form, The first condition means that ∂ i g jk = ∂ j g ik and ∂ i g jk = ∂ k g ji , and it is equivalent to say that ω is symplectic, i.e., dω = 0. (C.8) That is, one can say that Kähler manifolds are Hermitian manifolds whose Hermitian forms are symplectic.This condition ensures the existence of a local function K, which is called a Kähler potential, so that ω = i∂∂K, where ∂ and ∂ are holomorphic and anti-holomorphic exterior derivatives (they are called Dolbeault operators, and locally ∂ = dz i ∂ i , etc.).Note that K is not necessarily a (globally defined) function, which is why K is called "potential". C.2 Gradient flow and Hamilton equation of motion In this section, we first review why the Kähler nature of the complexified space is useful for analytic applications of the Lefschetz-thimble method to topological gauge theories [40,41].In the last paragraph, we argue that practical applications do not require the Kähler property, and we will conclude that the Hermitian metric can be used to define the flow equation of lattice gauge theories when treating the sign problem. 
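For reference, the displayed conditions in the review above appear to have been lost in extraction; the LaTeX block below is our reconstruction of the standard textbook expressions in one common convention, not a quotation of the original equations.

```latex
% Hermitian condition and its local form in holomorphic coordinates
g(JX, JY) = g(X, Y), \qquad g_{ij} = g_{\bar\imath\bar\jmath} = 0,
\qquad \omega = i\, g_{i\bar\jmath}\, dz^i \wedge d\bar z^{\bar\jmath} .

% Kahler case: the Hermitian form is closed, equivalently the metric has a local potential
d\omega = 0 \;\Longleftrightarrow\;
\partial_i g_{j\bar k} = \partial_j g_{i\bar k}, \quad
\partial_{\bar\imath} g_{j\bar k} = \partial_{\bar k} g_{j\bar\imath}
\;\Longleftrightarrow\; \omega = i\,\partial\bar\partial K \ \text{(locally)} .
```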
Let us pick up a holomorphic map S : M → C, and consider the flow equation, This can be viewed from two perspectives, the Hermitian or the Kähler manifold.From the Riemannian nature of M , this is the gradient flow with the height function Re(S) = (S + S)/2 : M → R, dx µ dt = g µν ∂ ν 2Re(S). (C.10) One can easily check that in the holomorphic coordinate this goes back to the original equation (C.9) using the Cauchy-Riemann condition.In order to get another perspective, we introduce a "bracket" defined from the Hermitian or Kähler form ω: it again gives the original equation (C.9).Since {f, f } = 0 in general, this elucidates that Im(S) is conserved along the flow equation.The huge merit in choosing the Kähler metric comes from the fact that {, } becomes the Poisson bracket for the Kähler form ω, i.e. it satisfies the Jacobi identity {f, {g, h}} + {g, {h, f }} + {h, {f, g}} = 0. Therefore, for the Kähler metric, the gradient flow has a classical mechanical interpretation [40,41]. If Re(S) is a Morse function on M and satisfies the Morse-Smale condition (i.e., the critical points of S are non-degenerate and with all different Im(S) at those critical points), then one can compute basis of relative homologies H n (M, {e −Re(S) 1}) and H n (M, {e −Re(S) 1}) using the gradient flow, as we have seen in Appendix A. Those bases are called Lefschetz thimbles and dual thimbles, respectively.There exists a natural pairing called the intersection pairing, and this is important for the decomposition of the middle-dimensional cycles in terms of Lefschetz thimbles. However, the Morse-Smale condition for Re(S) is not always satisfied in practical applications to physics.Especially for gauge theories, the set of critical points is usually degenerate due to the gauge symmetry.In this case, the above equivalence between the gradient flow and the Hamilton equation is very helpful by choosing the Kähler metric [40,41] (see also [19,20]).Let us call the symmetry group G, then one can construct Noether charges Q for this symmetry.One can consider the reduced phase space by performing the symplectic reduction Q −1 (0)/G (also called Marsden-Weinstein reduction) [82], and define the Lefschetz-thimble decomposition in the reduced phase space.Using the Lefschetz thimbles and dual thimbles computed in Q −1 (0)/G, one can construct correct half dimensional cycles in M by considering group actions of the Noether charge Q so that the intersection number is well-defined.From this argument, it turns out to be quite helpful to choose the Kähler metric instead of the Hermitian metric in order to prove the existence of the Lefschetz-thimble decomposition when the classical action S has continuous symmetries. On the other hand, the two important properties, d dt Re(S) ≥ 0 and d dt Im(S) = 0, are satisfied in general for Hermitian metrics.So long as one has a program to construct halfdimensional cycles using the flow, the Hermitian property is good enough to cure the sign problem.Equation (2.11) provides such a method, and thus our choice of metric in (3.1) can be used also for theories with continuous symmetries, especially gauge theories. Figure 1 . Figure 1.Complex contours J Λ=∞ (T flow ) for the Airy integral with the trivial metric.The dashed black curves show flow lines with initial condition at the intersection with the real axis. Figure 2 . Figure 2. 
Complex contours J_Λ(T_flow) for the Airy integral; the black dashed curves in the left panel show flow lines starting on the real axis, and the blue dot marks the saddle point intersected by the Lefschetz thimble.
Figure 3. T_flow-dependence of the reweighting factor for the Airy integral at various Λ.
Figure 4. Complex contours J_Λ(T_flow) for the Gaussian model with the fermion determinant, with p = 2, α = 2, and β = 3 (Case 1); the red square marks the zero of the "fermion determinant", the blue dots mark the saddle points intersected by the Lefschetz thimble, and the black dashed curves (left panel) are flow lines starting on the real axis.
Figure 6. Complex contours J_Λ(T_flow) for the Gaussian model with the fermion determinant, with p = 2, α = 3, and β = 4 (Case 2); the blue dots mark the saddle points (the one at 3i is intersected by the thimble), the red square marks the zero of the fermion determinant at i, and the black dashed curves are flow lines starting on the real axis.
Figure 8. Complex contours J_Λ(T_flow) for the U(1) one-link model with κ = 1/2 and Λ_B = 5; blue dots mark the saddle points, red squares the zeros of the fermion determinant, and the dashed curves in the left panel are flow lines ending at a determinant zero or at the saddle point on the imaginary axis intersected by the thimble.
Figure 10. Complex contours J_Λ(T_flow) for the U(1) one-link model with κ = 2 and Λ_B = 10; the saddle points intersected by the thimble (left panel) are shown as blue dots, the zeros of the fermion determinant as red squares, and the dashed black curves are flow lines starting on the real axis.
Figure 11. T_flow-dependence of the reweighting factor for the U(1) one-link model with κ = 2 and Λ_B = 10.
Study on abnormal hot corrosion behavior of nickel-based single-crystal superalloy at 900 °C after drilling The hot corrosion behavior of nickel-based single-crystal superalloy after drilling is investigated at 900 °C. The characteristics of hot corrosion after drilling which are different from normal hot corrosion are reflected in the formation of a more stable oxide layer and less severe spallation. The change of microstructure around the hole is the main reason for the formation of a stable oxide layer during hot corrosion by changing the diffusion process of alloying elements. Subsequently, the formation of a stable oxide layer can reduce the effect of spalling by optimizing surface stress. INTRODUCTION The industrial gas turbines (IGTs), as the most important energy conversion mode for at least 20 years, have served a variety of service environments and conditions 1,2 . Nickel-based single-crystal superalloy becomes the primary choice of turbine blade material for IGTS because of its excellent mechanical properties, good casting properties, high oxidation resistance, and hot corrosion properties [3][4][5] . With the increasing requirements for IGTs, increasing turbine inlet temperature becomes a consensus for the development of gas turbines in the future 6,7 . However, with the continuous increase of turbine inlet temperature, it is far from enough to rely solely on the temperature bearing capacity of nickel-based single-crystal superalloy itself 8,9 . Therefore, thermal barrier coating technology and gas film cooling technology are proposed and widely used [10][11][12][13] . Advanced gas film cooling technology is widely adopted because of its extremely high thermal insulation effect 14 . However, due to the destruction of gas film holes on the overall structure of blades, it is necessary to study the properties of the superalloy after drilling 15,16 . So far, the microstructure and properties of drilled superalloys have been studied in part. Sundar Marimuthu et al. have analyzed the change of microstructure of superalloys during the fiber laser drilling 17 . Umacharn Singh Yadav et al. have studied the process of using electrical discharge machining (EDM) to drill holes and the corresponding microstructure changes 18 . It can be said that the microstructure changes around the hole obtained by using different drilling methods have been well studied. However, different drilling methods bring different degrees of microchange 19,20 . Therefore, the properties change based on different micro-changes caused by drilling is the urgent problem to be solved. Shang et al. observe the high-temperature tensile behavior of Ni-based single-crystal superalloy with cooling hole 21 24,25 . The researches above basically cover all the mechanical properties of superalloy after drilling, but it is worth noting that the change of the structure around the superalloy gas film hole not only brings about change of mechanical properties, but also has a huge impact on the oxidation and hot corrosion performance, and relevant studies on the oxidation and hot corrosion performance are not sufficient 26 . Dong et al. have conducted a preliminary study on the influence of oxidation behavior on fatigue property of Ni-based superalloy after drilling 27 . It can be seen that there are great changes in the oxidation behavior of the superalloy after drilling. 
Therefore, the hot corrosion behavior which is more sensitive to the superalloy structure can undoubtedly be greatly affected due to the drilling, while the research on the hot corrosion behavior after drilling is almost blank. So, it is of great significance and imperative to study the hot corrosion behavior after drilling. In this study, a low Cr content Ni-based single-crystal superalloy with poor hot corrosion resistance is selected to study the hot corrosion behavior after drilling for a dramatic and intuitive reaction 28 . Considering the practical service temperature of the first stage turbine blade after drilling and the reaction temperature of type-1 hot corrosion, 900°C is used for the temperature of hot corrosion experiment 29,30 . The EDM method and the hole with diameter of 1 mm which commonly used in IGTs are adopted 31 . A saturated aqueous solution of 75 wt.% Na 2 SO 4 + 25 wt.% NaCl is the molten salt medium for hot corrosion as it can truly reflect the environment where hot corrosion occurs 32 . In order to better explain the reasons why different characteristics appear between normal hot corrosion and hot corrosion after drilling, diffusion mechanism and finite element stress analysis are applied. Normal hot corrosion test The relationship between the thickness of the reaction layer and the hot corrosion time of the alloy sample is shown in Fig. 1. Considering that the blade itself is a structural part of the engine, and the position of the deepest part of the hot corrosion reaction layer will have a great influence on the service of the blade, the thickness of the hot corrosion reaction zone is measured by uniformly selecting the deepest part of the reaction layer and using Scanning electron microscopy (SEM) to calibrate it accurately. It can be seen from the curve that the thickness of the reaction zone of the sample experiences a linear growth period in the first 50 h and a growth deceleration period after 50 h until the end of the experiment. The rapid thickening of the reaction layer in the beginning 50 h reflects the general dynamic law of linear growth, while the variation trend of the thickness of the reaction layer after 50 h needs further observation. Figure 2 shows the surface and cross-section morphologies of the sample after normal hot corrosion at 900°C. The surface morphology of the sample after 200 h normal hot corrosion at 900°C is shown in Fig. 2a. It can be seen that the surface integrity of the sample is seriously damaged, with huge cracks all over the surface of the sample. A large amount of dispersed tiny oxides can be found on the surface of the sample. Combined with the weight change of the sample during hot corrosion after drilling and the surface new tiny oxides, it can be found that when the hot corrosion is carried out after 200 h, severe spallation occurs. An observation about the cross-section is made regarding the behavior of the hot corrosion after 50 h as reflected in Fig. 1 above. Figure 2b shows the cross-section morphology of the sample after 50 h normal hot corrosion. The surface of the sample still remains an oxide layer which is mainly composed of NiO. Below the oxide layer, some diffuse Al 2 O 3 and sulfides are distributed in the reaction zone. The presence of these hot corrosion products is not very significant at 50 h. Figure 2c displays the cross-section morphology of the hot corrosion sample at 100 h. Compared with the hot corrosion morphology at 50 h, the hot corrosion morphology at 100 h changes dramatically. 
First, the surface oxide layer (NiO) does not thicken with the reaction time, but its thickness and integrity decrease slightly. Second, the thickness of the reaction zone increases, and the oxide and sulfide in the reaction zone increase significantly. Finally, the size of the internal oxide increases, and the interface of the reaction zone appears obvious irregularities. As the reaction continues, the crosssection morphology changes to a certain extent when the hot corrosion reaches 200 h as shown in Fig. 2d. The oxide layer is still not thickened but the surface roughness is aggravated. At the same time, the reaction products inside the alloy are remarkable, and the thickness of the reaction layer increases. Hot corrosion test after drilling The original structures around the hole of the specimens after drilling are shown in Fig. 3. The overall morphology and the partial magnification morphology around the hole are shown in Fig. 3a, b, respectively. It can be seen that the structure around the hole is relatively complete, but there are also some relatively small bulges here. By further energy dispersive spectroscopy (EDS) analysis, the bulges are identified as Al 2 O 3 which can be shown in Fig. 3c, d. At the same time, a burning loss zone (Al-lean Ni-rich) which is about 15-μm-thick can also be found on the surface of the sample. These specimens are then used for hot corrosion experiment. The normal hot corrosion kinetic curves of the samples at 900°C after drilling are illustrated in Fig. 4. A severe hot corrosion occurs during the test to the sample undoubtedly. The mass gain and sample weight change (the effective area is 216.56 mm 2 ) keep rising during the experiment. At the beginning 40 h, the increase of weight is not drastic. After 40 h, the increasing trend of sample mass gradually becomes stronger and there is a significant increase in weight after 100 h until the finish of the test. However, no apparent spallation can be found compared to normal hot corrosion (the effective area is 239.52 mm 2 ) through the curves as the gap of mass gain and sample weight change increases a little bit (only about 10 mg cm −2 ) at the end of the experiment. The X-ray diffraction (XRD) patterns of the surface corrosion products after hot corrosion after drilling for 20, 50, 100, and 200 h are shown in Fig. 5. It can be seen that NiO is always the main component of surface reaction products and with the extension of experiment time, its dominant position becomes more and more significant. Na 2 Ta 4 O 11 and NaTaO 3 are always present, but as the degree of hot corrosion deepened, the content of them on the surface of the specimens become less and less. At the meantime, Ta 2 O 5 can only be observed on the surface at the beginning (the first 20 h) of the experiment as an initial product. Of course, from the perspective of the products and conditions of the whole reaction process, the hot corrosion experiment is largely explained by the nature of the oxidation reaction: the formation of (Ni, Co) Cr 2 O 4 , Al 2 O 3 , and NiO rather than the sodium salt. The surface morphologies around the hole after hot corrosion are displayed in Fig. 6. Figure 6a shows that the oxide layer has begun to form and some cracks in the surface layer can be clearly observed at the initial stage (20 h) of hot corrosion. Cracks also appear around the hole but do not appear to cause significant damage to the overall structure of the oxide layer. 
When the hot corrosion time reaches 50 h, the reaction degree on the sample's surface and around the hole increases slightly as shown in Fig. 6b. Some small surface oxides tend to flake off, but surface cracking is not worse. At the same time, no obvious cracking is found around the hole. As the reaction time increases to 100 h, a severe hot corrosion reaction occurs on the surface of the sample as shown in Fig. 6c. A partial oxide layer that has been removed and a larger piece of the oxide that is about to spall can be found on the sample surface. Surprisingly, though, cracks appear around the hole at this time, the oxide layer integrity around the hole remains intact. In other words, the cracking and spallation around the hole are not very serious compared with the sample surface (position away from the hole). When the reaction continues and reaches 200 h, a relatively dense and continuous oxide layer is formed on the surface of the sample as shown in Fig. 6d. Therefore, the sample does not show a very serious spallation phenomenon. Although cracks appear in some areas, the part around the hole is very intact. The cross-section morphologies around the hole after different hot corrosion time at 900°C are presented in Fig. 7. With hot corrosion lasting for 20 h, it can be seen that the surface of the hole occurs a non-dense outer oxide layer and the internal reaction zone as shown in Fig. 7a, b. Further observation suggests that the surface oxides are not so stable because some pits can be found. Meanwhile, internal sulfides and oxides also appear. Although this phenomenon is not serious, but the trend of the oxides to the internal expansion is still relatively obvious. Of course, the sulfides are mainly distributed in the reaction zone near the side of the substrate. Compared with the morphology at 20 h, when the reaction is conducted to 50 h, the change degree around the sample hole is not obvious as shown in Fig. 7c, d. There are no significant structure or product differences, except for some spallation of the surface oxides and a slight increase in the internal products. When the hot corrosion continues and reaches 100 h, the microstructure around the hole has undergone relatively dramatic changes, which can be seen in Fig. 7e, f. First, the thickness of the reaction layer around the hole increases significantly with the extension of reaction time. Second, the top layer of loose oxide is consumed, but at the same time, a relatively dense layer of Al 2 O 3 is quietly formed beneath it. Third, hot corrosion reactions involving Cr, Mo, and S occur 33 . Cr-S compounds are formed earlier than Mo-S compounds because of the strong external diffusion behavior of Cr and the lower Gibbs generating energy of Cr-S compounds. Subsequently, with the consumption of Cr caused by long-term hot corrosion, Mo appears more in the reaction zone and forms a phase with S. This long, white phase is more central to the reaction zone. Finally, the Al 2 O 3 in the reaction zone extended toward the matrix shows no obvious growth trend compared with that at 50 h. In consideration of the reaction zone (γ′-depleted zone) expanding, the formation of dense Al 2 O 3 on the surface can be verified. At the last stage of the hot corrosion, the hot corrosion reaction zone is expanded to a large scale at 200 h as shown in Fig. 7g, h. 
The relatively dense oxide layer mainly composed of Al 2 O 3 which still remains at 100 h of hot corrosion reaction reaches a considerable scale equal to about 40 μm at the end of the experiment. Compared to 100 h, the oxide layer around the sample hole is obviously thicker and its continuity is also complete at 200 h. There are no apparent penetrating cracks around the hole. Certainly, with the deepening of the reaction degree, more and more Mo-S phase appears in the reaction zone. The cross-section morphologies of the surfaces near the holes after different hot corrosion time at 900°C are shown in Fig. 8. From the point of view of reaction products only, there is no significant difference between the surface products near the hole and the products around the hole at the same reaction time. But it can still be seen that the difference in the distribution of the products appears. When the reaction comes to 20 h, a thin layer of Al 2 O 3 begins to form, and the internal distribution of Al 2 O 3 is not very obvious as shown in Fig. 8a. Then, when the reaction continues to 50 h, the reaction degree is significantly intensified, the Al 2 O 3 layer becomes thicker, and the internal dispersion of Al 2 O 3 is much more variable which can be seen in Fig. 8b. As the reaction continues, although the Al 2 O 3 layer still exists at 100 h, the reaction layer has significantly thickened, and it becomes very loose. Large amounts of internal oxides appear as dispersion as shown in Fig. 8c. When the hot corrosion lasts for 200 h, the dispersed oxides show a layered structure and begin to become dense. The thickness of the Al 2 O 3 layer also increases to a certain extent. Although there are still more loose oxides near the matrix which can be found in Fig. 8d. DISCUSSION The spallation is the most direct and obvious difference between normal hot corrosion and hot corrosion after drilling. The reasons for this difference will be analyzed in detail. Whether it is oxidation or hot corrosion, the structure of the oxide layer on the surface is the most direct factor to reflect the root cause of its spallation 34 . It can be found from the experimental results above that the surface structures of the samples with normal hot corrosion and hot corrosion after drilling are quite different. Under normal hot corrosion conditions, the surface oxide layer is mainly composed of loose and easily spalling NiO, while around the hot corrosion hole is composed of relatively dense and stable Al 2 O 3 . It leads to the difference in the degree of reaction and spallation between the different hot corrosions. According to the above experimental results, as shown in Fig. 3, the surface morphology of the sample after drilling shows the characteristics of burning loss zone (Al-lean Ni-rich) and dispersed Al 2 O 3 particles. The particular morphology is related to the drilling behavior. EDM is a hot discharge process that causes a remelting layer forming around the hole. Because of the high temperature in the EDM process, the element-burning phenomenon is obvious in the remelting layer. Due to its relatively active nature, the Al element suffers more serious loss in the process of burning than other elements, which results in the formation of the burning loss zone with the characteristics of Al-lean. At the same time, there are micro-area arc discharges in the EDM process. The random discrete arc discharges cause the Al element to be oxidized rapidly and form Al 2 O 3 particles deep into the matrix. 
The above discharge and high-temperature processes contribute to the formation of the particular morphology shown in Fig. 4 [35][36][37]. Based on this situation, the formation of the subsequent oxide layer also shows particular features. Both oxidation and hot corrosion in the narrow sense can be considered to be controlled by diffusion processes, including the outward diffusion of alloying elements and the inward diffusion of oxygen, as shown in Supplementary Fig. 1. Assuming that thermodynamic equilibrium is established at each interface during hot corrosion, the process can be analyzed as follows [38][39][40]. The outward element flux, j_M, can be expressed as in Eq. 1, where x is the reaction zone thickness, D_M is the diffusion coefficient of the alloying element, and C'_M and C''_M are the element concentrations at the reaction zone-molten salt and reaction zone-substrate interfaces, respectively. Furthermore, there is also a correlation between diffusion and the reaction products, as shown in Eq. 2, where V_rx is the molar volume of the reaction products. Then, integrating from the initial condition x = 0 at t = 0, Eq. 4 can be obtained. By analyzing the above equations, the value of (C''_Al − C'_Al) is constant in normal hot corrosion. However, after the sample is drilled, the value of (C''_Al − C'_Al) decreases continuously due to the existence of the surface burning loss zone (the Al element spontaneously diffuses from the substrate to the surface through the burning loss zone, as the burning loss zone itself is Al-lean), which means that the hot corrosion reaction zone at the surface is smaller. Further, the hot corrosion reaction can be simplified into two parts: one is the fluxing of Ni and molten salt in the upper part of the burning loss zone, and the other is the oxidation of inward-diffusing O and outward-diffusing Al in the lower part of the burning loss zone. Therefore, the Al2O3 layer can be formed under the protection of the concentration gradient and the outer layer of Ni. This is why the abnormal hot corrosion can occur. Figure 9 shows the schematic diagram of oxide layer formation during hot corrosion after drilling and of oxide layer formation on differently shaped surfaces after hot corrosion. Besides the elemental diffusion process described above, which is reflected in Fig. 9a, Al2O3 particles also play a role in oxide layer formation during hot corrosion after drilling. As mentioned above, the Ni in the burning loss zone on the surface of the sample will react with the molten salt preferentially (fluxing process), which, to a certain extent, makes the formation of an Al2O3 layer possible. But when the reaction is catastrophic, that is, if the reaction product NiO is extremely loose and easily spalled, then it will lose its protective effect and may even accelerate the hot corrosion. At this point, the effect of the Al2O3 particles comes into play. When hot corrosion occurs, NiO begins to form, and the solid-phase reaction shown in Eq. 5 occurs 41,42. The solid-phase reaction product NiAl2O4 has stable bonding; it exists both in the substrate and in the hot corrosion product, connecting the two parts like nails, which slows down the spallation of the surface oxide layer to a certain extent and provides an opportunity for the formation of the Al2O3 layer 43. The above explains why the Al2O3 layer can be formed; its compactness is briefly analyzed below.
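The displayed equations referred to above as Eqs. (1)-(5) do not appear in the extracted text; based on the surrounding definitions they presumably take the following Wagner-type form (a hedged reconstruction in our notation, which may differ from the original in sign conventions or prefactors):

```latex
j_{M} = \frac{D_{M}\,\bigl(C''_{M} - C'_{M}\bigr)}{x} \quad (1), \qquad
\frac{dx}{dt} = V_{rx}\, j_{M} \quad (2), \qquad
\frac{dx}{dt} = \frac{V_{rx}\, D_{M}\,\bigl(C''_{M} - C'_{M}\bigr)}{x} \quad (3),

x^{2} = 2\, V_{rx}\, D_{M}\,\bigl(C''_{M} - C'_{M}\bigr)\, t \quad (4)
\quad \text{(integrating with } x = 0 \text{ at } t = 0\text{)}, \qquad
\mathrm{NiO} + \mathrm{Al_2O_3} \rightarrow \mathrm{NiAl_2O_4} \quad (5).
```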
Figure 9b displays the schematic diagram of oxide layer formation on differently shaped surfaces after hot corrosion. It can be seen that the growth of the oxide layer is different on different surfaces. On a flat plane, the growth of the oxide layer is uniform, while on a circular (hole-wall) surface, the growth of the oxide layer is convergent. It is assumed that the radius of the hole after hot corrosion is r_i = 470 μm and the oxide layer thickness is h = 30 μm. Compared with a planar oxide layer of the same size, its volume shrinks (Vol_s) by about 3%, as shown in Eq. 6:

Vol_s = [2π·500·30 − π(500² − 470²)] / (2π·500·30) ≈ 3%   (6)

Considering that the Pilling-Bedworth ratio of Ni3Al after oxidation is normally 1.71-1.88, its influence on the oxide density is more than 5% 44,45. Moreover, this shrinkage ratio increases as the hole size decreases and as the oxide layer thickness increases. That is to say, under the same conditions, the oxide layer formed on the inner wall of the hole is denser than the planar oxide layer and less likely to spall. The previous section explains why a stable and dense layer of Al2O3 is formed around the hole. This section focuses on the role of the Al2O3 layer. After hot corrosion, it is observed that although the mass gain of the whole drilled sample is not much different from that of the undrilled sample, the degree of spallation is greatly reduced; in particular, the inner wall of the hole does not show obvious spallation, as shown in Fig. 6. Hot corrosion is catastrophic mainly because it is difficult to form a stable oxide layer (owing to severe spallation), and the spallation of the oxide layer is closely related to the stress state 46. The result indicates that the stability of the oxides on the surface of the sample has been improved to some extent. In order to better illustrate the hot corrosion process after drilling and to explore the role of the oxides in hot corrosion, the finite element simulation software ABAQUS is used to analyze the stress state of the surface oxides 47,48. In view of the experimental results in the previous section, NiO is significantly consumed during hot corrosion while the Al2O3 layer remains stable, so the simulated microstructure around the hole is simplified: an Al2O3 layer is established in accordance with the experimental results and grows parabolically with time. For detailed simulation parameters, please refer to the Supplementary Notes. Figure 10a-d displays the stress state of the oxide layer (Al2O3 layer) on the inner wall of the hole. It can be seen that, for the oxide layer on the inner wall of the hole considered separately from the rest of the surface oxide layer and the substrate, the radial direction is subject to a smaller compressive stress, while the tangential direction is subject to a greater compressive stress as the Al2O3 layer grows. Since the hot corrosion process tends to loosen the oxide layer, a stable compressive stress helps to alleviate this loosening, increases the density, and to some extent inhibits the catastrophic hot corrosion driven by diffusion. This indicates that the oxide layer (Al2O3 layer) on the inner wall of the hole tends to remain stably on the surface of the substrate because of the compressive stress state of the surface oxide layer. The structural stability of the Al2O3 layer on the inner wall of the hole is described above. Subsequently, simulations of the stress state of the overall oxide layer of the alloy changing with time are shown in Fig. 10e-h.
Considering that the inner-wall Al2O3 layer has already formed stably after 20 h of hot corrosion, the reaction state at 20 h is set as the initial state for the simulation, and the stress state over the whole experiment is then analyzed. It can be seen from the simulation results that the Al2O3 layer on the surface of the sample is always subjected to a compressive stress pointing toward the matrix, and that the stress level increases with reaction time; this holds even without taking into account the tendency of the oxide growing on the inner wall of the hole to converge outward (the smaller space and the compressed volume cause the Al2O3 layer to become dense and stable). In other words, the compressive stress increases with the thickness of the oxide layer as the reaction time is prolonged. This increased compressive stress produced by the oxide layer on the inner wall over time can provide structural stability to the oxide layer on the overall surface, which is reflected, to some extent, by the large stress zone around the hole. Combined with the overall structure of the oxide layer, this is enough to show that the formation of an Al2O3 layer in the inner hole can reduce the oxide spallation of the alloy. This enhancement effect is attenuated on the undrilled sample surface (open state) because of the oxide spallation, but it is better reflected at the convergent interface of the inner wall of the hole 49. It shows that Al2O3 plays a significant role during the hot corrosion experiment after drilling. Combining the two sections above, drilling provides the possibility for the formation of a stable Al2O3 layer during the hot corrosion process, and the existence of a stable Al2O3 layer can effectively prevent the oxide layer from spalling. This positive facilitation process leads to the unusual hot corrosion behavior.
Fig. 9. Schematic diagram of the formation of the oxide layer: (a) oxide layer formation during hot corrosion after drilling; (b) oxide layer formation on differently shaped surfaces after hot corrosion.
In summary, the hot corrosion behavior of the nickel-based single-crystal superalloy after drilling has been studied at 900°C. The normal hot corrosion experiment and the hot corrosion experiment after drilling are compared, and the changes of the different oxide layers and the corresponding spallation are analyzed. The main conclusions are as follows:
1. For the nickel-based single-crystal superalloy with low Cr content, severe hot corrosion occurs at 900°C, and the oxide layer continuously spalls and cannot form stably.
2. An Al-lean zone and an Al2O3 bulge structure are formed around the hole of the nickel-based single-crystal superalloy after drilling, which can improve the hot corrosion performance to some extent.
3. Compared with normal hot corrosion, a relatively stable and dense Al2O3 layer forms around the hole after hot corrosion, which depends on the different element diffusion behavior in the Al-lean layer and on the stabilizing effect of the Al2O3 bulge structure on the oxide layer.
4. In addition to the influence of the change of microstructure on the formation of the Al2O3 layer, the convergence effect brought about by the shape of the hole also improves the density of the Al2O3 layer.
5. When the stable Al2O3 layer is formed, it produces a stress pointing toward the matrix, which maintains the stable existence of the oxide layer and reduces the spallation caused by hot corrosion.
Preparation of materials The nominal composition of the experimental alloy is listed in Table 1. The single-crystal superalloy bars are directionally solidified by the liquid metal cooling method. The (001)-oriented René-N5 seed crystal is used to ensure the growth from rod crystal into single-crystal. Then, a solution heat treatment is applied. After solution heat treatment, an aging heat treatment is carried out for the alloy bars successively. Electro-spark wire-electrode cutting is used to cut the hot corrosion samples with a size of ø10 × 3 mm and then machines cylindrical sides by a lathe process. The surfaces of all alloy samples are ground by grinding from #60 to #1000 emery papers and washed with alcohol to remove dirt subsequently 50 . Drilling and hot corrosion testing In order to study the hot corrosion behavior of the microstructure near the alloy cooling hole, drilling, and then hot corrosion tests are carried out. Due to the severe hot corrosion, the surface reaction is intense, and the low Cr content alloy is used to conduct an experiment for the most obvious result. First, the hole with a diameter of 1 mm is machined by electric spark (ZGDC406 EDM numerical control punch Suzhou Zhonggu Machine & Electronic Technology Co. Ltd) at the center of the prefabricated alloy samples as shown in Supplementary Fig. 2 51 . Subsequently, the normal hot corrosion tests are carried out. Each sample is sprayed with a saturated aqueous solution of 25 wt.% NaCl + 75 wt.% Na 2 SO 4 (simulation of actual hot corrosion environment) through a little spray bottle and followed by drying. At the same time, to ensure that the inner surface of the hole is also sprayed with salt, a micron tube is used to assist in spraying the salt solution into the inside of the hole. After that, each specimen is weighted to ensure that a 0.3-0.5 mg cm −2 salt is existed on the surface of the sample. Each sample is placed into a crucible separately, and all the crucibles are put into a furnace at 900°C for hot corrosion, and the kinetics of hot corrosion is studied. The experiment is suspended every 5 h to weigh the weight change with both "mass gain" (including the weight of the corrosion spallation in the crucible) and "sample weight change" (referring just the weight of the specimen) and then the salt is added to make sure enough salt for the subsequent reaction. Before each cycle of the weigh and salt replenishment, the samples surfaces are cleaned simply by sitting in deionized water for 5 min to ensure the accuracy of hot corrosion test to a certain extent. When the tests last for 20, 50, 100, and 200 h, specimens are taken out of the furnace to observe the microstructure. Analyzing methods An analytical balance with a minimum sensitivity of 0.01 mg is used to weigh the weight changes of the samples and prepare salt solution. XRD is used to identify the hot corrosion products. SEM with SE and BSE detectors, an EDS, and electron backscattered diffraction (EBSD) is used to study the surfaces and cross-section morphologies, characterize the elements distribution, and analyze the sulfides information. DATA AVAILABILITY The raw/processed data required to reproduce these findings cannot be shared at this time as the data also forms part of an ongoing study. All relevant data can be available from the authors if required.
The Failure of Economic Theory . Lessons from Chaos Theory The crisis that was being shaken the world economy should push economists to wonder about the approach used to analyse economic phenomena. The motivations that have generated it, describing a whole of interdependencies, interacttions, are clear and convincing. But a question remains: if the situation is so clear a posterior why economists have not been able to foresee it? What is happening to economic science if it is not able to recognize an economic crisis before it “steps on it”? How is it possible that the economic science was caught off guard yet again? Besides, what is the implication for the status of economics as a science if it is not able to successfully deal with real economic problems? The aim of the paper is to show the weakness of traditional economic theory and what improvements in terms of description and foresight could be obtained applying chaos theory to the study of economic phenomena. Introduction In his work of 1992 Allais said: "…the essential condition of any science is the existence of regularities which can be analysed and forecast.This is the case of celestial mechanics but it is true for many economic phenomena whose analysis displays the existence of regularities which are similar to those found in the physical sciences.This consideration is the basis of why economics is a science, and why this science can rest on the same general principles and methods of classical thermodynamics and in general as Physics" [1]. This sentence resumes the opinion of almost of economists have long been trying to build economic models that apply mathematical laws of hard sciences, in particular, physics.The aim was to create a real mathematical economics on those models. Some of important exponents of neoclassic economics explicitly declared their intentions of transferring to economics the concepts and the methods used in physics [2]. Jevons [3] said that "Economics, if it is to be a science at all, must be a mathematical science […] mechanics of utility and self-interest."Walras maintained "that economics, like astronomy and mechanics is both an empirical and a rational science."Its explanation of the existence of an "auctioneer," whose only purpose was to generate equilibrium prices evokes Maxwell's imaginary demon.Fisher, developed a mechanical analogy between economics and physics, claiming force and distance to be analogous to price and number of goods, respectively. Later von Neumann [4], Samuelson [5]), and Georgescu-Roegen [6] proposed a description of economic systems following the classical thermodynamics 1 .Samuelson acknowledges that the relationships between pressure and volume in a thermodynamic system bear a striking similarity in terms of differentials to price and volume in an economic system.Economics is formally identical to thermodynamics because they are both problems of static constrained optimisation. Various reasons supported this research of similarities and/or analogies.On one hand, physics was a science that they were all well acquainted with2 ; on the other hand it was very highly considered for the significant successes it had achieved and by its extensive use of mathematics. In particular this last aspect constituted the primary element to give a discipline such as economics, whose mathematical foundations were rather vague at the time, a more scientific character. 
The possibility that there should be similarities of structure or interpretation in the mathematical modelling of economic and physical systems has been an important focus in the economic speculation that produced the neoclassical theory based on the following assumptions. Firstly, the representative agent who is a scale model of the whole society with extraordinary capacities, particularly concerning the area of information processing and computation.Secondly, it proposes equilibrium as a natural end of economic systems.Lastly, linear models or, at least, the linearization of models have been traditionally preferred by economists.So described, Economics is largely a matter of formalized thin fiction that has little to do with the wonderful richness of the facts of the real world.The criticism 3 often voiced was that "these assumptions are frequently made for the convenience of mathematical manipulation, not for reasons of similarity to concrete reality" [8].Now because "economics is a science of thinking in terms of models joined to the art of choosing models that are relevant to the contemporary world, despite the undisputable success of those models their limitations are nowadays hard to ignore. Since the 1970s the irruption of the nonlinearity led to a profound transformation of numerous scientific and technical fields and Economics does not escape this revolution.Chaos theory, in particular has improved the probabilities of achieving good results in the modelling of phenomena and their empirical analysis. In economics chaos theory has attracted particular attention because of its ability to produce sequences whose characteristics resemble the fluctuations observed in the market place.Most economic variables whether microlevel, such as prices and quantities, or at the macro-level, such as consumption, investment and employment, oscillate and these oscillations too often were interpreted simply as exogenous shocks. However the goal of this paper is to redirect the attention of economists and policy-makers towards an alternative approach that significant results have reached in other scientific fields and that could contribute to ameliorate the economic analysis.We did not claim that this alternative approach would have provided answers, or that chaotic models could have predicted the crisis.Given the complexity of economy, "we believed that a healthy profession would be working on a variety of models and that it would be engaged in a vigorous debate about what the various models were telling us and which models were better" [10].Therefore starting from the descriptions about the basic assumptions made in economic theory the aim of this paper is to highlight the major contribute of chaos theory in improving the description, analysis, and control of economic processes and the re-sults reached until now by economists that have applied this theory to economic analysis. The paper is structured as follows.In the Section 2 we will analyse the methodology used to build a model describing economic phenomena.In the Section 3 the improvements in the economic analysis reached application of chaos theory will be displayed.In the Section 4 the question of presence of chaos in economic time series are described.Some conclusions are presented in Section 5. Economic Mainstream The economics profession spends much of its time on what was called a Walrasian general equilibrium model, based on the assumptions, which are more analytically tractable and interesting for a few limited phenomena. 
The large majority of economic models share a common element: they depart, in one way or another, from the benchmark of competitive markets with fully rational agents (consumer and firms), representative of all members of some class of agents with identical preferences and endowments. The market is a locus of impersonal exchange activities, where agents buy and sell products with defined characteristics, at prices that-according to standard economic theory-reflect supply and demand induced equilibria. Economic theory accords these prices the role of principal communication media between agents, who use the information prices convey to drive the actions they take in the economy.Relationships between agents do not count for much in "the market".What matters is how each agent separately values each action in "the market", values that "the market" then aggregates into prices.Individuals are the basic unit of analysis.Economic phenomena are decomposed into sequences of individual actions set aside culture, psychology, class, group dynamoics, or other variables that suggest the heterogeneity of human behaviour.Economic actors are treated as equivalent or the differences among them are presented as attributes of individuals.The regularities not the differences are considered.This is what required from the assumption of representative agent, a scale model of the whole society with extraordinary capacities, particularly concerning her capability of information processing and computation.But that is not the only restrictions to economic analysis.The second one is the study of economic systems in a state of equilibrium.The last one is the use of linear models or, at least, its linearization in the neighbourhood of equilibrium. The reductionist approach, applied by traditional economic theory, often overlooks the dependencies or interconnections among elements and their influence upon macroeconomic behaviour.Its focus is not to study the unfolding of the patterns its agents create, but rather to simplify its questions in order to simplify and seek closed analytical solutions. The Representative Agent In order to abstract from heterogeneity, which allows the application of rigorous calculus and to economics to gain deep insights embedded in a formal elegant framework, the explanation of human behaviour is brought back that of representative agent: an agent that acts with rationality when making choices and her choices are aimed to optimization of her utility or profit.What is taken as "rational" is of chief importance because rationality is used either to decide which course of action would be the best to take, or to predict which course of action actually will be taken, to have time and ability to weigh every choice against every other choice and finally to be aware of all possible choices.Further, individual preferences are taken to be given a priori, rather than constructed and revised through ongoing social processes: they are primitive, consistent, and immutable. In a more formal sense the economic agents are said to have transitive and consistent preferences and seek to maximize the utility that they derive from those preferences, subject to various constraints.They operate according to imperative choice: given a set of alternatives, choose the best. 
This process of choice postulates utility values associated with possible states of the world perfectly foreseen in which situations with higher utilities are preferred to those with lower ones.Choices among competing goals are handled by indifference curves-generally postulated to be smoothing (twice differentiable)-that specify substitutability among goals. The consumer maximizes his utility subject to the budget constraint and solve out the maximization problem in order to get some form of demand function for the consumer. The solution of this optimization problem is an individual demand curve used as the exact specification of the aggregate deduced just summing up the behavior of agents that compose a market or an economy. So the result of decision problem of the representative economic unit is the results of aggregate quantities.There are not significant differences between micro end macro levels: the dynamics of this latter is just the summation of dynamics of the former.The behaviour of an economic group is adequately represented by that of a group, each of whose members have the identical characteristics of the average of the group. Consider the efficient market hypothesis, which has ruled the root for some years in finance.Its originator was Louis Bachelier, who developed the notion of Brownian motion at the turn of the twentieth century.His argument that stock prices should follow this sort of stochastic process, after years of being ignored, was acclaimed by economists both for analytic and ideological reasons.Then Henri Poincaré [11], French mathematician, observed that it would not be sensible to take this model as a basis for analyzing financial markets.Individuals who are closed to each other, as they are in a market, do not take independent decisions-they watch each other and "herd".Thus Poincaré clearly envisaged one of the most prevalent features of financial markets long before modern economists took this theme up to explain "excess volatility" [12]. Markowitz [13] developed his theory of optimal portfolio using the assumption that the changes in returns on assets had a Gaussian distribution.Despite the empirical evidence and the pleas of Mandelbrot and others, this assumption prevailed, since one could apply the central limit theorem to it, unlike the family of Levy stable distributions favoured by Mandelbrot.The same thing applies to the development of Black-Scholes [14] option pricing.This again relies on the refutable and often-refuted assumption that the price of an asset follows a lognormal process [12].Some doubts come up: does the real economy work in this way?Is this approach adequate to describe a world in which agents use inductive rules of thumb to make decisions, they have incomplete information, they are subject to errors and biases, they learn to adapt over time, they are heterogeneous, they interact one another, in a few words are not rational in a conventional sense? The reality provides a wealth of evidence showing that the rationality in question has little or nothing to do with how people behave. Equilibrium Models Strictly connected with representative agent is the equilibrium notion meant to provide a credible explanation of observed economic phenomena and a guide to economic policy making. 
"A characteristic feature that distinguishes economics from other scientific fields is that, for us, the equations of equilibrium constitute the center of our discipline.Other sciences, such as physics or even ecology, put comparatively more emphasis on the determination of dynamic laws of change" [15].As said above Economics seeks to describe phenomena in terms of solutions to constrained optimization problems. The consumers determine how much they wish to demand of some good as function of its price.Similarly the producers determine using the same maximization process the amount of that good that desire to supply. Denoted with D(p) the demand curve and with S(p) the supply curve if a price p the amount of the good demanded D(p) will be compatible with amount supplied S(p) the market is in equilibrium.The agents' trades are compatible and we say "a unique and stale equilibrium exists". The normal economic order is a static equilibrium state plus small random noise. To have a unique and stable equilibrium it needed to impose to the model four basic constraints. First, increasing returns to scale are not allowed the hypothesis is of decreasing returns.Second, information diffusion and reaction does not occur among the agents: they are rational; they have all information about their actions.Third, the dimension of commodity space is fixed where no product innovations are allowed.Fourth resource limits and market extent are ignored. The equilibrium in a general equilibrium model is not necessarily either unique or stable. Colander [16] identifies three distinguishing characteristics of the post-Walrasian perspective.First, the equations necessary to describe the economy have multiple equilibria and complex dynamics.Second, individuals act on the basis of local, bounded rationality, since global rationality is beyond anyone's information processing capabilities.Finally, institutions and non-price coordinating mechanisms are the source of systemic stability in a market economy.It is widely believed among economists that equilibrium economics provides a consistent framework in economics, which is capable in explaining almost everything from demand and supply in micro, money and unemployment in macro, corporate finance and asset pricing in finance, even firms and law in institutional economics. Linear Models For a long time scientific models of exact sciences were built starting from the consideration that causal mechanisms of natural phenomena were linear.The world of classical science has shown a great deal of interest in linear differential equations for a very simple reason: apart from some exceptions, these are the only equations of an order above the first that can be solved analytically.Linearity is intrinsically "elegant", because it is expressed in simple, concise formulae, and a linear model is aesthetically more "attractive" than a nonlinear one. Following this tendency the economic science described the economic phenomena using linear equations and when irregular behaviour of some nonlinear relations are found, they are not appreciated because they are difficult and intractable to deal with.So they have been explained as stochastic or linearized. 
Linear Models

For a long time, the models of the exact sciences were built on the assumption that the causal mechanisms of natural phenomena were linear. The world of classical science has shown a great deal of interest in linear differential equations for a very simple reason: apart from some exceptions, these are the only equations of order above the first that can be solved analytically. Linearity is intrinsically "elegant", because it is expressed in simple, concise formulae, and a linear model is aesthetically more "attractive" than a nonlinear one. Following this tendency, economic science described economic phenomena using linear equations, and when irregular behaviour arising from nonlinear relations was found, it was not welcomed, because nonlinearities are difficult and intractable to deal with. Such behaviour has therefore been explained away as stochastic, or the relations have been linearized.

Because it may be difficult to deal with many variables in a model, economists use numerical approximations or linearization around a "steady state", where in economics a steady state is a point x such that if x is an equilibrium at time t, then it is also an equilibrium at time t + 1. The simplicity of linearization, and the success that it has at times enjoyed, have imposed, so to speak, the perspective from which scientists observe reality, encouraging scientific investigation to concentrate on linearity in its descriptions of dynamic processes. Studying the dynamics of systems in conditions close to a stable equilibrium has been preferred because there the forces in play are small. The idea is simply that, since the terms ignored in linearizing the equations are small, the difference between the solutions of the linearized equation and those of the nonlinear equation assumed to be "true", but unknown, ought to be small as well. However, this is not always the case, and many fundamental problems remain unsolved.

Conceptualizing, measuring, and modeling linear cause-effect relationships in economic systems is sometimes ineffective and inefficient. On the contrary, it is usually closer to reality to propose that the relationships among economic agents and variables are nonlinear. Nonlinearity implies the loss of the proportional relation between a perturbation and the effect it propagates in time, a relation that is assumed in many economic models. Nonlinear dynamics tend to arise as the result of relaxing the assumptions underlying the competitive-market general equilibrium approach: they are the fruit of increasing returns to scale, bounded rationality, and heterogeneity of expectations. The reductionist approach applied by traditional economic theory overlooks these dependencies and interconnections among elements and their influence upon macroeconomic behaviour, so both deterministic and stochastic descriptions are used to define the main features of economic dynamics [17].
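The linearization step described above can be illustrated on a simple one-dimensional law of motion; the Solow-style capital accumulation map below is an assumed example, not a model taken from the text. The steady state satisfies f(k*) = k*, and near it the dynamics are replaced by k_{t+1} − k* ≈ f′(k*)(k_t − k*).

```python
# A hedged illustration of linearization around a steady state for a generic
# one-dimensional map k_{t+1} = f(k_t); parameter values are assumed.
import numpy as np
from scipy.optimize import brentq

s, alpha, delta = 0.3, 0.33, 0.1
f = lambda k: (1 - delta) * k + s * k**alpha          # capital accumulation map
fprime = lambda k: (1 - delta) + s * alpha * k**(alpha - 1)

k_star = brentq(lambda k: f(k) - k, 0.1, 100.0)       # steady state: f(k*) = k*

def simulate(k0, T=30, linear=False):
    k, path = k0, []
    for _ in range(T):
        k = k_star + fprime(k_star) * (k - k_star) if linear else f(k)
        path.append(k)
    return np.array(path)

k0 = 0.9 * k_star                                     # start near the steady state
exact, approx = simulate(k0), simulate(k0, linear=True)
print(f"steady state k* = {k_star:.3f}, slope f'(k*) = {fprime(k_star):.3f}")
print("max |exact - linearized| over 30 periods:", np.abs(exact - approx).max())
```

Close to the steady state the discrepancy is tiny, which is precisely why linearization is so seductive; starting further away, or choosing a map whose slope at the fixed point approaches or exceeds one in magnitude, makes the neglected nonlinear terms matter, which is the point of the criticism above.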
Chaos Theory in Economics

The new perspective opened by chaos theory has spawned significant changes in mainstream economic theory, improving the chances of achieving good results in the modelling of phenomena and their empirical analysis. The challenge the macroeconomic theorist faces is whether she is capable of changing benchmark models so as to make more realistic assumptions and, at the same time, attain more realistic results. Chaos theory stimulates the search for a mechanism that generates the observed movements in real economic data while minimizing the role of exogenous shocks. In this sense it could represent a shift in thinking about methods for studying economic activity and in the explanation of economic phenomena such as fluctuations, instability, crises, and depressions. Economists began to look at the analysis of global dynamics in the late 1970s and the 1980s, with important work by Medio [18], Stutzer [19], Benhabib and Day [20], Day [21], Grandmont [22], and many others, some of whom are referred to in the following sections.

In 1980 the pioneering work by Benhabib and Day [20,21] was important in making economists aware of the potential usefulness of chaos theory and its tools for analyzing economic phenomena. Since this work, an enormous number of papers have investigated the presence of chaotic dynamics in standard models. Benhabib and Nishimura [23] employ the Hopf bifurcation in their study of how the properties of an optimal growth model are affected by the discount rate. Benhabib and Day [20], Grandmont [22], and Boldrin and Montrucchio [24] derived chaotic business cycle models from utility and profit maximization principles within the general equilibrium paradigm of perfectly competitive markets and rational expectations.

Day [21] attracted considerable attention to the possibility of chaos in two quite familiar contexts: a classical growth model and a Solow growth model. Chaos has also been analyzed in the context of a multiplier-accelerator-type model by Dana and Malgrange [25]. Deneckere and Pelikan [26] discuss some necessary conditions for chaos. Hommes [27] showed how easy it is to produce chaos in Hicksian-type models with lags in investment and consumption. Bala et al. [28] located sufficient conditions for robust ergodic chaos to appear in growth models. Mitra [29] shows the existence of chaotic equilibrium growth paths within a model of endogenous growth with externalities.

Grandmont [30] is concerned with the effects of various government policies, while Grandmont and Laroque [31] demonstrate the importance of the expectations formation mechanism for the stability of the economy. Farmer [32] and Reichlin [33] both consider production economies, and both make use of the Hopf bifurcation, which is often thought to be more robust than the flip bifurcation. In Farmer [32] chaos depends upon the government's debt policy. In Reichlin [33] it is shown that fiscal policy can cure chaos, in the sense of suppressing it. Chiarella [34] introduced a general nonlinear supply function into the traditional cobweb model under adaptive expectations and showed that in its locally unstable region it contains a regime of period-doubling followed by a chaotic regime. Puu [35] studied the nonlinear dynamics of two competing firms in a market in terms of Cournot's duopoly theory; assuming iso-elastic demand and constant unit production costs, this model shows persistent periodic and chaotic motions. A common feature of the models described above is that nonlinear dynamics tend to arise as the result of relaxing the assumptions underlying the competitive-market general equilibrium approach.
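The period-doubling route to chaos mentioned for the cobweb model can be reproduced with the simplest one-hump map. The logistic map below is used purely as a generic stand-in for the kind of map that Day-type growth models reduce to; it is not one of the cited models, and the parameter values are arbitrary.

```python
# Sketch of the period-doubling route to chaos in a one-hump map,
# x_{t+1} = r x_t (1 - x_t), used as a generic stand-in.
import numpy as np

def attractor_points(r, n_transient=500, n_keep=64):
    """Iterate past the transient and return the distinct values visited."""
    x = 0.4
    for _ in range(n_transient):
        x = r * x * (1 - x)
    pts = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        pts.append(round(x, 6))
    return sorted(set(pts))

# Note: narrow periodic windows exist inside the chaotic range; the labels below
# are only a rough diagnostic based on the number of distinct values sampled.
for r in (2.8, 3.2, 3.5, 3.58, 3.9):
    pts = attractor_points(r)
    label = f"{len(pts)} point(s)" if len(pts) <= 8 else "aperiodic / chaotic band"
    print(f"r = {r:4.2f}: {label}")
```

As r increases the attractor goes from a fixed point to a 2-cycle, a 4-cycle, and finally an aperiodic band, the sequence the text refers to as a period-doubling regime followed by a chaotic regime.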
Chaos Control in Economics

Economists' interest in chaos theory also derives from the fact that it offers a new perspective on system control strategies, with some particularly interesting insights for economic policy. The prevailing opinion among scientists was that chaotic motion is neither predictable nor controllable because of the sensitive dependence on initial conditions: small disturbances lead only to other chaotic motions and not to any stable and predictable alternative. Ott, Grebogi, and Yorke [36] proposed an ingenious and versatile method for controlling chaos. The key achievement of their paper was to show that a chaotic system can be controlled by very small, "tiny" corrections of its parameters. This observation opened possibilities for changing the behaviour of natural systems without interfering with their inherent properties.

If the system is non-chaotic, the effect of an input on the output is proportional to that input. Conversely, when the system is chaotic, the relation between input and output is made exponential by the sensitivity to initial conditions. We can therefore obtain a relatively large improvement in system performance by using small controls [37,38]. These considerations are particularly interesting for the control of economic systems. First, moving from a given orbit to another on the attractor means choosing a different behaviour for the economic system, that is, a different trade-off in economic policy. This richness of possible behaviours (many aperiodic orbits) in chaotic systems may be exploited to enhance the performance of a dynamical system in a manner that would not be possible if the system's evolution were not chaotic. Second, the amount of resources needed to employ an instrument of control in order to achieve a specific goal of economic policy will be smaller than with traditional control techniques. Lastly, using control based on sensitivity to initial conditions could mean greater efficiency, especially in terms of the resources needed to accomplish economic policy goals. Therefore, if the system is chaotic, limited resources do not reduce the possibility for policy-makers to reach predetermined goals of economic policy.

The government may be able to manipulate some policy parameters in order to shift the economic system from a position of chaos to a fixed-point outcome and in this way fulfil its stabilization goal (if one accepts the idea that the government should be mainly concerned with eliminating or mitigating fluctuations). A problem with the manipulation of policy parameters is that the changes needed to leave instability or chaos and achieve a fixed point are often unrealistic [39].

The methods of controlling chaotic dynamics have been applied to economic models in works by Holyst [40], Holyst and Urbanowicz [17], and Kaas [41]. Kopel [42] uses a simple model of evolutionary market dynamics to show how chaotic behaviour can be controlled by making small changes in a parameter that is accessible to the decision makers, and how firms can improve their performance measures by use of the targeting method. Xu et al. [43] introduced an approach for detecting unstable periodic orbit (UPO) patterns in chaotic time series data from the Kaldor business cycle model. Kaas [41] proved that, within a macroeconomic disequilibrium model, stationary and simple adaptive policies are not capable of stabilizing efficient steady states and lead to periodic or irregular fluctuations for large sets of policy parameters. The application of control methods to chaotic dynamical systems shows that the government can, in principle, stabilize an unstable Walrasian equilibrium in a short time by varying income tax rates or government expenditures.
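A hedged sketch of the Ott-Grebogi-Yorke idea on the same toy logistic map (an assumed stand-in, not an economic model from the text): when the state wanders close to the unstable fixed point, a tiny, temporary adjustment of the parameter r is chosen so that the next iterate lands back on the fixed point.

```python
# OGY-style control of the logistic map x -> r x (1 - x): linearize around the
# unstable fixed point and nudge the parameter only when the state is nearby.
import numpy as np

r0 = 3.9                        # nominal parameter (chaotic regime)
x_star = 1.0 - 1.0 / r0         # unstable fixed point
lam = 2.0 - r0                  # df/dx at the fixed point (|lam| > 1: unstable)
g = x_star * (1.0 - x_star)     # df/dr at the fixed point
eps, dr_max = 0.005, 0.05       # activation window and cap on the parameter nudge

x, first_hit, path = 0.3, None, []
for n in range(3000):
    dr = 0.0
    if n >= 100 and abs(x - x_star) < eps:          # control switched on at n = 100
        # choose dr so that, to linear order, the next iterate equals x_star
        dr = np.clip(-lam * (x - x_star) / g, -dr_max, dr_max)
        if first_hit is None:
            first_hit = n
    x = (r0 + dr) * x * (1.0 - x)
    path.append(x)

print(f"x* = {x_star:.4f}; control first engaged at n = {first_hit}")
print("max |x - x*| over the final 50 steps:",
      max(abs(v - x_star) for v in path[-50:]))
```

The correction is bounded by dr_max, little more than 1% of the nominal parameter, which is the sense in which chaos control promises large effects from small policy adjustments.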
Chaos Theory and Economic Time Series Analysis

The relevance of addressing chaos in economic models, and the potential offered by its control techniques, depend on detecting the presence of chaotic motion in economic data. From an empirical point of view it is difficult to distinguish between fluctuations provoked by random shocks and endogenous fluctuations determined by the nonlinear nature of the relations between economic aggregates. If, hypothetically, it were possible to clearly separate the stochastic and deterministic components of economic time series, this would be important from a policy point of view. While purely stochastic trajectories do not allow forecasting of future outcomes, chaotic series are deterministic, and thus, if one knows exactly what the initial state of the system is, then future outcomes can be obtained with full accuracy. Nevertheless, the existence of economic chaos is still an open issue [44-49]. Trends, noise, and time evolution caused by structural changes are the main difficulties in economic time-series analysis.

The main and most widely used tests for chaos applied to both macroeconomic and financial time series are the correlation dimension, the Lyapunov exponent, and the BDS test. The correlation dimension, developed in physics by Grassberger and Procaccia [50], is based on measuring the dimension of a strange attractor. Its major advantage is the simplicity of its calculation. However, this analysis provides necessary but not sufficient conditions for the presence of chaos. In fact, designed for very large, clean data sets, it was found to be problematic when applied to short time series [51]. The Lyapunov exponent is likewise generally regarded as providing necessary but not sufficient conditions for chaos. As with the correlation dimension, the estimation of the Lyapunov exponent requires a large number of observations. Since few economic series of such a large size are available, Lyapunov exponent estimates of economic data may not be very reliable. Another of the most commonly applied tools is the BDS test by Brock, Dechert, and Scheinkman [54]. It is not a test for chaos [55] but tests the much more restrictive null hypothesis that the series is independent and identically distributed. It is useful because it is a well-defined, easy-to-apply test that is powerful against any type of structure in a series, and it has been used widely to examine a variety of economic and financial time series.

Although the literature on tests for chaos in economic time series is by now somewhat voluminous, there are no uncontroversial results to report. The application of these tests to such data presents numerous problems. The first problem is that the noise in economic time series may render any dimension calculation useless [46]; moreover, to obtain a reliable analysis, large data sets are required. Data quantity and data quality are crucial, and the main obstacle in empirical economic analysis is short and noisy data sets. In particular, tests on macroeconomic series are regarded with some suspicion: not only are the available data insufficient to perform the tests (macroeconomic data are not available for periods shorter than a month), but macroeconomic time series also involve mixed effects; it is not just the distinction between noise and nonlinearity that is at issue, but also the eventual source of the nonlinearity.
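The correlation-dimension test can be sketched in a few lines: delay-embed the scalar series, compute the correlation sum C(r), and read the dimension off the log-log slope. The chaotic logistic-map series used here is an assumed stand-in for real data, and the naive O(N²) pair count is only workable for the short series that economics typically offers anyway.

```python
# Grassberger-Procaccia sketch: correlation dimension of a delay-embedded series.
import numpy as np

def embed(x, m, tau=1):
    """Delay embedding of a scalar series into m-dimensional vectors."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def correlation_sum(X, r):
    """Fraction of (ordered) point pairs closer than r, excluding self-pairs."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    mask = ~np.eye(len(X), dtype=bool)
    return (d[mask] < r).mean()

# Surrogate "observed" series: the chaotic logistic map.
x = np.empty(1000)
x[0] = 0.37
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

X = embed(x, m=3)
radii = np.logspace(-2.2, -0.7, 8)
C = np.array([correlation_sum(X, r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f} (roughly 1 for this map)")
```

With short, noisy series the scaling region shrinks or disappears, which is the practical objection raised above.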
Little or no evidence for chaos has been found in macroeconomic time series. Investigators have found substantial evidence for nonlinearity but relatively weak evidence for chaos per se. That is due to the small samples and high noise levels of most macroeconomic series; they are usually aggregated time series coming from a system whose dynamics and measurement probes may be changing over time. In contrast to laboratory experiments, where a large number of data points can easily be obtained, most economic time series consist of monthly, quarterly, or annual data, with the exception of some financial data with daily or weekly observations. In fact, the analysis of financial time series has led to results which are, as a whole, more reliable than those for macroeconomic series. Financial time series are good candidates for analyzing chaotic behaviour because they are available in large quantities and at much more disaggregated time intervals.

The failure to find convincing evidence for chaos in economic time series redirected interest toward additional tests that work with small data sets and are robust against noise. This goal seems to be reached by topological tools, such as recurrence analysis, characterised by the study of the organisation of the strange attractor. They exploit an essential property of a chaotic system, the recurrence property, i.e. the tendency of the time series to nearly, although never exactly, repeat itself over time. The topological method has been successfully applied in the sciences to detect chaos in experimental data [56-59], and it has been demonstrated to work well on relatively small data sets and to be robust against noise [60]. The tools based on the topological invariant testing procedure (the close returns test and the recurrence plot), compared with the existing metric class of testing procedures, including the correlation dimension, the BDS test, and the Lyapunov exponent, are better suited to testing for chaos in financial and economic time series and to proving the existence of chaos, in particular in macroeconomic time series [56,57]. In the literature, the tools based on the topological approach are the close returns test and the recurrence plot.
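A minimal recurrence-plot construction along the lines just described: after delay embedding, mark the pairs of times whose states come within a tolerance eps of each other. The noisy logistic-map series and the parameter choices are assumptions for illustration, not a prescription from the cited literature.

```python
# Recurrence matrix R[i, j] = 1 when the embedded states at times i and j are
# closer than eps; chaotic series produce short, broken diagonal structures.
import numpy as np

def recurrence_matrix(x, m=2, tau=1, eps=0.1):
    n = len(x) - (m - 1) * tau
    X = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return (d < eps).astype(int)

rng = np.random.default_rng(0)
x = np.empty(400)
x[0] = 0.41
for t in range(399):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
x += 0.01 * rng.standard_normal(400)        # mild observational noise

R = recurrence_matrix(x)
print(f"recurrence matrix {R.shape}, recurrence rate = {R.mean():.3f}")
```

Summary statistics of this matrix (the recurrence rate, the lengths of diagonal lines) are what recurrence quantification analysis and the close returns test build on.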
Conclusions

The assumptions of mainstream economics are changing profoundly. No longer Olympian rationality, but processes in which interacting economic agents adapt in reaction to their environment and, by innovating, contribute to its change. In this ever-changing environment it is almost impossible to prefigure the outcome of decisions with a satisfactory degree of precision, or to use constrained optimization models to capture the behavior of these complex adaptive systems. Models have been built that unjustifiably claim to be scientific because they are based on the idea that the economy behaves like a rational individual, when the real economy provides a wealth of evidence showing that the rationality in question has little or nothing to do with how people behave. Economies are complex adaptive systems, that is, systems composed of a large number of interacting components and of the relationships between them. "The goal of complex systems research is to explain in a multidisciplinary way how complex and adaptive behaviour can arise in systems composed of large numbers of relatively simple components, with no central control, and with complicated interactions" [61,62].

No longer is the aggregate reduced to the analysis of a single, representative individual, ignoring by construction any form of heterogeneity and interaction; rather, the aggregate emerges from the local interactions of agents. Aggregate phenomena are intrinsically complex because the social interaction of boundedly rational agents implies features that are not observable at the level of the individual.

Recognizing the existence of deterministic chaos in economics is therefore important from both a theoretical and a practical point of view. From the theoretical point of view, if a system is chaotic we may construct mathematical models which provide a deeper understanding of its dynamics. From the practical point of view, the discovery of chaotic behaviours makes it possible to control them. Finding chaos in GNP series may mean that nonlinear dynamics are observable in the relations among economic aggregates, but it can also mean that some chaotic external shock (e.g. in the physical environment or in technology) disturbs the economy. In this way, the problem is not only one of finding chaos in the economy; it is also to determine whether the chaos that is found is generated inside the structure of the economy. In other words, finding evidence of chaos is half of the problem; the other half consists in finding where the chaos is coming from.

Moreover, chaotic series have another advantage from a policy perspective. Since routes to chaos are generally characterized by a process in which fluctuations exist only for specific sets of parameter values, if the authorities control some of the bifurcation parameters then they can manipulate their values in order to attain a region of fixed-point stability; if the evolution of economic aggregates over time is of a stochastic nature, no parameter change would allow for such a stabilizing effect.

In the field of control systems, the main criticism of models built on the traditional assumptions is that they hinder a real understanding of economic phenomena and can induce inadequate and erroneous economic policies [63]. Incorrect policy advice based on the wrong theory produces effects that will be fundamentally different from those predicted by the theory. An alternative for designing adequate policies, with consequences different from those associated with more conventional models [64], could be the use of chaotic models. Using sensitivity to initial conditions to move from given orbits to other orbits of the attractor means choosing a different behaviour for the system, that is, a different trade-off of economic policy. Moreover, the amount of resources needed to employ an instrument of control to achieve a specific goal of economic policy will be smaller than with traditional control techniques. Applying chaotic control can thus be expected to produce greater efficiency, that is, fewer resources needed to achieve economic policy goals: a relatively large improvement in system performance can be obtained by the use of small controls. Resource saving and the possibility of choosing among different trade-offs of economic policy (many orbits) could be significant motivations for using chaotic models in economic analysis.
Yet despite such limitations, economists frequently talk as if deductions from general equilibrium theory were applicable to reality and provided a credible explanation of observed economic phenomena and a guide to economic policy making. The theory ploughed ahead, ignoring its own weaknesses, despite the criticisms of many mathematicians and economists. The reason for this could be summed up in the words of Barnett [65]: "It is my belief that the economics profession, to date, has provided no dependable empirical evidence of whether or not the economy itself produces chaos, and I do not expect to see any such results in the near future. The methodological obstacles in mathematics, numerical analysis, and statistics are formidable".
7,841.4
2012-01-05T00:00:00.000
[ "Economics" ]
Cysteine-independent Catalase-like Activity of Vertebrate Peroxiredoxin 1 (Prx1)* Background: Peroxiredoxin (Prx) was previously known only as a Cys-dependent thioredoxin peroxidase. Results: Cys-independent catalase-like activity was observed in two vertebrate Prx1 proteins. Conclusion: Prx1 possesses dual antioxidant activities with varied affinities toward H2O2. Significance: This discovery extends our knowledge on Prx1 and provides new opportunities to further study the biological roles of this family of antioxidants.

Peroxiredoxins (Prxs) are a ubiquitous family of antioxidant proteins that are known as thioredoxin peroxidases. Here we report that Prx1 proteins from Tetraodon nigroviridis and humans also possess a previously unknown catalase-like activity that is independent of Cys residues and reductants but dependent on iron. We identified that the GVL motif was essential to the catalase (CAT)-like activity of Prx1 but not to the Cys-dependent thioredoxin peroxidase (POX) activity, and we generated mutants lacking POX and/or CAT activities for individually delineating their functional features. We discovered that the TnPrx1 POX and CAT activities possessed different kinetic features in reducing H2O2. The overexpression of wild-type TnPrx1 and mutants differentially regulated the intracellular levels of reactive oxygen species and p38 phosphorylation in HEK-293T cells treated with H2O2. These observations suggest that the dual antioxidant activities of Prx1 may be crucial for organisms to mediate intracellular redox homeostasis.

Peroxiredoxins (Prxs) are a family of ubiquitous antioxidant enzymes known to be involved in sensing and detoxifying hydrogen peroxide (H2O2) and other reactive oxygen species (ROS) in all biological kingdoms (1-3). Mammalian Prxs also participate in the regulation of signal transduction by controlling cytokine-induced peroxide levels (4-6). Humans and other mammals possess six Prx isoforms, including four typical 2-cysteine (2-Cys) Prxs (Prx1-4), an atypical 2-Cys Prx (Prx5), and a 1-Cys Prx (Prx6) (7-9). The thioredoxin peroxidase (POX) activity is the hallmark of Prx proteins. In the case of Prx1-4, the conserved N-terminal peroxidatic Cys residue (CysP-SH, corresponding to Cys51 in mammalian Prx1) is oxidized by H2O2 to a cysteine sulfenic acid (CysP-SOH) and then resolved by a reaction with the C-terminal resolving Cys172 (CysR-SH) in the adjacent monomer to form a disulfide bond between Cys51 and Cys172.
The disulfide linkage is reduced by NADPH-dependent thioredoxin (Trx)/Trx reductase cycles to complete the Prx catalytic cycle in cells or by a reducing agent such as dithiothreitol (DTT) commonly used in assaying POX activity (10 -12). Alternatively, at least the Cys P -SH and Cys R -SH residues in Homo sapiens Prx1 (HsPrx1) can be glutathionylated in the presence of a small amount of H 2 O 2 and deglutathionylated by sulfiredoxin or glutaredoxin I. Cys P -SH may also be hyperoxidized in the presence of an excessive amount of H 2 O 2 to form reversible sulfinic acid (Cys P -SO 2 H), which can be slowly recycled by sulfiredoxin, or irreversible sulfonic acid (Cys P -SO 3 H), resulting in the loss of the POX activity and the formation of Prx1 decamers with protein chaperone function (13)(14)(15)(16)(17). Among these reactions, the rapid recycling of POX activity is responsible for the reduction of H 2 O 2 and other ROS, whereas the other two appear to be involved in the regulation of Prx functions (18). Although Prxs can be oxidized in multiple ways, all these POX activities rely on the Cys-dependent peroxidation cycles. However, in the present study, we unexpectedly observed that the Prx1 from the green spotted puffer fish Tetraodon nigroviridis (TnPrx1) was able to reduce H 2 O 2 that was independent of the Cys peroxidation and in the absence of reducing agents. This Cys-independent activity observed in wild-type (WT) and site-mutated TnPrx1 proteins differs from the classic POX activity in Prxs but resembles the catalase-like activity, making Prx1 a dual antioxidant protein. For clarity, we have denoted Cys-dependent POX and Cys-independent CAT-like activities in TnPrx1 as TnPrx1-POX and TnPrx1-CAT, respectively. We determined the detailed kinetic features of the TnPrx1-CAT activity and identified that the 117 GVL 119 motif was essential to this activity. Using a human embryonic kidney 293T (HEK-293T) cell transfection system, we showed that the TnPrx1-CAT participated in the regulation of H 2 O 2 and H 2 O 2 -dependent phosphorylation of p38 in cells. Additionally, CAT activity was also confirmed in HsPrx1, suggesting that the Cys-independent Prx1-CAT activity is conserved from fish to mammals. Cloning and Expression of Recombinant Prx1 Proteins-The Prx1 open reading frames (ORFs) of T. nigroviridis (TnPrx1) and H. sapiens (HsPrx1) were amplified by RT-PCR from mRNA isolated from pufferfish kidney and HeLa cells (corresponding to GenBank accession numbers DQ003333 and NM_001202431, respectively) and cloned into the pET28a bacterial expression vector containing a His 6 tag at the N terminus as described (19). TnPrx1 mutants were generated by site-directed mutagenesis by replacing all three Cys residues (i.e. Cys 52 , Cys 71, and Cys 173 ) with Ser residues to eliminate POX activity (denoted by POX Ϫ CAT ϩ ), the 117 GVL 119 motif with 117 HLW 119 to eliminate the CAT-like activity (POX ϩ CAT Ϫ ), or both (POX Ϫ CAT Ϫ ) (see Table 1 for details on the genotypes of constructs). Recombinant Prx1 proteins were expressed in Escherichia coli and purified from the soluble fractions by nickel-nitrilotriacetic acid-agarose bead-based chromatography and eluted with elution buffer containing 250 mM imidazole or as specified (19). Purified Prx1 proteins were subjected to SDS-PAGE analysis and stained with Coomassie Brilliant Blue. The protein purities were determined by densitometry using 1D Image Analysis Software with a Kodak Gel Logic 200 Imaging System (Eastman Kodak Co.). 
A reversible monomer-to-dimer transition system was established to evaluate the Cys-dependent formation of dimers in which the purified recombinant proteins in the form of mono-mers were first allowed to be oxidized to form dimers in air at 4°C, and then the resulting protein dimers were reduced to monomers by the treatment with DTT (50 mM or as specified) at room temperature for 10 min. The reduced and oxidized forms of Prx1 were detected by non-reducing SDS-PAGE. Protein Structure Homology Modeling-TnPrx1 protein structure homology modeling was performed using rat Prx1 (Protein Data Bank code 1QQ2; 80% identity) as a template. Global alignment of various structural models was performed by using PyMOL to produce various structural model figures. The active site of TnPrx1 was predicted using an ␣ shape algorithm to determine potential active sites in three-dimensional protein structures in MOE Site Finder, and further mutation was designed to disturb the structure of the active site. Site-directed mutants of TnPrx1 were constructed using the overlapping extension PCR strategy. Primers used in the experiments are shown in Table 2. All constructed plasmids were sequenced to verify the correct gene insertion and successful mutation. Enzyme Activity Assays-The reduction of H 2 O 2 by TnPrx1 and HsPrx1 was determined by a modified sensitive Co(II) catalysis luminol chemiluminescence assay as described (20). Briefly, the luminol-buffer mixture was composed by 100 l of luminol (100 mg ml Ϫ1 ) in borate buffer (0.05 M, pH 10.0) and 1 ml of Co(II)-EDTA (2 and 10 mg ml Ϫ1 , respectively, pH 9.0). Reactions started with mixing 50 l of proteins (50 g ml Ϫ1 ) with 50 l of a series of H 2 O 2 solutions (0 -500 M) for 1 min at 25°C followed by adding 1.1 ml of the luminol-buffer mixture to stop the reaction. The same amount of phosphate-buffered saline (PBS) was used to replace proteins in the control and for generating standard curves. The intensity of emission was measured with an FB12 luminometer (Berthold Detection Systems, Pforzheim, Germany), and the maximum values were recorded. The kinetic parameters of Prx1 proteins were determined using the Michaelis-Menten and/or allosteric sigmoidal kinetic models. The production of oxygen was measured with an oxygen electrode (341003038/9513468, Mettler Toledo). The reaction was performed in 4 ml of 600 M H 2 O 2 solutions, and the measurement was started by addition of proteins, POX ϩ CAT ϩ dimers (0.32 Determination of Enzyme Properties-The effect of pH on Prx1-CAT activity was evaluated by detecting the reduction of H 2 O 2 in reactions carried out in 0.2 mM Na 2 HPO 4 , 0.1 mM citrate buffer for pH 2.0 -8.0 and 50 mM disodium pyrophosphate, NaOH buffer for pH 8.0 -11.0, respectively. The effect of temperature was tested between 0 and 70°C at pH 7.0. The thermal and pH stabilities were similarly assayed except that concentrated Prx1 proteins were first treated at 0°C for 1 h at various temperatures or 6 h under various pH conditions, and then their specific activity was determined under regular assay conditions (i.e. pH 7.0 at room temperature). Specific activities were also assayed for iron-saturated proteins prepared by mixing proteins with FeCl 3 (1:100 molar ratio) followed by ultrafiltration (molecular mass cutoff at 10 kDa) to remove unbound iron. 
The role of iron in Prx1-CAT was further evaluated by iron chelation and rescue assays in which TnPrx1 proteins were treated with Tiron (4,5-dihydroxy-1,3-benzene disulfonic acid; 25 mM) and 2,2-dipyridy (50 mM) at 4°C overnight followed by ultrafiltration. Chelatortreated samples were then incubated with FeCl 3 (200 M or as specified) at 4°C overnight followed by ultrafiltration to remove unbound iron. The residual TnPrx1-CAT activities of iron-free and iron-rescued proteins were determined in standard reactions as described above. To confirm that iron was truly bound to TnPrx1, iron-rescued samples were subjected to extensive ultrafiltration with PBS, and the iron content was detected by inductively coupled plasma optical emission spectroscopy (Optima 8000DV, PerkinElmer Life Sciences) as described (21). Effects of WT TnPrx1 and Mutants on Intracellular ROS Level and the Phosphorylation of p38 Mitogen-activated Protein Kinase (MAPK)-The ORFs of WT and TnPrx1 mutants were subcloned into pCMV-Tag2B vector. HEK-293T cells were maintained at 37°C in Dulbecco's modified Eagle's medium (DMEM; Invitrogen) supplemented with 10% fetal bovine serum (FBS; Gibco) in the presence of 5% CO 2 . For transient transfection, cells were plated in 100-mm cell culture plates (1.4 ϫ 10 6 cells/plate), grown overnight, and transfected with 17 g of WT Prx1 or mutant plasmids using FuGENE reagent (Promega). Blank pCMV-Tag2B vector was used as a negative control. After 48 h of transfection, cells were washed with PBS and incubated with 2Ј,7Ј-dichlorodihydrofluorescein diacetate (200 M; Sigma) in serum-free medium at 37°C for 30 min to allow uptake by cells and intracellular cleavage of the diacetate groups by thioesterase. Cells were washed with PBS to remove free 2Ј,7Ј-dichlorodihydrofluorescein diacetate from the medium and counted by the trypan blue (0.4%) exclusion method. Viable cells were then plated into 96-well collagen-coated plates (2 ϫ 10 4 cells/well) and treated with H 2 O 2 at final concentrations between 0 and 850 M for 60 min. Intracellular fluorescence signals of oxidized dichlorodihydrofluorescein at 0 and 1-h time points (T0 and T1) followed by H 2 O 2 treatment were measured with a Synergy H1 hybrid reader (BioTek) ( ex / em ϭ 485/525 nm). The relative fluorescence signal for each sample (RF sample ) was calculated using the following equations. where ⌬F max represents the fluorescence signal increases from the sample treated with the highest concentration of H 2 O 2 . For evaluating the effect of TnPrx1 constructs on the phosphorylation of intracellular p38 MAPK, transfected cells treated with H 2 O 2 (0 -1200 M) were collected and lysed followed by Western blot analysis using antibodies against p38 and phosphorylated p38, respectively (Cell Signaling Technol- Primer Sequences (5-3) Application . The immunoreactive bands were visualized using an enhanced chemiluminescence (ECL) system (Pierce). Statistical Analysis-All experiments were performed independently at least three times. Data are presented as the mean Ϯ S.D. A two-tailed Student's t test was used to assess statistical significance between experimental and control groups. Results Cys-independent CAT Activity in TnPrx1-Thioredoxin POX was previously the only known enzyme activity in Prxs that relied on NADPH-dependent oxidoreduction between Trx and Trx reductase to maintain the continuation of their POX activity. In the absence of Trx/Trx reductase/NADPH or a reducing agent (e.g. 
DTT), the reactions stop after the formation of a Cys P -Cys R disulfide bound in which one pair of Prx monomers may only reduce two H 2 O 2 molecules. Four to six H 2 O 2 molecules may be reduced when they are hyperoxidized without the formation of disulfide bounds. Surprisingly, however, in the absence of a reducing agent, we observed that the recombinant WT TnPrx1 monomers (Ͼ99% purity in reduced status) were able to continuously reduce H 2 O 2 molecules (Fig. 1, A, B, and C), implying the presence of non-POX oxidoreduction activity in TnPrx1. Similar activity was observed when TnPrx1 was fully oxidized to form dimers (Fig. 1, B and D), confirming that the observed non-POX activity was independent of the status of Cys residues. The observed activity was not attributed to nonspecific background reactions as it was not observed in reactions containing no or denatured TnPrx1 (Fig. 1, C and D, first and last columns). Additionally, we also detected O 2 production (Fig. 1, E and F), and the calculated ratio between the reduced H 2 O 2 and the produced O 2 was 2.29:1, indicating that this activity was derived from a CAT-like activity (i.e. 2 H 2 O 2 3 2 H 2 O ϩ O 2 ) rather than the POX activity that only produces H 2 O. To fully rule out the possibility that a trace amount of contaminating catalase from E. coli was present in the TnPrx1 preparations (despite Ͼ99% purity) and contributed to the activity, we prepared TnPrx1 proteins under different elution stringencies (i.e. imidazole at 150 -300 mM) to allow various impurities (i.e. containing various amounts of contaminants). We confirmed again that the activity was derived from TnPrx1 as it was correlated with the amount of TnPrx1 rather than with the level of impurity (Fig. 1, G and H). The activity was iron-dependent as it could be inhibited by ferrous/ferric chelators Tiron and 2,2-dipyridy, and the addition of Fe 3ϩ could not only increase the activity of untreated TnPrx1 but also reverse the inhibition by chelators (Figs. 1H and 2A). Fe 3ϩ displayed low nanomolar level binding affinity with TnPrx1 (apparent K d ϭ 0.17 M) and an ϳ1:1 (metal:Prx1) stoichiometry (Fig. 2C). To confirm the iron-TnPrx1 binding, we directly evaluated the iron content of recombinant TnPrx1 proteins under various conditions. Proteins were subjected to extensive ultrafiltration to remove unbound iron. The molecular ratio between iron and untreated TnPrx1 protein was 0.64 Ϯ 0.002:1 (Fig. 2D). Treatment by chelators reduced the ratio to 0.08 Ϯ 0.003:1, whereas the addition of FeCl 3 (200 M) restored the ratio to 0.75 Ϯ 0.006:1 (Fig. 2D). These observations indi-cated that each TnPrx1 binds to one iron, and up to 75% of the recombinant TnPrx1 proteins were in the active form. Additionally, the effect of other metals, including Mg 2ϩ , Ca 2ϩ , Cu 2ϩ , Mn 2ϩ , Co 2ϩ , Ni 2ϩ , and Zn 2ϩ on the CAT-like activity was tested, but no enhancement activity was observed (data not shown). The dependence on iron, but not on reducing agents and Cys residues, was characteristic of catalases, further confirming that the observed activity was not derived from the POX activity of Prxs. Instead, it resembled a catalase that was previously unknown to Prxs. However, TnPrx1 was insensitive to the inhibitors of typical CATs, such as DTT and the irreversible inhibitor 3-amino-1,2,4-triazole (Fig. 2, E and F), suggesting that Prx1 might represent a new class of CAT-like enzyme. 
Indeed, unlike typical CATs, TnPrx1 lacked the Soret absorbance peak unique to heme-containing moieties (data not shown), indicating that it is a hemeless metalloprotein rather than a heme-containing protein. In the presence of DTT, WT TnPrx1 displayed Michaelis-Menten kinetics on low concentrations of H 2 O 2 (i.e. Ͻ100 M) (Fig. 3A). The K m value was 2.2 M, which was comparable with the K m values reported previously for Prx1-POX activities that were typically much lower than 20 M Table 1). The data were in agreement with the notion that TnPrx1 possessed both POX and CAT activities as the activities with DTT (POX ϩ CAT) were higher than those without DTT (CAT only) by a relatively constant rate (i.e. 2.5 s Ϫ1 determined by a "Michaelis-Menten ϩ allosteric sigmoidal" model). Because CAT activity was described for the first time in a Prx1 of fish origin, we wanted to know whether it was also present in mammalian Prx1. We expressed recombinant HsPrx1 and performed a similar assay with or without a reducing agent. Our data supported that HsPrx1 was also bifunctional by possessing POX and CAT activities with kinetic parameters comparable with those of TnPrx1 (i.e. KЈ app(ϪDTT) ϭ 347 M, n H(ϪDTT) ϭ 10.1 and KЈ app(ϩDTT) ϭ 342 M, n H(ϩDTT) ϭ 8.8, respectively) ( Fig. 3E and Table 1). Although Prxs from more species need to be examined to make a firm conclusion, the data here suggest that the CAT-like activity is likely conserved among vertebrate Prx1 from fish to mammals. To further validate TnPrx1-CAT activity, we performed a site-directed mutagenesis and constructed a mutant by replacing all three Cys residues with Ser residues to completely eliminate its Cys-dependent POX activity. The resulting mutant (POX Ϫ CAT ϩ ) was unable to form dimers as expected (Fig. 3, G and H) but still capable of converting H 2 O 2 to O 2 (Fig. 1, E and F). The POX Ϫ CAT ϩ mutant displayed virtually identical sigmoidal curves when assayed with and without DTT that resembled that of WT TnPrx1 without DTT as well as similar kinetic parameters (i.e. KЈ app(ϪDTT) ϭ 211 M, n H(ϪDTT) ϭ 3.7 and KЈ app(ϩDTT) ϭ 227 M, n H(ϩDTT) ϭ 3.3, respectively) ( Fig. 3B and Table 1). These observations confirmed that the observed TnPrx1-CAT activity was truly independent of the Cys residues and reducing agent. Potential Active Site for the CAT-like Activity in TnPrx1- The discovery of a previously unknown Prx1-CAT activity prompted us to search for the functional motif. By examining a previous reported structure of rat Prx1 (Protein Data Bank code 1QQ2) and homology-based modeling of TnPrx1, we observed a flexible loop consisting of six residues, Gly 117 , Val 118 , Leu 119 , Phe 127 (rat Prx1) or Tyr 127 (TnPrx1), Ile 142 , and Ile 144 , at the dimer interface in which an H 2 O 2 molecule could well fit into a pocket formed by the highly conserved 117 GVL 119 residues (Fig. 4). To test whether this pocket might contribute to the TnPrx1-CAT activity, we generated a TnPrx1 construct by replacing 117 GVL 119 with 117 HLW 119 (denoted by POX ϩ CAT Ϫ ) to alter the pocket structure. Indeed, the mutant POX ϩ CAT Ϫ lost CAT-like activity (i.e. no activity without DTT in the reactions) but retained only DTT-dependent POX activity that followed Michaelis-Menten kinetics characteristic of Prx1-POX activity (K m ϭ 4.15 M) (Fig. 3C and Table 1). 
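The kind of kinetic analysis described here, a Michaelis-Menten component for the POX activity combined with a Hill ("allosteric sigmoidal") component for the CAT-like activity, can be sketched as a curve fit. The data below are synthetic, and the parameter values only echo the orders of magnitude quoted in the text (Km of a few µM, K′ of roughly 200 µM, nH between 3 and 4); they are not the measured values, and the function names are placeholders.

```python
# Hedged sketch: fit a combined Michaelis-Menten + Hill rate law to synthetic
# rate-vs-[H2O2] data, loosely mimicking the POX + CAT decomposition in the text.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, vmax, Km):
    return vmax * S / (Km + S)

def hill(S, vmax, Kprime, nH):
    return vmax * S**nH / (Kprime**nH + S**nH)

def combined(S, v1, Km, v2, Kprime, nH):
    return michaelis_menten(S, v1, Km) + hill(S, v2, Kprime, nH)

S = np.geomspace(1, 500, 40)                      # H2O2 concentrations (uM)
rng = np.random.default_rng(1)
true = dict(v1=0.23, Km=4.0, v2=2.3, Kprime=210.0, nH=3.7)
rate = combined(S, **true) + 0.02 * rng.standard_normal(S.size)

p0 = [0.2, 5.0, 2.0, 200.0, 3.0]                  # rough initial guesses
popt, _ = curve_fit(combined, S, rate, p0=p0, bounds=(0, np.inf))
for name, val in zip(["v1", "Km", "v2", "Kprime", "nH"], popt):
    print(f"{name:7s} fitted = {val:8.3f}   (simulated {true[name]})")
```

In the text the two components are separated experimentally by assaying with and without DTT; the combined fit here is only meant to show the functional forms.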
To further dissect individual TnPrx1-POX and Prx1-CAT activities, we generated a double mutation (POX Ϫ CAT Ϫ ) in which all Cys residues and 117 GVL 119 were replaced by Ser and 117 HLW 119 , respectively. As expected, this double negative mutant lost both POX and CAT activities and was unable to reduce H 2 O 2 regardless of whether DTT was present or not (Fig. 3D). Among all the mutants tested, POX Ϫ CAT ϩ also displayed the expected iron dependence in which iron chelators inhibited its activity that could be restored by adding iron (Fig. 2B), whereas the two CAT Ϫ mutants (i.e. POX ϩ CAT Ϫ and POX Ϫ CAT Ϫ ) only retained low activity (6% versus WT) that were unaffected by iron chelators and iron (data not shown). Additionally, the TnPrx1-CAT activity tolerated low temperature better than pH as it was able to retain virtually constant peak activity between 0 and 40°C but only retained peak activity at ϳpH 7.0 (Fig. 5). Collectively, these observations confirm that TnPrx1 possesses both POX and CAT activities, and the residues 117 GVL 119 are critical to Prx1-CAT activity. TnPrx1-POX acted on H 2 O 2 with much higher affinity (K m ϭ 4.15 M) but had a relatively low maximal activity (k cat ϭ 0.23 s Ϫ1 ) with a wider range of H 2 O 2 levels (Table 1 and Fig. 3C), whereas Prx1-CAT acted on H 2 O 2 with lower affinity (KЈ (ϪDTT) ϭ 210.7 M) but had a much higher activity (k cat ϭ 2.3 s Ϫ1 ) ( Table 1). Implication of TnPrx1-CAT in Regulating ROS Level and Signaling-The physiological roles of TnPrx1-CAT activity were investigated using a mammalian cell transfection system. First, we transfected HEK-293T cells to overexpress various TnPrx1 constructs and examined the effects in regulating intracellular ROS (iROS) in response to H 2 O 2 treatment. The expression of TnPrx1 constructs in transfected cells was confirmed by qRT-PCR (Fig. 6, A and B). We observed a general trend that cells overexpressing CAT ϩ proteins (i.e. WT and . D, molar ratio between TnPrx1 protein and bound iron determined by inductively coupled plasma optical emission spectroscopy. TnPrx1 was treated as specified followed by extensive washes with water by ultrafiltration prior to inductively coupled plasma. Bovine catalase and PBS were used as controls. E and F, effects of catalase inhibitors 3-amino-1,2,4-triazole (3-AT) and DTT on the CAT activity of WT TnPrx1 and mutants. Catalase was used as a positive control. Residual activities are expressed as the percent activity (versus untreated WT TnPrx1). Data are representative of at least three independent experiments. The error bars represent S.D., and statistical significances between experimental and control groups were determined by Student's t test. **, p Ͻ 0.01; ***, p Ͻ 0.001. POX Ϫ CAT ϩ ) had lower iROS levels than those expressing CAT Ϫ proteins (i.e. blank vector, POX ϩ CAT Ϫ , and POX Ϫ CAT Ϫ ) in response to the treatment with 150 -600 M exogenous H 2 O 2 (Fig. 6C). Second, because H 2 O 2 is known to also function as a signaling molecule, particularly in regulating kinase-driven pathways (22), we tested whether Prx1-CAT-associated regulation of intracellular H 2 O 2 affected the phosphorylation of p38 that played a central role in the p38 MAPK signaling pathway. In HEK-293T cells transfected with blank or double negative (POX Ϫ CAT Ϫ ) plasmids, there were low background levels of phosphorylated p38 in the absence of H 2 O 2 stimulation (0 M) (Fig. 6D). 
The levels of phosphorylated p38 in cells treated with 225-1,200 M H 2 O 2 displayed a bell curve that peaked in the 525-900 M H 2 O 2 groups, which was comparable with previously reported data (23). When CAT ϩ constructs (i.e. WT and POX Ϫ CAT ϩ ) were overexpressed, a considerable delay of phosphorylation of p38 was observed as phosphorylated p38 was significantly (p Ͻ 0.05) up-regulated in cells challenged with H 2 O 2 , starting at 525 M and peaking at 900 -1,200 M. Conversely, in cells overexpressing POX ϩ CAT Ϫ TnPrx1, no significant delay of p38 phosphorylation was observed as phosphorylated p38 was only significantly up-regulated in cells chal- (Fig. 6D), a pattern similar to that of the blank or double negative group. Although further studies are needed to fully dissect the physiological roles of individual Prx1-POX and Prx1-CAT activities in cells and in vivo, these observations provide primary evidence on the involvement of the TnPrx1-CAT activity in regulating the ROS-mediated p38 signaling pathway when cells are incubated with high micromolar to low millimolar levels of H 2 O 2 . Discussion Eukaryotic cells contain a complex system to detoxify and regulate H 2 O 2 and other reactive oxygen species. These include small molecules, such as ascorbic acid, ␤-carotene, glutathione, and ␣-tocopherol, and various enzymes, such as superoxide dismutase (SOD), CAT, glutathione peroxidase (GPx), and Prx (24). Some of these enzymes or isoforms are mainly cytosolic (e.g. Prx1, Prx2, Prx5, Prx6, SOD1, and GPx1), whereas others may be compartmentalized (e.g. catalase in peroxisomes; SOD2 and Prx3 in mitochondria; and SOD3, GPx3, and Prx4 in plasma), which constitutes a precise antioxidant network for the defense against various oxidative stresses in the diverse cellular activities (4,25,26). Cells are known to rely heavily on Prxs in scavenging H 2 O 2 and other ROS molecules. In fact, they are the third most abundant proteins in erythrocytes and represent 0.1-1% of total soluble proteins in other cells (4,7,24,26,27), and Prx1/2 knockout in mice may lead to the development of severe blood cell diseases (e.g. hemolytic anemia and hematopoietic cancer) (27,28). Prxs are widely distributed and have been found in animals, plants, fungi, protists, bacteria, and cyanobacteria, suggesting that they are a family of ancient proteins essential to a variety of critical cellular activities (29,30). Prxs were previously recognized only as a family of thioredoxin POX for which the biochemical features and biological functions were subjected to extensive investigations (22). In the present study, we discovered that TnPrx1 and HsPrx1 are bifunctional by possessing both Cys-dependent POX and Cys-independent CAT-like activities, further extending our understanding of this important family of antioxidant proteins. The CAT-like activity in TnPrx1 was validated by the identification of the active site containing the GVL motif, which also enabled us to generate mutants lacking CAT and/or POX activity for dissecting their individual activities. Our data suggested that previously observed antioxidant activity in WT Prx1 (at least in some animals) was in fact a combined activity of Prx1-POX and Prx1-CAT. The alteration of GVL motif abolished TnPrx1-CAT activity but not TnPrx1-POX activity ( Fig. 3C and Table 1). 
This suggests that Prx1 contained two independent H 2 O 2 -binding sites in agreement with previous reports that the H 2 O 2 -binding site for the Cys-dependent POX activity was near the Cys 51 and Cys 172 residues but distant from the GVL site (31)(32)(33). Catalases are heme-containing enzymes (34). Mammalian Prx1 was previously identified as heme-binding protein 23 kDa (HBP23) (10,35), and a bacterial 2-Cys peroxiredoxin alkyl hydroperoxide reductase C (AhpC) was also reported to be able to bind heme (36), although heme binding is non-essential to their functions. Our data indicated that TnPrx1-CAT activity was not heme-related but dependent on mononuclear iron. However, the exact iron-binding site remains to be determined. Sequence analysis indicates that Prx1 proteins from T. nigroviridis and mammals contain a 2-His-1-carboxylate facial triad-like motif (e.g. motif 81 HX 2 HX 36 E 121 in TnPrx1) that is conserved in mononuclear non-heme iron enzymes (37). Additionally, a Trp 87 residue is also present at the motif. Aromatic residues, particularly Trp and Tyr, are known to be enriched at the iron sites of iron proteins (38). The involvement of aromatic residues in redox catalysis and/or electron transfer is not yet fully understood, but their capability to mediate electron transfer reactions makes them most suitable for tunneling electrons to/from redox sites (38). Conversely, the putative facial triad is not in the immediate proximity of the GVL motif. Therefore, its involvement in iron binding and the mechanism of iron-mediated electron transfer for the Prx1-CAT activity needs to be verified by further structure-based analysis. The identity and similarity of vertebrate Prx1 are over 77 and 88% (Table 3), respectively, and the active site of CAT activity is completely conserved among Prx1 proteins, suggesting that CAT activity may be a ubiquitous function of Prx1 family members. The confirmation of reductant-independent HsPrx1-CAT activity indicates that this new function is likely conserved at least in some vertebrates. Furthermore, 117 GVL 119 are conserved in Prx1-3, whereas 117 GVY 119 are found in Prx4. Although Prx5 and Prx6 share low similarity with Prx1-4, they have similar threedimensional structures (39), suggesting that CAT activity might be present in other Prxs, at least Prx1-3. It might also explain why some parasites and cyanobacteria do not contain catalase and GPx but have diverse Prx homologies (29,40). The intracellular concentrations of H 2 O 2 and other ROS molecules in vivo are not precisely known but may range from sub-to lower micromolar levels in various prokaryotic and eukaryotic cells. However, intracellular H 2 O 2 levels may rise to the order of 100 M in phagocytes, and the transient H 2 O 2 levels may reach Ͼ200 M in brain cells (41,42). Moreover, appropriately stimulated polymorphonuclear leukocytes and monocytes can produce up to 1.5 nmol of H 2 O 2 in 10 4 cells/h (which is roughly equivalent to Ͼ350 -450 mM of H 2 O 2 if it is not removed and accumulated per hour given their cell sizes at ϳ330 and 420 fl) (43,44). 
In the present study, we have shown that Prx1 acts mainly (if not only) as a POX under a low level H 2 O 2 environment with high affinity and relatively low capac-ity (K m and k cat at ϳ2.23-4.15 M and ϳ0.23 s Ϫ1 , respectively) but as both POX and CAT when the H 2 O 2 level reaches ϳ50 M or higher in which the latter behaves as an allosteric enzyme with 10 times higher activity than the former (K m and k cat at ϳ210 M and 2.3 s Ϫ1 , respectively). In vitro transfection experiments also confirmed the notion as HEK-293T cells overexpressing WT TnPrx1 and mutant retaining CAT activity were capable of scavenging more iROS than those overexpressing mutants lacking CAT or both POX and CAT activities at low to middle micromolar H 2 O 2 levels (Fig. 6C). The levels of exogenous H 2 O 2 to produce significant effects on cellular activities such as the phosphorylation of p38 in cells transfected with various TnPrx1 mutants were ϳ375-1050 M or higher (Fig. 6D), which corresponds to ϳ50 -150 M intracellular H 2 O 2 based on the model predicting that intracellular H 2 O 2 concentrations are ϳ7-fold or even 10 -100-fold lower than that applied exogenously (42,45,46). The corresponding intracellular levels of H 2 O 2 fell within the levels for physiologically relevant signaling (i.e. 15-150 M) (46). Collectively, these features enable Prx1 to function on a wider range of ROS concentrations than many other proteins in (Table 2). -Fold changes of Prx1 and catalase transcripts are expressed relative to the catalase transcripts in the blank control (A) or to the transcripts of their own genes (B). C and D, effects of TnPrx1 constructs on intracellular ROS and ROS-mediated phosphorylation of p38 MAPK in transfected cells treated with exogenous H 2 O 2 as determined by dichlorodihydrofluorescein fluorescence assay and Western blot analysis, respectively. In the Western blot analysis, antibody to human GAPDH was used as a control (D, lower panel). Representative data from one of three or more independent experiments are shown. The error bars represent S.D., and statistical significances between experimental and control groups were determined by Student's t test. *, p Ͻ 0.05. p-p38, phosphorylated p38. the cytosol in which Prx1-POX acts on sub-to lower micromolar iROS normally present in cells, whereas Prx1-CAT (probably along with GPx and classic CAT enzymes) acts on moderate to higher micromolar iROS concentrations that are present in certain types of cells (e.g. some brain and immune cells) and/or required for H 2 O 2 signaling (Table 1). However, it is noticeable that, although TnPrx1 and HsPrx display CAT-like activity, their catalytic efficiencies are ϳ100fold smaller than those of regular CATs (i.e. k cat /K m Prx1-CAT at ϳ10 4 M Ϫ1 s Ϫ1 versus k cat /K m CAT at ϳ10 6 M Ϫ1 s Ϫ1 ), which raises the question of whether Prx1-CAT function is critical to organisms as a higher level of iROS may be quickly scavenged by regular CAT. Prx1 is a cytosolic protein, whereas native CATs are typically present in peroxisomes. Data mining the Multi-Omics Profiling Expression Database (MOPED) also reveals that human Prx1 is much more abundant than CAT in most cells/tissues (Fig. 7A). Therefore, we speculate that the CATlike activity in Prx1 and possibly in other Prxs may act as one of the first line of scavengers for cytosolic ROS. Prx-CAT may also play a more critical role in scavenging and/or regulating ROS in certain cells and tissues that are deficient or contain extremely low levels of CAT. 
For example, in human bone, oral epithelium, and retina, the CAT protein levels are 132-, 45-, and 36-fold less, respectively, than Prx1 (i.e. 13 versus 1,730, 55 versus 2,490, and 110 versus 4,020 ppm, respectively). Some cancer cells might also take advantage of the Prx1-CAT activity as the expressions of CAT are deficient or highly down-regulated in many of cancer cells (47), whereas those of Prx1 are up-regulated in cancer cells, including breast, lung, and urinary cancers and hepatocellular carcinoma (48). The down-and up-regulation of CAT and Prx1 were also clearly supported by comparing the MOPED protein expression profiles between cancer and non-cancer cells (Fig. 7). Additionally, we also confirmed by qRT-PCR that the mRNA level of CAT in HEK-293T cells was ϳ50 -200-fold less than that of Prx1 (Fig. 6, A and B). The Prx-CAT function might also explain how some invertebrates lacking CAT and GPx regulate high levels of intracellular ROS. For example, some parasitic helminths (e.g. Fasciola hepatica and Schistosoma mansoni) and roundworms (e.g. filarial parasites) as well as some protozoa (e.g. Plasmodium sp.) are CAT-and GPx-deficient but possess highly expressed Prx genes (29,40). In summary, we observed a CAT-like activity in the pufferfish and human Prx1 proteins that were independent of Cys residue and reductants but dependent on non-heme mononuclear iron. TnPrx1-CAT activity was capable of regulating intracellular ROS and the ROS-dependent phosphorylation of p38 in transfected HEK-293T cells. These newly discovered features extended our knowledge on Prx1 and provided a new opportunity to further dissect its biological roles.
8,128.4
2015-06-18T00:00:00.000
[ "Biology" ]
Automatic planning of head and neck treatment plans Treatment planning is time‐consuming and the outcome depends on the person performing the optimization. A system that automates treatment planning could potentially reduce the manual time required for optimization and could also provide a method to reduce the variation between persons performing radiation dose planning (dosimetrist) and potentially improve the overall plan quality. This study evaluates the performance of the Auto‐Planning module that has recently become clinically available in the Pinnacle3 radiation therapy treatment planning system. Twenty‐six clinically delivered head and neck treatment plans were reoptimized with the Auto‐Planning module. Comparison of the two types of treatment plans were performed using DVH metrics and a blinded clinical evaluation by two senior radiation oncologists using a scale from one to six. Both evaluations investigated dose coverage of target and dose to healthy tissues. Auto‐Planning was able to produce clinically acceptable treatment plans in all 26 cases. Target coverages in the two types of plans were similar, but automatically generated plans had less irradiation of healthy tissue. In 94% of the evaluations, the autoplans scored at least as high as the previously delivered clinical plans. For all patients, the Auto‐Planning tool produced clinically acceptable head and neck treatment plans without any manual intervention, except for the initial target and OAR delineations. The main benefit of the method is the likely improvement in the overall treatment quality since consistent, high‐quality plans are generated which even can be further optimized, if necessary. This makes it possible for the dosimetrist to focus more time on difficult dose planning goals and to spend less time on the more tedious parts of the planning process. PACS number: 87.55.de I. INTRODUCTION A number of uncertainties and variations are present in radiotherapy such as absolute dose precision, (1) delivery precision, (2,3,4,5) precision of calculated dose distributions, (6,7,8,9) and radioresponsiveness of the specific tumor and normal tissues. (10,11,12,13,14,15) Two of the largest variations within radiotherapy are the heterogeneity in target definition (16,17,18) and the variation among treatment plans for a given geometry both intra-and interinstitutional. (19,20) Most treatment plans are likely to have sufficient dose coverage of the delineated targets, but large variations in dose to healthy tissues occur. The dose distribution depends on the dose objective defined by the dosimetrist, typically in accordance with institution-specific guidelines. However, even guidelines do not ensure an optimal dose distribution for the specific anatomy, since the lower achievable dose limit to an OAR for a specific patient is unknown. This is the reason why treatment plans are optimized for the individual patient by trained dosimetrists. Moreover, the treatment optimization is labor-intensive work with a very large solution space, which makes it difficult to ensure that the clinical treatment plan is the optimal plan. Therefore, there is a need to automate the treatment planning optimization procedure both to reduce the amount of time spent on the optimization and, more importantly, to reduce the interdosimetrist variation. 
If an automatically generated treatment plan of high clinical quality is available prior to manual optimization, it could serve as a quality reference and starting point for the specific treatment and thereby ensure a certain minimum quality. Furthermore, the automatic plans could potentially be a time-saving tool during the treatment optimization, which would reduce one of the most tedious steps in the process. Sharing the optimization parameters between institutions could also provide a method to share knowledge and standardize plan quality. Previously documented solutions with somewhat different approaches have shown the potential of automation of the planning process. (21,22,23,24,25) The current study validates the performance of a prototype version of the Auto-Planning module which has recently been productized for clinical use in the Pinnacle3 treatment planning system from Philips Healthcare (Fitchburg, WI). II. MATERIALS AND METHODS The Auto-Planning software was evaluated by replanning 26 previously delivered clinical head and neck IMRT treatment plans of the oropharynx. The plans were delivered over the 12 months prior to the study. The plans were created in accordance with the Danish Head And Neck Cancer Group's guidelines (DAHANCA, Version 2004), and each dose plan included three dose levels of 50 Gy, 60 Gy, and 66 or 68 Gy in 33 treatment fractions with a simultaneous integrated boost technique. In the Auto-Planning software, a template of configurable parameters known as a Technique (details in Appendix A) can be defined for each treatment protocol. The Techniques include definition of beam parameters and planning goals. The Auto-Planning module uses the Technique definition to iteratively adjust IMRT planning parameters to best meet the planning goals. The Technique was defined according to local standards, including prioritization between target coverage and dose to organs at risk. The Technique definition was based on five additional pilot patients independent of the 26 study patients. Each of the 26 treatment plans was replanned with Auto-Planning without knowledge of the clinically delivered treatment plans and without any dosimetrist postoptimization of the treatment plans. The only input to the replanning was the delineations of planning target and organs at risk and the positioning of the isocenter. Quantitative dosimetric evaluation of the performance of the treatment plans was performed on dose volume histograms (DVH) extracted from the planning system. CT scans had a slice thickness of 3 mm and an in-plane voxel size of 1 mm × 1 mm. The dose plans were calculated using the Pinnacle3 collapsed cone algorithm with a dose grid resolution of 3 mm. Specific DVH values, as well as the overall shape of the DVH, were compared using average DVHs of the two types of treatment plans. The average DVH was calculated for each type of treatment plan as the average of the patient-specific DVH values at each dose level. To specify dose regions for which statistically significant differences exist, a probability curve as in Bertelsen et al. (26) was calculated. In short, the probability curve is a Wilcoxon matched-pair, signed-rank test performed at each dose level. Individual values of the curve are not strict statistical tests since the test is performed multiple times and on values that are not mutually independent. The probability curve is, therefore, primarily a tool that indicates regions for which the average DVHs deviate significantly. 
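For illustration only (this is not the implementation used in the study, which followed Bertelsen et al. (26); the array shapes and the use of scipy.stats.wilcoxon are assumptions made for this sketch), the average-DVH and probability-curve calculations described above can be expressed in a few lines of Python:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical input: relative volumes (%) at each dose bin for every patient,
# one row per patient, one column per dose level. clinical_dvh and auto_dvh
# must be paired patient-wise (same row index = same patient).
def average_dvh_and_probability_curve(clinical_dvh, auto_dvh):
    clinical_dvh = np.asarray(clinical_dvh, dtype=float)
    auto_dvh = np.asarray(auto_dvh, dtype=float)

    # Average DVH: mean over patients at each dose bin, separately per plan type.
    mean_clinical = clinical_dvh.mean(axis=0)
    mean_auto = auto_dvh.mean(axis=0)

    # "Probability curve": Wilcoxon matched-pair signed-rank test at each dose bin.
    # As noted in the text, these p-values are not strict tests (multiple, mutually
    # dependent comparisons); they only flag dose regions where the curves deviate.
    p_values = np.ones(clinical_dvh.shape[1])
    for j in range(clinical_dvh.shape[1]):
        diffs = clinical_dvh[:, j] - auto_dvh[:, j]
        if np.any(diffs != 0):  # the test is undefined when all paired differences are zero
            p_values[j] = wilcoxon(clinical_dvh[:, j], auto_dvh[:, j]).pvalue
    return mean_clinical, mean_auto, p_values
```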
The DVH analysis was performed for all target volumes, as well as for the parotid gland, submandibular gland, and spinal cord. To evaluate dose to normal structures other than the delineated ones, a DVH evaluation of all healthy tissues outside the PTV was performed. In contrast to the previous DVH evaluations, this evaluation was performed in absolute volume to compensate for differences in the CT-scanned volume of each patient (relative values would depend on the scanned volume). The dosimetric evaluation was extended with a blinded clinical evaluation of the treatment plans. Two senior head and neck radiation oncologists independently scored the treatment plans on a categorical scale from 1 to 6 (1 = bad and 6 = good). The scoring was performed for target coverage, sparing of healthy tissues, and an overall assessment of the treatment plan. Finally, based on the clinical evaluation, the radiation oncologists selected the plan they would favor for clinical treatment. The radiation oncologists had access to all clinical information and diagnostic scans. The evaluation was performed similarly to the clinical procedure used for evaluation of two different proposals for a given clinical treatment. All information related to the production of the plans was blinded and the treatment plans were presented for evaluation in random order. All statistical tests were made with the Wilcoxon matched-pair, signed-rank test using a significance level of 5%. III. RESULTS The average DVH comparison between the automatic and the clinically delivered plans for the targets is shown in Fig. 1. In accordance with ICRU 83, (27) PTV50 includes PTV60 and PTV66/68, causing the long tail towards high doses. Likewise, PTV60 contains PTV66/68. For PTV50, the autoplans had a small, but statistically significant, lower dose than the clinical plans below 95% and also above ~110% of the 50 Gy prescription. The PTV60 had a similar pattern as the PTV50 below ~92% of 60 Gy. A slightly higher average dose for the autoplans was also observed, while no differences were seen for higher doses. For the highest dose level, PTV66/68, the trend of a small, but statistically significant, lower dose below 95% of the prescribed dose was observed for the autoplans, while no other differences were found. The observations from Fig. 1 are reflected in Table 1, which shows selected DVH parameters. The difference in steepness of the DVH for PTV50 is statistically significant, with the automatic plans having the steeper slope. Also, the minimum dose was different for the two types of plans, with the clinical plans having a slightly higher minimum dose. Nevertheless, for both types of plans, the average DVH covers the PTV50 with the 95% isodose lines for at least 98.5% of the target, which meets the ICRU 83 recommendation, (27) as well as the 2013 DAHANCA guidelines and QA evaluations. (28) For PTV60, the only statistically significant difference observed was a slightly higher dose level for the automatic plans (Table 1). For the highest dose level, the automatic plans produce more conformal dose distributions (Conformity Index CI 95%), but slightly lower maximum and minimum doses. Average DVH curves for the organs at risk (OAR) are shown in Fig. 2. For all the delineated organs, the average DVH doses from the autoplans are either less than or equal to those of the clinical plans. There are no dose ranges for which the autoplans generated statistically significantly higher doses to the delineated OAR. 
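As an aside (the paper does not state which conformity-index definition was used, so the Paddick-style formula, array shapes, and voxel handling below are assumptions for illustration), a CI for the 95% isodose can be computed directly from the dose grid and the PTV mask:

```python
import numpy as np

def conformity_index_95(dose, ptv_mask, prescription_gy, voxel_volume_cc=1.0):
    """Illustrative CI95 (Paddick-style): penalizes both under-coverage of the PTV
    and spill of the 95% isodose into surrounding tissue."""
    dose = np.asarray(dose, dtype=float)
    ptv_mask = np.asarray(ptv_mask, dtype=bool)

    isodose_mask = dose >= 0.95 * prescription_gy        # volume receiving >= 95% of Rx
    v_isodose = isodose_mask.sum() * voxel_volume_cc     # total 95% isodose volume
    v_ptv_covered = (isodose_mask & ptv_mask).sum() * voxel_volume_cc
    v_ptv = ptv_mask.sum() * voxel_volume_cc

    # CI = (covered PTV volume)^2 / (PTV volume x isodose volume); 1.0 is ideal.
    return (v_ptv_covered ** 2) / (v_ptv * v_isodose) if v_ptv and v_isodose else 0.0
```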
To evaluate dose to nondelineated healthy tissues, Fig. 2(f) shows the DVH difference for all tissues outside the PTV. The figure demonstrates the difference in the absolute volume irradiated above a specific dose level (clinical plans minus automatic plans). For all dose levels up to ~55 Gy, the irradiated volume outside the PTVs was lower for the automatic plans. Above ~55 Gy, the two types of plans were equal to each other. To quantify the DVH differences and their variation, selected DVH metrics for the organs at risk are shown in Table 2. The overall result in Table 2 is, as stated above, that the OARs receive less dose with the automatically generated plans than with the previously delivered plans. Part of the observed dose differences was related to interdosimetrist variation. Figure 3 shows an example of all pairs of DVHs for the ipsilateral parotid, which illustrates that the autoplans are confined to a narrower region than the manual plans. The evaluations by the oncologists are shown in Table 3. Except for Patient 16, the main finding is that the qualities of both types of plans are good. Patient 16 failed to meet the maximum dose goal to the spinal cord, which has a very high clinical priority, but for the other targets and organ delineations, the plan was evaluated as good. In terms of target coverage, the scores were not statistically significantly different between the two types of plans for either of the two observers; however, a p-value of 0.079 for one observer could indicate a preference for the clinical plans in terms of target coverage. The clinical evaluation of the dose to the OARs was clearly in favor of the autoplans for both observers, which is consistent with the dosimetric observations in Fig. 2 and Table 2. For the overall evaluation of the plans, one observer was clearly in favor of the autoplans, while the other observer showed no statistically significant differences. In 94% of the cases, the overall score for the autoplans was as good as, or better than, the clinical plans. The last column of Table 3 shows the treatment plans preferred by the clinicians. The column reflects the overall plan score, and shows that the observer with a difference in the overall evaluation selected only automatic plans, while the observer who did not find a significant overall difference selected the automatic and the clinical plans evenly. IV. DISCUSSION For all 26 patients it was possible to produce automatic plans of high plan quality. Small differences in the PTV dose coverage but a significant reduction in dose to OAR between automatic plans and clinical plans indicate that the Technique parameters used in the current study are biased towards normal tissue sparing relative to the clinical practice. It is likely that another set of Technique parameters could be determined which would focus more on dose coverage. In addition, in the clinical release of the Auto-Planning software, modifications have been made to enhance the priority of maximum dose constraints to OARs such as the cord, a need highlighted by the physicians during plan evaluation. No postoptimization of the plans was performed in the study, in order to make a pure validation of the Auto-Planning software. If needed or requested in a clinical situation, automatically optimized plans could be further optimized just like any manually created plan, since the automatic plans are delivered in exactly the same format as manually created plans. 
Thus, the automatic plans can either be a high quality starting point for further manual optimization or an attempt to produce plans of clinical quality without further user intervention. It is likely that the reduced dose to normal tissue of the sizes seen in Fig. 2 could be of clinical impact for the patients, while the small difference in tumor coverage will almost certainly have no clinical impact. Table 3. Clinical evaluation of the auto and clinical plans on a score from one to six (1 = bad, 6 = good). The observers' scores are shown in each column, separated by a slash. At the lower part of the table, mean values for both the individual observers and a combined score are shown, together with a test of statistically significant differences between the scores for auto and clinical plans. For Patient 16, please see note in text. Statistically significant differences are shown in bold. However, without a definitive answer as to which plan is the better in terms of tumor control and sparing of normal tissue (e.g., based on large randomized trials), the evaluation of the clinical quality of the plans will be somewhat subjective and depend upon the individual views of the oncologist judging the plans. In the current study there are indications that the two oncologists have slight differences in their priorities. As stated in the results, both oncologists scored the OAR irradiation significantly better for the automatic plans compared to the manual plans, while neither of the oncologists found statistically significant differences between the automatic and manual plans in terms of target coverage, although a p-value of 0.074 for one of the oncologists indicates a preference for the target coverage of the manual plans. These differences are also reflected in the selection of treatment plan, in which one oncologist selected only the automatic plans while the other selected an even mixture of automatic and manual plans. However, independent of the interoncologist variations, it is interesting to observe that in 94% of the cases (all except for one) the overall score given to the automatic plan is at least as high as the score for the manual plan, indicating that most of the plans could be used clinically without any user intervention. For only one plan did one of the oncologists score the automatic plan lower than the manual plan, due to a violation of the maximum dose constraint of the spinal cord. With slight manual effort, this single treatment plan was later optimized to have an acceptable maximum spinal cord dose. A prerequisite for achieving high-quality automatic plans compared to manually created plans is obviously a sound optimization and tuning algorithm (the Auto-Planning engine). However, the quality of the manual plans is obviously also of importance in the comparison with the automatic plans (poor-quality manual plans would favor the automatic plans). At the time the manually created plans were made, all clinical head and neck treatment plans in the department were created using IMRT; thus the department was experienced in creating "high quality" plans manually. As a result, given the significant experience with IMRT in the department, the interdosimetrist variation should be limited. Nevertheless, as seen in Fig. 3, there is quite a variation in manually optimized plans, a variation which is less for the automatic optimization. 
It therefore seems likely that one of the advantages of automatic plans is a reduction of the interdosimetrist variation which is present even within departments that use IMRT extensively. The reason for the interdosimetrist variation is related both to the limited time to create the plan and to a lack of knowledge of how much the plan can in reality be optimized. For a manually created plan, it is difficult to know the extent to which an OAR can be spared prior to actual plan optimization. Therefore, objectives for organs at risk are typically set relatively loosely initially, in order to ensure dose coverage of the target. Having obtained dose coverage of the target, the next step is to reduce the dose to organs at risk as much as possible. In a busy clinic, it can be hard to ensure that all constraints on organs at risk have been tightened as much as possible. This issue could be reduced significantly if an initial "high quality" plan (e.g., an automatically generated plan) were available, such that the dosimetrist could focus on fine tuning of the treatment plan. Another potential benefit of Auto-Planning could be a simple method to exchange planning knowledge and procedures between institutions, since the Technique configuration of the Auto-Planning software can easily be shared between institutions. This could help institutions with, for example, limited resources to quickly create IMRT or VMAT plans with similar quality as in more advanced institutions. Most previous work on automating the planning process has built on knowledge of previously treated patients. One approach for extracting information from previously treated patients is utilizing the overlap volume histogram method, which measures the position of an OAR relative to the target. (19,29,30,31) An illustrative sketch of this concept is given below. Knowledge of overlap volume histograms from previously treated patients can be used to predict the likely achievable irradiation level of specific OARs. A few published solutions for automating the planning process have been documented, (21,22,23,24,25) and all build on the Pinnacle3 planning system. The published systems did show the feasibility of automating the planning process. However, in terms of flexibility, it could be a potential issue that "knowledge based" approaches require a database of "high" quality plans for each protocol. Changes to planning techniques, prescriptions, OAR sparing goals, and contouring style could, if not implemented in a very smart way, require a new "high" quality database. Such a change could be quite labor-intensive to implement clinically, and might not be as flexible to interchange between institutions. This issue might be addressed within "knowledge based" algorithms, but is not present in the Auto-Planning solution evaluated in this study since it only relies on a small set of Technique parameters. Finally, it should be mentioned that plan comparison studies are inherently difficult to perform since development of the treatment planning skill is continually ongoing within any department in order to optimize the treatment plans. Thus, if the current study were repeated, the results might be different since our dose planning team has learned new ways to improve the quality based on the results of this study. Similar statements could be made about the configuration of the automatic system. However, this does not change the fact that the current comparison between treatment plans that have been delivered clinically and the automatic treatment plans did show the autoplans to be superior at that time. 
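As a brief aside, a minimal sketch of the overlap volume histogram concept mentioned above follows (not taken from the cited works; the distance-transform approach, array shapes, and distance range are assumptions for this example). For each distance r, the OVH records the fraction of the OAR volume lying within distance r of the target:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(target_mask, oar_mask, voxel_size_mm=(3.0, 1.0, 1.0)):
    """Fraction of OAR volume within distance r of the target, for a range of r."""
    target_mask = np.asarray(target_mask, dtype=bool)
    oar_mask = np.asarray(oar_mask, dtype=bool)

    # Distance (in mm) from every voxel to the nearest target voxel.
    dist_to_target = distance_transform_edt(~target_mask, sampling=voxel_size_mm)

    oar_distances = dist_to_target[oar_mask]          # distances of OAR voxels only
    radii = np.arange(0.0, 60.0, 1.0)                 # evaluate at 1 mm steps up to 60 mm
    ovh = np.array([(oar_distances <= r).mean() for r in radii])
    return radii, ovh
```

The idea is that an OAR sitting far from the target (OVH rising slowly with r) should be spareable to lower doses than one overlapping the target, which is what knowledge-based methods exploit to predict achievable OAR doses.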
Thus, the impact of Auto-Planning seems likely to be as a tool to increase the overall quality of dose planning, rather than a tool that could remove the need for manual optimization. V. CONCLUSIONS Comparison of autoplans and previously delivered clinical plans showed only small dosimetric differences in target coverage, but a significant reduction in dose to OAR for the autoplans. The blinded clinical evaluation of the plans showed that, for 94% of the evaluations, the autoplans were similar to or better than the clinical plans. Auto-Planning software will, therefore, be able to reduce the manual time spent per treatment plan, since most of the plans could potentially be used clinically without further optimization. Perhaps more importantly, Auto-Planning could be used as a high quality starting point for further plan optimization. This could increase the overall quality of the treatments and reduce the interobserver variation present in manually created treatment plans. APPENDICES Appendix A. Description of Auto-Planning. The Auto-Planning module simplifies the planning process through the use of templates called Techniques, and automatic optimization tuning methods called the Auto-Planning engine (APE). The user can define the following parameters in a Technique:
- Derived regions of interest (ROIs) (e.g., PTV or expanded cord)
- Placement of points of interest (POIs)
- Prescriptions
- Beam geometries, settings, and optimization options
- Prioritized optimization goals
A single selection creates a new plan based on the Technique settings and runs the APE. The APE tuning method maps the prioritized optimization goals defined in the Technique to optimization objectives. Multiple optimization loops are performed that iteratively adjust the optimization parameters to meet the goals and further drive down organ at risk (OAR) dose with minimal compromise to the target coverage. This is achieved by using objectives specific to driving down OAR dose to the point where it significantly affects target coverage, and separate objectives to achieve the desired goals. Target conformality is automatically controlled by a system-generated ring structure and objectives, and body dose is controlled by a system-generated normal tissue structure and objectives. The objective dose and weight parameters are tuned using a proprietary method. Target uniformity is controlled by reducing hot and cold spots using system-generated control structures and objectives, similar to the process defined in the study by Xhaferllari et al. (32) The input to a Technique is clinical goals (e.g., maximum dose, mean dose, and DVH constraints), as well as the pertinent parameters listed above. Optimization of a Technique is an iterative process in which Auto-Planning is performed on test patients. If there are shortcomings in the autoplan results, the optimization goals in the Technique are adjusted to account for them and saved. In the current work, the optimization of the Technique was performed on patients separate from those included in the study, and finalized before autoplanning was performed on any of the patients included in the current study.
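To make the "drive OAR dose down until it starts to hurt the target" idea concrete, here is a purely hypothetical toy sketch of such an iterative tuning loop. It is not the proprietary APE: the surrogate dose model, the weight-update rule, and all numbers are invented for illustration only.

```python
def evaluate_plan(oar_weight):
    """Toy surrogate for a plan optimization: a higher OAR objective weight lowers
    the OAR mean dose but slowly erodes target coverage. Purely illustrative."""
    oar_mean_dose = 30.0 / (1.0 + oar_weight)     # Gy, decreases as the weight grows
    target_coverage = 0.99 - 0.002 * oar_weight   # fraction of PTV receiving 95% of Rx
    return oar_mean_dose, target_coverage

def tune_oar_objective(min_coverage=0.98, max_loops=20):
    """Tighten the OAR objective step by step until target coverage would be
    compromised, in the spirit of the prioritized goals described in Appendix A."""
    weight = 1.0
    for _ in range(max_loops):
        trial_weight = weight * 1.25              # propose a tighter OAR objective
        _, coverage = evaluate_plan(trial_weight)
        if coverage < min_coverage:
            break                                 # tightening further would hurt the target
        weight = trial_weight
    return weight, evaluate_plan(weight)

if __name__ == "__main__":
    w, (oar_dose, cov) = tune_oar_objective()
    print(f"final weight {w:.2f}: OAR mean dose {oar_dose:.1f} Gy, coverage {cov:.3f}")
```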
5,236
2016-01-01T00:00:00.000
[ "Engineering", "Medicine" ]
The Network of Inflammatory Mechanisms in Lupus Nephritis Several signaling pathways are involved in the progression of kidney disease in humans and in animal models, and kidney disease is usually due to the sustained activation of these pathways. Some of the best understood pathways are specific proinflammatory cytokine and protein kinase pathways (e.g., the protein kinase C and mitogen-activated kinase pathways, which cause cell proliferation and fibrosis and are associated with angiotensin II) and the transforming growth factor-beta (TGF-β) signaling pathway, which leads to increased fibrosis and kidney scarring. It is thus necessary to continue to advance our knowledge of the pathogenesis and molecular biology of kidney disease and to develop new treatments. This review provides an update of important findings about kidney diseases (including diabetic nephropathy, lupus nephritis, and vasculitis, i.e., vasculitis with antineutrophilic cytoplasmic antibodies). New disease targets, potential pathological pathways, and promising therapeutic approaches from basic science to clinical practice are presented, and the blocking of the JAK/STAT and TIM-1/TIM-4 signaling pathways as a potential novel therapeutic strategy in lupus nephritis is discussed. INTRODUCTION As a leading autoimmune disease, systemic lupus erythematosus (SLE) is a chronic inflammatory disease that affects multiple organs. SLE involves activation of dendritic cells (DCs), macrophages, and lymphocytes, which together lead to the production of high-affinity autoantibodies and immune complex formation. The pathogenesis of SLE remains unclear despite extensive clinical and animal studies. Various genes (1) and environmental factors including viral infections, hormones (2,3), and ultraviolet light are thought to exacerbate SLE. An abnormal production and imbalance of T helper (Th) lymphocyte cytokines was demonstrated to be involved in the development of autoimmune diseases (4), and Th1 cytokines such as interleukin (IL)-2 and -12 and interferon-gamma (IFN-γ) and Th2 cytokines (e.g., IL-4, IL-5, IL-10, and IL-13) are also implicated in the pathogenesis of SLE. The inhibition of these cytokines is a key factor in disease development in NZB/WF1 mice, which develop severe lupus-like phenotypes that resemble human SLE (5). Th17 lymphocytes are a subset of Th cells with an important role in autoimmunity. These lymphocytes are derived from naïve CD4+ T cells and are characterized by the expression of the transcription factor RORγT (retinoic-acid-receptor-related orphan nuclear receptor gamma) (6). Once stimulated by various cytokines, including IL-23 (7), Th17 lymphocytes secrete cytokines such as IL-17 family members, IL-21, IL-22, tumor necrosis factor (TNF)-α, and IL-6 (6). Compared to healthy controls, individuals with SLE exhibited increased numbers of Th17 cells and IL-23 in their serum (8). Chen et al. observed that the frequency of circulating Th17 cells and the serum levels of IL-17 and IL-23 were higher in patients with lupus nephritis compared to controls (9). The potent pro-inflammatory effect of Th17 lymphocytes has been shown to be due to the induction of vascular inflammation and the recruitment of leukocytes, and this is suspected to contribute to several pathological pathways in SLE, including the B-cell activation and autoantibody production observed in SLE (10). 
The imbalance of cytokines in SLE may be part of a core process of pathogenicity, or it may be a secondary marker of the dysregulation of immune pathways such as those involving Th1-Th2 and Th17-Treg cells (11,12). IL-6 signaling via its receptors (IL-6Rs) on activated B cells induces dimerization with the transmembrane protein gp130 and the activation of the receptor-associated Janus kinase (JAK) tyrosine kinases JAK1 and JAK2. This is the most important role of IL-6, as it is involved in multiple autoimmune diseases and contributes directly to the survival of plasma cells in the bone marrow niche (13). Effector T cells also recognize autoantigens that are present in the kidneys as implanted or endogenous antigens (14)(15)(16)(17)(18), and fewer CD4+ and CD8+ cells are recruited to the glomerulus and stroma. The members of the T-cell immunoglobulin mucin-domain (TIM) family encode proteins that have an IgV-like domain and a mucin domain (19), and the three human TIM genes most similar to those in mice are TIM-1, TIM-3, and TIM-4. The roles of TIM proteins in T-cell differentiation, effector function, autoimmunity, and allergy are becoming clear (20), and it was demonstrated that TIM-1 is expressed on activated T cells (21). Another study suggested that TIM-1 on T cells acts as a costimulatory molecule to enhance cell proliferation and cytokine production and to mediate the loss of tolerance (22). Chemokines and adhesion molecules are reduced by TIM-1 antibodies (18). In intracellular adhesion molecule-1 (ICAM-1) knockout mice treated with TIM-1 antibody, the renal and spleen mRNA expressions of the Th1 chemokines CXCL9 and CXCL10 were reduced, and ICAM-1 mediated the recruitment of leukocytes in glomerulonephritis (23). A promising next research task would be to target inflammatory cytokines via a blockade of the JAK-signal transducer and activator of transcription (STAT) and TIM-1 signaling pathways, in order to better target the development and survival of autoreactive pathogenic plasma cells during the early stages of SLE. In this review, new therapeutic targets for lupus nephritis, potential pathologies, and promising therapeutic approaches to the JAK-STAT and TIM-1-TIM-4 signaling pathways from basic science to clinical practice are presented. Mechanisms Downstream of the JAK-STAT Pathway Several signaling pathways are known to be involved in the progression of renal disease in both humans and animal models, and the progression is usually due to a sustained cytokine and JAK-STAT activation of these pathways (24). The JAK-STAT pathway is downstream of the type I and II cytokine receptors. As part of a major signaling cascade, JAK is an effective therapeutic target for a variety of cytokine-driven autoimmune and inflammatory diseases (25,26). A cytosolic tyrosine kinase, JAK mediates signaling downstream of a wide range of cell-surface receptors, and members of the cytokine receptor common gamma (cg) chain family in particular are involved in this signaling (27). There are four mammalian JAKs: JAK1, JAK2, JAK3, and tyrosine kinase 2 (Tyk2). The activation of JAKs occurs via ligand-receptor interactions and results in the phosphorylation of the cytokine receptor; the signaling occurs via the generation of docking sites for signaling proteins known as STATs (19). JAKs catalyze the phosphorylation of STATs and promote STAT dimerization and nuclear transport, thereby regulating gene expression and transcription (28,29). 
The JAK proteins are structurally related but different in their activation and their downstream effects; their high specificity is thus expected (Figure 1). Although most of the JAKs are ubiquitously expressed, the expression of JAK3 appears to be restricted mainly to the hematopoietic system and vascular myocytes. JAK3 has an important role in lymphocyte development and function. In contrast to the ubiquitous expression of the other JAK subtypes, JAK3 has a restricted tissue distribution: it resides primarily on hematopoietic cells and is associated with cg chains (33). The importance of the JAK3 signaling pathway was highlighted by the findings that mice and humans with genetic deletions or mutations in either the cg subunit or JAK3 develop defects in lymphocyte development, which result in a severe combined immunodeficiency syndrome phenotype (34). The Blocking of the JAK-STAT Pathway as a Therapeutic Target Since the JAK-STAT pathway has a major activating role in a variety of disease processes, concerted efforts have been made to develop specific inhibitors of this pathway. Inhibitors of protein kinases are relatively easy to identify, and the development of JAK inhibitors has received the most attention. The following three JAK inhibitors have been approved by the U.S. Food and Drug Administration (FDA) for clinical use. Ruxolitinib (Jakafi, from Incyte) is a potent inhibitor of both JAK1 and JAK2 and was FDA-approved in late 2011 for the treatment of polycythemia vera and myelofibrosis (35). In late 2012, the FDA approved tofacitinib (Xeljanz, from Pfizer), which was initially designed as a specific inhibitor of JAK1 and JAK3 kinases; tofacitinib has also been administered as an immunosuppressant for the treatment of transplant patients and individuals with autoimmune diseases (36). The JAK1/JAK2 inhibitor baricitinib (Olumiant, from Eli Lilly) was FDA-approved in June 2018 for the treatment of moderately to severely active rheumatoid arthritis (RA). Several other JAK inhibitors have been developed as immunosuppressive agents for RA and other autoimmune diseases; e.g., upadacitinib (a JAK1 inhibitor) and filgotinib (a JAK1 inhibitor) were demonstrated to be effective as a treatment of RA (37,38). Given the effects of JAK-STAT activation on cytokines and chemokines and the specific roles of inflammation in the promotion of progressive renal injury, it is not surprising that JAK-STAT activation is involved in the pathogenesis of both renal disease and acute kidney injury. The JAK-STAT pathway has been studied extensively, and due to its potent immunomodulatory function, the JAK-STAT pathway and its components are promising candidates for immunological interventions for disease control. Indeed, JAK inhibitor clinical trials have been conducted for a variety of diseases including chronic kidney disease, RA, inflammatory bowel disease, atopic dermatitis, and psoriasis (39)(40)(41). There is significant interest in JAK-STAT as a therapeutic target for autoimmune nephritis in particular; the activation of JAK triggers the phosphorylation of IL-6R and gp130, followed by various secondary messengers including STAT3, mitogen-activated protein kinases (MAPKs), and Akt, all of which provide growth and proliferation signals and the activation of transcription factors (42). Cytokines are glycosylated proteins with immunomodulatory functions that have important roles in infection and inflammation. 
Representative cytokines are members of the IL-6 family (which consists of IL-6, IL-11, IL-27, oncostatin M, cardiotrophin-1, and neuropoietin) (43). These cytokines signal through homo- or heterodimers of the signaling β-receptor gp130, which is ubiquitously expressed. They are characterized by their quantified biological effects. A further transduction of signaling is carried out by the JAK/STAT, MAPK, and phosphatidylinositol-3-kinase (PI3K) pathways (44). Genetic excision or polymorphisms of key suppressors of JAK-STAT signaling, such as suppressors of cytokine signaling, have been implicated in elevated serum IL-6 levels and in the risk of SLE development in humans (45,46). JAKs also play an important role in transmitting signals from IL-6Rs, and IL-6 is involved in both SLE and the maintenance of a pool of potentially autoreactive plasma cells. The blockade of JAK signaling with selective and potent JAK2 inhibitors may therefore weaken the supportive effect of IL-6 on the maintenance of autoreactive plasma cells in SLE. Targeting the cytokine/growth factor pathway, which is important for plasma cell generation and the development of SLE, has been supported by several studies targeting the IL-6 pathway and receptors for the treatment of SLE (47); however, targeting IL-6 and IFN-γ failed to produce significant renal effects in either case. The first in vivo study of therapeutic targeting of the JAK/STAT pathway in lupus was performed in 2010 by Wang et al. (48). In that study, mice treated with the tyrosine kinase inhibitor AG-490 showed less inflammation (i.e., glomerulonephritis, interstitial nephritis, vasculitis, and even extra-renal involvement of the salivary glands) than mice treated with the vehicle. The inhibition of chemokines, IFN-γ, and major histocompatibility complex class II molecules on the surface of renal cells was observed. The AG-490 treatment also reduced the levels of blood urea nitrogen (BUN), serum creatinine, and proteinuria, and it reduced the depositions of IgG and C3 in glomerular cells. The study's immunohistochemical examination revealed a reduced expression of STAT1 in glomerular cells, tubular cells, and interstitial cells of the mice. The effects of a selective JAK2 inhibitor (CEP-33779) on mice with lupus nephritis (LN) were assessed in a pivotal study conducted by Lu et al. (49). CEP-33779 protected MRL/lpr mice from the development of renal damage and ameliorated established disease in the mice, as well as in NZB/WF1 mice. In mice with pre-existing disease, CEP-33779 resulted in increased survival, decreased proteinuria, the resolution of histological features of renal disease, and a decreased level of pSTAT3. Interestingly, CEP-33779 also reduced the levels of long-lived plasma cells in the spleen (at all doses) and in the bone marrow (at the highest dose). This effect may have therapeutic implications in human LN, given that long-lived plasma cells are involved in the production of antibodies. Conversely, treatment with CEP-33779 did not affect the levels of spleen short-lived plasma cells, which may be associated with a reduction in immunosuppression-related side effects (i.e., infections) and potentially associated with a better response to vaccination (38,50). A specific blockade of JAK2 may also contribute to the treatment of SLE pathology, including arthritis and dermatitis. 
Multiple cytokines (IL-6, IL-12, and α/β-type IFNs) are suspected to have important roles in the initiation, progression, and development of SLE (51)(52)(53)(54). These three cytokines signal through receptors regulated by JAK kinases. IL-6 signaling via IL-6R on activated B cells induces dimerization with gp130 and the activation of the receptor-associated JAK tyrosine kinases JAK1 and JAK2. This is the most important function of IL-6, since IL-6 is involved in multiple autoimmune diseases and contributes directly to the survival of plasma cells in the bone marrow niche (13). In addition, multiple studies using mouse models of SLE have repeatedly demonstrated the importance of IL-6 in promoting disease expression in SLE (30,31,33,52,53). As noted earlier, the activation of JAK causes the phosphorylation of IL-6R and gp130, followed by growth and proliferation signals. JAK activates secondary messengers and transcription factors (e.g., STAT3, MAPK, and Akt) (43). Targeting the IL-6 pathway and receptors is currently being tested for the treatment of SLE (43,55,56). Based on experimental and preclinical data, the oral selective JAK1 and JAK2 inhibitor baricitinib (which has been approved for the treatment of RA) was recently studied in 314 patients with active SLE primarily involving the skin and joints, in a randomized, 24-week, placebo-controlled phase II trial (57). The patients were randomized 1:1:1 to two doses of baricitinib (4 mg or 2 mg/day) or placebo. The percentage of patients who achieved the resolution of arthritis or skin lesions was significantly higher in the 4-mg baricitinib group compared to the placebo group. Among the patients who received baricitinib, response required that the SLE Disease Activity Index 2000 score at week 24 had decreased by >4 points, that the British Isles Lupus Assessment Group A or B disease activity score did not worsen, and that the Physician's Global Assessment did not worsen. In the trial (57), the percentage of patients who achieved this SLE Responder Index-4 response (64%) was also significantly higher than that in the placebo group (48%). The improvement in the number of tender joints was significantly higher in the 4-mg baricitinib group vs. the placebo group (−6.9 vs. −5.6 joints). However, the extent and severity of skin lesions (as assessed by the cutaneous lupus erythematosus area and severity index) did not improve with baricitinib treatment compared to the placebo group. There were also no significant differences in the changes in anti-dsDNA antibodies and complement levels between the baricitinib and placebo groups. Although the occurrence of adverse events was similar among the three groups in the trial (57), serious infections were more common in the 4-mg baricitinib group (6%) than in the 2-mg baricitinib group (2%) and placebo group (1%). One patient with SLE who was positive for antiphospholipid antibodies and treated with 4 mg baricitinib developed a deep vein thrombosis (accounting for 1% of patients treated with 4 mg baricitinib). Although the effect of baricitinib in reducing joint tenderness is very small, the results of this trial provided a positive signal for further phase III trials of JAK inhibitors for various symptoms of SLE. Two multicenter, randomized, placebo-controlled phase III clinical trials of baricitinib in non-renal SLE are underway (NCT03616912 and NCT03616964). 
Solcitinib (GSK2586184), a selective JAK1 inhibitor, was being tested in a Phase II trial (NCT01777256) in patients with active non-renal SLE; the trial was stopped early after the recruitment of 50 patients, due to inadequate efficacy. No significant effect on the mean expression of IFN transcriptional biomarkers was observed (58). In addition, drug reactions exhibiting eosinophilia and systemic symptoms associated with solcitinib were observed in two patients (4%), and reversible hepatic dysfunction was documented in four patients (8%) (58,59). More clinical data are needed to confirm the effects of selective JAK inhibitors and their efficacy and toxicity. Based on the limited information available in the literature (57,60), JAK inhibitors are expected to provide an alternative treatment option for patients with non-life-threatening lupus who are refractory to standard therapeutic management, such as those with joint or skin disease. Many new JAK inhibitors are currently in development and will be tested in patients with SLE, and it is hoped that more-effective and less-toxic drugs will soon be available to continue to improve the prognosis of SLE patients. KIM-1 as a Urinary Biomarker in Lupus Nephritis Kidney injury molecule-1 (KIM-1) and TIM-1, which are the same molecule, are relatively recently discovered transmembrane proteins with Ig-like and mucin domains in their ectodomain. TIM-1 modulates CD4+ T-cell responses and is also expressed by damaged proximal tubules in the kidney (where it is known as KIM-1). KIM-1 is upregulated more than any other protein in the proximal tubules of the kidneys following various forms of injury (61,62) (Figure 2). KIM-1 is a phosphatidylserine receptor that mediates the phagocytosis of apoptotic bodies and oxidized lipids (63). A chronic expression of KIM-1 leads to progressive renal fibrosis and chronic renal failure (64), which is speculated to be due to oxidized lipids; KIM-1 is associated with phagocytic functions that take up toxic substances such as oxidized lipids. In addition to its role in phagocytosis, KIM-1 can activate signaling through the PI3K pathway (65). The role of KIM-1 signaling in proximal tubular cells and the link between KIM-1 phagocytosis and phosphorylation remain to be determined. Yang et al. observed that KIM-1-mediated phagocytosis downregulates the inflammation and innate immune responses in acute ischemic and toxic injury (66). It is thought that KIM-1 has a role in tubular interstitial damage (67). The expression of tubular KIM-1 is specific to ongoing tubular cell damage and de-differentiation (68,69), and urinary concentrations of KIM-1 are thought to reflect this expression. KIM-1 is also associated with renal interstitial fibrosis and inflammation in certain types of renal disease (70). Regarding prognostic factors, Austin et al. reported that tubular atrophy and fibrosis are associated with poor prognosis in LN (69). LN is often associated with comorbid acute and chronic pathological renal changes, and understanding the extent of renal damage without invasive testing is important in determining a patient's renal prognosis. The majority of tubular KIM-1 (∼90%) in various human renal diseases is of proximal origin, as was identified by double-labeling studies with aquaporin-1 (a marker for proximal tubules) (71). KIM-1 is localized in the apical membrane of dilated tubules in acute and chronic tubular injury (72). 
The localization of KIM-1 expression appears to be related to the susceptibility of specific tubular segments to different types of injury (72). The selective KIM-1 expression by injured proximal tubular cells provides a strong impetus for using KIM-1 as a biomarker of damage. Elevated urinary KIM-1 levels are strongly related to the tubular KIM-1 expression in experimental settings and in human renal disease (71,72). We observed a significant correlation between urinary KIM-1 levels and disease activity in LN by an enzyme-linked immunosorbent assay (ELISA) in humans and mice (61,73). In the former study, we assessed the urinary KIM-1 level and tubular KIM-1 expression in kidney biopsies of SLE patients and their association with histological markers of renal damage (61), and we found that the urinary KIM-1 levels were significantly correlated with proteinuria (R = 0.39, p = 0.004) and with tubular damage (R = 0.31, p = 0.01). To assess the diagnostic value of urinary KIM-1 as a novel marker for crescent formation and interstitial infiltration, we used a receiver operating characteristic curve analysis to determine a cut-off level for urinary KIM-1 levels. At urinary KIM-1 levels >11.2 ng/day, the assay had 62.5% specificity and 100% sensitivity for the diagnosis in patients with cellular crescent formation. At urinary KIM-1 levels >3.2 ng/day, the assay had 60.8% specificity and 87.5% sensitivity for the diagnosis in patients with interstitial infiltration (61). Elevated urinary KIM-1 levels were strongly associated with tubular KIM-1 expression in both an experimental setting and human renal disease, and it was revealed that urinary KIM-1 is a very promising biomarker for the presence of tubular interstitial pathology and damage (61,74,75). Several studies have shown that in patients with other forms of renal injury (including ischemia, inflammation, and nephrotoxic drug injury), the renal cortical and medullary expression of tubular KIM-1 in damaged tubules is up-regulated after disease induction (74,75). In clinical practice, it is essential to evaluate patients' kidney status. A renal biopsy is a standard diagnostic tool for the evaluation of kidney lesions in SLE, but due to its invasive nature, a kidney biopsy has potential risks and, as a rule, is not routinely performed. Moreover, a small amount of tissue may not be representative of the entire kidney (76). It is thus highly desirable to identify early and reliable biomarkers of kidney lesions in SLE (77,78). 
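For illustration only (this is not the analysis code used in the cited studies; the variable names and the Youden-index selection criterion are assumptions for this sketch), a receiver operating characteristic cut-off for a urinary biomarker such as KIM-1 can be derived as follows:

```python
import numpy as np
from sklearn.metrics import roc_curve

def biomarker_cutoff(values, labels):
    """Pick a cut-off for a continuous biomarker (e.g., urinary KIM-1 in ng/day)
    against a binary outcome (e.g., presence of cellular crescents), using the
    Youden index (sensitivity + specificity - 1) as the selection criterion."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels, dtype=int)

    fpr, tpr, thresholds = roc_curve(labels, values)
    youden = tpr - fpr                  # equivalent to sensitivity + specificity - 1
    best = int(np.argmax(youden))
    return {
        "cutoff": float(thresholds[best]),
        "sensitivity": float(tpr[best]),
        "specificity": float(1.0 - fpr[best]),
    }
```

The cut-off values reported in the text (e.g., 11.2 ng/day) come from the original studies; the sketch only shows the generic procedure for deriving such a threshold from patient-level data.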
Mechanisms Downstream of the TIM-1/TIM-4 Signaling Pathway Different anti-TIM-1 antibodies that are specific to the IgV domain of TIM-1 have different effects on immune cell activation and response, due mainly to their different binding activities. A high-affinity anti-TIM-1 antibody, 3B3, forms a stable TIM-1 complex and brings TIM-1 into the TCR-CD3 complex, which enhances T-cell function and helps to form a large molecular activation cluster for complete T-cell activation (79). The low-affinity antibody RMT1-10 has an inhibitory effect and may not support the formation of a stable TIM-1-TCR-CD3 complex (80). Foxp3-expressing regulatory T cells (Tregs) helped regulate the autoimmune response and provide protection in a murine model of LN (81). Treatment with the low-affinity antibody RMT1-10 increased the number of foxp3+ cells in the thoracic cavity and the percentage of foxp3+CD4+CD25+ cells in the spleen of the mice. RMT1-10 modulates the immune response via regulatory B cells (Bregs); CD19+CD5+ cells and IL-10-producing cells may be involved in the effects of TIM-1, as RMT1-10 increases the percentage of CD19+CD5+ and IL-10-producing cells. TIM-1 expression has been reported in activated T cells (82), DCs (83), and B cells (84). In association with a loss of IL-10 production in Bregs, the mice developed features of systemic autoimmune disease, including activated T cells with autoantibody formation and high IFN-γ production (85). Agents Blocking the TIM-1/TIM-4 Pathway Macrophages and CD4+, CD8+, and CD4-CD8-B220+ T cells are present in the kidneys of individuals with LN. Leukocyte recruitment is influenced by cytokines and chemokines, which correlate with the degree of tissue damage and predict disease progression (86,87). Autoantibodies are important, and T-cell-deficient MRL-Faslpr mice, which are prone to lupus, do not develop autoantibodies or immune complex disease (88)(89)(90). The tissue injury in LN is thus mediated by both autoantibodies and autoreactive lymphocytes (73,91). TIM-1 can bind to TIM-4, which is expressed on antigen presenting cells (APCs) (83) (Figure 3). TIM-1-TIM-4 interactions on macrophages contribute to T-cell activation and macrophage-induced autoimmune nephritis (19,81,92). In the direct pathway, TIM-1 expressed on activated T cells cross-links with TIM-4 and directly activates macrophages. TIM-4 is not expressed on T cells, but it is expressed on APCs, especially mature lymphoid DCs (92). TIM-4 binds to activated T cells expressing TIM-1, and TIM-1 binds to DCs expressing TIM-4; all of the fusion protein binding is mediated by TIM. It was demonstrated that this interaction could be specifically blocked by RMT1-10, a monoclonal antibody specific to TIM-1 (92). As a low-affinity antibody, RMT1-10 was shown to inhibit both Th1 and Th17 responses without a significant inhibitory effect on Th2 responses, Tregs, or Bregs; treatment with RMT1-10 remained effective when administered after the development of autoimmunity and the progression of renal damage, suggesting that manipulation of TIM-1 may have potential therapeutic applications for LN. We have examined the low-affinity antibody RMT1-10 in experimental studies (73,74). In a murine lupus model, treatment with RMT1-10 attenuated the progression of lupus nephritis by prolonging survival and affecting a range of important mediators (73). The renal manifestations of the systemic autoimmune disease SLE are characterized by the expression of autoantibodies in response to nuclear antigens, and they are associated with immune injury and a local inflammatory tissue response (93). Reduced autoantibody production is associated with a reduced recruitment of glomerular macrophages and reduced depositions of glomerular IgG and complement brought about by RMT1-10. The serum anti-DNA antibody of the IgG2a subclass, whose class switching is known to depend on Th1 cytokines, was significantly reduced in RMT1-10-treated mice (94); circulating anti-DNA antibodies of IgG3 were associated with glomerulonephritis in MRL-Faslpr mice (73). These results suggest that an anti-TIM-1 antibody may affect not only the cytokine response, but also the ability to produce antibodies and immunoglobulins in LN. CONCLUSION The mechanisms of the JAK-STAT and TIM-1/TIM-4 signaling pathways in controlling the inflammatory network in LN have been briefly explained herein. 
The JAK-STAT pathway sends signals from extracellular ligands such as cytokines, chemokines, growth factors, and hormones directly to the cell nucleus. Because the JAK-STAT pathway plays a major activating role in a variety of disease processes, extensive efforts have been made to develop specific inhibitors of this pathway. The JAK-STAT pathway and its components have been used in immunology for the regulation of various diseases, and this pathway is a good candidate for targeted interventions. The increasing evidence of JAK-STAT activation in the pathogenesis of renal injury establishes a new set of targets for potential interventions in this disease. The TIM-1/TIM-4 pathway contributes to the production of pro-inflammatory cytokines and triggers T-cell activation and macrophage activation. In the direct pathway, TIM-1 on activated T cells cross-links with TIM-4 and directly activates macrophages. In the indirect pathway, TIM-1 on activated T cells triggers IFN-γ production, resulting in the activation of macrophages. TIM-1 plays an important role in the development of systemic autoimmunity and its effects on end organs. The low-affinity antibody RMT1-10 inhibits both Th1 and Th17 responses without having a significant inhibitory effect on Th2 responses, Tregs, or Bregs. As a result, TIM-1 appears to have potential as a therapeutic target for LN. AUTHOR CONTRIBUTIONS The author confirms being the sole contributor of this work and has approved it for publication.
6,088.6
2020-11-06T00:00:00.000
[ "Medicine", "Biology" ]
An experimental model for the study of craniofacial deformities 1 Modelo experimental para o estudo de deformidades craniofaciais Purpose: To develop an experimental surgical model in rats for the study of craniofacial abnormalities. Methods: Full thickness calvarial defects with 10x10-mm and 5x8-mm dimensions were created in 40 male NIS Wistar rats, body weight ranging from 320 to 420 g. The animals were equally divided into two groups. The periosteum was removed and the dura mater was left intact. Animals were killed at 8 and 16 weeks postoperatively and cranial tissue samples were taken from the defects for histological analysis. Results: Cranial defects remained open even after 16 weeks postoperatively. Conclusion: The experimental model with 5x8-mm defects in the parietal region, with the removal of the periosteum and maintenance of the integrity of the dura mater, is critical and might be used for the study of cranial bone defects in craniofacial abnormalities. Introduction Cranial bone defects caused by severe trauma, infection, neoplasms, surgery or congenital deformity continue to represent a major challenge to plastic reconstructive surgery 1. Large cranial defects are associated with high morbidity and mortality in the neonatal period, and therefore their treatment is important not only to improve aesthetic appearance, but mostly to reestablish the rigid protection of the underlying brain 2,3. Bone autograft transplantation is still the most used method for the reconstruction of cranial bone loss to date. However, this method is inadequate for critical size defects 4, which are characterized by remaining open during the lifetime of the animal with no spontaneous bone regeneration. Although a wide variety of bone grafts and implants have been evaluated in various species for experimental animal models, there is little consistency among investigators regarding the choice of an appropriate animal model 5. Critical size bone defects represent attractive models to study bone healing because, by definition, they do not heal without intervention 5. The diameter of critical size defects varies greatly among animals of the same species across different studies 6. The aim of this study is to determine the ideal critical size bone defect in rats using specific surgical techniques. Methods Four-month-old male NIS Wistar rats (Rattus norvegicus albinus), body weight ranging from 320 to 420 g, were used for this study. The Animal Research Ethics Committee at the University of Sao Paulo approved the experimental protocol. The animals were individually kept in separate cages in a ventilated stand, under standardized air and light conditions at a constant temperature of 22 °C with a 12-hour light/dark cycle. They had free access to tap drinking water and standard laboratory food pellets. The animals were anesthetized with an intraperitoneal injection (0.3 mL/100 g of body weight) of a combination of ketamine hydrochloride (5%) and xylazine (2%). The dorsal region of the cranium was shaved, and the head of the rats was positioned in a cephalostat during the operative procedure (Figure 1) and aseptically prepared for surgery. 
FIGURE 1 - Animals positioned in the cephalostat. A 20-mm midline skin incision was performed from the nasofrontal area to the external occipital protuberance. The skin and underlying tissues, including the temporalis muscles, were reflected laterally to expose the full extent of the calvaria. The periosteum surrounding the defect was removed to prevent periosteal osteogenesis (Figure 2). The animals were randomly divided into two groups (20 in each group), as follows: in group 1, two 5x8-mm full thickness bone skull defects were created bilaterally in the dorsal part of the parietal bone, lateral to the sagittal suture (Figure 3), whereas in group 2, one 10x10-mm full thickness bone defect through the sagittal suture was created (Figure 4). The cranial defect was created using a drill with a micromotor under constant irrigation with sterile saline solution to prevent bone overheating. Additionally, in order to prevent dura mater damage, an optical microscope was used during the osteotomy and the bone fragment was cautiously detached. The flap of each animal was closed with mononylon 4-0 sutures. The animals were killed 8 and 16 weeks postoperatively with inhaled CO2 and the calvaria was removed for histological analysis (Figure 5). Histological preparation The studied tissue samples were fixed in 10% formalin for 24 hours, decalcified in 5% formic acid for 48 hours, and paraffin-embedded. The 5-micron sections were stained with hematoxylin and eosin and examined under a light microscope. This analysis was qualitative and assessed the presence or absence of mineralized tissue. Results Group 2 (one 10x10-mm defect; N = 20): sagittal sinus laceration to a greater or lesser degree was observed in 9 animals during the removal of the calvarial bone. Two animals died during the first 24 postoperative hours and 4 animals had laceration of the dura mater during osteotomy. All the animals that died during the first 24 hours were immediately replaced by other animals. There was no osseous consolidation of the calvarial bone appreciable by gross and histological examination at 8 weeks postoperatively. After 16 weeks there was little bone regeneration commencing from the defect margins. None of the animals presented osseous defect closure until the end of the observation period, not even reaching 50% of the defect area (Figure 6). Group 1 (two 5x8-mm defects; N = 20): there was no evidence of sagittal sinus laceration, bleeding or death of animals either during or after surgery. A similar scenario as in group 2 was observed through gross and histological examination, where none of the animals presented osseous defect closure until the end of the observation period, not even reaching 50% of the defect area (Figure 6). No evidence of infection or wound dehiscence occurred postoperatively in either group. Discussion The calvaria was used for studying regeneration in critical size bone defects because the calvaria is an anatomic area of limited mechanical stress and relative stability of the surrounding structures, including intact calvarial margins, the underlying dura mater and the overlying temporalis and frontalis muscles, creating a protected environment in which it is possible to study the interactions between new bone constructs and in situ bone 7. Freeman et al. 8 and Turnbull et al. 9, the first to attempt to study critical size defects in rat calvaria, demonstrated that 2-mm-diameter defects created through the parietal bone failed to heal within 12 weeks. Mulliken et al. 
10 determined that 4-mm-diameter defects failed to heal over a 6-month period. Takagi et al. 11 determined that 8-mm-diameter defects created in the calvaria were reduced to 5 mm in diameter within four weeks. No further healing of the wound was observed after 12 weeks. However, these authors did not provide any surgical details for an experimental animal model. Another important characteristic in determining an experimental model is the age of the animals. Various animal models comparing the healing potential of calvarial defects in juvenile versus adult animals have been described in the literature. As early as 1939, Sutro and Jacobson reported that juvenile rats had a greater ability to heal 3-mm calvarial defects after 5 months compared with adults 12. Longaker et al. 13 demonstrated that juvenile (6-day-old) mice have a significantly greater ability to reossify calvarial defects compared with adult (60-day-old) mice after 8 weeks of healing. Therefore, spontaneous regeneration is known to be slower in adult animals than in young animals, and accordingly an experimental model focusing on the study of craniofacial deformities must be developed with adult animals. The periosteum was removed during bone defect creation because the periosteum is believed to be a key factor in the cranial bone healing of critical size defects. According to other authors 14, the periosteum remaining at the osteotomy site assumes an essential role in determining the diameter of long-bone critical size defects. The integrity of the dura mater described in this study is another important aspect that should be considered. Hobar et al. 15 emphasized the importance of the dura mater in the regeneration of critical defects in pigs and correlated this role with the age of the animal. Similarly, injury to the dura mater is believed to interfere with the stimulus for bone formation when studying the role of stem cells in bone repair, for example. That is, this injury can induce the formation of nervous tissue, since undifferentiated stem cells introduced at the site can differentiate into cells of nervous tissue, muscle, bone, cartilage, and fat. The 16-week time point was used because it had previously been demonstrated to be a sufficient length of time to allow measurable bony repair in other models of adult calvarial defect healing 15,16. It is also emphasized that the use of a cephalostat specially developed for rats and the optical microscope allowed a more precise osteotomy without adjacent lesions. The specially designed cephalostat kept the animal in a static position, as observed in human cranial surgeries, and the optical microscope allowed direct viewing during the osteotomy, preventing any lesions in the dura mater. After the development of bone defects of varying sizes to establish an experimental model for examining craniofacial deformities, a rectangular-shaped (5x8 mm) critical size defect was chosen and created in the biparietal area, resulting in no spontaneous closure of the bone defect, not even reaching 50% of the area after 16 weeks of observation. In contrast, when creating larger bone defects (10x10 mm), there was laceration to a greater or lesser degree of the sagittal sinus, causing high morbidity and exposing the animals to the risk of bleeding and infections of the central nervous system, besides involving cranial sutures, which have a known biomolecular behavior different from other regions of the craniofacial bone 17. 
This experimental model seems to be useful in evaluating the efficacy of stem cells isolated from a variety of organs and tissues for closing critical size cranial defects. In our previous work, human dental pulp stem cells and human muscle-derived stem cells were found to be capable of closing critical size rat calvarial defects using this experimental surgical protocol 18,19. Conclusion The 5x8-mm rectangular defect in the biparietal bone of adult rats, with periosteum removal and dura mater integrity, can be considered a critical cranial defect and might be used as an experimental model for the study of craniofacial deformities. FIGURE 2 - Exposure of the calvaria. FIGURE 5 - Biopsy collection for histological analysis. FIGURE 6 - Comparative histological analysis of cranial defects at 8 and 16 weeks postoperatively. A, B: 8 weeks; A', B': 16 weeks.
2,374.4
2010-06-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Non-linear Terahertz Driving of Plasma Waves in Layered Cuprates The hallmark of superconductivity is the rigidity of the quantum-mechanical phase of electrons, responsible for superfluid behavior and Meissner effect. The strength of the phase stiffness is set by the Josephson coupling, which is strongly anisotropic in layered superconducting cuprates. So far, THz light pulses have been efficiently used to achieve non-linear control of the out-of-plane Josephson plasma mode, whose frequency scale lies in the THz range. However, the high-energy in-plane plasma mode has been assumed to be insensitive to THz pumping. Here, we show that THz driving of both low-frequency and high-frequency plasma waves is possible via a general two-plasmon excitation mechanism. The anisotropy of the Josephson couplings leads to marked differences in the thermal effects among the out-of-plane and in-plane response, consistently with the experiments. Our results link the observed survival of the in-plane THz non-linear driving above $T_c$ to enhanced fluctuating effects in the phase stiffness in cuprates, paving the way to THz impulsive control of phase rigidity in unconventional superconductors. Order and rigidity are the essential ingredients of any phase transition. In a superconductor the order is connected to the amplitude of the complex order parameter, related to the opening of a gap ∆ in the single-particle excitation spectrum. The rigidity manifests instead in the quantum-mechanical phase of the electronic wave function, associated with the phase of the order parameter 1 . Twisting the phase is equivalent to an elastic deformation in a solid, meaning that its energetic cost is vanishing for sufficiently slow spatial variations. On the other hand, since phase fluctuations come along with charge fluctuations, long-range Coulomb forces push the energetic cost of a phase gradient to the plasma energy ω J 1,2 . While for ordinary superconductors this energy scale is far above the THz range, in layered cuprates the weak Josephson coupling among neighboring layers [3][4][5] pushes down the frequency of the inter-layer Josephson plasma mode (JPM) to the THz range. 6,7 The possibility to manipulate the inter-layer JPM by intense THz pulses has been theoretically discussed long ago within the context of the non-linear equation of motion [6][7][8][9][10][11] . This approached turned out to successfully capture the main features of a series of recent experiments 12,13 , even though a full quantum treatment of the JPM able to capture thermal effects across T c is still lacking. On the other hand, non-linear effects induced by strong THz pulses polarized in the planes [14][15][16] have been discussed so far only within the context of the SC amplitude (Higgs) mode, whose excitation energy ω H = 2∆, ranging from 5 to 10 THz in cuprates, appears as a better candidate than high-energy in-plane plasma waves. Nonetheless, the observed monotonic temperature dependence of the nonlinear response 15,16 , its persistence above T c 16 and its polarization dependence 14 do not easily match the expectations for the Higgs mode. The same problem holds considering lattice-modulated charge fluctuations, which are expected to dominate in the clean limit 17,18 but become less relevant [19][20][21][22] and strongly isotropic 22 when disorder is considered. Here we provide a complete theoretical description of Figure 1. 
(a) Schematic view of the mexican-hat potential for the free energy F (ψ), with ψ the complex order parameter of a superconductor below Tc. A phase-gradient excitation corresponds to a shift along the minima, while a Higgs excitation moves the system away from the minimun. An intense light pulse with almost zero momentum can excite simultaneously two plasma waves with opposite momenta (in red) or a single Higgs fluctuation (in blue). (b)-(c) Feynman-diagrams representation of the (b) plasma-waves or (c) Higgs contribution to the non-linear optical response. Here wavy lines represent the e.m. field, solid/dashed lines the plasmon/Higgs field, respectively. the JPM contribution to the non-linear response of layered cuprate superconductors, focusing both on thirdharmonic generation (THG) and pump-probe protocols. We first show that the basic mechanism behind nonlinear photonic of Josephson plasma waves is intrinsically different from the one of the Higgs mode, see Fig. 1. By pursuing the analogy with lattice vibrations in a solid, the Higgs mode is like a Raman-active optical phonon mode. It has a finite frequency at zero momentum, and its symmetry allows for a finite quadratic coupling to light [17][18][19][20][21][22][23][24][25][26] . The phase mode behaves instead like an acoustic phonon mode, pushed to the plasma energy by Coulomb interaction, carrying out a finite momentum at nonzero frequency. As such, zero-momentum light pulses can only excite simultaneously two JPMs with op-posite momenta, making this process strongly dependent on the thermal probability to populate excited states. This feature differentiates drastically the temperature dependence of the THG associated with out-of-plane or inplane JPMs, since the frequency scale of the former is comparable to T c , while it is much larger for the latter. In addition, in contrast to the Higgs mode 17,21 , for a light pulse polarized in the planes the signal coming from JPMs is in general anisotropic, since the momenta carried out by the two plasmons can be along different crystallographic axes. All these features not only contribute to the understanding of the existing experimental measurements, 12-16 but they also offer a perspective to design future experiments aimed at selectively tune non-linear photonic of Josephson plasma waves in layered cuprates. Let us first focus on the out-of-plane JPM. We take a layered model with planes stacked along z. In the SC state the Josephson coupling J ⊥ of the SC phase φ n between neighboring planes sets an effective XY model: An electric field polarized along z enters the Hamiltonian via the minimal-coupling substitution 1 θ n → θ n − (2π/Φ 0 )dA z , with θ n = φ n − φ n+1 , d interlayer distance and Φ 0 = hc/(2e). The corresponding out-of-plane current density I z = −∂H/∂(cA z ) is given by: where J c = 2eJ ⊥ / S, with S surface of each plane. The Josephson current (2) naturally admits an expansion in powers of A z to all orders: where the explicit time convolution of Eq. (3) has been omitted for compactness. Here, following the same approach used so far to investigate the Higgs response 17,18,23,24 , we rely on a quasi-equilibrium description, where the leading effect of the intense THz pump field is to trigger a third-order χ (3) response mediated by plasma waves. The quantum generalization of the model (1) has been widely discussed within several contexts 6,9,10,27,28 . 
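To make the origin of the third harmonic explicit, it may help to spell out the expansion behind Eqs. (2)-(3) for a spatially uniform gauge field. The decomposition below is a standard trigonometric identity shown only for illustration, with prefactors and signs following the conventions quoted above rather than a full rederivation.

```latex
% Schematic expansion behind Eqs. (2)-(3): a uniform gauge field enters the
% Josephson current through a = (2*pi*d/Phi_0) A_z; prefactors and signs are
% illustrative only.
\begin{align}
  I_z &= J_c \sin a \;\simeq\; J_c\Bigl(a - \tfrac{1}{6}a^{3} + \mathcal{O}(a^{5})\Bigr),
      \qquad a \equiv \frac{2\pi d}{\Phi_0}\,A_z , \\
  a^{3}(t) &= \Bigl(\frac{2\pi d A_0}{\Phi_0}\Bigr)^{3}\,
      \frac{3\sin\omega t - \sin 3\omega t}{4}
      \qquad \text{for } A_z(t) = A_0 \sin\omega t .
\end{align}
```

The linear term gives the usual London-like response, while the cubic term carries a current component oscillating at 3ω; this is the third-harmonic channel whose full kernel is obtained in the text by integrating out the plasma mode.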
Here we follow the approach of Ref.s 27,28 where long-range Coulomb interactions are introduced within a layered model appropriate for cuprates (see Ref. 29 ). The Gaussian quantum action for the phase mode at long wavelength has the usual form: where ω 2 J = c 2 /ελ 2 c = 8πedJ c / ε is the energy scale of the out-of-plane JPM, iω m = 2πmT are Matsubara frequencies and v = 2 ε/(16πe 2 ), with ε the background dielectric constant. In the classical limit only ω m = 0 is relevant and one recovers the leading term of Eq. (1), i.e. a discrete phase gradient along z, as expected for the Goldstone mode. To compute the third-order contribution in Eq. (3) we need to derive the effective action S (4) A for the gauge field up to terms of order O(A 4 z ) (see Ref. 29 ). By coupling the gauge field A z to the phase mode via the minimalcoupling substitution in Eq. (2) and by expanding the cosine term, one finds that: where dots denote additional terms not relevant for the χ (3) response. The second term in Eq. (5) can be treated as a perturbation with respect to S G 29 , so that integrating out the JPM one obtains: where I 0 is an overall constant. The vanishing of the denominator in Eq. (7) identifies the resonance of the non-linear kernel. Since the physical mechanism behind the THG is the excitation of two plasma waves, the largest I T HG in Eq. (8) occurs when the pump frequency matches the plasma frequency, i.e. ω = ω J . This has to be contrasted e.g. to the case of the THG from the Higgs mode. In this case the e.m. field excites nonlinearly a single amplitude fluctuation δ∆, via a term like A 2 δ∆. 17,18,23,24 As a consequence the non-linear kernel, identified by the dashed line in Fig. 1c, is proportional to a single Higgs fluctuation, and the THG (8) is resonant when the pump frequency matches half the mode energy, i.e. ω = ω H /2 = ∆, as observed in conventional superconductors 23,26 . The temperature dependence of the JPM non-linear kernel (7) and the corresponding THG (8) for a narrowband pulse are shown in Fig. 2a-d for different values of the pump frequency ω. Here we modelled J ⊥ (T ) and the corresponding ω J (T ) according to the out-of-plane [����] superfluid stiffness measured in Ref. 13 . As we explained, the possibility to match the resonance condition in the THG (8) depends on the relative value of the pump frequency ω with respect to ω J (T = 0) ≡ ω J,0 . In the case where ω < ω J,0 , as for ω = ω 3 in Fig. 2a, the temperature-dependence of I T HG (ω 3 ) is dominated by the maximum at the temperature where ω J (T ) = ω 3 . On the other hand, when ω ≥ ω J,0 , as it is the case for ω = ω 1 , ω 2 , the resonant excitation of the plasma mode cannot occur. However, I T HG is still non-monotonic, see Fig. 2b, due to the fact that by increasing temper- prefactor decreases, while the coth(βω J (T )/2) term increases, accounting for the thermal excitation of plasma modes. This thermal effect is particularly pronounced for the out-of-plane JPM since ω J,0 is of the same order of the critical temperature T c . The absolute value of I T HG depends also on the damping γ present in Eq. (7), which plays the same role 29 of a linear damping term in the equationsof-motion approach. In Fig. 2c,d we show the results for a temperature-dependent γ(T ) = γ 0 + r(T ), where r(T ) = r 0 e −∆/T has been taken in analogy with previous work 11 to mimics dissipative effects from normal quasiparticles. 
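As a purely illustrative companion to this discussion, the toy model below reproduces the qualitative ingredients named in the text: a resonance when the pump frequency matches ω_J(T), a thermal coth(ω_J/2T) enhancement, and the damping γ(T) = γ0 + r0 e^{−Δ/T}. The Lorentzian shape of the kernel, the (1 − (T/Tc)^2) stiffness profile, and all numerical values are assumptions of this sketch, not the paper's Eq. (7) or its fitted parameters.

```python
import numpy as np

# Illustrative toy model (NOT the paper's exact Eq. (7)): a Lorentzian-type
# two-plasmon response with a resonance at omega_pump = omega_J(T), a thermal
# coth(omega_J/2T) factor, and gamma(T) = gamma0 + r0*exp(-Delta/T) as in the text.

def omega_J(T, omega_J0=1.0, Tc=1.0):
    """Hypothetical plasma frequency tracking a stiffness that vanishes at Tc."""
    return omega_J0 * np.sqrt(np.clip(1.0 - (T / Tc) ** 2, 0.0, None))

def gamma(T, gamma0=0.05, r0=0.3, Delta=1.0):
    """Temperature-dependent damping, as quoted in the text."""
    return gamma0 + r0 * np.exp(-Delta / np.maximum(T, 1e-6))

def thg_intensity(omega_pump, T):
    """Toy THG intensity: thermal factor times a resonance at omega_pump = omega_J(T)."""
    wj = omega_J(T)
    if wj == 0.0:
        return 0.0
    thermal = 1.0 / np.tanh(wj / (2.0 * T))          # coth(omega_J / 2T), with hbar = kB = 1
    kernel = 1.0 / (wj**2 - omega_pump**2 - 1j * gamma(T) * omega_pump)
    return np.abs(thermal * kernel) ** 2

# Example: temperature scan for a pump below / at the T = 0 plasma frequency.
temps = np.linspace(0.05, 0.99, 50)
for w in (0.6, 1.0):                                  # in units of omega_J(T = 0)
    curve = [thg_intensity(w, T) for T in temps]
    print(f"pump {w}: max THG near T = {temps[int(np.argmax(curve))]:.2f} Tc")
```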
In this case the plasma resonance is progressively smeared out by increasing temperature, and for out-of-resonance conditions the THG signal looses rapidly intensity as the system is warmed up. The THG for a field polarized along z has been measured so far only by means of a broadband pump. 13 To make a closer connection with this experimental setup we then simulated (see Ref. 29 ) the THG for a short (τ = 0.85 ps) pump pulse E p (t) with central frequency Ω/2π = 0.45 THz, as shown in Fig. 2g. The frequency spectrum of the resulting non-linear current I N L z presents then a broad peak around 3Ω, as shown in Fig. 2e. The integrated spectral weight of the 3Ω peak is shown in Fig. 2f at several temperatures. Following Ref. 13 we used Ω ω J,0 , so the narrow-band response should corresponds to the case ω = ω 2 of Fig. 2d. However, the broadband spectrum of the pump pulse enhances the response at intermediate temperatures and apart from a small deep around T = 0.2T c the signal scales with the superfluid stiffness, in good agreement with the available experimental data. In the broad-band case the nature of the non-linear kernel can also be probed via a typical pump-probe experimental setup, schematically summarized in Fig. 2g. As it has been theoretically described in Ref. 18,30 for the transmission geometry, the oscillations of the differential probe field with and without the pump δE pr (t pp ) as a function of the pump-probe time delay can be directly linked to the resonant non-linear optical kernel. In the case of the out-of-plane response (7) one then obtains (see Ref. 29 ): When the pump pulse is short enough one can approximate A 2 z (t) δ(t) and Eq. (9) shows that the differential field δE pr (t pp ) oscillates at twice the JPM frequency, and not at the frequency of the mode, as it occurs for the Higgs mode observed in conventional superconductors 31 . This prediction is confirmed when a realistic pump pulse is used in Eq. (9), as shown in Fig. 2h, which reproduces very well the 2ω J oscillations reported at low-temperature in pump-probe experiments in reflection geometry 12 . Let us consider now the effects of a strong THz pulse polarized within the plane. In this case we can generalize the model (4) by taking into account both the twodimensional nature of the phase fluctuations in the plane and the anisotropy of penetration depth measured experimentally in cuprates, [3][4][5] where λ c 10−100λ ac depending on the material and the doping, and λ ac 2000 Å, so that ω J = c/ √ ελ ac is much larger than the out-of-plane one. Following again the microscopic derivation outlined e.g. in Ref. 27,28 we obtain where k = (k x , k y ) and we promoted the phase difference to a continuum gradient for the in-plane phase mode. To describe the non-linear coupling to the e.m. field we rely again on a quantum XY model, whose coupling constant is the effective in-plane stiffness J = 2 c 2 d/16πe 2 λ 2 . Even though the microscopically-derived phase-only action is not in general equivalent to the XY model 28 , for cuprates this can still represents a reasonable starting point 27 . By minimal-coupling substitution ∇φ(r) − (2π/Φ 0 )A we then obtain, in full analogy with Eq. (5), that: (11) By following the same steps as before we obtain a quartic action of the form (6), but the non-linear kernel becomes a tensor which admits two different K xx;xx and K xx;yy components (see Ref. 29 ): where K has the same structure of Eq. (7), provided that J ⊥ and ω J are replaced by J and ω J . 
The frequency and temperature dependence of K is shown in Fig. 3a. The in-plane stiffness J is taken as linearly decreasing, in analogy with experiments 3-5 . Since ω J (T = 0) ≡ ω J,0 is of the order of the eV, we only considered the case of THz pump frequencies ω i < ω J,0 . As one can see, when ω i is a fraction of ω J,0 the resonance condition ω i = ω J (T ) is still attained at temperatures where the kernel is large enough to give rise to a pronounced maximum in the THG intensity. However, when ω i ω J,0 the resonance is only attained near to T c where the prefactor has already washed out the two-plasmon resonace, and the THG scales with the superfluid stiffness. This can be easily seen from Eq. (7), since by putting ω 0 in the denominator, and considering that coth(βω J ) 1 at all relevant temperatures, from ω J ∝ J one finds The scaling of the THG intensity in the THz regime with J has several consequences. First, I T HG monotonically increases below T c , in striking contrast with the pronounced maximum one would expect for a resonance at ω i = 2∆(T ), due to the Higgs 23,24 or charge fluctuations 17,18 . Second, the superfluid stiffness appearing in the THG response is the one measured at THz frequencies. As such, due to pronounced fluctuations effects at this frequency scale, it vanishes in cuprates well above T c 16,32,33 . The I T HG at pump frequencies significantly smaller than ω J,0 closely follows the same behavior, as we exemplify in Fig. 3c where we report a simulation of the superfluid stiffness with a fluctuation tail above T c . Interestingly, both the monotonic suppression 15 and the persistence of non-linear effects above T c 15, 16 have been recently reported in THG and THz Kerr measurements in cuprate superconductors. Finally, due to the tensor structure of the in-plane kernel (12), the non-linear current associated with JPM for a pump field with a polarization angle θ with respect to the x crystallographic direction scales with: where K A1g/B1g = (K xx;xx ± K xx;yy )/2. The resulting I T HG (θ) ∝ |K(θ)| 2 is shown in Fig. 3d. According to Eq. (12), for JPM is K A1g /K B1g = 2. This result is rather different from the theoretical expectations for other collective modes. Indeed, in a single-band model, as appropriate for cuprates, the Higgs signal has only a A 1g component 17,24 . The density-fluctuations response has a largely dominant B 1g symmetry in the clean case 17 , but it becomes predominantly isotropic in the presence of even a weak disorder 22 . As a consequence, the recent observation 14 of a sizeable B 1g component at optimal doping in Bi2212 compounds cannot be simply ascribed to these collective excitations. On the other hand, it is worth noting that the ratio K A1g /K B1g = 2 for JPMs only holds within the phenomenological approach based on the quantum XY model, where the tensor admits the structure (12). Indeed, within a microscopically-derived phase-only model the interacting terms in the phase can differ from the one obtained within the XY model, as discussed for the clean case in Ref. 28 . On this view, while one expects in general an anisotropy of the non-linear JPM response, the exact value of the K A1g /K B1g ratio has to be determines within a microscopic approach. Our work establishes the theoretical framework to manipulate and detect JPMs in layered cuprates across the superconducting phase transition. The basic underlying mechanism relies on the excitation of two plasma waves with opposite momenta by an intense field. 
For the out-of-plane response, we support the well-established approach based on non-linear sine-Gordon equations, 6,9,10,12,13 adding a complete description of thermal effects and highlighting the possibility to tune the resonant excitation of JPMs by changing the temperature. For the in-plane response we suggest the possible relevance of JPMs to explain several puzzling aspects emerging in recent measurements in different families of cuprates. [14][15][16] An open question remains a quantitative estimate of the signal coming from the JPMs, as compared to the one due to Higgs or charge-modulated density fluctuations. Indeed, as the recent theoretical work done in the context of conventional superconductors demonstrated [19][20][21][22] , even weak disorder becomes crucial to make such a quantitative estimate, and to establish the polarization dependence of the response 22 . Here we notice that the large value of the in-plane plasma frequency comes along with a large value for the in-plane stiffness J , which controls the non-linear coupling of the JPM to the e.m. field. This suggest that especially near optimal doping, where J attains its maximum value, a two-plasmon THG signal can be comparable to other effects. On this perspective, the theoretical and experimental investigation of non-linear phenomena induced by intense THz pulses represents a privileged knob to probe relative strength of pairing and phase degrees of freedom in unconventional superconducting cuprates. The derivation of the quantum action for the phase degrees of freedom can be done following a rather standard approach, see e.g. Ref.s 1,27,28 and references therein. The basic formalism relies on the quantum-action representation of a microscopic superconducting model in the presence of long-range Coulomb interactions. The collective variables corresponding to the amplitude, phase and density degrees of freedom are introduced via an Hubbard-Stratonovich decoupling of the interacting superconducting and Coulomb term. This allows one to integrate out explicitly the fermionic degrees of freedom in order to obtain a quantum action in the collective-variables only, whose coefficients are expressed in terms of fermionic susceptibilities, computed on the SC ground state. The result for the Gaussian phase-only action in the isotropic three-dimensional case reads: (S1) Here D s = 2 c 2 /4πe 2 λ 2 andχ ρρ is the density-density susceptibility dressed at RPA level by the Coulomb interaction where χ 0 ρρ represents the bare charge susceptibility, which reduces in the static limit to the compressibility of the electron gas, i.e. χ 0 ρρ (ω m = 0, q → 0) ≡ κ. The nature of the Goldstone phase mode is dictated by the form of the charge susceptibility. For the neutral system Coulomb interactions are absent andχ ρρ in Eq. (S1) can be replaced by the bare one χ 0 ρρ . Thus, in the long-wavelength limit the pole of the Gaussian phase propagator defines, after analytical continuation to real frequencies iω m → ω + iδ, a sound-like Goldstone mode: ω 2 = (D s /κ)q 2 . On the other hand, in the presence of Coulomb interaction the long-wavelength limit of the charge compressibility (S2) scales as χ ρρ → 1/V (q). In the usual isotropic three-dimensional case V (q) = 4πe 2 /q 2 , where ε is the background dielectric constant, and one easily recovers from Eq. (S1) that where ω 2 P ≡ 4πe 2 D s / 2 ε = c 2 /λ 2 ε coincides with the usual 3D plasma frequency. 
In the case of cuprates one should start from a layered model where the in-plane and out-of-plane superfluid densities are anisotropic, so that the D s q 2 term in Eq. (S1) is replaced by (4D ⊥ /d 2 ) sin 2 (k z d/2) + D k 2 , with D ⊥/ = 2 c 2 /4πe 2 λ 2 c/ac . In addition, one can also introduce an anisotropic expression for the Coulomb interaction, to account for the discretization along the z direction 27 . Following e.g. the derivation of Ref. 27 one then recovers in the long-wavelength limit the two expressions (4) and (10) in the main text. An alternative but equivalent approach is instead the one followed e.g. in Ref.s 6,9,10,12,13 , where one deals with the equations of motion for the plasmon, coupled to the electromagnetic fields. The connection between the two approaches has been derived in details in Ref. 9,10 . Once more, the authors start from a microscopic SC layered model, and integrate out the fermionic degrees of freedom in order to build up an effective action for the phase field. The effective quantum action then reads: where r i is the two-dimensional in-plane coordinate running over each SC layer and δ α is the in-plane versor along the direction α = x, y. In Eq. (S4) the quantum term accounts for the capacitive coupling C 0 = s/4πR 2 D between the planes, s and R D being, respectively, the layer thickness and the Debye length. By retaining leading orders in φ in the cosine terms the Gaussian action of Eq. (S4) describes a sound mode, in full analogy with Eq. (S1) in the absence of RPA resummation of the density response. Indeed, as emphasized in the main text, the presence of long-range interactions is crucial in order to lift the sound mode to a plasmon. In Ref. 9,10 this is achieved by adding explicitly the electric and magnetic fields, and the corresponding scalar and vector potentials. To describe the out-of-plane JPM one needs an electric fieldẼ z n,n+1 polarized perpendicularly to the planes. The magnetic field will then lie in the plane, and we can take without loss of generalityB y n,n+1 along the y in-plane direction. Hence the Gaussian action becomes: where ∆ x φ n ≡ φ n,ri (τ ) − φ n,ri+x , a and D = d + s d are, respectively, the in-plane and out-of-plane lattice spacings. By means of the Maxwell equations one can replaceẼ z n,n+1 = (Ṽ n −Ṽ n+1 )/D andB y n,n+1 = (à x n+1 −à x n )/D into Eq. (S5). The explicit integration of the e.m. potentials then leads: where α = C/C 0 and describes the full dispersion of the plasma mode as a function of q = (k z , k), with k laying in the in-plane propagation direction. The pole equation for the Gaussian phase mode, i.e. ω 2 = ω 2 P (q), is completely equivalent to the solution of the linearized sine-Gordon equation for Josephson plasma waves previously addressed in the literature 6,9,10,12,13 . In cuprates the constant α is usually very small, so the main dispersion of the plasmon comes from the last term of Eq. (S7), which accounts for the inductive coupling between planes. In this approximation, the Gaussian phase fluctuations identify a collective mode whose energy dispersion is obtained as the pole of the Guassian propagator for phase fluctuations: . (S8) By analytical continuation iω m → ω + iδ in Eq. (S8) we then get The relation (S9) is the same that one obtains by using the equation of motion approach discussed in Ref.s 6,9,10,12,13 . In this case, one introduces directly the variable θ n ≡ φ n − φ n+1 which represents the phase difference between nearest-neighbour layers. 
It is then shown to satisfy the equation of motion 6 where ∂ 2 n f n ≡ f n+1 +f n−1 −2f n is the second-order discrete differential operator along the z direction, and analogously ∂ 2 x for the x direction. As one can easily check, when sin θ n ≈ θ n Eq. (S10) admits a wave solution θ n (x = mξ 0 , t) ∝ exp[i(kx + k z nd − ωt)] where the frequency ω and the momentum q = (k, k z ) satisfy Eq. (S9). In the approach of Ref.s 6,9,10,12,13 , based on the study of the equation of motions, the electromagnetic field is completely eliminated and the non-linear effects are included by retaining the full sin θ n term in the sine-Gordon model (S10). In this case, a real q solution for a propagating waves is only possible if one retains the full momentum dispersion in Eq. (S7). In contrast, in our approach the plasma mode is first computed at Gaussian level, and then non-linear effects originate by retaining in Eq. (S5) the full cosine term appearing in Eq. (S4), which is responsible for non-linear coupling to the gauge potential. The phase mode is then integrated out in order to obtain the complete electromagnetic response, as required to describe non-linear effects in the currents, see Eq. (3) and (6) in the main text. This is indeed the same Let us consider now the case of an in-plane polarized external e.m. field. Following the same scheme adopted for the out-of-plane case we find that the in-plane fourth-order effective action is:   is the polarization-dependent tensor, whose components read: Hence the tensor components of the in-plane non-linear optical kernel are those enlisted in Eq. (12) of the main text. As a final step, let us show how the additional σ reg |ω m | term of Eq. (S12) can be added to the non-linear kernel in order to account for the effect of dissipation. In this case, the calculation is done by introducing a finite spectral function to the phase mode A(z) ≡ zσ (z 2 −ω 2 J ) 2 +z 2 σ 2 reg . One then finds that, in general, the kernel becomes: where b(z) = 1 e βz −1 is the Bose function. If σ << ω J it can be shown that Eq. (S25) can be approximated, after analytical continuation, as: Eq. (S26) is the expression used, indeed, to compute all the quantities of interest in the main text. In analogy with Ref. 6 we also assumed that where r(T ) = r 0 e −∆/T 6 , and γ 0 is a small regularization constant, which prevents the non-linear optical kernel K (diss) to be ill-defined at T = 0. Both γ 0 and r 0 parameters are fixed by looking at the number of time-resolved oscillations observed experimentally in the pump-probe set up of Ref. 12 at low temperatures. To better reproduce the experimental findings, in Fig. 2 of the main text we fixed γ 0 /2π = 0.08 THz, while r 0 = 0.3ω J,0 in panels c,d and r 0 = 0.6ω J,0 in panels e-h. There ω J,0 /2π = 0.47 THz is the out-of-plane plasma frequency at T = 0. In Fig. 3, instead, we set γ 0 = 0.1ω J,0 , where now ω J,0 /2π = 240 THz is the T = 0 value of the in-plane plasma frequency. S3. MODELLING OF THE BROAD-BAND PUMP PULSE For a narrow-band multicycle pulse one can assume a monochromatic incident field, and the THG is simply related to the non-linear optical kernel via Eq. (8). However, for a broad-band pulse with central frequency Ω, the THG is more generally associated with the 3Ω component in the nonlinear current 17,18 : as shown e.g. in Fig. 2e in the main text (with i, j = z and K ij = K ⊥ ) at different temperatures. 
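A minimal numerical sketch of this broadband procedure is given below, using the pump profile and the out-of-plane parameters quoted in the text (τ = 0.85 ps, Ω/2π = 0.45 THz, ω_J,0/2π = 0.47 THz, γ0/2π = 0.08 THz). The Lorentzian kernel resonating at twice the plasma frequency is a placeholder assumption standing in for the actual non-linear kernel, so the numbers it produces are illustrative only.

```python
import numpy as np

# Sketch of the broadband THG estimate: build the Gaussian-enveloped pump
# A_z(t) = A0*exp(-t^2/tau^2)*sin(2*pi*Omega*t) quoted in the supplement, form the
# third-order current as A_z(t) times the kernel response to A_z^2(t), and read off
# its spectral weight around 3*Omega. The kernel below is a placeholder assumption.

tau, Omega, A0 = 0.85, 0.45, 1.0          # ps, THz, arb. units (values quoted in the text)
dt = 0.01                                  # ps
t = np.arange(-20.0, 20.0, dt)

A = A0 * np.exp(-(t / tau) ** 2) * np.sin(2 * np.pi * Omega * t)
A2_w = np.fft.rfft(A ** 2)
freqs = np.fft.rfftfreq(t.size, d=dt)      # THz

f_J, gamma_ = 0.47, 0.08                   # THz; out-of-plane values quoted in the text
K_w = 1.0 / ((2 * f_J) ** 2 - freqs ** 2 - 1j * gamma_ * freqs)   # placeholder kernel

kernel_response = np.fft.irfft(K_w * A2_w, n=t.size)   # ~ int K(t-t') A_z^2(t') dt'
I_nl = A * kernel_response                 # third-order current ~ A_z * (K conv A_z^2)
I_nl_w = np.fft.rfft(I_nl)

window = (freqs > 2.5 * Omega) & (freqs < 3.5 * Omega)
df = freqs[1] - freqs[0]
thg_weight = (np.abs(I_nl_w[window]) ** 2).sum() * df
print(f"integrated 3*Omega spectral weight: {thg_weight:.3e}")
```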
Here, A z (ω) is given by the Fourier transform of A z (t) = A 0 e −t 2 /τ 2 sin (2πΩt), while A 2 z (ω) is defined as the Fourier transform of A 2 z (t). The τ = 0.85 ps and Ω/2π = 0.45 THz parameters are set in such a way that the e.m. field E z (t) ∝ −∂A z (t)/∂t well reproduces the experimental pulse profile of Ref. 13 . S4. PUMP-PROBE CONFIGURATION In a pump-probe experiment designed to excite the out-of-plane JPM both the pump and probe fields are polarized along z, i.e. E z = E probe (t) + E pump (t). Here we will refer for simplicity to the transmission configuration, as discussed in Ref. 18,30 , where one measures the variation δE probe (t) of the transmitted probe field with and without the pump, so that terms not explicitly depending on the pump field cancel out. This allows one to express it as δE probe (t) ∝ dt A probe z (t)K(t − t )(A pump z ) 2 (t ). By considering a fixed t g acquisition time and implementing the time-delay t pp between the pump and the probe, δE probe (t g ; t pp ) becomes a function of t pp only, as given by the first line of Eq. (9). Finally, by computing from Eq. (7) the non-linear kernel in time domain, i.e. K(t) = dω 2π K(ω)e −iωt = F (T )e −γt sin(2ω J t), we derive the last line of Eq. (9). For the reflection geometry used in Ref. 12 the basic mechanism is the same, so that one expects that the differential reflectivity signal scales with the convolution of the non-linear kernel times the pump field squared given in Eq. (9). For the calculation of Fig. 2h in the main text we used the simulation of the broad-band pump field explained above. For the in-plane response measured in Ref. 16 , the huge frequency mismatch between the spectral components of the gauge field and 2ω J implies that only the term with t = t pp survives in the integral (9). As a consequences the
7,366.8
2020-06-03T00:00:00.000
[ "Physics" ]
High-Capacity Embedding Method Based on Double-Layer Octagon-Shaped Shell Matrix: Data hiding is a technique that embeds a secret message into a cover medium and transfers the hidden information in the secret message to the recipient. In the past, several data hiding methods based on magic matrices have used various geometrical shapes to transmit secret data. The embedding capacity achieved in these methods was often limited due to their simple geometrical layouts. This paper proposes a data hiding scheme based on a double-layer octagon-shaped shell matrix. Compared to previous octagon-shaped data hiding methods, the proposed method embeds a total of 7 bits in each pixel pair, reaching an embedding capacity of 3.5 bits per pixel (bpp). Experimental results show that the proposed scheme has a higher embedding capacity than other irreversible data hiding schemes. Using the proposed method, it is possible to maintain the Peak Signal to Noise Ratio (PSNR) within an acceptable range with an embedding time of less than 2 s. Introduction The data hiding technique, also called steganography, embeds a secret message into a cover medium and transfers the information hidden in the secret message to the recipient. The term "steganos" in Greek means "hidden," and "graphos," "to write." The earliest allusion to secret writing in the West with concrete evidence of intent appears in Homer's Iliad [1][2][3][4]. From ancient times to the present, steganography has included invisible ink, microdots, character arrangement, digital signatures, covert channels, spread spectrum communication, etc., and ensures that the information is less likely to be noticed and harder to retrieve when the message is transmitted. This makes the information transfer more secure and helps preserve its integrity. The main characteristics of data hiding techniques include security, imperceptibility, embedding capacity, and integrity. Because it is very important to know how much secret data can be hidden in a cover image, embedding capacity is an important factor in evaluating the quality of a data hiding technique. The larger the hiding capacity, the more secret data the transmitting side can send. Also, when the embedding capacity is large, the chance of the transmission being intercepted by a malicious attacker becomes smaller and the security becomes higher. In the past, several data hiding methods have been proposed, for example, the least significant bit (LSB) substitution method [5] proposed by Chang and Cheng in 2004. The limitation of this method was that the hidden information was easy to detect, resulting in poor security. In 2006, Zhang and Wang proposed the exploiting modification direction (EMD) [6] method. In 2010, Lee and Chen designed a modulus function [7] to embed secret data using a mapping process between the variant Cartesian product and each pixel group. The experimental results showed that Lee and Chen's scheme achieved a high embedding capacity and low distortion. In 2017, Lee et al. proposed an REMD [8] method using image interpolation and edge detection that achieved an embedding capacity of 3.01 bpp with an average image quality of 33 dB. In the field of irreversible information hiding, the EMD method was epoch-making: it uses modular functions to keep the computational cost of hiding information in the carrier low, and it has inspired a series of novel EMD-based methods.
Chang et al., in 2008, proposed a method using a reference matrix based on Sudoku, which made use of a pixel pair as coordinates of a Sudoku matrix to specify the value to embed a 9-ary secret digit into each pixel pair [9]. Chang et al., in 2014, proposed a turtle shellbased scheme (also called TS scheme) [10] for data hiding, in which a reference matrix was constructed based on a hexagon-shaped shell to embed three secret bits into each pixel pair of the cover image. Kurup et al. (2015) proposed a data hiding method based on octagon-shaped shells [11]. In 2017, Leng and Tseng introduced the reference matrix based on regular octagon-shape shells [12]. The further improvement in [13] achieved a higher payload of 3.5 bpp. In 2018, Lee and Wang proposed a magic signet hiding method [14], which randomly generated non-repetitive values of 0-15 in the signet to fill the reference matrix. Recently, in 2019, Zhang et al. proposed an efficient and adaptive data-hiding scheme based on a secure reference matrix [15]. The method was secure because of its large number of possible solutions (about 10 78 solutions) for resisting attacks. In 2019, Chang and Liu proposed two enhanced real time turtle shell-based data hiding schemes [16]. Both the schemes mapped each cover pixel pair onto the original or altered the turtle shell matrix to find out its associate set for embedding secret data. Then, the cover pixel pair was modified with minimum distortion according to the associate set. In 2020, Nguyen et al. proposed a new data hiding approach to embed secret data based on an x-crossshaped reference-affected matrix [17]. The reference matrix consisted of three parts: petal matrix, calyx matrix, and stamen matrix, which were combined for executing the embedding procedure. Using this method, it was found that the smooth regions were more suitable for embedding secret data due to the smaller difference between pixel values. Unlike the traditional EMD method, the above-mentioned methods used a reference matrix instead of the extraction function in the embedding and extraction procedures. All the magic matrices-based schemes mentioned above embedded a single layer of secret data in the reference matrix. First, a pair of cover pixels were mapped on the x-and y-axes coordinates of the reference matrix, and then the coordinate positions were replaced with the secret data. However, this limited the embedding capacity. In order to overcome the limitation, this paper proposes a new scheme based on double-layer octagon-shaped shell matrix in which each cover pixel pair is able to carry 7-bit sub-streams of secret data. This paper contributes to the related data hiding algorithms as follows. (1) The regular octagon-shaped shell method proposed in [12] carried only 5-bit of secret data for each pair of cover pixels. However, in our proposed scheme, additional 2bits data was embedded in each cover pixel pair, leading to a higher embedding capacity of 3.5 bpp. (2) Peak signal to noise ratio (PSNR) and structural index similarity (SSIM) are two measuring tools that are widely used in image quality assessment. Especially in the steganography image, these two measuring instruments are used to measure the quality of imperceptibility. Based on the experimental analysis, results show that the average value of the stego-image quality had an acceptable value of 37dB on an average while SSIM values were between [0.9, 0.95]. We contend that our proposed method is more suitable for complex images to obtain a higher SSIM value. 
Also, as seen previously, the PSNR values remained stable for all the test images irrespective of their image texture. (3) Lastly, the computational cost in terms of embedding time is 1.82 s on average. Related Works In 2006, Zhang and Wang proposed an epoch-making information hiding method, called the exploiting modification direction (EMD) method [6], in which a reference matrix M of size 256 × 256 was constructed based on a cross-shaped shell to embed secret data into each pixel pair of the cover image. In 2014, Chang et al. developed a data hiding scheme based on turtle-shaped shells [10], and Leng [12] later designed regular octagon-shaped shells for hiding secret data. The EMD Embedding Method and the EMD Extensions The exploiting modification direction (EMD) method [6] divides the to-be-hidden binary data into N pieces of L bits, and each secret piece is represented as D digits in a (2n + 1)-ary notational system, where D = ⌈L / log2(2n + 1)⌉ (Equation (1)) and n is a parameter that determines how many cover pixels are used to hide one secret digit. In the embedding phase, the EMD method first uses a pseudo-random generator to permute all pixels of the cover image according to a secret key. After that, the EMD method partitions the permuted pixels into a series of groups. Each group is denoted as a vector Pn = (p1, p2, …, pn), which consists of n cover pixels, and a weight vector Wn = (w1, w2, …, wn) = (1, 2, …, n) is defined. The EMD method then defines an embedding function f as a weighted sum modulo (2n + 1) for each group, so that a secret digit d can be carried by the n cover pixels while at most one pixel is increased or decreased by one. f can be expressed as Equation (2): f(p1, p2, …, pn) = (p1·w1 + p2·w2 + … + pn·wn) mod (2n + 1). After embedding a secret digit d, the group Pn is modified into Qn = (q1, q2, …, qn), which satisfies f(q1, q2, …, qn) = d and qi = pi for all but at most one index i, for which |qi − pi| = 1 (i = 1, 2, …, n). From the above properties, the EMD method modifies at most one pixel value per group, and only by one gray level; that is why the distortion induced in the stego-image by the EMD embedding scheme is small. In the extracting phase, the secret digit can be extracted from the stego-group Qn = (q1, q2, …, qn) by the extraction function of Equation (3): d = (q1·w1 + q2·w2 + … + qn·wn) mod (2n + 1). The EMD method provides a good stego-image quality, with an average PSNR value of more than 51 dB, and its theoretical maximal embedding rate R = log2(2n + 1) / n is 1.16 bpp for the best case n = 2. For n = 2, the PSNR values of all test images averaged 52.11 dB with an embedding capacity of 1 bit per pixel (bpp). To further improve the EMD scheme, several magic matrix-based (MMB) schemes have been proposed in the past few years. In 2010, Lee and Chen designed a modulus function [7] to embed secret data using a mapping process between the variant Cartesian product and each pixel group. The experimental results showed that Lee and Chen's scheme achieved a high embedding capacity and low distortion: the average PSNR was 51.157 dB at an embedding rate of 1 bit per grayscale pixel and 31.847 dB at 4 bpp, with good security and no overflow/underflow. In 2017, Lee et al. proposed an REMD [8] method using image interpolation and edge detection that achieved an embedding capacity of 3.01 bpp with an average image quality of 33 dB.
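Before moving on to the turtle shell scheme, a minimal sketch of the EMD embedding and extraction functions just described (Eqs. (2)-(3)) may be helpful. This is an illustrative Python implementation under the stated weights wi = i; boundary handling for pixel values 0 and 255 is omitted for brevity and is not part of the original description.

```python
# Minimal sketch of the EMD embedding/extraction functions (Eqs. (2)-(3) above)
# for a group of n cover pixels; overflow/underflow handling is omitted.

def emd_extract(pixels, n=None):
    """Extraction function f: weighted sum modulo (2n+1)."""
    n = n or len(pixels)
    return sum((i + 1) * p for i, p in enumerate(pixels)) % (2 * n + 1)

def emd_embed(pixels, digit):
    """Embed one (2n+1)-ary digit by changing at most one pixel by +/-1."""
    n = len(pixels)
    m = 2 * n + 1
    s = (digit - emd_extract(pixels, n)) % m
    out = list(pixels)
    if s == 0:
        return out                      # digit already carried, no change needed
    if s <= n:
        out[s - 1] += 1                 # increase the pixel with weight s
    else:
        out[m - s - 1] -= 1             # decrease the pixel with weight (2n+1-s)
    return out

# Example with n = 2 (a pixel pair, 5-ary digits):
cover = (100, 200)
stego = emd_embed(cover, 3)
assert emd_extract(stego) == 3
print(cover, "->", stego)
```

Data Hiding Scheme Based on Turtle-Shaped Shells In 2014, Chang et al. developed a data hiding scheme based on turtle-shaped shells, also called the TS scheme [10].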
In their method, the secret data was represented in a binary format, 3 bits of which were embedded within every 2 pixels. Figure 1 presents a magic matrix M of size 256 256 based on turtle-shaped shells. In the matrix M, two adjacent elements in the same row have a difference of 1, and two neighboring elements present in the same column have an alternating difference of 2 and 3. Let m( , ) and m( , ) be two numbers in the magic matrix M, where ( , ) and ( , ) represent the pairs of cover pixels and stego pixels, respectively. This is before/after a 3-bit secret data has been concealed. If the number m( , ) falls in a turtle-shaped shell, then the secret number also can be found in the same turtle-shaped shell, such that , . However, if , is a number on the edge, then the number can be found in the surrounding three turtle shells. A calculation needs to be done to find the minimum distance between ( , ) and ( , ) under the condition that , . Another special case occurs when the number , is not located in any turtle-shaped shell. The solution, as shown in Figure 2, is to find the shortest distance between ( , ) and ( , ) so that the number , equals the secret data . The turtle shell-based method can obtain an average PSNR of 49.4 dB and an average embedding capacity of 1.5 bpp. The limitation of the turtle shell scheme [10] was that the embedding rate was limited. Kurup et al. (2015) proposed a data hiding method based on octagon-shaped shells [11], which also used a reference matrix. On average, the PSNR value obtained was 51.7 dB with an average embedding capacity of 2 bpp. In 2017, Leng and Tseng introduced the reference matrix based on regular octagon-shape shells [12] which achieved a payload of 2.5 bpp. The further improvement in [13] achieved the possible payload of 3.5 bpp. The Regular Octagon-Shaped Shell Embedding Method As shown in Figure 2, Leng [12] constructed a reference matrix M in the size of 256 × 256 based on regular octagon-shaped shells for hiding secret data. The construction rules were as follows: In the same row of the reference matrix M, the value difference between two adjacent elements was set to "1", and the value difference between two adjacent elements in the same column was set to "5", followed by "6", "6", "6", and "5" in repeated cycles. The reference matrix M composed of a number of contiguous regular octagonshaped shells. Each regular octagon-shaped shell had a total of 32 numbers, ranging from 0-31. Once the transmission side constructed a reference matrix, the secret message was embedded into the cover image to obtain stego-image. A pair of pixels represented the values of x-and y-axes of the reference matrix coordinates, where each pixel pair corresponded to a 32-ary notational system value. The data embedding process is presented as follows. In the reference matrix , , assume that a secret binary stream S is embedded into a cover image I of size . The embedding procedure is described as follows: Step 1: Convert the secret binary stream S into a sequence of 5 bits sub-streams s , s , … , s , where n represents the number of 5-bit 32-ary digits. Step 2: Divide the cover image I into non-overlapping pixel pairs ( , ),  1, 3, … , 1 . ( , ) are considered as the coordinates of the reference matrix M to specify the value of m( , ). Step 3: Embed a 5-bit secret digit into each pixel pair ( , ) to obtain a corresponding pixel pair ( , ). Thereafter, the algorithm can be categorized into two cases. 
Case 2: If m( , ) , it means that the current pixel pair cannot correspond to the 5-bit secret digit, viz., m( , ) ≠ . Find the closest , , which is equal to , and then replace , with , in the cover image to hide the 5-bit secret digit such that ( , ) = , . Find the closest , based on the following rules: is an internal number, e.g., 3, 3 , find , = within the octagon. Step 4: Repeat Step 3 to embed the next 5-bit sub-stream into subsequent pixel pairs until the entire secret data S is hidden. Finally, obtain the stego-image I'. In the process of data extraction, the receiving side constructed a reference matrix , by using the information composed in the matrix. According to the stego-image I' and the reference matrix, the hidden pixel pairs were mapped to the coordinate positions of the reference matrix. In this way, the value of the coordinates of the reference matrix were extracted, which were the actual values of the secret data. Moreover, the data hiding based on octagon-shaped shell scheme greatly increases the embedding capacity under the acceptable image quality. Proposed Scheme This paper proposes a new scheme based on double-layer octagon-shaped shell matrix in which each cover pixel pair is able to carry 7-bit sub-streams of secret data, achieving an embedding capacity of 3.5 bpp. The following subsections present the construction of the double-layer octagon-shaped shell magic matrix, the embedding procedure, and the extraction procedure. Construction of the Double-Layer Octagon-Shaped Shell Reference Matrix In this section, we will construct a double-layer octagon-shaped shell reference matrix consisting of octagon-shaped shells for hiding secret data with high embedding capacity. The procedure can be divided into two parts. In the first part, we assign a 4-ary digit, referred to as type, to each octagon-shaped shell located at the top layer. In the second part, we assign a type attribute to each element at the bottom layer of the reference matrix. Figure 2 shows the layout of octagon-shaped shells in the reference matrix. There are 51 51 octagons as shown in Figure 3. For ease of explanation, the type matrix of the octagon-shaped shell is denoted as , . The type matrix is constructed using the following rules: Firstly, the values of type in the same row change according to the gradient (ascent/descent) with a magnitude of "1". Secondly, the values of the elements in the same column change according to the gradient (ascent/descent) with a magnitude of "2". To demonstrate an example, we take four octagons near the origin (0, 0). First, we indicate their type values as {0, 1, 2, 3}, marked by red color in Figure 3. For example, 0, 0 0, 0, 1 1, 1, 0 2, 1, 1 3. In this manner, we assign the four octagon-based type values {0, 1, 2, 3} to the whole 51 51 octagons respectively. the bottom layer is also located on the edge adjacent to two octagons whose type codes are 0, 0 0 and 0, 1 1 at the top layer. The value of , involves two octagons whose type value can be calculated from the values of two octagons , and t , , using Equation (4), which is shown as follows. Accordingly, we obtain 2,5 0 which is shown in Figure 4b. (a) The top-layer octagon type (b) Type attribute , corresponding to each matrix element , In the process of embedding a secret message, a double-layer octagon-shaped shell reference matrix can be created, which can embed 7-bits of secret data into every nonoverlapping pixel pair. 
The construction of the double-layer matrix is similar to the method of the one-layer regular octagon-shaped shells [12]. The difference here is that the construction information in the proposed method has more additional information to reveal about the second layer. The construction of the second layer matrix (i.e., the type matrix) can be calculated according to the above steps. The extra information of the second layer includes the value with the coordinate (0, 0) in , , the construction rules of , and the algorithm for generating type matrix . The procedures for data embedding and extraction are described in detail in the following sections. Data Embedding and Data Extraction Procedures In this section, data embedding and extraction procedures of the proposed method are presented. Figure 5 shows the flow chart of the data embedding process. First, we divide the cover image into multiple non-overlapping pixel pairs ( , ). The binary secret stream S is divided into 7-bit sub-steams , which are further converted into two numbers: 4-ary digit and 32-ary digit , respectively. The pair contains 4-ary and 32-ary numbers ( , hidden in each pixel pair using the constructed double-layer reference matrix. When all the secret data is embedded, we get a stego-image. The detailed steps for data embedding and data extraction are described in the Algorithms 1 and 2. Algorithm 1. Data Embedding Algorithm. Input: A cover image sized , the binary secret stream S with length L. Output: A stego-image '. Step 1: Construct a one-layer reference matrix , according to the rules described in Section 2. Thereafter, generate a double-layer reference matrix, assigning the type value , corresponding to each matrix element , according to the rules described in Section 3.1. Consider , as the coordinates of the matrix M to specify the value , with the corresponding type ( , ). Step 3: Divide the secret message S into sub-streams of 7 bits, where ∈ 1, 2, … , . For each sub-stream , convert the first 2 bits into a 4-ary digit and the last 5 bits into a 32-ary digit , viz., || where "||" denotes the string concatenation operator. Step 4: Embed each sub-stream into each pixel pair ( , ) according to the following rules: Find the closest element , by searching in a square of 25 25 centered on , , where , and , . Replace , with , such that the stego-pixel pairs , , , to embed the sub-stream consisting of and . Step 5: Repeat Step 4 until all the sub-streams are embedded. Finally, obtain the stego-image I'. Algorithm 2. Data Extraction Algorithm. Input: A stego-image I' sized . Output: The binary secret stream S. Step 1: Reconstruct the double-layer reference matrix , where each matrix element , corresponds to a type value , , which is calculated according to the reules described in Section 3.1. Step 3: For each stego-pixel pair , , find two digits m , in 32-ary format and , in 4-ary format, respectively. The hidden secret data is || , where , and , , respectively. Step 4: Convert into binary bits of secret data. Step 5: Repeat Steps 2-4 to extract all the sub-streams. Combine all the sub-streams to form the secret binary stream S. Example of Data Embedding and Data Extraction Procedures We will take two cover pixel pairs ( , ) = 2, 7 and ( , ) = 8, 7 as examples to explain the embedding and extraction procedures based on the proposed method. The secret message of 14-bit stream to be embedded is "0110010 0000001". Figure 6 is the double-layer reference matrix related to the examples explained in this section. 
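As a complement to Algorithm 1, the hedged Python sketch below illustrates its core step: splitting a 7-bit sub-stream into a 2-bit 4-ary digit and a 5-bit 32-ary digit, and searching the 25x25 square centred on the cover pixel pair for the closest location whose bottom-layer value and top-layer type both match. The reference matrix M and the type function type_of are assumed to be supplied by the caller, built according to the construction rules described above; they are not reconstructed here.

```python
import itertools

# Hedged sketch of one embedding step of the proposed double-layer scheme
# (Algorithm 1): both layers of the reference matrix are caller-supplied.

def split_substream(bits7):
    """'0110010' -> (1, 18): 2-bit 4-ary digit and 5-bit 32-ary digit."""
    return int(bits7[:2], 2), int(bits7[2:], 2)

def embed_pair(px, py, bits7, M, type_of, window=12):
    """Return the stego pixel pair (x', y') carrying the 7-bit sub-stream."""
    d4, d32 = split_substream(bits7)
    best, best_dist = None, None
    for dx, dy in itertools.product(range(-window, window + 1), repeat=2):
        x, y = px + dx, py + dy
        if not (0 <= x < len(M) and 0 <= y < len(M)):
            continue
        if M[x][y] == d32 and type_of(x, y) == d4:   # both layers must match
            dist = dx * dx + dy * dy
            if best is None or dist < best_dist:
                best, best_dist = (x, y), dist
    return best

# The worked example of the paper: the sub-stream "0110010" splits into the
# 4-ary digit 1 and the 32-ary digit 18.
print(split_substream("0110010"))   # (1, 18)
```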
Figure 7 presents examples describing the data embedding process of the cover pixel pairs. Assume that we want to hide the sub-stream "01 10010" in the first cover pixel pair 2, 7 . The sub-stream "01 10010" is first divided into two segments, namely "01" and "10010". Thereafter, we need to convert binary digits into decimal digits, which results in two numbers 1 and 18 after conversion in the 4-ary and 32-ary systems respectively. Since m 2, 7 18 and 2, 7 1 is shown in Figure 6, it is not necessary to change the cover pixel pair, and the secret message can be extracted at a later stage. Therefore, the stego-pixel pair , 2, 7 is the same as the corresponding cover pixel pair ( , ) = 2, 7 as shown in Figure 7. The second cover pixel pair is ( , ) = 8, 7 . The sub-stream "00 00001" to be hidden, is converted into decimal digits, which results in 0 and 1 in the 4-ary and 32-ary systems respectively. Marked by blue color in Figure 6, the location (5, 5) is closest to the location 8, 7 and satisfies 5, 5 0 at the top layer and 5, 5 1 at the bottom layer. Finally, the second cover pixel pair is replaced with the stego-pixel pair ( , ) = 5, 5 , which is also shown in Figure 7. A double-layer matrix is constructed according to the construction rules shown in Figure 6, on the receiving end. We then divide the stego-image into non-overlapping pixel pairs to embed the 7-bit secret message. Taking the first stego-pixel pair ( , ) = 2, 7 as an example, the values of and are obtained from 2, 7 at the top layer and and m 2, 7 at the bottom layer of the double-layer matrix, where is 1 and is 18 respectively. Moreover, 1 and 18 are converted into binary data "01" and "10010" in the 4-ary and 32-ary number systems respectively. Finally, we get the combined secret sub-stream "0110010". Experimental Results Experiments were conducted using MATLAB R2017a to verify the performance of the proposed scheme. A total of six grayscale images were used from the University of Southern California-Signal and Image Processing Institute (USC-SIPI) image database [18] as shown in Figure 8. The size of each image was 512 512 pixels. The binary secret stream S was generated randomly. The parameters of Embedding Capacity (EC), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index (SSIM) were used as evaluation parameters to validate the performance of the proposed method. EC is the amount of secret message per pixel that can be embedded in the image. If the number of secret messages embedded in the image is more and the image quality can be maintained to a certain standard, then the data hiding method is supposed to have high imperceptibility and high embedding capacity. PSNR is used to measure the quality of the image. If the value of this parameter is high, it means that the image quality is good, and the secret message hidden in stego-image cannot be perceived easily by the human eye. Refer to Equations (6) and (7) | | ( is the height of image, is the width of image, represents the cover image, and I' represents the stego-image. MSE represents mean-square error, EC represents embedding capacity, and | | are the total number of secret bits that can be embedded). In addition, the parameter of SSIM measures similarity between the original image and the stego image. This is in line with the human eye's judgment of image quality. The higher the SSIM value, the higher will be the similarity between the original image and the stego image. 
SSIM value is calculated using Equation (9) as shown below, where and are the average values of the original image and the stego image respectively; is the covariation between the original and the stego images respectively; and and are the variation of the original image and the stego image respectively. and are constants. A comparison of the proposed method with the other four methods is shown in Table 1. It can be observed that the proposed scheme obviously outperforms the other methods in terms of the embedding capacity and image quality. To be more precise, we can see that the proposed method has a higher embedding capacity (3.5 bpp) compared to the methods in [12][13][14][15][16][17]19,20], with an acceptable decrease in the image quality. Also, the image quality of the proposed method is better than that of [13] at the same embedding capacity of 3.5 bpp. Our proposed method outperforms in terms of PSNR with an average value of 36.91 dB compared to the average PSNR of 30.62 dB obtained in [13]. While [14][15][16]20] have higher average PSNR values of 44.12 dB, 46.37 dB, 46.84 dB, and 41.27 dB respectively, the embedding capacity of our proposed scheme is higher by 1.5 bpp. As it can be seen in Table 2, we used six standard test images and set a fixed amount of embedding capacity to display changes in the PSNR for each image. For example, when the embedding capacity is 70 10 bits, the average PSNR of each image remains at 38.09 dB. At the embedding capacity of 50 10 bits, the PSNR remains at an average quality of 39.56 dB. Therefore, the PSNR value remains stable for each test image. The magnitude of the pixels change during the data embedding process, which depends on the constructed double-layer reference matrix and the embedding procedure. In other words, the image quality of the stego-image is independent of the cover image and it is possible to maintain the image quality within an acceptable range using the proposed method. Table 3 shows the SSIM values of six standard test images at a fixed EC (3.5 bpp, which is 917,504 bits). Interestingly, the SSIM value for complex images such as Baboon is higher compared to the SSIM value for smooth images such as Airplane. This is a unique finding in our proposed method. Therefore, we contend that our proposed method is more suitable for complex images to obtain a higher SSIM value. Also, as seen previously, the PSNR values remains stable for all the test images irrespective of their image texture. To confirm this, we tested our proposed method on 50 additional test images using USC-SIPI database [18] and Kodak Image database [21] as shown in Figure 9. We also calculated the PSNR values for six standard test images at different EC (bpp) as shown in Figure 10 below. The figure clearly shows that as the embedding rate increases, the PSNR value decreases similar to the property of methods based on the magic matrix hiding methods of [12][13][14][15][16][17]19,20,22]. However, interestingly, the PSNR values for the test images do not have much difference from each other, which again shows the point that the PSNR values remains stable using our proposed method irrespective of the image texture. 
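For reference, the quality metrics and the payload arithmetic used above can be sketched as follows. The PSNR/MSE definitions follow Equations (6)-(8); the SSIM shown here is the global single-window form of Equation (9), whereas practical implementations usually average it over local windows, so treat it as an illustration rather than the exact evaluation pipeline used in the experiments.

```python
import numpy as np

# Quick sketch of the quality metrics with their standard definitions:
# MSE/PSNR and a global (single-window) SSIM with the usual stabilising constants.

def psnr(cover, stego, peak=255.0):
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(cover, stego, peak=255.0):
    x, y = cover.astype(float), stego.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Capacity check for a 512x512 cover image at 7 bits per pixel pair:
pairs = 512 * 512 // 2
print("total payload:", pairs * 7, "bits")   # 917504 bits, i.e. 3.5 bpp
```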
Table 4 and Figure 11 compare the execution times of the following schemes: the scheme of [10] with EC = 1.5 bpp, Leng and Tseng's octagon-shaped shells scheme with single-layer embedding [12] with EC = 2.5 bpp, Xie et al.'s two-layer turtle shell matrix embedding [19] with EC = 2.5 bpp, Shen et al.'s double-layer square magic matrix scheme with EC = 2.5 bpp, and the proposed two-layer embedding using octagon-shaped shells with EC = 3.5 bpp. As mentioned earlier, for information hiding methods based on a magic matrix, the PSNR and SSIM values of all test images remain stable under the same embedding capacity regardless of the image texture. The execution time of each method shows the same stability, as presented in Table 4 and Figure 11. As seen in Figure 11, the proposed method has the largest embedding capacity and also consumes the longest computation time. However, irrespective of the method, the execution time at the maximum embedding capacity lies between 0.62 s and 1.90 s.

Conclusions

Steganography is used for covert communication, in which secret data are hidden in a cover medium, resulting in a stego-medium. The key goal of steganography is to embed the maximum amount (capacity) of secret data while hiding its existence with minimal distortion of the cover medium, so a data hiding method has to handle the trade-off between capacity and transparency/imperceptibility. Inspired by the regular octagon-shaped shells data hiding method of Leng and Tseng [12], we proposed a steganographic method based on a double-layer octagon-shaped shell matrix whose embedding capacity is superior to the existing data hiding schemes. In the regular octagon-shaped shells data hiding method, a 5-bit secret message can be embedded per pixel pair of the octagon-shaped shell matrix. In the double-layer octagon-shaped shells scheme proposed in this paper, we add a further 2 bits to enhance the embedding capacity: the digits of the first layer of the proposed reference matrix are 2-bit data in the 4-ary number system, and the digits of the second layer are 5-bit data in the 32-ary number system. Thus, each pixel pair can carry a total of 7 bits of secret data, leading to a high embedding capacity of 3.5 bpp. The experimental results verify that the proposed method is superior to the existing data hiding schemes in terms of embedding capacity while maintaining an acceptable visual quality, with an average PSNR of about 37 dB and SSIM values between 0.9 and 0.95. We contend that the proposed method is especially suitable for complex images, for which it obtains a higher SSIM value than other methods, and, as shown above, the PSNR values are stable for all test images irrespective of their texture. Moreover, the computation cost in terms of embedding time was 1.82 s on average. Two issues remain to be addressed in future work.
(1) The first issue is that the results in Table 3 suggest the proposed algorithm may become insecure at high EC because of the SSIM values obtained. At SSIM = 0.95, most viewers are satisfied with the visual appearance of the image; when SSIM falls below 0.90, the visible defects may be roughly twice those at 0.95, and the naked eye may perceive the deterioration of the picture, which compromises security. According to our experimental results, when the EC reaches 3.5 bpp, approximately 10% of pictures have an SSIM slightly below 0.9. An SSIM of 0.95 should therefore be treated as the limit when increasing the EC. (2) From the experimental results in Figure 11, although the proposed method has a larger maximum embedding capacity, it takes more time to perform information hiding than the other methods; the execution time at the maximum embedding capacity was around 1.90 s. In future work, we will therefore build on the method presented in [22] to develop another multi-layer information hiding method with better computational performance, making it more competitive in real-time application environments.
7,788.2
2021-04-01T00:00:00.000
[ "Computer Science", "Engineering" ]
On Classical and Bayesian Reliability of Systems Using Bivariate Generalized Geometric Distribution

The study of system safety and reliability has always been vital for quality and manufacturing engineers in varying fields, for which continuous probability distributions are generally proposed. Bivariate and multivariate continuous distributions are the candidates when studying more than one characteristic of the system. In this article, an attempt is made to address this issue when the reliability systems generate bivariate and correlated count datasets. The bivariate generalized geometric distribution (BGGD) is believed to serve as a potential candidate to model such types of datasets. The Bayesian approach of data analysis has the potential of accommodating the uncertainty associated with the model parameters of interest using uninformative and informative priors. A real-life bivariate correlated dataset is analyzed in the Bayesian framework, and the results are compared with those produced by the classical approach. Posterior summaries including posterior means, highest density regions, and predicted expected frequencies of the bivariate data are evaluated. Different information criteria are evaluated to compare the inferential methods under study. The entire analysis is carried out using the Markov chain Monte Carlo (MCMC) set-up of data augmentation implemented through WinBUGS.

Introduction

Reliability and system engineers often encounter the difficulty of dealing with uncertainties present in a system where more than one study variable is of interest to them. Medical experts face similar situations when the life of patients is at stake due to the failure of vital organs like the heart, brain, kidneys, liver, or lungs. The choice of discrete or continuous and bivariate or multivariate distributions depends on the nature and number of the study variables. A vast literature exists on the construction of probability distributions; for this, one may refer to [1]. No hard and fast criteria could be established to construct probability distributions, and more details on this issue may be found in [1-3].

If bivariate continuous distributions are to be used, we could choose from the parametric distributions suggested in the literature to analyze bivariate lifetime data [4-8]. As our study corresponds to the reliability of a system generating bivariate count data, the most suitable distribution to model such datasets is believed to be the bivariate generalized geometric distribution (BGGD) proposed by [9]. Many bivariate distributions for continuous random variables have been introduced in the literature for data analysis, especially in applications to survival data in the presence of censored data and covariates (see, for example, [10-20]); a recent study is [21]. Alternatively, it can be observed in the literature that the use of bivariate distributions for survival data assuming discrete observations is not very common. Some discrete bivariate distributions have been introduced in the literature, such as the bivariate geometric distribution of [4] or the bivariate geometric distribution of [22], but these discrete distributions are still not very popular in the analysis of bivariate lifetime data, especially in the presence of censored data and covariates (see also [4,23-28]).
Classical methods are frequently used in the analysis, but they suffer from certain drawbacks. The frequentists consider parameters to be unknown fixed quantities; they rely solely on the current data and deprive the results of any prior information available about the parameters of interest. The Bayesians, however, treat the parameters as random quantities and hence assign a probability distribution to the parameters. Bayesian analysis is a modern inferential technique that endeavors to estimate the model parameters taking both the current data and prior information about the parameters into account. As a result, we get a posterior distribution that is believed to average the current data and the prior information. The posterior distribution thus derived is the cornerstone and workbench of the Bayesians: inference about the parameters proceeds through numerical procedures, and the entire estimation is based on this very posterior distribution. A good review of the advantages of using Bayesian methods may be seen in [29]. The posterior distributions are often complex multidimensional functions that require the use of Markov chain Monte Carlo (MCMC) methods to draw results [30-34]. Their use is very popular for analyzing bivariate continuous or discrete random variables in the presence of censored data and covariates (see, for example, [29,35-40]). Recently, [41] has considered a weighted bivariate geometric distribution and [42] has used the q-Weibull distribution in classical as well as Bayesian frameworks. In recent years, the use of MCMC methods has gained much popularity [43-45].

It has been established that the BGGD is a good choice to model and analyze reliability count data appearing in medicine and engineering. The probability mass function (pmf) and the cumulative distribution function (CDF) of the BGGD are those given by [9]; here α, θ1, and θ2 are the unknown parameters that control the behavior of the datasets emerging from the BGGD. Estimating the unknown parameters is the ultimate goal of inferential statistics.

Due to the variety of applications of the BGGD, the efficient estimation of the pmf and the CDF of the BGGD is the purpose of the present study. The authors of [9] have recently worked out the classical maximum likelihood estimators for the BGGD. To avail the aforesaid advantages, we estimate the parameters of the BGGD in the Bayesian framework. We use MCMC methods to draw results and apply different model selection criteria to compare the methods under consideration, namely ML, AIC, AICC, BIC (also known as the Schwarz criterion), and HQC.
The Frequentist Approach of Statistical Analysis

In statistical terminology, the data generating pattern of any system or model depends on the system-specific characteristics, called parameters. The data generated from the model are therefore believed to advocate the values of the parameters that caused the system to generate the dataset. The uncertainty associated with the data values is defined in terms of the frequencies of the data values emerging again and again from the system under study. The objective of the analysis is to infer the characteristics of the system or model from the relevant data collected randomly. This is considered the default approach in a variety of areas of science. Commonly used frequentist methods of statistical inference include uniformly minimum variance unbiased estimation, maximum likelihood (ML) estimation, percentile estimation, least squares estimation, weighted least squares estimation, etc. Here we report only the most commonly used ML estimation method, whose results will henceforth be compared with their Bayesian counterparts.

Maximum Likelihood (ML) Estimation

The likelihood function gives the probability that the model, system, or distribution under study would have generated the observed sample. The frequentist method of maximum likelihood estimation professed by [46] calls for choosing those values of the parameters that maximize the probability of the very observed sample. We generally opt for algebraic maximization of the likelihood function to find the ML estimates, but we may also evaluate the probability of the observed sample at all possible values of the parameters and choose as estimates those parametric values that maximize the evaluated probability.

Algebraically, ML estimation proceeds as follows. Consider a random sample of size n of bivariate correlated data (xi, yj), for i = 1, 2, ..., n1 and j = 1, 2, ..., n2, from the BGGD with pmf f(x, y; α, θ1, θ2). The log-likelihood function l(x, y; α, θ1, θ2) is the logarithm of the joint probability of the sample. Equating to zero the first partial derivatives of l(x, y; α, θ1, θ2) with respect to the unknown parameters (α, θ1, θ2) yields the normal equations, which may be solved simultaneously to get the required ML estimates. However, if the normal equations are too complex to be solved simultaneously, we have to resort to numerical methods or to direct maximization of the log-likelihood function.

ML Estimation - Algebraic Approach

Consider the real dataset for the BGGD given in Appendix A1 (Table 8), appearing in [47-49], where X represents the counts of surface faults and Y the counts of interior faults in 100 lenses. A summary of the data is presented in Table 1 along with its figurative representation in Fig. 1. Obviously, the observed data are positively skewed and negatively correlated. As stated in Sect. 3.1, the normal equations obtained from the observed dataset are complicated, and hence the estimates are found using numerical methods. Following [9], the ML estimates, standard errors (SE), and 95% confidence intervals for the parameters are reported in Table 2.
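As a rough illustration of this numerical step, the sketch below maximizes a bivariate log-likelihood with SciPy; `bggd_log_pmf` is a hypothetical placeholder for the BGGD pmf of [9] (not reproduced here), and the starting values and optimizer choice are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def bggd_log_pmf(x, y, alpha, t1, t2):
    """Placeholder for the log of the BGGD pmf f(x, y; alpha, theta1, theta2) from [9]."""
    raise NotImplementedError("substitute the published pmf here")

def neg_log_likelihood(params, x, y):
    alpha, t1, t2 = params
    if alpha <= 0 or not (0 < t1 < 1) or not (0 < t2 < 1):
        return np.inf  # keep the optimizer inside the parameter space
    return -np.sum([bggd_log_pmf(xi, yi, alpha, t1, t2) for xi, yi in zip(x, y)])

def fit_ml(x, y, start=(1.0, 0.5, 0.5)):
    """Direct numerical maximization of the log-likelihood (Nelder-Mead)."""
    res = minimize(neg_log_likelihood, start, args=(np.asarray(x), np.asarray(y)),
                   method="Nelder-Mead")
    return res.x, -res.fun  # ML estimates and the maximized log-likelihood
```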
ML Estimation - Graphical Approach

As already discussed in Sect. 3.1, we may opt for direct maximization of the likelihood function to find the ML estimates. The ML estimation theory suggested by [46] calls for choosing those values of the parameters that maximize the probability of the observed sample, and these values are regarded as the ML estimates. This technique is used here by plotting the likelihood of the observed BGGD dataset at different parametric values; the resulting plots were generated in R and are displayed in Fig. 2. We observed that the highest probability is obtained at α = 2.288, θ1 = 0.676, θ2 = 0.652 (the first of the plots in Fig. 2), and hence these values may be regarded as the ML estimates.

The graph in the first cell corresponds to the one produced using the ML estimates. The maximum likelihood is found to be 3.2619E-192 at the ML estimates, i.e., α = 2.288, θ1 = 0.676, θ2 = 0.652. It is also interesting to note that the highest ordinate, 0.02822, appears at the data pair (x = 2, y = 1) at the ML estimates, and the value of the negative log-likelihood is reported as -432.957.

The Bayesian Approach of Statistical Analysis

A brief overview of this topic is already given in Sect. 1. The Bayesian method combines prior information about the model parameters with the dataset using Bayes' rule, yielding the posterior distribution. Bayes' rule is named after Thomas Bayes, whose work on this topic was published in 1763, two years after his death [50]. To establish the Bayesian inference set-up, we need a model or system in the form of a probability distribution controlled by a set of parameters to be estimated, the sample dataset generated by the model or distribution, and a prior distribution based on the prior knowledge of experts regarding the parameters of interest. These elements are formally combined into the posterior distribution, which is regarded as the key distribution and workbench for the subsequent analyses. The algorithm is explained in [42].

If f(D|Θ) is the data distribution depending upon the vector of parameters Θ, p(Θ) is the joint prior distribution of Θ, and L(D|Θ) is the likelihood function defining the joint probability of the observed sample data D conditional upon the parameter vector Θ, then the posterior distribution of Θ conditional upon the data D, denoted by f(Θ|D), is given by

f(Θ|D) = L(D|Θ) p(Θ) / ∫ L(D|Θ) p(Θ) dΘ.

The denominator is also termed the predictive distribution of the data and is usually treated as the normalizing constant that makes the posterior a proper density. It may be omitted when evaluating the Bayes estimates but must be retained when comparing models. The posterior distribution has the potential to balance the information provided by the data and the prior distribution. It is of supreme interest to the Bayesians but often has a very complex and complicated nature and hence needs numerical methods to evaluate it.
Uninformative Priors

In situations where there is a lack of knowledge about the model parameters, we choose vague, diffuse, or flat priors. As the BGGD is based on the set of parameters (α, θ1, θ2), we assume uninformative uniform priors for the parameters, i.e., uniform distributions over (h1, h2), (h3, h4), and (h5, h6) for α, θ1, and θ2, respectively, where α > 0, 0 < θ1 < 1, 0 < θ2 < 1, and hi > 0, for i = 1, 2, ..., 6, are the hyperparameters associated with the priors.

Informative Priors

When there is sufficient information available about the model parameters, we assign informative priors to the model parameters in such a way that they adequately represent the knowledge available about the parameters being examined. In the present situation, we assign an exponential distribution to α and independent beta distributions to θ1 and θ2:

p(α) = h7 exp(−h7 α), α > 0; θ1 ~ Beta(h8, h9); θ2 ~ Beta(h10, h11).

Here again hi > 0, for i = 7, 8, ..., 11, are the hyperparameters associated with the priors, which should be elicited in the light of expert opinion. Note that the elicitation of hyperparameters is beyond the scope of this study, so we opt for merely choosing the values of the hyperparameters to be used in the subsequent Bayesian analysis.

The Posterior Distribution

Being specific to the estimation of the parameters of the BGGD, let the vector of parameters of interest and the data be denoted by Θ = (α, θ1, θ2) and D = (X, Y), respectively. The data distribution is denoted by f(D|Θ) and the prior distribution by p(Θ). Then, using Bayes' rule, the posterior distribution, denoted by f(Θ|D), may be written as

f(Θ|D) = f(D|Θ) p(Θ) / ∫ f(D|Θ) p(Θ) dΘ,

where all the notations are already defined. The posterior distribution may also be written as a kernel density in proportional form as

f(Θ|D) ∝ f(D|Θ) p(Θ).

The marginal posterior distributions g(α|D), g(θ1|D), and g(θ2|D) of the parameters α, θ1, and θ2 may be found by integrating the nuisance parameters out of the posterior distribution f(α, θ1, θ2|D), for example g(α|D) = ∫∫ f(α, θ1, θ2|D) dθ1 dθ2, and similarly for θ1 and θ2.

Bayes Estimates

To work out the Bayes estimates of the parameters (α, θ1, θ2), we need to specify a loss function. A variety of loss functions is used to derive Bayes estimates. Under the well-known squared error loss function, the Bayes estimates of α, θ1, and θ2 are the arithmetic means of their marginal posterior distributions, e.g., the Bayes estimate of α is E(α|D) = ∫ α g(α|D) dα. The marginal posterior distributions are generally of complicated and complex forms and hence need numerical methods to evaluate them. Markov chain Monte Carlo (MCMC) is the most frequently used numerical method in Bayesian inference, so we proceed with MCMC through the WinBUGS package to find the posterior summaries of the parameters of interest.

The MCMC Method

The MCMC method selects random samples from the probability distribution according to a random process termed a Markov chain, where every new step of the process depends on the current state and is completely independent of previous states. MCMC methods can be implemented using any of the standard software packages like R, Python, etc., but the most specific software used for Bayesian analysis is the Windows-based Bayesian inference Using Gibbs Sampling package (WinBUGS). We implemented WinBUGS using the following scheme: (i) define the model based on the probability mass function of the BGGD and click the Check Model menu in the WinBUGS software; (ii) load the data given in the Appendix; (iii) specify the nodes and run the code for 10,000 iterations following a burn-in of 5,000 iterations. The WinBUGS code used to analyze the data is given in Appendix A2.
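The paper performs this sampling in WinBUGS (Gibbs sampling). Purely to illustrate the MCMC idea of drawing from the kernel f(Θ|D) ∝ f(D|Θ)p(Θ), the sketch below implements a random-walk Metropolis sampler with the exponential-beta informative priors; the hyperparameter values and the `log_lik` callable (built on the BGGD pmf) are assumptions, not the study's elicited values.

```python
import numpy as np
from scipy.stats import beta, expon

def log_prior(alpha, t1, t2, h7=1.0, h8=2.0, h9=2.0, h10=2.0, h11=2.0):
    """Exponential prior on alpha, independent beta priors on theta1 and theta2.
    The hyperparameter values here are illustrative, not the elicited ones."""
    if alpha <= 0 or not (0 < t1 < 1) or not (0 < t2 < 1):
        return -np.inf
    return (expon.logpdf(alpha, scale=1.0 / h7)
            + beta.logpdf(t1, h8, h9) + beta.logpdf(t2, h10, h11))

def metropolis(x, y, log_lik, n_iter=10_000, burn_in=5_000, step=0.05, seed=0):
    """Random-walk Metropolis on (alpha, theta1, theta2); returns post burn-in draws."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 0.5, 0.5])
    cur = log_lik(x, y, *theta) + log_prior(*theta)
    draws = []
    for it in range(n_iter):
        prop = theta + rng.normal(scale=step, size=3)
        cand = log_lik(x, y, *prop) + log_prior(*prop)
        if np.log(rng.uniform()) < cand - cur:   # accept/reject step
            theta, cur = prop, cand
        if it >= burn_in:
            draws.append(theta.copy())
    return np.array(draws)  # column means give the Bayes estimates under squared error loss
```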
Bayesian Results Under Uniform Non-informative Priors

Here we have assumed uniform priors for all the parameters under study, as defined in the section on uninformative priors above, and the resulting Bayes estimates, standard errors, medians, and 95% highest density regions, along with the values of the hyperparameters, are presented in Table 3. It is observed that the chosen hyperparameters have a high impact on the Bayes estimates. The initial values have no significant effect on the posterior estimates if true convergence is achieved.

Convergence Diagnostics

Sequential plots are used in WinBUGS to assess difficulties in the MCMC and the realization of the model. In MCMC simulations, the values of the parameters of interest are sampled from the posterior distributions, so the estimates will be convergent if the posterior distributions are stationary in nature and the Markov chain appears to be mixing well. To check convergence, different graphical representations of parametric behavior are used in the MCMC implemented through WinBUGS.

History Time Series Plots

The time series history plots of the parameters are presented in Fig. 3. Here, the Markov chain seems to be mixing well enough and appears to be sampling from a stationary distribution.

Dynamic Traces and Autocorrelation Function

The traces of the parameters and the autocorrelation graphs of α, θ1, and θ2 are presented in Fig. 4. These graphs also confirm convergence.

Bayes Estimates Using Informative Exponential-Beta Priors

The ideal characteristic of Bayesian analysis is that it can accommodate the prior information shared by field experts about the unknown parameters. It is important to note that these experts may not be experts in statistics and hence cannot translate their expertise into statistical terms, so it is the responsibility of the statisticians to formally utilize the experts' prior information to elicit the values of the hyperparameters of the prior density, which are subsequently used in the Bayesian analysis. Elicitation of hyperparameters is beyond the scope of our study; however, an exhaustive discussion on the elicitation of hyperparameters may be found in [51]. We have chosen widely differing values of the hyperparameters, and a summary of the Bayes estimates for all of these values is presented in Table 4. It is observed that the chosen hyperparameters have a high impact on the Bayes estimates. The initial values have no significant effect on the posterior estimates if convergence of the Markov chain is achieved; however, a change in the initial values causes a slight change in the parametric estimates.

Possible Predictive Inference

After finding the Bayes estimates, it is necessary to evaluate them on the basis of predictive inference. As the data comprise 100 observations, a predicted sample of 100 observations is generated based on the Bayes estimates obtained through the MCMC-based analysis. The predicted data along with their summaries are presented in Tables 5 and 6. There exist some differences between the predicted estimates and those of the original observed dataset; these differences may be due to the different Bayes estimates evaluated after accommodating the prior information about the model parameters via the hyperparameters.

Comparison of the Frequentist and Bayesian Approaches

An important aspect of this study is to compare the Bayes estimation method with the classical ML estimation method. We have accomplished this by using the different model selection criteria presented below.
Model Selection Criteria

The classical and Bayesian methods of estimation are compared using the model selection criteria ML, AIC, AICC, BIC, and HQC, which in their standard forms are defined by

AIC = −2 ln L(α, θ1, θ2) + 2k,
AICC = AIC + 2k(k + 1) / (n − k − 1),
BIC = −2 ln L(α, θ1, θ2) + k ln(n),
HQC = −2 ln L(α, θ1, θ2) + 2k ln(ln(n)).

Here ln L(α, θ1, θ2) denotes the log-likelihood, n denotes the number of observations, and k denotes the number of parameters of the distribution under consideration. The smaller the values of these criteria, the better the fit. For more discussion on these criteria, see [52,53]. The maximum likelihood estimates, uninformative Bayes estimates, and informative Bayes estimates, along with their associated values of the model selection criteria, are reported in Table 7. We see that the values of the model selection criteria produced by the Bayes method are smaller than those produced by the ML estimation method, which declares the Bayesian method more appropriate. This is due to the distinct characteristic of the Bayesian methods that they incorporate the prior information related to the model parameters. However, it is pertinent to note that these results are sensitive to the selected values of the hyperparameters; hence a careful elicitation of the hyperparameters is a pressing need when using Bayesian methods. Carefully selected or elicited values of the hyperparameters may lead to even better estimates.

Summary and Conclusions

The bivariate generalized geometric distribution is believed to model reliability count datasets emerging from diverse phenomena. To understand the data generating phenomena, it is necessary to estimate the model parameters of the BGGD. To accomplish this, statistical theory offers two competing approaches, namely the frequentist and Bayesian approaches. The former approach is based on the current data only, whereas the latter utilizes prior information in addition to the current dataset produced by the system or phenomenon. This study offers a comparison between the frequentist and Bayesian estimation approaches. To elaborate the frequentist approach, different descriptive measures and the maximum likelihood estimates are evaluated. The Bayesian estimation approach has also been illustrated using uninformative and informative priors. We have worked out the posterior summaries of the parameters, comprising posterior means, standard errors, medians, credible intervals, and predictions for both types of priors using the MCMC simulation technique. A correlated bivariate count dataset on counts of surface and interior faults is used for illustration. Comparison of the two estimation methods has been made using different model selection criteria. All the model selection criteria, including ML, AIC, AICC, BIC, and HQC, indicate that the Bayesian approach outperforms the competing ML approach across the board. It has also been observed that the results may coincide if the information contained in the prior distribution and the dataset agree; moreover, improved prior information may improve the results. As a future study, it is recommended that the Bayesian analysis of such datasets be carried out using formally elicited hyperparameters of the priors instead of values chosen by the experimenter.
Fig. 1 Graphs of the BGGD at the observed data points.
Fig. 2 BGGD at the observed data points and different parametric values, including the ML estimates α = 2.288, θ1 = 0.676, θ2 = 0.652 (first cell).
Table 2 ML estimates, standard errors, and 95% confidence intervals.
Table 3 Summary of Bayes estimates under the uninformative priors.
Table 4 Summary of Bayes estimates under the informative priors.
Table 5 The predicted expected data of counts of surface (X) and interior faults (Y) in 100 lenses.
Table 7 Model selection criteria for the classical and Bayes estimation methods for the data of the example.
Table 8 Data set: counts of surface (X) and interior faults (Y) in 100 lenses.
5,222
2023-05-31T00:00:00.000
[ "Mathematics" ]
Bitcoin Cryptocurrency and Electronic Commerce in Saudi Arabia

Bitcoin, a well-known cryptocurrency, has attracted much attention worldwide and is becoming more widely used. This study develops a hypothesis to investigate and test the impact of Bitcoin on e-commerce use in Saudi Arabia using a survey research approach. Analyzing factors such as Bitcoin awareness and usage among Saudi Arabian consumers can help increase online transactions and payment efficiency and support a larger degree of financial inclusion and a higher level of trust and security. An online survey was used to collect responses, and descriptive-analytical and standard assessment criteria approaches were implemented to interpret the results. Responses were collected from individuals and employees of various companies working in different occupations in Saudi Arabia. In addition, the statistical tools SPSS and SmartPLS were used to test the study's hypotheses. The findings indicated rapid growth in e-commerce transactions and some knowledge of Bitcoin; they also show a positive correlation between digital currencies (Bitcoin) and e-commerce in Saudi Arabia. The study's conclusions are expected to be valuable for those involved in Saudi Arabia's e-commerce sector, helping them decide how to adopt and use Bitcoin in their digital business strategy. The study also opens the way for future investigations into topics including Saudi Arabia's regulations for Bitcoin, consumer attitudes toward Bitcoin, and the potential of blockchain technology for enhancing the nation's e-commerce processes.

Plain Language Summary: Bitcoin and Electronic Commerce in Saudi Arabia

This paper aims to analyze cryptocurrency and e-commerce development in Saudi Arabia and to find whether there is any correlation between the development of e-commerce and the usage of the cryptocurrency Bitcoin in Saudi Arabia. The analysis used a questionnaire as the research method and a descriptive-analytical approach to analyze the results. The data sample comprises individuals and employees from different companies and businesses in Saudi Arabia. The statistical tools SPSS and SmartPLS are then used for data interpretation. To our knowledge, this was the first survey covering this topic in the country; thus, it highlights the situation and offers recommendations for improving the development further. The findings suggested a positive relationship between digital currencies (Bitcoin) and electronic commerce in Saudi Arabia, satisfying the researchers' hypothesis.

Introduction

The business world has witnessed a new revolution through thriving electronic commerce (e-commerce) and the emergence of digital currencies. E-commerce has flourished globally in various buying, selling, supply, and demand fields. Amazon, Alibaba, and Microsoft are among the world leaders in e-commerce, alongside local companies and applications used daily in Saudi Arabia, such as Wsslini and Jahiz (Chakraborty et al., 2021).
In the same context, a new electronic currency (Cryptocurrencies) appeared in 2007.These are virtual currencies used in exchanges and electronic financial transactions performed over the Internet.Cryptocurrency uses a secure network, electronic signatures, and encryption without the need for an intermediary or a reliable third party, such as banks.It can also be exchanged with traditional cash currency, such as the Riyal and Dollar.These features meet the needs of companies and consumers to speed up services via the Internet.There are multiple types of cryptocurrencies, including Bitcoin (Acet & Diken, 2019;Nakamoto, 2008).Bitcoin is created (mined) via the internet using free programs that perform complex mathematical calculations, requiring high-performance devices and computers to decode the blocking codes (Kharb et al., 2017).Recently, e-commerce has experienced colossal growth worldwide, and COVID-19 might be a dominant factor contributing to this growth (Maks€ ud€ unov & Dyikanov, 2021).Traditional trade ultimately changed since Internet users increased globally by 49% in 2020.So many more online transactions are being conducted, and more companies are offering their products and services to clients worldwide, as stated by Communications and Information Commission (Communications and Information Technology Commission, 2017). In addition, digital goods are inspected through the Internet and delivered according to certain guarantees and valid electronic signatures.Moreover, the confidentiality of correspondence between dealers is protected (Salah, 2018).Furthermore, the preceding helps eliminate paper document usage and related expenses and contributes to organized project operations.Also, it provides an alternative to administrative and communication costs, leading to well-established relationships between sellers, investors, and consumers (Sepasi et al., 2014). Saudi Arabia is among the world's largest e-commerce markets; the statistics in this sector reported that the volume of e-commerce transactions approached $5.7 billion.Also, Saudi e-commerce contributed to the GDP with a return of $10,482 billion in 2020 (Chamber, 2019).As a result of this swift growth in both technologies' inventions and e-commerce transactions, new electronic methods were developed, for example, STC Pay, and others.With this competing environment, cryptocurrencies attracted the world as an acceptable payment method in e-commerce.However, the Saudi Arabian government has taken a cautious approach toward cryptocurrencies.In 2018, the Saudi Arabian Monetary Authority (SAMA), the country's central bank, issued a statement warning against the risks associated with cryptocurrencies, stating that they are not accepted in the country as legal money and that there are no regulating laws to monitor their usage.Despite that, SAMA has also acknowledged the potential benefits of blockchain technology and has been exploring its applications in various sectors (Andonov et al., 2021). Even though cryptocurrency usage in electronic commerce is increasing rapidly worldwide, supporters and detractors exist among their users.Therefore, analyzing what distinguishes these two groups of users is fundamental for understanding their different intention to use cryptocurrencies for electronic commerce. 
As far as we have realized through our research in the literature, no adequate coverage of the development of cryptocurrency and electronic commerce in Saudi Arabia was found.Therefore, this paper aims to analyze cryptocurrency and e-commerce development in Saudi Arabia and find if there is any correlation between the development of e-commerce and cryptocurrency ''Bitcoin'' usage in Saudi Arabia.The analysis used a questionnaire as a research method and a descriptive-analytical approach to analyze the results.The data sample comprises individuals and employees from different companies and businesses in Saudi Arabia.Statistical tools (SPSS) and SmartPLS are then used for data interpretation.According to our knowledge, this was the first survey covering this topic in this country.Thus, it will highlight the situation and offer recommendations for improving the development further. E-Commerce E-commerce is a crucial application of computers and the internet, where commercial transactions are conducted between major companies and individuals worldwide within seconds to meet their daily needs.As technology continues to evolve rapidly, it offers a wide range of benefits, reflected in the growth of e-commerce.This requires careful thinking about the system's principles during the planning phase of building an e-commerce presence; these might include the quality of the information system, services, attractiveness, and entertainment (Sepasi et al., 2014). E-commerce offers a new perspective that allows corporations to compete in different fields.Consequently, organizations where e-commerce opportunities are appropriately utilized, increase their business performance considerably, accompanied by even more options, higher growth rates, reduced costs and risks, and other benefits.However, to achieve all the above advantages, corporations need to rely on practical leadership skills and build enhanced customer experiences to develop an intelligent strategic method (Andonov et al., 2021). Despite the difficulties as consequences of the Covid-19 pandemic, such as social distancing, longer delivery times, workshop production, and lockdown, the pandemic is considered a driving factor that has accelerated the expansion and impact of global e-commerce since 2019.In addition, to continue improving their performance, corporations need to assess and study the trends and challenges of e-commerce and develop innovative solutions consistent with the technologies' advancement (Maks€ ud€ unov & Dyikanov, 2021).In Saudi Arabia, one objective of the National Transformation Program was to increase modern commerce to 80% by 2020.Therefore, a study was conducted in 2019 to investigate the impact of the business sector transformation to e-commerce on the economy's growth.The study found some obstacles to achieving that goal, including logistic issues, long governmental processes, lack of experience in the field, and retailers' concerns about changing to modern commerce.The study also recommended developing the regulations and ecosystem and expanding the information, communication & technologies (ICT) infrastructure, including human capacity building.Another recommendation was to allow gradual transformation to reduce the risk (Chamber, 2019). 
Cryptocurrency

In 2008 the cryptocurrency base was established through Satoshi Nakamoto's vision, represented by Bitcoin. Bitcoin is an application of the Blockchain software platform, a technology leading the world of digital assets. It keeps all operations and records of cryptocurrencies in a chain of blocks for each transaction, ordered chronologically, so health, finance, and many other fields are expected to utilize Bitcoin and blockchain technology (Durmuş & Polat, 2018).

Cryptocurrencies are growing in significance in the economy as a payment method and are also opening an investment opportunity globally. However, they face various risks due to the lack of regulation, and the fluctuation in their value affects investment (Arora & Vigg Kushwah, 2018).

A study investigated whether Bitcoin can be considered a kind of ''digital gold'' by inspecting features shared between digital resources and other natural resources valuable to the economy. Using a simple model to project the sensitivity of natural resources and Bitcoin to a capital-to-energy ratio, it concluded that the economic effect of Bitcoin is similar to that of other exhaustible resources like gold (Goorha, 2021).

Another study was conducted to show the acceptance of Bitcoin in the digital economy as an instance of blockchain technology. This study examined the features of virtual currencies and Bitcoin's rate, and whether they persisted in the market during the period of interest inflation between 2016 and 2017. The study found that many market indicators were insignificant, demonstrating that cryptocurrencies, that is, Bitcoin, are free of third-party control (decentralized), which makes Bitcoin a unique financial asset. The authors extended their analysis model to forecast other periods of collapses and peaks. It was found that there is a repetitive cycle of an increasing trend up to some peak, followed by a decreasing rate due to a drop in demand. These cycles varied in length, and Bitcoin values generally fluctuated (Rutskiy et al., 2021).

A study by Cristofaro et al. (2023) aimed to understand the factors influencing the adoption and use of cryptocurrencies for electronic commerce in the USA and China. The study analyzed the behavioral and cultural features that distinguish users who support and those who detract from using cryptocurrencies in electronic commerce. The results show that attitude, subjective norms, perceived behavioral control, and herding behavior positively affect cryptocurrency adoption for e-commerce, while financial literacy has no influence. Cultural dimensions significantly amplify or reduce the relationships between these factors and adoption. The study further emphasized the importance of considering cultural norms, trust, and attitudes toward innovation when promoting cryptocurrency acceptance. User experience factors, such as perceived usefulness and ease of use, are also crucial for driving adoption. The research highlighted the need to tailor strategies to market differences in cryptocurrency adoption. It also suggested future research directions, including investigating the influence of family, friends, and media, as well as exploring contextual variables such as policies and regulations (Cristofaro et al., 2023). A research study by Palos-Sanchez et al.
( 2021) examines the adoption of Bitcoin cryptocurrency as a means of payment in companies.The study utilizes the technology acceptance model (TAM) and extends it with new variables to investigate an adoption model.The sample consists of business executives from companies and commercial establishments.The analysis technique used is partial least squares structural equation modeling (PLS-SEM).The findings indicate that privacy significantly influences perceived utility, and trust significantly impacts privacy and perceived ease of use, indirectly influencing the intention to use cryptocurrencies.The study highlights the high level of trust placed in Bitcoin and suggests that cryptocurrencies will undergo significant changes in the future due to government regulations.Perceived security contributes to positive attitudes toward blockchain technology, promoting Bitcoin use in various sectors.Companies prefer Bitcoin as a payment method over traditional intermediaries like credit cards and bank documents.The study emphasizes the need for companies and financial institutions to be prepared to receive and offer cryptocurrency payments.The study's limitations include its focus on companies with significant income and the strong influence of the commerce sector.Future studies can explore other economic sectors that could be more economically developed (Palos-Sanchez et al., 2021).Konstantinidis et al. (2018) conducted a systematic literature review on business uses of blockchain technology.The study identifies significant results from existing research, explores application fields, and provides insights into the present level of knowledge in this area.According to the survey, blockchain technology has been used in supply chain management, finance, banking, healthcare, energy, and government services.It describes each domain's blockchain use cases and advantages.Blockchain technology has been touted for its transparency, immutability, security, and lower transaction costs.It also discusses blockchain's scalability, regulatory, and interoperability difficulties.The study explores business blockchain adoption determinants.It examines technological, organizational, and regulatory variables that affect blockchain adoption and implementation.While Bitcoin is not explicitly mentioned in the paper title, it is the key application of blockchain technology, the transactions verifier.Further, the study is based on a selected literature set that includes relevant blockchain research in business applications.However, the paper's findings are restricted to literature accessible till publication.Blockchain technology is continually growing; further advancements and research must be addressed.Since the study synthesizes the literature on blockchain in commercial applications, it helps academics, practitioners, and decision-makers comprehend this field's current knowledge, guide firms contemplating blockchain technology deployment, and inform decision-making.The paper highlights research gaps and encourages blockchain for business study.It emphasizes scalability, interoperability, and regulatory issues and invites scholars to examine blockchain's possibilities in developing sectors (Konstantinidis et al., 2018). 
The literature thoroughly discussed the advantages and disadvantages associated with Bitcoin transactions.The benefits included decentralization, deflationary, anonymity, security, freedom, semi-instant transfers, minimal charges, and investment chance.On the other hand, the drawbacks involved instability of the exchange rate, illegal use associated with its anonymity, distrust from the public, and protection against theft (Jurik, 2021).Amsyar et al. (2020) conducted an excellent systematic literature assessment covering cryptocurrencies' technological, regulatory, and financial difficulty in the digital revolution.The study addresses cryptocurrency technical issues, including scalability, security, and privacy, and explores blockchain, consensus, and cryptography protocols.Also, the paper examines cryptocurrency regulatory concerns, including anti-money laundering (AML) and know-your-customer (KYC).It highlights the need for correct legal frameworks and government policies to solve these issues.Additionally, the study explores the cryptocurrencies' financial challenges, including price volatility, lack of stability, and potential for fraud and market manipulation.These challenges affect the mainstream financial adoption of cryptocurrencies.However, the literature set used for the analysis may only include some relevant studies, reducing the conclusions' comprehensiveness.This paper raises awareness among researchers, policymakers, regulators, and industry professionals about cryptocurrency challenges that need to be addressed as technological, regulatory, and financial aspects to create a conducive environment for cryptocurrency adoption and consumer protection and marketstability legislation.Finally, the paper identifies research gaps, suggests specific aspects for future research, and provides a roadmap for future cryptocurrency studies on the challenges (Amsyar et al., 2020).Sousa et al. 
(2022) used a systematic literature study and bibliometric analysis to examine the cryptocurrency adoption literature.The study examines technological (security, scalability), economic (price volatility, transaction costs), social (trust, network effects), and regulatory (legal framework, government regulations) aspects that affect bitcoin adoption.The paper highlights the patterns of cryptocurrency adoption across various sectors and geographic regions and the adoption trends in finance, e-commerce, remittances, and other industries.The analysis reveals the predominant research themes in cryptocurrency adoption, including adoption models, user behavior, market dynamics, and regulatory challenges.Also, the study identifies research gaps for future exploration.However, the analysis is based on a selected set of research papers, which may skew the conclusions as conference papers and unpublished studies may have been excluded.Additionally, the study relies on literature up to the publication date, while its dynamic nature may cause new cryptocurrency research advancements to be missed.Nevertheless, the paper presents a detailed analysis of bitcoin adoption literature, pinpointing significant themes and research needs and can be a base for future studies in this domain, keeping policymakers, industry professionals, and investors informed and helping shape regulatory frameworks, business strategies, and investment decisions in the cryptocurrency field (Sousa et al., 2022).Nakamoto (2008) makes a proposal to enhance the Bitcoin protocol security.The idea was to build an electronic transactions system that does not depend on trust.Initially, the system shared the original protocol, starting with solid ownership through digital signatures.Next, the system offers a method to avert the increase in cost by introducing a peer-to-peer network in which proofof-work is used.This approach makes it computerintensive and unfeasible for the adversary to make a change (Nakamoto, 2008). Many other studies were conducted to determine the security aspect of Bitcoin.These studies analyzed the protocol of Bitcoin and defined its properties.It was proved secure at high network synchronicity and at a certain level of hashing power for those targeting to interrupt the protocol's properties.At the same time, security decreases as the network desynchronizes (Cojocaru et al., 2020).Similarly, an analysis of the cryptographic setting, mainly the Bitcoin computational function that works to overcome possible adversaries, was conducted.A set of essential conditions was proposed to offer a robust transaction record, even if, during the execution, a malicious adversary controlled less than 50% of the miners at each step (Garay et al., 2019).The Bitcoin protocol was examined again, considering that the adversary uses a computer with high specifications.The study proved that Bitcoin remained secure, provided that the adversary's hashing power is suitably bounded (Ciaian et al., 2021;Cojocaru et al., 2020). Since virtual currencies spread widely in electronic trading and have become of interest to large companies, many of which invested in, the legal adaptation of Bitcoin and other virtual currencies has become a global concern.This is because cryptocurrency has no proper organization or granted right, making it a good cover for illegal business.As a result, many countries banned its use (Mohmood, 2020). 
Methodology In the finance world, the appearance of new technologies allowed individuals and entities to move from paper transaction systems to electronic systems such as ATMs and wired transfers.Accordingly, efforts continue to find better, easy, and fast transaction performance methods.Therefore, cryptocurrencies appeared, and the world started exploring and using them.This paper aims to analyze the development of cryptocurrencies, mainly Bitcoin, and e-commerce in Saudi Arabia and study the relationship between them if it exists because of digital currencies' significance in facilitating transactions through the internet.Authors expected that the usage of Bitcoin might increase e-commerce transactions. Sample and Data Collection.The authors prepared an anonymous online survey to test the relationship between Bitcoin Cryptocurrency and Electronic Commerce in Saudi Arabia.The target population was different community sectors, including various professionals (governmental, semi-governmental, private companies, private businesses, and students) in Riyadh and Jeddah cities.Also, the researchers distributed the survey link through different channels, such as WhatsApp groups and emails, to different firms.So, the research population was comprised of 200 individuals (100 in each city) from diverse backgrounds and occupations.However, when the online survey was sent to the subjects, only 124 responses were received (58 from Jeddah and 66 from Riyadh populations).The reason for selecting these two cities is because both cities are among the main and advanced cities with large populations as well as multicultural communities. Questionnaire Design and Measurement.The authors used specific survey items that were designed especially for this study.This questionnaire included 14 questions listed in Table 1.It is a quantitative technique that allows the perceptions understanding of the population to cryptocurrency and e-commerce.Each one might influence Bitcoin or e-commerce measured by 5-point Likert-scale questions (i.e., strongly disagree ''1,'' disagree ''2,'' neutral ''3,'' agree ''4,'' strongly agree ''5'').The questionnaire was designed in three sections.The first part of the survey aims to gather demographic data regarding respondents' general information, including age, gender, education level, and work domain.The analytical part comprises the second and third parts, designed to study the key facts that can reveal the relationship between Bitcoin and e-commerce. 
Data Analytical Procedure. Ten days after the survey was distributed, 124 responses had been received. The questions were coded using Microsoft Excel, and the analysis was conducted with the statistical software tools SPSS and SmartPLS. Data analysis consisted of two main parts. The descriptive statistics give general insight into the mean and SD of the survey's items; using version 14 of the Statistical Package for the Social Sciences (SPSS), the frequency and statistical tables for the sample were calculated. The collected data were then analyzed using SmartPLS version 3.3.7 for Mac. This software was recommended for studies with a purpose similar to this work, for example, exploratory research and studies where the sample size is small (Palos-Sanchez et al., 2021). The software uses the partial least squares (PLS) method, which consists of a two-step approach. The first step is building and testing the measurement model to examine the relationship between the variables and their measures. The second step is creating and testing the structural model to model the dependence relationship between the dependent and independent variables (Memon et al., 2021).

The Theoretical Model and Hypotheses Development

For this study, the theoretical model, developed based on the technology acceptance model (TAM), is illustrated in Figure 1. TAM is a reliable tool and has been used by many researchers to study the acceptance of different technologies in different contexts (Palos-Sanchez et al., 2021). In this study, the independent variable is Bitcoin, while the dependent variable is e-commerce (Palos-Sanchez et al., 2021).

Hypothesis: In order to test the impact of Bitcoin on e-commerce, the following null hypotheses were developed.

Descriptive Statistics (SPSS)

The authors tested the reliability of all parts and found it acceptable (Cronbach's α of .725), with slight variation compared to the SmartPLS result. Also, no value was eliminated. A minority (11.3%) of the participants held a diploma, and the remaining participants had a bachelor's degree or above. Table 2 presents the demographic information for the respondents.
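For reference, a Cronbach's alpha figure like the one quoted above can be computed from item-level Likert responses as in the following sketch; the data here are randomly generated for illustration and are not the study's responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses for four items from ten respondents:
demo = np.random.default_rng(1).integers(1, 6, size=(10, 4))
print(round(cronbach_alpha(demo), 3))
```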
(Survey items from Table 1: BC4 Saudi Arabia has official bodies that offer awareness to those interested in investment or trading using cryptocurrencies; BC5 There is a general trend toward using cryptocurrencies in Saudi Arabia; BC6 Cryptocurrencies' transactions are fast; BC7 Cryptocurrencies' transactions are secure to use; BC8 Cryptocurrencies are easy to use in purchasing products from e-commerce stores; BC9 The online trading of cryptocurrencies is easy.)

Table 3 shows that respondents generally agree in their responses to the items, with a relatively low standard deviation, indicating no significant dispersion in the answers of the study sample. However, there are four exceptions; these questions showed data inconsistency. They include BC1: Do you use cryptocurrencies (Bitcoin) in paying for online transactions (x̄ = 1.37, s = 0.850) and BC3: Number of years working with cryptocurrencies (x̄ = 1.10, s = 0.347); the variation in responses is due to the different scaling contexts for each question. In addition, the items BC2: I have adequate knowledge of the nature and usage of the Bitcoin cryptocurrency (x̄ = 2.68, s = 1.20) and BC4: Saudi Arabia has official bodies that offer awareness to those interested in investment or trading using cryptocurrencies (x̄ = 2.67, s = 0.994) showed answers that reflect the expected situation in Saudi Arabia, that is, the knowledge and awareness about digital currencies are not yet mature, and no regulatory or official bodies are certified to offer consultations on trading with cryptocurrencies, as this is illegal in the Saudi system.

In the analytical part of the survey, the researchers had to eliminate sources of error that might have biased the results, so they discarded the questions that did not represent a clear link to the study subject; four questions were discarded, namely EC5, BC1, BC2, and BC3.

Measurement Model's Assessment. This is also known as the outer model in PLS-SEM. This step is used to assess the validity and reliability of the measurement model through two parts: convergent validity and discriminant validity. The researchers use reflective measurement for each variable.

Convergent validity includes individual item reliability (factor loadings), composite reliability (CR), and average variance extracted (AVE). Table 4 shows that six out of ten items have factor loadings on their corresponding constructs above 0.7. Among the remaining four items, the lowest is BC7 (Cryptocurrencies' transactions are secure to use), with a factor loading of 0.527, which, according to Hair et al.
(2010), is still above the minimum acceptable threshold of 0.5 for factor loadings. The same table shows that the composite reliability (CR) values were above the acceptable value of 0.7 (Bagozzi & Phillips, 1991). Moreover, Cronbach's alpha (α) was calculated for each construct to assess internal consistency, and the values for the two variables exceed the threshold of 0.7 (Bagozzi & Yi, 1988); that is, all constructs demonstrate acceptable reliability coefficients, indicating that the indicators provide consistent measures of their respective constructs. The T-values also indicate a satisfactory reliability level for all items, as they are significantly linked with their variables. However, the table shows that the average variance extracted (AVE) values for both constructs in this study fall below the minimum acceptable threshold of 0.5 suggested by Fornell and Larcker (1981), which impacts the model's validity. In order to identify potential reasons for the lower AVE values and ensure the robustness of the measurement model, the indicators' reliability, validity, and factor loadings were used for interpretation. Factor loadings were found to be reasonably high, suggesting adequate convergent validity. It was also essential to assess discriminant validity to ensure that the constructs are distinct. To address this concern, further analysis was conducted to examine the discriminant validity among the constructs. Therefore, the square root of the AVE for each construct was calculated, as suggested by Fornell and Larcker (1981), and these values were compared with the inter-construct correlations. The analysis confirmed that each construct's square root of the AVE surpasses its correlation with the other construct, indicating satisfactory discriminant validity. These results offer additional support for the measurement model. However, the authors acknowledge that the low AVE values can be a concern and may affect the overall interpretation of the results.

Table 5 shows the discriminant validity results, which include cross-loadings and the construct correlations (square root of the AVE). The cross-loading of each measurement item has its maximum value with its own variable. The bolded and shaded entries highlight the highest values in the columns, indicating where each item has the most significant loading and, thus, where its contribution is most strongly associated. For the "Bitcoin" column, the highlighted numbers are higher than the corresponding numbers in the "E-commerce" column, and vice versa for the highlighted numbers in the "E-commerce" column. This suggests that items BC4 to BC9 have a stronger relationship with the "Bitcoin" construct, while items EC1 to EC4 relate more strongly to "E-commerce", demonstrating that each item loads significantly higher on its own construct. Table 6 shows that the square root of the AVE for each variable is higher than its correlation with the other variable.

Structural Model's Assessment. This is also known as the inner model; the key criteria for assessing the structural model are the significance of the path coefficients and R². The researchers performed the nonparametric bootstrapping procedure in the PLS analysis with 124 samples to calculate the path coefficients and examine the structural model. As shown in Table 7, the authors' hypothesis is supported: Bitcoin has a significant relationship with e-commerce (β = .590, t = 14.254, p < .001).
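The reliability, validity, and structural-model quantities discussed above follow standard formulas. The hedged Python sketch below illustrates how CR, AVE, the Fornell-Larcker comparison, and a bootstrap t-value for a single standardized path could be computed; all numerical inputs (loadings, construct scores, correlations) are hypothetical placeholders rather than the study's reported values, and the sketch is not SmartPLS's algorithm.

```python
import numpy as np

rng = np.random.default_rng(42)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum(1 - loading^2))."""
    loadings = np.asarray(loadings, dtype=float)
    s = loadings.sum() ** 2
    return s / (s + (1.0 - loadings ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    loadings = np.asarray(loadings, dtype=float)
    return (loadings ** 2).mean()

# Hypothetical outer loadings for the retained Bitcoin items (BC4-BC9); illustrative only.
bitcoin_loadings = [0.78, 0.74, 0.71, 0.527, 0.65, 0.69]
cr = composite_reliability(bitcoin_loadings)
ave = average_variance_extracted(bitcoin_loadings)

# Fornell-Larcker criterion: sqrt(AVE) must exceed the inter-construct correlation.
inter_construct_corr = 0.59            # hypothetical correlation between Bitcoin and e-commerce
print(f"CR = {cr:.3f}, AVE = {ave:.3f}, "
      f"Fornell-Larcker satisfied: {np.sqrt(ave) > inter_construct_corr}")

# Nonparametric bootstrap of a single standardized path (its t-value), as done in PLS-SEM.
n = 124
bitcoin_scores = rng.normal(size=n)                               # hypothetical construct scores
ecommerce_scores = 0.59 * bitcoin_scores + rng.normal(scale=0.8, size=n)
beta_hat = np.corrcoef(bitcoin_scores, ecommerce_scores)[0, 1]    # one-predictor path = correlation
boot = [np.corrcoef(bitcoin_scores[i], ecommerce_scores[i])[0, 1]
        for i in (rng.integers(0, n, size=n) for _ in range(5000))]
t_value = beta_hat / np.std(boot, ddof=1)
print(f"beta = {beta_hat:.3f}, t = {t_value:.2f}, R^2 = {beta_hat ** 2:.3f}")
```

In the one-predictor case sketched here, R² is simply the squared standardized path coefficient, which mirrors the relationship between the β and R² values reported below.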
The researchers used the value of R² to determine the predictive power of the study's model. R² is a statistical measure representing the ability of the independent variables to explain the dependent variable. Approximately 34% (R² = 0.348) of the variation in the values of e-commerce is accounted for by a linear relationship with Bitcoin. Since R² is higher than 0.1, which is the minimum acceptable level according to Bagozzi (1981), the model has acceptable predictive power. Table 7 summarizes the hypotheses test results. Figure 2 shows a positive relationship between Bitcoin and e-commerce (β = .590, p < .05). The results indicate that Bitcoin influences e-commerce in Saudi Arabia.

The study limitations can be summarized as follows. This study was conducted to demonstrate the relationship between Bitcoin and e-commerce in Saudi Arabia, which forms an excellent environment for the study due to the rapid growth in digital and electronic fields, especially e-commerce and online payment methods. However, a more detailed and comprehensive survey has to be implemented before this result can be generalized. Also, the researchers had to discard four questions to avoid inaccurate results; hence, the study ended up with only ten analytical questions, which may be too few to generalize the results. Thus, researchers should develop more questions to cover all related aspects in the future.

Future Work. The following research directions can advance knowledge of the connection between the Bitcoin cryptocurrency and e-commerce in Saudi Arabia and assist stakeholders, including policymakers and businesses, with the benefits, drawbacks, and implications of incorporating Bitcoin into the e-commerce ecosystem. We suggest the following future work to advance the field:
1. Future research should include other main cities in Saudi Arabia, a larger number of respondents, and a more systematic way of selecting the sample in order to generalize the findings. It should also develop comprehensive analytical questions that cover all aspects of the variables.
2. Regulatory frameworks and policy implications: Examine how Saudi Arabia's regulations on Bitcoin and online shopping are changing, and analyze how government regulations have affected the use of Bitcoin, consumer safety, and the evolution of e-commerce.
3. Cybersecurity and fraud prevention: Examine the security issues of using Bitcoin in online commerce and identify efficient risk-reduction tactics. Look at how blockchain can improve cybersecurity, detect abnormal access, and prevent fraud in online transactions.
4. Financial and economic implications: Analyze the effects of Bitcoin on the Saudi Arabian economy as an investment asset, and investigate the effects of Bitcoin's price volatility on entities such as businesses, consumers, and financial organizations involved in electronic commerce.
Conclusion. Based on a survey method, this work examined the relationship between Bitcoin and e-commerce in Saudi Arabia. According to the analytical results and the discussion above, the findings suggest a positive relationship between digital currencies (Bitcoin) and electronic commerce in Saudi Arabia, supporting the researchers' hypothesis. Nevertheless, since this work is limited in its sample of citizens, its results cannot be generalized. Future research should consider respondents from all around Saudi Arabia. It would also be helpful to develop comprehensive analytical questions that cover broader aspects of the two variables of the study. The authors recommend that the Saudi authorities establish a regulatory body to set the policies and regulations for electronic transactions that involve digital currencies. It is also recommended to form an advisory and oversight body to monitor the usage of digital currencies and to develop programs that raise public awareness about cryptocurrencies and the related regulations in Saudi Arabia.

Table 2. Demographic Information for Respondents.
Table 3. General Attitude of the Study Population Toward Bitcoin and E-Commerce.
Table 4. Results of the Measurement Model: Convergent Validity.
Table 7. Overview of the Hypotheses Test Results.
Visualizing Collaboration in Teamwork: A Multimodal Learning Analytics Platform for Non-Verbal Communication. Developing communication skills in collaborative contexts is of special interest for educational institutions, since these skills are crucial to forming competent professionals for today's world. New and accessible technologies open a way to analyze collaborative activities in face-to-face and non-face-to-face situations, where collaboration and student attitudes are difficult to measure using traditional methods. In this context, Multimodal Learning Analytics (MMLA) appear as an alternative to complement the evaluation of and feedback on core skills. We present an MMLA platform to support collaboration assessment based on the capture and classification of non-verbal communication interactions. The developed platform integrates hardware and software, including machine learning techniques, to detect spoken interactions and body postures from video and audio recordings. The captured data are presented in a set of visualizations designed to help teachers obtain insights about the collaboration of a team. We performed a case study to explore whether the visualizations were useful to represent different behavioral indicators of collaboration in two teamwork situations: a collaborative situation and a competitive situation. We discussed the results of the case study in a focus group with three teachers to get insights into the usefulness of our proposal. The results show that the measurements and visualizations are helpful to understand differences in collaboration, confirming the feasibility of the MMLA approach for assessing and providing collaboration insights based on non-verbal communication.

Introduction. Teamwork and collaboration have become relevant as the complexity of today's problems surpasses individual capabilities [1,2]. Collaboration, defined as a group of people (or organizations) working together to achieve a common goal [3], requires effective communication among the participants. In the educational context, collaboration has been identified as an important learning component that helps to improve students' performance [4] and to develop higher-level reasoning. Companies are now looking for graduates who possess these new skills, together with many others, such as decision making, problem solving, time management, and critical thinking [2,5-7]. The above poses a new challenge for Higher Education Institutions (HEI), since they need to provide relevant knowledge and practices to allow their students to be highly productive and tailored to these new industry requirements [5,8,9]. It is noticeable that traditional methods struggle to assess the learning of these skills, as they usually focus on the results rather than the processes that led learners to acquire and/or develop them. In the specific case of collaboration, there are difficulties in producing standardized tests. Section 3 presents the theoretical framework on which we base the elicitation of the requirements for the platform [17]. Section 4 presents the design and technical considerations of the proposed system. Then, Section 5 presents the case study along with its results. In Section 6 we discuss the implications of the results for the observation of collaboration constructs. Finally, Section 7 presents our conclusions and discusses future work.

Related Work. Non-verbal communication is defined as the behavior of the face, body, or voice, without linguistic content, i.e., everything except words [22].
Non-verbal communication involves, for example, facial expressions, gestures, voice tonalities, and speaking time, among many others. The work of [15] approaches the assessment of non-verbal collaboration, but considers only non-verbal elements of spoken interactions. Despite this limitation, that work serves as an initial foundation for our proposal, which aims to extend its spoken-interaction-based approach to body posture analysis. This section reviews work on the use of body posture and time in collaborative contexts.

Postures may provide information related to the sentiments and intentions of a person or indicate power and social status [23]. For instance, the respect and disposition towards the participants during an interaction may be identified from the individual's posture [23]. In this sense, a closed and inflexible posture is less attractive than an open and relaxed posture. Identifying postures during collaboration may therefore provide important complementary information about the participants and may help to better understand the entire learning process [24]. Andolfi et al. [25] investigated how posture influences the generation of novel ideas in the context of creativity by proposing two studies. The first study used a sample of 102 students divided into two balanced groups. Each subgroup completed one of two creative tasks, and the students were randomly asked to adopt open or closed postures while describing their ideas. The findings support the hypothesis that posture influences creative task performance, but could not confirm that the facilitating effect of open postures is specific to creativity. The second study involved 20 students and added further dimensions to the analysis, incorporating different physiological measures and a logical task not requiring creativity. The results showed that postures specifically influence the performance of creative tasks. Hao et al. [26] incorporate the participants' emotions as an additional component. The method is very similar to that proposed by Andolfi et al., but here the emotions are induced by watching videos, and the participants are standing. The authors show that participants exhibited the greatest associative flexibility in the open-positive posture and the greatest persistence in the closed-negative posture. These findings show that compatibility between body posture and emotion is beneficial for creativity. This work makes us reflect on how an individual's posture might influence the ability to solve collaborative problems with a creative component, or how it can affect the creativity of other team members. Moreover, Latu et al. [27] investigate how the behavior of visible leaders empowers women in leadership tasks. They hypothesize that women tend to imitate the empowered posture of successful women. Experiments showed that, in groups, women adopted the postures of the female leaders when these were famous models (but not when the women were exposed to non-famous models). The above suggests that finding mimicry between postures may be a reflection of leadership among interlocutors.

From the MMLA perspective, understanding collaboration and communication among students has been studied from different points of view. Grover et al. [28] developed a framework to capture multimodal data (video, audio, clickstream) from pairs of programmers while they were working together to solve a problem, in order to predict their level of collaboration. Starr et al.
[29] studied how delivering feedback to students regarding collaboration can affect productive interactions in small learning groups. This feedback can be delivered by a traditional method (verbally delivered interventions) or multimodally (in real time). One of their findings is that simple verbal interventions can help participants pay attention to specific aspects (e.g., how much they talk and how much space they provide to their partner). However, they did not find evidence that continuous feedback supports collaboration. On the other hand, Davidsen et al. [30] expound on how two 9-year-olds collaborate through gestures and body movements. The experiment showed that differences of opinion were reflected in oppositional gestures and movements in the face of the same phenomenon. Cornides-Reyes et al. [31] analyze the collaboration and communication of students in a Software Engineering course in an exploratory study. They collected data using multidirectional microphones and applied social network analysis techniques and correlational analysis. Their findings show that MMLA techniques offer considerable potential to support the skill development process in students. Some of the mentioned articles consider using multiple modalities of communication, such as posture, proxemics, and chronemics. However, the tools used to collect the data are traditional, such as recordings or data collection systems tailored to each experiment. In the case of Riquelme et al. [15], a tool was developed to provide automatic feedback to teachers; however, it only considers the chronemic component of communication. Therefore, there is an opportunity to expand and integrate new aspects of communication. On the other hand, Järvelä et al. [32] conclude that multimodal data can help understand regulatory processes in collaboration. Furthermore, a relevant factor pointed out by the authors is the delivery of timely information to improve results. Table 1 shows a synoptic summary of the research mentioned in this section.
Background: Collaboration and Multimodal Learning Analytics. Boothe et al. [21] have presented a framework to close the gap between research efforts on the theoretical understanding of the collaboration process and the multimodal learning analytics approach. The framework aims to connect collaboration theory constructs with MMLA measurements, supporting the study of collaboration constructs with quantitative measurements. The framework is based on six collaboration constructs proposed by [17] (contribution, assimilation, team coordination, self-regulation, cultivation of environment, and integration), which are in turn grouped into three categories: cognition, metacognition, and affect (see Table 2). Regarding the cognition category, the contribution construct refers to a cognitive action that contributes to advancing the collaborative goal, while the assimilation construct concerns the actions performed when receiving a contribution from another team member. Concerning metacognition, the team coordination construct refers to the actions taken to improve the team's overall efficiency, while self-regulation deals with the individual actions through which a group member adapts his or her behavior to facilitate participation in the group. Finally, concerning the affect category, cultivation of environment refers to subjects supporting other team members through verbal or non-verbal signals of acceptance, while integration addresses affective actions of a group member towards the cohesion of the group. According to the framework, the collaboration constructs are first refined into behavioral indicators (e.g., subjects have a positive attitude when interacting) and then into traces of behavior from different communication modalities (e.g., an open body posture when speaking) [17]. With MMLA tools, it is possible to use sensors to collect media from different communication channels (e.g., audio and video) and then process them to extract communication features (e.g., speaking time and body postures) to support the observation of traces of behavior. The extracted features are organized and visually displayed to provide feedback analytics (e.g., a timeline with the spoken interactions and the different body postures of all the group members), in order to support the observation of behavioral indicators and to provide insights about the collaboration constructs. Our proposal aims to exploit the above framework by designing an MMLA platform to study collaboration constructs from non-verbal communication. Therefore, we consider the challenges of MMLA identified in [33], such as heterogeneity of data measurements, data integration, and generalization of the study, among others. Table 2. Framework based on [21].

Developed Solution. In this section, we present the design of a system to support the multimodal analysis of collaboration constructs. From a methodological point of view, we based our research on the Design Science (DS) methodology, particularly on the interpretation by Wieringa [34]. Design Science specifies four stages to design and research artifacts in their context: problem definition, treatment design, treatment validation, and treatment implementation. This article covers the problem definition and the treatment design stages.
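To make the construct-to-indicator-to-trace-to-feature refinement concrete, the following Python sketch encodes one illustrative chain using the examples quoted in the text above. The mapping is our own illustration, not the authors' full instantiation of the framework.

```python
# Illustrative refinement of one collaboration construct into a behavioral indicator,
# a trace of behavior, and the MMLA features/feedback that support its observation.
framework_instance = {
    "cultivation of environment": {
        "behavioral_indicator": "subjects have a positive attitude when interacting",
        "trace_of_behavior": "an open body posture when speaking",
        "mmla_features": ["speaking time", "body posture class"],
        "feedback_analytics": "timeline of spoken interactions and postures per group member",
    },
}

for construct, refinement in framework_instance.items():
    print(f"{construct}: observe '{refinement['trace_of_behavior']}' "
          f"via features {refinement['mmla_features']}")
```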
In the problem definition stage, the stakeholders' goals and needs are identified, for which we use Boothe's framework [21]. In the treatment design stage, the tool must be designed, developed, and tested to determine whether it could contribute to the stakeholders' goals, which we address through the case study and the focus group. The remaining DS stages, which consider validating the tool and transferring it to a real-world context, are out of this paper's scope. The design goal of the developed solution is to help teachers understand how a team collaborates using MMLA. To achieve this goal, we have instantiated the framework by Ochoa [17] by proposing a set of behavioral indicators and their associated requirements for feedback analytics, as well as the traces of behavior and their respective feature extraction requirements. We summarize these definitions in Table 2. In order to meet the above requirements, we propose to provide feedback analytics in the form of a set of visualizations based on measurements of the spoken interactions and the postures of the subjects. We designed five visualizations to address the six feedback analytics requirements presented in Table 2, which are detailed below.

• Timeline: This visualization jointly depicts the spoken interactions (bars) and the body postures of each subject (circles) throughout the activity. The widths of the bars and circles show the length of each interaction and posture, respectively. With this visualization we aim to support the understanding of the assimilation, self-regulation, and cultivation of the environment constructs.

• Spoken interaction graph: In this visualization, each subject is represented by a node, whose relative size represents the number of spoken interactions. The directed arcs between the nodes are stronger (thicker) when a spoken interaction from a subject, represented by the source node, is followed by a spoken interaction of another subject, represented by the target node. This visualization was designed to support the contribution, team coordination, and integration constructs.

For our proposal, we take as a starting point our previous work [15], which supports capturing, storing, analyzing, and visualizing voice data coming from collaborative discussion groups. Multidirectional microphones provide the captured voice data, and we use social network analysis techniques for data analysis. We extend this work by incorporating four cameras and machine learning techniques to recognize the participants' postures. This involves addressing one of the challenges for MMLA researchers associated with synchronous multimodal data collection [35]. We have incorporated this kind of device/technique to present a panoramic scenario to the educator/researcher. In the following, we present the technical environment of the system, which includes the high-level architecture and the technologies used. Figure 1 illustrates the high-level architecture of the developed system. It focuses on the distribution of the hardware used and the context of use. The system has a data-collection device composed of a Raspberry Pi 4, which integrates the ReSpeaker for audio data capture and a group of four USB camera modules for video data capture. The ReSpeaker consists of a group of multidirectional microphones that allow, through an algorithm, voice activity detection (VAD) and estimation of the direction of arrival (DOA) for four individuals within a capture radius of three meters.
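As a hedged sketch of how such a capture loop might individualize speakers, the Python snippet below polls a hypothetical reader for the microphone array's voice activity and direction of arrival and maps the angle to one of the four participants seated around the device. The function read_vad_and_doa() is a placeholder, not the ReSpeaker's actual API, and the quadrant mapping is an assumption made for illustration.

```python
import time

def read_vad_and_doa():
    """Hypothetical stand-in for querying the microphone array firmware.
    Returns (voice_active: bool, direction_of_arrival_in_degrees: float)."""
    return True, 135.0  # fixed dummy values for the sketch

def doa_to_participant(doa_degrees):
    """Assign the direction of arrival to one of four participants seated around the device,
    assuming each participant occupies one 90-degree quadrant."""
    return int(doa_degrees % 360 // 90)

for _ in range(50):                     # poll the array for a few seconds
    active, doa = read_vad_and_doa()
    if active:
        speaker = doa_to_participant(doa)
        print(f"{time.time():.2f}s  participant {speaker + 1} speaking (DOA {doa:.0f} deg)")
    time.sleep(0.1)
```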
Furthermore, camera modules are used to obtain the images of the four participants around the device. The device was thus designed to be located at the center of the interaction for the purpose of individualizing the participants. It communicates with a server, which is in charge of storing the data and of the processing required for visualization. In order to control the operation of the ReSpeaker and the cameras, an application was developed. It receives data from the ReSpeaker through the GPIO connection and from the cameras through the USB ports. It was divided into two independent modules, written in Python 3.7 and C. This application collects audio and video and then transmits this information to a server. The transmission is done wirelessly to a previously configured server using the UDP protocol and includes the audio from the four microphones and the images from the four cameras. The server receives and processes the data transmitted by the device, as shown in Figure 2. It deploys a web application composed of a front end developed with the Flask 2.0.1 framework and a back end developed in Python 3.7. In addition, MongoDB 1.21 has been used as the database management system. This web application allows the user to record the sessions of an activity, process the data, and visualize the results. The process starts when the user sets up an activity and then indicates the start of the recording. This generates a command on the server to record the audio and video, and the extraction of audio features in real time begins. When the user indicates the end of the recording, the server ends the recording process. After the activity is recorded, the user starts the video processing, which consists of two parts: obtaining the features and their subsequent classification. Finally, the visualizations are produced. The platform processes the audio data in real time, from which it obtains the first metrics (speaking time and number of interventions). These first metrics are stored locally in the database with a time tag. The metrics are related to the analysis of the participants' interventions as described in [15]. The raw data are recorded and stored in WAV and AVI file formats for audio and video, respectively. Due to hardware limitations, the video data processing is performed after the activity and is focused on posture metrics. Video processing is divided into two components. The first consists of taking a frame (image) from the video and obtaining the key points. The key points, i.e., the parts of the body that describe the human anatomy, are estimated from the image using OpenPose [36], which uses a previously trained convolutional neural network. This method has been previously employed in the literature [37-40]. The second component takes the key points and classifies the pose. The classification model used is a Multilayer Perceptron (MLP), which is a helpful tool for classification problems and has previously been used to classify poses either from the image perspective [41,42] or from 2D and 3D skeletons [43,44]. An MLP has three types of layers: the input layer, the output layer, and the hidden layers between them. In this work, the input layer has 100 neurons and receives a 30-value input vector. The hidden layers consist of 21 neurons with a ReLU activation function. Finally, the output layer has 8 neurons with a softmax function to determine each pose.
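A minimal sketch of an MLP with the layer sizes just described (100, 21, and 8 units over a 30-value keypoint vector) is given below. The paper does not state the framework, the first layer's activation, or the training hyperparameters, so Keras, ReLU on the first layer, the optimizer, and the randomly generated stand-in dataset are all assumptions made for illustration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_FEATURES, NUM_CLASSES = 30, 8   # 30 keypoint-derived values per frame, 8 posture classes

model = keras.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(100, activation="relu"),             # "input layer" of 100 neurons (activation assumed)
    layers.Dense(21, activation="relu"),              # hidden layer of 21 neurons with ReLU
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one output neuron per posture
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random stand-in for the 16,640 keypoint samples mentioned later (75%/25% train/test split).
X = np.random.rand(16640, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=16640)
split = int(0.75 * len(X))
model.fit(X[:split], y[:split], validation_data=(X[split:], y[split:]),
          epochs=5, batch_size=64, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))
```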
The set of postures was determined based on the definition of a closed posture. A closed posture is defined as any posture that involves covering the body and/or bending or crossing the limbs, such as crossing an arm, hand, leg, or foot with its opposite [45]; the opposite is understood as an open posture. Moreover, the choice of postures was derived from those presented in [46,47], taking into account the camera angle and the fact that the individual is seated. The considered postures are illustrated in Figure 3. For MLP training, we constructed the dataset from 2-min videos in which a person performs the postures. The dataset was converted from videos to key points using OpenPose. In total, 16,640 samples were obtained. These were divided into 75% for training and 25% for testing. The training achieved 99% accuracy.

Case Study. In order to validate the achievement of the design goal presented in Section 4, we performed a case study to answer the following research question: Does the analytics feedback collected by the tool provide insights about the collaboration constructs? To get insights on this matter, we decided to compare two different teamwork activities with a high contrast between collaborative and non-collaborative work. The first activity, namely the Collaborative Activity, aimed to explore whether the MMLA visualizations of subjects interacting collaboratively effectively support the teacher's observation of behavioral indicators and traces of collaboration. The second activity, namely the Competitive Activity, aimed to identify indicators and traces of non-collaborative behavior in an activity designed to produce conflict and more chaotic interactions among the subjects. We collected data automatically (through the MMLA platform) and manually (taking field notes) in both activities. Field notes allowed us to describe the flow of interaction of the subjects and observations about the six collaboration constructs of the framework [17]. The automatic data collection performed by the MMLA platform followed the requirements presented in Table 2: measurement of the number of spoken interventions, speaking time (per intervention), and type of posture (open, closed, hands on the hips, hands on the head, and hugging the opposite arm). The visualizations and the field notes were handed to two members of the research team who hold a Master's degree in Teaching for Higher Education, referred to as the reviewers. In the two activities, the subjects received a task to be performed in five minutes, without further instructions about how to interact to achieve it. The same four subjects participated in both activities, and we recorded audio and video of each of them during the whole activity. The four subjects are students from different degree programs and universities: Psychology, Auditing Accountancy, Industrial Management Execution Engineering, and Business Administration. All participated voluntarily and gave their informed consent. The group is composed of 3 women and 1 man, between 25 and 28 years old, who did not know each other.

Collaborative Activity. The subjects were asked to collaboratively write, in five minutes, a sentence about what might be the first article of Chile's new Constitution (at the time of the case study, Chile was in the midst of the process of writing its new political constitution). Field notes were taken about the interaction flow and the subjects' attitudes during the activity.
According to the field notes, four main stages were identified during the activity: (S1) a brief initial coordination, where the subjects agreed to present their opinions sequentially and then write the sentence in agreement; (S2) the first exposition (by Subject 4), which ended after approximately one minute, interrupted by another subject worried about the time remaining to complete the activity; (S3) the rest of the presentations, which continued sequentially with sporadic interventions by the rest of the subjects; and (S4) an attempt to write down an agreement, although the subjects could not successfully finish the activity during the remaining time. Regarding the subjects' attitude, a dominant attitude of Subject 4 was noted by the experimenter, as this subject constantly commented on the positions of the rest of the group's members. The experimenter observed that the rest of the members were open to listening and collaborating. The recorded data, presented in Table 3, summarize the number of interventions recorded for each subject, as well as the number of posture changes identified by the system. When asked about the degree to which the visualizations contribute to understanding the interaction flow of the subjects, the reviewers agreed that the timeline visualization was the most valuable because it clearly shows that the subjects took turns to present their positions. The timeline visualization, presented in Figure 4, depicts what the reviewers characterize as the four stages described in the experimenter's notes. The analysis criterion agreed upon by the reviewers was to ignore isolated detections that could be produced by noise or slight changes in posture; instead, they focused on the big blocks of interaction. The timeline clearly shows how all four subjects speak to agree on the interaction procedure during Stage 1, while the dominance of Subject 4 is shown in Stage 2. The Stage 3 interactions show the presentation of Subject 3, one comment by Subject 4, and then a brief exposition by Subject 2, complemented by Subject 1. Finally, the Stage 4 interactions show how Subject 4 starts wrapping up with the contribution of the other subjects. Another useful visualization for the interaction flow is the spoken interaction graph in Figure 5A. Although it does not provide a temporal representation of events, it clearly shows how Subjects 2 and 4 dominate the number of interventions. Note that this visualization does not allow observing how long the subjects' speaking interventions took. Therefore, Figure 5B helps to better understand the distribution of the speaking time: Subject 4, again, shows some of the longest interventions (22 s), while most of the interventions of the rest of the subjects are no longer than six seconds. When asked about the subjects' attitudes during the activity, the timeline visualization in Figure 4 was also preferred by the reviewers to get an initial idea of the subjects' performance.

Competitive Activity. In this activity, the same four subjects were asked to jointly decide who should be saved in a bunker in an apocalyptic scenario. Again, the subjects had five minutes to come to an agreement while the experimenter took notes about the interaction flow and the subjects' attitude. The experimenter's notes describe that the activity was as chaotic as predicted: no interaction agreement was defined by the group, and each subject started to argue about why they themselves were the best choice to be saved.
The experimenter noticed that Subject 4 kept a dominant attitude, but in this case Subject 2 was more active in presenting their arguments, while Subject 3 was remarkably overwhelmed by the situation. Subject 1 showed a calm attitude, although their interventions were longer than in the Collaborative Activity. Analogously to the collaborative activity, Table 4 presents the recorded data for this second activity. Figure 6 presents the timeline visualization. In this case, the reviewers found less value in the visualization regarding the interaction flow, because the subjects' chaotic interventions can hardly be distinguished from what was considered noise by the reviewers in the previous case. However, in this case the spoken interaction graph in Figure 7A was highly valuable, as the reviewers found that it reflects an intensive interchange of ideas between Subject 4 and Subjects 1 and 3, with strongly colored arcs. Comparing this visualization with its counterpart in the Collaborative Activity (Figure 5A), the reviewers concluded that this visualization might be helpful to identify when the subjects are discussing a topic. Regarding the duration distribution of the spoken interactions, both reviewers agreed that there were no differences between the collaborative and competitive activities. Finally, regarding the subjects' attitude, the reviewers agreed that the timeline is, in this case, valuable to understand the intensity of the discussion, as non-open postures were prevalent in all four subjects. When comparing the timelines of both cases, the reviewers consider that the postures shown in the timeline could provide insights about the intensity of the debate and even help indicate changes in its dynamics. For instance, in the last minute, Subjects 1, 2, and 3 seem to anticipate the end of the activity with a calm attitude, unlike Subject 4, who consistently raised his arms. This is also supported by the posture proportion visualization in Figure 7B, where a higher proportion of non-open postures is found for all the subjects, unlike the results of the collaborative activity (Figure 5D).

Discussion. In this section, we present a focus group conducted to explore the usefulness of the visualizations. Then, we discuss the focus group results and their relationship with the design goals and requirements.

Focus Group. To discuss the potential applications of the visualizations, we conducted a focus group with three teachers from Chile. The main research question was: What visual feedback elements can help you to assess whether a group is performing well in a collaborative activity or not? The three teachers differ in experience and discipline, but all teach primary or secondary students and have educational backgrounds. Teacher 1 (T1) is a secondary teacher of mathematics with six years of experience. Teacher 2 (T2) is a secondary teacher of history and geography with ten years of experience. Teacher 3 (T3) is a primary teacher of English with four years of experience. Two researchers conducted a 60-min focus group. The three stages of the activity and their main results are detailed below.

Blind Stage: The guiding question of this stage was "what non-verbal and paraverbal communicative characteristics does a collaborative team have?". We call this stage "blind" because none of the teachers had seen the visualizations.
T1 and T2 commented that the members look at each other when talking in a collaborative group: "when everyone is looking at what they have to do individually, it is often a non-collaborative group" (T1). T2 and T3 agreed that engaged students have high kinesthetic activity: "generally a non-collaborative group is a group that does not express much with its body, because it has no interest, it is more individualistic" (T2). The three participants also agreed that open body postures show that team members are eager to collaborate: "when you're standing with your arms crossed, all in a little more rigid or backward position as you mention, it's a posture, shall we say, of little interest in collaborating" (T3).

Guessing Stage: In this stage, we presented the visualizations of the collaborative (Figures 4 and 5) and the competitive activity (Figures 6 and 7) to the three teachers, without telling them which type of activity each was. The visualizations of the collaborative and competitive activities were tagged as Group A and Group B, respectively. The guiding question was "which of the two groups is collaborative?". T1 and T2 agreed that Group A was collaborative because the timeline visualizations showed more structured interactions: "Each one had its moment, you could even see that subjects 1 and 2 of Group A as there was an interaction between the two of them in the last part, they interact in an orderly way, and in the other one (Group B) no, you don't see a process" ... "I can think that they interrupt each other many times because one speaks, then the other speaks and they are almost speaking at the same time." (T1), and "generally when doing collaborative work, it is important that I give my point of view and that others listen to me" (T2). T1 and T2 also agreed that the postures and hand movements of Subject 3 in Group B were signs of non-collaborative behavior: "He's kind of hedging, probably being a little bit more defensive. In my opinion, in the classroom this has a relationship with being individualistic" (T2). Teachers T1 and T2 also commented on the postures that accompany the interactions, both of those who speak and of those who listen: "Subject 4 of group B has, as far as I can see, purple circles at the moment of interacting, that is, talking, it is also a characteristic of nonverbal language" (T1), and "subject 1 did not show a variation because he kept himself in something that we know as active listening. Therefore, as subject 4 in group A was talking, moving, explaining, probably the others were with their hands down listening." (T2). T3 indicated agreement with these statements. On the other hand, T3 stated that Group B also seems collaborative from the point of view of collaborative language activities: "there are short dialogues, clearly there is less speaking time and in fact it is very good that everyone gets to speak for the same amount of time. Otherwise it becomes a monologue and the children don't practice the language" (T3).

Usefulness Stage: The guiding question of this stage was "Which alerts or indicators could help you to improve the collaboration facilitation and assessment of the groups?". All the teachers agreed on the following indicators and alerts for the group: participation time and its distribution among team members, and alerting when the collaboration flow differs from a previously designed structure. The teachers also agreed on the importance of showing the subjects' kinesthetic activity and of knowing whether they are looking at each other, as a sign of engagement in the activity.
The teachers also agreed on alerting when a team member speaks significantly more than the others and when a single team member receives all the interactions (as a sign that only one team member is doing all the work). Finally, all the participants agreed on alerting when a subject does not look at the other team members.

Discussion on Feedback Usefulness. The results from the case study constitute a starting point for providing feedback on the behavioral indicators and traces proposed in Table 2. In the following paragraphs, we detail our insights about each of the collaboration constructs. Regarding cognitive contribution, as the two activities were mainly spoken, we believe it was possible to trace the contribution of each member by the number and duration of the spoken interactions, as presented in Figures 5A and 7A. Furthermore, the activity's short duration helped the subjects to focus on contributing. In this context, we think that the provided visualizations could be helpful for teachers to observe and understand the cognitive contribution of the subjects. More complex activities requiring more coordination, or longer activities in which subjects could speak about topics other than the required task, would require identifying the subject matter of each spoken interaction in order to consider it a cognitive contribution. Moreover, complementary measures would be needed for activities with other types of cognitive contribution (e.g., collaborative writing or modeling). Concerning the assimilation construct, we believe that the results for the competitive activity successfully show the criticality behavioral indicator in the overlapped, short-timed spoken interactions depicted in Figure 6, which are characteristic of non-collaborative behavior. We think this result could allow teachers to identify whether a team needs their intervention to avoid excessive criticality between the subjects. Regarding the team coordination construct, the graphs in Figures 5 and 7 show that team members communicated with each other. We expected that the visualization for the competitive activity would make apparent how a subject was less involved in the debate; however, the graphs do not seem to show any insights into this fact. It seems the proposed analytics and visualization do not provide enough insight into team coordination. An improvement could be to measure the spoken interventions of the subjects aimed at achieving team coordination. For the self-regulation construct, the timeline visualizations allow us to clearly observe differences in how the subjects adapt their behavior to achieve a collaborative goal: while in the collaborative activity each team member takes a turn to contribute, in the competitive activity the chaotic interaction shows no adaptations to collaborate. We think that this visualization might be helpful for teachers to distinguish teams that are capable of self-regulating from groups that would need their help to get coordinated, as presented in [48]. For the cultivation of the environment, the differences in body postures presented in Figure 5B,D clearly show that subjects kept an open posture in the collaborative activity, in contrast with more varied postures in the competitive activity. The emergence of expansive postures (e.g., hands to the head, shown by Subject 4 in the competitive activity) and defensive ones (e.g., hugging the opposite arm, by Subject 3 in the same activity) seems to provide insights about a change of attitudes that could affect the collaborative environment.
However, this is valid for observing the same subjects in different situations. Besides the visualization, it would be helpful to notify the teacher when there is a change in the typical collaborative postures of the team's subjects. Finally, concerning integration, we think that the spoken contribution visualization in Figures 5A and 7A is helpful given the specific characteristics of the activity, as the subjects cannot participate in any way other than speaking. In this context, the visualization is valuable in identifying subjects with less spoken interaction, allowing teachers to intervene to foster the integration of those subjects.

Limitations and Validity Discussion. In this section, we comment on the limitations of the designed tool and of the initial empirical evaluation. Concerning the tool's design, our application of the framework by Boothe et al. [21] is constrained to non-verbal communication. Since our overarching goal is to provide real-time feedback for many groups simultaneously, we did not consider verbal communication or content analysis, due to the technical limitations of analyzing multiple voice streams in real time. That said, we think that behavioral indicators combining non-verbal and verbal communication can better inform collaboration constructs, which is the focus of our future work. Another constraint for defining behavioral indicators is that the case study presented in Section 5 was performed under the restrictions of COVID-19, so the participants were wearing masks. Features such as facial expressions could therefore not be extracted to inform behavioral indicators; however, thanks to the tool's architecture, they can easily be integrated without significant changes. The initial empirical evaluation is limited to assessing whether the designed tool contributes to the stakeholders' goals, and further studies are required to validate the tool's effect on collaborative learning. With this aim, we explicitly decided to ask the subjects to perform two opposite types of collaborative behavior to emphasize the differences in the visualizations for their discussion in the focus group. Alternative study designs, such as comparing the analytics and the performance of several groups performing a collaborative activity, are being considered for validating the tool. Finally, the design and sample size of the focus group do not allow us to generalize the results. However, since we are not validating the tool but exploring whether it helps stakeholders achieve their goals, we opted for a freer focus group design, favoring deeper discussions among participants, which is appropriate to our methodological framework.

Conclusions and Future Work. The gradual incorporation of technologies in educational environments can support teachers in developing competencies that are highly valued in the work environment [49]. From this perspective, the measurement of aspects associated with non-verbal communication becomes relevant, since it allows us to understand how subjects interact in a collaborative activity, as well as to provide effective feedback to both students and teachers. This paper presents the design and development of an MMLA platform using sensors to capture and visualize audio and video data. It graphically provides feedback analytics to support collaboration assessment in face-to-face environments (co-located collaboration). For this purpose, we integrated hardware and software and incorporated machine learning techniques to develop a scalable system.
The platform detects the number and duration of the team members' spoken interactions, as well as their body postures and gestures. These features are presented in five different visualizations to provide insights about theoretical collaboration constructs. We conducted a case study to compare the visualizations provided by the system in two different situations: a collaborative and a competitive activity. The results suggest that the provided visualizations help to identify issues in the cognitive contribution, assimilation, self-regulation, and integration of the team members. They could also support teachers in deciding whether they must assist a team to foster collaboration. While the results are naturally constrained to the characteristics of the activities in which we tested the platform, they provide initial evidence of the technical feasibility of extracting behavioral indicators and traces using MMLA to give insights on team collaboration. Future work will focus on improving the platform's scalability in order to allow real-time monitoring of several teams. Moreover, future work will cover the extraction of features from verbal communication, allowing the identification of the topics of the team members' spoken interactions and better supporting different collaboration constructs in longer and more complex activities. Once real-time monitoring is implemented, we intend to assess to what extent teachers' actions based on the visualizations affect students' participation in the activities and help to enhance their collaboration. For that, we intend to follow the conditions and guidelines for fruitful collaboration identified by [50].

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available from the corresponding author upon request.
Conflicts of Interest: The authors declare no conflict of interest.
Complex beam shaping by cascaded conical diffraction with intercalated polarization transforming elements. Cascaded conical diffraction, where optical elements modifying the local polarization state are intercalated between the aligned biaxial crystals, is analyzed theoretically in the framework of paraxial diffraction theory. The obtained expressions are verified and confirmed experimentally for the case of a two-crystal cascade intercalated by a polarizer or a wave plate. The present approach can be used to realize a variety of vector beams with complex beam shapes composed of concentric rings with strongly modulated azimuthal intensity distribution. A potentially very fast switching of the overall beam shape is possible if the intercalated elements are electro-optically tunable retarders.

Introduction. When a light beam enters a biaxial birefringent crystal along one of its two optical axes it experiences a phenomenon known under the name of internal conical refraction (or internal conical diffraction). The beam propagates as a hollow cone inside the crystal and emerges as a diffracting hollow cylinder. When the proper observation plane is chosen (the focal image plane), the transverse intensity distribution associated with this effect exhibits a double circular ring separated by a narrow dark region known as the Poggendorff dark ring. Even though this phenomenon was predicted by Hamilton [1] already in 1832 and was first observed by Lloyd [2] just one year later, investigations of conical diffraction presently experience a second life and a strong renewed interest, both theoretically and experimentally, as recently reviewed by Turpin et al. [25]. This is due, on the one hand, to an improved understanding of the effect following the paraxial diffraction theory of Belskii and Khapalyuk [3] and its elegant reformulation by Berry in 2004 [4]. On the other hand, the strong potential of this peculiar phenomenon for several modern photonics applications has been recognized. These include optical tweezers or bottle-type beams for trapping of particles [7-9], optical trapping of Bose-Einstein condensates [10], polarization metrology [11-13], polarization multiplexing for free-space optical communication [14], super-resolution microscopy [15,16], lasers with specific polarization properties or spatial profiles [17-21], applications in the field of singular optics [22-24], and several others. One of the major recent advances in the field of conical diffraction has been the extension of the effect to a cascade of two or more biaxial crystals with all their optical axes aligned. This approach adds versatility to the effect and leads in general to several concentric conical diffraction rings, with their relative intensities governed by the orientation angles of the crystals around the fixed direction of the optical axis. The general paraxial theory of cascaded conical diffraction was developed by Berry in 2010 [26]. An alternative approach based on the splitting and propagation of a bunch of classical rays was given by Turpin et al. [27,28], and several experimental and application-oriented investigations of cascaded configurations were recently performed [8,14,23,24,29-33]. In general, the cascade of N crystals leads to 2^(N-1) conical diffraction rings [26,28]; for a circularly polarized or unpolarized input beam the intensity is azimuthally homogeneous on each of the rings.
Notably, the local polarization on the rings is always linear, with two radially opposite points exhibiting orthogonal polarizations, so that the conical diffraction process can be considered as a natural infinite-channel polarization demultiplexer. These polarization properties suggest that scrambling or filtering the polarization between the crystals put in cascade should lead to a dramatic change of the overall observed conical diffraction pattern with respect to the case where the polarization is transferred without change. While the intercalation of polarization transforming elements in cascaded configurations was used in a few experimental studies [8,24,29,32], a detailed theoretical description of this situation is still lacking. In the present work we treat theoretically and experimentally cascaded conical diffraction where polarization transforming elements, such as wave plates or polarizers, are intercalated between each pair of crystals. It is shown that the usual angular homogeneity of the intensity along the conical diffraction rings for unpolarized or circularly polarized input is lost. This leads to the possibility of realizing complex vector-type light structures with highly localized distributions along the azimuthal direction. The involved intercalated optical elements are simple, spatially homogeneous, and can potentially be switched very fast if realized with electro-optical devices. Therefore, intercalated cascaded conical diffraction can represent a valid and faster alternative to techniques based on pixellated phase elements (spatial light modulators, SLM) or liquid-crystal-based q-plates to generate various classes of complex beam shapes [34,35]. Section 2 describes the terms of the problem and gives the theoretical treatment based on Berry's paraxial diffraction theory [26], extended to include the role of the intercalated elements. With the help of modified complex Belskii-Khapalyuk integrals we give explicit analytic solutions of the basic Fourier-type integral in the special case of a cascade of two crystals intercalated either by a λ/4-plate, a λ/2-plate, or a polarizer. In Section 3 we give a few specific examples and verify experimentally the theoretical predictions for the case of a cascade of two crystals of different length. Finally, the appendix gives some details on the method of solution of the basic Fourier integral that leads to the expressions given in Section 2 and defines the modified Belskii-Khapalyuk integrals.

Theory. We consider a series of N biaxial crystals arranged with a common optical axis oriented along the z direction, so that each crystal, individually, gives rise to internal conical diffraction. As shown in Fig. 1, the crystals can be rotated with respect to each other around the common optical axis. The rotation of each crystal is expressed by the angle γ_n between the x-axis and its direction of displacement of the conical diffraction cone. Formally this direction is given by γ_n ∝ k × (k × S*), where S* is the specific Poynting vector on the cone that has the maximum walk-off angle with the wavevector k ∥ z (see the right-hand inset in Fig. 1). Without loss of generality one can orient the first crystal parallel to the horizontal x-axis of the laboratory frame, so that γ_1 = 0. The orientation of the indicatrix (index ellipsoid) for this first crystal is indicated in the top left inset of Fig. 1.
For the case γ 1 = 0 the longest and the shortest main axis of the indicatrix are in the laboratory xz-plane while the middle main axis is along the y-axis. For the following crystals the same projection of the indicatrix seen in Fig. 1 is found, however in a plane x z, where the axis x is rotated by an angle γ n around z with respect to the x-axis. Each crystal may be of a different length l n , also the crystals may be composed of different materials and thus be associated to different cone semi-angles α n . As done in [4,5,26] we consider a beam with an intensity 1/e radius w (measured in the focal plane of a lens placed before the crystals) and we normalize all transverse dimensions in real space with respect to this quantity w. In this way each crystal is characterized by a normalized strength parameter ρ n ≡ α n l n /w, which is the radius of the emerging ring in unit of the beam width. Between each pair of crystal we allow the presence of an optical element controlling the polarization state of the light, which is assumed to be oriented at an angle θ m with respect to the x-axis. These elements may be polarizers, quarteror half-wave plates, general wave retarders or combinations of any of these elements, they are characterized by a Jones matrix J m , where m extends to N − 1. Fig. 1. Arrangement for cascaded conical diffraction of N crystals with strength parameter ρ n intercalated by N − 1 polarization transforming elements with Jones matrices J m . The inset on the right shows the situation for the first crystal. The Poynting vector directions S associated to the common wave-vector k parallel to the optical axis lie on a cone containing the vector k. The direction γ 1 of displacement of the conical diffraction cone points towards the Poynting vector S * that has a maximum walk-off angle with k. The inset on top left shows the orientation of the projection of the index ellipsoid (indicatrix) on the xz-plane for the first crystal. In order to fully account for diffraction we follow closely the Fourier optics approach reformulated by Berry [4] and include the effect of the polarization transforming elements. The optical beam entering the cascaded crystals is described as a sum of paraxial plane waves with wave-vector directions very close to the optical axis of the crystal. In Fourier space each wave-vector k = (k x , k y , k z ) can be represented in normalized cylindrical coordinates with the transverse components (κ, φ) such that k x ∝ κ cos φ and k y ∝ κ sin φ , where κ is normalized to 1/w, that is κ ≡ (k 2 x + k 2 y ) 1/2 w. Finally, we also use a normalized longitudinal spatial coordinate ζ [4,5,26] such that the position ζ = 0 corresponds to the plane where the rings are the sharpest. This is the "focal image plane" [4,5] of the incoming beam under the presence of all the crystals and all the polarization transforming elements. The normalization factor is given by the Rayleigh length z R = k 0 w 2 , that is ζ ↔ z/(k 0 w 2 ) with k 0 = 2π/λ and λ the light vacuum wavelength. Let us place ourselves in the framework of the paraxial approximation and consider an input field for which the transverse distribution of the electric displacement vector is given in wavevector space by D 0 (κ, φ). 
The output distribution in real space D( ρ, ϕ, ζ ) can be obtained by Fourier transformation in polar transverse coordinates of this field, after being propagated through the whole optical system [26], that is In the common case where the input beam is homogeneously polarized and of circular symmetry the input field D 0 is expressed as where the function a(κ) gives the amplitude distribution as a function of the transverse wavevector and d 0 is a unit polarization vector. The matrix U tot in Eq. (1) gives the transfer function through the optical arrangement. In the case of a cascade of N conical diffraction crystals its expression was given in [26], where the U n (κ, φ, γ n ) are unitary matrices associated to the individual crystals and are expressed as In the case addressed in the present work, where the crystals are intercalated by polarization transforming elements, the matrix U tot should contain the effect of these elements, Eq. (3) should then be replaced by where the Jones matrices J m (θ m ) are not necessarily unitary. Obviously, in the absence of one or more of the polarization transforming elements the corresponding Jones matrices have to be replaced by the unit matrix. The intensity distribution is finally obtained up to an unimportant multiplicative constant from (1) and (5) as While the complex integral (1) can be determined by a rather lengthy brute-force numerical calculation, it is more convenient to perform the azimuthal integration analytically. Appendix A describes the method for the integration of the Fourier integral (1) and introduces the modified Belskii-Khapalyuk integrals B m ( ρ,ρ, ζ ). In the following we treat explicitly the specific cases where N = 2 and the intermediate element is either a quarter-wave plate, a half-wave plate or a polarizer. We limit ourselves to the case of a circularly polarized input wave, for which, in absence of intercalated elements, conical diffraction always leads to rings with azimuthally homogeneous intensity. Two crystals intercalated by a λ/4-plate We start by considering the special case where a λ/4-plate is placed between two crystals with parallel optical axes. The wave-plate is oriented under an angle θ with respect to the x-axis and the input polarization to the first crystal is homogeneous and circular so that In absence of the intermediate wave-plate this situation leads to two conical diffraction rings, each with a homogeneous intensity along the azimuthal coordinate. The intensity associated to each ring depends on the value of the orientation angle γ 2 of the second crystal with respect to the first [26,28]. The radii of the two rings in our normalized units are |ρ + | and |ρ − |, with The output light distribution in the presence of the λ/4-plate is calculated from (1) using the Jones matrix associated to the wave-plate. Using the modified Belskii-Khapalyuk integrals (21) defined in the Appendix A the two complex components D x and D y of the electric displacement vector and where we have used the abbreviation B m (ρ) ≡ B m ( ρ,ρ, ζ ). It can be easily seen from (8) and (9) that the resulting intensity distribution (6) is no longer independent from the real-space azimuthal angle ϕ, as will be discussed with the concrete examples in Sect. 3. Two crystals intercalated by a λ/2-plate In this case the relevant Jones matrix is and the procedure to calculate the output D-vector is similar as above. 
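The transfer-matrix construction above lends itself to a direct numerical check. The Python sketch below evaluates the output field of Eq. (1) by brute force on a Cartesian Fourier grid: each crystal contributes a matrix of the Belskii-Khapalyuk-Berry form cos(κρ_n)I − i sin(κρ_n)S(φ − γ_n), each intercalated element a constant Jones matrix, and the two field components follow from an inverse FFT. The single-crystal matrix form, the (φ − γ_n) orientation convention, the sign of the circular input polarization and the grid parameters are assumptions chosen to be consistent with the text; this is an illustration, not a reproduction of the authors' Eqs. (2)-(9) or of their code.

```python
import numpy as np

def mat2(a11, a12, a21, a22):
    """Stack four broadcastable entries into a field of 2x2 matrices, shape (2, 2, ...)."""
    return np.array([[a11, a12], [a21, a22]])

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def waveplate(theta, delta):
    """Retarder of retardance delta (pi/2: quarter-wave, pi: half-wave), fast axis at theta."""
    return rot(theta) @ np.diag([1.0, np.exp(1j * delta)]) @ rot(-theta)

def polarizer(theta):
    """Linear polarizer (non-unitary Jones matrix) with transmission axis at theta."""
    return rot(theta) @ np.diag([1.0, 0.0]) @ rot(-theta)

def crystal_U(kappa, phi, rho, gamma):
    """Fourier-space transfer matrix of one conically diffracting crystal:
    U = cos(kappa*rho)*I - i*sin(kappa*rho)*S(phi - gamma), with
    S(p) = [[cos p, sin p], [sin p, -cos p]].  The (phi - gamma) convention is an
    assumption; it reproduces the limits quoted in the text (parallel crystals -> sum
    ring, crossed -> equal split between the two rings, antiparallel -> difference ring)."""
    c, s = np.cos(kappa * rho), np.sin(kappa * rho)
    cp, sp = np.cos(phi - gamma), np.sin(phi - gamma)
    return mat2(c - 1j * s * cp, -1j * s * sp, -1j * s * sp, c + 1j * s * cp)

def intercalated_cascade(rhos, gammas, elements, d0, N=1024, span=120.0, zeta=0.0):
    """Brute-force evaluation of the output field of Eq. (1) on an N x N grid by 2D FFT.
    All lengths are in units of the beam waist w and a(kappa) = exp(-kappa**2/2);
    `elements` is the list of constant Jones matrices placed between consecutive crystals."""
    x = np.linspace(-span, span, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
    KX, KY = np.meshgrid(k, k, indexing='ij')
    kappa, phi = np.hypot(KX, KY), np.arctan2(KY, KX)
    U = crystal_U(kappa, phi, rhos[0], gammas[0])
    for rho, gamma, J in zip(rhos[1:], gammas[1:], elements):
        U = np.einsum('ij...,jk...->ik...', crystal_U(kappa, phi, rho, gamma),
                      np.einsum('ij,jk...->ik...', J, U))
    spec = np.exp(-0.5 * kappa**2 - 0.5j * zeta * kappa**2)        # a(kappa) and defocus
    D_spec = np.einsum('ij...,j->i...', U, np.asarray(d0, dtype=complex)) * spec
    D = np.array([np.fft.fftshift(np.fft.ifft2(c)) for c in D_spec])
    return x, D                                                    # D has shape (2, N, N)

# Example: two crossed crystals with a quarter-wave plate at theta = 0 in between and a
# circularly polarized input (cf. Fig. 2).  Reduced strengths (40, 15) keep the grid small;
# the experimental values 98.6 and 76.8 need a larger N and span.
if __name__ == "__main__":
    d0 = np.array([1.0, 1j]) / np.sqrt(2.0)          # one circular polarization (sign assumed)
    x, D = intercalated_cascade([40.0, 15.0], [0.0, np.pi / 2],
                                [waveplate(0.0, np.pi / 2)], d0)
    I = (np.abs(D) ** 2).sum(axis=0)
    X, Y = np.meshgrid(x, x, indexing='ij')
    print("brightest pixel at r = %.1f w (rings expected near 25 and 55)"
          % np.hypot(X, Y).flat[I.argmax()])
```

Replacing the wave plate by the identity matrix in `elements` should restore azimuthally uniform rings, which is a convenient sanity check of such an implementation against the uniform case described above.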
For homogeneously circularly polarized input one obtains instead of (8) and (9), and Two crystals intercalated by a polarizer Finally we consider the case where the intermediate element is a polarizer described by the non-unitary Jones matrix The components of the output D-vector are then and Examples In order to visualize the effects we give in this section some specific examples and compare them with corresponding experimental tests in the case of a two-crystal cascade. The experiments were performed at the wavelength of 633 nm using circularly polarized input light to the first crystal, which we chose to be the longest one. We used two crystals of KGd(WO 4 ) 2 (KGW) with lengths of 22.6 and 17.6 mm, respectively. By using the principal refractive indices n g , n m and n p of KGW determined by Pujol et al. [36], the cone aperture semi-angle α 1/(2n g n p )[(n 2 g − n 2 m )(n 2 m − n 2 p )] 1/2 is α 19.6 mrad for this crystal. For our focusing conditions with a f =100 mm spherical lens the beam 1/e half-width at the waist position is w 4.5 µm and the corresponding normalized strength parameters are ρ 1 98.6 and ρ 2 76.8. Crossed crystals Let us consider the case where the crystals are crossed with a relative angle γ 2 = π/2. In absence of any polarization transforming elements between them, this situation leads to two azimuthally homogeneous rings (in fact two double rings), as shown in Fig. 2(a), obtained by integration of Eq. (1) for ζ = 0 in the case where the Jones matrix J 1 in (5) is identified to the unit matrix. As pointed out earlier [26,28], in this specific case the power is equally split among the two rings. Nevertheless the local intensity on the internal ring is larger due to the smaller area it occupies [28]. The latter is proportional to each ring radius and in our specific case the intensity ratio is roughly a factor of 8. We consider first the intercalation of a quarter-wave plate oriented parallel to the first crystal (θ = 0). Figure 2(b) shows the experimentally observed intensity distribution in the two rings as obtained by imaging the focal image plane to a far away CCD camera by means of an imaging lens placed behind the crystal cascade. Figure 2(c) gives the corresponding expected intensity distribution calculated with Eqs. (8), (9) and (6) and with a(κ) = exp (−κ 2 /2) in Eq. (2) and the integrals (21). This distribution a(κ) corresponds to the Fourier spectrum of the input Gaussian beam associated to the 1/e half-width w. Figures 2(b) and 2(c) clearly show that the introduction of the λ/4-plate breaks the angular degeneracy and leads to an azimuthal dependence of the intensity in the two rings. In addition to the intensity distribution we plot in Fig. 2(d) also the distribution of the absolute value of the output displacement vector | D| (proportional to the square root of the intensity). Since this choice permits a better visualization of the weaker ring, we will keep this representation in the further examples. Figures 2(b), 2(c) and 2(d) clearly indicate that the intensity maxima and minima of the internal ring are in anti-phase with those of the external one. To look at this aspect in more detail we plot in Fig. 2(e) the expected intensities on the two rings as a function of the output angle ϕ. The blue solid line corresponds to the radius at which the internal ring has its maximum and the red dotted line is at the corresponding radius for the external ring. The corresponding experimental data are given in Fig. 2(f). 
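The inline expression for the cone semi-angle did not survive text extraction cleanly, so the following few lines restate it and check the quoted strength parameters. The helper for α is left symbolic because the principal refractive indices of KGW from Pujol et al. [36] are not reproduced in this text; any values plugged into it are the reader's own, whereas the 19.6 mrad, 4.5 µm and the two crystal lengths below are the numbers quoted above.

```python
import numpy as np

def cone_semi_angle(n_g, n_m, n_p):
    """alpha ~ (1/(2*n_g*n_p)) * sqrt((n_g**2 - n_m**2) * (n_m**2 - n_p**2)),
    the expression quoted in the text (the principal indices must be supplied)."""
    return np.sqrt((n_g**2 - n_m**2) * (n_m**2 - n_p**2)) / (2.0 * n_g * n_p)

def strength_parameter(alpha, length, waist):
    """rho = alpha * l / w: ring radius in units of the beam waist."""
    return alpha * length / waist

alpha = 19.6e-3                       # rad, quoted for KGW at 633 nm
w = 4.5e-6                            # m, 1/e beam half-width at the waist (f = 100 mm lens)
for length in (22.6e-3, 17.6e-3):     # m, the two crystal lengths
    print(f"l = {length * 1e3:.1f} mm  ->  rho = {strength_parameter(alpha, length, w):.1f}")
# prints rho ~ 98.4 and 76.6, matching the quoted rho_1 ~ 98.6 and rho_2 ~ 76.8 within rounding
```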
The latter are obtained by evaluating the angular dependence of the average intensity within equally wide narrow rings that contain the internal and the external double rings, respectively. Figures 2(e) and 2(f) show a good agreement and confirm that the intercalated wave plate leads to two maxima and two minima for each ring, mutually in opposition of phase. The intensity modulation in the outer ring follows approximately a dependence in sin 2ϕ, while the one in the inner ring goes with − sin 2ϕ, this means that here the azimuthal intensity modulation is double as fast as in the case where linearly polarized light is used as input to the conical diffraction process [28]. In the case of a quarter wave plate the azimuthal intensity modulation contrast is approximately 1/2 for both the outer and the inner ring. With some little algebra one can show that this modulation contrast is roughly given by the ratio: 2 (Re[ 2 , where for each modified Belskii-Khapalyuk integral B m one should take B m ( ρ, ρ + , 0) for the outer ring and B m ( ρ, ρ − , 0) for the inner ring (see Appendix A). The above discussion holds for our example for which the λ/4-plate was oriented at the angle θ = 0. However, the choice of another angle θ leads solely to a rotation of the whole output intensity distribution by the double angle 2θ and all the conclusions remain therefore valid. We note also that, despite the use of the quarter wave plate, the light on the two rings is locally linearly polarized with a polarization direction depending on the angle ϕ. In the present case the output wave of the internal ring is horizontally polarized for ϕ = −90 deg, vertically polarized for ϕ = 90 deg and is polarized at +45 deg and -45 deg for ϕ = 0 and ϕ = 180 deg, respectively. Therefore, as is the case for conical diffraction in a single crystal, the local polarization direction angle is a linear function of ϕ/2 so that two opposite points on the same ring possess orthogonal polarizations. Also, for the same azimuthal angle the polarizations on the two rings are mutually orthogonal, which means that the polarization on the external ring for a given ϕ corresponds to the one on the internal ring for ϕ + 180 deg. We remark that the above polarization distribution is exactly the same as the one obtained for crossed crystals in absence of an intercalated element, i.e. the case of Fig. 2(a) for which the intensity is azimuthally homogeneous. Therefore, the intercalation of the quarter-wave plate does not modify the local polarization on the rings, this statement remains true also if the λ/4-plate is replaced by another polarization transforming element. The polarization distribution on the two rings depends only on the orientation of the two crystals. It is worth noting that the intercalation of a half-wave plate according to Section 2.2 does not change dramatically the picture with respect to the above case of a quarter-wave plate and thus we discuss the differences only briefly. One gets also double peaked maxima and minima for each of the rings and, for a same orientation of the wave plates the intensity distribution keeps the same overall orientation. However, for the case of a half-wave plate the modulation is complete for both the inner and the outer ring. For example, if θ = 0 one gets zero intensity points for the internal ring at ϕ = π/4 and 5π/4 and zero intensity points for the external ring at ϕ = −π/4 and 3π/4. 
Therefore the use of a variable retardation wave plate permits to tune the azimuthal intensity modulation from zero to full contrast going from zero retardation to half-wave retardation, and back to zero if the retardation ranges between λ/2 and a full wave. As a next example we consider the case where a polarizer under the angle θ = π/2 is inserted between the crossed crystals. As shown in Fig. 3 this leads to a richer and more complex azimuthal intensity redistribution within each of the two rings. Figure 3(a) shows the expected distribution of the modulus of the D vector as obtained from Eqs. (14) and (15). It clearly shows that the intensities in each of the two rings exhibit a well identified maximum at a certain angle and that the mutual positions of these maxima are angularly shifted by ∆ϕ = π/2. This is clearly seen also in the experimental observation of Fig. 3(b) and in the theoretical and experimental intensity distribution along the two rings depicted in Fig. 3(c). Each ring possesses two zero-intensity points with a weak secondary intensity maximum between them, as seen for instance in the inset in Fig. 3(c). The zero-intensity points form a right angle with the center of the cone projection, in our specific case they are found at ϕ = −π/2 and ϕ = 0 for the internal ring, and at ϕ = 0 and ϕ = +π/2 for the external one. The 90 degrees out-of-phase relative orientation of the internal and external rings may be interpreted as the manifestation of some kind of "pseudo-chirality" associated with the two-crystal structure. In the case discussed here, for which γ 2 = +π/2, this chirality is positive as defined by the fact that the external ring is oriented 90 degrees clockwise with respect to the internal one. The reverse would be true if the direction of the second crystal is inverted and γ 2 = −π/2. Interestingly, such an inversion of chirality is obtained also if the order of the crystals is reversed and the shorter crystal would be put before the longer one. This is in contrast to the above case where the intercalated element is a wave plate, for which the overall orientation of the rings is independent of the order in which the birefringent crystals are put into the set-up. This difference is associated to the fact that in the case of a polarizer the total energy in the beam is not conserved and the corresponding Jones matrix (13) is not unitary. Finally, as was the case for the intercalated wave plates, it is worth noting that rotating clockwise the polarizer by an additional angle ∆θ leads to a clockwise rotation of the whole light distribution structure by the double angle 2∆θ. For instance, the structure for a polarizer under the angle θ = 0 is obtained by central point symmetry from the one depicted in Fig. 3. Parallel crystals We consider now the interesting situation where the two crystals are parallel with the vectors γ 1 and γ 2 oriented in the same direction, so that the relative angle is γ 2 = 0. In this case, in absence of intercalated elements one obtains a single double ring at ρ ≈ ρ 1 + ρ 2 [26,28]. This radius corresponds to the one of the external ring in Fig. 2(a). The system behaves like there was only a single crystal having a length corresponding to the sum of the lengths of the individual crystals. We first consider the case where the intermediate local polarization is scrambled by a quarterwave plate. As seen in Fig. 4 clearly the presence of the wave plate "reactivates" the otherwise non existing internal ring. 
As for the case of the crossed crystals given in Fig. 2 the intensities are azimuthally modulated along each of the rings with a two-fold symmetry. The maxima and minima of each ring are again mutually out-of-phase. However, here the modulation depth of the internal ring is complete, while the one of the external ring is only partial like in the case of the crossed crystals. Again, as in the previous cases, any further rotation ∆θ of the wave plate leads to a rotation of the whole light distribution structure by 2∆θ. It should also be noted that the use of a half-wave plate instead of a quarter-wave plate leads to a qualitatively similar and equally oriented picture, the only major change being that not only the internal, but also the external ring has a full modulation contrast. The fact that the internal ring can be activated by the presence of the wave plate has potentially very interesting applications. For instance, the use of an electro-optically tunable retarder shall allow to switch on and off very rapidly the internal ring structure without the need for any moving parts. The same is true for the switching on and off of the external ring in the case where the two crystals are oriented in antiparallel direction (γ 2 = π). As a final experimental example we discuss briefly the case where a polarizer is inserted between the parallel crystals. As seen in Fig. 5 also here the internal ring is reactivated with the same two-fold symmetry as for the case of Fig. 4. However, in contrast to all previous cases, the overall rotational symmetry associated to the intensity in the external ring differs from the one of the internal one and is only one-fold. More than two crystals Finally we give briefly two examples for the more complex cases where three or four crystals are put in cascade in the way shown in Fig. 1. In principle also for N > 2 a semi-analytical treatment like the one in Sections 2.1 to 2.3 can be done, however the related expressions become quite lengthy. It can be easily shown that for a number N of crystals put in cascade and intercalated by various polarization transforming elements the maximum order for the integrals B m involved in such expressions is m = N. Therefore for three crystals the integrals B 3 ( ρ,ρ, ζ ) are needed in addition to B 0 , B 1 and B 2 , while for four crystals also the B 4 ( ρ,ρ, ζ ) are required. Here, instead of using the semi-analytical approach, we show in Fig. 6 the expected intensity distributions obtained by a direct numerical integration of Eq. (1) in the focal image plane ζ = 0 and for a circularly polarized input wave. Fig. 6 is for the case of three cascaded crystals with the first two crossed to each other and the last two parallel to each other. A quarter-wave plate oriented at 45 deg is placed between the first two crystals and a half-wave plate oriented at θ 2 = 90 is placed between the last two crystals. The chosen normalized strength parameters of ρ 1 = 40, ρ 2 = 15, ρ 3 = 5 lead to four conical diffraction rings for which the normalized radii are roughly ρ ≈ (20; 30; 50; 60), which can be easily recognized in Fig. 6(a). The azimuthal intensity distribution on each of the rings exhibits two nodes which are aligned horizontally for the most internal (ring 1) and the most external ring (ring 4), and vertically for the two intermediate rings. 
Remarkably, in the present configuration the azimuthal intensity profile between these nodes is not symmetric, so that the intensity center of mass shifts clockwise for the rings 1 and 3, and counterclockwise for the rings 2 and 4. Panel (a) in For the case of Fig. 6(b) a fourth crystal with ρ 4 = 70 and γ 4 = 135 deg is added to the previous three. Here we consider the case where polarizers are inserted between the outer crystal pairs (θ 1 = 0 and θ 3 = 135 deg) and a half-wave plate is put between the second and third crystal (θ 2 = 45 deg). The expected radial positions of the eight resulting conical diffraction rings are ρ ≈ (10; 20; 40; 50; 90; 100; 120; 130). All these rings can be recognized in Fig. 6(b), however the ring at ρ ≈ 90 is only hardly visible due to a very weak associated intensity. As was the case for instance in Fig. 3, the polarizers lead to a quite complex angular dependence of the intensity on each of the eight rings and the angle ϕ associated to the maximum intensity differs for each of them. Also, the increase of the number of crystals and intercalated elements leads to a sharper confinement of the intensity on a rather narrow angular region for the main lobe of each ring. Conclusions We have described theoretically and verified experimentally the effect of intercalating polarization transforming optical elements between the biaxial crystals forming cascaded conical diffraction. The additional elements break the usual azimuthal intensity homogeneity expected for unpolarized or circular polarized input beams. Complex vector-type beams are obtained with their shapes governed by the crystals conical diffraction strength parameters ρ n , their angular orientations γ n and the nature and orientation of the polarization transforming elements. A particularly interesting case is the one where otherwise silent rings are "re-activated" by the presence of the intercalated elements, which occurs if two or more crystals are arranged parallel or anti-parallel to each other. Since variable retarders can be realized by means of electro-optical devices, this opens the possibility to switch on and off individual conical diffraction rings at speeds exceeding several MHz. We have given explicit analytic expressions only for the case of a two-crystal cascade with intercalation of a λ/4-plate, a λ/2-plate or a polarizer. However, the general formalism remains valid also for other polarization transformations between the crystals, as may be obtained for instance by a combination of optical elements described by an appropriate Jones matrix. Our few examples have shown that a variety of complex beam shapes with strong (and different) azimuthal localization of the light intensity on each ring can be obtained. Obviously this localization could be improved even further by post-filtering the polarization state after the last crystal by means of a polarizer. The richness and versatility of the vector beam shaping features resulting from the present approach open up interesting perspectives for virtually every application that has been proposed in connection with conical diffraction and complex beam shaping, including fast switchable optical trapping, singular optics, material processing, polarization metrology, and super-resolution microscopy. 
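Before moving to the appendix, the ring radii quoted for the three- and four-crystal examples can be cross-checked in a couple of lines: for an N-crystal cascade the normalized radii are the distinct values of |±ρ_1 ± ρ_2 ± … ± ρ_N|, of which there are 2^(N−1) for generic strengths.

```python
from itertools import product

def ring_radii(rhos):
    """Distinct values of |±rho_1 ± rho_2 ± ... ± rho_N| (2**(N-1) rings for generic strengths)."""
    return sorted({abs(sum(s * r for s, r in zip(signs, rhos)))
                   for signs in product((+1, -1), repeat=len(rhos))})

print(ring_radii([40, 15, 5]))       # [20, 30, 50, 60], as quoted for the three-crystal case
print(ring_radii([40, 15, 5, 70]))   # [10, 20, 40, 50, 90, 100, 120, 130], as for the four-crystal case
```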
Appendix A: Integration of the Fourier integral (1) When evaluating the form of the 2×2 complex matrix U tot (5) or of the product U tot · d 0 inside the integral (1) one gets a complicated sum over sine and cosine functions containing as arguments various linear combinations of the angles φ, θ n , γ n as well as the products κ ρ n . It is convenient to express this trigonometric sum as a sum of exponential functions of the same arguments. The general form of an individual term may then be expressed as where m is a positive integer and β is a linear combination of the angles θ n and γ n . The quantitỹ ρ is composed of simple sum or differences of the normalized strength parameters ρ n . For instance in the case where we have only two crystals (N = 2) the possible values ofρ are ρ 1 + ρ 2 , ρ 1 − ρ 2 , ρ 2 − ρ 1 and − ρ 1 − ρ 2 . The values ofρ are associated to a specific conical diffraction ring. However, sinceρ and −ρ belong to the same ring, the total number of concentric rings observed in cascaded conical refraction is only 2 N −1 and not 2 N , as pointed out earlier [26,28]. Considering only the azimuthal φ-integral in (1) for the specific term (16) we get The above integral is a special form of an integral of the following class for which a general solution was given by Massidda [37], 1 2π 2π 0 e imφ e a cos(φ −α 1 ) e 2b cos 2 (φ −α 2 ) dφ = e b e imα 1 k e 2ik (α 1 −α 2 ) I 2k+m (a)I k (b) , (18) where I k (x) is the modified Bessel function of the first kind of order k, which is related to the Bessel function of the first kind J k (x) by I k (x) = (1/i k ) J k (ix). Since in (17) b = 0, only the term k = 0 contributes to the sum on the right-hand side of (18). With a = iκ ρ, α 1 = ϕ and using Therefore, upon integration the azimuthal phase e ±imφ in Fourier space leads to a corresponding phase e ±imϕ containing the azimuthal angle ϕ in real space. One can now use the integral p to evaluate the contribution q of the term (16) to the integral (1), one obtains, The quantities B m ( ρ,ρ, ζ ) in the above expression are modified Belskii-Khapalyuk integrals that we define as Note that, unlike for the standard Belskii-Khapalyuk integrals [5,38], the above modified integrals are complex even in the focal image plane ζ = 0. However, it follows directly from the above definition that for this plane the following symmetries hold Fig. 7 visualizes the real and imaginary parts of the modified Belskii-Khapalyuk integrals B 0 , B 1 and B 2 for the case ζ = 0. It can be easily recognized that in this case the integrals assume significant values only in the neighborhood of the normalized radius ρ = |ρ|, which is indicated by the vertical lines in Fig. 7. The radii ρ = |ρ| withρ = ρ 1 + ρ 2 orρ = ρ 1 − ρ 2 correspond roughly to the radii of the Poggendorff dark rings of the two-crystal cascaded conical diffraction. Note that the rather sharp curves for B 0 , B 1 and B 2 in Fig. 7 are due to the choice ζ = 0. If we leave the focal image plane (ζ 0) the curves become much broader and can assume significant values even for ρ being far from the characteristic valuesρ, reflecting the fact that the conical diffraction rings get defocused. Note also that in the specific example shown here, which is associated to large values ofρ, the curves for B 0 , B 1 and B 2 appear very similar, even though they are not identical. The differences between the curves become much more pronounced for smaller values ofρ (not shown in Fig. 
7), as obtained for a less focused input beam, for shorter crystals or for materials with a smaller aperture angle of the conical diffraction. Finally we remark that in the analytic expression for the output components of the electric displacement vector given in sections 2.1 to 2.3 one always finds sums (or differences) of the modified Belskii-Khapalyuk integrals for values ofρ of opposite sign. Specifically these are B 0 (ρ ± ) + B 0 (−ρ ± ), B 1 (ρ ± ) − B 1 (−ρ ± ) and B 2 (ρ ± ) + B 2 (−ρ ± ) (see Eqs. (8) and (9) as well as the corresponding equations in sections 2.2 and 2.3). In combination with the symmetries expressed by Eqs. (22) and (23) this implies that only the real parts of the integrals B 0 , B 1 and B 2 play a role for the intensity distribution in the plane ζ = 0, what is no longer true if one leaves this plane. Also, even though we prefer to stick to the general form of the modified integrals (21), it is worth mentioning that the above sum (or differences) are directly proportional to the standard form of the Belskii-Khapalyuk integrals. In the latter, instead of the complex term exp(iκρ), the integrand contains a term cos(κρ) for m = 0 and for its generalization to all even values of m, and contains a term sin(κρ) for m = 1 and for the generalization to all odd values of m. (21) as a function of the normalized radius ρ for the case ζ = 0 and a(κ) = exp(−κ 2 /2). The panels in the left column give the real part and the panels in the right column give the imaginary part of the integrals. The top panels are for B 0 (ρ + ) (solid lines) and B 0 (ρ − ) (dotted lines). The corresponding functions for B 1 (ρ ± ) and B 2 (ρ ± ) are in the middle panels and bottom panels, respectively. Hereρ + ≡ ρ 1 + ρ 2 andρ − ≡ ρ 1 − ρ 2 , with ρ 1 = 98.6 and ρ 2 = 76.8. The vertical lines correspond to the conditions ρ =ρ − and ρ =ρ + .
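Readers who want to reproduce curves of the kind shown in Fig. 7 can evaluate the modified integrals by direct quadrature. Since Eq. (21) itself is not rendered in this extracted text, the normalization and the i^m prefactor used below are assumptions inferred from the surrounding derivation (the azimuthal integration producing I_m(ix) = i^m J_m(x)); only the qualitative behaviour should be read from the numbers, namely the sharp peaking of |B_m| near ρ = |ρ̃| and the real-valued symmetric combinations at ζ = 0.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

def B(m, rho, rho_tilde, zeta=0.0, kappa_max=12.0, n=40000):
    """Sketch of the modified Belskii-Khapalyuk integral (normalization assumed):
    B_m(rho, rho~, zeta) = i**m * int_0^inf a(k) exp(i k rho~) exp(-i zeta k**2/2) J_m(k rho) k dk,
    with a(k) = exp(-k**2 / 2)."""
    k = np.linspace(0.0, kappa_max, n)
    f = np.exp(-0.5 * k**2 + 1j * k * rho_tilde - 0.5j * zeta * k**2) * jv(m, k * rho) * k
    return (1j ** m) * trapezoid(f, k)

rho1, rho2 = 98.6, 76.8
rp = rho1 + rho2                                   # the outer (sum) ring
for m in (0, 1, 2):
    sign = -1 if m == 1 else +1                    # B0(+)+B0(-), B1(+)-B1(-), B2(+)+B2(-)
    on_ring, off_ring = B(m, rp, rp), B(m, 0.5 * rp, rp)
    combo = B(m, rp, rp) + sign * B(m, rp, -rp)
    print(f"m={m}: |B_m| on ring {abs(on_ring):.2e}, off ring {abs(off_ring):.2e}, "
          f"Im(combination) {combo.imag:.1e}")
```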
8,797.6
2017-10-16T00:00:00.000
[ "Physics" ]
Coarse Technogenic Material in Urban Surface Deposited Sediments (USDS) : In the current paper, the analysis of heavy mineral concentrate (Schlich analysis) was used to study the particles of technogenic origin in the samples of urban surface-deposited sediments (USDS). The USDS samples were collected in the residential areas of 10 Russian cities located in different economic, climatic, and geological zones: Ufa, Perm, Tyumen, Chelyabinsk, Nizhny Tagil, Magnitogorsk, Nizhny Novgorod, Rostov-on-Don, Murmansk, and Ekaterinburg. The number of technogenic particles was determined in the coarse particle size fractions of 0.1–0.25 and 0.25–1 mm. The types of technogenic particle were studied by scanning electron microscopy (SEM) analysis. The amount of technogenic material differed from city to city; the fraction of technogenic particles in the samples varied in the range from 0.01 to 0.43 with an average value of 0.18. The technogenic particles in USDS samples were represented by lithoid and granulated slag, iron and silicate microspheres, fragments of brick, paint, glass, plaster, and other household waste. Various types of technogenic particle differed in morphological characteristics as well as in chemical composition. The novelty and significance of the study comprises the following: it has been shown that technogenic particles are contained in a significant part of the USDS; the quantitative indicators of the accumulation of technogenic particles in the urban landscape have been determined; the contributions of various types of particles to the total amount of technogenic material were estimated for the urban landscape; the trends in the transformation of typomorphic elemental associations in the urban sediments associated with the material of technogenic origin were demonstrated; and the alteration trends in the USDS microelemental content were revealed, taking into account the impurities in the composition of technogenic particles. Introduction Sediment deposition in the urban area reduces the environmental quality, and affects health, aesthetics, economics, and other aspects of city life [1]. The constant sediment supply increases the costs of municipal services and cleaning the territories, as well as deteriorating urban infrastructure facilities [2][3][4][5][6]. The deposited loose sedimentary materials silt stormwater systems, compact urban soils, decrease the fertility of the topsoil, etc. [7][8][9][10]. The deposited solid matter on streets and sidewalks increases the wear and tear of vehicles [7][8][9][10][11][12][13]. Dust deposition in electrical equipment may cause outages on electricity lines [14]. Coarse sand material of road-deposited sediments is about 50% of road-deposited sediments mass [15]. The coarse particles of anthropogenic origin may contain toxic heavy metals [16][17][18][19][20]. The large size fraction material of road-deposited sediments (>100 µm) contains the mass of heavy metals within particulate matter similar to the fine fractions [21]. The coarse particles are involved in the transport of heavy metal pollution from roads represented by magnetic particles including spherules and slag, comprising the particles of about 100 µm size [30,32,42]. Smelters and coal-fired power plants also represent significant sources of anthropogenic solid material in cities, forming non-point sources of pollution, such as fly ash [17,[43][44][45]. 
Road traffic is one of the main sources of technogenic material [30,40,41] such as the particles of wear of tires, brake pads, and road abrasion products. Tire wear products contribute the most part of anthropogenic material in road dust, galley sediments, pavement dust, car park dust, and roadside soils and snow. Anthropogenic material from vehicles is represented by magnetic particles including spherules and slag, comprising the particles of about 100 µm size [30,32,42]. Smelters and coal-fired power plants also represent significant sources of anthropogenic solid material in cities, forming non-point sources of pollution, such as fly ash [17,[43][44][45]. Thus, the identification of sources of anthropogenic material, the content of technogenic materials, and the assessment of the amount and types of anthropogenic particles in different parts of the landscape are among the significant environmental issues in an urban environment. While the environmental role of the USDS in modern cities had been demonstrated in the previous studies involving such characteristics as pollution with the heavy metals [22,24,25,46] and the contribution of the dust fraction [23], this study has been focused on the technogenic particles in the urban environment. The objectives of the study were: (1) the identification of particles of the anthropogenic origin found in the urban environment compartments; (2) the classification and characterization of the morphological features of technogenic particles; (3) the assessment of the amount of technogenic material in urban surface deposited sediments; and (4) in an urban environment; and (5) the characterization of cities according to the amount of technogenic material in the contemporary urban surface sediments. 
The Description of the Studied Cities The USDS sample collection program was performed in 10 Russian cities located in different climatic and industrial zones, in the territories with different geological structure ( Figure 1) [47]: Ufa, Perm, Tyumen, Chelyabinsk, Nizhny Tagil, Magnitogorsk, Nizhny Novgorod, Rostov-on-Don, Murmansk, and Ekaterinburg. The chosen cities have a high automobile traffic load, >250 cars per 1000 people, and high density of population. The significant development of urbanization in the cities occurred in the second half of the 20th century. The descriptions of the surveyed cities are represented in Table 1. The significant development of urbanization in the cities occurred in the second half of the 20th century. The descriptions of the surveyed cities are represented in Table 1. Sample Collection The USDS samples were collected on an irregular grid of at least 40 sampling sites in each city. The sampling site represents the courtyard area of the residential quarter with multi-story buildings. Each sample was taken from the local depressions of the microrelief from 3-5 localizations on the territory of the courtyard space of the quarter. The sample collection procedure was described in detail in previously published papers [22,25,46]. The sample mass was 1-1.5 kg. During the sample collection process, a questionnaire was filled for each sampling site containing information about the conditions of sediment formation, their thickness, the approximate area of the quarter, the proportion of landscaped functional zones, sidewalks, parking lots in the quarter, the quality of cleaning, carrying out construction work, and the approximate time of development of the territory. Particle Size Analysis Large roots, stones, debris, and foreign inclusions (glass, plastic, etc.) were removed from the samples. The samples were dried at room temperature. The dried sample was crushed manually using a rubber-tipped pestle, and thoroughly mixed. A representative subsample of about 200 g for particle size analysis was taken from each sample by quartering. To conduct particle size analysis, at least 5 samples were randomly chosen from 40 samples collected in each city. The special separation procedure was used to determine the granulometric composition and to obtain the solid material of the various particle size fractions of the samples. The technique based on decantation and wet sieving of the material of subsample of 200 g was earlier described in detail by Seleznev and Rudakov [46]. The subsample of 200 g was fractionated into 6 granulometric subsamples with sizes: >1 mm, 0.25-1 mm, 0.1-0.25 mm, 0.05-0.1 mm, 0.01-0.05 mm, and 0.002-0.01 mm. The resulting granulometric subsamples were weighed. The mass fraction of each particle size fraction in the sediment sample was calculated. Mineral Analysis The analysis of the heavy mineral concentrate (Schlich analysis) of sediment was used to determine the particles of technogenic origin. Manual analysis was performed for 0.1-0.25 and 0.25-1 mm granulometric subsamples. The fraction of anthropogenic particles was calculated in 0.1-0.25 and 0.25-1 mm fractions. The analytical procedure is described below. The solid material of the studied granulometric subsample was poured on paper and thoroughly mixed. Then a cone pile was formed from the poured loose material. After that, the material was flattened into a disk 1-2 mm thick. 
This disk was divided radially into quarters; two opposite quarters were taken for the further analysis of the subsample and the other two were discarded. Such a procedure of quartering and reducing the volume of the material of the granulometric subsample was repeated multiple times until the subsample of the desired weight or volume was obtained. The final volume of the quartered granulometric subsample was approximately 15 mL. Using a blade, the quartered granulometric subsample was distributed on the slide in three parallel lines. To identify and count particles, the lines were formed narrow and sparse. All manipulations with the grain mounts were conducted manually using the binocular microscope. Manipulation with the cone, disk, and the lines of particles, as well as quartering was performed using a wooden stick or copper needle. The identification of the technogenic particles was carried out by morphology, structure, color, density, optical and physical properties (shape and crystal habitus, splinters, fracture, transparency, luster, elasticity, and hardness). Each particle was photographed using a Carl Zeiss Axioplan 2 optical microscope and binocular microscope equipped with an Olympus C-5060 camera. The size of particles was determined by a calibrated stage/objective micrometer (1 mm divided into 100 units) measurement scale of the optical microscope and its software. All the particles of the quartered subsample were distributed by type; the fraction of particles of each type was counted. After quartering and heavy mineral concentrate analysis 2-5 visually typical particles were selected from the part of granulometric subsample attributed to the technogenic phase. These particles were analyzed with a JEOLJSM-6390LV scanning electron microscope equipped with Oxford Instruments INCAEnergy 350 X-Max 50 energy-dispersive spectrometer. At least one image was obtained from the surface of each selected particle. The homogeneity of the chemical composition of the particle surface was identified visually by the color of the image. At least one spectrum of elemental composition was determined for a particle with a flat surface, characterizing its uniform composition. For particles with a concave or convex surface at least two spectra of elemental composition were taken from the surface (in the center of the surface and at its peripheral). For particles with visually different chemical compositions (different shades of gray in the image), at least one spectrum in each light area was taken. For particles with inclusions at least one spectrum was taken on each inclusion, and the linear size of the inclusion was measured. Similarly, at least one spectrum was taken on each area of the external contamination of particles (if it was present). Optical analysis, photography, and scanning electron microscopy (SEM) were carried out in the "Geoanalyst" Center for Collective Use at the Institute of Geology and Geochemistry of the Ural Branch of the Russian Academy of Sciences. The origin of the particles (technogenic or natural) was finally determined according to the results of their visual analysis (color, luster, morphology, and size) and SEM investigations (surface morphology and chemical composition). Results The number of USDS samples collected in the cities and analyzed fortechnogenic phase is shown in Table 2. The analysis of heavy mineral concentrate was performed in 85 granulometric subsamples of 0.1-0.25 mm and 80 subsamples of 0.25-1 mm in size. 
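Each cone-and-quarter split described above keeps two opposite quarters, i.e. half of the material, so the reduction is a simple geometric progression. The helper below only makes that bookkeeping explicit; the starting mass and the bulk density used to convert the ~15 mL working volume into a mass are placeholders, not values taken from the paper.

```python
def quartering_steps(start_mass_g, target_mass_g):
    """Number of cone-and-quarter splits (each keeping 1/2 of the material) needed to
    reach the target mass, and the mass actually retained."""
    steps, mass = 0, float(start_mass_g)
    while mass > target_mass_g:
        mass /= 2.0
        steps += 1
    return steps, mass

# Illustration only: a 60 g granulometric subsample reduced to roughly the 15 mL working
# volume, assuming a bulk density of ~1.5 g/mL (both numbers are assumptions).
print(quartering_steps(60.0, 15.0 * 1.5))   # -> (2, 15.0): two splits retain 15 g
```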
For the particle size fraction of 0.1-0.25 mm, 11,985 particles were analyzed with the optical method, and 2306 of them were visually identified as technogenic. For subsamples of 0.25-1 mm in size, 10678 particles were inspected with a binocular microscope, of which 1409 particles were attributed to the technogenic phase. The statistical parameters of the fractional distribution of technogenic particles in the surveyed cities in particle size fractions of 0.1-0.25, 0.25-1, and combined fraction of 0.1-1 mm are shown in Figures 2 and 3. According to SEM analysis, the studied technogenic particles were divided into types presented in Table 5. Table 6 shows the morphological features of the various types of particles. Totally Table 5 as well. The distribution of different types of technogenic particle in urban areas in the 0.1-1 mm grain size fraction and 0.1-0. 25 Discussion The USDS samples were collected in 10 large cities located in different geographic and climatic zones, and in territories with different geological setting, anthropogenic pressure, and economy. The research was carried out according to the uniform methodology in all the studied cities. A part of the obtained particle size subsamples of 0.1-0.25 and 0.25-1 mm in size did not have enough material to conduct the analysis of heavy mineral concentrate, thus these subsamples were rejected from the technogenic particle investigations. In the cities of Perm, Ekaterinburg, and Tyumen a smaller number of USDS samples were collected, thus a correspondingly smaller number of subsamples for the analysis of heavy mineral concentrate were selected. Such a homogeneous distribution of the USDS sample amount and particle size subsamples did not affect the results of the analysis of heavy mineral concentrate and was suitable for the current study. The total number of the studied samples is sufficient to assess the contribution of the technogenic component to the USDS solid coarse fractions of 0.1-0.25 and 0.25-1 mm in size. According to the visual mineral analysis, 19% and 13% of particles were characterized as technogenic in particle size fractions of 0.1-0.25 and 0.25-1 mm, respectively. The rest of the particles is represented by the mineral and natural organic fragments. The proportion of technogenic particles differs from city to city. The largest portion of anthropogenic particles in the USDS coarse fraction was found in Rostov-on-Don, Ekaterinburg, Nizhny Novgorod, Nizhny Tagil, and Magnitogorsk. The high proportion of technogenic particles in these four cities is apparently related to the ferrous metallurgy and mechanical engineering industries. The city of Rostov-on-Don is the most southern of the surveyed cities. According to previous studies, the city has the highest accumulation of dust and USDS due to the arid climate and bad cleaning and management of the urban environment [1,22]. The lower amount of the anthropogenic coarse material was found in Perm and Tyumen. Tyumen is one of the least-polluted cities in Russia, although it has a slightly large number of cars per capita in comparison with other cities [22]. It should be noted that for all cities the proportion of technogenic phase in the combined fraction of 0.1-1 mm will be consistent with the proportions of the anthropogenic material in the separate fractions of 0.1-0.25 and 0.25-1 mm (Figure 4). 
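The percentages quoted in the Discussion follow directly from these counts; the short calculation below reproduces them and also gives a pooled figure for the combined 0.1–1 mm material. The pooled value and the closing remark are illustrations of the arithmetic, not statistics reported by the authors.

```python
counts = {
    "0.1-0.25 mm": {"total": 11_985, "technogenic": 2_306},
    "0.25-1 mm":   {"total": 10_678, "technogenic": 1_409},
}

for name, c in counts.items():
    print(f"{name}: {c['technogenic'] / c['total']:.1%} technogenic")   # 19.2% and 13.2%

pooled = sum(c["technogenic"] for c in counts.values()) / sum(c["total"] for c in counts.values())
print(f"pooled 0.1-1 mm estimate: {pooled:.1%}")                        # ~16.4%
# Note: the mean of 0.18 quoted in the abstract is a different statistic -- an average of
# per-sample proportions rather than a pooled particle count.
```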
The ratio between the number of technogenic particles visually identified and the total amount of particles in the granulometric subsamples may be used to roughly estimate an error in determining the number of technogenic particles by visual inspection (for subsamples of 0.1-0.25 and 0.25-1 mm, 19% and 13%, respectively). The SEM-EDS (energy-dispersive spectroscopy) technique allows us to analyze the surface of the particle and determine its chemical composition. Thus this method of analysis is more reliable for the determination of particle type than visual diagnostics. Visual inspection depends on the qualification, physical abilities, and experience of the operator. Therefore, optical methods of research do not fully guarantee the reliability of determination of the particle type. Fully reliable determination of the particle type by its visual features is unattainable and is not required. However, the combination of methods of analysis of heavy mineral concentrate and visual diagnostics is a suitable and easy technically realized procedure to discriminate technogenic particles in comparison with SEM-EDS analysis that requires the investigator to have skills in electron microscopy. At the same time, the analysis of heavy mineral concentrate provides the search of the required particles among the big amount of the similar objects and a rough estimation of the quantity of the objects of interest. Various types of technogenic particles differ in shape and physical characteristics as well as in chemical composition. The major elements forming the composition of the particle core were O, Si, Fe, Al, Ca, Ti, etc. The minor elements found on the surfaces of the particles and forming the impurities were Mg, K, Cu, Na, etc. In many cases, impurity elements contribute to the environmental pollution, in particular, the composition of various particles of plaster coated with paint and whitewash includes Pb, Cu, and Cr. The separate group of the cities of the Ural region with a metallurgical industry (Nizhny Tagil, Chelyabinsk, and Magnitogorsk) can be distinguished among the studied cities. Each city in this group has a large metallurgical plant, coking, and coal power plants. The number of technogenic particles does not differ significantly both in fractions of 0.1-0.25 and 0.25-1 mm separately and in the combined particle size fraction of 0.1-1 mm in these cities. According to the results of previous studies [46], the anthropogenic material in the form of slag is used in such cities as a building material, for example, instead of sand and stone in pavement and road construction in residential areas. There is also a coal power plant in Murmansk. It can be assumed that in the group of four cities, technogenic particles, in particular slag, can enter the USDS material with emissions from power plants and smelters. All the studied cities have a high automobile traffic network, as well as road construction works being underway. The technogenic components (especially fly ash) are often used as construction materials or backfill materials on pavements. Such material can be transferred into the USDS by the wheels of vehicles in the residential area. In general, the amount of technogenic material is comparable to the data obtained for other cities [15]. The distribution of the proportion of technogenic particles in the samples deviates from the normal and is close to lognormal and asymmetric. 
Several studies conclude that the lognormal distribution of elemental concentrations in environmental compartments or close to it relates to additional anthropogenic input of the elements [48][49][50]. In our study, the conclusion about the distribution of the proportion of anthropogenic particles in the studied samples close to lognormal was expected; however, it is important to take into account the uncertainty of information about the source of technogenic particles in the urban environment. The coefficient of variation of the portion of anthropogenic particles also confirms the fact of the heterogeneity of the sample populations in the studied cities. The analysis of the technogenic phase composition of USDS samples in the combined fraction of 0.1-1 mm shows that slag particles predominate in all cities and, besides, a large amount of domestic wastes (glass), the particles of construction materials (plaster and brick), and to a lesser extent paint particles, are observed. The analysis of the distribution of The individual particle size subsamples reveal the features of the cities that may be related to the contribution of the studied types of technogenic particle to the city pollution. For example, the granulometric fraction of 0.25-1 mm in Tyumen contains about 10% of coal, which indicates the presence of local coal-fired boilers in addition to the main stationary gas-fired power plants in the city. Moreover, the residential neighborhoods with multi-story buildings in Tyumen are adjacent to low-rise wooden buildings, where heating is provided from coal combustion [51]. Tyumen also has approximately 8% of tire material in fraction of 0.25-1 mm, indicating a high number of cars per capita (higher than in other cities). In Murmansk, with a coal cargo port located within the city center, about 7% of coal is found in particle size fraction of 0.1-0.25 mm. The elemental composition of technogenic particles is formed by different elements depending on the particle origin. Major elements may include the same elements that form the mineral component of the urban sediment: Si, Al, Ca, Fe, Mg, etc. [23]. However, each type of anthropogenic particle relates to some source of environmental pollution and to a related potentially harmful elements. In the current study, the granulometric subsamples were obtained after washing the samples with distilled water and, therefore, minor element content in the studied technogenic particles refers to trace elements rather than to material adsorbed on the particle surfaces. The accumulation of paint particles and colored plaster debris in the USDS contributes to the pollution of the urban environment with potentially toxic elements. The technogenic particles in the USDS samples tend to the formation of the geochemical anomalies in the urban area and increased concentrations of heavy metals in contemporary surface sediments. The uncertainties in this study are related to the following factors: the errors of the operator in identifying the particle type; -particle loss in particle size analysis under water washing and decantation; -counting errors in the analysis of heavy mineral concentrate; -the location of sampling sites in residential blocks far from roads, etc. Taking into account the sources of uncertainty, the obtained results satisfactory characterize the anthropogenic component of the surface sediments in residential areas in large Russian cities. 
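The lognormality and coefficient-of-variation statements above can be made concrete with a few lines of SciPy. The per-sample proportions below are synthetic stand-ins drawn to lie within the reported 0.01–0.43 range, because the individual sample values are not tabulated here; only the procedure, not the numbers, should be taken from this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-sample technogenic proportions (illustration only).
p = np.clip(rng.lognormal(mean=np.log(0.15), sigma=0.6, size=40), 0.01, 0.43)

cv = p.std(ddof=1) / p.mean()                 # coefficient of variation
_, p_raw = stats.shapiro(p)                   # normality test on the raw proportions
_, p_log = stats.shapiro(np.log(p))           # ... and on the log-transformed proportions
print(f"mean = {p.mean():.2f}, CV = {cv:.2f}")
print(f"Shapiro-Wilk p-values: raw {p_raw:.3f}, log-transformed {p_log:.3f}")
# A clearly higher p-value after the log transform is what a 'close to lognormal'
# distribution of proportions looks like in such a test.
```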
The total amount of the USDS estimated for several Russian cities varies in the range from 1.8 to 3.2 kg/m2, of which approx. 65% is the fraction >100 µm [23,24]. Thus, with the average technogenic fraction of 0.18 in the coarse material, the amount of anthropogenic material in Russian cities varies from 0.21 to 0.37 kg/m2. This result shows a quite large accumulation of technogenic material in the urban environment. The preliminary analysis of microplastic particles in the USDS samples in Russian cities allowed the amount of microplastic particles <1 mm to be considered insignificant in this environmental compartment [28]. The results of the assessment of the number of microplastics are not presented in the current paper; however, further studies may use the methodological approaches presented here to search for plastic microparticles and estimate their amount. Conclusions The combined approach was applied to assess the amount of technogenic components in loose coarse sedimentary material in an urban environment. When determining the types of technogenic particle, the shape of the particles as well as their color and surface morphology are of great importance. The approach was based on the methods of qualitative and quantitative mineral, SEM-EDS, and environmental analysis. This approach can be implemented in other environmental studies for similar purposes. The study of technogenic particles in contemporary anthropogenic sediments allows important information about the sources of pollution to be obtained, especially about local non-point sources of pollution and their characteristics in an urban area. According to the revealed quantitative indicators, it has been shown that the USDS in Russian cities contain a significant proportion of technogenic particles. The surveyed cities are differentiated by the amount and types of technogenic particles preferentially present in the local USDS in residential areas. Technogenic material may affect the transformation of typomorphic element associations in urban environmental compartments. The trace elements found among the technogenic particles as impurities may change the microelement composition within the components of the urban sediment cascade. Author Contributions: Conceptualization, methodology, formal analysis, data curation, writing-original draft preparation, supervision, review and editing, visualization, project administration, funding acquisition, planning of laboratory analysis, A.S.; laboratory analysis, E.I.; field study, writing-original draft preparation, review and editing, I.Y.; field study, review and editing, G.M. All authors have read and agreed to the published version of the manuscript.
5,579
2021-06-10T00:00:00.000
[ "Environmental Science", "Geology" ]
An Alternative to Real Number Axioms In the present paper we consider one of the basic theorems of probability theory on real numbers. We prove that it is equivalent with the supremum axiom of real numbers. Introduction It is a well-known fact that the set Q of rational numbers is not complete-hence, such important constants as √ 2 or π do not exist in Q.Because the way in which the set R is constructed of real numbers from Q is quite complicated, it is usually defined axiomatically.The completeness of R can be formulated in different ways, e.g., as a complete metric space, or as a complete lattice.In [1], a review of some completeness axioms for R is presented.In this paper, the set R will be characterized by a property which is very important in the probability theory, which may prove useful from the point of view of applications, as well as didactics.In the paper we shall characterize the set R from the perspective of the probability theory, namely in the Kolmogorov formulations-an event is a set on certain σ-algebra S of subsets of a space Ω, and the probability is a σ-additive mapping P : S −→ [0, 1] .In terms of measurement, the mapping is a real function ξ : Ω −→ R , and it is an interesting point, especially in terms of didactics, that the complete information about ξ is obtained from the distribution function F of ξ, which is a real function, F : R −→ [0, 1] , with some particular properties. The paper is organized as follows: In Section 2 we will formulate two different axioms-the supremum axiom (S) and the distribution function axiom (D); and in Section 3 we will prove that the axioms are equivalent. Materials and Methods In this section we formulate the important properties of the distribution function.In the literature there are two well-established but different definitions of the distribution function F : R −→ [0, 1] of a random variable ξ : Ω −→ R .The first is given by the formula F(x) = P({ω ∈ Ω : ξ(ω) < x }), and the second by the formula F(x) = P({ω ∈ Ω : ξ(ω) ≤ x}).In this paper we shall use the second approach, which is more convenient for working with the supremum axiom.Evidently, the first one could be used in the infimum way.The distribution function F : R −→ [0, 1] can be characterized without any reference to the general probability space [2,3] and by applying only a few properties of F, as shown in the following definition.F is right continuous in any point x 0 ∈ R, In the probability theory, the following theorem presents a translation method between the elementary approach and the abstract theory.To any distribution function F : R −→ [0, 1] there exists a probability measure λ : B −→ [0, 1] defined on the family B of Borel subsets of R, such that: In our elementary approach, instead of B we will work only with the family R for all unions of intervals I ⊂ R (bounded as well as unbounded).According to the measure extension theorem, any additive and continuous mapping λ : R −→ [0, 1] can be extended from R to B, since R is an algebra and B is the σ-algebra generated by R. Axiom (S). Any increasing bounded sequence of real numbers has the supremum-the least upper bound of the sequence. In our distribution axiom, instead of σ-additivity, we shall use the notion of additivity and the notion of continuity. A mapping λ : R [0, 1] is additive, if for sets A, B ∈ R such that A B = ∅, it holds: Results There are many known proofs of the axiom (D), e.g., referring to the completeness of R by (S).Now we shall prove the opposite implication. Theorem 1. 
Axiom (D) implies Axiom (S).

Proof of Theorem 1. Let {a_n}_n be an increasing sequence such that 0 < a_n ≤ 1 for every n. Our goal is to construct a distribution function y = F(x) and an increasing sequence b_n such that F(b_n) = a_n for every n. Consider the points B_0, B_1, C_1 in the coordinate system, where B_0 = (0, 0), B_1 = (b_1, 0), and C_1 = (b_1, 1/2) (see Figure 1). Denote the area of the triangle defined by these points by a_1. Clearly, b_1 = 4a_1. Let x be a point in the interval [0, b_1). Then F(x) is the area of the triangle defined by points B_0, X_1 = (x, 0), and X_2 = (x, x/(8a_1)) (see Figures 2 and 4). The construction is continued with the points B_2 = (b_2, 0) and C_2; the area of the trapezoid defined by B_1, C_1, B_2, C_2 is a_2 − a_1 (see Figure 3), so that the total area up to b_2 equals a_2. Proceeding in this way, for every n we obtain points B_n = (b_n, 0) and C_n such that the area of the trapezoid defined by C_n, C_{n+1}, B_n, B_{n+1} is a_{n+1} − a_n (see Figure 5), and hence the total area up to b_n equals a_n.
• If there exists a natural number n such that x ∈ [b_n, b_{n+1}], then F(x) is the total area up to x, i.e., a_n plus the area of the trapezoid defined by points B_n, C_n, X_1 = (x, 0), and X_2 (see Figure 6).
It can easily be proved that F is a distribution function. Assume that there exists a probability measure λ : R −→ [0, 1] such that λ((α, β]) = F(β) − F(α); in particular, λ((0, b_n]) = F(b_n) = a_n for every n. By the continuity of λ, the values λ((0, b_n]) = a_n converge to λ(⋃_n (0, b_n]), and this limit is the least upper bound of {a_n}_n. Thus, we have found that any increasing sequence {a_n}_n from the interval [0, 1] has a supremum. Now, let {a_n}_n be an arbitrary bounded increasing sequence from (0, k]. For any natural n, take c_n = a_n / k. Then c_n ∈ (0, 1], and there exists the supremum of {c_n}_n. Hence, there exists the supremum of {a_n}_n, and sup a_n = k sup c_n. Finally, consider (a_n)_n as an arbitrary increasing bounded sequence. Take d_n = a_n − a_1. Then d_n is non-decreasing, non-negative, and bounded, so there also exists the supremum of d_n. Hence, there exists the supremum of a_n and sup a_n = a_1 + sup d_n.

Figure 2. The area of the triangle defined by points B_0, X_1, X_2.
Figure 3. The area of the trapezoid defined by points B_1, C_1, B_2, C_2.
Figure 4. The area of the trapezoid defined by points B_0, C_1, X_1, X_2.
Figure 5. The area of the trapezoid defined by points C_n, C_{n+1}, B_n, B_{n+1}.
Figure 6. The area of the trapezoid defined by points B_n, C_n, X_1, X_2.

Definition 1. A function F : R −→ [0, 1] is called a distribution function if it satisfies the following properties:
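The list of properties announced in Definition 1 does not survive in this record. A standard formulation, consistent with the convention F(x) = P({ω ∈ Ω : ξ(ω) ≤ x}) adopted above, would be the following (a plausible reconstruction, not necessarily the paper's exact wording):

```latex
% Standard properties of a distribution function for the convention
% F(x) = P(\xi \le x); reconstruction for the reader's convenience.
\begin{enumerate}
  \item $F$ is non-decreasing: $x_1 \le x_2 \implies F(x_1) \le F(x_2)$;
  \item $F$ is right continuous at any point $x_0 \in \mathbb{R}$:
        $\lim_{x \to x_0^{+}} F(x) = F(x_0)$;
  \item $\lim_{x \to -\infty} F(x) = 0$ and $\lim_{x \to +\infty} F(x) = 1$.
\end{enumerate}
```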
1,650.6
2018-08-21T00:00:00.000
[ "Mathematics" ]
A pipeline for the de novo assembly of the Themira biloba (Sepsidae: Diptera) transcriptome using a multiple k-mer length approach Background The Sepsidae family of flies is a model for investigating how sexual selection shapes courtship and sexual dimorphism in a comparative framework. However, like many non-model systems, there are few molecular resources available. Large-scale sequencing and assembly have not been performed in any sepsid, and the lack of a closely related genome makes investigation of gene expression challenging. Our goal was to develop an automated pipeline for de novo transcriptome assembly, and to use that pipeline to assemble and analyze the transcriptome of the sepsid Themira biloba. Results Our bioinformatics pipeline uses cloud computing services to assemble and analyze the transcriptome with off-site data management, processing, and backup. It uses a multiple k-mer length approach combined with a second meta-assembly to extend transcripts and recover more bases of transcript sequences than standard single k-mer assembly. We used 454 sequencing to generate 1.48 million reads from cDNA generated from embryo, larva, and pupae of T. biloba and assembled a transcriptome consisting of 24,495 contigs. Annotation identified 16,705 transcripts, including those involved in embryogenesis and limb patterning. We assembled transcriptomes from an additional three non-model organisms to demonstrate that our pipeline assembled a higher-quality transcriptome than single k-mer approaches across multiple species. Conclusions The pipeline we have developed for assembly and analysis increases contig length, recovers unique transcripts, and assembles more base pairs than other methods through the use of a meta-assembly. The T. biloba transcriptome is a critical resource for performing large-scale RNA-Seq investigations of gene expression patterns, and is the first transcriptome sequenced in this Dipteran family. Electronic supplementary material The online version of this article (doi:10.1186/1471-2164-15-188) contains supplementary material, which is available to authorized users. Background The Sepsidae family of flies consists of over 200 species with a global distribution [1]. Sepsids are a model system for the investigation of sexual selection and how it affects courtship and sexual dimorphism [2]. Sepsids have complex courtship behaviors that include elements of male display, female choice, and sexual conflict [3][4][5][6]. Specialized male traits have evolved alongside these complex courtship behaviors. Sexual selection has resulted in the evolution of modified forelimbs, body size, and abdominal appendage-like structures, which are articulated and have long bristles attached to their distal ends [7][8][9][10][11][12][13][14][15]. Nextgeneration sequencing in combination with gene expression analysis has the potential to answer multiple questions including: how new morphologies evolve, whether shared developmental mechanisms underlie traits that have evolved multiple times, what the genetic basis of sexual dimorphism is and how to resolve the phylogenetic relationships within Sepsidae. Despite the potential of sepsids as a model to test a wide variety of evolutionary hypotheses, almost no molecular resources exist in this family, nor are any genomes or EST databases available. Most Dipteran families have few genomic resources compared to drosophilids and mosquitoes. 
Sepsids shared a common ancestor with Drosophila melanogaster and houseflies between 74 and 98 MYA, and are not closely related to any taxon with significant genomic resources [16,17]. A detailed investigation of the even-skipped locus revealed that approximately twice as many nucleotide substitutions exist between coding regions of D. melanogaster and sepsid species as exists between D. melanogaster and the most distantly related Drosophila species [18]. The Sepsidae are a sister taxon to the Tephritoidea or true "fruit flies," which contains four species with genomic and transcriptomic resources [19][20][21][22], but these are not as well annotated as Drosophila and the level of sequence similarity with sepsids is unknown. A sepsid transcriptome would not only facilitate gene expression studies across the Sepsidae, but would also enhance comparative bioinformatics within Diptera. For non-model organisms, the challenge of gene discovery no longer resides in a dearth of sequence data, but from the computational challenges of large and complex datasets [23]. This challenge is particularly true for de novo assembly, which is more computationally intensive than syntenic assembly via mapping to a reference genome. Another hurdle to de novo assembly is recovering rare transcripts from a datasets with heterogeneous sequence coverage. Assemblies that combine multiple k-mer lengths generally recover a greater number of unique transcripts during de novo assembly than single k-mer approaches [24,25], but with additional potential for mis-assembly. Although both cloud computing and multiple k-mer approaches are widely available, they have not been employed as broadly as referencebased pipelines because some programing knowledge is required. Our objectives were two-fold: 1) to construct a general purpose de novo transcriptome assembly pipeline that compares the output of multiple programs and automatically analyzes this data for downstream applications, and 2) to use that pipeline to assemble the transcriptome of the sepsid T. biloba. Our pipeline uses Velvet-Oases and Trinity for the initial assembly and constructs a meta-assembly with CAP3 followed by analysis with various downstream programs, including BLAST and Blast2GO [26][27][28][29]. The pipeline functions on a low-cost cloud computing network, and can be operated from a standard desktop computer. In addition to assembling the de novo transcriptome of the sepsid fly T. biloba, we used this pipeline to re-assemble previously published transcriptomes that used both 454 and Illumina sequencing platforms. Compared to the standard single k-mer assembly, our pipeline assembles longer contigs and more base pairs in all four species. By comparing annotated transcripts from different assemblies of the T. biloba transcriptome, we demonstrate that our pipeline recovers a greater number of transcripts than standard approaches by pooling unique transcripts from multiple assemblies. General overview of computational pipeline This pipeline was designed to automate a large number of intermediate bioinformatic activities such as trimming and filtering reads, converting sequence files through various formats, performing a large number of sequential assemblies using different assemblers and parameters, and formatting the output for downstream use (Figure 1). This pipeline was also designed to circumvent what have traditionally been significant limitations for small research groups, a lack of computing facilities and programing knowledge. 
This pipeline, while functional on a local network, is designed to make use of virtual cloud computing units, which provide scalable resources with direct interaction. Our pipeline produces intermediate products that are compatible with graphical user interface (GUI) based platforms such as The iPlant Collaborative and Galaxy, so that researchers can use these interfaces for downstream applications if desired [30][31][32][33]. We used this pipeline to perform the de novo assembly of the T. biloba transcriptome, the first transcriptome assembly for any species in the family Sepsidae. We also used the pipeline to re-assemble archived RNA-seq reads from other studies to assess the performance of the multiple k-mer length assembly process compared to a single k-mer assembly. Archived sequences from an arthropod (the milkweed bug, Oncopeltus fasciatus: [SRR:057573]), a plant (Silene vulgaris: [SRR:245489]), and a mammal (the ground squirrel Ictidomys tridecemlineatus: [SRR:352220]) were selected to test the performance of the pipeline across taxa and genome sizes. Each of these data sets consists of 454 sequence reads of approximately 3.2-4× coverage, the same coverage as our T. biloba data set. The O. fasciatus and S. vulgaris sequence reads were generated for de novo assembly of the entire transcriptome of the organism, while the I. tridecemlineatus sequences were generated for differential expression analysis [34][35][36]. Cloud computing network and data management All of the data presented here were generated using Amazon Web Services Elastic Compute Cloud (AWS EC2) with a Debian Linux operating system (version 6.0.3). Software, sequence reads, reference assemblies, and other files are stored persistently on AWS Elastic Block Store (EBS) volumes for the purpose of off-site backup, reduced network traffic, and storage. Data produced by the pipeline may be parsed and manipulated further through AWS or downloaded locally as needed. As presented here, the pipeline runs software in series; however, it is simple to create many duplicate systems through AWS, which may then run the processes in parallel. Cloud computing instances were initialized using a memory-optimized architecture to meet the high memory requirements of Velvet-Oases assembly of 454 sequence reads. An instance with 64 gigabytes (GB) of available memory was used during the initial analysis of assembly performance at different k-mer lengths. This was sufficient to produce assemblies with a k-mer length of up to 31 bp, after which available memory became a limiting factor, which coincided with a reduction in assembly quality. At the time of this writing, high-memory instance types with up to 244 GB of available memory are available for larger data sets. Instances were initialized using a publicly available Linux operating system disk image hosted by Amazon. Software, data, and scripts are stored on EBS volumes, and software installation is simplified by a script that unpacks and installs all of the packages required for this pipeline on a newly created 'bare' cloud instance. All functional aspects of the pipeline shown in Figure 1 are performed by a wrapper script which sequentially performs the assembly and analysis of sequence data before storing it remotely and terminating the instance to minimize computing cost, which is calculated in hourly blocks based on instance type. The pipeline ran to completion in approximately 20 hours. Larger sequence data sets requiring more memory and computing time may benefit from separating memory-intensive assembly from processor-intensive downstream analysis, as the cost of processing with cloud computing is much lower than reserving large blocks of memory and storage space. Figure 1. Flowchart of the bioinformatic pipeline.
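The wrapper script itself is not reproduced in this record; the following Python sketch only illustrates the pattern described above, namely running each stage in series and then terminating the instance. Stage commands, file names, and the AWS CLI call are placeholders and assumptions, not the authors' actual implementation.

```python
"""Illustrative wrapper in the spirit of the pipeline described above:
run each stage in series, archive the results, then terminate the cloud
instance so that hourly billing stops. Commands and paths are placeholders."""
import subprocess

STAGES = [
    ["fastqc", "reads.fastq"],                               # read quality report
    ["fastx_clipper", "-i", "reads.fastq", "-o", "clipped.fastq"],
    # ... one entry per assembly / analysis step ...
]

def run_pipeline(stages):
    for cmd in stages:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)                      # stop on the first failure

def archive_and_terminate(instance_id):
    # Package results for transfer, then shut the instance down.
    subprocess.run(["tar", "czf", "results.tar.gz", "output/"], check=True)
    subprocess.run(["aws", "ec2", "terminate-instances",
                    "--instance-ids", instance_id], check=True)

if __name__ == "__main__":
    run_pipeline(STAGES)
    archive_and_terminate("i-0123456789abcdef0")             # placeholder instance id
```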
Trimming and quality filtering reads Prior to assembly, the reads are processed to remove adaptor sequences, low-quality reads and regions, and highly redundant sequences. The initial quality of the untrimmed sequence reads is assessed using FastQC, which also generates a list of over-represented sequences that may then be removed [37]. The raw sequence reads are then converted to a standard format which is passed on to the FastX Toolkit, which removes adaptor sequences using trimming and clipping functions [38]. The reads are subsequently run through the FastX quality filter, which removes reads that fail to pass a quality check (for the data presented here, 80% of the bases were required to have a Phred score of 20 or higher, corresponding to a 1:100 base-calling error rate). The remaining reads are analyzed for redundancy by FastX, and identical reads are collapsed into a single representative read. This removes large numbers of identical reads that may result from the amplification process prior to sequencing. Reducing the number of reads can dramatically reduce the amount of memory needed during the assembly process. It can also significantly reduce the amount of time required for assembly, which is an important consideration when generating multiple assemblies [39].
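As a concrete illustration of the quality criterion above (not the FastX implementation that was actually used), a minimal Python sketch with Biopython might look like the following; the file names are placeholders:

```python
"""Keep a read only if at least 80% of its bases have Phred quality >= 20,
mirroring the FastX quality-filter criterion described above (illustrative
re-implementation, not the tool used in the pipeline)."""
from Bio import SeqIO  # pip install biopython

MIN_QUAL = 20       # Phred 20 corresponds to a 1:100 base-calling error rate
MIN_FRACTION = 0.8  # fraction of bases that must reach MIN_QUAL

def passes_filter(record):
    quals = record.letter_annotations["phred_quality"]
    good = sum(1 for q in quals if q >= MIN_QUAL)
    return good / len(quals) >= MIN_FRACTION

kept = (r for r in SeqIO.parse("reads.fastq", "fastq") if passes_filter(r))
n = SeqIO.write(kept, "reads.filtered.fastq", "fastq")
print(f"{n} reads passed the quality filter")
```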
Assembly It has been shown that performance varies significantly between assemblers and data sets [40]. This has prompted the development of a number of techniques, such as multiple-k approaches, to retrieve more contigs from the initial sequence reads [25,[41][42][43][44]. To assemble the T. biloba sequence reads we have used a multiple k-mer length approach that creates a large number of assemblies, each of which contains potentially unique transcripts. Because many assembly programs can support multiple k-mer assembly after the addition of custom scripts, we compared the performance of four different assembly programs: Abyss, Newbler, Trinity, and Velvet-Oases, using a previously described protocol (Additional file 1: Table S1) [26,27,40,[45][46][47][48]. T. biloba sequence reads from multiple life stages were pooled and assembled with a k-mer length of 25 using each of the four assembly programs (Table 1). The resulting transcripts were then aligned to the D. melanogaster transcriptome. A conservative cut-off value with a minimum aligned length of 400 bp was used to create the distribution in Table 1. While Velvet-Oases produced the longest contigs, Trinity generated a larger number of contigs. A nucleotide BLAST of contigs in each assembly showed an increase in the number of contigs unique to one assembly in those produced by Trinity and Velvet-Oases. Based on these results, Velvet-Oases was selected for the length of the resulting transcripts and the ease of generating assemblies of different k-mer lengths, and a single Trinity assembly is included to provide isoform detection. The Velvet-Oases and Trinity de novo assembler algorithms have complementary strengths and weaknesses when comparing memory requirements and run-time. The T. biloba sequence data was used to generate assemblies with k-mer lengths of 17, 19, 21, 23, 25, 27, 29, and 31 base pairs. To demonstrate that assemblies with different k-mer lengths recover unique transcripts, the stand-alone BLAST algorithm was used to align contigs from each assembly to a pool of contigs from all assemblies, with the resulting unaligned contigs representing those unique to one assembly (Figure 2). For example, to determine the number of contigs unique to the K17 assembly, the K17 contigs were blasted against the pooled contigs from all other assemblies. If a contig did not align, then it was unique to the K17 assembly. Contigs shorter than 200 base pairs were discarded. Next, BLAST was performed against D. melanogaster to annotate the unique contigs, and only those contigs with orthology to D. melanogaster were reported (Table 2). After the initial analysis, the pooled assemblies were also annotated using the D. melanogaster transcriptome to generate a total number of transcripts for the pool, to which the number of unique transcripts could be compared (Table 2). A significant number of transcripts were represented in only one of the single k-mer length assemblies (Table 2). In total, 2,296 transcripts were identified as unique to a specific assembly using BLAST analysis. For k-mer lengths 17-27, unique transcripts were approximately 2% of each assembly, and this percentage did not decrease with increasing k-mer length. However, at K29, unique transcripts decreased to only 0.8% of the total. The number of unique transcripts generated from this analysis is a low estimate because it contains only conserved Drosophila orthologs and excludes transcripts unique to T. biloba and those too divergent to be identified by BLAST. Therefore, the number of unique transcripts recovered from different k-mer assemblies is likely higher. Our analysis confirms that restricting assemblies to only a single k-mer length limits the number of transcripts recovered, regardless of which k-mer length is chosen. Meta-assembly The assemblies generated with k-mer lengths of 23, 25, 27, and 29 base pairs were combined through meta-assembly, which extends contigs found in multiple assemblies and retains contigs found in only one. K-mer lengths shorter than 23 resulted in a large number of singletons and short contigs. Assemblies with a k-mer length larger than 29 required much larger memory allocations and computational time and were more conservative than other assemblies, resulting in diminishing returns in which larger k-mer word sizes produce few novel transcripts not present in other assemblies. The CAP3 software was used to construct the meta-assembly [28].
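Before the CAP3 step, the k-mer sweep itself can be driven by a short script. The Python sketch below illustrates the idea for Velvet/Oases; the command-line flags shown are examples only and should be checked against the Velvet and Oases documentation, and the hash lengths and file names are assumptions rather than the authors' exact settings.

```python
"""Illustrative driver for a multiple k-mer Velvet/Oases sweep.
Each k-mer length gets its own output directory; the resulting transcript
files are later pooled for the CAP3 meta-assembly."""
import subprocess
from pathlib import Path

READS = "reads.filtered.fasta"           # placeholder input file
KMER_LENGTHS = [23, 25, 27, 29]          # range retained for the meta-assembly

def assemble(k):
    outdir = Path(f"oases_k{k}")
    # Flags below are illustrative; consult the Velvet/Oases manuals for
    # the options appropriate to 454 reads.
    subprocess.run(["velveth", str(outdir), str(k), "-fasta", "-short", READS],
                   check=True)
    subprocess.run(["velvetg", str(outdir), "-read_trkg", "yes"], check=True)
    subprocess.run(["oases", str(outdir)], check=True)
    return outdir / "transcripts.fa"     # Oases' transcript output

if __name__ == "__main__":
    transcript_files = [assemble(k) for k in KMER_LENGTHS]
    print("assemblies finished:", [str(p) for p in transcript_files])
```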
The CAP3 software removes the redundancy generated within and between assemblies of different k-mer lengths to consolidate the transcripts. Consolidating the results of all k-mer assemblies created a pool of 138,954 contigs. CAP3 clustered and assembled these sequences into a meta-assembly of 15,984 extended contigs and 8,511 singletons. The singletons represent sequences for which no overlap exists between assemblies and which thus could not be extended by CAP3. The final meta-assembly consisted of 24,495 contigs with a mean sequence length of 1,403 base pairs, an increase of 372 bp (34.1%) compared to the K25 assembly. Analysis of transcript length revealed that the total number of base pairs assembled improved significantly from 17.4 Mb to 32.7 Mb, and the mean contig length increased by 310 bp from 1,093 bp to 1,403 bp. A frequency distribution of the number of contigs of a given length (Figure 3) shows an increase in the number of longer contigs in the meta-assembly, compared to the single k-mer assemblies and the Trinity assembly. The single k-mer assemblies have a relatively high number of singletons (sequences of less than 500 bp). The number of singletons was greatly reduced in the meta-assembly, indicating that meta-assembly was able to extend contigs by incorporating singletons. Figure 2. BLAST strategy to identify unique transcripts. Identification of unique transcripts in each individual assembly was performed by reserving contigs from one assembly and pooling all contigs from the remaining assemblies. The contigs from the single assembly were aligned to the pooled contigs. Contigs that failed to align were considered unique to that single assembly. The unique contigs were annotated by aligning to the D. melanogaster transcriptome. To demonstrate that contigs from different k-mer assemblies were used to create extended consensus contigs, genes from a candidate list of transcription factors were tracked from the 454 reads through the assembly and meta-assembly process (Table 3). Transcription factors are generally low-abundance transcripts, and therefore full-length sequences are less likely to be recovered in single k-mer assemblies. Five out of the seven transcripts were extended through CAP3 re-assembly (Table 3). Primers were designed for four sequences, and PCR amplification using T. biloba cDNA produced bands of the expected size, indicating that these extended contigs are correctly assembled transcripts (Additional file 2: Figure S1). To better visualize how meta-assembly extends transcript length, we examined in further detail how extradenticle contigs from different assemblies were meta-assembled (Figure 4). The meta-assembly recovered the entire length of the coding sequence of the Tbil-exd transcript, as compared to Drosophila. Assembling the full transcript required contigs from multiple assemblies, and only a subset of the individual assemblies contained sequence fragments for the middle of the transcript. Contigs from assemblies outside the 23-29 k-mer range show a reduction in coverage caused by fragmentation in assemblies with shorter k-mer lengths and conservative assembly with larger k-mer lengths. The Tbil-exd sequence contains several single nucleotide insertions within the region aligned to the Drosophila reference, and 83% of the nucleotide identities are conserved.
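A minimal sketch of the pooling and CAP3 step might look like the following; the file names are placeholders, and the `.cap.contigs` and `.cap.singlets` suffixes are CAP3's default output names as far as we can tell.

```python
"""Pool the transcripts from every k-mer assembly and let CAP3 collapse
redundant contigs and extend overlapping ones (illustrative sketch)."""
import subprocess

def pool_assemblies(transcript_files, pooled="pooled.fasta"):
    # Simple concatenation of the per-k transcript FASTA files.
    with open(pooled, "w") as out:
        for path in transcript_files:
            with open(path) as handle:
                out.write(handle.read())
    return pooled

def run_cap3(pooled):
    subprocess.run(["cap3", pooled], check=True)
    # CAP3 writes its results next to the input file.
    return pooled + ".cap.contigs", pooled + ".cap.singlets"

def count_fasta(path):
    with open(path) as handle:
        return sum(1 for line in handle if line.startswith(">"))

if __name__ == "__main__":
    pooled = pool_assemblies(["oases_k23/transcripts.fa", "oases_k25/transcripts.fa"])
    contigs, singlets = run_cap3(pooled)
    print(count_fasta(contigs), "extended contigs;", count_fasta(singlets), "singletons")
```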
To determine whether meta-assembly would improve transcriptome quality across taxa, the meta-assembly process was performed on three archived datasets (Oncopeltus fasciatus: SRR057573; Silene vulgaris: SRR245489; Ictidomys tridecemlineatus: SRR352220) using the same pipeline used to generate the T. biloba transcriptome (Table 4; Figure 5). The meta-assemblies for each of the four datasets were compared to a single 25 k-mer length assembly. Figure 3. Frequency distribution of transcript lengths by assembly. A plot of the quantity of transcripts with a given length per assembly shows differences in assembly output and a pronounced peak representing the median transcript length. The meta-assembly was generated by the re-assembly of all k-mer lengths using CAP3. Meta-assembly improved transcript length, as indicated by the leading edge of the graph. Meta-assembly also reduced the number of short contigs, compared to the single k-mer assemblies. Trinity automatically removes contigs smaller than 200 base pairs. We used multiple metrics to compare transcriptome quality between the 25 k-mer length assembly and the meta-assembly, including the number of base pairs assembled, the number of contigs, the percent of reads used in the contigs, and the median contig length (Figure 5; Table 4). In all four datasets, the number of base pairs assembled was greater in the meta-assembly. The greatest increase was observed in I. tridecemlineatus, in which the number of base pairs assembled doubled with meta-assembly. Overall, the total number of assembled base pairs is 60.1% to 105.6% greater. The increase in base pairs assembled was mirrored by an increase in contig length in all four species, as measured by mean contig length, median contig length, and N50 (Figure 5D; Table 4). The increase in length is presumably a result of incorporating more reads, because the percent of total reads that were assembled into contigs also increased with meta-assembly (Figure 5C). In addition to increasing contig length, the meta-assembly also increased contig number in the I. tridecemlineatus, S. vulgaris, and O. fasciatus data sets (Figure 5B). The increase in contig number is further evidence that meta-assembly recovers unique contigs from different k-mer length assemblies. The gain in contig number was likely even greater than the observed increase because the 25 k-mer assembly includes redundant contigs, whereas the meta-assembly does not. Figure 4. Extension of extradenticle sequence by meta-assembly. Contigs generated by multiple k-mer lengths were consolidated by meta-assembly to recover the entire coding sequence of the gene extradenticle from sequence fragments. Contigs from individual assemblies of multiple k-mer lengths are shown in alignment to the meta-assembly and the Drosophila transcript. The k-mer length 31 contigs were not included in the meta-assembly and show a reduction in coverage compared to other assemblies. Assemblies with shorter k-mer lengths also show a reduction in coverage but are not shown due to excessive fragmentation, which results in a large number of short contigs that cannot be confidently aligned. The extended transcript aligns to the full length of the Drosophila reference sequence with 83% nucleotide sequence conservation. The same pre-processing steps were used to generate the filtered reads for both the 25 k-mer and meta-assemblies, but the 25 k-mer assemblies did not undergo a secondary assembly to remove internal redundancy.
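The comparison metrics used above (total base pairs, contig count, mean and median length, N50) can be computed directly from contig lengths; a small illustrative helper follows, with made-up example lengths.

```python
"""Summary statistics for an assembly, given its contig lengths.
N50 is the length L such that contigs of length >= L contain at least
half of the assembled bases."""
from statistics import mean, median

def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

def assembly_stats(lengths):
    return {
        "contigs": len(lengths),
        "total_bp": sum(lengths),
        "mean_len": mean(lengths),
        "median_len": median(lengths),
        "N50": n50(lengths),
    }

# Example with made-up contig lengths:
print(assembly_stats([1200, 850, 430, 2100, 990, 300]))
```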
When applied to a single Velvet-Oases assembly, CAP3 reduces the number of contigs by 5.5%. The only species to see a reduction in the number of contigs after meta-assembly was T. biloba. We hypothesize that this reduction was due to either elimination of duplicates, consolidation of contigs, or both. Alignment and annotation of the Themira biloba transcriptome The T. biloba transcriptome was annotated using the D. melanogaster transcriptome as a reference. The pipeline aligned the T. biloba transcripts to D. melanogaster using the standalone BLAST package and a reference database available from FlyBase [49]. 11,008 transcripts from the meta-assembly were identified via BLAST as homologous to Drosophila sequences (44.9%). We found that the aligned T. biloba sequences were 82.3% conserved (mean sequence conservation taken from a subset of 500 BLAST hits), indicating that BLAST may not be sufficient to identify some sequences. Therefore, sequence divergence between the two species could explain why over half of the T. biloba contigs in the meta-assembly could not be annotated based on Drosophila. However, contig mis-assembly could also cause low annotation rates. To determine whether sequence divergence or mis-assembly was the cause, we annotated the T. biloba transcriptome with a more closely related Dipteran. Sepsidae is more closely related to Tephritidae than to the drosophilids [17], so it would be expected that higher sequence conservation exists between these two families, and that comparison to a tephritid would identify more transcripts. To determine whether such a comparison would identify more transcripts than Drosophila, a transcriptome was constructed using archived Illumina sequence reads from adult male and female Bactrocera dorsalis (SRR818498, SRR818496) [50]. Bi-directional alignments were created using T. biloba, B. dorsalis, and D. melanogaster. Contrary to our prediction, the alignments between T. biloba and B. dorsalis did not show more aligned contigs or greater sequence conservation than the alignments to Drosophila (Table 5). Figure 5. Performance of meta-assembly across species. A single assembly using Velvet-Oases with a k-mer length of 25 (light gray) was compared to the multiple k-mer length meta-assembly (black) for four species. Meta-assembly improved overall transcript length. The total assembled base pairs (A), transcript number (B), percent of reads used in contigs (C), and median transcript length (D) show improvement in transcript assembly. On average, B. dorsalis had around the same sequence similarity to T. biloba that Drosophila did, and the number of matching transcripts actually decreased, as did the average length of the matching region. The decrease in the number of matches may be due to the nature of the datasets. The Drosophila transcriptome includes multiple life stages and has a high level of coverage, whereas the B. dorsalis transcriptome only includes the adult stage [50]. Decreased representation could result in alignment of fewer genes even though the amount of sequence divergence is similar. In the end, annotation to B. dorsalis had the same limitations as Drosophila because of sequence divergence in the Sepsidae lineage. To determine whether comparison with other, more complete databases could increase the number of annotated contigs, the contigs from the T. biloba meta-assembly were compared to the SwissProt database. SwissProt allows comparison of translated contigs, thus reducing the problem posed by nucleotide divergence.
Additional transcripts were annotated through BLASTx against the SwissProt database, which had not been annotated through the comparison with D. melanogaster. An expectvalue cutoff of 0.00001 resulted in alignment of 16,705 (68.2%) of the translated sequences to sequences in the SwissProt database, which was a difference of 5,697 contigs (23.2%) compared to nucleotide BLAST against a single species. Analysis was performed to determine known protein domains in the Pfam database using the Trinity utility TransDecoder [51]. An additional 221 contigs that had not been annotated were found to contain Pfam domains increasing the number of contigs identified by at least one searched database to 16,926 (69.1%). The number of annotated contigs compares favorably to other de novo assemblies [52][53][54]. The high percentage of annotated transcripts indicates that the contigs generated through meta-assembly are true transcripts, and not mis-assembled contigs. Further improvements in annotation likely require greater coverage through increased sequencing depth and a larger sequence data set. To determine ontology, T. biloba transcripts were submitted for KEGG pathway analysis resulting in 5,080 contigs with identified functions. Many developmentally import pathways involved in cell signaling such as the notch pathway were near complete (Additional file 3: Table S2). Transcripts were assigned gene ontologies, which were then grouped by function ( Figure 6) to determine whether the transcripts recovered from the meta-assembly were representative of the main cellular processes. A broad range of functional groups were present in the assembly, indicating that transcripts representing many different kinds of proteins were recovered. The distribution of contig gene ontologies is similar to those found in the distribution of GO terms found in the Drosophila transcriptome and other de novo transcriptome assembly efforts [34,55,52,54]. Bioinformatics and data management The de novo assembly of a transcriptome presents multiple challenges including computational requirements and accurate assembly of low abundance transcripts. Here we present a pipeline for de novo assembly that uses cloud computing and a multiple k-mer meta-assembly processes. The strength of a distributed, cloud-based approach to transcriptome assembly and sequence analysis is its versatility and the low initial investment in data processing [23,56]. We have found the primary advantage of hosting data analysis off-site is the ability to construct a low-cost, scalable network on demand with unrestricted access. The increased computing power is particularly important when generating multiple de novo assemblies, as is done in our meta-assembly processes. Meta-assembly processes that use a multiple k-mer length approach have been previously demonstrated to significantly improve the quality of transcriptomes [24,57]. The pipeline presented here incorporates an extensive and automated toolkit for parsing and trimming sequence reads prior to multiple k-mer assembly and the generation of a meta-assembly that best represents the transcripts available to be recovered. Automated sequence analysis tools are included to provide graphical views of read quality, transcript length and coverage per assembly, transcript extension, annotation information of sequence homologs from various databases, and the presence of unique sequences, and the assembly parameters used to recover the sequences. 
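For the annotation step described earlier (BLASTx of the meta-assembly contigs against SwissProt with an expect-value cutoff of 0.00001), a minimal sketch using the NCBI BLAST+ command line might look like this; the database name and file paths are assumptions, not the authors' configuration.

```python
"""Illustrative BLASTx annotation step: translated search of contigs against
a SwissProt protein database, keeping hits below an E-value cutoff.
Assumes NCBI BLAST+ is installed and a protein BLAST database has been
built (e.g., with makeblastdb); names and paths are placeholders."""
import subprocess
import csv

EVALUE_CUTOFF = "1e-5"   # corresponds to the 0.00001 cutoff described above

def run_blastx(contigs="meta_assembly.fasta", db="swissprot", out="blastx_hits.tsv"):
    subprocess.run([
        "blastx",
        "-query", contigs,
        "-db", db,
        "-evalue", EVALUE_CUTOFF,
        "-outfmt", "6",          # tabular output
        "-max_target_seqs", "1", # best hit per contig
        "-out", out,
    ], check=True)
    return out

def annotated_contigs(tabular_hits):
    # Column 1 of BLAST tabular output is the query (contig) identifier.
    with open(tabular_hits) as handle:
        return {row[0] for row in csv.reader(handle, delimiter="\t") if row}

if __name__ == "__main__":
    hits = run_blastx()
    print(len(annotated_contigs(hits)), "contigs received a SwissProt annotation")
```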
Increasing transcriptome quality with meta-assembly We validated our pipeline by assembling three previously published transcriptomes and the transcriptome of the sepsid fly T. biloba, which was sequenced as part of this project. Transcriptome quality was compared between our pipeline, which employs a meta-assembly process, and the standard practice of using a single 25 bp k-mer length for assembly. In all four species, the meta-assembly increased the number of base pairs assembled, increased the length of contigs, increased the percentage of reads used in the contigs, and recovered a greater number of transcripts than the 25 k-mer assembly. The increased quality of meta-assembly was further investigated in the T. biloba transcriptome by tracking the improvement in a candidate list of low-abundance transcripts. For a subset of these transcripts, RT-PCR confirmed that meta-assembly increased the length of the transcripts by connecting fragments recovered from multiple k-mer length assemblies. Conclusions We have assembled transcript sequences from the complete life cycle of T. biloba, a sepsid fly which exhibits primary gain of a novel trait, and identified many developmentally important genes. These transcripts represent the first large-scale sequencing that has been performed within the family Sepsidae, a large and diverse family with over 250 species distributed globally. Sepsid flies have been used for taxonomic and behavioral studies and have diverse genital and appendage morphologies, but the lack of sequence data has made genetic investigation of these traits difficult [58,9,4,8,11,2]. While many orthologous genes retain their functions between dipterans, large regions of gene sequence are often not conserved [18,59]. The T. biloba transcriptome and many of the genes we have identified will be used for future RNA-Seq studies of comparative gene expression, knockdown, and in situ hybridization experiments. Sequences for many developmentally important genes and transcription factors of interest were obtained, including members of the HOX family and those associated with embryonic and morphological development. In addition, many sequences for genes involved in cell signaling pathways such as notch and torso signaling were recovered. Sequence for the T. biloba doublesex ortholog, as well as several transcripts associated with mating and courtship in Drosophila, was also recovered, which aids investigation of the sepsid sex allocation pathway and the genetic mechanisms behind behavioral traits associated with the sepsid novel appendage. As more genomes become available, researchers using non-model organisms will have the opportunity to assemble RNA-seq reads to reference genomes of closely related species. Assembling to a reference, when available, yields a higher quality transcriptome than de novo assembly, and this result is robust to low levels of genomic divergence between species [42,44]. Although these findings are encouraging, those working with non-model organisms should proceed with caution [60]. Figure 6. Gene Ontology classification of the T. biloba transcriptome. Gene Ontology (GO) was assigned to all contigs from the T. biloba meta-assembly. Gene ontologies were grouped into three main categories and 42 sub-categories. Contigs are grouped by the percentage of sequences that match a specific GO term within three major groups. The most abundant transcripts represent the sub-categories containing structural proteins and regulators of various cellular processes. Based on in silico studies,
assembling to a reference that has a sequence divergence greater than 15% decreases the number of transcripts recovered compared to de novo assembly [44]. In our case, assembling the T. biloba reads to the Drosophila genome would have been inappropriate because the 17% sequence divergence between the two species would have resulted in decreased transcript recovery compared to de novo assembly. Choosing a closer relative based on phylogeny does not necessarily solve the problem, as our additional comparison to B. dorsalis revealed. Because the amount of sequence divergence between a non-model organism and its closely related reference species is rarely known prior to high-throughput sequencing, de novo assembly remains a powerful tool for recovering transcripts in non-model organisms. T. biloba colony Cultures of T. biloba were maintained in an incubator at 25C with a 16:8 hour light-dark cycle in overlapping generations. Larvae were raised in Petri dishes and fed agar mixed with soy infant formula (ProSobee) covered with a 1.0 cm layer of cow dung. Adults were fed honey mixed with water and provided with cow dung to facilitate mating and egg-laying. Tissue collection Tissue was collected from embryos, 3 rd instar larva, and 48-72 hour pupa. During collection all material was stored at −80°C in RNALater, prior to shipment to the sequencing facility. Embryos were collected regularly and washed several times with an egg wash solution of 0.12 M NaCl and 0.01% Triton X-100 to remove dung. The eggs were dechorionated using a 3% bleach solution. Third instar, wandering-phase larvae were everted in PEM buffer (100 mM PIPES-disodium salt, 2.0 mM EGTA, 1.0 mM MgSO 4 anhydrous, pH 7.0) to facilitate RNA extraction. Prior to pupation, gut-purged larvae were allowed to wander on moistened filter paper to remove dung and particulates. Pupae were staged to 48-72 hours before collection. All samples were stored in RNALater overnight at 4°C and transferred to −80°C for storage prior to sequencing. Sequencing RNA isolation, library cDNA preparation, and 454 sequencing were performed by the University of Arizona Genetics Core (UAGC). Prior to sequencing, the cDNA was screened using a 2100 Bioanalyzer (Agilent Technologies). Sequencing was done on a GS FLX Titanium (454 Life Sciences). Embryos, larvae, and pupae were sequenced separately, creating 3 separate pools of sequence. Approximately 1.48 million reads total with an average length of 400 bp were generated. Assembly and annotation Pre-processing of the sequence reads generated from T. biloba was performed using the FastX Toolkit [38]. Adaptor sequences were removed using the trimmer function. The quality filter removed sequences in which 80% of the base pairs had a Phred score of less than 20. The remaining 1.01 million reads were then converted to FASTA. The FastX collapsing tool was used to consolidate redundant sequences to reduce the amount of memory needed during the assembly process. An assembly was performed using the collapsed reads to determine the reduction in memory required for assembly (Additional file 4: Figure S2). We determined that although collapsing the reads significantly reduced the memory requirements for assembly, it was not necessary for the data sets described in this publication and may lead to a reduction in coverage. FastQC (v0.10.1) was used to assess the quality of reads before and after pre-processing [37]. 
Paired-end assemblies with K-mer lengths of 19 to 29 were generated using Velvet-Oases with an insert size of 200 bp [26,27]. Trinity was used to generate an additional paired-end assembly [47,48]. The resulting contigs were aligned to Drosophila using standalone BLAST to identify developmentally important transcripts. A BLAST alignment was then performed using each individual assembly as the query and the pooled contigs from all other assemblies as the database to identify contigs unique to each assembly. The assemblies were then concatenated and the pool of 138,954 transcripts was reassembled using CAP3 [28].
8,045.8
2014-03-12T00:00:00.000
[ "Biology", "Computer Science" ]
Reducing the Standard Deviation in Multiple-Assay Experiments Where the Variation Matters but the Absolute Value Does Not

When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured and the aim is to replicate the whole set again for different trials or assays, despite the efforts for a near-equal design, scientists might often obtain quite different measurements. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of a very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (or, in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can definitely be applied to any problem that conforms to the described structure and requirements, and in any quantitative scientific field that deals with data subject to uncertainty.

The last two columns of tab. 1 correspond to the average µ of the three assays for each system, and the associated standard deviation (or error) σ. The units are irrelevant for the discussion. The different systems can be anything, from cities to DNA sequences, from people to chunks of metal. They can even be the same system at different times if the quantity x is expected to evolve in some reproducible manner. The differences among the assays could be due to the experiments being performed by the same researcher on different days, by different (but in principle equally skilled) researchers using the same equipment, by the same researcher using different (but in principle equally accurate) equipment, by different (but in principle equally proficient) laboratories, etc., as long as we expect different assays to yield the same results.

The problem
The standard deviation from the systems' averages across assays in tab. 1 is comparable to the average itself for most of the systems. Only for a couple of them are you 'lucky' enough that the former is about half the value of the latter. You check the corresponding chart in fig. 1, and you see the same despairing situation. The error bars are humongous, and this will render your results statistically insignificant if you perform, for example, a Student's t-test to check whether or not the observed differences are real.

The requirements
If two requirements about your problem and your results are met, you can apply the correction method in the next section to reduce the standard deviations and increase the statistical significance of your data:
• The absolute value of x for each given system is not really very important to you. What you are really interested in properly measuring is the variation in x from one system to another. For example, whether or not you could safely claim that the value of x corresponding to system 1 is larger than, and approximately double, that associated with system 5.
• Even if you seem to be measuring huge differences in absolute value across the different assays, the 'tendency' of the variations is similarly captured in all of them. You can check this by looking at a graphical representation of your data such as the one in fig.
2, or you could be safer and check for high linear correlation between each pair of assays by performing a number of least-squares linear fits (placing one assay on the x-axis and the other one in the pair on the y-axis).

The correction method
Perform a linear fit for every pair of assays k and l, with k ≠ l and k, l = 1, . . . , M, placing assay k on the x-axis and assay l on the y-axis. For this, compute the averages, A_k and A_l, of the measured quantity across systems for each assay in the pair: A_k = (1/N) Σ_{j=1}^{N} x_j^k. Of course, A_l is obtained just by changing k to l in this expression. Compute the standard deviation in assay k (and, analogously, in assay l): σ_k = sqrt( (1/N) Σ_{j=1}^{N} (x_j^k − A_k)^2 ). Compute the covariance between the values in assay k and those in assay l: C_kl = (1/N) Σ_{j=1}^{N} (x_j^k − A_k)(x_j^l − A_l). Take these quantities to the slope b_kl and the intercept a_kl, b_kl = C_kl / σ_k^2 and a_kl = A_l − b_kl A_k, defining the best-fit line y = a_kl + b_kl x. The results of these fits allow you to check for the required high linear correlation mentioned in the previous section. This is done by computing Pearson's correlation coefficient for every pair of assays k and l: r_kl = C_kl / (σ_k σ_l). In the first three columns of tab. 2, we can see that r_kl is close to 1.0 for all pairs in tab. 1. We can therefore suspect that our correction method will produce sizable improvements in the data. Now, for each k compute the average correlation coefficient r_k of the k-th assay with respect to all the rest of them, r_k = (1/(M − 1)) Σ_{l ≠ k} r_kl, and pick the one with the largest r_k as the reference assay, i.e., the one against which all the other assays will be corrected. The values for the example in tab. 1 are presented in the last column of tab. 2. We see that, in this case, the reference assay is the second one.

          assay 1   assay 2   assay 3   r_k
assay 1   0.000     0.947     0.852     0.900
assay 2   -         0.000     0.881     0.914
assay 3   -         -         0.000     0.867
Table 2: Pearson's correlation coefficient r_kl between each pair of assays in tab. 1. The last column displays the average r_k of each assay with respect to all the rest of them.

Finally, denote by f the value of the index k that corresponds to the reference assay (f = 2 in our example), and use x̃_j^l for the corrected value associated with the original quantity x_j^l (system j, assay l). Now, the correction formula reads like this: In order to produce the whole set of corrected results, you should apply this for all assays l ≠ f, with l = 1, . . . , M, and for all systems with the index j = 1, . . . , N. In tab. 3 and fig. 3, we show the numerical values and the bar charts for the corrected results obtained from the example in tab. 1 through the application of the correction in eq. (8); the corrected per-system averages μ̃ and standard deviations σ̃ are listed there. As you can see, the standard deviations, as well as the associated statistical significance, have greatly improved. If your data fits into the basic setup and satisfies the requirements, you will probably see a similar improvement. Enjoy! All the formulae needed to compute the linear fits, the inter-assay correlation coefficients, as well as the correction in eq. (8) are provided in this section, and they are very simple. The reader can choose to implement them in any spreadsheet of her liking, or she can use the Perl scripts we have written for the occasion and which can be found here. For more information, check the complete article at: http://arxiv.org/abs/1309.2462
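A compact Python sketch of the procedure is given below. The Pearson correlations and the choice of reference assay follow the text directly; since the explicit form of eq. (8) does not survive in this record, the final correction step shown here assumes that each non-reference assay is mapped onto the reference assay f through the line fitted with assay l on the x-axis and assay f on the y-axis, which is one natural reading of the method. Consult the complete article for the authors' exact formula; the example numbers are made up.

```python
"""Sketch of the inter-assay correction described above.
data[l][j] = value measured for system j in assay l (M assays, N systems).
The eq. (8) step is an assumption: each assay l != f is projected onto the
reference assay f via the least-squares line fitted with l on x and f on y."""
import numpy as np

def pearson_matrix(data):
    M = len(data)
    r = np.ones((M, M))
    for k in range(M):
        for l in range(M):
            if k != l:
                r[k, l] = np.corrcoef(data[k], data[l])[0, 1]
    return r

def reference_assay(r):
    M = r.shape[0]
    # average correlation of each assay with all the others (excludes itself)
    r_avg = (r.sum(axis=1) - 1.0) / (M - 1)
    return int(np.argmax(r_avg)), r_avg

def correct(data):
    data = np.asarray(data, dtype=float)
    r = pearson_matrix(data)
    f, r_avg = reference_assay(r)
    corrected = data.copy()
    for l in range(data.shape[0]):
        if l == f:
            continue
        # least-squares fit: assay l on the x-axis, reference assay f on the y-axis
        b, a = np.polyfit(data[l], data[f], 1)
        corrected[l] = a + b * data[l]          # assumed form of eq. (8)
    return corrected, f, r_avg

if __name__ == "__main__":
    # three assays (rows) x five systems (columns), made-up numbers
    example = [[6.1, 3.0, 4.2, 1.1, 2.9],
               [9.8, 5.1, 7.0, 2.0, 4.6],
               [4.9, 2.6, 3.5, 0.9, 2.4]]
    corrected, f, r_avg = correct(example)
    print("reference assay:", f + 1, "average correlations:", np.round(r_avg, 3))
    print("corrected means:", np.round(corrected.mean(axis=0), 3))
    print("corrected std devs:", np.round(corrected.std(axis=0), 3))
```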
1,687
2013-09-10T00:00:00.000
[ "Mathematics" ]
Expression of Francisella pathogenicity island protein intracellular growth locus E (IglE) in mammalian cells is involved in intracellular trafficking, possibly through microtubule organizing center Abstract Francisella tularensis is the causative agent of the infectious disease tularemia and is designated a category A bioterrorism agent. The type VI secretion system encoded by the Francisella pathogenicity island (FPI) is necessary for intracellular growth; however, the functions of FPI proteins are largely unknown. In this study, we found that the FPI protein intracellular growth locus E (IglE) showed a unique localization pattern compared to other FPI proteins. Deleting iglE from Francisella tularensis subsp. novicida (F. novicida) decreased intracellular growth. Immunoprecipitation and pull‐down assays revealed that IglE was associated with β‐tubulin. Additionally, GFP‐fused IglE colocalized with microtubule organizing centers (MTOCs) in 293T cells. The iglE deletion mutant was transferred with dynein toward MTOCs and packed into lysosome‐localizing areas. Conversely, the wild‐type F. novicida exhibited intracellular growth distant from MTOCs. In addition, IglE expressed in 293T cells colocalized with dynein. These results suggest that IglE helps to prevent dynein‐ and MTOC‐mediated intracellular trafficking in host cells to inhibit the transport of F. novicida toward lysosomes. 2012; Eshraghi et al., 2016), the molecular mechanisms underlying the functions of these proteins are poorly understood. In this study, we carried out an expression analysis of FPI proteins and found that the F. novicida intracellular growth locus E (IglE) shows unique localization and is associated with microtubule-organizing centers (MTOCs) to modulate membrane trafficking for the intracellular growth of the bacterium. | Bacterial strains and culture conditions F. novicida U112 was obtained from the Pathogenic Microorganism Genetic Resource Stock Center (Gifu University). F. novicida was cultured aerobically at 37°C in a chemically defined medium (CDM) (Nagle, Anderson, & Gary, 1960) or in a brain-heart infusion broth (Becton, Dickinson and Company, Franklin Lakes, NJ) supplemented with cysteine (BHIc) (Mc Gann et al., 2010) containing 1.5% agar (Wako Laboratory Chemicals, Osaka, Japan). Table S1 shows the primer sets and templates used to construct plasmids used in this study. PCR was carried out using KOD-Plus-Neo polymerase (Toyobo, Osaka, Japan), and ligation was performed with the Ligation High Ver. 2 kit (Toyobo) or the In-Fusion HD Cloning Kit (Takara Bio, Otsu, Japan). Plasmids were transformed into F. novicida by cryotransformation (Pavlov, Mokrievich, & Volkovoy, 1996). | Plasmid construction, transformation, and transfection Briefly, bacterial cells were suspended in transfer buffer (0.2 M MgSO4, 0.1 M Tris acetate [pH 7.5]) with 1 μg of plasmid DNA. The bacterial cells were frozen in liquid nitrogen, thawed at room temperature, and then cultured in CDM. Then, bacterial cells were collected and cultured on BHIc plates containing 50 μg/ml kanamycin or 2.5 μg/ml chloramphenicol. Plasmids were transferred into cell lines with FuGENE HD (Promega, Madison, WI) according to the instruction manual. | Construction of F. tularensis iglE and dotU mutants The ΔdotU mutants of F. 
novicida were generated by group II intron insertion using the TargeTron Gene Knockout System (Sigma-Aldrich) modified for Francisella species (Rodriguez, Yu, Davis, Arulanandam, & Klose, 2008), as described previously (Uda et al., 2014). Briefly, 2 μg of each pKEK-DotU was transformed, and bacterial cells were precultured in CDM at 30°C for 6 hr. Then, the cells were collected and cultured on BHIc plates containing 50 μg/ml kanamycin at 30°C. Mutagenesis was confirmed using PCR to detect the 915-bp insertion. To remove the plasmids, mutants cells were further cultured on BHIc plates without antibiotics at 37°C. The ΔiglE mutant was constructed by homologous recombination. To make the suicide vector pFRSU, the promoter region of sacB and antibiotic-resistance marker of pSR47s (Merriam, Mathur, Maxfield-Boumil, & Isberg, 1997) were replaced with the bfr promoter of pNVU1 (Uda et al., 2014) and the kanamycin-resistance gene kanR from pKEK1170 (Rodriguez et al., 2008), respectively. The upstream and downstream regions of iglE (1.5 kb each) were cloned into the SalI site of pFRSU to make pFRSU-IglE. One microgram of pFRSU-IglE was transformed into F. novicida, and the cells were cultured on BHIc plates containing 50 μg/ml kanamycin. Isolated bacteria were cultured in CDM without antibiotics for 6 hr and then plated on BHIc plates containing 5% sucrose. The deletion of the iglE gene was confirmed by PCR. | Immunoblotting To generate an antiserum against IglE, rabbits were immunized with the C+TGKNEFPLDKDIKD peptide. The peptide and antiserum were prepared by Eurofine Genetics (Tokyo, Japan). F. novicida was cultured in CDM containing 5% KCl to an OD 595 of 0.25. The culture supernatants were desalted to remove KCl with Amicon Ultra filters (Merck Millipore, Billerica, MA) and concentrated fivefold. Samples were mixed with SDS sample buffer (Thermo Fisher Scientific, Waltham, MA). Fifteen microliters of sample was loaded onto a NuPAGE Novex 4%-12% Bis-Tris Gel (Thermo Fisher Scientific) and separated by SDS-PAGE. Separated proteins were transferred onto a polyvinyl difluoride (PVDF) membrane (Merck Millipore). The membrane was treated with anti-IglE antiserum (1:100) or anti-PdpC antibody (Chong et al., 2008;Uda et al., 2014), a generous gift from Dr. J. Celli, followed by the treatment with HRP-conjugated anti-rabbit IgG (ab6717, 1:20,000; Abcam, Cambridge, UK). Proteins were detected with the ECL Prime Western Blotting System (GE Healthcare, Buckinghamshire, UK) and the LAS-4000 mini Imaging System (Fujifilm Life Science, Tokyo, Japan). | Intracellular growth assay THP-1 cells (4 × 10 5 cells/well) were preincubated in a 48-well tissue culture plate with 100 nM of phorbol myristate acetate (PMA) for 48 hr. F. novicida strains were added at a multiplicity of infection of 1. These plates were centrifuged for 10 min at 300× g and incubated for 30 min at 37°C. Then, THP-1 cells were washed twice with RPMI1640 medium, and extracellular bacteria were killed with a 60-min gentamicin (50 μg/ml) treatment. To measure the intracellular growth of F. novicida, the THP-1 cells were incubated in fresh medium at 37°C for the indicated time, washed three times with phosphate-buffered saline (PBS), and then lysed with 0.1% Triton X-100 in CDM. Colony-forming units were determined by serial dilution on BHIc plates. Samples were suspended in SDS sample buffer and heated at 70°C for 15 min. 
Fifteen microliters of sample was loaded onto a NuPAGE Novex 4%-12% Bis-Tris Gel, and the proteins that co-precipitated with IglE-GFP were separated by SDS-PAGE followed by staining with Quick-CBB PLUS (Wako Laboratory Chemicals). Peptide mass fingerprinting was performed according to the method of Yoshino et al. (Yoshino, Oshiro, Tokunaga, & Yonezawa, 2004). Briefly, the CBB-stained bands obtained from SDS-PAGE were excised and sliced into small strips. To remove the CBB, the strips were incubated in 50% methanol and 5% acetic acid for 1 hr and washed twice with water. The strips were dehydrated by incubation with 100% acetonitrile. To alkylate the proteins, the strips were incubated at 60°C for 1 hr with 10 mM dithiothreitol in 100 mM ammonium hydrogen carbonate followed by treatment at room temperature for 30 min with 55 mM iodoacetamide (Nacalai Tesque, Kyoto, Japan) in 100 mM ammonium hydrogen carbonate. In-gel trypsin digestion was performed by incubating a gel with 10 μg/ml trypsin (Promega). The digested peptides were eluted using 5% formic acid (Wako Laboratory Chemicals). The peptides were desalted with ZipTip C18 Pipette Tips (Merck Millipore), spotted onto sample plates, and mounted with saturated α-cyano-4-hydroxycinnamic acid (Nacalai Tesque) in 50% acetonitrile and 0.1% trifluoroacetic acid. An Autoflex mass spectrometer (Bruker Daltonics, Billerica, MA, USA) was used to measure the molecular weights of peptides. The reference database was searched by MASCOT software (Science Matrix, London, UK). | Pull-down assay The iglE gene from F. tularensis subsp. tularensis SCHU P9 was cloned | Statistical analysis One-way analysis of variance was used to compare the results expressed as the means and standard deviations. Differences between the groups were determined by multiple comparisons using the Bonferroni/Dunnett method. The differences were considered significant at p values < 0.01. | IglE shows unique localization Among the FPI proteins, 8 of them (PdpA, IglE, VgrG, IglF, IglI, IglJ, PdpE, and IglC) are considered to be secreted into the host cytosol (Bröms et al., 2012). To elucidate the function of FPI proteins secreted by F. novicida, we performed a comprehensive expression analysis of these eight FPI proteins. We assayed the localization of these proteins in host cells by expressing GFP fused to FPI proteins in 293T cells ( Figure S1). Among the 8 FPI proteins, only IglE showed unique localization-large foci near the nucleus and dot foci ( Figure 1a). The fractions of cells containing large or dot foci were 46.7 ± 4.2% and 86.0 ± 5.3%, respectively (Figure 1b). | Intracellular replication of F. novicida depends on IglE secretion We focused on IglE as an effector protein and assessed its characteristics because of its unique localization. First, we constructed an iglE deletion (ΔiglE) mutant of F. novicida by homologous recombination. This mutation decreased the intracellular growth of F. novicida in THP-1 cells, but complementation with wild-type iglE restored the intracellular growth ( Figure 2a). These results indicated that IglE was important for intracellular growth. To examine whether IglE protein was secreted into the culture medium, we cultured F. novicida in medium containing high concentrations of potassium chloride to mimic the host intracellular environment. Although the secretion of PdpC, an FPI protein, was not observed ( Figure S2), we observed IglE secretion in the wild-type and IglE overexpressing strains. 
However, the amount of IglE secreted from the wild-type bacteria was limited. Importantly, the secretion was not observed in the dotU deletion mutant (ΔdotU, a gene encoding part of the T6SS apparatus), and the secretion decreased in the ΔdotU mutant that overexpressed IglE (Figures 2b and S2a). | IglE is associated with β-tubulin and MTOCs To identify IglE-binding proteins, GFP-fused IglE was expressed in 293T cells, and GFP protein was precipitated with GFP-binding protein-conjugated agarose beads. The co-precipitated proteins were separated by SDS-PAGE. A protein of approximately 50 kDa was co-precipitated with GFP-fused IglE (Figure 3a). We identified this 50 kDa protein by matrix-assisted laser-desorption ionization/time-of-flight mass spectrometry as β-tubulin. To confirm the interaction between IglE and β-tubulin, a pull-down assay was conducted. HA-tagged β-tubulin and Myc-tagged IglE were expressed in 293T cells and precipitated with anti-HA antibody using protein G agarose beads. Co-precipitated Myc-tagged IglE was detected by immunoblotting with an anti-Myc antibody. | IglE disturbs membrane trafficking through MTOCs In general, phagosomes are transported toward MTOCs on microtubules. Lysosomes are also present around MTOC and fuse with phagosomes (Blocker, Griffiths, Olivo, Hyman, & Severin, 1998) (Figure 4d, white arrow). In cells with large F I G U R E 3 Intracellular growth locus E (IglE) is associated with β-tubulin and microtubule organizing centers (MTOCs). (a) 293T cells were transfected with pAcGFP-C1-IglE and incubated for 48 hr. The cells were disrupted, and IglE-GFP protein was precipitated with the GFP-Trap. Co-precipitated proteins were separated by SDS-PAGE and extracted from the gel. The extracted protein was examined with matrixassisted laser-desorption ionization/time-of-flight mass spectrometry. (b) Binding of IglE and β-tubulin was confirmed with a pull-down assay. 293T cells were transfected with pCMV-HA-Nβ-tubulin and pCMV-Myc-N-IglE. β-tubulin was precipitated by anti-HA antibody-conjugated agarose beads and separated by SDS-PAGE. Co-precipitated IglE-Myc protein was detected by immunoblotting for anti-Myc antibody (IB: Myc). (c) 293T cells were transfected with pAcGFP-C1-IglE and incubated for 48 hr. β-tubulin was stained using Alexa555-conjugated antiβ-tubulin antibody. Scale bar: 20 μm. (d) 293T cells were transfected with pAcGFP-C1-IglE and incubated for 48 hr. MTOCs were stained using anti-pericentrin antibody and Alexa555-conjugated anti-rabbit antibody. To observe the detailed localization of GFP-fused IglE, the sensitivity of detection for GFP was decreased compared to the experiment in Figure 1a. Scale bar: 20 μm. (e) THP-1 cells were infected with Francisella novicida harboring pOM5-GFP at multiplicity of infection = 1 and treated with 50 μg/ml of gentamicin. At 24 hr postinfection, the cells were treated with anti-pericentrin antibody and stained with Alexa555-conjugated anti-rabbit IgG. Scale bar: 40 μm. (f) The number of cells with MTOCs surrounded by F. novicida was calculated for F. novicida-infected cells. *p < 0.01 foci near the nucleus, dextran particles were aggregated or not ingested in 65.6 ± 8.6% of cells (Figure 4e). These results suggest that IglE disturbs membrane trafficking in host cells by interacting with MTOCs, allowing F. novicida to escape from fusion with lysosomes. 
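As a rough illustration of the group comparisons described in the statistical analysis section above (one-way analysis of variance at p < 0.01 followed by multiple comparisons), a minimal sketch is given below. The CFU values are hypothetical placeholders rather than data from this study, and Bonferroni-corrected pairwise t-tests stand in for the Bonferroni/Dunnett procedure named in the Methods.

    # Minimal sketch: one-way ANOVA followed by Bonferroni-corrected pairwise
    # tests at alpha = 0.01. CFU counts below are hypothetical placeholders.
    from itertools import combinations
    from scipy import stats

    groups = {
        "wild-type":    [5.2e6, 4.8e6, 5.5e6],   # hypothetical CFU/well
        "delta-iglE":   [3.1e5, 2.7e5, 3.4e5],
        "complemented": [4.6e6, 4.9e6, 4.4e6],
    }

    f_stat, p_anova = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

    pairs = list(combinations(groups, 2))
    alpha = 0.01 / len(pairs)          # Bonferroni-adjusted threshold
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: p = {p:.3g}, significant = {p < alpha}")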
| IglE inhibits dynein-based membrane trafficking Because the minus-end of microtubules is located near the MTOC, IglE may inhibit membrane trafficking toward the minus-end of microtubules. To confirm this hypothesis, we assayed the localization of the motor protein dynein, which moves toward the minus-end of microtubules. In 293T cells-expressing mCherry-fused IglE, dynein was localized to the tips of cells and colocalized with the dot foci of IglE (Figure 5a). Among IglE-expressing cells, 78.3 ± 8.8% of cells contained dynein colocalized with IglE (Figure 5b). In 293T cellscontaining mCherry-fused IglE, IglE also colocalized with pericentrin | D ISCUSS I ON The molecular mechanisms underlying the actions of effector proteins from Francisella species are poorly understood. In this study, we revealed the function of the T6SS effector protein IglE, which associated with MTOCs and modulated the membrane trafficking of host cells. To identify how IglE affects the intracellular environment, we analyzed IglE-binding proteins and found that β-tubulin and pericentrin were associated with IglE. Pericentrin is a component of the γ-tubulin ring complex (γ-TuRC). The γ-TuRC is the functional core of the MTOC and acts as a scaffold or a template for α/β-tubulin dimers (Conduit et al., 2015). In THP-1 cells, the ΔiglE mutant of F. novicida was transported to MTOCs, where lysosomes are located. Some bacterial effectors were reported to interact with microtubules or MTOCs and control the intracellular trafficking of bacteria. In Salmonella enterica, some T3SS effectors such as SseF or SseG interact with microtubules to form Salmonella-induced filaments (Müller, Chikkaballi, & Hensel, 2012). In Pseudomonas aeruginosa, the T6SS effector VgrG2b associates with the γ-TuRC, facilitating internalization of the bacterium (Sana et al., 2015). The Chlamydia trachomatis T3SS effector Francisella species are ingested through phagocytosis and grow into the cytosol or in autophagosomes after they escape from phagosomes (Checroun et al., 2006;Chong et al., 2012;Clemens et al., 2004Clemens et al., , 2005Golovliov et al., 2003). For the maturation of phagosomes, endosomes and autophagosomes, motor-based migration on microtubules toward the cell center, where lysosomes are located, is necessary (Blocker et al., 1998;Harrison, Bucci, Vieira, Schroer, & Grinstein, 2003;Kimura, Noda, & Yoshimori, 2008). In THP-1 cells infected with the F. novicida ΔiglE mutant, bacteria accumulated with dynein around MTOCs where the lysosome marker LAMP-1 was located, whereas the wild-type bacteria were located far away from MTOCs. Together, these results imply that IglE associates with MTOCs to inhibit the trafficking of F. novicida-containing phagosomes on microtubules and their subsequent fusion with lysosomes. This may allow F. novicida to escape from phagosomes and grow in the cytosol or in autophagosomes. With a microscopic observation, IglE seemed to colocalize with MTOCs or dynein. Although IglE co-precipitated with β-tubulin, MTOC proteins such as pericentrin or dynein were not detected by the co-precipitation assay with IglE. This may due to the abundance of β-tubulin in cell cytosol. In addition, IglE expression in 293T cells failed to inhibit the depolymerization or repolymerization in the presence or absence of colchicine, an inhibitor of tubulin polymerization (data not shown). Therefore, the direct target of IglE is still unclear. 
However, IglE is expected to associate with β-tubulin through MTOC or dynein because the colocalization of IglE and β-tubulin was not observed with microscopy. Several reports indicate that IglE is a bacterial lipoprotein with a signal peptide and is located at the bacterial membrane where it forms part of the T6SS apparatus (Bröms, Meyer, & Sjöstedt, 2017;Nguyen, Gilley, Zogaj, Rodriguez, & Klose, 2014;Robertson, Child, Ingle, Celli, & Norgard, 2013). In addition, we observed limited secretion of IglE into the culture medium. Thus, our results suggest F I G U R E 5 Intracellular growth locus E (IglE) colocalizes with dynein and disturbs dynein-based membrane trafficking. (a) 293T cells were transfected with pmCherry-C1-IglE and incubated for 48 hr. Dynein was stained using antidynein antibody and FITC-conjugated anti-mouse antibody. Fluorescent images were merged with differential interference contrast microscopy images. Microtubule organizing center (MTOC) was stained with anti-pericentrin antibody and FITC-conjugated anti-rabbit IgG. Arrowheads indicate colocalization of IglE and MTOC. Scale bar: 40 μm. (b) The number of cells containing dynein or pericentrin colocalized with IglE was calculated for IglE-expressing cells. (c) THP-1 cells were infected with Francisella novicida harboring pOM5-mCherry at multiplicity of infection = 1 and treated with 50 μg/ml of gentamicin. At 24 hr after infection, the cells were treated with anti-dynein antibody and stained with FITC-conjugated anti-mouse IgG. Scale bar: 40 μm. (d) The number of cells containing F. novicida colocalized to the dynein-positive area was calculated for F. novicida-infected cells. *p < 0.01 that IglE may not be the so-called effector protein. In fact, the secretion of other effectors, such as IglC, is inhibited in the ΔiglE mutant (Bröms et al., 2017). Therefore, in the case of the ΔiglE mutant, we could not rule out the possibility that its transportation to MTOCs and the inhibition of its intracellular growth were due to other effectors or a combination of IglE and other effectors. However, under our condition of IglE overexpression in host cells, IglE was associated with MTOCs and disturbed intracellular trafficking. These results at least suggest that IglE may have effector-like functions when bacteria escape into the cytosol and IglE is exposed, or if bacteria are lysed and IglE is released into the cytosol. Because IglE had a signal peptide and was still secreted in the ΔdotU mutant that overexpressed IglE, IglE may be secreted by an unknown Sec protein-related secretion system. Indeed, IglE is detected in the cytosol during Francisella infection (Bröms et al., 2012). IglE could be a therapeutic target for treating Francisella infections or a biological tool for inhibiting intracellular trafficking because our results suggest that IglE affects MTOCs and modulates intracellular trafficking. ACK N OWLED G EM ENTS We thank Dr. Jean Celli for kindly supplying anti-PdpC polyclonal antibody. This study was supported by JSPS KAKENHI Grant Number 15K08463. We acknowledge help with mass spectrometry measurements, which were supported by the general support team at Osaka City University funded by the Grantin-Aid for Scientific Research on Innovative Area "Harmonized Supramolecular Motility Machinery and Its Diversity" (25117501) directed by Makoto Miyata. CO N FLI C T O F I NTE R E S T All contributing authors declare no conflicts of interest.
4,285.4
2018-07-05T00:00:00.000
[ "Biology", "Medicine" ]
Thermodynamics as Control Theory I explore the reduction of thermodynamics to statistical mechanics by treating the former as a control theory: a theory of which transitions between states can be induced on a system (assumed to obey some known underlying dynamics) by means of operations from a fixed list. I recover the results of standard thermodynamics in this framework on the assumption that the available operations do not include measurements which affect subsequent choices of operations. I then relax this assumption and use the framework to consider the vexed questions of Maxwell's demon and Landauer's principle. Throughout I assume rather than prove the basic irreversibility features of statistical mechanics, taking care to distinguish them from the conceptually distinct assumptions of thermo-dynamics proper. Introduction Thermodynamics is misnamed.The name implies that it stands alongside the panoply of other "X-dynamics" theories in physics: Classical dynamics, quantum dynamics, electrodynamics, hydrodynamics, chromodynamics and so forth [1].But what makes these theories dynamical is that they tell us how systems of a certain kind-classical or quantum systems in the abstract, or charged matter and fields, or fluids, or quarks and gluons, or whatever-evolve if left to themselves.The paradigm of a dynamical theory is a state space, giving us the possible states of the system in question at an instant, and a dynamical equation, giving us a trajectory (or, perhaps, a family of trajectories indexed by probabilities) through each state that tells us how that state will evolve under the dynamics. Thermodynamics basically delivers on the state space part of the recipe: Its state space is the space of systems at equilibrium.But it is not in the business of telling us how those equilibrium states evolve if left to themselves, except in the trivial sense that they do not evolve at all: That is what equilibrium means, after all.When the states of thermodynamical systems change, it is because we do things to them: We put them in thermal contact with other systems, we insert or remove partitions, we squeeze or stretch or shake or stir them.And the laws of thermodynamics are not dynamical laws like Newton's: They concern what we can and cannot bring about through these various interventions. There is a general name for the study of how a system can be manipulated through external intervention: Control theory.Here again a system is characterised by its possible states, but instead of a dynamics being specified once and for all, a range of possible control actions is given.The name of the game is to investigate, for a given set of possible control actions, the extent to which the system can be controlled: That is, the extent to which it can be induced to transition from one specified state to another.The range of available transitions will be dependent on the forms of control available; the more liberal a notion of control, the more freedom we would expect to have to induce arbitrary transitions. 
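To make the control-theoretic framing above concrete, here is a toy sketch (not from the paper) of the basic question: given a fixed list of allowed operations, which states can a system be induced to reach from a given starting state? The states and operations below are hypothetical stand-ins; in the thermodynamic case the states are equilibrium states and the operations are the interventions discussed later.

    # Toy reachability computation: which states can be reached from an
    # initial state using only the allowed operations? States and operations
    # here are hypothetical illustrations.
    from collections import deque

    def reachable(initial, operations):
        seen, queue = {initial}, deque([initial])
        while queue:
            state = queue.popleft()
            for op in operations:
                nxt = op(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # Example: integer states, and operations that never decrease the state
    # (a crude stand-in for entropy-non-decreasing interventions), capped at 5.
    ops = [lambda s: min(s + 1, 5), lambda s: s]
    print(sorted(reachable(0, ops)))   # [0, 1, 2, 3, 4, 5]: everything "upward" is reachable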
This conception of thermodynamics is perfectly applicable to the theory understood phenomenologically: That is, without any consideration of its microphysical foundations.However, my purpose in this paper is instead to use the control-theory paradigm to explicate the relation between thermodynamics and statistical mechanics.That is: I will begin by assuming the main results of non-equilibrium statistical mechanics and then consider what forms of control theory they can underpin.In doing so I hope to clarify both the control-theory perspective itself and the reduction of thermodynamics to statistical mechanics, as well as providing some new ways to get insight into some puzzles in the literature: Notably, those surrounding Maxwell's Demon and Landauer's Principle. In Sections 2 and 3, I review the core results of statistical mechanics (making no attempt to justify them).In Sections 4 and 5 I introduce the general idea of a control theory and describe two simple examples: Adiabatic manipulation of a system and the placing of systems in and out of thermal contact.In Sections 6-8, I apply these ideas to construct a general account of classical thermodynamics as a control theory, and demonstrate that a rather minimal form of thermodynamics possesses the full control strength of much more general theories; I also explicate the notion of a one-molecule gas from the control-theoretic (and statistical-mechanical) perspective).In the remainder of the paper, I extend the notion of control theory to include systems with feedback, and demonstrate in what senses this does and does not increase the scope of thermodynamics. I develop the quantum and classical versions of the theory in parallel, and fairly deliberately flit between quantum and classical examples.When I use classical examples, in each case (I believe) the discussion transfers straightforwardly to the quantum case unless noted otherwise.The same is probably true in the other direction; if not, no matter, given that classical mechanics is of (non-historical) interest in statistical physics only insofar as it offers a good approximation to quantum mechanics. Statistical-Mechanical Preliminaries Statistical mechanics, as I will understand it in this paper, is a theory of dynamics in the conventional sense: It is in the business of specifying how a given system will evolve spontaneously.For the sake of definiteness, I lay out here exactly what I assume to be delivered by statistical mechanics. 
(1) The systems are classical or quantum systems, characterised inter alia by a classical phase space or quantum-mechanical Hilbert space Hamiltonian H[V I ] which may depend on one or more external parameters V I (in the paradigm case of a gas in a box, the parameter is volume).In the quantum case I assume the spectrum of the Hamiltonian to be discrete; in either case I assume that the possible values of the parameters comprise a connected subset of R N and that the Hamiltonian depends smoothly on them.(2) The states are probability distributions over phase space, or mixed states in Hilbert space.(Here I adopt what is sometimes called a Gibbsian approach to statistical mechanics; in [2], I defend the claim that this is compatible with a view of statistical mechanics as entirely objective.)Even in the classical case the interpretation of these probabilities is controversial; sometimes they are treated as quantifying an agent's state of knowledge, sometimes as being an objective feature of the system; my own view is that the latter is correct (and that the probabilities are a classical limit of quantum probabilities; cf [3]).In the quantum case the interpretation of the mixed states merges into the quantum measurement problem, an issue I explore further in [4].For the most part, though, the results of this paper are independent of the interpretation of the states.(3) Given two systems, their composite is specified by the Cartesian product of the phase spaces (classical case) or by the tensor product of the Hilbert spaces (quantum case), and by the sum of the Hamiltonians (either case).( 4) The Gibbs entropy is a real function of the state, defined in the classical case as and in the quantum case as (5) The dynamics are given by some flow on the space of states.In Hamiltonian dynamics, or unitary quantum mechanics, this would be the flow generated by Hamilton's equation or the Schrödinger equation from the Hamiltonian H[V I ], under which the Gibbs entropy is a constant of the motion; in statistical mechanics, however, we assume only that the flow (a) is entropy-non-decreasing, and (b) conserves energy, in the sense that the probability given by the state to any given energy is invariant under the flow.(6) For any given system there is some time, the equilibration timescale, after which the system has evolved to that state which maximises the Gibbs entropy subject to the conservation constraint above [5].Now, to be sure, it is controversial at best how statistical mechanics delivers all this.In particular, we have good reason to suppose that isolated (classical or quantum) systems ought really to evolve by Hamiltonian or unitary dynamics, according to which the Gibbs entropy is constant and equilibrium is never achieved; more generally, the statistical-mechanical recipe I give here is explicitly time-reversal-noninvariant, whereas the underlying dynamics of the systems in question have a time reversal symmetry. There are a variety of responses to offer to this problem, among them: • Perhaps no system can be treated as isolated, and interaction with an external environment somehow makes the dynamics of any realistic system non-Hamiltonian. 
• Perhaps the probability distribution (or mixed state) needs to be understood not as a property of the physical system but as somehow tracking our ignorance about the system's true state, and the increase in Gibbs entropy represents an increase in our level of ignorance.• Perhaps the true dynamics is not, after all, Hamiltonian, but incorporates some time-asymmetric correction. My own preferred solution to the problem (and the one that I believe most naturally incorporates the insights of the "Boltzmannian" approach to statistical mechanics) is that the state ρ should not be interpreted as the true probability distribution over microstates, but as a coarse-grained version of it, correctly predicting the probabilities relevant to any macroscopically manageable process but not correctly tracking the fine details of the microdynamics, and that the true signature of statistical mechanics is the possibility of defining (in appropriate regimes, under appropriate conditions, and for appropriate timescales) autonomous dynamics for this coarse-grained distribution that abstract away from the fine-grained details.The time asymmetry of the theory, on this view, arises from a time asymmetry in the assumptions that have to be made to justify that coarse-graining. But from the point of view of understanding the reduction of thermodynamics to statistical mechanics, all this is beside the point.The most important thing to realise about the statistical-mechanical results I give above is that manifestly they are correct: The entire edifice of statistical mechanics (a) rests upon them; and (b) is abundantly supported by empirical data.(See [6] for more on this point.)There is a foundational division of labour here: the question of how this machinery is justified given the underlying mechanics is profoundly important, but it can be distinguished from the question of how thermodynamics relates to statistical mechanics.Statistical mechanics is a thoroughly successful discipline in its own right, and not merely a foundational project to shore up thermodynamics. 
Characterising Statistical-Mechanical Equilibrium The "state which maximises the Gibbs entropy" can be evaluated explicitly.If the initial state ρ has a definite energy U , it will evolve to the distribution with the largest Gibbs entropy for that energy, and it is easy to see that (up to normalisation) in the classical case this is the uniform distribution on the hypersurface H[V I ](x) = U , and that in the quantum case it is the projection onto the eigensubspace of H[V I ] with energy U .Writing ρ U to denote this state, it follows that in general the equilibrium state achieved by a general initial ρ will be that statistical mixture of ρ U that gives the same probability to each energy as ρ did.In the classical case this is where P r(U ) = ρδ(H − U ); where the sum is over the distinct eigenvalues U i of the Hamiltonian, Pr(U i ) = Tr(ρΠ i ), and Π i projects onto the energy U i subspace.I will refer to states of this form (quantum or classical) as generalised equilibrium states.We can define the density of states V(U ) at energy U for a given Hamiltonian H in the classical case as follows: We take V(U )δU to be the phase-space volume of states with energies between U and U + δU .We can use the density of states to write the Gibbs entropy of a generalised equilibrium state explicitly as In the quantum case it is instead where Dim(U i ) is the dimension of the energy-U i subspace.Normally, I will assume that the quantum systems we are studying have sufficiently close-spaced energy eigenstates and sufficiently well-behaved states that we can approximate this expression by the classical one (defining VδU as the total dimension of eigensubspaces with energies between U and U + δU , and Pr(U )δU as the probability that the system has one of the energies in the range (U, U + δU )).Now, suppose that the effective spread ∆U over energies of a generalised equilibrium state around its expected energy U 0 is narrow enough that the Gibbs entropy can be accurately approximated simply as the logarithm of V(U 0 ).States of this kind are called microcanonical equilibrium states, or microcanonical distributions (though the term is sometimes reserved for the ideal limit, where Pr(U ) is a delta function at U 0 , so that ρ(x) = (1/V(U 0 ))δ(H(x) − U 0 )).A generalised equilibrium state can usefully be thought of as a statistical mixture of microcanonical distributions. If ρ is a microcanonical ensemble with respect to H[V I ] for particular values of the parameters V I , in general it will not be even a generalised equilibrium state for different values of those parameters.However, if close-spaced eigenvalues of the Hamiltonian remain close-spaced even when the parameters are changed, ρ will equilibrate into the microcanonical distribution.In this case, I will say that the system is parameter-stable; I will assume parameter stability for most of the systems I discuss. 
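The displayed formulas elided above can presumably be reconstructed from the surrounding definitions as follows (the energy-bin width δU and additive constants are suppressed). The Gibbs entropy of point (4), the generalised equilibrium states, and their entropies read:

S_G(ρ) = −∫ dx ρ(x) ln ρ(x)   (classical),   S_G(ρ) = −Tr(ρ ln ρ)   (quantum);

ρ_eq(x) = ∫ dU Pr(U) ρ_U(x),   with Pr(U) = ∫ dx ρ(x) δ(H(x) − U)   (classical);

ρ_eq = Σ_i Pr(U_i) Π_i / Dim(U_i),   with Pr(U_i) = Tr(ρ Π_i)   (quantum);

S_G = ∫ dU Pr(U) ln[V(U)/Pr(U)]   (classical),   S_G = Σ_i Pr(U_i) ln[Dim(U_i)/Pr(U_i)]   (quantum).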
A microcanonical distribution is completely characterised (up to details of the precise energy width δU and the spread over that width) by its energy U and the external parameters V I .On the assumption that V(U ) is monotonically increasing with U for any values of the parameters (and, in the quantum case, that the system is large enough that we can approximate V(U ) as continuous) we can invert this and regard U as a function of Gibbs entropy S and the parameters.This function is (one form of) the equation of state of the system: For the ideal monatomic gas with N mass-m particles, for instance, we can readily calculate that and hence (for N 1) which can be inverted to get U in terms of V and S. The microcanonical temperature is then defined as (for the ideal monatomic gas, it is 2U/3N ).At the risk of repetition, it is not (or should not be!) controversial that these probability distributions are empirically correct as regards predictions of measurements made on equilibrated systems, both in terms of statistical averages and of fluctuations around those averages.It is an important and urgent question why they are correct, but it is not our question. Adiabatic Control Theory Given this understanding of statistical mechanics, we can proceed to the control theory of systems governed by it.We will develop several different control theories, but each will have the same general form, being specified by: • A controlled object, the physical system being controlled. • A set of control operations that can be performed on the controlled object. • A set of feedback measurements that can be made on the controlled object. • A set of control processes, which are sequences of control operations and feedback measurements, possibly subject to additional constraints and where the control operation performed at a given point may depend on the outcomes of feedback measurements made before that point. Our goal is to understand the range of transitions between states of the controlled object that can be induced.In this section and the next I develop two extremely basic control theories intended to serve as components for thermodynamics proper in Section 6. The first such theory, adiabatic control theory, is specified as follows: • The controlled object is a statistical-mechanical system which is parameter-stable and initially at microcanonical equilibrium.• The control operations consist of (a) smooth modifications to the external parameters of the controlled object over some finite interval of time; (b) leaving the controlled object alone for a time long compared to its equilibration timescale.• There are no feedback measurements: The control operations are applied without any feedback as to the results of previous operations.• The control processes are sequences of control operations ending with a leave-alone operation. Because of parameter stability, the end state is guaranteed to be not just at generalised equilibrium but at microcanonical equilibrium.The control processes therefore consist of moving the system's state around in the space of microcanonical equilibrium states.Since for any value of the parameters the controlled object's evolution is entropy-nondecreasing, one result is immediate: The only possible transitions are between states x, y with S G (y) ≥ S G (x).The remaining question is: Which such transitions are possible? 
To answer this, consider the following special control processes: A process is quasi-static if any variations of the external parameters are carried out so slowly that the systems can be approximated to any desired degree of accuracy as being at or extremely close to equilibrium throughout the process. A crucial feature of quasi-static processes is that the increase in Gibbs entropy in such a process is extremely small, tending to zero as the length of the process tends to infinity.To see this [7], suppose for simplicity that there is only one external parameter whose value at time t is V (t).If the expected energy of the state at time t is U (t), there will be a unique microcanonical equilibrium state ρ eq [U (t), V (t)] for each time determined by the values U (t) and V (t) of the expected energy and the parameter at that time.The full state ρ(t) at that time can be written as where the requirement that the change is quasi-static imposes the requirement that δρ(t) is small.The system's dynamics is determined by some equation of the form with the linear operator L depending on the value of the parameter.By the definition of equilibrium it follows that if the quasi-static process takes overall time T and brings about change ∆ρ in the state, we have δρ ∼ ∆ρ/T, i. e. , the typical magnitude of δρ(t) scales with 1/T for fixed overall change.Now the rate of change of Gibbs entropy in such a process is given by which may be expanded around ρ e q(t) ≡ ρ eq [U (t), V (t)] to give But since ρ eq maximises Gibbs entropy for given expected energy, and since the time evolution operator and so rate of entropy increase vanishes to first order in δρ.From ( 13) it follows that total entropy increase scales at most like 1/T and so can be made arbitrarily small [8]. (To see intuitively what is going on here, consider a very small change V → V + δV made suddenly to a system initially at equilibrium.The sudden change leaves the state, and hence the Gibbs entropy, unchanged.The system then regresses to equilibrium on a trajectory of constant expected energy.But since the change is very small, and since the equilibrium state is an extremal state of entropy on the constant-expected-energy surface, to first order in δV the change in entropy in this part of the process is also zero.) To summarise: Quasi-static adiabatic processes are isentropic: They do not induce changes in system entropy.What about non-quasi-static adiabatic processes?Well, if at any point in the process the system is not at (or very close to) equilibrium, by the baseline assumptions of statistical mechanics it follows that its entropy will increase as it evolves.So an adiabatic control process is isentropic if quasi-static, entropy-increasing otherwise. In at least some cases, the result that quasi-static adiabatic processes are isentropic does not rely on any explicit equilibration assumption.To be specific: If the Hamiltonian has the form then the adiabatic theorem of quantum mechanics [9] tells us that if the parameters are changed sufficiently slowly from λ 0 I to λ 1 I then (up to phase, and to an arbitrarily high degree of accuracy) the Hamiltonian dynamics will cause |ψ i (λ 0 I ) to evolve to |ψ i (λ 1 I ) ; hence, in this regime the dynamics takes microcanonical states to microcanonical states of the same energy. 
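The displays dropped from the quasi-static argument above can presumably be reconstructed, in the text's notation, as:

ρ(t) = ρ_eq[U(t), V(t)] + δρ(t);

dρ/dt = L[V(t)] ρ(t),   with L a linear, parameter-dependent operator;

δρ ∼ Δρ/T   for a process of total duration T;

dS_G/dt = −∫ (dρ/dt)(1 + ln ρ),   which, expanded about ρ_eq(t), has no term first order in δρ (since ρ_eq maximises S_G at fixed expected energy and the flow conserves that energy), so dS_G/dt = O(δρ²) ∼ O(1/T²) and the total increase over the process is O(1/T);

H[λ_I] = Σ_i E_i(λ_I) |ψ_i(λ_I)⟩⟨ψ_i(λ_I)|   (the form assumed for the adiabatic theorem).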
In any case, we now have a complete solution to the control problem.By quasi-static processes we can move the controlled object's state around arbitrarily on a given constant-entropy hypersurface; by applying a non-quasi-static process we can move it from one such hypersurface to a higher-entropy hypersurface.So the condition that the final state's entropy is not lower than the initial state's is sufficient as well as necessary: Adiabatic control theory allows a transition between equilibrium states iff it is entropy-nondecreasing. A little terminology: The work done on the controlled object under a given adiabatic control process is just the change in its energy, and is thus the same for any two control processes that induce the same transition, and it has an obvious physical interpretation: The work done is the energy cost of inducing the transition by any physical implementation of the control theory.(In phenomenological treatments of thermodynamics it is usual to assume some independent understanding of "work done", so that the observation that adiabatic transitions from x to y require the same amount of work however they are performed becomes contentful, and is one form of the First Law of Thermodynamics; from our perspective, though, it is just an application of conservation of energy.) Following the conventions of thermodynamics, we write dW for a very small quantity of work done during some part of a quasi-static control process.We have where the derivative is taken with all values of V J other than V I held constant and the last step implicitly defines the generalised pressures.(In the case where V I just is the volume, P I δV is the energy cost to compress the gas by an amount δV , and hence is just the ordinary pressure.) Thermal Contact Theory Our second control theory, thermal contact theory, is again intended largely as a tool for the development of more interesting theories.To develop it, suppose that we have two systems initially dynamically isolated from one another, and that we introduce a weak interaction Hamiltonian between the two systems.Doing so, to a good approximation, will leave the internal dynamics of each system largely unchanged but will allow energy to be transferred between the systems.Given our statistical-mechanical assumptions, this will cause the two systems (which are now one system with two almost-but-not-quite-isolated parts) to proceed, on some timescale, to a joint equilibrium state.When two systems are coupled in this way, we say that they are in thermal contact.Given our assumption that the interaction Hamiltonian is small, we will assume that the equilibration timescales of each system separately are very short compared to the joint equilibration timescale, so that the interaction is always between systems which separately have states extremely close to the equilibrium state. 
The result of this joint equilibration can be calculated explicitly.If two systems each confined to a narrow energy band are allowed to jointly equilibrate, the energies of one or other may end up spread across a wide range.For instance, if one system consists of a single atom initially with a definite energy E and it is brought in contact with a system of a great many such atoms, its post-equilibration energy distribution will be spread across a large number of states.However, for the most part we will assume that the microcanonical systems we consider are not induced to transition out of microcanonical equilibrium as a consequence of joint equilibration; systems with this property I call thermally stable. There is a well-known result that characterises systems that equilibrate with thermally stable systems which is worth rehearsing here.Suppose two systems have density-of-state functions V 1 , V 2 and are initially in microcanonical equilibrium with total energy U .The probability of the two systems having energies U 1 , U 2 is then and so the probability of the first system having energy U 1 is Assuming that the second system is thermally stable, we express the second term on the right hand side in terms of its Gibbs entropy and expand to first order around U (the assumption that the second system's energy distribution is narrow tells us that higher terms in the expansion will be negligible): Since the partial derivative here is just the inverse of the microcanonical temperature T of the second system, the conclusion is that which is recognisable as the canonical distribution at canonical temperature T .In any case, so long as we assume thermal stability then systems placed into thermal contact may be treated as remaining separately at equilibrium as they evolve towards a joint state of higher entropy. We can now state thermal contact theory: • The controlled object is a fixed, finite collection of mutually isolated thermally stable statistical mechanical systems.• The available control operations are (i) placing two systems in thermal contact; (ii) breaking thermal contact between two systems; (iii) waiting for some period of time.• There are no feedback measurements. • The control processes are arbitrary sequences of control operations. Given the previous discussion, thermal contact theory shares with adiabatic control theory the feature of inducing transitions between systems at equilibrium, and we can characterise the evolution of the systems during the control process entirely in terms of the energy flow between systems.The energy flow between two bodies in thermal contact is called heat.(A reminder: Strictly speaking, the actual amount of heat flow is a probabilistic quantity very sharply peaked around a certain value.) The quantitative rate of heat flow between two systems in thermal contact will of course depend inter alia on the precise details of the coupling Hamiltonian between the two systems.But in fact the direction of heat flow is independent of these details.For the total entropy change (in either the microcanonical or canonical framework) when a small quantity of heat dQ flows from system A to system B is But since the thermodynamical temperature T is just the rate of change of energy with entropy while external parameters are held constant, this can be rewritten as So heat will flow from A to B only if the inverse thermodynamical temperature of A is lower than that of B. 
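The elided displays in the joint-equilibration and heat-flow arguments above are presumably the standard ones:

Pr(U_1, U_2) ∝ V_1(U_1) V_2(U_2) δ(U_1 + U_2 − U);

Pr(U_1) ∝ V_1(U_1) V_2(U − U_1) = V_1(U_1) exp[S_2(U − U_1)];

S_2(U − U_1) ≈ S_2(U) − U_1 (∂S_2/∂U) = S_2(U) − U_1/T,   so   Pr(U_1) ∝ V_1(U_1) e^(−U_1/T);

dS_total = −dQ (∂S_A/∂U_A)|_V + dQ (∂S_B/∂U_B)|_V = dQ (1/T_B − 1/T_A).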
In most cases (there are exotic counter-examples, notably in quantum systems with bounded energy) thermodynamical temperature is positive, so that this can be restated as: Heat will flow from A to B only if the thermodynamical temperature of A is greater than that of B. For simplicity I confine attention to this case. If we define two systems as being in thermal equilibrium when placing them in thermal contact does not lead to any heat flow between them, then we have the following thermodynamical results: (1) Two systems each in thermal equilibrium with a third system are at thermal equilibrium with one another; hence, thermal equilibrium is an equivalence relation.(The Zeroth Law of Thermodynamics). (2) There exist real-valued empirical temperature functions which assign to each equilibrium system X a temperature t(X) such that heat flows from X to Y when they are in thermal contact iff t(X) > t(Y ). (2) trivially implies (1); in phenomenological approaches to thermodynamics the converse is often asserted to be true, but of course various additional assumptions are required to make this inference. For our purposes, though, both are corollaries of statistical mechanics, and "empirical temperatures" are just monotonically increasing functions of thermodynamical temperature. Returning to control theory, we can now see just what transitions can and cannot be achieved via thermal contact theory.Specifically, the only transitions that can be induced are the heating and cooling of systems, and a system can be heated only if there is another system available at a higher temperature.The exact range of transitions thus achievable will depend on the size of the systems (if I have bodies at temperatures 300 K and 400 K, I can induce some temperature increase in the first, but how much will depend on how quickly the second is cooled). A useful extreme case involves heat baths: Systems at equilibrium assumed to be so large that no amount of thermal contact with other systems will appreciably change their temperature (and which are also assumed to have no controllable parameters, not that this matters for thermal control theory).The control transitions available via thermal contact theory with heat baths are easy to state: Any system can be cooled if its temperature is higher than some available heat bath, or heated if it is cooler than some such bath. Thermodynamics We are now in a position to do some non-trivial thermodynamics.In fact, we can consider two different thermodynamic theories that can thought of as two extremes.To be precise: Maximal no-feedback thermodynamics is specified like this: • The controlled object is a fixed, finite collection of mutually isolated statistical mechanical systems, assumed to be both thermally and parameter stable.• The control operations are (i) arbitrary entropy-non-decreasing transition maps on the combined states of the system; (ii) leaving the systems alone for a time longer than the equilibration timescale of each system.• There are no feedback measurements. • The control processes are arbitrary sequences of control operations terminating in operation (ii) (that is, arbitrary sequences after which the systems are allowed to reach equilibrium). 
The only constraints on this control theory are that control operations do not actually decrease phase-space volume, and that the control operations to apply are chosen once-and-for-all and not changed on the basis of feedback.By contrast, here is minimal thermodynamics, obtained simply by conjoining thermal contact theory and adiabatic control theory: • The controlled object is a fixed, finite collection of mutually isolated statistical mechanical systems, assumed to be both thermally and parameter stable.• The control operations are (i) moving two systems into or out of thermal contact; (ii) making smooth changes in the parameters determining the Hamiltonians of one or more system over some finite interval of time; (iii) leaving the systems alone for a time longer than the equilibration timescale of each system.• There are no feedback measurements. • The control processes are arbitrary sequences of control operations terminating in operation (iii) (that is, arbitrary sequences after which the systems are allowed to reach equilibrium). The control theory for maximal thermodynamics is straightforward.The theory induces transitions between equilibrium states; no such transition can decrease entropy; transitions are otherwise totally arbitrary.So we can induce a transition x → y between two equilibrium states x, y iff S(x) ≤ S(y).It is a striking feature of thermodynamics that under weak assumptions minimal thermodynamics has exactly the same control theory, so that the apparently much greater strength of maximal no-feedback thermodynamics is illusory. To begin a demonstration, recall that in the previous sections we defined the heat flow into a system as the change in its energy due to thermal contact, and the work done on a system as the change in its energy due to modification of the parameters.By decomposing any control process into periods of arbitrarily short length-in each of which we can linearise the total energy change as the change that would have occurred due to parameter change while treating each system as isolated plus the change that would have occurred due to entropy-increasing evolution while holding the dynamics fixed-and summing the results, we can preserve these concepts in minimal thermodynamics.For any system, we then have where U is the expected energy, Q is the expected heat flow into the system, and W is the expected work done on the system.This result also holds for any collection of systems, up to and including the entire controlled object; in the latter case, Q is zero and W can again be interpreted as the energy cost of performing the control process.The reader will probably recognise this result as another form of the First Law of Thermodynamics.In this context, it is a fairly trivial result: Its content, insofar as it has any, is just that there is a useful decomposition of energy changes by their various causes.In phenomenological treatments of thermodynamics the First Law gets physical content via some independent understanding of what "work done" is (in the axiomatic treatment of [10], for instance, it is understood in terms of the potential energy of some background weight).But the real content of the First Law from that perspective is that there is a thermodynamical quantity called energy which is conserved.In our microphysical-based framework the conservation of (expected) energy is a baseline assumption and does not need to be so derived. 
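The display elided above is presumably just the bookkeeping identity

ΔU = Q + W,

with U the expected energy, Q the expected heat flow into the system, and W the expected work done on it.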
The concept of a quasi-static transition also generalises from adiabatic control theory to minimal thermodynamics.If dU is the change in system energy during an extremely small step of such a control process, we have and, given that quasi-static adiabatic processes are entropy-conserving, we can identify the first term as the expected work done on the system in this small step and the second as the expected heat flow into the system.Using our existing definitions we can rewrite this as yet another form of the First Law, but it is important to recognise that from our perspective, the expression itself has no physical content and is just a result of partial differentiation.The content comes in the identification of the first term as work and the second as heat. Putting our results so far together, we know that (1) Any given system can be induced to make any entropy-nondecreasing transition between states. (2) Any given system's entropy may be reduced by allowing it to exchange heat with a system at a lower temperature, at the cost of increasing that system's temperature by a greater amount.(3) The total entropy of the controlled object may not decrease. The only remaining question is then: Which transitions between collections of systems that do not decrease the total entropy can be induced by a combination of ( 1) and ( 2)?So far as I know there is no general answer to the question.However, we can answer it fully if we assume that one of the systems is what I will call a Carnot system: A system such that for any value of S, ( ∂U ∂S )| V I takes all positive values on the constant-S hypersurface.The operational content of this claim is that a Carnot system in any initial equilibrium state can be controlled so as to take on any temperature by an adiabatic quasi-static process. The ideal gas is an example of a Carnot system: Informally, it is clear that its temperature can be arbitrarily increased or decreased by adiabatically changing its volume.More formally, from its equation of state (8) we have so that the energy can be changed arbitrarily through adiabatic processes, and the temperature is proportional to the energy.Of course, no gas is ideal for all temperatures and in reality the most we can hope for is a system that behaves as a Carnot system across the relevant range of temperatures. In any case, given a Carnot system we can transfer entropy between systems with arbitrarily little net entropy increase.For given two systems at temperatures T A , T B with T A > T B , we can (i) adiabatically change the temperature of the Carnot system to just below T A ; (ii) place it in thermal contact with the hotter system, so that heat flows into the Carnot system with arbitrarily little net entropy increase; (iii) adiabatically lower the Carnot system to a temperature just above T B ; (iv) place it in thermal contact with the colder system, so that (if we wait the right period of time) heat flows out of the Carnot system with again arbitrarily little net entropy increase.(In the thermodynamics literature this kind of process is called a Carnot cycle: Hence my name for Carnot systems.) We then have a complete solution to the control problem for minimal thermodynamics: The possible transitions of the controlled object are exactly those which do not decrease the total entropy of all of the components.So "minimal" thermodynamics is, indeed, not actually that minimal. 
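For reference, the quasi-static decomposition and the ideal-gas derivative elided above can presumably be written (with k_B = 1 and additive constants dropped) as:

dU = Σ_I (∂U/∂V_I)|_S dV_I + (∂U/∂S)|_{V_I} dS = −Σ_I P_I dV_I + T dS ≡ dW + dQ;

U(S, V) ∝ V^(−2/3) e^(2S/3N)   (ideal monatomic gas, from equation (8)),   so   (∂U/∂V)|_S = −(2/3) U/V   and   T = (∂U/∂S)|_V = 2U/3N.

Adiabatic volume changes can therefore take the energy, and hence the temperature, to any positive value, which is exactly the Carnot-system property.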
The major loophole in all this-feedback-will be discussed from Section 9 onwards.Firstly, though, it will be useful to make a connection with the Second Law of Thermodynamics in its more phenomenological form. The Second Law of Thermodynamics While "the Second Law of Thermodynamics" is often read simply as synonymous with "entropy cannot decrease", in phenomenal thermodynamics it has more directly empirical statements, each of which translates straightforwardly into our framework.Here's the first: The Second Law (Clausius statement): No sequence of control processes can induce heat flow Q from one system with an inverse temperature 1/T A , heat flow Q into a second system with a lower inverse temperature 1/T B , while leaving the states of all other systems unchanged. This is a generalisation of the basic result of thermal contact theory, and the argument is essentially the same: Any such process decreases the entropy of the first system by more than it increases the entropy of the second.Since the entropy of the remaining systems is unchanged (they start and end the process in the same equilibrium states), the process is overall entropy-decreasing and thus forbidden by the statistical-mechanical dynamics.If both temperatures are positive, the condition becomes the more familiar one that T B cannot be higher than T A . And the second: The Second Law (Kelvin statement): No sequence of control processes can induce heat flow Q from any one system with positive temperature while leaving the states of all other systems unchanged. By the conservation of energy, any such process must result in net work Q being generated; an alternative way to give the Kelvin version is therefore "no process can extract heat Q from one system and turn it into work while leaving the states of all other systems unchanged".In any case, the Kelvin version is again an almost immediate consequence of the principle that Gibbs entropy is non-decreasing: Since temperature is the rate of change of energy with entropy at constant parameter value, heat flow from a positive-temperature system must decrease its entropy, which (since the other systems are left unchanged) is again forbidden by the statistical-mechanical dynamics. In both cases the "leaving the states of all other systems unchanged" clause is crucial.It is trivial to move heat from system A to system B with no net work cost if, for instance, system C, a box of gas, is allowed to expand in the process and generate enough work to pay for the work cost of the transition.Thermodynamics textbooks often use the phrase "operating in a cycle" to describe this constraint, and it will be useful to cast that notion more explicitly in our framework. Specifically, let's define heat bath thermodynamics (without feedback) as follows: • The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) another finite collection of statistical-mechanical systems, the auxiliary object, containing at least one Carnot system, and whose initial states are unconstrained.• The control operations are (a) moving one or more systems in the auxiliary object into or out of thermal contact with other auxiliary-object systems and/or with one or more heat baths; (b) applying any desired smooth change to the parameters of the systems in the auxiliary object over some finite period of time; (c) inducing one or more systems in the auxiliary object to evolve in an arbitrary entropy-nondecreasing way.• There are no feedback measurements. 
• A control process is an arbitrary sequence of control operations. In this framework, a control process is cyclic if it leaves the state of the auxiliary object unchanged.The Clausius and Kelvin statements are then, respectively, that no cyclic process can have as its sole effect on the heat baths (a) that net heat Q flows from one bath to one with a higher temperature at no cost in work, and (b) that net heat Q from one bath is converted into work.And again, these are fairly immediate consequences of the fact that entropy is nondecreasing. But perhaps we don't care about cyclic processes?What does it matter what the actual final state of the auxiliary system is, provided the process works?We can make this intuition more precise like this: A control process delivers a given outcome repeatably if (i) we can perform it arbitrarily often using the final state of each process as the initial state of the next, and (ii) the Hamiltonian of the auxiliary object is the same at the end of each process as at the beginning.The Clausius statement, for instance, is now that no process can repeatably cause any quantity Q of heat to flow from one heat bath to another of higher temperature at no cost in work and with no heat flow between other heat baths. This offers no real improvement, though. In the Clausius case, any such heat flow is entropy-decreasing on the heat baths: Specifically, if they have temperatures T A and T B with T A > T B , a transfer of heat Q between them leads to an entropy increase of Q/(T A − T B ).So the entropy of the auxiliary object must increase by at least this much.By conservation of energy the auxiliary object's expected energy must be constant in this process.But the entropy of the auxiliary object has a maximum for given expected energy [11] and so this can be carried out only finitely many times.A similar argument can readily be given for the Kelvin statement. I pause to note that we can turn these entirely negative constraints on heat and work into quantitative limits in a familiar way by using our existing control theory results.(Here I largely recapitulate textbook thermodynamics.)Given two heat baths having temperatures T A , T B with T A > T B , and a Carnot system initially at temperature T A , the Carnot cycle to transfer heat from the colder system to the hotter is: (1) Adiabatically transition the Carnot system to the lower temperature T B . (2) Place the Carnot system in thermal contact with the lower-temperature heat bath, and modify its parameters quasi-statically so as to cause heat to flow from the heat bath to the system.(That is, carry out modifications which if done adiabatically would decrease the system's temperature.)Do so until heat Q B has been transferred to the system.(3) Adiabatically transition the Carnot system to temperature T A .(4) Place the Carnot system in thermal contact with the higher-temperature heat bath, and return its parameters quasi-statically to their initial values. At the end of this process the Carnot system has the same temperature and parameter values as at the beginning and so will be in the same equilibrium state; the process is therefore cyclic, and the entropy and energy of the Carnot system will be unchanged.But the entropy of the system is changed only by the heat flow in steps 2 and 4. 
If the heat flow out of the system in step 4 is Q_A, then the entropy changes in those steps are respectively +Q_B/T_B and −Q_A/T_A, so that Q_A/Q_B = T_A/T_B. By conservation of energy the net work done on the Carnot system in the cycle is W = Q_A − Q_B, and we have the familiar result

W = Q_B (T_A − T_B)/T_B

for the amount of work required by a Carnot cycle-based heat pump to move a quantity of heat from a lower- to a higher-temperature heat bath. Since the process consists entirely of quasi-static modifications of parameters (and the making and breaking of thermal contact), it can as readily be run in reverse, giving us the equally familiar formula for the efficiency of a heat engine: 1 − T_B/T_A. And since (on pain of violating the Kelvin statement) all reversible heat engines have the same efficiency (and all irreversible ones a lower efficiency), this result is general and not restricted to Carnot cycles.

The One-Molecule Carnot System

The Carnot systems used in our analysis so far have been assumed to be parameter-stable, thermally stable systems that can be treated via the microcanonical ensemble (and thus, in effect, to be macroscopically large). But in fact, this is an overly restrictive conception of a Carnot system, and it will be useful to relax it. All we require of such a system is that for any temperature T it possesses states which will transfer heat to and from temperature-T heat baths with arbitrarily low entropy gain, and that it can be adiabatically and quasi-statically transitioned between any two such states.

As I noted in Section 5, it is a standard result in statistical mechanics that a system of any size in equilibrium with a heat bath of temperature T is described by the canonical distribution for that temperature, having probability density at energy U proportional to e^(−U/T). There is no guarantee that adiabatic, quasi-static transitions preserve the form of the canonical ensemble, but any system where this is the case will satisfy the criteria required for Carnot systems. I call such systems canonical Carnot systems; from here on, Carnot systems will be allowed to be either canonical or microcanonical.

To get some insight into which systems are canonical Carnot systems, assume for simplicity that there is only one parameter V and that the Hamiltonian can be written in the form required by the adiabatic theorem:

H[V] = Σ_i E_i(V) |ψ_i(V)⟩⟨ψ_i(V)|.

Then if the system begins in canonical equilibrium at inverse temperature β, its initial state is

ρ = (1/Z) Σ_i e^(−βE_i(V)) |ψ_i(V)⟩⟨ψ_i(V)|.

By the adiabatic theorem, if V is altered sufficiently slowly to V′ while the system continues to evolve under Hamiltonian dynamics, it will evolve to

ρ′ = (1/Z) Σ_i e^(−βE_i(V)) |ψ_i(V′)⟩⟨ψ_i(V′)|.

This will itself be in canonical form if we can find β′ and Z′ such that

e^(−βE_i(V))/Z = e^(−β′E_i(V′))/Z′ for all i,

for which a necessary and sufficient condition is that

β′E_i(V′) − βE_i(V) is independent of i,

or equivalently that

E_i(V′) = (β/β′) E_i(V) + constant.

For an ideal gas, elementary quantum mechanics tells us that the energy of a given mode is inversely proportional to the volume of the box in which the gas is confined:

E_i(V) ∝ 1/V,

so that E_i(V′) = (V/V′) E_i(V) and the condition is satisfied with β′ = βV′/V. (Quick proof sketch: increasing the size of the box by a factor K decreases the gradient by that factor, and hence decreases the kinetic energy density by a factor K². Energy is energy density × volume.)
This result is independent of the number of particles in the gas and independent of any assumption that the gas spontaneously equilibrates. So in principle, even a gas with a single particle (the famous one-molecule gas introduced by [12]) is sufficient to function as a Carnot system. Any repeatable transfer of heat between heat baths via arbitrary entropy-non-decreasing operations on auxiliary systems can in principle be duplicated using only quasi-static operations on a one-molecule gas [13].

For the rest of the paper, I will consider how the account developed is modified when feedback is introduced. The one-molecule gas was introduced into thermodynamics for just this purpose, and will function as a useful illustration.

Feedback

What happens to the Gibbs entropy when a system with state ρ is measured? The classical case is easiest to analyse: Suppose phase space is decomposed into disjoint regions Γ_i and that

p_i = ∫_{Γ_i} ρ(x) dx.

Then p_i is the probability that a measurement of which phase-space region the system lies in will give result i. The state can be rewritten in the form

ρ(x) = Σ_i p_i ρ_i(x), where ρ_i(x) = ρ(x)/p_i if x ∈ Γ_i and is zero otherwise,

and by probabilistic conditionalisation, ρ_i is the state of the system after the measurement if result i is obtained. The expected value of the Gibbs entropy after the measurement ("p-m") is then

⟨S_G⟩_{p-m} = Σ_i p_i S_G(ρ_i).

But we have

S_G(ρ) = −∫ (Σ_i p_i ρ_i) ln(Σ_j p_j ρ_j),

which, since the ρ_i are mutually disjoint, reduces to

S_G(ρ) = −Σ_i p_i ∫ ρ_i ln(p_i ρ_i) = −Σ_i p_i ln p_i ∫ ρ_i − Σ_i p_i ∫ ρ_i ln ρ_i.

But the integral in the first term is just 1 (since the ρ_i are normalised) and the integral in the second term is −S_G(ρ_i). So we have

⟨S_G⟩_{p-m} = S_G(ρ) + Σ_i p_i ln p_i = S_G(ρ) − H(p),

where H(p) = −Σ_i p_i ln p_i is the Shannon entropy of the outcome distribution. That is, measurement may decrease entropy for two reasons. Firstly, pure chance may mean that the measurement happens to yield a post-measurement state with low Gibbs entropy. But even the average value of the post-measurement entropy decreases, and the level of the decrease is equal to the Shannon entropy of the probability distribution of measurement outcomes. A measurement process which has a sufficiently dramatic level of randomness could, in principle, lead to a very sharp decrease in average Gibbs entropy [14].

In the quantum case, the situation is slightly more complicated. We can represent the measurement by a collection of mutually orthogonal projectors Π_i summing to unity, and define measurement probabilities p_i = Tr(Π_i ρ) and post-measurement states ρ_i = Π_i ρ Π_i / p_i, but ρ is not necessarily equal to a weighted sum of these states. We can think of the measurement process, however, as consisting of two steps: A diagonalisation of ρ so that it does have this form (a non-selective measurement, or Lüders projection, in foundations-of-QM jargon), followed by a random selection of the state. Mathematically the first process increases Gibbs (i.e.
, von Neumann) entropy, and the second mathematically has the same form as the classical analysis, so that in the quantum case (42) holds as an inequality rather than as a strict equality.(Of course, how this process of measurement is to be interpreted-and even if it can really be thought of as measuring anything-is a controversial question and depends on one's preferred solution to the quantum measurement problem.)Insofar as "the Second Law of Thermodynamics" is taken just to mean "entropy never decreases", then, measurement is a straightforward counter-example, as has been widely recognised (see, for instance, [12,15], [16] [ch.5],or [17]) [18].From the control-theory perspective, though, the interesting content of thermodynamics is which transitions it allows and which it forbids, and the interesting question about feedback measurements is whether they permit transitions which feedback-free thermodynamics does not.Here the answer is again unambiguous: It does. To be precise: Define heat bath thermodynamics with feedback as follows: • The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) another finite collection of statistical-mechanical systems, the auxiliary object, containing at least one Carnot system, and whose initial states are unconstrained.• The control operations are (a) moving one or more systems in the auxiliary object into or out of thermal contact with other auxiliary-object systems and/or with one or more heat baths; (b) applying any desired smooth change to the parameters of the systems in the auxiliary object over some finite period of time; (c) inducing one or more systems in the auxiliary object to evolve in an arbitrary entropy-nondecreasing way.• Arbitrary feedback measurements may be made. • A control process is an arbitrary sequence of control operations. In this framework, the auxiliary object can straightforwardly be induced (with high probability) to transition from equilibrium state x to equilibrium state y with S G (y) < S G (x). Firstly, pick a measurement such that performing it transitions x to x i with probability p i , such that The expected value of the entropy of the post-measurement state will be much less than that of y; for an appropriate choice of measurement, with high probability the actually-obtained post-measurement state x i will satisfy S G (x i ) < S G (y). Now perform an entropy-increasing transformation from x i to y. (For instance, perform a Hamiltonian transformation of x i to some equilibrium state, then use standard methods of equilibrium thermodynamics to change that state to y). As such, the scope of controlled transitions of the auxiliary object is total: It can be transitioned between any two states.As a corollary, the Clausius and Carnot versions of the Second Law do not apply to this control theory: energy can be arbitrarily transferred from one heat bath to another, or converted from a heat bath into work. 
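The classical identity underpinning all of this, that the expected post-measurement Gibbs entropy equals S_G(ρ) minus the Shannon entropy of the outcome distribution, is easy to check numerically. The following is a minimal sketch of my own (not from the paper): a discrete distribution over cells stands in for the phase-space density, and the "measurement" asks which block of cells the system occupies.

```python
import numpy as np

# Numerical check of the classical identity: expected post-measurement
# Gibbs entropy = S_G(rho) - (Shannon entropy of the outcome distribution).

rng = np.random.default_rng(0)

rho = rng.random(12)
rho /= rho.sum()                                   # toy "phase-space" distribution
blocks = [range(0, 4), range(4, 9), range(9, 12)]  # disjoint regions Gamma_i

def S(p):                                          # Gibbs/Shannon entropy, zeros ignored
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p_i = np.array([rho[list(b)].sum() for b in blocks])            # outcome probabilities
rho_i = [np.where(np.isin(np.arange(12), list(b)), rho, 0) / pi
         for b, pi in zip(blocks, p_i)]                          # conditional states

expected_post = sum(pi * S(ri) for pi, ri in zip(p_i, rho_i))
print(np.isclose(expected_post, S(rho) - S(p_i)))                # True
```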
In fact, the full power of the arbitrary transformations available on the auxiliary system is not needed to produce these radical results.Following Szilard's classic method, let us assume that the auxiliary system is a one-molecule gas confined to a cylindrical container by a movable piston at each end, so that the Hamiltonian of the gas is parametrised by the position of the pistons.Now suppose that the position of the gas atom can be measured.If it is found to be closer to one piston than the other, the second piston can rapidly be moved at zero energy cost to the mid-point between the two.As a result, the volume of the gas has been halved without any change in its internal energy (and so its entropy has been decreased by ln 2; cf Equation (8).)If we now quasi-statically and adiabatically expand the gas to its original volume, its energy will decrease and so work will have been extracted from it.Now suppose we take a heat bath at temperature T and a one-atom gas at equilibrium also at temperature T .The above process allows us to reduce the energy of the box and extract some amount of work δW from it.Placing it back in thermal contact with the heat bath will return it to its initial state and so, by conservation of energy, extracts heat δQ = δW from the bath.This is a straightforward violation of the Kelvin version of the Second Law.If we use the extracted work to heat a heat bath which is hotter than the original bath, we generate a violation of the Clausius version also. To make this explicit, let's define Szilard theory as follows: • The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) a one-atom gas as defined above.• The control operations are (a) moving the one-atom gas into or out of thermal contact with one or more heat baths; (b) applying any desired smooth change in the positions of the pistons confining the one-atom gas.• The only possible feedback measurement is a measurement of the position of the atom in the one-atom gas.• A control process is an arbitrary sequence of control operations. Then the control operations available in Szilard theory include arbitrary cyclic transfers of heat between heat baths and conversion of heat into work. The use of a one-atom gas in this algorithm is not essential.Suppose that we measure instead the particle density in each half of a many-atom gas at equilibrium Random fluctuations ensure that one side of the gas is at a slightly higher density than the other; compressing the gas slightly using the piston on the low-density side will reduce its volume at a slightly lower cost in work than would be possible on average without feedback; iterating such processes will again allow heat to be converted into work.(The actual numbers in play here are utterly negligible, of course-as for the one-atom gas-but we are interested here in in-principle possibility, not practicality [19]. 
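For concreteness, here is the textbook number for the Szilard cycle, computed numerically rather than analytically. The sketch is mine, not the paper's, and it takes the standard isothermal variant, in which the expansion back to the full volume happens while the gas is in contact with the bath; it extracts T ln 2 of work per completed cycle (with k_B = 1).

```python
import numpy as np

# Back-of-envelope sketch of the work a Szilard cycle extracts.  After the
# position measurement, the one-molecule gas occupies half the cylinder at
# zero work cost; quasi-static isothermal expansion back to the full volume
# against the piston does work  W = integral of (T/V') dV'  from V/2 to V.

T, V = 1.0, 1.0
edges = np.linspace(V / 2, V, 100_001)
mid = (edges[:-1] + edges[1:]) / 2
work = np.sum(T / mid * np.diff(edges))   # midpoint rule for the p dV integral

print(work, T * np.log(2))                # both ~0.6931: W = T ln 2 per cycle
```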
The most famous example of measurement-based entropy decrease, of course, is Maxwell's demon: A partition is placed between two boxes of gas initially at equilibrium at the same temperature.A flap, which can be opened or closed, is placed in the partition, and at short time intervals δt the boxes are measured to ascertain if, in the next period of time δt any particles will collide with the flap from (a) the left or (b) the right.If (a) holds but (b) does not, the flap is opened for the next δt seconds.Applying this alternation of feedback measurement and control operation for a sufficiently long time will reliably cause the density of the gas on the left to be much lower than on the right.Quasi-statically moving the partition to the left will then allow work to be extracted.The partition can then be removed, and reinserted in the middle; the temperature of the box will have been reduced.Placing the box in thermal contact with a heat bath will then extract heat from the bath equal to the work done; the Kelvin version of the Second Law is again violated.I will refrain from formally stating the "demonic control theory" into which these results could be embedded, but it is fairly clear that such a theory could be formulated. Landauer's Principle and the Physical Implementation of Control Processes Szilard control theory, and demonic control theory, allow thermodynamically forbidden transitions.Big deal, one might reasonably think: So does abracadabra control theory, where the allowed control operations include completely arbitrary shifts in a system's state.We don't care about abracadabra control theory because we have no reason to think that it is physically possible; we only have reason to care about entropy-decreasing control theories based on measurement if we have reason to think that they are physically possible. Of course, answering the general question of what is physically possible isn't easy.Is it physically possible to build mile-long relativistic starships?The answer turns on rather detailed questions of material science and the like.But no general physical principle forbids it.Similarly, detailed problems of implementation might make it impossible to build a scalable quantum computer, but the theory of fault-tolerant quantum computation [20,21] gives us strong reasons to think that such computers are not ruled out in principle.On the other hand, we do have reason to think that faster-than-light starships, or computers that can compute Turing-non-computable functions are in principle ruled out.It is this "in-principle" question of implementability that is of interest here. To answer that question, consider again heat-bath control theory.The action takes place mostly with respect to the auxiliary object: The heat baths are not manipulated in any way beyond moving into or out of contact with that object.We can then imagine treating the auxiliary object, and the control machinery, as a single larger system: We set the system going, and then simply allow it to run.It churns away, from time to time establishing or breaking physical contact with a heat bath or perhaps drawing on or topping up an external energy reservoir, and in due course completes the control process it was required to implement. 
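The demon's protocol is simple enough to caricature in a few lines of code. The toy model below is my own construction, not the paper's: "attempting to cross" stands in for "will collide with the flap in the next δt", and the flap is opened only when the approach is purely from the left.

```python
import numpy as np

# Toy Maxwell's demon: in each short interval the flap is opened only if
# particles are about to hit it from the left and none from the right, so
# over time the left half empties out.

rng = np.random.default_rng(1)
n, p_attempt, steps = 200, 0.02, 20_000
side = rng.integers(0, 2, size=n)          # 0 = left, 1 = right

for _ in range(steps):
    attempts = rng.random(n) < p_attempt   # which particles reach the flap
    from_left = attempts & (side == 0)
    from_right = attempts & (side == 1)
    if from_left.any() and not from_right.any():
        side[from_left] = 1                # flap opened: left-movers pass through

print("left:", int((side == 0).sum()), "right:", int((side == 1).sum()))
# Typically ends with the great majority of particles on the right.
```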
This imagined treatment of the system can be readily incorporated into our system: We can take the auxiliary object of heat-bath theory with feedback together with its controlling mechanisms, draw a box around both together, and treat the result as a single auxiliary object for a heat-bath theory without feedback.Put another way, if the feedback-based control processes we are considering are physically possible, we ought to be able to treat the machinery that makes the measurement as physical, and the machinery that decides what operation to perform based on a given feedback result as likewise physical, and treat all that physical apparatus as part of the larger auxiliary object.Let's call the assumption that this is possible the automation constraint; to violate it is to assume that some aspects of computation or of measurement cannot be analysed as physical processes, an assumption I will reject here without further discussion. But we already know that heat bath theory without feedback does not permit any repeatable transfer of heat into work, or of a given quantity of heat from a cold body to a hotter body.Such transfers are possible, but only if the auxiliary object increases in Gibbs entropy.And given that the auxiliary object breaks into controlling sub-object and controlled sub-object and that ex hypothesi the control processes we are considering leave the controlled sub-object's state unchanged, we can conclude that the Gibbs entropy of the controlling sub-object must have increased. This raises an interesting question.From the perspective of the controlling system, control theory with feedback looks like a reasonable idealisation, but from the external perspective, we know that something must go wrong with that idealisation.The resolution of this problem lies in the effects of the measurement process on the controlling system itself: The process of iterated measurement is radically indeterministic from the perspective of the controlling object, and it can have only a finite number of relevantly distinct states, so eventually it runs out of states to use.This point (though controversial; cf [22,23], and references therein) has been widely appreciated in the physics literature and can be studied from a variety of perspectives; in this rest of this section, I briefly describe the most commonly discussed one.Keep in mind in the sequel that we already know that somehow the controlling system's strategy must fail (at least given the automation constraint): The task is not to show that it does but to understand how it does. 
The perspective we will discuss uses what might be called a computational model of feedback: It is most conveniently described within quantum mechanics. We assume that the controlling object consists, at least in part, of some collection of N systems, "bits", each of whose Hilbert spaces is the direct sum of two memory subspaces 0 and 1, and each of which begins with its state somewhere in the 0 subspace. A measurement with two outcomes is then a dynamical transition which leaves the measured system alone and causes some so-far-unused bit to transition into the 1 subspace if one outcome is obtained and to remain in the 0 subspace if the other is obtained. That is, if T is some unitary transformation of the bit's Hilbert space that maps the 0 subspace into the 1 subspace, the measurement is represented by a unitary transformation on the joint system of controlled object and bit of the form

V = P ⊗ T + (1 − P) ⊗ 1

(with (P, 1 − P) being the projectors defining the measurement). A feedback-based control process based on the result of this measurement is then represented by a unitary transformation of the form

U = U_0 ⊗ P_0 + U_1 ⊗ P_1,

where P_0, P_1 project onto the 0 and 1 subspaces and U_0 and U_1 are unitary operations on the controlled system. The combined process of V followed by U represents the process of measuring the controlled object and then performing U_0 on it if one result is obtained and U_1 if the other is. Measurements with 2^N outcomes, and control operations based on the results of such measurements, can likewise be represented through the use of N bits. The classical case is essentially identical (but the formalism of quantum theory makes the description simpler in the quantum case).

The problem with this process is that eventually the system runs out of unused bits. (Note that the procedure described above only works if the bit is guaranteed to be in the 0 subspace initially.) To operate repeatably, the system will then have to reset some bits to the initial state. But Landauer's Principle states that such resetting carries an entropy cost. Since the principle is controversial (at least in the philosophy literature!) I will work through the details here from a control-theory perspective.

Specifically, let's define a computational process as follows: It consists of N bits (the memory) together with a finite system (the computer) and another system (the environment). A computation is a transition which is deterministic at the level of bits: that is, if the N bits begin, collectively, in subspaces that encode the binary form of some natural number n, after the transition they are found, collectively, in subspaces encoding f(n) for some fixed function f. (Reference [24] is a highly insightful discussion which inter alia considers the case of indeterministic computation.) The control processes are arbitrary unitary (quantum) or Hamiltonian (classical) evolutions on the combined system of memory, computer, and environment; the question of interest is what constraints on the transitions of computer and environment are required for given computational transitions to be implemented. For the sake of continuity with the literature I work in the classical framework (the quantum generalisation is straightforward); for simplicity I assume that the bits have equal phase-space volume V assigned to 0 and 1.
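As a concrete and deliberately tiny illustration, here is a numerical sketch of the two unitaries defined above, for a two-dimensional controlled object and a single bit. The particular choices of P, T, U_0 and U_1 are mine, made only to have something explicit to multiply out; the point is just that V records the measurement outcome in the bit and U conditions the control operation on it.

```python
import numpy as np

# Sketch of the measurement unitary V and feedback unitary U for the smallest
# case: a 2-dimensional controlled object and one memory bit.
# Tensor-factor order is (controlled object) x (bit).

I2 = np.eye(2)
P = np.diag([0.0, 1.0])                    # projector for the "flip the bit" outcome
T = np.array([[0.0, 1.0], [1.0, 0.0]])     # maps the bit's 0 subspace into 1
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # projectors on the bit

# Measurement: record "which outcome" in the bit, leave the object alone.
V = np.kron(P, T) + np.kron(I2 - P, I2)

# Feedback: apply U1 to the object if the bit reads 1, U0 if it reads 0.
U0 = I2
U1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # an arbitrary choice
U = np.kron(U0, P0) + np.kron(U1, P1)

for M in (V, U):
    assert np.allclose(M @ M.conj().T, np.eye(4))        # both are unitary

# Acting on (object state) x (fresh bit in its 0 state):
psi = np.kron(np.array([0.6, 0.8]), np.array([1.0, 0.0]))
out = U @ V @ psi
print(out)   # the branch whose bit reads 1 has had U1 applied; the other is untouched
```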
If the function f is one-to-one, the solution to the problem is straightforward. The combined phase space of the memory can be partitioned into 2^N subspaces, each of equal volume and each labelled with the natural number they represent. There is then a phase-space-preserving map from n to f(n) for each n, and these maps can be combined into a single map from the memory to itself. One-to-one ("reversible") computations can then be carried out without any implications for the states of computer or environment.

But now suppose that the function f takes values only between 0 and 2^M − 1 (M < N), so that any map implementing f must map the bits M+1, . . ., N into their zero subspaces independent of input. Any such map would map the uniform distribution over the memory (which has entropy N ln 2V) to one with support in a region of volume (2V)^M × V^{N−M} (and so with maximum entropy M ln 2V + (N − M) ln V). Since the map as a whole is by assumption entropy-preserving, it must increase the joint entropy of computer plus environment by (N − M) ln 2. In the limiting case of reset, M = 0 (f(n) = 0 for all n), and so the computer and environment must jointly increase in entropy by at least N ln 2. This is Landauer's Principle: Each bit that is reset generates at least ln 2 entropy.

If the computer is to carry out the reset operation repeatably, its own entropy cannot increase without limit. So a repeatable reset process dumps at least entropy ln 2 per bit into the environment. In the special case where the environment is a heat bath at temperature T, Landauer's Principle becomes the requirement that reset generates at least T ln 2 heat per bit.

A more realistic feedback-based control theory, then, might incorporate Landauer's Principle explicitly, as in the following (call it computation heat-bath thermodynamics):
• The controlled object consists of (a) a collection of heat baths at various initial temperatures; (b) another finite collection of statistical-mechanical systems, the auxiliary object, containing at least one Carnot system, and whose initial states are unconstrained; (c) a finite number N of 2-state systems ("bits"), the computational memory, each of which begins in some fixed ("zero") initial state with probability 1.
• The control operations are (a) moving one or more systems in the auxiliary object into or out of thermal contact with other auxiliary-object systems and/or with one or more heat baths; (b) applying any desired smooth change to the parameters of the systems in the auxiliary object over some finite period of time; (c) inducing one or more systems in the auxiliary object to evolve in an arbitrary entropy-nondecreasing way; (d) erasing M bits of the memory, that is, restoring them to their zero states, and at the same time transferring heat MT ln 2 to some heat bath at temperature T; (e) applying any computation to the computational memory.
• Arbitrary feedback measurements may be made (including of the memory bits) provided that: (a) they have finitely many results; (b) the result of the measurement is faithfully recorded in the state of some collection of bits which initially each have probability 1 of being in the 0 state.
• A control process is an arbitrary sequence of control operations.
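To put a number on the erasure cost that clause (d) builds in, here is a quick back-of-envelope calculation; the temperature and bit count are my own arbitrary choices, not the paper's.

```python
import math

# The Landauer bound just derived: resetting N bits lowers the memory's
# entropy by N ln 2, so at least that much entropy, i.e. heat N*k_B*T*ln2,
# must be dumped into a temperature-T environment.

k_B = 1.380649e-23          # J/K
T = 300.0                   # room temperature (arbitrary choice)
N = 1_000_000_000           # erase a gigabit

entropy_drop = N * math.log(2)            # in units of k_B
min_heat = N * k_B * T * math.log(2)      # joules

print(entropy_drop)         # ~6.93e8 k_B
print(min_heat)             # ~2.87e-12 J -- tiny, but strictly positive
```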
At first sight, measurement in this framework is in the long run entropy-increasing: A measurement with 2^M outcomes having probabilities p_1, . . ., p_{2^M} will reduce the entropy by ∆S = −Σ_i p_i ln p_i, but the maximum value of this is M ln 2, which is the entropy increase required to erase the M bits required to record the result. But as Zurek [15] has pointed out, Shannon's noiseless coding theorem allows us to compress those M bits to, on average, −Σ_i p_i log_2 p_i bits, whose erasure then costs only −Σ_i p_i ln p_i in entropy, so that the overall process can be made entropy-neutral.

This strategy of using Landauer's Principle to explain why Maxwell demons cannot repeatably violate the Second Law has a long history (see [25] and references therein). It has recently come under sharp criticism by John Earman and John Norton [22,26] as either trivial or question-begging: They argue that any such defences ("exorcisms") rely on arguments for Landauer's Principle that are either Sound (that is, start off by assuming the Second Law) or Profound (that is, do not so start off). Exorcisms relying on Sound arguments are question-begging; those relying on Profound arguments leave us with no good reason to accept Landauer's Principle in the first place.

Responses to Earman and Norton (see, e.g., [27,28]) have generally embraced the first horn of the dilemma, accepting that Landauer's Principle does assume the Second Law but arguing that use of it can still be pedagogically illuminating. (See [26,29] for responses to this move.) But I believe the dialectic here fails to distinguish between statistical mechanics and thermodynamics. The argument here for Landauer's Principle does indeed assume that the underlying dynamics are entropy-non-decreasing, and from that perspective appeal to Landauer's Principle is merely of pedagogical value: It helps us to make sense of how feedback processes can be entropy-decreasing despite the fact that any black-box process, even if it involves internal measurement of subsystems, cannot repeatedly turn heat into work. But (this is one central message of this paper) that dynamical assumption within statistical mechanics should not simply be identified with the phenomenological Second Law. In Earman and Norton's terminology, the argument for Landauer's Principle is Sound with respect to statistical mechanics, but Profound with respect to phenomenological thermodynamics.

Conclusion

The results of my exploration of control theory can be summarised as follows:
(1) In the absence of feedback, physically possible control processes are limited to inducing transitions that do not lower Gibbs entropy.
(2) That limit can be reached with access to very minimal control resources: Specifically, a single Carnot system and the ability to adiabatically control it and put it in thermal contact with other systems.
(3) Introducing feedback allows arbitrary transitions.
(4) If we try to model the feedback process as an internal dynamical process in a larger system, we find that feedback does not increase the power of the control process.
(5) Points (3) and (4) can be reconciled by considering the physical changes to the controlling system during feedback processes. In particular, on a computational model of control and feedback, the entropy cost of resetting the memory used to record the result of measurement at least cancels out the entropy reduction induced by the measurement.
I will end with a more general moral. As a rule, and partly for pedagogical reasons, foundational discussions of thermal physics tend to begin with thermodynamics and continue to statistical mechanics. The task of recovering thermodynamics from successfully grounded statistical mechanics is generally not cleanly separated from the task of understanding statistical mechanics itself, and the distinctive requirements of thermodynamics blur into the general problem of understanding statistical-mechanical irreversibility. Conversely, foundational work on thermodynamics proper is often focussed on thermodynamics understood phenomenologically: A well-motivated and worthwhile pursuit, but not one that obviates the need to understand thermodynamics from a statistical-mechanical perspective.

The advantage of the control-theory way of seeing thermodynamics is that it permits a clean separation between the foundational problems of statistical mechanics itself and the reduction problem of grounding thermodynamics in statistical mechanics. I hope to have demonstrated: (a) These really are distinct problems, so that an understanding of (e.g.) why systems spontaneously approach equilibrium does not in itself suffice to give an understanding of thermodynamics; but also (b) that such an understanding, via the interpretation of thermodynamics as the control theory of statistical mechanics, can indeed be obtained, and can shed light on a number of extant problems at the statistical-mechanics/thermodynamics boundary.
using Web Ontology K.Vanitha, K.Yasudha Present vision for the web is the semantic web in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the web. It provides the information exactly. Now days, ontology is playing a major role in knowledge representation for the semantic web [1]. Ontology is a conceptualization of domain into a human understandable and machine readable or machine process able format consisting of entities, attributes, relationships and axioms. Ontology web language is designed for use by applications that need to process the content of information [22]. In this context many e-learning systems were proposed in the literature. Semantic Web technology may support more advanced Artificial intelligence problems for knowledge retrieval [20]. This paper aims at presenting an intelligent e-learning system from the literature. INTRODUCTION The emergence of web technologies for data and knowledge interaction gives rise to the need for supportive frameworks for knowledge distribution.Semantic web in which information is given explicit meaning, making it easier for machines to automatically process and integrate information available on the web aimed at providing shared semantic spaces for web contents [12].Now days with the rapid development of technology the learning methods have been changed.E-learning systems are taking prominent role in making the humans learning methods apart from the class room teaching irrespective of their age, income etc., in this scenario, in the literature there are many methods have been proposed and used [3].Fayed et al proposed a model based on semantic web technology which is used by the Qatar university students and faculty of engineering [2].Another intelligent web teacher system for learning personalization using semantic web model was proposed by Nicola, Gaeta1 [3] and there is an adaptive educational hypermedia systems II. SEMANTIC WEB In recent years Semantic Web is the hottest topic in the area of AI and in the internet community.Semantic Web performs the meaning (semantics) of information and services on the web, and making it possible for the web to "understand" and satisfy the requests of people and machines to use the web content which is the idea of world wide web inventor Tim Berners-Lee. Semantic web builds an appropriate infrastructure for intelligent agents to verify the web, while performing complex actions for their users.Ultimately, Semantic Web is about how to implement reliable, large-scale interoperation of Web services, to make such services computer interpretableto create a Web of machineunderstandable and interoperable services that intelligent agents can discover, execute and compose automatically [2]. The latest view of the semantic web has been changed as services.These services can be divided on two families "world services" and "web Services". 
The example for a world service includes a shop, a museum, a restaurant, whose address type and description is accessible over the web.In contrast, a web service is a resource that can be automatically retrieved and invoked over the web [11].Web service based applications can consider as conglomerates of independent, autonomous services developed by independent parties.Such components are not integrated at design time; they are integrated dynamically at runtime according to the current needs [15].For example, an e-learning course can be assembled dynamically by composing learning objects stored in independent repositories. A. Meta data The preliminary source for performing semantic web operations is based on metadata.Metadata is "data about data".The aim of incorporating the Meta data is to find the data sources from the web, when end-user tries to search for information on the web [11].Generally the data sources will be heterogeneous which belongs to different types i.e., unstructured, semi-structured and structured.Generally for the semantic web the data source will be a document, a web page, textual content, data, audio or video [8]. In the Semantic web, documents are marked up with semantic metadata which is machine-understandable about the human readable content of documents.The following are the different types for Meta data.  Syntactic Metadata: The simplest form of metadata which describes non-contextual information about content and provides general information. www.ijacsa.thesai.org Structural Metadata: Provides the information regarding the structure of the content and describes how items are arranged. Semantic Metadata: This adds relationships, rules, and constraints to syntactic and structural metadata and describes contextually relevant or domain-specific information about content based on ontology [21]. A. OWL-S Service Ontology OWL-D is an OWL service upper ontology that offers a Vocabulary that can be used in conjunction with OWL to describe services in an unambiguous, computer interpretable format.OWL-S was developed with the goal of allowing discovery, invocation, composition, and automatic monitoring of Web services (Martin et al, 2006).OWL-S treats service composition as processes.There is a very clear distinction among process properties, Structure, and implementation in OWL-S, which provides a way to model a process independently of its implementation. The web service technology will revolutionize the way software is developed.Some of the potential benefits of the web services technologies are decentralization , speed, software packing and the other extreme web service technology has received a deal of criticism for providing an over simplified model .It leads out several fundamental concepts as Data definition, service invocation behavior mediation, composition and service guarantees. The technology will allow a distributed and decentralized way of web services [11].A positive effect of the increase of transactions through the web is forcing to adapt a more dynamic and user centered service model.It is transforming response time into the competitive advantage.The web service compositional model has the potential to review the format and allow to be developed as service components.In over simplified model of concepts there are no domain specific data definitions.It is used to model the input and output of every application that is depending upon application domain. III. 
WEB ONTOLOGY Ontology is about the exact description of things and their relationships.Ontology's are considered one of the pillars of the Semantic Web; although they do not have a universally accepted definition According to Tom Gruber [17] ontology is a formal specification of a shared conceptualization [18].For the web, ontology is about the exact description of web information and relationships between web information.The purpose of the Web Ontology domain is to be able to model the relationships between prominent web ontology's and map them onto equivalent freebase types and topics. IV. AN ADAPTIVE EDUCATIONAL HYPERMEDIA SYSTEM (AEHS) The focus of Mateo et al. is on the aspects of personalization.They proposed a model as "An Adaptive Educational Hypermedia System" which supports the individual in the process of finding, selecting, accessing and retrieving web resources [2].This model is based on the concepts of adaptive hypermedia system [19].This adaptive hyper media system is in turn based on hypermedia system which was presented in brief in this paper. A. Personalization The goal of personalization in the Semantic web is to make easier the access to the right resources.This task entitles two processes [19] [5].They are retrieval and presentation.Retrieval consists in finding or constructing the right resources when they are needed, either on demand otherwise, when the information arises in the work [8].Personalization is a process of filtering the access to web content according to the individual needs and requirements of each particular user. B. Adaptive Hypermedia System This enumerates the functionality of a hypermedia system which personalizes for the individual users. C. Hypermedia System A hypermedia system consists of documents which are connected by links [6].Thus, there are mainly two aspects which can be adapted to the users: the content and the links. D. Content Level There are five methods identified for content level adaption.  Additional explanation method which displays those parts of a document fits to user goals ,interest, tasks, knowledge etc.,  Prerequisite explanations: in this method, the user model checks the prerequisites necessary to understand the content of the page. Comparative explanation : Comparative explanation is to explain new topics by stressing their relations to known topics  Explanation variant: Explanation variants and extension to the prerequisite explanations  Sorting: According to the need of the user, the different of the document are sorted. Content level adaption methods will be implemented by the following techniques which deal with the knowledge.They are  Conditional text: Information about a knowledge concept is divided in two different parts.Every part is defined with the knowledge. Stretch text: For some keywords of a document, according to the requirement of the user this technique provides longer descriptions. Page or page fragment variant: Different parts of the page are stored. Frame base fragments: this technique stores the page fragments into frames in a special order. E. 
Link level adaption Personalization for the user is being made through the link level adaption the following are the methods for navigating link level adaption [16].i) Direct Guidance: "next best" and "page sequencing" are the two methods to guide the user sequentially through the hypermedia system [14]."Nest best" provides nest button to navigate where page sequencing generates a reading sequence.ii) Adaptive Sorting: "Similarity Sorting and "pre requisite sorting" are used based as the relevance system assumption by him/her, otherwise according to the prerequisite knowledge [8].iii) Adaptive Hiding: Irrelevant information can be limited by making them unavailable or invisible.iv) Link Annotation: Several methods are available to annotate the educational area links for example traffic metaphor, where a red ball indicates lack of knowledge of understanding the pages yellow ball indicates that the link to pages are not recommended for reading [7] [9].Green ball indicates links recommended pages.v) Map Annotation: the same link annotation methods can be applied for maps. A. HYPERMEDIA SYSTEM METHODOLOGY (AEHSM) : A component based logical description of adaptive educational hypermedia system is proposed by Matteo et al [12].This component based definition is based on the theory of diagnosis by Reiter [20]. B. How it works? According to Matteo et al [12] AEHS was decomposed into basic components according to their roles.This uses a user model to model various characteristics of individual users or user groups.The adaptive functionality is provided by the organization of the document space and the user model [10].This Adaptive Educational Hyper Media System is a quadruple.They are i) document Space (Docs), User Model (UM) Observations (OBS) and Adaption Component (AC) [22].The document space and observations describe basic data and runtime data.This data will be processed by the other two.AEHS makes it Simple by annotating text using the traffic light metaphor.This can be extended by using Knowledge graph instead of domain graph [15].This system is able to give a more differentiated traffic light annotation to hypertext links than simple [13].It is able to recommend pages with green icon and to show which links lead to documents that will become understandable with dark orange icon and yellow icon is for the pages which might be understandable and red icon for which are not recommended yet.The representation of AEHS Simple and Knowledge graph with quadruple were presented in detail with examples.a) Simple can annotate hypertext links by using the traffic light metaphor with two colors: red for nonrecommended, green for recommended pages.i) DOCSs: This component is made of a set of n constants and a finite set of predicates.Each of the constants represents a document in the document space (the documents are denoted by D1, D2, . .., Dn).The predicates define pre-requisite conditions, i.e. they state which documents need to be studied before a document can be learned, e.g.preq(Di,Dj) for certain Di _= Dj means that Dj is a prerequisite for Di ii) UMs: it contains a set of m constants, one for each individual user U1, U2, ..., Um. iii) OBSs: A special constant (Visited) is used within the special predicate obs to denote whether a document has been visited: obs (Di, Uj, Visited) is the observation that a document Di has been visited by the user Uj. 
iv) ACs: This component contains constants and rules.One constant is used for describing the values of the "learning state" of the adaptive functionality, two constants (Green Icon www.ijacsa.thesai.organd Red Icon) for representing values of the adaptive functionality.The learning state of a document is described by a set of rules of kind: This component contains also a set of rules for describing the adaptive link annotation with traffic lights.Such rules are of kind: or of kind: b) This simple AEHS can be extended by using a knowledge graph instead of a domain graph.The system, called Simple1, is able to give a more differentiated traffic light annotation to hypertext links than Simple [8].It is able to recommend pages (green icon), to show which links lead to documents that will become understandable (dark orange icon), which might be understandable (yellow icon), or which are not recommended yet (red icon) Let us represent Simple1 by a quadruple (DOCSs1, UMs1, OBSs1, ACs1): i) DOCSs1: The document space contains all axioms of the document space of Simple, DOCSs, but it does not contain any of the predicates.In addition, it contains a set of s constants which name the knowledge topics T1, T2, Ts in the knowledge space.It also contains a finite set of predicates, stating the learning dependencies between these topics: depends (Tj, Tk), with Tj _= Tk, means that topic Tk is required to understand Tj.The documents are characterized by predicate keyword which assigns a nonempty set of topics to each of them, so ∀Di∃Tjkeyword (Di, Tj), but keep in mind that more than one keyword might be assigned to a same document. ii) UMs1: The user model is the same as in Simple, plus an additional rule which defines that a topic Ti is assumed to be learned whenever the corresponding document has been visited by the user.To this aim, Simple 1 uses the constant Learned.The rule for processing the observation that a topic has been learned by a user is as follows (p obs is the abbreviation for "processing an observation"): iii) OBSs1: Are the same as in Simple.iv) ACs1: The adaptation component of Simple1 contains two further constants (w.r.t.Simple), representing new values for the learning state of a document [7] [4].Such constants are: Might be understandable and will become understandable Two more constants are added for representing new values for adaptive link annotation. 
They are: Orange Icon and Yellow Icon. Such constants appear in the rules that describe the educational state of a document, reported hereafter. The first rule states that a document is recommended for learning if all the prerequisites to the keywords of this document have already been learnt. The second rule states that a document might be understandable if at least some of the prerequisites have already been learnt by this user. The third rule entails that a document will become understandable if the user has some prerequisite knowledge for at least one of the document's keywords:

∀U_i ∀D_j (∃T_k keyword(D_j, T_k) ∧ (∃T_l depends(T_k, T_l) ∧ p_obs(T_l, U_i, Learned)) ∧ ¬learning_state(D_j, U_i, Might_be_understandable) → learning_state(D_j, U_i, Will_become_understandable))

Four rules describe the adaptive link annotation:
1) ∀U_i ∀D_j learning_state(D_j, U_i, Recommended_for_reading) → document_annotation(D_j, U_i, Green_Icon)
2) ∀U_i ∀D_j learning_state(D_j, U_i, Will_become_understandable) → document_annotation(D_j, U_i, Orange_Icon)
3) ∀U_i ∀D_j learning_state(D_j, U_i, Might_be_understandable) → document_annotation(D_j, U_i, Yellow_Icon)
4) ∀U_i ∀D_j ¬learning_state(D_j, U_i, Recommended_for_reading) ∧ ... → document_annotation(D_j, U_i, Red_Icon)

Fig. 1: Types of Metadata
Fig. 2: OWL-S Service Ontology

VI. CONCLUSION

Present and future research in e-learning systems is centred on intelligent learning systems. The platform for this is the Semantic Web and web ontologies. One common assumption is that the Semantic Web can be made a reality by gradually augmenting the existing data (HTML/XHTML) with ontological annotations derived from the non-machine-readable content. This paper presents an intelligent e-learning system, the Adaptive Educational Hypermedia System, which is based on a hypermedia system that annotates hypertext links with the traffic-light metaphor. This system is aimed at providing the user with the required information effectively and efficiently. The aim of this study is to extend this model to other areas like e-commerce and Artificial Intelligence problems for knowledge retrieval. This paper has presented the intelligent e-learning systems modeled by Fayed et al., Nicola et al., and Mateo et al. [AEHS].
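To make the annotation machinery concrete, here is a small sketch of how the Simple1 traffic-light rules described above might be realised in code. The documents, topics, and the exact reading of each rule are my own illustrative choices, not the authors' implementation; in particular the "will become understandable" (orange) case is omitted, since its precise antecedent is not fully recoverable from the text.

```python
# Sketch of traffic-light link annotation: a topic counts as learned once a
# document carrying it has been visited, and a document is coloured according
# to how many of its keywords' prerequisite topics are already learned.

docs_keywords = {                      # keyword(D, T)
    "D1": {"html"},
    "D2": {"css"},
    "D3": {"semantic_web"},
}
depends = {                            # depends(T, T'): T' required to understand T
    "css": {"html"},
    "semantic_web": {"html", "css"},
}
visited = {"D1"}                       # obs(D, U, Visited) for one user

# p_obs(T, U, Learned): topics of visited documents count as learned
learned = set().union(*(docs_keywords[d] for d in visited))

def annotate(doc):
    required = set().union(*(depends.get(t, set()) for t in docs_keywords[doc]))
    if required <= learned:
        return "Green Icon"            # recommended for reading
    if required & learned:
        return "Yellow Icon"           # might be understandable
    return "Red Icon"                  # not recommended yet

for d in docs_keywords:
    print(d, "->", annotate(d))
# D1 -> Green Icon, D2 -> Green Icon, D3 -> Yellow Icon
```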